1. Autokinesis Reveals a Threshold for Perception of Visual Motion. Neuroscience 2024; 543:101-107. [PMID: 38432549 PMCID: PMC10965040 DOI: 10.1016/j.neuroscience.2024.02.001]
Abstract
In natural viewing conditions, the brain can optimally integrate retinal and extraretinal signals to maintain a stable visual perception. These mechanisms, however, may fail in circumstances where extraction of a motion signal is less viable, such as in impoverished visual scenes. This can result in a phenomenon known as autokinesis, in which one may experience apparent motion of a small visual stimulus in an otherwise completely dark environment. In this study, we examined the effect of autokinesis on visual perception of motion in human observers. We used a novel method with optical tracking in which the visual motion was reported manually by the observer. Experimental results show that at lower speeds of motion, the perceived direction of motion was more aligned with the effect of autokinesis, whereas in the light or at higher speeds in the dark, it was more aligned with the actual direction of motion. These findings have important implications for understanding how the stability of visual representation in the brain can affect accurate perception of motion signals.
2. Autonomous Spinal Robotic System for Transforaminal Lumbar Epidural Injections: A Proof of Concept Study. Global Spine J 2024; 14:138-145. [PMID: 35467447 PMCID: PMC10676186 DOI: 10.1177/21925682221096625]
Abstract
STUDY DESIGN Phantom study. OBJECTIVE The aim of our study is to demonstrate in a proof-of-concept model whether the use of a markerless, autonomous, robotically controlled injection delivery system will increase accuracy in the lumbar spine. METHODS Ideal transforaminal epidural injection trajectories (bilateral L2/3, L3/4, L4/5, L5/S1 and S1) were planned on virtual pre-operative planning software by 1 experienced provider. Twenty transforaminal epidural injections were administered in a lumbar spine phantom model, 10 using a freehand procedure and 10 using a markerless autonomous spinal robotic system. Procedural accuracy, defined as the difference between pre-operative planning and actual post-operative needle tip distance (mm) and angular orientation (degrees), was assessed for the freehand and robotic procedures. RESULTS Procedural accuracy for robotically placed transforaminal epidural injections was significantly higher, with the difference in pre- and post-operative needle tip distance being 20.1 (±5.0) mm in the freehand procedure and 11.4 (±3.9) mm in the robotically placed procedure (P < .001). Needle tip precision for the freehand technique was 15.6 mm (26.3 - 10.7) compared to 10.1 mm (16.3 - 6.1) for the robotic technique. Differences in needle angular orientation deviation were 5.6 (±3.3) degrees in the robotically placed procedure and 12.0 (±4.8) degrees in the freehand procedure (P = .003). CONCLUSION The robotic system allowed for placement of transforaminal epidural injections comparable to a freehand technique performed by an experienced provider, with the additional benefits of improved accuracy and precision.
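The two accuracy metrics used in this study, needle-tip distance (mm) and angular orientation deviation (degrees) between planned and actual trajectories, can be sketched as follows. The function names and plain 3D-tuple representation are illustrative assumptions, not the authors' implementation:

```python
import math

def tip_distance_error(planned_tip, actual_tip):
    """Euclidean distance (mm) between planned and actual needle-tip positions."""
    return math.dist(planned_tip, actual_tip)

def angular_deviation(planned_dir, actual_dir):
    """Angle (degrees) between the planned and actual needle axes."""
    dot = sum(p * a for p, a in zip(planned_dir, actual_dir))
    norm = math.sqrt(sum(p * p for p in planned_dir)) * \
           math.sqrt(sum(a * a for a in actual_dir))
    # Clamp to [-1, 1] to guard against floating-point round-off.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

For example, a planned tip at the origin and an actual tip at (3, 4, 0) gives a 5 mm tip error regardless of the needle axes involved.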
3. A Fully Differentiable Framework for 2D/3D Registration and the Projective Spatial Transformers. IEEE Transactions on Medical Imaging 2024; 43:275-285. [PMID: 37549070 PMCID: PMC10879149 DOI: 10.1109/tmi.2023.3299588]
Abstract
Image-based 2D/3D registration is a critical technique for fluoroscopy-guided surgical interventions. Conventional intensity-based 2D/3D registration approaches suffer from a limited capture range due to the presence of local minima in hand-crafted image similarity functions. In this work, we aim to extend the 2D/3D registration capture range with a fully differentiable deep network framework that learns to approximate a convex-shape similarity function. The network uses a novel Projective Spatial Transformer (ProST) module that has unique differentiability with respect to 3D pose parameters, and is trained using an innovative double backward gradient-driven loss function. We compare the most popular learning-based pose regression methods in the literature and use the well-established CMAES intensity-based registration as a benchmark. We report registration pose error, target registration error (TRE) and success rate (SR) with a threshold of 10 mm for mean TRE. For the pelvis anatomy, the median TRE of ProST followed by CMAES is 4.4 mm with a SR of 65.6% in simulation, and 2.2 mm with a SR of 73.2% in real data. The CMAES SRs without using ProST registration are 28.5% and 36.0% in simulation and real data, respectively. Our results suggest that the proposed ProST network learns a practical similarity function, which vastly extends the capture range of conventional intensity-based 2D/3D registration. We believe that the unique differentiable property of ProST has the potential to benefit related 3D medical imaging research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.
4. Data-driven Shape Sensing of Continuum Dexterous Manipulators Using Embedded Capacitive Sensor. Proceedings of IEEE Sensors 2023. [PMID: 38577480 PMCID: PMC10994196 DOI: 10.1109/sensors56945.2023.10324929]
Abstract
We propose a novel, inexpensive embedded capacitive sensor (ECS) for sensing the shape of Continuum Dexterous Manipulators (CDMs). Our approach addresses some limitations associated with the prevalent Fiber Bragg Grating (FBG) sensors, such as temperature sensitivity and high production costs. ECSs are calibrated using a vision-based system. The calibration of the ECS is performed by a recurrent neural network that uses the kinematic data collected from the vision-based system along with the uncalibrated data from the ECSs. We evaluated performance on a 3D-printed prototype of a cable-driven CDM with multiple markers along its length. Using data from three ECSs along the length of the CDM, we computed the angle and position of its tip with respect to its base and compared the results to the measurements of the vision-based system. We found a 6.6% tip position error normalized to the length of the CDM. This work shows the early feasibility of using ECSs for shape sensing and feedback control of CDMs and discusses potential future improvements.
5. Feasibility Study of Using Augmented Mirrors for Alignment Task during Orthopaedic Procedures in Mixed Reality. IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) 2023:650-651. [PMID: 38566770 PMCID: PMC10986428 DOI: 10.1109/ismar-adjunct60411.2023.00139]
Abstract
Accurate depth estimation poses a significant challenge in egocentric Augmented Reality (AR), particularly for precision-dependent tasks in the medical field, such as needle or tool insertions during percutaneous procedures. Augmented Mirrors (AMs) provide a unique solution to this problem by offering additional non-egocentric viewpoints that enhance spatial understanding of an AR scene. Despite the perceptual advantages of using AMs, their practical utility has yet to be thoroughly tested. In this work, we present results from a pilot study involving five participants tasked with simulating epidural injection procedures in an AR environment, both with and without the aid of an AM. Our findings indicate that using an AM contributes to reducing mental effort while improving alignment accuracy. These results highlight the potential of AMs as a powerful tool for AR-enabled medical procedures, setting the stage for future exploration involving medical professionals.
6. Pelphix: Surgical Phase Recognition from X-ray Images in Percutaneous Pelvic Fixation. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023; 14228:133-143. [PMID: 38617200 PMCID: PMC11016332 DOI: 10.1007/978-3-031-43996-4_13]
Abstract
Surgical phase recognition (SPR) is a crucial element in the digital transformation of the modern operating theater. While SPR based on video sources is well-established, incorporation of interventional X-ray sequences has not yet been explored. This paper presents Pelphix, a first approach to SPR for X-ray-guided percutaneous pelvic fracture fixation, which models the procedure at four levels of granularity - corridor, activity, view, and frame value - simulating the pelvic fracture fixation workflow as a Markov process to provide fully annotated training data. Using added supervision from detection of bony corridors, tools, and anatomy, we learn image representations that are fed into a transformer model to regress surgical phases at the four granularity levels. Our approach demonstrates the feasibility of X-ray-based SPR, achieving an average accuracy of 99.2% on simulated sequences and 71.7% on cadaveric sequences across all granularity levels, with up to 84% accuracy for the target corridor in real data. This work constitutes the first step toward SPR for the X-ray domain, establishing an approach to categorizing phases in X-ray-guided surgery, simulating realistic image sequences to enable machine learning model development, and demonstrating that this approach is feasible for the analysis of real procedures. As X-ray-based SPR continues to mature, it will benefit procedures in orthopedic surgery, angiography, and interventional radiology by equipping intelligent surgical systems with situational awareness in the operating room.
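Simulating a surgical workflow as a Markov process, as described above, amounts to sampling phase sequences from a transition table. A minimal sketch follows; the phase labels and transition probabilities are invented for illustration and far coarser than the paper's four-level corridor/activity/view/frame model:

```python
import random

# Hypothetical phase labels and transition probabilities (illustrative only).
TRANSITIONS = {
    "position_wire": [("acquire_view", 0.7), ("position_wire", 0.3)],
    "acquire_view":  [("assess", 1.0)],
    "assess":        [("position_wire", 0.5), ("insert_wire", 0.5)],
    "insert_wire":   [("acquire_view", 0.6), ("done", 0.4)],
}

def simulate_workflow(rng, start="position_wire", max_steps=50):
    """Sample one fully labeled phase sequence from the Markov model."""
    phase, seq = start, [start]
    while phase != "done" and len(seq) < max_steps:
        r, acc = rng.random(), 0.0
        for nxt, prob in TRANSITIONS[phase]:
            acc += prob
            if r <= acc:
                phase = nxt
                break
        seq.append(phase)
    return seq
```

Each sampled sequence comes with its phase labels for free, which is what makes this kind of simulation attractive for generating fully annotated training data.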
7. Skin Lesion Correspondence Localization in Total Body Photography. arXiv 2023; arXiv:2307.09642v2. [PMID: 37576124 PMCID: PMC10418525]
Abstract
Longitudinal tracking of skin lesions - finding correspondence, changes in morphology, and texture - is beneficial to the early detection of melanoma. However, it has not been well investigated in the context of full-body imaging. We propose a novel framework combining geometric and texture information to localize skin lesion correspondence from a source scan to a target scan in total body photography (TBP). Body landmarks or sparse correspondence are first created on the source and target 3D textured meshes. Every vertex on each of the meshes is then mapped to a feature vector characterizing the geodesic distances to the landmarks on that mesh. Then, for each lesion of interest (LOI) on the source, its corresponding location on the target is first coarsely estimated using the geometric information encoded in the feature vectors and then refined using the texture information. We evaluated the framework quantitatively on both a public and a private dataset, for which our success rates (at the 10 mm criterion) are comparable to the only reported longitudinal study. As full-body 3D capture becomes more prevalent and higher in quality, we expect the proposed method to constitute a valuable step in the longitudinal tracking of skin lesions.
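The coarse matching step described above reduces to comparing per-vertex signatures of geodesic distances to the landmarks. A minimal sketch, assuming the geodesic distances are already computed; the function name and flat-tuple representation are illustrative, not the authors' code:

```python
import math

def match_lesion(source_signature, target_signatures):
    """Coarse correspondence: return the index of the target vertex whose
    geodesic-distance-to-landmarks signature is closest (in Euclidean
    distance) to the source lesion's signature. In the full framework,
    texture-based refinement would follow this step."""
    return min(range(len(target_signatures)),
               key=lambda i: math.dist(source_signature, target_signatures[i]))
```

Because the signatures are built from geodesic distances on each mesh, they are comparable across scans even when the two meshes have different vertex counts and poses.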
8. An autonomous X-ray image acquisition and interpretation system for assisting percutaneous pelvic fracture fixation. Int J Comput Assist Radiol Surg 2023; 18:1201-1208. [PMID: 37213057 PMCID: PMC11002911 DOI: 10.1007/s11548-023-02941-y]
Abstract
PURPOSE Percutaneous fracture fixation involves multiple X-ray acquisitions to determine adequate tool trajectories in bony anatomy. In order to reduce time spent adjusting the X-ray imager's gantry, avoid excess acquisitions, and anticipate inadequate trajectories before penetrating bone, we propose an autonomous system for intra-operative feedback that combines robotic X-ray imaging and machine learning for automated image acquisition and interpretation, respectively. METHODS Our approach reconstructs an appropriate trajectory in a two-image sequence, where the optimal second viewpoint is determined based on analysis of the first image. A deep neural network is responsible for detecting the tool and corridor, here a K-wire and the superior pubic ramus, respectively, in these radiographs. The reconstructed corridor and K-wire pose are compared to determine likelihood of cortical breach, and both are visualized for the clinician in a mixed reality environment that is spatially registered to the patient and delivered by an optical see-through head-mounted display. RESULTS We assess the upper bounds on system performance through in silico evaluation across 11 CTs with fractures present, in which the corridor and K-wire are adequately reconstructed. In post hoc analysis of radiographs across 3 cadaveric specimens, our system determines the appropriate trajectory to within 2.8 ± 1.3 mm and 2.7 ± 1.8°. CONCLUSION An expert user study with an anthropomorphic phantom demonstrates how our autonomous, integrated system requires fewer images and less movement to guide and confirm adequate placement compared to current clinical practice. Code and data are available.
9. Platform for investigating continuum manipulator behavior in orthopedics. Int J Comput Assist Radiol Surg 2023; 18:1329-1334. [PMID: 37162733 PMCID: PMC10986430 DOI: 10.1007/s11548-023-02945-8]
Abstract
PURPOSE The use of robotic continuum manipulators has been proposed to facilitate less-invasive orthopedic surgical procedures. While tools and strategies have been developed, critical challenges such as system control and intra-operative guidance remain under-addressed. Simulation tools can help solve these challenges, but several gaps limit their utility for orthopedic surgical systems, particularly those with continuum manipulators. Herein, a simulation platform that addresses these gaps is presented as a tool to better understand and solve challenges for minimally invasive orthopedic procedures. METHODS An open-source surgical simulation software package was developed in which a continuum manipulator can interact with any volume model, such as to drill bone volumes segmented from a 3D computed tomography (CT) image. Paired simulated X-ray images of the scene can also be generated. Compared to previous works, tool-anatomy interactions use a physics-based approach, which leads to more stable behavior and wider procedure applicability. A new method for representing low-level volumetric drilling behavior is also introduced to capture material variability within bone as well as patient-specific properties from a CT. RESULTS Interaction comparable to that between a continuum manipulator and phantom bone was demonstrated between a simulated manipulator and volumetric obstacle models. High-level material- and tool-driven behavior was shown to emerge directly from the improved low-level interactions, rather than requiring manual programming. CONCLUSION This platform is a promising tool for developing and investigating control algorithms for tasks such as curved drilling. The generation of simulated X-ray images that correspond to the scene is useful for developing and validating image guidance models. The improvements to volumetric drilling offer users the ability to better tune behavior for specific tools and procedures and enable research to improve surgical simulation model fidelity. This platform will be used to develop and test control algorithms for image-guided curved drilling procedures in the femur.
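The low-level volumetric drilling behavior described above can be caricatured as energy-based voxel removal, where denser voxels (e.g. higher CT intensity) take more work to remove. This sketch uses a sparse dict-based volume and invented parameter names; it is an assumption-laden illustration, not the platform's actual implementation:

```python
import math

def drill_step(volume, tip, radius, energy):
    """One drilling update on a sparse volume.

    volume maps (x, y, z) voxel coordinates to a density value (e.g. drawn
    from a CT); voxels inside the drill-tip sphere lose density in
    proportion to the applied energy and are removed once depleted.
    Returns the list of voxels removed this step."""
    removed = []
    for voxel, density in list(volume.items()):
        if math.dist(voxel, tip) <= radius:
            remaining = density - energy
            if remaining <= 0:
                del volume[voxel]
                removed.append(voxel)
            else:
                volume[voxel] = remaining
    return removed
```

With per-voxel densities taken from a patient CT, material-dependent drilling speed emerges from this kind of low-level rule rather than being programmed explicitly, which mirrors the emergence the abstract describes.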
10. Design and Fabrication of a Fiber Bragg Grating Shape Sensor for Shape Reconstruction of a Continuum Manipulator. IEEE Sensors Journal 2023; 23:12915-12929. [PMID: 38558829 PMCID: PMC10977927 DOI: 10.1109/jsen.2023.3274146]
Abstract
Continuum dexterous manipulators (CDMs) are suitable for performing tasks in a constrained environment due to their high dexterity and maneuverability. Despite the inherent advantages of CDMs in minimally invasive surgery, real-time control of CDMs' shape during nonconstant curvature bending is still challenging. This study presents a novel approach for the design and fabrication of a large-deflection fiber Bragg grating (FBG) shape sensor embedded within the lumens inside the walls of a CDM with a large instrument channel. The shape sensor consisted of two fibers, each with three FBG nodes. A shape-sensing model was introduced to reconstruct the centerline of the CDM based on FBG wavelengths. Different experiments, including shape sensor tests and CDM shape reconstruction tests, were conducted to assess the overall accuracy of the shape sensing. The FBG sensor evaluation results revealed a linear curvature-wavelength relationship, with large curvature detection of 0.045 mm⁻¹ and a high wavelength shift of up to 5.50 nm at a 90° bending angle in both bending directions. The CDM's shape reconstruction experiments in a free environment demonstrated shape-tracking accuracy of 0.216 ± 0.126 mm for positive/negative deflections. Also, the CDM shape reconstruction error for three cases of bending with obstacles was observed to be 0.436 ± 0.370 mm for the proximal case, 0.485 ± 0.418 mm for the middle case, and 0.312 ± 0.261 mm for the distal case. This study indicates the adequate performance of the FBG sensor and the effectiveness of the model for tracking the shape of the large-deflection CDM with nonconstant-curvature bending for minimally invasive orthopedic applications.
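A centerline reconstruction of the kind described above can be sketched as piecewise-constant-curvature integration of the linear curvature-wavelength relation (curvature proportional to wavelength shift). The gain and segment-length parameters below are placeholders, not the paper's calibrated values, and the sketch is planar for simplicity:

```python
import math

def reconstruct_centerline(wavelength_shifts, seg_len, gain):
    """Planar centerline from FBG wavelength shifts, assuming the linear
    relation kappa = gain * delta_lambda and constant curvature within
    each sensorized segment of length seg_len."""
    x, y, theta = 0.0, 0.0, 0.0
    points = [(x, y)]
    for dl in wavelength_shifts:
        kappa = gain * dl           # curvature of this segment
        theta += kappa * seg_len    # heading change over the segment
        x += seg_len * math.cos(theta)
        y += seg_len * math.sin(theta)
        points.append((x, y))
    return points
```

With zero wavelength shifts the reconstruction is a straight line, and a positive shift bends the tip off-axis, matching the intuition that wavelength shift encodes local bending.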
11. Visualization in 2D/3D registration matters for assuring technology-assisted image-guided surgery. Int J Comput Assist Radiol Surg 2023; 18:1017-1024. [PMID: 37079247 PMCID: PMC10986429 DOI: 10.1007/s11548-023-02888-0]
Abstract
PURPOSE Image-guided navigation and surgical robotics are the next frontiers of minimally invasive surgery. Assuring safety in high-stakes clinical environments is critical for their deployment. 2D/3D registration is an essential, enabling algorithm for most of these systems, as it provides spatial alignment of preoperative data with intraoperative images. While these algorithms have been studied widely, there is a need for verification methods to enable human stakeholders to assess and either approve or reject registration results to ensure safe operation. METHODS To address the verification problem from the perspective of human perception, we develop novel visualization paradigms and use a sampling method based on an approximate posterior distribution to simulate registration offsets. We then conduct a user study with 22 participants to investigate how different visualization paradigms (Neutral, Attention-Guiding, Correspondence-Suggesting) affect human performance in evaluating the simulated 2D/3D registration results using 12 pelvic fluoroscopy images. RESULTS All three visualization paradigms allow users to perform better than random guessing in differentiating between offsets of varying magnitude. The novel paradigms show better performance than the neutral paradigm when using an absolute threshold to differentiate acceptable and unacceptable registrations (highest accuracy: Correspondence-Suggesting (65.1%), highest F1 score: Attention-Guiding (65.7%)), as well as when using a paradigm-specific threshold for the same discrimination (highest accuracy: Attention-Guiding (70.4%), highest F1 score: Correspondence-Suggesting (65.0%)). CONCLUSION This study demonstrates that visualization paradigms do affect the human-based assessment of 2D/3D registration errors. However, further exploration is needed to understand this effect better and develop more effective methods to assure accuracy. This research serves as a crucial step toward enhanced surgical autonomy and safety assurance in technology-assisted image-guided surgery.
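The accuracy and F1 scores reported above follow the standard binary-classification definitions. A minimal sketch; the encoding of 1 as "registration flagged unacceptable" is an assumption made here for illustration:

```python
def accuracy_and_f1(predictions, labels):
    """Accuracy and F1 for binary decisions (1 = flagged unacceptable)."""
    tp = sum(p == 1 and l == 1 for p, l in zip(predictions, labels))
    fp = sum(p == 1 and l == 0 for p, l in zip(predictions, labels))
    fn = sum(p == 0 and l == 1 for p, l in zip(predictions, labels))
    correct = sum(p == l for p, l in zip(predictions, labels))
    accuracy = correct / len(labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1
```

Reporting both metrics matters here because accuracy and F1 can rank the paradigms differently, exactly as in the study's results.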
12. Synthetic data accelerates the development of generalizable learning-based algorithms for X-ray image analysis. Nat Mach Intell 2023; 5:294-308. [PMID: 38523605 PMCID: PMC10959504 DOI: 10.1038/s42256-023-00629-1]
Abstract
Artificial intelligence (AI) now enables automated interpretation of medical images. However, AI's potential use for interventional image analysis remains largely untapped. This is because the post hoc analysis of data collected during live procedures has fundamental and practical limitations, including ethical considerations, expense, scalability, data integrity and a lack of ground truth. Here we demonstrate that creating realistic simulated images from human models is a viable alternative and complement to large-scale in situ data collection. We show that training AI image analysis models on realistically synthesized data, combined with contemporary domain generalization techniques, results in machine learning models that on real data perform comparably to models trained on a precisely matched real data training set. We find that our model transfer paradigm for X-ray image analysis, which we refer to as SyntheX, can even outperform real-data-trained models due to the effectiveness of training on a larger dataset. SyntheX provides an opportunity to markedly accelerate the conception, design and evaluation of X-ray-based intelligent systems. In addition, SyntheX provides the opportunity to test novel instrumentation, design complementary surgical approaches, and envision novel techniques that improve outcomes, save time or mitigate human error, free from the ethical and practical considerations of live human data collection.
13. A Surgical Robotic System for Osteoporotic Hip Augmentation: System Development and Experimental Evaluation. IEEE Transactions on Medical Robotics and Bionics 2023; 5:18-29. [PMID: 37213937 PMCID: PMC10195101 DOI: 10.1109/tmrb.2023.3241589]
Abstract
Minimally invasive Osteoporotic Hip Augmentation (OHA) by injecting bone cement is a potential treatment option to reduce the risk of hip fracture. This treatment can significantly benefit from a computer-assisted planning and execution system to optimize the pattern of cement injection. We present a novel robotic system for the execution of OHA that consists of a 6-DOF robotic arm and an integrated drilling and injection component. The minimally invasive procedure is performed by registering the robot and preoperative images to the surgical scene using multiview image-based 2D/3D registration with no external fiducial attached to the body. The performance of the system is evaluated through experimental sawbone studies as well as cadaveric experiments with intact soft tissues. In the cadaver experiments, distance errors of 3.28 mm and 2.64 mm for entry and target points and an orientation error of 2.30° are calculated. Moreover, a mean surface distance error of 2.13 mm with a translational error of 4.47 mm is reported between injected and planned cement profiles. The experimental results demonstrate the first application of the proposed Robot-Assisted combined Drilling and Injection System (RADIS), incorporating biomechanical planning and intraoperative fiducial-less 2D/3D registration, on human cadavers with intact soft tissues.
14. STTAR: Surgical Tool Tracking using Off-the-Shelf Augmented Reality Head-Mounted Displays. IEEE Transactions on Visualization and Computer Graphics 2023. [PMID: 37021885 PMCID: PMC10959446 DOI: 10.1109/tvcg.2023.3238309]
Abstract
The use of Augmented Reality (AR) for navigation purposes has proven beneficial in assisting physicians during the performance of surgical procedures. These applications commonly require knowing the pose of surgical tools and patients to provide visual information that surgeons can use during the performance of the task. Existing medical-grade tracking systems use infrared cameras placed inside the Operating Room (OR) to identify retro-reflective markers attached to objects of interest and compute their pose. Some commercially available AR Head-Mounted Displays (HMDs) use similar cameras for self-localization, hand tracking, and estimating the objects' depth. This work presents a framework that uses the built-in cameras of AR HMDs to enable accurate tracking of retro-reflective markers without the need to integrate any additional electronics into the HMD. The proposed framework can simultaneously track multiple tools without having previous knowledge of their geometry and only requires establishing a local network between the headset and a workstation. Our results show that the tracking and detection of the markers can be achieved with an accuracy of 0.09 ± 0.06 mm in lateral translation, 0.42 ± 0.32 mm in longitudinal translation, and 0.80 ± 0.39° for rotations around the vertical axis. Furthermore, to showcase the relevance of the proposed framework, we evaluate the system's performance in the context of surgical procedures. This use case was designed to replicate the scenarios of k-wire insertions in orthopedic procedures. For evaluation, seven surgeons were provided with visual navigation and asked to perform 24 injections using the proposed framework. A second study with ten participants served to investigate the capabilities of the framework in the context of more general scenarios. Results from these studies provided comparable accuracy to those reported in the literature for AR-based navigation procedures.
15. Remote orthopedic robotic surgery: make fracture treatment no longer limited by geography. Sci Bull (Beijing) 2023; 68:14-17. [PMID: 36682857 DOI: 10.1016/j.scib.2022.12.016]
16. Review of Enhanced Handheld Surgical Drills. Crit Rev Biomed Eng 2023; 51:29-50. [PMID: 37824333 PMCID: PMC10874117 DOI: 10.1615/critrevbiomedeng.2023049106]
Abstract
The handheld drill has been used as a conventional surgical tool for centuries. Alongside the recent successes of surgical robots, the development of new and enhanced medical drills has improved surgeon ability without requiring the high cost and time-consuming setup that plague medical robot systems. This work provides an overview of enhanced handheld surgical drill research focusing on systems that include some form of image guidance and do not require additional hardware that physically supports or guides drilling. Systems are reviewed by main contribution, divided into audio-, visual-, and hardware-enhanced drills. A vision for future work to enhance handheld drilling systems is also discussed.
17. Towards Visualizing Early-stage Osteonecrosis using Intraoperative Imaging Modalities. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022; 11:1234-1242. [PMID: 38179232 PMCID: PMC10766436 DOI: 10.1080/21681163.2022.2157329]
Abstract
Osteonecrosis of the Femoral Head (ONFH) is a progressive disease characterized by the death of bone cells due to the loss of blood supply. Early detection and treatment of this disease are vital in avoiding Total Hip Replacement. While early stages of ONFH can be diagnosed using Magnetic Resonance Imaging (MRI), commonly used intra-operative imaging modalities such as fluoroscopy frequently fail to depict the lesion, increasing the difficulty of intra-operative localization of osteonecrosis. This work introduces a novel framework that enables the localization of necrotic lesions in Computed Tomography (CT) as a step toward localizing and visualizing necrotic lesions in intra-operative images. The proposed framework uses Deep Learning algorithms to enable automatic segmentation of the femur, pelvis, and necrotic lesions in MRI. An additional step performs semi-automatic segmentation of these anatomies, excluding the necrotic lesions, in CT. A final step performs pairwise registration of the corresponding anatomies, allowing for the localization and visualization of the necrosis in CT. To investigate the feasibility of integrating the proposed framework into the surgical workflow, we conducted experiments on MRIs and CTs containing early-stage ONFH. Our results indicate that the proposed framework is able to segment the anatomical structures of interest and accurately register the femurs and pelvis of the corresponding volumes, allowing for the visualization and localization of the ONFH in CT and generated X-rays, which could enable intra-operative visualization of the necrotic lesions for surgical procedures such as core decompression of the femur.
18. Towards 2D/3D Registration of the Preoperative MRI to Intraoperative Fluoroscopic Images for Visualization of Bone Defects. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022; 11:1096-1105. [PMID: 37555198 PMCID: PMC10406464 DOI: 10.1080/21681163.2022.2152375]
Abstract
Magnetic Resonance Imaging (MRI) is a medical imaging modality that allows for the evaluation of soft-tissue diseases and the assessment of bone quality. Preoperative MRI volumes are used by surgeons to identify defective bone, perform the segmentation of lesions, and generate surgical plans before surgery. Nevertheless, conventional intraoperative imaging modalities such as fluoroscopy are less sensitive in detecting potential lesions. In this work, we propose a 2D/3D registration pipeline that aims to register preoperative MRI with intraoperative 2D fluoroscopic images. To showcase the feasibility of our approach, we use the core decompression procedure as a surgical example to perform 2D/3D femur registration. The proposed registration pipeline is evaluated using digitally reconstructed radiographs (DRRs) to simulate the intraoperative fluoroscopic images. The resulting transformation from the registration is later used to create overlays of preoperative MRI annotations and planning data to provide intraoperative visual guidance to surgeons. Our results suggest that the proposed registration pipeline is capable of achieving a reasonable transformation between MRI and digitally reconstructed fluoroscopic images for intraoperative visualization applications.
Collapse
|
19
|
Towards Reducing Visual Workload in Surgical Navigation: Proof-of-concept of an Augmented Reality Haptic Guidance System. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING. IMAGING & VISUALIZATION 2022; 11:1073-1080. [PMID: 38487569 PMCID: PMC10938944 DOI: 10.1080/21681163.2022.2152372] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Accepted: 11/19/2022] [Indexed: 12/12/2022]
Abstract
The integration of navigation capabilities into the operating room has enabled surgeons to take on more precise procedures guided by a pre-operative plan. Traditionally, navigation information based on this plan is presented using monitors in the surgical theater, but the monitors force the surgeon to frequently look away from the surgical area. Alternative technologies, such as augmented reality, have enabled surgeons to visualize navigation information in-situ. However, burdening the visual field with additional information can be distracting. In this work, we propose integrating haptic feedback into a surgical tool handle to enable surgical guidance capabilities. This approach reduces the amount of visual information, freeing surgeons to maintain visual attention over the patient and the surgical site. To investigate the feasibility of this guidance paradigm, we conducted a pilot study with six subjects. Participants traced paths, pinpointed locations, and matched alignments with a mock surgical tool featuring a novel haptic handle. We collected quantitative data, tracking users' accuracy and time to completion, as well as subjective cognitive load. Our results show that haptic feedback can guide participants using a tool to sub-millimeter and sub-degree accuracy with little training. Participants were able to match a location with an average error of 0.82 mm, desired pivot alignments with an average error of 0.83°, and desired rotations to 0.46°.
Collapse
|
20
|
Fluoroscopy-Guided Robotic System for Transforaminal Lumbar Epidural Injections. IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS 2022; 4:901-909. [PMID: 37790985 PMCID: PMC10544812 DOI: 10.1109/tmrb.2022.3196321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/05/2023]
Abstract
We present an autonomous robotic spine needle injection system using fluoroscopic image-based navigation. Our system includes patient-specific planning, intra-operative image-based 2D/3D registration and navigation, and automatic robot-guided needle injection. We performed extensive simulation studies to validate the registration accuracy. We achieved a mean spine vertebrae registration error of 0.8 ± 0.3 mm and 0.9 ± 0.7 degrees, and a mean injection device registration error of 0.2 ± 0.6 mm and 1.2 ± 1.3 degrees, in translation and rotation, respectively. We then conducted cadaveric studies comparing our system to an experienced clinician's free-hand injections. We achieved a mean needle tip translational error of 5.1 ± 2.4 mm and needle orientation error of 3.6 ± 1.9 degrees for robotic injections, compared to 7.6 ± 2.8 mm and 9.9 ± 4.7 degrees for the clinician's free-hand injections, respectively. During injections, all needle tips were placed within the defined safety zones for this application. The results suggest the feasibility of using our image-guided robotic injection system for spinal orthopedic applications.
Collapse
|
21
|
A Dexterous Robotic System for Autonomous Debridement of Osteolytic Bone Lesions in Confined Spaces: Human Cadaver Studies. IEEE T ROBOT 2022; 38:1213-1229. [PMID: 35633946 PMCID: PMC9138669 DOI: 10.1109/tro.2021.3091283] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
Abstract
This article presents a dexterous robotic system for autonomous debridement of osteolytic bone lesions in confined spaces. The proposed system is distinguished from the state-of-the-art orthopedics systems because it combines a rigid-link robot with a continuum manipulator (CM) that enhances reach in difficult-to-access spaces often encountered in surgery. The CM is equipped with flexible debriding instruments and fiber Bragg grating sensors. The surgeon plans on the patient’s preoperative computed tomography and the robotic system performs the task autonomously under the surgeon’s supervision. An optimization-based controller generates control commands on the fly to execute the task while satisfying physical and safety constraints. The system design and controller are discussed and extensive simulation, phantom and human cadaver experiments are carried out to evaluate the performance, workspace, and dexterity in confined spaces. Mean and standard deviation of target placement are 0.5 and 0.18 mm, and the robotic system covers 91% of the workspace behind an acetabular implant in treatment of hip osteolysis, compared to the 54% that is achieved by conventional rigid tools.
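The optimization-based control idea described above — generating commands on the fly that drive the tool toward a target while satisfying constraints — can be sketched in a much-reduced form. The toy below (not the authors' implementation) uses damped least-squares resolved-rate control for a planar 2-link arm, with a joint-rate clamp standing in for the physical and safety constraints; the gains, link lengths, and target are all invented for illustration.

```python
import numpy as np

def dls_step(q, target, link=(1.0, 1.0), lam=0.1, qdot_max=0.2):
    """One damped-least-squares step toward a 2D target; the clip on joint
    rates plays the role of the controller's physical/safety constraints."""
    l1, l2 = link
    # forward kinematics and Jacobian of a planar 2-link arm
    x = np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                  l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])
    J = np.array([[-l1 * np.sin(q[0]) - l2 * np.sin(q[0] + q[1]), -l2 * np.sin(q[0] + q[1])],
                  [ l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),  l2 * np.cos(q[0] + q[1])]])
    e = target - x
    qdot = J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(2), e)
    return q + np.clip(qdot, -qdot_max, qdot_max), np.linalg.norm(e)

q = np.array([0.3, 0.5])          # initial joint angles (rad), assumed
target = np.array([1.2, 0.9])     # reachable task-space target, assumed
for _ in range(200):
    q, err = dls_step(q, target)
print(err < 1e-3)
```

The damping term `lam` keeps the solve well conditioned near singular configurations, which is the usual reason this formulation is preferred over a plain pseudoinverse.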
Collapse
|
22
|
Abstract
We present an image-based navigation solution for a surgical robotic system with a Continuum Manipulator (CM). Our navigation system uses only fluoroscopic images from a mobile C-arm to estimate the CM shape and pose with respect to the bone anatomy. The CM pose and shape estimation is achieved using image intensity-based 2D/3D registration. A learning-based framework is used to automatically detect the CM in X-ray images, identifying landmark features that are used to initialize and regularize image registration. We also propose a modified hand-eye calibration method that numerically optimizes the hand-eye matrix during image registration. The proposed navigation system for CM positioning was tested in simulation and cadaveric studies. In simulation, the proposed registration achieved a mean error of 1.10±0.72 mm between the CM tip and a target entry point on the femur. In cadaveric experiments, the mean CM tip position error was 2.86±0.80 mm after registration and repositioning of the CM. The results suggest that our proposed fluoroscopic navigation is feasible to guide the CM in orthopedic applications.
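Intensity-based 2D/3D registration of the kind described above optimizes pose parameters so that a simulated projection best matches the observed X-ray under a similarity metric. As a much-reduced illustration (not the paper's pipeline), the toy below searches integer 2D translations that maximize normalized cross-correlation; the real problem is 6-DoF with DRRs rather than shifted images.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def register_translation(fixed, moving, search=5):
    """Exhaustive search over integer 2D shifts maximizing NCC."""
    best = (-np.inf, (0, 0))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            s = ncc(fixed, shifted)
            if s > best[0]:
                best = (s, (dy, dx))
    return best

rng = np.random.default_rng(0)
fixed = rng.random((32, 32))
moving = np.roll(np.roll(fixed, -3, axis=0), 2, axis=1)  # known misalignment
score, (dy, dx) = register_translation(fixed, moving)
print(dy, dx)  # recovers the shift that undoes the misalignment: 3 -2
```

In a real pipeline the brute-force search is replaced by a gradient-free or gradient-based optimizer, and — as the abstract notes — learned landmark detection supplies the initialization that keeps such optimizers out of local minima.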
Collapse
|
23
|
Reconstruction of Orthographic Mosaics From Perspective X-Ray Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3165-3177. [PMID: 34181536 DOI: 10.1109/tmi.2021.3093198] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Image stitching is a prominent challenge in medical imaging, where the limited field-of-view captured by single images prohibits holistic analysis of patient anatomy. The barrier that prevents straightforward mosaicing of 2D images is depth mismatch due to parallax. In this work, we leverage the Fourier slice theorem to aggregate information from multiple transmission images in parallax-free domains using fundamental principles of X-ray image formation. The details of the stitched image are subsequently restored using a novel deep learning strategy that exploits similarity measures designed around frequency, as well as dense and sparse spatial image content. Our work provides evidence that reconstruction of orthographic mosaics is possible with realistic motions of the C-arm involving both translation and rotation. We also show that these orthographic mosaics enable metric measurements of clinically relevant quantities directly on the 2D image plane.
Collapse
|
24
|
The Impact of Machine Learning on 2D/3D Registration for Image-Guided Interventions: A Systematic Review and Perspective. Front Robot AI 2021; 8:716007. [PMID: 34527706 PMCID: PMC8436154 DOI: 10.3389/frobt.2021.716007] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Accepted: 07/30/2021] [Indexed: 11/13/2022] Open
Abstract
Image-based navigation is widely considered the next frontier of minimally invasive surgery. It is believed that image-based navigation will increase access to reproducible, safe, and high-precision surgery, as it may then be performed at acceptable cost and effort. This is because image-based techniques avoid the need for specialized equipment and seamlessly integrate with contemporary workflows. Furthermore, it is expected that image-based navigation techniques will play a major role in enabling mixed reality environments, as well as autonomous and robot-assisted workflows. A critical component of image guidance is 2D/3D registration, a technique to estimate the spatial relationships between 3D structures, e.g., preoperative volumetric imagery or models of surgical instruments, and 2D images thereof, such as intraoperative X-ray fluoroscopy or endoscopy. While image-based 2D/3D registration is a mature technique, its transition from the bench to the bedside has been restrained by well-known challenges, including brittleness with respect to optimization objective, hyperparameter selection, and initialization; difficulties in dealing with inconsistencies or multiple objects; and limited single-view performance. One reason these challenges persist today is that analytical solutions are likely inadequate considering the complexity, variability, and high-dimensionality of generic 2D/3D registration problems. The recent advent of machine learning-based approaches to imaging problems that, rather than specifying the desired functional mapping, approximate it using highly expressive parametric models holds promise for solving some of the notorious challenges in 2D/3D registration. In this manuscript, we review the impact of machine learning on 2D/3D registration to systematically summarize the recent advances made by the introduction of this novel technology. Grounded in these insights, we then offer our perspective on the most pressing needs, significant open problems, and possible next steps.
Collapse
|
25
|
Automated Implant Resizing for Single-Stage Cranioplasty. IEEE Robot Autom Lett 2021; 6:6624-6631. [PMID: 34395869 DOI: 10.1109/lra.2021.3095286] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Patient-specific customized cranial implants (CCIs) are designed to fill the bony voids in the cranial and craniofacial skeleton. The current clinical approach during single-stage cranioplasty involves a surgeon modifying an oversized CCI to fit a patient's skull defect. The manual process, however, can be imprecise and time-consuming. This paper presents an automated surgical workflow with a robotic workstation for intraoperative CCI modification that provides higher resizing accuracy compared to the manual approach. We propose a 2-scan method for intraoperative patient-to-CT registration using reattachable fiducial markers to address the registration issue caused by the clinical draping requirement. First, the draped, defective skull is 3D scanned and registered to the CT space using our proposed 2-scan registration method. Next, our algorithm generates a robot cutting toolpath based on the 3D defect model. The robot then performs automatic 3D scanning to localize the implant and resizes the implant to match the cranial defect. We evaluated the implant resizing accuracy of the proposed paradigm against the resizing accuracy of the manual approach by an expert surgeon on two plastic skulls and two cadavers. The evaluation results showed that our system was able to decrease the bone gap distance by more than 60% and 30% on plastic skulls and cadavers, respectively, compared to the manual approach, indicating a lower risk of post-surgical complications and better aesthetic restoration.
Collapse
|
26
|
A biomechanically-guided planning and execution paradigm for osteoporotic hip augmentation: Experimental evaluation of the biomechanics and temperature-rise. Clin Biomech (Bristol, Avon) 2021; 87:105392. [PMID: 34174676 PMCID: PMC8550980 DOI: 10.1016/j.clinbiomech.2021.105392] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/21/2020] [Revised: 04/29/2021] [Accepted: 04/30/2021] [Indexed: 02/07/2023]
Abstract
BACKGROUND Augmentation of the proximal femur with bone cement (femoroplasty) has been identified as a potential preventive approach to reduce the risk of fracture. Femoroplasty, however, is associated with a risk of thermal damage, as well as leakage of bone cement or blockage of blood supply when large volumes of cement are introduced inside the bone. METHODS Six pairs of cadaveric femora were augmented using a newly proposed planning paradigm and an in-house navigation system to control the location and volume of the injected cement. To evaluate the risk of thermal damage, we recorded the peak temperature of bone at three regions of interest, as well as the exposure time for temperature rises of 8 °C, 10 °C, and 12 °C in these regions. Augmentation was followed by mechanical testing to failure, resembling a sideways fall on the greater trochanter. FINDINGS Results of the fracture tests correlated with those of simulations for the yield load (R2 = 0.77) and showed that femoroplasty can significantly improve the yield load (42%, P < 0.001) and yield energy (139%, P = 0.062) of the specimens. Meanwhile, temperature recordings of the bone surface showed that areas close to the greater trochanter are exposed to a more critical temperature rise than the trochanteric crest and femoral neck areas. INTERPRETATION The new planning paradigm offers a more efficient injection strategy, with an injection volume of 9.1 ml on average. Meanwhile, temperature recordings of bone surfaces suggest that the risk of thermal necrosis remains a concern with femoroplasty using Polymethylmethacrylate.
Collapse
|
27
|
An Active Steering Hand-held Robotic System for Minimally Invasive Orthopaedic Surgery Using a Continuum Manipulator. IEEE Robot Autom Lett 2021; 6:1622-1629. [PMID: 33869745 PMCID: PMC8052093 DOI: 10.1109/lra.2021.3059634] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
This paper presents the development and experimental evaluation of an active steering hand-held robotic system for milling and curved drilling in minimally invasive orthopaedic interventions. The system comprises a cable-driven continuum dexterous manipulator (CDM), an actuation unit with a handpiece, and a flexible, rotary cutting tool. Compared to conventional rigid drills, the proposed system enhances dexterity and reach in confined spaces in surgery, while providing direct control to the surgeon with sufficient stability while cutting/milling hard tissue. Of note, for cases that require precise motion, the system can be mounted on a positioning robot for additional controllability. A proportional-derivative (PD) controller for regulating drive cable tension is proposed for the stable steering of the CDM during cutting operations. The robotic system is characterized and tested with various tool rotational speeds and cable tensions, demonstrating successful cutting of three-dimensional and curvilinear tool paths in simulated cancellous bone and bone phantom. Material removal rates (MRRs) of up to 571 mm3/s are achieved for stable cutting, demonstrating great improvement over previous related works.
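A PD regulator of drive-cable tension, as proposed above, can be sketched minimally. The gains, setpoint, and first-order "cable" plant below are invented for illustration and are not the authors' model.

```python
import numpy as np

def pd_tension_step(t_ref, t_meas, t_meas_prev, dt, kp=2.0, kd=0.05):
    """One PD update: command = Kp*error + Kd*d(error)/dt (illustrative gains)."""
    err = t_ref - t_meas
    derr = ((t_ref - t_meas) - (t_ref - t_meas_prev)) / dt
    return kp * err + kd * derr

# toy plant: tension relaxes toward the command with a small leak term
dt, tension, prev = 0.01, 0.0, 0.0
t_ref = 5.0  # desired cable tension (N), assumed setpoint
for _ in range(500):
    u = pd_tension_step(t_ref, tension, prev, dt)
    prev = tension
    tension += dt * (u - 0.5 * tension)
print(round(tension, 2))  # settles at 4.0 N
```

Note the steady-state offset (the tension settles near 4 N rather than the 5 N setpoint): a PD law alone cannot cancel the plant's leak term, which is why practical tension regulators often add an integral term.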
Collapse
|
28
|
Vibration-based drilling depth estimation of bone. Int J Med Robot 2021; 17:e2233. [PMID: 33533110 DOI: 10.1002/rcs.2233] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Revised: 01/17/2021] [Accepted: 01/18/2021] [Indexed: 01/15/2023]
Abstract
Drilling is one of the most common forms of tissue removal procedures, and drilling to a desired depth helps avoid injury to the soft tissue beyond and ensures implant stability. The deformation of the human musculoskeletal system has been a common problem in many drilling processes, making it difficult to achieve accurate estimation of the drilling depth. To remedy this problem, a dynamic model is presented to describe the relationship between the axial vibration of the drill and the feed rate. During the drilling process, the amplitude of the main harmonic is estimated from the high-frequency component of the acceleration signal, while the short-time integral of the low-frequency part is calculated. Both the initial contact of the drilling tool with the bone and breakthrough are identified by monitoring either the harmonic amplitude or the short-time integral. The harmonic amplitude is mapped to the data from a non-contact position sensor tracking the feed rate of the drill. Multiple drilling experiments with both a handheld device and a robotic cutting system demonstrated the effectiveness, stability, and accuracy of the method when estimating depth. The mean maximum error for drilling depth estimation is less than 15% of the simulated bone thickness when using the handheld device, while the mean maximum error is less than 5% for the robotic cutting system.
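The signal-processing step described above — estimating the main-harmonic amplitude from the high-frequency part of the acceleration signal and integrating the low-frequency part — can be sketched on a synthetic accelerometer trace. The sample rate, drill harmonic frequency, and amplitudes below are invented for illustration.

```python
import numpy as np

fs = 2000.0                      # sample rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
f_drill = 100.0                  # drill rotation harmonic (Hz), assumed
rng = np.random.default_rng(1)
# synthetic signal: main harmonic + slow feed-motion component + noise
accel = (0.8 * np.sin(2 * np.pi * f_drill * t)
         + 0.2 * np.sin(2 * np.pi * 3 * t)
         + 0.05 * rng.standard_normal(t.size))

spec = np.abs(np.fft.rfft(accel)) * 2 / t.size   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = spec[np.argmin(np.abs(freqs - f_drill))]   # main-harmonic amplitude
low_band = spec[freqs < 10.0]                    # low-frequency band (feed motion)
print(round(amp, 2))  # close to the injected 0.8
```

Tracking `amp` over short windows is what lets the method flag bone contact and breakthrough, while a short-time integral over the low band plays the role of the feed-rate cue.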
Collapse
|
29
|
Development and Pre-Clinical Analysis of Spatiotemporal-Aware Augmented Reality in Orthopedic Interventions. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:765-778. [PMID: 33166252 PMCID: PMC8317976 DOI: 10.1109/tmi.2020.3037013] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Suboptimal interaction with patient data and challenges in mastering 3D anatomy based on ill-posed 2D interventional images are essential concerns in image-guided therapies. Augmented reality (AR) has been introduced into operating rooms in the last decade; however, in image-guided interventions, it has often been considered only as a visualization device improving traditional workflows. As a consequence, the technology has not gained the maturity it requires to redefine new procedures, user interfaces, and interactions. The main contribution of this paper is to reveal how exemplary workflows are redefined by taking full advantage of head-mounted displays when entirely co-registered with the imaging system at all times. The awareness of the system of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. Our system achieved an error of 4.76 ± 2.91 mm for placing a K-wire in a fracture management procedure, and yielded errors of 1.57 ± 1.16° and 1.46 ± 1.00° in the abduction and anteversion angles, respectively, for total hip arthroplasty (THA). We compared the results with the outcomes from baseline standard operative and non-immersive AR procedures, which had yielded errors of [4.61 mm, 4.76°, 4.77°] and [5.13 mm, 1.78°, 1.43°], respectively, for wire placement, and abduction and anteversion during THA. We hope that our holistic approach towards improving the interface of surgery not only augments the surgeon's capabilities but also augments the surgical team's experience in carrying out an effective intervention with reduced complications, and provides novel approaches for documenting procedures for training purposes.
Collapse
|
30
|
Data-Driven Shape Sensing of a Surgical Continuum Manipulator Using an Uncalibrated Fiber Bragg Grating Sensor. IEEE SENSORS JOURNAL 2021; 21:3066-3076. [PMID: 33746624 PMCID: PMC7978403 DOI: 10.1109/jsen.2020.3028208] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
This article proposes a data-driven learning-based approach for shape sensing and Distal-end Position Estimation (DPE) of a surgical Continuum Manipulator (CM) in constrained environments using Fiber Bragg Grating (FBG) sensors. The proposed approach uses only the sensory data from an unmodeled, uncalibrated sensor embedded in the CM to estimate the shape and DPE. It serves as an alternative to the conventional mechanics-based, sensor-model-dependent approach, which relies on several sensor and CM geometrical assumptions. Unlike the conventional approach, where the shape is reconstructed from the proximal to the distal end of the device, we propose a reversed approach where the distal-end position is estimated first and, given this information, the shape is then reconstructed from the distal to the proximal end. The proposed methodology yields more accurate DPE by avoiding the accumulation of integration errors in conventional approaches. We study three data-driven models, namely a linear regression model, a Deep Neural Network (DNN), and a Temporal Neural Network (TNN), and compare DPE and shape reconstruction results. Additionally, we test both approaches (data-driven and model-dependent) against internal and external disturbances to the CM and its environment, such as incorporation of flexible medical instruments into the CM and contacts with obstacles in taskspace. Using the data-driven (DNN) and model-dependent approaches, the following max absolute errors are observed for DPE: 0.78 mm and 2.45 mm in free bending motion, 0.11 mm and 3.20 mm with flexible instruments, and 1.22 mm and 3.19 mm with taskspace obstacles, indicating superior performance of the proposed data-driven approach compared to the conventional approaches.
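Of the three data-driven models compared above, the linear regression baseline is the easiest to sketch: map raw FBG wavelength shifts directly to a distal-end position with least squares, with no sensor model in between. The data below are synthetic; the channel count, output dimension, and noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic training data: 3 FBG wavelength shifts -> 2D distal-end position
true_W = np.array([[1.5, -0.3],
                   [0.2,  2.1],
                   [-0.7, 0.9]])                 # unknown sensor-to-tip mapping
X = rng.standard_normal((200, 3))                # wavelength shifts (a.u.)
Y = X @ true_W + 0.01 * rng.standard_normal((200, 2))  # noisy tip positions

W, *_ = np.linalg.lstsq(X, Y, rcond=None)        # fit the linear model
x_new = np.array([0.5, -1.0, 0.2])               # new sensor reading
tip = x_new @ W                                  # distal-end position estimate
err = np.linalg.norm(tip - x_new @ true_W)
print(err < 0.02)  # True: estimate is close to the noise-free answer
```

The DNN and TNN variants in the study replace the single matrix `W` with nonlinear (and, for the TNN, temporal) function approximators, but the train-then-predict structure is the same.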
Collapse
|
31
|
A Surgical Robotic System for Treatment of Pelvic Osteolysis Using an FBG-Equipped Continuum Manipulator and Flexible Instruments. IEEE/ASME TRANSACTIONS ON MECHATRONICS : A JOINT PUBLICATION OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY AND THE ASME DYNAMIC SYSTEMS AND CONTROL DIVISION 2021; 26:369-380. [PMID: 34025108 PMCID: PMC8132934 DOI: 10.1109/tmech.2020.3020504] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
This paper presents the development and experimental evaluation of a redundant robotic system for the less-invasive treatment of osteolysis (bone degradation) behind the acetabular implant during total hip replacement revision surgery. The system comprises a rigid-link positioning robot and a Continuum Dexterous Manipulator (CDM) equipped with highly flexible debriding tools and a Fiber Bragg Grating (FBG)-based sensor. The robot and the continuum manipulator are controlled concurrently via an optimization-based framework using the Tip Position Estimation (TPE) from the FBG sensor as feedback. Performance of the system is evaluated on a setup that consists of an acetabular cup and saw-bone phantom simulating the bone behind the cup. Experiments consist of performing the surgical procedure on the simulated phantom setup. CDM TPE using FBGs, target location placement, cutting performance, and the concurrent control algorithm capability in achieving the desired tasks are evaluated. Mean and standard deviation of the CDM TPE from the FBG sensor and the robotic system are 0.50 mm, and 0.18 mm, respectively. Using the developed surgical system, accurate positioning and successful cutting of desired straight-line and curvilinear paths on saw-bone phantoms behind the cup with different densities are demonstrated. Compared to the conventional rigid tools, the workspace reach behind the acetabular cup is 2.47 times greater when using the developed robotic system.
Collapse
|
32
|
Fast and automatic periacetabular osteotomy fragment pose estimation using intraoperatively implanted fiducials and single-view fluoroscopy. Phys Med Biol 2020; 65:245019. [PMID: 32590372 DOI: 10.1088/1361-6560/aba089] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
Accurate and consistent mental interpretation of fluoroscopy to determine the position and orientation of acetabular bone fragments in 3D space is difficult. We propose a computer assisted approach that uses a single fluoroscopic view and quickly reports the pose of an acetabular fragment without any user input or initialization. Intraoperatively, but prior to any osteotomies, two constellations of metallic ball-bearings (BBs) are injected into the wing of a patient's ilium and lateral superior pubic ramus. One constellation is located on the expected acetabular fragment, and the other is located on the remaining, larger, pelvis fragment. The 3D locations of each BB are reconstructed using three fluoroscopic views and 2D/3D registrations to a preoperative CT scan of the pelvis. The relative pose of the fragment is established by estimating the movement of the two BB constellations using a single fluoroscopic view taken after osteotomy and fragment relocation. BB detection and inter-view correspondences are automatically computed throughout the processing pipeline. The proposed method was evaluated on a multitude of fluoroscopic images collected from six cadaveric surgeries performed bilaterally on three specimens. Mean fragment rotation error was 2.4 ± 1.0 degrees, mean translation error was 2.1 ± 0.6 mm, and mean 3D lateral center edge angle error was 1.0 ± 0.5 degrees. The average runtime of the single-view pose estimation was 0.7 ± 0.2 s. The proposed method demonstrates accuracy similar to other state-of-the-art systems which require optical tracking systems or multiple-view 2D/3D registrations with manual input. The errors reported on fragment poses and lateral center edge angles are within the margins required for accurate intraoperative evaluation of femoral head coverage.
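Estimating the relative pose of a fragment from its BB constellation before and after relocation reduces, at its core, to rigid point-set alignment. A standard way to solve this (not necessarily the paper's exact solver) is the SVD-based Kabsch method, sketched here on synthetic points with an invented rotation and translation:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t with R @ P_i + t ≈ Q_i (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(3)
P = rng.standard_normal((6, 3))               # BB constellation before osteotomy
theta = np.deg2rad(10)                        # assumed fragment rotation about z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])           # assumed fragment translation (mm)
Q = P @ R_true.T + t_true                     # constellation after relocation
R, t = rigid_fit(P, Q)
print(np.allclose(R, R_true) and np.allclose(t, t_true))  # True
```

In the actual pipeline the post-osteotomy BB positions are not observed in 3D but inferred from a single projective view, which is what makes the problem considerably harder than this noise-free sketch.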
Collapse
|
33
|
Generalizing Spatial Transformers to Projective Geometry with Applications to 2D/3D Registration. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2020; 12263:329-339. [PMID: 33135014 DOI: 10.1007/978-3-030-59716-0_32] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Differentiable rendering is a technique to connect 3D scenes with corresponding 2D images. Since it is differentiable, processes during image formation can be learned. Previous approaches to differentiable rendering focus on mesh-based representations of 3D scenes, which is inappropriate for medical applications where volumetric, voxelized models are used to represent anatomy. We propose a novel Projective Spatial Transformer module that generalizes spatial transformers to projective geometry, thus enabling differentiable volume rendering. We demonstrate the usefulness of this architecture on the example of 2D/3D registration between radiographs and CT scans. Specifically, we show that our transformer enables end-to-end learning of an image processing and projection model that approximates an image similarity function that is convex with respect to the pose parameters, and can thus be optimized effectively using conventional gradient descent. To the best of our knowledge, we are the first to describe the spatial transformers in the context of projective transmission imaging, including rendering and pose estimation. We hope that our developments will benefit related 3D research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.
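The core operation a (projective) spatial transformer generalizes is grid sampling of a volume at transformed coordinates. The sketch below shows plain trilinear sampling and a toy parallel-beam "DRR" built by integrating each ray; the paper's module uses perspective ray geometry and automatic differentiation, both omitted here, and all array sizes are invented for illustration.

```python
import numpy as np

def trilinear_sample(vol, pts):
    """Sample a 3D volume at fractional (z, y, x) points — the 'grid sample'
    primitive a spatial transformer is built on."""
    out = np.zeros(len(pts))
    for k, (z, y, x) in enumerate(pts):
        z0, y0, x0 = int(np.floor(z)), int(np.floor(y)), int(np.floor(x))
        dz, dy, dx = z - z0, y - y0, x - x0
        acc = 0.0
        for i in (0, 1):
            for j in (0, 1):
                for l in (0, 1):
                    zz, yy, xx = z0 + i, y0 + j, x0 + l
                    if 0 <= zz < vol.shape[0] and 0 <= yy < vol.shape[1] \
                            and 0 <= xx < vol.shape[2]:
                        w = ((dz if i else 1 - dz) * (dy if j else 1 - dy)
                             * (dx if l else 1 - dx))
                        acc += w * vol[zz, yy, xx]
        out[k] = acc
    return out

def drr_parallel(vol):
    """Toy parallel-beam DRR: integrate the volume along z, one ray per pixel."""
    D, H, W = vol.shape
    zs = np.linspace(0.0, D - 1, D)
    img = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            img[y, x] = trilinear_sample(vol, [(z, y, x) for z in zs]).sum()
    return img

vol = np.zeros((4, 3, 3))
vol[1, 1, 1] = 1.0                 # a single dense voxel
img = drr_parallel(vol)
print(img[1, 1] > img[0, 0])       # projection is brightest over the dense voxel
```

Because every weight in the trilinear interpolation is a smooth function of the sample coordinates, the same computation expressed in an autodiff framework yields gradients of the rendered image with respect to the pose, which is what enables end-to-end 2D/3D registration.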
Collapse
|
34
|
Abstract
Femoroplasty is a proposed alternative therapeutic method for preventing osteoporotic hip fractures in the elderly. A previously developed navigation system for femoroplasty required the attachment of an external X-ray fiducial to the femur. We propose a fiducial-free 2D/3D registration pipeline using fluoroscopic images for robot-assisted femoroplasty. Intraoperative fluoroscopic images are taken from multiple views to perform registration of the femur and drilling/injection device. The proposed method was tested through comprehensive simulation and cadaveric studies. Performance was evaluated on the registration error of the femur and the drilling/injection device. In simulations, the proposed approach achieved a mean accuracy of 1.26 ± 0.74 mm for the relative planned injection entry point, and 0.63 ± 0.21° and 0.17 ± 0.19° for the femur injection path direction and device guide direction, respectively. In the cadaver studies, a mean error of 2.64 ± 1.10 mm was achieved between the planned entry point and the device guide tip. The biomechanical analysis showed that even with a 4 mm translational deviation from the optimal injection path, the yield load prior to fracture increased by 40.7%. This result suggests that the fiducial-free 2D/3D registration is sufficiently accurate to guide robot-assisted femoroplasty.
Collapse
|
35
|
Augmented Reality for Acetabular Component Placement in Direct Anterior Total Hip Arthroplasty. J Arthroplasty 2020; 35:1636-1641.e3. [PMID: 32063415 DOI: 10.1016/j.arth.2020.01.025] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/04/2019] [Revised: 12/20/2019] [Accepted: 01/10/2020] [Indexed: 02/01/2023] Open
Abstract
BACKGROUND Malposition of the acetabular component of a hip prosthesis can lead to poor outcomes. Traditional placement with fluoroscopic guidance results in a 35% malpositioning rate. We compared the (1) accuracy and precision of component placement, (2) procedure time, (3) radiation dose, and (4) usability of a novel 3-dimensional augmented reality (AR) guidance system vs standard fluoroscopic guidance for acetabular component placement. METHODS We simulated component placement using a radiopaque foam pelvis. Cone-beam computed tomographic data and optical data from a red-green-blue-depth camera were coregistered to create the AR environment. Eight orthopedic surgery trainees completed component placement using both methods. We measured component position (inclination, anteversion), procedure time, radiation dose, and usability (System Usability Scale score, Surgical Task Load Index value). Alpha = .05. RESULTS Compared with fluoroscopic technique, AR technique was significantly more accurate for achieving target inclination (P = .01) and anteversion (P = .02) and more precise for achieving target anteversion (P < .01). AR technique was faster (mean ± standard deviation, 1.8 ± 0.25 vs 3.9 ± 1.6 minute; P < .01), and participants rated it as significantly easier to use according to both scales (P < .05). Radiation dose was not significantly different between techniques (P = .48). CONCLUSION A novel 3-dimensional AR guidance system produced more accurate inclination and anteversion and more precise anteversion in the placement of the acetabular component of a hip prosthesis. AR guidance was faster and easier to use than standard fluoroscopic guidance and did not involve greater radiation dose.
Collapse
|
36
|
High-Resolution Optical Fiber Shape Sensing of Continuum Robots: A Comparative Study. IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION : ICRA : [PROCEEDINGS]. IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION 2020; 2020. [PMID: 34422444 DOI: 10.1109/icra40945.2020.9197454] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Flexible medical instruments, such as Continuum Dexterous Manipulators (CDMs), constitute an important class of tools for minimally invasive surgery. Accurate CDM shape reconstruction during surgery is of great importance, yet a challenging task. Fiber Bragg grating (FBG) sensors have demonstrated great potential in shape sensing and consequently tip position estimation of CDMs. However, due to the limited number of sensing locations, these sensors can only accurately recover basic shapes and become unreliable in the presence of obstacles or many inflection points such as s-bends. Optical Frequency Domain Reflectometry (OFDR), on the other hand, can achieve much higher spatial resolution and can therefore accurately reconstruct more complex shapes. Additionally, Random Optical Gratings by Ultraviolet laser Exposure (ROGUEs) can be written into the fibers to increase the signal-to-noise ratio of the sensors. In this comparison study, the tip position error is used as a metric to compare FBG and OFDR shape reconstructions for a 35 mm long CDM developed for orthopedic surgeries, using a pair of stereo cameras as ground truth. Three sets of experiments were conducted to measure the accuracy of each technique in various surgical scenarios. The tip position error for the OFDR (and FBG) technique was found to be 0.32 (0.83) mm in a free-bending environment, 0.41 (0.80) mm when interacting with obstacles, and 0.45 (2.27) mm in s-bending. Moreover, the maximum tip position error remains sub-millimeter for the OFDR reconstruction, while it reaches 3.40 mm for the FBG reconstruction. These results suggest that OFDR is a cost-effective, robust, and more accurate alternative to FBG sensors for reconstructing complex CDM shapes.
Collapse
|
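The shape-sensing comparison above reduces, at its core, to integrating measured curvature along the instrument's arc length to recover the tip position. Below is a minimal planar sketch of that integration; the values are illustrative only (the 35 mm length echoes the CDM in the study, but the constant-curvature bend, discretization, and function name are assumptions, not the authors' method):

```python
import numpy as np

def reconstruct_planar_shape(curvatures, ds):
    """Integrate discrete curvature samples (1/mm) over segments of
    length ds (mm) into a planar shape; returns an (N+1, 2) array of
    x/y positions in mm, starting at the origin."""
    theta = np.concatenate([[0.0], np.cumsum(np.asarray(curvatures) * ds)])
    mid = 0.5 * (theta[:-1] + theta[1:])   # midpoint tangent angle per segment
    pts = np.zeros((len(theta), 2))
    pts[1:, 0] = np.cumsum(ds * np.cos(mid))
    pts[1:, 1] = np.cumsum(ds * np.sin(mid))
    return pts

# Constant curvature 1/R over a 90-degree bend: the tip should land at (R, R).
R = 35.0                       # mm, same order as the 35 mm CDM in the study
N = 1000
L = np.pi * R / 2.0            # arc length of the quarter circle
pts = reconstruct_planar_shape(np.full(N, 1.0 / R), L / N)
tip = pts[-1]                  # approximately (35.0, 35.0)
```

Higher spatial resolution (smaller `ds`, as OFDR provides) reduces the integration error that accumulates toward the tip, which is consistent with the tip-error metric used in the comparison.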
37
|
Solar spectral conversion based on plastic films of lanthanide-doped ionosilicas for photovoltaics: Down-shifting layers and luminescent solar concentrators. J RARE EARTH 2020. [DOI: 10.1016/j.jre.2020.01.007] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
|
38
|
Automatic annotation of hip anatomy in fluoroscopy for robust and efficient 2D/3D registration. Int J Comput Assist Radiol Surg 2020; 15:759-769. [PMID: 32333361 PMCID: PMC7263976 DOI: 10.1007/s11548-020-02162-7] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2019] [Accepted: 04/03/2020] [Indexed: 11/25/2022]
Abstract
PURPOSE Fluoroscopy is the standard imaging modality used to guide hip surgery and is therefore a natural sensor for computer-assisted navigation. In order to efficiently solve the complex registration problems presented during navigation, human-assisted annotations of the intraoperative image are typically required. This manual initialization interferes with the surgical workflow and diminishes any advantages gained from navigation. In this paper, we propose a method for fully automatic registration using anatomical annotations produced by a neural network. METHODS Neural networks are trained to simultaneously segment anatomy and identify landmarks in fluoroscopy. Training data are obtained using a computationally intensive, intraoperatively incompatible, 2D/3D registration of the pelvis and each femur. Ground truth 2D segmentation labels and anatomical landmark locations are established using projected 3D annotations. Intraoperative registration couples a traditional intensity-based strategy with annotations inferred by the network and requires no human assistance. RESULTS Ground truth segmentation labels and anatomical landmarks were obtained in 366 fluoroscopic images across 6 cadaveric specimens. In a leave-one-subject-out experiment, networks trained on these data obtained mean Dice coefficients of 0.86, 0.87, 0.90, and 0.84 for the left and right hemipelves and the left and right femurs, respectively. The mean 2D landmark localization error was 5.0 mm. The pelvis was registered within [Formula: see text] for 86% of the images when using the proposed intraoperative approach, with an average runtime of 7 s. In comparison, an intensity-only approach without manual initialization registered the pelvis to [Formula: see text] in 18% of images. CONCLUSIONS We have created the first accurately annotated, non-synthetic dataset of hip fluoroscopy. By using these annotations as training data for neural networks, state-of-the-art performance in fluoroscopic segmentation and landmark localization was achieved. Integrating these annotations allows for a robust, fully automatic, and efficient intraoperative registration during fluoroscopic navigation of the hip.
Collapse
|
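The segmentation results in the entry above are reported as Dice coefficients. As a reminder of what that metric computes, here is a minimal sketch on toy binary masks (the masks and values are illustrative, not from the paper):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1                 # 6 predicted pixels
gt = np.zeros((4, 4), dtype=int)
gt[1:3, 0:3] = 1                   # 6 ground-truth pixels
score = dice(pred, gt)             # overlap is 4 px -> 2*4/(6+6) = 2/3
```

A Dice score of 0.86 therefore indicates substantial pixel-wise overlap between predicted and ground-truth anatomy, though it says nothing directly about boundary or landmark accuracy, which the paper reports separately.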
39
|
Reflective-AR Display: An Interaction Methodology for Virtual-to-Real Alignment in Medical Robotics. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2972831] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
40
|
SCADE: Simultaneous Sensor Calibration and Deformation Estimation of FBG-Equipped Unmodeled Continuum Manipulators. IEEE T ROBOT 2020; 36:222-239. [PMID: 32661460 PMCID: PMC7357879 DOI: 10.1109/tro.2019.2946726] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
In this article, we present a novel stochastic algorithm called simultaneous sensor calibration and deformation estimation (SCADE) to address the problem of modeling the deformation behavior of a generic continuum manipulator (CM) in free and obstructed environments. In SCADE, using a novel mathematical formulation, we introduce an a priori model-independent filtering algorithm to fuse the continuous but inaccurate measurements of an embedded sensor (e.g., magnetic or piezoelectric sensors) with the intermittent but accurate data of an external imaging system (e.g., optical trackers or cameras). The main motivation of this article is the crucial need for accurate shape/position estimation of a CM utilized in a surgical intervention. In these robotic procedures, the CM is typically equipped with an embedded sensing unit (ESU) while an external imaging modality (e.g., an ultrasound or fluoroscopy machine) is also available at the surgical site. The results of two different sets of prior experiments in free and obstructed environments were used to evaluate the efficacy of the SCADE algorithm. The experiments were performed with a CM specifically designed for orthopaedic interventions, equipped with an inaccurate Fiber Bragg Grating (FBG) ESU, and an overhead camera. The results demonstrated the successful performance of the SCADE algorithm in simultaneously estimating the unknown deformation behavior of the unmodeled CM and identifying the time-varying drift of the poorly calibrated FBG sensing unit. Moreover, the results showed that SCADE substantially outperformed FBG-based estimation of the CM's tip position.
Collapse
|
41
|
Abstract
OBJECTIVE State-of-the-art navigation systems for pelvic osteotomies use optical systems with external fiducials. In this paper, we propose the use of X-ray navigation for pose estimation of periacetabular fragments without fiducials. METHODS A two-dimensional/three-dimensional (2-D/3-D) registration pipeline was developed to recover fragment pose. This pipeline was tested through an extensive simulation study and six cadaveric surgeries. Using osteotomy boundaries in the fluoroscopic images, the preoperative plan was refined to more accurately match the intraoperative shape. RESULTS In simulation, average fragment pose errors were 1.3 ° /1.7 mm when the planned fragment matched the intraoperative fragment, 2.2 ° /2.1 mm when the plan was not updated to match the true shape, and 1.9 ° /2.0 mm when the fragment shape was intraoperatively estimated. In cadaver experiments, the average pose errors were 2.2 ° /2.2 mm, 3.8 ° /2.5 mm, and 3.5 ° /2.2 mm when registering with the actual fragment shape, a preoperative plan, and an intraoperatively refined plan, respectively. Average errors of the lateral center edge angle were less than 2 ° for all fragment shapes in simulation and cadaver experiments. CONCLUSION The proposed pipeline is capable of accurately reporting femoral head coverage within a range clinically identified for long-term joint survivability. SIGNIFICANCE Human interpretation of fragment pose is challenging and usually restricted to rotation about a single anatomical axis. The proposed pipeline provides an intraoperative estimate of rigid pose with respect to all anatomical axes, is compatible with minimally invasive incisions, and has no dependence on external fiducials.
Collapse
|
42
|
Towards Skill Transfer via Learning-Based Guidance in Human-Robot Interaction: An Application to Orthopaedic Surgical Drilling Skill. J INTELL ROBOT SYST 2019. [DOI: 10.1007/s10846-019-01082-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
43
|
Significance of preoperative planning for prophylactic augmentation of osteoporotic hip: A computational modeling study. J Biomech 2019; 94:75-81. [PMID: 31371101 PMCID: PMC6736717 DOI: 10.1016/j.jbiomech.2019.07.012] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2018] [Revised: 07/08/2019] [Accepted: 07/11/2019] [Indexed: 12/22/2022]
Abstract
A potentially effective treatment for prevention of osteoporotic hip fractures is augmentation of the mechanical properties of the femur by injecting it with bone cement. This therapy, however, is still in the research stage and can benefit substantially from computational simulations to optimize the pattern of cement injection. Some studies have considered a patient-specific planning paradigm for Osteoporotic Hip Augmentation (OHA). Despite their biomechanical advantages, customized plans require advanced surgical systems for implementation. Other studies have therefore suggested more generalized injection strategies. The goal of this study is to investigate whether the additional computational overhead of patient-specific planning can significantly improve bone strength compared with the generalized injection strategies attempted in the literature. For this purpose, numerical models were developed from high-resolution CT images (n = 4). Through finite element analysis and hydrodynamic simulations, we compared the biomechanical efficiency of customized cement-based augmentation with three generalized injection strategies developed previously. Two series of simulations were studied, one with homogeneous and one with inhomogeneous material properties for the osteoporotic bone. For the inhomogeneous models, customized cement-based augmentation showed that injection of only 10 ml of bone cement can significantly increase the yield load (79.6%, P < 0.01) and yield energy (199%, P < 0.01) of an osteoporotic femur. This increase is significantly higher than those of the generalized injections proposed previously (23.8% on average). Our findings suggest that OHA can significantly benefit from a patient-specific plan that determines the pattern and volume of the injected cement.
Collapse
|
44
|
Enabling machine learning in X-ray-based procedures via realistic simulation of image formation. Int J Comput Assist Radiol Surg 2019; 14:1517-1528. [PMID: 31187399 PMCID: PMC7297499 DOI: 10.1007/s11548-019-02011-2] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2019] [Accepted: 06/03/2019] [Indexed: 12/19/2022]
Abstract
PURPOSE Machine learning-based approaches now outperform competing methods in most disciplines relevant to diagnostic radiology. Image-guided procedures, however, have not yet benefited substantially from the advent of deep learning, in particular because images for procedural guidance are not archived and thus unavailable for learning, and even if they were available, annotation would be a severe challenge due to the vast amounts of data. In silico simulation of X-ray images from 3D CT is an interesting alternative to using true clinical radiographs since labeling is comparably easy and potentially readily available. METHODS We extend our framework for fast and realistic simulation of fluoroscopy from high-resolution CT, called DeepDRR, with tool modeling capabilities. The framework is publicly available, open source, and tightly integrated with the software platforms native to deep learning, i.e., Python, PyTorch, and PyCuda. DeepDRR relies on machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, but uses analytic forward projection and noise injection to ensure acceptable computation times. On two X-ray image analysis tasks, namely (1) anatomical landmark detection and (2) segmentation and localization of robot end-effectors, we demonstrate that convolutional neural networks (ConvNets) trained on DeepDRRs generalize well to real data without re-training or domain adaptation. To this end, we use the exact same training protocol to train ConvNets on naïve DRRs and DeepDRRs and compare their performance on data of cadaveric specimens acquired using a clinical C-arm X-ray system. RESULTS Our findings are consistent across both considered tasks. All ConvNets performed similarly well when evaluated on the respective synthetic testing set. However, when applied to real radiographs of cadaveric anatomy, ConvNets trained on DeepDRRs significantly outperformed ConvNets trained on naïve DRRs ([Formula: see text]).
CONCLUSION Our findings for both tasks are positive and promising. Combined with complementary approaches, such as image style transfer, the proposed framework for fast and realistic simulation of fluoroscopy from CT contributes to promoting the implementation of machine learning in X-ray-guided procedures. This paradigm shift has the potential to revolutionize intra-operative image analysis to simplify surgical workflows.
Collapse
|
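The DeepDRR entry above rests on analytic forward projection of CT attenuation. As a much-simplified illustration of the line-integral core only (parallel-beam rather than the cone-beam geometry DeepDRR uses, with an invented toy volume and no material decomposition, scatter, or noise model):

```python
import numpy as np

def parallel_drr(volume_mu, ds, axis=0):
    """Naive parallel-beam DRR: accumulate the attenuation line integral
    along one axis (voxel spacing ds in mm), then apply Beer-Lambert to
    obtain a normalized detected intensity I/I0 in (0, 1]."""
    line_integral = volume_mu.sum(axis=axis) * ds   # approximates ∫ mu dl
    return np.exp(-line_integral)

# Toy 8x8x8 volume with a dense block of attenuation 0.02 (1/mm) inside.
vol = np.zeros((8, 8, 8))
vol[2:6, 3:5, 3:5] = 0.02
img = parallel_drr(vol, ds=1.0)   # 8x8 "radiograph", darker behind the block
```

Rays through the block accumulate 4 voxels x 0.02/mm = 0.08, so those pixels read exp(-0.08) while unattenuated pixels read 1.0; real DRR engines trace divergent cone-beam rays and resample the volume along each ray rather than summing one axis.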
45
|
Co-localized augmented human and X-ray observers in collaborative surgical ecosystem. Int J Comput Assist Radiol Surg 2019; 14:1553-1563. [PMID: 31350704 DOI: 10.1007/s11548-019-02035-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2019] [Accepted: 07/18/2019] [Indexed: 10/26/2022]
Abstract
PURPOSE Image-guided percutaneous interventions are safer alternatives to conventional orthopedic and trauma surgeries. To advance surgical tools through complex bony structures with confidence during these procedures, a large number of images is acquired. While image guidance is the de facto standard for guaranteeing acceptable outcomes, when these images are presented on monitors far from the surgical site, their information content cannot easily be associated with the 3D patient anatomy. METHODS In this article, we propose a collaborative augmented reality (AR) surgical ecosystem to jointly co-localize the C-arm X-ray and surgeon viewers. The technical contributions of this work include (1) joint calibration of a visual tracker on a C-arm scanner and its X-ray source via a hand-eye calibration strategy, and (2) inside-out co-localization of human and X-ray observers in shared tracking and augmentation environments using vision-based simultaneous localization and mapping. RESULTS We present a thorough evaluation of the hand-eye calibration procedure. Results suggest convergence when using 50 pose pairs or more. The mean translation and rotation errors at convergence are 5.7 mm and [Formula: see text], respectively. Further, user-in-the-loop studies were conducted to estimate the end-to-end target augmentation error. The mean distance between landmarks in the real and virtual environments was 10.8 mm. CONCLUSIONS The proposed AR solution provides a shared augmented experience between the human and X-ray viewers. The collaborative surgical AR system has the potential to simplify hand-eye coordination for surgeons and to intuitively inform C-arm technologists for prospective X-ray viewpoint planning.
Collapse
|
46
|
Inroads Toward Robot-Assisted Internal Fixation of Bone Fractures Using a Bendable Medical Screw and the Curved Drilling Technique. PROCEEDINGS OF THE ... IEEE/RAS-EMBS INTERNATIONAL CONFERENCE ON BIOMEDICAL ROBOTICS AND BIOMECHATRONICS. IEEE/RAS-EMBS INTERNATIONAL CONFERENCE ON BIOMEDICAL ROBOTICS AND BIOMECHATRONICS 2019; 2018:595-600. [PMID: 31259041 DOI: 10.1109/biorob.2018.8487926] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Internal fixation is a common orthopedic procedure in which a rigid screw is used to fix fragments of a fractured bone together and expedite the healing process. However, the rigidity of the screw, the geometry of the fractured anatomy (e.g., femur and pelvis), and the patient's age can cause an array of complications during screw placement, such as improper fracture healing due to misalignment of the bone fragments, lengthy procedure times, and consequently high radiation exposure. To address these issues, we propose a minimally invasive robot-assisted procedure comprising a continuum robot, called ortho-snake, together with a novel bendable medical screw (BMS) for fixating the fractures. We describe the implementation of a curved drilling technique and focus on the design, manufacturing, and evaluation of a novel BMS, which can passively morph into drilled curved tunnels of various curvatures. We evaluate the performance and efficacy of the proposed BMS using both finite element simulations and experiments conducted on synthetic bone samples.
Collapse
|
47
|
Learning to detect anatomical landmarks of the pelvis in X-rays from arbitrary views. Int J Comput Assist Radiol Surg 2019; 14:1463-1473. [PMID: 31006106 DOI: 10.1007/s11548-019-01975-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2019] [Accepted: 04/09/2019] [Indexed: 10/27/2022]
Abstract
PURPOSE Minimally invasive alternatives are now available for many complex surgeries. These approaches are enabled by the increasing availability of intra-operative image guidance. Yet, fluoroscopic X-rays suffer from projective transformation and thus cannot provide direct views of the anatomy. Surgeons could benefit greatly from additional information, such as anatomical landmark locations in the projections, to support intra-operative decision making. However, detecting landmarks is challenging since the viewing direction changes substantially between views, leading to varying appearance of the same landmark. Therefore, and to the best of our knowledge, view-independent anatomical landmark detection has not been investigated yet. METHODS In this work, we propose a novel approach to detect multiple anatomical landmarks in X-ray images from arbitrary viewing directions. To this end, a sequential prediction framework based on convolutional neural networks is employed to simultaneously regress all landmark locations. For training, synthetic X-rays are generated with a physically accurate forward model that allows direct application of the trained model to real X-ray images of the pelvis. View invariance is achieved via data augmentation by sampling viewing angles on a spherical segment of [Formula: see text]. RESULTS On synthetic data, a mean prediction error of 5.6 ± 4.5 mm is achieved. Further, we demonstrate that the trained model can be directly applied to real X-rays and show that these detections define correspondences to a respective CT volume, which allows for analytic estimation of the 11-degree-of-freedom projective mapping. CONCLUSION We present the first tool to detect anatomical landmarks in X-ray images independent of their viewing direction. Access to this information during surgery may benefit decision making and constitutes a first step toward global initialization of 2D/3D registration without the need for calibration.
As such, the proposed concept has a strong prospect to facilitate and enhance applications and methods in the realm of image-guided surgery.
Collapse
|
48
|
Autonomous Data-Driven Manipulation of Unknown Anisotropic Deformable Tissues Using Unmodelled Continuum Manipulators. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2018.2888896] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
49
|
Interactive Flying Frustums (IFFs): spatially aware surgical data visualization. Int J Comput Assist Radiol Surg 2019; 14:913-922. [PMID: 30863981 DOI: 10.1007/s11548-019-01943-z] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2019] [Accepted: 03/07/2019] [Indexed: 10/27/2022]
Abstract
PURPOSE As the trend toward minimally invasive and percutaneous interventions continues, the importance of appropriate surgical data visualization becomes more evident. Ineffective interventional data display techniques yield poor ergonomics that hinder hand-eye coordination and promote frustration, which can compromise on-task performance and even lead to adverse outcomes. A very common example of ineffective visualization is monitors attached to the base of mobile C-arm X-ray systems. METHODS We present a spatially and imaging geometry-aware paradigm for visualization of fluoroscopic images using Interactive Flying Frustums (IFFs) in a mixed reality environment. We exploit the fact that the C-arm imaging geometry can be modeled as a pinhole camera giving rise to an 11-degree-of-freedom view frustum on which the X-ray image can be translated while remaining valid. Visualizing IFFs to the surgeon in an augmented reality environment intuitively unites the virtual 2D X-ray image plane and the real 3D patient anatomy. To achieve this visualization, the surgeon and C-arm are tracked relative to the same coordinate frame using image-based localization and mapping, with the augmented reality environment being delivered to the surgeon via a state-of-the-art optical see-through head-mounted display. RESULTS The root-mean-squared error of C-arm source tracking after hand-eye calibration was determined as [Formula: see text] and [Formula: see text] in rotation and translation, respectively. Finally, we demonstrated the application of spatially aware data visualization for internal fixation of pelvic fractures and percutaneous vertebroplasty. CONCLUSION Our spatially aware approach to transmission image visualization effectively unites patient anatomy with X-ray images by enabling spatial image manipulation that abides by the rules of image formation.
Our proof-of-principle findings indicate potential applications for surgical tasks that mostly rely on orientational information such as placing the acetabular component in total hip arthroplasty, making us confident that the proposed augmented reality concept can pave the way for improving surgical performance and visuo-motor coordination in fluoroscopy-guided surgery.
Collapse
|
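The IFF concept above models the C-arm as a pinhole camera, so every point along a viewing ray projects to the same detector pixel; that is what allows the X-ray image to translate along the view frustum while remaining valid. A minimal sketch of this property with hypothetical intrinsics (the focal length and principal point values are invented, not from the paper):

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal length 1000 px, principal
# point at (512, 512) px; illustrative values only.
K = np.array([[1000.0,    0.0, 512.0],
              [   0.0, 1000.0, 512.0],
              [   0.0,    0.0,   1.0]])

def project(K, p):
    """Project a 3D point (source at origin, looking down +z) to pixels."""
    q = K @ p
    return q[:2] / q[2]

p = np.array([0.05, -0.02, 1.0])   # a point in front of the source
u1 = project(K, p)
u2 = project(K, 2.5 * p)           # same ray, farther along the frustum
# u1 == u2: sliding the image plane along the frustum leaves pixels unchanged
```

Because scaling `p` cancels in the perspective division, the 2D image content is invariant to where along the frustum the plane is rendered, which is exactly the degree of freedom IFFs expose interactively to the surgeon.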
50
|
Abstract
In addition to liver disorders, hepatitis C virus (HCV) is also associated with extrahepatic immune manifestations and B-cell non-Hodgkin lymphoma (NHL), especially marginal zone lymphoma, de novo or transformed diffuse large B-cell lymphoma and, to a lesser extent, follicular lymphoma. Epidemiological data and clinical observations argue for an association between HCV and lymphoproliferative disorders. The causative role of HCV in NHL has been further supported by the response to antiviral therapy. The pathophysiological processes leading from HCV infection to overt lymphoma still need to be further elucidated. Several transformation mechanisms nevertheless emerge from reported biological studies. A strong body of evidence supports the hypothesis of an indirect transformation mechanism by which sustained antigenic stimulation leads from oligoclonal to monoclonal expansion and sometimes to frank lymphoma, mostly of the marginal zone subtype. By infecting lymphocytes, HCV could play a direct role in cellular transformation, particularly in de novo large B-cell lymphoma. Finally, HCV is associated with follicular lymphoma in a subset of patients. In this setting, it may be hypothesized that inflammatory cytokines stimulate proliferation and transformation of IgH-BCL2 clones that are increased during chronic HCV infection. Unraveling the pathogenesis of HCV-related B-cell lymphoproliferation is of prime importance for optimizing therapeutic strategies, especially with the recent development of new direct-acting antiviral drugs.
Collapse
|