1. Zhang W, Zhao L, Gou H, Gong Y, Zhou Y, Feng Q. PRSCS-Net: Progressive 3D/2D rigid Registration network with the guidance of Single-view Cycle Synthesis. Med Image Anal 2024;97:103283. PMID: 39094463; DOI: 10.1016/j.media.2024.103283.
Abstract
3D/2D registration between pre-operative 3D images (computed tomography, CT) and 2D intra-operative images (X-ray) plays an important role in image-guided spine surgery. Conventional iterative approaches are time-consuming, while existing learning-based approaches incur high computational cost and perform poorly under large misalignment because of projection-induced losses or ill-posed reconstruction. In this paper, we propose a Progressive 3D/2D rigid Registration network with the guidance of Single-view Cycle Synthesis, named PRSCS-Net. Specifically, we first introduce differentiable backward/forward projection operators into a single-view cycle-synthesis network, which reconstructs 3D geometry features from two 2D intra-operative views (one from the input, the other synthesized), thereby overcoming the limited-view problem in reconstruction. Subsequently, we employ a self-reconstruction path to extract a latent representation from the pre-operative 3D CT. Pose estimation is then performed in this 3D geometry feature space, which bridges the dimensional gap, greatly reduces computational complexity, and ensures that the features extracted from pre-operative and intra-operative images are as relevant as possible to pose estimation. Furthermore, to handle large misalignment, we develop a progressive registration path with two sub-registration networks that estimate the pose parameters by warping volume features in two steps. Finally, the proposed method was evaluated for 3D/2D registration on the public CTSpine1K dataset and an in-house dataset, C-ArmLSpine. Results demonstrate that PRSCS-Net achieves state-of-the-art registration accuracy, robustness, and generalizability compared with existing methods.
Thus, PRSCS-Net has potential for clinical spinal disease surgical planning and surgical navigation systems.
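The progressive registration path described above amounts to composing rigid transforms: a first stage removes most of the misalignment, and a second stage estimates the small residual pose that remains. A minimal numeric sketch of that composition (toy 4x4 poses invented for illustration, not the authors' networks):

```python
import numpy as np

def rotz(theta):
    """4x4 homogeneous rotation about the z-axis (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[0, 0], T[0, 1], T[1, 0], T[1, 1] = c, -s, s, c
    return T

def translate(t):
    """4x4 homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

# Hypothetical ground-truth misalignment: 30 deg about z plus a 20 mm shift.
T_true = translate([20.0, 0.0, 0.0]) @ rotz(np.deg2rad(30))

# A coarse first stage recovers most, but not all, of the pose ...
T_coarse = translate([18.0, 0.0, 0.0]) @ rotz(np.deg2rad(27))
# ... and a second stage estimates the residual relative to the warped result.
T_resid = T_true @ np.linalg.inv(T_coarse)

# The progressive estimate is the residual composed with the coarse pose.
T_final = T_resid @ T_coarse
assert np.allclose(T_final, T_true)
```

Because the second stage only has to model the residual, its search space is far smaller than the full misalignment range, which is the intuition behind two-step warping.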
Affiliation(s)
- Wencong Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Lei Zhao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Hang Gou
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Yanggang Gong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Yujia Zhou
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, 510515, China
2. Shi J, Shen J, Zhang C, Guo W, Wang F. Robot-assisted versus traditional surgery in the treatment of intertrochanteric fractures: a meta-analysis. J Robot Surg 2024;18:221. PMID: 38780662; PMCID: PMC11116270; DOI: 10.1007/s11701-024-01979-7.
Abstract
Intramedullary nail fixation of intertrochanteric fractures assisted by orthopedic surgical robot navigation is a new surgical method, but few studies have compared its efficacy with traditional intramedullary nail fixation. We aimed to assess through a literature review whether robot-assisted internal fixation confers surgical advantages. PubMed, EMBASE, the Cochrane Library, the China National Knowledge Infrastructure (CNKI), and the Wanfang Data Knowledge Service Platform were searched for randomized and non-randomized studies of patients with intertrochanteric fractures. Five studies were identified for comparison of clinical indexes. Robot assistance was generally feasible and superior in operation time, intraoperative fluoroscopy times, blood loss, pin insertion, tip-apex distance (TAD), and Harris score (p < 0.05). However, the postoperative complication rate and the excellent-and-good rate did not differ significantly from the traditional group (p > 0.05). Based on the current evidence, the short-term clinical advantages of robot assistance are clear; long-term clinical outcomes are good with both methods, with robot assistance trending better. However, the quality of some included studies is low, and more high-quality randomized controlled trials (RCTs) are needed for further verification.
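Meta-analyses such as this one typically combine per-study effects with inverse-variance weighting. A minimal fixed-effect sketch of a pooled mean difference (illustrative numbers only, not data from the five included studies):

```python
import numpy as np

# Per-study mean differences and standard errors (hypothetical values,
# e.g. robot-assisted minus conventional operation time in minutes).
md = np.array([-10.0, -8.0, -12.0, -9.0, -11.0])
se = np.array([2.0, 3.0, 2.5, 1.5, 2.2])

w = 1.0 / se**2                       # inverse-variance weights
pooled = np.sum(w * md) / np.sum(w)   # fixed-effect pooled mean difference
pooled_se = np.sqrt(1.0 / np.sum(w))  # standard error of the pooled estimate
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"MD = {pooled:.2f} min, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Studies with smaller standard errors dominate the pooled estimate; a random-effects model would additionally widen the interval to absorb between-study heterogeneity.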
Affiliation(s)
- Jiaxiao Shi
- Department of Orthopaedics, Hebei Province Cangzhou Hospital of Integrated Traditional Chinese Medicine-Western Medicine, Cangzhou, China.
- Hebei Key Laboratory of Integrated Traditional and Western Medicine in Osteoarthrosis Research (Preparing), Cangzhou, China.
- Jiaxin Shen
- Department of Intensive Care Unit, Cangzhou Central Hospital, Cangzhou, 061001, China
- Chaochao Zhang
- Department of Orthopaedics, Hebei Province Cangzhou Hospital of Integrated Traditional Chinese Medicine-Western Medicine, Cangzhou, China
- Hebei Key Laboratory of Integrated Traditional and Western Medicine in Osteoarthrosis Research (Preparing), Cangzhou, China
- Wei Guo
- Department of Orthopaedics, Hebei Province Cangzhou Hospital of Integrated Traditional Chinese Medicine-Western Medicine, Cangzhou, China
- Hebei Key Laboratory of Integrated Traditional and Western Medicine in Osteoarthrosis Research (Preparing), Cangzhou, China
- Fangfang Wang
- Department of Orthopaedics, Hebei Province Cangzhou Hospital of Integrated Traditional Chinese Medicine-Western Medicine, Cangzhou, China
- Hebei Key Laboratory of Integrated Traditional and Western Medicine in Osteoarthrosis Research (Preparing), Cangzhou, China
3. Zhang C, Liu J, Bian L, Xiang S, Liu J, Guan W. FMB: Dual-view fusion and registration of 2D DSA images and 3D MRA images for neurointerventional-based procedures. Comput Biol Med 2024;171:107987. PMID: 38350395; DOI: 10.1016/j.compbiomed.2024.107987.
Abstract
OBJECTIVE Alignment between preoperative images (high-resolution magnetic resonance imaging, magnetic resonance angiography) and intraoperative images (digital subtraction angiography, DSA) is currently required in neurointerventional surgery. Treatment of a lesion is usually guided by a 2D DSA silhouette image, which lacks anatomical information and therefore increases procedure time and radiation exposure; information from MRA images can compensate for this and improve procedure efficiency. We abstract this task as estimating the relative pose and correspondence between a 3D point set and its 2D projection. Multimodal images contain substantial noise and anomalies that are difficult to resolve with conventional methods, and, according to our research, few multimodal fusion methods cover the full procedure. APPROACH We therefore introduce a registration pipeline for multimodal images with fused dual views. Deep learning is used for feature extraction from the multimodal images to automate the process. In addition, we propose a registration method based on the Factor of Maximum Bounds (FMB). The key insights are to relax the constraints on the lower bound, tighten the constraints on the upper bound, and mine more local consensus information in the point set using a second view to generate accurate pose estimates. MAIN RESULTS Compared with existing 2D/3D point-set registration methods, our formulation searches the rotation and translation spaces more efficiently and improves registration speed. SIGNIFICANCE Experiments on synthetic and real data show that the proposed method achieves good accuracy, robustness, and time efficiency.
Affiliation(s)
- Chenyu Zhang
- College of Electronic Information Engineering, Beihang University, 100191, Beijing, China.
- Jiaxin Liu
- College of Electronic Information Engineering, Beihang University, 100191, Beijing, China
- Lisong Bian
- Neurosurgery Department, Haidian Hospital, 100080, Beijing, China
- Sishi Xiang
- Neurosurgery Department, Xuanwu Hospital, Capital Medical University, 100053, Beijing, China
- Jun Liu
- College of Electronic Information Engineering, Beihang University, 100191, Beijing, China
- Wenxue Guan
- College of Computer Science and Technology, Jilin University, Changchun, 130012, China
4. Xu Z, Zhang X, Wang Y, Hao X, Liu M, Sun J, Zhao Z. Comparison of Bone-setting Robots and Conventional Reduction in the Treatment of Intertrochanteric Fracture: A Retrospective Study. Orthop Surg 2024;16:312-319. PMID: 38086603; PMCID: PMC10834210; DOI: 10.1111/os.13954.
Abstract
OBJECTIVE Intertrochanteric fracture of the femur (IFF) is a common fracture in older people. Because of the poor systemic condition and prognosis of elderly patients, it is prone to complications. We introduce the bone-setting concept into the design of robots used for reduction of IFF. The purpose of this study is to compare the effect of bone-setting robots and conventional reduction in the treatment of IFF. METHODS From June 2021 to January 2023, 60 surgically treated patients with IFF were assigned to a bone-setting robot group or a conventional reduction group in this retrospective study. Reduction time, operation time, total time, intraoperative blood loss, incision length, fluoroscopy time, and follow-up time were reviewed. The visual analogue scale (VAS) and Harris scores were used for functional assessment. Independent t-tests were applied for continuous variables and the chi-square test for categorical data; the significance level was set at p < 0.05. RESULTS Of the 60 patients with IFF, 31 were assigned to the bone-setting robot group and 29 to the conventional reduction group. The two groups had similar baselines in number, gender, age, and fracture classification (p > 0.05). Reduction time, operation time, total time, intraoperative blood loss, and fluoroscopy time were all lower in the bone-setting robot group than in the conventional reduction group. In the bone-setting robot group, the preoperative VAS score was 6.2 ± 1.3 and Harris score 35.3 ± 3.1; one week after surgery, the VAS score was 3.3 ± 1.2 and Harris score 57.3 ± 3.7; at the last follow-up, the VAS score was 2.4 ± 0.8 and Harris score 88.7 ± 3.4. In the conventional reduction group, the preoperative VAS score was 6.3 ± 1.3 and Harris score 35.9 ± 2.9; one week after surgery, the VAS score was 4.8 ± 1.4 and Harris score 46.8 ± 2.8; at the last follow-up, the VAS score was 2.6 ± 0.8 and Harris score 87.3 ± 3.3. There were no significant differences between the two groups in VAS or Harris scores preoperatively or at the 6-month postoperative follow-up (p > 0.05), but the differences at the one-week postoperative follow-up were statistically significant (p < 0.001). CONCLUSION Bone-setting robots better protect the "fracture environment" and offer precise, minimally invasive, simple, time-saving, low-radiation reduction with rapid fracture recovery. The clinical effect of closed reduction of IFF is ideal.
Affiliation(s)
- Zhanmin Xu
- Tianjin Fourth Centre Hospital, Tianjin, China
- Xinan Zhang
- Tianjin University of Traditional Chinese Medicine, Tianjin, China
- Meiyue Liu
- Tianjin Fourth Centre Hospital, Tianjin, China
5. Margalit A, Phalen H, Gao C, Ma J, Suresh KV, Jain P, Farvardin A, Taylor RH, Armand M, Chattre A, Jain A. Autonomous Spinal Robotic System for Transforaminal Lumbar Epidural Injections: A Proof of Concept Study. Global Spine J 2024;14:138-145. PMID: 35467447; PMCID: PMC10676186; DOI: 10.1177/21925682221096625.
Abstract
STUDY DESIGN Phantom study. OBJECTIVE The aim of our study is to demonstrate, in a proof-of-concept model, whether a markerless, autonomously controlled robotic injection delivery system increases accuracy in the lumbar spine. METHODS Ideal transforaminal epidural injection trajectories (bilateral L2/3, L3/4, L4/5, L5/S1, and S1) were planned by one experienced provider using virtual pre-operative planning software. Twenty transforaminal epidural injections were administered in a lumbar spine phantom model: 10 using a freehand procedure and 10 using a markerless autonomous spinal robotic system. Procedural accuracy, defined as the difference between the pre-operatively planned and actual post-operative needle tip position (mm) and angular orientation (degrees), was compared between the freehand and robotic procedures. RESULTS Procedural accuracy for robotically placed transforaminal epidural injections was significantly higher: the difference between planned and post-operative needle tip position was 20.1 (±5.0) mm for the freehand procedure versus 11.4 (±3.9) mm for the robotic procedure (P < .001). Needle tip precision was 15.6 mm (26.3-10.7) for the freehand technique compared with 10.1 mm (16.3-6.1) for the robotic technique. Deviations in needle angular orientation were 5.6 (±3.3) degrees for the robotic procedure and 12.0 (±4.8) degrees for the freehand procedure (P = .003). CONCLUSION The robotic system allowed placement of transforaminal epidural injections comparable to the freehand technique of an experienced provider, with the additional benefits of improved accuracy and precision.
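The accuracy metrics used here reduce to two quantities per injection: the Euclidean distance between planned and achieved needle tips, and the angle between the planned and achieved needle axes. A minimal sketch of both computations (hypothetical coordinates, not the study's measurements):

```python
import numpy as np

def tip_error_mm(planned_tip, actual_tip):
    """Euclidean distance between planned and achieved needle tips (mm)."""
    return float(np.linalg.norm(np.asarray(actual_tip, float) -
                                np.asarray(planned_tip, float)))

def angular_error_deg(planned_dir, actual_dir):
    """Angle between planned and achieved needle axes (degrees)."""
    a = np.asarray(planned_dir, float)
    a /= np.linalg.norm(a)
    b = np.asarray(actual_dir, float)
    b /= np.linalg.norm(b)
    # Clip to guard against round-off pushing the dot product past +/-1.
    return float(np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))))

print(tip_error_mm([0, 0, 0], [3, 4, 0]))       # 5.0
print(angular_error_deg([0, 0, 1], [0, 1, 1]))  # ~45.0
```

Averaging these two errors over the 10 injections per arm yields the per-technique accuracy figures reported in the abstract.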
Affiliation(s)
- Adam Margalit
- Department of Orthopaedic Surgery, The Johns Hopkins University, Baltimore, MD, USA
- Henry Phalen
- Johns Hopkins Whiting School of Engineering, Laboratory for Computational Sensing and Robotics, Baltimore, MD, USA
- Cong Gao
- Johns Hopkins Whiting School of Engineering, Laboratory for Computational Sensing and Robotics, Baltimore, MD, USA
- Justin Ma
- Johns Hopkins Whiting School of Engineering, Laboratory for Computational Sensing and Robotics, Baltimore, MD, USA
- Krishna V. Suresh
- Department of Orthopaedic Surgery, The Johns Hopkins University, Baltimore, MD, USA
- Punya Jain
- Department of Orthopaedic Surgery, The Johns Hopkins University, Baltimore, MD, USA
- Amirhossein Farvardin
- Johns Hopkins Whiting School of Engineering, Laboratory for Computational Sensing and Robotics, Baltimore, MD, USA
- Russell H. Taylor
- Johns Hopkins Whiting School of Engineering, Laboratory for Computational Sensing and Robotics, Baltimore, MD, USA
- Mehran Armand
- Johns Hopkins Whiting School of Engineering, Laboratory for Computational Sensing and Robotics, Baltimore, MD, USA
- Akhil Chattre
- Department of Orthopaedic Surgery, The Johns Hopkins University, Baltimore, MD, USA
- Amit Jain
- Department of Orthopaedic Surgery, The Johns Hopkins University, Baltimore, MD, USA
6. Gao C, Feng A, Liu X, Taylor RH, Armand M, Unberath M. A Fully Differentiable Framework for 2D/3D Registration and the Projective Spatial Transformers. IEEE Trans Med Imaging 2024;43:275-285. PMID: 37549070; PMCID: PMC10879149; DOI: 10.1109/tmi.2023.3299588.
Abstract
Image-based 2D/3D registration is a critical technique for fluoroscopic guided surgical interventions. Conventional intensity-based 2D/3D registration approaches suffer from a limited capture range due to the presence of local minima in hand-crafted image similarity functions. In this work, we aim to extend the 2D/3D registration capture range with a fully differentiable deep network framework that learns to approximate a convex-shape similarity function. The network uses a novel Projective Spatial Transformer (ProST) module that has unique differentiability with respect to 3D pose parameters, and is trained using an innovative double backward gradient-driven loss function. We compare the most popular learning-based pose regression methods in the literature and use the well-established CMAES intensity-based registration as a benchmark. We report registration pose error, target registration error (TRE) and success rate (SR) with a threshold of 10 mm for mean TRE. For the pelvis anatomy, the median TRE of ProST followed by CMAES is 4.4 mm with a SR of 65.6% in simulation, and 2.2 mm with a SR of 73.2% in real data. The CMAES SRs without using ProST registration are 28.5% and 36.0% in simulation and real data, respectively. Our results suggest that the proposed ProST network learns a practical similarity function, which vastly extends the capture range of conventional intensity-based 2D/3D registration. We believe that the unique differentiable property of ProST has the potential to benefit related 3D medical imaging research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.
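The capture-range problem can be illustrated with a toy intensity-based registration: project a volume to a 2D image, score candidate poses with a similarity function, and keep the best. The sketch below uses a sum-intensity parallel projection and normalized cross-correlation over a single translational degree of freedom; it is a stand-in for, not an implementation of, the full projective, differentiable ProST formulation:

```python
import numpy as np

def drr(volume):
    """Toy DRR: sum-intensity parallel projection along the first axis."""
    return volume.sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))        # stand-in pre-operative CT volume
fixed = drr(np.roll(vol, 5, axis=1))  # "intra-operative" image, true shift = 5

# Exhaustive 1-DoF search: the similarity peaks at the true offset.
shifts = range(-8, 9)
scores = [ncc(fixed, drr(np.roll(vol, s, axis=1))) for s in shifts]
best = list(shifts)[int(np.argmax(scores))]  # recovers the true shift of 5
```

In 6-DoF rigid registration the same similarity landscape is riddled with local minima far from the optimum; ProST's contribution is making the projection differentiable so a learned, near-convex similarity can be descended instead of exhaustively searched.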
7. Burton W, Crespo IR, Andreassen T, Pryhoda M, Jensen A, Myers C, Shelburne K, Banks S, Rullkoetter P. Fully automatic tracking of native glenohumeral kinematics from stereo-radiography. Comput Biol Med 2023;163:107189. PMID: 37393783; DOI: 10.1016/j.compbiomed.2023.107189.
Abstract
The current work introduces a system for fully automatic tracking of native glenohumeral kinematics in stereo-radiography sequences. The proposed method first applies convolutional neural networks to obtain segmentation and semantic key point predictions in biplanar radiograph frames. Preliminary bone pose estimates are computed by solving a non-convex optimization problem with semidefinite relaxations to register digitized bone landmarks to semantic key points. Initial poses are then refined by registering computed tomography-based digitally reconstructed radiographs to captured scenes, which are masked by segmentation maps to isolate the shoulder joint. A particular neural net architecture which exploits subject-specific geometry is also introduced to improve segmentation predictions and increase robustness of subsequent pose estimates. The method is evaluated by comparing predicted glenohumeral kinematics to manually tracked values from 17 trials capturing 4 dynamic activities. Median orientation differences between predicted and ground truth poses were 1.7° and 8.6° for the scapula and humerus, respectively. Joint-level kinematics differences were less than 2° in 65%, 13%, and 63% of frames for XYZ orientation DoFs based on Euler angle decompositions. Automation of kinematic tracking can increase scalability of tracking workflows in research, clinical, or surgical applications.
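The joint-level differences reported here come from Euler angle decompositions of relative rotations. A minimal sketch of an intrinsic X-Y-Z decomposition (a common biomechanics convention; the paper's exact joint coordinate axes are not specified here, and gimbal lock is not handled):

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euler_xyz_deg(R):
    """Recover intrinsic X-Y-Z Euler angles (deg) from R = Rx @ Ry @ Rz.

    The degenerate case |R[0, 2]| == 1 (gimbal lock) is ignored here."""
    b = np.arcsin(np.clip(R[0, 2], -1.0, 1.0))
    a = np.arctan2(-R[1, 2], R[2, 2])
    c = np.arctan2(-R[0, 1], R[0, 0])
    return np.degrees([a, b, c])

# Per-DoF difference between a predicted and a ground-truth bone rotation.
R_diff = rot_x(np.deg2rad(10)) @ rot_y(np.deg2rad(5)) @ rot_z(np.deg2rad(-3))
print(euler_xyz_deg(R_diff))  # ~[10, 5, -3]
```

Applying such a decomposition to the relative rotation between predicted and manually tracked poses in each frame yields the per-axis error distributions summarized in the abstract.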
Affiliation(s)
- William Burton
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E. Wesley Ave., Denver, CO, 80210, USA.
- Ignacio Rivero Crespo
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E. Wesley Ave., Denver, CO, 80210, USA
- Thor Andreassen
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E. Wesley Ave., Denver, CO, 80210, USA
- Moira Pryhoda
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E. Wesley Ave., Denver, CO, 80210, USA
- Andrew Jensen
- Department of Mechanical and Aerospace Engineering, University of Florida, 939 Center Dr., Gainesville, FL, 32611, USA
- Casey Myers
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E. Wesley Ave., Denver, CO, 80210, USA
- Kevin Shelburne
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E. Wesley Ave., Denver, CO, 80210, USA
- Scott Banks
- Department of Mechanical and Aerospace Engineering, University of Florida, 939 Center Dr., Gainesville, FL, 32611, USA
- Paul Rullkoetter
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E. Wesley Ave., Denver, CO, 80210, USA
8. Killeen BD, Gao C, Oguine KJ, Darcy S, Armand M, Taylor RH, Osgood G, Unberath M. An autonomous X-ray image acquisition and interpretation system for assisting percutaneous pelvic fracture fixation. Int J Comput Assist Radiol Surg 2023;18:1201-1208. PMID: 37213057; PMCID: PMC11002911; DOI: 10.1007/s11548-023-02941-y.
Abstract
PURPOSE Percutaneous fracture fixation involves multiple X-ray acquisitions to determine adequate tool trajectories in bony anatomy. In order to reduce time spent adjusting the X-ray imager's gantry, avoid excess acquisitions, and anticipate inadequate trajectories before penetrating bone, we propose an autonomous system for intra-operative feedback that combines robotic X-ray imaging and machine learning for automated image acquisition and interpretation, respectively. METHODS Our approach reconstructs an appropriate trajectory from a two-image sequence, where the optimal second viewpoint is determined based on analysis of the first image. A deep neural network is responsible for detecting the tool and corridor, here a K-wire and the superior pubic ramus, respectively, in these radiographs. The reconstructed corridor and K-wire pose are compared to determine the likelihood of cortical breach, and both are visualized for the clinician in a mixed reality environment that is spatially registered to the patient and delivered by an optical see-through head-mounted display. RESULTS We assess the upper bounds on system performance through in silico evaluation across 11 CTs with fractures present, in which the corridor and K-wire are adequately reconstructed. In post hoc analysis of radiographs across 3 cadaveric specimens, our system determines the appropriate trajectory to within 2.8 ± 1.3 mm and 2.7 ± 1.8°. CONCLUSION An expert user study with an anthropomorphic phantom demonstrates how our autonomous, integrated system requires fewer images and less movement to guide and confirm adequate placement compared to current clinical practice. Code and data are available.
Affiliation(s)
- Cong Gao
- Johns Hopkins University, Baltimore, MD 21210, USA
- Sean Darcy
- Johns Hopkins University, Baltimore, MD 21210, USA
- Mehran Armand
- Johns Hopkins University, Baltimore, MD 21210, USA
- Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, USA
- Greg Osgood
- Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, USA
9. Gao C, Killeen BD, Hu Y, Grupp RB, Taylor RH, Armand M, Unberath M. Synthetic data accelerates the development of generalizable learning-based algorithms for X-ray image analysis. Nat Mach Intell 2023;5:294-308. PMID: 38523605; PMCID: PMC10959504; DOI: 10.1038/s42256-023-00629-1.
Abstract
Artificial intelligence (AI) now enables automated interpretation of medical images. However, AI's potential use for interventional image analysis remains largely untapped. This is because the post hoc analysis of data collected during live procedures has fundamental and practical limitations, including ethical considerations, expense, scalability, data integrity and a lack of ground truth. Here we demonstrate that creating realistic simulated images from human models is a viable alternative and complement to large-scale in situ data collection. We show that training AI image analysis models on realistically synthesized data, combined with contemporary domain generalization techniques, results in machine learning models that on real data perform comparably to models trained on a precisely matched real data training set. We find that our model transfer paradigm for X-ray image analysis, which we refer to as SyntheX, can even outperform real-data-trained models due to the effectiveness of training on a larger dataset. SyntheX provides an opportunity to markedly accelerate the conception, design and evaluation of X-ray-based intelligent systems. In addition, SyntheX provides the opportunity to test novel instrumentation, design complementary surgical approaches, and envision novel techniques that improve outcomes, save time or mitigate human error, free from the ethical and practical considerations of live human data collection.
Affiliation(s)
- Cong Gao
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Benjamin D. Killeen
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Yicheng Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Robert B. Grupp
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Russell H. Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Mehran Armand
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Orthopaedic Surgery, Johns Hopkins Applied Physics Laboratory, Baltimore, MD, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
10. Bakhtiarinejad M, Gao C, Farvardin A, Zhu G, Wang Y, Oni JK, Taylor RH, Armand M. A Surgical Robotic System for Osteoporotic Hip Augmentation: System Development and Experimental Evaluation. IEEE Trans Med Robot Bionics 2023;5:18-29. PMID: 37213937; PMCID: PMC10195101; DOI: 10.1109/tmrb.2023.3241589.
Abstract
Minimally-invasive Osteoporotic Hip Augmentation (OHA) by injecting bone cement is a potential treatment option to reduce the risk of hip fracture. This treatment can benefit significantly from a computer-assisted planning and execution system that optimizes the pattern of cement injection. We present a novel robotic system for the execution of OHA that consists of a 6-DOF robotic arm and an integrated drilling and injection component. The minimally-invasive procedure is performed by registering the robot and preoperative images to the surgical scene using multiview image-based 2D/3D registration with no external fiducials attached to the body. The performance of the system is evaluated through experimental sawbone studies as well as cadaveric experiments with intact soft tissues. In the cadaver experiments, distance errors of 3.28 mm and 2.64 mm for entry and target points and an orientation error of 2.30° were measured. Moreover, a mean surface distance error of 2.13 mm with a translational error of 4.47 mm is reported between the injected and planned cement profiles. The experimental results demonstrate the first application of the proposed Robot-Assisted combined Drilling and Injection System (RADIS), incorporating biomechanical planning and intraoperative fiducial-less 2D/3D registration, on human cadavers with intact soft tissues.
Affiliation(s)
- Mahsan Bakhtiarinejad
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Cong Gao
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Amirhossein Farvardin
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Gang Zhu
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Yu Wang
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Julius K Oni
- Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, MD 21287, USA
- Russell H Taylor
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Mehran Armand
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, MD 21287, USA
11
Ku PC, Martin-Gomez A, Gao C, Grupp R, Mears SC, Armand M. Towards 2D/3D Registration of the Preoperative MRI to Intraoperative Fluoroscopic Images for Visualization of Bone Defects. Comput Methods Biomech Biomed Eng Imaging Vis 2022; 11:1096-1105. [PMID: 37555198 PMCID: PMC10406464 DOI: 10.1080/21681163.2022.2152375] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Accepted: 11/19/2022] [Indexed: 12/23/2022]
Abstract
Magnetic Resonance Imaging (MRI) is a medical imaging modality that allows for the evaluation of soft-tissue diseases and the assessment of bone quality. Surgeons use preoperative MRI volumes to identify defective bone, segment lesions, and generate surgical plans before surgery. Nevertheless, conventional intraoperative imaging modalities such as fluoroscopy are less sensitive in detecting potential lesions. In this work, we propose a 2D/3D registration pipeline that aims to register preoperative MRI with intraoperative 2D fluoroscopic images. To showcase the feasibility of our approach, we use the core decompression procedure as a surgical example for 2D/3D femur registration. The proposed registration pipeline is evaluated using digitally reconstructed radiographs (DRRs) to simulate the intraoperative fluoroscopic images. The resulting transformation from the registration is then used to overlay preoperative MRI annotations and planning data, providing intraoperative visual guidance to surgeons. Our results suggest that the proposed registration pipeline is capable of achieving reasonable transformations between MRI and digitally reconstructed fluoroscopic images for intraoperative visualization applications.
Affiliation(s)
- Ping-Cheng Ku
- Biomechanical- and Image-Guided Surgical Systems (BIGSS), Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Martin-Gomez
- Biomechanical- and Image-Guided Surgical Systems (BIGSS), Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Cong Gao
- Biomechanical- and Image-Guided Surgical Systems (BIGSS), Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Robert Grupp
- Biomechanical- and Image-Guided Surgical Systems (BIGSS), Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Simon C. Mears
- Department of Orthopaedic Surgery, University of Arkansas for Medical Sciences, AR, USA
- Mehran Armand
- Biomechanical- and Image-Guided Surgical Systems (BIGSS), Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, MD, USA
12
Gao C, Phalen H, Margalit A, Ma JH, Ku PC, Unberath M, Taylor RH, Jain A, Armand M. Fluoroscopy-Guided Robotic System for Transforaminal Lumbar Epidural Injections. IEEE Trans Med Robot Bionics 2022; 4:901-909. [PMID: 37790985 PMCID: PMC10544812 DOI: 10.1109/tmrb.2022.3196321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/05/2023]
Abstract
We present an autonomous robotic spine needle injection system using fluoroscopic image-based navigation. Our system includes patient-specific planning, intraoperative image-based 2D/3D registration and navigation, and automatic robot-guided needle injection. We performed extensive simulation studies to validate the registration accuracy, achieving a mean spine vertebra registration error of 0.8 ± 0.3 mm in translation and 0.9 ± 0.7 degrees in rotation, and a mean injection-device registration error of 0.2 ± 0.6 mm and 1.2 ± 1.3 degrees, respectively. We then conducted cadaveric studies comparing our system with an experienced clinician's free-hand injections. We achieved a mean needle-tip translational error of 5.1 ± 2.4 mm and a needle orientation error of 3.6 ± 1.9 degrees for robotic injections, compared with 7.6 ± 2.8 mm and 9.9 ± 4.7 degrees for the clinician's free-hand injections. During injections, all needle tips were placed within the safety zones defined for this application. The results suggest the feasibility of using our image-guided robotic injection system for spinal orthopedic applications.
Affiliation(s)
- Cong Gao
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21211
- Henry Phalen
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA 21211
- Adam Margalit
- Department of Orthopaedic Surgery, Baltimore, MD, USA 21224
- Justin H Ma
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA 21211
- Ping-Cheng Ku
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21211
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21211
- Russell H Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21211
- Amit Jain
- Department of Orthopaedic Surgery, Baltimore, MD, USA 21224
- Mehran Armand
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA 21211
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA 21211
- Department of Orthopaedic Surgery, Baltimore, MD, USA 21224
- Johns Hopkins Applied Physics Laboratory, Baltimore, MD, USA 21224
13
Nguyen HP, Kim T, Kim S. Markerless registration approach using dynamic touchable region model. Int J Med Robot 2022; 18:e2376. [DOI: 10.1002/rcs.2376] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2021] [Revised: 01/27/2022] [Accepted: 01/29/2022] [Indexed: 11/09/2022]
Affiliation(s)
- Hang Phuong Nguyen
- Department of Electrical, Electronic, and Computer Engineering, University of Ulsan, Ulsan, South Korea
- Taeho Kim
- Department of Electrical, Electronic, and Computer Engineering, University of Ulsan, Ulsan, South Korea
- Sungmin Kim
- Department of Electrical, Electronic, and Computer Engineering, University of Ulsan, Ulsan, South Korea
14
Gao C, Phalen H, Sefati S, Ma J, Taylor R, Unberath M, Armand M. Fluoroscopic Navigation for a Surgical Robotic System Including a Continuum Manipulator. IEEE Trans Biomed Eng 2022; 69:453-464. [PMID: 34270412 PMCID: PMC8817231 DOI: 10.1109/tbme.2021.3097631] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Abstract
We present an image-based navigation solution for a surgical robotic system with a Continuum Manipulator (CM). Our navigation system uses only fluoroscopic images from a mobile C-arm to estimate the CM shape and pose with respect to the bone anatomy. The CM pose and shape estimation is achieved using image intensity-based 2D/3D registration. A learning-based framework is used to automatically detect the CM in X-ray images, identifying landmark features that are used to initialize and regularize image registration. We also propose a modified hand-eye calibration method that numerically optimizes the hand-eye matrix during image registration. The proposed navigation system for CM positioning was tested in simulation and cadaveric studies. In simulation, the proposed registration achieved a mean error of 1.10 ± 0.72 mm between the CM tip and a target entry point on the femur. In cadaveric experiments, the mean CM tip position error was 2.86 ± 0.80 mm after registration and repositioning of the CM. The results suggest that the proposed fluoroscopic navigation is feasible for guiding the CM in orthopedic applications.
15
Unberath M, Gao C, Hu Y, Judish M, Taylor RH, Armand M, Grupp R. The Impact of Machine Learning on 2D/3D Registration for Image-Guided Interventions: A Systematic Review and Perspective. Front Robot AI 2021; 8:716007. [PMID: 34527706 PMCID: PMC8436154 DOI: 10.3389/frobt.2021.716007] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Accepted: 07/30/2021] [Indexed: 11/13/2022] Open
Abstract
Image-based navigation is widely considered the next frontier of minimally invasive surgery. It is believed that image-based navigation will increase access to reproducible, safe, and high-precision surgery, as it may then be performed at acceptable cost and effort. This is because image-based techniques avoid the need for specialized equipment and seamlessly integrate with contemporary workflows. Furthermore, it is expected that image-based navigation techniques will play a major role in enabling mixed reality environments, as well as autonomous and robot-assisted workflows. A critical component of image guidance is 2D/3D registration, a technique to estimate the spatial relationships between 3D structures, e.g., preoperative volumetric imagery or models of surgical instruments, and 2D images thereof, such as intraoperative X-ray fluoroscopy or endoscopy. While image-based 2D/3D registration is a mature technique, its transition from the bench to the bedside has been restrained by well-known challenges, including brittleness with respect to optimization objective, hyperparameter selection, and initialization; difficulties in dealing with inconsistencies or multiple objects; and limited single-view performance. One reason these challenges persist today is that analytical solutions are likely inadequate considering the complexity, variability, and high dimensionality of generic 2D/3D registration problems. The recent advent of machine learning-based approaches to imaging problems that, rather than specifying the desired functional mapping, approximate it using highly expressive parametric models holds promise for solving some of the notorious challenges in 2D/3D registration. In this manuscript, we review the impact of machine learning on 2D/3D registration to systematically summarize the recent advances made by the introduction of this novel technology. Grounded in these insights, we then offer our perspective on the most pressing needs, significant open problems, and possible next steps.
Affiliation(s)
- Mathias Unberath
- Advanced Robotics and Computationally Augmented Environments (ARCADE) Lab, Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States