1
Killeen BD, Zhang H, Wang LJ, Liu Z, Kleinbeck C, Rosen M, Taylor RH, Osgood G, Unberath M. Stand in surgeon's shoes: virtual reality cross-training to enhance teamwork in surgery. Int J Comput Assist Radiol Surg 2024. [PMID: 38642297] [DOI: 10.1007/s11548-024-03138-7]
Abstract
PURPOSE Teamwork in surgery depends on a shared mental model of success, i.e., a common understanding of objectives in the operating room. A shared model leads to increased engagement among team members and is associated with fewer complications and overall better outcomes for patients. However, clinical training typically focuses on role-specific skills, leaving individuals to acquire a shared model indirectly through on-the-job experience. METHODS We investigate whether virtual reality (VR) cross-training, i.e., exposure to other roles, can enhance a shared mental model for non-surgeons more directly. Our study focuses on X-ray guided pelvic trauma surgery, a procedure where successful communication depends on the shared model between the surgeon and a C-arm technologist. We present a VR environment supporting both roles and evaluate a cross-training curriculum in which non-surgeons swap roles with the surgeon. RESULTS Exposure to the surgical task resulted in higher engagement with the C-arm technologist role in VR, as measured by the mental demand and effort expended by participants (p < 0.001). It also had a significant effect on non-surgeons' mental model of the overall task; novice participants' estimation of the mental demand and effort required for the surgeon's task increased after training, while their perception of overall performance decreased (p < 0.05), indicating a gap in understanding when based solely on observation. This phenomenon was also present for a professional C-arm technologist. CONCLUSION Until now, VR applications for clinical training have focused on virtualizing existing curricula. We demonstrate how novel approaches that are not possible outside of a virtual environment, such as role swapping, may enhance the shared mental model of surgical teams by contextualizing each individual's role within the overall task in a time- and cost-efficient manner. As workflows grow increasingly sophisticated, we see VR curricula as being able to directly foster a shared model for success, ultimately benefiting patient outcomes through more effective teamwork in surgery.
Affiliation(s)
- Han Zhang
  - Johns Hopkins University, Baltimore, MD, 21218, USA
- Liam J Wang
  - Johns Hopkins University, Baltimore, MD, 21218, USA
- Zixuan Liu
  - Johns Hopkins University, Baltimore, MD, 21218, USA
- Constantin Kleinbeck
  - Johns Hopkins University, Baltimore, MD, 21218, USA
  - Friedrich-Alexander-Universität, Erlangen, Germany
- Greg Osgood
  - Department of Orthopaedic Surgery, Johns Hopkins Medicine, Baltimore, MD, 21218, USA
2
Cho SM, Joo HH, Golla P, Sahu M, Shankar A, Trakimas DR, Creighton F, Akst L, Taylor RH, Galaiya D. Tremor Assessment in Robot-Assisted Microlaryngeal Surgery Using Computer Vision-Based Tool Tracking. Otolaryngol Head Neck Surg 2024. [PMID: 38488231] [DOI: 10.1002/ohn.714]
Abstract
OBJECTIVE Use microscopic video-based tracking of laryngeal surgical instruments to investigate the effect of robot assistance on instrument tremor. STUDY DESIGN Experimental trial. SETTING Tertiary Academic Medical Center. METHODS In this randomized cross-over trial, 36 videos were recorded from 6 surgeons performing left and right cordectomies on cadaveric pig larynges. These recordings captured 3 distinct conditions: without robotic assistance, with robot-assisted scissors, and with robot-assisted graspers. To assess tool tremor, we employed computer vision-based algorithms for tracking surgical tools. Absolute tremor bandpower and normalized path length were utilized as quantitative measures. Wilcoxon rank sum exact tests were employed for statistical analyses and comparisons between trials. Additionally, surveys were administered to assess the perceived ease of use of the robotic system. RESULTS Absolute tremor bandpower showed a significant decrease when using robot-assisted instruments compared to freehand instruments (P = .012). Normalized path length significantly decreased with robot-assisted compared to freehand trials (P = .001). For the scissors, robot-assisted trials resulted in a significant decrease in absolute tremor bandpower (P = .002) and normalized path length (P < .001). For the graspers, there was no significant difference in absolute tremor bandpower (P = .4), but there was a significantly lower normalized path length in the robot-assisted trials (P = .03). CONCLUSION This study demonstrated that computer-vision-based approaches can be used to assess tool motion in simulated microlaryngeal procedures. The results suggest that robot assistance is capable of reducing instrument tremor.
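For illustration of the two motion metrics named above, the sketch below computes a normalized path length and band-limited tremor power from a tracked 2D tool-tip trajectory. This is a generic reconstruction, not the authors' code; the 6-12 Hz tremor band, the sampling rate, and the Welch settings are assumptions.

```python
import numpy as np
from scipy.signal import welch

def normalized_path_length(xy: np.ndarray) -> float:
    """Total tip path length divided by the straight-line start-to-end distance."""
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    return float(steps.sum() / np.linalg.norm(xy[-1] - xy[0]))

def tremor_bandpower(xy: np.ndarray, fs: float, band=(6.0, 12.0)) -> float:
    """Power of tip displacement inside an assumed 6-12 Hz tremor band,
    estimated with Welch periodograms and summed over both image axes."""
    total = 0.0
    for axis in range(xy.shape[1]):
        f, pxx = welch(xy[:, axis], fs=fs, nperseg=min(256, len(xy)))
        mask = (f >= band[0]) & (f <= band[1])
        total += pxx[mask].sum() * (f[1] - f[0])  # integrate PSD over the band
    return total

# Example: 10 s of a 30 fps track with a small 8 Hz oscillation superimposed.
t = np.arange(0, 10, 1 / 30)
track = np.stack([5 * t + 0.2 * np.sin(2 * np.pi * 8 * t), 2 * t], axis=1)
print(normalized_path_length(track), tremor_bandpower(track, fs=30.0))
```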
Affiliation(s)
- Sue M Cho
  - Department of Computer Science, Johns Hopkins, Baltimore, Maryland, USA
- Henry H Joo
  - Department of Otolaryngology-Head & Neck Surgery, Johns Hopkins, Baltimore, Maryland, USA
- Pranathi Golla
  - Department of Mechanical Engineering, Johns Hopkins, Baltimore, Maryland, USA
- Manish Sahu
  - Department of Computer Science, Johns Hopkins, Baltimore, Maryland, USA
- Ahjeetha Shankar
  - Department of Otolaryngology-Head & Neck Surgery, Johns Hopkins, Baltimore, Maryland, USA
- Danielle R Trakimas
  - Department of Otolaryngology-Head & Neck Surgery, Johns Hopkins, Baltimore, Maryland, USA
- Francis Creighton
  - Department of Otolaryngology-Head & Neck Surgery, Johns Hopkins, Baltimore, Maryland, USA
- Lee Akst
  - Department of Otolaryngology-Head & Neck Surgery, Johns Hopkins, Baltimore, Maryland, USA
- Russell H Taylor
  - Department of Computer Science, Johns Hopkins, Baltimore, Maryland, USA
- Deepa Galaiya
  - Department of Otolaryngology-Head & Neck Surgery, Johns Hopkins, Baltimore, Maryland, USA
3
Song H, Yang S, Wu Z, Moradi H, Taylor RH, Kang JU, Salcudean SE, Boctor EM. Arc-to-line frame registration method for ultrasound and photoacoustic image-guided intraoperative robot-assisted laparoscopic prostatectomy. Int J Comput Assist Radiol Surg 2024; 19:199-208. [PMID: 37610603] [DOI: 10.1007/s11548-023-02984-1]
Abstract
PURPOSE To achieve effective robot-assisted laparoscopic prostatectomy, integration of a transrectal ultrasound (TRUS) imaging system, the most widely used modality in prostate imaging, is essential. However, manual manipulation of the ultrasound transducer during the procedure significantly interferes with the surgery. Therefore, we propose an image co-registration algorithm based on a photoacoustic marker (PM) method, in which ultrasound/photoacoustic (US/PA) images are registered to the endoscopic camera images, ultimately enabling the TRUS transducer to automatically track the surgical instrument. METHODS An optimization-based algorithm is proposed to co-register the images from the two different imaging modalities. The algorithm incorporates the principle of light propagation and an uncertainty model for PM detection to improve stability and accuracy. It is validated using the previously developed US/PA image-guided system with a da Vinci surgical robot. RESULTS The target registration error (TRE) is measured to evaluate the proposed algorithm. In both simulation and experimental demonstration, the proposed algorithm achieved sub-centimeter accuracy acceptable for clinical practice (1.15 ± 0.29 mm in the experimental evaluation). The result is comparable with our previous approach (1.05 ± 0.37 mm), and the proposed method can be implemented with a normal white-light stereo camera and does not require highly accurate localization of the PM. CONCLUSION The proposed frame registration algorithm enables simple yet efficient integration of a commercial US/PA imaging system into the laparoscopic surgical setting by leveraging the characteristic properties of acoustic wave propagation and laser excitation, contributing to automated US/PA image-guided surgical intervention applications.
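For readers unfamiliar with the evaluation metric, the sketch below pairs a generic least-squares rigid registration (Kabsch/Arun) with a TRE computation at held-out targets. It is a stand-in for intuition only, not the paper's photoacoustic-marker solver or its uncertainty model.

```python
import numpy as np

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) mapping points src -> dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T  # sign correction guards against reflections
    return R, cd - R @ cs

def tre_mm(R, t, targets_src, targets_dst):
    """Target registration error at held-out target points."""
    return np.linalg.norm(targets_src @ R.T + t - targets_dst, axis=1)
```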
Affiliation(s)
- Hyunwoo Song
  - Department of Computer Science, Whiting School of Engineering, The Johns Hopkins University, Baltimore, MD, 21218, USA
  - Laboratory for Computational Sensing and Robotics, The Johns Hopkins University, Baltimore, MD, 21218, USA
- Shuojue Yang
  - Laboratory for Computational Sensing and Robotics, The Johns Hopkins University, Baltimore, MD, 21218, USA
- Zijian Wu
  - Laboratory for Computational Sensing and Robotics, The Johns Hopkins University, Baltimore, MD, 21218, USA
- Hamid Moradi
  - Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, V6T 1Z4, Canada
- Russell H Taylor
  - Department of Computer Science, Whiting School of Engineering, The Johns Hopkins University, Baltimore, MD, 21218, USA
  - Laboratory for Computational Sensing and Robotics, The Johns Hopkins University, Baltimore, MD, 21218, USA
- Jin U Kang
  - Department of Electrical and Computer Engineering, Whiting School of Engineering, The Johns Hopkins University, Baltimore, MD, 21218, USA
  - Laboratory for Computational Sensing and Robotics, The Johns Hopkins University, Baltimore, MD, 21218, USA
- Septimiu E Salcudean
  - Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, V6T 1Z4, Canada
- Emad M Boctor
  - Department of Computer Science, Whiting School of Engineering, The Johns Hopkins University, Baltimore, MD, 21218, USA
  - Laboratory for Computational Sensing and Robotics, The Johns Hopkins University, Baltimore, MD, 21218, USA
4
He Z, Dai J, Ho JDL, Tong HS, Wang X, Fang G, Liang L, Cheung CL, Guo Z, Chang HC, Iordachita I, Taylor RH, Poon WS, Chan DTM, Kwok KW. Interactive Multi-Stage Robotic Positioner for Intra-Operative MRI-Guided Stereotactic Neurosurgery. Adv Sci (Weinh) 2024; 11:e2305495. [PMID: 38072667] [PMCID: PMC10870025] [DOI: 10.1002/advs.202305495]
Abstract
Magnetic resonance imaging (MRI) demonstrates clear advantages over other imaging modalities in neurosurgery with its ability to delineate critical neurovascular structures and cancerous tissue in high-resolution 3D anatomical roadmaps. However, its application has been limited to interventions performed based on static pre/post-operative imaging, where errors accrue from stereotactic frame setup, image registration, and brain shift. To leverage the powerful intra-operative functions of MRI (e.g., instrument tracking, monitoring of physiological changes and tissue temperature) in MRI-guided bilateral stereotactic neurosurgery, a multi-stage robotic positioner is proposed. The system positions cannula/needle instruments using a lightweight (203 g) and compact (Ø97 × 81 mm) skull-mounted structure that fits within most standard imaging head coils. With an optimized soft robotics design, the system operates in two stages: i) manual coarse adjustment performed interactively by the surgeon (workspace of ±30°); ii) automatic fine adjustment with precise (<0.2° orientation error), responsive (1.4 Hz bandwidth), and high-resolution (0.058°) soft robotic positioning. Orientation locking provides sufficient transmission stiffness (4.07 N/mm) for instrument advancement. The system's clinical workflow and accuracy are validated with lab-based (<0.8 mm) and MRI-based testing on skull phantoms (<1.7 mm) and a cadaver subject (<2.2 mm). Custom-made wireless omni-directional tracking markers facilitated robot registration under MRI.
Affiliation(s)
- Zhuoliang He
  - Department of Mechanical Engineering, The University of Hong Kong, Hong Kong, 999077, China
- Jing Dai
  - Department of Mechanical Engineering, The University of Hong Kong, Hong Kong, 999077, China
- Justin Di-Lang Ho
  - Department of Mechanical Engineering, The University of Hong Kong, Hong Kong, 999077, China
- Hon-Sing Tong
  - Department of Mechanical Engineering, The University of Hong Kong, Hong Kong, 999077, China
- Xiaomei Wang
  - Department of Mechanical Engineering, The University of Hong Kong, Hong Kong, 999077, China
  - Multi-Scale Medical Robotics Center, Hong Kong, 999077, China
- Ge Fang
  - Department of Mechanical Engineering, The University of Hong Kong, Hong Kong, 999077, China
- Liyuan Liang
  - Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong, 999077, China
  - Multi-Scale Medical Robotics Center, Hong Kong, 999077, China
- Chim-Lee Cheung
  - Department of Mechanical Engineering, The University of Hong Kong, Hong Kong, 999077, China
- Ziyan Guo
  - Department of Medical Physics and Biomedical Engineering, University College London, London, WC1E 6BT, UK
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, WC1E 6BT, UK
- Hing-Chiu Chang
  - Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong, 999077, China
  - Multi-Scale Medical Robotics Center, Hong Kong, 999077, China
- Iulian Iordachita
  - Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Russell H Taylor
  - Department of Computer Science and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Wai-Sang Poon
  - Division of Neurosurgery, Department of Surgery, Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong, 999077, China
  - Neuromedicine Center, Shenzhen Hospital, The University of Hong Kong, Shenzhen, 518053, China
- Danny Tat-Ming Chan
  - Division of Neurosurgery, Department of Surgery, Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong, 999077, China
  - Multi-Scale Medical Robotics Center, Hong Kong, 999077, China
- Ka-Wai Kwok
  - Department of Mechanical Engineering, The University of Hong Kong, Hong Kong, 999077, China
  - Multi-Scale Medical Robotics Center, Hong Kong, 999077, China
5
Alamdar A, Usevitch DE, Wu J, Taylor RH, Gehlbach P, Iordachita I. Steady-Hand Eye Robot 3.0: Optimization and Benchtop Evaluation for Subretinal Injection. IEEE Trans Med Robot Bionics 2024; 6:135-145. [PMID: 38304756] [PMCID: PMC10831842] [DOI: 10.1109/tmrb.2023.3336975]
Abstract
Subretinal injection methods and other procedures for treating retinal conditions and diseases (many considered incurable) have been limited in scope due to limited human motor control. This study demonstrates the next-generation, cooperatively controlled Steady-Hand Eye Robot (SHER 3.0), a precise and intuitive-to-use robotic platform that achieves clinical standards for targeting accuracy and resolution in subretinal injections. The system design and basic kinematics are reported, and a deflection model for the incorporated delta stage is presented together with validation experiments. The model is used to optimize the delta stage parameters, maximizing the global conditioning index and minimizing torsional compliance. Five tests measuring accuracy, repeatability, and deflection show that the optimized stage design achieves a tip accuracy of <30 μm, tip repeatability of 9.3 μm and 0.02°, and deflections between 20 and 350 μm/N. Future work will use updated control models to refine tip positioning outcomes, which will be tested on in vivo animal models.
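The global conditioning index used in the optimization above is commonly estimated by averaging the inverse condition number of the manipulator Jacobian over the workspace. A minimal sketch follows; `delta_stage_jacobian` is a hypothetical callable standing in for the SHER 3.0 delta-stage kinematics.

```python
import numpy as np

def conditioning_index(J: np.ndarray) -> float:
    """Inverse condition number of a Jacobian: 1 is isotropic, 0 is singular."""
    s = np.linalg.svd(J, compute_uv=False)
    return float(s.min() / s.max())

def global_conditioning_index(jacobian_fn, q_samples) -> float:
    """Monte Carlo estimate of the GCI over sampled joint configurations."""
    return float(np.mean([conditioning_index(jacobian_fn(q)) for q in q_samples]))

# Usage sketch with joint samples drawn from an assumed workspace:
# rng = np.random.default_rng(0)
# gci = global_conditioning_index(delta_stage_jacobian,
#                                 rng.uniform(-1.0, 1.0, size=(1000, 3)))
```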
Affiliation(s)
- Alireza Alamdar
  - Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD, USA
- David E. Usevitch
  - Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD, USA
- Jiahao Wu
  - Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD, USA
- Russell H. Taylor
  - Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD, USA
- Peter Gehlbach
  - Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, 21287, USA
- Iulian Iordachita
  - Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD, USA
6
Margalit A, Phalen H, Gao C, Ma J, Suresh KV, Jain P, Farvardin A, Taylor RH, Armand M, Chattre A, Jain A. Autonomous Spinal Robotic System for Transforaminal Lumbar Epidural Injections: A Proof of Concept Study. Global Spine J 2024; 14:138-145. [PMID: 35467447] [PMCID: PMC10676186] [DOI: 10.1177/21925682221096625]
Abstract
STUDY DESIGN Phantom study. OBJECTIVE The aim of our study was to demonstrate, in a proof-of-concept model, whether a markerless, autonomously controlled robotic injection delivery system increases accuracy in the lumbar spine. METHODS Ideal transforaminal epidural injection trajectories (bilateral L2/3, L3/4, L4/5, L5/S1, and S1) were planned by one experienced provider using virtual preoperative planning software. Twenty transforaminal epidural injections were administered in a lumbar spine phantom model: 10 using a freehand procedure and 10 using a markerless autonomous spinal robotic system. Procedural accuracy, defined as the difference between the preoperatively planned and actual postoperative needle tip position (mm) and angular orientation (degrees), was assessed for the freehand and robotic procedures. RESULTS Procedural accuracy for robotically placed transforaminal epidural injections was significantly higher, with the difference between planned and postoperative needle tip position being 20.1 (±5.0) mm in the freehand procedure and 11.4 (±3.9) mm in the robotically placed procedure (P < .001). Needle tip precision was 15.6 mm (26.3-10.7) for the freehand technique compared to 10.1 mm (16.3-6.1) for the robotic technique. Deviations in needle angular orientation were 5.6 (±3.3) degrees in the robotically placed procedure and 12.0 (±4.8) degrees in the freehand procedure (P = .003). CONCLUSION The robotic system allowed placement of transforaminal epidural injections comparable to the freehand technique of an experienced provider, with the additional benefits of improved accuracy and precision.
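The two accuracy measures reported above, tip position error and angular deviation between planned and delivered trajectories, reduce to simple vector computations (a generic sketch, not the study's analysis code):

```python
import numpy as np

def tip_error_mm(planned_tip: np.ndarray, actual_tip: np.ndarray) -> float:
    """Euclidean distance between planned and post-procedure tip positions."""
    return float(np.linalg.norm(actual_tip - planned_tip))

def angular_deviation_deg(planned_dir: np.ndarray, actual_dir: np.ndarray) -> float:
    """Angle between the planned and delivered needle axes, in degrees."""
    c = np.dot(planned_dir, actual_dir) / (
        np.linalg.norm(planned_dir) * np.linalg.norm(actual_dir))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

print(tip_error_mm(np.array([0.0, 0.0, 0.0]), np.array([3.0, 4.0, 0.0])))  # 5.0
print(angular_deviation_deg(np.array([0.0, 0.0, 1.0]),
                            np.array([0.0, 1.0, 1.0])))                    # 45.0
```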
Affiliation(s)
- Adam Margalit
  - Department of Orthopaedic Surgery, The Johns Hopkins University, Baltimore, MD, USA
- Henry Phalen
  - Johns Hopkins Whiting School of Engineering, Laboratory for Computational Sensing and Robotics, Baltimore, MD, USA
- Cong Gao
  - Johns Hopkins Whiting School of Engineering, Laboratory for Computational Sensing and Robotics, Baltimore, MD, USA
- Justin Ma
  - Johns Hopkins Whiting School of Engineering, Laboratory for Computational Sensing and Robotics, Baltimore, MD, USA
- Krishna V. Suresh
  - Department of Orthopaedic Surgery, The Johns Hopkins University, Baltimore, MD, USA
- Punya Jain
  - Department of Orthopaedic Surgery, The Johns Hopkins University, Baltimore, MD, USA
- Amirhossein Farvardin
  - Johns Hopkins Whiting School of Engineering, Laboratory for Computational Sensing and Robotics, Baltimore, MD, USA
- Russell H. Taylor
  - Johns Hopkins Whiting School of Engineering, Laboratory for Computational Sensing and Robotics, Baltimore, MD, USA
- Mehran Armand
  - Johns Hopkins Whiting School of Engineering, Laboratory for Computational Sensing and Robotics, Baltimore, MD, USA
- Akhil Chattre
  - Department of Orthopaedic Surgery, The Johns Hopkins University, Baltimore, MD, USA
- Amit Jain
  - Department of Orthopaedic Surgery, The Johns Hopkins University, Baltimore, MD, USA
7
Munawar A, Li Z, Nagururu N, Trakimas D, Kazanzides P, Taylor RH, Creighton FX. Fully immersive virtual reality for skull-base surgery: surgical training and beyond. Int J Comput Assist Radiol Surg 2024; 19:51-59. [PMID: 37347346] [DOI: 10.1007/s11548-023-02956-5]
Abstract
PURPOSE A virtual reality (VR) system, where surgeons can practice procedures on virtual anatomies, is a scalable and cost-effective alternative to cadaveric training. Fully digitized virtual surgeries can also be used to assess the surgeon's skills using measurements that are otherwise hard to collect in reality. Thus, we present the Fully Immersive Virtual Reality System (FIVRS) for skull-base surgery, which combines surgical simulation software with a high-fidelity hardware setup. METHODS FIVRS allows surgeons to follow normal clinical workflows inside the VR environment. FIVRS uses advanced rendering designs and drilling algorithms for realistic bone ablation. A head-mounted display with ergonomics similar to those of surgical microscopes is used to improve immersiveness. Extensive multi-modal data are recorded for post-analysis, including eye gaze, motion, force, and video of the surgery. A user-friendly interface is also designed to ease the learning curve of using FIVRS. RESULTS We present results from a user study involving surgeons with various levels of expertise. The preliminary data recorded by FIVRS differentiate between participants with different levels of expertise, suggesting promise for future research on automatic skill assessment. Furthermore, informal feedback from the study participants about the system's intuitiveness and immersiveness was positive. CONCLUSION We present FIVRS, a fully immersive VR system for skull-base surgery. FIVRS features a realistic software simulation coupled with modern hardware for improved realism. The system is completely open source and provides feature-rich data in an industry-standard format.
Affiliation(s)
- Adnan Munawar
  - Johns Hopkins University, Baltimore, MD, 21218, USA
- Zhaoshuo Li
  - Johns Hopkins University, Baltimore, MD, 21218, USA
8
Gao C, Feng A, Liu X, Taylor RH, Armand M, Unberath M. A Fully Differentiable Framework for 2D/3D Registration and the Projective Spatial Transformers. IEEE Trans Med Imaging 2024; 43:275-285. [PMID: 37549070] [PMCID: PMC10879149] [DOI: 10.1109/tmi.2023.3299588]
Abstract
Image-based 2D/3D registration is a critical technique for fluoroscopy-guided surgical interventions. Conventional intensity-based 2D/3D registration approaches suffer from a limited capture range due to the presence of local minima in hand-crafted image similarity functions. In this work, we aim to extend the 2D/3D registration capture range with a fully differentiable deep network framework that learns to approximate a convex-shaped similarity function. The network uses a novel Projective Spatial Transformer (ProST) module that has unique differentiability with respect to 3D pose parameters and is trained using an innovative double-backward gradient-driven loss function. We compare against the most popular learning-based pose regression methods in the literature and use the well-established CMA-ES intensity-based registration as a benchmark. We report registration pose error, target registration error (TRE), and success rate (SR) with a threshold of 10 mm for mean TRE. For the pelvis anatomy, the median TRE of ProST followed by CMA-ES is 4.4 mm with an SR of 65.6% in simulation, and 2.2 mm with an SR of 73.2% on real data. The CMA-ES SRs without ProST registration are 28.5% and 36.0% in simulation and real data, respectively. Our results suggest that the proposed ProST network learns a practical similarity function, which vastly extends the capture range of conventional intensity-based 2D/3D registration. We believe the unique differentiable property of ProST has the potential to benefit related 3D medical imaging research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.
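The core idea, a projection operator that stays differentiable in the pose parameters, can be pictured with standard PyTorch operations. The toy renderer below integrates volume samples along source-to-detector rays via `grid_sample`; it is a simplified stand-in for the paper's ProST module, and the geometry, detector size, step count, and the raw rotation-matrix parametrization are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def toy_drr(volume, pose_R, pose_t, n_steps=64, det_res=128):
    """Differentiable DRR-style projector (sketch only).
    volume: (1, 1, D, H, W) CT addressed in [-1, 1] normalized coordinates.
    pose_R: (3, 3) rotation, pose_t: (3,) translation; gradients flow
    through both, which is what enables pose optimization by backprop."""
    dev = volume.device
    u = torch.linspace(-1, 1, det_res, device=dev)
    vv, uu = torch.meshgrid(u, u, indexing="ij")
    src = torch.tensor([0.0, 0.0, -2.0], device=dev)          # X-ray source
    det = torch.stack([uu, vv, torch.ones_like(uu)], dim=-1)  # detector at z = +1
    alphas = torch.linspace(0.0, 1.0, n_steps, device=dev)
    rays = src + alphas[:, None, None, None] * (det - src)    # (K, H, W, 3)
    pts = rays @ pose_R.T + pose_t                            # rigidly posed samples
    samples = F.grid_sample(volume, pts.unsqueeze(0), align_corners=True)
    return samples.sum(dim=2)                                 # line integrals (1, 1, H, W)
```

Because every step is differentiable, calling `toy_drr(volume, R, t).sum().backward()` yields gradients with respect to the pose, the property that lets a learned similarity function drive the registration.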
9
Ding AS, Lu A, Li Z, Sahu M, Galaiya D, Siewerdsen JH, Unberath M, Taylor RH, Creighton FX. A Self-Configuring Deep Learning Network for Segmentation of Temporal Bone Anatomy in Cone-Beam CT Imaging. Otolaryngol Head Neck Surg 2023; 169:988-998. [PMID: 36883992] [PMCID: PMC11060418] [DOI: 10.1002/ohn.317]
Abstract
OBJECTIVE Preoperative planning for otologic or neurotologic procedures often requires manual segmentation of relevant structures, which can be tedious and time-consuming. Automated methods for segmenting multiple geometrically complex structures can not only streamline preoperative planning but also augment minimally invasive and/or robot-assisted procedures in this space. This study evaluates a state-of-the-art deep learning pipeline for semantic segmentation of temporal bone anatomy. STUDY DESIGN A descriptive study of a segmentation network. SETTING Academic institution. METHODS A total of 15 high-resolution cone-beam temporal bone computed tomography (CT) data sets were included in this study. All images were co-registered, with relevant anatomical structures (e.g., ossicles, inner ear, facial nerve, chorda tympani, bony labyrinth) manually segmented. Predicted segmentations from no new U-Net (nnU-Net), an open-source 3-dimensional semantic segmentation neural network, were compared against ground-truth segmentations using modified Hausdorff distances (mHD) and Dice scores. RESULTS Fivefold cross-validation results for nnU-Net predictions against ground-truth labels were as follows: malleus (mHD: 0.044 ± 0.024 mm, Dice: 0.914 ± 0.035), incus (mHD: 0.051 ± 0.027 mm, Dice: 0.916 ± 0.034), stapes (mHD: 0.147 ± 0.113 mm, Dice: 0.560 ± 0.106), bony labyrinth (mHD: 0.038 ± 0.031 mm, Dice: 0.952 ± 0.017), and facial nerve (mHD: 0.139 ± 0.072 mm, Dice: 0.862 ± 0.039). Comparison against atlas-based segmentation propagation showed significantly higher Dice scores for all structures (p < .05). CONCLUSION Using an open-source deep learning pipeline, we demonstrate consistently submillimeter accuracy for semantic CT segmentation of temporal bone anatomy compared to hand-segmented labels. This pipeline has the potential to greatly improve preoperative planning workflows for a variety of otologic and neurotologic procedures and to augment existing image guidance and robot-assisted systems for the temporal bone.
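Dice overlap and the Dubuisson-Jain modified Hausdorff distance are standard segmentation metrics; a compact reference sketch of both is shown below, not the study's exact evaluation code.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_score(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two boolean label volumes of equal shape."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def modified_hausdorff(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Dubuisson-Jain modified Hausdorff distance between two surface
    point sets of shape (N, 3) and (M, 3), e.g. in millimeters."""
    d = cdist(pts_a, pts_b)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```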
Affiliation(s)
- Andy S. Ding
  - Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
  - Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Alexander Lu
  - Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Zhaoshuo Li
  - Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Manish Sahu
  - Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Deepa Galaiya
  - Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Jeffrey H. Siewerdsen
  - Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Mathias Unberath
  - Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Russell H. Taylor
  - Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X. Creighton
  - Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
10
Killeen BD, Zhang H, Mangulabnan J, Armand M, Taylor RH, Osgood G, Unberath M. Pelphix: Surgical Phase Recognition from X-ray Images in Percutaneous Pelvic Fixation. Med Image Comput Comput Assist Interv 2023; 14228:133-143. [PMID: 38617200] [PMCID: PMC11016332] [DOI: 10.1007/978-3-031-43996-4_13]
Abstract
Surgical phase recognition (SPR) is a crucial element in the digital transformation of the modern operating theater. While SPR based on video sources is well established, incorporation of interventional X-ray sequences has not yet been explored. This paper presents Pelphix, a first approach to SPR for X-ray-guided percutaneous pelvic fracture fixation, which models the procedure at four levels of granularity - corridor, activity, view, and frame value - simulating the pelvic fracture fixation workflow as a Markov process to provide fully annotated training data. Using added supervision from detection of bony corridors, tools, and anatomy, we learn image representations that are fed into a transformer model to regress surgical phases at the four granularity levels. Our approach demonstrates the feasibility of X-ray-based SPR, achieving an average accuracy of 99.2% on simulated sequences and 71.7% on cadaver data across all granularity levels, with up to 84% accuracy for the target corridor in real data. This work constitutes the first step toward SPR for the X-ray domain, establishing an approach to categorizing phases in X-ray-guided surgery, simulating realistic image sequences to enable machine learning model development, and demonstrating that this approach is feasible for the analysis of real procedures. As X-ray-based SPR continues to mature, it will benefit procedures in orthopedic surgery, angiography, and interventional radiology by equipping intelligent surgical systems with situational awareness in the operating room.
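To make the idea of simulating workflow as a Markov process concrete, the sketch below samples fully annotated phase sequences from a toy transition matrix. The phase labels and probabilities here are invented for illustration; the paper's four-level workflow model is far richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical phase labels and transition probabilities (rows sum to 1).
phases = ["position_wire", "fluoro_view", "adjust", "insert", "confirm"]
P = np.array([
    [0.0, 0.8, 0.2, 0.0, 0.0],
    [0.1, 0.0, 0.5, 0.4, 0.0],
    [0.3, 0.5, 0.0, 0.2, 0.0],
    [0.0, 0.3, 0.0, 0.0, 0.7],
    [0.2, 0.0, 0.0, 0.0, 0.8],
])

def sample_sequence(start=0, max_len=20):
    """Sample one annotated phase sequence from the Markov model."""
    seq, state = [phases[start]], start
    for _ in range(max_len - 1):
        state = int(rng.choice(len(phases), p=P[state]))
        seq.append(phases[state])
        if phases[state] == "confirm" and rng.random() < 0.5:
            break  # treat a confirmed corridor as a likely endpoint
    return seq

print(sample_sequence())
```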
Affiliation(s)
- Han Zhang
  - Johns Hopkins University, Baltimore, MD, USA
- Greg Osgood
  - Johns Hopkins University, Baltimore, MD, USA
11
Killeen BD, Gao C, Oguine KJ, Darcy S, Armand M, Taylor RH, Osgood G, Unberath M. An autonomous X-ray image acquisition and interpretation system for assisting percutaneous pelvic fracture fixation. Int J Comput Assist Radiol Surg 2023; 18:1201-1208. [PMID: 37213057] [PMCID: PMC11002911] [DOI: 10.1007/s11548-023-02941-y]
Abstract
PURPOSE Percutaneous fracture fixation involves multiple X-ray acquisitions to determine adequate tool trajectories in bony anatomy. In order to reduce time spent adjusting the X-ray imager's gantry, avoid excess acquisitions, and anticipate inadequate trajectories before penetrating bone, we propose an autonomous system for intra-operative feedback that combines robotic X-ray imaging and machine learning for automated image acquisition and interpretation, respectively. METHODS Our approach reconstructs an appropriate trajectory in a two-image sequence, where the optimal second viewpoint is determined based on analysis of the first image. A deep neural network is responsible for detecting the tool and corridor, here a K-wire and the superior pubic ramus, respectively, in these radiographs. The reconstructed corridor and K-wire pose are compared to determine the likelihood of cortical breach, and both are visualized for the clinician in a mixed reality environment that is spatially registered to the patient and delivered by an optical see-through head-mounted display. RESULTS We assess the upper bounds on system performance through in silico evaluation across 11 CTs with fractures present, in which the corridor and K-wire are adequately reconstructed. In post hoc analysis of radiographs across 3 cadaveric specimens, our system determines the appropriate trajectory to within 2.8 ± 1.3 mm and 2.7 ± 1.8°. CONCLUSION An expert user study with an anthropomorphic phantom demonstrates how our autonomous, integrated system requires fewer images and less movement to guide and confirm adequate placement compared to current clinical practice. Code and data are available.
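At its core, reconstructing a trajectory from a two-image sequence is triangulation from two calibrated views. A minimal OpenCV sketch follows; the projection matrices and 2D detections are assumed to come from upstream C-arm calibration and the detection network.

```python
import cv2
import numpy as np

def triangulate_point(P1: np.ndarray, P2: np.ndarray,
                      uv1: np.ndarray, uv2: np.ndarray) -> np.ndarray:
    """Recover a 3D point (e.g., a K-wire tip) from its pixel locations in
    two views with known 3x4 projection matrices P1 and P2."""
    X = cv2.triangulatePoints(P1.astype(float), P2.astype(float),
                              uv1.reshape(2, 1).astype(float),
                              uv2.reshape(2, 1).astype(float))
    return (X[:3] / X[3]).ravel()  # dehomogenize

# Triangulating both the wire tip and an entry point in the two views
# yields a 3D wire axis that can be compared against the corridor axis.
```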
Affiliation(s)
- Cong Gao
  - Johns Hopkins University, Baltimore, MD, 21210, USA
- Sean Darcy
  - Johns Hopkins University, Baltimore, MD, 21210, USA
- Mehran Armand
  - Johns Hopkins University, Baltimore, MD, 21210, USA
  - Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, USA
- Greg Osgood
  - Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, USA
12
Li Z, Shu H, Liang R, Goodridge A, Sahu M, Creighton FX, Taylor RH, Unberath M. TAToo: vision-based joint tracking of anatomy and tool for skull-base surgery. Int J Comput Assist Radiol Surg 2023; 18:1303-1310. [PMID: 37266885] [DOI: 10.1007/s11548-023-02959-2]
Abstract
PURPOSE Tracking the 3D motion of the surgical tool and the patient anatomy is a fundamental requirement for computer-assisted skull-base surgery. The estimated motion can be used both for intra-operative guidance and for downstream skill analysis. Recovering such motion solely from surgical videos is desirable, as it is compliant with current clinical workflows and instrumentation. METHODS We present Tracker of Anatomy and Tool (TAToo). TAToo jointly tracks the rigid 3D motion of the patient skull and surgical drill from stereo microscopic videos. TAToo estimates motion via an iterative optimization process in an end-to-end differentiable form. For robust tracking performance, TAToo adopts a probabilistic formulation and enforces geometric constraints at the object level. RESULTS We validate TAToo both on simulation data, where ground-truth motion is available, and on anthropomorphic phantom data, where optical tracking provides a strong baseline. We report sub-millimeter and millimeter inter-frame tracking accuracy for the skull and drill, respectively, with rotation errors below [Formula: see text]. We further illustrate how TAToo may be used in a surgical navigation setting. CONCLUSIONS We present TAToo, which simultaneously tracks the surgical tool and the patient anatomy in skull-base surgery. TAToo directly predicts the motion from surgical videos, without the need for any markers. Our results show that the performance of TAToo compares favorably to competing approaches. Future work will include fine-tuning of our depth network to reach the 1 mm clinical accuracy goal desired for surgical applications in the skull base.
Affiliation(s)
- Zhaoshuo Li
  - Johns Hopkins University, Baltimore, MD, USA
- Ruixing Liang
  - Johns Hopkins University, Baltimore, MD, USA
  - Johns Hopkins Medicine, Baltimore, MD, USA
- Manish Sahu
  - Johns Hopkins University, Baltimore, MD, USA
13
Chen Y, Goodridge A, Sahu M, Kishore A, Vafaee S, Mohan H, Sapozhnikov K, Creighton FX, Taylor RH, Galaiya D. A force-sensing surgical drill for real-time force feedback in robotic mastoidectomy. Int J Comput Assist Radiol Surg 2023; 18:1167-1174. [PMID: 37171660] [PMCID: PMC11060417] [DOI: 10.1007/s11548-023-02873-7]
Abstract
PURPOSE Robotic assistance in otologic surgery can reduce the task load of operating surgeons during the removal of bone around critical structures in the lateral skull base. However, safe deployment into the anatomical passageways necessitates the development of advanced sensing capabilities to actively limit the interaction forces between the surgical tools and critical anatomy. METHODS We introduce a surgical drill equipped with a force sensor that is capable of measuring accurate tool-tissue interaction forces to enable force control and feedback to surgeons. The design, calibration, and validation of the force-sensing surgical drill mounted on a cooperatively controlled surgical robot are described in this work. RESULTS The force measurements at the tip of the surgical drill are validated with raw-egg drilling experiments, where a force sensor mounted below the egg serves as ground truth. The average root mean square error for point and path drilling experiments is 41.7 (±12.2) mN and 48.3 (±13.7) mN, respectively. CONCLUSION The force-sensing prototype measures forces with sub-millinewton resolution, and the results demonstrate that the calibrated force-sensing drill generates accurate force measurements with minimal error compared to the measured drill forces. The development of such sensing capabilities is crucial for the safe use of robotic systems in a clinical context.
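A common way to turn raw sensor channels into tool-tip forces is a least-squares calibration against known applied loads, with RMSE quantifying the residual error. The sketch below is generic and is not the paper's calibration procedure.

```python
import numpy as np

def calibrate(readings: np.ndarray, forces: np.ndarray) -> np.ndarray:
    """Fit a linear map (with bias) from raw channels (N, k) to forces (N, 3)."""
    A = np.hstack([readings, np.ones((len(readings), 1))])
    C, *_ = np.linalg.lstsq(A, forces, rcond=None)
    return C  # shape (k + 1, 3)

def predict(C: np.ndarray, reading: np.ndarray) -> np.ndarray:
    """Apply the calibration to one raw reading of k channels."""
    return np.append(reading, 1.0) @ C

def rmse(pred: np.ndarray, truth: np.ndarray) -> float:
    """Root mean square error between predicted and reference force vectors."""
    return float(np.sqrt(np.mean(np.sum((pred - truth) ** 2, axis=1))))
```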
Affiliation(s)
- Yuxin Chen
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Anna Goodridge
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Manish Sahu
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Aditi Kishore
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Seena Vafaee
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Harsha Mohan
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Katherina Sapozhnikov
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Francis X Creighton
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
  - Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Russell H Taylor
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
  - Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Deepa Galaiya
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
  - Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
14
Phalen H, Munawar A, Jain A, Taylor RH, Armand M. Platform for investigating continuum manipulator behavior in orthopedics. Int J Comput Assist Radiol Surg 2023; 18:1329-1334. [PMID: 37162733] [PMCID: PMC10986430] [DOI: 10.1007/s11548-023-02945-8]
Abstract
PURPOSE The use of robotic continuum manipulators has been proposed to facilitate less-invasive orthopedic surgical procedures. While tools and strategies have been developed, critical challenges such as system control and intra-operative guidance remain under-addressed. Simulation tools can help solve these challenges, but several gaps limit their utility for orthopedic surgical systems, particularly those with continuum manipulators. Herein, a simulation platform that addresses these gaps is presented as a tool to better understand and solve challenges in minimally invasive orthopedic procedures. METHODS An open-source surgical simulation software package was developed in which a continuum manipulator can interact with any volumetric model, for example to drill bone volumes segmented from a 3D computed tomography (CT) image. Paired simulated X-ray images of the scene can also be generated. Compared to previous work, tool-anatomy interactions use a physics-based approach, which leads to more stable behavior and wider procedure applicability. A new method for representing low-level volumetric drilling behavior is also introduced to capture material variability within bone as well as patient-specific properties from a CT. RESULTS Interactions observed between a continuum manipulator and phantom bone were reproduced between the simulated manipulator and volumetric obstacle models. High-level material- and tool-driven behavior was shown to emerge directly from the improved low-level interactions, rather than requiring manual programming. CONCLUSION This platform is a promising tool for developing and investigating control algorithms for tasks such as curved drilling. The generation of simulated X-ray images that correspond to the scene is useful for developing and validating image guidance models. The improvements to volumetric drilling offer users the ability to better tune behavior for specific tools and procedures and enable research to improve surgical simulation model fidelity. This platform will be used to develop and test control algorithms for image-guided curved drilling procedures in the femur.
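The low-level volumetric drilling behavior described above can be pictured as density-dependent voxel erosion: material inside the burr footprint is removed at a rate that falls with the CT-derived density of each voxel. The removal law and parameters below are invented for illustration and are not the platform's actual model.

```python
import numpy as np

def drill_step(volume, densities, tip_ijk, radius_vox, energy):
    """One timestep of naive spherical-burr erosion on a voxel volume.
    volume and densities share a shape; densities come from CT intensities."""
    zi, yi, xi = np.indices(volume.shape)
    mask = ((zi - tip_ijk[0]) ** 2 + (yi - tip_ijk[1]) ** 2
            + (xi - tip_ijk[2]) ** 2) <= radius_vox ** 2
    # Hypothetical law: harder (denser) bone erodes more slowly.
    removal = np.clip(energy / (densities + 1e-6), 0.0, 1.0)
    volume[mask] *= 1.0 - removal[mask]
    return volume
```

A production implementation would restrict the index grid to a bounding box around the tip rather than scanning the whole volume each step.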
Affiliation(s)
- Henry Phalen
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Adnan Munawar
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Amit Jain
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
  - Department of Orthopaedic Surgery, Johns Hopkins School of Medicine, Baltimore, MD, USA
- Russell H Taylor
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Mehran Armand
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
  - Department of Orthopaedic Surgery, Johns Hopkins School of Medicine, Baltimore, MD, USA
15
Hernández I, Soberanis-Mukul R, Mangulabnan JE, Sahu M, Winter J, Vedula S, Ishii M, Hager G, Taylor RH, Unberath M. Investigating keypoint descriptors for camera relocalization in endoscopy surgery. Int J Comput Assist Radiol Surg 2023; 18:1135-1142. [PMID: 37160580] [PMCID: PMC10958396] [DOI: 10.1007/s11548-023-02918-x]
Abstract
PURPOSE Recent advances in computer vision and machine learning have resulted in endoscopic video-based solutions for dense reconstruction of the anatomy. To effectively use these systems in surgical navigation, a reliable image-based technique is required to constantly track the endoscopic camera's position within the anatomy, despite frequent removal and re-insertion. In this work, we investigate the use of recent learning-based keypoint descriptors for six degree-of-freedom camera pose estimation in intraoperative endoscopic sequences and under changes in anatomy due to surgical resection. METHODS Our method employs a dense structure-from-motion (SfM) reconstruction of the preoperative anatomy, obtained with a state-of-the-art patient-specific learning-based descriptor. During the reconstruction step, each estimated 3D point is associated with a descriptor. This information is employed in the intraoperative sequences to establish 2D-3D correspondences for Perspective-n-Point (PnP) camera pose estimation. We evaluate this method on six intraoperative sequences that include anatomical modifications, obtained from two cadaveric subjects. RESULTS This approach led to translation and rotation errors of 3.9 mm and 0.2 radians, respectively, with 21.86% of cameras localized, averaged over the six sequences. In comparison to an additional learning-based descriptor (HardNet++), the selected descriptor achieves a better percentage of localized cameras with similar pose estimation performance. We further discuss potential error causes and limitations of the proposed approach. CONCLUSION Patient-specific learning-based descriptors can relocalize images that are well distributed across the inspected anatomy, even where the anatomy is modified. However, camera relocalization in endoscopic sequences remains a persistently challenging problem, and future research is necessary to increase the robustness and accuracy of this technique.
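The 2D-3D correspondence and PnP steps in METHODS can be sketched with OpenCV. Brute-force descriptor matching and RANSAC PnP below are generic stand-ins; the paper's patient-specific learned descriptors would supply the matcher's inputs.

```python
import cv2
import numpy as np

def relocalize(pts3d, desc3d, kp2d, desc2d, K):
    """Match intraoperative descriptors against the SfM map, then estimate
    the 6-DoF camera pose with RANSAC PnP. pts3d: (M, 3) map points with
    descriptors desc3d: (M, D); kp2d: (N, 2) keypoints with desc2d: (N, D)."""
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(desc2d.astype(np.float32), desc3d.astype(np.float32))
    obj = np.float32([pts3d[m.trainIdx] for m in matches])
    img = np.float32([kp2d[m.queryIdx] for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None,
                                                 reprojectionError=4.0)
    return (rvec, tvec) if ok else None
```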
Affiliation(s)
- Manish Sahu
  - Johns Hopkins University, Baltimore, MD, 21211, USA
- Jonas Winter
  - Johns Hopkins University, Baltimore, MD, 21211, USA
- Masaru Ishii
  - Johns Hopkins Medical Institutions, Baltimore, MD, 21287, USA
- Russell H Taylor
  - Johns Hopkins University, Baltimore, MD, 21211, USA
  - Johns Hopkins Medical Institutions, Baltimore, MD, 21287, USA
- Mathias Unberath
  - Johns Hopkins University, Baltimore, MD, 21211, USA
  - Johns Hopkins Medical Institutions, Baltimore, MD, 21287, USA
16
Cho SM, Grupp RB, Gomez C, Gupta I, Armand M, Osgood G, Taylor RH, Unberath M. Visualization in 2D/3D registration matters for assuring technology-assisted image-guided surgery. Int J Comput Assist Radiol Surg 2023; 18:1017-1024. [PMID: 37079247] [PMCID: PMC10986429] [DOI: 10.1007/s11548-023-02888-0]
Abstract
PURPOSE Image-guided navigation and surgical robotics are the next frontiers of minimally invasive surgery. Assuring safety in high-stakes clinical environments is critical for their deployment. 2D/3D registration is an essential, enabling algorithm for most of these systems, as it provides spatial alignment of preoperative data with intraoperative images. While these algorithms have been studied widely, there is a need for verification methods that enable human stakeholders to assess and either approve or reject registration results to ensure safe operation. METHODS To address the verification problem from the perspective of human perception, we develop novel visualization paradigms and use a sampling method based on an approximate posterior distribution to simulate registration offsets. We then conduct a user study with 22 participants to investigate how different visualization paradigms (Neutral, Attention-Guiding, Correspondence-Suggesting) affect human performance in evaluating simulated 2D/3D registration results using 12 pelvic fluoroscopy images. RESULTS All three visualization paradigms allow users to perform better than random guessing at differentiating between offsets of varying magnitude. The novel paradigms show better performance than the neutral paradigm when using an absolute threshold to differentiate acceptable and unacceptable registrations (highest accuracy: Correspondence-Suggesting (65.1%); highest F1 score: Attention-Guiding (65.7%)), as well as when using a paradigm-specific threshold for the same discrimination (highest accuracy: Attention-Guiding (70.4%); highest F1 score: Correspondence-Suggesting (65.0%)). CONCLUSION This study demonstrates that visualization paradigms do affect human assessment of 2D/3D registration errors. However, further exploration is needed to understand this effect better and to develop more effective methods to assure accuracy. This research serves as a crucial step toward enhanced surgical autonomy and safety assurance in technology-assisted image-guided surgery.
Affiliation(s)
- Sue Min Cho
  - Johns Hopkins University, Baltimore, MD, USA
- Iris Gupta
  - Johns Hopkins University, Baltimore, MD, USA
- Mehran Armand
  - Johns Hopkins University, Baltimore, MD, USA
  - Johns Hopkins School of Medicine, Baltimore, MD, USA
- Greg Osgood
  - Johns Hopkins School of Medicine, Baltimore, MD, USA
- Russell H Taylor
  - Johns Hopkins University, Baltimore, MD, USA
  - Johns Hopkins School of Medicine, Baltimore, MD, USA
- Mathias Unberath
  - Johns Hopkins University, Baltimore, MD, USA
  - Johns Hopkins School of Medicine, Baltimore, MD, USA
17
Shu H, Liang R, Li Z, Goodridge A, Zhang X, Ding H, Nagururu N, Sahu M, Creighton FX, Taylor RH, Munawar A, Unberath M. Twin-S: a digital twin for skull base surgery. Int J Comput Assist Radiol Surg 2023; 18:1077-1084. [PMID: 37160583] [DOI: 10.1007/s11548-023-02863-9]
Abstract
PURPOSE Digital twins are virtual replicas of real-world objects and processes, and they have potential applications in the field of surgical procedures, such as enhancing situational awareness. We introduce Twin-S, a digital twin framework designed specifically for skull-base surgeries. METHODS Twin-S is a novel framework that combines high-precision optical tracking and real-time simulation, making it possible to integrate it into image-guided interventions. To guarantee accurate representation, Twin-S employs calibration routines to ensure that the virtual model precisely reflects all real-world processes. Twin-S models and tracks key elements of skull-base surgery, including surgical tools, patient anatomy, and surgical cameras. Importantly, Twin-S mirrors real-world drilling and updates the virtual model at a frame rate of 28 Hz. RESULTS Our evaluation of Twin-S demonstrates its accuracy, with an average error of 1.39 mm during the drilling process. Our study also highlights the benefits of Twin-S, such as its ability to provide augmented surgical views derived from the continuously updated virtual model, thus offering additional situational awareness to the surgeon. CONCLUSION We present Twin-S, a digital twin environment for skull-base surgery. Twin-S captures real-world surgical progress and updates the virtual model in real time through the use of modern tracking technologies. Future research that integrates vision-based techniques could further increase the accuracy of Twin-S.
Affiliation(s)
- Ruixing Liang
  - Johns Hopkins University, Baltimore, MD, USA
  - Johns Hopkins Medicine, Baltimore, MD, USA
- Zhaoshuo Li
  - Johns Hopkins University, Baltimore, MD, USA
- Hao Ding
  - Johns Hopkins University, Baltimore, MD, USA
- Manish Sahu
  - Johns Hopkins University, Baltimore, MD, USA
18
Ebrahimi A, Sefati S, Gehlbach P, Taylor RH, Iordachita I. Simultaneous Online Registration-Independent Stiffness Identification and Tip Localization of Surgical Instruments in Robot-assisted Eye Surgery. IEEE Trans Robot 2023; 39:1373-1387. [PMID: 37377922] [PMCID: PMC10292740] [DOI: 10.1109/tro.2022.3201393]
Abstract
Notable challenges during retinal surgery lend themselves to robotic assistance, which has proven beneficial in providing safe, steady-hand manipulation. Efficient assistance from the robots relies heavily on accurate sensing of surgical states (e.g., instrument tip localization and tool-to-tissue interaction forces). Many of the existing tool tip localization methods require preoperative frame registrations or instrument calibrations. In this study, using an iterative approach and by combining vision- and force-based methods, we develop calibration- and registration-independent (RI) algorithms to provide online estimates of instrument stiffness (least squares and adaptive). The estimates are then combined with a state-space model based on the forward kinematics (FWK) of the Steady-Hand Eye Robot (SHER) and Fiber Bragg Grating (FBG) sensor measurements. This is accomplished using a Kalman filtering (KF) approach to improve the deflected instrument tip position estimates during robot-assisted eye surgery. The conducted experiments demonstrate that when the online RI stiffness estimates are used, the instrument tip localization results surpass those obtained from preoperative offline calibrations for stiffness.
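The fusion step can be pictured as a small Kalman filter whose prediction comes from the robot's forward kinematics and whose correction comes from the stiffness/FBG-derived tip estimate. This constant-position sketch is far simpler than the paper's state-space model and is included only for intuition.

```python
import numpy as np

class TipKF:
    """Minimal Kalman filter for a 3D tip position (identity dynamics)."""

    def __init__(self, q=1e-4, r=1e-2):
        self.x = np.zeros(3)        # tip position estimate
        self.P = np.eye(3)          # estimate covariance
        self.Q = q * np.eye(3)      # process noise (FWK increment uncertainty)
        self.R = r * np.eye(3)      # measurement noise (sensor-side uncertainty)

    def predict(self, delta_fwk):
        """Propagate with the incremental motion reported by forward kinematics."""
        self.x = self.x + delta_fwk
        self.P = self.P + self.Q

    def update(self, z):
        """Correct with a deflection-aware tip measurement."""
        K = self.P @ np.linalg.inv(self.P + self.R)
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(3) - K) @ self.P
```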
Affiliation(s)
- Ali Ebrahimi
  - Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, 21218, USA
- Shahriar Sefati
  - Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, 21218, USA
- Peter Gehlbach
  - Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD, 21287, USA
- Russell H Taylor
  - Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, 21218, USA
  - Department of Computer Science and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, 21218, USA
- Iulian Iordachita
  - Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, 21218, USA
Collapse
|
19
|
Wang Y, Kwok KW, Cleary K, Taylor RH, Iordachita I. Flexible Needle Bending Model for Spinal Injection Procedures. IEEE Robot Autom Lett 2023; 8:1343-1350. [PMID: 37637101] [PMCID: PMC10448781] [DOI: 10.1109/lra.2023.3239310]
Abstract
An in situ needle manipulation technique used by physicians when performing spinal injections is modeled to study its effect on needle shape and needle tip position. A mechanics-based model is proposed and solved using the finite element method. A test setup is presented to mimic the needle manipulation motion. Tissue phantoms made from plastisol, as well as porcine skeletal muscle samples, are used to evaluate the model's accuracy against medical images. The effects of different compression models and model parameters on model accuracy are studied, and the effect of needle-tissue interaction on the needle's remote center of motion is examined. With the correct combination of compression model and model parameters, the model simulation is able to predict needle tip position with submillimeter accuracy.
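The paper solves a full finite element model; as a far simpler stand-in for the same mechanics, the Euler-Bernoulli cantilever formula below gives the small-deflection tip displacement of a hollow needle under a lateral tip load. All material and geometry values are illustrative, not taken from the paper.

import math

def needle_tip_deflection(force_N, length_m, E_Pa, d_outer_m, d_inner_m):
    """Euler-Bernoulli tip deflection: delta = F * L**3 / (3 * E * I),
    with I the area moment of a hollow circular cross-section."""
    I = math.pi / 64.0 * (d_outer_m ** 4 - d_inner_m ** 4)
    return force_N * length_m ** 3 / (3.0 * E_Pa * I)

# Example with illustrative values for a thin stainless steel needle:
# 0.1 N lateral tip load, 90 mm free length, E = 200 GPa.
print(needle_tip_deflection(0.1, 0.09, 200e9, 0.7e-3, 0.4e-3))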
Affiliation(s)
- Yanzhou Wang: Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
- Ka-Wai Kwok: Department of Mechanical Engineering, The University of Hong Kong, Hong Kong, China
- Kevin Cleary: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- Russell H Taylor: Department of Computer Science and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
- Iulian Iordachita: Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA

20
Li AZ, Khan M, Nguyen NT, Breitman L, Luca J, Van Doren E, Gia Kieu Ngan N, Thị Hoàng Yến N, Dang K, Tan Tai T, Taylor RH. Huế dental students' use and perception of an online dental learning platform: A pilot study. J Dent Educ 2023; 87:401-407. [PMID: 36377379] [DOI: 10.1002/jdd.13121]
Abstract
PURPOSE/OBJECTIVES Online educational materials are growing in use, and dental students worldwide can benefit from higher-quality and more accessible online supplemental resources. This study was designed to evaluate the learning resources that non-English-speaking dental students desire and to pilot My Dental Key (MDK), an English-language, evidence-based online dental education platform. METHODS Third- to sixth-year dental students at the Huế University of Medicine and Pharmacy were asked to pilot MDK over a 5-week period and were invited to answer three surveys throughout the study. A preliminary survey was given to gauge the participants' (n = 209) preferences regarding the use of English-based dental educational resources. Participants (n = 58) completed a presurvey prior to accessing MDK. After the 5-week period, participants (n = 38) were given a postsurvey to evaluate the platform's effectiveness as a supplemental educational resource. RESULTS Overall, we found that: (1) students desire credible online supplemental resources in addition to the current resources provided by their school, (2) the multimodal content that MDK provides is a strength that bridges language barriers, and (3) participants perceived that the content on MDK would help them in class and when treating patients. CONCLUSIONS Improving the quality of online supplemental dental resources has the potential to advance the current educational landscape, and further resources should be created to best serve the global dental community.
Affiliation(s)
- Alice Z Li: Harvard School of Dental Medicine, Boston, Massachusetts, USA
- Mariam Khan: Tufts University School of Dental Medicine, Boston, Massachusetts, USA
- Nicholas T Nguyen: School of Dentistry in Baltimore, University of Maryland, College Park, Maryland, USA
- Leela Breitman: UNC Adams School of Dentistry, Chapel Hill, North Carolina, USA
- Jennifer Luca: The Ohio State University College of Dentistry, Columbus, Ohio, USA
- Emily Van Doren: Harvard School of Dental Medicine, Boston, Massachusetts, USA
- Nguyen Gia Kieu Ngan: Faculty of Odonto-Stomatology, Huế University of Medicine and Pharmacy, Huế, Vietnam
- Khoa Dang: Huế University of Medicine and Pharmacy, Huế, Vietnam
- Tran Tan Tai: Faculty of Odonto-Stomatology, Huế University of Medicine and Pharmacy, Huế, Vietnam
- Russell H Taylor: Faculty at Harvard School of Dental Medicine, Brookline, Massachusetts, USA

21
Song H, Moradi H, Jiang B, Xu K, Wu Y, Taylor RH, Deguet A, Kang JU, Salcudean SE, Boctor EM. Real-time intraoperative surgical guidance system in the da Vinci surgical robot based on transrectal ultrasound/photoacoustic imaging with photoacoustic markers: an ex vivo demonstration. IEEE Robot Autom Lett 2023; 8:1287-1294. [PMID: 37997605] [PMCID: PMC10664816] [DOI: 10.1109/lra.2022.3191788]
Abstract
This paper introduces the first integrated real-time intraoperative surgical guidance system in which an endoscope camera of the da Vinci surgical robot and a transrectal ultrasound (TRUS) transducer are co-registered using photoacoustic markers detected in both fluorescence (FL) and photoacoustic (PA) imaging. The co-registered system enables the TRUS transducer to track the laser spot illuminated by a pulsed laser diode attached to the surgical instrument, providing both FL and PA images of the surgical region of interest (ROI). As a result, the generated photoacoustic marker is visualized and localized in the da Vinci endoscopic FL images, and the corresponding tracking can be conducted by rotating the TRUS transducer to display the PA image of the marker. A quantitative evaluation revealed average registration and tracking errors of 0.84 mm and 1.16°, respectively. This study shows that co-registered photoacoustic marker tracking can be effectively deployed intraoperatively using TRUS+PA imaging, providing functional guidance of the surgical ROI.
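Registration and tracking errors of this kind are typically quantified as a translation norm plus the angle of the relative rotation between estimated and ground-truth transforms; the generic sketch below computes both (it is not the authors' exact evaluation protocol).

import numpy as np

def pose_error(T_est, T_gt):
    """Return (translation error, rotation error in degrees) between two
    4x4 homogeneous transforms."""
    dt = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return dt, np.degrees(np.arccos(cos_theta))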
Affiliation(s)
- Hyunwoo Song: Department of Computer Science, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Hamid Moradi: Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Baichuan Jiang: Department of Computer Science, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Keshuai Xu: Department of Computer Science, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Yixuan Wu: Department of Computer Science, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Russell H Taylor: Department of Computer Science, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Anton Deguet: Department of Computer Science, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Jin U Kang: Department of Electrical and Computer Engineering, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21211, USA
- Septimiu E Salcudean: Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Emad M Boctor: Department of Computer Science, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA

22
Gao C, Killeen BD, Hu Y, Grupp RB, Taylor RH, Armand M, Unberath M. Synthetic data accelerates the development of generalizable learning-based algorithms for X-ray image analysis. NAT MACH INTELL 2023; 5:294-308. [PMID: 38523605] [PMCID: PMC10959504] [DOI: 10.1038/s42256-023-00629-1]
Abstract
Artificial intelligence (AI) now enables automated interpretation of medical images. However, AI's potential use for interventional image analysis remains largely untapped. This is because the post hoc analysis of data collected during live procedures has fundamental and practical limitations, including ethical considerations, expense, scalability, data integrity and a lack of ground truth. Here we demonstrate that creating realistic simulated images from human models is a viable alternative and complement to large-scale in situ data collection. We show that training AI image analysis models on realistically synthesized data, combined with contemporary domain generalization techniques, results in machine learning models that on real data perform comparably to models trained on a precisely matched real data training set. We find that our model transfer paradigm for X-ray image analysis, which we refer to as SyntheX, can even outperform real-data-trained models due to the effectiveness of training on a larger dataset. SyntheX provides an opportunity to markedly accelerate the conception, design and evaluation of X-ray-based intelligent systems. In addition, SyntheX provides the opportunity to test novel instrumentation, design complementary surgical approaches, and envision novel techniques that improve outcomes, save time or mitigate human error, free from the ethical and practical considerations of live human data collection.
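One ingredient of this model-transfer paradigm is heavy appearance randomization of the synthesized images so that networks do not overfit simulator-specific intensity statistics. The sketch below shows the flavor of such randomization on a synthetic radiograph; the specific transforms and parameter ranges are illustrative assumptions, not the SyntheX recipe.

import numpy as np

rng = np.random.default_rng(0)

def randomize_xray(img):
    """img: synthetic radiograph as a float array in [0, 1]. Returns an
    appearance-randomized copy (gamma, gain, and noise perturbations)."""
    out = img ** rng.uniform(0.7, 1.5)             # random gamma
    out = out * rng.uniform(0.8, 1.2)              # random gain
    out = out + rng.normal(0.0, 0.02, img.shape)   # sensor-like noise
    return np.clip(out, 0.0, 1.0)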
Affiliation(s)
- Cong Gao: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Benjamin D. Killeen: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Yicheng Hu: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Robert B. Grupp: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Russell H. Taylor: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Mehran Armand: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA; Department of Orthopaedic Surgery, Johns Hopkins Applied Physics Laboratory, Baltimore, MD, USA
- Mathias Unberath: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA

23
Bakhtiarinejad M, Gao C, Farvardin A, Zhu G, Wang Y, Oni JK, Taylor RH, Armand M. A Surgical Robotic System for Osteoporotic Hip Augmentation: System Development and Experimental Evaluation. IEEE Trans Med Robot Bionics 2023; 5:18-29. [PMID: 37213937] [PMCID: PMC10195101] [DOI: 10.1109/tmrb.2023.3241589]
Abstract
Minimally invasive osteoporotic hip augmentation (OHA) by injection of bone cement is a potential treatment option to reduce the risk of hip fracture. This treatment can benefit significantly from a computer-assisted planning and execution system that optimizes the pattern of cement injection. We present a novel robotic system for the execution of OHA, consisting of a 6-DOF robotic arm with an integrated drilling and injection component. The minimally invasive procedure is performed by registering the robot and preoperative images to the surgical scene using multiview image-based 2D/3D registration with no external fiducials attached to the body. The performance of the system is evaluated through experimental sawbone studies as well as cadaveric experiments with intact soft tissues. In the cadaver experiments, distance errors of 3.28 mm and 2.64 mm for entry and target points and an orientation error of 2.30° were measured. Moreover, a mean surface distance error of 2.13 mm with a translational error of 4.47 mm was observed between injected and planned cement profiles. The experimental results demonstrate the first application of the proposed Robot-Assisted combined Drilling and Injection System (RADIS), incorporating biomechanical planning and intraoperative fiducial-less 2D/3D registration, on human cadavers with intact soft tissues.
Affiliation(s)
- Mahsan Bakhtiarinejad: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Cong Gao: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Amirhossein Farvardin: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Gang Zhu: Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Yu Wang: Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Julius K Oni: Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, MD 21287, USA
- Russell H Taylor: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Mehran Armand: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Orthopaedic Surgery, Johns Hopkins University, Baltimore, MD 21287, USA

24
Killeen BD, Winter J, Gu W, Martin-Gomez A, Taylor RH, Osgood G, Unberath M. Mixed Reality Interfaces for Achieving Desired Views with Robotic X-ray Systems. Comput Methods Biomech Biomed Eng Imaging Vis 2022; 11:1130-1135. [PMID: 37555199] [PMCID: PMC10406465] [DOI: 10.1080/21681163.2022.2154272]
Abstract
Robotic X-ray C-arm imaging systems can precisely achieve any position and orientation relative to the patient. However, informing the system exactly which pose corresponds to a desired view is challenging. Currently, these systems are operated by the surgeon using joysticks, but this interaction paradigm is not necessarily effective because users may be unable to efficiently actuate more than a single axis of the system simultaneously. Moreover, novel robotic imaging systems, such as the Brainlab Loop-X, allow for independent source and detector movements, adding even more complexity. To address this challenge, we consider complementary interfaces for the surgeon to command robotic X-ray systems effectively. Specifically, we consider three interaction paradigms: (1) the use of a pointer to specify the principal ray of the desired view relative to the anatomy; (2) the same pointer, but combined with a mixed reality environment to synchronously render digitally reconstructed radiographs from the tool's pose; and (3) the same mixed reality environment, but with a virtual X-ray source instead of the pointer. Initial human-in-the-loop evaluation with an attending trauma surgeon indicates that mixed reality interfaces for robotic X-ray system control are promising and may contribute to substantially reducing the number of X-ray images acquired solely during "fluoro hunting" for the desired view or standard plane.
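Paradigm (1) reduces to turning a pointer-defined principal ray into a device orientation. A minimal way to do that, assuming an idealized C-arm whose viewing axis can be commanded directly (real systems add kinematic limits and source/detector offsets), is to build a rotation whose z-axis matches the desired ray:

import numpy as np

def orientation_from_principal_ray(ray):
    """Return a 3x3 rotation whose z-axis is the normalized desired
    source-to-detector direction."""
    z = np.asarray(ray, float)
    z = z / np.linalg.norm(z)
    up = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(up, z)) > 0.99:   # avoid a degenerate cross product
        up = np.array([0.0, 1.0, 0.0])
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])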
Affiliation(s)
- Benjamin D Killeen: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Jonas Winter: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Wenhao Gu: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Alejandro Martin-Gomez: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Russell H Taylor: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Greg Osgood: Department of Orthopaedic Surgery, Johns Hopkins Hospital, Baltimore, MD, USA
- Mathias Unberath: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA

25
Gao C, Phalen H, Margalit A, Ma JH, Ku PC, Unberath M, Taylor RH, Jain A, Armand M. Fluoroscopy-Guided Robotic System for Transforaminal Lumbar Epidural Injections. IEEE Trans Med Robot Bionics 2022; 4:901-909. [PMID: 37790985] [PMCID: PMC10544812] [DOI: 10.1109/tmrb.2022.3196321]
Abstract
We present an autonomous robotic spine needle injection system using fluoroscopic image-based navigation. Our system includes patient-specific planning, intraoperative image-based 2D/3D registration and navigation, and automatic robot-guided needle injection. We performed intensive simulation studies to validate the registration accuracy, achieving a mean spine vertebra registration error of 0.8 ± 0.3 mm and 0.9 ± 0.7°, and a mean injection device registration error of 0.2 ± 0.6 mm and 1.2 ± 1.3°, in translation and rotation, respectively. We then conducted cadaveric studies comparing our system to an experienced clinician's free-hand injections. We achieved a mean needle tip translational error of 5.1 ± 2.4 mm and a needle orientation error of 3.6 ± 1.9° for robotic injections, compared to 7.6 ± 2.8 mm and 9.9 ± 4.7° for the clinician's free-hand injections. During injections, all needle tips were placed within the defined safety zones for this application. The results suggest the feasibility of using our image-guided robotic injection system for spinal orthopedic applications.
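The placement metrics above can be computed as a point distance for the tip and the angle between planned and achieved needle axes; the following generic sketch shows one such evaluation (not necessarily the authors' exact code).

import numpy as np

def needle_errors(tip_planned, tip_actual, axis_planned, axis_actual):
    """Return (tip error, orientation error in degrees)."""
    tip_err = np.linalg.norm(np.asarray(tip_actual, float) -
                             np.asarray(tip_planned, float))
    a = np.asarray(axis_planned, float)
    b = np.asarray(axis_actual, float)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    ang_err = np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
    return tip_err, ang_err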
Affiliation(s)
- Cong Gao: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21211, USA
- Henry Phalen: Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21211, USA
- Adam Margalit: Department of Orthopaedic Surgery, Baltimore, MD 21224, USA
- Justin H Ma: Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21211, USA
- Ping-Cheng Ku: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21211, USA
- Mathias Unberath: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21211, USA
- Russell H Taylor: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21211, USA
- Amit Jain: Department of Orthopaedic Surgery, Baltimore, MD 21224, USA
- Mehran Armand: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21211, USA; Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21211, USA; Department of Orthopaedic Surgery, Baltimore, MD 21224, USA; Johns Hopkins Applied Physics Laboratory, Baltimore, MD 21224, USA

26
Ding AS, Lu A, Li Z, Galaiya D, Ishii M, Siewerdsen JH, Taylor RH, Creighton FX. Automated Extraction of Anatomical Measurements From Temporal Bone CT Imaging. Otolaryngol Head Neck Surg 2022; 167:731-738. [PMID: 35133916] [PMCID: PMC9357851] [DOI: 10.1177/01945998221076801]
Abstract
OBJECTIVE Proposed methods for minimally invasive and robot-assisted procedures within the temporal bone require measurements of surgically relevant distances and angles, which often require time-consuming manual segmentation of preoperative imaging. This study describes an automatic segmentation and measurement-extraction pipeline for temporal bone cone-beam computed tomography (CT) scans. STUDY DESIGN Descriptive study of temporal bone measurements. SETTING Academic institution. METHODS A propagation template composed of 16 temporal bone CT scans was formed, with relevant anatomical structures and landmarks manually segmented. Next, 52 temporal bone CT scans were autonomously segmented using deformable registration techniques from the Advanced Normalization Tools Python package. Anatomical measurements were extracted via in-house Python algorithms and compared to ground-truth values from manual segmentations. RESULTS Paired t test analyses showed no statistically significant difference between measurements from this pipeline and ground-truth measurements from manually segmented images. Mean (SD) malleus manubrium length was 4.39 (0.34) mm. Mean (SD) incus short and long processes were 2.91 (0.18) mm and 3.53 (0.38) mm, respectively. The mean (SD) maximal diameter of the incus long process was 0.74 (0.17) mm. The first and second genua of the facial nerve had mean (SD) angles of 68.6 (6.7) degrees and 111.1 (5.3) degrees, respectively. The facial recess had a mean (SD) span of 3.21 (0.46) mm. The mean (SD) minimum distance between the external auditory canal and the tegmen was 3.79 (1.05) mm. CONCLUSIONS This is the first study to automatically extract relevant temporal bone anatomical measurements from CT scans using segmentation propagation. Measurements from these models can streamline preoperative planning, improve future segmentation techniques, and help develop future image-guided or robot-assisted systems for temporal bone procedures.
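The segmentation-propagation step can be reproduced in outline with the ANTsPy bindings of the Advanced Normalization Tools package cited in the abstract. A minimal sketch follows; the file names are placeholders, and "SyN" selects ANTs' symmetric normalization transform.

import ants

template = ants.image_read("template_ct.nii.gz")          # placeholder paths
template_labels = ants.image_read("template_labels.nii.gz")
target = ants.image_read("patient_ct.nii.gz")

# Deformable registration of the template to the patient scan (SyN).
reg = ants.registration(fixed=target, moving=template, type_of_transform="SyN")

# Propagate the template's labels through the recovered deformation.
propagated = ants.apply_transforms(fixed=target, moving=template_labels,
                                   transformlist=reg["fwdtransforms"],
                                   interpolator="nearestNeighbor")
ants.image_write(propagated, "patient_labels.nii.gz")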
Affiliation(s)
- Andy S. Ding: Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Alexander Lu: Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Zhaoshuo Li: Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Deepa Galaiya: Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Masaru Ishii: Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Jeffrey H. Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA; Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Russell H. Taylor: Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X. Creighton: Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA

27
Wu Y, Kang J, Lesniak WG, Lisok A, Zhang HK, Taylor RH, Pomper MG, Boctor EM. System-level optimization in spectroscopic photoacoustic imaging of prostate cancer. Photoacoustics 2022; 27:100378. [PMID: 36068804] [PMCID: PMC9441267] [DOI: 10.1016/j.pacs.2022.100378]
28
Patel N, Urias M, Ebrahimi A, Taylor RH, Gehlbach P, Iordachita I. Force-based Control for Safe Robot-assisted Retinal Interventions: In Vivo Evaluation in Animal Studies. IEEE Trans Med Robot Bionics 2022; 4:578-587. [PMID: 36033345] [PMCID: PMC9410268] [DOI: 10.1109/tmrb.2022.3191441]
Abstract
In recent years, robotic assistance in vitreoretinal surgery has moved from the benchtop to the operating room. Emerging robotic systems improve tool maneuverability, provide precise tool motions in the constrained intraocular environment, and reduce or remove hand tremor. However, often due to their stiff and bulky mechanical structure, they diminish the perception of tool-to-sclera (scleral) forces on which the surgeon relies for eyeball manipulation. In this paper, we measure these scleral forces and actively control the robot to keep them under a predefined threshold. Scleral forces are measured using a Fiber Bragg Grating (FBG)-based force-sensing instrument in an in vivo rabbit eye model under manual operation, cooperative robotic assistance with no scleral force control (NC), adaptive scleral force norm control (ANC), and adaptive scleral force component control (ACC). To the best of our knowledge, this is the first time that scleral forces have been measured in an in vivo eye model during robot-assisted vitreoretinal procedures. An experienced retinal surgeon repeated an intraocular tool manipulation (ITM) task 10 times in four in vivo rabbit eyes and a phantom eyeball, for a total of 50 repetitions in each control mode. Statistical analysis shows that during the in vivo studies the ANC and ACC control schemes restrict the duration of undesired scleral forces to 4.41% and 14.53%, compared with 43.30% and 35.28% in the manual and NC cases, respectively. These results show that the active robot control schemes can maintain applied scleral forces below a desired threshold during robot-assisted vitreoretinal surgery. The scleral force measurements in this study may enable a better understanding of tool-to-sclera interactions during vitreoretinal surgery, and the proposed control strategies could be extended to other microsurgical and robot-assisted interventions.
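A minimal sketch of the idea behind component-wise scleral force limiting is shown below: an admittance-style velocity command is attenuated along directions where the measured force approaches a safety threshold. Both the structure and the 0.12 N threshold are hypothetical illustrations, not the paper's ACC or ANC controllers.

import numpy as np

def limited_velocity(v_cmd, f_scleral, f_max=0.12):
    """v_cmd: desired tool velocity (3,); f_scleral: measured scleral force
    components in N. Attenuates motion along axes where the force nears
    the (hypothetical) f_max safety threshold."""
    scale = np.clip(1.0 - np.abs(np.asarray(f_scleral, float)) / f_max, 0.0, 1.0)
    return np.asarray(v_cmd, float) * scale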
Affiliation(s)
- Niravkumar Patel: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA; Indian Institute of Technology Madras, Chennai, India
- Muller Urias: Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287, USA
- Ali Ebrahimi: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Russell H Taylor: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Peter Gehlbach: Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287, USA
- Iulian Iordachita: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA

29
Connolly L, Deguet A, Leonard S, Tokuda J, Ungi T, Krieger A, Kazanzides P, Mousavi P, Fichtinger G, Taylor RH. Bridging 3D Slicer and ROS2 for Image-Guided Robotic Interventions. Sensors (Basel) 2022; 22:5336. [PMID: 35891016] [PMCID: PMC9324680] [DOI: 10.3390/s22145336]
Abstract
Developing image-guided robotic systems requires access to flexible, open-source software. For image guidance, the open-source medical imaging platform 3D Slicer is one of the most widely adopted tools for research and prototyping. Similarly, for robotics, the open-source middleware suite Robot Operating System (ROS) is the standard development framework. In the past, several "ad hoc" attempts have been made to bridge the two tools; however, they all rely on middleware and custom interfaces, and none has succeeded in bridging access to the full suite of tools provided by either ROS or 3D Slicer. Therefore, in this paper, we present the SlicerROS2 module, which was designed for the direct use of ROS2 packages and libraries within 3D Slicer. The module was developed to enable real-time visualization of robots, accommodate different robot configurations, and facilitate data transfer in both directions (between ROS and Slicer). We demonstrate the system on multiple robots with different configurations, evaluate the system performance, and discuss an image-guided robotic intervention that can be prototyped with this module. This module can serve as a starting point for clinical system development that reduces the need for custom interfaces and time-intensive platform setup.
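On the ROS2 side, the kind of stream such a bridge visualizes can be produced with a few lines of rclpy. The minimal node below publishes sensor_msgs/JointState messages, which a Slicer-side robot model could consume; the topic name and joint values are illustrative, not part of the SlicerROS2 API.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

class JointPublisher(Node):
    def __init__(self):
        super().__init__("demo_joint_publisher")
        self.pub = self.create_publisher(JointState, "joint_states", 10)
        self.timer = self.create_timer(0.05, self.tick)  # 20 Hz
        self.t = 0.0

    def tick(self):
        msg = JointState()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.name = ["joint_1", "joint_2"]
        msg.position = [0.1 * self.t, -0.05 * self.t]
        self.pub.publish(msg)
        self.t += 0.05

def main():
    rclpy.init()
    rclpy.spin(JointPublisher())

if __name__ == "__main__":
    main()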
Affiliation(s)
- Laura Connolly: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
- Anton Deguet: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Simon Leonard: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Tamas Ungi: School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
- Axel Krieger: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Peter Kazanzides: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Parvin Mousavi: School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
- Gabor Fichtinger: School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada
- Russell H. Taylor: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA

30
Abstract
HYPOTHESIS Automated image registration techniques can successfully determine anatomical variation in human temporal bones with statistical shape modeling. BACKGROUND There is a lack of knowledge about inter-patient anatomical variation in the temporal bone. Statistical shape models (SSMs) provide a powerful method for quantifying variation of anatomical structures in medical images but are time-intensive to manually develop. This study presents SSMs of temporal bone anatomy using automated image-registration techniques. METHODS Fifty-three cone-beam temporal bone CTs were included for SSM generation. The malleus, incus, stapes, bony labyrinth, and facial nerve were automatically segmented using 3D Slicer and a template-based segmentation propagation technique. Segmentations were then used to construct SSMs using MATLAB. The first three principal components of each SSM were analyzed to describe shape variation. RESULTS Principal component analysis of middle and inner ear structures revealed novel modes of anatomical variation. The first three principal components for the malleus represented variability in manubrium length (mean: 4.47 mm; ±2-SDs: 4.03-5.03 mm) and rotation about its long axis (±2-SDs: -1.6° to 1.8° posteriorly). The facial nerve exhibits variability in first and second genu angles. The bony labyrinth varies in the angle between the posterior and superior canals (mean: 88.9°; ±2-SDs: 83.7°-95.7°) and cochlear orientation (±2-SDs: -4.0° to 3.0° anterolaterally). CONCLUSIONS SSMs of temporal bone anatomy can inform surgeons on clinically relevant inter-patient variability. Anatomical variation elucidated by these models can provide novel insight into function and pathophysiology. These models also allow further investigation of anatomical variation based on age, BMI, sex, and geographical location.
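The core of such a point-distribution SSM is compact: stack corresponding landmark coordinates across subjects, subtract the mean shape, and take principal components. The sketch below is a generic implementation of that step, not the study's exact MATLAB pipeline.

import numpy as np

def build_ssm(shapes):
    """shapes: (n_subjects, n_points*3) matrix of corresponding landmarks.
    Returns mean shape, principal modes, and per-mode standard deviations."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt                          # rows are modes of variation
    std = s / np.sqrt(len(shapes) - 1)  # mode standard deviations
    return mean, modes, std

def synthesize(mean, modes, std, weights):
    """New plausible shape from mode weights in units of SDs (e.g., +/-2),
    the same convention as the +/-2-SD ranges reported in the abstract."""
    k = len(weights)
    return mean + (np.asarray(weights, float) * std[:k]) @ modes[:k]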
Affiliation(s)
- Andy S. Ding: Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland; Department of Biomedical Engineering, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Alexander Lu: Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland; Department of Biomedical Engineering, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Zhaoshuo Li: Department of Computer Science, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Deepa Galaiya: Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Masaru Ishii: Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Jeffrey H. Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland; Department of Computer Science, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Russell H. Taylor: Department of Computer Science, Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland
- Francis X. Creighton: Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland

31
Ding AS, Lu A, Li Z, Galaiya D, Siewerdsen JH, Taylor RH, Creighton FX. Automated Registration-Based Temporal Bone Computed Tomography Segmentation for Applications in Neurotologic Surgery. Otolaryngol Head Neck Surg 2022; 167:133-140. [PMID: 34491849] [PMCID: PMC10072909] [DOI: 10.1177/01945998211044982]
Abstract
OBJECTIVE This study investigates the accuracy of an automated method to rapidly segment relevant temporal bone anatomy from cone-beam computed tomography (CT) images. Implementation of this segmentation pipeline has the potential to improve surgical safety and decrease operative time by augmenting preoperative planning and interfacing with image-guided robotic surgical systems. STUDY DESIGN Descriptive study of predicted segmentations. SETTING Academic institution. METHODS We have developed a computational pipeline based on the symmetric normalization registration method that predicts segmentations of anatomic structures in temporal bone CT scans using a labeled atlas. To evaluate accuracy, we created a data set by manually labeling relevant anatomic structures (eg, ossicles, labyrinth, facial nerve, external auditory canal, dura) in 16 deidentified high-resolution cone-beam temporal bone CT images. Automated segmentations from this pipeline were compared against ground-truth manual segmentations using modified Hausdorff distances and Dice scores. Runtimes were documented to determine the computational requirements of this method. RESULTS Modified Hausdorff distances and Dice scores between predicted and ground-truth labels were as follows: malleus (0.100 ± 0.054 mm; Dice, 0.827 ± 0.068), incus (0.100 ± 0.033 mm; Dice, 0.837 ± 0.068), stapes (0.157 ± 0.048 mm; Dice, 0.358 ± 0.100), labyrinth (0.169 ± 0.100 mm; Dice, 0.838 ± 0.060), and facial nerve (0.522 ± 0.278 mm; Dice, 0.567 ± 0.130). A quad-core workstation with 16 GB of RAM completed this segmentation pipeline in 10 minutes. CONCLUSIONS We demonstrated submillimeter accuracy for automated segmentation of temporal bone anatomy when compared against hand-segmented ground truth using our template registration pipeline. This method does not depend on the large training data volumes required by many complex deep learning models. Favorable runtime and low computational requirements underscore this method's translational potential.
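Both evaluation metrics used above have simple generic forms: Dice overlap of boolean masks and the Dubuisson-Jain "modified Hausdorff" distance between surface point sets, sketched below (the authors' exact implementation may differ).

import numpy as np
from scipy.spatial import cKDTree

def dice(a, b):
    """a, b: boolean segmentation masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def modified_hausdorff(pts_a, pts_b):
    """Mean nearest-neighbor distance between point sets, symmetrized as the
    max of the two directed averages (Dubuisson-Jain variant)."""
    d_ab = cKDTree(pts_b).query(pts_a)[0].mean()
    d_ba = cKDTree(pts_a).query(pts_b)[0].mean()
    return max(d_ab, d_ba)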
Affiliation(s)
- Andy S Ding: Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Alexander Lu: Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Zhaoshuo Li: Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Deepa Galaiya: Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Jeffrey H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA; Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Russell H Taylor: Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X Creighton: Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA

32
Abstract
Surgical robots have been widely adopted, with over 4000 robots being used in practice daily. However, these are telerobots that are fully controlled by skilled human surgeons. Introducing "surgeon-assist" capabilities (some forms of autonomy) has the potential to reduce tedium and increase consistency, analogous to driver-assist functions for lane keeping, cruise control, and parking. This article examines the scientific and technical background of robotic autonomy in surgery and some of its ethical, social, and legal implications. We describe several autonomous surgical tasks that have been automated in laboratory settings, as well as research concepts and trends.
Affiliation(s)
- Paolo Fiorini: Department of Computer Science, University of Verona, 37134 Verona, Italy
- Ken Y. Goldberg: Department of Industrial Engineering and Operations Research and the Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, CA 94720, USA
- Yunhui Liu: Department of Mechanical and Automation Engineering, T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong, China
- Russell H. Taylor: Department of Computer Science, the Department of Mechanical Engineering, the Department of Radiology, the Department of Surgery, and the Department of Otolaryngology, Head-and-Neck Surgery, Johns Hopkins University, Baltimore, MD 21218, USA; Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA

33
Liu X, Li Z, Ishii M, Hager GD, Taylor RH, Unberath M. SAGE: SLAM with Appearance and Geometry Prior for Endoscopy. IEEE Int Conf Robot Autom 2022; 2022:5587-5593. [PMID: 36937551] [PMCID: PMC10018746] [DOI: 10.1109/icra46639.2022.9812257]
Abstract
In endoscopy, many applications (e.g., surgical navigation) would benefit from a real-time method that can simultaneously track the endoscope and reconstruct the dense 3D geometry of the observed anatomy from a monocular endoscopic video. To this end, we develop a Simultaneous Localization and Mapping (SLAM) system that combines learning-based appearance priors, optimizable geometry priors, and factor graph optimization. The appearance and geometry priors are explicitly learned in an end-to-end differentiable training pipeline to master the task of pairwise image alignment, one of the core components of the SLAM system. In our experiments, the proposed SLAM system is shown to robustly handle the challenges of texture scarceness and illumination variation that are commonly seen in endoscopy. The system generalizes well to unseen endoscopes and subjects and performs favorably compared with a state-of-the-art feature-based SLAM system. The code repository is available at https://github.com/lppllppl920/SAGE-SLAM.git.
Affiliation(s)
- Xingtong Liu: Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287, USA
- Zhaoshuo Li: Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287, USA
- Masaru Ishii: Johns Hopkins Medical Institutions, Baltimore, MD 21224, USA
- Gregory D Hager: Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287, USA
- Russell H Taylor: Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287, USA
- Mathias Unberath: Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287, USA

34
Munawar A, Wu JY, Fischer GS, Taylor RH, Kazanzides P. Open Simulation Environment for Learning and Practice of Robot-Assisted Surgical Suturing. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3146900]
35
Xiao B, Alamdar A, Song K, Ebrahimi A, Gehlbach P, Taylor RH, Iordachita I. Delta Robot Kinematic Calibration for Precise Robot-Assisted Retinal Surgery. Int Symp Med Robot 2022. [PMID: 36129421] [PMCID: PMC9484559] [DOI: 10.1109/ismr48347.2022.9807517]
Abstract
High precision is required for ophthalmic robotic systems. This paper presents the kinematic calibration of the delta robot that is part of the next generation of the Steady-Hand Eye Robot (SHER). A linear error model is derived based on geometric error parameters. Two experiments covering different ranges of the workspace are conducted, with laser sensors measuring displacement. The error parameters are identified and applied in the kinematics to compensate for modeling error. To achieve better accuracy, Bernstein polynomials are adopted to fit the error residuals after compensation. After the kinematic calibration process, the error residuals of the delta robot are reduced to a level that satisfies the clinical requirements.
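The flavor of the two calibration stages can be sketched as follows: the geometric error parameters are identified by stacked linear least squares against measured tip residuals, and the remaining residuals are then fit in a Bernstein basis. The Jacobian and data layouts are placeholders for the paper's delta-robot-specific model.

import numpy as np
from scipy.special import comb

def identify_error_params(J_stack, residual_stack):
    """J_stack: (3N, P) stacked Jacobians of tip measurements w.r.t. the P
    geometric error parameters; residual_stack: (3N,) measured-minus-modeled
    tip positions. Returns the least-squares parameter estimate."""
    params, *_ = np.linalg.lstsq(J_stack, residual_stack, rcond=None)
    return params

def bernstein_basis(u, n):
    """Bernstein polynomials of degree n evaluated at u in [0, 1]; used to
    fit whatever residual remains after the linear compensation."""
    u = np.asarray(u, float)
    return np.stack([comb(n, k) * u ** k * (1.0 - u) ** (n - k)
                     for k in range(n + 1)], axis=-1)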
Affiliation(s)
- Boyang Xiao: LCSR, Johns Hopkins University, Baltimore, MD 21218, USA
- Alireza Alamdar: LCSR, Johns Hopkins University, Baltimore, MD 21218, USA
- Kefan Song: LCSR, Johns Hopkins University, Baltimore, MD 21218, USA
- Ali Ebrahimi: LCSR, Johns Hopkins University, Baltimore, MD 21218, USA
- Peter Gehlbach: Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287, USA

36
Sefati S, Hegeman R, Iordachita I, Taylor RH, Armand M. A Dexterous Robotic System for Autonomous Debridement of Osteolytic Bone Lesions in Confined Spaces: Human Cadaver Studies. IEEE T ROBOT 2022; 38:1213-1229. [PMID: 35633946] [PMCID: PMC9138669] [DOI: 10.1109/tro.2021.3091283]
Abstract
This article presents a dexterous robotic system for autonomous debridement of osteolytic bone lesions in confined spaces. The proposed system is distinguished from state-of-the-art orthopedic systems in that it combines a rigid-link robot with a continuum manipulator (CM) that enhances reach in the difficult-to-access spaces often encountered in surgery. The CM is equipped with flexible debriding instruments and fiber Bragg grating sensors. The surgeon plans on the patient's preoperative computed tomography, and the robotic system performs the task autonomously under the surgeon's supervision. An optimization-based controller generates control commands on the fly to execute the task while satisfying physical and safety constraints. The system design and controller are discussed, and extensive simulation, phantom, and human cadaver experiments are carried out to evaluate the performance, workspace, and dexterity in confined spaces. The mean and standard deviation of the target placement error are 0.5 mm and 0.18 mm, and the robotic system covers 91% of the workspace behind an acetabular implant in the treatment of hip osteolysis, compared with the 54% achieved by conventional rigid tools.
Affiliation(s)
- Shahriar Sefati: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Rachel Hegeman: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Iulian Iordachita: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Russell H Taylor: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218, USA
- Mehran Armand: Department of Orthopedic Surgery, The Johns Hopkins Medical School, Baltimore, MD 21205, USA

37
Wang Y, Zheng H, Taylor RH, Samuel Au KW. A Handheld Steerable Surgical Drill With a Novel Miniaturized Articulated Joint Module for Dexterous Confined-Space Bone Work. IEEE Trans Biomed Eng 2022; 69:2926-2934. [PMID: 35263248] [DOI: 10.1109/tbme.2022.3157818]
Abstract
OBJECTIVE Steerable surgical drills have the potential to minimize intraoperative tissue damage to patients. However, due to their large shaft diameters, large bend radii, and small bend angles, existing steerable drills are unsuitable for dexterous operations in confined spaces. This article presents a handheld steerable drill with a 4.5-mm miniaturized tip, capable of abruptly bending up to 65°. METHODS To achieve a small tip diameter and a large bend angle, we propose a novel articulated joint module composed of a tendon-driven geared rolling joint and a double universal joint, for steering the drill shaft and transmitting drilling torques, respectively. We integrate this joint module with a customized compact actuation unit into a handheld device. The integrated handheld steerable drill is slim and lightweight, supporting an unburdened single-handed grip and easy integration into existing surgical procedures. RESULTS Experiments and analysis showed that the proposed steerable drill has high distal dexterity and is capable of removing target tissue dexterously through a small passage or incision with minimized collateral damage. CONCLUSION The results suggest the potential of the proposed miniaturized articulated drill for dexterous bone work in confined spaces. SIGNIFICANCE By enhancing distal dexterity and reach for surgeons when dealing with hard bony tissue, the proposed device can potentially minimize surgical invasiveness and thus collateral tissue damage to patients, for a better clinical outcome.
38
Maier-Hein L, Eisenmann M, Sarikaya D, März K, Collins T, Malpani A, Fallert J, Feussner H, Giannarou S, Mascagni P, Nakawala H, Park A, Pugh C, Stoyanov D, Vedula SS, Cleary K, Fichtinger G, Forestier G, Gibaud B, Grantcharov T, Hashizume M, Heckmann-Nötzel D, Kenngott HG, Kikinis R, Mündermann L, Navab N, Onogur S, Roß T, Sznitman R, Taylor RH, Tizabi MD, Wagner M, Hager GD, Neumuth T, Padoy N, Collins J, Gockel I, Goedeke J, Hashimoto DA, Joyeux L, Lam K, Leff DR, Madani A, Marcus HJ, Meireles O, Seitel A, Teber D, Ückert F, Müller-Stich BP, Jannin P, Speidel S. Surgical data science - from concepts toward clinical translation. Med Image Anal 2022; 76:102306. [PMID: 34879287] [PMCID: PMC9135051] [DOI: 10.1016/j.media.2021.102306]
Abstract
Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.
Collapse
Affiliation(s)
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany.
| | - Matthias Eisenmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Duygu Sarikaya
- Department of Computer Engineering, Faculty of Engineering, Gazi University, Ankara, Turkey; LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
| | - Keno März
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | | | - Anand Malpani
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
| | | | - Hubertus Feussner
- Department of Surgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Stamatia Giannarou
- The Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom
| | - Pietro Mascagni
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
| | | | - Adrian Park
- Department of Surgery, Anne Arundel Health System, Annapolis, Maryland, USA; Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
| | - Carla Pugh
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
| | - Swaroop S Vedula
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
| | - Kevin Cleary
- The Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, D.C., USA
| | | | - Germain Forestier
- L'Institut de Recherche en Informatique, Mathématiques, Automatique et Signal (IRIMAS), University of Haute-Alsace, Mulhouse, France; Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
| | - Bernard Gibaud
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
| | - Teodor Grantcharov
- University of Toronto, Toronto, Ontario, Canada; The Li Ka Shing Knowledge Institute of St. Michael's Hospital, Toronto, Ontario, Canada
| | - Makoto Hashizume
- Kyushu University, Fukuoka, Japan; Kitakyushu Koga Hospital, Fukuoka, Japan
| | - Doreen Heckmann-Nötzel
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Hannes G Kenngott
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
| | - Ron Kikinis
- Department of Radiology, Brigham and Women's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
| | | | - Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
| | - Sinan Onogur
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Tobias Roß
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
| | - Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
| | - Russell H Taylor
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
| | - Minu D Tizabi
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Martin Wagner
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
| | - Gregory D Hager
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
| | - Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
| | - Nicolas Padoy
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
| | - Justin Collins
- Division of Surgery and Interventional Science, University College London, London, United Kingdom
| | - Ines Gockel
- Department of Visceral, Transplant, Thoracic and Vascular Surgery, Leipzig University Hospital, Leipzig, Germany
| | - Jan Goedeke
- Pediatric Surgery, Dr. von Hauner Children's Hospital, Ludwig-Maximilians-University, Munich, Germany
| | - Daniel A Hashimoto
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA; Surgical AI and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
| | - Luc Joyeux
- My FetUZ Fetal Research Center, Department of Development and Regeneration, Biomedical Sciences, KU Leuven, Leuven, Belgium; Center for Surgical Technologies, Faculty of Medicine, KU Leuven, Leuven, Belgium; Department of Obstetrics and Gynecology, Division Woman and Child, Fetal Medicine Unit, University Hospitals Leuven, Leuven, Belgium; Michael E. DeBakey Department of Surgery, Texas Children's Hospital and Baylor College of Medicine, Houston, Texas, USA
| | - Kyle Lam
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
| | - Daniel R Leff
- Department of BioSurgery and Surgical Technology, Imperial College London, London, United Kingdom; Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Breast Unit, Imperial Healthcare NHS Trust, London, United Kingdom
| | - Amin Madani
- Department of Surgery, University Health Network, Toronto, Ontario, Canada
| | - Hani J Marcus
- National Hospital for Neurology and Neurosurgery, and UCL Queen Square Institute of Neurology, London, United Kingdom
| | - Ozanan Meireles
- Massachusetts General Hospital, and Harvard Medical School, Boston, Massachusetts, USA
| | - Alexander Seitel
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Dogu Teber
- Department of Urology, City Hospital Karlsruhe, Karlsruhe, Germany
| | - Frank Ückert
- Institute for Applied Medical Informatics, Hamburg University Hospital, Hamburg, Germany
| | - Beat P Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
| | - Pierre Jannin
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
| | - Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC) Dresden, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
| |
|
39
|
Ding AS, Capostagno S, Razavi CR, Li Z, Taylor RH, Carey JP, Creighton FX. Volumetric Accuracy Analysis of Virtual Safety Barriers for Cooperative-Control Robotic Mastoidectomy. Otol Neurotol 2021; 42:e1513-e1517. [PMID: 34325455 PMCID: PMC8595530 DOI: 10.1097/mao.0000000000003309] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
HYPOTHESIS Virtual fixtures can be enforced in cooperative-control robotic mastoidectomies with submillimeter accuracy. BACKGROUND Otologic procedures are well-suited for robotic assistance due to consistent osseous landmarks. We have previously demonstrated the feasibility of cooperative-control robots (CCRs) for mastoidectomy. CCRs manipulate instruments simultaneously with the surgeon, allowing the surgeon to control instruments with robotic augmentation of motion. CCRs can also enforce virtual fixtures, which are safety barriers that prevent motion into undesired locations. Previous studies have validated the ability of CCRs to allow a novice surgeon to safely complete a cortical mastoidectomy. This study provides objective accuracy data for CCR-imposed safety barriers in cortical mastoidectomies. METHODS Temporal bone phantoms were registered to a CCR using preoperative computed tomography (CT) imaging. Virtual fixtures were created using 3D Slicer, with 2D planes placed along the external auditory canal, tegmen, and sigmoid, converging on the antrum. Five mastoidectomies were performed by a novice surgeon, moving the drill to the limit of the barriers. Postoperative CT scans were obtained, and Dice coefficients and Hausdorff distances were calculated. RESULTS The average modified Hausdorff distance between drilled bone and the preplanned volume was 0.351 ± 0.093 mm. Compared with the preplanned volume of 0.947 cm3, the mean volume of bone removed was 1.045 cm3 (difference of 0.0982 cm3 or 10.36%), with an average Dice coefficient of 0.741 (range, 0.665-0.802). CONCLUSIONS CCR virtual fixtures can be enforced with a high degree of accuracy. Future studies will focus on improving accuracy and developing 3D fixtures around relevant surgical anatomy.
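The volumetric comparison described above reduces to two standard metrics on binary CT segmentations. Below is a minimal Python sketch of how such metrics can be computed; the mask names, voxel spacing, and use of NumPy/SciPy are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.ndimage import binary_erosion
    from scipy.spatial import cKDTree

    def dice_coefficient(a, b):
        """Dice overlap between two boolean volumes of identical shape."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def surface_points_mm(mask, spacing_mm):
        """Coordinates (in mm) of voxels on the surface of a binary mask."""
        mask = mask.astype(bool)
        surface = mask & ~binary_erosion(mask)
        return np.argwhere(surface) * np.asarray(spacing_mm)

    def modified_hausdorff_mm(pts_a, pts_b):
        """Modified Hausdorff distance: the larger of the two mean
        nearest-neighbor distances between two surface point sets."""
        d_ab = cKDTree(pts_b).query(pts_a)[0].mean()
        d_ba = cKDTree(pts_a).query(pts_b)[0].mean()
        return max(d_ab, d_ba)

With the drilled and preplanned masks resampled to a common grid, dice_coefficient(drilled, planned) and modified_hausdorff_mm applied to the extracted surface points yield the two reported quantities.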
Affiliation(s)
- Andy S. Ding
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Sarah Capostagno
- Department of Biomedical Engineering, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Christopher R. Razavi
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Zhaoshuo Li
- Department of Computer Science, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - Russell H. Taylor
- Department of Computer Science, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
| | - John P. Carey
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Francis X. Creighton
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| |
|
40
|
Roth R, Wu J, Alamdar A, Taylor RH, Gehlbach P, Iordachita I. Towards a Clinically Optimized Tilt Mechanism for Bilateral Micromanipulation with Steady-Hand Eye Robot. Int Symp Med Robot 2021; 2021:10.1109/ismr48346.2021.9661579. [PMID: 35141730 PMCID: PMC8822603 DOI: 10.1109/ismr48346.2021.9661579] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Cooperative robotic systems for vitreoretinal surgery can enable novel surgical approaches by allowing the surgeon to perform procedures with enhanced stabilization and high-accuracy tool movements. This paper presents the optimization and design of a four-bar linkage type tilt mechanism for a novel Steady-Hand Eye Robot (SHER), which can be used equivalently on both the left and right patient sides during a bilateral approach with two robots. In this optimization, it is desirable to limit the workspace needed for compensation motions that ensure a virtual remote center of motion (V-RCM). The safety space around the patient, the space for the surgeon's hand, and positional accuracy are also included in the optimization. The applicability of the resulting optimized mechanism was confirmed with a design prototype in a representative mock-up of the surgical setting, allowing multiple directions of robot approach toward a medical phantom.
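The core quantity in this optimization is the compensation motion required to maintain the V-RCM: when the tool tilts about the mechanism's physical pivot, the base must translate so that the tool tip stays fixed. The sketch below illustrates this relationship in 2D under simplifying assumptions (rigid tool, planar tilt); the geometry values are hypothetical, and the actual four-bar linkage optimization is considerably more involved.

    import numpy as np

    def rot2d(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def max_compensation(pivot_to_tip, tilt_range_rad, n=61):
        """Largest base translation needed to hold the tool tip fixed
        (virtual RCM) while the mechanism tilts about its physical pivot.
        A tilt R(theta) about the pivot moves the tip by (R(theta) - I) @ v,
        so the base must compensate by (I - R(theta)) @ v."""
        v = np.asarray(pivot_to_tip, dtype=float)
        thetas = np.linspace(-tilt_range_rad, tilt_range_rad, n)
        comp = np.stack([(np.eye(2) - rot2d(t)) @ v for t in thetas])
        return np.linalg.norm(comp, axis=1).max()

    # e.g., a 150 mm pivot-to-tip offset swept over a +/-30 degree tilt:
    # max_compensation([0.0, 150.0], np.deg2rad(30))  -> about 77.6 mm

An optimizer can then trade this compensation extent against the clearance constraints (patient safety space, room for the surgeon's hand) named in the abstract.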
Affiliation(s)
- Robert Roth
- Technical University of Munich, Germany and with LCSR at the Johns Hopkins University, Baltimore, MD 21218 USA
| | - Jiahao Wu
- T Stone Robotics Institute, the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, HKSAR, and with LCSR at the Johns Hopkins University, Baltimore, MD 21218 USA
| | - Alireza Alamdar
- LCSR at the Johns Hopkins University, Baltimore, MD 21218 USA
| | | | - Peter Gehlbach
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287 USA
| | | |
|
41
|
Ebrahimi A, Urias MG, Patel N, Taylor RH, Gehlbach P, Iordachita I. Adaptive Control Improves Sclera Force Safety in Robot-Assisted Eye Surgery: A Clinical Study. IEEE Trans Biomed Eng 2021; 68:3356-3365. [PMID: 33822717 PMCID: PMC8492795 DOI: 10.1109/tbme.2021.3071135] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The integration of robotics into retinal microsurgery reduces the surgeon's perception of tool-to-tissue interaction forces. This blunting of human tactile sensory input, due to the inflexible mass and large inertia of the robotic arm compared to the milli-Newton scale of the interaction forces and the fragile tissues involved in ophthalmic surgery, presents a potential iatrogenic risk during robotic eye surgery. In this paper, we aim to evaluate two variants of an adaptive force control scheme implemented on the Steady-Hand Eye Robot (SHER) that are intended to mitigate the risk of unsafe scleral forces. The present study enrolled ten retina fellows and ophthalmology residents in a simulated procedure that asked the trainees to follow retinal vessels in a model retina surgery environment. For this purpose, we developed a force-sensing instrument, equipped with Fiber Bragg Grating (FBG) sensors, that attaches to the robot. A piezo-actuated linear stage applied random lateral motions to the eyeball phantom to simulate disturbances during surgery. The SHER and all of its dependencies were set up in an operating room in the Wilmer Eye Institute at the Johns Hopkins Hospital. The clinicians conducted robot-assisted experiments with the adaptive controls incorporated, as well as freehand manipulations. The results indicate that the Adaptive Norm Control (ANC) method is able to maintain scleral forces at predetermined safe levels better than even freehand manipulation. Novice clinicians in robot training, however, subjectively preferred freehand maneuvers over robotic manipulation. The preferences of clinicians once highly skilled with the robot were not assessed in this study.
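The ANC update law itself is not reproduced here; the following sketch only conveys the underlying cooperative-control idea, i.e., attenuating the robot's response as the measured scleral force norm approaches a predefined safety level. The threshold and gain values are illustrative assumptions, not the authors' parameters.

    import numpy as np

    F_SAFE = 0.120  # illustrative scleral force safety threshold, newtons

    def admittance_gain(f_scleral, k0=1.0):
        """Attenuate the admittance gain as the measured scleral force
        norm approaches the safety threshold (a conceptual stand-in for
        the adaptive scheme; not the authors' ANC update law)."""
        ratio = min(np.linalg.norm(f_scleral) / F_SAFE, 1.0)
        return k0 * (1.0 - ratio) ** 2

    def commanded_velocity(f_handle, f_scleral):
        """Cooperative (admittance) control: tool velocity follows the
        surgeon's handle force, scaled down near unsafe scleral forces."""
        return admittance_gain(f_scleral) * np.asarray(f_handle, dtype=float)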
|
42
|
Zhou M, Wu J, Ebrahimi A, Patel N, He C, Gehlbach P, Taylor RH, Knoll A, Nasseri MA, Iordachita I. Spotlight-based 3D Instrument Guidance for Retinal Surgery. Int Symp Med Robot 2021; 2020. [PMID: 34595483 DOI: 10.1109/ismr48331.2020.9312952] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Retinal surgery is a complex activity that can be challenging for a surgeon to perform effectively and safely. Image-guided robot-assisted surgery is a promising solution that can significantly enhance treatment outcomes and reduce the physical limitations of human surgeons. In this paper, we demonstrate a novel method for 3D guidance of the instrument based on the projection of a spotlight in single microscope images. The spotlight projection mechanism is first analyzed and modeled for projection onto both a plane and a spherical surface. To test the feasibility of the proposed method, a light fiber is integrated into the instrument, which is driven by the Steady-Hand Eye Robot (SHER). The spot of light is segmented and tracked on a phantom retina using the proposed algorithm. The static calibration and dynamic test results both show that the proposed method can readily achieve 0.5 mm tip-to-surface distance accuracy, which is within the clinically acceptable range for intraocular visual guidance.
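For a planar surface and perpendicular incidence, a cone projection model makes the spot radius proportional to the tip-to-surface distance, so distance can be recovered from the segmented spot. A minimal sketch under these simplifying assumptions follows; the fiber half-angle and pixel scale are hypothetical, and the paper additionally models spherical surfaces.

    import numpy as np

    def spot_radius_mm(spot_mask, mm_per_px):
        """Equivalent-circle radius of the segmented spotlight spot."""
        area_mm2 = spot_mask.astype(bool).sum() * mm_per_px ** 2
        return np.sqrt(area_mm2 / np.pi)

    def tip_to_surface_mm(spot_radius, half_angle_deg):
        """Invert the cone model: a fiber emitting a light cone of known
        half-angle produces a spot whose radius grows linearly with
        distance, r = d * tan(alpha), hence d = r / tan(alpha)."""
        return spot_radius / np.tan(np.deg2rad(half_angle_deg))

    # e.g., a 0.35 mm spot radius with a 10 degree half-angle:
    # tip_to_surface_mm(0.35, 10.0)  -> about 1.98 mm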
Affiliation(s)
- Mingchuan Zhou
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA; Chair of Robotics, Artificial Intelligence and Real-time Systems, Technische Universität München, München 85748 Germany
| | - Jiahao Wu
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA; T Stone Robotics Institute, the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, HKSAR, China
| | - Ali Ebrahimi
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA
| | - Niravkumar Patel
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA
| | - Changyan He
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA
| | - Peter Gehlbach
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287 USA
| | - Russell H Taylor
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA
| | - Alois Knoll
- Chair of Robotics, Artificial Intelligence and Real-time Systems, Technische Universität München, München 85748 Germany
| | - M Ali Nasseri
- Augenklinik und Poliklinik, Klinikum rechts der Isar der Technische Universität München, München 81675 Germany
| | - Iulian Iordachita
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics at the Johns Hopkins University, Baltimore, MD 21218 USA
| |
|
43
|
Razavi C, Galaiya D, Vafaee S, Yin R, Carey JP, Taylor RH, Creighton FX. Three dimensional printing of a low-cost middle-ear training model for surgical management of otosclerosis. Laryngoscope Investig Otolaryngol 2021; 6:1133-1136. [PMID: 34693002 PMCID: PMC8513458 DOI: 10.1002/lio2.646] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Revised: 08/09/2021] [Accepted: 08/13/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Surgical management of otosclerosis is technically challenging, with studies demonstrating that outcomes are commensurate with surgical experience. Moreover, experts apply less force on the ossicular chain during prosthesis placement than their novice counterparts. Given the predicted decreasing patient pool and the rising cost of human temporal bone specimens, it has become more challenging for trainees to receive adequate intraoperative or laboratory-based experience in this procedure. As such, there is a need for a low-cost training model for the procedure. Here we describe such a model. METHODS A surgical model of the middle ear was designed using computer-aided design (CAD) software. The model consists of four components: a superior three-dimensional (3D)-printed component representing the external auditory canal; a 90° torsion spring representing the incus; a 3D-printed base with a stapedotomy underlying the torsion spring; and a 3D-printed phone holder to facilitate video recording of trials and subsequent calculation of the force applied on the modeled incus. Force applied on the incus is calculated from Hooke's Law via post-trial computer-vision analysis of the recorded video, following experimental determination of the spring constant of the modeled incus. RESULTS The described model was manufactured at a total cost of $56.50. The spring constant was experimentally determined to be 97.0 mN mm/deg, resulting in an ability to detect force applied to the modeled incus across a range of 1.2 to 5200 mN. CONCLUSIONS We have created a low-cost middle-ear training model with measurable, objective performance outcomes. The range of detectable force exceeds expected values for the task. Level of Evidence: IV.
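Given the reported spring constant, the force computation follows directly from Hooke's law for a torsion spring: torque is the spring constant times the angular deflection, and force is torque divided by the moment arm at the contact point. A minimal sketch, with the moment arm as an assumed illustrative value:

    SPRING_CONSTANT = 97.0  # mN*mm per degree, as reported above

    def incus_force_mN(deflection_deg, moment_arm_mm):
        """Hooke's law for the torsion-spring incus: torque = k * theta;
        force at the contact point is torque / moment arm. In practice the
        deflection comes from the post-trial video analysis; the moment
        arm value passed here is an assumption for illustration."""
        torque_mN_mm = SPRING_CONSTANT * deflection_deg
        return torque_mN_mm / moment_arm_mm

    # e.g., a 0.5 degree deflection at a 10 mm moment arm:
    # incus_force_mN(0.5, 10.0)  -> 4.85 mN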
Affiliation(s)
- Christopher Razavi
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
| | - Deepa Galaiya
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
| | - Seena Vafaee
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
| | - Rui Yin
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
| | - John P. Carey
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
| | - Russell H. Taylor
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
| | - Francis X. Creighton
- Department of Otolaryngology – Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
| |
|
44
|
Wang Y, Li G, Kwok KW, Cleary K, Taylor RH, Iordachita I. Towards Safe In Situ Needle Manipulation for Robot Assisted Lumbar Injection in Interventional MRI. Rep U S 2021; 2021:1835-1842. [PMID: 35173994 PMCID: PMC8845499 DOI: 10.1109/iros51168.2021.9636220] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Lumbar injection is an image-guided procedure performed manually for the diagnosis and treatment of lower back pain and leg pain. Previously, we developed and verified an MR-Conditional robotic solution for assisting the needle insertion process. Drawing on our clinical experience, a virtual remote center of motion (RCM) constraint is implemented to enable our robot to mimic a clinician's hand motion when adjusting the needle tip position in situ. Force and image data are collected to study the needle behavior in gel phantoms during this motion, and a mechanics-based needle-tissue interaction model is proposed and evaluated to further examine the underlying physics. This work extends the commonly adopted notion of an RCM to flexible needles and introduces new motion parameters to describe the needle behavior. The model parameters can be tuned to match the experimental results to sub-millimeter accuracy, and the proposed needle manipulation method presents a safer alternative to laterally translating the needle during in situ adjustments.
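For a rigid tool, a virtual RCM can be enforced by choosing the base velocity so that the velocity at the RCM point vanishes. A one-function sketch of this kinematic constraint is given below; it is an idealization, since the paper's contribution is precisely the extension of this notion to flexible needles interacting with tissue.

    import numpy as np

    def rcm_base_velocity(omega, p_rcm, p_base):
        """Linear base velocity that, combined with angular velocity omega,
        keeps the point p_rcm stationary (virtual RCM):
        v(p_rcm) = v_base + omega x (p_rcm - p_base) = 0
        =>  v_base = -omega x (p_rcm - p_base)."""
        return -np.cross(omega, np.asarray(p_rcm, float) - np.asarray(p_base, float))

    # e.g., pivoting at 0.1 rad/s about x with the RCM 50 mm from the base
    # along z: rcm_base_velocity([0.1, 0, 0], [0, 0, 50.0], [0, 0, 0.0])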
Affiliation(s)
- Yanzhou Wang
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
| | - Gang Li
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
| | - Ka-Wai Kwok
- Department of Mechanical Engineering, The University of Hong Kong, Hong Kong
| | - Kevin Cleary
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
| | - Russell H Taylor
- Department of Computer Science and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
| | - Iulian Iordachita
- Department of Mechanical Engineering and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland, USA
| |
|
45
|
Unberath M, Gao C, Hu Y, Judish M, Taylor RH, Armand M, Grupp R. The Impact of Machine Learning on 2D/3D Registration for Image-Guided Interventions: A Systematic Review and Perspective. Front Robot AI 2021; 8:716007. [PMID: 34527706 PMCID: PMC8436154 DOI: 10.3389/frobt.2021.716007] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Accepted: 07/30/2021] [Indexed: 11/13/2022] Open
Abstract
Image-based navigation is widely considered the next frontier of minimally invasive surgery. It is believed that image-based navigation will increase access to reproducible, safe, and high-precision surgery, as it may then be performed at acceptable cost and effort: image-based techniques avoid the need for specialized equipment and integrate seamlessly with contemporary workflows. Furthermore, image-based navigation techniques are expected to play a major role in enabling mixed reality environments, as well as autonomous and robot-assisted workflows. A critical component of image guidance is 2D/3D registration, a technique for estimating the spatial relationships between 3D structures, e.g., preoperative volumetric imagery or models of surgical instruments, and 2D images thereof, such as intraoperative X-ray fluoroscopy or endoscopy. While image-based 2D/3D registration is a mature technique, its transition from bench to bedside has been restrained by well-known challenges, including brittleness with respect to the optimization objective, hyperparameter selection, and initialization; difficulties in dealing with inconsistencies or multiple objects; and limited single-view performance. One reason these challenges persist is that analytical solutions are likely inadequate given the complexity, variability, and high dimensionality of generic 2D/3D registration problems. The recent advent of machine learning-based approaches to imaging problems, which approximate the desired functional mapping with highly expressive parametric models rather than specifying it explicitly, holds promise for solving some of the notorious challenges in 2D/3D registration. In this manuscript, we review the impact of machine learning on 2D/3D registration, systematically summarizing the recent advances made by the introduction of this technology. Grounded in these insights, we then offer our perspective on the most pressing needs, significant open problems, and possible next steps.
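The classical intensity-based formulation that the review takes as its starting point can be stated compactly: search over pose parameters for the transformation whose simulated projection (DRR) best matches the observed X-ray under an image similarity measure. The toy sketch below uses two rotation parameters, a sum-projection stand-in for a DRR renderer, and normalized cross-correlation; all of these are simplifying assumptions for illustration, and the machine learning approaches surveyed replace or initialize parts of this pipeline.

    import numpy as np
    from scipy.ndimage import rotate
    from scipy.optimize import minimize

    def toy_drr(volume, angles_deg):
        """Stand-in DRR: rotate the CT volume by two angles, then sum
        along one axis (a real system would ray-cast through the volume)."""
        v = rotate(volume, angles_deg[0], axes=(0, 1), reshape=False, order=1)
        v = rotate(v, angles_deg[1], axes=(0, 2), reshape=False, order=1)
        return v.sum(axis=0)

    def ncc(a, b):
        """Normalized cross-correlation between two images."""
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float((a * b).mean())

    def register(volume, xray, x0=(0.0, 0.0)):
        """Estimate the two rotation parameters by maximizing NCC between
        the simulated projection and the observed X-ray."""
        cost = lambda x: -ncc(toy_drr(volume, x), xray)
        return minimize(cost, x0, method="Nelder-Mead").x

The brittleness the abstract names is visible even here: Nelder-Mead converges only from a good initialization, and the similarity landscape is riddled with local optima.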
Affiliation(s)
- Mathias Unberath
- Advanced Robotics and Computationally Augmented Environments (ARCADE) Lab, Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
| | | | | | | | | | | | | |
|
46
|
Vagvolgyi BP, Khrenov M, Cope J, Deguet A, Kazanzides P, Manzoor S, Taylor RH, Krieger A. Telerobotic Operation of Intensive Care Unit Ventilators. Front Robot AI 2021; 8:612964. [PMID: 34250025 PMCID: PMC8264200 DOI: 10.3389/frobt.2021.612964] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Accepted: 06/07/2021] [Indexed: 01/18/2023] Open
Abstract
Since the first reports of a novel coronavirus (SARS-CoV-2) in December 2019, over 33 million people worldwide have been infected and approximately 1 million have died from the disease caused by this virus, COVID-19. In the United States alone, there have been approximately 7 million cases and over 200,000 deaths. This outbreak has placed an enormous strain on healthcare systems and workers. Severe cases require hospital care, and 8.5% of patients require mechanical ventilation in an intensive care unit (ICU). One major challenge is that clinical care personnel must don and doff cumbersome personal protective equipment (PPE) in order to enter an ICU to make simple adjustments to ventilator settings. Although future ventilators and other ICU equipment may be controllable remotely through computer networks, the enormous installed base of existing ventilators does not have this capability. This paper reports the development of a simple, low-cost telerobotic system that permits adjustment of ventilator settings from outside the ICU. The system consists of a small Cartesian robot capable of operating a ventilator touch screen under camera-based visual servoing control, commanded from a wirelessly connected tablet master device located outside the room. Engineering system tests demonstrated that the open-loop mechanical repeatability of the device was 7.5 mm, and that the average positioning error of the robotic finger under visual servoing control was 5.94 mm. Successful usability tests in a simulated ICU environment were carried out and are reported. In addition to enabling a significant reduction in PPE consumption, the prototype system has been shown in a preliminary evaluation to significantly reduce the total time required for a respiratory therapist to perform typical setting adjustments on a commercial ventilator, including donning and doffing PPE, from 271 to 109 s.
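At its core, the reported visual servoing reduces to commanding Cartesian corrections proportional to the pixel error between the detected touch target and the robot fingertip. A minimal sketch of one servo iteration, with the calibration scale and gain as illustrative assumptions rather than the system's actual parameters:

    import numpy as np

    MM_PER_PX = 0.25  # illustrative camera calibration scale
    GAIN = 0.5        # illustrative proportional servo gain

    def servo_step(target_px, fingertip_px):
        """One visual-servoing iteration: a Cartesian (x, y) correction in
        mm, proportional to the pixel error between the detected touch
        target and the robot fingertip in the camera image."""
        error_px = np.asarray(target_px, float) - np.asarray(fingertip_px, float)
        return GAIN * MM_PER_PX * error_px

    # iterate until the residual error is small, then actuate the press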
Affiliation(s)
- Balazs P Vagvolgyi
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Mikhail Khrenov
- Department of Mechanical Engineering, A. James Clark School of Engineering, University of Maryland, College Park, MD, United States
| | - Jonathan Cope
- Anaesthesia and Critical Care Medicine, Johns Hopkins Hospital, Baltimore, MD, United States
| | - Anton Deguet
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Peter Kazanzides
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Sajid Manzoor
- Anaesthesia and Critical Care Medicine, Johns Hopkins Hospital, Baltimore, MD, United States
| | - Russell H Taylor
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Axel Krieger
- Laboratory for Computational Sensing and Robotics, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States.,Department of Mechanical Engineering, A. James Clark School of Engineering, University of Maryland, College Park, MD, United States
| |
|
47
|
Huang J, Cai Y, Chu X, Taylor RH, Au KWS. Non-Fixed Contact Manipulation Control Framework for Deformable Objects With Active Contact Adjustment. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3062302] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
48
|
Ma JH, Sefati S, Taylor RH, Armand M. An Active Steering Hand-held Robotic System for Minimally Invasive Orthopaedic Surgery Using a Continuum Manipulator. IEEE Robot Autom Lett 2021; 6:1622-1629. [PMID: 33869745 PMCID: PMC8052093 DOI: 10.1109/lra.2021.3059634] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
This paper presents the development and experimental evaluation of an active-steering hand-held robotic system for milling and curved drilling in minimally invasive orthopaedic interventions. The system comprises a cable-driven continuum dexterous manipulator (CDM), an actuation unit with a handpiece, and a flexible rotary cutting tool. Compared to conventional rigid drills, the proposed system enhances dexterity and reach in confined surgical spaces while giving the surgeon direct control and sufficient stability when cutting/milling hard tissue. Of note, for cases that require precise motion, the system can be mounted on a positioning robot for additional controllability. A proportional-derivative (PD) controller for regulating drive-cable tension is proposed for stable steering of the CDM during cutting operations. The robotic system is characterized and tested at various tool rotational speeds and cable tensions, demonstrating successful cutting of three-dimensional and curvilinear tool paths in simulated cancellous bone and bone phantom. Material removal rates (MRRs) of up to 571 mm3/s are achieved for stable cutting, a substantial improvement over previous related work.
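A minimal sketch of the cable-tension regulation idea, i.e., a PD law mapping the tension error to an actuator command; the gains, reference tension, and velocity-command interface are illustrative assumptions rather than the authors' tuned values.

    class CableTensionPD:
        """PD regulation of drive-cable tension for steering the CDM."""

        def __init__(self, kp=2.0, kd=0.05, t_ref=10.0):
            self.kp, self.kd, self.t_ref = kp, kd, t_ref  # gains, target (N)
            self.prev_err = 0.0

        def update(self, tension_N, dt):
            """Map the measured cable tension to an actuator command."""
            err = self.t_ref - tension_N
            d_err = (err - self.prev_err) / dt
            self.prev_err = err
            return self.kp * err + self.kd * d_err

The derivative term damps the tension oscillations excited by the rotating cutter, which is what allows stable steering during material removal.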
Affiliation(s)
- Justin H Ma
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
| | - Shahriar Sefati
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
| | - Russell H Taylor
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
| | - Mehran Armand
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Orthopaedic Surgery, Johns Hopkins University Medical School, Baltimore, MD, USA
| |
|
49
|
|
50
|
Su H, Di Lallo A, Murphy RR, Taylor RH, Garibaldi BT, Krieger A. Physical human–robot interaction for clinical care in infectious environments. NAT MACH INTELL 2021. [DOI: 10.1038/s42256-021-00324-z] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|