1. Olsen RG, Svendsen MBS, Tolsgaard MG, Konge L, Røder A, Bjerrum F. Surgical gestures can be used to assess surgical competence in robot-assisted surgery: A validity investigating study of simulated RARP. J Robot Surg 2024; 18:47. PMID: 38244130. PMCID: PMC10799775. DOI: 10.1007/s11701-023-01807-4.
Abstract
The aim was to collect validity evidence for assessing surgical competence through the classification of general surgical gestures during a simulated robot-assisted radical prostatectomy (RARP). We used 165 video recordings of novice and experienced RARP surgeons performing three parts of the RARP procedure on the RobotiX Mentor. We annotated the surgical tasks with five surgical gestures: dissection, hemostatic control, application of clips, needle handling, and suturing. The gestures were analyzed using idle time (periods with minimal instrument movement) and active time (whenever a surgical gesture was annotated). The distribution of surgical gestures was described using one-dimensional heat maps ("snail tracks"). All surgeons had a similar percentage of idle time, but novices had longer idle phases (mean time: 21 vs. 15 s, p < 0.001). Novices also used a higher total number of surgical gestures (number of phases: 45 vs. 35, p < 0.001), and each phase was longer than those of the experienced surgeons (mean time: 10 vs. 8 s, p < 0.001). Novices and experienced surgeons also showed different gesture patterns, as seen in the different distributions of their phases. General surgical gestures can thus be used to assess surgical competence in simulated RARP and can be displayed as a visual tool to show how performance improves. The established pass/fail level may be used to ensure residents' competence before they proceed to supervised real-life surgery. The next step is to investigate whether the developed tool can optimize automated feedback during simulator training.
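The idle/active-time analysis this abstract describes can be sketched in a few lines of Python: gestures are treated as annotated time intervals, and idle phases are the gaps between them. The `phase_statistics` helper, the gesture labels, and the example timeline below are illustrative assumptions, not the study's data or code.

```python
# Hypothetical sketch of the idle/active-time analysis: gestures are
# (start_s, end_s, label) intervals; idle phases are the gaps between
# consecutive annotations (and any trailing gap before the recording ends).

def phase_statistics(annotations, total_time):
    """Return (active_time, idle_phase_durations) from gesture intervals."""
    annotations = sorted(annotations)
    active = sum(end - start for start, end, _ in annotations)
    idle_phases = []
    cursor = 0.0
    for start, end, _ in annotations:
        if start > cursor:
            idle_phases.append(start - cursor)  # gap = one idle phase
        cursor = max(cursor, end)
    if total_time > cursor:
        idle_phases.append(total_time - cursor)
    return active, idle_phases

timeline = [(0.0, 10.0, "dissection"), (15.0, 22.0, "needle handling"),
            (22.0, 30.0, "suturing")]
active, idle = phase_statistics(timeline, total_time=35.0)
print(active, idle)  # 25.0 [5.0, 5.0]
```

From such per-phase durations, group comparisons like the reported mean idle-phase lengths (21 vs. 15 s) follow directly.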
Affiliation(s)
- Rikke Groth Olsen
  - Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, The Capital Region of Denmark, Ryesgade 53B, 2100 Copenhagen, Denmark
  - Department of Urology, Copenhagen Prostate Cancer Center, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
  - Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Morten Bo Søndergaard Svendsen
  - Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, The Capital Region of Denmark, Ryesgade 53B, 2100 Copenhagen, Denmark
  - Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Martin G Tolsgaard
  - Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, The Capital Region of Denmark, Ryesgade 53B, 2100 Copenhagen, Denmark
- Lars Konge
  - Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, The Capital Region of Denmark, Ryesgade 53B, 2100 Copenhagen, Denmark
  - Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Andreas Røder
  - Department of Urology, Copenhagen Prostate Cancer Center, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
  - Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Flemming Bjerrum
  - Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, The Capital Region of Denmark, Ryesgade 53B, 2100 Copenhagen, Denmark
  - Department of Gastrointestinal and Hepatic Diseases, Copenhagen University Hospital - Herlev and Gentofte, Herlev, Denmark
2. Lee S, Shetty AS, Cavuoto L. Modeling of Learning Processes Using Continuous-Time Markov Chain for Virtual-Reality-Based Surgical Training in Laparoscopic Surgery. IEEE Trans Learn Technol 2023; 17:462-473. PMID: 38617582. PMCID: PMC11013959. DOI: 10.1109/tlt.2023.3236899.
Abstract
Virtual reality (VR) technology has recently been adopted in surgical training because of its cost-effectiveness, time savings, and cognition-based feedback generation. However, the quantitative evaluation of its training effectiveness has not been studied thoroughly. This paper demonstrates the effectiveness of a VR-based surgical training simulator for laparoscopic surgery and investigates how stochastic modeling with a continuous-time Markov chain (CTMC) can make a surgeon's training status explicit. By comparing training in real environments with training in the VR-based simulator, the authors also explore the validity of the VR simulator for laparoscopic surgery. The study further aids in establishing learning models of surgeons, supporting continuous evaluation of training processes and the derivation of real-time feedback through CTMC-based modeling.
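The CTMC idea can be illustrated with a minimal maximum-likelihood fit: the exit rate of a state is the number of transitions out of it divided by the total time spent in it, and the diagonal of the generator makes each row sum to zero. The skill-state names and dwell times below are invented for illustration and are not the paper's model.

```python
# Minimal sketch (assumed interface, not the paper's code) of estimating a
# CTMC generator Q from an observed trajectory of (state, dwell_seconds):
# q_ij = (#transitions i -> j) / (total time spent in i).

from collections import defaultdict

def estimate_generator(trajectory):
    """trajectory: list of (state, dwell_seconds); returns Q as nested dicts."""
    time_in = defaultdict(float)
    counts = defaultdict(lambda: defaultdict(int))
    for (s, dwell), (s_next, _) in zip(trajectory, trajectory[1:]):
        time_in[s] += dwell
        counts[s][s_next] += 1
    time_in[trajectory[-1][0]] += trajectory[-1][1]  # dwell in final state
    Q = {}
    for s, outs in counts.items():
        Q[s] = {t: n / time_in[s] for t, n in outs.items()}
        Q[s][s] = -sum(Q[s].values())  # diagonal makes the row sum to zero
    return Q

traj = [("novice", 2.0), ("intermediate", 3.0), ("novice", 2.0),
        ("intermediate", 3.0), ("expert", 1.0)]
Q = estimate_generator(traj)
print(Q["novice"])
```

With the example trajectory, the novice state is occupied for 4 s with two exits to the intermediate state, giving a rate of 0.5 per second.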
Affiliation(s)
- Seunghan Lee
  - Industrial and Systems Engineering Department, Mississippi State University
- Lora Cavuoto
  - Industrial and Systems Engineering, University at Buffalo, Buffalo, NY, USA
3. An Interaction-Based Bayesian Network Framework for Surgical Workflow Segmentation. Int J Environ Res Public Health 2021; 18:6401. PMID: 34199188. PMCID: PMC8296226. DOI: 10.3390/ijerph18126401.
Abstract
Recognizing and segmenting surgical workflow is important for assessing surgical skills as well as hospital effectiveness, and plays a crucial role in maintaining and improving surgical and healthcare systems. Most evidence supporting this remains signal-, video-, and/or image-based; causal evidence of the interaction between surgical staff remains challenging to gather and is largely absent. Here, we collected real-time movement data of the surgical staff during a neurosurgery to explore cooperation networks among different surgical roles, namely surgeon, assistant nurse, scrub nurse, and anesthetist, and to segment surgical workflows to further assess surgical effectiveness. We installed a zone position system (ZPS) in an operating room (OR) to record high-frequency, high-resolution movements of all surgical staff. Measuring individual interactions in a closed, small area is difficult, and surgical workflow classification carries uncertainties associated with the surgical staff (varied training and operation skills), the patients (initial states and biological differences), and the surgical procedures (complexity). We proposed an interaction-based framework to recognize the surgical workflow and integrated a Bayesian network (BN) to handle these uncertainties. Our results suggest that the proposed BN method performs well, with an accuracy of 70%. Furthermore, it semantically explains the interaction and cooperation among surgical staff.
4. Li RQ, Zhou XH, Bian GB, Xie XL, Hou ZG. Recognition of Endovascular Manipulations Using Recurrent Neural Networks. Annu Int Conf IEEE Eng Med Biol Soc 2019:7010-7013. PMID: 31947452. DOI: 10.1109/embc.2019.8856298.
Abstract
The ability to accurately recognize elementary surgical gestures is a stepping stone to automated surgical assessment and surgical training. In this paper, a long short-term memory (LSTM) recurrent neural network is applied to the task of recognizing six typical manipulations in percutaneous coronary intervention (PCI). These manipulations are atomic surgical operations, often called surgemes in the literature. Instead of using video data or the kinematic data of surgical instruments, we propose to use the kinematic data of the operator's hand, acquired by our wearable data glove, to recognize the manipulations. To establish a baseline for comparison, a method based on the hidden Markov model (HMM) is applied, because HMMs are frequently used in surgical sequence learning tasks. Two cross-validation schemes are used in our experiments, and both show that our LSTM-based method far outperforms the HMM-based method. To our knowledge, this is the first paper to apply an LSTM recurrent neural network in the field of PCI.
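The HMM baseline referred to here decodes a manipulation sequence from observations; its core can be sketched with a toy Viterbi decoder. The states, observation symbols, and probabilities below are invented for illustration; the paper's actual models and glove features differ.

```python
# Toy Viterbi decoder, the kind of HMM inference used as a baseline for
# sequence recognition. All states/observations/probabilities are invented.

import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for an observation sequence."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + math.log(trans_p[p][s]))
            col[s] = V[-1][prev] + math.log(trans_p[prev][s]) + math.log(emit_p[s][o])
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

states = ("push", "rotate")
start = {"push": 0.6, "rotate": 0.4}
trans = {"push": {"push": 0.7, "rotate": 0.3}, "rotate": {"push": 0.4, "rotate": 0.6}}
emit = {"push": {"fwd": 0.8, "twist": 0.2}, "rotate": {"fwd": 0.3, "twist": 0.7}}
print(viterbi(["fwd", "twist", "twist"], states, start, trans, emit))
# ['push', 'rotate', 'rotate']
```

An LSTM replaces the fixed transition/emission tables with learned recurrent dynamics, which is what gives it the edge the paper reports.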
5. Peng W, Xing Y, Liu R, Li J, Zhang Z. An automatic skill evaluation framework for robotic surgery training. Int J Med Robot 2018; 15:e1964. PMID: 30281892. DOI: 10.1002/rcs.1964.
Abstract
BACKGROUND To provide feedback to surgeons in robotic surgery training, many surgical skill evaluation methods have been developed. However, few focus on performance within individual surgical motion segments. This paper proposes a method for identifying a trainee's skill weaknesses during surgical training. METHODS We propose an automatic skill evaluation framework that compares a trainee's operations with a template operation in each surgical motion segment, based mainly on dynamic time warping (DTW) and a continuous hidden Markov model (CHMM). RESULTS The feasibility of the proposed framework has been preliminarily verified. For identifying skill weaknesses in instrument handling and efficiency, the results of the framework correlated significantly with manual scoring. CONCLUSION The automatic skill evaluation framework has shown its advantages in efficiency, objectivity, and targeted feedback, and can be used in robotic surgery training.
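The DTW component of such a framework is a classic dynamic program: align a trainee's motion sequence to a template and accumulate the pointwise cost along the best warping path. The scalar sequences below are illustrative; the paper operates on multi-dimensional motion data.

```python
# Standard DTW distance between two 1-D sequences (illustrative sketch of
# the template-matching step; not the paper's implementation).

def dtw_distance(a, b):
    """Cumulative cost of the optimal warping path between sequences a and b."""
    INF = float("inf")
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[-1][-1]

template = [0.0, 1.0, 2.0, 1.0]
trainee = [0.0, 0.0, 1.0, 2.0, 2.0, 1.0]
print(dtw_distance(template, trainee))  # 0.0
```

Note the distance is 0.0 here: the trainee traces the same trajectory, only more slowly, and DTW absorbs the tempo difference, which is exactly why it suits comparing trainees against an expert template.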
Affiliation(s)
- Wenjia Peng, Yuan Xing, Ruida Liu, Jinhua Li, Zemin Zhang
  - Key Lab for Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin, China
6. Vedula SS, Ishii M, Hager GD. Objective Assessment of Surgical Technical Skill and Competency in the Operating Room. Annu Rev Biomed Eng 2017; 19:301-325. PMID: 28375649. DOI: 10.1146/annurev-bioeng-071516-044435.
Abstract
Training skillful and competent surgeons is critical to ensure high quality of care and to minimize disparities in access to effective care. Traditional models to train surgeons are being challenged by rapid advances in technology, an intensified patient-safety culture, and a need for value-driven health systems. Simultaneously, technological developments are enabling capture and analysis of large amounts of complex surgical data. These developments are motivating a "surgical data science" approach to objective computer-aided technical skill evaluation (OCASE-T) for scalable, accurate assessment; individualized feedback; and automated coaching. We define the problem space for OCASE-T and summarize 45 publications representing recent research in this domain. We find that most studies on OCASE-T are simulation based; very few are in the operating room. The algorithms and validation methodologies used for OCASE-T are highly varied; there is no uniform consensus. Future research should emphasize competency assessment in the operating room, validation against patient outcomes, and effectiveness for surgical training.
Affiliation(s)
- S Swaroop Vedula
  - Malone Center for Engineering in Healthcare, Department of Computer Science, The Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland 21218
- Masaru Ishii
  - Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins University School of Medicine, Baltimore, Maryland 21287
- Gregory D Hager
  - Malone Center for Engineering in Healthcare, Department of Computer Science, The Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland 21218
7. Phase Segmentation Methods for an Automatic Surgical Workflow Analysis. Int J Biomed Imaging 2017; 2017:1985796. PMID: 28408921. PMCID: PMC5376475. DOI: 10.1155/2017/1985796.
Abstract
In this paper, we present robust methods for automatically segmenting the phases of a specified surgical workflow by using latent Dirichlet allocation (LDA) and hidden Markov model (HMM) approaches. More specifically, our goal is to output an appropriate phase label for each given time point of a surgical workflow in an operating room. The fundamental idea behind our work lies in constructing an HMM based on observed values obtained via an LDA topic model covering optical-flow motion features of general working contexts, including medical staff, equipment, and materials. We capture these working contexts using multiple synchronized cameras recording the surgical workflow. Further, we validate the robustness of our methods by conducting experiments involving up to 12 phases of surgical workflows, with an average workflow length of 12.8 minutes. The maximum average accuracy achieved after applying leave-one-out cross-validation was 84.4%, which we found to be a very promising result.
8. Ahmidi N, Tao L, Sefati S, Gao Y, Lea C, Haro BB, Zappella L, Khudanpur S, Vidal R, Hager GD. A Dataset and Benchmarks for Segmentation and Recognition of Gestures in Robotic Surgery. IEEE Trans Biomed Eng 2017; 64:2025-2041. PMID: 28060703. DOI: 10.1109/tbme.2016.2647680.
Abstract
OBJECTIVE State-of-the-art techniques for surgical data analysis report promising results for automated skill assessment and action recognition. The contributions of many of these techniques, however, are limited to study-specific data and validation metrics, making assessment of progress across the field extremely challenging. METHODS In this paper, we address two major problems in surgical data analysis: first, the lack of uniformly shared datasets and benchmarks, and second, the lack of consistent validation processes. We address the former by presenting the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), a public dataset that we have created to support comparative research benchmarking. JIGSAWS contains synchronized video and kinematic data from multiple performances of robotic surgical tasks by operators of varying skill. We address the latter by presenting a well-documented evaluation methodology and reporting results for six techniques for automated segmentation and classification of time-series data on JIGSAWS. These techniques comprise four temporal approaches for joint segmentation and classification: hidden Markov model (HMM), sparse HMM, Markov/semi-Markov conditional random field, and skip-chain conditional random field; and two feature-based approaches that classify fixed segments: bag of spatiotemporal features and linear dynamical systems. RESULTS Most methods recognize gesture activities with approximately 80% overall accuracy under both leave-one-super-trial-out and leave-one-user-out cross-validation settings. CONCLUSION Current methods show promising results on this shared dataset, but room for significant progress remains, particularly for consistent prediction of gesture activities across different surgeons. SIGNIFICANCE The results reported in this paper provide the first systematic and uniform evaluation of surgical activity recognition techniques on a benchmark database.
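The leave-one-user-out setting mentioned above can be illustrated with a simple split generator over (user, trial) labels: each fold holds out every trial by one operator. The user and trial identifiers below are invented; JIGSAWS uses its own operator labels.

```python
# Illustrative leave-one-user-out cross-validation splitter over samples
# tagged with (user, trial). Identifiers are made up for the example.

def leave_one_user_out(samples):
    """Yield (held_out_user, train_indices, test_indices) per fold."""
    users = sorted({u for u, _ in samples})
    for held_out in users:
        test = [i for i, (u, _) in enumerate(samples) if u == held_out]
        train = [i for i, (u, _) in enumerate(samples) if u != held_out]
        yield held_out, train, test

samples = [("B", 1), ("B", 2), ("C", 1), ("C", 2), ("D", 1)]
splits = list(leave_one_user_out(samples))
print(splits[0])  # ('B', [2, 3, 4], [0, 1])
```

Leave-one-super-trial-out works the same way with the trial index in place of the user, which is why the two settings probe generalization across surgeons versus across repetitions.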
9. Sun X, Byrns S, Cheng I, Zheng B, Basu A. Smart Sensor-Based Motion Detection System for Hand Movement Training in Open Surgery. J Med Syst 2016; 41:24. DOI: 10.1007/s10916-016-0665-4.
10. Rafii-Tari H, Payne CJ, Yang GZ. Current and emerging robot-assisted endovascular catheterization technologies: a review. Ann Biomed Eng 2013; 42:697-715. PMID: 24281653. DOI: 10.1007/s10439-013-0946-8.
Abstract
Endovascular techniques have been embraced as a minimally-invasive treatment approach within different disciplines of interventional radiology and cardiology. The current practice of endovascular procedures, however, is limited by a number of factors including exposure to high doses of X-ray radiation, limited 3D imaging, and lack of contact force sensing from the endovascular tools and the vascular anatomy. More recently, advances in steerable catheters and development of master/slave robots have aimed to improve these practices by removing the operator from the radiation source and increasing the precision and stability of catheter motion with added degrees-of-freedom. Despite their increased application and a growing research interest in this area, many such systems have been designed without considering the natural manipulation skills and ergonomic preferences of the operators. Existing studies on tool interactions and natural manipulation skills of the operators are limited. In this manuscript, new technical developments in different aspects of robotic endovascular intervention including catheter instrumentation, intra-operative imaging and navigation techniques, as well as master/slave based robotic catheterization platforms are reviewed. We further address emerging trends and new research opportunities towards more widespread clinical acceptance of robotically assisted endovascular technologies.
Affiliation(s)
- Hedyeh Rafii-Tari
  - The Hamlyn Centre for Robotic Surgery, Imperial College London, London SW7 2AZ, UK
11. Monserrat C, Lucas A, Hernández-Orallo J, Rupérez MJ. Automatic supervision of gestures to guide novice surgeons during training. Surg Endosc 2013; 28:1360-70. PMID: 24196559. DOI: 10.1007/s00464-013-3285-9.
Abstract
BACKGROUND Virtual surgery simulators enable surgeons to learn by themselves, shortening their learning curves. Virtual simulators offer an objective evaluation of the surgeon's skills at the end of each training session, based on an analysis of the gestures performed throughout the session. Currently, surgeons usually receive this information only at the end of the training session, with very limited feedback during the performance itself. In this paper, we present a novel method for automatic and interactive evaluation of the surgeon's skills that can supervise inexperienced surgeons during their training sessions with surgical simulators. METHODS The method is based on the assumption that the sequence of gestures carried out by an expert surgeon in the simulator can be translated into a sequence (a character string) that should be reproduced by a novice surgeon during a training session. In this work, a string-matching algorithm has been modified to calculate the alignment and distance between the sequences of the expert and the novice during the training performance. RESULTS The results show that it is possible to distinguish between different skill levels at all times during the surgical training session. CONCLUSIONS The main contribution of this paper is a method in which the difference between an expert's sequence of gestures and a novice's ongoing sequence is used to guide inexperienced surgeons, by indicating to novices the gesture corrections to apply during surgical training, as continuous expert supervision would do.
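The core idea of comparing an expert's gesture string with a novice's ongoing string can be sketched with a standard edit (Levenshtein) distance; the paper uses a modified string-matching algorithm, and the single-letter gesture alphabet below is invented for illustration.

```python
# Standard edit distance between two gesture strings (an illustrative
# stand-in for the paper's modified string-matching algorithm).

def edit_distance(expert, novice):
    """Minimum insertions/deletions/substitutions turning novice into expert."""
    m, n = len(expert), len(novice)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i
    for j in range(n + 1):
        D[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if expert[i - 1] == novice[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1, D[i][j - 1] + 1, D[i - 1][j - 1] + sub)
    return D[m][n]

# Invented alphabet: 'g' grasp, 'p' pull, 'c' cut. A novice who skips one
# gesture, or substitutes one, is a single edit away from the expert string.
print(edit_distance("gppc", "gpc"))   # 1
print(edit_distance("gppc", "gcpc"))  # 1
```

Tracking this distance as the novice's string grows is what allows corrections to be suggested mid-session rather than only at the end.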
Affiliation(s)
- C Monserrat
  - LabHuman, Ciudad Politécnica de la Innovación, Universitat Politècnica de València, Cubo Azul, Edif. 8B, Acceso N, Camino de Vera s/n, 46022 Valencia, Spain
12. Surgical gesture classification from video and kinematic data. Med Image Anal 2013; 17:732-45. PMID: 23706754. DOI: 10.1016/j.media.2013.04.007.
Abstract
Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone.
13. Tao L, Zappella L, Hager GD, Vidal R. Surgical gesture segmentation and recognition. Med Image Comput Comput Assist Interv 2013; 16:339-46. PMID: 24505779. DOI: 10.1007/978-3-642-40760-4_43.
Abstract
Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.
Affiliation(s)
- Lingling Tao, Luca Zappella, Gregory D Hager, René Vidal
  - Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA
14. Okken LM, Chmarra MK, Hiemstra E, Jansen FW, Dankelman J. Assessment of joystick and wrist control in hand-held articulated laparoscopic prototypes. Surg Endosc 2012; 26:1977-85. PMID: 22234593. PMCID: PMC3372775. DOI: 10.1007/s00464-011-2138-7.
Abstract
BACKGROUND Various steerable instruments with a flexible distal tip have been developed for laparoscopic surgery. Steering such instruments, however, remains a challenge, because no study has investigated which control method is most suitable. This study examined, by means of motion analysis, whether thumb (joystick) or wrist control is preferable for prototypes of steerable instruments. METHODS Five experts and 12 novices participated. Each participant performed a needle-driving task in three directions (right → left, up → down, and down → up) with two prototypes (wrist and thumb) and a conventional instrument. Novices performed the tasks in three sessions, whereas experts performed only one session. The order of the tasks was determined by a Latin squares design. Performance was assessed using five motion analysis parameters, a newly developed matrix for assigning penalty points, and a questionnaire. RESULTS The thumb-controlled prototype outperformed the wrist-controlled prototype. Comparison of the results obtained in each task showed that, in terms of penalty points, the up → down task was the most difficult to perform. CONCLUSIONS Thumb control is more suitable for steerable instruments than wrist control. To avoid uncontrolled movements and difficulties with applying forces to the tissue while keeping the instrument tip at a constant angle, adding a "locking" feature is necessary. It is advisable not to perform the needle-driving task in the up → down direction.
Affiliation(s)
- Linde M Okken
  - Department of Gynecology, Leiden University Medical Center, Leiden, The Netherlands
15. Haro BB, Zappella L, Vidal R. Surgical gesture classification from video data. Med Image Comput Comput Assist Interv 2012; 15:34-41. PMID: 23285532. DOI: 10.1007/978-3-642-33415-3_5.
Abstract
Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on kinematic and dynamic cues, such as time to completion, speed, forces, torque, or robot trajectories. In this paper we show that in a typical surgical training setup, video data can be equally discriminative. To that end, we propose and evaluate three approaches to surgical gesture classification from video. In the first one, we model each video clip from each surgical gesture as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words and use a bag-of-features (BoF) approach to classify new video clips. In the third approach, we use multiple kernel learning to combine the LDS and BoF approaches. Our experiments show that methods based on video data perform equally well as the state-of-the-art approaches based on kinematic data.
16. Sparse Hidden Markov Models for Surgical Gesture Classification and Skill Evaluation. Inf Process Comput Assist Interv 2012. DOI: 10.1007/978-3-642-30618-1_17.
17. Modeling and segmentation of surgical workflow from laparoscopic video. Med Image Comput Comput Assist Interv 2010; 13:400-7. PMID: 20879425. DOI: 10.1007/978-3-642-15711-0_50.
Abstract
Modeling and analyzing surgeries based on signals obtained automatically from the operating room (OR) is a field of recent interest. It can be valuable for analyzing and understanding surgical workflow, for skills evaluation, and for developing context-aware ORs. In minimally invasive surgery, laparoscopic video is easy to record, but extracting meaningful information from it is challenging. We propose a method that uses additional information about tool usage to perform a dimensionality reduction on image features. Using canonical correlation analysis (CCA), a projection from a high-dimensional image feature space to a low-dimensional space is obtained such that semantic information is extracted from the video. To model a surgery based on the signals in the reduced feature space, two different statistical models are compared. The capability of segmenting a new surgery into phases based only on the video is evaluated. Dynamic time warping, which strongly depends on the temporal order, shows the best results in combination with CCA.
|
18
|
King RC, Atallah L, Lo BPL, Yang GZ. Development of a Wireless Sensor Glove for Surgical Skills Assessment. IEEE Trans Inf Technol Biomed 2009; 13:673-9. [PMID: 19726263 DOI: 10.1109/titb.2009.2029614] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Affiliation(s)
- Rachel C King
- Department of Computing, Imperial College London, London SW7 2AZ, UK.
|
19
|
|
20
|
Modeling the Model Athlete: Automatic Coaching of Rowing Technique. LECTURE NOTES IN COMPUTER SCIENCE 2008. [DOI: 10.1007/978-3-540-89689-0_41] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
|
21
|
Ladikos A, Benhimane S, Navab N. Real-time 3D reconstruction for collision avoidance in interventional environments. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2008; 11:526-34. [PMID: 18982645 DOI: 10.1007/978-3-540-85990-1_63] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
Abstract
With the increased presence of automated devices such as C-arms and medical robots and the introduction of a multitude of surgical tools, navigation systems, and patient monitoring devices, collision avoidance has become an issue of practical value in interventional environments. In this paper, we present a real-time 3D reconstruction system for interventional environments which aims at predicting collisions by building a 3D representation of all the objects in the room. The 3D reconstruction is used to determine whether other objects are in the working volume of the device and to alert the medical staff before a collision occurs. In the case of C-arms, this allows faster rotational and angular movement, which could, for instance, be used in 3D angiography to obtain a better reconstruction of contrasted vessels. The system also prevents staff from unknowingly entering the working volume of a device, which is of relevance in complex environments with many devices. The recovered 3D representation also opens the path to many new applications utilizing this data, such as workflow analysis, 3D video generation, or interventional room planning. To validate our claims, we performed several experiments with a real C-arm that demonstrate the validity of the approach. This system is currently being transferred to an interventional room in our university hospital.
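The alerting step described above ultimately reduces to a point-in-volume query over the reconstructed occupancy. A minimal sketch with a spherical working volume (an assumption; the paper does not commit to a specific volume shape) might look like:

```python
import numpy as np

def collision_alert(occupied_points, center, radius):
    """Return True if any reconstructed scene point (e.g. an occupied
    voxel centre) lies inside a spherical device working volume."""
    dists = np.linalg.norm(occupied_points - np.asarray(center, dtype=float),
                           axis=1)
    return bool((dists < radius).any())
```

In the full system this test runs against the 3D representation of all room objects on every reconstruction update, so staff can be warned before the device starts its motion.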
|
22
|
Modeling and online recognition of surgical phases using Hidden Markov Models. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2008; 11:627-35. [PMID: 18982657 DOI: 10.1007/978-3-540-85990-1_75] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
Abstract
The amount of signals that can be recorded during a surgery, such as tracking data or instrument states, is constantly growing. These signals can be used to better understand surgical workflow and to build surgical assist systems that are aware of the current state of a surgery. This is a crucial issue for designing future systems that provide context-sensitive information and user interfaces. In this paper, Hidden Markov Models (HMM) are used to model a laparoscopic cholecystectomy. Seventeen signals, representing tool usage, from twelve surgeries are used to train the model. A model merging approach is proposed to build the HMM topology and compared to other methods of initializing an HMM. The merging method allows building a model at a very fine level of detail that also reveals the workflow of a surgery in a human-understandable way. Results for detecting the current phase of a surgery and for predicting the remaining time of the procedure are presented.
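The online phase-detection step rests on the standard HMM forward (filtering) recursion, which can be written in a few lines of numpy. The transition matrix, initial distribution, and observation likelihoods below are illustrative, not the trained seventeen-signal model from the paper.

```python
import numpy as np

def online_phase_filter(A, pi, lik):
    """HMM forward filtering: returns P(phase_t = k | obs_1..t) per step.

    A:   (K, K) phase transition matrix
    pi:  (K,)   initial phase distribution
    lik: (T, K) per-step observation likelihoods P(obs_t | phase k)
    """
    T, K = lik.shape
    alpha = np.zeros((T, K))
    alpha[0] = pi * lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * lik[t]   # predict, then update
        alpha[t] /= alpha[t].sum()               # normalize to a posterior
    return alpha
```

With a left-to-right topology (phases cannot be revisited, as in a surgical workflow), the argmax of each filtered posterior gives the current phase estimate; expected time-to-completion can then be read off the remaining phases.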
|
23
|
Leff DR, Orihuela-Espina F, Atallah L, Darzi A, Yang GZ. Functional near infrared spectroscopy in novice and expert surgeons--a manifold embedding approach. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2007; 10:270-7. [PMID: 18044578 DOI: 10.1007/978-3-540-75759-7_33] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Monitoring expertise development in surgery is likely to benefit from evaluations of cortical brain function. Brain behaviour is dynamic and nonlinear. The aim of this paper is to evaluate the application of a nonlinear dimensionality reduction technique to enhance visualisation of multidimensional functional Near Infrared Spectroscopy (fNIRS) data. Manifold embedding is applied to prefrontal haemodynamic signals obtained during a surgical knot tying task from a group of 62 healthy subjects with varying surgical expertise. The proposed method makes no assumption about the functionality of the data set and is shown to be capable of recovering the intrinsic low dimensional structure of in vivo brain data. After manifold embedding, Earth Mover's Distance (EMD) is used to quantify different patterns of cortical behaviour associated with surgical expertise and analyse the degree of inter-hemispheric channel pair symmetry.
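The two-stage analysis above (nonlinear embedding, then a distributional distance between expertise groups) can be sketched with off-the-shelf tools. Isomap stands in for the paper's manifold embedding, the synthetic fNIRS feature vectors and group sizes are assumptions, and the Earth Mover's Distance is computed here only along the first embedding coordinate.

```python
import numpy as np
from sklearn.manifold import Isomap
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# hypothetical prefrontal haemodynamic feature vectors:
# 40 "novice" and 40 "expert" trials, 16 features each
novices = rng.normal(0.0, 1.0, size=(40, 16))
experts = rng.normal(0.8, 1.0, size=(40, 16))
X = np.vstack([novices, experts])

# nonlinear dimensionality reduction to a 2-D manifold coordinate
# system, with no assumption about the functional form of the data
emb = Isomap(n_neighbors=8, n_components=2).fit_transform(X)

# Earth Mover's Distance between the two groups along the first
# embedding coordinate quantifies expertise-related separation
emd = wasserstein_distance(emb[:40, 0], emb[40:, 0])
```

The same distance, applied per channel pair, is what the paper uses to analyse the degree of inter-hemispheric symmetry across expertise levels.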
Affiliation(s)
- Daniel Richard Leff
- Royal Society/Wolfson Medical Image Computing Laboratory, Department of Biosurgery and Surgical Technology, Imperial College London, United Kingdom.
|
24
|
Padoy N, Blum T, Essa I, Feussner H, Berger MO, Navab N. A Boosted Segmentation Method for Surgical Workflow Analysis. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2007 2007; 10:102-9. [DOI: 10.1007/978-3-540-75757-3_13] [Citation(s) in RCA: 32] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
|