1. Kil I, Eidt JF, Singapogu RB, Groff RE. Assessment of Open Surgery Suturing Skill: Image-based Metrics Using Computer Vision. J Surg Educ 2024;81:983-993. PMID: 38749810; PMCID: PMC11181522; DOI: 10.1016/j.jsurg.2024.03.020
Abstract
Objective: This paper presents a computer vision algorithm that extracts image-based metrics for suturing skill assessment, along with results from an experimental study of resident and attending surgeons.
Design: A suturing simulator that adapts the radial suturing task from the Fundamentals of Vascular Surgery (FVS) skills assessment is used to collect data. The simulator includes a camera positioned under the suturing membrane, which records needle and thread movement during the suturing task. A computer vision algorithm processes the video data and extracts objective metrics inspired by expert surgeons' recommended best practice to "follow the curvature of the needle."
Participants and Results: Experimental data from a study of subjects with varying levels of suturing expertise (attending surgeons and surgery residents) are presented. Analysis shows that attendings and residents performed statistically differently on 6 of 9 image-based metrics, including the four new metrics introduced in this paper: Needle Tip Path Length, Needle Swept Area, Needle Tip Area, and Needle Sway Length.
Conclusion and Significance: These image-based process metrics can be represented graphically in a manner conducive to training. The results demonstrate the potential of image-based metrics for assessment and training of suturing skill in open surgery.
Affiliation(s)
- Irfan Kil
- Department of Electrical & Computer Engineering, Clemson University, Clemson, South Carolina.
- John F Eidt
- Division of Vascular Surgery, Baylor Scott & White Heart and Vascular Hospital, Dallas, Texas.
- Richard E Groff
- Department of Electrical & Computer Engineering, Clemson University, Clemson, South Carolina.
2. Xu J, Anastasiou D, Booker J, Burton OE, Layard Horsfall H, Salvadores Fernandez C, Xue Y, Stoyanov D, Tiwari MK, Marcus HJ, Mazomenos EB. A Deep Learning Approach to Classify Surgical Skill in Microsurgery Using Force Data from a Novel Sensorised Surgical Glove. Sensors (Basel) 2023;23:8947. PMID: 37960645; PMCID: PMC10650455; DOI: 10.3390/s23218947
Abstract
Microsurgery serves as the foundation for numerous operative procedures. Given its highly technical nature, the assessment of surgical skill is an essential component of clinical practice and microsurgery education. The interaction forces between surgical tools and tissues play a pivotal role in surgical success, making them a valuable indicator of surgical skill. In this study, we employ six distinct deep learning architectures (LSTM, GRU, Bi-LSTM, CLDNN, TCN, Transformer) to classify surgical skill levels, using force data obtained from a novel sensorized surgical glove worn during a microsurgical task. To enhance model performance, we propose six data augmentation techniques. The proposed frameworks are accompanied by a comprehensive quantitative and qualitative analysis, including experiments with two cross-validation schemes and interpretable visualizations of the networks' decision-making process. Our experimental results show that CLDNN and TCN are the top-performing models, achieving accuracy rates of 96.16% and 97.45%, respectively. This not only underscores the effectiveness of the proposed architectures but also provides compelling evidence that the force data obtained through the sensorized surgical glove contain valuable information regarding surgical skill.
Affiliation(s)
- Jialang Xu
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- Dimitrios Anastasiou
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- James Booker
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Oliver E. Burton
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Hugo Layard Horsfall
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Carmen Salvadores Fernandez
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Nanoengineered Systems Laboratory, UCL Mechanical Engineering, University College London, London WC1E 7JE, UK
- Yang Xue
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Nanoengineered Systems Laboratory, UCL Mechanical Engineering, University College London, London WC1E 7JE, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Department of Computer Science, University College London, London WC1E 6BT, UK
- Manish K. Tiwari
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Nanoengineered Systems Laboratory, UCL Mechanical Engineering, University College London, London WC1E 7JE, UK
- Hani J. Marcus
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Evangelos B. Mazomenos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
3. Baghdadi A, Guo E, Lama S, Singh R, Chow M, Sutherland GR. Force Profile as Surgeon-Specific Signature. Ann Surg Open 2023;4:e326. PMID: 37746608; PMCID: PMC10513276; DOI: 10.1097/as9.0000000000000326
Abstract
Objective: To investigate the notion that a surgeon's force profile can serve as a signature of their identity and performance.
Summary Background Data: Surgeon performance in the operating room is an understudied topic. The advent of deep learning methods paired with a sensorized surgical device presents an opportunity to bring quantitative insight into surgical performance and processes. Using a device called the SmartForceps System and automated analytics, we have previously reported surgeon force profile, surgical skill, and task classification. However, whether an individual surgeon can be identified by their surgical technique has yet to be investigated.
Methods: We investigate multiple neural network architectures to identify the surgeon associated with time-series tool-tissue forces recorded from bipolar forceps. The surgeon associated with each 10-second window of force data was labeled, and the data were randomly split into 80% for model training and validation (10% validation) and 20% for testing. Data imbalance was mitigated by subsampling the more populated classes, with a random size adjustment based on 0.1% of the sample count in each class. An exploratory analysis of force segments was performed to investigate underlying patterns that differentiate individual surgical techniques.
Results: In a dataset of 2819 ten-second time segments from 89 neurosurgical cases, the best-performing model achieved a micro-average area under the curve of 0.97, a testing F1-score of 0.82, a sensitivity of 82%, and a precision of 82%. This model used a time-series ResNet to extract features from the force data, with the linearized output fed into an XGBoost classifier. We also found that convolutional neural networks outperformed long short-term memory networks in both performance and speed. Using a weighted-average ensemble, an expert surgeon could be identified with 83.8% accuracy on a validation dataset.
Conclusions: Our results demonstrate that each surgeon has a unique force profile amenable to identification using deep learning methods. We anticipate that our models will enable a quantitative framework for providing bespoke feedback to surgeons and for tracking their skill progression longitudinally. Furthermore, the ability to recognize individual surgeons introduces a mechanism for correlating outcomes with surgeon performance.
Affiliation(s)
- Amir Baghdadi
- Project neuroArm, Department of Clinical Neurosciences, and Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Eddie Guo
- Project neuroArm, Department of Clinical Neurosciences, and Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Sanju Lama
- Project neuroArm, Department of Clinical Neurosciences, and Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Rahul Singh
- Project neuroArm, Department of Clinical Neurosciences, and Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
- Michael Chow
- Department of Surgery, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada
- Garnette R. Sutherland
- Project neuroArm, Department of Clinical Neurosciences, and Hotchkiss Brain Institute, University of Calgary, Calgary, Alberta, Canada
4. Tonbul G, Topalli D, Cagiltay NE. A systematic review on classification and assessment of surgical skill levels for simulation-based training programs. Int J Med Inform 2023;177:105121. PMID: 37290214; DOI: 10.1016/j.ijmedinf.2023.105121
Abstract
Background: Advances in medical informatics have made minimally invasive surgery (MIS) procedures the preferred choice. However, the associated education programs face several problems in supporting surgical skill acquisition; for instance, defining and objectively measuring surgical skill levels is challenging. Accordingly, the aim of this study is to review the literature to investigate current approaches for classifying surgical skill levels and to identify skill training tools and measurement methods.
Materials and Methods: A literature search was conducted and a corpus created. Inclusion and exclusion criteria were applied, limiting the articles to those addressing surgical education, training approaches, hand movements, and endoscopic or laparoscopic operations. Fifty-seven articles satisfying these criteria were included in the corpus of this study.
Results: Currently used surgical skill assessment approaches are summarized. The results show that various classification approaches for defining surgical skill levels are in use. Moreover, many studies omit particularly important intermediate skill levels, and some inconsistencies were identified across the skill level classification studies.
Conclusion: To improve the benefits of simulation-based training programs, a standardized interdisciplinary approach should be developed. To this end, the skills required for each specific surgical procedure should be identified, and appropriate measures for assessing these skills in simulation-based MIS training environments should be refined. Finally, the skill levels attained during the developmental stages of these skills, with threshold values referencing the identified measures, should be redefined in a standardized manner.
Affiliation(s)
- Gokcen Tonbul
- Graduate School of Natural and Applied Sciences, Atilim University, Ankara, Turkey; Strategy and Technology Research Center, Baskent University, Ankara, Turkey.
- Damla Topalli
- Department of Computer Engineering, Atilim University, Ankara, Turkey
5. Pan-Doh N, Sikder S, Woreta FA, Handa JT. Using the language of surgery to enhance ophthalmology surgical education. Surg Open Sci 2023;14:52-59. PMID: 37528917; PMCID: PMC10387608; DOI: 10.1016/j.sopen.2023.07.002
Abstract
Background: Surgical education currently utilizes a combination of the apprentice model, wet-lab training, and simulation, but because it relies on subjective data, the quality of teaching and assessment can be variable. The "language of surgery," an established concept in the engineering literature whose incorporation into surgical education has been limited, is defined as the description of each surgical maneuver using quantifiable metrics. This concept differs from the traditional notion of surgical language, generally thought of as the qualitative definitions and terminology used by surgeons.
Methods: A literature search was conducted through April 2023 in MEDLINE/PubMed using search terms covering wet-lab, virtual simulators, and robotics in ophthalmology, along with the language of surgery and surgical education. Articles published before 2005 were mostly excluded, although a few were included on a case-by-case basis.
Results: Surgical maneuvers can be quantified by leveraging technological advances in virtual simulators, video recordings, and surgical robots to create a language of surgery. By measuring and describing maneuver metrics, the learning surgeon can adjust surgical movements in an appropriately graded fashion based on objective and standardized data. The main contribution is a structured education framework detailing how surgical education could be improved by incorporating the language of surgery, using ophthalmology surgical education as an example.
Conclusion: By describing each surgical maneuver in quantifiable, objective, and standardized terminology, a language of surgery can be created and used to learn, teach, and assess surgical technical skill with an approach that minimizes bias.
Key Message: The "language of surgery," defined as the quantification of each surgical movement's characteristics, is an established concept in the engineering literature. Using ophthalmology surgical education as an example, we describe a structured education framework based on the language of surgery to improve surgical education.
Classifications: Surgical education, robotic surgery, ophthalmology, education standardization, computerized assessment, simulations in teaching.
Competencies: Practice-Based Learning and Improvement.
Affiliation(s)
- Nathan Pan-Doh
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Shameema Sikder
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Fasika A. Woreta
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- James T. Handa
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
6. Baghdadi A, Lama S, Singh R, Sutherland GR. Tool-tissue force segmentation and pattern recognition for evaluating neurosurgical performance. Sci Rep 2023;13:9591. PMID: 37311965; DOI: 10.1038/s41598-023-36702-3
Abstract
Surgical data quantification and comprehension expose subtle patterns in tasks and performance. Enabling surgical devices with artificial intelligence provides surgeons with personalized and objective performance evaluation: a virtual surgical assist. Here we present machine learning models for analyzing surgical finesse using tool-tissue interaction force data recorded during surgical dissection with a sensorized bipolar forceps, the SmartForceps System. Data modeling was performed on 50 neurosurgical procedures involving elective surgical treatment of various intracranial pathologies, with data collected by 13 surgeons of varying experience levels. The machine learning algorithms were designed and implemented for three primary purposes: force profile segmentation to obtain active periods of tool utilization, using T-U-Net; surgical skill classification into Expert and Novice; and surgical task recognition into the two primary categories of Coagulation versus non-Coagulation, using FTFIT deep learning architectures. The final report to the surgeon was a dashboard containing recognized segments of force application, categorized into skill and task classes, along with charts of performance metrics compared with expert-level surgeons. More than 161 h of operating room recordings, containing approximately 3600 periods of tool operation, were utilized. Modeling achieved a weighted F1-score of 0.95 and AUC of 0.99 for force profile segmentation with T-U-Net, a weighted F1-score of 0.71 and AUC of 0.81 for surgical skill classification, and a weighted F1-score of 0.82 and AUC of 0.89 for surgical task recognition using a subset of hand-crafted features augmented to the FTFIT neural network. This study delivers a novel cloud-based machine learning module, enabling an end-to-end platform for intraoperative surgical performance monitoring and evaluation. Accessed through a secure application for professional connectivity, a paradigm for data-driven learning is established.
Affiliation(s)
- Amir Baghdadi
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute University of Calgary, Calgary, AB, Canada
- Sanju Lama
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute University of Calgary, Calgary, AB, Canada
- Rahul Singh
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute University of Calgary, Calgary, AB, Canada
- Garnette R Sutherland
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute University of Calgary, Calgary, AB, Canada.
7. Brown JD, Kuchenbecker KJ. Effects of automated skill assessment on robotic surgery training. Int J Med Robot 2023;19:e2492. PMID: 36524325; DOI: 10.1002/rcs.2492
Abstract
Background: Several automated skill-assessment approaches have been proposed for robotic surgery, but their utility is not well understood. This article investigates the effects of one machine-learning-based skill-assessment approach on psychomotor skill development in robotic surgery training.
Methods: N = 29 trainees (medical students and residents) with no robotic surgery experience performed five trials of inanimate peg transfer with an Intuitive Surgical da Vinci Standard robot. Half of the participants received no post-trial feedback; the other half received automatically calculated scores from five Global Evaluative Assessment of Robotic Skill domains after each trial.
Results: There were no significant differences between the groups in overall improvement or rate of skill improvement. However, participants who received post-trial feedback rated their overall performance improvement significantly lower than participants who did not receive feedback.
Conclusions: These findings indicate that automated skill evaluation systems might improve trainee self-awareness but not accelerate early-stage psychomotor skill development in robotic surgery training.
Affiliation(s)
- Jeremy D Brown
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Katherine J Kuchenbecker
- Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
8. Bykanov A, Danilov G, Kostumov V, Pilipenko O, Nutfullin B, Rastvorova O, Pitskhelauri D. Artificial Intelligence Technologies in the Microsurgical Operating Room (Review). Sovrem Tekhnologii Med 2023;15:86-94. PMID: 37389018; PMCID: PMC10306972; DOI: 10.17691/stm2023.15.2.08
Abstract
Surgery performed by a novice neurosurgeon under the constant supervision of a senior surgeon with the experience of thousands of operations, able to handle any intraoperative complication and predict it in advance, and never getting tired, is currently an elusive dream, but it can become reality with the development of artificial intelligence methods. This paper presents a review of the literature on the use of artificial intelligence technologies in the microsurgical operating room. The search for sources was carried out in PubMed, the text database of medical and biological publications, using the key words "surgical procedures", "dexterity", "microsurgery" AND "artificial intelligence" OR "machine learning" OR "neural networks". Articles in English and Russian were considered without limitation on publication date. The main directions of research on the use of artificial intelligence technologies in the microsurgical operating room are highlighted. Although machine learning has been increasingly introduced into medicine in recent years, only a small number of studies related to this problem have been published, and their results have not yet proved to be of practical use. However, the social significance of this direction is an important argument for its development.
Affiliation(s)
- A.E. Bykanov
- Neurosurgeon, 7 Department of Neurosurgery, Researcher; National Medical Research Center for Neurosurgery named after Academician N.N. Burdenko, Ministry of Healthcare of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- G.V. Danilov
- Academic Secretary; National Medical Research Center for Neurosurgery named after Academician N.N. Burdenko, Ministry of Healthcare of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- V.V. Kostumov
- PhD Student, Programmer, the CMC Faculty; Lomonosov Moscow State University, 1 Leninskiye Gory, Moscow, 119991, Russia
- O.G. Pilipenko
- PhD Student, Programmer, the CMC Faculty; Lomonosov Moscow State University, 1 Leninskiye Gory, Moscow, 119991, Russia
- B.M. Nutfullin
- PhD Student, Programmer, the CMC Faculty; Lomonosov Moscow State University, 1 Leninskiye Gory, Moscow, 119991, Russia
- O.A. Rastvorova
- Resident, 7 Department of Neurosurgery; National Medical Research Center for Neurosurgery named after Academician N.N. Burdenko, Ministry of Healthcare of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- D.I. Pitskhelauri
- Professor, Head of the 7 Department of Neurosurgery; National Medical Research Center for Neurosurgery named after Academician N.N. Burdenko, Ministry of Healthcare of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
9. Kil I, Eidt JF, Groff RE, Singapogu RB. Assessment of open surgery suturing skill: Simulator platform, force-based, and motion-based metrics. Front Med (Lausanne) 2022;9:897219. PMID: 36111107; PMCID: PMC9468321; DOI: 10.3389/fmed.2022.897219
Abstract
Objective: This paper focuses on simulator-based assessment of open surgery suturing skill. We introduce a new surgical simulator designed to collect synchronized force, motion, video, and touch data during a radial suturing task adapted from the Fundamentals of Vascular Surgery (FVS) skill assessment. The synchronized data are analyzed to extract objective metrics for suturing skill assessment.
Methods: The simulator has a camera positioned underneath the suturing membrane, enabling visual tracking of the needle during suturing. Needle tracking data enable extraction of meaningful metrics related to both the process and the product of the suturing task. To better simulate surgical conditions, the height of the system and the depth of the membrane are both adjustable. Metrics for assessment of suturing skill based on force/torque, motion, and physical contact are presented, along with experimental data from a study comparing attending surgeons and surgery residents.
Results: Analysis shows that force metrics (absolute maximum force/torque in the z-direction), motion metrics (yaw, pitch, roll), the physical contact metric, and image-enabled force metrics (orthogonal and tangential forces) are statistically significant in differentiating suturing skill between attendings and residents.
Conclusion and Significance: The results suggest that this simulator and the accompanying metrics could serve as a useful tool for assessing and teaching open surgery suturing skill.
Affiliation(s)
- Irfan Kil
- Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, United States
- John F. Eidt
- Division of Vascular Surgery, Baylor Scott & White Heart and Vascular Hospital, Dallas, TX, United States
- Richard E. Groff
- Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, United States
- Ravikiran B. Singapogu
- Department of Bioengineering, Clemson University, Clemson, SC, United States
- *Correspondence: Ravikiran B. Singapogu
10. Gumbs AA, Grasso V, Bourdel N, Croner R, Spolverato G, Frigerio I, Illanes A, Abu Hilal M, Park A, Elyan E. The Advances in Computer Vision That Are Enabling More Autonomous Actions in Surgery: A Systematic Review of the Literature. Sensors (Basel) 2022;22:4918. PMID: 35808408; PMCID: PMC9269548; DOI: 10.3390/s22134918
Abstract
This review focuses on advances in, and current limitations of, computer vision (CV), and on how CV can help us progress toward more autonomous actions in surgery. It is a follow-up to an article we previously published in Sensors entitled "Artificial Intelligence Surgery: How Do We Get to Autonomous Actions in Surgery?" Whereas that article also discussed machine learning, deep learning, and natural language processing, this review delves deeper into the field of CV. Additionally, non-visual forms of data that can aid computerized robots in performing more autonomous actions, such as instrument priors and audio haptics, are highlighted. Furthermore, the current existential crisis for surgeons, endoscopists, and interventional radiologists regarding greater autonomy during procedures is discussed. In summary, this paper discusses how to harness the power of CV while keeping the doctors who perform interventions in the loop.
Affiliation(s)
- Andrew A. Gumbs
- Departement de Chirurgie Digestive, Centre Hospitalier Intercommunal de Poissy/Saint-Germain-en-Laye, 78300 Poissy, France
- Department of Surgery, University of Magdeburg, 39106 Magdeburg, Germany
- Vincent Grasso
- Family Christian Health Center, 31 West 155th St., Harvey, IL 60426, USA
- Nicolas Bourdel
- Gynecological Surgery Department, CHU Clermont-Ferrand, 1 Place Lucie-Aubrac, 63100 Clermont-Ferrand, France
- EnCoV, Institut Pascal, UMR6602 CNRS, UCA, Clermont-Ferrand University Hospital, 63000 Clermont-Ferrand, France
- SurgAR-Surgical Augmented Reality, 63000 Clermont-Ferrand, France
- Roland Croner
- Department of Surgery, University of Magdeburg, 39106 Magdeburg, Germany
- Gaya Spolverato
- Department of Surgical, Oncological and Gastroenterological Sciences, University of Padova, 35122 Padova, Italy
- Isabella Frigerio
- Department of Hepato-Pancreato-Biliary Surgery, Pederzoli Hospital, 37019 Peschiera del Garda, Italy
- Alfredo Illanes
- INKA-Innovation Laboratory for Image Guided Therapy, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany
- Mohammad Abu Hilal
- Unità Chirurgia Epatobiliopancreatica, Robotica e Mininvasiva, Fondazione Poliambulanza Istituto Ospedaliero, Via Bissolati 57, 25124 Brescia, Italy
- Adrian Park
- Anne Arundel Medical Center, Johns Hopkins University, Annapolis, MD 21401, USA
- Eyad Elyan
- School of Computing, Robert Gordon University, Aberdeen AB10 7JG, UK
| |
11
Lam K, Chen J, Wang Z, Iqbal FM, Darzi A, Lo B, Purkayastha S, Kinross JM. Machine learning for technical skill assessment in surgery: a systematic review. NPJ Digit Med 2022; 5:24. [PMID: 35241760 PMCID: PMC8894462 DOI: 10.1038/s41746-022-00566-0] [Citation(s) in RCA: 40] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Accepted: 01/21/2022] [Indexed: 12/18/2022] Open
Abstract
Accurate and objective performance assessment is essential for both trainees and certified surgeons. However, existing methods can be time-consuming, labor-intensive, and subject to bias. Machine learning (ML) has the potential to provide rapid, automated, and reproducible feedback without the need for expert reviewers. We aimed to systematically review the literature and determine the ML techniques used for technical surgical skill assessment and identify challenges and barriers in the field. A systematic literature search, in accordance with the PRISMA statement, was performed to identify studies detailing the use of ML for technical skill assessment in surgery. Of the 1896 studies that were retrieved, 66 studies were included. The most common ML methods used were Hidden Markov Models (HMM, 14/66), Support Vector Machines (SVM, 17/66), and Artificial Neural Networks (ANN, 17/66). 40/66 studies used kinematic data, 19/66 used video or image data, and 7/66 used both. Studies assessed the performance of benchtop tasks (48/66), simulator tasks (10/66), and real-life surgery (8/66). Accuracy rates of over 80% were achieved, although tasks and participants varied between studies. Barriers to progress in the field included a focus on basic tasks, lack of standardization between studies, and lack of datasets. ML has the potential to produce accurate and objective surgical skill assessment through the use of methods including HMM, SVM, and ANN. Future ML-based assessment tools should move beyond the assessment of basic tasks and towards real-life surgery and provide interpretable feedback with clinical value for the surgeon. PROSPERO: CRD42020226071.
Affiliation(s)
- Kyle Lam, Junhong Chen, Zeyu Wang, Fahad M Iqbal, Ara Darzi, Benny Lo, Sanjay Purkayastha, James M Kinross
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
12
Yan J, Huang K, Lindgren K, Bonaci T, Chizeck HJ. Continuous Operator Authentication for Teleoperated Systems Using Hidden Markov Models. ACM TRANSACTIONS ON CYBER-PHYSICAL SYSTEMS 2022. [DOI: 10.1145/3488901] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
In this article, we present a novel approach for continuous operator authentication in teleoperated robotic processes based on Hidden Markov Models (HMM). While HMMs were originally developed and widely used in speech recognition, they have shown great performance in human motion and activity modeling. We make an analogy between human language and teleoperated robotic processes (i.e., words are analogous to a teleoperator’s gestures, sentences are analogous to the entire teleoperated task or process) and implement HMMs to model the teleoperated task. To test the continuous authentication performance of the proposed method, we conducted two sets of analyses. We built a virtual reality (VR) experimental environment using a commodity VR headset (HTC Vive) and haptic feedback enabled controller (Sensable PHANToM Omni) to simulate a real teleoperated task. An experimental study with 10 subjects was then conducted. We also performed simulated continuous operator authentication by using the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). The performance of the model was evaluated based on the continuous (real-time) operator authentication accuracy as well as resistance to a simulated impersonation attack. The results suggest that the proposed method is able to achieve 70% (VR experiment) and 81% (JIGSAWS dataset) continuous classification accuracy with as short as a 1-second sample window. It is also capable of detecting an impersonation attack in real-time.
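The scoring step at the heart of such an HMM-based authenticator is the forward algorithm, which measures how likely an observation sequence is under a given operator's model. A minimal numpy sketch (the two-state model, quantized gesture symbols, and sequences below are invented for illustration, not taken from the paper):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    for t in range(len(obs)):
        if t > 0:
            alpha = (alpha @ A) * B[:, obs[t]]
        c = alpha.sum()          # scaling factor avoids numerical underflow
        loglik += np.log(c)
        alpha = alpha / c
    return loglik

# Toy 2-state operator model over 3 quantized gesture symbols:
pi = np.array([0.6, 0.4])                  # initial state distribution
A = np.array([[0.9, 0.1], [0.2, 0.8]])     # state transition matrix
B = np.array([[0.7, 0.2, 0.1],             # per-state emission probabilities
              [0.1, 0.3, 0.6]])

typical = [0, 0, 0, 1, 0, 0, 0, 1]         # consistent with this operator
atypical = [0, 2, 0, 2, 0, 2, 0, 2]        # rapid switching: impersonator-like
score_ok = forward_loglik(typical, pi, A, B)
score_bad = forward_loglik(atypical, pi, A, B)
# Continuous authentication accepts the operator while the running
# log-likelihood of the latest window stays above a calibrated threshold.
```

Sliding this score over short windows (the paper reports usable accuracy with windows as short as 1 s) is what makes the check continuous rather than one-off.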
13
Gao Y, Yan P, Kruger U, Cavuoto L, Schwaitzberg S, De S, Intes X. Functional Brain Imaging Reliably Predicts Bimanual Motor Skill Performance in a Standardized Surgical Task. IEEE Trans Biomed Eng 2021; 68:2058-2066. [PMID: 32755850 PMCID: PMC8265734 DOI: 10.1109/tbme.2020.3014299] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Currently, there is a dearth of objective metrics for assessing bi-manual motor skills, which are critical for high-stakes professions such as surgery. Recently, functional near-infrared spectroscopy (fNIRS) has been shown to be effective at classifying motor task types, which can potentially be used for assessing motor performance level. In this work, we use fNIRS data for predicting the performance scores in a standardized bi-manual motor task used in surgical certification and propose a deep-learning framework 'Brain-NET' to extract features from the fNIRS data. Our results demonstrate that the Brain-NET is able to predict bi-manual surgical motor skills based on neuroimaging data accurately (R² = 0.73). Furthermore, the classification ability of the Brain-NET model is demonstrated based on receiver operating characteristic (ROC) curves and area under the curve (AUC) values of 0.91. Hence, these results establish that fNIRS associated with deep learning analysis is a promising method for a bedside, quick and cost-effective assessment of bi-manual skill levels.
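The two figures reported here (R² for score regression, AUC for skill classification) can be computed on any model's outputs with scikit-learn; the scores and the pass mark below are fabricated purely to show the calls:

```python
import numpy as np
from sklearn.metrics import r2_score, roc_auc_score

# Hypothetical true vs. predicted performance scores for six trainees:
y_true = np.array([68.0, 75.0, 82.0, 90.0, 55.0, 61.0])
y_pred = np.array([70.1, 72.4, 85.0, 87.2, 58.3, 66.0])
r2 = r2_score(y_true, y_pred)          # goodness of fit of the regressor

# Binarise at an assumed pass mark of 70 to evaluate the same outputs as
# a classifier, mirroring the paper's ROC/AUC analysis:
passed = (y_true >= 70).astype(int)
auc = roc_auc_score(passed, y_pred)    # ranking quality of the scores
```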
14
van Amsterdam B, Clarkson MJ, Stoyanov D. Gesture Recognition in Robotic Surgery: A Review. IEEE Trans Biomed Eng 2021; 68:2021-2035. [PMID: 33497324 DOI: 10.1109/tbme.2021.3054828] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
OBJECTIVE Surgical activity recognition is a fundamental step in computer-assisted interventions. This paper reviews the state-of-the-art in methods for automatic recognition of fine-grained gestures in robotic surgery focusing on recent data-driven approaches and outlines the open questions and future research directions. METHODS An article search was performed on 5 bibliographic databases with the following search terms: robotic, robot-assisted, JIGSAWS, surgery, surgical, gesture, fine-grained, surgeme, action, trajectory, segmentation, recognition, parsing. Selected articles were classified based on the level of supervision required for training and divided into different groups representing major frameworks for time series analysis and data modelling. RESULTS A total of 52 articles were reviewed. The research field is showing rapid expansion, with the majority of articles published in the last 4 years. Deep-learning-based temporal models with discriminative feature extraction and multi-modal data integration have demonstrated promising results on small surgical datasets. Currently, unsupervised methods perform significantly less well than the supervised approaches. CONCLUSION The development of large and diverse open-source datasets of annotated demonstrations is essential for development and validation of robust solutions for surgical gesture recognition. While new strategies for discriminative feature extraction and knowledge transfer, or unsupervised and semi-supervised approaches, can mitigate the need for data and labels, they have not yet been demonstrated to achieve comparable performance. Important future research directions include detection and forecast of gesture-specific errors and anomalies. SIGNIFICANCE This paper is a comprehensive and structured analysis of surgical gesture recognition methods aiming to summarize the status of this rapidly evolving field.
15
Rastegari E, Orn D, Zahiri M, Nelson C, Ali H, Siu KC. Assessing Laparoscopic Surgical Skills Using Similarity Network Models: A Pilot Study. Surg Innov 2021; 28:600-610. [PMID: 33745371 DOI: 10.1177/15533506211002753] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Background: Medical devices are becoming more complex, and doctors need to learn quickly how to use new medical tools. However, it is challenging to objectively assess fundamental laparoscopic surgical skill and determine readiness for advancement, and there is a lack of objective models for comparing performance between medical trainees and experienced doctors. Methods: This article discusses the use of similarity network models, for individual tasks and for combinations of tasks, to show the level of similarity between residents and medical students while performing each task, and thereby their overall laparoscopic surgical skill with a medical device (e.g., laparoscopic instruments). When a medical student is connected to most residents in the network, that student is considered ready to advance to the next training level. Data from sixteen participants (5 residents and 11 students) performing 3 tasks under 3 different training schedules are used in this study. Results: The results show a general positive progression of students over 4 training sessions. They also indicate that students on different training schedules reach different performance levels: students progress more quickly on a task when training sessions are held closer together than when they are spaced far apart in time. Conclusions: This study provides a graph-based framework for evaluating new learners' performance with medical devices and their readiness for advancement. This similarity network method could be used to classify students' performance using similarity thresholds, facilitating decision-making related to training and progression through curricula.
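The graph construction itself is simple: connect two participants whenever their performance feature vectors are closer than a similarity threshold, then check how well a student is connected to the resident group. A sketch with invented feature vectors (the features, values, and threshold are all illustrative assumptions):

```python
import numpy as np

# Hypothetical performance feature vectors (e.g. normalized task time,
# path length, error count), one row per resident:
residents = np.array([[1.0, 0.9, 1.1],
                      [1.1, 1.0, 0.9],
                      [0.9, 1.1, 1.0]])
student = np.array([1.05, 0.95, 1.0])

def similar(a, b, threshold=0.3):
    """Connect two participants if their feature vectors are close."""
    return np.linalg.norm(a - b) < threshold

# The student is deemed ready to advance when connected to most residents:
links = sum(similar(student, r) for r in residents)
ready = links > len(residents) / 2
```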
Affiliation(s)
- Elham Rastegari
- Department of Business Intelligence and Analytics, Creighton University, Omaha, NE, USA
- Donovan Orn
- College of Information Science and Technology, University of Nebraska at Omaha, Omaha, NE, USA
- Mohsen Zahiri
- Senior Research Scientist, BioSensics LLC, Watertown, MA, USA
- Carl Nelson
- Department of Mechanical and Materials Engineering, University of Nebraska-Lincoln, Lincoln, NE, USA
- Hesham Ali
- College of Information Science and Technology, University of Nebraska at Omaha, Omaha, NE, USA
- Ka-Chun Siu
- College of Allied Health Professions, University of Nebraska Medical Center, Omaha, NE, USA
16
Visual Intelligence: Prediction of Unintentional Surgical-Tool-Induced Bleeding during Robotic and Laparoscopic Surgery. ROBOTICS 2021. [DOI: 10.3390/robotics10010037] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Unintentional vascular damage can result from a surgical instrument’s abrupt movements during minimally invasive surgery (laparoscopic or robotic). A novel real-time image processing algorithm based on local entropy is proposed that can detect abrupt movements of surgical instruments and predict bleeding occurrence. The uniform nature of the texture of surgical tools is utilized to segment the tools from the background. By comparing changes in entropy over time, the algorithm determines when the surgical instruments are moved abruptly. We tested the algorithm using 17 videos of minimally invasive surgery, 11 of which had tool-induced bleeding. Our preliminary testing shows that the algorithm is 88% accurate and 90% precise in predicting bleeding. The average advance warning time for the 11 videos is 0.662 s, with the standard deviation being 0.427 s. The proposed approach has the potential to eventually lead to a surgical early warning system or even proactively attenuate tool movement (for robotic surgery) to avoid dangerous surgical outcomes.
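The key observation exploited here is that surgical tools have uniform texture and therefore low local entropy compared with tissue, so an entropy map separates tools from background and a sudden change in the map flags abrupt movement. A minimal histogram-entropy sketch (patch sizes and bin counts are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def local_entropy(patch, bins=32):
    """Shannon entropy of the grey-level histogram of an image patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins: 0*log(0) := 0
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
uniform_tool = np.full((16, 16), 0.5)  # uniform texture, tool-like patch
busy_tissue = rng.random((16, 16))     # high-variation background patch

# Tool regions score far lower than tissue, which is what allows
# entropy-based segmentation; comparing per-region entropy across frames
# then reveals abrupt instrument displacement.
e_tool, e_tissue = local_entropy(uniform_tool), local_entropy(busy_tissue)
```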
17
Davids J, Makariou SG, Ashrafian H, Darzi A, Marcus HJ, Giannarou S. Automated Vision-Based Microsurgical Skill Analysis in Neurosurgery Using Deep Learning: Development and Preclinical Validation. World Neurosurg 2021; 149:e669-e686. [PMID: 33588081 DOI: 10.1016/j.wneu.2021.01.117] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 01/22/2021] [Accepted: 01/23/2021] [Indexed: 12/22/2022]
Abstract
BACKGROUND/OBJECTIVE Technical skill acquisition is an essential component of neurosurgical training. Educational theory suggests that optimal learning and improvement in performance depends on the provision of objective feedback. Therefore, the aim of this study was to develop a vision-based framework based on a novel representation of surgical tool motion and interactions capable of automated and objective assessment of microsurgical skill. METHODS Videos were obtained from 1 expert, 6 intermediate, and 12 novice surgeons performing arachnoid dissection in a validated clinical model using a standard operating microscope. A mask region convolutional neural network framework was used to segment the tools present within the operative field in a recorded video frame. Tool motion analysis was achieved using novel triangulation metrics. Performance of the framework in classifying skill levels was evaluated using the area under the curve and accuracy. Objective measures of classifying the surgeons' skill level were also compared using the Mann-Whitney U test, and a value of P < 0.05 was considered statistically significant. RESULTS The area under the curve was 0.977 and the accuracy was 84.21%. A number of differences were found, which included experts having a lower median dissector velocity (P = 0.0004; 190.38 ms-1 vs. 116.38 ms-1), and a smaller inter-tool tip distance (median 46.78 vs. 75.92; P = 0.0002) compared with novices. CONCLUSIONS Automated and objective analysis of microsurgery is feasible using a mask region convolutional neural network, and a novel tool motion and interaction representation. This may support technical skills training and assessment in neurosurgery.
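Once the network has segmented the tools in each frame, motion metrics of the kind compared here (tip velocity, inter-tool tip distance) reduce to simple geometry on the tracked tip coordinates. A sketch with made-up pixel coordinates (the frame rate and values are illustrative only):

```python
import numpy as np

# Hypothetical tracked tool-tip coordinates (pixels), one row per frame,
# for two instruments in the operative field:
fps = 25.0
dissector = np.array([[10.0, 12.0], [13.0, 16.0], [19.0, 24.0]])
retractor = np.array([[40.0, 12.0], [41.0, 13.0], [42.0, 14.0]])

# Median tip velocity (pixels/s) from frame-to-frame displacements:
steps = np.linalg.norm(np.diff(dissector, axis=0), axis=1)
median_velocity = np.median(steps) * fps

# Inter-tool tip distance per frame, a proxy for bimanual coordination:
inter_tool = np.linalg.norm(dissector - retractor, axis=1)
```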
Affiliation(s)
- Joseph Davids
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Imperial College Healthcare NHS Trust, St. Mary's Praed St., Paddington, London, United Kingdom; Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
- Savvas-George Makariou
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom
- Hutan Ashrafian
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Imperial College Healthcare NHS Trust, St. Mary's Praed St., Paddington, London, United Kingdom
- Ara Darzi
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Imperial College Healthcare NHS Trust, St. Mary's Praed St., Paddington, London, United Kingdom
- Hani J Marcus
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Imperial College Healthcare NHS Trust, St. Mary's Praed St., Paddington, London, United Kingdom; Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
- Stamatia Giannarou
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom
18
Yasin R, Simaan N. Joint-level force sensing for indirect hybrid force/position control of continuum robots with friction. Int J Rob Res 2020. [DOI: 10.1177/0278364920979721] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Continuum robots offer the dexterity and obstacle circumvention capabilities necessary to enable surgery in deep surgical sites. They can also enable joint-level ex situ force sensing (JEFS), which provides an estimate of end-effector wrenches given joint-level forces. Prior works on JEFS relied on a restrictive embodiment with minimal actuation line friction and captured model and frictional actuation transmission uncertainties using a configuration space formulation. In this work, we overcome these limitations. First, frictional losses are canceled using a feed-forward term based on support vector regression in joint space. Then, regression maps and their interpolation are used to account for actuation hysteresis. The residual joint-force error is then further minimized using a least-squares model parameter update. An indirect hybrid force/position controller using JEFS is presented, with evaluation carried out on a realistic pre-clinically deployable insertable robotic effectors platform (IREP) for single-port access surgery. Automated mock force-controlled ablation, exploration, and knot tightening are evaluated. A user study involving the daVinci Research Kit surgeon console and the IREP as a surgical slave was carried out to compare the performance of users with and without force feedback based on JEFS for force-controlled ablation and knot tightening. Results in automated experiments and a user study of telemanipulated experiments suggest that intrinsic force sensing can achieve force uncertainty and force regulation errors on the order of 0.2 N. Using JEFS and automated task execution, repeatability and force-regulation accuracy are shown to be comparable to those achieved with a commercial force sensor for human-in-the-loop feedback.
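The feed-forward friction-cancellation step described above can be sketched with scikit-learn's SVR: regress the frictional loss from the joint state, then subtract the prediction before force estimation. The joint-state-to-friction mapping below is synthetic, not the IREP's:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Hypothetical training data: [joint position, joint velocity] -> frictional
# force loss along one actuation line (Coulomb-like model, values invented):
q = rng.uniform(-1.0, 1.0, (200, 2))
friction = 0.5 * np.tanh(5.0 * q[:, 1]) + 0.05 * q[:, 0]
friction += 0.01 * rng.standard_normal(200)        # measurement noise

# Regress friction from joint state; at run time the prediction is added
# as a feed-forward term to cancel the loss before estimating tip forces.
model = SVR(kernel="rbf", C=10.0).fit(q, friction)
residual = friction - model.predict(q)             # much smaller spread
```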
Affiliation(s)
- Rashid Yasin
- Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
- Nabil Simaan
- Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, USA
19
Review of surgical robotic systems for keyhole and endoscopic procedures: state of the art and perspectives. Front Med 2020; 14:382-403. [DOI: 10.1007/s11684-020-0781-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2019] [Accepted: 03/05/2020] [Indexed: 02/06/2023]
20
Iwai T, Kanno T, Miyazaki T, Haraguchi D, Kawashima K. Pneumatically driven surgical forceps displaying a magnified grasping torque. Int J Med Robot 2020; 16:e2051. [PMID: 31710158 PMCID: PMC7154778 DOI: 10.1002/rcs.2051] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2019] [Revised: 10/23/2019] [Accepted: 10/23/2019] [Indexed: 11/13/2022]
Abstract
BACKGROUND Sensing the grasping force and displaying it to the operator are important for safe operation in robot-assisted surgery. Although robotic forceps that sense the force through force sensors or through the driving torque of electric motors have been proposed, such sensors and motors introduce problems such as increased weight and difficulty of sterilization. METHOD We developed pneumatically driven robotic forceps that estimate the grasping torque and display a magnified torque to the operator. The system integrates a master device and a slave robot. On the slave side, the grasping torque is estimated from the pressure change in the pneumatic cylinder, and a pneumatic bellows displays the torque through a linkage. RESULTS We confirmed that the slave robot follows the motion of the master and that the grasping torque is estimated with an accuracy of 7 mNm, magnified, and displayed to the operator. CONCLUSIONS The pneumatically driven robotic forceps is capable of estimating the grasping torque and displaying it. In future work, surgeon usability and fatigue must be evaluated.
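The slave-side estimate amounts to converting a measured chamber-pressure difference into jaw torque through the piston area and the linkage geometry. A back-of-envelope sketch, with all parameters hypothetical rather than taken from the paper:

```python
# Estimating grasping torque from pneumatic cylinder pressures.
# All values below are illustrative assumptions, not the device's specs.
piston_area = 1.2e-4               # m^2, piston cross-section
moment_arm = 8.0e-3                # m, effective linkage lever arm at the jaws
p_drive, p_return = 220e3, 180e3   # Pa, measured chamber pressures

force = (p_drive - p_return) * piston_area   # net piston force, N
torque = force * moment_arm                  # estimated grasping torque, N*m
magnified = 5.0 * torque                     # scaled for the bellows display
```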
Affiliation(s)
- Takuya Iwai
- Department of Biomechanics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo, Japan
- Takahiro Kanno
- Department of Biomechanics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo, Japan
- Tetsuro Miyazaki
- Department of Biomechanics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo, Japan
- Daisuke Haraguchi
- Laboratory for Future Interdisciplinary Research of Science and Technology, Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Japan
- Kenji Kawashima
- Department of Biomechanics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo, Japan
21
Azari DP, Hu YH, Miller BL, Le BV, Radwin RG. Using Surgeon Hand Motions to Predict Surgical Maneuvers. HUMAN FACTORS 2019; 61:1326-1339. [PMID: 31013463 DOI: 10.1177/0018720819838901] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
OBJECTIVE This study explores how common machine learning techniques can predict surgical maneuvers from a continuous video record of surgical benchtop simulations. BACKGROUND Automatic computer vision recognition of surgical maneuvers (suturing, tying, and transition) could expedite video review and objective assessment of surgeries. METHOD We recorded hand movements of 37 clinicians performing simple and running subcuticular suturing benchtop simulations, and applied three machine learning techniques (decision trees, random forests, and hidden Markov models) to classify surgical maneuvers every 2 s (60 frames) of video. RESULTS Random forest predictions of surgical video correctly classified 74% of all video segments into suturing, tying, and transition states for a randomly selected test set. Hidden Markov model adjustments improved the random forest predictions to 79% for simple interrupted suturing on a subset of randomly selected participants. CONCLUSION Random forest predictions aided by hidden Markov modeling provided the best prediction of surgical maneuvers. Training of models across all users improved prediction accuracy by 10% compared with a random selection of participants. APPLICATION Marker-less video hand tracking can predict surgical maneuvers from a continuous video record with similar accuracy as robot-assisted surgical platforms, and may enable more efficient video review of surgical procedures for training and coaching.
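The pipeline described (per-segment classification followed by temporal cleanup) can be sketched with scikit-learn; the two-feature clusters are synthetic, and a sliding majority vote stands in for the paper's HMM smoothing pass:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical per-2-second-segment hand-motion features (e.g. mean speed,
# path curvature) for three maneuver classes: 0=suturing, 1=tying, 2=transition.
centers = np.array([[1.0, 0.2], [0.3, 1.0], [0.6, 0.6]])
X = np.vstack([c + 0.1 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat([0, 1, 2], 50)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(X)                 # per-segment maneuver labels

def smooth(labels, k=3):
    """Sliding-window majority vote: a simpler stand-in for the paper's
    HMM pass that cleans up isolated misclassified segments."""
    out = []
    for i in range(len(labels)):
        w = list(labels[max(0, i - k // 2): i + k // 2 + 1])
        out.append(max(set(w), key=w.count))
    return out

cleaned = smooth([0, 0, 2, 0, 0, 1, 1, 1])   # lone "transition" removed
```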
Affiliation(s)
- Yu Hen Hu
- University of Wisconsin-Madison, USA
22
Cifuentes J, Boulanger P, Pham MT, Prieto F, Moreau R. Gesture Classification Using LSTM Recurrent Neural Networks. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2019:6864-6867. [PMID: 31947417 DOI: 10.1109/embc.2019.8857592] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
The classification of human hand gestures has gained widespread recognition as a natural and powerful way to interact intuitively and efficiently with computers. Specifically, this approach has facilitated the development of a number of important applications in medical training, especially as a way to objectively evaluate the surgical tasks of novices against those of an expert surgeon. In this paper, 3D medical gestures, acquired by an instrumented laparoscopic forceps, were classified using Long Short-Term Memory (LSTM) recurrent neural networks (RNN). Recognition results are based on gesture dynamics, and a comparison of gesture trajectories between novices and an expert is presented. Using LSTM RNNs, we were able to achieve a recognition rate of 99.1%.
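What an LSTM contributes over a plain RNN is its gated cell state, which lets it retain gesture dynamics over long trajectories. A single cell step written out in numpy (weights are random here; dimensions are illustrative, e.g. a 3D tool-tip sample feeding 4 hidden units):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step. W, U, b stack the input, forget, output and
    candidate gates along the first axis (4*H rows)."""
    H = h.size
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])          # input gate: how much new info to admit
    f = sigmoid(z[H:2*H])        # forget gate: how much memory to keep
    o = sigmoid(z[2*H:3*H])      # output gate: how much state to expose
    g = np.tanh(z[3*H:4*H])      # candidate cell update
    c_new = f * c + i * g        # gated memory update
    h_new = o * np.tanh(c_new)   # hidden state / output
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 3, 4                      # 3D tool-tip sample -> 4 hidden units
W = rng.standard_normal((4 * H, D))
U = rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)

h = np.zeros(H)
c = np.zeros(H)
trajectory = rng.standard_normal((10, D))    # one synthetic gesture
for x in trajectory:                         # unroll the cell over time
    h, c = lstm_step(x, h, c, W, U, b)
# h now summarizes the gesture and could feed a softmax gesture classifier.
```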
23
24
Dias RD, Gupta A, Yule SJ. Using Machine Learning to Assess Physician Competence: A Systematic Review. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2019; 94:427-439. [PMID: 30113364 DOI: 10.1097/acm.0000000000002414] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
PURPOSE To identify the different machine learning (ML) techniques that have been applied to automate physician competence assessment and evaluate how these techniques can be used to assess different competence domains in several medical specialties. METHOD In May 2017, MEDLINE, EMBASE, PsycINFO, Web of Science, ACM Digital Library, IEEE Xplore Digital Library, PROSPERO, and Cochrane Database of Systematic Reviews were searched for articles published from inception to April 30, 2017. Studies were included if they applied at least one ML technique to assess medical students', residents', fellows', or attending physicians' competence. Information on sample size, participants, study setting and design, medical specialty, ML techniques, competence domains, outcomes, and methodological quality was extracted. MERSQI was used to evaluate quality, and a qualitative narrative synthesis of the medical specialties, ML techniques, and competence domains was conducted. RESULTS Of 4,953 initial articles, 69 met inclusion criteria. General surgery (24; 34.8%) and radiology (15; 21.7%) were the most studied specialties; natural language processing (24; 34.8%), support vector machine (15; 21.7%), and hidden Markov models (14; 20.3%) were the ML techniques most often applied; and patient care (63; 91.3%) and medical knowledge (45; 65.2%) were the most assessed competence domains. CONCLUSIONS A growing number of studies have attempted to apply ML techniques to physician competence assessment. Although many studies have investigated the feasibility of certain techniques, more validation research is needed. The use of ML techniques may have the potential to integrate and analyze pragmatic information that could be used in real-time assessments and interventions.
Affiliation(s)
- Roger D Dias
- R.D. Dias is instructor in emergency medicine, Department of Emergency Medicine and STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts; ORCID: http://orcid.org/0000-0003-4959-5052. A. Gupta is research scientist, Center for Surgery and Public Health, Brigham and Women's Hospital, Boston, Massachusetts. S.J. Yule is associate professor of surgery, Harvard Medical School, and faculty, Department of Surgery and STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Boston, Massachusetts
25
Kowalewski KF, Garrow CR, Schmidt MW, Benner L, Müller-Stich BP, Nickel F. Sensor-based machine learning for workflow detection and as key to detect expert level in laparoscopic suturing and knot-tying. Surg Endosc 2019; 33:3732-3740. [DOI: 10.1007/s00464-019-06667-4] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2018] [Accepted: 01/17/2019] [Indexed: 12/17/2022]
26
Performance Assessment. COMPREHENSIVE HEALTHCARE SIMULATION: SURGERY AND SURGICAL SUBSPECIALTIES 2019. [DOI: 10.1007/978-3-319-98276-2_9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
27
Overtoom EM, Horeman T, Jansen FW, Dankelman J, Schreuder HWR. Haptic Feedback, Force Feedback, and Force-Sensing in Simulation Training for Laparoscopy: A Systematic Overview. JOURNAL OF SURGICAL EDUCATION 2019; 76:242-261. [PMID: 30082239 DOI: 10.1016/j.jsurg.2018.06.008] [Citation(s) in RCA: 45] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/26/2018] [Revised: 04/24/2018] [Accepted: 06/13/2018] [Indexed: 06/08/2023]
Abstract
OBJECTIVES To provide a systematic overview of the literature assessing the value of haptic and force feedback in current simulators teaching laparoscopic surgical skills. DATA SOURCES The databases of Pubmed, Cochrane, Embase, Web of Science, and Google Scholar were searched to retrieve relevant studies published until January 31st, 2017. The search included laparoscopic surgery, simulation, and haptic or force feedback and all relevant synonyms. METHODS Duplicates were removed, and titles and abstracts screened. The remaining articles were subsequently screened full text and included in this review if they met the inclusion criteria. Two types of feedback were analyzed and are discussed separately: haptic and force feedback. RESULTS A total of 4023 articles were found, of which 87 could be used in this review. A descriptive analysis of the data is provided. Results on the added value of haptic interface devices in virtual reality are variable. Haptic feedback is most important for more complex tasks, and the interface devices do not require the highest level of fidelity. Haptic feedback leads to a shorter learning curve with a steadier upward trend. Concerning force feedback, force parameters are measured through force-sensing systems in the instrument and/or the environment. These parameters, especially in combination with motion parameters, give box trainers an objective evaluation of laparoscopic skills. Feedback on force use, both in real time and post-practice, has been shown to improve training. CONCLUSIONS Haptic feedback is added to virtual reality simulators to increase fidelity and thereby improve the training effect. Results from adding haptic feedback are variable: it is most important for more complex tasks, but yields only minor improvements for novice surgeons. Force parameters and force feedback in box trainers have been shown to improve training results.
Collapse
Affiliation(s)
- Evelien M Overtoom
- Department of Gynaecology and Reproductive Medicine, University Medical Center Utrecht and Department of Gynaecologic Oncology, UMC Utrecht Cancer Centre, Utrecht, The Netherlands
| | - Tim Horeman
- Department of Biomechanical Engineering, Faculty of Mechanical Engineering, Delft University of Technology, Delft, The Netherlands
| | - Frank-Willem Jansen
- Department of Biomechanical Engineering, Faculty of Mechanical Engineering, Delft University of Technology, Delft, The Netherlands; Department of Gynaecology, Leiden University Medical Centre, Leiden, The Netherlands
| | - Jenny Dankelman
- Department of Biomechanical Engineering, Faculty of Mechanical Engineering, Delft University of Technology, Delft, The Netherlands
| | - Henk W R Schreuder
- Department of Gynaecology and Reproductive Medicine, University Medical Center Utrecht and Department of Gynaecologic Oncology, UMC Utrecht Cancer Centre, Utrecht, The Netherlands.
| |
Collapse
|
28
|
Towards Expert-Based Speed–Precision Control in Early Simulator Training for Novice Surgeons. INFORMATION 2018. [DOI: 10.3390/info9120316] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Simulator training for image-guided surgical interventions would benefit from intelligent systems that detect the evolution of task performance and take control of individual speed–precision strategies by providing effective automatic performance feedback. At the earliest training stages, novices frequently focus on getting faster at the task. This may, as shown here, compromise the evolution of their precision scores, sometimes irreparably, if it is not controlled for as early as possible. Artificial intelligence could help ensure that a trainee reaches her/his optimal individual speed–accuracy trade-off by monitoring individual performance criteria, detecting critical trends at any given moment in time, and alerting the trainee as early as necessary to slow down and focus on precision, or to focus on getting faster. It is suggested that, for effective benchmarking, individual training statistics of novices be compared with the statistics of an expert surgeon. The speed–accuracy functions of novices trained in a large number of experimental sessions reveal differences in individual speed–precision strategies, and clarify why such strategies should be automatically detected and controlled for before further training on specific surgical task models, or clinical models, may be envisaged. How expert benchmark statistics may be exploited for automatic performance control is explained.
Collapse
|
29
|
Forestier G, Petitjean F, Senin P, Despinoy F, Huaulmé A, Fawaz HI, Weber J, Idoumghar L, Muller PA, Jannin P. Surgical motion analysis using discriminative interpretable patterns. Artif Intell Med 2018; 91:3-11. [PMID: 30172445 DOI: 10.1016/j.artmed.2018.08.002] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2017] [Revised: 07/06/2018] [Accepted: 08/13/2018] [Indexed: 11/29/2022]
Abstract
OBJECTIVE The analysis of surgical motion has received growing interest with the development of devices allowing its automatic capture. In this context, the use of advanced surgical training systems makes automated assessment of surgical trainees possible. Automatic and quantitative evaluation of surgical skills is a very important step in improving surgical patient care. MATERIAL AND METHOD In this paper, we present an approach for the discovery and ranking of discriminative and interpretable patterns of surgical practice from recordings of surgical motions. A pattern is defined as a series of actions or events in the kinematic data that together are distinctive of a specific gesture or skill level. Our approach is based on the decomposition of continuous kinematic data into a set of overlapping gestures represented as strings (bag of words), for which we compute a comparative numerical statistic (tf-idf) enabling the discovery of discriminative gestures via their relative occurrence frequency. RESULTS We carried out experiments on three surgical motion datasets. The results show that the patterns identified by the proposed method can be used to accurately classify individual gestures, skill levels and surgical interfaces. We also show how the patterns provide detailed feedback for trainee skill assessment. CONCLUSIONS The proposed approach is an interesting addition to existing learning tools for surgery, as it provides a way to obtain feedback on which parts of an exercise were used to classify the attempt as correct or incorrect.
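The tf-idf weighting described above can be sketched in a few lines of Python (a hedged illustration of the general technique, not the authors' implementation; the gesture "words" and data are hypothetical):

```python
from collections import Counter
from math import log

def tf_idf(docs):
    """Score each gesture 'word' in each document by term frequency
    weighted by inverse document frequency: words common to all
    documents score zero; words peculiar to one document score highest."""
    n = len(docs)
    df = Counter()                      # document frequency per word
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        total = sum(tf.values())
        scores.append({w: (c / total) * log(n / df[w]) for w, c in tf.items()})
    return scores

# Two hypothetical "documents": gesture-symbol sequences for two skill levels.
novice = ["pull", "push", "pull", "hesitate"]
expert = ["pull", "loop", "loop", "tighten"]
scores = tf_idf([novice, expert])
# "pull" occurs in both documents, so idf = log(2/2) = 0: not discriminative.
# "hesitate" occurs only in the novice document, so it scores > 0.
```

Words shared by every skill level are zeroed out automatically, which is what makes the surviving high-scoring gestures interpretable as discriminative patterns.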
Collapse
Affiliation(s)
- Germain Forestier
- IRIMAS, Université de Haute-Alsace, Mulhouse, France; Faculty of Information Technology, Monash University, Melbourne, Australia.
| | - François Petitjean
- Faculty of Information Technology, Monash University, Melbourne, Australia.
| | - Pavel Senin
- Los Alamos National Laboratory, University of Hawai'i at Mānoa, United States.
| | - Fabien Despinoy
- Univ Rennes, Inserm, LTSI - UMR_S 1099, F35000 Rennes, France.
| | - Arnaud Huaulmé
- Univ Rennes, Inserm, LTSI - UMR_S 1099, F35000 Rennes, France.
| | - Pierre Jannin
- Univ Rennes, Inserm, LTSI - UMR_S 1099, F35000 Rennes, France.
| |
Collapse
|
30
|
Cifuentes J, Moreau R, Prieto F, Boulanger P. Surgical gesture classification using Dynamic Time Warping and affine velocity. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2017:2275-2278. [PMID: 29060351 DOI: 10.1109/embc.2017.8037309] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Minimally Invasive Surgery (MIS) has become widespread as an important surgical technique due to its advantages related to pain relief and short recovery times. However, this approach requires the acquisition of special surgical skills, which represents a challenge for the objective assessment of surgical gestures. Several studies have shown that kinematic and kinetic analysis of hand movement is a valuable assessment tool for basic surgical skills in MIS. In addition, recent research has shown that human motion performed during surgery can be described as a sequence of constant affine velocity movements.
In this paper, we present a novel method to classify gestures based on an affine velocity analysis of 3D motion and an implementation of the Dynamic Time Warping (DTW) algorithm. In particular, the affine velocity calculation correlates kinematic and geometric variables such as curvature, torsion, and Euclidean velocity, reducing the dimension of the conventional 3D problem. The simplicity of the DTW algorithm then allows an accurate classification that is easier to implement and understand. Experimental validation of the algorithm is presented based on the position and orientation data of a laparoscopic instrument, determined by six cameras. Results show the advantages of the proposed method compared to conventional Multidimensional Dynamic Time Warping for classifying surgical gestures in MIS.
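The core Dynamic Time Warping recurrence is compact enough to sketch directly (a generic textbook DTW on 1-D profiles, assumed here to stand in for the affine-velocity signals; not the authors' code):

```python
import math

def dtw(a, b):
    """DTW distance between two 1-D sequences with absolute difference
    as local cost; one sequence is time-warped onto the other so that
    gestures performed at different speeds can still be compared."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# The same profile executed at two speeds still matches at zero cost.
print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

Reducing 3D kinematics to a single affine-velocity profile, as the paper does, is what lets this plain 1-D recurrence replace the heavier multidimensional variant.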
Collapse
|
31
|
Video and accelerometer-based motion analysis for automated surgical skills assessment. Int J Comput Assist Radiol Surg 2018; 13:443-455. [PMID: 29380122 DOI: 10.1007/s11548-018-1704-z] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Accepted: 01/08/2018] [Indexed: 10/18/2022]
Abstract
PURPOSE Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). METHODS We conduct a large study of basic surgical skill assessment on a dataset containing video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy-based" features, approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time-series data. The proposed features are compared to the existing methods of Sequential Motion Texture, Discrete Cosine Transform and Discrete Fourier Transform for surgical skills assessment. RESULTS We report the average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1% and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing, achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment. CONCLUSION Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system could significantly improve the efficiency of surgical training in medical schools and teaching hospitals.
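Approximate entropy, the key feature above, can be sketched as follows (the standard ApEn definition; the parameters m and r and the toy signals are illustrative assumptions, not the paper's settings):

```python
import math

def approx_entropy(u, m=2, r=0.2):
    """Approximate entropy: low values mean the series is regular and
    predictable, high values mean irregular fluctuations."""
    def phi(k):
        n = len(u) - k + 1
        x = [u[i:i + k] for i in range(n)]
        # fraction of templates within tolerance r of each template
        c = [sum(1 for xj in x
                 if max(abs(p - q) for p, q in zip(xi, xj)) <= r) / n
             for xi in x]
        return sum(math.log(ci) for ci in c) / n
    return phi(m) - phi(m + 1)

regular = [0.0, 1.0] * 20           # perfectly periodic toy signal
x, chaotic = 0.3, []
for _ in range(40):                 # logistic map: irregular toy signal
    x = 4 * x * (1 - x)
    chaotic.append(x)
# The periodic signal should have much lower ApEn than the chaotic one.
```

The intuition matches the abstract: smooth, practiced motion produces regular time series (low entropy), while hesitant novice motion produces irregular fluctuations (high entropy).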
Collapse
|
32
|
An HMM-based recognition framework for endovascular manipulations. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2017:3393-3396. [PMID: 29060625 DOI: 10.1109/embc.2017.8037584] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Robotic surgical systems are becoming increasingly popular for the treatment of cardiovascular diseases. However, most of them have been designed without considering the techniques and skills of natural surgical manipulation, which are key factors in the clinical success of percutaneous coronary intervention. This paper proposes an HMM-based framework to recognize six typical endovascular manipulations for surgical skill analysis. A simulated surgical platform was built for endovascular manipulations performed by five subjects (1 expert and 4 novices). The performance of the proposed framework is evaluated in three experimental schemes with the optimal model parameters. The results show that endovascular manipulations are recognized with high accuracy and reliable performance. The results can also inform the design of next-generation vascular interventional robots.
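The likelihood computation at the heart of any HMM recognizer is the forward algorithm, sketched below (a generic textbook version; the two-state parameters are hypothetical, not the paper's trained manipulation models):

```python
def hmm_forward(obs, pi, A, B):
    """Forward algorithm: total probability of an observation sequence
    under an HMM. Recognition picks the manipulation whose HMM assigns
    the highest likelihood to the observed motion sequence."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [B[t][o] * sum(alpha[s] * A[s][t] for s in range(len(pi)))
                 for t in range(len(pi))]
    return sum(alpha)

# Hypothetical two-state model over a binary observation alphabet.
pi = [0.6, 0.4]                     # initial state distribution
A = [[0.7, 0.3], [0.4, 0.6]]        # state transition matrix
B = [[0.9, 0.1], [0.2, 0.8]]        # emission probabilities
# Probabilities over all length-2 observation sequences must sum to 1.
total = sum(hmm_forward([a, b], pi, A, B) for a in (0, 1) for b in (0, 1))
```

In a recognizer, one such model would be trained per manipulation class, and a motion sequence would be labeled with the class whose model yields the largest forward probability.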
Collapse
|
33
|
Fard MJ, Ameri S, Darin Ellis R, Chinnam RB, Pandya AK, Klein MD. Automated robot-assisted surgical skill evaluation: Predictive analytics approach. Int J Med Robot 2017; 14. [PMID: 28660725 DOI: 10.1002/rcs.1850] [Citation(s) in RCA: 83] [Impact Index Per Article: 11.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2016] [Revised: 06/01/2017] [Accepted: 06/02/2017] [Indexed: 12/29/2022]
Abstract
BACKGROUND Surgical skill assessment has predominantly been a subjective task. Recently, technological advances such as robot-assisted surgery have created great opportunities for objective surgical evaluation. In this paper, we introduce a predictive framework for objective skill assessment based on movement trajectory data. Our aim is to build a classification framework to automatically evaluate the performance of surgeons with different levels of expertise. METHODS Eight global movement features are extracted from movement trajectory data captured by a da Vinci robot for surgeons with two levels of expertise - novice and expert. Three classification methods - k-nearest neighbours, logistic regression and support vector machines - are applied. RESULTS The result shows that the proposed framework can classify surgeons' expertise as novice or expert with an accuracy of 82.3% for knot tying and 89.9% for a suturing task. CONCLUSION This study demonstrates and evaluates the ability of machine learning methods to automatically classify expert and novice surgeons using global movement features.
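Of the three classifiers compared, k-nearest neighbours is simple enough to sketch directly (toy two-feature data with hypothetical values; not the study's eight movement features or its dataset):

```python
def knn_predict(train, labels, query, k=3):
    """Classify a trial by majority vote among the k closest training
    trials in feature space (squared Euclidean distance)."""
    ranked = sorted(zip(train, labels),
                    key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))
    votes = [y for _, y in ranked[:k]]
    return max(set(votes), key=votes.count)

# Toy global movement features: (normalized path length, completion time).
train = [(0.90, 0.80), (1.00, 0.90), (0.95, 0.85),   # long, slow trials
         (0.20, 0.30), (0.25, 0.20), (0.30, 0.25)]   # short, fast trials
labels = ["novice"] * 3 + ["expert"] * 3
print(knn_predict(train, labels, (0.22, 0.28)))  # expert
```

The appeal of such global-feature classifiers, as the abstract notes, is that a handful of trajectory summaries suffices to separate novice from expert with no per-gesture segmentation.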
Collapse
Affiliation(s)
- Mahtab J Fard
- Department of Industrial and Systems Engineering, Wayne State University, Detroit, Michigan, USA
| | - Sattar Ameri
- Department of Industrial and Systems Engineering, Wayne State University, Detroit, Michigan, USA
| | - R Darin Ellis
- Department of Industrial and Systems Engineering, Wayne State University, Detroit, Michigan, USA
| | - Ratna B Chinnam
- Department of Industrial and Systems Engineering, Wayne State University, Detroit, Michigan, USA
| | - Abhilash K Pandya
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, Michigan, USA
| | - Michael D Klein
- Department of Surgery, Wayne State University School of Medicine and Pediatric Surgery, Children's Hospital of Michigan, Detroit, Michigan, USA
| |
Collapse
|
34
|
Vedula SS, Ishii M, Hager GD. Objective Assessment of Surgical Technical Skill and Competency in the Operating Room. Annu Rev Biomed Eng 2017; 19:301-325. [PMID: 28375649 DOI: 10.1146/annurev-bioeng-071516-044435] [Citation(s) in RCA: 75] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
Training skillful and competent surgeons is critical to ensure high quality of care and to minimize disparities in access to effective care. Traditional models to train surgeons are being challenged by rapid advances in technology, an intensified patient-safety culture, and a need for value-driven health systems. Simultaneously, technological developments are enabling capture and analysis of large amounts of complex surgical data. These developments are motivating a "surgical data science" approach to objective computer-aided technical skill evaluation (OCASE-T) for scalable, accurate assessment; individualized feedback; and automated coaching. We define the problem space for OCASE-T and summarize 45 publications representing recent research in this domain. We find that most studies on OCASE-T are simulation based; very few are in the operating room. The algorithms and validation methodologies used for OCASE-T are highly varied; there is no uniform consensus. Future research should emphasize competency assessment in the operating room, validation against patient outcomes, and effectiveness for surgical training.
Collapse
Affiliation(s)
- S Swaroop Vedula
- Malone Center for Engineering in Healthcare, Department of Computer Science, The Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland 21218;
| | - Masaru Ishii
- Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins University School of Medicine, Baltimore, Maryland 21287
| | - Gregory D Hager
- Malone Center for Engineering in Healthcare, Department of Computer Science, The Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland 21218;
| |
Collapse
|
35
|
Hessinger M, Pilic T, Werthschutzky R, Pott PP. Miniaturized force/torque sensor for in vivo measurements of tissue characteristics. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2016:2022-2025. [PMID: 28268727 DOI: 10.1109/embc.2016.7591123] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
This paper presents the development of a surgical instrument to measure interaction forces/torques with organic tissue during an operation. The focus is on the design of the sensor element, consisting of a spoke-wheel deformation element with a diameter of 12 mm and eight inhomogeneously doped piezoresistive silicon strain gauges in an integrated full-bridge assembly with an edge length of 500 μm. The silicon chips are contacted to flex-circuits via flip chip and bonded to the substrate with a single-component adhesive. A signal-processing board with an 18-bit serial A/D converter is integrated into the sensor. The design concept of the handheld surgical sensor device consists of an instrument coupling, the six-axis sensor, a wireless communication interface and a battery. The nominal force of the sensing element is 10 N and the nominal torque is 1 N·m in all spatial directions. A first characterization of the force sensor yields a maximum systematic error of 4.92% and a random error of 1.13%.
Collapse
|
36
|
Ahmidi N, Tao L, Sefati S, Gao Y, Lea C, Haro BB, Zappella L, Khudanpur S, Vidal R, Hager GD. A Dataset and Benchmarks for Segmentation and Recognition of Gestures in Robotic Surgery. IEEE Trans Biomed Eng 2017; 64:2025-2041. [PMID: 28060703 DOI: 10.1109/tbme.2016.2647680] [Citation(s) in RCA: 94] [Impact Index Per Article: 13.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
OBJECTIVE State-of-the-art techniques for surgical data analysis report promising results for automated skill assessment and action recognition. The contributions of many of these techniques, however, are limited to study-specific data and validation metrics, making assessment of progress across the field extremely challenging. METHODS In this paper, we address two major problems for surgical data analysis: first, a lack of uniformly shared datasets and benchmarks, and second, a lack of consistent validation processes. We address the former by presenting the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), a public dataset that we have created to support comparative research benchmarking. JIGSAWS contains synchronized video and kinematic data from multiple performances of robotic surgical tasks by operators of varying skill. We address the latter by presenting a well-documented evaluation methodology and reporting results for six techniques for automated segmentation and classification of time-series data on JIGSAWS. These techniques comprise four temporal approaches for joint segmentation and classification: hidden Markov model (HMM), sparse HMM, Markov/semi-Markov conditional random field, and skip-chain conditional random field; and two feature-based ones that aim to classify fixed segments: bag of spatiotemporal features and linear dynamical systems. RESULTS Most methods recognize gesture activities with approximately 80% overall accuracy under both leave-one-super-trial-out and leave-one-user-out cross-validation settings. CONCLUSION Current methods show promising results on this shared dataset, but room for significant progress remains, particularly for consistent prediction of gesture activities across different surgeons. SIGNIFICANCE The results reported in this paper provide the first systematic and uniform evaluation of surgical activity recognition techniques on the benchmark database.
Collapse
|
37
|
Rossa C, Lehmann T, Sloboda R, Usmani N, Tavakoli M. A data-driven soft sensor for needle deflection in heterogeneous tissue using just-in-time modelling. Med Biol Eng Comput 2016; 55:1401-1414. [PMID: 27943086 DOI: 10.1007/s11517-016-1599-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2016] [Accepted: 11/28/2016] [Indexed: 10/20/2022]
Abstract
Global modelling has traditionally been the approach taken to estimate needle deflection in soft tissue. In this paper, we propose a new method based on local data-driven modelling of needle deflection. External measurements of needle-tissue interaction are collected from several insertions in ex vivo tissue to form a cloud of data. Inputs to the system are the needle insertion depth, axial rotations, and the forces and torques measured at the needle base by a force sensor. When a new insertion is performed, the just-in-time learning method estimates the model outputs given the current inputs to the needle-tissue system and the historical database. The query is compared to every observation in the database and given weights according to similarity criteria. Only the subset of historical data most relevant to the query is selected, and a local linear model is fit to the selected points to estimate the query output. The model outputs the 3D deflection of the needle tip and the needle insertion force. The proposed approach is validated in ex vivo multilayered biological tissue in different needle insertion scenarios. Experimental results in five case studies indicate an accuracy of 0.81 and 1.24 mm in predicting needle deflection in the horizontal and vertical planes, respectively, and an accuracy of 0.5 N in predicting the needle insertion force over 216 needle insertions.
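The just-in-time idea, fitting nothing until a query arrives and then weighting stored observations by similarity, can be sketched as a locally weighted average (a simplification: the paper fits a local linear model; all data below is hypothetical):

```python
import math

def jit_estimate(database, query, k=9, bandwidth=1.0):
    """Lazy local estimate: select the k stored (input, output) pairs
    most similar to the query and return their Gaussian-weighted mean."""
    def d2(x):
        return sum((a - b) ** 2 for a, b in zip(x, query))
    nearest = sorted(database, key=lambda p: d2(p[0]))[:k]
    weights = [math.exp(-d2(x) / (2 * bandwidth ** 2)) for x, _ in nearest]
    return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)

# Hypothetical history: deflection grows linearly with insertion depth.
database = [((d,), 0.1 * d) for d in range(11)]
print(jit_estimate(database, (5.0,)))  # close to 0.5
```

Because nothing is precomputed, each new insertion can be answered from whichever past insertions happen to resemble it, which is what makes the method robust to heterogeneous tissue.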
Collapse
Affiliation(s)
- Carlos Rossa
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, T6G 2V4, Canada.
| | - Thomas Lehmann
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, T6G 2V4, Canada
| | - Ronald Sloboda
- Cross Cancer Institute and the Department of Oncology, University of Alberta, Edmonton, AB, T6G 1Z2, Canada
| | - Nawaid Usmani
- Cross Cancer Institute and the Department of Oncology, University of Alberta, Edmonton, AB, T6G 1Z2, Canada
| | - Mahdi Tavakoli
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, T6G 2V4, Canada
| |
Collapse
|
38
|
Brown JD, O Brien CE, Leung SC, Dumon KR, Lee DI, Kuchenbecker KJ. Using Contact Forces and Robot Arm Accelerations to Automatically Rate Surgeon Skill at Peg Transfer. IEEE Trans Biomed Eng 2016; 64:2263-2275. [PMID: 28113295 DOI: 10.1109/tbme.2016.2634861] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
OBJECTIVE Most trainees begin learning robotic minimally invasive surgery by performing inanimate practice tasks with clinical robots such as the Intuitive Surgical da Vinci. Expert surgeons are commonly asked to evaluate these performances using standardized five-point rating scales, but doing such ratings is time consuming, tedious, and somewhat subjective. This paper presents an automatic skill evaluation system that analyzes only the contact force with the task materials, the broad-bandwidth accelerations of the robotic instruments and camera, and the task completion time. METHODS We recruited N = 38 participants of varying skill in robotic surgery to perform three trials of peg transfer with a da Vinci Standard robot instrumented with our Smart Task Board. After calibration, three individuals rated these trials on five domains of the Global Evaluative Assessment of Robotic Skill (GEARS) structured assessment tool, providing ground-truth labels for regression and classification machine learning algorithms that predict GEARS scores based on the recorded force, acceleration, and time signals. RESULTS Both machine learning approaches produced scores on the reserved testing sets that were in good to excellent agreement with the human raters, even when the force information was not considered. Furthermore, regression predicted GEARS scores more accurately and efficiently than classification. CONCLUSION A surgeon's skill at robotic peg transfer can be reliably rated via regression using features gathered from force, acceleration, and time sensors external to the robot. SIGNIFICANCE We expect improved trainee learning as a result of providing these automatic skill ratings during inanimate task practice on a surgical robot.
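The regression approach above, predicting a structured rating from sensor features, can be illustrated with a one-feature least-squares fit (a toy sketch with hypothetical numbers; the study uses many force, acceleration, and time features, not this single one):

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature: fit score = slope*x + b,
    e.g. a GEARS-like rating as a linear function of completion time."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical training pairs: longer completion time -> lower rating.
times = [60, 90, 120, 150, 180]          # seconds
scores = [5.0, 4.0, 3.0, 2.0, 1.0]       # expert-assigned ratings
slope, intercept = fit_line(times, scores)
print(slope * 60 + intercept)  # predicted rating for a 60 s trial
```

Regressing directly onto the rating scale, rather than classifying into discrete skill bins, is what the abstract credits for the more accurate and efficient score prediction.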
Collapse
|
39
|
Genovese B, Yin S, Sareh S, Devirgilio M, Mukdad L, Davis J, Santos VJ, Benharash P. Surgical Hand Tracking in Open Surgery Using a Versatile Motion Sensing System: Are We There Yet? Am Surg 2016; 82:872-875. [DOI: 10.1177/000313481608201002] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
With changes in work-hour limitations, there is an increasing need for objective determination of technical proficiency. Electromagnetic hand-motion analysis has previously shown only time to completion and number of movements to correlate with expertise. The present study was undertaken to evaluate the efficacy of hand-motion-tracking analysis in determining surgical skill proficiency. A nine-degree-of-freedom sensor was mounted on the superior aspect of a needle driver. Four Novices, four Trainees, and three Experts performed a large-vessel patch anastomosis on phantom tissue. Path length, total number of movements, absolute velocity, and total time were analyzed between groups; a one-way analysis of variance and Welch's t test were used to evaluate significance. Compared with the Novices, Expert subjects exhibited a significantly decreased total number of movements, decreased instrument path length, and decreased total time to complete tasks. No significant differences in absolute velocity were found between groups. In this pilot study, we have identified significant differences in patterns of motion between Novice and Expert subjects. These data warrant further analysis of their predictive value in larger cohorts at different levels of training, and may be a useful tool in competence-based training paradigms in the future.
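Path length, the most consistently discriminative metric in studies like this one, is simply the cumulative distance between consecutive tracker samples (a generic sketch with a hypothetical track, not the study's sensor data):

```python
import math

def path_length(points):
    """Total instrument path length: sum of Euclidean distances
    between consecutive 3-D tracker samples."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# A hypothetical straight 10 mm move sampled in four equal steps.
track = [(0, 0, 0), (2.5, 0, 0), (5, 0, 0), (7.5, 0, 0), (10, 0, 0)]
print(path_length(track))  # 10.0
```

Any deviation or tremor inflates this sum above the straight-line distance between endpoints, which is why experts, moving economically, record shorter path lengths.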
Collapse
Affiliation(s)
- Bradley Genovese
- Division of Cardiac Surgery, University of California at Los Angeles, Los Angeles, California
- Center for Advanced Surgical and Interventional Technology, University of California at Los Angeles, Los Angeles, California
| | - Steven Yin
- Electrical Engineering Department, University of California at Los Angeles, Los Angeles, California; and
| | - Sohail Sareh
- Division of Cardiac Surgery, University of California at Los Angeles, Los Angeles, California
| | - Michael Devirgilio
- Division of Cardiac Surgery, University of California at Los Angeles, Los Angeles, California
| | - Laith Mukdad
- Division of Cardiac Surgery, University of California at Los Angeles, Los Angeles, California
| | - Jessica Davis
- Division of Cardiac Surgery, University of California at Los Angeles, Los Angeles, California
| | - Veronica J. Santos
- Center for Advanced Surgical and Interventional Technology, University of California at Los Angeles, Los Angeles, California
- Mechanical and Aerospace Engineering Department, University of California at Los Angeles, Los Angeles, California
| | - Peyman Benharash
- Division of Cardiac Surgery, University of California at Los Angeles, Los Angeles, California
- Center for Advanced Surgical and Interventional Technology, University of California at Los Angeles, Los Angeles, California
| |
Collapse
|
40
|
Fard MJ, Pandya AK, Chinnam RB, Klein MD, Ellis RD. Distance-based time series classification approach for task recognition with application in surgical robot autonomy. Int J Med Robot 2016; 13. [DOI: 10.1002/rcs.1766] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2016] [Revised: 07/09/2016] [Accepted: 07/12/2016] [Indexed: 11/09/2022]
Affiliation(s)
- Mahtab J. Fard
- Department of Industrial and Systems Engineering; Wayne State University; Detroit MI USA
| | - Abhilash K. Pandya
- Department of Electrical and Computer Engineering; Wayne State University; Detroit MI USA
| | - Ratna B. Chinnam
- Department of Industrial and Systems Engineering; Wayne State University; Detroit MI USA
| | - Michael D. Klein
- Department of Surgery; Wayne State University School of Medicine and Pediatric Surgery, Children's Hospital of Michigan; Detroit MI USA
| | - R. Darin Ellis
- Department of Industrial and Systems Engineering; Wayne State University; Detroit MI USA
| |
Collapse
|
41
|
Pinzon D, Byrns S, Zheng B. Prevailing Trends in Haptic Feedback Simulation for Minimally Invasive Surgery. Surg Innov 2016; 23:415-21. [PMID: 26839212 DOI: 10.1177/1553350616628680] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Background The amount of direct hand-tool-tissue interaction and feedback in minimally invasive surgery varies from being attenuated in laparoscopy to being completely absent in robotic minimally invasive surgery. The role of haptic feedback during surgical skill acquisition and its emphasis in training have been a constant source of controversy. This review discusses the major developments in haptic simulation as they relate to surgical performance, and the current research questions that remain unanswered. Search Strategy An in-depth review of the literature was performed using PubMed. Results A total of 198 abstracts were returned based on our search criteria. Three major areas of research were identified: advancements in one of the four components of haptic systems, evaluation of the effectiveness of haptic integration in simulators, and improvements to haptic feedback in robotic surgery. Conclusions Force feedback is the best method for tissue identification in minimally invasive surgery, and haptic feedback provides the greatest benefit to surgical novices in the early stages of their training. New technology has improved our ability to capture, play back, and enhance the utility of haptic cues in simulated surgery. Future research should focus on deciphering how haptic training in surgical education can increase performance, safety, and training efficiency.
Affiliation(s)
- David Pinzon
- University of Alberta, Edmonton, Alberta, Canada
- Simon Byrns
- University of Alberta, Edmonton, Alberta, Canada
- Bin Zheng
- University of Alberta, Edmonton, Alberta, Canada
42
Hand-tool-tissue interaction forces in neurosurgery for haptic rendering. Med Biol Eng Comput 2015; 54:1229-41. [PMID: 26718558 DOI: 10.1007/s11517-015-1439-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2014] [Accepted: 12/12/2015] [Indexed: 10/22/2022]
Abstract
Haptics provides sensory stimuli that represent the interaction with a virtual or tele-manipulated object, and it is considered a valuable navigation and manipulation tool during tele-operated surgical procedures. Haptic feedback can be provided to the user via cutaneous information and kinesthetic feedback. Sensory subtraction removes the kinesthetic component of the haptic feedback, so that only the cutaneous component is provided to the user. Such a technique guarantees a stable haptic feedback loop while keeping the transparency of the tele-operation system high, meaning that the system faithfully replicates and renders back the user's directives. This work investigates whether the interaction forces during a bench-model neurosurgery operation fall within the range of purely cutaneous perception of the human finger pads. If so, it would be possible to exploit sensory subtraction techniques to provide surgeons with feedback from neurosurgery. We measured the forces exerted on surgical tools by three neurosurgeons performing typical actions on a brain phantom, using contact force sensors, while the forces exerted by the tools on the phantom tissue were recorded using a load cell placed under the brain phantom box. The measured surgeon-tool contact forces were 0.01-3.49 N for the thumb and 0.01-6.6 N for the index and middle fingers, whereas the measured tool-tissue interaction forces were six to 11 times smaller than the contact forces, i.e., 0.01-0.59 N. The measured contact forces fit within the range of cutaneous sensitivity for the human finger pad; thus, in a tele-operated robotic neurosurgery scenario, it would be possible to render forces at the fingertip level by conveying haptic cues solely through the cutaneous channel of the surgeon's finger pads. This approach would allow high transparency and high stability of the haptic feedback loop in a tele-operation system.
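The "six to 11 times smaller" claim in this abstract can be sanity-checked directly from the maximum force values it quotes. The sketch below does exactly that; it uses only the numbers reported above, and is illustrative, not a reproduction of the authors' analysis.

```python
# Ratio check between the maximum finger-pad contact forces and the
# maximum tool-tissue interaction force reported in the abstract.

max_thumb_force_n = 3.49    # N, upper end of thumb contact force range
max_finger_force_n = 6.6    # N, upper end of index/middle finger range
max_tissue_force_n = 0.59   # N, upper end of tool-tissue force range

ratio_low = max_thumb_force_n / max_tissue_force_n    # ~5.9, rounds to 6
ratio_high = max_finger_force_n / max_tissue_force_n  # ~11.2, rounds to 11

print(f"contact forces are roughly {ratio_low:.1f}x to {ratio_high:.1f}x larger")
```

Rounding the two ratios recovers the abstract's "six to 11 times" statement.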
43
Huang E, Wyles SM, Chern H, Kim E, O'Sullivan P. From novice to master surgeon: improving feedback with a descriptive approach to intraoperative assessment. Am J Surg 2015; 212:180-7. [PMID: 26611717 DOI: 10.1016/j.amjsurg.2015.04.026] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2014] [Revised: 03/04/2015] [Accepted: 04/27/2015] [Indexed: 11/16/2022]
Abstract
BACKGROUND A developmental and descriptive approach to assessing trainee intraoperative performance was explored. METHODS Semistructured interviews with 20 surgeon educators were recorded, transcribed, deidentified, and analyzed using a grounded theory approach to identify emergent themes. Two researchers independently coded the transcripts. Emergent themes were also compared to existing theories of skill acquisition. RESULTS Surgeon educators characterized intraoperative surgical performance as an integrated practice of multiple skill categories, including anticipating, planning for contingencies, monitoring progress, self-efficacy, and "working knowledge." Comments concerning progression through stages, broadly characterized as "technician," "anatomist," "anticipator," "strategist," and "executive," formed a narrative about each stage of development. CONCLUSIONS The developmental trajectory, with narrative, descriptive profiles of surgeons working toward mastery, provides a standardized vocabulary for communicating feedback while fostering reflection on trainee progress. Viewing surgical performance as integrated practice rather than a conglomerate of isolated skills enhances authentic assessment.
Affiliation(s)
- Emily Huang
- Department of Surgery, University of California, San Francisco, 513 Parnassus Avenue, S-321, San Francisco, CA, 94143-0470, USA.
- Susannah M Wyles
- Department of Surgery, University of California, San Francisco, 513 Parnassus Avenue, S-321, San Francisco, CA, 94143-0470, USA
- Hueylan Chern
- Department of Surgery, University of California, San Francisco, 513 Parnassus Avenue, S-321, San Francisco, CA, 94143-0470, USA
- Edward Kim
- Department of Surgery, University of California, San Francisco, 513 Parnassus Avenue, S-321, San Francisco, CA, 94143-0470, USA
- Patricia O'Sullivan
- Department of Medicine, University of California, San Francisco, San Francisco, CA, USA
44
Yoshimoto S, Kuroda Y, Imura M, Oshiro O, Sato K. Smart sensing of tool/tissue interaction by resistive coupling. Annu Int Conf IEEE Eng Med Biol Soc 2013:628-31. [PMID: 24109765 DOI: 10.1109/embc.2013.6609578] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Smart sensing of tool-tissue interaction is required to monitor the surgical task without disturbing tool manipulation. We proposed a new tactile sensing method that detects tool-tissue interaction with simple hardware via resistive coupling. The system consists of two electrodes, a bridge circuit, and a differential amplifier for robust sensing of the contact resistance between the tool and tissue. To evaluate the sensing method, we investigated the relationship between the sensor output and the deformation of a wet sponge sample during a retraction task. Based on model fitting of the deformation-output profile, we concluded that the proposed sensor provides sufficient reproducibility in this simple situation. Furthermore, we confirmed that the developed sensor works with a biological sample.
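The bridge-plus-differential-amplifier arrangement described above follows the standard Wheatstone-bridge pattern: contact resistance in one arm unbalances the bridge, and the differential amplifier reads the imbalance. The sketch below illustrates that relation; the supply voltage and resistor values are hypothetical placeholders, not values from the paper.

```python
def bridge_output(v_in, r1, r2, r3, r_contact):
    """Differential output of a Wheatstone bridge in which one arm is the
    tool-tissue contact resistance (r_contact). All values in SI units."""
    v_fixed = v_in * r2 / (r1 + r2)                 # fixed reference divider
    v_sense = v_in * r_contact / (r3 + r_contact)   # divider with the contact arm
    return v_sense - v_fixed                        # what the diff-amp sees

# Balanced bridge (no contact change): equal arms give zero output.
v_idle = bridge_output(5.0, 10e3, 10e3, 10e3, 10e3)

# Tool touches wet tissue: contact resistance drops, output swings negative.
v_touch = bridge_output(5.0, 10e3, 10e3, 10e3, 2e3)
```

The sign and magnitude of the swing are what the amplifier stage would scale into the measurable sensor output.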
45
46
Horeman T, Sun S, Tuijthof GJM, Jansen FW, Meijerink JWJHJ, Dankelman J. Design of a box trainer for objective assessment of technical skills in single-port surgery. JOURNAL OF SURGICAL EDUCATION 2015; 72:606-617. [PMID: 25890791 DOI: 10.1016/j.jsurg.2015.02.002] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2014] [Revised: 01/28/2015] [Accepted: 02/06/2015] [Indexed: 06/04/2023]
Abstract
OBJECTIVE Laparoscopic single-port (SP) surgery uses only a single entry point for all instruments. The SP approach has been applied in multiple laparoscopic disciplines owing to its improved cosmetic result. However, in SP surgery, instrument movements are further restricted, resulting in increased instrument collisions compared with standard multiport (MP) laparoscopy. METHODS Our goal was to develop a trainer that can quantitatively measure task time, force, and motion data during both MP and SP training to investigate the influence of instrument configuration on performance. Custom-made abdominal force sensors and accelerometers were integrated into a new training box that can be used in an SP and an MP configuration. This new box trainer measures forces, acceleration, and tilt angles during training of SP and MP laparoscopy. With the new trainer, 13 novices performed a tissue manipulation task to test whether significant differences exist between MP and SP in maximum abdominal force, maximum tissue manipulation force, maximum acceleration, and tilt angles of the handles. RESULTS Task time (SP: 145 s, SD = 103 vs MP: 61 s, SD = 16), maximum abdominal force (SP: 8.4 N, SD = 2.0 vs MP-left: 3.3 N, SD = 0.8 and MP-right: 5.8 N, SD = 2.1), tissue manipulation force (SP: 10.4 N, SD = 3.6 vs MP: 5.6 N, SD = 1.3), maximum acceleration (MP-left: 9 m/s², SD = 5 vs SP-left: 14 m/s², SD = 7), and tilt angles of the left handle were significantly higher in SP. CONCLUSIONS AND DISCUSSION This study shows that the new trainer can be used to identify the most important differences in instrument and tissue handling, an important step toward the assessment of the surgical skills needed for safe SP surgery based on force- and motion-based parameters.
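A rough feel for the SP vs MP differences reported above can be obtained from the summary statistics alone, e.g. with a Welch t-statistic. Treating the two conditions as independent groups of n = 13 is a simplifying assumption (the same 13 novices performed both, so the actual analysis was likely paired), so this is a sketch of the comparison, not a reproduction of the paper's statistics.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t-statistic computed from summary statistics
    (unequal-variance two-sample comparison)."""
    return (mean1 - mean2) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)

# Task time, SP vs MP, using the means and SDs quoted in the abstract
# and the stated cohort size of 13 novices per condition (assumption).
t_task_time = welch_t(145, 103, 13, 61, 16, 13)
print(f"t = {t_task_time:.2f}")  # roughly t = 2.91
```

Even under this crude independent-samples assumption, the task-time gap sits well above the conventional significance threshold, consistent with the abstract's claim.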
Affiliation(s)
- Tim Horeman
- Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands; Department of Orthopaedics, Academic Medical Center, Amsterdam, The Netherlands.
- Siyu Sun
- Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands
- Gabrielle J M Tuijthof
- Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands; Department of Orthopaedics, Academic Medical Center, Amsterdam, The Netherlands
- Frank William Jansen
- Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands; Department of Gynecology, Leiden Medical University, Leiden, The Netherlands
- Jenny Dankelman
- Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands
47
A study of crowdsourced segment-level surgical skill assessment using pairwise rankings. Int J Comput Assist Radiol Surg 2015; 10:1435-47. [PMID: 26133652 DOI: 10.1007/s11548-015-1238-6] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2014] [Accepted: 06/04/2015] [Indexed: 10/23/2022]
Abstract
PURPOSE Currently available methods for surgical skills assessment are either subjective or only provide global evaluations for the overall task. Such global evaluations do not inform trainees about where in the task they need to perform better. In this study, we investigated the reliability and validity of a framework to generate objective skill assessments for segments within a task, and compared assessments from our framework using crowdsourced segment ratings from surgically untrained individuals and expert surgeons against manually assigned global rating scores. METHODS Our framework includes (1) a binary classifier trained to generate preferences for pairs of task segments (i.e., given a pair of segments, specification of which one was performed better), (2) computing segment-level percentile scores based on the preferences, and (3) predicting task-level scores using the segment-level scores. We conducted a crowdsourcing user study to obtain manual preferences for segments within a suturing and knot-tying task from a crowd of surgically untrained individuals and a group of experts. We analyzed the inter-rater reliability of preferences obtained from the crowd and experts, and investigated the validity of task-level scores obtained using our framework. In addition, we compared accuracy of the crowd and expert preference classifiers, as well as the segment- and task-level scores obtained from the classifiers. RESULTS We observed moderate inter-rater reliability within the crowd (Fleiss' kappa, κ = 0.41) and experts (κ = 0.55). For both the crowd and experts, the accuracy of an automated classifier trained using all the task segments was above par as compared to the inter-rater agreement [crowd classifier 85% (SE 2%), expert classifier 89% (SE 3%)]. We predicted the overall global rating scores (GRS) for the task with a root-mean-squared error that was lower than one standard deviation of the ground-truth GRS. We observed a high correlation between segment-level scores (ρ ≥ 0.86) obtained using the crowd and expert preference classifiers. The task-level scores obtained using the crowd and expert preference classifiers were also highly correlated with each other (ρ ≥ 0.84), and statistically equivalent within a margin of two points (for a score ranging from 6 to 30). Our analyses, however, did not demonstrate statistical significance in equivalence of accuracy between the crowd and expert classifiers within a 10% margin. CONCLUSIONS Our framework implemented using crowdsourced pairwise comparisons leads to valid objective surgical skill assessment for segments within a task, and for the task overall. Crowdsourcing yields reliable pairwise comparisons of skill for segments within a task with high efficiency. Our framework may be deployed within surgical training programs for objective, automated, and standardized evaluation of technical skills.
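Step (2) of the framework above — turning pairwise preferences into segment-level percentile scores — can be sketched as below. The data structure and the win-fraction scoring rule are illustrative assumptions on my part, not the authors' exact formulation.

```python
from collections import defaultdict

def percentile_scores(preferences):
    """Turn pairwise preferences [(better_segment, worse_segment), ...] into
    a percentile score per segment, using each segment's win fraction."""
    wins = defaultdict(int)
    comparisons = defaultdict(int)
    for better, worse in preferences:
        wins[better] += 1
        comparisons[better] += 1
        comparisons[worse] += 1
    win_frac = {seg: wins[seg] / comparisons[seg] for seg in comparisons}
    # Percentile = share of segments whose win fraction is at or below this one's.
    ordered = sorted(win_frac.values())
    n = len(ordered)
    return {seg: 100.0 * sum(v <= f for v in ordered) / n
            for seg, f in win_frac.items()}

# Hypothetical preferences over three task segments A, B, C
# (each tuple reads: first segment was performed better than second).
prefs = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
scores = percentile_scores(prefs)
```

With these toy preferences, segment A (which wins every comparison) lands at the 100th percentile and C (which wins none) at the bottom; the paper's step (3) would then aggregate such segment scores into a task-level prediction.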
48
Dargar S, Brino C, Matthes K, Sankaranarayanan G, De S. Characterization of force and torque interactions during a simulated transgastric appendectomy procedure. IEEE Trans Biomed Eng 2014; 62:890-9. [PMID: 25398173 DOI: 10.1109/tbme.2014.2369956] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023]
Abstract
We have developed an instrumented endoscope grip handle equipped with a six-axis load cell and measured forces and torques during a simulated transgastric natural orifice translumenal endoscopic surgery appendectomy procedure performed in an EASIE-R ex vivo simulator. Data were collected from ten participating surgeons of varying degrees of expertise and analyzed to compute a set of six force and torque parameters for each coordinate axis for each of the nine tasks of the appendectomy procedure. The mean push/pull force was found to be 3.64 N (σ = 3.54 N) in the push direction, and the mean torque was 3.3 N · mm (σ = 38.6 N · mm) in the counterclockwise direction about the push/pull axis. Most interestingly, the force and torque data about the nondominant x and z axes showed a statistically significant difference (p < 0.05) between the expert and novice groups for five of the nine tasks. These data may be useful in developing surgical platforms, especially new haptic devices and simulation systems for emerging natural orifice procedures.
49
Beyer-Berjot L, Palter V, Grantcharov T, Aggarwal R. Advanced training in laparoscopic abdominal surgery: a systematic review. Surgery 2014; 156:676-88. [PMID: 24947643 DOI: 10.1016/j.surg.2014.04.044] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2014] [Accepted: 04/18/2014] [Indexed: 01/08/2023]
Abstract
BACKGROUND Simulation has spread widely over the last decade, especially in laparoscopic surgery, and training outside the operating room has proven its positive impact on basic skills during real laparoscopic procedures. Few articles dealing with advanced training in laparoscopic abdominal surgery, however, have been published. Such training may shorten learning curves in the operating room for junior surgeons with limited access to complex laparoscopic procedures as a primary operator. METHODS Two reviewers conducted a systematic search of MEDLINE, EMBASE, and The Cochrane Library using combinations of the following keywords: (teaching OR education OR computer simulation) AND laparoscopy AND (gastric OR stomach OR colorectal OR colon OR rectum OR small bowel OR liver OR spleen OR pancreas OR advanced surgery OR advanced procedure OR complex procedure). Additional studies were sought in the reference lists of all included articles. RESULTS Fifty-four original studies were retrieved. Their level of evidence was low: most were case series and one-fifth were purely descriptive, although there were eight randomized trials. Pig models and video trainers, as well as gastric and colorectal procedures, were the main subjects of assessment. The retrieved studies showed some encouraging trends in trainee satisfaction and improvement after training, but the improvements were demonstrated mainly on the training tool itself. Some tools have been proven construct-valid. CONCLUSION Higher-quality studies are required to appraise educational value in this field.
Affiliation(s)
- Laura Beyer-Berjot
- Division of Surgery, Department of Surgery and Cancer, St. Mary's Campus, Imperial College Healthcare NHS Trust, London, UK; Center for Surgical Teaching and Research (CERC), Aix-Marseille Université, Marseille, France.
- Vanessa Palter
- Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Teodor Grantcharov
- Department of Surgery, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada
- Rajesh Aggarwal
- Division of Surgery, Department of Surgery and Cancer, St. Mary's Campus, Imperial College Healthcare NHS Trust, London, UK; Department of Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
50
Pugh CM. Application of national testing standards to simulation-based assessments of clinical palpation skills. Mil Med 2014; 178:55-63. [PMID: 24084306 DOI: 10.7205/milmed-d-13-00215] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
Abstract
With the advent of simulation technology, several types of data acquisition methods have been used to capture hands-on clinical performance. Motion sensors, pressure sensors, and tool-tip interaction software are a few of the broad categories of approaches that have been used in simulation-based assessments. The purpose of this article is to present a focused review of 3 sensor-enabled simulations that are currently being used for patient-centered assessments of clinical palpation skills. The first part of this article provides a review of technology components, capabilities, and metrics. The second part provides a detailed discussion regarding validity evidence and implications using the Standards for Educational and Psychological Testing as an organizational and evaluative framework. Special considerations are given to content domain and creation of clinical scenarios from a developer's perspective. The broader relationship of this work to the science of touch is also considered.
Affiliation(s)
- Carla M Pugh
- Department of Surgery, University of Wisconsin, 600 Highland Avenue-CSC 785B, Madison, WI 53792