1. Silva C, Nascimento D, Dantas GG, Fonseca K, Hespanhol L, Rego A, Araújo-Filho I. Impact of artificial intelligence on the training of general surgeons of the future: a scoping review of the advances and challenges. Acta Cir Bras 2024;39:e396224. PMID: 39319900; PMCID: PMC11414521; DOI: 10.1590/acb396224.
Abstract
PURPOSE To explore artificial intelligence's impact on surgical education, highlighting its advantages and challenges. METHODS A comprehensive search across databases such as PubMed, Scopus, Scientific Electronic Library Online (SciELO), Embase, Web of Science, and Google Scholar was conducted to compile relevant studies. RESULTS Artificial intelligence offers several advantages in surgical training. It enables highly realistic simulation environments for the safe practice of complex procedures. Artificial intelligence provides personalized real-time feedback, improving trainees' skills. It efficiently processes clinical data, enhancing diagnostics and surgical planning. Artificial intelligence-assisted surgeries promise precision and minimally invasive procedures. Challenges include data security, resistance to artificial intelligence adoption, and ethical considerations. CONCLUSIONS Stricter policies and regulatory compliance are needed for data privacy. Addressing surgeons' and educators' reluctance to embrace artificial intelligence is crucial. Integrating artificial intelligence into curricula and providing ongoing training are vital. Ethical, bioethical, and legal aspects surrounding artificial intelligence demand attention. Establishing clear ethical guidelines, ensuring transparency, and implementing supervision and accountability are essential. As artificial intelligence evolves in surgical training, research and development remain crucial. Future studies should explore artificial intelligence-driven personalized training and monitor ethical and legal regulations. In summary, artificial intelligence is shaping the future of general surgeons, offering advanced simulations, personalized feedback, and improved patient care. However, addressing data security, adoption resistance, and ethical concerns is vital. Adapting curricula and providing continuous training are essential to maximize artificial intelligence's potential, promoting ethical and safe surgery.
Affiliation(s)
- Caroliny Silva
- Universidade Federal do Rio Grande do Norte – General Surgery Department – Natal (RN) – Brazil
- Daniel Nascimento
- Universidade Federal do Rio Grande do Norte – General Surgery Department – Natal (RN) – Brazil
- Gabriela Gomes Dantas
- Universidade Federal do Rio Grande do Norte – General Surgery Department – Natal (RN) – Brazil
- Karoline Fonseca
- Universidade Federal do Rio Grande do Norte – General Surgery Department – Natal (RN) – Brazil
- Larissa Hespanhol
- Universidade Federal de Campina Grande – General Surgery Department – Campina Grande (PB) – Brazil
- Amália Rego
- Liga Contra o Câncer – Institute of Teaching, Research, and Innovation – Natal (RN) – Brazil
- Irami Araújo-Filho
- Universidade Federal do Rio Grande do Norte – General Surgery Department – Natal (RN) – Brazil
2. Kil I, Eidt JF, Singapogu RB, Groff RE. Assessment of Open Surgery Suturing Skill: Image-based Metrics Using Computer Vision. J Surg Educ 2024;81:983-993. PMID: 38749810; PMCID: PMC11181522; DOI: 10.1016/j.jsurg.2024.03.020.
Abstract
OBJECTIVE This paper presents a computer vision algorithm for extraction of image-based metrics for suturing skill assessment and the corresponding results from an experimental study of resident and attending surgeons. DESIGN A suturing simulator that adapts the radial suturing task from the Fundamentals of Vascular Surgery (FVS) skills assessment is used to collect data. The simulator includes a camera positioned under the suturing membrane, which records needle and thread movement during the suturing task. A computer vision algorithm processes the video data and extracts objective metrics inspired by expert surgeons' recommended best practice, to "follow the curvature of the needle." PARTICIPANTS AND RESULTS Experimental data from a study involving subjects with various levels of suturing expertise (attending surgeons and surgery residents) are presented. Analysis shows that attendings and residents had statistically different performance on 6 of 9 image-based metrics, including the four new metrics introduced in this paper: Needle Tip Path Length, Needle Swept Area, Needle Tip Area and Needle Sway Length. CONCLUSION AND SIGNIFICANCE These image-based process metrics may be represented graphically in a manner conducive to training. The results demonstrate the potential of image-based metrics for assessment and training of suturing skill in open surgery.
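To illustrate how image-based process metrics of this kind are computed, the sketch below derives a needle-tip path length from per-frame tracking output. The function name, pixel units, and the bounding-box proxy for an area-style metric are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def path_metrics(tip_xy: np.ndarray) -> dict:
    """Simple image-based process metrics from a tracked needle-tip
    trajectory given as an (N, 2) array of per-frame pixel coordinates."""
    steps = np.diff(tip_xy, axis=0)                    # frame-to-frame displacement
    path_length = np.linalg.norm(steps, axis=1).sum()  # cf. Needle Tip Path Length
    # Bounding-box area of the tip trajectory, a crude stand-in for an
    # area-style metric; the paper's definitions may differ.
    bbox_area = np.ptp(tip_xy[:, 0]) * np.ptp(tip_xy[:, 1])
    return {"path_length_px": float(path_length), "bbox_area_px2": float(bbox_area)}

# Example: a noisy semicircular pass of the needle tip
t = np.linspace(0, np.pi, 100)
tip = np.stack([50 * np.cos(t), 50 * np.sin(t)], axis=1) + np.random.randn(100, 2)
print(path_metrics(tip))
```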
Affiliation(s)
- Irfan Kil
- Department of Electrical & Computer Engineering, Clemson University, Clemson, South Carolina
- John F Eidt
- Division of Vascular Surgery, Baylor Scott & White Heart and Vascular Hospital, Dallas, Texas
- Richard E Groff
- Department of Electrical & Computer Engineering, Clemson University, Clemson, South Carolina
3. Gong Y, Mat Husin H, Erol E, Ortenzi V, Kuchenbecker KJ. AiroTouch: enhancing telerobotic assembly through naturalistic haptic feedback of tool vibrations. Front Robot AI 2024;11:1355205. PMID: 38835928; PMCID: PMC11148450; DOI: 10.3389/frobt.2024.1355205.
Abstract
Teleoperation allows workers to safely control powerful construction machines; however, its primary reliance on visual feedback limits the operator's efficiency in situations with stiff contact or poor visibility, hindering its use for assembly of pre-fabricated building components. Reliable, economical, and easy-to-implement haptic feedback could fill this perception gap and facilitate the broader use of robots in construction and other application areas. Thus, we adapted widely available commercial audio equipment to create AiroTouch, a naturalistic haptic feedback system that measures the vibration experienced by each robot tool and enables the operator to feel a scaled version of this vibration in real time. Accurate haptic transmission was achieved by optimizing the positions of the system's off-the-shelf accelerometers and voice-coil actuators. A study was conducted to evaluate how adding this naturalistic type of vibrotactile feedback affects the operator during telerobotic assembly. Thirty participants used a bimanual dexterous teleoperation system (Intuitive da Vinci Si) to build a small rigid structure under three randomly ordered haptic feedback conditions: no vibrations, one-axis vibrations, and summed three-axis vibrations. The results show that users took advantage of both tested versions of the naturalistic haptic feedback after gaining some experience with the task, causing significantly lower vibrations and forces in the second trial. Subjective responses indicate that haptic feedback increased the realism of the interaction and reduced the perceived task duration, task difficulty, and fatigue. As hypothesized, higher haptic feedback gains were chosen by users with larger hands and for the smaller sensed vibrations in the one-axis condition. These results elucidate important details for effective implementation of naturalistic vibrotactile feedback and demonstrate that our accessible audio-based approach could enhance user performance and experience during telerobotic assembly in construction and other application domains.
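The core signal path described here — measure tool acceleration, optionally sum the axes, scale by a user-chosen gain, and drive a voice-coil actuator — can be sketched in a few lines. The high-pass filter, cutoff frequency, and sample rate below are illustrative assumptions; the system's actual signal conditioning is not specified in this abstract.

```python
import numpy as np

def drive_signal(accel_xyz: np.ndarray, gain: float, fs: float = 44100.0) -> np.ndarray:
    """Collapse three-axis tool acceleration (N x 3) into a single actuator
    waveform: high-pass each axis to remove posture/gravity drift, sum the
    axes, and scale by the user-selected gain."""
    fc = 20.0                                    # assumed high-pass cutoff [Hz]
    alpha = 1.0 / (1.0 + 2.0 * np.pi * fc / fs)
    hp = np.empty_like(accel_xyz)
    prev_x = np.zeros(3)
    prev_y = np.zeros(3)
    for i, x in enumerate(accel_xyz):            # one-pole high-pass per axis
        y = alpha * (prev_y + x - prev_x)
        hp[i] = y
        prev_x, prev_y = x, y
    return gain * hp.sum(axis=1)                 # "summed three-axis" condition

vib = drive_signal(np.random.randn(1000, 3), gain=0.5)
print(vib.shape)
```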
Affiliation(s)
- Yijie Gong
- Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Haliza Mat Husin
- Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Ecda Erol
- Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Valerio Ortenzi
- Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Katherine J Kuchenbecker
- Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Mechanical Engineering, University of Stuttgart, Stuttgart, Germany
4. Boal MWE, Anastasiou D, Tesfai F, Ghamrawi W, Mazomenos E, Curtis N, Collins JW, Sridhar A, Kelly J, Stoyanov D, Francis NK. Evaluation of objective tools and artificial intelligence in robotic surgery technical skills assessment: a systematic review. Br J Surg 2024;111:znad331. PMID: 37951600; PMCID: PMC10771126; DOI: 10.1093/bjs/znad331.
Abstract
BACKGROUND There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula. METHODS A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical, laparoscopy or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre of Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool analysed artificial intelligence (AI) studies. The Modified Downs-Black checklist was used to assess risk of bias. RESULTS Two hundred and forty-seven studies were analysed, identifying: 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs and 53 AI studies. Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods and 1 non-simulator APM reached LoR 2. AI models estimated outcomes (skill or clinical), demonstrating higher accuracy in the laboratory, where 60 per cent of methods reported accuracies over 90 per cent, than in real surgery, where accuracies ranged from 67 to 100 per cent. CONCLUSIONS Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.
Affiliation(s)
- Matthew W E Boal
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Dimitrios Anastasiou
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Freweini Tesfai
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK
- Walaa Ghamrawi
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Evangelos Mazomenos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Nathan Curtis
- Department of General Surgery, Dorset County Hospital NHS Foundation Trust, Dorchester, UK
- Justin W Collins
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Ashwin Sridhar
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- John Kelly
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK
- Computer Science, UCL, London, UK
- Nader K Francis
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Yeovil District Hospital, Somerset Foundation NHS Trust, Yeovil, Somerset, UK
5. Xu J, Anastasiou D, Booker J, Burton OE, Layard Horsfall H, Salvadores Fernandez C, Xue Y, Stoyanov D, Tiwari MK, Marcus HJ, Mazomenos EB. A Deep Learning Approach to Classify Surgical Skill in Microsurgery Using Force Data from a Novel Sensorised Surgical Glove. Sensors (Basel) 2023;23:8947. PMID: 37960645; PMCID: PMC10650455; DOI: 10.3390/s23218947.
Abstract
Microsurgery serves as the foundation for numerous operative procedures. Given its highly technical nature, the assessment of surgical skill becomes an essential component of clinical practice and microsurgery education. The interaction forces between surgical tools and tissues play a pivotal role in surgical success, making them a valuable indicator of surgical skill. In this study, we employ six distinct deep learning architectures (LSTM, GRU, Bi-LSTM, CLDNN, TCN, Transformer) specifically designed for the classification of surgical skill levels. We use force data obtained from a novel sensorized surgical glove utilized during a microsurgical task. To enhance the performance of our models, we propose six data augmentation techniques. The proposed frameworks are accompanied by a comprehensive analysis, both quantitative and qualitative, including experiments conducted with two cross-validation schemes and interpretable visualizations of the network's decision-making process. Our experimental results show that CLDNN and TCN are the top-performing models, achieving impressive accuracy rates of 96.16% and 97.45%, respectively. This not only underscores the effectiveness of our proposed architectures, but also serves as compelling evidence that the force data obtained through the sensorized surgical glove contains valuable information regarding surgical skill.
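As a sketch of what a recurrent classifier over glove-force time series looks like, the PyTorch model below maps a (batch, time, channels) force signal to a skill label. It is deliberately minimal; the paper's six architectures, layer sizes, and augmentation pipeline are not reproduced here.

```python
import torch
import torch.nn as nn

class ForceSkillLSTM(nn.Module):
    """Minimal LSTM classifier for force time series shaped (batch, time, channels)."""
    def __init__(self, in_channels: int = 1, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(in_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)            # (batch, time, hidden)
        return self.head(out[:, -1])     # classify from the final time step

# Toy batch: 8 trials of 500 single-channel force samples
model = ForceSkillLSTM()
print(model(torch.randn(8, 500, 1)).shape)  # torch.Size([8, 2])
```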
Affiliation(s)
- Jialang Xu
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- Dimitrios Anastasiou
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- James Booker
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Oliver E. Burton
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Hugo Layard Horsfall
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Carmen Salvadores Fernandez
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Nanoengineered Systems Laboratory, UCL Mechanical Engineering, University College London, London WC1E 7JE, UK
- Yang Xue
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Nanoengineered Systems Laboratory, UCL Mechanical Engineering, University College London, London WC1E 7JE, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Department of Computer Science, University College London, London WC1E 6BT, UK
- Manish K. Tiwari
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Nanoengineered Systems Laboratory, UCL Mechanical Engineering, University College London, London WC1E 7JE, UK
- Hani J. Marcus
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Evangelos B. Mazomenos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
6. Pedrett R, Mascagni P, Beldi G, Padoy N, Lavanchy JL. Technical skill assessment in minimally invasive surgery using artificial intelligence: a systematic review. Surg Endosc 2023;37:7412-7424. PMID: 37584774; PMCID: PMC10520175; DOI: 10.1007/s00464-023-10335-z.
Abstract
BACKGROUND Technical skill assessment in surgery relies on expert opinion. Therefore, it is time-consuming, costly, and often lacks objectivity. Analysis of intraoperative data by artificial intelligence (AI) has the potential for automated technical skill assessment. The aim of this systematic review was to analyze the performance, external validity, and generalizability of AI models for technical skill assessment in minimally invasive surgery. METHODS A systematic search of Medline, Embase, Web of Science, and IEEE Xplore was performed to identify original articles reporting the use of AI in the assessment of technical skill in minimally invasive surgery. Risk of bias (RoB) and quality of the included studies were analyzed according to Quality Assessment of Diagnostic Accuracy Studies criteria and the modified Joanna Briggs Institute checklists, respectively. Findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. RESULTS In total, 1958 articles were identified, 50 articles met eligibility criteria and were analyzed. Motion data extracted from surgical videos (n = 25) or kinematic data from robotic systems or sensors (n = 22) were the most frequent input data for AI. Most studies used deep learning (n = 34) and predicted technical skills using an ordinal assessment scale (n = 36) with good accuracies in simulated settings. However, all proposed models were in development stage, only 4 studies were externally validated and 8 showed a low RoB. CONCLUSION AI showed good performance in technical skill assessment in minimally invasive surgery. However, models often lacked external validity and generalizability. Therefore, models should be benchmarked using predefined performance metrics and tested in clinical implementation studies.
Affiliation(s)
- Romina Pedrett
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Pietro Mascagni
- IHU Strasbourg, Strasbourg, France
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Guido Beldi
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Nicolas Padoy
- IHU Strasbourg, Strasbourg, France
- ICube, CNRS, University of Strasbourg, Strasbourg, France
- Joël L Lavanchy
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- IHU Strasbourg, Strasbourg, France
- University Digestive Health Care Center Basel – Clarunis, PO Box, 4002 Basel, Switzerland
7. Baghdadi A, Lama S, Singh R, Sutherland GR. Tool-tissue force segmentation and pattern recognition for evaluating neurosurgical performance. Sci Rep 2023;13:9591. PMID: 37311965; DOI: 10.1038/s41598-023-36702-3.
Abstract
Surgical data quantification and comprehension expose subtle patterns in tasks and performance. Enabling surgical devices with artificial intelligence provides surgeons with personalized and objective performance evaluation: a virtual surgical assist. Here we present machine learning models developed for analyzing surgical finesse using tool-tissue interaction force data in surgical dissection obtained from a sensorized bipolar forceps. Data modeling was performed using data from 50 neurosurgery procedures that involved elective surgical treatment for various intracranial pathologies. Data collection was conducted by 13 surgeons of varying experience levels using the sensorized bipolar forceps, the SmartForceps System. The machine learning algorithms were designed and implemented for three primary purposes: force profile segmentation to obtain active periods of tool utilization using T-U-Net, surgical skill classification into Expert and Novice, and surgical task recognition into two primary categories, Coagulation versus non-Coagulation, using the FTFIT deep learning architecture. The final report to the surgeon was a dashboard containing recognized segments of force application categorized into skill and task classes, along with performance-metric charts compared against expert-level surgeons. Operating room recordings totaling more than 161 h and containing approximately 3.6 K periods of tool operation were utilized. The modeling resulted in a weighted F1-score of 0.95 and AUC of 0.99 for force profile segmentation using T-U-Net, a weighted F1-score of 0.71 and AUC of 0.81 for surgical skill classification, and a weighted F1-score of 0.82 and AUC of 0.89 for surgical task recognition using a subset of hand-crafted features augmented to the FTFIT neural network. This study delivers a novel machine learning module in the cloud, enabling an end-to-end platform for intraoperative surgical performance monitoring and evaluation. Accessed through a secure application for professional connectivity, the platform establishes a paradigm for data-driven learning.
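For readers unfamiliar with the reported metrics, the snippet below shows how a weighted F1-score and AUC are conventionally computed for a binary Expert/Novice classifier with scikit-learn; the labels and probabilities are hypothetical, not the study's data.

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])                   # 1 = Expert, 0 = Novice
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.8, 0.3, 0.1])   # model scores
y_pred = (y_prob >= 0.5).astype(int)                           # thresholded labels

print("Weighted F1:", f1_score(y_true, y_pred, average="weighted"))
print("AUC:", roc_auc_score(y_true, y_prob))
```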
Affiliation(s)
- Amir Baghdadi
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Sanju Lama
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Rahul Singh
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Garnette R Sutherland
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
8. Chua Z, Okamura AM. A Modular 3-Degrees-of-Freedom Force Sensor for Robot-Assisted Minimally Invasive Surgery Research. Sensors (Basel) 2023;23:5230. PMID: 37299958; DOI: 10.3390/s23115230.
Abstract
Effective force modulation during tissue manipulation is important for ensuring safe, robot-assisted, minimally invasive surgery (RMIS). Strict requirements for in vivo applications have led to prior sensor designs that trade off ease of manufacture and integration against force measurement accuracy along the tool axis. Due to this trade-off, there are no commercial, off-the-shelf, 3-degrees-of-freedom (3DoF) force sensors for RMIS available to researchers. This makes it challenging to develop new approaches to indirect sensing and haptic feedback for bimanual telesurgical manipulation. We present a modular 3DoF force sensor that integrates easily with an existing RMIS tool. We achieve this by relaxing biocompatibility and sterilizability requirements and by using commercial load cells and common electromechanical fabrication techniques. The sensor has a range of ±5 N axially and ±3 N laterally, with errors below 0.15 N and maximum errors below 11% of the sensing range in all directions. During telemanipulation, a pair of jaw-mounted sensors achieved average errors below 0.15 N in all directions and an average grip-force error of 0.156 N. The sensor is intended for bimanual haptic feedback and robotic force control in delicate tissue telemanipulation. As an open-source design, the sensors can be adapted to suit other non-RMIS robotic applications.
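A common way to turn raw load-cell readings into calibrated 3-D forces is a least-squares calibration against known reference loads; the sketch below illustrates that general procedure with synthetic data. It is an assumed approach shown for illustration, not the authors' published calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
V_raw = rng.standard_normal((20, 3))             # raw readings for 20 reference loads
true_C = np.array([[5.0, 0.1, 0.0],              # hypothetical volts-to-newtons map
                   [0.0, 4.8, 0.2],
                   [0.1, 0.0, 9.5]])
F_ref = V_raw @ true_C + 0.05 * rng.standard_normal((20, 3))   # measured loads

C, *_ = np.linalg.lstsq(V_raw, F_ref, rcond=None)  # least-squares calibration matrix
rmse = np.sqrt(((V_raw @ C - F_ref) ** 2).mean(axis=0))
print("Per-axis residual RMSE [N]:", rmse)
```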
Affiliation(s)
- Zonghe Chua
- Department of Electrical, Computer and Systems Engineering, Case Western Reserve University, 10900 Euclid Avenue, Glennan Building 514A, Cleveland, OH 44106, USA
- Allison M Okamura
- Department of Mechanical Engineering, Stanford University, Stanford, CA 94305, USA
9. Brown JD, Kuchenbecker KJ. Effects of automated skill assessment on robotic surgery training. Int J Med Robot 2023;19:e2492. PMID: 36524325; DOI: 10.1002/rcs.2492.
Abstract
BACKGROUND Several automated skill-assessment approaches have been proposed for robotic surgery, but their utility is not well understood. This article investigates the effects of one machine-learning-based skill-assessment approach on psychomotor skill development in robotic surgery training. METHODS N = 29 trainees (medical students and residents) with no robotic surgery experience performed five trials of inanimate peg transfer with an Intuitive Surgical da Vinci Standard robot. Half of the participants received no post-trial feedback. The other half received automatically calculated scores from five Global Evaluative Assessment of Robotic Skill domains post-trial. RESULTS There were no significant differences between the groups regarding overall improvement or skill improvement rate. However, participants who received post-trial feedback rated their overall performance improvement significantly lower than participants who did not receive feedback. CONCLUSIONS These findings indicate that automated skill evaluation systems might improve trainee self-awareness but not accelerate early stage psychomotor skill development in robotic surgery training.
Affiliation(s)
- Jeremy D Brown
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Katherine J Kuchenbecker
- Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
10. Autonomous sequential surgical skills assessment for the peg transfer task in a laparoscopic box-trainer system with three cameras. Robotica 2023. DOI: 10.1017/S0263574723000218.
Abstract
In laparoscopic surgery, surgeons should develop several manual laparoscopic skills using a low-cost box trainer before carrying out real operative procedures. The Fundamentals of Laparoscopic Surgery (FLS) program was developed to assess the fundamental knowledge and surgical skills required for basic laparoscopic surgery. The peg transfer task is a hands-on exam in the FLS program that helps a trainee understand the relative minimum amount of grasping force necessary to move the pegs from one place to another without dropping them. In this paper, an autonomous, sequential assessment algorithm based on deep learning, a multi-object detection method, and several sequential If-Then conditional statements has been developed to monitor each step of a surgeon's performance. Images from three different cameras are used to assess whether the surgeon executes the peg transfer task correctly and to immediately display a notification of any errors on the monitor. This algorithm improves the performance of a laparoscopic box-trainer system using top, side, and front cameras and removes the need for any human monitoring during a peg transfer task. The developed algorithm can detect each object and its status during a peg transfer task and notifies the resident of the correct or failed outcome. In addition, this system can correctly determine the peg transfer execution time, and the move, carry, and dropped states for each object, from the top-, side-, and front-mounted cameras. Based on the experimental results, the proposed surgical skill assessment system can identify each object with high fidelity, and the train-validation total loss for the single-shot detector (SSD) ResNet50 v1 was about 0.05. Also, the mean average precision (mAP) and Intersection over Union (IoU) of this detection system were 0.741 and 0.75, respectively. This project is a collaborative research effort between the Department of Electrical and Computer Engineering and the Department of Surgery at Western Michigan University.
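The reported IoU of 0.75 is the standard overlap measure between detected and ground-truth boxes; a reference implementation is given below to make the definition concrete (the example boxes are hypothetical).

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detected peg versus its ground-truth box
print(iou((10, 10, 50, 50), (20, 20, 60, 60)))  # ~0.391
```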
11. An explainable machine learning method for assessing surgical skill in liposuction surgery. Int J Comput Assist Radiol Surg 2022;17:2325-2336. PMID: 36167953; DOI: 10.1007/s11548-022-02739-4.
Abstract
PURPOSE Surgical skill assessment has received growing interest in surgery training and quality control due to its essential role in competency assessment and trainee feedback. However, current assessment methods rarely provide corresponding feedback guidance alongside the ability evaluation. We aim to validate an explainable surgical skill assessment method that automatically evaluates trainee performance of liposuction surgery and provides visual postoperative and real-time feedback. METHODS In this study, machine learning using a model-agnostic interpretable method based on stroke segmentation was introduced to objectively evaluate surgical skills. We evaluated the method on liposuction surgery datasets that consisted of motion and force data for classification tasks. RESULTS Our classifier achieved promising accuracy on clinical and imitation liposuction surgery models, ranging from 89 to 94%. With the help of SHapley Additive exPlanations (SHAP), we explore the underlying patterns of liposuction operation across surgeons of varying experience and provide real-time, model-based feedback to surgeons with weaker skills. CONCLUSION Our results demonstrate the strength of explainable machine learning methods in objective surgical skill assessment. We believe that the interpretable machine learning model proposed in this article can improve the evaluation and training of liposuction surgery and provide objective assessment and training guidance for other surgeries.
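The sketch below shows the usual SHAP workflow on hypothetical stroke-level features (mean force, stroke speed, and duration are assumed feature names); the paper's model-agnostic explainer and dataset are not reproduced, and a tree ensemble is used here purely for illustration.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.random((200, 3))                         # [mean_force, stroke_speed, duration]
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # surrogate "experienced" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)                    # per-stroke, per-feature attributions
sv = sv[1] if isinstance(sv, list) else sv       # some shap versions return a per-class list
print(np.abs(sv).mean(axis=0))                   # global feature importance
```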
12. Kil I, Eidt JF, Groff RE, Singapogu RB. Assessment of open surgery suturing skill: Simulator platform, force-based, and motion-based metrics. Front Med (Lausanne) 2022;9:897219. PMID: 36111107; PMCID: PMC9468321; DOI: 10.3389/fmed.2022.897219.
Abstract
Objective This paper focuses on simulator-based assessment of open surgery suturing skill. We introduce a new surgical simulator designed to collect synchronized force, motion, video and touch data during a radial suturing task adapted from the Fundamentals of Vascular Surgery (FVS) skill assessment. The synchronized data is analyzed to extract objective metrics for suturing skill assessment. Methods The simulator has a camera positioned underneath the suturing membrane, enabling visual tracking of the needle during suturing. Needle tracking data enables extraction of meaningful metrics related to both the process and the product of the suturing task. To better simulate surgical conditions, the height of the system and the depth of the membrane are both adjustable. Metrics for assessment of suturing skill based on force/torque, motion, and physical contact are presented. Experimental data are presented from a study comparing attending surgeons and surgery residents. Results Analysis shows force metrics (absolute maximum force/torque in z-direction), motion metrics (yaw, pitch, roll), physical contact metric, and image-enabled force metrics (orthogonal and tangential forces) are found to be statistically significant in differentiating suturing skill between attendings and residents. Conclusion and significance The results suggest that this simulator and accompanying metrics could serve as a useful tool for assessing and teaching open surgery suturing skill.
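One of the image-enabled force metrics mentioned here splits the measured force into components along and orthogonal to the needle's direction of travel; a generic version of that decomposition is sketched below (the direction vector and force values are hypothetical).

```python
import numpy as np

def decompose_force(f: np.ndarray, needle_dir: np.ndarray):
    """Split a 3-D tool-tissue force into the component tangential to the
    needle's direction of travel and the orthogonal remainder."""
    d = needle_dir / np.linalg.norm(needle_dir)
    f_tan = np.dot(f, d) * d        # along the needle's curve
    f_orth = f - f_tan              # tissue-deforming component
    return np.linalg.norm(f_tan), np.linalg.norm(f_orth)

tangential, orthogonal = decompose_force(np.array([0.4, 0.1, 1.2]),
                                         np.array([0.0, 0.0, 1.0]))
print(tangential, orthogonal)
```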
Affiliation(s)
- Irfan Kil
- Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, United States
- John F. Eidt
- Division of Vascular Surgery, Baylor Scott & White Heart and Vascular Hospital, Dallas, TX, United States
- Richard E. Groff
- Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, United States
- Ravikiran B. Singapogu
- Department of Bioengineering, Clemson University, Clemson, SC, United States
- Correspondence: Ravikiran B. Singapogu
13. Miura S, Kaneko T, Kawamura K, Kobayashi Y, Fujie MG. Brain activation measurement for motion gain decision of surgical endoscope manipulation. Int J Med Robot 2022;18:e2371. DOI: 10.1002/rcs.2371.
Affiliation(s)
- Satoshi Miura
- Department of Mechanical Engineering, Tokyo Institute of Technology, Tokyo, Japan
- Taisei Kaneko
- Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan
- Kazuya Kawamura
- Center for Frontier Medical Engineering, Chiba University, Chiba, Japan
- Yo Kobayashi
- Healthcare Robotics Institute, Future Robotics Organization, Waseda University, Tokyo, Japan
- Masakatsu G. Fujie
- Healthcare Robotics Institute, Future Robotics Organization, Waseda University, Tokyo, Japan
14. Lam K, Chen J, Wang Z, Iqbal FM, Darzi A, Lo B, Purkayastha S, Kinross JM. Machine learning for technical skill assessment in surgery: a systematic review. NPJ Digit Med 2022;5:24. PMID: 35241760; PMCID: PMC8894462; DOI: 10.1038/s41746-022-00566-0.
Abstract
Accurate and objective performance assessment is essential for both trainees and certified surgeons. However, existing methods can be time-consuming, labor-intensive, and subject to bias. Machine learning (ML) has the potential to provide rapid, automated, and reproducible feedback without the need for expert reviewers. We aimed to systematically review the literature and determine the ML techniques used for technical surgical skill assessment and identify challenges and barriers in the field. A systematic literature search, in accordance with the PRISMA statement, was performed to identify studies detailing the use of ML for technical skill assessment in surgery. Of the 1896 studies that were retrieved, 66 studies were included. The most common ML methods used were Hidden Markov Models (HMM, 14/66), Support Vector Machines (SVM, 17/66), and Artificial Neural Networks (ANN, 17/66). 40/66 studies used kinematic data, 19/66 used video or image data, and 7/66 used both. Studies assessed the performance of benchtop tasks (48/66), simulator tasks (10/66), and real-life surgery (8/66). Accuracy rates of over 80% were achieved, although tasks and participants varied between studies. Barriers to progress in the field included a focus on basic tasks, lack of standardization between studies, and lack of datasets. ML has the potential to produce accurate and objective surgical skill assessment through the use of methods including HMM, SVM, and ANN. Future ML-based assessment tools should move beyond the assessment of basic tasks and towards real-life surgery and provide interpretable feedback with clinical value for the surgeon. PROSPERO: CRD42020226071.
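As a concrete instance of the SVM pipelines this review counts, the snippet below cross-validates an RBF-kernel SVM on hypothetical per-trial kinematic features (path length, completion time, and smoothness are assumed feature names, and the labels are placeholders).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.random((60, 3))                  # [path_length, completion_time, smoothness]
y = rng.integers(0, 2, 60)               # 0 = novice, 1 = expert (placeholder labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())  # ~chance level on random data
```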
Affiliation(s)
- Kyle Lam
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- Junhong Chen
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- Zeyu Wang
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- Fahad M Iqbal
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- Ara Darzi
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- Benny Lo
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- Sanjay Purkayastha
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- James M Kinross
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
15. Bilgic E, Gorgy A, Yang A, Cwintal M, Ranjbar H, Kahla K, Reddy D, Li K, Ozturk H, Zimmermann E, Quaiattini A, Abbasgholizadeh-Rahimi S, Poenaru D, Harley JM. Exploring the roles of artificial intelligence in surgical education: A scoping review. Am J Surg 2021;224:205-216. PMID: 34865736; DOI: 10.1016/j.amjsurg.2021.11.023.
Abstract
BACKGROUND Technology-enhanced teaching and learning, including Artificial Intelligence (AI) applications, has started to evolve in surgical education. Hence, the purpose of this scoping review is to explore the current and future roles of AI in surgical education. METHODS Nine bibliographic databases were searched from January 2010 to January 2021. Full-text articles were included if they focused on AI in surgical education. RESULTS Out of 14,008 unique sources of evidence, 93 were included. Out of 93, 84 were conducted in the simulation setting, and 89 targeted technical skills. Fifty-six studies focused on skills assessment/classification, and 36 used multiple AI techniques. Also, increasing sample size, having balanced data, and using AI to provide feedback were major future directions mentioned by authors. CONCLUSIONS AI can help optimize the education of trainees and our results can help educators and researchers identify areas that need further investigation.
Affiliation(s)
- Elif Bilgic
- Department of Surgery, McGill University, Montreal, Quebec, Canada
- Andrew Gorgy
- Department of Surgery, McGill University, Montreal, Quebec, Canada
- Alison Yang
- Department of Surgery, McGill University, Montreal, Quebec, Canada
- Michelle Cwintal
- Department of Surgery, McGill University, Montreal, Quebec, Canada
- Hamed Ranjbar
- Department of Surgery, McGill University, Montreal, Quebec, Canada
- Kalin Kahla
- Department of Surgery, McGill University, Montreal, Quebec, Canada
- Dheeksha Reddy
- Department of Surgery, McGill University, Montreal, Quebec, Canada
- Kexin Li
- Department of Surgery, McGill University, Montreal, Quebec, Canada
- Helin Ozturk
- Department of Surgery, McGill University, Montreal, Quebec, Canada
- Eric Zimmermann
- Department of Surgery, McGill University, Montreal, Quebec, Canada
- Andrea Quaiattini
- Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, Canada; Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada
- Samira Abbasgholizadeh-Rahimi
- Department of Family Medicine, McGill University, Montreal, Quebec, Canada; Department of Electrical and Computer Engineering, McGill University, Montreal, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Canada; Mila Quebec AI Institute, Montreal, Canada
- Dan Poenaru
- Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada; Department of Pediatric Surgery, McGill University, Canada
- Jason M Harley
- Department of Surgery, McGill University, Montreal, Quebec, Canada; Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada; Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada; Steinberg Centre for Simulation and Interactive Learning, McGill University, Montreal, Quebec, Canada
16. Motaharifar M, Norouzzadeh A, Abdi P, Iranfar A, Lotfi F, Moshiri B, Lashay A, Mohammadi SF, Taghirad HD. Applications of Haptic Technology, Virtual Reality, and Artificial Intelligence in Medical Training During the COVID-19 Pandemic. Front Robot AI 2021;8:612949. PMID: 34476241; PMCID: PMC8407078; DOI: 10.3389/frobt.2021.612949.
Abstract
This paper examines how haptic technology, virtual reality, and artificial intelligence help to reduce physical contact in medical training during the COVID-19 pandemic. Notably, any mistake made by trainees during the education process might lead to undesired complications for the patient. Therefore, teaching medical skills to trainees has always been a challenging issue for expert surgeons, and it is even more challenging during pandemics. The current method of surgical training requires novice surgeons to attend courses, observe procedures, and conduct their initial operations under the direct supervision of an expert surgeon. Because this method of medical training requires physical contact, the people involved, including the novice and expert surgeons, face a potential risk of viral infection. This survey paper reviews recent technological breakthroughs along with new areas in which assistive technologies might provide a viable solution to reduce physical contact in medical institutions during the COVID-19 pandemic and similar crises.
Affiliation(s)
- Mohammad Motaharifar
- Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Department of Electrical Engineering, University of Isfahan, Isfahan, Iran
- Alireza Norouzzadeh
- Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Parisa Abdi
- Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Arash Iranfar
- School of Electrical and Computer Engineering, University College of Engineering, University of Tehran, Tehran, Iran
- Faraz Lotfi
- Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Behzad Moshiri
- School of Electrical and Computer Engineering, University College of Engineering, University of Tehran, Tehran, Iran
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada
- Alireza Lashay
- Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Seyed Farzad Mohammadi
- Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Hamid D. Taghirad
- Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
17. Kitaguchi D, Takeshita N, Matsuzaki H, Igaki T, Hasegawa H, Ito M. Development and Validation of a 3-Dimensional Convolutional Neural Network for Automatic Surgical Skill Assessment Based on Spatiotemporal Video Analysis. JAMA Netw Open 2021;4:e2120786. PMID: 34387676; PMCID: PMC8363914; DOI: 10.1001/jamanetworkopen.2021.20786.
Abstract
IMPORTANCE A high level of surgical skill is essential to prevent intraoperative problems. One important aspect of surgical education is surgical skill assessment, with pertinent feedback facilitating efficient skill acquisition by novices. OBJECTIVES To develop a 3-dimensional (3-D) convolutional neural network (CNN) model for automatic surgical skill assessment and to evaluate the performance of the model in classification tasks by using laparoscopic colorectal surgical videos. DESIGN, SETTING, AND PARTICIPANTS This prognostic study used surgical videos acquired prior to 2017. In total, 650 laparoscopic colorectal surgical videos were provided for study purposes by the Japan Society for Endoscopic Surgery, and 74 were randomly extracted. Every video had highly reliable scores based on the Endoscopic Surgical Skill Qualification System (ESSQS, range 1-100, with higher scores indicating greater surgical skill) established by the society. Data were analyzed June to December 2020. MAIN OUTCOMES AND MEASURES From the groups with scores less than the difference between the mean and 2 SDs, within the range spanning the mean and 1 SD, and greater than the sum of the mean and 2 SDs, 17, 26, and 31 videos, respectively, were randomly extracted. In total, 1480 video clips with a length of 40 seconds each were extracted for each surgical step (medial mobilization, lateral mobilization, inferior mesenteric artery transection, and mesorectal transection) and separated into 1184 training sets and 296 test sets. Automatic surgical skill classification was performed based on spatiotemporal video analysis using the fully automated 3-D CNN model, and classification accuracies and screening accuracies for the groups with scores less than the mean minus 2 SDs and greater than the mean plus 2 SDs were calculated. RESULTS The mean (SD) ESSQS score of all 650 intraoperative videos was 66.2 (8.6) points and for the 74 videos used in the study, 67.6 (16.1) points. The proposed 3-D CNN model automatically classified video clips into groups with scores less than the mean minus 2 SDs, within 1 SD of the mean, and greater than the mean plus 2 SDs with a mean (SD) accuracy of 75.0% (6.3%). The highest accuracy was 83.8% for the inferior mesenteric artery transection. The model also screened for the group with scores less than the mean minus 2 SDs with 94.1% sensitivity and 96.5% specificity and for group with greater than the mean plus 2 SDs with 87.1% sensitivity and 86.0% specificity. CONCLUSIONS AND RELEVANCE The results of this prognostic study showed that the proposed 3-D CNN model classified laparoscopic colorectal surgical videos with sufficient accuracy to be used for screening groups with scores greater than the mean plus 2 SDs and less than the mean minus 2 SDs. The proposed approach was fully automatic and easy to use for various types of surgery, and no special annotations or kinetics data extraction were required, indicating that this approach warrants further development for application to automatic surgical skill assessment.
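A spatiotemporal classifier of the kind described here consumes clips shaped (batch, channels, frames, height, width). The sketch below uses torchvision's off-the-shelf r3d_18 as a stand-in 3-D CNN with a three-class head for the score bands; the paper's actual architecture, input resolution, and training procedure are not reproduced by this sketch.

```python
import torch
from torchvision.models.video import r3d_18

model = r3d_18(weights=None)                           # stand-in 3-D CNN
model.fc = torch.nn.Linear(model.fc.in_features, 3)    # three ESSQS score bands

clip = torch.randn(2, 3, 16, 112, 112)   # (batch, channels, frames, H, W)
print(model(clip).shape)                 # torch.Size([2, 3])
```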
Affiliation(s)
- Daichi Kitaguchi
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Nobuyoshi Takeshita
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Hiroki Matsuzaki
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Takahiro Igaki
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Hiro Hasegawa
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Masaaki Ito
- Surgical Device Innovation Office, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
- Department of Colorectal Surgery, National Cancer Center Hospital East, Kashiwanoha, Kashiwa, Chiba, Japan
18. Castillo-Segura P, Fernández-Panadero C, Alario-Hoyos C, Muñoz-Merino PJ, Delgado Kloos C. Objective and automated assessment of surgical technical skills with IoT systems: A systematic literature review. Artif Intell Med 2021;112:102007. PMID: 33581827; DOI: 10.1016/j.artmed.2020.102007.
Abstract
The assessment of surgical technical skills to be acquired by novice surgeons has been traditionally done by an expert surgeon and is therefore of a subjective nature. Nevertheless, the recent advances on IoT (Internet of Things), the possibility of incorporating sensors into objects and environments in order to collect large amounts of data, and the progress on machine learning are facilitating a more objective and automated assessment of surgical technical skills. This paper presents a systematic literature review of papers published after 2013 discussing the objective and automated assessment of surgical technical skills. 101 out of an initial list of 537 papers were analyzed to identify: 1) the sensors used; 2) the data collected by these sensors and the relationship between these data, surgical technical skills and surgeons' levels of expertise; 3) the statistical methods and algorithms used to process these data; and 4) the feedback provided based on the outputs of these statistical methods and algorithms. Particularly, 1) mechanical and electromagnetic sensors are widely used for tool tracking, while inertial measurement units are widely used for body tracking; 2) path length, number of sub-movements, smoothness, fixation, saccade and total time are the main indicators obtained from raw data and serve to assess surgical technical skills such as economy, efficiency, hand tremor, or mind control, and distinguish between two or three levels of expertise (novice/intermediate/advanced surgeons); 3) SVM (Support Vector Machines) and Neural Networks are the preferred statistical methods and algorithms for processing the data collected, while new opportunities are opened up to combine various algorithms and use deep learning; and 4) feedback is provided by matching performance indicators and a lexicon of words and visualizations, although there is considerable room for research in the context of feedback and visualizations, taking, for example, ideas from learning analytics.
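Among the indicators listed here, smoothness is the least self-explanatory; one widely used formulation is the log dimensionless jerk of the tool-tip trajectory, sketched below. This formulation is a common choice from the motor-control literature, not necessarily the one used by the reviewed studies.

```python
import numpy as np

def log_dimensionless_jerk(pos: np.ndarray, fs: float) -> float:
    """Smoothness of an (N, 3) tool-tip trajectory sampled at fs Hz;
    values closer to zero indicate smoother movement."""
    vel = np.gradient(pos, 1.0 / fs, axis=0)
    jerk = np.gradient(np.gradient(vel, 1.0 / fs, axis=0), 1.0 / fs, axis=0)
    duration = len(pos) / fs
    v_peak = np.linalg.norm(vel, axis=1).max()
    integral = np.sum(np.sum(jerk ** 2, axis=1)) / fs   # approximates the jerk integral
    return -np.log((duration ** 3 / v_peak ** 2) * integral)

pos = np.cumsum(np.random.randn(500, 3), axis=0) * 1e-3  # hypothetical tool track
print(log_dimensionless_jerk(pos, fs=100.0))
```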
Affiliation(s)
- Pablo Castillo-Segura
- Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain
- Carlos Alario-Hoyos
- Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain
- Pedro J Muñoz-Merino
- Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain
- Carlos Delgado Kloos
- Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain
19. Wang Z, Kasman M, Martinez M, Rege R, Zeh H, Scott D, Fey AM. A Comparative Human-Centric Analysis of Virtual Reality and Dry Lab Training Tasks on the da Vinci Surgical Platform. J Med Robot Res 2020. DOI: 10.1142/S2424905X19420078.
Abstract
There is a growing, widespread trend of adopting robot-assisted minimally invasive surgery (RMIS) in clinical care. Dry lab robot training and virtual reality simulation are commonly used to train surgical residents; however, it is unclear whether the two types of training are equivalent or interchangeable in terms of training outcomes. In this paper, we take the first step in comparing the effects of physical and simulated surgical training tasks on human operator kinematics and physiological response, to provide a richer understanding of exactly how the user interacts with the actual or simulated surgical robot. Four subjects, with expertise levels ranging from novice to expert surgeon, were recruited to perform three surgical tasks (Continuous Suture, Pick and Place, and Tubes; three repetitions each) on two training platforms: (1) the da Vinci Si Skills Simulator and (2) the da Vinci S robot, in a randomized order. We collected physiological response and kinematic movement data through body-worn sensors for a total of 72 individual experimental trials. A range of expertise was chosen for this experiment to wash out inherent differences based on expertise and focus only on inherent differences between the virtual reality and dry lab platforms. Our results show statistically significant differences between tasks done on the simulator and on the surgical robot. Specifically, robotic tasks resulted in significantly higher muscle activation and path length, and significantly lower economy of volume. The individual tasks also had significant differences in various kinematic and physiological metrics, leading to significant interaction effects between task type and training platform. These results indicate that the presence of the robotic system may make surgical training tasks more difficult for the human operator. Thus, the potentially detrimental effects of virtual reality training alone are an important topic for future investigation.
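A minimal sketch of the kind of paired platform comparison the study describes: compute per-trial path length and the working volume swept by the tool tip (one possible ingredient of an economy-of-volume measure; the paper's exact definition may differ), then test simulator-versus-robot differences with a paired nonparametric test. All trajectories here are synthetic.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.stats import wilcoxon

def trial_metrics(pos):
    """pos: (T, 3) tool-tip positions for one trial."""
    path = np.linalg.norm(np.diff(pos, axis=0), axis=1).sum()
    vol = ConvexHull(pos).volume          # working volume swept by the tip
    return path, vol

rng = np.random.default_rng(1)
def toy_trial(noise):
    t = np.linspace(0, 1, 300)[:, None]
    ideal = np.hstack([np.cos(6 * t), np.sin(6 * t), t])
    return ideal + rng.normal(0, noise, ideal.shape)

# paired trials: the same 12 users on simulator (less noise) and robot (more noise)
sim = np.array([trial_metrics(toy_trial(0.02)) for _ in range(12)])
rob = np.array([trial_metrics(toy_trial(0.06)) for _ in range(12)])
for name, col in [("path length", 0), ("working volume", 1)]:
    stat, p = wilcoxon(sim[:, col], rob[:, col])  # paired nonparametric test
    print(f"{name}: simulator vs robot, p = {p:.4f}")
```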
Affiliation(s)
- Ziheng Wang, Department of Mechanical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA
- Michael Kasman, Department of Electrical & Computer Engineering, University of Texas at Dallas, Richardson, TX 75080, USA
- Marco Martinez, Department of Surgery, Naval Medical Center, San Diego, CA 92134, USA
- Robert Rege, Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Herbert Zeh, Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Daniel Scott, Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Ann Majewicz Fey, Department of Mechanical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA; Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
20
Xia J, Huang D, Li Y, Qin N. Iterative learning of human partner's desired trajectory for proactive human–robot collaboration. Int J Intell Robot Appl 2020. [DOI: 10.1007/s41315-020-00132-5]
Abstract
A period-varying iterative learning control scheme is proposed for a robotic manipulator to learn a target trajectory that is planned by a human partner but unknown to the robot, which is a typical scenario in many applications. The proposed method updates the robot's reference trajectory in an iterative manner to minimize the interaction force applied by the human. Although a repetitive human–robot collaboration task is considered, the task period is subject to uncertainty introduced by the human. To address this issue, a novel learning mechanism is proposed to achieve the control objective. Theoretical analysis is performed to prove the performance of the learning algorithm and robot controller. Selective simulations and experiments on a robotic arm are carried out to show the effectiveness of the proposed method in human–robot collaboration.
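The core update can be illustrated in a few lines. Assuming a simplified spring-like human model (an assumption of this sketch, not the paper's period-varying scheme), the robot shifts its reference toward the human's desired trajectory using only the measured interaction force:

```python
import numpy as np

# Toy iterative-learning update: with a spring-like human model
# f_k = K * (y_d - r_k), the update r_{k+1} = r_k + L * f_k contracts
# the tracking error whenever |1 - L*K| < 1.
K, L = 50.0, 0.015                      # human stiffness and learning gain (toy values)
t = np.linspace(0, 2 * np.pi, 200)
y_d = np.sin(t)                         # human's desired trajectory, unknown to the robot
r = np.zeros_like(t)                    # robot's initial reference

for k in range(1, 31):
    f = K * (y_d - r)                   # interaction force applied by the human
    r = r + L * f                       # iteration-domain learning update
    if k % 10 == 0:
        print(f"iteration {k:2d}: max interaction force = {np.abs(f).max():.4f}")
```

Because |1 - L*K| = 0.25 here, the residual force shrinks by a factor of four per iteration, which is the mechanism by which the reference converges to the human's intent.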
21
Anh NX, Nataraja RM, Chauhan S. Towards near real-time assessment of surgical skills: A comparison of feature extraction techniques. Comput Methods Programs Biomed 2020; 187:105234. [PMID: 31794913] [DOI: 10.1016/j.cmpb.2019.105234]
Abstract
BACKGROUND AND OBJECTIVE Surgical skill assessment aims to objectively evaluate and provide constructive feedback for trainee surgeons. Conventional methods require direct observation and assessment by surgical experts, which is both unscalable and subjective. The recent involvement of surgical robotic systems in the operating room has made it possible to automatically evaluate the expertise level of trainees on certain representative maneuvers by applying machine learning to motion analysis. The feature extraction technique plays a critical role in such an automated surgical skill assessment system. METHODS We present a direct comparison of nine well-known feature extraction techniques: statistical features, principal component analysis, discrete Fourier/cosine transform, codebook, deep learning models, and auto-encoders for automated surgical skills evaluation. Towards near real-time evaluation, we also investigate the effect of time interval on classification accuracy and efficiency. RESULTS We validate the study on the benchmark robotic surgical training JIGSAWS dataset. Accuracies of 95.63, 90.17 and 90.26% with principal component analysis, and 96.84, 92.75 and 95.36% with a deep convolutional neural network, for suturing, knot tying and needle passing, respectively, highlighted the effectiveness of these two techniques in extracting the most discriminative features among different surgical skill levels. CONCLUSIONS This study contributes toward the development of an online, automated and efficient surgical skills assessment technique.
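A toy illustration of the setup being compared: extract PCA features from multi-channel motion data, classify skill level, and vary how much of each trial is used, mirroring the paper's near real-time question. The data, channel count, and classifier are placeholders, not the paper's.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)

def toy_trials(n, freq, n_samples=400, n_ch=6):
    # synthetic multi-channel kinematics; "skill" changes the dominant frequency
    t = np.linspace(0, 4, n_samples)
    return np.array([np.sin(2 * np.pi * freq * t)[None, :].repeat(n_ch, 0)
                     + rng.normal(0, 0.3, (n_ch, n_samples)) for _ in range(n)])

X = np.concatenate([toy_trials(30, 1.0), toy_trials(30, 3.0)])  # novice vs expert (toy)
y = np.array([0] * 30 + [1] * 30)

# near real-time variant: evaluate using only a truncated time interval of each trial
for n_keep in (100, 200, 400):
    Xw = X[:, :, :n_keep].reshape(len(X), -1)     # flatten channels x time
    clf = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
    acc = cross_val_score(clf, Xw, y, cv=5).mean()
    print(f"first {n_keep} samples: accuracy = {acc:.2f}")
```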
Affiliation(s)
- Nguyen Xuan Anh, Department of Mechanical and Aerospace Engineering, Monash University, Melbourne, Australia
- Ramesh Mark Nataraja, Department of Surgical Simulation, Monash Children's Hospital, Melbourne, Australia
- Sunita Chauhan, Department of Mechanical and Aerospace Engineering, Monash University, Melbourne, Australia
22
Levin M, McKechnie T, Khalid S, Grantcharov TP, Goldenberg M. Automated Methods of Technical Skill Assessment in Surgery: A Systematic Review. J Surg Educ 2019; 76:1629-1639. [PMID: 31272846] [DOI: 10.1016/j.jsurg.2019.06.011]
Abstract
OBJECTIVE The goal of the current study is to systematically review the literature addressing the use of automated methods to evaluate technical skills in surgery. BACKGROUND The classic apprenticeship model of surgical training includes subjective assessments of technical skill. However, automated methods to evaluate surgical technical skill have recently been studied. These automated methods are a more objective, versatile, and analytical way to evaluate a surgical trainee's technical skill. STUDY DESIGN A literature search of the Ovid Medline, Web of Science, and EMBASE Classic databases was performed. Articles evaluating automated methods for surgical technical skill assessment were abstracted. The quality of all included studies was assessed using the Medical Education Research Study Quality Instrument. RESULTS A total of 1715 articles were identified, 76 of which were selected for final analysis. An automated methods pathway was defined that included kinetics and computer vision data extraction methods. Automated methods included tool motion tracking, hand motion tracking, eye motion tracking, and muscle contraction analysis. Finally, machine learning, deep learning, and performance classification were used to analyze these methods. These methods of surgical skill assessment were used in the operating room and in simulated environments. The average Medical Education Research Study Quality Instrument score across all studies was 10.86 (maximum score of 18). CONCLUSIONS Automated technical skill assessment is a growing field in surgical education. We found quality studies evaluating these techniques across many environments and surgeries. More research must be done to ensure these techniques are verified and implemented in surgical curricula.
Affiliation(s)
- Marc Levin, Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Ontario, Canada
- Tyler McKechnie, Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Ontario, Canada
- Shuja Khalid, Surgical Safety Technologies, Li Ka Shing International Knowledge Institute, Toronto, Ontario, Canada
- Teodor P Grantcharov, Surgical Safety Technologies, Li Ka Shing International Knowledge Institute, Toronto, Ontario, Canada; Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Mitchell Goldenberg, Surgical Safety Technologies, Li Ka Shing International Knowledge Institute, Toronto, Ontario, Canada; Department of Surgery, University of Toronto, Toronto, Ontario, Canada
23
Nguyen XA, Ljuhar D, Pacilli M, Nataraja RM, Chauhan S. Surgical skill levels: Classification and analysis using deep neural network model and motion signals. Comput Methods Programs Biomed 2019; 177:1-8. [PMID: 31319938] [DOI: 10.1016/j.cmpb.2019.05.008]
Abstract
BACKGROUND AND OBJECTIVES Currently, the assessment of surgical skills relies primarily on the observations of expert surgeons. This may be time-consuming, non-scalable, inconsistent and subjective. Therefore, an automated system that can objectively identify the actual skill level of a junior trainee is highly desirable. This study aims to design an automated surgical skills evaluation system. METHODS We propose to use a deep neural network model that can analyze raw surgical motion data with minimal preprocessing. A platform with inertial measurement unit sensors was developed, and participants with different levels of surgical experience were recruited to perform core open surgical skills tasks. JIGSAWS, a publicly available robot-based surgical training dataset, was used to evaluate the generalization of our deep network model. Fifteen participants (4 experts, 4 intermediates and 7 novices) were recruited into the study. RESULTS The proposed deep model achieved an accuracy of 98.2%. On JIGSAWS, our method outperformed some existing approaches, with accuracies of 98.4%, 98.4% and 94.7% for suturing, needle-passing, and knot-tying, respectively. The experimental results demonstrate the applicability of this method in both open surgery and robot-assisted minimally invasive surgery. CONCLUSIONS This study demonstrated the potential of the proposed deep network model to learn the discriminative features between different surgical skill levels.
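A minimal sketch of the kind of model described: a 1-D convolutional network mapping raw multi-channel motion signals to three skill classes, written here in PyTorch. Channel count, window length, and layer sizes are illustrative guesses, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SkillNet(nn.Module):
    """Toy 1-D CNN: raw IMU channels in, skill-class logits out."""
    def __init__(self, n_channels=6, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # length-independent pooling
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

model = SkillNet()
x = torch.randn(8, 6, 500)                    # 8 windows of raw motion data (toy)
print(model(x).shape)                         # torch.Size([8, 3])
```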
Affiliation(s)
- Xuan Anh Nguyen, Department of Mechanical and Aerospace Engineering, Monash University, Clayton, Victoria, 3800, Australia
- Damir Ljuhar, Department of Surgical Simulation, Monash Children's Hospital, Melbourne, Australia; Department of Paediatrics, School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Maurizio Pacilli, Department of Surgical Simulation, Monash Children's Hospital, Melbourne, Australia; Department of Paediatrics, School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Ramesh Mark Nataraja, Department of Surgical Simulation, Monash Children's Hospital, Melbourne, Australia; Department of Paediatrics, School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Sunita Chauhan, Department of Mechanical and Aerospace Engineering, Monash University, Clayton, Victoria, 3800, Australia
24
A CNN-based prototype method of unstructured surgical state perception and navigation for an endovascular surgery robot. Med Biol Eng Comput 2019; 57:1875-1887. [DOI: 10.1007/s11517-019-02002-0]
25
Saracino A, Deguet A, Staderini F, Boushaki MN, Cianchi F, Menciassi A, Sinibaldi E. Haptic feedback in the da Vinci Research Kit (dVRK): A user study based on grasping, palpation, and incision tasks. Int J Med Robot 2019; 15:e1999. [PMID: 30970387] [DOI: 10.1002/rcs.1999]
Abstract
BACKGROUND It has been suggested that the lack of haptic feedback, formerly considered a limitation of the da Vinci robotic system, does not affect robotic surgeons because of training and compensation based on visual feedback. However, conclusive studies are still missing, and interest in force reflection is rising again. METHODS We integrated a seven-DoF master into the da Vinci Research Kit. We designed tissue grasping, palpation, and incision tasks with robotic surgeons, to be performed by three groups of users (expert surgeons, medical residents, and nonsurgeons; five users per group), either with or without haptic feedback. Task-specific quantitative metrics and a questionnaire were used for assessment. RESULTS Force reflection made a statistically significant difference for both palpation (improved inclusion detection rate) and incision (decreased tissue damage). CONCLUSIONS Haptic feedback can improve key surgical outcomes for tasks that impose a pronounced cognitive burden on the surgeon, though possibly at the cost of longer completion times.
Affiliation(s)
- Arianna Saracino, The BioRobotics Institute, Scuola Superiore Sant'Anna, Pontedera, Italy; Center for Micro-BioRobotics, Istituto Italiano di Tecnologia, Pontedera, Italy
- Anton Deguet, Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, Maryland
- Fabio Staderini, Center of Oncological Minimally Invasive Surgery, Department of Surgery and Translational Medicine, University of Florence, Florence, Italy
- Fabio Cianchi, Center of Oncological Minimally Invasive Surgery, Department of Surgery and Translational Medicine, University of Florence, Florence, Italy
- Arianna Menciassi, The BioRobotics Institute, Scuola Superiore Sant'Anna, Pontedera, Italy
- Edoardo Sinibaldi, Center for Micro-BioRobotics, Istituto Italiano di Tecnologia, Pontedera, Italy
26
Dias RD, Gupta A, Yule SJ. Using Machine Learning to Assess Physician Competence: A Systematic Review. Acad Med 2019; 94:427-439. [PMID: 30113364] [DOI: 10.1097/acm.0000000000002414]
Abstract
PURPOSE To identify the different machine learning (ML) techniques that have been applied to automate physician competence assessment and evaluate how these techniques can be used to assess different competence domains in several medical specialties. METHOD In May 2017, MEDLINE, EMBASE, PsycINFO, Web of Science, ACM Digital Library, IEEE Xplore Digital Library, PROSPERO, and Cochrane Database of Systematic Reviews were searched for articles published from inception to April 30, 2017. Studies were included if they applied at least one ML technique to assess medical students', residents', fellows', or attending physicians' competence. Information on sample size, participants, study setting and design, medical specialty, ML techniques, competence domains, outcomes, and methodological quality was extracted. MERSQI was used to evaluate quality, and a qualitative narrative synthesis of the medical specialties, ML techniques, and competence domains was conducted. RESULTS Of 4,953 initial articles, 69 met inclusion criteria. General surgery (24; 34.8%) and radiology (15; 21.7%) were the most studied specialties; natural language processing (24; 34.8%), support vector machine (15; 21.7%), and hidden Markov models (14; 20.3%) were the ML techniques most often applied; and patient care (63; 91.3%) and medical knowledge (45; 65.2%) were the most assessed competence domains. CONCLUSIONS A growing number of studies have attempted to apply ML techniques to physician competence assessment. Although many studies have investigated the feasibility of certain techniques, more validation research is needed. The use of ML techniques may have the potential to integrate and analyze pragmatic information that could be used in real-time assessments and interventions.
Affiliation(s)
- Roger D Dias, instructor in emergency medicine, Department of Emergency Medicine and STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts; ORCID: http://orcid.org/0000-0003-4959-5052
- A. Gupta, research scientist, Center for Surgery and Public Health, Brigham and Women's Hospital, Boston, Massachusetts
- S.J. Yule, associate professor of surgery, Harvard Medical School, and faculty, Department of Surgery and STRATUS Center for Medical Simulation, Brigham and Women's Hospital, Boston, Massachusetts
27
Kowalewski KF, Garrow CR, Schmidt MW, Benner L, Müller-Stich BP, Nickel F. Sensor-based machine learning for workflow detection and as key to detect expert level in laparoscopic suturing and knot-tying. Surg Endosc 2019; 33:3732-3740. [DOI: 10.1007/s00464-019-06667-4]
28
Miura S, Kawamura K, Kobayashi Y, Fujie MG. Using Brain Activation to Evaluate Arrangements Aiding Hand-Eye Coordination in Surgical Robot Systems. IEEE Trans Biomed Eng 2018; 66:2352-2361. [PMID: 30582521] [DOI: 10.1109/tbme.2018.2889316]
Abstract
GOAL To realize intuitive, minimally invasive surgery, surgical robots are often controlled using master-slave systems. However, the surgical robot's structure often differs from that of the human body, so the arrangement between the monitor and master must reflect this physical difference. In this study, we validate the feasibility of an embodiment evaluation method that determines the arrangement between the monitor and master. In our cognitive model, the brain's intraparietal sulcus activates significantly when somatic and visual feedback match. Using this model, we validate a cognitively appropriate arrangement between the monitor and master. METHODS In experiments, we measured participants' brain activation using an imaging device as they controlled a virtual surgical simulator. Two experiments were carried out that varied the monitor and hand positions. CONCLUSION There are two common arrangements of the monitor and master at the peak of brain activation: one places the monitor behind the master, so that the user feels the system is an extension of their arms into the monitor; the other places the monitor in front of the master, so that the user feels the correspondence between their own arm and the virtual arm in the monitor. SIGNIFICANCE From these results, we conclude that the arrangement between the monitor and master affects embodiment, enabling the participant to perceive an apparent posture match in master-slave surgical robot systems.
29
Ershad M, Rege R, Fey AM. Automatic Surgical Skill Rating Using Stylistic Behavior Components. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:1829-1832. [PMID: 30440751] [DOI: 10.1109/embc.2018.8512593]
Abstract
The gold standard in surgical skill rating and evaluation is direct observation, in which a group of experts rates trainees on a Likert scale while observing their performance during a surgical task. This method is time- and resource-intensive. To alleviate this burden, many studies have focused on automatic surgical skill assessment; however, the metrics suggested in the literature for automatic evaluation do not capture the stylistic behavior of the user. In addition, very few studies focus on automatic rating of surgical skills based on available Likert scales. In a previous study, we presented a stylistic behavior lexicon for surgical skill. In this study, we evaluate the lexicon's ability to automatically rate robotic surgical skill based on the six domains of the Global Evaluative Assessment of Robotic Skills (GEARS). Fourteen subjects of different skill levels performed two surgical tasks on the da Vinci surgical simulator. Different measurements were acquired as subjects performed the tasks, including limb (hand and arm) kinematics and joint (shoulder, elbow, wrist) positions. Posture videos of the subjects performing the task, as well as videos of the task being performed, were viewed and rated by faculty experts based on the six GEARS domains. The paired videos were also rated via crowd-sourcing based on our stylistic behavior lexicon. Two separate regression learner models, one using the sensor measurements and the other using crowd ratings of our proposed lexicon, were trained for each GEARS domain. The results indicate that the scores predicted by both models agree with the gold-standard faculty ratings.
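The per-domain setup can be sketched compactly: one regression model per GEARS domain, mapping crowd-sourced lexicon-item ratings to expert scores. The lexicon size, ridge regressor, and all data below are illustrative stand-ins, not the paper's models.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
domains = ["depth perception", "bimanual dexterity", "efficiency",
           "force sensitivity", "autonomy", "robotic control"]
n_subj, n_items = 14, 10
crowd = rng.uniform(1, 5, (n_subj, n_items))         # crowd ratings of lexicon items (toy)
# toy expert Likert scores, loosely correlated with a few lexicon items
expert = {d: crowd[:, :3].mean(1) + rng.normal(0, 0.2, n_subj) for d in domains}

models = {d: Ridge(alpha=1.0).fit(crowd, expert[d]) for d in domains}
for d, m in models.items():
    print(f"{d}: R^2 = {m.score(crowd, expert[d]):.2f}")
```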
30
Wang Z, Majewicz Fey A. Deep learning with convolutional neural network for objective skill evaluation in robot-assisted surgery. Int J Comput Assist Radiol Surg 2018; 13:1959-1970. [PMID: 30255463] [DOI: 10.1007/s11548-018-1860-1]
Abstract
PURPOSE With the advent of robot-assisted surgery, the role of data-driven approaches that integrate statistics and machine learning is growing rapidly, with prominent interest in objective surgical skill assessment. However, most existing work requires translating robot motion kinematics into intermediate features or gesture segments that are expensive to extract, lack efficiency, and require significant domain-specific knowledge. METHODS We propose an analytical deep learning framework for skill assessment in surgical training. A deep convolutional neural network is implemented to map multivariate time series data of the motion kinematics to individual skill levels. RESULTS We perform experiments on the public minimally invasive surgical robotic dataset, JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Our proposed learning model achieved competitive accuracies of 92.5%, 95.4%, and 91.3% in the standard training tasks: Suturing, Needle-passing, and Knot-tying, respectively. Without the need for engineered features or carefully tuned gesture segmentation, our model can successfully decode skill information from raw motion profiles via end-to-end learning. Meanwhile, the proposed model is able to reliably interpret skills within a 1-3 second window, without needing to observe the entire training trial. CONCLUSION This study highlights the potential of deep architectures for efficient online skill assessment in modern surgical training.
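The windowed, online flavor of this assessment can be sketched as follows: slide a short window over the incoming kinematic stream and maintain a running skill estimate from per-window predictions. The stub classifier below stands in for any trained window model (such as a CNN); everything here is illustrative.

```python
import numpy as np

def predict_proba(window):
    """Stub window classifier: pretend smoother windows look more expert."""
    roughness = np.abs(np.diff(window)).mean()
    p_expert = 1.0 / (1.0 + roughness * 10)
    return np.array([1 - p_expert, p_expert])   # [p(novice), p(expert)]

rng = np.random.default_rng(3)
stream = np.sin(np.linspace(0, 20, 2000)) + rng.normal(0, 0.05, 2000)  # toy kinematic stream
fs, win = 100, 200                               # 100 Hz stream, 2-second windows
probs = [predict_proba(stream[s:s + win]) for s in range(0, len(stream) - win, win)]
running = np.cumsum(probs, axis=0) / np.arange(1, len(probs) + 1)[:, None]
print("running expert probability per window:", np.round(running[:, 1], 2))
```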
Affiliation(s)
- Ziheng Wang, Department of Mechanical Engineering, University of Texas at Dallas, Richardson, TX, 75080, USA
- Ann Majewicz Fey, Department of Mechanical Engineering, University of Texas at Dallas, Richardson, TX, 75080, USA; Department of Surgery, UT Southwestern Medical Center, Dallas, TX, 75390, USA
31
Video and accelerometer-based motion analysis for automated surgical skills assessment. Int J Comput Assist Radiol Surg 2018; 13:443-455. [PMID: 29380122] [DOI: 10.1007/s11548-018-1704-z]
Abstract
PURPOSE Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts' time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). METHODS We conduct a large study of basic surgical skill assessment on a dataset that contains video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy-based" features, approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time series data. The proposed features are compared to the existing methods of Sequential Motion Texture, Discrete Cosine Transform and Discrete Fourier Transform for surgical skills assessment. RESULTS We report the average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1 and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing, achieving 86.8% average accuracy. We also show that fusing video and acceleration features can improve overall performance for skill assessment. CONCLUSION Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.
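Approximate entropy itself is easy to state in code. Below is a minimal implementation following Pincus' definition; the parameters m = 2 and r = 0.2 times the signal's standard deviation are common heuristics, and the paper's exact settings may differ.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy (ApEn) of a 1-D time series.
    Lower values indicate more regular, predictable motion."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    if r is None:
        r = 0.2 * x.std()                    # common tolerance heuristic
    def phi(m):
        emb = np.array([x[i:i + m] for i in range(N - m + 1)])  # length-m vectors
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)  # Chebyshev distances
        C = (d <= r).mean(axis=1)            # match fractions (self-matches included)
        return np.log(C).mean()
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(4)
regular = np.sin(np.linspace(0, 30, 600))
noisy = regular + rng.normal(0, 0.3, 600)
print("ApEn regular:", round(approximate_entropy(regular), 3))
print("ApEn noisy:  ", round(approximate_entropy(noisy), 3))
```

As expected, the noisy signal yields a higher ApEn, which is the property the paper exploits to separate skill levels.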
32
Oquendo YA, Riddle EW, Hiller D, Blinman TA, Kuchenbecker KJ. Automatically rating trainee skill at a pediatric laparoscopic suturing task. Surg Endosc 2017; 32:1840-1857. [PMID: 29071419] [PMCID: PMC5845064] [DOI: 10.1007/s00464-017-5873-6]
Abstract
BACKGROUND Minimally invasive surgeons must acquire complex technical skills while minimizing patient risk, a challenge that is magnified in pediatric surgery. Trainees need realistic practice with frequent detailed feedback, but human grading is tedious and subjective. We aim to validate a novel motion-tracking system and algorithms that automatically evaluate trainee performance of a pediatric laparoscopic suturing task. METHODS Subjects (n = 32) ranging from medical students to fellows performed two trials of intracorporeal suturing in a custom pediatric laparoscopic box trainer after watching a video of ideal performance. The motions of the tools and endoscope were recorded over time using a magnetic sensing system, and both tool grip angles were recorded using handle-mounted flex sensors. An expert rated the 63 trial videos on five domains from the Objective Structured Assessment of Technical Skill (OSATS), yielding summed scores from 5 to 20. Motion data from each trial were processed to calculate 280 features. We used regularized least squares regression to identify the most predictive features from different subsets of the motion data and then built six regression tree models that predict summed OSATS score. Model accuracy was evaluated via leave-one-subject-out cross-validation. RESULTS The model that used all sensor data streams performed best, achieving 71% accuracy at predicting summed scores within 2 points, 89% accuracy within 4, and a correlation of 0.85 with human ratings. 59% of the rounded average OSATS score predictions were perfect, and 100% were within 1 point. This model employed 87 features, including none based on completion time, 77 from tool tip motion, 3 from tool tip visibility, and 7 from grip angle. CONCLUSIONS Our novel hardware and software automatically rated previously unseen trials with summed OSATS scores that closely match human expert ratings. Such a system facilitates more feedback-intensive surgical training and may yield insights into the fundamental components of surgical skill.
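A toy sketch of the evaluation pipeline described: select predictive features with an L1-regularized least-squares fit (one form of regularized least squares; the paper's exact variant may differ), train a regression tree to predict summed OSATS scores, and evaluate with leave-one-subject-out cross-validation. All data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(5)
n_trials, n_feats = 63, 280
subjects = rng.integers(0, 32, n_trials)          # which subject produced each trial
X = rng.normal(size=(n_trials, n_feats))          # toy motion features
score = 5 + 15 / (1 + np.exp(-X[:, :5].sum(1)))   # toy summed OSATS scores in [5, 20]

errs = []
for tr, te in LeaveOneGroupOut().split(X, score, groups=subjects):
    keep = np.flatnonzero(Lasso(alpha=0.1).fit(X[tr], score[tr]).coef_)  # sparse selection
    keep = keep if keep.size else np.arange(5)    # guard against an empty selection
    tree = DecisionTreeRegressor(max_depth=4).fit(X[tr][:, keep], score[tr])
    errs.extend(np.abs(tree.predict(X[te][:, keep]) - score[te]))
print("fraction within 2 OSATS points:", np.mean(np.array(errs) <= 2).round(2))
```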
Affiliation(s)
- Yousi A Oquendo, Department of Mechanical Engineering & Applied Mechanics, University of Pennsylvania, Philadelphia, USA; Department of Computer & Information Science, University of Pennsylvania, Philadelphia, USA
- Elijah W Riddle, Division of Pediatric General, Thoracic and Fetal Surgery, Children's Hospital of Philadelphia, Philadelphia, USA
- Dennis Hiller, Division of Pediatric General, Thoracic and Fetal Surgery, Children's Hospital of Philadelphia, Philadelphia, USA
- Thane A Blinman, Division of Pediatric General, Thoracic and Fetal Surgery, Children's Hospital of Philadelphia, Philadelphia, USA
- Katherine J Kuchenbecker, Department of Mechanical Engineering & Applied Mechanics, University of Pennsylvania, Philadelphia, USA; Department of Computer & Information Science, University of Pennsylvania, Philadelphia, USA; Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Heisenbergstr. 3, 70569, Stuttgart, Germany
33