1. Boyer TJ, Mitchell SA. Thank you artificial intelligence: Evidence-based just-in-time training via a large language model. Am J Surg 2024; 234:26-27. [PMID: 38609743] [DOI: 10.1016/j.amjsurg.2024.04.007]
Affiliation(s)
- Tanna J Boyer
- Department of Anesthesia at Indiana University School of Medicine, United States.
- Sally A Mitchell
- Department of Anesthesia at Indiana University School of Medicine, United States.
2. Hoffmann H, Funke I, Peters P, Venkatesh DK, Egger J, Rivoir D, Röhrig R, Hölzle F, Bodenstedt S, Willemer MC, Speidel S, Puladi B. AIxSuture: vision-based assessment of open suturing skills. Int J Comput Assist Radiol Surg 2024; 19:1045-1052. [PMID: 38526613] [PMCID: PMC11178625] [DOI: 10.1007/s11548-024-03093-3]
Abstract
PURPOSE Efficient and precise surgical skills are essential for ensuring positive patient outcomes. By continuously providing real-time, data-driven, and objective evaluation of surgical performance, automated skill assessment has the potential to greatly improve surgical skill training. Whereas machine learning-based surgical skill assessment is gaining traction for minimally invasive techniques, the same cannot be said for open surgery skills. Open surgery generally has more degrees of freedom than minimally invasive surgery, making it more difficult to interpret. In this paper, we present novel approaches to skill assessment for open surgery. METHODS We analyzed a novel video dataset for open suturing training. We provide a detailed analysis of the dataset and define evaluation guidelines, using state-of-the-art deep learning models. Furthermore, we present novel benchmarking results for surgical skill assessment in open suturing. The models are trained to classify a video into three skill levels based on the global rating score. To obtain initial results for video-based surgical skill classification, we benchmarked a temporal segment network with both an I3D and a Video Swin backbone on this dataset. RESULTS The dataset is composed of 314 videos of approximately five minutes each. Model benchmarking yielded an accuracy and F1 score of up to 75% and 72%, respectively, which is similar to the performance achieved by the individual raters with regard to inter-rater agreement and rater variability. We present the first end-to-end trained approach for skill assessment in open surgery training. CONCLUSION We provide a thorough analysis of a new dataset as well as novel benchmarking results for surgical skill assessment. This opens the door to new advances in skill assessment by enabling video-based skill assessment for classic surgical techniques, with the potential to improve the surgical outcomes of patients.
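To make the classification setup described in this abstract concrete, here is a minimal, illustrative sketch (not the authors' released code) of a temporal-segment-style video classifier: sampled clips are scored by a 3D-CNN backbone and the clip logits are averaged into one of three skill levels. The backbone (torchvision's r3d_18 as a stand-in for I3D or Video Swin), clip counts, and tensor sizes are assumptions made for the example.

```python
# Illustrative sketch only (not the authors' code): a temporal-segment-style
# video classifier that scores sampled clips with a 3D-CNN backbone and averages
# the clip logits into one of three skill levels.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18


class SegmentSkillClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.backbone = r3d_18(weights=None)  # stand-in for an I3D / Video Swin backbone
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, segments: torch.Tensor) -> torch.Tensor:
        # segments: (batch, n_segments, channels, frames, height, width)
        b, s = segments.shape[:2]
        clip_logits = self.backbone(segments.flatten(0, 1))  # score every clip
        return clip_logits.view(b, s, -1).mean(dim=1)         # segment consensus


model = SegmentSkillClassifier()
dummy = torch.randn(2, 4, 3, 16, 112, 112)  # 2 videos, 4 clips of 16 frames each
print(model(dummy).shape)                    # torch.Size([2, 3]) -> three skill levels
```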
Affiliation(s)
- Hanna Hoffmann
- Department of Translational Surgical Oncology, NCT/UCC Dresden, Dresden, Germany.
- The Centre for Tactile Internet (CeTI), TUD Dresden University of Technology, Dresden, Germany.
- Faculty of Medicine, University Hospital Carl Gustav Carus, Dresden, Germany.
- BMBF Research Hub 6 G-Life, TUD Dresden University of Technology, Dresden, Germany.
- Isabel Funke
- Department of Translational Surgical Oncology, NCT/UCC Dresden, Dresden, Germany
- The Centre for Tactile Internet (CeTI), TUD Dresden University of Technology, Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Philipp Peters
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Danush Kumar Venkatesh
- Department of Translational Surgical Oncology, NCT/UCC Dresden, Dresden, Germany
- School of Embedded Composite Artificial Intelligence (SECAI), TUD Dresden University of Technology, Dresden, Germany
- Faculty of Medicine, University Hospital Carl Gustav Carus, Dresden, Germany
- Jan Egger
- Institute for AI in Medicine, University Hospital Essen (AöR), Essen, Germany
- Dominik Rivoir
- Department of Translational Surgical Oncology, NCT/UCC Dresden, Dresden, Germany
- The Centre for Tactile Internet (CeTI), TUD Dresden University of Technology, Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Rainer Röhrig
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Frank Hölzle
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Sebastian Bodenstedt
- Department of Translational Surgical Oncology, NCT/UCC Dresden, Dresden, Germany
- The Centre for Tactile Internet (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Marie-Christin Willemer
- MITZ, University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Faculty of Medicine, University Hospital Carl Gustav Carus, Dresden, Germany
- Stefanie Speidel
- Department of Translational Surgical Oncology, NCT/UCC Dresden, Dresden, Germany
- The Centre for Tactile Internet (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Faculty of Medicine, University Hospital Carl Gustav Carus, Dresden, Germany
- BMBF Research Hub 6 G-Life, TUD Dresden University of Technology, Dresden, Germany
- Behrus Puladi
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
3. Janssen A, Donnelly C, Shaw T. A Taxonomy for Health Information Systems. J Med Internet Res 2024; 26:e47682. [PMID: 38820575] [PMCID: PMC11179026] [DOI: 10.2196/47682]
Abstract
The health sector is highly digitized, which is enabling the collection of vast quantities of electronic data about health and well-being. These data are collected by a diverse array of information and communication technologies, including systems used by health care organizations, consumer and community sources such as information collected on the web, and passively collected data from technologies such as wearables and devices. Understanding the breadth of the information technologies that collect these data, and how the data can be actioned, is a challenge for the significant portion of the digital health workforce who interact with health data as part of their duties but are not informatics experts. This viewpoint aims to present a taxonomy categorizing common information and communication technologies that collect electronic health data. An initial classification of key information systems collecting electronic health data was undertaken via a rapid review of the literature. Subsequently, a purposeful search of the scholarly and gray literature was undertaken to extract key information about the systems within each category, to generate definitions of the systems, and to describe their strengths and limitations.
Affiliation(s)
- Anna Janssen
- Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Candice Donnelly
- Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Tim Shaw
- Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
4. Bellos T, Manolitsis I, Katsimperis S, Juliebø-Jones P, Feretzakis G, Mitsogiannis I, Varkarakis I, Somani BK, Tzelves L. Artificial Intelligence in Urologic Robotic Oncologic Surgery: A Narrative Review. Cancers (Basel) 2024; 16:1775. [PMID: 38730727] [PMCID: PMC11083167] [DOI: 10.3390/cancers16091775]
Abstract
With the rapid increase in computer processing capacity over the past two decades, machine learning techniques have been applied in many sectors of daily life. Machine learning in therapeutic settings is also gaining popularity. We analysed current studies on machine learning in robotic urologic surgery. We searched PubMed/Medline and Google Scholar up to December 2023. Search terms included "urologic surgery", "artificial intelligence", "machine learning", "neural network", "automation", and "robotic surgery". Automatic preoperative imaging, intraoperative anatomy matching, and bleeding prediction have been major focuses. Early artificial intelligence (AI) therapeutic outcomes are promising. Robot-assisted surgery provides precise telemetry data and a cutting-edge viewing console with which to analyse and improve AI integration in surgery. Machine learning enhances surgical skill feedback, procedure effectiveness, surgical guidance, and postoperative prediction. Tension sensors on robotic arms and augmented reality can further improve surgery by providing real-time organ motion monitoring, improving precision and accuracy. As datasets grow and electronic health records are used more widely, these technologies will become more effective and useful. AI in robotic surgery is intended to improve surgical training and experience; both seek precision to improve surgical care. AI in "master-slave" robotic surgery offers detailed, step-by-step examination of autonomous robotic treatments.
Affiliation(s)
- Themistoklis Bellos
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece; (T.B.); (I.M.); (S.K.); (I.M.); (I.V.)
- Ioannis Manolitsis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece; (T.B.); (I.M.); (S.K.); (I.M.); (I.V.)
- Stamatios Katsimperis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece; (T.B.); (I.M.); (S.K.); (I.M.); (I.V.)
- Georgios Feretzakis
- School of Science and Technology, Hellenic Open University, 26335 Patras, Greece;
- Iraklis Mitsogiannis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece; (T.B.); (I.M.); (S.K.); (I.M.); (I.V.)
- Ioannis Varkarakis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece; (T.B.); (I.M.); (S.K.); (I.M.); (I.V.)
- Bhaskar K. Somani
- Department of Urology, University of Southampton, Southampton SO16 6YD, UK;
- Lazaros Tzelves
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece; (T.B.); (I.M.); (S.K.); (I.M.); (I.V.)
5. Tsuyuki S, Miyahara K, Hoshina K, Kawahara T, Suhara M, Mochizuki Y, Taniguchi R, Takayama T. Motion capture device reveals a quick learning curve in vascular anastomosis training. Surg Today 2024; 54:275-281. [PMID: 37466703] [PMCID: PMC10874910] [DOI: 10.1007/s00595-023-02726-5]
Abstract
PURPOSE Surgical procedures are often evaluated subjectively, and objective evaluation has been considered difficult and is rarely reported, especially in open surgery, where the range of motion is wide. This study evaluated the effectiveness of surgical suturing training as an educational tool using the Leap Motion Controller (LMC), which can capture hand movements and reproduce them as data comprising parametric elements. METHODS We developed an off-the-job training system (off-JT) in our department, mainly using prosthetic grafts and various anastomotic methodologies with graded difficulty levels. We recruited 50 medical students (novice group) and 6 vascular surgeons (expert group) for the study. We evaluated four parameters for intraoperative skills: suturing time, slope of the roll, smoothness, and rate of excess motion. RESULTS All four parameters distinguished the skill of the novice group at 1 and 10 h of off-JT. After 10 h of off-JT, all four parameters of the novices were comparable to those of the expert group. CONCLUSION Our education system using the LMC is relatively inexpensive and easy to set up, with a free application for analyses, serving as an effective and ubiquitous educational tool for young surgeons.
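For illustration only, the sketch below computes two generic trajectory-derived parameters of the kind described in this abstract, a path length and a log-dimensionless-jerk smoothness score, from a sampled 3D hand trajectory; the metric definitions and the assumed sampling rate are stand-ins, not the authors' exact formulas.

```python
# Illustrative sketch of generic motion metrics; common stand-ins, not the study's
# exact parameters. The tracker sampling rate below is an assumption.
import numpy as np


def path_length(pos: np.ndarray) -> float:
    """Total distance travelled by a (n_samples, 3) trajectory."""
    return float(np.linalg.norm(np.diff(pos, axis=0), axis=1).sum())


def log_dimensionless_jerk(pos: np.ndarray, fs: float) -> float:
    """Smoothness score; higher values indicate smoother movement."""
    dt = 1.0 / fs
    vel = np.gradient(pos, dt, axis=0)
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)
    duration = (len(pos) - 1) * dt
    peak_speed = np.linalg.norm(vel, axis=1).max()
    dj = (duration ** 3 / peak_speed ** 2) * np.trapz(np.sum(jerk ** 2, axis=1), dx=dt)
    return float(-np.log(dj))


fs = 120.0                                   # assumed sampling rate (Hz)
t = np.linspace(0.0, 5.0, int(5 * fs))
demo = np.c_[np.sin(t), np.cos(t), 0.1 * t]  # synthetic hand trajectory
print(path_length(demo), log_dimensionless_jerk(demo, fs))
```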
Affiliation(s)
- Shota Tsuyuki
- Department of Vascular Surgery, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Kazuhiro Miyahara
- Department of Vascular Surgery, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Katsuyuki Hoshina
- Department of Vascular Surgery, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan.
- Takuya Kawahara
- Clinical Research Promotion Center, The University of Tokyo Hospital, Tokyo, Japan
- Masamitsu Suhara
- Department of Vascular Surgery, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Yasuaki Mochizuki
- Department of Vascular Surgery, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Ryosuke Taniguchi
- Department of Vascular Surgery, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Toshio Takayama
- Department of Vascular Surgery, Graduate School of Medicine, The University of Tokyo, 7-3-1, Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
6. Tsai AY, Carter SR, Greene AC. Artificial intelligence in pediatric surgery. Semin Pediatr Surg 2024; 33:151390. [PMID: 38242061] [DOI: 10.1016/j.sempedsurg.2024.151390]
Abstract
Artificial intelligence (AI) is rapidly changing the landscape of medicine and is already being utilized in conjunction with medical diagnostics and imaging analysis. We hereby explore AI applications in surgery and examine their relevance to pediatric surgery, covering the field's evolution, current state, and promising future. The various fields of AI are explored, including machine learning and its applications to predictive analytics and decision support in surgery; computer vision and image analysis in preoperative planning, image segmentation, and surgical navigation; and, finally, natural language processing to assist in expediting clinical documentation, identification of clinical indications, quality improvement, outcome research, and other types of automated data extraction. The purpose of this review is to familiarize the pediatric surgical community with the rise of AI and highlight the ongoing advancements and challenges in its adoption, including data privacy, regulatory considerations, and the imperative for interdisciplinary collaboration. We hope this review serves as a comprehensive guide to AI's transformative influence on surgery, demonstrating its potential to enhance pediatric surgical patient outcomes, improve precision, and usher in a new era of surgical excellence.
Affiliation(s)
- Anthony Y Tsai
- Division of Pediatric Surgery, Penn State Health Children's Hospital, 500 University Drive, Hershey, PA 17033, United States.
- Stewart R Carter
- Division of Pediatric Surgery, University of Louisville School of Medicine, Louisville, KY, United States
- Alicia C Greene
- Division of Pediatric Surgery, Penn State Health Children's Hospital, 500 University Drive, Hershey, PA 17033, United States
7. El-Sayed C, Yiu A, Burke J, Vaughan-Shaw P, Todd J, Lin P, Kasmani Z, Munsch C, Rooshenas L, Campbell M, Bach SP. Measures of performance and proficiency in robotic assisted surgery: a systematic review. J Robot Surg 2024; 18:16. [PMID: 38217749] [DOI: 10.1007/s11701-023-01756-y]
Abstract
Robotic assisted surgery (RAS) has seen a global rise in adoption. Despite this, there is neither a standardised training curriculum nor a standardised measure of performance. We performed a systematic review across the surgical specialties in RAS and evaluated the tools used to assess surgeons' technical performance. Using the PRISMA 2020 guidelines, PubMed, Embase and the Cochrane Library were searched systematically for full texts published between January 2020 and January 2022. Observational studies and RCTs were included; review articles and systematic reviews were excluded. The papers' quality and risk of bias were assessed using the Newcastle-Ottawa Scale for the observational studies and the Cochrane risk-of-bias tool for the RCTs. The initial search yielded 1189 papers, of which 72 met the eligibility criteria. Twenty-seven unique performance metrics were identified. Global assessments were the most common tool of assessment (n = 13); the most used was GEARS (Global Evaluative Assessment of Robotic Skills). Eleven metrics (42%) were objective tools of performance; automated performance metrics (APMs) were the most widely used objective metrics, whilst the remaining metrics (n = 15, 58%) were subjective. The results demonstrate variation in the tools used to assess technical performance in RAS. A large proportion of the metrics are subjective measures, which increases the risk of bias amongst users. A standardised objective metric that measures all domains of technical performance, from global to cognitive, is required. The metric should be applicable to all RAS procedures and easily implementable. APMs have demonstrated promise as widely applicable, accurate measures.
Affiliation(s)
- Charlotte El-Sayed
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom.
- A Yiu
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- J Burke
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- P Vaughan-Shaw
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- J Todd
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- P Lin
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- Z Kasmani
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- C Munsch
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- L Rooshenas
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- M Campbell
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
- S P Bach
- RCS England/HEE Robotics Research Fellow, University of Birmingham, Birmingham, United Kingdom
8. Boal MWE, Anastasiou D, Tesfai F, Ghamrawi W, Mazomenos E, Curtis N, Collins JW, Sridhar A, Kelly J, Stoyanov D, Francis NK. Evaluation of objective tools and artificial intelligence in robotic surgery technical skills assessment: a systematic review. Br J Surg 2024; 111:znad331. [PMID: 37951600] [PMCID: PMC10771126] [DOI: 10.1093/bjs/znad331]
Abstract
BACKGROUND There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula. METHODS A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical skills, laparoscopy or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool was used to analyse artificial intelligence (AI) studies. The Modified Downs-Black checklist was used to assess risk of bias. RESULTS Two hundred and forty-seven studies were analysed, identifying 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs and 53 AI studies. The Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods and 1 non-simulator APM reached LoR 2. AI models estimated outcomes (skill or clinical) with higher accuracy in the laboratory, where 60 per cent of methods reported accuracies over 90 per cent, compared with accuracies ranging from 67 to 100 per cent in real surgery. CONCLUSIONS Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.
Affiliation(s)
- Matthew W E Boal
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Wellcome/ESPRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Dimitrios Anastasiou
- Wellcome/ESPRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Freweini Tesfai
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Wellcome/ESPRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Walaa Ghamrawi
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Evangelos Mazomenos
- Wellcome/ESPRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Nathan Curtis
- Department of General Surgery, Dorset County Hospital NHS Foundation Trust, Dorchester, UK
- Justin W Collins
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Ashwin Sridhar
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- John Kelly
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Danail Stoyanov
- Wellcome/ESPRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Computer Science, UCL, London, UK
- Nader K Francis
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Yeovil District Hospital, Somerset Foundation NHS Trust, Yeovil, Somerset, UK
9. Chen G, Li L, Hubert J, Luo B, Yang K, Wang X. Effectiveness of a vision-based handle trajectory monitoring system in studying robotic suture operation. J Robot Surg 2023; 17:2791-2798. [PMID: 37728690] [DOI: 10.1007/s11701-023-01713-9]
Abstract
Data on surgical robots are not openly accessible, limiting further study of the operation trajectory of surgeons' hands. Therefore, a trajectory monitoring system should be developed to examine objective indicators reflecting the characteristic parameters of operations. 20 robotic experts and 20 first-year residents without robotic experience were included in this study. A dry-lab suture task was used to acquire relevant hand performance data. Novices completed training on the simulator and then performed the task, while the expert team completed the task after warm-up. Stitching errors were measured using a visual recognition method. Videos of operations were obtained using the camera array mounted on the robot, and the hand trajectory of the surgeons was reconstructed. The stitching accuracy, robotic control parameters, balance and dexterity parameters, and operation efficiency parameters were compared. Experts had smaller center distance (p < 0.001) and larger proximal distance between the hands (p < 0.001) compared with novices. The path and volume ratios between the left and right hands of novices were larger than those of experts (both p < 0.001) and the total volume of the operation range of experts was smaller (p < 0.001). The surgeon trajectory optical monitoring system is an effective and non-subjective method to distinguish skill differences. This demonstrates the potential of pan-platform use to evaluate task completion and help surgeons improve their robotic learning curve.
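A rough sketch of how bimanual trajectory metrics of the kind compared in this study can be computed is shown below (assumed data layout and metric definitions, not the study's pipeline): path-length and working-volume ratios between the hands, the total operating volume, and the distance between hand centers.

```python
# Hypothetical sketch of bimanual trajectory metrics; the definitions are generic
# illustrations, not the study's exact indicators.
import numpy as np
from scipy.spatial import ConvexHull


def path_length(pos: np.ndarray) -> float:
    return float(np.linalg.norm(np.diff(pos, axis=0), axis=1).sum())


def bimanual_metrics(left: np.ndarray, right: np.ndarray) -> dict:
    """left/right: (n_samples, 3) hand trajectories in a common reference frame."""
    return {
        "path_ratio": path_length(left) / path_length(right),
        "volume_ratio": ConvexHull(left).volume / ConvexHull(right).volume,
        "total_volume": ConvexHull(np.vstack([left, right])).volume,
        "center_distance": float(np.linalg.norm(left.mean(axis=0) - right.mean(axis=0))),
    }


rng = np.random.default_rng(0)
print(bimanual_metrics(rng.normal(size=(500, 3)), rng.normal(size=(500, 3))))
```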
Affiliation(s)
- Gaojie Chen
- Department of Urology, ZhongNan Hospital, Wuhan University, No. 169 Donghu Road, Wuhan, 430071, Hubei, China
- Medicine-Remote Mapping Associated Laboratory, ZhongNan Hospital, Wuhan University, No. 169 Donghu Road, Wuhan, 430071, Hubei, China
- Lu Li
- Department of Urology, ZhongNan Hospital, Wuhan University, No. 169 Donghu Road, Wuhan, 430071, Hubei, China
- Medicine-Remote Mapping Associated Laboratory, ZhongNan Hospital, Wuhan University, No. 169 Donghu Road, Wuhan, 430071, Hubei, China
- Jacques Hubert
- Department of Urology, CHRU Nancy Brabois University Hospital, Vandoeuvre-Lès-Nancy, France
- IADI-UL-INSERM (U1254), University Hospital, Vandoeuvre-Lès-Nancy, France
- Bin Luo
- Medicine-Remote Mapping Associated Laboratory, ZhongNan Hospital, Wuhan University, No. 169 Donghu Road, Wuhan, 430071, Hubei, China
- State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan, Hubei, China
- Kun Yang
- Department of Urology, ZhongNan Hospital, Wuhan University, No. 169 Donghu Road, Wuhan, 430071, Hubei, China.
- Medicine-Remote Mapping Associated Laboratory, ZhongNan Hospital, Wuhan University, No. 169 Donghu Road, Wuhan, 430071, Hubei, China.
- Xinghuan Wang
- Department of Urology, ZhongNan Hospital, Wuhan University, No. 169 Donghu Road, Wuhan, 430071, Hubei, China.
- Medicine-Remote Mapping Associated Laboratory, ZhongNan Hospital, Wuhan University, No. 169 Donghu Road, Wuhan, 430071, Hubei, China.
10. Aghazadeh F, Zheng B, Tavakoli M, Rouhani H. Surgical tooltip motion metrics assessment using virtual marker: an objective approach to skill assessment for minimally invasive surgery. Int J Comput Assist Radiol Surg 2023; 18:2191-2202. [PMID: 37597089] [DOI: 10.1007/s11548-023-03007-9]
Abstract
PURPOSE Surgical skill assessment has primarily been performed using checklists or rating scales, which are prone to bias and subjectivity. To tackle this shortcoming, assessment of surgical tool motion can be implemented to objectively classify skill levels. Due to the challenges involved in motion tracking of surgical tooltips in minimally invasive surgeries, formerly used assessment approaches may not be feasible for real-world skill assessment. We proposed an assessment approach based on the virtual marker on surgical tooltips to derive the tooltip's 3D position and introduced a novel metric for surgical skill assessment. METHODS We obtained the 3D tooltip position based on markers placed on the tool handle. Then, we derived tooltip motion metrics to identify the metrics differentiating the skill levels for objective surgical skill assessment. We proposed a new tooltip motion metric, i.e., motion inconsistency, that can assess the skill level, and also can evaluate the stage of skill learning. In this study, peg transfer, dual transfer, and rubber band translocation tasks were included, and nine novices, five surgical residents and five attending general surgeons participated. RESULTS Our analyses showed that tooltip path length (p [Formula: see text] 0.007) and path length along the instrument axis (p [Formula: see text] 0.014) differed across the three skill levels in all the tasks and decreased by skill level. Tooltip motion inconsistency showed significant differences among the three skill levels in the dual transfer (p [Formula: see text] 0.025) and the rubber band translocation tasks (p [Formula: see text] 0.021). Lastly, bimanual dexterity differed across the three skill levels in all the tasks (p [Formula: see text] 0.012) and increased by skill level. CONCLUSION Depth perception ability (indicated by shorter tooltip path lengths along the instrument axis), bimanual dexterity, tooltip motion consistency, and economical tooltip movements (shorter tooltip path lengths) are related to surgical skill. Our findings can contribute to objective surgical skill assessment, reducing subjectivity, bias, and associated costs.
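The "virtual marker" idea of recovering the tooltip position from markers on the tool handle can be illustrated with a simple rigid-body transform, as in the hypothetical sketch below; the calibration of the handle-to-tip offset is assumed to have been performed separately, and the values are toy numbers rather than the study's data.

```python
# Hypothetical sketch: once the handle-to-tooltip offset is calibrated, the 3D
# tooltip position follows from the handle pose by a rigid-body transform.
import numpy as np


def tooltip_position(handle_R: np.ndarray, handle_t: np.ndarray,
                     tip_offset: np.ndarray) -> np.ndarray:
    """handle_R: (3, 3) rotation and handle_t: (3,) translation of the handle frame;
    tip_offset: fixed (3,) vector from the handle origin to the tooltip."""
    return handle_R @ tip_offset + handle_t


# Handle rotated 90 degrees about z and translated; tooltip 0.25 m along the tool axis.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
print(tooltip_position(Rz, np.array([0.10, 0.0, 0.0]), np.array([0.25, 0.0, 0.0])))
```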
Affiliation(s)
- Farzad Aghazadeh
- Department of Mechanical Engineering, 10-390 Donadeo Innovation Centre for Engineering, University of Alberta, 9211-116 Street NW, Edmonton, AB, T6G 1H9, Canada
- Bin Zheng
- Department of Surgery, University of Alberta, Edmonton, AB, Canada
- Mahdi Tavakoli
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada
- Hossein Rouhani
- Department of Mechanical Engineering, 10-390 Donadeo Innovation Centre for Engineering, University of Alberta, 9211-116 Street NW, Edmonton, AB, T6G 1H9, Canada.
11. Kaoukabani G, Gokcal F, Fanta A, Liu X, Shields M, Stricklin C, Friedman A, Kudsi OY. A multifactorial evaluation of objective performance indicators and video analysis in the context of case complexity and clinical outcomes in robotic-assisted cholecystectomy. Surg Endosc 2023; 37:8540-8551. [PMID: 37789179] [DOI: 10.1007/s00464-023-10432-z]
Abstract
BACKGROUND The increased digitization of robotic surgical procedures today enables surgeons to quantify their movements through data captured directly from the robotic system. These calculations, called objective performance indicators (OPIs), offer unprecedented detail on surgical performance. In this study, we link case- and surgical-step-specific OPIs to case complexity, surgical experience and console utilization, and post-operative clinical complications across 87 robotic cholecystectomy (RC) cases. METHODS Videos of RCs performed by a principal surgeon with and without fellows were segmented into eight surgical steps and linked to patients' clinical data. Data for OPI calculations were extracted from an Intuitive Data Recorder and the da Vinci® robotic system. RC cases were each assigned Nassar and Parkland grading scores and categorized as standard or complex. OPIs were compared across complexity groups, console attributions, and post-surgical complication severities to determine objective relationships across variables. RESULTS Across cases, differences in camera control and head positioning metrics of the principal surgeon were observed when comparing standard and complex cases. Further, OPI differences between the principal surgeon and the fellow(s) were observed in standard cases, including differences in arm swapping, camera control, and clutching behaviors. Differences in monopolar coagulation energy usage were also observed. Differences in the duration of selected surgical steps were observed across complexities and console attributions, and additional surgical task analyses identified the adhesion removal and liver bed hemostasis steps as the most impactful for case complexity and post-surgical complications, respectively. CONCLUSION This is the first study to establish the association between OPIs, case complexities, and clinical complications in RC. We identified OPI differences in intra-operative behaviors and post-surgical complications dependent on surgeon expertise and case complexity, opening the door to more standardized assessments of teaching cases, surgical behaviors and case complexities.
Affiliation(s)
- Fahri Gokcal
- Good Samaritan Medical Center, Brockton, MA, USA
- Abeselom Fanta
- Applied Research, Intuitive Surgical Inc., Peachtree City, GA, USA
- Xi Liu
- Applied Research, Intuitive Surgical Inc., Peachtree City, GA, USA
- Mallory Shields
- Applied Research, Intuitive Surgical Inc., Peachtree City, GA, USA
12. Pedrett R, Mascagni P, Beldi G, Padoy N, Lavanchy JL. Technical skill assessment in minimally invasive surgery using artificial intelligence: a systematic review. Surg Endosc 2023; 37:7412-7424. [PMID: 37584774] [PMCID: PMC10520175] [DOI: 10.1007/s00464-023-10335-z]
Abstract
BACKGROUND Technical skill assessment in surgery relies on expert opinion. Therefore, it is time-consuming, costly, and often lacks objectivity. Analysis of intraoperative data by artificial intelligence (AI) has the potential for automated technical skill assessment. The aim of this systematic review was to analyze the performance, external validity, and generalizability of AI models for technical skill assessment in minimally invasive surgery. METHODS A systematic search of Medline, Embase, Web of Science, and IEEE Xplore was performed to identify original articles reporting the use of AI in the assessment of technical skill in minimally invasive surgery. Risk of bias (RoB) and quality of the included studies were analyzed according to Quality Assessment of Diagnostic Accuracy Studies criteria and the modified Joanna Briggs Institute checklists, respectively. Findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. RESULTS In total, 1958 articles were identified, 50 articles met eligibility criteria and were analyzed. Motion data extracted from surgical videos (n = 25) or kinematic data from robotic systems or sensors (n = 22) were the most frequent input data for AI. Most studies used deep learning (n = 34) and predicted technical skills using an ordinal assessment scale (n = 36) with good accuracies in simulated settings. However, all proposed models were in development stage, only 4 studies were externally validated and 8 showed a low RoB. CONCLUSION AI showed good performance in technical skill assessment in minimally invasive surgery. However, models often lacked external validity and generalizability. Therefore, models should be benchmarked using predefined performance metrics and tested in clinical implementation studies.
Affiliation(s)
- Romina Pedrett
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Pietro Mascagni
- IHU Strasbourg, Strasbourg, France
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Guido Beldi
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Nicolas Padoy
- IHU Strasbourg, Strasbourg, France
- ICube, CNRS, University of Strasbourg, Strasbourg, France
- Joël L Lavanchy
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.
- IHU Strasbourg, Strasbourg, France.
- University Digestive Health Care Center Basel - Clarunis, PO Box, 4002, Basel, Switzerland.
13. Rodriguez Peñaranda N, Eissa A, Ferretti S, Bianchi G, Di Bari S, Farinha R, Piazza P, Checcucci E, Belenchón IR, Veccia A, Gomez Rivas J, Taratkin M, Kowalewski KF, Rodler S, De Backer P, Cacciamani GE, De Groote R, Gallagher AG, Mottrie A, Micali S, Puliatti S. Artificial Intelligence in Surgical Training for Kidney Cancer: A Systematic Review of the Literature. Diagnostics (Basel) 2023; 13:3070. [PMID: 37835812] [PMCID: PMC10572445] [DOI: 10.3390/diagnostics13193070]
Abstract
The prevalence of renal cell carcinoma (RCC) is increasing due to advanced imaging techniques. Surgical resection is the standard treatment, involving complex radical and partial nephrectomy procedures that demand extensive training and planning. Artificial intelligence (AI) can potentially aid the training process in the field of kidney cancer. This review explores how AI can create a framework for kidney cancer surgery to address training difficulties. Following PRISMA 2020 criteria, an exhaustive search of the PubMed and SCOPUS databases was conducted without any filters or restrictions. Inclusion criteria encompassed original English-language articles focusing on AI's role in kidney cancer surgical training; non-original articles and articles published in any language other than English were excluded. Two independent reviewers assessed the articles, with a third party settling any disagreement. Study specifics, AI tools, methodologies, endpoints, and outcomes were extracted by the same authors. The Oxford Centre for Evidence-Based Medicine's levels of evidence were employed to assess the studies. Out of 468 identified records, 14 eligible studies were selected. Potential AI applications in kidney cancer surgical training include analyzing surgical workflow, annotating instruments, identifying tissues, and 3D reconstruction. AI is capable of appraising surgical skills, including the identification of procedural steps and instrument tracking. While AI and augmented reality (AR) enhance training, challenges persist in real-time tracking and registration. The utilization of AI-driven 3D reconstruction proves beneficial for intraoperative guidance and preoperative preparation. AI shows potential for advancing surgical training by providing unbiased evaluations, personalized feedback, and enhanced learning processes, yet challenges such as consistent metric measurement, ethical concerns, and data privacy must be addressed. The integration of AI into kidney cancer surgical training offers solutions to training difficulties and a boost to surgical education. However, to fully harness its potential, additional studies are imperative.
Affiliation(s)
- Natali Rodriguez Peñaranda
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy; (N.R.P.); (A.E.); (S.F.); (G.B.); (S.D.B.); (S.M.)
- Ahmed Eissa
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy; (N.R.P.); (A.E.); (S.F.); (G.B.); (S.D.B.); (S.M.)
- Department of Urology, Faculty of Medicine, Tanta University, Tanta 31527, Egypt
- Stefania Ferretti
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy; (N.R.P.); (A.E.); (S.F.); (G.B.); (S.D.B.); (S.M.)
- Giampaolo Bianchi
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy; (N.R.P.); (A.E.); (S.F.); (G.B.); (S.D.B.); (S.M.)
- Stefano Di Bari
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy; (N.R.P.); (A.E.); (S.F.); (G.B.); (S.D.B.); (S.M.)
- Rui Farinha
- Orsi Academy, 9090 Melle, Belgium; (R.F.); (P.D.B.); (R.D.G.); (A.G.G.); (A.M.)
- Urology Department, Lusíadas Hospital, 1500-458 Lisbon, Portugal
- Pietro Piazza
- Division of Urology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy;
- Enrico Checcucci
- Department of Surgery, FPO-IRCCS Candiolo Cancer Institute, 10060 Turin, Italy;
- Inés Rivero Belenchón
- Urology and Nephrology Department, Virgen del Rocío University Hospital, 41013 Seville, Spain;
- Alessandro Veccia
- Department of Urology, University of Verona, Azienda Ospedaliera Universitaria Integrata, 37126 Verona, Italy;
- Juan Gomez Rivas
- Department of Urology, Hospital Clinico San Carlos, 28040 Madrid, Spain;
- Mark Taratkin
- Institute for Urology and Reproductive Health, Sechenov University, 119435 Moscow, Russia;
- Karl-Friedrich Kowalewski
- Department of Urology and Urosurgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, 68167 Mannheim, Germany;
- Severin Rodler
- Department of Urology, University Hospital LMU Munich, 80336 Munich, Germany;
- Pieter De Backer
- Orsi Academy, 9090 Melle, Belgium; (R.F.); (P.D.B.); (R.D.G.); (A.G.G.); (A.M.)
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, 9000 Ghent, Belgium
- Giovanni Enrico Cacciamani
- USC Institute of Urology, Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90089, USA;
- AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA 90089, USA
- Ruben De Groote
- Orsi Academy, 9090 Melle, Belgium; (R.F.); (P.D.B.); (R.D.G.); (A.G.G.); (A.M.)
- Anthony G. Gallagher
- Orsi Academy, 9090 Melle, Belgium; (R.F.); (P.D.B.); (R.D.G.); (A.G.G.); (A.M.)
- Faculty of Life and Health Sciences, Ulster University, Derry BT48 7JL, UK
- Alexandre Mottrie
- Orsi Academy, 9090 Melle, Belgium; (R.F.); (P.D.B.); (R.D.G.); (A.G.G.); (A.M.)
- Salvatore Micali
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy; (N.R.P.); (A.E.); (S.F.); (G.B.); (S.D.B.); (S.M.)
- Stefano Puliatti
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy; (N.R.P.); (A.E.); (S.F.); (G.B.); (S.D.B.); (S.M.)
14. Kinoshita T, Komatsu M. Artificial Intelligence in Surgery and Its Potential for Gastric Cancer. J Gastric Cancer 2023; 23:400-409. [PMID: 37553128] [PMCID: PMC10412972] [DOI: 10.5230/jgc.2023.23.e27]
Abstract
Artificial intelligence (AI) has made significant progress in recent years, and many medical fields are attempting to introduce AI technology into clinical practice. Currently, much research is being conducted to evaluate whether AI can be incorporated into surgical procedures to make them safer and more efficient and, consequently, to obtain better outcomes for patients. In this paper, we review basic AI research regarding surgery and discuss the potential for implementing AI technology in gastric cancer surgery. At present, research and development is focused on AI technologies that assist the surgeon's understanding and judgment during surgery, such as anatomical navigation. AI systems are also being developed to recognize which surgical phase is ongoing. Such surgical phase recognition systems are being considered for the efficient storage of surgical videos and for education and, in the future, for use in systems that objectively evaluate the skill of surgeons. At this time, it is not considered practical, from an ethical standpoint either, to let AI make intraoperative decisions or move forceps automatically. At present, AI research on surgery has various limitations, and it is desirable to develop practical systems that will truly benefit clinical practice in the future.
Affiliation(s)
- Takahiro Kinoshita
- Gastric Surgery Division, National Cancer Center Hospital East, Kashiwa, Japan.
- Masaru Komatsu
- Gastric Surgery Division, National Cancer Center Hospital East, Kashiwa, Japan
15. Liu Z, Bible J, Petersen L, Zhang Z, Roy-Chaudhury P, Singapogu R. Relating process and outcome metrics for meaningful and interpretable cannulation skill assessment: A machine learning paradigm. Comput Methods Programs Biomed 2023; 236:107429. [PMID: 37119772] [PMCID: PMC10291517] [DOI: 10.1016/j.cmpb.2023.107429]
Abstract
BACKGROUND AND OBJECTIVES The quality of healthcare delivery depends directly on the skills of clinicians. For patients on hemodialysis, medical errors or injuries caused during cannulation can lead to adverse outcomes, including potential death. To promote objective skill assessment and effective training, we present a machine learning approach that utilizes a highly sensorized cannulation simulator and a set of objective process and outcome metrics. METHODS In this study, 52 clinicians were recruited to perform a set of pre-defined cannulation tasks on the simulator. Based on data collected by sensors during their task performance, the feature space was constructed from force, motion, and infrared sensor data. Following this, three machine learning models - support vector machine (SVM), support vector regression (SVR), and elastic net (EN) - were constructed to relate the feature space to objective outcome metrics. Our models utilize classification based on the conventional skill classification labels as well as a new method that represents skill on a continuum. RESULTS With less than 5% of trials misclassified by two classes, the SVM model was effective in predicting skill from the feature space. In addition, the SVR model effectively places both skill and outcome on a fine-grained continuum (versus discrete divisions) that is representative of reality. As importantly, the elastic net model enabled the identification of a set of process metrics that highly impact outcomes of the cannulation task, including smoothness of motion, needle angles, and pinch forces. CONCLUSIONS The proposed cannulation simulator, paired with machine learning assessment, demonstrates definite advantages over current cannulation training practices. The methods presented here can be adopted to drastically increase the effectiveness of skill assessment and training, thereby potentially improving clinical outcomes of hemodialysis treatment.
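As an illustration of the three model families named above (not the study's code or data), the sketch below fits an SVM classifier, an SVR, and an elastic net on a synthetic feature matrix standing in for the process metrics; the feature names and sizes are assumptions, and the elastic-net coefficients indicate which features are most tied to the outcome.

```python
# Illustrative sketch with synthetic stand-in data; the real force/motion/infrared
# feature space is not reproduced here.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))                 # process metrics (e.g. forces, smoothness)
skill = rng.integers(0, 3, size=60)          # discrete skill labels
outcome = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=60)  # continuous outcome

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, skill)        # skill classes
reg = make_pipeline(StandardScaler(), SVR(kernel="rbf")).fit(X, outcome)      # skill continuum
enet = make_pipeline(StandardScaler(), ElasticNet(alpha=0.1)).fit(X, outcome)

# Elastic-net coefficients (many shrunk toward zero) flag the process metrics
# most strongly related to the outcome metric.
print(clf.score(X, skill), reg.score(X, outcome))
print(enet[-1].coef_)
```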
Affiliation(s)
- Zhanhe Liu
- Department of Bioengineering, Clemson University, 301 Rhodes Research Center, Clemson, 29634, SC, USA
- Joe Bible
- School of Mathematical and Statistical Sciences, Clemson University, O-110 Martin Hall, Clemson, 29634, SC, USA
- Lydia Petersen
- Department of Bioengineering, Clemson University, 301 Rhodes Research Center, Clemson, 29634, SC, USA
- Ziyang Zhang
- Department of Bioengineering, Clemson University, 301 Rhodes Research Center, Clemson, 29634, SC, USA
- Prabir Roy-Chaudhury
- UNC Kidney Center, University of North Carolina, Chapel Hill, NC, 28144, USA; (Bill Hefner) VA Medical Center, Salisbury, NC, 28144, USA
- Ravikiran Singapogu
- Department of Bioengineering, Clemson University, 301 Rhodes Research Center, Clemson, 29634, SC, USA.
16. Shafiei SB, Shadpour S, Mohler JL, Attwood K, Liu Q, Gutierrez C, Toussi MS. Developing surgical skill level classification model using visual metrics and a gradient boosting algorithm. Ann Surg Open 2023; 4:e292. [PMID: 37305561] [PMCID: PMC10249659] [DOI: 10.1097/as9.0000000000000292]
Abstract
Objective Assessment of surgical skills is crucial for improving training standards and ensuring the quality of primary care. This study aimed to develop a gradient boosting classification model (GBM) to classify surgical expertise into inexperienced, competent, and experienced levels in robot-assisted surgery (RAS) using visual metrics. Methods Eye gaze data were recorded from 11 participants performing four subtasks (blunt dissection, retraction, cold dissection, and hot dissection) using live pigs and the da Vinci robot. Eye gaze data were used to extract the visual metrics. One expert RAS surgeon evaluated each participant's performance and expertise level using the modified Global Evaluative Assessment of Robotic Skills (GEARS) assessment tool. The extracted visual metrics were used to classify surgical skill levels and to evaluate individual GEARS metrics. Analysis of variance (ANOVA) was used to test the differences for each feature across skill levels. Results Classification accuracies for blunt dissection, retraction, cold dissection, and burn dissection were 95%, 96%, 96%, and 96%, respectively. The time to complete only the retraction was significantly different among the 3 skill levels (p-value = 0.04). Performance was significantly different for the 3 categories of surgical skill level for all subtasks (p-values < 0.01). The extracted visual metrics were strongly associated with GEARS metrics (R² > 0.7 for GEARS metrics evaluation models). Conclusions Machine learning (ML) algorithms trained by visual metrics of RAS surgeons can classify surgical skill levels and evaluate GEARS measures. The time to complete a surgical subtask may not be considered a stand-alone factor for skill level assessment.
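A minimal sketch of the modelling setup described here, using synthetic stand-in data rather than the study's gaze recordings, might look as follows: a gradient boosting classifier for the three expertise levels with cross-validated accuracy, plus a boosted regressor as a rough analogue of estimating a GEARS score from visual metrics. The feature set, sample size, and score ranges are assumptions.

```python
# Minimal sketch with synthetic stand-in data (not the study's gaze recordings).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
gaze_features = rng.normal(size=(33, 10))   # e.g. fixation and pupil-based metrics
skill_level = rng.integers(0, 3, size=33)   # inexperienced / competent / experienced
gears_total = rng.uniform(6, 30, size=33)   # toy GEARS totals

clf = GradientBoostingClassifier(random_state=0)
print("CV accuracy:", cross_val_score(clf, gaze_features, skill_level, cv=3).mean())

reg = GradientBoostingRegressor(random_state=0).fit(gaze_features, gears_total)
print("R^2 (training):", reg.score(gaze_features, gears_total))
```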
Affiliation(s)
- Somayeh B. Shafiei
- From the Department of Urology, Roswell Park Comprehensive Cancer Center in Buffalo, NY
- Saeed Shadpour
- Department of Animal Biosciences, University of Guelph, Guelph, Ontario, Canada
- James L. Mohler
- From the Department of Urology, Roswell Park Comprehensive Cancer Center in Buffalo, NY
- Kristopher Attwood
- Department of Biostatistics and Bioinformatics, Roswell Park Comprehensive Cancer Center, Buffalo, NY
- Qian Liu
- Department of Biostatistics and Bioinformatics, Roswell Park Comprehensive Cancer Center, Buffalo, NY
- Camille Gutierrez
- Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY
17. Zha Y, Xue C, Liu Y, Ni J, De La Fuente JM, Cui D. Artificial intelligence in theranostics of gastric cancer, a review. Med Rev (2021) 2023; 3:214-229. [PMID: 37789960] [PMCID: PMC10542883] [DOI: 10.1515/mr-2022-0042]
Abstract
Gastric cancer (GC) is one of the commonest cancers, with high morbidity and mortality worldwide. There is a great clinical need for precise diagnosis and therapy of GC. In recent years, artificial intelligence (AI) has been actively explored for application to the early diagnosis, treatment and prognosis of gastric carcinoma. Herein, we review recent advances of AI in the early screening, diagnosis, therapy and prognosis of stomach carcinoma. In particular, an AI-assisted breath-screening system for early GC improved the early GC diagnosis rate to 97.4%, and an AI model for stomach cancer diagnosis based on saliva biomarkers obtained an overall accuracy of 97.18%, specificity of 97.44%, and sensitivity of 96.88%. We also discuss the concepts, issues, approaches and challenges of AI applied to stomach cancer. This review provides a comprehensive view and roadmap for readers working in this field, with the aim of pushing the application of AI in the theranostics of stomach cancer to increase the rate of early detection and the cure rate of GC patients.
Affiliation(s)
- Yiqian Zha
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
- Cuili Xue
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
- Yanlei Liu
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
- Jian Ni
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
- Daxiang Cui
- Institute of Nano Biomedicine and Engineering, Shanghai Engineering Research Center for Intelligent Diagnosis and Treatment Instrument, School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- National Engineering Research Center for Nanotechnology, Shanghai, China
18. Rasic G, Parikh PP, Wang ML, Keric N, Jung HS, Ferguson BD, Altieri MS, Nahmias J. The silver lining of the pandemic in surgical education: virtual surgical education and recommendations for best practices. Glob Surg Educ 2023; 2:59. [PMID: 38013862] [PMCID: PMC10205563] [DOI: 10.1007/s44186-023-00137-1]
Abstract
Virtual education is an evolving field within the realm of surgical training. Since the onset of the COVID-19 pandemic, the application of virtual technologies in surgical education has undergone significant exploration and advancement. While originally developed to supplement in-person curricula for the development of clinical decision-making, virtual surgical education has expanded into the realms of clinical decision-making, surgical, and non-surgical skills acquisition. This manuscript aims to discuss the various applications of virtual surgical education as well as the advantages and disadvantages associated with each education modality, while offering recommendations on best practices and future directions.
Collapse
Affiliation(s)
- Gordana Rasic
- Department of Surgery, Boston Medical Center, Boston University Chobanian and Avedisian School of Medicine, Boston, MA USA
| | - Priti P. Parikh
- Department of Surgery, Boonshoft School of Medicine, Wright State University, Dayton, OH USA
| | - Ming-Li Wang
- Department of Surgery, University of New Mexico, Albuquerque, NM USA
| | - Natasha Keric
- Division of Trauma, Acute Care Surgery, and Surgical Critical Care, Department of Surgery, Banner-University Medical Center Phoenix, University of Arizona College of Medicine, Phoenix, AZ USA
| | - Hee Soo Jung
- Division of Acute Care and Regional General Surgery, Department of Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI USA
| | - Benjamin D. Ferguson
- Division of Hepatopancreatobiliary Surgery, Department of Surgery, University of New Mexico, Albuquerque, NM USA
| | - Maria S. Altieri
- Division of Gastrointestinal Surgery, Department of Surgery, Pennsylvania Hospital, Penn Medicine, Philadelphia, PA USA
| | - Jeffry Nahmias
- Division of Trauma, Burns, and Surgical Critical Care, Department of Surgery, University of California Irvine, Orange, CA USA
| |
Collapse
|
19
|
Jackson KL, Durić Z, Engdahl SM, Santago II AC, DeStefano S, Gerber LH. Computer-assisted approaches for measuring, segmenting, and analyzing functional upper extremity movement: a narrative review of the current state, limitations, and future directions. FRONTIERS IN REHABILITATION SCIENCES 2023; 4:1130847. [PMID: 37113748 PMCID: PMC10126348 DOI: 10.3389/fresc.2023.1130847] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 03/23/2023] [Indexed: 04/29/2023]
Abstract
The analysis of functional upper extremity (UE) movement kinematics has implications across domains such as rehabilitation and evaluating job-related skills. Using movement kinematics to quantify movement quality and skill is a promising area of research but is currently not being used widely due to issues associated with cost and the need for further methodological validation. Recent developments by computationally-oriented research communities have resulted in potentially useful methods for evaluating UE function that may make kinematic analyses easier to perform, generally more accessible, and provide more objective information about movement quality, the importance of which has been highlighted during the COVID-19 pandemic. This narrative review provides an interdisciplinary perspective on the current state of computer-assisted methods for analyzing UE kinematics with a specific focus on how to make kinematic analyses more accessible to domain experts. We find that a variety of methods exist to more easily measure and segment functional UE movement, with a subset of those methods being validated for specific applications. Future directions include developing more robust methods for measurement and segmentation, validating these methods in conjunction with proposed kinematic outcome measures, and studying how to integrate kinematic analyses into domain expert workflows in a way that improves outcomes.
Collapse
Affiliation(s)
- Kyle L. Jackson
- Department of Computer Science, George Mason University, Fairfax, VA, United States
- MITRE Corporation, McLean, VA, United States
| | - Zoran Durić
- Department of Computer Science, George Mason University, Fairfax, VA, United States
- Center for Adaptive Systems and Brain-Body Interactions, George Mason University, Fairfax, VA, United States
| | - Susannah M. Engdahl
- Center for Adaptive Systems and Brain-Body Interactions, George Mason University, Fairfax, VA, United States
- Department of Bioengineering, George Mason University, Fairfax, VA, United States
- American Orthotic & Prosthetic Association, Alexandria, VA, United States
| | | | | | - Lynn H. Gerber
- Center for Adaptive Systems and Brain-Body Interactions, George Mason University, Fairfax, VA, United States
- College of Public Health, George Mason University, Fairfax, VA, United States
- Inova Health System, Falls Church, VA, United States
| |
Collapse
|
20
|
Devin CL, Gillani M, Shields MC, Eldredge K, Kucera W, Rupji M, Purvis LA, Paul Olson TJ, Liu Y, Jarc A, Rosen SA. Ratio of Economy of Motion: A New Objective Performance Indicator to Assign Consoles During Dual-Console Robotic Proctectomy. Am Surg 2023:31348231161767. [PMID: 36898676 DOI: 10.1177/00031348231161767] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/12/2023]
Abstract
BACKGROUND Our group investigates objective performance indicators (OPIs) to analyze robotic colorectal surgery. Analyses of OPI data are difficult in dual-console procedures (DCPs) as there is currently no reliable, efficient, or scalable technique to assign console-specific OPIs during a DCP. We developed and validated a novel metric to assign tasks to appropriate surgeons during DCPs. METHODS A colorectal surgeon and fellow reviewed 21 unedited, dual-console proctectomy videos with no information to identify the operating surgeons. The reviewers watched a small number of random tasks and assigned "attending" or "trainee" to each task. Based on this sampling, the remainder of task assignments for each procedure was extrapolated. In parallel, we applied our newly developed OPI, ratio of economy of motion (rEOM), to assign consoles. Results from the 2 methods were compared. RESULTS A total of 1811 individual surgical tasks were recorded during 21 proctectomy videos. A median of 6.5 random tasks (137 total) were reviewed during each video, and the remainder of task assignments were extrapolated based on the 7.6% of tasks audited. The task assignment agreement was 91.2% for video review vs rEOM, with rEOM providing ground truth. It took 2.5 hours to manually review video and assign tasks. Ratio of economy of motion task assignment was immediately available based on OPI recordings and automated calculation. DISCUSSION We developed and validated rEOM as an accurate, efficient, and scalable OPI to assign individual surgical tasks to appropriate surgeons during DCPs. This new resource will be useful to everyone involved in OPI research across all surgical specialties.
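Note: the abstract does not spell out how rEOM is computed, so the following is only a minimal sketch of how a ratio of economy of motion between two consoles might be derived from instrument-tip trajectories. Treating economy of motion as straight-line displacement divided by path length, and the function names, are assumptions for illustration rather than the authors' published definition.

```python
import numpy as np

def economy_of_motion(positions):
    """Economy of motion for one trajectory: straight-line displacement
    divided by total path length. `positions` is an (N, 3) array of
    instrument tip coordinates."""
    p = np.asarray(positions, dtype=float)
    path_length = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
    displacement = np.linalg.norm(p[-1] - p[0])
    return displacement / path_length if path_length > 0 else 0.0

def ratio_of_eom(console_a_positions, console_b_positions):
    """Hypothetical ratio of economy of motion between two consoles during
    one task; a ratio far from 1 would suggest one console performed most
    of the purposeful movement."""
    return economy_of_motion(console_a_positions) / economy_of_motion(console_b_positions)
```

Under such a definition, the console whose movements dominate the purposeful work of a task stands out in the ratio, which loosely mirrors the console-assignment idea described above.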
Collapse
Affiliation(s)
- Courtney L Devin
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
| | - Mishal Gillani
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
| | | | - Kyle Eldredge
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
| | - Walter Kucera
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
| | - Manali Rupji
- Biostatistics Shared Resource, Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Lilia A Purvis
- Research Division, Intuitive Surgical, Norcross, GA, USA
| | | | - Yuan Liu
- Biostatistics Shared Resource, Winship Cancer Institute, Emory University, Atlanta, GA, USA; Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, GA, USA
| | - Anthony Jarc
- Research Division, Intuitive Surgical, Norcross, GA, USA
| | - Seth A Rosen
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
| |
Collapse
|
21
|
Toy S, Ozsoy S, Shafiei S, Antonenko P, Schwengel D. Using electroencephalography to explore neurocognitive correlates of procedural proficiency: A pilot study to compare experts and novices during simulated endotracheal intubation. Brain Cogn 2023; 165:105938. [PMID: 36527783 DOI: 10.1016/j.bandc.2022.105938] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Revised: 11/23/2022] [Accepted: 12/07/2022] [Indexed: 12/23/2022]
Abstract
The objective of this study was to explore the use of EEG as a measure of neurocognitive engagement during a procedural task. In this observational study, self-reported cognitive load, observed performance, and EEG signatures in experts and novices were compared during simulated endotracheal intubation. Twelve medical students (novices) and eight senior anesthesiology trainees (experts) were included in the study. Experts reported significantly lower cognitive load (P < 0.001) and outperformed novices based on the observational checklist (P < 0.001). EEG signatures differed significantly between the experts and novices. Experts showed a greater increase in delta and theta band amplitudes, especially in temporal and frontal locations and in right occipital areas for delta. A machine learning algorithm showed 83.3 % accuracy for expert-novice skill classification using the selected EEG features. Performance scores were positively correlated (P < 0.05) with event-related amplitudes for delta and theta bands at locations where experts and novices showed significant differences. Increased delta and frontal/midline theta oscillations on EEG suggested that experts had better attentional control than novices. This pilot study provides initial evidence that EEG may be a useful, noninvasive measure of neurocognitive engagement in operational settings and that it has the potential to complement traditional clinical skills assessment.
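As context for the EEG band measures reported above, the snippet below shows one common, generic way to estimate delta- and theta-band power for a single channel with Welch's method; it is an illustration only, not the authors' event-related analysis pipeline, and the sampling rate and synthetic signal are placeholders.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Approximate power of one EEG channel within a frequency band (Hz),
    estimated from the Welch power spectral density."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

fs = 256                                    # placeholder sampling rate
t = np.arange(0, 10, 1 / fs)
channel = np.sin(2 * np.pi * 3 * t) + 0.5 * np.random.randn(t.size)  # synthetic signal
delta = band_power(channel, fs, (1, 4))
theta = band_power(channel, fs, (4, 8))
```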
Collapse
Affiliation(s)
- Serkan Toy
- Department of Basic Science Education, Virginia Tech Carilion School of Medicine, Roanoke, VA, USA.
| | - Sahin Ozsoy
- NeuroField Inc, Santa Barbara, CA, USA; BioSoftPro, LLC, Kensington, MD 20895, USA.
| | - Somayeh Shafiei
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA.
| | - Pavlo Antonenko
- Educational Technology, College of Education, University of Florida, Gainesville, FL, USA.
| | - Deborah Schwengel
- Department of Anesthesiology & Critical Care Medicine, The Johns Hopkins University School of Medicine, Baltimore, MD, USA.
| |
Collapse
|
22
|
Guerrero DT, Asaad M, Rajesh A, Hassan A, Butler CE. Advancing Surgical Education: The Use of Artificial Intelligence in Surgical Training. Am Surg 2023; 89:49-54. [PMID: 35570822 DOI: 10.1177/00031348221101503] [Citation(s) in RCA: 22] [Impact Index Per Article: 22.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
The technology of artificial intelligence (AI) has made significant in-roads into the field of medicine over the last decade. With surgery being a discipline where repetition is the key to mastery, the scope of AI presents enormous potential for resident education through the analysis of technique and delivery of structured feedback for performance improvement. In an era marred by a raging pandemic that has decreased exposure and opportunity, AI offers an attractive solution towards improving operating room efficiency, safe patient care in the hands of supervised residents and can ultimately culminate in reduced health care costs. Through this article, we elucidate the current adoption of the artificial intelligence technology and its prospects for advancing surgical education.
Collapse
Affiliation(s)
- David T Guerrero
- University of Pittsburgh Medical School, Pittsburgh, PA, USA
| | - Malke Asaad
- Department of Plastic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | - Aashish Rajesh
- University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
| | - Abbas Hassan
- Department of Plastic Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Charles E Butler
- Department of Plastic Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| |
Collapse
|
23
|
Pipaliya RM, Raymond MJ, Rowley MA, Jasper PM, Meyer TA. Video Analysis of Otologic Instrument Movement During Resident Mastoidectomies. Otol Neurotol 2022; 43:e1115-e1120. [PMID: 36351226 DOI: 10.1097/mao.0000000000003730] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
OBJECTIVE To measure surgical instrument movement during resident mastoidectomies and identify metrics that correlate with experience. STUDY DESIGN Retrospective case series. SETTING Tertiary care center. SUBJECTS Ten postgraduate year (PGY) 2, 6 PGY3, 7 PGY4, and 19 PGY5 recordings of mastoidectomy performed by otolaryngology residents. INTERVENTIONS One-minute intraoperative recordings of mastoidectomies performed during cochlear implantation were collected. Drill and suction-irrigator motion were analyzed with sports motion tracking software. MAIN OUTCOME MEASURES Mean instrument speed, angle, and angular velocity were calculated. Mann-Whitney U tests compared mean instrument metrics between PGY levels. Change in drill speed for seven residents between their PGY2 to PGY5 years was individually analyzed. RESULTS Mean drill speed was significantly greater for PGY5 residents compared with PGY2s (2.9 versus 1.8 cm/s, p = 0.001). Compared with PGY2 residents, suction speed was greater as a PGY5 (1.2 versus 0.9 cm/s; p = 0.201) and significantly greater as a PGY4 (1.5 versus 0.9 cm/s, p = 0.039). Of the seven residents individually analyzed, group mean drill speed increased by 0.4 cm/s, yearly. CONCLUSIONS Drill and suction-irrigator movement during the second minute of drilling of a cortical mastoidectomy seems to increase with resident level. Objective video analysis is a potential adjunct for differentiating novices from more experienced surgeons and monitoring surgical skills progress.
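The speed and angular-velocity metrics reported here can in principle be derived from tracked tip coordinates; the sketch below shows one plausible computation under that assumption and is not the sports motion-tracking software used in the study.

```python
import numpy as np

def mean_speed(points, fps):
    """Mean instrument speed from tracked 2D tip positions sampled at
    `fps` frames per second (tracker units per second)."""
    p = np.asarray(points, dtype=float)
    step_lengths = np.linalg.norm(np.diff(p, axis=0), axis=1)
    return step_lengths.mean() * fps

def mean_angular_velocity(points, fps):
    """Mean absolute change in movement direction (rad/s) of the tracked tip."""
    d = np.diff(np.asarray(points, dtype=float), axis=0)
    angles = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))
    return np.abs(np.diff(angles)).mean() * fps
```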
Collapse
Affiliation(s)
- Royal M Pipaliya
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina
| | | | | | | | | |
Collapse
|
24
|
Gaussian guided frame sequence encoder network for action quality assessment. COMPLEX INTELL SYST 2022. [DOI: 10.1007/s40747-022-00892-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
Abstract
Can a computer evaluate an athlete’s performance automatically? Many action quality assessment (AQA) methods have been proposed in recent years, but their performance is limited by the randomness of video sampling and simple training strategies. To address this, a Gaussian guided frame sequence encoder network is proposed in this paper. In the proposed method, the image feature of each video frame is extracted by a ResNet model. A frame sequence encoder network is then applied to model temporal information and generate an action quality feature, and a fully connected network predicts the action quality score. To train the model effectively, and inspired by the final score calculation rule in Olympic events, a Gaussian loss function is employed to compute the error between the predicted score and the label score. The proposed method is evaluated on the AQA-7 and MTL-AQA datasets. The experimental results confirm that the proposed method achieves better performance than state-of-the-art methods, and detailed ablation experiments verify the effectiveness of each component of the model.
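The exact form of the Gaussian loss is not given in the abstract, so the snippet below is only a hypothetical PyTorch variant in which the penalty saturates smoothly as the predicted score departs from the label score; the width parameter sigma is an assumption.

```python
import torch

def gaussian_loss(pred, target, sigma=1.0):
    """Hypothetical Gaussian-style regression loss: unlike plain MSE, the
    penalty approaches 1 smoothly as the prediction moves away from the
    label score."""
    return torch.mean(1.0 - torch.exp(-(pred - target) ** 2 / (2.0 * sigma ** 2)))

pred = torch.tensor([85.3, 60.1])
target = torch.tensor([90.0, 58.0])
loss = gaussian_loss(pred, target)
```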
Collapse
|
25
|
Kasa K, Burns D, Goldenberg MG, Selim O, Whyne C, Hardisty M. Multi-Modal Deep Learning for Assessing Surgeon Technical Skill. SENSORS (BASEL, SWITZERLAND) 2022; 22:7328. [PMID: 36236424 PMCID: PMC9571767 DOI: 10.3390/s22197328] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Revised: 09/23/2022] [Accepted: 09/23/2022] [Indexed: 06/16/2023]
Abstract
This paper introduces a new dataset of a surgical knot-tying task, and a multi-modal deep learning model that achieves comparable performance to expert human raters on this skill assessment task. Seventy-two surgical trainees and faculty were recruited for the knot-tying task, and were recorded using video, kinematic, and image data. Three expert human raters conducted the skills assessment using the Objective Structured Assessment of Technical Skill (OSATS) Global Rating Scale (GRS). We also designed and developed three deep learning models: a ResNet-based image model, a ResNet-LSTM kinematic model, and a multi-modal model leveraging the image and time-series kinematic data. All three models demonstrate performance comparable to the expert human raters on most GRS domains. The multi-modal model demonstrates the best overall performance, as measured using the mean squared error (MSE) and intraclass correlation coefficient (ICC). This work is significant since it demonstrates that multi-modal deep learning has the potential to replicate human raters on a challenging human-performed knot-tying task. The study demonstrates an algorithm with state-of-the-art performance in surgical skill assessment. As objective assessment of technical skill continues to be a growing, but resource-heavy, element of surgical education, this study is an important step towards automated surgical skill assessment, ultimately leading to reduced burden on training faculty and institutes.
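As a rough illustration of the image-plus-kinematics fusion described above (not the authors' architecture; the backbone choice, layer sizes, and fusion by concatenation are assumptions), a two-branch model regressing a GRS-style score might look like this:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiModalGRS(nn.Module):
    """Two-branch sketch: an image branch (ResNet-18) and a kinematic branch
    (LSTM over a time series), fused by concatenation to regress one score."""
    def __init__(self, kin_dim=12, hidden=64):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()            # yields a 512-d image embedding
        self.image_branch = backbone
        self.kin_branch = nn.LSTM(kin_dim, hidden, batch_first=True)
        self.head = nn.Linear(512 + hidden, 1)

    def forward(self, image, kinematics):
        img_feat = self.image_branch(image)    # (B, 512)
        _, (h_n, _) = self.kin_branch(kinematics)
        fused = torch.cat([img_feat, h_n[-1]], dim=1)
        return self.head(fused)

model = MultiModalGRS()
score = model(torch.randn(2, 3, 224, 224), torch.randn(2, 100, 12))
```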
Collapse
Affiliation(s)
- Kevin Kasa
- Orthopaedic Biomechanics Lab, Holland Bone and Joint Program, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
| | - David Burns
- Orthopaedic Biomechanics Lab, Holland Bone and Joint Program, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 1A1, Canada
- Division of Orthopaedic Surgery, Department of Surgery, University of Toronto, Toronto, ON M5S 1A1, Canada
| | - Mitchell G. Goldenberg
- Division of Urology, Department of Surgery, University of Toronto, Toronto, ON M5S 1A1, Canada
| | - Omar Selim
- Department of Surgery, Royal Victoria Regional Health Center, Barrie, ON L4M 6M2, Canada
| | - Cari Whyne
- Orthopaedic Biomechanics Lab, Holland Bone and Joint Program, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 1A1, Canada
- Division of Orthopaedic Surgery, Department of Surgery, University of Toronto, Toronto, ON M5S 1A1, Canada
| | - Michael Hardisty
- Orthopaedic Biomechanics Lab, Holland Bone and Joint Program, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Division of Orthopaedic Surgery, Department of Surgery, University of Toronto, Toronto, ON M5S 1A1, Canada
| |
Collapse
|
26
|
Wang Z, Yan Z, Xing Y, Wang H. Real‐time trajectory prediction of laparoscopic instrument tip based on long short‐term memory neural network in laparoscopic surgery training. Int J Med Robot 2022; 18:e2441. [DOI: 10.1002/rcs.2441] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Revised: 06/03/2022] [Accepted: 07/11/2022] [Indexed: 11/10/2022]
Affiliation(s)
- Ziheng Wang
- School of Mechanical Engineering, Tianjin University, Tianjin, China
| | - Zhengxiang Yan
- College of Intelligence and Computing, Tianjin University, Tianjin, China
| | - Yuan Xing
- Key Lab for Mechanism Theory and Equipment Design of Ministry of Education, School of Mechanical Engineering, Tianjin University, Tianjin, China
| | - Honglei Wang
- Department of Gastrointestinal Surgery, Tianjin Hospital of ITCWM, Tianjin, China
| |
Collapse
|
27
|
Zhang HB, Dong LJ, Lei Q, Yang LJ, Du JX. Label-reconstruction-based pseudo-subscore learning for action quality assessment in sporting events. APPL INTELL 2022; 53:10053-10067. [PMID: 35991679 PMCID: PMC9374585 DOI: 10.1007/s10489-022-03984-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/07/2022] [Indexed: 11/02/2022]
Abstract
Most existing action quality assessment (AQA) methods provide only an overall quality score for the input video and lack an evaluation of each substage of the movement process; thus, these methods cannot provide detailed feedback for users. Moreover, the existing datasets do not provide labels for substage quality assessment. To address these problems, in this work, a new label-reconstruction-based pseudo-subscore learning (PSL) method is proposed for AQA in sporting events. In the proposed method, the overall score of an action is not only regarded as a quality label but also used as a feature of the training set. A label-reconstruction-based learning algorithm is built to generate pseudo-subscore labels for the training set. Moreover, based on the pseudo-subscore labels and overall score labels, a multi-substage AQA model is fine-tuned from the PSL model to predict the action quality score of each substage and the overall score for an athlete. Several ablation experiments are performed to verify the effectiveness of each module. The experimental results show that our approach achieves state-of-the-art performance.
Collapse
Affiliation(s)
- Hong-Bo Zhang
- Department of Computer Science and Technology, Huaqiao University, Jimei, Xiamen, 361000 Fujian China
- Xiamen Key Laboratory of Computer Vision and Pattern Recognition, Huaqiao University, Jimei, Xiamen, 361000 Fujian China
| | - Li-Jia Dong
- Department of Computer Science and Technology, Huaqiao University, Jimei, Xiamen, 361000 Fujian China
- Fujian Key Laboratory of Big Data Intelligence and Security, Huaqiao University, Jimei, Xiamen, 361000 Fujian China
| | - Qing Lei
- Xiamen Key Laboratory of Computer Vision and Pattern Recognition, Huaqiao University, Jimei, Xiamen, 361000 Fujian China
| | - Li-Jie Yang
- Fujian Key Laboratory of Big Data Intelligence and Security, Huaqiao University, Jimei, Xiamen, 361000 Fujian China
| | - Ji-Xiang Du
- Fujian Key Laboratory of Big Data Intelligence and Security, Huaqiao University, Jimei, Xiamen, 361000 Fujian China
| |
Collapse
|
28
|
Boekestijn I, Azargoshasb S, van Oosterom MN, Slof LJ, Dibbets-Schneider P, Dankelman J, van Erkel AR, Rietbergen DDD, van Leeuwen FWB. Value-assessment of computer-assisted navigation strategies during percutaneous needle placement. Int J Comput Assist Radiol Surg 2022; 17:1775-1785. [PMID: 35934773 PMCID: PMC9468110 DOI: 10.1007/s11548-022-02719-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Accepted: 07/04/2022] [Indexed: 11/05/2022]
Abstract
Purpose
Navigational strategies create a scenario whereby percutaneous needle-based interventions of the liver can be guided using both pre-interventional 3D imaging datasets and dynamic interventional ultrasound (US). To score how such technologies impact the needle placement process, we performed kinematic analysis on different user groups.
Methods
Using a custom biopsy phantom, three consecutive exercises were performed by both novices and experts (n = 26). The exercise came in three options: (1) US-guidance, (2) US-guidance with pre-interventional image-registration (US + Reg) and (3) US-guidance with pre-interventional image-registration and needle-navigation (US + Reg + Nav). The traveled paths of the needle were digitized in 3D. Using custom software algorithms, kinematic metrics were extracted and related to dexterity, decision making indices to obtain overall performance scores (PS).
Results
Kinematic analysis helped quantifying the visual assessment of the needle trajectories. Compared to US-guidance, novices yielded most improvements using Reg (PSavg(US) = 0.43 vs. PSavg(US+Reg) = 0.57 vs. PSavg(US+Reg+Nav) = 0.51). Interestingly, the expert group yielded a reversed trend (PSavg(US) = 0.71 vs PSavg(US+Reg) = 0.58 vs PSavg(US+Reg+Nav) = 0.59).
Conclusion
Digitizing the movement trajectory allowed us to objectively assess the impact of needle-navigation strategies on percutaneous procedures. In particular, our findings suggest that these advanced technologies have a positive impact on the kinematics derived performance of novices.
Collapse
|
29
|
Zheng Y, Ershad M, Fey AM. Toward Correcting Anxious Movements Using Haptic Cues on the Da Vinci Surgical Robot. PROCEEDINGS OF THE ... IEEE/RAS-EMBS INTERNATIONAL CONFERENCE ON BIOMEDICAL ROBOTICS AND BIOMECHATRONICS. IEEE/RAS-EMBS INTERNATIONAL CONFERENCE ON BIOMEDICAL ROBOTICS AND BIOMECHATRONICS 2022; 2022:10.1109/biorob52689.2022.9925380. [PMID: 37408769 PMCID: PMC10321328 DOI: 10.1109/biorob52689.2022.9925380] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/07/2023]
Abstract
Surgical movements have an important stylistic quality that individuals without formal surgical training can use to identify expertise. In our prior work, we sought to characterize quantitative metrics associated with surgical style and developed a near-real-time detection framework for stylistic deficiencies using a commercial haptic device. In this paper, we implement bimanual stylistic detection on the da Vinci Research Kit (dVRK) and focus on one stylistic deficiency, "Anxious", which may describe movements under stressful conditions. Our goal is to potentially correct these "Anxious" movements by exploring the effects of three different types of haptic cues (time-variant spring, damper, and spring-damper feedback) on performance during a basic surgical training task using the da Vinci Research Kit (dVRK). Eight subjects were recruited to complete peg transfer tasks using a randomized order of haptic cues and with baseline trials between each task. Overall, all cues led to a significant improvement in economy of volume over baseline, and time-variant spring haptic cues led to significant improvements in reducing the classified "Anxious" movements and also corresponded with significantly lower path length and economy of volume for the non-dominant hand. This work is the first step in evaluating our stylistic detection model on a surgical robot and could lay the groundwork for future methods to actively and adaptively reduce the negative effect of stress in the operating room.
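The feedback gains and ramp profiles are not specified in the abstract; the sketch below only illustrates what a time-variant spring, damper, or spring-damper cue could look like as a restoring force whose gains ramp up after an "Anxious" movement is flagged. All parameter values are illustrative assumptions.

```python
import numpy as np

def haptic_cue_force(x, v, t, mode="spring-damper", k0=50.0, b0=5.0, ramp=2.0):
    """Hypothetical time-variant haptic cue: a restoring force toward a
    reference pose whose stiffness and damping ramp up linearly over the
    first `ramp` seconds after the cue is triggered.
    x, v: 3-vectors of position error and velocity; t: seconds since trigger."""
    gain = min(t / ramp, 1.0)
    k = k0 * gain if mode in ("spring", "spring-damper") else 0.0
    b = b0 * gain if mode in ("damper", "spring-damper") else 0.0
    return -k * np.asarray(x, dtype=float) - b * np.asarray(v, dtype=float)

force = haptic_cue_force([0.01, 0.0, -0.02], [0.1, 0.0, 0.0], t=1.0)
```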
Collapse
Affiliation(s)
- Yi Zheng
- Department of Mechanical Engineering, the University of Texas at Austin, 204 East Dean Keeton Street, Austin, TX 78712, USA
| | - Marzieh Ershad
- Intuitive Surgical, Inc., 1020 Kifer Road Sunnyvale, CA 94086
| | - Ann Majewicz Fey
- Department of Mechanical Engineering, the University of Texas at Austin, 204 East Dean Keeton Street, Austin, TX 78712, USA
- Department of Surgery, UT South-western Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, USA
| |
Collapse
|
30
|
Chen Z, Terlizzi S, Da Col T, Marzullo A, Catellani M, Ferrigno G, De Momi E. Robot-assisted ex vivo neobladder reconstruction: preliminary results of surgical skill evaluation. Int J Comput Assist Radiol Surg 2022; 17:2315-2323. [PMID: 35802223 DOI: 10.1007/s11548-022-02712-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2022] [Accepted: 06/24/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE Advanced developments in the medical field have gradually increased the public demand for surgical skill evaluation. However, this assessment always depends on the direct observation of experienced surgeons, which is time-consuming and variable. The introduction of robot-assisted surgery provides a new possibility for this evaluation paradigm. This paper aims at evaluating surgeon performance automatically with novel evaluation metrics based on different surgical data. METHODS Urologists ([Formula: see text]) from a hospital were requested to perform a simplified neobladder reconstruction on an ex vivo setup twice with different camera modalities ([Formula: see text]) randomly. They were divided into novices and experts ([Formula: see text], respectively) according to their experience in robot-assisted surgeries. Different performance metrics ([Formula: see text]) are proposed to achieve the surgical skill evaluation, considering both instruments and endoscope. Also, nonparametric tests are adopted to check if there are significant differences when evaluating surgeons' performance. RESULTS When grouping according to four stages of neobladder reconstruction, statistically significant differences can be appreciated in phase 1 ([Formula: see text]) and phase 2 ([Formula: see text]) with normalized time-related metrics and camera movement-related metrics, respectively. On the other hand, considering experience grouping shows that both metrics are able to highlight statistically significant differences between novice and expert performances in the control protocol. It also shows that the camera-related performance of experts is significantly different ([Formula: see text]) when handling the endoscope manually and when it is automatic. CONCLUSION Surgical skill evaluation, using the approach in this paper, can effectively assess the surgical performance of surgeons with different levels of experience. Preliminary results demonstrate that different surgical data can be fully utilized to improve the reliability of surgical evaluation. It also demonstrates its versatility and potential in the quantitative assessment of various surgical operations.
Collapse
Affiliation(s)
- Ziyang Chen
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy.
| | - Serenella Terlizzi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| | - Tommaso Da Col
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| | - Aldo Marzullo
- Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
| | | | - Giancarlo Ferrigno
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| | - Elena De Momi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
| |
Collapse
|
31
|
Zheng Y, Leonard G, Zeh H, Fey AM. Determining the Significant Kinematic Features for Characterizing Stress during Surgical Tasks Using Spatial Attention. JOURNAL OF MEDICAL ROBOTICS RESEARCH 2022; 7:2241006. [PMID: 37360054 PMCID: PMC10289589 DOI: 10.1142/s2424905x22410069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/28/2023]
Abstract
It has been shown that intraoperative stress can have a negative effect on surgeon surgical skills during laparoscopic procedures. For novice surgeons, stressful conditions can lead to significantly higher velocity, acceleration, and jerk of the surgical instrument tips, resulting in faster but less smooth movements. However, it is still not clear which of these kinematic features (velocity, acceleration, or jerk) is the best marker for identifying normal and stressed conditions. Therefore, in order to find the kinematic feature most affected by intraoperative stress, we implemented a spatial attention-based Long-Short-Term-Memory (LSTM) classifier. In a prior IRB-approved experiment, we collected data from medical students performing an extended peg transfer task who were randomized into a control group and a group performing the task under external psychological stresses. In our prior work, we obtained "representative" normal or stressed movements from this dataset using kinematic data as the input. In this study, a spatial attention mechanism is used to describe the contribution of each kinematic feature to the classification of normal/stressed movements. We tested our classifier under Leave-One-User-Out (LOUO) cross-validation, and the classifier reached an overall accuracy of 77.11% for classifying "representative" normal and stressed movements using kinematic features as the input. More importantly, we also studied the spatial attention extracted from the proposed classifier. Velocity and acceleration on both sides had significantly higher attention for classifying a normal movement (p <= 0.0001); velocity (p <= 0.015) and jerk (p <= 0.001) on the non-dominant hand had significantly higher attention for classifying a stressed movement, and it is worth noting that the attention of jerk on the non-dominant hand had the largest increment when moving from describing normal movements to stressed movements (p = 0.0000). In general, we found that jerk on the non-dominant hand can be used to characterize stressed movements of novice surgeons more effectively.
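A minimal sketch of a feature-wise ("spatial") attention layer in front of an LSTM classifier is shown below, so that each kinematic channel receives an inspectable weight; the dimensions, the placement of the attention, and returning the weights alongside the logits are assumptions for illustration, not the authors' exact model.

```python
import torch
import torch.nn as nn

class SpatialAttentionLSTM(nn.Module):
    """LSTM classifier with a learned per-feature attention vector, so the
    contribution of each kinematic channel (e.g., velocity, acceleration,
    jerk per hand) can be inspected after training."""
    def __init__(self, n_features=6, hidden=64, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(n_features, n_features), nn.Softmax(dim=-1))
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (B, T, n_features)
        weights = self.attn(x.mean(dim=1))     # (B, n_features)
        x = x * weights.unsqueeze(1)           # re-weight every frame
        _, (h_n, _) = self.lstm(x)
        return self.fc(h_n[-1]), weights       # class logits and attention

model = SpatialAttentionLSTM()
logits, attention = model(torch.randn(4, 200, 6))
```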
Collapse
Affiliation(s)
- Yi Zheng
- Department of Mechanical Engineering, the University of Texas at Austin, Address, Austin, TX, USA
| | - Grey Leonard
- Department of Surgery, the University of Texas Southwestern Medical Center, Address, Dallas, TX, USA
| | - Herbert Zeh
- Department of Surgery, the University of Texas Southwestern Medical Center, Address, Dallas, TX, USA
| | - Ann Majewicz Fey
- Department of Mechanical Engineering, the University of Texas at Austin, Address, Austin, TX, USA
- Department of Surgery, the University of Texas Southwestern Medical Center, Address, Dallas, TX, USA
| |
Collapse
|
32
|
Hutchinson K, Li Z, Cantrell LA, Schenkman NS, Alemzadeh H. Analysis of executional and procedural errors in dry‐lab robotic surgery experiments. Int J Med Robot 2022; 18:e2375. [PMID: 35114732 PMCID: PMC9285717 DOI: 10.1002/rcs.2375] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 01/25/2022] [Accepted: 01/29/2022] [Indexed: 11/10/2022]
Abstract
Background Analysing kinematic and video data can help identify potentially erroneous motions that lead to sub‐optimal surgeon performance and safety‐critical events in robot‐assisted surgery. Methods We develop a rubric for identifying task and gesture‐specific executional and procedural errors and evaluate dry‐lab demonstrations of suturing and needle passing tasks from the JIGSAWS dataset. We characterise erroneous parts of demonstrations by labelling video data, and use distribution similarity analysis and trajectory averaging on kinematic data to identify parameters that distinguish erroneous gestures. Results Executional error frequency varies by task and gesture, and correlates with skill level. Some predominant error modes in each gesture are distinguishable by analysing error‐specific kinematic parameters. Procedural errors could lead to lower performance scores and increased demonstration times but also depend on surgical style. Conclusions This study provides insights into context‐dependent errors that can be used to design automated error detection mechanisms and improve training and skill assessment.
Collapse
Affiliation(s)
- Kay Hutchinson
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, Virginia, USA
| | - Zongyu Li
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, Virginia, USA
| | - Leigh A. Cantrell
- Department of Obstetrics and Gynecology, University of Virginia, Charlottesville, Virginia, USA
| | - Noah S. Schenkman
- Department of Urology, University of Virginia, Charlottesville, Virginia, USA
| | - Homa Alemzadeh
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, Virginia, USA
| |
Collapse
|
33
|
Zheng Y, Leonard G, Zeh H, Fey AM. Frame-wise detection of surgeon stress levels during laparoscopic training using kinematic data. Int J Comput Assist Radiol Surg 2022; 17:785-794. [PMID: 35150407 PMCID: PMC10321330 DOI: 10.1007/s11548-022-02568-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Accepted: 01/18/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE Excessive stress experienced by the surgeon can have a negative effect on the surgeon's technical skills. The goal of this study is to evaluate and validate a deep learning framework for real-time detection of stressed surgical movements using kinematic data. METHODS 30 medical students were recruited as the subjects to perform a modified peg transfer task and were randomized into two groups, a control group (n=15) and a stressed group (n=15) that completed the task under deteriorating, simulated stressful conditions. To classify stressed movements, we first developed an attention-based Long-Short-Term-Memory recurrent neural network (LSTM) trained to classify normal/stressed trials and obtain the contribution of each data frame to the stress level classification. Next, we extracted the important frames from each trial and used another LSTM network to implement the frame-wise classification of normal and stressed movements. RESULTS The classification between normal and stressed trials using the attention-based LSTM model reached an overall accuracy of 75.86% under Leave-One-User-Out (LOUO) cross-validation. The second LSTM classifier was able to distinguish between typical normal and stressed movements with an accuracy of 74.96% with an 8-second observation under LOUO. Finally, the normal and stressed movements in stressed trials could be classified with an accuracy of 68.18% with a 16-second observation under LOUO. CONCLUSION In this study, we extracted the movements which are more likely to be affected by stress and validated the feasibility of using LSTM and kinematic data for frame-wise detection of stress level during laparoscopic training. The proposed classifier could potentially be integrated with robot-assisted surgery platforms for stress management purposes.
Collapse
Affiliation(s)
- Yi Zheng
- The Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX, 78712, USA.
| | - Grey Leonard
- The Department of Surgery, UT Southwestern Medical Center, Dallas, TX, 75390, USA
| | - Herbert Zeh
- The Department of Surgery, UT Southwestern Medical Center, Dallas, TX, 75390, USA
| | - Ann Majewicz Fey
- The Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX, 78712, USA
- The Department of Surgery, UT Southwestern Medical Center, Dallas, TX, 75390, USA
| |
Collapse
|
34
|
Ranking surgical skills using an attention-enhanced Siamese network with piecewise aggregated kinematic data. Int J Comput Assist Radiol Surg 2022; 17:1039-1048. [DOI: 10.1007/s11548-022-02581-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Accepted: 02/16/2022] [Indexed: 11/25/2022]
|
35
|
Lam K, Chen J, Wang Z, Iqbal FM, Darzi A, Lo B, Purkayastha S, Kinross JM. Machine learning for technical skill assessment in surgery: a systematic review. NPJ Digit Med 2022; 5:24. [PMID: 35241760 PMCID: PMC8894462 DOI: 10.1038/s41746-022-00566-0] [Citation(s) in RCA: 40] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Accepted: 01/21/2022] [Indexed: 12/18/2022] Open
Abstract
Accurate and objective performance assessment is essential for both trainees and certified surgeons. However, existing methods can be time consuming, labor intensive, and subject to bias. Machine learning (ML) has the potential to provide rapid, automated, and reproducible feedback without the need for expert reviewers. We aimed to systematically review the literature and determine the ML techniques used for technical surgical skill assessment and identify challenges and barriers in the field. A systematic literature search, in accordance with the PRISMA statement, was performed to identify studies detailing the use of ML for technical skill assessment in surgery. Of the 1896 studies that were retrieved, 66 studies were included. The most common ML methods used were Hidden Markov Models (HMM, 14/66), Support Vector Machines (SVM, 17/66), and Artificial Neural Networks (ANN, 17/66). 40/66 studies used kinematic data, 19/66 used video or image data, and 7/66 used both. Studies assessed the performance of benchtop tasks (48/66), simulator tasks (10/66), and real-life surgery (8/66). Accuracy rates of over 80% were achieved, although tasks and participants varied between studies. Barriers to progress in the field included a focus on basic tasks, lack of standardization between studies, and lack of datasets. ML has the potential to produce accurate and objective surgical skill assessment through the use of methods including HMM, SVM, and ANN. Future ML-based assessment tools should move beyond the assessment of basic tasks and towards real-life surgery and provide interpretable feedback with clinical value for the surgeon. PROSPERO: CRD42020226071.
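To make one family of the reviewed methods concrete, the toy example below trains an SVM on synthetic summary kinematic features (path length, mean velocity, mean jerk) to separate novice from expert trials; the features, values, and labels are invented for illustration and are not drawn from any study in the review.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic [path_length, mean_velocity, mean_jerk] features per trial
X_novice = rng.normal([2.0, 1.5, 0.8], 0.3, size=(20, 3))
X_expert = rng.normal([1.2, 2.2, 0.4], 0.3, size=(20, 3))
X = np.vstack([X_novice, X_expert])
y = np.array([0] * 20 + [1] * 20)              # 0 = novice, 1 = expert

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
prediction = clf.predict([[1.3, 2.0, 0.5]])    # likely classified as expert
```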
Collapse
Affiliation(s)
- Kyle Lam
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Junhong Chen
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Zeyu Wang
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Fahad M Iqbal
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Ara Darzi
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Benny Lo
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Sanjay Purkayastha
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK.
| | - James M Kinross
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| |
Collapse
|
36
|
Kirubarajan A, Young D, Khan S, Crasto N, Sobel M, Sussman D. Artificial Intelligence and Surgical Education: A Systematic Scoping Review of Interventions. JOURNAL OF SURGICAL EDUCATION 2022; 79:500-515. [PMID: 34756807 DOI: 10.1016/j.jsurg.2021.09.012] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 07/21/2021] [Accepted: 09/16/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE To synthesize peer-reviewed evidence related to the use of artificial intelligence (AI) in surgical education. DESIGN We conducted and reported a scoping review according to the standards outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analysis with extension for Scoping Reviews guideline and the fourth edition of the Joanna Briggs Institute Reviewer's Manual. We systematically searched eight interdisciplinary databases including MEDLINE-Ovid, ERIC, EMBASE, CINAHL, Web of Science: Core Collection, Compendex, Scopus, and IEEE Xplore. Databases were searched from inception until the date of search on April 13, 2021. SETTING/PARTICIPANTS We only examined original, peer-reviewed interventional studies that self-described as AI interventions, focused on medical education, and were relevant to surgical trainees (defined as medical or dental students, postgraduate residents, or surgical fellows) within the title and abstract (see Table 2). Animal, cadaveric, and in vivo studies were not eligible for inclusion. RESULTS After systematically searching eight databases and 4255 citations, our scoping review identified 49 studies relevant to artificial intelligence in surgical education. We found diverse interventions related to the evaluation of surgical competency, personalization of surgical education, and improvement of surgical education materials across surgical specialties. Many studies used existing surgical education materials, such as the Objective Structured Assessment of Technical Skills framework or the JHU-ISI Gesture and Skill Assessment Working Set database. Though most studies did not provide outcomes related to the implementation in medical schools (such as cost-effective analyses or trainee feedback), there are numerous promising interventions. In particular, many studies noted high accuracy in the objective characterization of surgical skill sets. These interventions could be further used to identify at-risk surgical trainees or evaluate teaching methods. CONCLUSIONS There are promising applications for AI in surgical education, particularly for the assessment of surgical competencies, though further evidence is needed regarding implementation and applicability.
Collapse
Affiliation(s)
| | - Dylan Young
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada
| | - Shawn Khan
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
| | - Noelle Crasto
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada
| | - Mara Sobel
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST) at Ryerson University and St. Michael's Hospital, Toronto, Ontario, Canada
| | - Dafna Sussman
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST) at Ryerson University and St. Michael's Hospital, Toronto, Ontario, Canada; Department of Obstetrics and Gynaecology, University of Toronto, Toronto, Ontario, Canada; The Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, Ontario, Canada
| |
Collapse
|
37
|
Chen Z, Zhang Y, Yan Z, Dong J, Cai W, Ma Y, Jiang J, Dai K, Liang H, He J. Artificial intelligence assisted display in thoracic surgery: development and possibilities. J Thorac Dis 2022; 13:6994-7005. [PMID: 35070382 PMCID: PMC8743398 DOI: 10.21037/jtd-21-1240] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Accepted: 11/02/2021] [Indexed: 12/24/2022]
Abstract
In this golden age of rapid development of artificial intelligence (AI), researchers and surgeons have realized that AI could contribute to healthcare in all aspects, especially in surgery. The popularity of low-dose computed tomography (LDCT) and the improvement of video-assisted thoracoscopic surgery (VATS) not only bring opportunities for thoracic surgery but also bring challenges on the way forward. Precisely localizing lung nodules preoperatively, accurately identifying anatomical structures intraoperatively, and avoiding complications require a visual display of the individual patient's specific anatomy for surgical simulation and assistance. With the advance of AI-assisted display technologies, including 3D reconstruction/3D printing, virtual reality (VR), augmented reality (AR), and mixed reality (MR), computed tomography (CT) imaging in thoracic surgery has been fully utilized for transforming 2D images into 3D models, which facilitates surgical teaching, planning, and simulation. AI-assisted display based on surgical videos is a new surgical application that is still in its infancy. Notably, it has potential applications in thoracic surgery education, surgical quality evaluation, intraoperative assistance, and postoperative analysis. In this review, we illustrate current AI-assisted display applications based on CT in thoracic surgery, focus on emerging AI applications in thoracic surgery based on surgical videos by reviewing relevant research in other surgical fields, and anticipate their potential development in thoracic surgery.
Collapse
Affiliation(s)
- Zhuxing Chen
- Department of Thoracic Surgery and Oncology, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, Guangzhou, China
| | - Yudong Zhang
- Department of Thoracic Surgery, the First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
| | - Zeping Yan
- Department of Thoracic Surgery and Oncology, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, Guangzhou, China.,Guangdong Association of Thoracic Diseases, Guangzhou, China
| | - Junguo Dong
- Department of Thoracic Surgery and Oncology, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, Guangzhou, China
| | - Weipeng Cai
- Department of Thoracic Surgery and Oncology, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, Guangzhou, China
| | - Yongfu Ma
- Department of Thoracic Surgery, the First Medical Centre, Chinese PLA General Hospital, Beijing, China
| | - Jipeng Jiang
- Department of Thoracic Surgery, the First Medical Centre, Chinese PLA General Hospital, Beijing, China
| | - Keyao Dai
- Department of Cardiothoracic Surgery, The Affiliated Hospital of Guangdong Medical University, Zhanjiang, China
| | - Hengrui Liang
- Department of Thoracic Surgery and Oncology, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, Guangzhou, China
| | - Jianxing He
- Department of Thoracic Surgery and Oncology, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, Guangzhou, China
| |
Collapse
|
38
|
Bellini V, Valente M, Del Rio P, Bignami E. Artificial intelligence in thoracic surgery: a narrative review. J Thorac Dis 2021; 13:6963-6975. [PMID: 35070380 PMCID: PMC8743413 DOI: 10.21037/jtd-21-761] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Accepted: 08/30/2021] [Indexed: 12/12/2022]
Abstract
OBJECTIVE The aim of this article is to review the current applications of artificial intelligence in thoracic surgery, from diagnosis and pulmonary disease management, to preoperative risk-assessment, surgical planning, and outcomes prediction. BACKGROUND Artificial intelligence implementation in healthcare settings is rapidly growing, though its widespread use in clinical practice is still limited. The employment of machine learning algorithms in thoracic surgery is wide-ranging, including all steps of the clinical pathway. METHODS We performed a narrative review of the literature on Scopus, PubMed and Cochrane databases, including all the relevant studies published in the last ten years, until March 2021. CONCLUSION Machine learning methods are showing encouraging results across the key issues of thoracic surgery, whether clinical, organizational, or educational. Artificial intelligence-based technologies have shown remarkable efficacy in improving the perioperative evaluation of the patient, assisting the decision-making process, enhancing surgical performance, and optimizing operating room scheduling. Still, some concern remains about data supply, protection, and transparency; thus, further studies and specific consensus guidelines are needed to validate these technologies for daily practice. KEYWORDS Artificial intelligence (AI); thoracic surgery; machine learning; lung resection; perioperative medicine.
Collapse
Affiliation(s)
- Valentina Bellini
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
| | - Marina Valente
- General Surgery Unit, Department of Medicine and Surgery, University of Parma, Parma, Italy
| | - Paolo Del Rio
- General Surgery Unit, Department of Medicine and Surgery, University of Parma, Parma, Italy
| | - Elena Bignami
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
| |
Collapse
|
39
|
Hayashi H, Uemura N, Matsumura K, Zhao L, Sato H, Shiraishi Y, Yamashita YI, Baba H. Recent advances in artificial intelligence for pancreatic ductal adenocarcinoma. World J Gastroenterol 2021; 27:7480-7496. [PMID: 34887644 PMCID: PMC8613738 DOI: 10.3748/wjg.v27.i43.7480] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/03/2021] [Revised: 08/02/2021] [Accepted: 11/15/2021] [Indexed: 02/06/2023] Open
Abstract
Pancreatic ductal adenocarcinoma (PDAC) remains the most lethal type of cancer. The 5-year survival rate for patients with early-stage diagnosis can be as high as 20%, suggesting that early diagnosis plays a pivotal role in the prognostic improvement of PDAC cases. In the medical field, the broad availability of biomedical data has led to the advent of the "big data" era. To overcome this deadly disease, fully exploiting big data is a new challenge in the era of precision medicine. Artificial intelligence (AI) is the ability of a machine to learn and display intelligence to solve problems. AI can help to transform big data into clinically actionable insights more efficiently, reduce inevitable errors to improve diagnostic accuracy, and make real-time predictions. AI-based omics analyses will become the next alternative approach to overcome this poor-prognosis disease by discovering biomarkers for early detection, providing molecular/genomic subtyping, offering treatment guidance, and predicting recurrence and survival. Advances in AI may therefore improve PDAC survival outcomes in the near future. The present review mainly focuses on recent advances of AI in PDAC for clinicians. We believe that breakthroughs will soon emerge to fight this deadly disease using AI-navigated precision medicine.
Collapse
Affiliation(s)
- Hiromitsu Hayashi
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
| | - Norio Uemura
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
| | - Kazuki Matsumura
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
| | - Liu Zhao
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
| | - Hiroki Sato
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
| | - Yuta Shiraishi
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
| | - Yo-ichi Yamashita
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
| | - Hideo Baba
- Department of Gastroenterological Surgery, Graduate School of Life Sciences, Kumamoto University, Kumamoto 860-8556, Japan
| |
Collapse
|
40
|
Zheng Y, Leonard G, Tellez J, Zeh H, Majewicz Fey A. Identifying Kinematic Markers Associated with Intraoperative Stress during Surgical Training Tasks. International Symposium on Medical Robotics 2021; 2021:10.1109/ismr48346.2021.9661482. [PMID: 37408580] [PMCID: PMC10321325] [DOI: 10.1109/ismr48346.2021.9661482] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/07/2023]
Abstract
Increased levels of stress can impair surgeon performance and patient safety during surgery. The aim of this study is to investigate the effect of short-term stressors on laparoscopic performance through analysis of kinematic data. Thirty subjects were randomly assigned to two groups in this IRB-approved study. The control group was required to finish an extended-duration peg transfer task (6 minutes) on the FLS trainer while listening to normal simulated vital signs and being observed by a silent moderator. The stressed group finished the same task but listened to progressively deteriorating simulated patient vitals, as well as critical verbal feedback from the moderator, culminating in 30 seconds of cardiac arrest and expiration of the simulated patient. For all subjects, video and position data were recorded using electromagnetic trackers mounted on the handles of the laparoscopic instruments. A statistical analysis compared time-series velocity, acceleration, and jerk data, as well as path length and economy of volume. Clinical stressors led to significantly higher velocity, acceleration, jerk, and path length, as well as lower economy of volume. An objective evaluation score using a modified OSATS technique was also significantly worse for the stressed group than for the control group. This study shows the potential feasibility and advantages of using time-series kinematic data to identify stressful conditions during laparoscopic surgery in near-real time. These data could be useful in the design of future robot-assisted algorithms to reduce the unwanted effects of stress on surgical performance.
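As a rough illustration of the kind of time-series kinematic analysis described above, the sketch below derives velocity, acceleration, jerk, path length, and a bounding-volume-based economy-of-volume measure from tracked instrument positions. This is not the authors' code: the sampling rate, metre units, and economy-of-volume convention are assumptions, and real tracker data would replace the simulated random walk.

```python
# Illustrative sketch, not the study's implementation: kinematic markers from
# electromagnetic-tracker position samples, assuming a fixed sampling rate.
import numpy as np

def kinematic_markers(positions: np.ndarray, fs: float = 100.0) -> dict:
    """positions: (N, 3) instrument-tip coordinates (metres, assumed);
    fs: assumed sampling frequency in Hz."""
    dt = 1.0 / fs
    vel = np.gradient(positions, dt, axis=0)    # velocity
    acc = np.gradient(vel, dt, axis=0)          # acceleration
    jerk = np.gradient(acc, dt, axis=0)         # jerk
    speed = np.linalg.norm(vel, axis=1)
    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    # Economy of volume: path length relative to the bounding volume of the
    # motion -- one plausible convention; the paper may define it differently.
    extents = positions.max(axis=0) - positions.min(axis=0)
    volume = float(np.prod(extents))
    return {
        "mean_speed": float(speed.mean()),
        "mean_accel": float(np.linalg.norm(acc, axis=1).mean()),
        "mean_jerk": float(np.linalg.norm(jerk, axis=1).mean()),
        "path_length": float(path_length),
        "economy_of_volume": path_length / volume if volume > 0 else float("nan"),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = np.cumsum(rng.normal(scale=1e-3, size=(600, 3)), axis=0)  # ~6 s at 100 Hz
    print(kinematic_markers(demo))
```

Group-level comparisons (stressed vs. control) would then be run on these per-trial markers.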
Collapse
Affiliation(s)
- Yi Zheng
- Yi Zheng and Ann Majewicz Fey are with the Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712 USA
| | - Grey Leonard
- Grey Leonard, Juan Tellez, Herbert Zeh and Ann Majewicz Fey are with the Department of Surgery, the University of Texas Southwestern Medical Center, Dallas, TX 75390 USA
| | - Juan Tellez
- Grey Leonard, Juan Tellez, Herbert Zeh and Ann Majewicz Fey are with the Department of Surgery, the University of Texas Southwestern Medical Center, Dallas, TX 75390 USA
| | - Herbert Zeh
- Grey Leonard, Juan Tellez, Herbert Zeh and Ann Majewicz Fey are with the Department of Surgery, the University of Texas Southwestern Medical Center, Dallas, TX 75390 USA
| | - Ann Majewicz Fey
- Yi Zheng and Ann Majewicz Fey are with the Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712 USA
- Grey Leonard, Juan Tellez, Herbert Zeh and Ann Majewicz Fey are with the Department of Surgery, the University of Texas Southwestern Medical Center, Dallas, TX 75390 USA
| |
Collapse
|
41
|
Moglia A, Georgiou K, Georgiou E, Satava RM, Cuschieri A. A systematic review on artificial intelligence in robot-assisted surgery. Int J Surg 2021; 95:106151. [PMID: 34695601 DOI: 10.1016/j.ijsu.2021.106151] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Revised: 10/04/2021] [Accepted: 10/19/2021] [Indexed: 12/12/2022]
Abstract
BACKGROUND Despite the extensive published literature on the significant potential of artificial intelligence (AI), there are no reports on its efficacy in improving patient safety in robot-assisted surgery (RAS). The purposes of this work are to systematically review the published literature on AI in RAS, and to identify and discuss current limitations and challenges. MATERIALS AND METHODS A literature search was conducted on PubMed, Web of Science, Scopus, and IEEE Xplore according to the PRISMA 2020 statement. Eligible articles were peer-reviewed studies published in English from January 1, 2016 to December 31, 2020. AMSTAR 2 was used for quality assessment. Risk of bias was evaluated with the Newcastle-Ottawa Quality Assessment tool. Data from the studies were presented visually in tables using the SPIDER tool. RESULTS Thirty-five publications, representing 3436 patients, met the search criteria and were included in the analysis. The selected reports concern: motion analysis (n = 17), urology (n = 12), gynecology (n = 1), other specialties (n = 1), training (n = 3), and tissue retraction (n = 1). Precision for surgical tool detection varied from 76.0% to 90.6%. Mean absolute error in predicting urinary continence after robot-assisted radical prostatectomy (RARP) ranged from 85.9 to 134.7 days. Accuracy in predicting length of stay after RARP was 88.5%. Accuracy in recognizing the next surgical task during robot-assisted partial nephrectomy (RAPN) reached 75.7%. CONCLUSION The reviewed studies were of low quality. The findings are limited by the small size of the datasets. Comparison between studies on the same topic was restricted by the heterogeneity of algorithms and datasets. There is no proof that AI can currently identify the critical tasks of RAS operations that determine patient outcome. There is an urgent need for studies on large datasets and for external validation of the AI algorithms used. Furthermore, the results should be transparent and meaningful to surgeons, enabling them to inform patients in layman's terms. REGISTRATION Review Registry Unique Identifying Number: reviewregistry1225.
Collapse
Affiliation(s)
- Andrea Moglia
- EndoCAS, Center for Computer Assisted Surgery, University of Pisa, 56124, Pisa, Italy; 1st Propaedeutic Surgical Unit, Hippocrateion Athens General Hospital, Athens Medical School, National and Kapodistrian University of Athens, Greece; MPLSC, Athens Medical School, National and Kapodistrian University of Athens, Greece; Department of Surgery, University of Washington Medical Center, Seattle, WA, United States; Scuola Superiore Sant'Anna of Pisa, 56214, Pisa, Italy; Institute for Medical Science and Technology, University of Dundee, Dundee, DD2 1FD, United Kingdom
| | | | | | | | | |
Collapse
|
42
|
Dong LJ, Zhang HB, Shi Q, Lei Q, Du JX, Gao S. Learning and fusing multiple hidden substages for action quality assessment. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107388] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
|
43
|
Gómez Rivas J, Toribio Vázquez C, Ballesteros Ruiz C, Taratkin M, Marenco JL, Cacciamani GE, Checcucci E, Okhunov Z, Enikeev D, Esperto F, Grossmann R, Somani B, Veneziano D. Artificial intelligence and simulation in urology. Actas Urol Esp 2021; 45:524-529. [PMID: 34526254 DOI: 10.1016/j.acuroe.2021.07.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Accepted: 10/27/2020] [Indexed: 10/20/2022]
Abstract
INTRODUCTION AND OBJECTIVE Artificial intelligence (AI) is in full development, and its implementation in medicine has led to improvements in clinical and surgical practice. One of its many applications is surgical training, with the creation of programs that help avoid complications and risks for the patient. The aim of this article is to analyze the advantages of AI applied to surgical training in urology. MATERIAL AND METHODS A literature search was carried out to identify articles published in English on AI applied to medicine, especially in surgery and the acquisition of surgical skills. RESULTS Surgical training has evolved over time thanks to AI. A model for surgical learning has been created in which skills are acquired progressively while avoiding complications to the patient. The use of simulators allows progressive learning, providing trainees with procedures that increase in number and complexity. AI is also used in imaging tests for surgical or treatment planning. CONCLUSION Currently, the use of AI in daily clinical practice has led to progress in medicine, specifically in surgical training.
Collapse
Affiliation(s)
- J Gómez Rivas
- Departamento de Urología, Hospital Clínico San Carlos, Madrid, Spain; Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands.
| | - C Toribio Vázquez
- Departamento de Urología, Hospital Universitario La Paz, Madrid, Spain
| | | | - M Taratkin
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Institute for Urology and Reproductive Health, Sechenov University, Moscow, Russia
| | - J L Marenco
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Departamento de Urología, Instituto Valenciano de Oncología, Valencia, Spain
| | - G E Cacciamani
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
| | - E Checcucci
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Division of Urology, Department of Oncology, School of Medicine, San Luigi Hospital, University of Turin, Orbassano, Italy
| | - Z Okhunov
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Department of Urology, University of California, Irvine, CA, United States
| | - D Enikeev
- Institute for Urology and Reproductive Health, Sechenov University, Moscow, Russia
| | - F Esperto
- Department of Urology, Campus Biomedico, University of Rome, Roma, Italy
| | - R Grossmann
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Eastern Maine Medical Center, Bangor, ME, United States
| | - B Somani
- Department of Urology, University Hospital Southampton, Southampton, United Kingdom
| | - D Veneziano
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Department of Urology and Kidney Transplant, Grande Ospedale Metropolitano, Reggio Calabria, Italy
| |
Collapse
|
44
|
Gómez Rivas J, Toribio Vázquez C, Ballesteros Ruiz C, Taratkin M, Marenco JL, Cacciamani GE, Checcucci E, Okhunov Z, Enikeev D, Esperto F, Grossmann R, Somani B, Veneziano D. Artificial intelligence and simulation in urology. Actas Urol Esp 2021; 45:S0210-4806(21)00088-7. [PMID: 34127285 DOI: 10.1016/j.acuro.2020.10.012] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Accepted: 10/27/2020] [Indexed: 11/28/2022]
Abstract
INTRODUCTION AND OBJECTIVE Artificial intelligence (AI) is in full development, and its implementation in medicine has led to improvements in clinical and surgical practice. One of its many applications is surgical training, with the creation of programs that help avoid complications and risks for the patient. The aim of this article is to analyze the advantages of AI applied to surgical training in urology. MATERIAL AND METHODS A literature search was carried out to identify articles published in English on AI applied to medicine, especially in surgery and the acquisition of surgical skills. RESULTS Surgical training has evolved over time thanks to AI. A model for surgical learning has been created in which skills are acquired progressively while avoiding complications to the patient. The use of simulators allows progressive learning, providing trainees with procedures that increase in number and complexity. AI is also used in imaging tests for surgical or treatment planning. CONCLUSION Currently, the use of AI in daily clinical practice has led to progress in medicine, specifically in surgical training.
Collapse
Affiliation(s)
- J Gómez Rivas
- Departamento de Urología, Hospital Clínico San Carlos, Madrid, Spain; Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands.
| | - C Toribio Vázquez
- Departamento de Urología, Hospital Universitario La Paz, Madrid, Spain
| | | | - M Taratkin
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Institute for Urology and Reproductive Health, Sechenov University, Moscow, Russia
| | - J L Marenco
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Departamento de Urología, Instituto Valenciano de Oncología, Valencia, Spain
| | - G E Cacciamani
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, United States
| | - E Checcucci
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Division of Urology, Department of Oncology, School of Medicine, San Luigi Hospital, University of Turin, Orbassano, Italy
| | - Z Okhunov
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Department of Urology, University of California, Irvine, California, United States
| | - D Enikeev
- Institute for Urology and Reproductive Health, Sechenov University, Moscow, Russia
| | - F Esperto
- Department of Urology, Campus Biomedico, University of Rome, Roma, Italy
| | - R Grossmann
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Eastern Maine Medical Center, Bangor, Maine, United States
| | - B Somani
- Department of Urology, University Hospital Southampton, Southampton, United Kingdom
| | - D Veneziano
- Young Academic Urologist-Urotechnology Working Party (ESUT-YAU), European Association of Urology, Arnhem, The Netherlands; Department of Urology and Kidney Transplant, Grande Ospedale Metropolitano, Reggio Calabria, Italy
| |
Collapse
|
45
|
Azargoshasb S, van Alphen S, Slof LJ, Rosiello G, Puliatti S, van Leeuwen SI, Houwing KM, Boonekamp M, Verhart J, Dell'Oglio P, van der Hage J, van Oosterom MN, van Leeuwen FWB. The Click-On gamma probe, a second-generation tethered robotic gamma probe that improves dexterity and surgical decision-making. Eur J Nucl Med Mol Imaging 2021; 48:4142-4151. [PMID: 34031721 PMCID: PMC8566398 DOI: 10.1007/s00259-021-05387-z] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Accepted: 04/25/2021] [Indexed: 11/24/2022]
Abstract
Purpose Decision-making and dexterity, features that become increasingly relevant in (robot-assisted) minimally invasive surgery, are considered key components in improving surgical accuracy. Recently, DROP-IN gamma probes were introduced to facilitate radioguided robotic surgery. We now studied whether robotic DROP-IN radioguidance can be further improved using tethered Click-On designs that integrate gamma detection onto the robotic instruments themselves. Methods Using computer-assisted drawing software, 3D printing and precision machining, we created a Click-On probe containing two press-fit connections and an additional grasping moiety for a ProGrasp instrument, combined with fiducials that could be video-tracked using the Firefly laparoscope. Using a dexterity phantom, the duration of specific tasks and the path traveled could be compared between use of the Click-On and DROP-IN probes. To study the impact on surgical decision-making, we performed a blinded study in porcine models wherein surgeons had to identify a hidden 57Co source using either palpation or Click-On radioguidance. Results When assembled onto a ProGrasp instrument, while preserving grasping function and rotational freedom, the fully functional prototype could be inserted through a 12-mm trocar. In dexterity assessments, the Click-On provided a 40% reduction in movements compared to the DROP-IN, which translated into a reduction in time and path length and an increase in straightness index. Radioguidance also improved decision-making; the task-completion rate increased by 60%, procedural time was reduced, and movements became more focused. Conclusion The Click-On gamma probe provides a step toward full integration of radioguidance in minimally invasive surgery. The value of this concept was underlined by its impact on surgical dexterity and decision-making.
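To make the dexterity metrics above concrete, the following minimal sketch computes path length and a straightness index (straight-line displacement divided by travelled path length) for a tracked instrument trajectory. This is an assumed formulation for illustration only, not the authors' analysis code.

```python
# Minimal sketch, assuming (N, 3) arrays of tool-tip positions; not the
# authors' analysis pipeline.
import numpy as np

def path_length(traj: np.ndarray) -> float:
    """Total distance travelled along the trajectory."""
    return float(np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1)))

def straightness_index(traj: np.ndarray) -> float:
    """Straight-line displacement over path length; values near 1 indicate
    a direct, economical movement."""
    direct = float(np.linalg.norm(traj[-1] - traj[0]))
    total = path_length(traj)
    return direct / total if total > 0 else float("nan")

# Example: a gently curved trajectory
t = np.linspace(0, 1, 200)
traj = np.stack([t, 0.1 * np.sin(2 * np.pi * t), np.zeros_like(t)], axis=1)
print(path_length(traj), straightness_index(traj))
```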
Collapse
Affiliation(s)
- Samaneh Azargoshasb
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands.,Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands
| | - Simon van Alphen
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
| | - Leon J Slof
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands.,Instrumentele zaken ontwikkeling, facilitair bedrijf, Leiden University Medical Center, Leiden, the Netherlands
| | - Giuseppe Rosiello
- Department of Urology and Division of Experimental Oncology, Urological Research Institute IRCCS San Raffaele Scientific Institute, Milan, Italy
| | - Stefano Puliatti
- Department of Urology, University of Modena and Reggio Emilia, Via del Pozzo, 71, 41124, Modena, Italy.,ORSI Academy, Melle, Belgium.,Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium
| | - Sven I van Leeuwen
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
| | - Krijn M Houwing
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
| | - Michael Boonekamp
- Instrumentele zaken ontwikkeling, facilitair bedrijf, Leiden University Medical Center, Leiden, the Netherlands
| | - Jeroen Verhart
- Instrumentele zaken ontwikkeling, facilitair bedrijf, Leiden University Medical Center, Leiden, the Netherlands
| | - Paolo Dell'Oglio
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands.,Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands.,ORSI Academy, Melle, Belgium.,Department of Urology, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
| | - Jos van der Hage
- Department of Surgery, Leiden University Medical Center, Leiden, the Netherlands
| | - Matthias N van Oosterom
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands.,Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands
| | - Fijs W B van Leeuwen
- Interventional Molecular Imaging-Laboratory, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands. .,Department of Urology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands. .,ORSI Academy, Melle, Belgium.
| |
Collapse
|
46
|
Brodie A, Dai N, Teoh JYC, Decaestecker K, Dasgupta P, Vasdev N. Artificial intelligence in urological oncology: An update and future applications. Urol Oncol 2021; 39:379-399. [PMID: 34024704 DOI: 10.1016/j.urolonc.2021.03.012] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2020] [Revised: 12/20/2020] [Accepted: 03/21/2021] [Indexed: 01/16/2023]
Abstract
There continues to be rapid developments and research in the field of Artificial Intelligence (AI) in Urological Oncology worldwide. In this review we discuss the basics of AI, application of AI per tumour group (Renal, Prostate and Bladder Cancer) and application of AI in Robotic Urological Surgery. We also discuss future applications of AI being developed with the benefits to patients with Urological Oncology.
Collapse
Affiliation(s)
- Andrew Brodie
- Addenbrooke's Hospital, Cambridge University Hospitals NHS Foundation Trust, Cambridge, United Kingdom
| | - Nick Dai
- Addenbrooke's Hospital, Cambridge University Hospitals NHS Foundation Trust, Cambridge, United Kingdom
| | - Jeremy Yuen-Chun Teoh
- S.H. Ho Urology Centre, Department of Surgery, The Chinese University of Hong Kong, Hong Kong, China
| | | | - Prokar Dasgupta
- Faculty of Life Sciences and Medicine, King's College London, London, United Kingdom
| | - Nikhil Vasdev
- Hertfordshire and Bedfordshire Urological Cancer Centre, Department of Urology, Lister Hospital, Stevenage, United Kingdom; School of Medicine and Life Sciences, University of Hertfordshire, Hatfield, United Kingdom.
| |
Collapse
|
47
|
Lefor AK, Harada K, Dosis A, Mitsuishi M. Motion analysis of the JHU-ISI Gesture and Skill Assessment Working Set II: learning curve analysis. Int J Comput Assist Radiol Surg 2021; 16:589-595. [PMID: 33723706 DOI: 10.1007/s11548-021-02339-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Accepted: 02/25/2021] [Indexed: 01/12/2023]
Abstract
PURPOSE The Johns Hopkins-Intuitive Gesture and Skill Assessment Working Set (JIGSAWS) dataset is used to develop robotic surgery skill assessment tools, but there has been no detailed analysis of this dataset. The aim of this study is to perform a learning curve analysis of the existing JIGSAWS dataset. METHODS Five trials were performed in JIGSAWS by eight participants (four novices, two intermediates and two experts) for three exercises (suturing, knot-tying and needle passing). Global Rating Scale scores and time, path length and movements were analyzed quantitatively and qualitatively by graphical analysis. RESULTS There were no significant differences in Global Rating Scale scores over time. Time in the suturing exercise and path length in needle passing showed significant differences. Other kinematic parameters were not significantly different. Qualitative analysis shows a learning curve only for suturing. Cumulative sum analysis suggests completion of the learning curve for suturing by trial 4. CONCLUSIONS The existing JIGSAWS dataset does not show a quantitative learning curve for Global Rating Scale scores or for most kinematic parameters, which may be due in part to the limited size of the dataset. Qualitative analysis shows a learning curve for suturing. Cumulative sum analysis suggests completion of the suturing learning curve by trial 4. An expanded dataset is needed to facilitate subset analyses.
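For readers unfamiliar with cumulative sum (CUSUM) learning-curve analysis, the sketch below shows one common formulation applied to hypothetical trial times; the JIGSAWS analysis itself is not reproduced here, and the target value and data are assumptions.

```python
# Illustrative CUSUM learning-curve sketch on hypothetical per-trial task
# times; not the study's data or code.
import numpy as np

def cusum(values, target=None):
    """Cumulative sum of deviations from a target (defaults to the mean).
    A plateau or downward turn suggests performance has stabilized."""
    values = np.asarray(values, dtype=float)
    if target is None:
        target = values.mean()
    return np.cumsum(values - target)

times = [310, 280, 250, 240, 238]   # hypothetical suturing times (seconds)
print(cusum(times))                 # flattening after trial 4 would indicate curve completion
```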
Collapse
Affiliation(s)
- Alan Kawarai Lefor
- Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan.
| | - Kanako Harada
- Mechanical Engineering, School of Engineering, The University of Tokyo, Tokyo, Japan
- Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan
| | | | - Mamoru Mitsuishi
- Mechanical Engineering, School of Engineering, The University of Tokyo, Tokyo, Japan
- Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan
| |
Collapse
|
48
|
Lavanchy JL, Zindel J, Kirtac K, Twick I, Hosgor E, Candinas D, Beldi G. Automation of surgical skill assessment using a three-stage machine learning algorithm. Sci Rep 2021; 11:5197. [PMID: 33664317 PMCID: PMC7933408 DOI: 10.1038/s41598-021-84295-6] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Accepted: 02/15/2021] [Indexed: 12/04/2022] Open
Abstract
Surgical skills are associated with clinical outcomes. To improve surgical skills and thereby reduce adverse outcomes, continuous surgical training and feedback are required. Currently, assessment of surgical skills is a manual and time-consuming process that is prone to subjective interpretation. This study aims to automate surgical skill assessment in laparoscopic cholecystectomy videos using machine learning algorithms. To address this, a three-stage machine learning method is proposed: first, a convolutional neural network was trained to identify and localize surgical instruments. Second, motion features were extracted from the detected instrument localizations over time. Third, a linear regression model was trained on the extracted motion features to predict surgical skill. This three-stage modeling approach achieved an accuracy of 87 ± 0.2% in distinguishing good versus poor surgical skill. While the technique cannot yet reliably quantify the degree of surgical skill, it represents an important advance towards automation of surgical skill assessment.
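A compressed sketch of stages two and three of such a pipeline is given below: per-frame instrument positions (assumed to come from a stage-one detector, mocked here) are turned into simple motion features, and a linear regression maps the features to a skill score. The feature choices, frame rate, and synthetic data are assumptions for illustration only.

```python
# Illustrative sketch of stages 2-3 (motion features + linear regression);
# the stage-1 instrument detector is assumed and replaced by synthetic tracks.
import numpy as np
from sklearn.linear_model import LinearRegression

def motion_features(centroids: np.ndarray, fps: float = 25.0) -> np.ndarray:
    """centroids: (N, 2) per-frame pixel positions of one instrument."""
    dt = 1.0 / fps
    vel = np.diff(centroids, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    accel = np.diff(speed) / dt
    path = np.sum(np.linalg.norm(np.diff(centroids, axis=0), axis=1))
    return np.array([speed.mean(), speed.std(), np.abs(accel).mean(), path])

# Hypothetical training set: one feature vector per video plus a rater score.
rng = np.random.default_rng(1)
X = np.stack([motion_features(np.cumsum(rng.normal(size=(500, 2)), axis=0))
              for _ in range(20)])
y = rng.uniform(1, 5, size=20)          # stand-in for expert skill ratings
model = LinearRegression().fit(X, y)
print(model.predict(X[:3]))             # predicted skill scores for 3 videos
```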
Collapse
Affiliation(s)
- Joël L Lavanchy
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, 3010, Bern, Switzerland
| | - Joel Zindel
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, 3010, Bern, Switzerland
| | - Kadir Kirtac
- Caresyntax, Komturstr. 18A, 12099, Berlin, Germany
| | | | - Enes Hosgor
- Caresyntax, Komturstr. 18A, 12099, Berlin, Germany
| | - Daniel Candinas
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, 3010, Bern, Switzerland
| | - Guido Beldi
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, 3010, Bern, Switzerland.
| |
Collapse
|
49
|
Davids J, Makariou SG, Ashrafian H, Darzi A, Marcus HJ, Giannarou S. Automated Vision-Based Microsurgical Skill Analysis in Neurosurgery Using Deep Learning: Development and Preclinical Validation. World Neurosurg 2021; 149:e669-e686. [PMID: 33588081 DOI: 10.1016/j.wneu.2021.01.117] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 01/22/2021] [Accepted: 01/23/2021] [Indexed: 12/22/2022]
Abstract
BACKGROUND/OBJECTIVE Technical skill acquisition is an essential component of neurosurgical training. Educational theory suggests that optimal learning and improvement in performance depend on the provision of objective feedback. Therefore, the aim of this study was to develop a vision-based framework, based on a novel representation of surgical tool motion and interactions, capable of automated and objective assessment of microsurgical skill. METHODS Videos were obtained from 1 expert, 6 intermediate, and 12 novice surgeons performing arachnoid dissection in a validated clinical model using a standard operating microscope. A mask region convolutional neural network framework was used to segment the tools present within the operative field in a recorded video frame. Tool motion analysis was achieved using novel triangulation metrics. Performance of the framework in classifying skill levels was evaluated using the area under the curve and accuracy. Objective measures of classifying the surgeons' skill level were also compared using the Mann-Whitney U test, with P < 0.05 considered statistically significant. RESULTS The area under the curve was 0.977 and the accuracy was 84.21%. A number of differences were found, including experts having a lower median dissector velocity (P = 0.0004; 190.38 ms-1 vs. 116.38 ms-1) and a smaller inter-tool tip distance (median 46.78 vs. 75.92; P = 0.0002) compared with novices. CONCLUSIONS Automated and objective analysis of microsurgery is feasible using a mask region convolutional neural network and a novel representation of tool motion and interaction. This may support technical skills training and assessment in neurosurgery.
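As a toy illustration of the kind of group comparison reported above, the sketch below computes per-frame inter-tool tip distance from (assumed) segmentation-derived tip coordinates and compares two skill groups with a Mann-Whitney U test. All data here are synthetic; the metric definition is an assumption, and the real framework's triangulation metrics are more involved.

```python
# Illustrative sketch with synthetic data: inter-tool tip distance compared
# across skill groups via a Mann-Whitney U test; not the study's framework.
import numpy as np
from scipy.stats import mannwhitneyu

def inter_tip_distance(tips_a: np.ndarray, tips_b: np.ndarray) -> np.ndarray:
    """Per-frame Euclidean distance between two tool tips, each (N, 2) in pixels."""
    return np.linalg.norm(tips_a - tips_b, axis=1)

rng = np.random.default_rng(2)
expert = [inter_tip_distance(rng.normal(50, 5, (300, 2)),
                             rng.normal(60, 5, (300, 2))).mean() for _ in range(6)]
novice = [inter_tip_distance(rng.normal(40, 5, (300, 2)),
                             rng.normal(80, 5, (300, 2))).mean() for _ in range(12)]
stat, p = mannwhitneyu(expert, novice)
print(f"U = {stat:.1f}, p = {p:.4f}")
```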
Collapse
Affiliation(s)
- Joseph Davids
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Imperial College Healthcare NHS Trust, St. Mary's Praed St., Paddington, London, United Kingdom; Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Savvas-George Makariou
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom
| | - Hutan Ashrafian
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Imperial College Healthcare NHS Trust, St. Mary's Praed St., Paddington, London, United Kingdom
| | - Ara Darzi
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Imperial College Healthcare NHS Trust, St. Mary's Praed St., Paddington, London, United Kingdom
| | - Hani J Marcus
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Imperial College Healthcare NHS Trust, St. Mary's Praed St., Paddington, London, United Kingdom; Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Stamatia Giannarou
- Department of Surgery and Cancer, Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom.
| |
Collapse
|
50
|
Yu C, Helwig EJ. Artificial intelligence in gastric cancer: a translational narrative review. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:269. [PMID: 33708896 PMCID: PMC7940908 DOI: 10.21037/atm-20-6337] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
Artificial intelligence (AI) has made increasing clinical contributions and produced novel techniques over the last decade, and its role is increasingly recognized in cancer research and clinical application. Cancers such as gastric (stomach) cancer are ideal testing grounds for whether early efforts to apply AI to medicine can yield valuable results. Several concepts derive from AI, including machine learning (ML) and deep learning (DL). ML is defined as the ability to learn data features without being explicitly programmed. It arises at the intersection of data science and computer science and aims at the efficiency of computing algorithms. In cancer research, ML has been used increasingly in predictive prognostic models. DL is defined as a subset of ML targeting multilayer computation processes. DL is less dependent on the understanding of data features than ML; its algorithms are therefore much harder to interpret than those of ML, potentially impossible. This review discusses the role of AI in diagnostic, therapeutic, and prognostic advances in gastric cancer. Models such as convolutional neural networks (CNNs) and artificial neural networks (ANNs) have been widely praised in their application. Much remains to be covered across the clinical management of gastric cancer, but adapting AI to improving diagnosis of gastric cancer is a worthwhile venture, and the information it yields can change how we approach gastric cancer problems. Though integration may be slow and labored, AI can enhance diagnosis through visual modalities, augment treatment strategies, and grow into an invaluable tool for physicians. AI not only benefits diagnostic and therapeutic outcomes, but also reshapes perspectives on the future medical trajectory.
Collapse
Affiliation(s)
- Chaoran Yu
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China.,Fudan University Shanghai Cancer Center, Shanghai, China
| | - Ernest Johann Helwig
- Tongji Medical College of Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|