1
Abbas T, Tiryaki S, Tekin A, Fernandez N, Fawzy M, Ulman I, Numanoglu A, Hadidi A, Ali M, Hassan I, Chowdhury M. Hypospadias Reconstruction Training: Development of an Ex-Vivo Model for Objective Evaluation of Surgical Skills. Urology 2024:S0090-4295(24)00805-7. [PMID: 39306302 DOI: 10.1016/j.urology.2024.09.013]
Abstract
OBJECTIVE To objectively evaluate technical skill acquisition in hypospadias repair procedures during surgical training using noninvasive wearable sensor technology. METHODS We combined subjective video evaluations with objective electromyography (EMG) measurements in a hands-on hypospadias training course. Surgeons wore wireless EMG and accelerometer sensors on their dominant hand while performing tasks on ex-vivo cadaveric calf penises. The study focused on 4 skills: urethral mobilization, dorsal inlay graft harvest/implantation, meatal-based flap urethroplasty, and dorsal plication. Machine learning techniques analyzed muscle activation patterns and attributes to assess surgical precision. RESULTS The course included 18 participants (10 female, 8 male; average age 40.18 ± 8.46 years) categorized as novice (n = 10, <3 years' experience), intermediate (n = 5, 3-5 years), and expert (n = 3, >5 years). Video evaluations did not reveal significant differences, likely because the training was short-term. However, EMG measurements showed significant reductions in average EMG power, total time, dominant frequency, and cumulative muscle workload after training. Additionally, the mean power spectral density of the EMG signal decreased notably post-training. CONCLUSION This study presents a structured approach to hypospadias training and highlights the effectiveness of wearable sensor technology for objective skill assessment. While video evaluations did not detect significant changes, EMG data captured measurable differences in skill acquisition, suggesting that wearable sensors could enhance objective evaluation of surgical proficiency in residency programs.
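The post-training reductions reported above rest on two standard spectral quantities: average signal power and dominant frequency. A minimal numpy sketch of how such features can be extracted from a raw EMG window (illustrative only, not the authors' actual pipeline; `emg_features` is a hypothetical helper):

```python
import numpy as np

def emg_features(signal, fs):
    """Return (average power, dominant frequency in Hz) of one EMG window."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()                 # remove DC offset
    avg_power = float(np.mean(signal ** 2))         # mean squared amplitude
    spectrum = np.abs(np.fft.rfft(signal)) ** 2     # one-sided power spectrum
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    dominant = float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the 0 Hz bin
    return avg_power, dominant

# Example: a pure 50 Hz tone sampled at 1 kHz for 1 s
fs = 1000
t = np.arange(0, 1, 1 / fs)
power, dom = emg_features(np.sin(2 * np.pi * 50 * t), fs)
```

A drop in average power and dominant frequency across repeated task windows is the kind of change the study attributes to more economical muscle use after training.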
Affiliation(s)
- Tariq Abbas
- Pediatric Urology Section, Sidra Medicine, Doha, Qatar; College of Medicine, Qatar University, Doha, Qatar; Weill Cornell Medicine Qatar, Doha, Qatar.
- Sibel Tiryaki
- Ege University, Faculty of Medicine, Department of Pediatric Surgery, Division of Pediatric Urology, Izmir, Turkey
- Ali Tekin
- Ege University, Faculty of Medicine, Department of Pediatric Surgery, Division of Pediatric Urology, Izmir, Turkey
- Nicolas Fernandez
- Division of Pediatric Urology, Seattle Children's Hospital, Department of Urology, University of Washington, Seattle, WA
- Mohamed Fawzy
- Hypospadias Clinic, Department of Pediatric Surgery, Emma and Offenbach Hospitals, Offenbach, Germany
- Ibrahim Ulman
- Ege University, Faculty of Medicine, Department of Pediatric Surgery, Division of Pediatric Urology, Izmir, Turkey
- Alp Numanoglu
- Ege University, Faculty of Medicine, Department of Pediatric Surgery, Division of Pediatric Urology, Izmir, Turkey
- Ahmed Hadidi
- Hypospadias Clinic, Department of Pediatric Surgery, Emma and Offenbach Hospitals, Offenbach, Germany
- Mansour Ali
- Department of Surgery, Sidra Medicine, Doha, Qatar
2
Shafiei SB, Shadpour S, Mohler JL, Kauffman EC, Holden M, Gutierrez C. Classification of subtask types and skill levels in robot-assisted surgery using EEG, eye-tracking, and machine learning. Surg Endosc 2024; 38:5137-5147. [PMID: 39039296 PMCID: PMC11362185 DOI: 10.1007/s00464-024-11049-6]
Abstract
BACKGROUND Objective and standardized evaluation of surgical skills in robot-assisted surgery (RAS) is critically important for both surgical education and patient safety. This study introduces machine learning (ML) techniques that use features derived from electroencephalogram (EEG) and eye-tracking data to identify surgical subtasks and classify skill levels. METHOD The efficacy of this approach was assessed using a comprehensive dataset encompassing nine distinct classes, each representing a unique combination of three surgical subtasks executed by surgeons while operating on pigs. Four ML models (logistic regression, random forest, gradient boosting, and extreme gradient boosting [XGB]) were used for multi-class classification. To develop the models, 20% of the data samples were randomly allocated to a test set, with the remaining 80% used for training and validation. Hyperparameters were optimized through grid search, using fivefold stratified cross-validation repeated five times. Model reliability was ensured by repeating the train-test split over 30 iterations, with averaged measurements reported. RESULTS The findings revealed that the proposed approach outperformed existing methods for classifying RAS subtasks and skills; the XGB and random forest models yielded high accuracy rates (88.49% and 88.56%, respectively) that were not significantly different (two-sample t-test; P = 0.9). CONCLUSION These results underscore the potential of ML models to augment the objectivity and precision of RAS subtask and skill evaluation. Future research should explore ways to optimize these models, particularly for the classes identified as challenging in this study. Ultimately, this study marks a significant step toward a more refined, objective, and standardized approach to RAS training and competency assessment.
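The evaluation protocol described above (an 80/20 stratified split, grid search over stratified cross-validation, averaged over repeated train-test splits) can be sketched with scikit-learn. The synthetic data, hyperparameter grid, and iteration counts below are placeholders, not the study's actual configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     train_test_split)

# Synthetic stand-in for the EEG/eye-tracking feature matrix
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)

accuracies = []
for it in range(3):  # the study repeated the split 30 times; 3 here for brevity
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=it)
    # Fivefold stratified CV, repeated (twice here; five times in the study)
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=it)
    search = GridSearchCV(RandomForestClassifier(random_state=0),
                          param_grid={"n_estimators": [50, 100]}, cv=cv)
    search.fit(X_tr, y_tr)                    # grid search on the 80% split
    accuracies.append(search.score(X_te, y_te))  # accuracy on held-out 20%

mean_acc = float(np.mean(accuracies))
```

Reporting the mean over repeated splits, as the loop does, is what guards the headline accuracy figures against a lucky or unlucky single partition.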
Affiliation(s)
- Somayeh B Shafiei
- The Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA.
- Saeed Shadpour
- Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- James L Mohler
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Eric C Kauffman
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Matthew Holden
- School of Computer Science, Carleton University, 1125 Colonel By Drive, Ottawa, ON, K1S 5B6, Canada
- Camille Gutierrez
- Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY, 14214, USA
3
Seino T, Saito N, Ogawa T, Asamizu S, Haseyama M. Expert-Novice Level Classification Using Graph Convolutional Network Introducing Confidence-Aware Node-Level Attention Mechanism. Sensors (Basel) 2024; 24:3033. [PMID: 38793888 PMCID: PMC11125224 DOI: 10.3390/s24103033]
Abstract
In this study, we propose a method for classifying expert-novice levels using a graph convolutional network (GCN) with a confidence-aware node-level attention mechanism. In classification using an attention mechanism, the highlighted features may not actually be significant for accurate classification, which degrades classification performance. To address this issue, the proposed method introduces a confidence-aware node-level attention mechanism into a spatiotemporal attention GCN (STA-GCN) for the classification of expert-novice levels. Our method can thus contrast the attention value of each node on the basis of the confidence measure of the classification, resolving this shortcoming of attention-based approaches and enabling accurate classification. Furthermore, because expert-novice levels are ordinal, a classification model that accounts for this ordering improves performance; the proposed method therefore minimizes a loss function that considers the ordinality of the classes to be classified. Together, these approaches improve expert-novice level classification performance.
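The paper's specific ordinal loss is not reproduced here, but the idea of exploiting class ordinality can be illustrated with the common cumulative-binary (Frank and Hall) decomposition, in which an ordered label such as novice < intermediate < expert becomes K−1 "greater-than" targets. A small sketch under that assumption, with hypothetical helper names:

```python
import numpy as np

LEVELS = ["novice", "intermediate", "expert"]  # ordered classes, K = 3

def ordinal_encode(label_idx, n_classes=3):
    """Encode ordinal class k as K-1 cumulative binary targets:
    target[j] = 1 if k > j (Frank & Hall decomposition)."""
    return np.array([1 if label_idx > j else 0 for j in range(n_classes - 1)])

def ordinal_decode(probs, threshold=0.5):
    """Decode K-1 'greater-than' probabilities back to a class index
    by counting how many thresholds are exceeded."""
    return int(np.sum(np.asarray(probs) > threshold))

enc_novice = ordinal_encode(0)        # [0 0]
enc_expert = ordinal_encode(2)        # [1 1]
pred = LEVELS[ordinal_decode([0.9, 0.2])]  # one threshold passed
```

Training K−1 binary heads on these targets penalizes an expert-vs-novice confusion more than an expert-vs-intermediate one, which is the benefit an ordinality-aware loss provides over plain softmax classification.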
Affiliation(s)
- Tatsuki Seino
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo 060-0814, Japan
- Naoki Saito
- Office of Institutional Research, Hokkaido University, Sapporo 060-0808, Japan
- Takahiro Ogawa
- Faculty of Information Science and Technology, Hokkaido University, Sapporo 060-0814, Japan
- Satoshi Asamizu
- National Institute of Technology, Kushiro College, Kushiro 084-0916, Japan
- Miki Haseyama
- Faculty of Information Science and Technology, Hokkaido University, Sapporo 060-0814, Japan
4
Morris MX, Fiocco D, Caneva T, Yiapanis P, Orgill DP. Current and future applications of artificial intelligence in surgery: implications for clinical practice and research. Front Surg 2024; 11:1393898. [PMID: 38783862 PMCID: PMC11111929 DOI: 10.3389/fsurg.2024.1393898]
Abstract
Surgeons are skilled at making complex decisions about invasive procedures that can save lives, alleviate pain, and avoid complications in patients. The knowledge behind these decisions is accumulated over years of schooling and practice. That experience is in turn shared with others, including through peer-reviewed articles, which are published in ever-greater numbers each year. In this work, we review the literature on the use of Artificial Intelligence (AI) in surgery. We focus on what is currently available and what is likely to come in the near future, in both clinical care and research. We show that AI has the potential to be a key tool for elevating the effectiveness of training and decision-making in surgery and for discovering relevant and valid scientific knowledge in the surgical domain. We also address concerns about AI technology, including users' inability to interpret algorithms and the risk of incorrect predictions. A better understanding of AI will allow surgeons to use new tools wisely for the benefit of their patients.
Affiliation(s)
- Miranda X. Morris
- Duke University School of Medicine, Duke University Hospital, Durham, NC, United States
- Davide Fiocco
- Department of Artificial Intelligence, Frontiers Media SA, Lausanne, Switzerland
- Tommaso Caneva
- Department of Artificial Intelligence, Frontiers Media SA, Lausanne, Switzerland
- Paris Yiapanis
- Department of Artificial Intelligence, Frontiers Media SA, Lausanne, Switzerland
- Dennis P. Orgill
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, United States
5
Takács K, Lukács E, Levendovics R, Pekli D, Szijártó A, Haidegger T. Assessment of Surgeons' Stress Levels with Digital Sensors during Robot-Assisted Surgery: An Experimental Study. Sensors (Basel) 2024; 24:2915. [PMID: 38733021 PMCID: PMC11086209 DOI: 10.3390/s24092915]
Abstract
Robot-Assisted Minimally Invasive Surgery (RAMIS) marks a paradigm shift in surgical procedures, enhancing precision and ergonomics. Concurrently, it introduces complex stress dynamics and ergonomic challenges at the human-robot interface. This study explores the stress-related aspects of RAMIS, using the da Vinci Xi Surgical System and the Sea Spikes model as a standard skill-training phantom, to establish a link between technological advancement and human factors in RAMIS environments. By employing physiological and kinematic sensors for heart rate variability, hand movement tracking, and posture analysis, this research aims to develop a framework for quantifying the stress and ergonomic load imposed on surgeons. Preliminary findings reveal significant correlations between stress levels and several of the skill-related metrics measured by external sensors or the SURG-TLX questionnaire. Furthermore, early analysis of this preliminary dataset suggests the potential benefits of applying machine learning for surgeon skill classification and stress analysis. This paper presents the initial findings, the identified correlations, and the lessons learned from the clinical setup, aiming to lay the groundwork for wider studies in the fields of clinical situation awareness and attention computing.
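Heart rate variability, one of the stress signals used above, is typically summarized by time-domain statistics over RR intervals. A minimal numpy sketch of two standard measures, SDNN and RMSSD; the function name and example values are illustrative, not the study's data:

```python
import numpy as np

def hrv_metrics(rr_ms):
    """Two standard time-domain HRV measures from RR intervals in ms:
    SDNN  - standard deviation of the intervals (overall variability)
    RMSSD - root mean square of successive differences (short-term variability,
            commonly read as a proxy for parasympathetic activity)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = float(np.std(rr, ddof=1))
    rmssd = float(np.sqrt(np.mean(np.diff(rr) ** 2)))
    return sdnn, rmssd

# Example: a short, fairly regular RR series (ms)
sdnn, rmssd = hrv_metrics([800, 810, 790, 805, 795])
```

Lower values of these metrics under task load are the usual marker of elevated acute stress, which is the kind of relationship the study correlates against SURG-TLX scores.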
Affiliation(s)
- Kristóf Takács
- Antal Bejczy Center for Intelligent Robotics (IROB), University Research and Innovation Center (EKIK), Óbuda University, 1034 Budapest, Hungary
- Eszter Lukács
- Antal Bejczy Center for Intelligent Robotics (IROB), University Research and Innovation Center (EKIK), Óbuda University, 1034 Budapest, Hungary
- Renáta Levendovics
- Antal Bejczy Center for Intelligent Robotics (IROB), University Research and Innovation Center (EKIK), Óbuda University, 1034 Budapest, Hungary
- John von Neumann Faculty of Informatics (NIK), Óbuda University, 1034 Budapest, Hungary
- Austrian Center for Medical Innovation and Technology (ACMIT), 2700 Wiener Neustadt, Austria
- Damján Pekli
- Department of Surgery, Transplantation and Gastroenterology, Semmelweis University, 1082 Budapest, Hungary
- Attila Szijártó
- Department of Surgery, Transplantation and Gastroenterology, Semmelweis University, 1082 Budapest, Hungary
- Tamás Haidegger
- Antal Bejczy Center for Intelligent Robotics (IROB), University Research and Innovation Center (EKIK), Óbuda University, 1034 Budapest, Hungary
- Austrian Center for Medical Innovation and Technology (ACMIT), 2700 Wiener Neustadt, Austria
6
Choi E, Leonard KW, Jassal JS, Levin AM, Ramachandra V, Jones LR. Artificial Intelligence in Facial Plastic Surgery: A Review of Current Applications, Future Applications, and Ethical Considerations. Facial Plast Surg 2023; 39:454-459. [PMID: 37353051 DOI: 10.1055/s-0043-1770160]
Abstract
From virtual chat assistants to self-driving cars, artificial intelligence (AI) is often heralded as the technology that has transformed, and will continue to transform, this generation. Alongside widely adopted applications in other industries, its potential use in medicine is being increasingly explored: the vast amounts of data in electronic health records and the need for continuous improvement in patient care and workflow efficiency present many opportunities for AI implementation. Indeed, AI has already demonstrated capabilities for assisting in tasks such as documentation, image classification, and surgical outcome prediction. This technology can also be harnessed in facial plastic surgery, where the field's unique characteristics lend themselves well to specific applications. AI is not without limitations, however, and its further adoption in medicine and facial plastic surgery must be accompanied by discussion of the ethical implications and proper usage of AI in healthcare. In this article, we review current and potential uses of AI in facial plastic surgery, as well as its ethical ramifications.
Affiliation(s)
- Elizabeth Choi
- Wayne State University School of Medicine, Detroit, Michigan
- Kyle W Leonard
- Department of Otolaryngology, Henry Ford Hospital, Detroit, Michigan
- Japnam S Jassal
- Department of Otolaryngology, Henry Ford Hospital, Detroit, Michigan
- Albert M Levin
- Department of Public Health Science, Henry Ford Health, Detroit, Michigan
- Center for Bioinformatics, Henry Ford Health, Detroit, Michigan
- Vikas Ramachandra
- Department of Public Health Science, Henry Ford Health, Detroit, Michigan
- Center for Bioinformatics, Henry Ford Health, Detroit, Michigan
- Lamont R Jones
- Department of Otolaryngology, Henry Ford Hospital, Detroit, Michigan
7
Pedrett R, Mascagni P, Beldi G, Padoy N, Lavanchy JL. Technical skill assessment in minimally invasive surgery using artificial intelligence: a systematic review. Surg Endosc 2023; 37:7412-7424. [PMID: 37584774 PMCID: PMC10520175 DOI: 10.1007/s00464-023-10335-z]
Abstract
BACKGROUND Technical skill assessment in surgery relies on expert opinion. It is therefore time-consuming, costly, and often lacks objectivity. Analysis of intraoperative data by artificial intelligence (AI) has the potential for automated technical skill assessment. The aim of this systematic review was to analyze the performance, external validity, and generalizability of AI models for technical skill assessment in minimally invasive surgery. METHODS A systematic search of Medline, Embase, Web of Science, and IEEE Xplore was performed to identify original articles reporting the use of AI in the assessment of technical skill in minimally invasive surgery. Risk of bias (RoB) and quality of the included studies were analyzed according to the Quality Assessment of Diagnostic Accuracy Studies criteria and the modified Joanna Briggs Institute checklists, respectively. Findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. RESULTS In total, 1958 articles were identified; 50 articles met the eligibility criteria and were analyzed. Motion data extracted from surgical videos (n = 25) or kinematic data from robotic systems or sensors (n = 22) were the most frequent input data for AI. Most studies used deep learning (n = 34) and predicted technical skills using an ordinal assessment scale (n = 36), with good accuracies in simulated settings. However, all proposed models were in the development stage; only 4 studies were externally validated and 8 showed a low RoB. CONCLUSION AI showed good performance in technical skill assessment in minimally invasive surgery. However, models often lacked external validity and generalizability. Models should therefore be benchmarked using predefined performance metrics and tested in clinical implementation studies.
Affiliation(s)
- Romina Pedrett
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Pietro Mascagni
- IHU Strasbourg, Strasbourg, France
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Guido Beldi
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Nicolas Padoy
- IHU Strasbourg, Strasbourg, France
- ICube, CNRS, University of Strasbourg, Strasbourg, France
- Joël L Lavanchy
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- IHU Strasbourg, Strasbourg, France
- University Digestive Health Care Center Basel - Clarunis, PO Box, 4002, Basel, Switzerland
8
Li C, Liu C, Huaulme A, Zemiti N, Jannin P, Poignet P. sEMG-based Motion Recognition for Robotic Surgery Training - A Preliminary Study. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083107 DOI: 10.1109/embc40787.2023.10340047]
Abstract
Robotic surgery represents a major breakthrough in the evolution of medical technology. Accordingly, efficient skill training and assessment methods should be developed to meet surgeons' need to acquire such robotic skills safely over a relatively short learning curve. In contrast to conventional training and assessment methods, we aim to explore the surface electromyography (sEMG) signal during the training process to obtain semantic, interpretable information that helps trainees better understand and improve their training performance. As a preliminary study, motion-primitive recognition based on the sEMG signal is studied in this work. Using machine learning (ML) techniques, we show that sEMG-based motion recognition is feasible and promising for hand motions along 3 Cartesian axes in the virtual reality (VR) environment of a commercial robotic surgery training platform; it will hence serve as the basis for a new robotic surgical skill assessment criterion and for training guidance based on muscle activity information. Because certain motion patterns were recognized less accurately than others, more data collection and deep learning-based analysis will be carried out to further improve recognition accuracy in future research.
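sEMG motion-recognition pipelines like the one described typically segment the signal into overlapping windows and feed per-window time-domain features to a classifier. A toy sketch of that front end, with hypothetical helper names, classic features (mean absolute value, RMS, zero crossings), and a synthetic signal standing in for real sEMG:

```python
import numpy as np

def semg_window_features(window):
    """Classic time-domain features for one sEMG channel window:
    mean absolute value (MAV), root mean square (RMS), zero-crossing count."""
    w = np.asarray(window, dtype=float)
    mav = float(np.mean(np.abs(w)))
    rms = float(np.sqrt(np.mean(w ** 2)))
    zc = int(np.sum(np.diff(np.signbit(w).astype(int)) != 0))
    return [mav, rms, zc]

def sliding_windows(signal, size, step):
    """Split a 1-D signal into overlapping windows for per-window classification."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

# Toy alternating "signal", 100 samples; real sEMG would be band-passed first
sig = np.tile([1.0, -1.0], 50)
feats = [semg_window_features(w) for w in sliding_windows(sig, size=20, step=10)]
```

Each feature vector would then be paired with the motion-primitive label of its window and passed to an ML classifier, which is the structure the recognition experiment above relies on.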