1.
Shafiei SB, Shadpour S, Mohler JL, Kauffman EC, Holden M, Gutierrez C. Classification of subtask types and skill levels in robot-assisted surgery using EEG, eye-tracking, and machine learning. Surg Endosc 2024; 38:5137-5147. PMID: 39039296; PMCID: PMC11362185; DOI: 10.1007/s00464-024-11049-6.
Abstract
BACKGROUND Objective and standardized evaluation of surgical skills in robot-assisted surgery (RAS) holds critical importance for both surgical education and patient safety. This study introduces machine learning (ML) techniques using features derived from electroencephalogram (EEG) and eye-tracking data to identify surgical subtasks and classify skill levels. METHOD The efficacy of this approach was assessed using a comprehensive dataset encompassing nine distinct classes, each representing a unique combination of three surgical subtasks executed by surgeons while performing operations on pigs. Four ML models, logistic regression, random forest, gradient boosting, and extreme gradient boosting (XGB), were used for multi-class classification. To develop the models, 20% of data samples were randomly allocated to a test set, with the remaining 80% used for training and validation. Hyperparameters were optimized through grid search, using fivefold stratified cross-validation repeated five times. Model reliability was ensured by repeating the train-test split over 30 iterations, with average measurements reported. RESULTS The findings revealed that the proposed approach outperformed existing methods for classifying RAS subtasks and skills; the XGB and random forest models yielded high accuracy rates (88.49% and 88.56%, respectively) that were not significantly different (two-sample t-test; P-value = 0.9). CONCLUSION These results underscore the potential of ML models to augment the objectivity and precision of RAS subtask and skill evaluation. Future research should explore ways to optimize these models, particularly focusing on the classes identified as challenging in this study. Ultimately, this study marks a significant step towards a more refined, objective, and standardized approach to RAS training and competency assessment.
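The stratified 80/20 splitting protocol described in the abstract above can be sketched in plain Python. This is a minimal illustration, not the authors' code; the nine-class label vector and sample count below are hypothetical.

```python
import random
from collections import defaultdict

def stratified_split(labels, test_frac=0.2, seed=0):
    """Split sample indices so each class contributes ~test_frac of its samples to the test set."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)                       # randomize within each class
        n_test = round(len(idxs) * test_frac)   # per-class test quota
        test.extend(idxs[:n_test])
        train.extend(idxs[n_test:])
    return sorted(train), sorted(test)

# Hypothetical balanced dataset: 450 samples over nine classes
labels = [i % 9 for i in range(450)]
train_idx, test_idx = stratified_split(labels)
```

In the study this split was repeated 30 times (with hyperparameter grid search inside a repeated stratified fivefold loop on the training portion); the sketch shows only the per-class proportional allocation that "stratified" implies.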
Affiliation(s)
- Somayeh B Shafiei
  - The Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Saeed Shadpour
  - Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- James L Mohler
  - Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Eric C Kauffman
  - Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Matthew Holden
  - School of Computer Science, Carleton University, 1125 Colonel By Drive, Ottawa, ON, K1S 5B6, Canada
- Camille Gutierrez
  - Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY, 14214, USA
2.
Shafiei SB, Shadpour S, Mohler JL, Rashidi P, Toussi MS, Liu Q, Shafqat A, Gutierrez C. Prediction of Robotic Anastomosis Competency Evaluation (RACE) metrics during vesico-urethral anastomosis using electroencephalography, eye-tracking, and machine learning. Sci Rep 2024; 14:14611. PMID: 38918593; PMCID: PMC11199555; DOI: 10.1038/s41598-024-65648-3.
Abstract
Residents learn the vesico-urethral anastomosis (VUA), a key step in robot-assisted radical prostatectomy (RARP), early in their training. VUA assessment and training significantly impact patient outcomes and have high educational value. This study aimed to develop objective prediction models for the Robotic Anastomosis Competency Evaluation (RACE) metrics using electroencephalogram (EEG) and eye-tracking data. Data were recorded from 23 participants performing robot-assisted VUA (henceforth 'anastomosis') on plastic models and animal tissue using the da Vinci surgical robot. EEG and eye-tracking features were extracted, and participants' anastomosis subtask performance was assessed by three raters using the RACE tool and operative videos. Random forest regression (RFR) and gradient boosting regression (GBR) models were developed to predict RACE scores using extracted features, while linear mixed models (LMM) identified associations between features and RACE scores. Overall performance scores significantly differed among inexperienced, competent, and experienced skill levels (P value < 0.0001). For plastic anastomoses, R2 values for predicting unseen test scores were: needle positioning (0.79), needle entry (0.74), needle driving and tissue trauma (0.80), suture placement (0.75), and tissue approximation (0.70). For tissue anastomoses, the values were 0.62, 0.76, 0.65, 0.68, and 0.62, respectively. The models could enhance RARP anastomosis training by offering objective performance feedback to trainees.
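The R2 values quoted above measure how much of the variance in held-out RACE scores the regression models explain. The metric itself is simple; a minimal pure-Python version follows, with illustrative values that are not study data.

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical RACE sub-scores on a held-out test set vs. model predictions
observed  = [1.0, 2.0, 3.0, 4.0]
predicted = [1.5, 2.0, 3.0, 3.5]
score = r_squared(observed, predicted)  # 0.9 for these toy values
```

An R2 of 0.79 (needle positioning, plastic models) therefore means the model's predictions account for 79% of the score variance that a mean-only predictor would leave unexplained.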
Affiliation(s)
- Somayeh B Shafiei
  - Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Elm and Carlton Streets, Buffalo, NY, 14263, USA
- Saeed Shadpour
  - Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- James L Mohler
  - Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Parisa Rashidi
  - Department of Biomedical Engineering, University of Florida, Gainesville, FL, 32611, USA
- Mehdi Seilanian Toussi
  - Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Elm and Carlton Streets, Buffalo, NY, 14263, USA
- Qian Liu
  - Department of Biostatistics and Bioinformatics, Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA
- Ambreen Shafqat
  - Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Elm and Carlton Streets, Buffalo, NY, 14263, USA
- Camille Gutierrez
  - Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY, 14214, USA
3.
Andersen AG, Riparbelli AC, Siebner HR, Konge L, Bjerrum F. Using neuroimaging to assess brain activity and areas associated with surgical skills: a systematic review. Surg Endosc 2024; 38:3004-3026. PMID: 38653901; DOI: 10.1007/s00464-024-10830-x.
Abstract
BACKGROUND Surgical skills acquisition is under continuous development due to the emergence of new technologies, and assessment tools need to develop along with these. A range of neuroimaging modalities has been used to map the functional activation of brain networks while surgeons acquire novel surgical skills. These have been proposed as a method to provide a deeper understanding of surgical expertise and offer new possibilities for the personalized training of future surgeons. Because studies differ in modalities, outcomes, and surgical skills, a systematic review of the evidence is needed. This systematic review aims to summarize the current knowledge on the topic and evaluate the potential use of neuroimaging in surgical education. METHODS We conducted a systematic review of neuroimaging studies that mapped functional brain activation while surgeons with different levels of expertise learned and performed technical and non-technical surgical tasks. We included all studies published before July 1st, 2023, in MEDLINE, EMBASE, and Web of Science. RESULTS 38 task-based brain mapping studies were identified, consisting of randomized controlled trials, case-control studies, and observational cohort or cross-sectional studies. The studies employed a wide range of brain mapping modalities, including electroencephalography, functional magnetic resonance imaging, positron emission tomography, and functional near-infrared spectroscopy. The surgical tasks activated brain areas involved in the execution and sensorimotor or cognitive control of surgical skills, especially the prefrontal cortex, supplementary motor area, and primary motor area, with significant differences between novices and experts. CONCLUSION Functional neuroimaging can reveal how task-related brain activity reflects technical and non-technical surgical skills. The existing body of work highlights the potential of neuroimaging to link task-related brain activity patterns with the individual level of competency or improvement in performance after training surgical skills. More research is needed to establish its validity and usefulness as an assessment tool.
Affiliation(s)
- Annarita Ghosh Andersen
  - Copenhagen Academy for Medical Education and Simulation (CAMES), Center for Human Resources and Education, The Capital Region of Denmark, Ryesgade 53B, 2100, Copenhagen, Denmark
  - Department of Cardiothoracic Surgery, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
- Agnes Cordelia Riparbelli
  - Copenhagen Academy for Medical Education and Simulation (CAMES), Center for Human Resources and Education, The Capital Region of Denmark, Ryesgade 53B, 2100, Copenhagen, Denmark
- Hartwig Roman Siebner
  - Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
  - Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital - Amager and Hvidovre, Hvidovre, Denmark
  - Department of Neurology, Copenhagen University Hospital - Bispebjerg and Frederiksberg, Copenhagen, Denmark
- Lars Konge
  - Copenhagen Academy for Medical Education and Simulation (CAMES), Center for Human Resources and Education, The Capital Region of Denmark, Ryesgade 53B, 2100, Copenhagen, Denmark
  - Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Flemming Bjerrum
  - Copenhagen Academy for Medical Education and Simulation (CAMES), Center for Human Resources and Education, The Capital Region of Denmark, Ryesgade 53B, 2100, Copenhagen, Denmark
  - Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
  - Gastrounit, Surgical Section, Copenhagen University Hospital - Amager and Hvidovre, Hvidovre, Denmark
4.
Shafiei SB, Shadpour S, Sasangohar F, Mohler JL, Attwood K, Jing Z. Development of performance and learning rate evaluation models in robot-assisted surgery using electroencephalography and eye-tracking. NPJ Sci Learn 2024; 9:3. PMID: 38242909; PMCID: PMC10799032; DOI: 10.1038/s41539-024-00216-y.
Abstract
The existing performance evaluation methods in robot-assisted surgery (RAS) are mainly subjective, costly, and affected by shortcomings such as the inconsistency of results and dependency on the raters' opinions. The aim of this study was to develop models for an objective evaluation of performance and rate of learning RAS skills while practicing surgical simulator tasks. The electroencephalogram (EEG) and eye-tracking data were recorded from 26 subjects while performing Tubes, Suture Sponge, and Dots and Needles tasks. Performance scores were generated by the simulator program. The functional brain networks were extracted using EEG data and coherence analysis. Then these networks, along with community detection analysis, facilitated the extraction of average search information and average temporal flexibility features at 21 Brodmann areas (BA) and four frequency bands. Twelve eye-tracking features were extracted and used to develop linear random intercept models for performance evaluation and multivariate linear regression models for the evaluation of the learning rate. Results showed that subject-wise standardization of features improved the R2 of the models. Average pupil diameter and rate of saccade were associated with performance in the Tubes task (multivariate analysis; p-value = 0.01 and p-value = 0.04, respectively). Entropy of pupil diameter was associated with performance in the Dots and Needles task (multivariate analysis; p-value = 0.01). Average temporal flexibility and search information in several BAs and frequency bands were associated with performance and rate of learning. The models may be used to objectify performance and learning rate evaluation in RAS once validated with a broader sample size and tasks.
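Subject-wise standardization, reported above to improve model R2, z-scores each feature within each subject so that between-subject baseline differences (e.g., in resting pupil diameter) do not dominate the model. A minimal sketch with made-up values; the feature and subject IDs are hypothetical.

```python
from collections import defaultdict
from statistics import mean, pstdev

def standardize_by_subject(values, subjects):
    """Z-score each value using the mean and std of its own subject's values.

    Assumes each subject's values are not all identical (pstdev > 0).
    """
    groups = defaultdict(list)
    for v, s in zip(values, subjects):
        groups[s].append(v)
    stats = {s: (mean(vs), pstdev(vs)) for s, vs in groups.items()}
    return [(v - stats[s][0]) / stats[s][1] for v, s in zip(values, subjects)]

# Hypothetical pupil-diameter feature from two subjects with different baselines
values   = [3.0, 3.5, 4.0, 6.0, 6.5, 7.0]
subjects = ['A', 'A', 'A', 'B', 'B', 'B']
z = standardize_by_subject(values, subjects)
```

After standardization, both subjects' values occupy the same scale (each subject's standardized feature has mean 0), which is what makes pooled regression across subjects meaningful.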
Affiliation(s)
- Somayeh B Shafiei
  - Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Saeed Shadpour
  - Department of Animal Biosciences, University of Guelph, Guelph, Ontario, N1G 2W1, Canada
- Farzan Sasangohar
  - Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX, 77843, USA
- James L Mohler
  - Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Kristopher Attwood
  - Department of Biostatistics and Bioinformatics, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Zhe Jing
  - Department of Biostatistics and Bioinformatics, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
5.
Shafiei SB, Shadpour S, Mohler JL, Sasangohar F, Gutierrez C, Seilanian Toussi M, Shafqat A. Surgical skill level classification model development using EEG and eye-gaze data and machine learning algorithms. J Robot Surg 2023; 17:2963-2971. PMID: 37864129; PMCID: PMC10678814; DOI: 10.1007/s11701-023-01722-8.
Abstract
The aim of this study was to develop machine learning classification models using electroencephalogram (EEG) and eye-gaze features to predict the level of surgical expertise in robot-assisted surgery (RAS). EEG and eye-gaze data were recorded from 11 participants who performed cystectomy, hysterectomy, and nephrectomy using the da Vinci robot. Skill level was evaluated by an expert RAS surgeon using the modified Global Evaluative Assessment of Robotic Skills (GEARS) tool, and data from three subtasks were extracted to classify skill levels using three classification models: multinomial logistic regression (MLR), random forest (RF), and gradient boosting (GB). The GB algorithm was used with a combination of EEG and eye-gaze data to classify skill levels, and differences between the models were tested using two-sample t-tests. The GB model using EEG features showed the best performance for blunt dissection (83% accuracy), retraction (85% accuracy), and burn dissection (81% accuracy). The combination of EEG and eye-gaze features using the GB algorithm improved the accuracy of skill level classification to 88% for blunt dissection, 93% for retraction, and 86% for burn dissection. The implementation of objective skill classification models in clinical settings may enhance the RAS surgical training process by providing objective feedback about performance to surgeons and their teachers.
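The between-model comparisons above rely on two-sample t-tests over repeated evaluation runs. The t statistic itself (here Welch's form, which does not assume equal variances; the abstract does not specify which variant the authors used) is straightforward to compute. A sketch with invented per-run accuracy samples:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = variance(a), variance(b)          # sample variances
    se = (va / len(a) + vb / len(b)) ** 0.5    # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# Hypothetical per-iteration accuracies for two classifiers
acc_gb  = [0.88, 0.89, 0.87, 0.90, 0.88]
acc_mlr = [0.80, 0.82, 0.79, 0.81, 0.80]
t = welch_t(acc_gb, acc_mlr)
```

A t near zero (as with the 88.49% vs. 88.56% accuracies in entry 1, P = 0.9) means the observed accuracy gap is small relative to run-to-run variability; turning t into a p-value additionally requires the t-distribution CDF, which is omitted here.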
Affiliation(s)
- Somayeh B Shafiei
  - Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Saeed Shadpour
  - Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- James L Mohler
  - Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Farzan Sasangohar
  - Mike and Sugar Barnes Faculty Fellow II, Wm Michael Barnes and Department of Industrial and Systems Engineering at Texas A&M University, College Station, TX, 77843, USA
- Camille Gutierrez
  - Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY, 14214, USA
- Mehdi Seilanian Toussi
  - Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Ambreen Shafqat
  - Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
6.
Next in Surgical Data Science: Autonomous Non-Technical Skill Assessment in Minimally Invasive Surgery Training. J Clin Med 2022; 11:7533. PMID: 36556148; PMCID: PMC9785657; DOI: 10.3390/jcm11247533.
Abstract
Background: It is well understood that surgical skills largely define patient outcomes both in Minimally Invasive Surgery (MIS) and Robot-Assisted MIS (RAMIS). Non-technical surgical skills, including stress and distraction resilience, decision-making, and situation awareness, also contribute significantly. Autonomous, technologically supported objective skill assessment can be an efficient tool to improve patient outcomes without the need to involve expert surgeon reviewers. However, autonomous non-technical skill assessments are unstandardized and open for more research. Recently, Surgical Data Science (SDS) has become able to improve the quality of interventional healthcare with big data and data processing techniques (capture, organization, analysis, and modeling of data). SDS techniques can also help to achieve autonomous non-technical surgical skill assessments. Methods: An MIS training experiment is introduced to autonomously assess non-technical skills and to analyse the workload based on sensory data (video image and force) and a self-rating questionnaire (SURG-TLX). A sensorized surgical skill training phantom and adjacent training workflow were designed to simulate a complicated Laparoscopic Cholecystectomy task: the dissection of the cholecyst's peritoneal layer and the safe clip application on the cystic artery in an uncomfortable environment. A total of 20 training sessions were recorded from 7 subjects (3 non-medicals, 2 residents, 1 expert surgeon and 1 expert MIS surgeon). Workload and learning curves were studied via SURG-TLX. For autonomous non-technical skill assessment, video image data with tracked instruments based on the Channel and Spatial Reliability Tracker (CSRT) and force data were utilized. An autonomous time series classification was achieved by a Fully Convolutional Neural Network (FCN), where the class labels were provided by SURG-TLX. Results: With unpaired t-tests, significant differences were found between the two groups (medical professionals and control) in certain workload components (mental demands, physical demands, and situational stress, p<0.0001, 95% confidence interval; p<0.05 for task complexity). With paired t-tests, the learning curves of the trials were also studied; the task complexity resulted in a significant difference between the first and the second trials. Autonomous non-technical skill classification was based on the FCN, applying the tool trajectories and force data as input. This resulted in a high accuracy (85%) on temporal demands classification based on the z component of the used forces, and 75% accuracy for classifying mental demands/situational stress with the x component of the used forces, validated with Leave One Out Cross-Validation. Conclusions: Non-technical skills and workload components can be classified autonomously based on measured training data. SDS can be effective via automated non-technical skill assessment.
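Leave One Out Cross-Validation, used above to validate the FCN classifier, simply holds out each recorded session in turn and trains on the rest. A minimal index-generator sketch; the session count of 20 matches the study, but everything else here is illustrative.

```python
def leave_one_out(n_samples):
    """Yield (train_indices, held_out_index) pairs, holding out one sample per fold."""
    for held_out in range(n_samples):
        train = [i for i in range(n_samples) if i != held_out]
        yield train, held_out

# One fold per recorded training session (20 sessions in the study)
folds = list(leave_one_out(20))
```

LOOCV is a natural choice for a dataset this small: every session serves once as the test case, at the cost of training the model n times.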
7.
Shafiei SB, Iqbal U, Hussein AA, Guru KA. Utilizing deep neural networks and electroencephalogram for objective evaluation of surgeon's distraction during robot-assisted surgery. Brain Res 2021; 1769:147607. PMID: 34352240; DOI: 10.1016/j.brainres.2021.147607.
Abstract
OBJECTIVE To develop an algorithm for objective evaluation of distraction of surgeons during robot-assisted surgery (RAS). MATERIALS AND METHODS Electroencephalogram (EEG) of 22 medical students was recorded while performing five key tasks on the robotic surgical simulator: Instrument Control, Ball Placement, Spatial Control II, Fourth Arm Tissue Retraction, and Hands-on Surgical Training Tasks. All students completed the Surgery Task Load Index (SURG-TLX), which includes one domain for subjective assessment of distraction (scale: 1-20). Scores were divided into low (score 1-6, subjective label: 1), intermediate (score 7-12, subjective label: 2), and high distraction (score 13-20, subjective label: 3). These cut-off values were arbitrarily considered based on a verbal assessment of participants and experienced surgeons. A Deep Convolutional Neural Network (CNN) algorithm was trained utilizing EEG recordings from the medical students and used to classify their distraction levels. The accuracy of our method was determined by comparing the subjective distraction scores on SURG-TLX and the results from the proposed classification algorithm. Also, Pearson correlation was utilized to assess the relationship between performance scores (generated by the simulator) and distraction (subjective assessment scores). RESULTS The proposed end-to-end model classified distraction into low, intermediate, and high with 94%, 89%, and 95% accuracy, respectively. We found a significant negative correlation (r = -0.21; p = 0.003) between performance and SURG-TLX distraction scores. CONCLUSIONS Herein we report, to our knowledge, the first objective method to assess and quantify distraction while performing robotic surgical tasks on the robotic simulator, which may improve patient safety. Validation in the clinical setting is required.
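The SURG-TLX cut-offs above map the 1-20 distraction score onto three class labels; as a plain-Python sketch of that labeling rule (the cut-offs are from the abstract, the function name is ours):

```python
def distraction_label(score):
    """Map a SURG-TLX distraction score (1-20) to the study's three classes."""
    if not 1 <= score <= 20:
        raise ValueError("SURG-TLX distraction scores range from 1 to 20")
    if score <= 6:
        return 1   # low distraction
    if score <= 12:
        return 2   # intermediate distraction
    return 3       # high distraction
```

These labels are what the CNN is trained to predict from the EEG recordings; as the abstract notes, the cut-offs themselves were chosen heuristically rather than derived from the data.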
Affiliation(s)
- Somayeh B Shafiei
  - Applied Technology Laboratory for Advanced Surgery (ATLAS), Roswell Park Comprehensive Cancer Center, Buffalo, NY, United States
  - Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, United States
- Umar Iqbal
  - Applied Technology Laboratory for Advanced Surgery (ATLAS), Roswell Park Comprehensive Cancer Center, Buffalo, NY, United States
  - Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, United States
- Ahmed A Hussein
  - Applied Technology Laboratory for Advanced Surgery (ATLAS), Roswell Park Comprehensive Cancer Center, Buffalo, NY, United States
  - Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, United States
  - Cairo University, Egypt
- Khurshid A Guru
  - Applied Technology Laboratory for Advanced Surgery (ATLAS), Roswell Park Comprehensive Cancer Center, Buffalo, NY, United States
  - Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, United States
8.
Nagyné Elek R, Haidegger T. Non-Technical Skill Assessment and Mental Load Evaluation in Robot-Assisted Minimally Invasive Surgery. Sensors (Basel) 2021; 21:2666. PMID: 33920087; PMCID: PMC8068868; DOI: 10.3390/s21082666.
Abstract
BACKGROUND: Sensor technologies and data collection practices are changing and improving quality metrics across various domains. Surgical skill assessment in Robot-Assisted Minimally Invasive Surgery (RAMIS) is essential for training and quality assurance. The mental workload on the surgeon (such as time criticality, task complexity, distractions) and non-technical surgical skills (including situational awareness, decision making, stress resilience, communication, leadership) may directly influence the clinical outcome of the surgery. METHODS: A literature search in the PubMed, Scopus, and PsycNet databases was conducted for relevant scientific publications. The standard PRISMA method was followed to filter the search results, including non-technical skill assessment and mental/cognitive load and workload estimation in RAMIS. Publications related to traditional manual Minimally Invasive Surgery were excluded, and usability studies on the surgical tools were not assessed. RESULTS: 50 relevant publications were identified for non-technical skill assessment and mental load and workload estimation in the domain of RAMIS. The identified assessment techniques ranged from self-rating questionnaires and expert ratings to autonomous techniques, citing their most important benefits and disadvantages. CONCLUSIONS: Despite the systematic research, only a limited number of articles were found, indicating that non-technical skill and mental load assessment in RAMIS is not a well-studied area. Workload assessment and soft skill measurement do not yet constitute part of regular clinical training and practice. Meanwhile, the importance of the research domain is clear from the publicly available surgical error statistics. Questionnaires and expert-rating techniques are widely employed in traditional surgical skill assessment; nevertheless, recent technological development in sensors and Internet of Things-type devices shows that skill assessment approaches in RAMIS can be much more profound employing automated solutions. Measurements, and especially big-data-type analysis, may introduce more objectivity and transparency to this critical domain as well. SIGNIFICANCE: Non-technical skill assessment and mental load evaluation in Robot-Assisted Minimally Invasive Surgery is not yet a well-studied area, although its importance from the clinical outcome's point of view is clearly indicated by the available surgical error statistics.
Affiliation(s)
- Renáta Nagyné Elek
  - Antal Bejczy Center for Intelligent Robotics, University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary
  - Doctoral School of Applied Informatics and Applied Mathematics, Óbuda University, 1034 Budapest, Hungary
- Tamás Haidegger
  - Antal Bejczy Center for Intelligent Robotics, University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary
  - John von Neumann Faculty of Informatics, Óbuda University, 1034 Budapest, Hungary
  - Austrian Center for Medical Innovation and Technology, 2700 Wiener Neustadt, Austria
9.
Association between Functional Brain Network Metrics and Surgeon Performance and Distraction in the Operating Room. Brain Sci 2021; 11:468. PMID: 33917719; PMCID: PMC8068138; DOI: 10.3390/brainsci11040468.
Abstract
OBJECTIVE The aim of this work was to examine electroencephalogram (EEG) features that represent dynamic changes in the functional brain network of a surgical trainee and whether these features can be used to evaluate a robot-assisted surgery (RAS) surgeon's performance and distraction level in the operating room. MATERIALS AND METHODS EEG data were collected from three robotic surgeons in an operating room (OR) via a 128-channel EEG headset at a sampling rate of 500 samples/second. Signal processing and network neuroscience algorithms were applied to the data to extract EEG features. The SURG-TLX and NASA-TLX metrics were subjectively evaluated by a surgeon and mentor at the end of each task, and the scores given to performance and distraction metrics were used in the analyses here. Statistical tests were used to select EEG features that have a significant relationship with surgeon performance and distraction while carrying out a RAS surgical task in the OR. RESULTS RAS surgeon performance and distraction were related to the surgeon's functional brain network metrics as recorded throughout OR surgery. We also found a significant negative Pearson correlation between performance and the distraction level (-0.37, p-value < 0.0001). CONCLUSIONS The method proposed in this study has potential for evaluating RAS surgeon performance and the level of distraction. This has possible applications in improving patient safety, surgical mentorship, and training.
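The Pearson correlation reported above (r = -0.37 between performance and distraction) can be computed without any libraries. A minimal sketch with toy scores chosen to be perfectly anti-correlated; these values are illustrative, not study data.

```python
def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # un-normalized covariance
    sx = sum((a - mx) ** 2 for a in x) ** 0.5              # sqrt of sum of squares
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Toy example: performance falls exactly as distraction rises
performance = [9.0, 8.0, 6.0, 4.0, 3.0]
distraction = [2.0, 3.0, 5.0, 7.0, 8.0]
r = pearson_r(performance, distraction)
```

A value of -0.37, as in the study, indicates a moderate (not deterministic) inverse relationship: higher distraction tends to accompany lower performance scores.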
10.
Shafiei SB, Durrani M, Jing Z, Mostowy M, Doherty P, Hussein AA, Elsayed AS, Iqbal U, Guru K. Surgical Hand Gesture Recognition Utilizing Electroencephalogram as Input to the Machine Learning and Network Neuroscience Algorithms. Sensors (Basel) 2021; 21:1733. PMID: 33802372; PMCID: PMC7959280; DOI: 10.3390/s21051733.
Abstract
Surgical gesture detection can provide targeted, automated surgical skill assessment and feedback during surgical training for robot-assisted surgery (RAS). Several sources, including surgical videos, robot tool kinematics, and electromyogram (EMG) signals, have been proposed to reach this goal. We aimed to extract features from electroencephalogram (EEG) data and use them in machine learning algorithms to classify robot-assisted surgical gestures. EEG was collected from five RAS surgeons with varying experience while performing 34 robot-assisted radical prostatectomies over the course of three years. Eight dominant-hand and six non-dominant-hand gesture types were extracted and synchronized with associated EEG data. Network neuroscience algorithms were utilized to extract functional brain network and power spectral density features. Sixty extracted features were used as input to machine learning algorithms to classify gesture types. The analysis of variance (ANOVA) F-value statistical method was used for feature selection, and 10-fold cross-validation was used to validate the proposed method. The proposed feature set used in the extra trees (ET) algorithm classified eight gesture types performed by the dominant hand of five RAS surgeons with an accuracy of 90% (precision 90%, sensitivity 88%), and classified six gesture types performed by the non-dominant hand with an accuracy of 93% (precision 94%, sensitivity 94%).
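The ANOVA F-value feature selection mentioned above ranks each feature by the ratio of between-class to within-class variance across gesture types: features whose values separate the classes well get large F. A minimal one-way ANOVA F sketch with invented feature values (not study data):

```python
from statistics import mean

def anova_f(groups):
    """One-way ANOVA F statistic for a feature, given its values grouped by class."""
    k = len(groups)                              # number of classes
    n = sum(len(g) for g in groups)              # total samples
    grand = sum(sum(g) for g in groups) / n      # grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical EEG feature sampled under three gesture types:
# well-separated class means -> large F; overlapping classes -> small F
f_separated = anova_f([[1.0, 1.1, 0.9], [5.0, 5.1, 4.9], [9.0, 9.1, 8.9]])
f_overlapping = anova_f([[1.0, 5.0, 9.0], [0.9, 5.1, 9.1], [1.1, 4.9, 8.9]])
```

Selecting the features with the largest F values before classification, as done here with the 60 extracted features, discards features whose distributions barely differ between gesture classes.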
Affiliation(s)
- Somayeh B. Shafiei, Mohammad Durrani, Zhe Jing, Michael Mostowy, Philippa Doherty, Ahmed A. Hussein, Ahmed S. Elsayed, Umar Iqbal, Khurshid Guru
- Applied Technology Laboratory for Advanced Surgery (ATLAS), Roswell Park Comprehensive Cancer Center, Buffalo, NY 14203, USA
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY 14203, USA
11
Balkhoyor AM, Awais M, Biyani S, Schaefer A, Craddock M, Jones O, Manogue M, Mon-Williams MA, Mushtaq F. Frontal theta brain activity varies as a function of surgical experience and task error. BMJ Surg Interv Health Technologies 2020; 2:e000040. [PMID: 35047792 PMCID: PMC8749254 DOI: 10.1136/bmjsit-2020-000040] [Received: 04/01/2020] [Revised: 08/19/2020] [Accepted: 09/24/2020]
Abstract
OBJECTIVE Investigations into surgical expertise have focused almost exclusively on overt behavioral characteristics, with little consideration of the underlying neural processes. Recent advances in neuroimaging technologies, for example, wireless, wearable scalp-recorded electroencephalography (EEG), allow insight into the neural processes governing performance. We used scalp-recorded EEG to examine whether surgical expertise and task performance could be differentiated according to an oscillatory brain activity signal known as frontal theta, a putative biomarker for cognitive control processes. DESIGN, SETTING, AND PARTICIPANTS Behavioral and EEG data were acquired from dental surgery trainees with 1 year (n=25) and 4 years (n=20) of experience while they performed low- and high-difficulty drilling tasks on a virtual reality surgical simulator. EEG power in the 4-7 Hz range at frontal electrodes (indexing frontal theta) was examined as a function of experience, task difficulty, and error rate. RESULTS Frontal theta power was greater for novices than for experts (p=0.001) but did not vary with task difficulty (p=0.15), and there was no Experience × Difficulty interaction (p=0.87). Brain-behavior correlations revealed a significant negative relationship between frontal theta and error in the experienced group for the difficult task (r=-0.594, p=0.0058), but no such relationship emerged for novices. CONCLUSION Frontal theta power differentiates levels of surgical experience but correlates with error rates only for experienced surgeons performing difficult tasks. These results provide a novel perspective on the relationship between expertise and surgical performance.
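The frontal theta measure used above, power in the 4-7 Hz band at frontal electrodes, is commonly estimated from a power spectral density. A minimal sketch on synthetic single-channel data follows; the sampling rate, window length, and use of Welch's method are assumptions for illustration, not details taken from the paper.

```python
# Estimate 4-7 Hz (theta) band power from one EEG channel via Welch's PSD.
# The signal here is synthetic noise standing in for a frontal electrode.
import numpy as np
from scipy.signal import welch

fs = 256  # assumed sampling rate in Hz
rng = np.random.default_rng(1)
eeg = rng.normal(size=fs * 10)  # 10 s of synthetic frontal-channel EEG

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # 0.5 Hz frequency resolution
theta_band = (freqs >= 4) & (freqs <= 7)
# Integrate the PSD over the theta band to obtain frontal theta power
theta_power = np.sum(psd[theta_band]) * (freqs[1] - freqs[0])
print(theta_power)
```

In practice this would be computed per trial and per frontal electrode (e.g., Fz) and then averaged or entered into the statistical model as the frontal theta index.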
Affiliation(s)
- Ahmed Mohammed Balkhoyor
- School of Dentistry, University of Leeds, Leeds, UK
- Faculty of Dentistry, Umm Al-Qura University, Makkah, Saudi Arabia
- School of Psychology, University of Leeds, Leeds, UK
- Alexandre Schaefer
- Department of Psychology, Jeffrey Cheah School of Medicine and Health Sciences, Monash University, Selangor, Malaysia
- Matt Craddock
- School of Psychology, Lincoln University, Lincoln, UK
- Olivia Jones
- School of Psychology, University of Leeds, Leeds, UK
12
Nuamah JK, Mantooth W, Karthikeyan R, Mehta RK, Ryu SC. Neural Efficiency of Human-Robotic Feedback Modalities Under Stress Differs With Gender. Front Hum Neurosci 2019; 13:287. [PMID: 31543765 PMCID: PMC6729110 DOI: 10.3389/fnhum.2019.00287] [Received: 05/06/2019] [Accepted: 08/05/2019]
Abstract
Sensory feedback, which can be presented in single or combined modalities, aids task performance in human-robot interaction (HRI). However, combining feedback modalities does not always lead to optimal performance. Indeed, it is not known how feedback modalities affect operator performance under stress. Furthermore, there is limited information on how feedback affects neural processes differently for males and females, and under stress. This is a critical gap in the literature, particularly in the domain of surgical robotics, where surgeons work in challenging socio-technical environments that burden them physiologically. In the present study, we posited operator performance as the summation of task performance and the neurophysiological cost of maintaining that performance. In a within-subject design, we used functional near-infrared spectroscopy to assess cerebral activations of 12 participants who performed a 3D manipulation task within a virtual environment with concurrent feedback (visual and visual + haptic) in the presence and absence of a cognitive stressor. Cognitive stress was induced with the serial-7 subtraction test. We found that while task performance was higher with visual than with visual + haptic feedback, it degraded under stress. The two feedback modalities were associated with varying neural activities and neural efficiencies, and these were stress- and gender-dependent. Our findings motivate further investigation into the effectiveness of feedback modalities for males and females under stressful conditions in HRI.
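A "neural efficiency" score of the kind mentioned above is often computed by combining standardized performance and standardized effort (or neural cost); one widely used form divides their difference by the square root of two. Whether this paper used that exact formula is an assumption here; the sketch below shows only the common construction, with illustrative z-scores.

```python
# Common efficiency-score construction: standardized performance minus
# standardized effort, scaled by sqrt(2). This formula is an assumption
# for illustration, not confirmed as the one used in the paper.
import math

def neural_efficiency(z_performance: float, z_effort: float) -> float:
    """Higher when performance is high relative to the effort spent."""
    return (z_performance - z_effort) / math.sqrt(2)

# Illustrative case: above-average performance at below-average effort
# yields a positive (efficient) score.
print(neural_efficiency(1.0, -1.0))
```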
Affiliation(s)
- Joseph K. Nuamah
- NeuroErgonomics Laboratory, Department of Industrial & Systems Engineering, Texas A&M University, College Station, TX, United States
- Whitney Mantooth
- Department of Environmental and Occupational Health, Texas A&M University, College Station, TX, United States
- Rohith Karthikeyan
- Department of Mechanical Engineering, Texas A&M University, College Station, TX, United States
- Ranjana K. Mehta
- NeuroErgonomics Laboratory, Department of Industrial & Systems Engineering, Texas A&M University, College Station, TX, United States
- Seok Chang Ryu
- Department of Mechanical Engineering, Texas A&M University, College Station, TX, United States
13
Re: Rates and Predictors of Conversion to Open Surgery During Minimally Invasive Radical Cystectomy. Eur Urol 2019; 76:409-410. [PMID: 31085020 DOI: 10.1016/j.eururo.2019.04.021] [Received: 03/05/2019] [Accepted: 04/17/2019]