1
Shafiei SB, Shadpour S, Shafqat A. Mental workload evaluation using weighted phase lag index and coherence features extracted from EEG data. Brain Res Bull 2024; 214:110992. [PMID: 38825253] [DOI: 10.1016/j.brainresbull.2024.110992] [Received: 12/13/2023] [Revised: 04/26/2024] [Accepted: 05/30/2024]
Abstract
Electroencephalography (EEG) is an effective, non-invasive technology for studying mental workload. However, volume conduction, a common EEG artifact, distorts functional connectivity analysis of EEG data. EEG coherence has traditionally been used to investigate functional connectivity between brain areas associated with mental workload, while the weighted Phase Lag Index (wPLI) improves on coherence by reducing susceptibility to volume conduction. The goal of this study was to compare these two functional connectivity measures, wPLI and coherence, in the context of mental workload evaluation. Models for mental workload domains were developed and compared using coherence-based features, wPLI-based features, and a combination of both. A generalized linear mixed-effects model (GLMM) with least absolute shrinkage and selection operator (LASSO) feature selection was used for model development. The model built on the combined feature set outperformed the models that used either feature type alone across all mental workload domains. The R2 values were 0.82 for perceived task complexity, 0.71 for distraction, 0.91 for mental demand, 0.85 for physical demand, 0.74 for situational stress, and 0.74 for temporal demand. Furthermore, task complexity and functional connectivity patterns in several brain areas were identified as significant contributors to perceived mental workload (p < 0.05). These findings demonstrate the potential of EEG data for mental workload evaluation and suggest that combining coherence and wPLI features can improve the accuracy of mental workload domain prediction. Future research should validate these results on larger, more diverse datasets to confirm generalizability and refine the predictive models.
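The contrast between the two connectivity measures can be shown with a short sketch (an illustration of the standard estimators, not the authors' pipeline; the Hilbert-based cross-spectral estimate and the signal shapes are assumptions). Coherence stays high for zero-lag coupling, the signature of volume conduction, whereas wPLI goes to zero there because the imaginary part of the cross-spectrum vanishes at zero phase lag:

```python
import numpy as np
from scipy.signal import hilbert

def coherence_and_wpli(x, y):
    """Estimate magnitude-squared coherence and the weighted Phase Lag Index
    (wPLI, Vinck et al. 2011) between two band-limited signals shaped
    (epochs, samples), using cross-spectral terms from the analytic signal.
    Expectations are taken over epochs and samples."""
    zx, zy = hilbert(x, axis=-1), hilbert(y, axis=-1)
    sxy = zx * np.conj(zy)                       # per-sample cross-spectral terms
    # Coherence: |E[Sxy]|^2 / (E[Sxx] * E[Syy])
    coh = np.abs(sxy.mean()) ** 2 / ((np.abs(zx) ** 2).mean() * (np.abs(zy) ** 2).mean())
    # wPLI: |E[Im(Sxy)]| / E[|Im(Sxy)|]; zero-lag coupling contributes nothing
    im = np.imag(sxy)
    wpli = np.abs(im.mean()) / (np.abs(im).mean() + 1e-12)
    return coh, wpli
```

Feeding it two copies of the same signal (a zero-lag, volume-conduction-like case) yields high coherence but near-zero wPLI, while a genuinely phase-lagged pair yields high values for both.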
Affiliation(s)
- Somayeh B Shafiei
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY 14263, USA
- Saeed Shadpour
- Department of Animal Biosciences, University of Guelph, Guelph, Ontario N1G 2W1, Canada
- Ambreen Shafqat
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY 14263, USA
2
Shafiei SB, Shadpour S, Mohler JL, Rashidi P, Toussi MS, Liu Q, Shafqat A, Gutierrez C. Prediction of Robotic Anastomosis Competency Evaluation (RACE) metrics during vesico-urethral anastomosis using electroencephalography, eye-tracking, and machine learning. Sci Rep 2024; 14:14611. [PMID: 38918593] [PMCID: PMC11199555] [DOI: 10.1038/s41598-024-65648-3] [Received: 06/26/2023] [Accepted: 06/21/2024]
Abstract
Residents learn the vesico-urethral anastomosis (VUA), a key step in robot-assisted radical prostatectomy (RARP), early in their training. VUA assessment and training significantly impact patient outcomes and have high educational value. This study aimed to develop objective prediction models for the Robotic Anastomosis Competency Evaluation (RACE) metrics using electroencephalogram (EEG) and eye-tracking data. Data were recorded from 23 participants performing robot-assisted VUA (henceforth 'anastomosis') on plastic models and animal tissue using the da Vinci surgical robot. EEG and eye-tracking features were extracted, and participants' anastomosis subtask performance was assessed by three raters using the RACE tool and operative videos. Random forest regression (RFR) and gradient boosting regression (GBR) models were developed to predict RACE scores using extracted features, while linear mixed models (LMM) identified associations between features and RACE scores. Overall performance scores significantly differed among inexperienced, competent, and experienced skill levels (P value < 0.0001). For plastic anastomoses, R2 values for predicting unseen test scores were: needle positioning (0.79), needle entry (0.74), needle driving and tissue trauma (0.80), suture placement (0.75), and tissue approximation (0.70). For tissue anastomoses, the values were 0.62, 0.76, 0.65, 0.68, and 0.62, respectively. The models could enhance RARP anastomosis training by offering objective performance feedback to trainees.
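The regression step described above can be sketched as follows (synthetic stand-in data, not the study's EEG/eye-tracking features or RACE scores; the feature count and the linear ground truth are assumptions). Both model families are fit on a training split and scored with R2 on held-out data, mirroring the reported evaluation:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for extracted feature vectors and one RACE subtask score
X = rng.normal(size=(200, 12))
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scores = {}
for model in (RandomForestRegressor(random_state=0), GradientBoostingRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    # R2 on unseen data, as in the paper's test-set evaluation
    scores[type(model).__name__] = r2_score(y_te, model.predict(X_te))
```

With real features, the same loop would simply swap in the recorded EEG/eye-tracking matrix and rater-derived scores.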
Affiliation(s)
- Somayeh B Shafiei
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Elm and Carlton Streets, Buffalo, NY, 14263, USA.
- Saeed Shadpour
- Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- James L Mohler
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Parisa Rashidi
- Department of Biomedical Engineering, University of Florida, Gainesville, FL, 32611, USA
- Mehdi Seilanian Toussi
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Elm and Carlton Streets, Buffalo, NY, 14263, USA
- Qian Liu
- Department of Biostatistics and Bioinformatics, Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA
- Ambreen Shafqat
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Elm and Carlton Streets, Buffalo, NY, 14263, USA
- Camille Gutierrez
- Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY, 14214, USA
3
Li B, Tong L, Zhang C, Chen P, Wang L, Yan B. Prediction of image interpretation cognitive ability under different mental workloads: a task-state fMRI study. Cereb Cortex 2024; 34:bhae100. [PMID: 38494891] [DOI: 10.1093/cercor/bhae100] [Received: 01/03/2024] [Revised: 02/20/2024] [Accepted: 02/21/2024]
Abstract
Visual imaging experts play an important role in multiple fields, and studies have shown that combining functional magnetic resonance imaging with machine learning techniques can predict cognitive abilities, offering a possible method for selecting individuals with excellent image interpretation skills. We recorded behavioral data and neural activity from 64 participants during image interpretation tasks under different workloads. Participants were divided into two groups based on comprehensive image interpretation ability. General linear model analysis showed that, during image interpretation tasks, the high-ability group exhibited higher activation in the middle frontal gyrus (MFG), fusiform gyrus, inferior occipital gyrus, superior parietal gyrus, inferior parietal gyrus, and insula than the low-ability group. A support vector machine (SVM) with a radial basis function kernel showed the best performance in predicting participants' image interpretation abilities (Pearson correlation coefficient = 0.54, R2 = 0.31, MSE = 0.039, RMSE = 0.002). Variable importance analysis indicated that activation features of the fusiform gyrus and MFG played an important role in predicting this ability. Our study revealed the neural basis of image interpretation ability under different mental workloads and demonstrated the efficacy of machine learning algorithms in extracting neural activation features to predict that ability.
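A minimal sketch of the prediction scheme, assuming synthetic stand-ins for the per-participant activation features (e.g. fusiform and MFG betas) and a continuous ability score, neither of which comes from the study. It pairs an RBF-kernel support vector regressor with cross-validated predictions and scores them with a Pearson correlation, matching the reported evaluation metric:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# Hypothetical activation features for 64 participants (6 regions assumed)
X = rng.normal(size=(64, 6))
ability = 0.6 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(scale=0.3, size=64)

# Out-of-fold predictions from an RBF-kernel SVR, then Pearson r vs. truth
pred = cross_val_predict(SVR(kernel="rbf"), X, ability, cv=5)
r, _ = pearsonr(ability, pred)
```

Cross-validated prediction avoids scoring the model on data it was fit to, which is why the study-style Pearson r is computed against out-of-fold predictions rather than in-sample fits.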
Affiliation(s)
- Bao Li
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Science Avenue 62, Zhengzhou, 450001, China
- Li Tong
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Science Avenue 62, Zhengzhou, 450001, China
- Chi Zhang
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Science Avenue 62, Zhengzhou, 450001, China
- Panpan Chen
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Science Avenue 62, Zhengzhou, 450001, China
- Linyuan Wang
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Science Avenue 62, Zhengzhou, 450001, China
- Bin Yan
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Science Avenue 62, Zhengzhou, 450001, China
4
Shafiei SB, Shadpour S, Sasangohar F, Mohler JL, Attwood K, Jing Z. Development of performance and learning rate evaluation models in robot-assisted surgery using electroencephalography and eye-tracking. NPJ Sci Learn 2024; 9:3. [PMID: 38242909] [PMCID: PMC10799032] [DOI: 10.1038/s41539-024-00216-y] [Received: 03/24/2023] [Accepted: 01/08/2024]
Abstract
Existing performance evaluation methods in robot-assisted surgery (RAS) are mainly subjective, costly, and affected by shortcomings such as inconsistent results and dependency on raters' opinions. The aim of this study was to develop models for objective evaluation of performance and of the rate of learning RAS skills while practicing surgical simulator tasks. Electroencephalogram (EEG) and eye-tracking data were recorded from 26 subjects while performing the Tubes, Suture Sponge, and Dots and Needles tasks. Performance scores were generated by the simulator program. Functional brain networks were extracted from the EEG data using coherence analysis. These networks, together with community detection analysis, enabled the extraction of average search information and average temporal flexibility features at 21 Brodmann areas (BA) and four frequency bands. Twelve eye-tracking features were extracted and used to develop linear random-intercept models for performance evaluation and multivariate linear regression models for evaluation of the learning rate. Results showed that subject-wise standardization of features improved the R2 of the models. Average pupil diameter and saccade rate were associated with performance in the Tubes task (multivariate analysis; p = 0.01 and p = 0.04, respectively). Entropy of pupil diameter was associated with performance in the Dots and Needles task (multivariate analysis; p = 0.01). Average temporal flexibility and search information in several BAs and frequency bands were associated with performance and learning rate. The models may be used to objectify performance and learning rate evaluation in RAS once validated with a broader sample size and more tasks.
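The random-intercept modeling with subject-wise standardization can be sketched as below (simulated long-format data, not the study's recordings; the single `pupil` predictor, effect sizes, and group counts are assumptions). Each subject gets its own intercept, and the predictor is z-scored within subject before fitting, mirroring the standardization the study reports as improving R2:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_subj, n_trial = 26, 10
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trial),
    "pupil": rng.normal(size=n_subj * n_trial),  # hypothetical eye-tracking feature
})
# Simulated scores: a fixed pupil effect plus a per-subject offset
subj_eff = rng.normal(scale=0.5, size=n_subj)
df["score"] = 0.7 * df["pupil"] + subj_eff[df["subject"].to_numpy()] \
    + rng.normal(scale=0.3, size=len(df))

# Subject-wise z-scoring of the predictor
df["pupil_z"] = df.groupby("subject")["pupil"].transform(lambda s: (s - s.mean()) / s.std())

# Linear mixed model with a random intercept per subject
m = smf.mixedlm("score ~ pupil_z", df, groups=df["subject"]).fit()
```

The fitted fixed-effect coefficient for `pupil_z` recovers the simulated association while the random intercepts absorb between-subject baseline differences.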
Affiliation(s)
- Somayeh B Shafiei
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA.
- Saeed Shadpour
- Department of Animal Biosciences, University of Guelph, Guelph, Ontario, N1G 2W1, Canada
- Farzan Sasangohar
- Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX, 77843, USA
- James L Mohler
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Kristopher Attwood
- Department of Biostatistics and Bioinformatics, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Zhe Jing
- Department of Biostatistics and Bioinformatics, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
5
Shafiei SB, Shadpour S, Mohler JL, Sasangohar F, Gutierrez C, Seilanian Toussi M, Shafqat A. Surgical skill level classification model development using EEG and eye-gaze data and machine learning algorithms. J Robot Surg 2023; 17:2963-2971. [PMID: 37864129] [PMCID: PMC10678814] [DOI: 10.1007/s11701-023-01722-8] [Received: 05/19/2023] [Accepted: 08/19/2023]
Abstract
The aim of this study was to develop machine learning classification models using electroencephalogram (EEG) and eye-gaze features to predict the level of surgical expertise in robot-assisted surgery (RAS). EEG and eye-gaze data were recorded from 11 participants who performed cystectomy, hysterectomy, and nephrectomy using the da Vinci robot. Skill level was evaluated by an expert RAS surgeon using the modified Global Evaluative Assessment of Robotic Skills (GEARS) tool, and data from three subtasks were extracted to classify skill levels using three classification models-multinomial logistic regression (MLR), random forest (RF), and gradient boosting (GB). The GB algorithm was used with a combination of EEG and eye-gaze data to classify skill levels, and differences between the models were tested using two-sample t tests. The GB model using EEG features showed the best performance for blunt dissection (83% accuracy), retraction (85% accuracy), and burn dissection (81% accuracy). The combination of EEG and eye-gaze features using the GB algorithm improved the accuracy of skill level classification to 88% for blunt dissection, 93% for retraction, and 86% for burn dissection. The implementation of objective skill classification models in clinical settings may enhance the RAS surgical training process by providing objective feedback about performance to surgeons and their teachers.
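The classification step can be illustrated with a short sketch (synthetic stand-in features, not the study's EEG/eye-gaze data; the three-level labeling rule, feature count, and sample size are assumptions) using a gradient boosting classifier over three skill levels, scored by held-out accuracy as in the paper:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Hypothetical stand-ins for combined EEG + eye-gaze feature vectors
X = rng.normal(size=(300, 10))
# Three skill levels (0, 1, 2) derived from a noisy threshold on one feature
skill = np.digitize(X[:, 0] + rng.normal(scale=0.3, size=300), [-0.5, 0.5])

X_tr, X_te, y_tr, y_te = train_test_split(X, skill, stratify=skill, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

Stratifying the split keeps the three class proportions comparable between training and test sets, which matters when skill levels are unevenly represented.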
Affiliation(s)
- Somayeh B Shafiei
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA.
- Saeed Shadpour
- Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- James L Mohler
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Farzan Sasangohar
- Mike and Sugar Barnes Faculty Fellow II, Wm Michael Barnes Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX, 77843, USA
- Camille Gutierrez
- Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY, 14214, USA
- Mehdi Seilanian Toussi
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Ambreen Shafqat
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA