1. Shafiei SB, Shadpour S, Mohler JL, Rashidi P, Toussi MS, Liu Q, Shafqat A, Gutierrez C. Prediction of Robotic Anastomosis Competency Evaluation (RACE) metrics during vesico-urethral anastomosis using electroencephalography, eye-tracking, and machine learning. Sci Rep 2024; 14:14611. PMID: 38918593; PMCID: PMC11199555; DOI: 10.1038/s41598-024-65648-3. Received: 06/26/2023; Accepted: 06/21/2024; Indexed: 06/27/2024. Open access.
Abstract
Residents learn the vesico-urethral anastomosis (VUA), a key step in robot-assisted radical prostatectomy (RARP), early in their training. VUA assessment and training significantly impact patient outcomes and have high educational value. This study aimed to develop objective prediction models for the Robotic Anastomosis Competency Evaluation (RACE) metrics using electroencephalogram (EEG) and eye-tracking data. Data were recorded from 23 participants performing robot-assisted VUA (henceforth 'anastomosis') on plastic models and animal tissue using the da Vinci surgical robot. EEG and eye-tracking features were extracted, and participants' anastomosis subtask performance was assessed by three raters using the RACE tool and operative videos. Random forest regression (RFR) and gradient boosting regression (GBR) models were developed to predict RACE scores using extracted features, while linear mixed models (LMM) identified associations between features and RACE scores. Overall performance scores significantly differed among inexperienced, competent, and experienced skill levels (P value < 0.0001). For plastic anastomoses, R2 values for predicting unseen test scores were: needle positioning (0.79), needle entry (0.74), needle driving and tissue trauma (0.80), suture placement (0.75), and tissue approximation (0.70). For tissue anastomoses, the values were 0.62, 0.76, 0.65, 0.68, and 0.62, respectively. The models could enhance RARP anastomosis training by offering objective performance feedback to trainees.
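A minimal sketch of the modeling setup this abstract describes: random forest and gradient boosting regressors predicting a continuous performance score from pooled features, evaluated by R² on held-out data. All data here are synthetic stand-ins; the actual EEG and eye-tracking features and RACE scores are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n, d = 200, 12                       # illustrative sample and feature counts
X = rng.normal(size=(n, d))          # stand-ins for EEG band powers, fixation metrics, ...
score = 1.5 * X[:, 0] - X[:, 3] + 0.3 * rng.normal(size=n)  # latent "RACE score"

X_tr, X_te, y_tr, y_te = train_test_split(X, score, random_state=0)
for model in (RandomForestRegressor(random_state=0),
              GradientBoostingRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    # report R^2 on unseen test data, as the paper does per subtask
    print(type(model).__name__, round(r2_score(y_te, model.predict(X_te)), 2))
```

With real features the same two estimators would be fit per RACE subtask (needle positioning, needle entry, and so on), one model per score.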
Affiliation(s)
- Somayeh B Shafiei
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Elm and Carlton Streets, Buffalo, NY, 14263, USA
- Saeed Shadpour
- Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- James L Mohler
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Parisa Rashidi
- Department of Biomedical Engineering, University of Florida, Gainesville, FL, 32611, USA
- Mehdi Seilanian Toussi
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Elm and Carlton Streets, Buffalo, NY, 14263, USA
- Qian Liu
- Department of Biostatistics and Bioinformatics, Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA
- Ambreen Shafqat
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Elm and Carlton Streets, Buffalo, NY, 14263, USA
- Camille Gutierrez
- Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY, 14214, USA
2. Li R, Ren C, Zhang S, Yang Y, Zhao Q, Hou K, Yuan W, Zhang X, Hu B. STSNet: a novel spatio-temporal-spectral network for subject-independent EEG-based emotion recognition. Health Inf Sci Syst 2023; 11:25. PMID: 37265664; PMCID: PMC10229500; DOI: 10.1007/s13755-023-00226-x. Received: 11/11/2022; Accepted: 04/28/2023; Indexed: 06/03/2023. Open access.
Abstract
How to use the characteristics of EEG signals to obtain a more complementary and discriminative data representation is a central issue in EEG-based emotion recognition. Many studies have tried spatio-temporal or spatio-spectral feature fusion to obtain higher-level representations of EEG data, but they ignored the complementarity between the spatial, temporal, and spectral domains of EEG signals, limiting the classification ability of the resulting models. This study proposed an end-to-end network based on ManifoldNet and BiLSTM, named STSNet. STSNet first constructed a 4-D spatio-temporal-spectral data representation and a spatio-temporal data representation of the EEG signals in manifold space. These were then fed into the ManifoldNet network and the BiLSTM network, respectively, to compute higher-level features and achieve spatio-temporal-spectral feature fusion. Finally, extensive comparative experiments were performed on two public datasets, DEAP and DREAMER, using a subject-independent leave-one-subject-out cross-validation strategy. On the DEAP dataset, the average accuracies for valence and arousal are 69.38% and 71.88%, respectively; on the DREAMER dataset, they are 78.26% and 82.37%. These results show that the STSNet model has good emotion recognition performance.
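The subject-independent protocol this abstract relies on can be sketched briefly: leave-one-subject-out cross-validation, where every trial from the held-out subject is excluded from training. The data and the plain logistic classifier below are placeholders, not the STSNet architecture itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)
n_subjects, trials, d = 8, 20, 6                    # illustrative sizes
X = rng.normal(size=(n_subjects * trials, d))
y = (X[:, 0] + 0.5 * rng.normal(size=len(X)) > 0).astype(int)  # binary valence label
groups = np.repeat(np.arange(n_subjects), trials)   # subject id attached to each trial

accs = []
for tr, te in LeaveOneGroupOut().split(X, y, groups):
    # the test fold contains all trials of exactly one unseen subject
    clf = LogisticRegression().fit(X[tr], y[tr])
    accs.append(clf.score(X[te], y[te]))
print(f"mean LOSO accuracy over {len(accs)} subjects: {np.mean(accs):.2f}")
```

Averaging the per-subject fold accuracies gives the kind of subject-independent figure the paper reports for DEAP and DREAMER.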
Affiliation(s)
- Rui Li, Chao Ren, Sipo Zhang, Yikun Yang, Qiqi Zhao, Kechen Hou, Wenjie Yuan, Xiaowei Zhang, Bin Hu
- Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000, Gansu, China
3. Shafiei SB, Shadpour S, Mohler JL, Sasangohar F, Gutierrez C, Seilanian Toussi M, Shafqat A. Surgical skill level classification model development using EEG and eye-gaze data and machine learning algorithms. J Robot Surg 2023; 17:2963-2971. PMID: 37864129; PMCID: PMC10678814; DOI: 10.1007/s11701-023-01722-8. Received: 05/19/2023; Accepted: 08/19/2023; Indexed: 10/22/2023.
Abstract
The aim of this study was to develop machine learning classification models using electroencephalogram (EEG) and eye-gaze features to predict the level of surgical expertise in robot-assisted surgery (RAS). EEG and eye-gaze data were recorded from 11 participants who performed cystectomy, hysterectomy, and nephrectomy using the da Vinci robot. Skill level was evaluated by an expert RAS surgeon using the modified Global Evaluative Assessment of Robotic Skills (GEARS) tool, and data from three subtasks were extracted to classify skill levels using three classification models: multinomial logistic regression (MLR), random forest (RF), and gradient boosting (GB). The GB algorithm was also applied to a combination of EEG and eye-gaze data, and differences between the models were tested using two-sample t tests. The GB model using EEG features alone showed the best performance for blunt dissection (83% accuracy), retraction (85% accuracy), and burn dissection (81% accuracy). Combining EEG and eye-gaze features with the GB algorithm improved classification accuracy to 88% for blunt dissection, 93% for retraction, and 86% for burn dissection. Implementing objective skill classification models in clinical settings may enhance the RAS surgical training process by providing objective performance feedback to surgeons and their teachers.
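A hedged sketch of the feature-fusion step this abstract describes: EEG and eye-gaze feature matrices concatenated column-wise before fitting a gradient boosting classifier over three skill levels. All arrays are synthetic; the real feature definitions and GEARS labels are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 150
skill = rng.integers(0, 3, size=n)                    # 0/1/2 skill-level labels
eeg = rng.normal(size=(n, 8)) + 0.8 * skill[:, None]  # EEG features carrying signal
gaze = rng.normal(size=(n, 4)) + 0.4 * skill[:, None] # weaker eye-gaze signal

fused = np.hstack([eeg, gaze])                        # EEG + eye-gaze fusion
acc_eeg = cross_val_score(GradientBoostingClassifier(random_state=0),
                          eeg, skill, cv=3).mean()
acc_fused = cross_val_score(GradientBoostingClassifier(random_state=0),
                            fused, skill, cv=3).mean()
print(f"EEG only: {acc_eeg:.2f}, EEG + gaze: {acc_fused:.2f}")
```

The paper's accuracy gains from fusion (e.g. 85% to 93% for retraction) correspond to the `acc_eeg` versus `acc_fused` comparison in this sketch.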
Affiliation(s)
- Somayeh B Shafiei
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Saeed Shadpour
- Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- James L Mohler
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Farzan Sasangohar
- Mike and Sugar Barnes Faculty Fellow II, Wm Michael Barnes '64 Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX, 77843, USA
- Camille Gutierrez
- Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY, 14214, USA
- Mehdi Seilanian Toussi
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Ambreen Shafqat
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
4. Wang Z, Hou S, Xiao T, Zhang Y, Lv H, Li J, Zhao S, Zhao Y. Lightweight Seizure Detection Based on Multi-Scale Channel Attention. Int J Neural Syst 2023; 33:2350061. PMID: 37845193; DOI: 10.1142/s0129065723500612. Indexed: 10/18/2023.
Abstract
Epilepsy is a neurological disease characterized by recurring seizures, which can cause ongoing mental and cognitive damage to the patient; timely diagnosis and treatment are therefore crucial. Because manual analysis of electroencephalography (EEG) signals is time- and energy-consuming, automatic detection from EEG signals is particularly important, and many deep learning algorithms have been proposed to detect seizures. These methods, however, place high demands on computational resources and rely on expensive, bulky hardware, making them unsuitable for deployment on resource-limited devices. In this paper, we propose a novel lightweight neural network for seizure detection built from pure convolutions, composed of an inverted residual structure and a multi-scale channel attention mechanism. Compared with other methods, our approach significantly reduces computational complexity, making it possible to deploy on low-cost portable devices for seizure detection. Experiments on the CHB-MIT dataset achieve 98.7% accuracy, 98.3% sensitivity, and 99.1% specificity with 2.68 M multiply-accumulate operations (MACs) and only 88 K parameters.
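A back-of-envelope illustration of why the inverted-residual, pure-convolution design stays small: a depthwise-separable convolution replaces one dense kernel of size C_in x C_out x k with a per-channel depthwise pass plus a 1x1 pointwise pass. The channel counts and kernel size below are arbitrary assumptions, not the paper's configuration.

```python
def conv1d_params(c_in, c_out, k):
    # standard 1-D convolution: one k-tap filter per (input, output) channel pair
    return c_in * c_out * k

def separable_params(c_in, c_out, k):
    # depthwise (k taps per input channel) + pointwise 1x1 projection
    return c_in * k + c_in * c_out

c_in, c_out, k = 64, 128, 7          # hypothetical layer shape
std = conv1d_params(c_in, c_out, k)  # 57344 weights
sep = separable_params(c_in, c_out, k)  # 8640 weights
print(std, sep, f"{std / sep:.1f}x fewer parameters")
```

Stacking such factorized layers is what makes parameter budgets on the order of the paper's 88 K achievable.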
Affiliation(s)
- Ziwei Wang, Sujuan Hou, Tiantian Xiao, Yongfeng Zhang, Hongbin Lv, Jiacheng Li, Yanna Zhao
- School of Information Science and Engineering, Shandong Normal University, Jinan 250358, P. R. China
- Shanshan Zhao
- Department of Hematology, Heze Hospital of Traditional Chinese Medicine, Heze 274000, P. R. China
5. Shafiei SB, Shadpour S, Intes X, Rahul R, Toussi MS, Shafqat A. Performance and learning rate prediction models development in FLS and RAS surgical tasks using electroencephalogram and eye gaze data and machine learning. Surg Endosc 2023; 37:8447-8463. PMID: 37730852; PMCID: PMC10615961; DOI: 10.1007/s00464-023-10409-y. Received: 05/23/2023; Accepted: 08/14/2023; Indexed: 09/22/2023.
Abstract
OBJECTIVE: This study explored the use of electroencephalogram (EEG) and eye gaze features, experience-related features, and machine learning to evaluate performance and learning rates in fundamentals of laparoscopic surgery (FLS) and robotic-assisted surgery (RAS).
METHODS: EEG and eye-tracking data were collected from 25 participants performing three FLS tasks and 22 participants performing two RAS tasks. Generalized linear mixed models, using L1-penalized estimation, were developed to objectify performance evaluation using EEG and eye gaze features, and linear models were developed to objectify learning rate evaluation using these features and performance scores at the first attempt. Experience metrics were added to evaluate their role in learning robotic surgery. Differences in performance across experience levels were tested using analysis of variance.
RESULTS: EEG and eye gaze features and experience-related features were predictive of performance in FLS and RAS tasks. Residents outperformed faculty in FLS peg transfer (p value = 0.04), while faculty and residents both excelled over pre-medical students in the FLS pattern cut (p value = 0.01 and p value < 0.001, respectively). Fellows outperformed pre-medical students in FLS suturing (p value = 0.01). In RAS tasks, both faculty and fellows surpassed pre-medical students (RAS pattern cut: p value = 0.001 for faculty and 0.003 for fellows; RAS tissue dissection: p value < 0.001 for both groups), and residents also showed superior skill in tissue dissection (p value = 0.03).
CONCLUSION: Findings could be used to develop training interventions for improving surgical skills and have implications for understanding motor learning and designing interventions to enhance learning outcomes.
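The L1-penalized estimation mentioned in the methods can be sketched with a lasso-style linear model, which drives uninformative coefficients to exactly zero and leaves a sparse set of predictive features. Synthetic data below; sklearn's `Lasso` stands in for the penalized mixed-model machinery the paper actually uses.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, d = 120, 10
X = rng.normal(size=(n, d))          # stand-ins for EEG / eye gaze / experience features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.2 * rng.normal(size=n)  # only 2 features matter

model = Lasso(alpha=0.1).fit(X, y)   # L1 penalty shrinks noise coefficients to zero
selected = np.flatnonzero(model.coef_)  # indices of the surviving features
print("non-zero coefficients at features:", selected)
```

Inspecting which coefficients survive is the sketch's analogue of identifying which EEG and eye gaze features matter for performance and learning rate.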
Affiliation(s)
- Somayeh B Shafiei
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Saeed Shadpour
- Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- Xavier Intes
- Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY, 12180, USA
- Rahul Rahul
- Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY, 12180, USA
- Mehdi Seilanian Toussi
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Ambreen Shafqat
- Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
6. Sweet T, Thompson DE. Applying Big Transfer-based classifiers to the DEAP dataset. 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) 2022; 2022:406-409. PMID: 36086186; PMCID: PMC10100746; DOI: 10.1109/embc48229.2022.9871388. Indexed: 11/06/2022.
Abstract
Affective brain-computer interfaces are a fast-growing area of research, and accurate estimation of emotional states from physiological signals is of great interest to the fields of psychology and human-computer interaction. The DEAP dataset is one of the most popular datasets for emotion classification. In this study, we generated heat maps from spectral data within the neurological signals found in the DEAP dataset. To account for the class imbalance within this dataset, we then discarded images belonging to the larger class. We used these images to fine-tune several Big Transfer neural networks for binary classification of arousal, valence, and dominance affective states. Our best classifier achieved greater than 98% accuracy and 99.0% balanced accuracy in all three classification tasks. We also investigated the effects of this balancing method on our classifiers.
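The balancing strategy described above can be sketched simply: randomly discard samples from the majority class until the classes match, then score with balanced accuracy. The labels below are synthetic stand-ins for DEAP arousal/valence classes, not the dataset itself.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(4)
y = np.array([0] * 70 + [1] * 30)          # imbalanced binary labels (70 vs 30)
minority = min(np.bincount(y))
keep = np.concatenate([rng.choice(np.flatnonzero(y == c), minority, replace=False)
                       for c in (0, 1)])   # undersample each class to minority size
y_bal = y[keep]                            # now 30 of each class

# once classes are balanced, balanced accuracy coincides with plain accuracy
y_pred = rng.integers(0, 2, size=len(y_bal))  # dummy predictions for illustration
print(np.bincount(y_bal), balanced_accuracy_score(y_bal, y_pred))
```

Discarding data is the simplest balancing method; the paper's closing remark about investigating its effects reflects the trade-off of throwing away majority-class examples.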
Affiliation(s)
- Taylor Sweet
- Kansas State University, Department of Electrical and Computer Engineering, Manhattan, KS, 66506
- David E. Thompson
- Kansas State University, Department of Electrical and Computer Engineering, Manhattan, KS, 66506