1. Tanin U, Duimering A, Law C, Ruzicki J, Luna G, Holden M. Performance evaluation in cataract surgery with an ensemble of 2D-3D convolutional neural networks. Healthc Technol Lett 2024; 11:189-195. [PMID: 38638495] [PMCID: PMC11022224] [DOI: 10.1049/htl2.12078]
Abstract
An important part of surgical training in ophthalmology is learning to proficiently perform cataract surgery. Operating skill in cataract surgery is typically assessed by real-time or video-based expert review using a rating scale, which is time-consuming, subjective, and labour-intensive. A typical trainee graduates with over 100 complete surgeries, each of which requires review by the surgical educators. The consistently repetitive nature of this task lends itself well to machine learning-based evaluation. Recent studies utilize deep learning models trained on tool motion trajectories obtained using additional equipment or robotic systems; however, the pipeline of extracting frames from videos for tool recognition, performing phase recognition, and then assessing skill is laborious. This project proposes a deep learning model for skill evaluation using raw surgery videos that is cost-effective and end-to-end trainable. An ensemble of convolutional neural network models is leveraged to model technical skill in cataract surgeries and is evaluated on a large dataset comprising almost 200 surgical trials. The highest accuracy, 0.8494, is observed on the phacoemulsification step data. The model yielded an average accuracy of 0.8200 and an average AUC of 0.8800 across all four phase datasets of cataract surgery, demonstrating robustness across different data. The proposed ensemble of 2D and 3D convolutional neural networks achieved promising results without using tool motion trajectories to evaluate surgical expertise.
Affiliation(s)
- Ummey Tanin
- School of Computer Science, Carleton University, Ottawa, Ontario, Canada
- Adrienne Duimering
- Department of Ophthalmology, School of Medicine, Queen's University, Kingston, Ontario, Canada
- Christine Law
- Department of Ophthalmology, School of Medicine, Queen's University, Kingston, Ontario, Canada
- Jessica Ruzicki
- Department of Ophthalmology, School of Medicine, Queen's University, Kingston, Ontario, Canada
- Gabriela Luna
- Department of Ophthalmology, School of Medicine, Queen's University, Kingston, Ontario, Canada
- Matthew Holden
- School of Computer Science, Carleton University, Ottawa, Ontario, Canada
2. Chou R, Naz H, Boahene KDO, Maxwell JH, Wanamaker JR, Byrne PJ, Papel ID, Kontis TC, Hager GD, Ishii LE, Malekzadeh S, Vedula SS, Ishii M. Correcting for Rater Effects in Operating Room Surgical Skills Assessment. Laryngoscope 2024. [PMID: 38470307] [DOI: 10.1002/lary.31391]
Abstract
OBJECTIVE To estimate and adjust for rater effects in operating room surgical skills assessment performed using a structured rating scale for nasal septoplasty. METHODS We analyzed survey responses from attending surgeons (raters) who supervised residents and fellows (trainees) performing nasal septoplasty in a prospective cohort study. We fit a structural equation model with the rubric item scores regressed on a latent component of skill, then fit a second model including the rating surgeon as a random effect to model rater-effects-adjusted latent surgical skill. We validated this model against conventional measures, including the level of expertise and post-graduation year (PGY) commensurate with the trainee's performance, the actual PGY of the trainee, and whether the surgical goals were achieved. RESULTS Our dataset included 188 assessments by 7 raters of 41 trainees. The model with one latent construct for surgical skill and the rater as a random effect fit best. Rubric scores depended on how severe or lenient the rater was, sometimes almost as much as on trainee skill. Rater-adjusted latent skill scores increased with attending-estimated skill levels and with the actual PGY of trainees, and appeared constant over different levels of achievement of surgical goals. CONCLUSION Our work provides a method to obtain rater-effects-adjusted surgical skill assessments in the operating room using structured rating scales. It allows the creation of standardized (i.e., rater-effects-adjusted) quantitative surgical skill benchmarks using national-level databases of trainee assessments. LEVEL OF EVIDENCE N/A.
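The rater-severity problem the study addresses can be sketched numerically. The following is a minimal illustration, not the paper's structural equation model: simulated scores mix a latent true skill with a per-rater bias, and a crude per-rater mean-centering stands in for the rater random effect. All values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 raters with different severities score a pool of
# trainees whose true skill lives on an arbitrary latent scale.
n_trainees = 300
true_skill = rng.normal(0.0, 1.0, n_trainees)
rater_bias = np.array([-0.8, 0.0, 0.9])  # severe, neutral, lenient

# Each trainee is scored once by a randomly assigned rater.
rater = rng.integers(0, 3, n_trainees)
score = true_skill + rater_bias[rater] + rng.normal(0.0, 0.2, n_trainees)

# Naive estimate: the raw score (confounds skill with rater severity).
# Adjusted estimate: remove each rater's mean offset -- a crude stand-in
# for the paper's rater random effect.
adjusted = score.copy()
for r in range(3):
    mask = rater == r
    adjusted[mask] -= score[mask].mean()

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

print(f"raw score vs. true skill:   r = {corr(score, true_skill):.3f}")
print(f"adjusted  vs. true skill:   r = {corr(adjusted, true_skill):.3f}")
```

With the biases above, the adjusted estimate tracks true skill more closely than the raw score, which is the effect the paper's random-effect model captures in a statistically principled way.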
Affiliation(s)
- Ryan Chou
- Department of Biomedical Engineering, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, U.S.A
- Hajira Naz
- Dugoni School of Dentistry, University of the Pacific, San Francisco, California, U.S.A
- Kofi D O Boahene
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, U.S.A
- Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, U.S.A
- Jessica H Maxwell
- Department of Otolaryngology-Head and Neck Surgery, MedStar Georgetown University Hospital, Washington, DC, U.S.A
- ENT Section, Veterans Affairs Medical Center, Washington, DC, U.S.A
- John R Wanamaker
- Department of Otolaryngology-Head and Neck Surgery, MedStar Georgetown University Hospital, Washington, DC, U.S.A
- ENT Section, Veterans Affairs Medical Center, Washington, DC, U.S.A
- Patrick J Byrne
- Head and Neck Institute, Cleveland Clinic, Cleveland, Ohio, U.S.A
- Ira D Papel
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, U.S.A
- Aesthetic Center at Woodholme, Baltimore, Maryland, U.S.A
- Theda C Kontis
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, U.S.A
- Aesthetic Center at Woodholme, Baltimore, Maryland, U.S.A
- Gregory D Hager
- Malone Center for Engineering in Healthcare, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, U.S.A
- Department of Computer Science, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, U.S.A
- Lisa E Ishii
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, U.S.A
- Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, U.S.A
- Sonya Malekzadeh
- Department of Otolaryngology-Head and Neck Surgery, MedStar Georgetown University Hospital, Washington, DC, U.S.A
- ENT Section, Veterans Affairs Medical Center, Washington, DC, U.S.A
- S Swaroop Vedula
- Malone Center for Engineering in Healthcare, Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, U.S.A
- Masaru Ishii
- Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, U.S.A
3. Pan M, Wang S, Li J, Li J, Yang X, Liang K. An Automated Skill Assessment Framework Based on Visual Motion Signals and a Deep Neural Network in Robot-Assisted Minimally Invasive Surgery. Sensors (Basel) 2023; 23:4496. [PMID: 37177699] [PMCID: PMC10181496] [DOI: 10.3390/s23094496]
Abstract
Surgical skill assessment can quantify the quality of a surgical operation from the motion state of the surgical instrument tip (SIT) and is considered one of the primary effective means of improving the accuracy of surgical operation. Traditional methods have shown promising results in skill assessment; however, this success is predicated on SIT sensors, making these approaches impractical for minimally invasive surgical robots with such tiny end-effectors. To address the assessment of operation quality in robot-assisted minimally invasive surgery (RAMIS), this paper proposes a new automatic framework for assessing surgical skills based on visual motion tracking and deep learning. The method innovatively combines vision and kinematics: the kernel correlation filter (KCF) is introduced to obtain the key motion signals of the SIT, which are then classified using a residual neural network (ResNet), realizing automated skill assessment in RAMIS. To verify its effectiveness and accuracy, the proposed method is applied to the public minimally invasive surgical robot dataset JIGSAWS. The results show that the method, based on visual motion tracking and a deep neural network model, can effectively and accurately assess robot-assisted surgical skill in near real-time: within a computational processing time of 3 to 5 s, the average assessment accuracy is 92.04% and 84.80% in distinguishing two and three skill levels, respectively. This study contributes to the safe and high-quality development of RAMIS.
Affiliation(s)
- Mingzhang Pan
- College of Mechanical Engineering, Guangxi University, Nanning 530004, China
- State Key Laboratory for Conservation and Utilization of Subtropical Agro-Bioresources, Nanning 530004, China
- Shuo Wang
- College of Mechanical Engineering, Guangxi University, Nanning 530004, China
- Jingao Li
- College of Mechanical Engineering, Guangxi University, Nanning 530004, China
- Jing Li
- College of Mechanical Engineering, Guangxi University, Nanning 530004, China
- Xiuze Yang
- College of Mechanical Engineering, Guangxi University, Nanning 530004, China
- Ke Liang
- College of Mechanical Engineering, Guangxi University, Nanning 530004, China
- Guangxi Key Laboratory of Manufacturing System & Advanced Manufacturing Technology, School of Mechanical Engineering, Guangxi University, Nanning 530004, China
4. Aghazadeh F, Zheng B, Tavakoli M, Rouhani H. Motion Smoothness-Based Assessment of Surgical Expertise: The Importance of Selecting Proper Metrics. Sensors (Basel) 2023; 23:3146. [PMID: 36991855] [PMCID: PMC10057623] [DOI: 10.3390/s23063146]
Abstract
The smooth movement of the hands and surgical instruments is considered an indicator of skilled, coordinated surgical performance, while jerky instrument movements or hand tremors can cause unwanted damage to the surgical site. Previous studies have used different methods for assessing motion smoothness, producing conflicting results when comparing surgical skill levels. We recruited four attending surgeons, five surgical residents, and nine novices. The participants performed three simulated laparoscopic tasks: peg transfer, bimanual peg transfer, and rubber band translocation. Tooltip motion smoothness was computed using the mean tooltip motion jerk, the logarithmic dimensionless tooltip motion jerk, and the 95% tooltip motion frequency (originally proposed in this study) to evaluate their ability to differentiate surgical skill levels. The results revealed that logarithmic dimensionless motion jerk and 95% motion frequency could distinguish skill levels, with smoother tooltip movements observed at high skill levels than at low ones; mean motion jerk could not. Additionally, 95% motion frequency was less affected by measurement noise, since it does not require calculating motion jerk. Overall, 95% motion frequency and logarithmic dimensionless motion jerk distinguished skill levels better than mean motion jerk.
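The three smoothness metrics compared in the study can be sketched as follows. This is a minimal numpy illustration under assumed definitions: the standard logarithmic dimensionless jerk, and a cumulative-spectral-power reading of the "95% motion frequency". The sampled trajectories are invented, not the study's data.

```python
import numpy as np

def mean_jerk(pos, dt):
    """Mean magnitude of the third derivative of position."""
    jerk = np.diff(pos, n=3, axis=0) / dt**3
    return float(np.mean(np.linalg.norm(jerk, axis=1)))

def log_dimensionless_jerk(pos, dt):
    """Negative log of squared jerk, normalised by duration and peak speed
    so the value is scale-free; higher (less negative) means smoother."""
    vel = np.diff(pos, axis=0) / dt
    jerk = np.diff(pos, n=3, axis=0) / dt**3
    duration = dt * (len(pos) - 1)
    peak_speed = np.max(np.linalg.norm(vel, axis=1))
    integral = np.sum(np.linalg.norm(jerk, axis=1) ** 2) * dt
    return float(-np.log(duration**3 / peak_speed**2 * integral))

def freq_95(signal, dt):
    """Frequency below which 95% of the signal's spectral power lies
    (sketched here for a 1-D speed signal)."""
    power = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(len(signal), dt)
    cum = np.cumsum(power) / np.sum(power)
    return float(freqs[np.searchsorted(cum, 0.95)])

# Hypothetical tooltip trajectories sampled at 100 Hz: a smooth arc vs.
# the same arc with superimposed 12 Hz tremor.
dt = 0.01
t = np.arange(0, 5, dt)
smooth = np.stack([np.sin(t), np.cos(0.5 * t), 0.1 * t], axis=1)
tremor = smooth + 0.005 * np.sin(2 * np.pi * 12 * t)[:, None]

def speed(p):
    return np.linalg.norm(np.diff(p, axis=0) / dt, axis=1)

print("mean jerk  smooth/tremor:", mean_jerk(smooth, dt), mean_jerk(tremor, dt))
print("LDLJ       smooth/tremor:", log_dimensionless_jerk(smooth, dt),
      log_dimensionless_jerk(tremor, dt))
print("f95 (Hz)   smooth/tremor:", freq_95(speed(smooth), dt),
      freq_95(speed(tremor), dt))
```

On these synthetic signals the tremulous trajectory shows a much more negative logarithmic dimensionless jerk and a far higher 95% motion frequency, mirroring the direction of the study's group differences.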
Affiliation(s)
- Farzad Aghazadeh
- Department of Mechanical Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
- Bin Zheng
- Department of Surgery, University of Alberta, Edmonton, AB T6G 2B7, Canada
- Mahdi Tavakoli
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Hossein Rouhani
- Department of Mechanical Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
5. Berges AJ, Vedula SS, Chara A, Hager GD, Ishii M, Malpani A. Eye Tracking and Motion Data Predict Endoscopic Sinus Surgery Skill. Laryngoscope 2023; 133:500-505. [PMID: 35357011] [PMCID: PMC9825109] [DOI: 10.1002/lary.30121]
Abstract
OBJECTIVE Endoscopic surgery has a considerable learning curve due to dissociation of the visual-motor axes, coupled with decreased tactile feedback and mobility. In particular, endoscopic sinus surgery (ESS) lacks objective skill assessment metrics that can provide specific feedback to trainees. This study aims to identify summary metrics from eye tracking, endoscope motion, and tool motion to objectively assess surgeons' ESS skill. METHODS In this cross-sectional study, expert and novice surgeons performed ESS tasks of inserting an endoscope and tool into a cadaveric nose, touching an anatomical landmark, and withdrawing the endoscope and tool out of the nose. Tool and endoscope motion were collected using an electromagnetic tracker, and eye gaze was tracked using an infrared camera. Three expert surgeons provided binary assessments of low/high skill. Twenty summary statistics were calculated for eye, tool, and endoscope motion and used in logistic regression models to predict surgical skill. RESULTS Fourteen metrics (10 eye gaze, 2 tool motion, and 2 endoscope motion) differed significantly between surgeons with low and high skill. Models predicting skill for 6 of 9 ESS tasks had an AUC >0.95. A combined model over all tasks (AUC 0.95, PPV 0.93, NPV 0.89) included metrics from eye tracking data and endoscope motion, indicating that these metrics transfer across tasks. CONCLUSIONS Eye gaze, endoscope, and tool motion data can provide an objective and accurate measurement of ESS surgical performance. Incorporating these algorithmic techniques intraoperatively could allow automated skill assessment for trainees learning endoscopic surgery. LEVEL OF EVIDENCE N/A.
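An AUC of the kind reported above can be computed directly from a model's predicted scores. The sketch below uses the pairwise-comparison (Mann-Whitney) identity: AUC is the probability that a randomly chosen high-skill case outscores a randomly chosen low-skill case. The labels and scores are invented for illustration, not the study's data.

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via pairwise comparisons:
    fraction of (positive, negative) pairs where the positive case
    receives the higher score, counting ties as half a win."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical predicted probabilities of "high skill" for eight task
# performances, with expert-rated binary labels.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.92, 0.85, 0.70, 0.55, 0.60, 0.40, 0.30, 0.10]
print("AUC =", auc(labels, scores))  # 0.9375: one swapped pair out of 16
```

A perfectly separating model scores 1.0; chance-level scoring gives 0.5, which is why thresholds like the paper's AUC >0.95 indicate near-perfect discrimination.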
Affiliation(s)
- Masaru Ishii
- Johns Hopkins Department of Otolaryngology–Head and Neck Surgery
6. Nagyné Elek R, Haidegger T. Next in Surgical Data Science: Autonomous Non-Technical Skill Assessment in Minimally Invasive Surgery Training. J Clin Med 2022; 11. [PMID: 36556148] [DOI: 10.3390/jcm11247533]
Abstract
Background: It is well understood that surgical skills largely define patient outcomes in both Minimally Invasive Surgery (MIS) and Robot-Assisted MIS (RAMIS). Non-technical surgical skills, including stress and distraction resilience, decision-making, and situation awareness, also contribute significantly. Autonomous, technologically supported objective skill assessment can be an efficient tool to improve patient outcomes without involving expert surgeon reviewers. However, autonomous non-technical skill assessment remains unstandardized and open to further research. Recently, Surgical Data Science (SDS) has become able to improve the quality of interventional healthcare with big data and data processing techniques (capture, organization, analysis and modeling of data), and SDS techniques can help achieve autonomous non-technical surgical skill assessment. Methods: An MIS training experiment is introduced to autonomously assess non-technical skills and analyse workload based on sensory data (video image and force) and a self-rating questionnaire (SURG-TLX). A sensorized surgical skill training phantom and an adjacent training workflow were designed to simulate a complicated Laparoscopic Cholecystectomy task: dissection of the cholecyst's peritoneal layer and safe clip application on the cystic artery in an uncomfortable environment. A total of 20 training sessions were recorded from 7 subjects (3 non-medical participants, 2 residents, 1 expert surgeon and 1 expert MIS surgeon). Workload and learning curves were studied via SURG-TLX. For autonomous non-technical skill assessment, force data and video image data with instruments tracked by a Channel and Spatial Reliability Tracker (CSRT) were utilized. Autonomous time series classification was achieved by a Fully Convolutional Neural Network (FCN), with class labels provided by SURG-TLX.
Results: With unpaired t-tests, significant differences were found between the two groups (medical professionals and control) in certain workload components (mental demands, physical demands and situational stress, p<0.0001 at the 95% confidence level; task complexity, p<0.05). With paired t-tests, the learning curves across trials were also studied; task complexity differed significantly between the first and second trials. Autonomous non-technical skill classification was based on the FCN, applying the tool trajectories and force data as input. This yielded high accuracy (85%) for classifying temporal demands based on the z component of the applied forces, and 75% accuracy for classifying mental demands/situational stress with the x component, validated with Leave-One-Out Cross-Validation. Conclusions: Non-technical skills and workload components can be classified autonomously from measured training data, and SDS can be effective via automated non-technical skill assessment.
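The Leave-One-Out Cross-Validation used above to validate the FCN can be sketched generically: train on all samples but one, test on the held-out sample, repeat for every sample, and average. The sketch below substitutes a simple nearest-centroid classifier for the FCN, and the feature vectors are simulated, not the study's force data.

```python
import numpy as np

def loocv_accuracy(X, y, fit, predict):
    """Leave-One-Out Cross-Validation: every sample is held out once
    while the classifier is fit on the remaining samples."""
    hits = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        model = fit(X[keep], y[keep])
        hits += predict(model, X[i]) == y[i]
    return hits / len(y)

# Stand-in classifier: one centroid per class, assign to the nearest.
def fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, x):
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))

# Hypothetical 3-D force-feature vectors for two workload classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (10, 3)), rng.normal(2.0, 0.5, (10, 3))])
y = np.array([0] * 10 + [1] * 10)
print("LOOCV accuracy:", loocv_accuracy(X, y, fit, predict))
```

LOOCV is attractive for small datasets like the 20 sessions here, since every recording serves as both training and test data without ever leaking into its own fold.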
7. Lajkó G, Nagyné Elek R, Haidegger T. Endoscopic Image-Based Skill Assessment in Robot-Assisted Minimally Invasive Surgery. Sensors (Basel) 2021; 21:5412. [PMID: 34450854] [PMCID: PMC8398563] [DOI: 10.3390/s21165412]
Abstract
Objective skill assessment-based personal performance feedback is a vital part of surgical training. Either kinematic data (acquired through surgical robotic systems, sensors mounted on tooltips, or wearable sensors) or visual input data can be employed for objective, algorithm-driven skill assessment. Kinematic data have been successfully linked with the expertise of surgeons performing Robot-Assisted Minimally Invasive Surgery (RAMIS) procedures, but for traditional, manual Minimally Invasive Surgery (MIS) they are not readily available. Evaluation methods based on 3D visual features tend to outperform 2D methods, but their utility is limited and not suited to MIS training; the proposed solution therefore relies on 2D features, and additional sensors could potentially enhance either approach. This paper introduces a general 2D image-based solution that enables the creation and application of surgical skill assessment in any training environment. The 2D features were processed using the feature extraction techniques of a previously published benchmark to assess the attainable accuracy. We relied on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), co-developed by Johns Hopkins University and Intuitive Surgical Inc.; using this well-established dataset allows a comparative evaluation of different feature extraction techniques. The algorithm reached up to 95.74% accuracy in individual trials. The highest mean accuracy, averaged over five cross-validation trials, was 83.54% for the surgical subtask of Knot-Tying, 84.23% for Needle-Passing, and 81.58% for Suturing. The proposed method measured well against the state of the art in 2D visual-based skill assessment, with more than 80% accuracy for all three surgical subtasks available in JIGSAWS (Knot-Tying, Suturing and Needle-Passing).
Classification accuracy could be further improved by introducing new visual features, such as image-based orientation and image-based collision detection, or, on the evaluation side, by utilising other Support Vector Machine kernels, tuning the hyperparameters, or using other classification methods (e.g., the boosted-trees algorithm). We showed the potential of optical flow as an input for RAMIS skill assessment, highlighting the maximum accuracy achievable with these data by evaluating the benchmark's methods independently. The highest-performing method, the Residual Neural Network, reached mean accuracies of 81.89%, 84.23% and 83.54% for the skills of Suturing, Needle-Passing and Knot-Tying, respectively.
Affiliation(s)
- Gábor Lajkó
- Autonomous Systems Track, Double Degree Programme, EIT Digital Master School, Technische Universität Berlin, Straße des 17. Juni 135, 10623 Berlin, Germany
- ELTE Faculty of Informatics, Eötvös Loránd University, Pázmány Péter Sétány 1/C, 1117 Budapest, Hungary
- Renáta Nagyné Elek
- Antal Bejczy Center for Intelligent Robotics, University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary
- Doctoral School of Applied Informatics and Applied Mathematics, Óbuda University, Bécsi út 96/b, 1034 Budapest, Hungary
- John von Neumann Faculty of Informatics, Óbuda University, Bécsi út 96/b, 1034 Budapest, Hungary
- Tamás Haidegger
- Antal Bejczy Center for Intelligent Robotics, University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary
- Austrian Center for Medical Innovation and Technology, Viktor Kaplan-Straße 2/1, 2700 Wiener Neustadt, Austria
8. Fukuoka K, Teishima J, Inoue S, Hayashi T, Matsubara A. The influence of reviewer's occupation on the skill assessment of urethrovesical anastomosis in robot-assisted radical prostatectomy. Asian J Endosc Surg 2021; 14:451-457. [PMID: 33145920] [DOI: 10.1111/ases.12892]
Abstract
INTRODUCTION In recent years, crowd-sourced assessment has been reported as a time-saving, cost-efficient, and practicable method of surgical skill evaluation. However, differences in individual reviewers' assessments of surgical skill cannot be examined in terms of reviewer characteristics, because reviewers are usually anonymously and randomly selected. This study aimed to reveal the effect of reviewers' occupations on their assessment of a surgeon's skill. METHODS In total, 42 urologists, 19 paramedics, 73 medical students, and 28 non-medical personnel used the Global Evaluative Assessment of Robotic Skills (GEARS), a validated robotic surgery rating tool, to assess the surgical skill shown in nine edited video clips of complete urethrovesical anastomosis during robot-assisted radical prostatectomy. The total GEARS scores of the four reviewer groups were compared, and the similarities and differences between the ratings of the urologists and those of the other three groups were investigated. RESULTS The rankings of the video clips by GEARS score were very similar in each group, and a strong positive correlation (R2 >0.8) was observed between the scores assigned by the urologists and those assigned by each of the other three groups. CONCLUSION Our findings indicate that crude evaluation of robot-assisted urethrovesical anastomosis is not affected by the reviewers' occupations; non-medical personnel may be able to provide a rudimentary screening evaluation of surgical skill.
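The between-group agreement reported above is a squared Pearson correlation over the per-clip scores of two reviewer groups. A minimal numpy sketch follows; the GEARS totals are invented for illustration and are not the study's data.

```python
import numpy as np

def r_squared(x, y):
    """Squared Pearson correlation between two reviewer groups' mean
    scores over the same set of video clips."""
    r = np.corrcoef(x, y)[0, 1]
    return float(r * r)

# Hypothetical mean GEARS totals for nine video clips, as rated by the
# urologist group and by one comparison group.
urologists = np.array([12.1, 14.3, 15.0, 16.8, 17.2, 18.5, 19.1, 20.4, 22.0])
students = np.array([11.5, 13.9, 15.8, 16.0, 17.9, 18.1, 19.6, 21.0, 21.4])
print(f"R^2 = {r_squared(urologists, students):.3f}")
```

An R² above 0.8, as in the study, means the comparison group's scores explain over 80% of the variance in the urologists' scores across clips, hence the conclusion that the crude ranking of performances is occupation-independent.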
Affiliation(s)
- Kenichiro Fukuoka
- Department of Urology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
- Jun Teishima
- Department of Urology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
- Shogo Inoue
- Department of Urology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
- Tetsutaro Hayashi
- Department of Urology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
- Akio Matsubara
- Department of Urology, Hiroshima General Hospital, Hatsukaichi, Japan
9.
Abstract
Surgical skill-level assessment is key to collecting the required feedback and adapting educational programs accordingly. Currently, these assessments in minimally invasive surgery programs are primarily based on subjective methods, and there is no consensus on skill-level classification. One of the most detailed classifications categorizes skill levels as beginner, novice, intermediate, sub-expert, and expert. To properly integrate skill assessment into minimally invasive surgical education and provide skill-based training alternatives, it is necessary to classify skill levels in as much detail as possible and to identify the differences between all levels objectively. Yet, despite very encouraging results in the literature, most studies have examined only the differences between novice and expert skill levels, leaving out the crucial levels between them, and very few studies have considered the eye-movement behaviour of surgical residents. To this end, the present study attempted to distinguish novice- and intermediate-level surgical residents based on their eye movements. Eye-movement data were recorded from 23 volunteer surgical residents while they performed four computer-based simulated surgical tasks under different hand conditions. The data were analyzed using logistic regression to estimate the skill levels of both groups. The best estimation result, a 91.3% recognition rate in distinguishing novice from intermediate surgical residents, was obtained for one of the four scenarios under the dominant-hand condition. These results show that eye movements can potentially be used to identify surgeons with intermediate and novice skills. However, they also indicate that the order in which the scenarios are presented, the design of the scenarios and tasks, and their appropriateness to the participants' skill levels are all critical factors for improving the estimation ratio, and hence require thorough assessment in future research.
Affiliation(s)
- Gonca Gokce Menekse Dalveren
- Department of Computer Science, Norwegian University of Science and Technology, Gjøvik, Norway
- Department of Information Systems Engineering, Atılım University, Ankara, Turkey
10.
Abstract
Endoscopic surgery procedures require specific skills, such as eye-hand coordination, to be developed. Current education programs face problems in providing appropriate skill improvement and assessment methods in this field. This study proposes objective metrics for hand-movement skills and assesses eye-hand coordination. An experimental study was conducted with 15 surgical residents to test the newly proposed measures. Two computer-based, two-handed endoscopic surgery practice scenarios were developed in a simulation environment to gather the participants' eye-gaze data with an eye tracker and the related hand-movement data through haptic interfaces. Additionally, participants' eye-hand coordination skills were analyzed. The results indicate higher correlations between the intermediates' eye and hand movements compared to the novices: an increase in the intermediates' visual concentration leads to smoother hand movements, whereas the novices' hand movements tend to remain at a standstill. After the first round of practice, all participants' eye-hand coordination improved on the specific task targeted in this study. These results suggest that the proposed metrics can provide additional insight into trainees' eye-hand coordination skills and help instructional system designers better address training requirements.
11. Shaharan S, Nugent E, Ryan DM, Traynor O, Neary P, Buckley D. Basic Surgical Skill Retention: Can Patriot Motion Tracking System Provide an Objective Measurement for it? J Surg Educ 2016; 73:245-249. [PMID: 26572096] [DOI: 10.1016/j.jsurg.2015.10.001]
Abstract
BACKGROUND Knot tying is a fundamental skill that surgical trainees must learn early in their training. The aim of this study was to establish the predictive and concurrent validity of the Patriot as an assessment tool and to determine skill retention in first-year surgical trainees after 5 months of training. METHODS First-year surgical trainees were recruited in their first month of the training program, and experts were invited to set the proficiency level. The subjects performed hand knot tying on a bench model, assessed at baseline in the first month of training and again at 5 months. The assessment tools were the Patriot electromagnetic tracking system and the Objective Structured Assessment of Technical Skills (OSATS). The trainees' scores were compared to the proficiency score, and the data were analyzed using paired t-tests and Pearson correlation analysis. RESULTS A total of 14 first-year trainees participated. The time taken to complete the task and the path length (PL) were significantly shorter at 5 months (p = 0.007 and p = 0.0085, respectively), and OSATS scores showed a significant improvement (p = 0.0004). There was a significant correlation between PL and OSATS at baseline (r = -0.873) and at month 5 (r = -0.774). In all, 50% of trainees reached the proficiency PL at baseline and at month 5; of the remainder, 3 trainees improved their PL to reach proficiency and another 3 failed to reach it. CONCLUSION The parameters from the Patriot motion tracker correlated significantly with the classical observational assessment tool and were capable of highlighting skill retention in surgical trainees. The automated scoring system therefore has a role in the surgical training curriculum as an adjunct to the available assessment tools.
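The path-length (PL) metric above is simply the cumulative distance travelled by the tracked hand sensor: shorter paths for the same task indicate more economical motion. A minimal numpy sketch follows; the sample coordinates are invented.

```python
import numpy as np

def path_length(positions):
    """Total distance travelled: the sum of Euclidean distances between
    consecutive position samples from a motion tracker."""
    steps = np.diff(np.asarray(positions, dtype=float), axis=0)
    return float(np.linalg.norm(steps, axis=1).sum())

# Hypothetical 3-D samples (arbitrary units): a direct motion vs. a
# wandering one between the same start and end points.
direct = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
wander = np.array([[0, 0, 0], [1, 1, 0], [2, -1, 0], [3, 0, 0]])
print("direct path length:", path_length(direct))  # 3.0
print("wander path length:", path_length(wander))
```

The negative correlation with OSATS in the study is expected under this reading: higher-skilled performances accumulate less unnecessary travel, so PL falls as observational scores rise.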
Affiliation(s)
- Shazrinizam Shaharan
- National Surgical Training Centre, Royal College of Surgeons in Ireland, Dublin, Ireland
- Emmeline Nugent
- Department of Surgery, Adelaide and Meath Hospital incorporating National Children's Hospital (AMNCH), Dublin, Ireland
- Donncha M Ryan
- National Surgical Training Centre, Royal College of Surgeons in Ireland, Dublin, Ireland
- Oscar Traynor
- National Surgical Training Centre, Royal College of Surgeons in Ireland, Dublin, Ireland
- Paul Neary
- Department of Surgery, Adelaide and Meath Hospital incorporating National Children's Hospital (AMNCH), Dublin, Ireland
- Declan Buckley
- Department of Surgery, Adelaide and Meath Hospital incorporating National Children's Hospital (AMNCH), Dublin, Ireland
12.
Abstract
Robotic assistance has become a leading trend in minimally invasive surgery, building on the global success of laparoscopic surgery. Manual laparoscopy requires advanced skills and capabilities acquired through a tedious learning process, while da Vinci-type surgical systems offer intuitive control and advanced ergonomics. Nevertheless, in either case, the key issue is to assess the surgeon's skills and capabilities objectively. Robotic devices offer a radically new way to collect data during surgical procedures, opening the space for new forms of skill parameterization. This may be revolutionary for MIS training, given the new and objective surgical curricula and examination methods it enables. The article reviews currently developed skill assessment techniques for robotic surgery and simulators, thoroughly inspecting their validation procedures and utility. In the coming years, these methods will become the mainstream of Western surgical education.
Affiliation(s)
- Tamás Haidegger
- Antal Bejczy Center for Intelligent Robotics (Bejczy Antal iRobottechnikai Központ), Óbuda University, Kiscelli u. 82, 1032 Budapest, Hungary