1. Gillani M, Rupji M, Paul Olson TJ, Sullivan P, Shaffer VO, Balch GC, Shields MC, Liu Y, Rosen SA. Objective Performance Indicators During Robotic Right Colectomy Differ According to Surgeon Skill. J Surg Res 2024; 302:836-844. PMID: 39241292; PMCID: PMC11490410; DOI: 10.1016/j.jss.2024.07.103.
Abstract
INTRODUCTION Surgeon assessment tools are subjective and nonscalable. Objective performance indicators (OPIs), machine learning-enabled metrics recorded during robotic surgery, offer objective insights into surgeon movements and robotic arm kinematics. In this study, we identified OPIs that differed significantly across expert (EX), intermediate (IM), and novice (NV) surgeons during robotic right colectomy (RRC). METHODS Endoscopic videos were annotated to delineate 461 surgical steps across 25 robotic right colectomies. OPIs were compared among two EX, two IM, and eight NV surgeons during mesenteric dissection, vascular pedicle ligation, right colon and hepatic flexure mobilization, and preparation of the proximal and distal bowel for transection. RESULTS Compared with NVs, EXs exhibited greater velocity, acceleration, and jerk for the camera, dominant, nondominant, and third arms across all steps. Compared with NVs, IMs exhibited more arm swaps and master clutch use; higher camera-related metrics (movement, path length, moving time, velocity, acceleration, and jerk); greater dominant wrist pitch and nondominant wrist articulation (roll, pitch, and yaw); longer dominant and nondominant arm path lengths; and higher velocity, acceleration, and jerk for the dominant, nondominant, and third arms across all steps. Compared with NVs, EX/IM surgeons used more arm swaps, higher camera-related metrics (movement, path length, velocity, acceleration, and jerk), longer nondominant arm path length, and greater velocity, acceleration, and jerk for the dominant, nondominant, and third arms across all steps. CONCLUSIONS We report OPIs that discriminate among EX, IM, and NV surgeons during RRC. This study is the first to demonstrate the feasibility of using OPIs as an objective, scalable way to classify surgeon skill during RRC steps.
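The group comparisons this abstract describes amount to testing whether per-step kinematic distributions differ across the three skill cohorts. A minimal sketch with invented numbers (a Kruskal-Wallis test on a synthetic "dominant-arm velocity" metric; none of these values come from the study):

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

# Synthetic dominant-arm velocity samples (arbitrary units) for three
# skill groups; real OPIs are logged by the robotic system per step.
novice = rng.normal(1.0, 0.2, 30)
intermediate = rng.normal(1.3, 0.2, 30)
expert = rng.normal(1.6, 0.2, 30)

# Kruskal-Wallis tests whether the three distributions differ overall;
# significant results would then prompt pairwise comparisons.
stat, p = kruskal(novice, intermediate, expert)
print(f"H = {stat:.2f}, p = {p:.4g}")
```

A nonparametric test is used here because kinematic metrics are often skewed; the study itself does not specify its test in this abstract.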
Affiliation(s)
- Mishal Gillani, Department of Surgery, Emory University School of Medicine, Atlanta, Georgia
- Manali Rupji, Winship Cancer Institute, Emory University, Atlanta, Georgia
- Terrah J Paul Olson, Department of Surgery, Emory University School of Medicine, Atlanta, Georgia
- Patrick Sullivan, Department of Surgery, Emory University School of Medicine, Atlanta, Georgia
- Virginia O Shaffer, Department of Surgery, Emory University School of Medicine, Atlanta, Georgia
- Glen C Balch, Department of Surgery, Emory University School of Medicine, Atlanta, Georgia
- Yuan Liu, Rollins School of Public Health, Emory University, Atlanta, Georgia
- Seth A Rosen, Department of Surgery, Emory University School of Medicine, Atlanta, Georgia
2. Gillani M, Rupji M, Paul Olson TJ, Balch GC, Shields MC, Liu Y, Rosen SA. Objective performance indicators during specific steps of robotic right colectomy can differentiate surgeon expertise. Surgery 2024; 176:1036-1043. PMID: 39025692; PMCID: PMC11381159; DOI: 10.1016/j.surg.2024.06.040.
Abstract
BACKGROUND Current surgical assessment tools are subjective and nonscalable. Objective performance indicators, calculated from robotic systems data, provide automated data regarding surgeon movements and robotic arm kinematics. We identified objective performance indicators that differed significantly between expert and trainee surgeons during specific steps of robotic right colectomy. METHODS Endoscopic videos were annotated to delineate surgical steps during robotic right colectomies. Objective performance indicators were compared during mesenteric dissection, ascending colon mobilization, hepatic flexure mobilization, and bowel preparation for transection. RESULTS Twenty-five robotic right colectomy procedures (461 total surgical steps) performed by 2 experts and 8 trainees were analyzed. Experts exhibited faster camera acceleration and jerk during all steps, as well as faster dominant and nondominant arm acceleration and dominant arm jerk during all steps except distal bowel preparation. During mesenteric dissection, experts used faster camera and dominant arm velocity. During medial-to-lateral ascending colon mobilization, experts used less dominant wrist yaw and pitch, faster nondominant arm velocity, shorter dominant arm path length, and shorter moving times for the camera, dominant arm, and nondominant arm. During lateral-to-medial ascending colon mobilization, experts had faster dominant and nondominant arm velocity and third-arm acceleration. During hepatic flexure mobilization, experts exhibited more camera movements, greater velocity for the camera, dominant, and nondominant arms, and faster third-arm acceleration. During distal bowel preparation, experts used greater dominant wrist articulation, faster camera velocity, and longer nondominant arm path length. During proximal bowel preparation, experts demonstrated faster nondominant arm velocity.
CONCLUSION Objective performance indicators can differentiate experts from trainees during distinct steps of robotic right colectomy. These automated, objective, and scalable metrics can provide personalized feedback for trainees.
Affiliation(s)
- Mishal Gillani, Department of Surgery, Emory University School of Medicine, Atlanta, GA
- Manali Rupji, Winship Cancer Institute, Emory University, Atlanta, GA
- Glen C Balch, Department of Surgery, Emory University School of Medicine, Atlanta, GA
- Yuan Liu, Rollins School of Public Health, Emory University, Atlanta, GA
- Seth Alan Rosen, Department of Surgery, Emory University School of Medicine, Atlanta, GA
3. Khan DZ, Koh CH, Das A, Valetopolou A, Hanrahan JG, Horsfall HL, Baldeweg SE, Bano S, Borg A, Dorward NL, Olukoya O, Stoyanov D, Marcus HJ. Video-Based Performance Analysis in Pituitary Surgery-Part 1: Surgical Outcomes. World Neurosurg 2024; 190:e787-e796. PMID: 39122112; DOI: 10.1016/j.wneu.2024.07.218.
Abstract
BACKGROUND Endoscopic pituitary adenoma surgery has a steep learning curve, with varying surgical techniques and outcomes across centers. In other surgeries, superior performance is linked with superior surgical outcomes. This study aimed to explore the prediction of patient-specific outcomes using surgical video analysis in pituitary surgery. METHODS Endoscopic pituitary adenoma surgery videos from a single center were annotated by experts for operative workflow (3 surgical phases and 15 surgical steps) and operative skill (using modified Objective Structured Assessment of Technical Skills [mOSATS]). Quantitative workflow metrics were calculated, including phase duration and step transitions. Poisson or logistic regression was used to assess the association of workflow metrics and mOSATS with common inpatient surgical outcomes. RESULTS In total, 100 videos from 100 patients were included. Nasal phase mean duration was 24 minutes and mean mOSATS was 21.2/30. Mean duration was 34 minutes and mean mOSATS was 20.9/30 for the sellar phase, and 11 minutes and 21.7/30, respectively, for the closure phase. The most common adverse outcomes were new anterior pituitary hormone deficiency (n = 26), dysnatremia (n = 24), and cerebrospinal fluid leak (n = 5). Higher mOSATS for all 3 phases and shorter operation duration were associated with decreased length of stay (P = 0.003 and P < 0.001, respectively). Superior closure phase mOSATS were associated with reduced postoperative cerebrospinal fluid leak (P < 0.001), and superior sellar phase mOSATS were associated with reduced postoperative visual deterioration (P = 0.041). CONCLUSIONS Superior surgical skill and shorter surgical time were associated with superior surgical outcomes, at a generic and phase-specific level. Such video-based analysis has promise for integration into data-driven training and service improvement initiatives.
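The regression analysis this abstract describes can be sketched for the binary-outcome case: fit a logistic model of an adverse outcome on a workflow metric and read off the odds ratio. All data below are invented for illustration (the variable names and the simulated effect are assumptions, not the study's data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical data: sellar-phase duration (minutes) and a binary
# adverse outcome, simulated so longer phases are more often positive.
duration = rng.normal(34, 8, 200)
prob = 1 / (1 + np.exp(-(duration - 34) / 4))
adverse = (rng.random(200) < prob).astype(int)

# Logistic regression of outcome on duration; exp(coefficient) is the
# odds ratio per extra minute of operating time.
model = LogisticRegression().fit(duration.reshape(-1, 1), adverse)
odds_ratio_per_minute = float(np.exp(model.coef_[0, 0]))
print(f"odds ratio per extra minute: {odds_ratio_per_minute:.2f}")
```

The study also used Poisson regression for count outcomes; the same pattern applies with a log-linear model instead.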
Affiliation(s)
- Danyal Z Khan, Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Chan Hee Koh, Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Adrito Das, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Alexandra Valetopolou, Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- John G Hanrahan, Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Hugo Layard Horsfall, Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Stephanie E Baldeweg, Department of Diabetes & Endocrinology, University College London Hospitals NHS Foundation Trust, London, UK; Division of Medicine, Department of Experimental and Translational Medicine, Centre for Obesity and Metabolism, University College London, London, UK
- Sophia Bano, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Anouk Borg, Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Neil L Dorward, Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Olatomiwa Olukoya, Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Danail Stoyanov, Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Digital Surgery Ltd, Medtronic, London, UK
- Hani J Marcus, Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
4. Ershad Langroodi M, Liu X, Tousignant MR, Jarc AM. Objective performance indicators versus GEARS: an opportunity for more accurate assessment of surgical skill. Int J Comput Assist Radiol Surg 2024. PMID: 39320413; DOI: 10.1007/s11548-024-03248-2.
Abstract
PURPOSE Surgical skill evaluation that relies on subjective scoring of surgical videos can be time-consuming and inconsistent across raters. We demonstrate differentiated opportunities for objective evaluation to improve surgeon training and performance. METHODS Subjective evaluation was performed using the Global Evaluative Assessment of Robotic Skills (GEARS) from both expert and crowd raters, whereas objective evaluation used objective performance indicators (OPIs) derived from da Vinci surgical systems. Classifiers were trained for each evaluation method to distinguish between surgical expertise levels. This study includes one clinical task from a case series of robotic-assisted sleeve gastrectomy procedures performed by a single surgeon, and two training tasks performed by novice and expert surgeons, i.e., surgeons with no experience in robotic-assisted surgery (RAS) and those with more than 500 RAS procedures. RESULTS When comparing expert and novice skill levels, the OPI-based classifier showed significantly higher accuracy than the GEARS-based classifier on the more complex dissection task (OPI 0.93 ± 0.08 vs. GEARS 0.67 ± 0.18; 95% CI, 0.16-0.37; p = 0.02), but no significant difference was found on the simpler suturing task. For the single-surgeon case series, both classifiers performed well when differentiating between early and late group cases with smaller group sizes and larger intervals between groups (OPI 0.9 ± 0.08; GEARS 0.87 ± 0.12; 95% CI, 0.02-0.04; p = 0.67). When the group size was increased to include more cases, thereby leaving smaller intervals between groups, OPIs demonstrated significantly higher accuracy (OPI 0.97 ± 0.06; GEARS 0.76 ± 0.07; 95% CI, 0.12-0.28; p = 0.004) in differentiating between early and late cases.
CONCLUSIONS Objective methods for skill evaluation in RAS outperform subjective methods when (1) differentiating expertise in a technically challenging training task, and (2) identifying more granular differences along early versus late phases of a surgeon learning curve within a clinical task. Objective methods offer an opportunity for more accessible and scalable skill evaluation in RAS.
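The OPI-based classification described above can be sketched as a standard supervised pipeline: a feature table of kinematic indicators per task execution, a label for expertise level, and cross-validated accuracy. The features and group shift below are invented stand-ins, not the paper's data or model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Invented OPI-style feature table: rows are task executions; columns
# might be velocity, path length, wrist articulation, clutch count.
n, d = 120, 4
X_novice = rng.normal(0.0, 1.0, (n, d))
X_expert = rng.normal(1.0, 1.0, (n, d))  # experts shifted in feature space
X = np.vstack([X_novice, X_expert])
y = np.array([0] * n + [1] * n)          # 0 = novice, 1 = expert

# Cross-validated accuracy of an expert-vs-novice classifier.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"mean cross-validated accuracy: {acc:.2f}")
```

The paper does not specify its classifier in this abstract; the random forest here is just one reasonable choice.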
Affiliation(s)
- Xi Liu, Research and Development, Intuitive Surgical, Inc, 5655 Spalding Dr, Norcross, GA, 30092, USA
- Mark R Tousignant, Research and Development, Intuitive Surgical, Inc, 5655 Spalding Dr, Norcross, GA, 30092, USA
- Anthony M Jarc, Research and Development, Intuitive Surgical, Inc, 5655 Spalding Dr, Norcross, GA, 30092, USA
5. Shafiei SB, Shadpour S, Mohler JL, Kauffman EC, Holden M, Gutierrez C. Classification of subtask types and skill levels in robot-assisted surgery using EEG, eye-tracking, and machine learning. Surg Endosc 2024; 38:5137-5147. PMID: 39039296; PMCID: PMC11362185; DOI: 10.1007/s00464-024-11049-6.
Abstract
BACKGROUND Objective and standardized evaluation of surgical skills in robot-assisted surgery (RAS) holds critical importance for both surgical education and patient safety. This study introduces machine learning (ML) techniques using features derived from electroencephalogram (EEG) and eye-tracking data to identify surgical subtasks and classify skill levels. METHOD The efficacy of this approach was assessed using a comprehensive dataset encompassing nine distinct classes, each representing a unique combination of three surgical subtasks executed by surgeons while performing operations on pigs. Four ML models, logistic regression, random forest, gradient boosting, and extreme gradient boosting (XGB) were used for multi-class classification. To develop the models, 20% of data samples were randomly allocated to a test set, with the remaining 80% used for training and validation. Hyperparameters were optimized through grid search, using fivefold stratified cross-validation repeated five times. Model reliability was ensured by performing train-test split over 30 iterations, with average measurements reported. RESULTS The findings revealed that the proposed approach outperformed existing methods for classifying RAS subtasks and skills; the XGB and random forest models yielded high accuracy rates (88.49% and 88.56%, respectively) that were not significantly different (two-sample t-test; P-value = 0.9). CONCLUSION These results underscore the potential of ML models to augment the objectivity and precision of RAS subtask and skill evaluation. Future research should consider exploring ways to optimize these models, particularly focusing on the classes identified as challenging in this study. Ultimately, this study marks a significant step towards a more refined, objective, and standardized approach to RAS training and competency assessment.
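The model-development protocol above (80/20 split, grid search, repeated fivefold stratified cross-validation) maps directly onto scikit-learn. This sketch uses synthetic features in place of the EEG/eye-tracking data, scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost, and only two repeats for speed (the paper repeats five times):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     train_test_split)

# Synthetic multi-class stand-in for the EEG/eye-tracking feature set.
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)

# 20% of samples held out for testing, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

# Hyperparameters optimized by grid search with repeated fivefold
# stratified cross-validation on the training portion.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=0)
grid = GridSearchCV(GradientBoostingClassifier(random_state=0),
                    {"learning_rate": [0.05, 0.1], "max_depth": [2, 3]},
                    cv=cv)
grid.fit(X_tr, y_tr)
test_acc = grid.score(X_te, y_te)
print(f"best params: {grid.best_params_}, test accuracy: {test_acc:.2f}")
```

The parameter grid here is minimal; a real search would cover more values and repeat the train-test split many times, averaging the results as the paper does.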
Affiliation(s)
- Somayeh B Shafiei, The Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Saeed Shadpour, Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- James L Mohler, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Eric C Kauffman, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Matthew Holden, School of Computer Science, Carleton University, 1125 Colonel By Drive, Ottawa, ON, K1S 5B6, Canada
- Camille Gutierrez, Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY, 14214, USA
6. Elmitwalli S, Mehegan J. Sentiment analysis of COP9-related tweets: a comparative study of pre-trained models and traditional techniques. Front Big Data 2024; 7:1357926. PMID: 38572292; PMCID: PMC10987730; DOI: 10.3389/fdata.2024.1357926.
Abstract
Introduction Sentiment analysis has become a crucial area of research in natural language processing in recent years. The study aims to compare the performance of various sentiment analysis techniques, including lexicon-based, machine learning, Bi-LSTM, BERT, and GPT-3 approaches, using two commonly used datasets, IMDB reviews and Sentiment140. The objective is to identify the best-performing technique for an exemplar dataset, tweets associated with the WHO Framework Convention on Tobacco Control Ninth Conference of the Parties in 2021 (COP9). Methods A two-stage evaluation was conducted. In the first stage, various techniques were compared on standard sentiment analysis datasets using standard evaluation metrics such as accuracy, F1-score, and precision. In the second stage, the best-performing techniques from the first stage were applied to partially annotated COP9 conference-related tweets. Results In the first stage, BERT achieved the highest F1-scores (0.9380 for IMDB and 0.8114 for Sentiment140), followed by GPT-3 (0.9119 and 0.7913) and Bi-LSTM (0.8971 and 0.7778). In the second stage, GPT-3 performed the best for sentiment analysis on partially annotated COP9 conference-related tweets, with an F1-score of 0.8812. Discussion The study demonstrates the effectiveness of pre-trained models like BERT and GPT-3 for sentiment analysis tasks, outperforming traditional techniques on standard datasets. Moreover, the better performance of GPT-3 on the partially annotated COP9 tweets highlights its ability to generalize well to domain-specific data with limited annotations. This provides researchers and practitioners with a viable option of using pre-trained models for sentiment analysis in scenarios with limited or no annotated data across different domains.
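The first-stage comparison above scores each technique's predictions against gold labels with accuracy, F1, and precision. A minimal sketch of that scoring step, using toy labels (1 = positive, 0 = negative) rather than any of the paper's datasets:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score

# Toy gold labels and one model's predictions on a tiny sentiment test set.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

# The same three metrics the study reports for each technique.
print("accuracy :", accuracy_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
```

With these toy labels, accuracy is 0.8 and both F1 and precision are 5/6; in the study each candidate model (lexicon-based, Bi-LSTM, BERT, GPT-3) would be scored this way on IMDB and Sentiment140.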
Affiliation(s)
- Sherif Elmitwalli, Tobacco Control Research Group, Department for Health, University of Bath, Bath, United Kingdom
7. Elmitwalli S, Mehegan J, Wellock G, Gallagher A, Gilmore A. Topic prediction for tobacco control based on COP9 tweets using machine learning techniques. PLoS One 2024; 19:e0298298. PMID: 38358979; PMCID: PMC10868820; DOI: 10.1371/journal.pone.0298298.
Abstract
The prediction of tweets associated with specific topics offers the potential to automatically focus on and understand online discussions surrounding these issues. This paper introduces a comprehensive approach that centers on the topic of "harm reduction" within the broader context of tobacco control. The study leveraged tweets from the period surrounding the ninth Conference of the Parties to review the Framework Convention on Tobacco Control (COP9) as a case study to pilot this approach. By using Latent Dirichlet Allocation (LDA)-based topic modeling, the study successfully categorized tweets related to harm reduction. Subsequently, various machine learning techniques were employed to predict these topics, achieving a prediction accuracy of 91.87% using the Random Forest algorithm. Additionally, the study explored correlations between retweets and sentiment scores. It also conducted a toxicity analysis to understand the extent to which online conversations lacked neutrality. Understanding the topics, sentiment, and toxicity of Twitter data is crucial for identifying public opinion and its formation. By specifically focusing on the topic of "harm reduction" in tweets related to COP9, the findings offer valuable insights into online discussions surrounding tobacco control. This understanding can aid policymakers in effectively informing the public and garnering public support, ultimately contributing to the successful implementation of tobacco control policies.
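The LDA step described above learns a small set of latent topics from word counts and assigns each document a distribution over them. A minimal sketch with a few invented tweet-like strings (the texts and the two-topic setting are assumptions for illustration, not the COP9 corpus):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Invented tweet-like strings; the real input was COP9-period tweets.
docs = [
    "harm reduction saves lives support safer nicotine products",
    "vaping and harm reduction debated at cop9 tobacco meeting",
    "tobacco industry lobbying undermines public health policy",
    "strong tobacco control policy protects public health",
]

# Bag-of-words counts, then LDA with 2 latent topics.
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Each document receives a probability distribution over the topics;
# these topic proportions become features for the downstream classifier.
doc_topics = lda.transform(X)
print(doc_topics.round(2))
```

In the study, tweets categorized under the "harm reduction" topic were then predicted with supervised models, Random Forest performing best (91.87% accuracy).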
Affiliation(s)
- Sherif Elmitwalli, Tobacco Control Research Group, Department for Health, University of Bath, Bath, United Kingdom
- John Mehegan, Tobacco Control Research Group, Department for Health, University of Bath, Bath, United Kingdom
- Georgie Wellock, Tobacco Control Research Group, Department for Health, University of Bath, Bath, United Kingdom
- Allen Gallagher, Tobacco Control Research Group, Department for Health, University of Bath, Bath, United Kingdom
- Anna Gilmore, Tobacco Control Research Group, Department for Health, University of Bath, Bath, United Kingdom
8. Boal MWE, Anastasiou D, Tesfai F, Ghamrawi W, Mazomenos E, Curtis N, Collins JW, Sridhar A, Kelly J, Stoyanov D, Francis NK. Evaluation of objective tools and artificial intelligence in robotic surgery technical skills assessment: a systematic review. Br J Surg 2024; 111:znad331. PMID: 37951600; PMCID: PMC10771126; DOI: 10.1093/bjs/znad331.
Abstract
BACKGROUND There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula. METHODS A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed, and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical, laparoscopy, or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool analysed artificial intelligence (AI) studies. The Modified Downs-Black checklist was used to assess risk of bias. RESULTS Two hundred and forty-seven studies were analysed, identifying: 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs, and 53 AI studies. The Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods, and 1 non-simulator APM reached LoR 2. AI models estimated outcomes (skill or clinical), demonstrating superior accuracy rates in the laboratory (60 per cent of methods reporting accuracies over 90 per cent) compared with real surgery (67 to 100 per cent). CONCLUSIONS Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.
Affiliation(s)
- Matthew W E Boal, The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK; Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Dimitrios Anastasiou, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK; Medical Physics and Biomedical Engineering, UCL, London, UK
- Freweini Tesfai, The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK
- Walaa Ghamrawi, The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Evangelos Mazomenos, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK; Medical Physics and Biomedical Engineering, UCL, London, UK
- Nathan Curtis, Department of General Surgery, Dorset County Hospital NHS Foundation Trust, Dorchester, UK
- Justin W Collins, Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK; University College London Hospitals NHS Foundation Trust, London, UK
- Ashwin Sridhar, Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK; University College London Hospitals NHS Foundation Trust, London, UK
- John Kelly, Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK; University College London Hospitals NHS Foundation Trust, London, UK
- Danail Stoyanov, Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK; Computer Science, UCL, London, UK
- Nader K Francis, The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK; Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK; Yeovil District Hospital, Somerset Foundation NHS Trust, Yeovil, Somerset, UK
9. Shafiei SB, Shadpour S, Mohler JL, Sasangohar F, Gutierrez C, Seilanian Toussi M, Shafqat A. Surgical skill level classification model development using EEG and eye-gaze data and machine learning algorithms. J Robot Surg 2023; 17:2963-2971. PMID: 37864129; PMCID: PMC10678814; DOI: 10.1007/s11701-023-01722-8.
Abstract
The aim of this study was to develop machine learning classification models using electroencephalogram (EEG) and eye-gaze features to predict the level of surgical expertise in robot-assisted surgery (RAS). EEG and eye-gaze data were recorded from 11 participants who performed cystectomy, hysterectomy, and nephrectomy using the da Vinci robot. Skill level was evaluated by an expert RAS surgeon using the modified Global Evaluative Assessment of Robotic Skills (GEARS) tool, and data from three subtasks were extracted to classify skill levels using three classification models-multinomial logistic regression (MLR), random forest (RF), and gradient boosting (GB). The GB algorithm was used with a combination of EEG and eye-gaze data to classify skill levels, and differences between the models were tested using two-sample t tests. The GB model using EEG features showed the best performance for blunt dissection (83% accuracy), retraction (85% accuracy), and burn dissection (81% accuracy). The combination of EEG and eye-gaze features using the GB algorithm improved the accuracy of skill level classification to 88% for blunt dissection, 93% for retraction, and 86% for burn dissection. The implementation of objective skill classification models in clinical settings may enhance the RAS surgical training process by providing objective feedback about performance to surgeons and their teachers.
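The model comparisons above (and the two-sample t-tests between them) can be sketched by comparing per-iteration accuracy distributions for two classifiers. The accuracy values below are invented, not the paper's results:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)

# Invented per-iteration accuracies for two models on the same subtask,
# e.g. gradient boosting vs. multinomial logistic regression.
acc_gb = rng.normal(0.88, 0.03, 30)
acc_mlr = rng.normal(0.80, 0.03, 30)

# Two-sample t-test on the accuracy distributions, as the paper uses
# to decide whether one model significantly outperforms the other.
t, p = ttest_ind(acc_gb, acc_mlr)
print(f"t = {t:.2f}, p = {p:.4g}")
```

A non-significant p-value, as the paper found between its two best models, would indicate the classifiers perform comparably; here the simulated gap is large, so the test rejects.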
Affiliation(s)
- Somayeh B Shafiei, Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Saeed Shadpour, Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- James L Mohler, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Farzan Sasangohar, Wm Michael Barnes Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX, 77843, USA
- Camille Gutierrez, Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY, 14214, USA
- Mehdi Seilanian Toussi, Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Ambreen Shafqat, Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
10. Devin CL, Gillani M, Shields MC, Eldredge K, Kucera W, Rupji M, Purvis LA, Paul Olson TJ, Liu Y, Jarc A, Rosen SA. Ratio of Economy of Motion: A New Objective Performance Indicator to Assign Consoles During Dual-Console Robotic Proctectomy. Am Surg 2023. PMID: 36898676; DOI: 10.1177/00031348231161767.
Abstract
BACKGROUND Our group investigates objective performance indicators (OPIs) to analyze robotic colorectal surgery. Analyses of OPI data are difficult in dual-console procedures (DCPs) as there is currently no reliable, efficient, or scalable technique to assign console-specific OPIs during a DCP. We developed and validated a novel metric to assign tasks to appropriate surgeons during DCPs. METHODS A colorectal surgeon and fellow reviewed 21 unedited, dual-console proctectomy videos with no information to identify the operating surgeons. The reviewers watched a small number of random tasks and assigned "attending" or "trainee" to each task. Based on this sampling, the remainder of task assignments for each procedure was extrapolated. In parallel, we applied our newly developed OPI, ratio of economy of motion (rEOM), to assign consoles. Results from the 2 methods were compared. RESULTS A total of 1811 individual surgical tasks were recorded during 21 proctectomy videos. A median of 6.5 random tasks (137 total) were reviewed during each video, and the remainder of task assignments were extrapolated based on the 7.6% of tasks audited. The task assignment agreement was 91.2% for video review vs rEOM, with rEOM providing ground truth. It took 2.5 hours to manually review video and assign tasks. Ratio of economy of motion task assignment was immediately available based on OPI recordings and automated calculation. DISCUSSION We developed and validated rEOM as an accurate, efficient, and scalable OPI to assign individual surgical tasks to appropriate surgeons during DCPs. This new resource will be useful to everyone involved in OPI research across all surgical specialties.
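The abstract does not give the formula for rEOM, so the sketch below is only one plausible reading: economy of motion as instrument path length per task, with the ratio between the two consoles indicating which surgeon was operating. The tracks, the normalization, and the 0.5 threshold are all invented assumptions, not the paper's definition:

```python
import numpy as np


def path_length(xyz: np.ndarray) -> float:
    """Total distance traveled by an instrument tip (N x 3 positions)."""
    return float(np.linalg.norm(np.diff(xyz, axis=0), axis=1).sum())


# Hypothetical tool-tip tracks for one task: console A moves
# substantially while console B barely moves.
t = np.linspace(0, 1, 100)[:, None]
console_a = np.hstack([np.sin(4 * t), np.cos(4 * t), t])
console_b = np.hstack([0.01 * t, 0.01 * t, 0.01 * t])

eom_a, eom_b = path_length(console_a), path_length(console_b)
r_eom = eom_a / (eom_a + eom_b)  # invented normalization, not the paper's
operator = "console A" if r_eom > 0.5 else "console B"
print(f"rEOM = {r_eom:.2f} -> task assigned to {operator}")
```

Whatever the exact definition, the appeal the abstract describes is the same: the assignment comes directly from logged kinematics, so it is available immediately rather than after hours of video review.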
Affiliation(s)
- Courtney L Devin
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
- Mishal Gillani
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
- Kyle Eldredge
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
- Walter Kucera
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
- Manali Rupji
- Biostatistics Shared Resource, Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Lilia A Purvis
- Research Division, Intuitive Surgical, Norcross, GA, USA
- Yuan Liu
- Biostatistics Shared Resource, Winship Cancer Institute, Emory University, Atlanta, GA, USA; Department of Biostatistics and Bioinformatics, Rollins School of Public Health, Emory University, Atlanta, GA, USA
- Anthony Jarc
- Research Division, Intuitive Surgical, Norcross, GA, USA
- Seth A Rosen
- Department of Surgery, Emory University School of Medicine, Atlanta, GA, USA
11
Liang C, Li W, Liu X, Zhao H, Yin L, Li M, Guo Y, Lang J, Bin X, Liu P, Chen C. Effect of annualized surgeon volume on major surgical complications for abdominal and laparoscopic radical hysterectomy for cervical cancer in China, 2004-2016: a retrospective cohort study. BMC Womens Health 2023; 23:69. [PMID: 36793026 PMCID: PMC9933338 DOI: 10.1186/s12905-023-02213-6] [Received: 09/28/2022] [Accepted: 02/06/2023] [Indexed: 02/17/2023]
Abstract
BACKGROUND Previous studies have suggested that higher surgeon volume leads to improved perioperative outcomes for oncologic surgery; however, the effect of surgeon volume on surgical outcomes might differ according to the surgical approach used. This paper evaluates the effect of surgeon volume on surgical complications of cervical cancer in an abdominal radical hysterectomy (ARH) cohort and a laparoscopic radical hysterectomy (LRH) cohort. METHODS We conducted a population-based retrospective study using the Major Surgical Complications of Cervical Cancer in China (MSCCCC) database to analyse patients who underwent radical hysterectomy (RH) from 2004 to 2016 at 42 hospitals. We estimated annualized surgeon volumes in the ARH cohort and in the LRH cohort separately. The effect of surgeon volume of ARH or LRH on surgical complications was examined using multivariable logistic regression models. RESULTS In total, 22,684 patients who underwent RH for cervical cancer were identified. In the abdominal surgery cohort, the mean surgeon case volume increased from 2004 to 2013 (3.5 to 8.7 cases) and then decreased from 2013 to 2016 (8.7 to 4.9 cases). The mean case volume of surgeons performing LRH increased from 1 to 12.1 cases between 2004 and 2016 (P < 0.01). In the abdominal surgery cohort, patients treated by intermediate-volume surgeons were more likely to experience postoperative complications (OR = 1.55, 95% CI = 1.11-2.15) than those treated by high-volume surgeons. In the laparoscopic surgery cohort, surgeon volume did not appear to influence the incidence of intraoperative or postoperative complications (P = 0.46; P = 0.13). CONCLUSIONS The performance of ARH by intermediate-volume surgeons is associated with an increased risk of postoperative complications. However, surgeon volume may have no effect on intraoperative or postoperative complications after LRH.
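The odds ratios above come from multivariable logistic regression; as a minimal sketch of the underlying quantity, here is an unadjusted odds ratio with a Woolf 95% confidence interval computed from an invented 2×2 table (the study's models additionally adjust for confounders, which a crude table cannot do):

```python
import math

# Invented counts, not MSCCCC data.
# rows: intermediate-volume vs high-volume surgeons; cols: complication yes/no
a, b = 120, 880   # intermediate-volume: events, non-events
c, d = 80, 920    # high-volume: events, non-events

or_ = (a * d) / (b * c)                      # cross-product odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # Woolf SE of log(OR)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 1.57 (95% CI 1.16-2.11)
```

A CI excluding 1.0, as here, is what underlies a statement like "more likely to experience postoperative complications."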
Affiliation(s)
- Cong Liang
- Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, No. 1838 Guangzhou Avenue, Guangzhou, 510515, China
- Weili Li
- Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, No. 1838 Guangzhou Avenue, Guangzhou, 510515, China
- Xiaoyun Liu
- Department of Gynecology, The Third Affiliated Hospital of Zunyi Medical University, Zunyi, China
- Hongwei Zhao
- Department of Gynecology, Shanxi Provincial Cancer Hospital, Taiyuan, China
- Lu Yin
- Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, No. 1838 Guangzhou Avenue, Guangzhou, 510515, China
- Mingwei Li
- Department of Obstetrics and Gynecology, Jiangmen Central Hospital of Sun Yat-sen University, Jiangmen, China
- Yu Guo
- Department of Gynecology, Anyang Tumor Hospital, Anyang, China
- Jinghe Lang
- Department of Obstetrics and Gynecology, Peking Union Medical College Hospital, Peking Union Medical College & Chinese Academy of Medical Science, Beijing, China
- Xiaonong Bin
- Department of Epidemiology, College of Public Health, Guangzhou Medical University, Guangzhou, China
- Ping Liu
- Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, No. 1838 Guangzhou Avenue, Guangzhou, 510515, China
- Chunlin Chen
- Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, No. 1838 Guangzhou Avenue, Guangzhou, 510515, China
12
Wagner M, Brandenburg JM, Bodenstedt S, Schulze A, Jenke AC, Stern A, Daum MTJ, Mündermann L, Kolbinger FR, Bhasker N, Schneider G, Krause-Jüttler G, Alwanni H, Fritz-Kebede F, Burgert O, Wilhelm D, Fallert J, Nickel F, Maier-Hein L, Dugas M, Distler M, Weitz J, Müller-Stich BP, Speidel S. Surgomics: personalized prediction of morbidity, mortality and long-term outcome in surgery using machine learning on multimodal data. Surg Endosc 2022; 36:8568-8591. [PMID: 36171451 PMCID: PMC9613751 DOI: 10.1007/s00464-022-09611-1] [Received: 05/10/2022] [Accepted: 09/03/2022] [Indexed: 01/06/2023]
Abstract
BACKGROUND Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integrating intraoperative surgical data and analyzing them with machine learning methods, leveraging the potential of these data in analogy to Radiomics and Genomics. METHODS We defined Surgomics as the entirety of surgomic features: process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team, we discussed potential data sources, such as endoscopic videos, vital-sign monitoring, and medical devices and instruments, and their respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers to rate the features' clinical relevance and technical feasibility. RESULTS In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants), the feature category with the highest clinical relevance as rated by surgeons was "surgical skill and quality of performance", both for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) and for long-term (oncological) outcome (8.2 ± 1.8). The feature category rated most feasible to extract automatically by (computer) scientists was "Instrument" (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective categories were "intraoperative adverse events", "action performed with instruments", "vital sign monitoring", and "difficulty of surgery". CONCLUSION Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.
Affiliation(s)
- Martin Wagner
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Johanna M Brandenburg
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Sebastian Bodenstedt
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
- André Schulze
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Alexander C Jenke
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Antonia Stern
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Marie T J Daum
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Lars Mündermann
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Fiona R Kolbinger
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Else Kröner Fresenius Center for Digital Health, Technische Universität Dresden, Dresden, Germany
- Nithya Bhasker
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Gerd Schneider
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Grit Krause-Jüttler
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Hisham Alwanni
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Fleur Fritz-Kebede
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Oliver Burgert
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, Reutlingen, Germany
- Dirk Wilhelm
- Department of Surgery, Faculty of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
- Johannes Fallert
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Felix Nickel
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Lena Maier-Hein
- Department of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Dugas
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Marius Distler
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany
- Jürgen Weitz
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany
- Beat-Peter Müller-Stich
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Stefanie Speidel
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
13
Sanford DI, Ma R, Ghoreifi A, Haque TF, Nguyen JH, Hung AJ. Association of Suturing Technical Skill Assessment Scores Between Virtual Reality Simulation and Live Surgery. J Endourol 2022; 36:1388-1394. [PMID: 35848509 PMCID: PMC9587778 DOI: 10.1089/end.2022.0158] [Indexed: 02/02/2023]
Abstract
Introduction: Robotic surgical performance, in particular suturing, has been linked to postoperative clinical outcomes. Virtual reality (VR) simulators afford training surgeons opportunities to learn fundamental technical skills before attempting live surgery. Herein, we evaluate the association of suturing technical skill assessments between VR simulation and live surgery, and with functional clinical outcomes. Materials and Methods: Twenty surgeons completed a VR suturing exercise on the Mimic™ Flex VR simulator and the anterior vesicourethral anastomosis during robot-assisted radical prostatectomy (RARP). Three independent and blinded graders provided technical skill scores using a validated assessment tool. Correlations between VR and live scores were assessed by Spearman's correlation coefficients (ρ). In addition, 117 historic RARP cases from participating surgeons were extracted, and the association between VR technical skill scores and urinary continence recovery was assessed by a multilevel mixed-effects model. Results: A total of 20 (6 training and 14 expert) surgeons participated. Statistically significant correlations between scores from VR simulation and live surgery were found for overall and needle driving scores (ρ = 0.555, p = 0.011; ρ = 0.570, p = 0.009, respectively). A subanalysis of training surgeons found significant correlations for overall scores between VR simulation and live surgery (ρ = 0.828, p = 0.042). Expert cases with high VR needle driving scores had significantly greater continence recovery rates at 24 months after RARP (98.5% vs 84.9%, p = 0.028). Conclusions: Our study found significant correlations in technical scores between VR and live surgery, especially among training surgeons. In addition, we found that VR needle driving scores were associated with continence recovery after RARP. Our data support the association of skill assessments between VR simulation and live surgery and potential implications for clinical outcomes.
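The VR-versus-live correlations above are Spearman coefficients. A self-contained sketch of the rank-based computation, using invented skill scores rather than the study's data:

```python
# Spearman's rho = Pearson correlation of the ranks (average ranks for ties).
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                       # extend over a run of tied values
        avg = (i + j) / 2 + 1            # 1-based average rank of the run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical paired overall-skill scores for eight surgeons (VR vs live).
vr_scores   = [3.1, 2.4, 4.0, 3.6, 2.8, 3.9, 2.2, 3.3]
live_scores = [3.0, 2.6, 3.8, 3.5, 2.5, 4.1, 2.4, 3.1]
print(round(spearman_rho(vr_scores, live_scores), 3))  # 0.952
```

In practice `scipy.stats.spearmanr` also returns the p-value reported in the abstract; the ranks are shown explicitly here to make the statistic transparent.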
Affiliation(s)
- Daniel I. Sanford
- Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Runzhuo Ma
- Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Alireza Ghoreifi
- Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Taseen F. Haque
- Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Jessica H. Nguyen
- Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Andrew J. Hung
- Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
14
Chen KA, Berginski ME, Desai CS, Guillem JG, Stem J, Gomez Eng SM, Kapadia MR. Differential Performance of Machine Learning Models in Prediction of Procedure-Specific Outcomes. J Gastrointest Surg 2022; 26:1732-1742. [PMID: 35508684 PMCID: PMC9444966 DOI: 10.1007/s11605-022-05332-x] [Received: 01/26/2022] [Accepted: 04/02/2022] [Indexed: 01/31/2023]
Abstract
BACKGROUND Procedure-specific complications can have devastating consequences. Machine learning-based tools have the potential to outperform traditional statistical modeling in predicting their risk and guiding decision-making. We sought to develop and compare deep neural network (NN) models, a type of machine learning, to logistic regression (LR) for predicting anastomotic leak after colectomy, bile leak after hepatectomy, and pancreatic fistula after pancreaticoduodenectomy (PD). METHODS The colectomy, hepatectomy, and PD National Surgical Quality Improvement Program (NSQIP) databases were analyzed. Each dataset was split into training, validation, and testing sets in a 60/20/20 ratio, with fivefold cross-validation. Models were created using NN and LR for each outcome. Models were evaluated primarily with area under the receiver operating characteristic curve (AUROC). RESULTS A total of 197,488 patients were included for colectomy, 25,403 for hepatectomy, and 23,333 for PD. For anastomotic leak, AUROC for NN was 0.676 (95% CI 0.666-0.687), compared with 0.633 (95% CI 0.620-0.647) for LR. For bile leak, AUROC for NN was 0.750 (95% CI 0.739-0.761), compared with 0.722 (95% CI 0.698-0.746) for LR. For pancreatic fistula, AUROC for NN was 0.746 (95% CI 0.733-0.760), compared with 0.713 (95% CI 0.703-0.723) for LR. Variables related to intra-operative information, such as surgical approach, biliary reconstruction, and pancreatic gland texture, were highly important for model predictions. DISCUSSION Machine learning showed a marginal advantage over traditional statistical techniques in predicting procedure-specific outcomes. However, models that included intra-operative information performed better than those that did not, suggesting that NSQIP procedure-targeted datasets may be strengthened by including relevant intra-operative information.
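AUROC, the primary metric above, can be computed directly from predicted probabilities via its Mann-Whitney interpretation: the probability that a randomly chosen positive case is ranked above a randomly chosen negative one. The labels and model outputs below are invented for illustration; the study itself trains NN and LR models on NSQIP data:

```python
def auroc(labels, probs):
    """AUROC as the pairwise win rate of positives over negatives (ties count 0.5)."""
    pos = [p for l, p in zip(labels, probs) if l == 1]
    neg = [p for l, p in zip(labels, probs) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels   = [0, 0, 1, 0, 1, 1, 0, 1]                                  # 1 = complication
nn_probs = [0.10, 0.30, 0.62, 0.25, 0.70, 0.35, 0.40, 0.80]          # hypothetical NN outputs
lr_probs = [0.20, 0.35, 0.50, 0.30, 0.65, 0.45, 0.55, 0.75]          # hypothetical LR outputs
print(auroc(labels, nn_probs), auroc(labels, lr_probs))  # 0.9375 0.875
```

Here the toy NN edges out the toy LR, echoing (directionally only) the marginal NN advantage the abstract reports.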
Affiliation(s)
- Kevin A Chen
- Department of Surgery, University of North Carolina, 100 Manning Drive, Burnett Womack Building, Suite 4038, Chapel Hill, NC 27599
- Matthew E Berginski
- Department of Pharmacology, University of North Carolina, 120 Mason Farm Rd, Genetic Medicine Building, Chapel Hill, NC 27599
- Chirag S Desai
- Department of Surgery, University of North Carolina, 100 Manning Drive, Burnett Womack Building, Suite 4038, Chapel Hill, NC 27599
- Jose G Guillem
- Department of Surgery, University of North Carolina, 100 Manning Drive, Burnett Womack Building, Suite 4038, Chapel Hill, NC 27599
- Jonathan Stem
- Department of Surgery, University of North Carolina, 100 Manning Drive, Burnett Womack Building, Suite 4038, Chapel Hill, NC 27599
- Shawn M Gomez Eng
- Department of Pharmacology, University of North Carolina, 120 Mason Farm Rd, Genetic Medicine Building, Chapel Hill, NC 27599; Joint Department of Biomedical Engineering, University of North Carolina, 10202C Mary Ellen Jones Building, Chapel Hill, NC 27599
- Muneera R Kapadia
- Department of Surgery, University of North Carolina, 100 Manning Drive, Burnett Womack Building, Suite 4038, Chapel Hill, NC 27599
15
Kutana S, Bitner DP, Addison P, Chung PJ, Talamini MA, Filicori F. Objective assessment of robotic surgical skills: review of literature and future directions. Surg Endosc 2022; 36:3698-3707. [PMID: 35229215 DOI: 10.1007/s00464-022-09134-9] [Received: 08/04/2021] [Accepted: 02/13/2022] [Indexed: 01/29/2023]
Abstract
BACKGROUND Evaluation of robotic surgical skill has become increasingly important as robotic approaches to common surgeries become more widely utilized. However, such evaluation currently lacks standardization. In this paper, we aimed to review the literature on robotic surgical skill evaluation. METHODS A review of the literature on robotic surgical skill evaluation was performed, and representative literature from the past ten years is presented. RESULTS Studies of reliability and validity in robotic surgical evaluation fall into two main assessment categories: manual and automatic. Manual assessments have been shown to be valid but are typically time consuming and costly. Automatic evaluation and simulation are similarly valid and simpler to implement. Initial reports on evaluation of skill using artificial intelligence platforms show validity. Few data on evaluation methods of surgical skill connect directly to patient outcomes. CONCLUSION As evaluation in surgery begins to incorporate robotic skills, a simultaneous shift from manual to automatic evaluation may occur given the ease of implementing these technologies. Robotic platforms offer the unique benefit of providing more objective data streams, including kinematic data, which allow for precise instrument tracking in the operative field. Such data streams will likely be incrementally implemented in performance evaluations. Similarly, with advances in artificial intelligence, machine evaluation of human technical skill will likely form the next wave of surgical evaluation.
Affiliation(s)
- Saratu Kutana
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA
- Daniel P Bitner
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA
- Poppy Addison
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA
- Paul J Chung
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA; Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Mark A Talamini
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Filippo Filicori
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA; Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
16
Loftus TJ, Vlaar APJ, Hung AJ, Bihorac A, Dennis BM, Juillard C, Hashimoto DA, Kaafarani HMA, Tighe PJ, Kuo PC, Miyashita S, Wexner SD, Behrns KE. Executive summary of the artificial intelligence in surgery series. Surgery 2022; 171:1435-1439. [PMID: 34815097 PMCID: PMC9379376 DOI: 10.1016/j.surg.2021.10.047] [Received: 09/08/2021] [Revised: 10/19/2021] [Accepted: 10/22/2021] [Indexed: 12/17/2022]
Abstract
As opportunities for artificial intelligence to augment surgical care expand, the accompanying surge in published literature has generated both substantial enthusiasm and grave concern regarding the safety and efficacy of artificial intelligence in surgery. For surgeons and surgical data scientists, it is increasingly important to understand the state-of-the-art, recognize knowledge and technology gaps, and critically evaluate the deluge of literature accordingly. This article summarizes the experiences and perspectives of a global, multi-disciplinary group of experts who have faced development and implementation challenges, overcome them, and produced incipient evidence thereof. Collectively, evidence suggests that artificial intelligence has the potential to augment surgeons via decision-support, technical skill assessment, and the semi-autonomous performance of tasks ranging from resource allocation to patching foregut defects. Most applications remain in preclinical phases. As technologies and their implementations improve and positive evidence accumulates, surgeons will face professional imperatives to lead the safe, effective clinical implementation of artificial intelligence in surgery. Substantial challenges remain; recent progress in using artificial intelligence to achieve performance advantages in surgery suggests that remaining challenges can and will be overcome.
Affiliation(s)
- Tyler J Loftus
- Department of Surgery, University of Florida Health, Gainesville, FL
- Alexander P J Vlaar
- Amsterdam UMC, location AMC, University of Amsterdam, Department of Intensive Care, Amsterdam, Netherlands
- Andrew J Hung
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, University of Southern California Institute of Urology, Los Angeles, CA
- Azra Bihorac
- Department of Medicine, University of Florida Health, Gainesville, FL
- Bradley M Dennis
- Division of Trauma, Surgical Critical Care and Emergency General Surgery, Department of Surgery, Vanderbilt University Medical Center, Nashville, TN
- Catherine Juillard
- University of California, Los Angeles, Department of Surgery, Los Angeles, CA
- Daniel A Hashimoto
- Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Haytham M A Kaafarani
- Division of Trauma, Emergency Surgery & Surgical Critical Care, Department of Surgery, Massachusetts General Hospital, Boston, MA
- Patrick J Tighe
- Department of Anesthesiology, University of Florida College of Medicine, Gainesville, FL
- Paul C Kuo
- Department of General Surgery, University of South Florida Morsani College of Medicine, Tampa, FL
- Shuhei Miyashita
- Department of Automatic Control and Systems Engineering, University of Sheffield, UK
17
Lam K, Chen J, Wang Z, Iqbal FM, Darzi A, Lo B, Purkayastha S, Kinross JM. Machine learning for technical skill assessment in surgery: a systematic review. NPJ Digit Med 2022; 5:24. [PMID: 35241760 PMCID: PMC8894462 DOI: 10.1038/s41746-022-00566-0] [Received: 07/19/2021] [Accepted: 01/21/2022] [Indexed: 12/18/2022]
Abstract
Accurate and objective performance assessment is essential for both trainees and certified surgeons. However, existing methods can be time consuming, labor intensive, and subject to bias. Machine learning (ML) has the potential to provide rapid, automated, and reproducible feedback without the need for expert reviewers. We aimed to systematically review the literature and determine the ML techniques used for technical surgical skill assessment and identify challenges and barriers in the field. A systematic literature search, in accordance with the PRISMA statement, was performed to identify studies detailing the use of ML for technical skill assessment in surgery. Of the 1896 studies that were retrieved, 66 studies were included. The most common ML methods used were Hidden Markov Models (HMM, 14/66), Support Vector Machines (SVM, 17/66), and Artificial Neural Networks (ANN, 17/66). 40/66 studies used kinematic data, 19/66 used video or image data, and 7/66 used both. Studies assessed the performance of benchtop tasks (48/66), simulator tasks (10/66), and real-life surgery (8/66). Accuracy rates of over 80% were achieved, although tasks and participants varied between studies. Barriers to progress in the field included a focus on basic tasks, lack of standardization between studies, and lack of datasets. ML has the potential to produce accurate and objective surgical skill assessment through the use of methods including HMM, SVM, and ANN. Future ML-based assessment tools should move beyond the assessment of basic tasks and towards real-life surgery and provide interpretable feedback with clinical value for the surgeon. PROSPERO registration: CRD42020226071.
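As a toy illustration of the kinematic-data pipelines this review covers, the sketch below summarizes an instrument trajectory into two features (path length, mean speed) and classifies novice vs expert with a nearest-centroid rule. This is a deliberately simple stand-in for the HMM/SVM/ANN classifiers the reviewed studies actually use, and all trajectories and labels are invented:

```python
import math

def features(traj, dt=0.1):
    # traj: list of (x, y) instrument positions sampled every dt seconds
    steps = [math.dist(p, q) for p, q in zip(traj, traj[1:])]
    path_len = sum(steps)
    mean_speed = path_len / (dt * len(steps))
    return (path_len, mean_speed)

def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(2))

def classify(traj, cent_novice, cent_expert):
    f = features(traj)
    return ("novice" if math.dist(f, cent_novice) < math.dist(f, cent_expert)
            else "expert")

# Invented labeled trajectories: novices wander, experts move economically.
novice_trajs = [[(0, 0), (1, 2), (0, 1), (2, 3)], [(0, 0), (2, 0), (0, 2), (3, 1)]]
expert_trajs = [[(0, 0), (1, 0), (2, 0), (3, 0)], [(0, 0), (0.5, 0.5), (1, 1), (1.5, 1.5)]]

cn = centroid([features(t) for t in novice_trajs])
ce = centroid([features(t) for t in expert_trajs])
print(classify([(0, 0), (1, 1), (2, 2), (3, 3)], cn, ce))  # smooth path -> "expert"
```

The workflow (raw kinematics → summary features → classifier) mirrors the reviewed studies; real systems would add jerk, wrist articulation, and temporal models over the raw streams.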
Affiliation(s)
- Kyle Lam
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- Junhong Chen
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- Zeyu Wang
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- Fahad M Iqbal
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- Ara Darzi
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- Benny Lo
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- Sanjay Purkayastha
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
- James M Kinross
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
18
Sanford DI, Der B, Haque TF, Ma R, Hakim R, Nguyen JH, Cen S, Hung AJ. Technical Skill Impacts the Success of Sequential Robotic Suturing Substeps. J Endourol 2022; 36:273-278. [PMID: 34779231 PMCID: PMC8861914 DOI: 10.1089/end.2021.0417] [Indexed: 02/03/2023]
Abstract
Introduction: Robotic surgical performance, in particular suturing, has been associated with postoperative clinical outcomes. Suturing can be deconstructed into substep components (needle positioning, needle entry angle, needle driving, and needle withdrawal) allowing for the provision of more specific feedback while teaching suturing and more precision when evaluating suturing technical skill and prediction of clinical outcomes. This study evaluates if the technical skill required for particular substeps of the suturing process is associated with the execution of subsequent substeps in terms of technical skill, accuracy, and efficiency. Materials and Methods: Training and expert surgeons completed standardized sutures on the Mimic™ Flex virtual reality robotic simulator. Video recordings were deidentified, time annotated, and provided technical skill scores for each of the four suturing substeps. Hierarchical Poisson regression with generalized estimating equation was used to examine the association of technical skill rating categories between substeps. Results: Twenty-two surgeons completed 428 suturing attempts with 1669 individual technical skill assessments made. Technical skill scores between substeps of the suturing process were found to be significantly associated. When needle positioning was ideal, needle entry angle was associated with a significantly greater chance of being ideal (risk ratio [RR] = 1.12, p = 0.05). In addition, ideal needle entry angle and needle driving technical skill scores were each significantly associated with ideal needle withdrawal technical skill scores (RR = 1.27, p = 0.03; RR = 1.3, p = 0.03, respectively). Our study determined that ideal technical skill was associated with increased accuracy and efficiency of select substeps. Conclusions: Our study found significant associations in the technical skill required for completing substeps of suturing, demonstrating inter-relationships within the suturing process. 
Given the known association between technical skill and clinical outcomes, training surgeons should focus on mastering not just the overall suturing process but also each substep involved. Future machine learning efforts can better evaluate suturing, knowing that these inter-relationships exist.
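As an aside, the risk ratios reported in this abstract compare the probability of an ideal outcome between two groups (e.g., ideal vs. non-ideal prior substep). A minimal sketch of that calculation, using hypothetical counts rather than the study's data:

```python
# Risk ratio (RR) from a 2x2 table: P(ideal outcome | ideal prior substep)
# divided by P(ideal outcome | non-ideal prior substep).
# All counts below are hypothetical, chosen only to illustrate RR = 1.12.

def risk_ratio(ideal_given_ideal, total_ideal, ideal_given_nonideal, total_nonideal):
    """RR of an ideal outcome when the preceding substep was ideal vs. not."""
    risk_exposed = ideal_given_ideal / total_ideal
    risk_unexposed = ideal_given_nonideal / total_nonideal
    return risk_exposed / risk_unexposed

# e.g., 84/100 ideal entry angles after ideal positioning,
# vs. 75/100 after non-ideal positioning
rr = risk_ratio(84, 100, 75, 100)
print(round(rr, 2))  # 1.12
```

An RR above 1 indicates the outcome is more likely in the first group; the study estimated such ratios with hierarchical Poisson regression to account for repeated attempts per surgeon.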
Affiliation(s)
- Daniel I. Sanford
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Balint Der
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Taseen F. Haque
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Runzhuo Ma
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Ryan Hakim
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Jessica H. Nguyen
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Steven Cen
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Andrew J. Hung
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, California, USA. Address correspondence to: Andrew J. Hung, MD, Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, Keck School of Medicine of USC, University of Southern California, 1441 Eastlake Ave, NOR 7416, Los Angeles, CA 90033-9178, USA
|
19
|
Review of automated performance metrics to assess surgical technical skills in robot-assisted laparoscopy. Surg Endosc 2021; 36:853-870. [PMID: 34750700 DOI: 10.1007/s00464-021-08792-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Accepted: 10/17/2021] [Indexed: 10/19/2022]
Abstract
INTRODUCTION Robot-assisted laparoscopy is a safe surgical approach, with several studies suggesting correlations between complication rates and the surgeon's technical skills. Surgical skills are usually assessed by questionnaires completed by an expert observer. With the advent of surgical robots, automated surgical performance metrics (APMs)-objective measures related to instrument movements-can be computed. The aim of this systematic review was thus to assess the use of APMs in robot-assisted laparoscopic procedures. The primary outcome was the assessment of surgical skills by APMs, and the secondary outcomes were the association between APMs and surgeon parameters and the prediction of clinical outcomes. METHODS A systematic review following the PRISMA guidelines was conducted. The PubMed and Scopus electronic databases were screened with the query "robot-assisted surgery OR robotic surgery AND performance metrics" between January 2010 and January 2021. The quality of the studies was assessed with the medical education research study quality instrument. The study settings, metrics, and applications were analysed. RESULTS The initial search yielded 341 citations, of which 16 studies were finally included. The study settings were either simulated virtual reality (VR) (4 studies) or real clinical environments (12 studies). The data used to compute APMs were kinematics (motion tracking) and system and specific event data (actions from the robot console). APMs were used to differentiate expertise levels (and thus validate VR modules), predict outcomes, and integrate datasets for automatic recognition models. APMs correlated with clinical outcomes in some studies. CONCLUSIONS APMs constitute an objective approach for assessing technical skills. Evidence of associations between APMs and clinical outcomes remains to be confirmed by further studies, particularly for non-urological procedures. Concurrent validation is also required.
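The kinematic APMs this review catalogues (path length, velocity, acceleration, jerk) are finite-difference quantities over sampled instrument positions. A generic sketch, assuming positions sampled at a fixed rate (the array shape, sampling rate, and metric names here are illustrative, not taken from any reviewed study):

```python
import numpy as np

# Sketch of kinematic APMs computed from sampled instrument-tip positions.
# `positions` is an (N, 3) array of x/y/z coordinates sampled at `hz` Hz;
# the formulas are plain finite differences over the trajectory.

def kinematic_apms(positions, hz=50.0):
    dt = 1.0 / hz
    deltas = np.diff(positions, axis=0)            # per-sample displacement
    step_lengths = np.linalg.norm(deltas, axis=1)
    velocity = step_lengths / dt                   # speed per interval
    accel = np.diff(velocity) / dt                 # change in speed
    jerk = np.diff(accel) / dt                     # change in acceleration
    return {
        "path_length": step_lengths.sum(),
        "mean_velocity": velocity.mean(),
        "mean_abs_jerk": np.abs(jerk).mean() if jerk.size else 0.0,
    }

# Example: a straight 1 cm move along x over 1 s, sampled at 50 Hz
t = np.linspace(0.0, 1.0, 51)
traj = np.stack([0.01 * t, np.zeros_like(t), np.zeros_like(t)], axis=1)
metrics = kinematic_apms(traj, hz=50.0)
print(round(metrics["path_length"], 4))  # 0.01
```

In practice these metrics are computed per instrument arm and per surgical step, which is what allows them to discriminate expertise levels.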
|
20
|
Road to automating robotic suturing skills assessment: Battling mislabeling of the ground truth. Surgery 2021; 171:915-919. [PMID: 34538647 DOI: 10.1016/j.surg.2021.08.014] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Revised: 07/13/2021] [Accepted: 08/10/2021] [Indexed: 11/20/2022]
Abstract
OBJECTIVE To automate surgeon skill evaluation using robotic instrument kinematic data, and to implement an unsupervised mislabeling detection algorithm that identifies potentially mislabeled samples whose removal can improve model performance. METHODS Video recordings and instrument kinematic data were derived from suturing exercises completed on the Mimic FlexVR robotic simulator. A structured human consensus-building process was developed to determine Robotic Anastomosis Competency Evaluation technical scores across 3 human graders. A 2-layer long short-term memory (LSTM)-based classification model used instrument kinematic data to automate suturing skills assessment. An unsupervised label analyzer (NoiseRank) was used to identify potential mislabeling of skills data. Performance of the LSTM model's technical skill score prediction was measured by the best area under the curve (AUC) over the training runs. NoiseRank output a ranked list of rated skill assessments by likelihood of mislabeling. RESULTS Twenty-two surgeons performed 226 suturing attempts, which were broken down into 1,404 individual skill assessment points. Automated scoring of needle entry angle, needle driving, and needle withdrawal technical skills performed better (AUC 0.698-0.705) than needle positioning (0.532) at baseline using all available data. Potential mislabels identified by NoiseRank were then removed, improving model performance across all domains (AUC 0.551-0.766). CONCLUSION Using ground-truth labels from human graders and robotic instrument kinematic data, machine learning models automated the assessment of detailed suturing technical skills with good performance. Further, an unsupervised mislabeling detection algorithm flagged potentially mislabeled data, allowing for their removal and subsequent improvement of model performance.
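NoiseRank itself is not reproduced here, but the core idea of ranking samples by likelihood of mislabeling can be sketched with a simpler confidence-based heuristic: the lower a model's predicted probability for a sample's assigned label, the more suspect that label is. A minimal illustration with made-up probabilities:

```python
import numpy as np

# Simplified mislabel ranking (not the NoiseRank algorithm): samples whose
# assigned label receives low predicted probability are ranked as most
# suspect and surfaced first for human review or removal.

def rank_suspect_labels(pred_probs, labels):
    """pred_probs: (N, C) class probabilities; labels: (N,) assigned classes.
    Returns sample indices ordered from most to least suspicious."""
    label_conf = pred_probs[np.arange(len(labels)), labels]
    return np.argsort(label_conf)  # ascending confidence = most suspect first

probs = np.array([[0.90, 0.10],
                  [0.20, 0.80],
                  [0.05, 0.95]])
labels = np.array([0, 1, 0])  # third label disagrees strongly with the model
order = rank_suspect_labels(probs, labels)
print(order[0])  # 2
```

Removing (or re-grading) the top-ranked samples and retraining is the step that improved AUC across all skill domains in this study.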
|
21
|
Kwong JCC, McLoughlin LC, Haider M, Goldenberg MG, Erdman L, Rickard M, Lorenzo AJ, Hung AJ, Farcas M, Goldenberg L, Nguan C, Braga LH, Mamdani M, Goldenberg A, Kulkarni GS. Standardized Reporting of Machine Learning Applications in Urology: The STREAM-URO Framework. Eur Urol Focus 2021; 7:672-682. [PMID: 34362709 DOI: 10.1016/j.euf.2021.07.004] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Accepted: 07/19/2021] [Indexed: 12/23/2022]
Abstract
The Standardized Reporting of Machine Learning Applications in Urology (STREAM-URO) framework was developed to provide a set of recommendations to help standardize how machine learning studies in urology are reported. This framework serves three purposes: (1) to promote high-quality studies and streamline the peer review process; (2) to enhance reproducibility, comparability, and interpretability of results; and (3) to improve engagement and literacy of machine learning within the urological community.
Affiliation(s)
- Jethro C C Kwong
- Division of Urology, Department of Surgery, University of Toronto, Toronto, Canada; Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, Canada
- Louise C McLoughlin
- Division of Urology, Department of Surgery, University of Toronto, Toronto, Canada; Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, Canada
- Masoom Haider
- Joint Department of Medical Imaging, University of Toronto, Toronto, Canada; AI, Radiomics and Oncologic Imaging Research Lab, Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, Canada
- Lauren Erdman
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada
- Mandy Rickard
- Division of Urology, Hospital for Sick Children, Toronto, Ontario, Canada
- Armando J Lorenzo
- Division of Urology, Department of Surgery, University of Toronto, Toronto, Canada; Division of Urology, Hospital for Sick Children, Toronto, Ontario, Canada
- Andrew J Hung
- Catherine & Joseph Aresty Department of Urology, Center for Robotic Simulation & Education, University of Southern California Institute of Urology, Los Angeles, CA, USA
- Monica Farcas
- Division of Urology, Department of Surgery, University of Toronto, Toronto, Canada
- Larry Goldenberg
- Department of Urologic Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Chris Nguan
- Department of Urologic Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Luis H Braga
- Division of Urology, Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Muhammad Mamdani
- Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, Canada; Vector Institute, Toronto, Ontario, Canada; Unity Health Toronto, Toronto, Ontario, Canada
- Anna Goldenberg
- Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, Canada; Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada
- Girish S Kulkarni
- Division of Urology, Department of Surgery, University of Toronto, Toronto, Canada; Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, Canada
|