1
Antonelli A, Veccia A, Malandra S, Rizzetto R, Artoni F, Fracasso P, Fumanelli F, Palumbo I, Raiti A, Roggero L, Treccani LP, Vetro V, De Marco V, Porcaro AB, Cerruto MA, Brunelli M, Bertolo R. Outcomes of da Vinci® versus Hugo RAS® radical prostatectomy: focus on postoperative course, pathological findings, and patients' health-related quality of life after 100 consecutive cases (the COMPAR-P prospective trial). Minerva Urol Nephrol 2024; 76:596-605. [PMID: 39320250] [DOI: 10.23736/s2724-6051.24.05928-7]
Abstract
BACKGROUND This study aims to prospectively compare the outcomes of robot-assisted radical prostatectomy (RARP) performed using the Hugo RAS and da Vinci Xi systems, focusing on the postoperative course, pathological findings, and health-related quality of life. METHODS The COMPAR-P trial, a prospective post-market study (ClinicalTrials.gov NCT05766163), commenced in March 2023, enrolling patients for RARP performed with either the da Vinci or the Hugo RAS system, without selection criteria, for up to 50 consecutive cases per system. Two experienced console surgeons performed the procedures according to a standardized technique. The study evaluated differences between da Vinci RARP (DV-RARP) and Hugo RAS RARP (H-RARP) regarding the postoperative course, pathology findings, 30-day PSA value, functional metrics, and health-related quality of life using the SF-36 and University of California Los Angeles Prostate Cancer Index questionnaires. RESULTS Fifty patients underwent DV-RARP and 50 underwent H-RARP. Postoperative complications, pathological data, and quality-of-life metrics did not significantly differ between the groups. Noteworthy limitations include the comparison of the first 50 H-RARP cases with the last 50 DV-RARP cases, as well as the potential influence of the surgeons' specialized expertise on the generalizability of the findings. CONCLUSIONS This prospective study of 100 unselected patients undergoing RARP with either the da Vinci or the Hugo RAS system reveals comparable outcomes in postoperative course, pathology, functional metrics, and health-related quality of life. However, further research with larger sample sizes, longer follow-up periods, and diverse surgical expertise is essential to validate these findings and better understand the implications for clinical practice.
Affiliation(s)
- Alessandro Antonelli
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Alessandro Veccia
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Sarah Malandra
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Residency Program in Health Statistics and Biometrics, University of Verona, Verona, Italy
- Riccardo Rizzetto
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Francesco Artoni
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Piero Fracasso
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Francesca Fumanelli
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Iolanda Palumbo
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Antonio Raiti
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Luca Roggero
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Lorenzo P Treccani
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Vincenzo Vetro
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Vincenzo De Marco
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Antonio B Porcaro
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Maria A Cerruto
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
- Matteo Brunelli
- Department of Diagnostic and Public Health, Section of Pathology, University of Verona, Verona, Italy
- Riccardo Bertolo
- Urology Unit, Department of Surgery, Dentistry, Pediatrics and Gynecology, University of Verona, A.O.U.I. Verona, Verona, Italy
2
Khan DZ, Koh CH, Das A, Valetopolou A, Hanrahan JG, Horsfall HL, Baldeweg SE, Bano S, Borg A, Dorward NL, Olukoya O, Stoyanov D, Marcus HJ. Video-Based Performance Analysis in Pituitary Surgery-Part 1: Surgical Outcomes. World Neurosurg 2024; 190:e787-e796. [PMID: 39122112] [DOI: 10.1016/j.wneu.2024.07.218]
Abstract
BACKGROUND Endoscopic pituitary adenoma surgery has a steep learning curve, with varying surgical techniques and outcomes across centers. In other surgeries, superior performance is linked with superior surgical outcomes. This study aimed to explore the prediction of patient-specific outcomes using surgical video analysis in pituitary surgery. METHODS Endoscopic pituitary adenoma surgery videos from a single center were annotated by experts for operative workflow (3 surgical phases and 15 surgical steps) and operative skill (using modified Objective Structured Assessment of Technical Skills [mOSATS]). Quantitative workflow metrics were calculated, including phase duration and step transitions. Poisson or logistic regression was used to assess the association of workflow metrics and mOSATS with common inpatient surgical outcomes. RESULTS One hundred videos from 100 patients were included. Nasal phase mean duration was 24 minutes and mean mOSATS was 21.2/30. Mean duration was 34 minutes and mean mOSATS was 20.9/30 for the sellar phase, and 11 minutes and 21.7/30, respectively, for the closure phase. The most common adverse outcomes were new anterior pituitary hormone deficiency (n = 26), dysnatremia (n = 24), and cerebrospinal fluid leak (n = 5). Higher mOSATS for all 3 phases and shorter operation duration were associated with decreased length of stay (P = 0.003 and P < 0.001, respectively). Superior closure phase mOSATS were associated with reduced postoperative cerebrospinal fluid leak (P < 0.001), and superior sellar phase mOSATS were associated with reduced postoperative visual deterioration (P = 0.041). CONCLUSIONS Superior surgical skill and shorter surgical time were associated with superior surgical outcomes, at a generic and phase-specific level. Such video-based analysis has promise for integration into data-driven training and service improvement initiatives.
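The workflow metrics described here (phase durations and step transitions) are straightforward to derive from time-stamped annotations. A minimal Python sketch with hypothetical annotation tuples; the phase names and durations below echo the reported means but are illustrative, not the study data:

```python
# Hypothetical time-stamped phase annotations for one video: (start_min, end_min, phase)
annotations = [(0, 24, "nasal"), (24, 58, "sellar"), (58, 69, "closure")]
# Hypothetical step labels in the order they occur, for counting step transitions
steps = ["approach", "dissection", "approach", "tumour_removal", "haemostasis"]

# Phase duration: minutes spent in each surgical phase
phase_duration = {phase: end - start for start, end, phase in annotations}
total_duration = sum(phase_duration.values())

# A step transition is any change from one step to the next
step_transitions = sum(1 for a, b in zip(steps, steps[1:]) if a != b)
```

Regressing outcomes such as length of stay on metrics of this kind (e.g. with Poisson or logistic models) would then follow as a separate modelling step.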
Affiliation(s)
- Danyal Z Khan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Chan Hee Koh
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Adrito Das
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Alexandra Valetopolou
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- John G Hanrahan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Hugo Layard Horsfall
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Stephanie E Baldeweg
- Department of Diabetes & Endocrinology, University College London Hospitals NHS Foundation Trust, London, UK; Division of Medicine, Department of Experimental and Translational Medicine, Centre for Obesity and Metabolism, University College London, London, UK
- Sophia Bano
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Anouk Borg
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Neil L Dorward
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Olatomiwa Olukoya
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Danail Stoyanov
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Digital Surgery Ltd, Medtronic, London, UK
- Hani J Marcus
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
3
Ershad Langroodi M, Liu X, Tousignant MR, Jarc AM. Objective performance indicators versus GEARS: an opportunity for more accurate assessment of surgical skill. Int J Comput Assist Radiol Surg 2024. [PMID: 39320413] [DOI: 10.1007/s11548-024-03248-2]
Abstract
PURPOSE Surgical skill evaluation that relies on subjective scoring of surgical videos can be time-consuming and inconsistent across raters. We demonstrate differentiated opportunities for objective evaluation to improve surgeon training and performance. METHODS Subjective evaluation was performed using the Global Evaluative Assessment of Robotic Skills (GEARS) from both expert and crowd raters, whereas objective evaluation used objective performance indicators (OPIs) derived from da Vinci surgical systems. Classifiers were trained for each evaluation method to distinguish between surgical expertise levels. This study includes one clinical task from a case series of robotic-assisted sleeve gastrectomy procedures performed by a single surgeon, and two training tasks performed by novice and expert surgeons, i.e., surgeons with no experience in robotic-assisted surgery (RAS) and those with more than 500 RAS procedures, respectively. RESULTS When comparing expert and novice skill levels, the OPI-based classifier showed significantly higher accuracy than the GEARS-based classifier on the more complex dissection task (OPI 0.93 ± 0.08 vs. GEARS 0.67 ± 0.18; 95% CI, 0.16-0.37; p = 0.02), but no significant difference was shown on the simpler suturing task. For the single-surgeon case series, both classifiers performed well at differentiating early from late cases when group sizes were small and the interval between groups large (OPI 0.9 ± 0.08; GEARS 0.87 ± 0.12; 95% CI, 0.02-0.04; p = 0.67). When the group size was increased to include more cases, thereby shrinking the interval between groups, OPIs demonstrated significantly higher accuracy in differentiating early from late cases (OPI 0.97 ± 0.06; GEARS 0.76 ± 0.07; 95% CI, 0.12-0.28; p = 0.004). CONCLUSIONS Objective methods for skill evaluation in RAS outperform subjective methods when (1) differentiating expertise in a technically challenging training task, and (2) identifying more granular differences between early and late phases of a surgeon's learning curve within a clinical task. Objective methods offer an opportunity for more accessible and scalable skill evaluation in RAS.
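Accuracy comparisons of the kind reported above reduce to a two-sample t-test over per-fold classifier accuracies. A hedged sketch with made-up fold accuracies, loosely echoing the direction of the dissection-task result rather than the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-fold accuracies for the two classifiers (illustrative only)
opi_acc = np.array([0.90, 0.95, 0.88, 0.97, 0.93])    # OPI-based classifier
gears_acc = np.array([0.60, 0.72, 0.65, 0.70, 0.68])  # GEARS-based classifier

# Two-sample t-test on the accuracy distributions
t_stat, p_value = stats.ttest_ind(opi_acc, gears_acc)
mean_diff = opi_acc.mean() - gears_acc.mean()
```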
Affiliation(s)
- Xi Liu
- Research and Development, Intuitive Surgical, Inc, 5655 Spalding Dr, Norcross, GA, 30092, USA
- Mark R Tousignant
- Research and Development, Intuitive Surgical, Inc, 5655 Spalding Dr, Norcross, GA, 30092, USA
- Anthony M Jarc
- Research and Development, Intuitive Surgical, Inc, 5655 Spalding Dr, Norcross, GA, 30092, USA
4
Shafiei SB, Shadpour S, Mohler JL, Kauffman EC, Holden M, Gutierrez C. Classification of subtask types and skill levels in robot-assisted surgery using EEG, eye-tracking, and machine learning. Surg Endosc 2024; 38:5137-5147. [PMID: 39039296] [PMCID: PMC11362185] [DOI: 10.1007/s00464-024-11049-6]
Abstract
BACKGROUND Objective and standardized evaluation of surgical skills in robot-assisted surgery (RAS) holds critical importance for both surgical education and patient safety. This study introduces machine learning (ML) techniques using features derived from electroencephalogram (EEG) and eye-tracking data to identify surgical subtasks and classify skill levels. METHODS The efficacy of this approach was assessed using a comprehensive dataset encompassing nine distinct classes, each representing a unique combination of three surgical subtasks executed by surgeons operating on pigs. Four ML models (logistic regression, random forest, gradient boosting, and extreme gradient boosting [XGB]) were used for multi-class classification. To develop the models, 20% of data samples were randomly allocated to a test set, with the remaining 80% used for training and validation. Hyperparameters were optimized through grid search, using fivefold stratified cross-validation repeated five times. Model reliability was ensured by repeating the train-test split over 30 iterations and reporting averaged measurements. RESULTS The findings revealed that the proposed approach outperformed existing methods for classifying RAS subtasks and skills; the XGB and random forest models yielded high accuracy rates (88.49% and 88.56%, respectively) that were not significantly different (two-sample t-test; P = 0.9). CONCLUSION These results underscore the potential of ML models to augment the objectivity and precision of RAS subtask and skill evaluation. Future research should consider exploring ways to optimize these models, particularly focusing on the classes identified as challenging in this study. Ultimately, this study marks a significant step towards a more refined, objective, and standardized approach to RAS training and competency assessment.
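The tuning protocol described (80/20 split, grid search, fivefold stratified cross-validation repeated five times) maps directly onto scikit-learn. A minimal sketch on synthetic stand-in data; the nine classes and all features below are placeholders, not the study's EEG/eye-tracking features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     train_test_split)

# Synthetic stand-in for the feature matrix: 9 classes (subtask x skill combinations)
X, y = make_classification(n_samples=450, n_features=20, n_informative=10,
                           n_classes=9, n_clusters_per_class=1, random_state=0)

# 20% of samples held out as a test set, stratified by class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

# Fivefold stratified cross-validation repeated five times, driving a grid search
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={"n_estimators": [50, 100]}, cv=cv)
grid.fit(X_tr, y_tr)
test_accuracy = grid.score(X_te, y_te)
```

The study additionally repeats the whole train-test split over 30 iterations and averages the results; wrapping the above in a loop over different `random_state` values would reproduce that step.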
Affiliation(s)
- Somayeh B Shafiei
- The Intelligent Cancer Care Laboratory, Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Saeed Shadpour
- Department of Animal Biosciences, University of Guelph, Guelph, ON, N1G 2W1, Canada
- James L Mohler
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Eric C Kauffman
- Department of Urology, Roswell Park Comprehensive Cancer Center, Buffalo, NY, 14263, USA
- Matthew Holden
- School of Computer Science, Carleton University, 1125 Colonel By Drive, Ottawa, ON, K1S 5B6, Canada
- Camille Gutierrez
- Obstetrics and Gynecology Residency Program, Sisters of Charity Health System, Buffalo, NY, 14214, USA
5
Lengyel BC, Chinnadurai P, Corr SJ, Lumsden AB, Bavare CS. Robot-assisted vascular surgery: literature review, clinical applications, and future perspectives. J Robot Surg 2024; 18:328. [PMID: 39174843] [PMCID: PMC11341614] [DOI: 10.1007/s11701-024-02087-2]
Abstract
Although robot-assisted surgical procedures using the da Vinci robotic system (Intuitive Surgical, Sunnyvale, CA) have been performed in more than 13 million procedures worldwide over the last two decades, the vascular surgical community has yet to fully embrace this approach (Intuitive Surgical Investor Presentation Q3 (2023) https://investor.intuitivesurgical.com/static-files/dd0f7e46-db67-4f10-90d9-d826df00554e. Accessed February 22, 2024). In the meantime, endovascular procedures have revolutionized vascular care, serving as a minimally invasive alternative to traditional open surgery. In the pursuit of a percutaneous approach, a shorter postoperative hospital stay, and fewer perioperative complications, the long-term durability of open surgical vascular reconstruction has been compromised (Lancet 365:2179-2186, 2005; Patel in Lancet 388:2366-2374, 2016; Wanhainen in Eur J Vasc Endovasc Surg 57:8-93, 2019). The underlying question is whether robot-assisted laparoscopic vascular surgical approaches could deliver the robustness and longevity of open vascular surgical reconstruction with a minimally invasive delivery system. Meanwhile, other surgical specialties have embraced robot-assisted laparoscopic technology and mastered the essential vascular skill sets along with minimally invasive robotic surgery. For example, surgical procedures such as renal transplantation, lung transplantation, and portal vein reconstruction, which include major vascular anastomoses, are routinely performed with robotic assistance (Emerson in J Heart Lung Transplant 43:158-161, 2024; Fei in J Vasc Surg Cases Innov Tech 9, 2023; Tzvetanov in Transplantation 106:479-488, 2022; Slagter in Int J Surg 99, 2022). Handling and dissection of major vascular structures carry the inherent risk of vascular injury, perhaps the most feared complication during such robotic procedures, possibly requiring emergent vascular surgical consultation.
In this review article, we describe the impact of a minimally invasive, robotic approach, covering the following topics: a brief history of robotic surgery, the components and benefits of the robotic system as compared to laparoscopy, the current literature on "vascular" applications of the robotic system, evolving training pathways, and future perspectives.
Affiliation(s)
- Balazs C Lengyel
- Department of Cardiovascular Surgery, Houston Methodist Hospital, 6550 Fannin Street, Houston, TX, 77030, USA
- Department of Vascular and Endovascular Surgery, Semmelweis University, Budapest, Hungary
- Ponraj Chinnadurai
- Department of Cardiovascular Surgery, Houston Methodist Hospital, 6550 Fannin Street, Houston, TX, 77030, USA
- Stuart J Corr
- Department of Cardiovascular Surgery, Houston Methodist Hospital, 6550 Fannin Street, Houston, TX, 77030, USA
- Alan B Lumsden
- Department of Cardiovascular Surgery, Houston Methodist Hospital, 6550 Fannin Street, Houston, TX, 77030, USA
- Charudatta S Bavare
- Department of Cardiovascular Surgery, Houston Methodist Hospital, 6550 Fannin Street, Houston, TX, 77030, USA
6
Younis R, Yamlahi A, Bodenstedt S, Scheikl PM, Kisilenko A, Daum M, Schulze A, Wise PA, Nickel F, Mathis-Ullrich F, Maier-Hein L, Müller-Stich BP, Speidel S, Distler M, Weitz J, Wagner M. A surgical activity model of laparoscopic cholecystectomy for co-operation with collaborative robots. Surg Endosc 2024; 38:4316-4328. [PMID: 38872018] [PMCID: PMC11289174] [DOI: 10.1007/s00464-024-10958-w]
Abstract
BACKGROUND Laparoscopic cholecystectomy is a very frequent surgical procedure. However, in an ageing society, fewer surgical staff will be available to perform surgery on patients. Collaborative surgical robots (cobots) could address surgical staff shortages and workload. To achieve context-awareness for surgeon-robot collaboration, recognition of the intraoperative action workflow is a key challenge. METHODS A surgical process model was developed for intraoperative surgical activities, including actor, instrument, action, and target, in laparoscopic cholecystectomy (excluding camera guidance). These activities, as well as instrument presence and surgical phases, were annotated in videos of laparoscopic cholecystectomy performed on human patients (n = 10) and on explanted porcine livers (n = 10). The machine learning algorithm Distilled-Swin was trained on our own annotated dataset and the CholecT45 dataset. The model was validated using a fivefold cross-validation approach. RESULTS In total, 22,351 activities were annotated, with a cumulative duration of 24.9 h of video segments. The machine learning algorithm trained and validated on our own dataset scored a mean average precision (mAP) of 25.7% and a top-K (K = 5) accuracy of 85.3%. With training and validation on our dataset and CholecT45, the algorithm scored a mAP of 37.9%. CONCLUSIONS An activity model was developed and applied for the fine-granular annotation of laparoscopic cholecystectomies in two surgical settings. A machine learning algorithm trained on our own annotated dataset and CholecT45 achieved higher performance than training on CholecT45 alone and can recognize frequently occurring activities well, but not infrequent activities. Analysis of the annotated dataset allowed quantification of the potential of collaborative surgical robots to address the workload of surgical staff: if collaborative surgical robots could grasp and hold tissue, up to 83.5% of the assistant's tissue-interacting tasks (i.e., excluding camera guidance) could be performed by robots.
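Top-K accuracy, as reported for the activity model, counts a prediction as correct if the true class is among the K highest-scored classes. A small self-contained sketch; the class scores below are invented for illustration:

```python
import numpy as np

def top_k_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label is among the k highest-scored classes."""
    topk = np.argsort(scores, axis=1)[:, -k:]
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))

# Hypothetical scores over 4 activity classes for 3 video frames
scores = np.array([[0.10, 0.60, 0.20, 0.10],
                   [0.40, 0.10, 0.30, 0.20],
                   [0.25, 0.20, 0.45, 0.10]])
labels = np.array([1, 2, 0])

top1 = top_k_accuracy(scores, labels, k=1)
top2 = top_k_accuracy(scores, labels, k=2)
```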
Affiliation(s)
- R Younis
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- A Yamlahi
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- S Bodenstedt
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- P M Scheikl
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- A Kisilenko
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- M Daum
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- A Schulze
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- P A Wise
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- F Nickel
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- F Mathis-Ullrich
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- L Maier-Hein
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- B P Müller-Stich
- Department for Abdominal Surgery, University Center for Gastrointestinal and Liver Diseases, Basel, Switzerland
- S Speidel
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- M Distler
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- J Weitz
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- M Wagner
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
7
Olsen RG, Svendsen MBS, Tolsgaard MG, Konge L, Røder A, Bjerrum F. Automated performance metrics and surgical gestures: two methods for assessment of technical skills in robotic surgery. J Robot Surg 2024; 18:297. [PMID: 39068261] [PMCID: PMC11283394] [DOI: 10.1007/s11701-024-02051-0]
Abstract
The objective of this study was to compare automated performance metrics (APM) and surgical gestures for technical skills assessment during simulated robot-assisted radical prostatectomy (RARP). Ten novice and six experienced RARP surgeons performed simulated RARPs on the RobotiX Mentor (Surgical Science, Sweden). Simulator APM were recorded automatically, and surgical videos were manually annotated with five types of surgical gestures. The consequences of the pass/fail levels, which were set using the contrasting groups method, were compared for APM and surgical gestures. Intra-class correlation coefficient (ICC) analysis and a Bland-Altman plot were used to explore the correlation between APM and surgical gestures. Pass/fail levels for both APM and surgical gestures fully distinguished between the surgeons' skill levels, with a specificity and sensitivity of 100%. The overall ICC (one-way, random) was 0.70 (95% CI: 0.34-0.88), showing moderate agreement between the methods. The Bland-Altman plot showed high agreement between the two methods when assessing experienced surgeons but disagreement on the novice surgeons' skill level. APM and surgical gestures could both fully distinguish between novice and experienced surgeons in a simulated setting. Both methods of analyzing technical skills have their advantages and disadvantages, and both are, as of now, available only to a limited extent in the clinical setting. Developing assessment methods in a simulated setting enables testing before implementation in a clinical setting.
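Bland-Altman agreement, as used here, amounts to the mean difference between paired scores plus 95% limits of agreement. A minimal sketch with invented paired scores, not the study's data:

```python
import numpy as np

# Hypothetical paired percentage scores for 8 surgeons from the two methods
apm_scores = np.array([72.0, 65.0, 80.0, 58.0, 90.0, 62.0, 75.0, 68.0])
gesture_scores = np.array([70.0, 60.0, 82.0, 50.0, 88.0, 55.0, 77.0, 66.0])

diff = apm_scores - gesture_scores
bias = diff.mean()            # mean difference (systematic offset between methods)
sd = diff.std(ddof=1)         # sample SD of the differences
loa_lower = bias - 1.96 * sd  # lower 95% limit of agreement
loa_upper = bias + 1.96 * sd  # upper 95% limit of agreement
```

Plotting `diff` against the pairwise means, with horizontal lines at `bias`, `loa_lower`, and `loa_upper`, gives the familiar Bland-Altman figure.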
Affiliation(s)
- Rikke Groth Olsen
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Department of Urology, Copenhagen Prostate Cancer Center, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Morten Bo Søndergaard Svendsen
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Martin G Tolsgaard
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Lars Konge
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Andreas Røder
- Department of Urology, Copenhagen Prostate Cancer Center, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Flemming Bjerrum
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Gastrounit, Surgical Section, Copenhagen University Hospital-Amager and Hvidovre, Hvidovre, Denmark
8
Madani A, Liu Y, Pryor A, Altieri M, Hashimoto DA, Feldman L. SAGES surgical data science task force: enhancing surgical innovation, education and quality improvement through data science. Surg Endosc 2024; 38:3489-3493. [PMID: 38831213] [DOI: 10.1007/s00464-024-10921-9]
Affiliation(s)
- Amin Madani
- Department of Surgery, University of Toronto, Toronto, ON, Canada
- Yao Liu
- Department of Surgery, Brown University, Providence, RI, USA
- Aurora Pryor
- Department of Surgery, Northwell Health, New York, NY, USA
- Maria Altieri
- Department of Surgery, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
- Daniel A Hashimoto
- Department of Surgery, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
- Liane Feldman
- Department of Surgery, McGill University, Montreal, QC, Canada
9
Otiato MX, Ma R, Chu TN, Wong EY, Wagner C, Hung AJ. Surgical gestures to evaluate apical dissection of robot-assisted radical prostatectomy. J Robot Surg 2024; 18:245. [PMID: 38847926] [PMCID: PMC11161532] [DOI: 10.1007/s11701-024-01902-0]
Abstract
Previously, our group established a surgical gesture classification system that deconstructs robotic tissue dissection into basic surgical maneuvers. Here, we evaluate gestures by correlating the metric with surgeon experience and technical skill assessment scores in the apical dissection (AD) of robotic-assisted radical prostatectomy (RARP). Additionally, we explore the association between AD performance and early continence recovery following RARP. 78 AD surgical videos from 2016 to 2018 across two international institutions were included. Surgeons were grouped by median robotic caseload (range 80-5,800 cases): less experienced group (< 475 cases) and more experienced (≥ 475 cases). Videos were decoded with gestures and assessed using Dissection Assessment for Robotic Technique (DART). Statistical findings revealed more experienced surgeons (n = 10) used greater proportions of cold cut (p = 0.008) and smaller proportions of peel/push, spread, and two-hand spread (p < 0.05) than less experienced surgeons (n = 10). Correlations between gestures and technical skills assessments ranged from - 0.397 to 0.316 (p < 0.05). Surgeons utilizing more retraction gestures had lower total DART scores (p < 0.01), suggesting less dissection proficiency. Those who used more gestures and spent more time per gesture had lower efficiency scores (p < 0.01). More coagulation and hook gestures were found in cases of patients with continence recovery compared to those with ongoing incontinence (p < 0.04). Gestures performed during AD vary based on surgeon experience level and patient continence recovery duration. Significant correlations were demonstrated between gestures and dissection technical skills. Gestures can serve as a novel method to objectively evaluate dissection performance and anticipate outcomes.
Affiliation(s)
- Maxwell X Otiato
- Catherine and Joseph Aresty Department of Urology, Center for Robotic Simulation and Education, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Runzhuo Ma
- Department of Urology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Timothy N Chu
- Catherine and Joseph Aresty Department of Urology, Center for Robotic Simulation and Education, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Elyssa Y Wong
- Catherine and Joseph Aresty Department of Urology, Center for Robotic Simulation and Education, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Christian Wagner
- Department of Urology and Urologic Oncology, St. Antonius-Hospital, Gronau, Germany
- Andrew J Hung
- Department of Urology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
10
Lee A, Baker TS, Bederson JB, Rapoport BI. Levels of autonomy in FDA-cleared surgical robots: a systematic review. NPJ Digit Med 2024; 7:103. [PMID: 38671232] [PMCID: PMC11053143] [DOI: 10.1038/s41746-024-01102-y]
Abstract
The integration of robotics in surgery has increased over the past decade, and advances in the autonomous capabilities of surgical robots have paralleled those of assistive and industrial robots. However, classification and regulatory frameworks have not kept pace with the increasing autonomy of surgical robots. There is a need to modernize our classification to understand technological trends and prepare to regulate and streamline surgical practice around these robotic systems. We present a systematic review of all surgical robots cleared by the United States Food and Drug Administration (FDA) from 2015 to 2023, utilizing a classification system that we call Levels of Autonomy in Surgical Robotics (LASR) to categorize each robot's decision-making and action-taking abilities from Level 1 (Robot Assistance) to Level 5 (Full Autonomy). We searched the 510(k), De Novo, and AccessGUDID databases in December 2023 and included all medical devices fitting our definition of a surgical robot. A total of 37,981 records were screened to identify 49 surgical robots. Most surgical robots were at Level 1 (86%), and some reached Level 3 (Conditional Autonomy) (6%). Two surgical robots were recognized by the FDA to have machine learning-enabled capabilities, while more were reported to have these capabilities in their marketing materials. Most surgical robots were introduced via the 510(k) pathway, but a growing number via the De Novo pathway. This review highlights trends toward greater autonomy in surgical robotics. Implementing regulatory frameworks that acknowledge varying levels of autonomy in surgical robots may help ensure their safe and effective integration into surgical practice.
Affiliation(s)
- Audrey Lee
- Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Sinai BioDesign, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Turner S Baker
- Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Sinai BioDesign, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Joshua B Bederson
- Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Sinai BioDesign, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Benjamin I Rapoport
- Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Sinai BioDesign, Icahn School of Medicine at Mount Sinai, New York, New York, USA
11
Yiu A, Lam K, Simister C, Clarke J, Kinross J. Adoption of routine surgical video recording: a nationwide freedom of information act request across England and Wales. EClinicalMedicine 2024; 70:102545. [PMID: 38685926] [PMCID: PMC11056472] [DOI: 10.1016/j.eclinm.2024.102545]
Abstract
Background Surgical video contains data with significant potential to improve surgical outcome assessment, quality assurance, education, and research. Current utilisation of surgical video recording is unknown and related policies/governance structures are unclear. Methods A nationwide Freedom of Information (FOI) request concerning surgical video recording, technology, consent, access, and governance was sent to all acute National Health Service (NHS) trusts/boards in England/Wales between 20th February and 20th March 2023. Findings 140/144 (97.2%) trusts/boards in England/Wales responded to the FOI request. Surgical procedures were routinely recorded in 22 trusts/boards. The median estimate of consultant surgeons routinely recording their procedures was 20%. Surgical video was stored on internal systems (n = 27), third-party products (n = 29), and both (n = 9). 32/140 (22.9%) trusts/boards ask for consent to record procedures as part of routine care. Consent for recording included non-clinical purposes in 55/140 (39.3%) trusts/boards. Policies for surgeon/patient access to surgical video were available in 48/140 (34.3%) and 32/140 (22.9%) trusts/boards, respectively. Surgical video was used for non-clinical purposes in 64/140 (45.7%) trusts/boards. Governance policies covering surgical video recording, use, and/or storage were available from 59/140 (42.1%) trusts/boards. Interpretation There is significant heterogeneity in surgical video recording practices in England and Wales. A minority of trusts/boards routinely record surgical procedures, with large variation in recording/storage practices indicating scope for NHS-wide coordination. Revision of surgical video consent, accessibility, and governance policies should be prioritised by trusts/boards to protect key stakeholders. Increased availability of surgical video is essential for patients and surgeons to maximally benefit from the ongoing digital transformation of surgery. 
Funding KL is supported by an NIHR Academic Clinical Fellowship and acknowledges infrastructure support for this research from the National Institute for Health Research (NIHR) Imperial Biomedical Research Centre (BRC).
Affiliation(s)
- Andrew Yiu
- Department of Surgery and Cancer, Imperial College London, UK
- Kyle Lam
- Department of Surgery and Cancer, Imperial College London, UK
- Jonathan Clarke
- Department of Surgery and Cancer, Imperial College London, UK
- James Kinross
- Department of Surgery and Cancer, Imperial College London, UK
12
Balvardi S, Kaneva P, Semsar-Kazerooni K, Vassiliou M, Al Mahroos M, Mueller C, Fiore JF, Schwartzman K, Feldman LS. Effect of video-based self-reflection on intraoperative skills: a pilot randomized controlled trial. Surgery 2024; 175:1021-1028. [PMID: 38154996] [DOI: 10.1016/j.surg.2023.11.028]
Abstract
BACKGROUND The value of video-based self-assessment in enhancing surgical skills is uncertain. This study investigates feasibility and estimates the sample size for a full-scale randomized controlled trial to evaluate the effectiveness of video-based self-assessment in improving trainees' surgical performance of laparoscopic cholecystectomy. METHODS This parallel pilot randomized controlled trial included general surgery trainees performing supervised laparoscopic cholecystectomy, randomized 1:1 to a control group (traditional intraoperative teaching) or an intervention group (traditional teaching plus video-based self-assessment). Operative performance was measured at the time of surgery by the attending surgeon, blinded to group assignment, using standardized assessment tools (Global Operative Assessment of Laparoscopic Skills and Operative Performance Rating System). The intervention group had access to their video recordings on a web-based platform for review and self-assessment using the same instruments. The primary outcome for the estimation of sample size was the difference in faculty-assessed final operative performance (third submitted case). Feasibility criteria included >85% participation, >85% adherence to case submission, and >85% completion of self-assessment. RESULTS Of 37 eligible trainees approached, 32 consented and were randomized (86%). There were 16 trainees in the intervention group and 15 in the control group (55% male, 55% junior trainees); 1 was excluded for a protocol violation. Twenty-four participants (75%) submitted 3 cases. Thirteen trainees (81%) accessed the platform and completed 26 (63.2%) case self-assessments. Fifty-five trainees per arm would be needed to power a full-scale trial of laparoscopic cholecystectomy with the Global Operative Assessment of Laparoscopic Skills as the assessment tool, and 130 trainees per arm with the Operative Performance Rating System.
CONCLUSION This pilot study contributes important data to inform the design of an adequately powered randomized controlled trial of video-based self-assessment to improve trainee performance of laparoscopic cholecystectomy. Although a priori trial feasibility criteria were not achieved, automated video capture and storage could significantly improve adherence in future trials.
Affiliation(s)
- Saba Balvardi
- Department of Surgery, McGill University, Montreal, Quebec, Canada; Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, Quebec, Canada
- Pepa Kaneva
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, Quebec, Canada
- Koorosh Semsar-Kazerooni
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, Quebec, Canada
- Melina Vassiliou
- Department of Surgery, McGill University, Montreal, Quebec, Canada; Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, Quebec, Canada
- Carmen Mueller
- Department of Surgery, McGill University, Montreal, Quebec, Canada; Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, Quebec, Canada
- Julio F Fiore
- Department of Surgery, McGill University, Montreal, Quebec, Canada; Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, Quebec, Canada
- Kevin Schwartzman
- Respiratory Division, Department of Medicine, McGill University and McGill International Tuberculosis Centre, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada
- Liane S Feldman
- Department of Surgery, McGill University, Montreal, Quebec, Canada; Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, Montreal, Quebec, Canada
13
Deol ES, Tollefson MK, Antolin A, Zohar M, Bar O, Ben-Ayoun D, Mynderse LA, Lomas DJ, Avant RA, Miller AR, Elliott DS, Boorjian SA, Wolf T, Asselmann D, Khanna A. Automated surgical step recognition in transurethral bladder tumor resection using artificial intelligence: transfer learning across surgical modalities. Front Artif Intell 2024; 7:1375482. [PMID: 38525302] [PMCID: PMC10958784] [DOI: 10.3389/frai.2024.1375482]
Abstract
Objective Automated surgical step recognition (SSR) using AI has been a catalyst in the "digitization" of surgery. However, progress has been limited to laparoscopy, with relatively few SSR tools in endoscopic surgery. This study aimed to create an SSR model for transurethral resection of bladder tumors (TURBT), leveraging a novel application of transfer learning to reduce video dataset requirements. Materials and methods Retrospective surgical videos of TURBT were manually annotated with the following steps of surgery: primary endoscopic evaluation, resection of bladder tumor, and surface coagulation. The manually annotated videos were then utilized to train a novel AI computer vision algorithm to perform automated video annotation of TURBT surgical video, utilizing a transfer-learning technique to pre-train on laparoscopic procedures. Accuracy of AI SSR was determined by comparison with human annotations as the reference standard. Results A total of 300 full-length TURBT videos (median 23.96 min; IQR 14.13-41.31 min) were manually annotated with sequential steps of surgery. One hundred and seventy-nine videos served as a training dataset for algorithm development, 44 for internal validation, and 77 as a separate test cohort for evaluating algorithm accuracy. Overall accuracy of AI video analysis was 89.6%. Model accuracy was highest for the primary endoscopic evaluation step (98.2%) and lowest for the surface coagulation step (82.7%). Conclusion We developed a fully automated computer vision algorithm for high-accuracy annotation of TURBT surgical videos. This represents the first application of transfer learning from laparoscopy-based computer vision models to surgical endoscopy, demonstrating the promise of this approach in adapting to new procedure types.
Affiliation(s)
- Ekamjit S. Deol
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Maya Zohar
- theator.io, Palo Alto, CA, United States
- Omri Bar
- theator.io, Palo Alto, CA, United States
- Derek J. Lomas
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Ross A. Avant
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Adam R. Miller
- Department of Urology, Mayo Clinic, Rochester, MN, United States
- Tamir Wolf
- theator.io, Palo Alto, CA, United States
- Abhinav Khanna
- Department of Urology, Mayo Clinic, Rochester, MN, United States
14
Knudsen JE, Ghaffar U, Ma R, Hung AJ. Clinical applications of artificial intelligence in robotic surgery. J Robot Surg 2024; 18:102. [PMID: 38427094] [PMCID: PMC10907451] [DOI: 10.1007/s11701-024-01867-0]
Abstract
Artificial intelligence (AI) is revolutionizing nearly every aspect of modern life. In the medical field, robotic surgery is the sector with some of the most innovative and impactful advancements. In this narrative review, we outline recent contributions of AI to the field of robotic surgery, with a particular focus on intraoperative enhancement. AI modeling is allowing surgeons to have advanced intraoperative metrics such as force and tactile measurements, enhanced detection of positive surgical margins, and even allowing for the complete automation of certain steps in surgical procedures. AI is also revolutionizing the field of surgical education. AI modeling applied to intraoperative surgical video feeds and instrument kinematics data is allowing for the generation of automated skills assessments. AI also shows promise for the generation and delivery of highly specialized intraoperative surgical feedback for training surgeons. Although the adoption and integration of AI in robotic surgery show promise, they raise important, complex ethical questions. Frameworks for thinking through the ethical dilemmas raised by AI are outlined in this review. AI enhancement of robotic surgery is among the most groundbreaking research happening today, and the studies outlined in this review represent some of the most exciting innovations in recent years.
Affiliation(s)
- J Everett Knudsen
- Keck School of Medicine, University of Southern California, Los Angeles, USA
- Runzhuo Ma
- Cedars-Sinai Medical Center, Los Angeles, USA
15
Olsen RG, Svendsen MBS, Tolsgaard MG, Konge L, Røder A, Bjerrum F. Surgical gestures can be used to assess surgical competence in robot-assisted surgery: a validity investigating study of simulated RARP. J Robot Surg 2024; 18:47. [PMID: 38244130] [PMCID: PMC10799775] [DOI: 10.1007/s11701-023-01807-4]
Abstract
To collect validity evidence for the assessment of surgical competence through the classification of general surgical gestures for a simulated robot-assisted radical prostatectomy (RARP). We used 165 video recordings of novice and experienced RARP surgeons performing three parts of the RARP procedure on the RobotiX Mentor. We annotated the surgical tasks with different surgical gestures: dissection, hemostatic control, application of clips, needle handling, and suturing. The gestures were analyzed using idle time (periods with minimal instrument movements) and active time (whenever a surgical gesture was annotated). The distribution of surgical gestures was described using a one-dimensional heat map, snail tracks. All surgeons had a similar percentage of idle time but novices had longer phases of idle time (mean time: 21 vs. 15 s, p < 0.001). Novices used a higher total number of surgical gestures (number of phases: 45 vs. 35, p < 0.001) and each phase was longer compared with those of the experienced surgeons (mean time: 10 vs. 8 s, p < 0.001). There was a different pattern of gestures between novices and experienced surgeons as seen by a different distribution of the phases. General surgical gestures can be used to assess surgical competence in simulated RARP and can be displayed as a visual tool to show how performance is improving. The established pass/fail level may be used to ensure the competence of the residents before proceeding with supervised real-life surgery. The next step is to investigate if the developed tool can optimize automated feedback during simulator training.
Affiliation(s)
- Rikke Groth Olsen
- Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, The Capital Region of Denmark, Ryesgade 53B, 2100, Copenhagen, Denmark.
- Department of Urology, Copenhagen Prostate Cancer Center, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark.
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark.
- Morten Bo Søndergaard Svendsen
- Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, The Capital Region of Denmark, Ryesgade 53B, 2100, Copenhagen, Denmark
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Martin G Tolsgaard
- Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, The Capital Region of Denmark, Ryesgade 53B, 2100, Copenhagen, Denmark
- Lars Konge
- Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, The Capital Region of Denmark, Ryesgade 53B, 2100, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Andreas Røder
- Department of Urology, Copenhagen Prostate Cancer Center, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Flemming Bjerrum
- Copenhagen Academy for Medical Education and Simulation (CAMES), Center for HR & Education, The Capital Region of Denmark, Ryesgade 53B, 2100, Copenhagen, Denmark
- Department of Gastrointestinal and Hepatic Diseases, Copenhagen University Hospital - Herlev and Gentofte, Herlev, Denmark
16
Boal MWE, Anastasiou D, Tesfai F, Ghamrawi W, Mazomenos E, Curtis N, Collins JW, Sridhar A, Kelly J, Stoyanov D, Francis NK. Evaluation of objective tools and artificial intelligence in robotic surgery technical skills assessment: a systematic review. Br J Surg 2024; 111:znad331. [PMID: 37951600] [PMCID: PMC10771126] [DOI: 10.1093/bjs/znad331]
Abstract
BACKGROUND There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula. METHODS A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical skills, laparoscopy or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre of Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool analysed artificial intelligence (AI) studies. The Modified Downs-Black checklist was used to assess risk of bias. RESULTS Two hundred and forty-seven studies were analysed, identifying: 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs and 53 AI studies. The Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods and 1 non-simulator APM reached LoR 2. AI models estimated outcomes (skill or clinical), demonstrating superior accuracy in the laboratory, where 60 per cent of methods reported accuracies over 90 per cent, compared with 67 to 100 per cent in real surgery. CONCLUSIONS Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.
Affiliation(s)
- Matthew W E Boal
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Dimitrios Anastasiou
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Freweini Tesfai
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Walaa Ghamrawi
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Evangelos Mazomenos
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Nathan Curtis
- Department of General Surgery, Dorset County Hospital NHS Foundation Trust, Dorchester, UK
- Justin W Collins
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Ashwin Sridhar
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- John Kelly
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Computer Science, UCL, London, UK
- Nader K Francis
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Yeovil District Hospital, Somerset Foundation NHS Trust, Yeovil, Somerset, UK
17
Knudsen JE, Ma R, Hung AJ. Simulation training in urology. Curr Opin Urol 2024; 34:37-42. [PMID: 37909886] [PMCID: PMC10842538] [DOI: 10.1097/mou.0000000000001141]
Abstract
PURPOSE OF REVIEW This review outlines recent innovations in simulation technology as it applies to urology. It is essential for the next generation of urologists to attain a solid foundation of technical and nontechnical skills, and simulation technology provides a variety of safe, controlled environments to acquire this baseline knowledge. RECENT FINDINGS With a focus on urology, this review first outlines the evidence to support surgical simulation, then discusses the strides being made in the development of 3D-printed models for surgical skill training and preoperative planning, virtual reality models for different urologic procedures, surgical skill assessment for simulation, and integration of simulation into urology residency curricula. SUMMARY Simulation continues to be an integral part of the journey towards the mastery of skills necessary for becoming an expert urologist. Clinicians and researchers should consider how to further incorporate simulation technology into residency training and help future generations of urologists throughout their career.
Affiliation(s)
- Runzhuo Ma
- Department of Urology, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Andrew J Hung
- Department of Urology, Cedars-Sinai Medical Center, Los Angeles, California, USA
18
Campi R, Pecoraro A, Vignolini G, Spatafora P, Sebastianelli A, Sessa F, Li Marzi V, Territo A, Decaestecker K, Breda A, Serni S. The First Entirely 3D-Printed Training Model for Robot-assisted Kidney Transplantation: The RAKT Box. EUR UROL SUPPL 2023; 53:98-105. [PMID: 37304228] [PMCID: PMC10251129] [DOI: 10.1016/j.euros.2023.05.012]
Abstract
Background Robot-assisted kidney transplantation (RAKT) is increasingly performed at selected referral institutions worldwide. However, simulation and proficiency-based progression training frameworks for RAKT are still lacking, making acquisition of the RAKT-specific skill set a critical unmet need for future RAKT surgeons. Objective To develop and test the RAKT Box, the first entirely 3D-printed, perfused, hyperaccuracy simulator for vascular anastomoses during RAKT. Design, setting, and participants The project was developed in a stepwise fashion by a multidisciplinary team including urologists and bioengineers via an iterative process over a 3-yr period (November 2019-November 2022) using an established methodology. The essential and time-sensitive steps of RAKT were selected by a team of RAKT experts and simulated using the RAKT Box according to the principles of the Vattikuti-Medanta technique. The RAKT Box was tested in the operating theatre by an expert RAKT surgeon and independently by four trainees with heterogeneous expertise in robotic surgery and kidney transplantation. Surgical procedure Simulation of RAKT. Measurements Video recordings of the trainees' performance of vascular anastomoses using the RAKT Box were evaluated blind by a senior surgeon according to the Global Evaluative Assessment of Robotic Skills (GEARS) and Assessment of Robotic Console Skills (ARCS) tools. Results and limitations All participants successfully completed the training session, confirming the technical reliability of the RAKT Box simulator. Tangible differences were observed among the trainees in both anastomosis time and performance metrics. Key limitations of the RAKT Box include lack of simulation of the ureterovesical anastomosis and the need for a robotic platform, specific training instruments, and disposable 3D-printed vessels.
Conclusions The RAKT Box is a reliable educational tool to train novice surgeons in the key steps of RAKT and may represent the first step toward the definition of a structured surgical curriculum in RAKT. Patient summary We describe the first entirely 3D-printed simulator that allows surgeons to test the key steps of robot-assisted kidney transplantation (RAKT) in a training environment before performing the procedure in patients. The simulator, called the RAKT Box, has been successfully tested by an expert surgeon and four trainees. The results confirm its reliability and potential as an educational tool for training of future RAKT surgeons.
Affiliation(s)
- Riccardo Campi
  - Unit of Urological Robotic Surgery and Renal Transplantation, University of Florence, Careggi Hospital, Florence, Italy
  - Department of Experimental and Clinical Medicine, University of Florence, Florence, Italy
  - European Association of Urology Young Academic Urologists Kidney Transplantation Working Group, Arnhem, The Netherlands
- Alessio Pecoraro
  - Unit of Urological Robotic Surgery and Renal Transplantation, University of Florence, Careggi Hospital, Florence, Italy
  - European Association of Urology Young Academic Urologists Kidney Transplantation Working Group, Arnhem, The Netherlands
- Graziano Vignolini
  - Unit of Urological Robotic Surgery and Renal Transplantation, University of Florence, Careggi Hospital, Florence, Italy
- Pietro Spatafora
  - Unit of Urological Robotic Surgery and Renal Transplantation, University of Florence, Careggi Hospital, Florence, Italy
- Arcangelo Sebastianelli
  - Unit of Urological Robotic Surgery and Renal Transplantation, University of Florence, Careggi Hospital, Florence, Italy
- Francesco Sessa
  - Unit of Urological Robotic Surgery and Renal Transplantation, University of Florence, Careggi Hospital, Florence, Italy
- Vincenzo Li Marzi
  - Unit of Urological Robotic Surgery and Renal Transplantation, University of Florence, Careggi Hospital, Florence, Italy
- Angelo Territo
  - European Association of Urology Young Academic Urologists Kidney Transplantation Working Group, Arnhem, The Netherlands
  - Department of Urology, Fundació Puigvert, Autonomous University of Barcelona, Barcelona, Spain
- Karel Decaestecker
  - European Association of Urology Robotic Urology Section Robot-assisted Kidney Transplantation Working Group, Arnhem, The Netherlands
  - Department of Urology, Ghent University Hospital, Ghent, Belgium
- Alberto Breda
  - Department of Urology, Fundació Puigvert, Autonomous University of Barcelona, Barcelona, Spain
  - European Association of Urology Robotic Urology Section Robot-assisted Kidney Transplantation Working Group, Arnhem, The Netherlands
- Sergio Serni
  - Unit of Urological Robotic Surgery and Renal Transplantation, University of Florence, Careggi Hospital, Florence, Italy
  - Department of Experimental and Clinical Medicine, University of Florence, Florence, Italy
19
Mittermaier M, Raza MM, Kvedar JC. Bias in AI-based models for medical applications: challenges and mitigation strategies. NPJ Digit Med 2023; 6:113. [PMID: 37311802 DOI: 10.1038/s41746-023-00858-z] [Citation(s) in RCA: 43] [Impact Index Per Article: 43.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Accepted: 06/06/2023] [Indexed: 06/15/2023] Open
Affiliation(s)
- Mirja Mittermaier
  - Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Infectious Diseases, Respiratory Medicine and Critical Care, Berlin, Germany
  - Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany
20
Marwaha JS, Raza MM, Kvedar JC. The digital transformation of surgery. NPJ Digit Med 2023; 6:103. [PMID: 37258642 PMCID: PMC10232406 DOI: 10.1038/s41746-023-00846-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2023] [Accepted: 05/15/2023] [Indexed: 06/02/2023] Open
Abstract
Rapid advances in digital technology and artificial intelligence in recent years have already begun to transform many industries, and are beginning to make headway into healthcare. There is tremendous potential for new digital technologies to improve the care of surgical patients. In this piece, we highlight work being done to advance surgical care using machine learning, computer vision, wearable devices, remote patient monitoring, and virtual and augmented reality. We describe ways these technologies can be used to improve the practice of surgery, and discuss opportunities and challenges to their widespread adoption and use in operating rooms and at the bedside.
Affiliation(s)
- Jayson S Marwaha
  - Beth Israel Deaconess Medical Center, Boston, MA, USA
  - Harvard Medical School, Boston, MA, USA
- Joseph C Kvedar
  - Harvard Medical School, Boston, MA, USA
  - Mass General Brigham, Boston, MA, USA
21
Chu TN, Wong EY, Ma R, Yang CH, Dalieh IS, Hung AJ. Exploring the Use of Artificial Intelligence in the Management of Prostate Cancer. Curr Urol Rep 2023; 24:231-240. [PMID: 36808595 PMCID: PMC10090000 DOI: 10.1007/s11934-023-01149-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/30/2023] [Indexed: 02/21/2023]
Abstract
PURPOSE OF REVIEW This review aims to explore the current state of research on the use of artificial intelligence (AI) in the management of prostate cancer. We examine the various applications of AI in prostate cancer, including image analysis, prediction of treatment outcomes, and patient stratification. Additionally, the review evaluates the current limitations and challenges faced in the implementation of AI in prostate cancer management. RECENT FINDINGS Recent literature has focused particularly on the use of AI in radiomics, pathomics, the evaluation of surgical skills, and patient outcomes. AI has the potential to revolutionize prostate cancer management by improving diagnostic accuracy, treatment planning, and patient outcomes. Studies have shown improved accuracy and efficiency of AI models in the detection and treatment of prostate cancer, but further research is needed to understand its full potential as well as its limitations.
Affiliation(s)
- Timothy N Chu
  - Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA, 90089, USA
- Elyssa Y Wong
  - Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA, 90089, USA
- Runzhuo Ma
  - Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA, 90089, USA
- Cherine H Yang
  - Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA, 90089, USA
- Istabraq S Dalieh
  - Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA, 90089, USA
- Andrew J Hung
  - Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA, 90089, USA