1. Cho SM, Joo HH, Golla P, Sahu M, Shankar A, Trakimas DR, Creighton F, Akst L, Taylor RH, Galaiya D. Tremor Assessment in Robot-Assisted Microlaryngeal Surgery Using Computer Vision-Based Tool Tracking. Otolaryngol Head Neck Surg 2024; 171:188-196. [PMID: 38488231] [PMCID: PMC11211051] [DOI: 10.1002/ohn.714]
Abstract
OBJECTIVE Use microscopic video-based tracking of laryngeal surgical instruments to investigate the effect of robot assistance on instrument tremor. STUDY DESIGN Experimental trial. SETTING Tertiary Academic Medical Center. METHODS In this randomized cross-over trial, 36 videos were recorded from 6 surgeons performing left and right cordectomies on cadaveric pig larynges. These recordings captured 3 distinct conditions: without robotic assistance, with robot-assisted scissors, and with robot-assisted graspers. To assess tool tremor, we employed computer vision-based algorithms for tracking surgical tools. Absolute tremor bandpower and normalized path length were utilized as quantitative measures. Wilcoxon rank sum exact tests were employed for statistical analyses and comparisons between trials. Additionally, surveys were administered to assess the perceived ease of use of the robotic system. RESULTS Absolute tremor bandpower showed a significant decrease when using robot-assisted instruments compared to freehand instruments (P = .012). Normalized path length significantly decreased with robot-assisted compared to freehand trials (P = .001). For the scissors, robot-assisted trials resulted in a significant decrease in absolute tremor bandpower (P = .002) and normalized path length (P < .001). For the graspers, there was no significant difference in absolute tremor bandpower (P = .4), but there was a significantly lower normalized path length in the robot-assisted trials (P = .03). CONCLUSION This study demonstrated that computer-vision-based approaches can be used to assess tool motion in simulated microlaryngeal procedures. The results suggest that robot assistance is capable of reducing instrument tremor.
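The two motion measures named in this abstract, normalized path length and absolute tremor bandpower, can be recomputed from any tracked tool-tip trajectory. The sketch below is an illustrative reconstruction rather than the authors' implementation: the 6-12 Hz tremor band, the Welch spectral estimate, and the normalization of path length by net displacement are assumptions.

```python
import numpy as np
from scipy.signal import welch

def tremor_metrics(xy, fs, band=(6.0, 12.0)):
    """Illustrative tremor metrics for a tracked tool tip.

    xy   : (N, 2) array of tool-tip coordinates per video frame
    fs   : video frame rate in Hz
    band : assumed physiological tremor band in Hz
    """
    # Normalized path length: total distance travelled divided by net displacement (assumed definition)
    steps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    path_length = steps.sum()
    net_displacement = np.linalg.norm(xy[-1] - xy[0]) + 1e-9
    normalized_path_length = path_length / net_displacement

    # Absolute tremor bandpower: power of the detrended position signal integrated over the tremor band
    f, pxx = welch(xy - xy.mean(axis=0), fs=fs, nperseg=min(256, len(xy)), axis=0)
    in_band = (f >= band[0]) & (f <= band[1])
    bandpower = np.trapz(pxx[in_band], f[in_band], axis=0).sum()  # summed over x and y components
    return normalized_path_length, bandpower
```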
Affiliation(s)
- Sue M. Cho
- Department of Computer Science, Johns Hopkins, Baltimore, Maryland, USA
- Henry H. Joo
- Department of Otolaryngology–Head & Neck Surgery, Johns Hopkins, Baltimore, Maryland, USA
- Pranathi Golla
- Department of Mechanical Engineering, Johns Hopkins, Baltimore, Maryland, USA
- Manish Sahu
- Department of Computer Science, Johns Hopkins, Baltimore, Maryland, USA
- Ahjeetha Shankar
- Department of Otolaryngology–Head & Neck Surgery, Johns Hopkins, Baltimore, Maryland, USA
- Danielle R. Trakimas
- Department of Otolaryngology–Head & Neck Surgery, Johns Hopkins, Baltimore, Maryland, USA
- Francis Creighton
- Department of Otolaryngology–Head & Neck Surgery, Johns Hopkins, Baltimore, Maryland, USA
- Lee Akst
- Department of Otolaryngology–Head & Neck Surgery, Johns Hopkins, Baltimore, Maryland, USA
- Russell H. Taylor
- Department of Computer Science, Johns Hopkins, Baltimore, Maryland, USA
- Deepa Galaiya
- Department of Otolaryngology–Head & Neck Surgery, Johns Hopkins, Baltimore, Maryland, USA
2. Mascagni P, Alapatt D, Sestini L, Yu T, Alfieri S, Morales-Conde S, Padoy N, Perretta S. Applications of artificial intelligence in surgery: clinical, technical, and governance considerations. Cir Esp 2024; 102 Suppl 1:S66-S71. [PMID: 38704146] [DOI: 10.1016/j.cireng.2024.04.009]
Abstract
Artificial intelligence (AI) will power many of the tools in the armamentarium of digital surgeons. AI methods and surgical proof-of-concept flourish, but we have yet to witness clinical translation and value. Here we exemplify the potential of AI in the care pathway of colorectal cancer patients and discuss clinical, technical, and governance considerations of major importance for the safe translation of surgical AI for the benefit of our patients and practices.
Affiliation(s)
- Pietro Mascagni
- IHU Strasbourg, Strasbourg, France; Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Università Cattolica del Sacro Cuore, Rome, Italy.
- Deepak Alapatt
- University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Luca Sestini
- University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Tong Yu
- University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Sergio Alfieri
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Università Cattolica del Sacro Cuore, Rome, Italy
- Nicolas Padoy
- IHU Strasbourg, Strasbourg, France; University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Silvana Perretta
- IHU Strasbourg, Strasbourg, France; IRCAD, Research Institute Against Digestive Cancer, Strasbourg, France; Nouvel Hôpital Civil, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
3. Awuah WA, Aderinto N, Poornaselvan J, Tan JK, Shah MH, Ashinze P, Pujari AG, Bharadwaj HR, Abdul‐Rahman T, Atallah O. Empowering health care consumers & understanding patients' perspectives on AI integration in oncology and surgery: A perspective. Health Sci Rep 2024; 7:e2268. [PMID: 39050906] [PMCID: PMC11266117] [DOI: 10.1002/hsr2.2268]
Abstract
Introduction Artificial intelligence (AI) is transforming oncology and surgery by improving diagnostics, personalizing treatments, and enhancing surgical precision. Patients appreciate AI for its potential to provide accurate prognoses and tailored therapies. However, AI's implementation raises ethical concerns, data privacy issues, and the need for transparent communication between patients and health care providers. This study aims to understand patients' perspectives on AI integration in oncology and surgery to foster a balanced and patient-centered approach. Methods The study utilized a comprehensive literature review and analysis of existing research on AI applications in oncology and surgery. The focus was on examining patient perceptions, ethical considerations, and the potential benefits and risks associated with AI integration. Data were collected from peer-reviewed journals, conference proceedings, and expert opinions to provide a broad understanding of the topic. The perspectives of patients were also emphasized to highlight the nuances of their acceptance and concerns regarding AI in their health care. Results Patients generally perceive AI in oncology and surgery as beneficial, appreciating its potential for more accurate diagnoses, personalized treatment plans, and improved surgical outcomes. They particularly value AI's role in providing timely and precise diagnostics, which can lead to better prognoses and reduced anxiety. However, concerns about data privacy, ethical implications, and the reliability of AI systems were prevalent. Consequently, trust in AI and health care providers was deemed a crucial factor for patient acceptance. Additionally, the need for transparent communication and ethical safeguards was also highlighted to address these concerns effectively. Conclusion The integration of AI in oncology and surgery holds significant promise for enhancing patient care and outcomes. Patients view AI as a valuable tool that can provide accurate prognoses and personalized treatments. However, addressing ethical concerns, ensuring data privacy, and building trust through transparent communication are essential for successful AI integration. Future initiatives should focus on refining AI algorithms, establishing robust ethical guidelines, and enhancing patient education to harmonize technological advancements with patient-centered care principles.
Affiliation(s)
- Nicholas Aderinto
- Internal Medicine Department, LAUTECH Teaching Hospital, Ogbomoso, Nigeria
- Patrick Ashinze
- Faculty of Clinical Sciences, University of Ilorin, Ilorin, Nigeria
- Oday Atallah
- Department of Neurosurgery, Hannover Medical School, Hannover, Germany
4. Isaac S, Phillips MR, Chen KA, Carlson R, Greenberg CC, Khairat S. Usability, Acceptability, and Implementation of Artificial Intelligence (AI) and Machine Learning (ML) Techniques in Surgical Coaching and Training: A Scoping Review. J Surg Educ 2024; 81:994-1003. [PMID: 38749816] [DOI: 10.1016/j.jsurg.2024.03.018]
Abstract
OBJECTIVE To define the current state of peer-reviewed literature demonstrating the usability, acceptability, and implementation of artificial intelligence (AI) and machine learning (ML) techniques in surgical coaching and training. DESIGN We conducted a literature search with defined inclusion and exclusion criteria. We searched five scholarly databases: MEDLINE via PubMed, Embase via Elsevier, Scopus via Elsevier, Cochrane Central Register of Controlled Trials, and the Healthcare Administration Database via ProQuest. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. RESULTS Only 4 articles met the inclusion criteria and used standardized methods for performance evaluation with expert observation. We found no literature examining the impact on performance, user acceptance, or implementation of AI/ML techniques used for surgical coaching and training. We highlight the need for qualitative and quantitative research demonstrating these techniques' effectiveness before broad implementation. CONCLUSION AND RELEVANCE We emphasize the need for research to specifically evaluate performance, impact, user acceptance, and implementation of AI/ML techniques. Incorporating these facets of research when developing AI/ML techniques for surgical training is crucial to ensure emerging technology meets user needs without increasing cognitive burden or frustrating users.
Affiliation(s)
- Samuel Isaac
- Carolina Health Informatics Program, University of North Carolina at Chapel Hill (UNC), Chapel Hill, North Carolina.
- Michael R Phillips
- Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina.
- Kevin A Chen
- Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina
- Rebecca Carlson
- Health Sciences Library, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina
- Caprice C Greenberg
- Department of Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina
- Saif Khairat
- Carolina Health Informatics Program, University of North Carolina at Chapel Hill (UNC), Chapel Hill, North Carolina; School of Nursing, University of North Carolina, Chapel Hill, North Carolina
5. Patel M, Tranter-Entwistle I, Sirimanna P, Hugh TJ. 3D visualization systems improve operator efficiency during difficult laparoscopic cholecystectomy: a retrospective blinded review of surgical videos. ANZ J Surg 2024; 94:1114-1121. [PMID: 38486432] [DOI: 10.1111/ans.18949]
Abstract
BACKGROUND 3D visualization systems in laparoscopic surgery have been proposed to improve manual task handling compared to 2D, however, few studies have compared the intra-operative efficacy in laparoscopic cholecystectomy (LC). The aim of this study is to determine if there is a benefit in intra-operative efficiency when using a 3D visualization system in difficult LC compared to traditional 2D visualization systems. METHODS Retrospective analysis of 'difficult' LCs (Grades 3 or 4) was completed. The assessor was blinded as all cases were recorded and viewed in 2D only. Variables collected included time to complete steps, missed hook diathermy attempts, failed grasp attempts, missed clip attempts and preparation steps for intra-operative cholangiogram (IOC). Multiple linear regression was undertaken for time variables, Poisson regression or negative binomial regression was completed for continuous variables. RESULTS Fifty-two operative videos of 'difficult' LC were reviewed. 3D systems were associated with reduced operative times, although this was not statistically significant (CI: -2.93-14.93, P-value = 0.183). Dissection of the anterior fold to achieve the critical view of safety was significantly faster by 3.55 min (CI: 1.215-9.206, P-value = 0.002), and with considerably fewer errors when using 3D systems. Fewer IOC preparation errors were observed with a 3D system compared with a 2D system. CONCLUSIONS 3D systems appear to enhance operator efficiency, allowing faster completion of critical steps with fewer errors. This pilot study underscores the utility of video annotation for intra-operative assessment and suggests that, in larger multi-centre studies, 3D systems may demonstrate superior intra-operative efficiency over 2D systems during a 'difficult' LC.
Affiliation(s)
- Meet Patel
- Faculty of Medicine and Health, The University of Sydney, Camperdown, New South Wales, Australia
- Northern Beaches Hospital, Frenchs Forest, New South Wales, Australia
- Pramudith Sirimanna
- Faculty of Medicine and Health, The University of Sydney, Camperdown, New South Wales, Australia
- Thomas J Hugh
- Faculty of Medicine and Health, The University of Sydney, Camperdown, New South Wales, Australia
- Upper Gastrointestinal Surgical Unit, Royal North Shore Hospital and North Shore Private Hospital, St Leonards, New South Wales, Australia
6. Diehl DL. Phase analysis: a novel and useful application of artificial intelligence in endoscopy. Gastrointest Endosc 2024; 99:839-840. [PMID: 38649225] [DOI: 10.1016/j.gie.2024.01.041]
Affiliation(s)
- David L Diehl
- Department of Gastroenterology, Geisinger Medical Center, Danville, Pennsylvania, USA
7. Varghese C, Harrison EM, O'Grady G, Topol EJ. Artificial intelligence in surgery. Nat Med 2024; 30:1257-1268. [PMID: 38740998] [DOI: 10.1038/s41591-024-02970-3]
Abstract
Artificial intelligence (AI) is rapidly emerging in healthcare, yet applications in surgery remain relatively nascent. Here we review the integration of AI in the field of surgery, centering our discussion on multifaceted improvements in surgical care in the preoperative, intraoperative and postoperative space. The emergence of foundation model architectures, wearable technologies and improving surgical data infrastructures is enabling rapid advances in AI interventions and utility. We discuss how maturing AI methods hold the potential to improve patient outcomes, facilitate surgical education and optimize surgical care. We review the current applications of deep learning approaches and outline a vision for future advances through multimodal foundation models.
Affiliation(s)
- Chris Varghese
- Department of Surgery, University of Auckland, Auckland, New Zealand
- Ewen M Harrison
- Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, UK
- Greg O'Grady
- Department of Surgery, University of Auckland, Auckland, New Zealand
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Eric J Topol
- Scripps Research Translational Institute, La Jolla, CA, USA.
8. Chen Z, Yang D, Li A, Sun L, Zhao J, Liu J, Liu L, Zhou X, Chen Y, Cai Y, Wu Z, Cheng K, Cai H, Tang M, Peng B, Wang X. Decoding surgical skill: an objective and efficient algorithm for surgical skill classification based on surgical gesture features - experimental studies. Int J Surg 2024; 110:1441-1449. [PMID: 38079605] [PMCID: PMC10942222] [DOI: 10.1097/js9.0000000000000975]
Abstract
BACKGROUND Various surgical skills lead to differences in patient outcomes and identifying poorly skilled surgeons with constructive feedback contributes to surgical quality improvement. The aim of the study was to develop an algorithm for evaluating surgical skills in laparoscopic cholecystectomy based on the features of elementary functional surgical gestures (Surgestures). MATERIALS AND METHODS Seventy-five laparoscopic cholecystectomy videos were collected from 33 surgeons in five hospitals. The phase of mobilization hepatocystic triangle and gallbladder dissection from the liver bed of each video were annotated with 14 Surgestures. The videos were grouped into competent and incompetent based on the quantiles of modified global operative assessment of laparoscopic skills (mGOALS). Surgeon-related information, clinical data, and intraoperative events were analyzed. Sixty-three Surgesture features were extracted to develop the surgical skill classification algorithm. The area under the receiver operating characteristic curve of the classification and the top features were evaluated. RESULTS Correlation analysis revealed that most perioperative factors had no significant correlation with mGOALS scores. The incompetent group has a higher probability of cholecystic vascular injury compared to the competent group (30.8 vs 6.1%, P =0.004). The competent group demonstrated fewer inefficient Surgestures, lower shift frequency, and a larger dissection-exposure ratio of Surgestures during the procedure. The area under the receiver operating characteristic curve of the classification algorithm achieved 0.866. Different Surgesture features contributed variably to overall performance and specific skill items. CONCLUSION The computer algorithm accurately classified surgeons with different skill levels using objective Surgesture features, adding insight into designing automatic laparoscopic surgical skill assessment tools with technical feedback.
Affiliation(s)
- Zixin Chen
- Department of General Surgery, Division of Pancreatic Surgery
- West China School of Medicine, West China Hospital of Sichuan University
- Dewei Yang
- Chongqing University of Posts and Telecommunications, School of Advanced Manufacturing Engineering, Chongqing
- Ang Li
- Department of General Surgery, Division of Pancreatic Surgery
- Guang’an People’s Hospital, Guang’an
- Louzong Sun
- Department of Hepatobiliary Surgery, Zigong First People’s Hospital, Zigong
- Jifan Zhao
- Chengdu Withai Innovations Technology Company, Chengdu
- Jie Liu
- Chengdu Withai Innovations Technology Company, Chengdu
- Linxun Liu
- Department of General Surgery, Qinghai Provincial People’s Hospital, Xining, People’s Republic of China
- Xiaobo Zhou
- School of Biomedical Informatics, McGovern Medical School, University of Texas Health Science Center, Houston, USA
- Yonghua Chen
- Department of General Surgery, Division of Pancreatic Surgery
- Yunqiang Cai
- Department of General Surgery, Division of Pancreatic Surgery
- Zhong Wu
- Department of General Surgery, Division of Pancreatic Surgery
- Ke Cheng
- Department of General Surgery, Division of Pancreatic Surgery
- He Cai
- Department of General Surgery, Division of Pancreatic Surgery
- Ming Tang
- Department of General Surgery, Division of Pancreatic Surgery
- West China School of Medicine, West China Hospital of Sichuan University
- Bing Peng
- Department of General Surgery, Division of Pancreatic Surgery
- Xin Wang
- Department of General Surgery, Division of Pancreatic Surgery
9. Boal MWE, Anastasiou D, Tesfai F, Ghamrawi W, Mazomenos E, Curtis N, Collins JW, Sridhar A, Kelly J, Stoyanov D, Francis NK. Evaluation of objective tools and artificial intelligence in robotic surgery technical skills assessment: a systematic review. Br J Surg 2024; 111:znad331. [PMID: 37951600] [PMCID: PMC10771126] [DOI: 10.1093/bjs/znad331]
Abstract
BACKGROUND There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula. METHODS A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical, laparoscopy or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre of Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool analysed artificial intelligence (AI) studies. The Modified Downs-Black checklist was used to assess risk of bias. RESULTS Two hundred and forty-seven studies were analysed, identifying: 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs and 53 AI studies. Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods and 1 non-simulator APMs reached LoR 2. AI models estimated outcomes (skill or clinical), demonstrating superior accuracy rates in the laboratory with 60 per cent of methods reporting accuracies over 90 per cent, compared to real surgery ranging from 67 to 100 per cent. CONCLUSIONS Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.
Affiliation(s)
- Matthew W E Boal
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Dimitrios Anastasiou
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Freweini Tesfai
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Walaa Ghamrawi
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Evangelos Mazomenos
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Nathan Curtis
- Department of General Surgery, Dorset County Hospital NHS Foundation Trust, Dorchester, UK
- Justin W Collins
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Ashwin Sridhar
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- John Kelly
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Computer Science, UCL, London, UK
- Nader K Francis
- The Griffin Institute, Northwick Park & St Marks’ Hospital, London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Yeovil District Hospital, Somerset Foundation NHS Trust, Yeovil, Somerset, UK
10. Toale C, Morris M, O'Keeffe D, Boland F, Ryan DM, Nally DM, Kavanagh DO. Assessing operative competence in core surgical training: A reliability analysis. Am J Surg 2023; 226:588-595. [PMID: 37481408] [DOI: 10.1016/j.amjsurg.2023.06.020]
Abstract
BACKGROUND This study quantifies the number of observations required to reliably assess the operative competence of Core Surgical Trainees (CSTs) in Ireland, using the Supervised Structured Assessment of Operative Performance (SSAOP) tool. METHODS SSAOPs (April 2016-February 2021) were analysed across a mix of undifferentiated procedures, as well as for three commonly performed general surgery procedures in CST: appendicectomy, abdominal wall hernia repair, and skin/subcutaneous lesion excision. Generalizability and Decision studies determined the number of observations required to achieve dependability indices ≥0.8, appropriate for use in high-stakes assessment. RESULTS A total of 2,294 SSAOPs were analysed. Four assessors, each observing 10 cases, can generate scores sufficiently reliable for use in high-stakes assessments. Focusing on a selection of core procedures yields more favourable reliability indices. CONCLUSION Trainers should conduct repeated assessments across a smaller number of procedures to improve reliability. Programs should increase the assessor mix to yield sufficient dependability indices for high-stakes assessment.
Affiliation(s)
- Conor Toale
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, Ireland.
- Marie Morris
- Data Science Centre, University of Medicine and Health Sciences at the Royal College of Surgeons in Ireland, Ireland
- Dara O'Keeffe
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, Ireland
- Fiona Boland
- Data Science Centre, University of Medicine and Health Sciences at the Royal College of Surgeons in Ireland, Ireland
- Donncha M Ryan
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, Ireland
- Deirdre M Nally
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, Ireland
- Dara O Kavanagh
- Department of Surgical Affairs, Royal College of Surgeons in Ireland, Ireland
11. Patel M, Hugh TJ. A Comparison of Three-Dimensional Visualization Systems and Two-Dimensional Visualization Systems During Laparoscopic Cholecystectomy: A Narrative Review. J Laparoendosc Adv Surg Tech A 2023; 33:957-962. [PMID: 37486672] [DOI: 10.1089/lap.2023.0270]
Abstract
Background: Laparoscopic cholecystectomy is a common procedure for the definitive treatment of cholecystitis and symptomatic cholelithiasis. One advancement in minimally invasive surgery has been the development of three-dimensional (3D) visualization systems to provide stereopsis. It is yet to be determined whether this innovation is beneficial to the surgeon or simply a gimmick. This narrative review aims to answer the following research question: what is the impact of 3D visualization systems on surgical efficiency compared with two-dimensional visualization systems in laparoscopic cholecystectomy? Methods: Through a broad literature search it was determined that operative time and intraoperative errors have been used in published research to assess intraoperative efficiency. Results: Studies published to date have used operative time, intraoperative errors, and intraoperative bleeding as current measures for intraoperative efficiency. Previous meta-analyses have shown a slight improvement in operative time for 3D visualization systems; however, subsequent randomized controlled trials have not shown a significant difference in operative time. Reporting of intraoperative errors has been quite subjective and a difference between visualization modalities has not been shown. Conclusion: 3D visualization systems have shown a minor improvement in operative time compared with traditional laparoscopic systems and it is unlikely to be of any clinical significance. Studies that measure intraoperative error vary greatly in what they report, and which assessment tool is used. Across existing literature, studies do not control for surgeon's experience, elective/emergent cases, and grade of gallbladder/difficulty. Further research is required, using novel tools for assessment in laparoscopic cholecystectomy to determine intraoperative differences through objective and quantitative variables.
Affiliation(s)
- Meet Patel
- Northern Clinical School, University of Sydney, Sydney, Australia
- Faculty of Medicine and Health, The University of Sydney, Camperdown, Australia
- Northern Beaches Hospital, Frenchs Forest, Australia
- Thomas J Hugh
- Northern Clinical School, University of Sydney, Sydney, Australia
- Upper Gastrointestinal Surgical Unit, Royal North Shore Hospital and North Shore Private Hospital, St Leonards, Australia
12. Pedrett R, Mascagni P, Beldi G, Padoy N, Lavanchy JL. Technical skill assessment in minimally invasive surgery using artificial intelligence: a systematic review. Surg Endosc 2023; 37:7412-7424. [PMID: 37584774] [PMCID: PMC10520175] [DOI: 10.1007/s00464-023-10335-z]
Abstract
BACKGROUND Technical skill assessment in surgery relies on expert opinion. Therefore, it is time-consuming, costly, and often lacks objectivity. Analysis of intraoperative data by artificial intelligence (AI) has the potential for automated technical skill assessment. The aim of this systematic review was to analyze the performance, external validity, and generalizability of AI models for technical skill assessment in minimally invasive surgery. METHODS A systematic search of Medline, Embase, Web of Science, and IEEE Xplore was performed to identify original articles reporting the use of AI in the assessment of technical skill in minimally invasive surgery. Risk of bias (RoB) and quality of the included studies were analyzed according to Quality Assessment of Diagnostic Accuracy Studies criteria and the modified Joanna Briggs Institute checklists, respectively. Findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. RESULTS In total, 1958 articles were identified, 50 articles met eligibility criteria and were analyzed. Motion data extracted from surgical videos (n = 25) or kinematic data from robotic systems or sensors (n = 22) were the most frequent input data for AI. Most studies used deep learning (n = 34) and predicted technical skills using an ordinal assessment scale (n = 36) with good accuracies in simulated settings. However, all proposed models were in development stage, only 4 studies were externally validated and 8 showed a low RoB. CONCLUSION AI showed good performance in technical skill assessment in minimally invasive surgery. However, models often lacked external validity and generalizability. Therefore, models should be benchmarked using predefined performance metrics and tested in clinical implementation studies.
Affiliation(s)
- Romina Pedrett
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Pietro Mascagni
- IHU Strasbourg, Strasbourg, France
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Guido Beldi
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Nicolas Padoy
- IHU Strasbourg, Strasbourg, France
- ICube, CNRS, University of Strasbourg, Strasbourg, France
- Joël L Lavanchy
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.
- IHU Strasbourg, Strasbourg, France.
- University Digestive Health Care Center Basel - Clarunis, PO Box, 4002, Basel, Switzerland.
13. Xue H, Sun Y, Chen J, Tian H, Liu Z, Shen M, Liu L. CAT-CBAM-Net: An Automatic Scoring Method for Sow Body Condition Based on CNN and Transformer. Sensors (Basel) 2023; 23:7919. [PMID: 37765975] [PMCID: PMC10535612] [DOI: 10.3390/s23187919]
Abstract
Sow body condition scoring has been confirmed as a vital procedure in sow management. A timely and accurate assessment of the body condition of a sow is conducive to determining nutritional supply, and it takes on critical significance in enhancing sow reproductive performance. Manual sow body condition scoring methods have been extensively employed in large-scale sow farms, which are time-consuming and labor-intensive. To address the above-mentioned problem, a dual neural network-based automatic scoring method was developed in this study for sow body condition. The developed method aims to enhance the ability to capture local features and global information in sow images by combining CNN and transformer networks. Moreover, it introduces a CBAM module to help the network pay more attention to crucial feature channels while suppressing attention to irrelevant channels. To tackle the problem of imbalanced categories and mislabeling of body condition data, the original loss function was substituted with the optimized focal loss function. As indicated by the model test, the sow body condition classification achieved an average precision of 91.06%, the average recall rate was 91.58%, and the average F1 score reached 91.31%. The comprehensive comparative experimental results suggested that the proposed method yielded optimal performance on this dataset. The method developed in this study is capable of achieving automatic scoring of sow body condition, and it shows broad and promising applications.
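The abstract notes that the standard loss was replaced by an optimized focal loss to cope with class imbalance and mislabeled data. As a point of reference only, a minimal multi-class focal loss in the standard formulation of Lin et al. (2017) is sketched below; the gamma/alpha values and any paper-specific optimizations are assumptions, not details taken from this study.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Standard multi-class focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    logits  : (N, C) raw class scores
    targets : (N,) integer class labels
    alpha   : optional (C,) tensor of per-class weights for imbalanced data
    """
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-probability of the true class
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt                     # down-weights easy, well-classified samples
    if alpha is not None:
        loss = loss * alpha[targets]
    return loss.mean()
```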
Affiliation(s)
- Hongxiang Xue
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; (H.X.); (Y.S.); (J.C.); (Z.L.)
- Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China; (H.T.); (M.S.)
- Yuwen Sun
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; (H.X.); (Y.S.); (J.C.); (Z.L.)
- Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China; (H.T.); (M.S.)
- Jinxin Chen
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; (H.X.); (Y.S.); (J.C.); (Z.L.)
- Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China; (H.T.); (M.S.)
- Haonan Tian
- Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China; (H.T.); (M.S.)
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
- Zihao Liu
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China; (H.X.); (Y.S.); (J.C.); (Z.L.)
- Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China; (H.T.); (M.S.)
- Mingxia Shen
- Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China; (H.T.); (M.S.)
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
- Longshen Liu
- Key Laboratory of Breeding Equipment, Ministry of Agriculture and Rural Affairs, Nanjing 210031, China; (H.T.); (M.S.)
- College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
14. Baghdadi A, Lama S, Singh R, Sutherland GR. Tool-tissue force segmentation and pattern recognition for evaluating neurosurgical performance. Sci Rep 2023; 13:9591. [PMID: 37311965] [DOI: 10.1038/s41598-023-36702-3]
Abstract
Surgical data quantification and comprehension expose subtle patterns in tasks and performance. Enabling surgical devices with artificial intelligence provides surgeons with personalized and objective performance evaluation: a virtual surgical assist. Here we present machine learning models developed for analyzing surgical finesse using tool-tissue interaction force data in surgical dissection obtained from a sensorized bipolar forceps. Data modeling was performed using 50 neurosurgery procedures that involved elective surgical treatment for various intracranial pathologies. The data collection was conducted by 13 surgeons of varying experience levels using sensorized bipolar forceps, SmartForceps System. The machine learning algorithm constituted design and implementation for three primary purposes, i.e., force profile segmentation for obtaining active periods of tool utilization using T-U-Net, surgical skill classification into Expert and Novice, and surgical task recognition into two primary categories of Coagulation versus non-Coagulation using FTFIT deep learning architectures. The final report to surgeon was a dashboard containing recognized segments of force application categorized into skill and task classes along with performance metrics charts compared to expert level surgeons. Operating room data recording of > 161 h containing approximately 3.6 K periods of tool operation was utilized. The modeling resulted in Weighted F1-score = 0.95 and AUC = 0.99 for force profile segmentation using T-U-Net, Weighted F1-score = 0.71 and AUC = 0.81 for surgical skill classification, and Weighted F1-score = 0.82 and AUC = 0.89 for surgical task recognition using a subset of hand-crafted features augmented to FTFIT neural network. This study delivers a novel machine learning module in a cloud, enabling an end-to-end platform for intraoperative surgical performance monitoring and evaluation. Accessed through a secure application for professional connectivity, a paradigm for data-driven learning is established.
Affiliation(s)
- Amir Baghdadi
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute University of Calgary, Calgary, AB, Canada
- Sanju Lama
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute University of Calgary, Calgary, AB, Canada
- Rahul Singh
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute University of Calgary, Calgary, AB, Canada
- Garnette R Sutherland
- Project neuroArm, Department of Clinical Neurosciences, Hotchkiss Brain Institute University of Calgary, Calgary, AB, Canada.
15. Lavanchy JL, Vardazaryan A, Mascagni P, Mutter D, Padoy N. Preserving privacy in surgical video analysis using a deep learning classifier to identify out-of-body scenes in endoscopic videos. Sci Rep 2023; 13:9235. [PMID: 37286660] [DOI: 10.1038/s41598-023-36453-1]
Abstract
Surgical video analysis facilitates education and research. However, video recordings of endoscopic surgeries can contain privacy-sensitive information, especially if the endoscopic camera is moved out of the body of patients and out-of-body scenes are recorded. Therefore, identification of out-of-body scenes in endoscopic videos is of major importance to preserve the privacy of patients and operating room staff. This study developed and validated a deep learning model for the identification of out-of-body images in endoscopic videos. The model was trained and evaluated on an internal dataset of 12 different types of laparoscopic and robotic surgeries and was externally validated on two independent multicentric test datasets of laparoscopic gastric bypass and cholecystectomy surgeries. Model performance was evaluated compared to human ground truth annotations measuring the receiver operating characteristic area under the curve (ROC AUC). The internal dataset consisting of 356,267 images from 48 videos and the two multicentric test datasets consisting of 54,385 and 58,349 images from 10 and 20 videos, respectively, were annotated. The model identified out-of-body images with 99.97% ROC AUC on the internal test dataset. Mean ± standard deviation ROC AUC on the multicentric gastric bypass dataset was 99.94 ± 0.07% and 99.71 ± 0.40% on the multicentric cholecystectomy dataset, respectively. The model can reliably identify out-of-body images in endoscopic videos and is publicly shared. This facilitates privacy preservation in surgical video analysis.
Affiliation(s)
- Joël L Lavanchy
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France.
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.
- Division of Surgery, Clarunis-University Center for Gastrointestinal and Liver Diseases, St Clara and University Hospital of Basel, Basel, Switzerland.
- Armine Vardazaryan
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France
- ICube, University of Strasbourg, CNRS, Strasbourg, France
- Pietro Mascagni
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France
- Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
- Didier Mutter
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France
- University Hospital of Strasbourg, Strasbourg, France
- Nicolas Padoy
- IHU Strasbourg, 1 Place de l'Hôpital, 67091, Strasbourg Cedex, France
- ICube, University of Strasbourg, CNRS, Strasbourg, France
16. Bkheet E, D'Angelo AL, Goldbraikh A, Laufer S. Using hand pose estimation to automate open surgery training feedback. Int J Comput Assist Radiol Surg 2023. [PMID: 37253925] [DOI: 10.1007/s11548-023-02947-6]
Abstract
PURPOSE This research aims to facilitate the use of state-of-the-art computer vision algorithms for the automated training of surgeons and the analysis of surgical footage. By estimating 2D hand poses, we model the movement of the practitioner's hands, and their interaction with surgical instruments, to study their potential benefit for surgical training. METHODS We leverage pre-trained models on a publicly available hands dataset to create our own in-house dataset of 100 open surgery simulation videos with 2D hand poses. We also assess the ability of pose estimations to segment surgical videos into gestures and tool-usage segments and compare them to kinematic sensors and I3D features. Furthermore, we introduce 6 novel surgical dexterity proxies stemming from domain experts' training advice, all of which our framework can automatically detect given raw video footage. RESULTS State-of-the-art gesture segmentation accuracy of 88.35% on the open surgery simulation dataset is achieved with the fusion of 2D poses and I3D features from multiple angles. The introduced surgical skill proxies presented significant differences for novices compared to experts and produced actionable feedback for improvement. CONCLUSION This research demonstrates the benefit of pose estimations for open surgery by analyzing their effectiveness in gesture segmentation and skill assessment. Gesture segmentation using pose estimations achieved comparable results to physical sensors while being remote and markerless. Surgical dexterity proxies that rely on pose estimation proved they can be used to work toward automated training feedback. We hope our findings encourage additional collaboration on novel skill proxies to make surgical training more efficient.
Affiliation(s)
- Eddie Bkheet
- Data and Decision Sciences, Technion Institute of Technology, Haifa, Israel.
- Adam Goldbraikh
- Applied Mathematics, Technion Institute of Technology, Haifa, Israel
- Shlomi Laufer
- Data and Decision Sciences, Technion Institute of Technology, Haifa, Israel
17. Grüter AAJ, Van Lieshout AS, van Oostendorp SE, Henckens SPG, Ket JCF, Gisbertz SS, Toorenvliet BR, Tanis PJ, Bonjer HJ, Tuynman JB. Video-based tools for surgical quality assessment of technical skills in laparoscopic procedures: a systematic review. Surg Endosc 2023. [PMID: 37099157] [DOI: 10.1007/s00464-023-10076-z]
Abstract
BACKGROUND Quality of surgery has substantial impact on both short- and long-term clinical outcomes. This stresses the need for objective surgical quality assessment (SQA) for education, clinical practice and research purposes. The aim of this systematic review was to provide a comprehensive overview of all video-based objective SQA tools in laparoscopic procedures and their validity to objectively assess surgical performance. METHODS PubMed, Embase.com and Web of Science were systematically searched by two reviewers to identify all studies focusing on video-based SQA tools of technical skills in laparoscopic surgery performed in a clinical setting. Evidence on validity was evaluated using a modified validation scoring system. RESULTS Fifty-five studies with a total of 41 video-based SQA tools were identified. These tools were used in 9 different fields of laparoscopic surgery and were divided into 4 categories: the global assessment scale (GAS), the error-based assessment scale (EBAS), the procedure-specific assessment tool (PSAT) and artificial intelligence (AI). The number of studies focusing on these four categories were 21, 6, 31 and 3, respectively. Twelve studies validated the SQA tool with clinical outcomes. In 11 of those studies, a positive association between surgical quality and clinical outcomes was found. CONCLUSION This systematic review included a total of 41 unique video-based SQA tools to assess surgical technical skills in various domains of laparoscopic surgery. This study suggests that validated SQA tools enable objective assessment of surgical performance with relevance for clinical outcomes, which can be used for training, research and quality improvement programs.
Affiliation(s)
- Alexander A J Grüter
- Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands.
- Cancer Center Amsterdam, Treatment and Quality of Life, Amsterdam, The Netherlands.
- Annabel S Van Lieshout
- Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Treatment and Quality of Life, Amsterdam, The Netherlands
- Stefan E van Oostendorp
- Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
- Department of Surgery, Rode Kruis Ziekenhuis, Vondellaan 13, Beverwijk, The Netherlands
- Sofie P G Henckens
- Cancer Center Amsterdam, Treatment and Quality of Life, Amsterdam, The Netherlands
- Department of Surgery, Amsterdam UMC Location University of Amsterdam, Meibergdreef 9, Amsterdam, The Netherlands
- Johannes C F Ket
- Medical Library, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Suzanne S Gisbertz
- Department of Surgery, Amsterdam UMC Location University of Amsterdam, Meibergdreef 9, Amsterdam, The Netherlands
- Pieter J Tanis
- Department of Surgery, Amsterdam UMC Location University of Amsterdam, Meibergdreef 9, Amsterdam, The Netherlands
- Department of Surgical Oncology and Gastrointestinal Surgery, Erasmus MC, Doctor Molewaterplein 40, Rotterdam, The Netherlands
- Hendrik J Bonjer
- Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
- Jurriaan B Tuynman
- Department of Surgery, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
18. Kalt F, Mayr H, Gero D. Classification of Adverse Events in Adult Surgery. Eur J Pediatr Surg 2023; 33:120-128. [PMID: 36720250] [DOI: 10.1055/s-0043-1760821]
Abstract
Successful surgery combines quality (achievement of a positive outcome) with safety (avoidance of a negative outcome). Outcome assessment serves the purpose of quality improvement in health care by establishing performance indicators and allowing the identification of performance gaps. Novel surgical quality metric tools (benchmark cutoffs and textbook outcomes) provide procedure-specific ideal surgical outcomes in a subgroup of well-defined low-risk patients, with the aim of setting realistic and best achievable goals for surgeons and centers, as well as supporting unbiased comparison of surgical quality between centers and periods of time. Validated classification systems have been deployed to grade adverse events during the surgical journey: (1) the ClassIntra classification for the intraoperative period; (2) the Clavien-Dindo classification for the gravity of single adverse events; and the (3) Comprehensive Complication Index (CCI) for the sum of adverse events over a defined postoperative period. The failure to rescue rate refers to the death of a patient following one or more potentially treatable postoperative adverse event(s) and is a reliable proxy of the institutional safety culture and infrastructure. Complication assessment is undergoing digital transformation to decrease resource-intensity and provide surgeons with real-time pre- or intraoperative decision support. Standardized reporting of complications informs patients on their chances to realize favorable postoperative outcomes and assists surgical centers in the prioritization of quality improvement initiatives, multidisciplinary teamwork, surgical education, and ultimately, in the enhancement of clinical standards.
Affiliation(s)
- Fabian Kalt
- Department of Surgery and Transplantation, University Hospital Zurich, University of Zurich, Switzerland
- Hemma Mayr
- Department of Surgery and Transplantation, University Hospital Zurich, University of Zurich, Switzerland
- Daniel Gero
- Department of Surgery and Transplantation, University Hospital Zurich, University of Zurich, Switzerland
19. Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023; 20:171-182. [PMID: 36352158] [DOI: 10.1038/s41575-022-00701-y]
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop systems for assisting procedures leading to computer-assisted interventions that can enable better navigation during procedures, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Affiliation(s)
- François Chadebecq
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK.
20. Lavanchy JL, Gonzalez C, Kassem H, Nett PC, Mutter D, Padoy N. Proposal and multicentric validation of a laparoscopic Roux-en-Y gastric bypass surgery ontology. Surg Endosc 2023; 37:2070-2077. [PMID: 36289088] [PMCID: PMC10017621] [DOI: 10.1007/s00464-022-09745-2]
Abstract
BACKGROUND Phase and step annotation in surgical videos is a prerequisite for surgical scene understanding and for downstream tasks like intraoperative feedback or assistance. However, most ontologies are applied on small monocentric datasets and lack external validation. To overcome these limitations an ontology for phases and steps of laparoscopic Roux-en-Y gastric bypass (LRYGB) is proposed and validated on a multicentric dataset in terms of inter- and intra-rater reliability (inter-/intra-RR). METHODS The proposed LRYGB ontology consists of 12 phase and 46 step definitions that are hierarchically structured. Two board certified surgeons (raters) with > 10 years of clinical experience applied the proposed ontology on two datasets: (1) StraBypass40 consists of 40 LRYGB videos from Nouvel Hôpital Civil, Strasbourg, France and (2) BernBypass70 consists of 70 LRYGB videos from Inselspital, Bern University Hospital, Bern, Switzerland. To assess inter-RR the two raters' annotations of ten randomly chosen videos from StraBypass40 and BernBypass70 each, were compared. To assess intra-RR ten randomly chosen videos were annotated twice by the same rater and annotations were compared. Inter-RR was calculated using Cohen's kappa. Additionally, for inter- and intra-RR accuracy, precision, recall, F1-score, and application dependent metrics were applied. RESULTS The mean ± SD video duration was 108 ± 33 min and 75 ± 21 min in StraBypass40 and BernBypass70, respectively. The proposed ontology shows an inter-RR of 96.8 ± 2.7% for phases and 85.4 ± 6.0% for steps on StraBypass40 and 94.9 ± 5.8% for phases and 76.1 ± 13.9% for steps on BernBypass70. The overall Cohen's kappa of inter-RR was 95.9 ± 4.3% for phases and 80.8 ± 10.0% for steps. Intra-RR showed an accuracy of 98.4 ± 1.1% for phases and 88.1 ± 8.1% for steps. CONCLUSION The proposed ontology shows an excellent inter- and intra-RR and should therefore be implemented routinely in phase and step annotation of LRYGB.
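Cohen's kappa, used above to quantify inter-rater reliability, compares the observed agreement between the two raters with the agreement expected by chance. A minimal sketch follows, assuming frame-wise (or step-wise) labels for a single video; aggregating kappa across videos into the reported mean ± SD is left to the caller.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two raters' categorical annotations (e.g. phase IDs per frame)."""
    assert len(labels_a) == len(labels_b) and len(labels_a) > 0
    n = len(labels_a)
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n   # observed agreement p_o
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n)                 # chance agreement p_e
                     for c in set(freq_a) | set(freq_b))
    if p_expected == 1.0:  # both raters used a single identical label throughout
        return 1.0
    return (p_observed - p_expected) / (1.0 - p_expected)
```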
Affiliation(s)
- Joël L Lavanchy
- IHU Strasbourg, 1 Place de l'Hôpital, 67000, Strasbourg, France.
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.
| | - Cristians Gonzalez
- IHU Strasbourg, 1 Place de l'Hôpital, 67000, Strasbourg, France
- University Hospital of Strasbourg, Strasbourg, France
| | - Hasan Kassem
- ICube, CNRS, University of Strasbourg, Strasbourg, France
| | - Philipp C Nett
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Didier Mutter
- IHU Strasbourg, 1 Place de l'Hôpital, 67000, Strasbourg, France
- University Hospital of Strasbourg, Strasbourg, France
| | - Nicolas Padoy
- IHU Strasbourg, 1 Place de l'Hôpital, 67000, Strasbourg, France
- ICube, CNRS, University of Strasbourg, Strasbourg, France
| |
|
21
|
Cheikh Youssef S, Haram K, Noël J, Patel V, Porter J, Dasgupta P, Hachach-Haram N. Evolution of the digital operating room: the place of video technology in surgery. Langenbecks Arch Surg 2023; 408:95. [PMID: 36807211 PMCID: PMC9939374 DOI: 10.1007/s00423-023-02830-7] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 02/06/2023] [Indexed: 02/23/2023]
Abstract
PURPOSE The aim of this review was to collate current evidence wherein digitalisation, through the incorporation of video technology and artificial intelligence (AI), is being applied to the practice of surgery. Applications are vast, and the literature investigating the utility of surgical video and its synergy with AI has steadily increased over the last 2 decades. Comparable technology is already widespread in other industries, such as autonomous systems in transportation and manufacturing. METHODS Articles were identified primarily using the PubMed and MEDLINE databases. The MeSH terms used were "surgical education", "surgical video", "video labelling", "surgery", "surgical workflow", "telementoring", "telemedicine", "machine learning", "deep learning" and "operating room". Given the breadth of the subject and the scarcity of high-level data in certain areas, a narrative synthesis was selected over a meta-analysis or systematic review to allow for a focussed discussion of the topic. RESULTS Three main themes were identified and analysed throughout this review: (1) the multifaceted utility of surgical video recording, (2) teleconferencing/telemedicine and (3) artificial intelligence in the operating room. CONCLUSIONS Evidence suggests the routine collection of intraoperative data will be beneficial in the advancement of surgery, by driving standardised, evidence-based surgical care and personalised training of future surgeons. However, many barriers stand in the way of widespread implementation, necessitating close collaboration between surgeons, data scientists, medicolegal personnel and hospital policy makers.
Affiliation(s)
| | | | - Jonathan Noël
- Guy's and St. Thomas' NHS Foundation Trust, Urology Centre, King's Health Partners, London, UK
| | - Vipul Patel
- Adventhealth Global Robotics Institute, 400 Celebration Place, Celebration, FL, USA
| | - James Porter
- Department of Urology, Swedish Urology Group, Seattle, WA, USA
| | - Prokar Dasgupta
- Guy's and St. Thomas' NHS Foundation Trust, Urology Centre, King's Health Partners, London, UK
| | - Nadine Hachach-Haram
- Department of Plastic Surgery, Guy's and St. Thomas' NHS Foundation Trust, King's Health Partners, London, UK
| |
|
22
|
Kulkarni CS, Deng S, Wang T, Hartman-Kenzler J, Barnes LE, Parker SH, Safford SD, Lau N. Scene-dependent, feedforward eye gaze metrics can differentiate technical skill levels of trainees in laparoscopic surgery. Surg Endosc 2023; 37:1569-1580. [PMID: 36123548 PMCID: PMC11062149 DOI: 10.1007/s00464-022-09582-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 08/25/2022] [Indexed: 10/14/2022]
Abstract
INTRODUCTION In laparoscopic surgery, looking at the target areas is an indicator of proficiency. However, gaze behaviors revealing feedforward control (i.e., looking ahead) and their importance have been under-investigated in surgery. This study aims to establish the sensitivity and relative importance of different scene-dependent gaze and motion metrics for estimating trainee proficiency levels in surgical skills. METHODS Medical students performed the Fundamentals of Laparoscopic Surgery peg transfer task while recording their gaze on the monitor and tool activities inside the trainer box. Using computer vision and fixation algorithms, five scene-dependent gaze metrics and one tool speed metric were computed for 499 practice trials. Cluster analysis on the six metrics was used to group the trials into different clusters/proficiency levels, and ANOVAs were conducted to test differences between proficiency levels. A Random Forest model was trained to study metric importance at predicting proficiency levels. RESULTS Three clusters were identified, corresponding to three proficiency levels. The correspondence between the clusters and proficiency levels was confirmed by differences between completion times (F(2, 488) = 38.94, p < .001). Further, ANOVAs revealed significant differences between the three levels for all six metrics. The Random Forest model predicted proficiency level with 99% out-of-bag accuracy and revealed that scene-dependent gaze metrics reflecting feedforward behaviors were more important for prediction than those reflecting feedback behaviors. CONCLUSION Scene-dependent gaze metrics distinguished trainee skill levels at a finer granularity than the expert-versus-novice comparisons suggested in the literature. Further, feedforward gaze metrics appeared to be more important than feedback ones at predicting proficiency.
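A minimal sketch of this kind of analysis pipeline, not the authors' code: standardize the six per-trial metrics, cluster the trials into proficiency levels, then fit a Random Forest with out-of-bag scoring and rank feature importances. The feature names and synthetic data are assumptions made for illustration.

```python
# Minimal sketch: cluster trials on six metrics, then rank metric importance
# with a Random Forest. Feature names and data are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["fix_target", "fix_ahead", "fix_tool", "fix_transfer", "fix_other", "tool_speed"]
X = rng.normal(size=(499, len(features)))            # 499 practice trials, 6 metrics

X_std = StandardScaler().fit_transform(X)            # scale before distance-based clustering
levels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X_std, levels)
print("out-of-bag accuracy:", round(rf.oob_score_, 3))
for name, importance in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {importance:.3f}")
```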
Affiliation(s)
- Chaitanya S Kulkarni
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
| | - Shiyu Deng
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
| | - Tianzi Wang
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA
| | | | - Laura E Barnes
- Environmental and Systems Engineering, University of Virginia, Charlottesville, VA, USA
| | | | - Shawn D Safford
- Division of Pediatric General and Thoracic Surgery, UPMC Children's Hospital of Pittsburgh, Harrisburg, PA, USA
| | - Nathan Lau
- Grado Department of Industrial and Systems Engineering, Virginia Tech, 250 Durham Hall (0118), 1145 Perry Street, Blacksburg, VA, 24061, USA.
| |
|
23
|
Guerrero DT, Asaad M, Rajesh A, Hassan A, Butler CE. Advancing Surgical Education: The Use of Artificial Intelligence in Surgical Training. Am Surg 2023; 89:49-54. [PMID: 35570822 DOI: 10.1177/00031348221101503] [Citation(s) in RCA: 22] [Impact Index Per Article: 22.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
The technology of artificial intelligence (AI) has made significant inroads into the field of medicine over the last decade. With surgery being a discipline where repetition is the key to mastery, AI presents enormous potential for resident education through the analysis of technique and the delivery of structured feedback for performance improvement. In an era marred by a raging pandemic that has decreased exposure and opportunity, AI offers an attractive means of improving operating room efficiency and of supporting safe patient care in the hands of supervised residents, and can ultimately reduce health care costs. Through this article, we elucidate the current adoption of artificial intelligence technology and its prospects for advancing surgical education.
Affiliation(s)
- David T Guerrero
- 12317University of Pittsburgh Medical School, Pittsburgh, PA, USA
| | - Malke Asaad
- Department of Plastic Surgery, 6595University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | - Aashish Rajesh
- 14742University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
| | - Abbas Hassan
- Department of Plastic Surgery, 571198The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| | - Charles E Butler
- Department of Plastic Surgery, 571198The University of Texas MD Anderson Cancer Center, Houston, TX, USA
| |
|
24
|
Nema S, Vachhani L. Surgical instrument detection and tracking technologies: Automating dataset labeling for surgical skill assessment. Front Robot AI 2022; 9:1030846. [PMID: 36405072 PMCID: PMC9671944 DOI: 10.3389/frobt.2022.1030846] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Accepted: 10/14/2022] [Indexed: 11/06/2022] Open
Abstract
Surgical skills can be improved by continuous surgical training and feedback, thus reducing adverse outcomes while performing an intervention. With the advent of new technologies, researchers now have the tools to analyze surgical instrument motion to differentiate surgeons’ levels of technical skill. Surgical skills assessment is time-consuming and prone to subjective interpretation. The surgical instrument detection and tracking algorithm analyzes the images captured by the surgical robotic endoscope and extracts the movement and orientation information of a surgical instrument to provide surgical navigation. This information can be used to label raw surgical video datasets that are used to form an action space for surgical skill analysis. Instrument detection and tracking is a challenging problem in minimally invasive surgery (MIS), including robot-assisted surgery, but vision-based approaches provide promising solutions with minimal hardware integration requirements. This study offers an overview of the developments of assessment systems for surgical intervention analysis. The purpose of this study is to identify the research gap and make a leap in developing technology to automate the incorporation of new surgical skills. A prime factor in automating the learning is to create datasets with minimal manual intervention from raw surgical videos. This review encapsulates the current trends in artificial intelligence (AI) based visual detection and tracking technologies for surgical instruments and their application to surgical skill assessment.
|
25
|
Wagner M, Brandenburg JM, Bodenstedt S, Schulze A, Jenke AC, Stern A, Daum MTJ, Mündermann L, Kolbinger FR, Bhasker N, Schneider G, Krause-Jüttler G, Alwanni H, Fritz-Kebede F, Burgert O, Wilhelm D, Fallert J, Nickel F, Maier-Hein L, Dugas M, Distler M, Weitz J, Müller-Stich BP, Speidel S. Surgomics: personalized prediction of morbidity, mortality and long-term outcome in surgery using machine learning on multimodal data. Surg Endosc 2022; 36:8568-8591. [PMID: 36171451 PMCID: PMC9613751 DOI: 10.1007/s00464-022-09611-1] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Accepted: 09/03/2022] [Indexed: 01/06/2023]
Abstract
BACKGROUND Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integration of intraoperative surgical data and their analysis with machine learning methods to leverage the potential of this data in analogy to Radiomics and Genomics. METHODS We defined Surgomics as the entirety of surgomic features that are process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team we discussed potential data sources like endoscopic videos, vital sign monitoring, medical devices and instruments and respective surgomic features. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers for rating the features' clinical relevance and technical feasibility. RESULTS In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants) the feature category with the highest clinical relevance as rated by surgeons was "surgical skill and quality of performance" for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) as well as for long-term (oncological) outcome (8.2 ± 1.8). The feature category with the highest feasibility to be automatically extracted as rated by (computer) scientists was "Instrument" (8.5 ± 1.7). Among the surgomic features ranked as most relevant in their respective category were "intraoperative adverse events", "action performed with instruments", "vital sign monitoring", and "difficulty of surgery". CONCLUSION Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.
Affiliation(s)
- Martin Wagner
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT), Heidelberg, Germany.
| | - Johanna M Brandenburg
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Sebastian Bodenstedt
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
| | - André Schulze
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Alexander C Jenke
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
| | - Antonia Stern
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
| | - Marie T J Daum
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Lars Mündermann
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
| | - Fiona R Kolbinger
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Else Kröner Fresenius Center for Digital Health, Technische Universität Dresden, Dresden, Germany
| | - Nithya Bhasker
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
| | - Gerd Schneider
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
| | - Grit Krause-Jüttler
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
| | - Hisham Alwanni
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
| | - Fleur Fritz-Kebede
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
| | - Oliver Burgert
- Research Group Computer Assisted Medicine (CaMed), Reutlingen University, Reutlingen, Germany
| | - Dirk Wilhelm
- Department of Surgery, Faculty of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
| | - Johannes Fallert
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
| | - Felix Nickel
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
| | - Lena Maier-Hein
- Department of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Martin Dugas
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
| | - Marius Distler
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
| | - Jürgen Weitz
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
| | - Beat-Peter Müller-Stich
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Stefanie Speidel
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
| |
|
26
|
Mascagni P, Alapatt D, Sestini L, Altieri MS, Madani A, Watanabe Y, Alseidi A, Redan JA, Alfieri S, Costamagna G, Boškoski I, Padoy N, Hashimoto DA. Computer vision in surgery: from potential to clinical value. NPJ Digit Med 2022; 5:163. [PMID: 36307544 PMCID: PMC9616906 DOI: 10.1038/s41746-022-00707-5] [Citation(s) in RCA: 35] [Impact Index Per Article: 17.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Accepted: 10/10/2022] [Indexed: 11/09/2022] Open
Abstract
Hundreds of millions of operations are performed worldwide each year, and the rising uptake in minimally invasive surgery has enabled fiber optic cameras and robots to become both important tools to conduct surgery and sensors from which to capture information about surgery. Computer vision (CV), the application of algorithms to analyze and interpret visual data, has become a critical technology through which to study the intraoperative phase of care with the goals of augmenting surgeons' decision-making processes, supporting safer surgery, and expanding access to surgical care. While much work has been performed on potential use cases, there are currently no CV tools widely used for diagnostic or therapeutic applications in surgery. Using laparoscopic cholecystectomy as an example, we reviewed current CV techniques that have been applied to minimally invasive surgery and their clinical applications. Finally, we discuss the challenges and obstacles that remain to be overcome for broader implementation and adoption of CV in surgery.
Affiliation(s)
- Pietro Mascagni
- Gemelli Hospital, Catholic University of the Sacred Heart, Rome, Italy. .,IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France. .,Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada.
| | - Deepak Alapatt
- ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France
| | - Luca Sestini
- ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France.,Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
| | - Maria S Altieri
- Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada.,Department of Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
| | - Amin Madani
- Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada.,Department of Surgery, University Health Network, Toronto, ON, Canada
| | - Yusuke Watanabe
- Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada.,Department of Surgery, University of Hokkaido, Hokkaido, Japan
| | - Adnan Alseidi
- Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada.,Department of Surgery, University of California San Francisco, San Francisco, CA, USA
| | - Jay A Redan
- Department of Surgery, AdventHealth-Celebration Health, Celebration, FL, USA
| | - Sergio Alfieri
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
| | - Guido Costamagna
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
| | - Ivo Boškoski
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
| | - Nicolas Padoy
- IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France.,ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France
| | - Daniel A Hashimoto
- Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada.,Department of Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
| |
|
27
|
Wlasitsch-Nagy Z, Bálint A, Kőnig-Péter A, Varga P, Várady E, Bogner P, Gasz B. New CFD-based method for morphological and functional assessment in cardiovascular skill training. J Vasc Surg Cases Innov Tech 2022; 8:770-778. [DOI: 10.1016/j.jvscit.2022.09.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Accepted: 09/20/2022] [Indexed: 11/05/2022] Open
|
28
|
Kil I, Eidt JF, Groff RE, Singapogu RB. Assessment of open surgery suturing skill: Simulator platform, force-based, and motion-based metrics. Front Med (Lausanne) 2022; 9:897219. [PMID: 36111107 PMCID: PMC9468321 DOI: 10.3389/fmed.2022.897219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Accepted: 08/05/2022] [Indexed: 11/29/2022] Open
Abstract
Objective This paper focuses on simulator-based assessment of open surgery suturing skill. We introduce a new surgical simulator designed to collect synchronized force, motion, video and touch data during a radial suturing task adapted from the Fundamentals of Vascular Surgery (FVS) skill assessment. The synchronized data is analyzed to extract objective metrics for suturing skill assessment. Methods The simulator has a camera positioned underneath the suturing membrane, enabling visual tracking of the needle during suturing. Needle tracking data enables extraction of meaningful metrics related to both the process and the product of the suturing task. To better simulate surgical conditions, the height of the system and the depth of the membrane are both adjustable. Metrics for assessment of suturing skill based on force/torque, motion, and physical contact are presented. Experimental data are presented from a study comparing attending surgeons and surgery residents. Results Analysis shows force metrics (absolute maximum force/torque in z-direction), motion metrics (yaw, pitch, roll), physical contact metric, and image-enabled force metrics (orthogonal and tangential forces) are found to be statistically significant in differentiating suturing skill between attendings and residents. Conclusion and significance The results suggest that this simulator and accompanying metrics could serve as a useful tool for assessing and teaching open surgery suturing skill.
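The abstract reports metrics that differ significantly between attendings and residents but does not name the statistical test; a non-parametric rank-based comparison such as the Mann-Whitney U test is one common choice for small groups. The sketch below uses that test as an assumption, with illustrative numbers only.

```python
# Minimal sketch: compare one suturing metric between two groups with a
# Mann-Whitney U (rank-sum) test. The values are illustrative, not study data.
import numpy as np
from scipy.stats import mannwhitneyu

attending_peak_force = np.array([1.9, 2.1, 1.7, 2.0, 1.8])   # e.g. max |Fz| per trial
resident_peak_force = np.array([2.8, 3.1, 2.6, 3.4, 2.9])

stat, p = mannwhitneyu(attending_peak_force, resident_peak_force, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```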
Affiliation(s)
- Irfan Kil
- Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, United States
| | - John F. Eidt
- Division of Vascular Surgery, Baylor Scott & White Heart and Vascular Hospital, Dallas, TX, United States
| | - Richard E. Groff
- Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, United States
| | - Ravikiran B. Singapogu
- Department of Bioengineering, Clemson University, Clemson, SC, United States
- *Correspondence: Ravikiran B. Singapogu
| |
|
29
|
Deepika P, Udupa K, Beniwal M, Uppar AM, V V, Rao M. Automated Microsurgical Tool Segmentation and Characterization in Intra-Operative Neurosurgical Videos. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:2110-2114. [PMID: 36086279 DOI: 10.1109/embc48229.2022.9871838] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Checklist-based routine evaluation of surgical skills in any medical school demands quality time and effort from the supervising expert and is highly influenced by assessor bias. Alternatively, automated video-based surgical skill assessment is a simple and viable method to analyse surgical dexterity offline without the need for the acute presence of an expert surgeon throughout the surgery. In this paper, a novel approach and results for the automated segmentation of microsurgical instruments from a real-world neurosurgical video dataset are presented. The proposed tool segmentation model showcased a mean average precision of 96.7% in detecting and localizing five surgical instruments from the real-world neurosurgical videos. Accurate detection and characterization of motion features of the microsurgical tool from the novel annotated neurosurgical video dataset forms the key step towards automated surgical skill evaluation. Clinical Relevance: Tool segmentation, localization, and characterization in neurosurgical video have several applications, including assessing surgeons' skills, training novice surgeons, understanding critical operating procedures post surgery, characterizing any critical anatomical response to the tool that leads to the success or failure of the surgery, and building models for conducting autonomous robotic surgery. Semantic segmentation and characterization of the microsurgical tools form the basis of modern neurosurgery.
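Detection metrics such as the reported mean average precision are built on intersection-over-union (IoU) matching between predicted and ground-truth bounding boxes; the short sketch below shows that underlying computation with illustrative box coordinates.

```python
# Minimal sketch: intersection-over-union between two (x1, y1, x2, y2) boxes,
# the overlap criterion behind average-precision style detection metrics.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)      # intersection rectangle
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A predicted box is typically counted as a true positive when IoU >= 0.5.
print(round(iou((10, 10, 60, 60), (30, 30, 80, 80)), 3))
```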
|
30
|
Kutana S, Bitner DP, Addison P, Chung PJ, Talamini MA, Filicori F. Objective assessment of robotic surgical skills: review of literature and future directions. Surg Endosc 2022; 36:3698-3707. [PMID: 35229215 DOI: 10.1007/s00464-022-09134-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Accepted: 02/13/2022] [Indexed: 01/29/2023]
Abstract
BACKGROUND Evaluation of robotic surgical skill has become increasingly important as robotic approaches to common surgeries become more widely utilized. However, evaluation of these skills currently lacks standardization. In this paper, we aimed to review the literature on robotic surgical skill evaluation. METHODS A review of the literature on robotic surgical skill evaluation was performed, and representative literature from the past ten years is presented. RESULTS The study of reliability and validity in robotic surgical evaluation shows two main assessment categories: manual and automatic. Manual assessments have been shown to be valid but typically are time-consuming and costly. Automatic evaluation and simulation are similarly valid and simpler to implement. Initial reports on evaluation of skill using artificial intelligence platforms show validity. Few data on evaluation methods of surgical skill connect directly to patient outcomes. CONCLUSION As evaluation in surgery begins to incorporate robotic skills, a simultaneous shift from manual to automatic evaluation may occur given the ease of implementation of these technologies. Robotic platforms offer the unique benefit of providing more objective data streams, including kinematic data, which allow for precise instrument tracking in the operative field. Such data streams will likely be incrementally implemented in performance evaluations. Similarly, with advances in artificial intelligence, machine evaluation of human technical skill will likely form the next wave of surgical evaluation.
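As one illustration of what kinematic data streams make possible (not a metric defined in the cited review), the sketch below computes a path length and a displacement-normalized path length, a simple economy-of-motion measure, from a hypothetical series of tool-tip positions.

```python
# Minimal sketch: economy-of-motion measures from sampled 3-D tool-tip positions.
# The position samples are illustrative placeholders.
import numpy as np

positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.2, 0.0],
                      [2.1, 0.1, 0.1],
                      [3.0, 0.0, 0.0]])          # e.g. millimetres per sample

steps = np.diff(positions, axis=0)
path_length = np.linalg.norm(steps, axis=1).sum()
displacement = np.linalg.norm(positions[-1] - positions[0])
print("path length:", round(path_length, 3))
print("normalized path length:", round(path_length / displacement, 3))  # 1.0 = straight path
```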
Affiliation(s)
- Saratu Kutana
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA
| | - Daniel P Bitner
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA.
| | - Poppy Addison
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA
| | - Paul J Chung
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA.,Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
| | - Mark A Talamini
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
| | - Filippo Filicori
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, 186 E. 76th Street, 1st Floor, New York, NY, 10021, USA.,Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
| |
|
31
|
Hybrid Spatiotemporal Contrastive Representation Learning for Content-Based Surgical Video Retrieval. ELECTRONICS 2022. [DOI: 10.3390/electronics11091353] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
In the medical field, due to their economic and clinical benefits, there is a growing interest in minimally invasive surgeries and microscopic surgeries. These types of surgeries are often recorded during operations, and these recordings have become a key resource for education, patient disease analysis, surgical error analysis, and surgical skill assessment. However, manual searching in this collection of long-term surgical videos is an extremely labor-intensive and long-term task, requiring an effective content-based video analysis system. In this regard, previous methods for surgical video retrieval are based on handcrafted features which do not represent the video effectively. On the other hand, deep learning-based solutions were found to be effective in both surgical image and video analysis, where CNN-, LSTM- and CNN-LSTM-based methods were proposed in most surgical video analysis tasks. In this paper, we propose a hybrid spatiotemporal embedding method to enhance spatiotemporal representations using an adaptive fusion layer on top of the LSTM and temporal causal convolutional modules. To learn surgical video representations, we propose exploring the supervised contrastive learning approach to leverage label information in addition to augmented versions. By validating our approach to a video retrieval task on two datasets, Surgical Actions 160 and Cataract-101, we significantly improve on previous results in terms of mean average precision, 30.012 ± 1.778 vs. 22.54 ± 1.557 for Surgical Actions 160 and 81.134 ± 1.28 vs. 33.18 ± 1.311 for Cataract-101. We also validate the proposed method’s suitability for surgical phase recognition task using the benchmark Cholec80 surgical dataset, where our approach outperforms (with 90.2% accuracy) the state of the art.
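The supervised contrastive objective mentioned above pulls together embeddings of clips that share a label and pushes apart the rest. Below is a minimal sketch of such a loss over L2-normalized clip embeddings (in the style of Khosla et al.); it is not the authors' implementation, and the batch size, embedding dimension and labels are placeholders.

```python
# Minimal sketch of a supervised contrastive loss over clip embeddings.
import torch
import torch.nn.functional as F

def sup_con_loss(embeddings, labels, temperature=0.07):
    """embeddings: (N, D) unnormalized; labels: (N,) integer class ids."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                              # pairwise similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))            # exclude self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # Average the log-probability of each anchor's positives.
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)) / pos_counts
    return loss.mean()

loss = sup_con_loss(torch.randn(8, 128), torch.tensor([0, 0, 1, 1, 2, 2, 3, 3]))
print(loss.item())
```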
|
32
|
Ranking surgical skills using an attention-enhanced Siamese network with piecewise aggregated kinematic data. Int J Comput Assist Radiol Surg 2022; 17:1039-1048. [DOI: 10.1007/s11548-022-02581-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Accepted: 02/16/2022] [Indexed: 11/25/2022]
|
33
|
Deng T, Gulati S, Rodriguez W, Dawant BM, Langerman A. Automated detection of electrocautery instrument in videos of open neck procedures using YOLOv3. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2071-2074. [PMID: 34891696 DOI: 10.1109/embc46164.2021.9630961] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
With the rapid development of deep learning approaches, tremendous progress has been made in computer-assisted analysis of minimally-invasive, videoscopic surgery. However, surgery through open incisions ("open surgery"), which constitutes a much larger portion of surgical procedures performed, is rarely investigated because of the difficulty in obtaining high-quality open surgical video footage. Automated detection of surgical instruments shows promise for evaluating surgical activities, and provides a foundation for quality/safety review, education, and identification of surgical performance. In this paper, we present results using YOLOv3 to successfully identify an electrocautery surgical instrument in a library of images derived from 22 open neck procedures (an 887-image training/validation set, and a 1149-image testing set) captured using a wearable surgical camera. We show that our method effectively detects the spatial bounds of the electrocautery pencil in still images and we further demonstrate the ability of our method to detect the location of this instrument in video footage. Our work serves as the first demonstration of open surgical instrument detection using first-person video footage from a wearable camera and sets the stage for further work in this field. Clinical Relevance: Detection of instrumentation in surgical video is the necessary first step towards automating surgical task identification and skills assessment, which will be useful for surgical quality improvement and training.
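A minimal sketch of how per-frame detections can be logged from video footage, assuming a trained single-class detector is already available as a callable; the detect function, its output format and the video path are hypothetical, and no YOLOv3-specific API is shown.

```python
# Minimal sketch: log the most confident instrument detection in each video frame.
# `detect(frame)` is a hypothetical callable returning (x1, y1, x2, y2, confidence) tuples.
import cv2

def track_instrument(video_path, detect, conf_threshold=0.5):
    cap = cv2.VideoCapture(video_path)
    per_frame_boxes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        detections = [d for d in detect(frame) if d[4] >= conf_threshold]
        best = max(detections, key=lambda d: d[4]) if detections else None
        per_frame_boxes.append(best)                 # None when the instrument is absent
    cap.release()
    return per_frame_boxes
```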
|
34
|
Aspart F, Bolmgren JL, Lavanchy JL, Beldi G, Woods MS, Padoy N, Hosgor E. ClipAssistNet: bringing real-time safety feedback to operating rooms. Int J Comput Assist Radiol Surg 2021; 17:5-13. [PMID: 34297269 PMCID: PMC8739308 DOI: 10.1007/s11548-021-02441-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Accepted: 06/17/2021] [Indexed: 12/18/2022]
Abstract
Purpose Cholecystectomy is one of the most common laparoscopic procedures. A critical phase of laparoscopic cholecystectomy consists in clipping the cystic duct and artery before cutting them. Surgeons can improve clipping safety by ensuring full visibility of the clipper while enclosing the artery or the duct with the clip applier jaws. This can prevent unintentional interaction with neighboring tissues or clip misplacement. In this article, we present a novel real-time feedback mechanism to ensure safe visibility of the instrument during this critical phase. This feedback encourages surgeons to keep the tip of their clip applier visible while operating. Methods We present a new dataset of 300 laparoscopic cholecystectomy videos with frame-wise annotation of clipper tip visibility. We further present ClipAssistNet, a neural network-based image classifier which detects clipper tip visibility in single frames. ClipAssistNet ensembles predictions from 5 neural networks trained on different subsets of the dataset. Results Our model learns to classify clipper tip visibility by detecting its presence in the image. Measured on a separate test set, ClipAssistNet classifies clipper tip visibility with an AUROC of 0.9107, and 66.15% specificity at 95% sensitivity. Additionally, it can perform real-time inference (16 FPS) on an embedded computing board, which enables its deployment in operating room settings. Conclusion This work presents a new application of computer-assisted surgery for laparoscopic cholecystectomy, namely real-time feedback on adequate visibility of the clip applier. We believe this feedback can increase surgeons’ attentiveness when departing from safe visibility during the critical clipping of the cystic duct and artery. Supplementary Information: The online version contains supplementary material available at 10.1007/s11548-021-02441-x.
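A minimal sketch of the reported evaluation protocol, not the ClipAssistNet code: average the scores of an ensemble of frame classifiers, then read AUROC and the specificity reached at 95% sensitivity off the ROC curve. The labels and scores below are synthetic placeholders.

```python
# Minimal sketch: ensemble score averaging, AUROC, and specificity at 95% sensitivity.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                # 1 = clipper tip visible (synthetic)
member_scores = rng.random((5, 1000))                 # scores from 5 ensemble members
y_score = member_scores.mean(axis=0)                  # simple score averaging

fpr, tpr, _ = roc_curve(y_true, y_score)
idx = np.argmax(tpr >= 0.95)                          # first operating point at 95% sensitivity
print("AUROC:", round(roc_auc_score(y_true, y_score), 3))
print("specificity at 95% sensitivity:", round(1.0 - fpr[idx], 3))
```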
Affiliation(s)
- Florian Aspart
- Caresyntax GmbH, Komturstraße 18A, 12099, Berlin, Germany.
| | - Jon L Bolmgren
- Caresyntax GmbH, Komturstraße 18A, 12099, Berlin, Germany
| | - Joël L Lavanchy
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, 3010, Bern, Switzerland
| | - Guido Beldi
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, 3010, Bern, Switzerland
| | | | - Nicolas Padoy
- ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France
| | - Enes Hosgor
- Caresyntax GmbH, Komturstraße 18A, 12099, Berlin, Germany
| |
|