51
Keller S, Jelsma JGM, Tschan F, Sevdalis N, Löllgen RM, Creutzfeldt J, Kennedy-Metz LR, Eppich W, Semmer NK, Van Herzeele I, Härenstam KP, de Bruijne MC. Behavioral sciences applied to acute care teams: a research agenda for the years ahead by a European research network. BMC Health Serv Res 2024; 24:71. PMID: 38218788; PMCID: PMC10788034; DOI: 10.1186/s12913-024-10555-6.
Abstract
BACKGROUND Multi-disciplinary behavioral research on acute care teams has focused on understanding how teams work and on identifying behaviors characteristic of efficient and effective team performance. We aimed to define important knowledge gaps and to establish a research agenda of prioritized research questions for the years ahead in this field of applied health research. METHODS In the first step, high-priority research questions were generated by a highly specialized group of 29 experts recruited from the multinational and multidisciplinary "Behavioral Sciences applied to Acute care teams and Surgery (BSAS)" research network - a cross-European, interdisciplinary network of researchers from the social sciences and the medical field committed to understanding the role of behavioral sciences in the context of acute care teams. A consolidated list of 59 research questions was established. In the second step, 19 experts attending the 2020 BSAS annual conference quantitatively rated the importance of each research question based on four criteria - usefulness, answerability, effectiveness, and translation into practice. In the third step, during half a day of the BSAS conference, the same group of 19 experts discussed the prioritization of the research questions in three online focus group meetings and established recommendations. RESULTS The research priorities identified fell into six topics: (1) interventions to improve team process; (2) dealing with and implementing new technologies; (3) understanding and measuring team processes; (4) organizational aspects impacting teamwork; (5) training and health professions education; and (6) organizational and patient safety culture in the healthcare domain. Experts rated the first three topics as particularly relevant research priorities, and the focus groups identified specific research needs within each topic. CONCLUSIONS Based on the research priorities identified through this work within the BSAS community and the broader field of applied health sciences, we advocate for prioritizing funding in these areas.
Affiliation(s)
- Sandra Keller
- Department of Visceral Surgery and Medicine, Bern University Hospital, Bern, Switzerland.
- Department for BioMedical Research (DBMR), Bern University, Bern, Switzerland.
- Judith G M Jelsma
- Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Franziska Tschan
- Institute for Work and Organizational Psychology, University of Neuchâtel, Neuchâtel, Switzerland
- Nick Sevdalis
- Centre for Implementation Science, Health Service and Population Research Department, KCL, London, UK
- Ruth M Löllgen
- Pediatric Emergency Department, Astrid Lindgrens Children's Hospital; Karolinska University Hospital, Stockholm, Sweden
- Department of Women's and Children's Health, Karolinska Institute, Stockholm, Sweden
- Johan Creutzfeldt
- Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden
- Center for Advanced Medical Simulation and Training (CAMST), Karolinska University Hospital and Karolinska Institutet, Stockholm, Sweden
- Lauren R Kennedy-Metz
- Department of Surgery, Harvard Medical School, Boston, MA, USA
- Division of Cardiac Surgery, VA Boston Healthcare System, Boston, MA, USA
- Psychology Department, Roanoke College, Salem, VA, USA
- Walter Eppich
- Department of Medical Education & Collaborative Practice Centre, University of Melbourne, Melbourne, Australia
- Norbert K Semmer
- Department of Work Psychology, University of Bern, Bern, Switzerland
- Isabelle Van Herzeele
- Department of Thoracic and Vascular Surgery, Ghent University Hospital, Ghent, Belgium
- Karin Pukk Härenstam
- Pediatric Emergency Department, Astrid Lindgrens Children's Hospital; Karolinska University Hospital, Stockholm, Sweden
- Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Stockholm, Sweden
- Martine C de Bruijne
- Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
52
Balu A, Kugener G, Pangal DJ, Lee H, Lasky S, Han J, Buchanan I, Liu J, Zada G, Donoho DA. Simulated outcomes for durotomy repair in minimally invasive spine surgery. Sci Data 2024; 11:62. PMID: 38200013; PMCID: PMC10781746; DOI: 10.1038/s41597-023-02744-5.
Abstract
Minimally invasive spine surgery (MISS) is increasingly performed using endoscopic and microscopic visualization, and the captured video can be used for surgical education and development of predictive artificial intelligence (AI) models. Video datasets depicting adverse event management are also valuable, as predictive models not exposed to adverse events may exhibit poor performance when these occur. Given that no dedicated spine surgery video datasets for AI model development are publicly available, we introduce Simulated Outcomes for Durotomy Repair in Minimally Invasive Spine Surgery (SOSpine). A validated MISS cadaveric dural repair simulator was used to educate neurosurgery residents, and surgical microscope video recordings were paired with outcome data. Objects including durotomy, needle, grasper, needle driver, and nerve hook were then annotated. Altogether, SOSpine contains 15,698 frames with 53,238 annotations and associated durotomy repair outcomes. For validation, an AI model was fine-tuned on SOSpine video and detected surgical instruments with a mean average precision of 0.77. In summary, SOSpine depicts spine surgeons managing a common complication, providing opportunities to develop surgical AI models.
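The detection benchmark described above maps naturally onto standard object-detection tooling. As a minimal sketch, the Python snippet below shows how a pretrained detector could be adapted to the five annotated object types; the class list ordering and the choice of Faster R-CNN are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: adapting an off-the-shelf detector to SOSpine-style
# labels (durotomy, needle, grasper, needle driver, nerve hook).
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

CLASSES = ["__background__", "durotomy", "needle", "grasper",
           "needle_driver", "nerve_hook"]

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
# Swap the classification head for our class count; backbone weights are kept.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(CLASSES))
```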
Affiliation(s)
- Alan Balu
- Department of Neurosurgery, Georgetown University School of Medicine, 3900 Reservoir Rd NW, Washington, D.C., 20007, USA.
- Guillaume Kugener
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- Dhiraj J Pangal
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- Heewon Lee
- University of Southern California, 3709 Trousdale Pkwy., Los Angeles, CA, 90089, USA
- Sasha Lasky
- University of Southern California, 3709 Trousdale Pkwy., Los Angeles, CA, 90089, USA
- Jane Han
- University of Southern California, 3709 Trousdale Pkwy., Los Angeles, CA, 90089, USA
- Ian Buchanan
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- John Liu
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- Gabriel Zada
- Department of Neurological Surgery, Keck School of Medicine of University of Southern California, 1200 North State St., Suite 3300, Los Angeles, CA, 90033, USA
- Daniel A Donoho
- Department of Neurosurgery, Children's National Hospital, 111 Michigan Avenue NW, Washington, DC, 20010, USA
53
Mascagni P, Alapatt D, Lapergola A, Vardazaryan A, Mazellier JP, Dallemagne B, Mutter D, Padoy N. Early-stage clinical evaluation of real-time artificial intelligence assistance for laparoscopic cholecystectomy. Br J Surg 2024; 111:znad353. PMID: 37935636; DOI: 10.1093/bjs/znad353.
Abstract
Lay Summary
The growing availability of surgical digital data and developments in analytics such as artificial intelligence (AI) are being harnessed to improve surgical care. However, technical and cultural barriers to real-time intraoperative AI assistance remain. This early-stage clinical evaluation demonstrates the technical feasibility of concurrently deploying several AI models in operating rooms for real-time assistance during procedures. In addition, potentially relevant clinical applications of these AI models are explored with a multidisciplinary cohort of key stakeholders.
Affiliation(s)
- Pietro Mascagni
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, Strasbourg, France
- Department of Medical and Abdominal Surgery and Endocrine-Metabolic Science, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Deepak Alapatt
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, Strasbourg, France
- Alfonso Lapergola
- Department of Digestive and Endocrine Surgery, Nouvel Hôpital Civil, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
- Bernard Dallemagne
- Institute for Research against Digestive Cancer (IRCAD), Strasbourg, France
- Didier Mutter
- Department of Digestive and Endocrine Surgery, Nouvel Hôpital Civil, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
- Institute of Image-Guided Surgery, IHU-Strasbourg, Strasbourg, France
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, IHU Strasbourg, Strasbourg, France
- Institute of Image-Guided Surgery, IHU-Strasbourg, Strasbourg, France
54
Daneshgar Rahbar M, Mousavi Mojab SZ. Enhanced U-Net with GridMask (EUGNet): A Novel Approach for Robotic Surgical Tool Segmentation. J Imaging 2023; 9:282. PMID: 38132700; PMCID: PMC10744415; DOI: 10.3390/jimaging9120282.
Abstract
This study introduces an enhanced U-Net with GridMask (EUGNet), an image augmentation approach focused on pixel manipulation that addresses known limitations of U-Net. EUGNet features a deep contextual encoder, residual connections, a class-balancing loss, adaptive feature fusion, a GridMask augmentation module, an efficient implementation, and multi-modal fusion. These innovations enhance segmentation accuracy and robustness, making the framework well suited to medical image analysis. The GridMask algorithm is detailed, demonstrating its distinct approach to pixel elimination, which enhances model adaptability to occlusions and local features. A comprehensive dataset of robotic surgical scenarios and instruments is used for evaluation, showcasing the framework's robustness. Specifically, GridMask augmentation improves balanced accuracy for the foreground by 1.6 percentage points, intersection over union (IoU) by 1.7 points, and mean Dice similarity coefficient (DSC) by 1.7 points. Inference time, a critical factor in real-time applications, also decreases from 0.163 ms per frame for the U-Net without GridMask to 0.097 ms for the U-Net with GridMask.
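To make the augmentation concrete, here is a minimal Python sketch of a GridMask-style transform that zeroes out a regular grid of square patches so the model cannot rely on any single local region. The grid unit and keep ratio are illustrative defaults, not the paper's settings.

```python
# Minimal GridMask-style augmentation sketch (assumed parameters).
import numpy as np

def gridmask(image: np.ndarray, unit: int = 32, ratio: float = 0.5) -> np.ndarray:
    """Apply a GridMask pattern to an HxWxC image."""
    h, w = image.shape[:2]
    mask = np.ones((h, w), dtype=image.dtype)
    block = int(unit * ratio)  # side length of each masked square
    for y in range(0, h, unit):
        for x in range(0, w, unit):
            mask[y:y + block, x:x + block] = 0  # drop this patch
    return image * mask[..., None]
```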
Affiliation(s)
- Mostafa Daneshgar Rahbar
- Department of Electrical and Computer Engineering, Lawrence Technological University, Southfield, MI 48075, USA
55
Mosca V, Fuschillo G, Sciaudone G, Sahnan K, Selvaggi F, Pellino G. Use of artificial intelligence in total mesorectal excision in rectal cancer surgery: State of the art and perspectives. Artif Intell Gastroenterol 2023; 4:64-71. DOI: 10.35712/aig.v4.i3.64.
Abstract
BACKGROUND Colorectal cancer is a major public health problem, with 1.9 million new cases and 953,000 deaths worldwide in 2020. Total mesorectal excision (TME) is the standard of care for rectal cancer and is crucial to prevent local recurrence, but it is a technically challenging operation. The use of artificial intelligence (AI) could help improve the performance and safety of TME.
AIM To review the literature on the use of AI and machine learning in rectal surgery and potential future developments.
METHODS Online scientific databases were searched for articles on the use of AI in rectal cancer surgery between 2020 and 2023.
RESULTS The literature search yielded 876 results, of which 13 studies were selected for review. The use of AI in rectal cancer surgery, and specifically in TME, is a rapidly evolving field. A number of AI algorithms have been developed for TME, including algorithms for instrument detection and anatomical structure identification, as well as image-guided navigation systems.
CONCLUSION AI has the potential to revolutionize TME surgery by providing real-time surgical guidance, preventing complications, and improving training. However, further research is needed to fully understand the benefits and risks of AI in TME surgery.
Affiliation(s)
- Vinicio Mosca
- Department of Advanced Medical and Surgical Sciences, Università degli Studi della Campania “Luigi Vanvitelli”, Napoli 80138, Italy
- Giacomo Fuschillo
- Department of Advanced Medical and Surgical Sciences, Università degli Studi della Campania “Luigi Vanvitelli”, Napoli 80138, Italy
- Guido Sciaudone
- Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, Campobasso 86100, Italy
- Kapil Sahnan
- Department of Colorectal Surgery, St Mark’s Hospital, London HA1 3UJ, United Kingdom
- Department of Surgery and Cancer, Imperial College London, London SW7 5NH, United Kingdom
- Francesco Selvaggi
- Department of Advanced Medical and Surgical Sciences, Università degli Studi della Campania “Luigi Vanvitelli”, Napoli 80138, Italy
- Gianluca Pellino
- Department of Advanced Medical and Surgical Sciences, Università degli Studi della Campania “Luigi Vanvitelli”, Napoli 80138, Italy
- Colorectal Surgery, Vall d’Hebron University Hospital, Barcelona 08035, Spain
56
Eckhoff JA, Meireles O. Could Artificial Intelligence guide surgeons' hands? Rev Col Bras Cir 2023; 50:e20233696EDIT01. PMID: 38088637; PMCID: PMC10668586; DOI: 10.1590/0100-6991e-20233696edit01-en.
Affiliation(s)
- Jennifer A. Eckhoff
- Harvard Medical School, Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Boston, MA, USA
- Ozanan Meireles
- Harvard Medical School, Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Boston, MA, USA
57
Burton W, Myers C, Rutherford M, Rullkoetter P. Evaluation of single-stage vision models for pose estimation of surgical instruments. Int J Comput Assist Radiol Surg 2023; 18:2125-2142. PMID: 37120481; DOI: 10.1007/s11548-023-02890-6.
Abstract
PURPOSE Multiple applications in open surgical environments may benefit from the adoption of markerless computer vision, depending on the associated speed and accuracy requirements. The current work evaluates vision models for 6-degree-of-freedom (6-DoF) pose estimation of surgical instruments in RGB scenes. Potential use cases are discussed based on observed performance. METHODS Convolutional neural networks (CNNs) were developed with simulated training data for 6-DoF pose estimation of a representative surgical instrument in RGB scenes. Trained models were evaluated with simulated and real-world scenes. Real-world scenes were produced by using a robotic manipulator to procedurally generate a wide range of object poses. RESULTS CNNs trained in simulation transferred to real-world evaluation scenes with a mild decrease in pose accuracy. Model performance was sensitive to input image resolution and orientation prediction format. The most accurate model demonstrated a mean in-plane translation error of 13 mm and a mean long-axis orientation error of 5° in simulated evaluation scenes. Similar errors of 29 mm and 8° were observed in real-world scenes. CONCLUSION 6-DoF pose estimators can predict object pose in RGB scenes at real-time inference speeds. The observed pose accuracy suggests that applications such as coarse-grained guidance, surgical skill evaluation, or instrument tracking for tray optimization may benefit from markerless pose estimation.
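For concreteness, the two error metrics quoted above can be computed roughly as follows. The conventions here (which axes count as "in-plane", treating the long axis as undirected) are assumptions for illustration and may differ from the paper's exact definitions.

```python
# Sketch of the pose-error metrics, under assumed conventions.
import numpy as np

def in_plane_translation_error(t_pred: np.ndarray, t_true: np.ndarray) -> float:
    """Euclidean error over the first two translation components (e.g. mm)."""
    return float(np.linalg.norm(t_pred[:2] - t_true[:2]))

def long_axis_error_deg(axis_pred: np.ndarray, axis_true: np.ndarray) -> float:
    """Angle in degrees between predicted and true instrument long axes."""
    cos = np.dot(axis_pred, axis_true) / (
        np.linalg.norm(axis_pred) * np.linalg.norm(axis_true))
    # abs() treats the axis as undirected; clip guards against rounding.
    return float(np.degrees(np.arccos(np.clip(abs(cos), -1.0, 1.0))))
```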
Affiliation(s)
- William Burton
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80210, USA.
- Casey Myers
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80210, USA
- Matthew Rutherford
- Unmanned Systems Research Institute, University of Denver, 2155 E Wesley Ave, Denver, CO, 80210, USA
- Paul Rullkoetter
- Center for Orthopaedic Biomechanics, University of Denver, 2155 E Wesley Ave, Denver, CO, 80210, USA
58
Sahari Y, Al-Kadi AMT, Ali JKM. A Cross Sectional Study of ChatGPT in Translation: Magnitude of Use, Attitudes, and Uncertainties. J Psycholinguist Res 2023; 52:2937-2954. PMID: 37934302; DOI: 10.1007/s10936-023-10031-y.
Abstract
This preliminary cross-sectional study, focusing on Artificial Intelligence (AI), aimed to assess the impact of ChatGPT on translation within an Arab context. It primarily explored the attitudes of a sample of translation teachers and students through semi-structured interviews and projective techniques. Data collection included gathering information about the advantages and challenges that ChatGPT, in comparison to Google Translate, had introduced to the field of translation and translation teaching. The results indicated that nearly all the participants were satisfied with ChatGPT. The results also revealed that most students preferred ChatGPT over Google Translate, while most teachers favored Google Translate. The study also found that the participants recognized both positive and negative aspects of using ChatGPT in translation. Findings also indicated that ChatGPT, as a recent AI-based translation-related technology, is more valuable for mechanical processes of writing and editing translated texts than for tasks requiring judgment, such as fine-tuning and double-checking. While it offers various advantages, AI also presents new challenges that educators and stakeholders need to address accordingly.
59
Demir KC, Schieber H, Weise T, Roth D, May M, Maier A, Yang SH. Deep Learning in Surgical Workflow Analysis: A Review of Phase and Step Recognition. IEEE J Biomed Health Inform 2023; 27:5405-5417. PMID: 37665700; DOI: 10.1109/jbhi.2023.3311628.
Abstract
OBJECTIVE In the last two decades, there has been growing interest in exploring surgical procedures with statistical models to analyze operations at different semantic levels. This information is necessary for developing context-aware intelligent systems that can assist physicians during operations, evaluate procedures afterward, or help the management team use the operating room effectively. The objective is to extract reliable patterns from surgical data for the robust estimation of surgical activities performed during operations. The purpose of this article is to review state-of-the-art deep learning methods published after 2018 for analyzing surgical workflows, with a focus on phase and step recognition. METHODS Three databases, IEEE Xplore, Scopus, and PubMed, were searched, and additional studies were added through a manual search. After the database search, 343 studies were screened and a total of 44 studies were selected for this review. CONCLUSION The use of temporal information is essential for identifying the next surgical action. Contemporary methods mainly used RNNs, hierarchical CNNs, and Transformers to preserve long-range temporal relations. The lack of large publicly available datasets for various procedures is a great challenge for the development of new and robust models. While supervised learning strategies are used to show proof of concept, self-supervised, semi-supervised, or active learning methods are used to mitigate dependency on annotated data. SIGNIFICANCE The present study provides a comprehensive review of recent methods in surgical workflow analysis, summarizes commonly used architectures and datasets, and discusses challenges.
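As a minimal illustration of the feature-then-temporal-model pattern that dominates the reviewed work, the following PyTorch sketch stacks a recurrent layer on precomputed per-frame features. The dimensions, the LSTM choice, and the seven-phase output are illustrative assumptions only; the reviewed methods vary widely.

```python
# Sketch: per-frame CNN features -> temporal model -> per-frame phase logits.
import torch
import torch.nn as nn

class PhaseRecognizer(nn.Module):
    def __init__(self, feat_dim: int = 512, n_phases: int = 7):
        super().__init__()
        # Temporal encoder over a sequence of per-frame feature vectors.
        self.temporal = nn.LSTM(feat_dim, 256, batch_first=True)
        self.head = nn.Linear(256, n_phases)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim) from a frozen CNN backbone.
        out, _ = self.temporal(feats)
        return self.head(out)  # (batch, time, n_phases)

logits = PhaseRecognizer()(torch.randn(1, 100, 512))  # shape (1, 100, 7)
```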
60
Brandenburg JM, Jenke AC, Stern A, Daum MTJ, Schulze A, Younis R, Petrynowski P, Davitashvili T, Vanat V, Bhasker N, Schneider S, Mündermann L, Reinke A, Kolbinger FR, Jörns V, Fritz-Kebede F, Dugas M, Maier-Hein L, Klotz R, Distler M, Weitz J, Müller-Stich BP, Speidel S, Bodenstedt S, Wagner M. Active learning for extracting surgomic features in robot-assisted minimally invasive esophagectomy: a prospective annotation study. Surg Endosc 2023; 37:8577-8593. PMID: 37833509; PMCID: PMC10615926; DOI: 10.1007/s00464-023-10447-6.
Abstract
BACKGROUND With Surgomics, we aim for personalized prediction of a patient's surgical outcome by applying machine learning (ML) to multimodal intraoperative data to extract surgomic features as surgical process characteristics. As high-quality annotations by medical experts are crucial but remain a bottleneck, we prospectively investigate active learning (AL) to reduce annotation effort and present automatic recognition of surgomic features. METHODS To establish a process for the development of surgomic features, ten video-based features related to bleeding, a highly relevant intraoperative complication, were chosen. They comprise the amount of blood and smoke in the surgical field, six instruments, and two anatomic structures. Annotation of selected frames from robot-assisted minimally invasive esophagectomies was performed by at least three independent medical experts. To test whether AL reduces annotation effort, we performed a prospective annotation study comparing AL with equidistant sampling (EQS) for frame selection. Multiple Bayesian ResNet18 architectures were trained on a multicentric dataset consisting of 22 videos from two centers. RESULTS In total, 14,004 frames were tag annotated. A mean F1-score of 0.75 ± 0.16 was achieved across all features. The highest F1-score was achieved for the instruments (mean 0.80 ± 0.17). This result is also reflected in the inter-rater agreement (1-rater-kappa > 0.82). Compared to EQS, AL showed better recognition results for the instruments, with a significant difference in the McNemar test comparing the correctness of predictions. Moreover, in contrast to EQS, AL selected more frames of the four less common instruments (1512 vs. 607 frames) and achieved higher F1-scores for common instruments while requiring fewer training frames. CONCLUSION We presented ten surgomic features relevant for bleeding events in esophageal surgery, automatically extracted from surgical video using ML. AL showed the potential to reduce annotation effort while keeping ML performance high for selected features. The source code and the trained models are published open source.
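A common way to implement the AL arm of such a study is uncertainty sampling over the unlabeled pool. The sketch below selects frames by predictive entropy; the criterion and batch size are illustrative, and in the study the probabilities would come from the Bayesian ResNet18 models rather than this placeholder.

```python
# Sketch: entropy-based frame selection for the next annotation round.
import numpy as np

def select_frames_for_annotation(probs: np.ndarray, k: int = 100) -> np.ndarray:
    """probs: (n_frames, n_classes) predicted class probabilities per
    unlabeled frame. Returns indices of the k most uncertain frames."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-k:][::-1]  # highest entropy first
```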
Affiliation(s)
- Johanna M Brandenburg
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Alexander C Jenke
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Antonia Stern
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Marie T J Daum
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- André Schulze
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Rayan Younis
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Philipp Petrynowski
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Tornike Davitashvili
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Vincent Vanat
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Nithya Bhasker
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Sophia Schneider
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Lars Mündermann
- Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Annika Reinke
- Department of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Fiona R Kolbinger
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Else Kröner-Fresenius Center for Digital Health, Technische Universität Dresden, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Vanessa Jörns
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Fleur Fritz-Kebede
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Martin Dugas
- Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Lena Maier-Hein
- Department of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Rosa Klotz
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- The Study Center of the German Surgical Society (SDGC), Heidelberg University Hospital, Heidelberg, Germany
- Marius Distler
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Jürgen Weitz
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Centre for Tactile Internet With Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
- Beat P Müller-Stich
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- University Center for Gastrointestinal and Liver Diseases, St. Clara Hospital and University Hospital Basel, Basel, Switzerland
- Stefanie Speidel
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Centre for Tactile Internet With Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
- Sebastian Bodenstedt
- Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Centre for Tactile Internet With Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
- Martin Wagner
- Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Centre for Tactile Internet With Human-in-the-Loop (CeTI), Technische Universität Dresden, 01062, Dresden, Germany
61
Eckhoff JA, Rosman G, Altieri MS, Speidel S, Stoyanov D, Anvari M, Meier-Hein L, März K, Jannin P, Pugh C, Wagner M, Witkowski E, Shaw P, Madani A, Ban Y, Ward T, Filicori F, Padoy N, Talamini M, Meireles OR. SAGES consensus recommendations on surgical video data use, structure, and exploration (for research in artificial intelligence, clinical quality improvement, and surgical education). Surg Endosc 2023; 37:8690-8707. PMID: 37516693; PMCID: PMC10616217; DOI: 10.1007/s00464-023-10288-3.
Abstract
BACKGROUND Surgery generates a vast amount of data from each procedure. Video data in particular provide significant value for surgical research, clinical outcome assessment, quality control, and education. The data lifecycle is influenced by various factors, including data structure, acquisition, storage, and sharing; data use and exploration; and finally data governance, which encompasses all ethical and legal regulations associated with the data. There is a universal need among stakeholders in surgical data science to establish standardized frameworks that address all aspects of this lifecycle to ensure data quality and purpose. METHODS Working groups were formed among 48 representatives from academia and industry, including clinicians and computer scientists. These working groups focused on Data Use, Data Structure, Data Exploration, and Data Governance. After working group and panel discussions, a modified Delphi process was conducted. RESULTS The resulting Delphi consensus provides conceptualized and structured recommendations for each domain related to surgical video data. We identified the key stakeholders within the data lifecycle and formulated comprehensive, easily understandable, and widely applicable guidelines for data utilization. Standardization of data structure should encompass format and quality, data sources, documentation, and metadata, and should account for biases within the data. To foster scientific data exploration, datasets should reflect diversity and remain adaptable to future applications. Data governance must be transparent to all stakeholders, addressing the legal and ethical considerations surrounding the data. CONCLUSION This consensus presents essential recommendations for the generation of standardized and diverse surgical video databanks, accounting for the multiple stakeholders involved in data generation and use throughout its lifecycle. Following the SAGES annotation framework, we lay the foundation for the standardization of data use, structure, and exploration. A detailed exploration of requirements for adequate data governance will follow.
Affiliation(s)
- Jennifer A Eckhoff
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA.
- Department of General, Visceral, Tumor and Transplant Surgery, University Hospital Cologne, Kerpenerstrasse 62, 50937, Cologne, Germany.
- Guy Rosman
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
- Maria S Altieri
- Stony Brook University Hospital, Washington University in St. Louis, 101 Nicolls Rd, Stony Brook, NY, 11794, USA
- Stefanie Speidel
- National Center for Tumor Diseases (NCT), Fiedlerstraße 23, 01307, Dresden, Germany
- Danail Stoyanov
- University College London, 43-45 Foley Street, London, W1W 7TY, UK
- Mehran Anvari
- Center for Surgical Invention and Innovation, Department of Surgery, McMaster University, Hamilton, ON, Canada
- Lena Meier-Hein
- German Cancer Research Center, Deutsches Krebsforschungszentrum (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Keno März
- German Cancer Research Center, Deutsches Krebsforschungszentrum (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Pierre Jannin
- MediCIS, University of Rennes - Campus Beaulieu, 2 Av. du Professeur Léon Bernard, 35043, Rennes, France
- Carla Pugh
- Department of Surgery, Stanford School of Medicine, 291 Campus Drive, Stanford, CA, 94305, USA
- Martin Wagner
- Department of Surgery, University Hospital Heidelberg, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Elan Witkowski
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- Paresh Shaw
- New York University Langone, 530 1St Ave. Floor 12, New York, NY, 10016, USA
- Amin Madani
- Surgical Artificial Intelligence Research Academy, Department of Surgery, University Health Network, Toronto, ON, Canada
- Yutong Ban
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
- Thomas Ward
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
- Filippo Filicori
- Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, New York, NY, USA
- Nicolas Padoy
- IHU Strasbourg - Institute of Image-Guided Surgery, 1 Pl. de L'Hôpital, 67000, Strasbourg, France
- Mark Talamini
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Ozanan R Meireles
- Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC339, Boston, MA, 02114, USA
62
Cao J, Yip HC, Chen Y, Scheppach M, Luo X, Yang H, Cheng MK, Long Y, Jin Y, Chiu PWY, Yam Y, Meng HML, Dou Q. Intelligent surgical workflow recognition for endoscopic submucosal dissection with real-time animal study. Nat Commun 2023; 14:6676. PMID: 37865629; PMCID: PMC10590425; DOI: 10.1038/s41467-023-42451-8.
Abstract
Recent advancements in artificial intelligence have achieved human-level performance on some tasks; however, AI-enabled cognitive assistance for therapeutic procedures has not been fully explored nor pre-clinically validated. Here we propose AI-Endo, an intelligent surgical workflow recognition suite for endoscopic submucosal dissection (ESD). AI-Endo is trained on high-quality ESD cases from an expert endoscopist, spanning a decade and consisting of 201,026 labeled frames. The learned model demonstrates outstanding performance on validation data, including cases from relatively junior endoscopists of various skill levels, procedures conducted with different endoscopy systems and therapeutic skills, and cohorts from international multi-centers. Furthermore, we integrate AI-Endo with the Olympus endoscopic system and validate the AI-enabled cognitive assistance system with animal studies in live ESD training sessions. Dedicated data analysis of the surgical phase recognition results is summarized in an automatically generated report for skill assessment.
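The automatically generated report described above ultimately reduces to simple statistics over per-frame phase predictions. A minimal sketch of that reduction follows; the phase names and frame rate are invented for illustration and are not AI-Endo's actual report format.

```python
# Sketch: per-frame phase labels -> per-phase durations for a skill report.
import numpy as np

def phase_durations(pred: np.ndarray, fps: float, names: list) -> dict:
    """pred: (n_frames,) integer phase labels for one procedure.
    Returns seconds spent in each phase."""
    counts = np.bincount(pred, minlength=len(names))
    return {name: counts[i] / fps for i, name in enumerate(names)}

report = phase_durations(np.array([0, 0, 1, 1, 1, 2]), fps=1.0,
                         names=["marking", "injection", "dissection"])
```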
Affiliation(s)
- Jianfeng Cao
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hon-Chi Yip
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong, China
- Yueyao Chen
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Markus Scheppach
- Internal Medicine III-Gastroenterology, University Hospital of Augsburg, Augsburg, Germany
- Xiaobei Luo
- Guangdong Provincial Key Laboratory of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Hongzheng Yang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Ming Kit Cheng
- Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yonghao Long
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yueming Jin
- Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore
- Philip Wai-Yan Chiu
- Multi-scale Medical Robotics Center and The Chinese University of Hong Kong, Hong Kong, China
- Yeung Yam
- Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Multi-scale Medical Robotics Center and The Chinese University of Hong Kong, Hong Kong, China
- Centre for Perceptual and Interactive Intelligence and The Chinese University of Hong Kong, Hong Kong, China
- Helen Mei-Ling Meng
- Centre for Perceptual and Interactive Intelligence and The Chinese University of Hong Kong, Hong Kong, China
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
63
Haouchine N, Dorent R, Juvekar P, Torio E, Wells WM, Kapur T, Golby AJ, Frisken S. Learning Expected Appearances for Intraoperative Registration during Neurosurgery. Med Image Comput Comput Assist Interv 2023; 14228:227-237. PMID: 38371724; PMCID: PMC10870253; DOI: 10.1007/978-3-031-43996-4_22.
Abstract
We present a novel method for intraoperative patient-to-image registration by learning Expected Appearances. Our method uses preoperative imaging to synthesize patient-specific expected views through a surgical microscope for a predicted range of transformations. It then estimates the camera pose by minimizing the dissimilarity between the intraoperative 2D view through the optical microscope and the synthesized expected texture. In contrast to conventional methods, our approach transfers the processing tasks to the preoperative stage, thereby reducing the impact of the low-resolution, distorted, and noisy intraoperative images that often degrade registration accuracy. We applied our method in the context of neuronavigation during brain surgery and evaluated it on synthetic data and on retrospective data from six clinical cases. Our method outperformed state-of-the-art methods and achieved accuracies that met current clinical standards.
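The core idea, matching the live microscope view against precomputed expected appearances, can be illustrated by a brute-force search over a pose grid. The paper learns this mapping rather than searching exhaustively, so the Python sketch below, scored with normalized cross-correlation, is only a conceptual stand-in.

```python
# Conceptual stand-in: pick the candidate pose whose precomputed expected
# appearance best matches the live view (the actual method learns this).
import numpy as np

def register(live_view: np.ndarray, expected_views: dict):
    """expected_views: {pose_params: synthesized HxW grayscale image}."""
    def ncc(a, b):  # normalized cross-correlation as a similarity measure
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        return float((a * b).mean())
    # Return the pose parameters of the most similar synthesized view.
    return max(expected_views.items(), key=lambda kv: ncc(live_view, kv[1]))[0]
```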
Affiliation(s)
- Nazim Haouchine
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Reuben Dorent
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Parikshit Juvekar
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Erickson Torio
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- William M Wells
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Tina Kapur
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Alexandra J Golby
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Sarah Frisken
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
64
Li Y, Jia S, Song G, Wang P, Jia F. SDA-CLIP: surgical visual domain adaptation using video and text labels. Quant Imaging Med Surg 2023; 13:6989-7001. PMID: 37869278; PMCID: PMC10585553; DOI: 10.21037/qims-23-376.
Abstract
Background Surgical action recognition is an essential technology for context-aware autonomous surgery, but its accuracy is limited by the scale of clinical datasets. Leveraging surgical videos from virtual reality (VR) simulations to develop algorithms for application in the clinical domain, a strategy known as domain adaptation, can effectively reduce the cost of data acquisition and annotation and protect patient privacy. Methods We introduce a surgical domain adaptation method based on the contrastive language-image pretraining model (SDA-CLIP) to recognize cross-domain surgical actions. Specifically, we utilize the Vision Transformer (ViT) and the Transformer to extract video and text embeddings, respectively. Text embeddings serve as a bridge between the VR and clinical domains. Inter- and intra-modality loss functions were employed to enhance the consistency of embeddings of the same class. We further evaluated our method on the MICCAI 2020 EndoVis Challenge SurgVisDom dataset. Results SDA-CLIP achieved a weighted F1-score of 65.9% (+18.9%) on the hard domain adaptation task (trained only with VR data) and 84.4% (+4.4%) on the soft domain adaptation task (trained with VR and clinical-like data), outperforming the first-place team of the challenge by a significant margin. Conclusions The proposed SDA-CLIP model can effectively extract video scene information and textual semantic information, which greatly improves the performance of cross-domain surgical action recognition. The code is available at https://github.com/Lycus99/SDA-CLIP.
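CLIP-style inter-modality objectives are typically symmetric InfoNCE losses over paired embeddings. The sketch below shows that generic loss; the temperature, shapes, and pairing convention are illustrative, not SDA-CLIP's exact formulation.

```python
# Sketch of a symmetric CLIP-style contrastive loss over paired
# video/text embeddings (assumed temperature and batch pairing).
import torch
import torch.nn.functional as F

def clip_contrastive_loss(video_emb: torch.Tensor, text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    v = F.normalize(video_emb, dim=-1)   # (batch, dim)
    t = F.normalize(text_emb, dim=-1)    # (batch, dim)
    logits = v @ t.T / temperature       # pairwise similarities
    targets = torch.arange(v.size(0), device=v.device)  # matched pairs
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
```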
Affiliation(s)
- Yuchong Li
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Shuangfu Jia
- Department of Operating Room, Hejian People’s Hospital, Hejian, China
- Guangbi Song
- Medical Imaging Center, Luoping County People’s Hospital, Qujing, China
- Ping Wang
- Department of Hepatobiliary Surgery, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Fucang Jia
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Pazhou Lab, Guangzhou, China
65
Fischer E, Jawed KJ, Cleary K, Balu A, Donoho A, Thompson Gestrich W, Donoho DA. A methodology for the annotation of surgical videos for supervised machine learning applications. Int J Comput Assist Radiol Surg 2023; 18:1673-1678. PMID: 37245179; DOI: 10.1007/s11548-023-02923-0.
Abstract
PURPOSE Surgical data science is an emerging field focused on the quantitative analysis of pre-, intra-, and postoperative patient data (Maier-Hein et al. in Med Image Anal 76:102306, 2022). Data science approaches can decompose complex procedures, train surgical novices, assess the outcomes of actions, and create predictive models of surgical outcomes (Marcus et al. in Pituitary 24:839-853, 2021; Rädsch et al. in Nat Mach Intell, 2022). Surgical videos contain powerful signals of events that may impact patient outcomes. A necessary step before the deployment of supervised machine learning methods is the development of labels for objects and anatomy. We describe a complete method for annotating videos of transsphenoidal surgery. METHODS Endoscopic video recordings of transsphenoidal pituitary tumor removal surgeries were collected from a multicenter research collaborative. These videos were anonymized, stored in a cloud-based platform, and uploaded to an online annotation platform. An annotation framework was developed based on a literature review and surgical observations to ensure a proper understanding of the tools, anatomy, and steps present. A user guide was developed to train annotators and ensure standardization. RESULTS A fully annotated video of a transsphenoidal pituitary tumor removal surgery, comprising 129,826 frames, was produced. To prevent missing annotations, all frames were later reviewed by highly experienced annotators and a surgeon reviewer. Iterations on the annotated video allowed the creation of an annotated video complete with labeled surgical tools, anatomy, and phases. In addition, a user guide was developed for training novice annotators, providing information about the annotation software to ensure the production of standardized annotations. CONCLUSIONS A standardized and reproducible workflow for managing surgical video data is a necessary prerequisite for surgical data science applications. We developed a standard methodology for annotating surgical videos that may facilitate the quantitative analysis of videos using machine learning applications. Future work will demonstrate the clinical relevance and impact of this workflow by developing process modeling and outcome predictors.
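To give a concrete sense of what such an annotation framework produces, one frame-level record might combine phase, tool, and anatomy labels as in the sketch below. All field names and label values here are hypothetical stand-ins, not the collaborative's actual schema.

```python
# Hypothetical frame-level annotation record in a COCO-like layout.
annotation = {
    "video_id": "transsphenoidal_case_001",   # anonymized case identifier
    "frame_index": 51234,
    "phase": "tumor_resection",                # surgical phase label
    "objects": [
        {"label": "ring_curette", "category": "tool",
         "bbox_xywh": [412, 188, 96, 220]},
        {"label": "sella", "category": "anatomy",
         "bbox_xywh": [300, 140, 340, 280]},
    ],
    "annotator": "trained_annotator_03",
    "reviewed_by": "surgeon_reviewer_01",      # second-pass review step
}
```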
Affiliation(s)
- Elizabeth Fischer
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA.
- Kochai Jan Jawed
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- Kevin Cleary
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- Alan Balu
- Division of Neurosurgery, Center for Neuroscience and Behavioral Medicine, Children's National Hospital, Washington, DC, USA
- Georgetown University School of Medicine, Washington, DC, USA
- Daniel A Donoho
- Georgetown University School of Medicine, Washington, DC, USA
- Department of Neurosurgery and Pediatrics, School of Medicine and Health Sciences, George Washington University, Washington, DC, USA
66
Varas J, Coronel BV, Villagrán I, Escalona G, Hernandez R, Schuit G, Durán V, Lagos-Villaseca A, Jarry C, Neyem A, Achurra P. Innovations in surgical training: exploring the role of artificial intelligence and large language models (LLM). Rev Col Bras Cir 2023; 50:e20233605. PMID: 37646729; PMCID: PMC10508667; DOI: 10.1590/0100-6991e-20233605-en.
Abstract
The landscape of surgical training is rapidly evolving with the advent of artificial intelligence (AI) and its integration into education and simulation. This manuscript aims to explore the potential applications and benefits of AI-assisted surgical training, particularly the use of large language models (LLMs), in enhancing communication, personalizing feedback, and promoting skill development. We discuss the advancements in simulation-based training, AI-driven assessment tools, video-based assessment systems, virtual reality (VR) and augmented reality (AR) platforms, and the potential role of LLMs in the transcription, translation, and summarization of feedback. Despite the promising opportunities presented by AI integration, several challenges must be addressed, including accuracy and reliability, ethical and privacy concerns, bias in AI models, integration with existing training systems, and training and adoption of AI-assisted tools. By proactively addressing these challenges and harnessing the potential of AI, the future of surgical training may be reshaped to provide a more comprehensive, safe, and effective learning experience for trainees, ultimately leading to better patient outcomes.
Affiliation(s)
- Julian Varas
- Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Brandon Valencia Coronel
- Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Ignacio Villagrán
- Pontificia Universidad Católica de Chile, Carrera de Kinesiología, Departamento de Ciencias de la Salud, Facultad de Medicina, Santiago, Región Metropolitana, Chile
- Gabriel Escalona
- Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Rocio Hernandez
- Pontificia Universidad Católica de Chile, Computer Science Department, School of Engineering, Santiago, Región Metropolitana, Chile
- Gregory Schuit
- Pontificia Universidad Católica de Chile, Computer Science Department, School of Engineering, Santiago, Región Metropolitana, Chile
- Valentina Durán
- Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Antonia Lagos-Villaseca
- Pontificia Universidad Católica de Chile, Department of Otolaryngology, Santiago, Región Metropolitana, Chile
- Cristian Jarry
- Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Andres Neyem
- Pontificia Universidad Católica de Chile, Computer Science Department, School of Engineering, Santiago, Región Metropolitana, Chile
- Pablo Achurra
- Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
67
Tranter-Entwistle I, Simcock C, Eglinton T, Connor S. Prospective cohort study of operative outcomes in laparoscopic cholecystectomy using operative difficulty grade-adjusted CUSUM analysis. Br J Surg 2023; 110:1068-1071. PMID: 36882185; PMCID: PMC10416680; DOI: 10.1093/bjs/znad046.
Affiliation(s)
- Corin Simcock
- Department of Surgery, The University of Otago Medical School, Christchurch, New Zealand
- Tim Eglinton
- Department of Surgery, The University of Otago Medical School, Christchurch, New Zealand
- Department of General Surgery, Christchurch Hospital, CDHB, Christchurch, New Zealand
- Saxon Connor
- Department of General Surgery, Christchurch Hospital, CDHB, Christchurch, New Zealand
68
Montgomery KB, Lindeman B. Using Graduating Surgical Resident Milestone Ratings to Predict Patient Outcomes: A Blunt Instrument for a Complex Problem. Acad Med 2023; 98:765-768. PMID: 36745875; PMCID: PMC10329982; DOI: 10.1097/acm.0000000000005165.
Abstract
In 2013, U.S. general surgery residency programs implemented a milestones assessment framework in an effort to incorporate more competency-focused evaluation methods. Developed by a group of surgical education leaders and other stakeholders working with the Accreditation Council for Graduate Medical Education and recently updated in a version 2.0, the surgery milestones framework is centered around 6 "core competencies": patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice. While prior work has focused on the validity of milestones as a measure of resident performance, associations between general surgery resident milestone ratings and their post-training patient outcomes have only recently been explored in an analysis in this issue of Academic Medicine by Kendrick et al. Despite their well-designed efforts to tackle this complex problem, no relationships were identified. This accompanying commentary discusses the broader implications for the use of milestone ratings beyond their intended application, alternative assessment methods, and the challenges of developing predictive assessments in the complex setting of surgical care. Although milestone ratings have not been shown to provide the specificity needed to predict clinical outcomes in the complex settings studied by Kendrick et al, hope remains that utilization of other outcomes, assessment frameworks, and data analytic tools could augment these models and further our progress toward a predictive assessment in surgical education. Evaluation of residents in general surgery residency programs has grown both more sophisticated and complicated in the setting of increasing patient and case complexity, constraints on time, and regulation of resident supervision in the operating room. Over the last decade, surgical education research efforts related to resident assessment have focused on measuring performance through accurate and reproducible methods with evidence for their validity, as well as on attempting to refine decision making about resident preparedness for unsupervised practice.
Collapse
Affiliation(s)
- Kelsey B Montgomery
- K.B. Montgomery is a general surgery resident, Department of Surgery, University of Alabama at Birmingham, Birmingham, Alabama; ORCID: https://orcid.org/0000-0002-1284-1830
| | - Brenessa Lindeman
- B. Lindeman is associate professor, Department of Surgery, and assistant dean, Graduate Medical Education, University of Alabama at Birmingham, Birmingham, Alabama
| |
Collapse
|
69
|
Villani FP, Paderno A, Fiorentino MC, Casella A, Piazza C, Moccia S. Classifying Vocal Folds Fixation from Endoscopic Videos with Machine Learning. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-4. [PMID: 38082565 DOI: 10.1109/embc40787.2023.10340017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Vocal fold motility evaluation is paramount both in the assessment of functional deficits and in the accurate staging of neoplastic disease of the glottis. Diagnostic endoscopy, and in particular videoendoscopy, is nowadays the method through which motility is estimated. The clinical diagnosis, however, relies on the examination of the videoendoscopic frames, which is a subjective and professional-dependent task. Hence, a more rigorous, objective, reliable, and repeatable method is needed. To support clinicians, this paper proposes a machine learning (ML) approach for vocal cord motility classification. From the endoscopic videos of 186 patients, covering both preserved vocal cord motility and fixation, a dataset of 558 images relative to the two classes was extracted. Successively, a number of features were retrieved from the images and used to train and test four well-grounded ML classifiers. From test results, the best performance was achieved using XGBoost, with precision = 0.82, recall = 0.82, F1 score = 0.82, and accuracy = 0.82. After comparing the most relevant ML models, we believe that this approach could provide precise and reliable support to clinical evaluation. Clinical Relevance: This research represents an important advancement in the state of the art of computer-assisted otolaryngology, toward developing an effective tool for motility assessment in clinical practice.
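A minimal sketch of the classification stage this abstract describes, assuming image features have already been extracted into a feature matrix; the placeholder data, feature dimension, and hyperparameters are illustrative, not the authors' pipeline.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

# Placeholder features: 558 frames, 32 features each; labels 0 = preserved
# motility, 1 = fixation (random stand-ins for the extracted features).
X, y = np.random.rand(558, 32), np.random.randint(0, 2, 558)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=4)  # assumed hyperparameters
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(precision_score(y_te, pred), recall_score(y_te, pred),
      f1_score(y_te, pred), accuracy_score(y_te, pred))
```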
Collapse
|
70
|
Hashemi N, Svendsen MBS, Bjerrum F, Rasmussen S, Tolsgaard MG, Friis ML. Acquisition and usage of robotic surgical data for machine learning analysis. Surg Endosc 2023:10.1007/s00464-023-10214-7. [PMID: 37389741 PMCID: PMC10338401 DOI: 10.1007/s00464-023-10214-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2023] [Accepted: 06/12/2023] [Indexed: 07/01/2023]
Abstract
BACKGROUND The increasing use of robot-assisted surgery (RAS) has led to the need for new methods of assessing whether new surgeons are qualified to perform RAS, without the resource-demanding process of having expert surgeons do the assessment. Computer-based automation and artificial intelligence (AI) are seen as promising alternatives to expert-based surgical assessment. However, no standard protocols or methods for preparing data and implementing AI are available for clinicians. This may be among the reasons for the impediment to the use of AI in the clinical setting. METHOD We tested our method on porcine models with both the da Vinci Si and the da Vinci Xi. We captured raw video data from the surgical robots and 3D movement data from the surgeons and prepared the data for use in AI following a structured guide with the steps 'Capturing image data from the surgical robot', 'Extracting event data', 'Capturing movement data of the surgeon', and 'Annotation of image data'. RESULTS Fifteen participants (11 novices and 4 experienced) performed 10 different intraabdominal RAS procedures. Using this method we captured 188 videos (94 from the surgical robot, and 94 corresponding movement videos of the surgeons' arms and hands). Event data, movement data, and labels were extracted from the raw material and prepared for use in AI. CONCLUSION With the described methods, we could collect, prepare, and annotate images, events, and motion data from surgical robotic systems in preparation for their use in AI.
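One way such captured recordings could be organized so that robot video, surgeon-movement video, event logs, and annotations stay linked; the class, field names, and paths below are hypothetical, not the study's actual data format.

```python
from dataclasses import dataclass, field

@dataclass
class RecordingSession:
    participant_id: str
    procedure: str
    robot_video: str            # path to video captured from the surgical robot
    movement_video: str         # path to video of the surgeon's arms and hands
    events: list = field(default_factory=list)       # (timestamp_s, event) tuples
    annotations: list = field(default_factory=list)  # (start_s, end_s, label) tuples

session = RecordingSession(
    participant_id="novice_01",
    procedure="intraabdominal_RAS_task",
    robot_video="data/novice_01/robot.mp4",
    movement_video="data/novice_01/hands.mp4",
)
session.events.append((12.4, "instrument_change"))
session.annotations.append((0.0, 95.0, "suturing"))
```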
Collapse
Affiliation(s)
- Nasseh Hashemi
- Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark.
- Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark.
- ROCnord-Robot Centre, Aalborg University Hospital, Aalborg, Denmark.
- Department of Urology, Aalborg University Hospital, Aalborg, Denmark.
| | - Morten Bo Søndergaard Svendsen
- Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
| | - Flemming Bjerrum
- Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
- Department of Gastrointestinal and Hepatic Diseases, Copenhagen University Hospital - Herlev and Gentofte, Herlev, Denmark
| | - Sten Rasmussen
- Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark
| | - Martin G Tolsgaard
- Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark
- Copenhagen Academy for Medical Education and Simulation, Center for Human Resources and Education, Copenhagen, Denmark
| | - Mikkel Lønborg Friis
- Department of Clinical Medicine, Aalborg University Hospital, Aalborg, Denmark
- Nordsim-Centre for Skills Training and Simulation, Aalborg, Denmark
| |
Collapse
|
71
|
Ramesh S, Srivastav V, Alapatt D, Yu T, Murali A, Sestini L, Nwoye CI, Hamoud I, Sharma S, Fleurentin A, Exarchakis G, Karargyris A, Padoy N. Dissecting self-supervised learning methods for surgical computer vision. Med Image Anal 2023; 88:102844. [PMID: 37270898 DOI: 10.1016/j.media.2023.102844] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 05/08/2023] [Accepted: 05/15/2023] [Indexed: 06/06/2023]
Abstract
The field of surgical computer vision has undergone considerable breakthroughs in recent years with the rising popularity of deep neural network-based methods. However, standard fully-supervised approaches for training such models require vast amounts of annotated data, imposing a prohibitively high cost, especially in the clinical domain. Self-Supervised Learning (SSL) methods, which have begun to gain traction in the general computer vision community, represent a potential solution to these annotation costs, allowing useful representations to be learned from unlabeled data alone. Still, the effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains largely unexplored. In this work, we address this critical need by investigating four state-of-the-art SSL methods (MoCo v2, SimCLR, DINO, SwAV) in the context of surgical computer vision. We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding: phase recognition and tool presence detection. We examine their parameterization, then their behavior with respect to training data quantities in semi-supervised settings. Correct transfer of these methods to surgery, as described and conducted in this work, leads to substantial performance gains over generic uses of SSL - up to 7.4% on phase recognition and 20% on tool presence detection - and improves on state-of-the-art semi-supervised phase recognition approaches by up to 14%. Further results obtained on a highly diverse selection of surgical datasets exhibit strong generalization properties. The code is available at https://github.com/CAMMA-public/SelfSupSurg.
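A minimal transfer sketch of the fine-tuning setup the abstract describes: an SSL-pretrained ResNet-50 backbone with linear heads for the two Cholec80 tasks. The checkpoint filename and head dimensions are assumptions; the authors' actual code is at the repository linked above.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

backbone = resnet50(weights=None)
# Assumed SSL checkpoint (e.g. from MoCo v2 or SimCLR pretraining);
# strict=False because SSL checkpoints often omit the classifier head.
backbone.load_state_dict(torch.load("ssl_pretrained_resnet50.pth"), strict=False)
backbone.fc = nn.Identity()                  # expose 2048-d features

phase_head = nn.Linear(2048, 7)              # 7 surgical phases in Cholec80
tool_head = nn.Linear(2048, 7)               # 7 tools, multi-label

frames = torch.randn(4, 3, 224, 224)         # placeholder batch of frames
feats = backbone(frames)
phase_logits = phase_head(feats)             # softmax/cross-entropy for phases
tool_logits = tool_head(feats)               # sigmoid/BCE for tool presence
```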
Collapse
Affiliation(s)
- Sanat Ramesh
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France; Altair Robotics Lab, Department of Computer Science, University of Verona, Verona 37134, Italy
| | - Vinkle Srivastav
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France.
| | - Deepak Alapatt
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France
| | - Tong Yu
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France
| | - Aditya Murali
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France
| | - Luca Sestini
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano 20133, Italy
| | | | - Idris Hamoud
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France
| | - Saurav Sharma
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France
| | | | - Georgios Exarchakis
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France; IHU Strasbourg, Strasbourg 67000, France
| | - Alexandros Karargyris
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France; IHU Strasbourg, Strasbourg 67000, France
| | - Nicolas Padoy
- ICube, University of Strasbourg, CNRS, Strasbourg 67000, France; IHU Strasbourg, Strasbourg 67000, France
| |
Collapse
|
72
|
Nyangoh Timoh K, Huaulme A, Cleary K, Zaheer MA, Lavoué V, Donoho D, Jannin P. A systematic review of annotation for surgical process model analysis in minimally invasive surgery based on video. Surg Endosc 2023:10.1007/s00464-023-10041-w. [PMID: 37157035 DOI: 10.1007/s00464-023-10041-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 03/25/2023] [Indexed: 05/10/2023]
Abstract
BACKGROUND Annotated data are foundational to applications of supervised machine learning. However, there seems to be a lack of common language used in the field of surgical data science. The aim of this study is to review the annotation process and the semantics used in the creation of surgical process models (SPMs) for minimally invasive surgery videos. METHODS For this systematic review, we reviewed articles indexed in the MEDLINE database from January 2000 until March 2022. We selected articles using surgical video annotations to describe a surgical process model in the field of minimally invasive surgery. We excluded studies focusing only on instrument detection or recognition of anatomical areas. The risk of bias was evaluated with the Newcastle-Ottawa quality assessment tool. Data from the studies were presented in tables using the SPIDER tool. RESULTS Of the 2806 articles identified, 34 were selected for review. Twenty-two were in the field of digestive surgery, six in ophthalmologic surgery only, one in neurosurgery, three in gynecologic surgery, and two in mixed fields. Thirty-one studies (88.2%) were dedicated to phase, step, or action recognition and mainly relied on a very simple formalization (29, 85.2%). Clinical information was lacking in the datasets of studies using publicly available datasets. The annotation process for surgical process models was poorly described, and descriptions of the surgical procedures were highly variable between studies. CONCLUSION Surgical video annotation lacks a rigorous and reproducible framework. This leads to difficulties in sharing videos between institutions and hospitals because of the different languages used. There is a need to develop and use a common ontology to improve libraries of annotated surgical videos.
Collapse
Affiliation(s)
- Krystel Nyangoh Timoh
- Department of Gynecology and Obstetrics and Human Reproduction, CHU Rennes, Rennes, France.
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France.
- Laboratoire d'Anatomie et d'Organogenèse, Faculté de Médecine, Centre Hospitalier Universitaire de Rennes, 2 Avenue du Professeur Léon Bernard, 35043, Rennes Cedex, France.
- Department of Obstetrics and Gynecology, Rennes Hospital, Rennes, France.
| | - Arnaud Huaulme
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France
| | - Kevin Cleary
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, 20010, USA
| | - Myra A Zaheer
- George Washington University School of Medicine and Health Sciences, Washington, DC, USA
| | - Vincent Lavoué
- Department of Gynecology and Obstetrics and Human Reproduction, CHU Rennes, Rennes, France
| | - Dan Donoho
- Division of Neurosurgery, Center for Neuroscience, Children's National Hospital, Washington, DC, 20010, USA
| | - Pierre Jannin
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France
| |
Collapse
|
73
|
Kiyasseh D, Laca J, Haque TF, Otiato M, Miles BJ, Wagner C, Donoho DA, Trinh QD, Anandkumar A, Hung AJ. Human visual explanations mitigate bias in AI-based assessment of surgeon skills. NPJ Digit Med 2023; 6:54. [PMID: 36997642 PMCID: PMC10063676 DOI: 10.1038/s41746-023-00766-2] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Accepted: 01/21/2023] [Indexed: 04/03/2023] Open
Abstract
Artificial intelligence (AI) systems can now reliably assess surgeon skills through videos of intraoperative surgical activity. With such systems informing future high-stakes decisions such as whether to credential surgeons and grant them the privilege to operate on patients, it is critical that they treat all surgeons fairly. However, it remains an open question whether surgical AI systems exhibit bias against surgeon sub-cohorts, and, if so, whether such bias can be mitigated. Here, we examine and mitigate the bias exhibited by a family of surgical AI systems, SAIS, deployed on videos of robotic surgeries from three geographically diverse hospitals (USA and EU). We show that SAIS exhibits an underskilling bias, erroneously downgrading surgical performance, and an overskilling bias, erroneously upgrading surgical performance, at different rates across surgeon sub-cohorts. To mitigate such bias, we leverage a strategy, TWIX, which teaches an AI system to provide a visual explanation for its skill assessment that would otherwise have been provided by human experts. We show that whereas baseline strategies inconsistently mitigate algorithmic bias, TWIX can effectively mitigate the underskilling and overskilling bias while simultaneously improving the performance of these AI systems across hospitals. We discovered that these findings carry over to the training environment where we assess medical students' skills today. Our study is a critical prerequisite to the eventual implementation of AI-augmented global surgeon credentialing programs, ensuring that all surgeons are treated fairly.
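The underskilling and overskilling notions can be made concrete with a small sketch: per-cohort rates at which truly high-skill cases are downgraded and truly low-skill cases are upgraded. Column names and data are hypothetical, not SAIS outputs.

```python
import pandas as pd

df = pd.DataFrame({
    "cohort":    ["A", "A", "B", "B", "B"],
    "true_high": [1, 0, 1, 1, 0],   # 1 = expert-rated high skill
    "pred_high": [0, 0, 1, 0, 1],   # model's binary skill assessment
})

def bias_rates(g):
    high, low = g[g.true_high == 1], g[g.true_high == 0]
    return pd.Series({
        # underskilling: high-skill cases rated low by the model
        "underskilling": (high.pred_high == 0).mean() if len(high) else float("nan"),
        # overskilling: low-skill cases rated high by the model
        "overskilling":  (low.pred_high == 1).mean() if len(low) else float("nan"),
    })

print(df.groupby("cohort").apply(bias_rates))
```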
Collapse
Affiliation(s)
- Dani Kiyasseh
- Department of Computing and Mathematical Sciences, California Institute of Technology, California, CA, USA.
| | - Jasper Laca
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, California, CA, USA
| | - Taseen F Haque
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, California, CA, USA
| | - Maxwell Otiato
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, California, CA, USA
| | - Brian J Miles
- Department of Urology, Houston Methodist Hospital, Texas, TX, USA
| | - Christian Wagner
- Department of Urology, Pediatric Urology and Uro-Oncology, Prostate Center Northwest, St. Antonius-Hospital, Gronau, Germany
| | - Daniel A Donoho
- Division of Neurosurgery, Center for Neuroscience, Children's National Hospital, Washington DC, WA, USA
| | - Quoc-Dien Trinh
- Center for Surgery & Public Health, Department of Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Animashree Anandkumar
- Department of Computing and Mathematical Sciences, California Institute of Technology, California, CA, USA
| | - Andrew J Hung
- Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, California, CA, USA.
| |
Collapse
|
74
|
Rädsch T, Reinke A, Weru V, Tizabi MD, Schreck N, Kavur AE, Pekdemir B, Roß T, Kopp-Schneider A, Maier-Hein L. Labelling instructions matter in biomedical image analysis. NAT MACH INTELL 2023. [DOI: 10.1038/s42256-023-00625-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/06/2023]
Abstract
Biomedical image analysis algorithm validation depends on high-quality annotation of reference datasets, for which labelling instructions are key. Despite their importance, their optimization remains largely unexplored. Here we present a systematic study of labelling instructions and their impact on annotation quality in the field. Through comprehensive examination of professional practice and international competitions registered at the Medical Image Computing and Computer Assisted Intervention Society, the largest international society in the biomedical imaging field, we uncovered a discrepancy between annotators' needs for labelling instructions and their current quality and availability. On the basis of an analysis of 14,040 images annotated by 156 annotators from four professional annotation companies and 708 Amazon Mechanical Turk crowdworkers using instructions with different information density levels, we further found that including exemplary images substantially boosts annotation performance compared with text-only descriptions, while solely extending text descriptions does not. Finally, professional annotators consistently outperform Amazon Mechanical Turk crowdworkers. Our study raises awareness of the need for quality standards in biomedical image analysis labelling instructions.
Collapse
|
75
|
Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023; 20:171-182. [PMID: 36352158 DOI: 10.1038/s41575-022-00701-y] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 10/03/2022] [Indexed: 11/10/2022]
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop computer-assisted interventions that enable better navigation, automation of image interpretation, and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Collapse
Affiliation(s)
- François Chadebecq
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK.
| |
Collapse
|
76
|
Wagner M, Müller-Stich BP, Kisilenko A, Tran D, Heger P, Mündermann L, Lubotsky DM, Müller B, Davitashvili T, Capek M, Reinke A, Reid C, Yu T, Vardazaryan A, Nwoye CI, Padoy N, Liu X, Lee EJ, Disch C, Meine H, Xia T, Jia F, Kondo S, Reiter W, Jin Y, Long Y, Jiang M, Dou Q, Heng PA, Twick I, Kirtac K, Hosgor E, Bolmgren JL, Stenzel M, von Siemens B, Zhao L, Ge Z, Sun H, Xie D, Guo M, Liu D, Kenngott HG, Nickel F, Frankenberg MV, Mathis-Ullrich F, Kopp-Schneider A, Maier-Hein L, Speidel S, Bodenstedt S. Comparative validation of machine learning algorithms for surgical workflow and skill analysis with the HeiChole benchmark. Med Image Anal 2023; 86:102770. [PMID: 36889206 DOI: 10.1016/j.media.2023.102770] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2021] [Revised: 02/03/2023] [Accepted: 02/08/2023] [Indexed: 02/23/2023]
Abstract
PURPOSE Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance or improve training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open-data, single-center video dataset. In this work, we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, sub-challenge for surgical workflow and skill analysis. Here, 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument and/or skill assessment. RESULTS F1-scores ranged from 23.9% to 67.7% for phase recognition (n = 9 teams) and from 38.5% to 63.8% for instrument presence detection (n = 8 teams), but only from 21.8% to 23.3% for action recognition (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.
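A minimal sketch of frame-wise phase-recognition scoring in the spirit of the benchmark's F1 comparison; the per-frame labels below are illustrative.

```python
from sklearn.metrics import f1_score

ground_truth = [0, 0, 1, 1, 2, 2, 2, 3]   # annotated phase per frame
predictions  = [0, 1, 1, 1, 2, 2, 3, 3]   # model output per frame

# Macro F1 averages the per-phase F1 scores, so rare phases count equally.
print(f1_score(ground_truth, predictions, average="macro"))
```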
Collapse
Affiliation(s)
- Martin Wagner
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany.
| | - Beat-Peter Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
| | - Anna Kisilenko
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
| | - Duc Tran
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
| | - Patrick Heger
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
| | - Lars Mündermann
- Data Assisted Solutions, Corporate Research & Technology, KARL STORZ SE & Co. KG, Dr. Karl-Storz-Str. 34, 78332 Tuttlingen
| | - David M Lubotsky
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
| | - Benjamin Müller
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
| | - Tornike Davitashvili
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
| | - Manuela Capek
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
| | - Annika Reinke
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg Germany; HIP Helmholtz Imaging Platform, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Im Neuenheimer Feld 205, 69120 Heidelberg
| | - Carissa Reid
- Division of Biostatistics, German Cancer Research Center, Im Neuenheimer Feld 280, Heidelberg, Germany
| | - Tong Yu
- ICube, University of Strasbourg, CNRS, France. 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, France. 1 Place de l'hôpital, 67000 Strasbourg, France
| | - Armine Vardazaryan
- ICube, University of Strasbourg, CNRS, France. 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, France. 1 Place de l'hôpital, 67000 Strasbourg, France
| | - Chinedu Innocent Nwoye
- ICube, University of Strasbourg, CNRS, France. 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, France. 1 Place de l'hôpital, 67000 Strasbourg, France
| | - Nicolas Padoy
- ICube, University of Strasbourg, CNRS, France. 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, France. 1 Place de l'hôpital, 67000 Strasbourg, France
| | - Xinyang Liu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, 111 Michigan Ave NW, Washington, DC 20010, USA
| | - Eung-Joo Lee
- University of Maryland, College Park, 2405 A V Williams Building, College Park, MD 20742, USA
| | - Constantin Disch
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Str. 2, 28359 Bremen, Germany
| | - Hans Meine
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Str. 2, 28359 Bremen, Germany; University of Bremen, FB3, Medical Image Computing Group, ℅ Fraunhofer MEVIS, Am Fallturm 1, 28359 Bremen, Germany
| | - Tong Xia
- Lab for Medical Imaging and Digital Surgery, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Fucang Jia
- Lab for Medical Imaging and Digital Surgery, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Satoshi Kondo
- Konica Minolta, Inc., 1-2, Sakura-machi, Takatsuki, Osaka 569-8503, Japan
| | - Wolfgang Reiter
- Wintegral GmbH, Ehrenbreitsteiner Str. 36, 80993 München, Germany
| | - Yueming Jin
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
| | - Yonghao Long
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
| | - Meirui Jiang
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
| | - Qi Dou
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
| | - Pheng Ann Heng
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
| | - Isabell Twick
- Caresyntax GmbH, Komturstr. 18A, 12099 Berlin, Germany
| | - Kadir Kirtac
- Caresyntax GmbH, Komturstr. 18A, 12099 Berlin, Germany
| | - Enes Hosgor
- Caresyntax GmbH, Komturstr. 18A, 12099 Berlin, Germany
| | | | | | | | - Long Zhao
- Hikvision Research Institute, Hangzhou, China
| | - Zhenxiao Ge
- Hikvision Research Institute, Hangzhou, China
| | - Haiming Sun
- Hikvision Research Institute, Hangzhou, China
| | - Di Xie
- Hikvision Research Institute, Hangzhou, China
| | - Mengqi Guo
- School of Computing, National University of Singapore, Computing 1, No.13 Computing Drive, 117417, Singapore
| | - Daochang Liu
- National Engineering Research Center of Visual Technology, School of Computer Science, Peking University, Beijing, China
| | - Hannes G Kenngott
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
| | - Felix Nickel
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
| | - Moritz von Frankenberg
- Department of Surgery, Salem Hospital of the Evangelische Stadtmission Heidelberg, Zeppelinstrasse 11-33, 69121 Heidelberg, Germany
| | - Franziska Mathis-Ullrich
- Health Robotics and Automation Laboratory, Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Geb. 40.28, KIT Campus Süd, Engler-Bunte-Ring 8, 76131 Karlsruhe, Germany
| | - Annette Kopp-Schneider
- Division of Biostatistics, German Cancer Research Center, Im Neuenheimer Feld 280, Heidelberg, Germany
| | - Lena Maier-Hein
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg Germany; HIP Helmholtz Imaging Platform, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Im Neuenheimer Feld 205, 69120 Heidelberg; Medical Faculty, Heidelberg University, Im Neuenheimer Feld 672, 69120 Heidelberg
| | - Stefanie Speidel
- Div. Translational Surgical Oncology, National Center for Tumor Diseases Dresden, Fetscherstraße 74, 01307 Dresden, Germany; Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI) of Technische Universität Dresden, 01062 Dresden, Germany
| | - Sebastian Bodenstedt
- Div. Translational Surgical Oncology, National Center for Tumor Diseases Dresden, Fetscherstraße 74, 01307 Dresden, Germany; Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI) of Technische Universität Dresden, 01062 Dresden, Germany
| |
Collapse
|
77
|
Jalal NA, Alshirbaji TA, Docherty PD, Arabian H, Laufer B, Krueger-Ziolek S, Neumuth T, Moeller K. Laparoscopic Video Analysis Using Temporal, Attention, and Multi-Feature Fusion Based-Approaches. SENSORS (BASEL, SWITZERLAND) 2023; 23:1958. [PMID: 36850554 PMCID: PMC9964851 DOI: 10.3390/s23041958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/01/2023] [Revised: 02/06/2023] [Accepted: 02/07/2023] [Indexed: 06/18/2023]
Abstract
Adapting intelligent context-aware systems (CAS) to future operating rooms (OR) aims to improve situational awareness and provide surgical decision support systems to medical teams. CAS analyzes data streams from available devices during surgery and communicates real-time knowledge to clinicians. Indeed, recent advances in computer vision and machine learning, particularly deep learning, have paved the way for extensive research to develop CAS. In this work, a deep learning approach for analyzing laparoscopic videos is proposed for surgical phase recognition, tool classification, and weakly-supervised tool localization. The ResNet-50 convolutional neural network (CNN) architecture was adapted by adding attention modules and fusing features from multiple stages to generate better-focused, generalized, and well-representative features. Then, a multi-map convolutional layer followed by tool-wise and spatial pooling operations was utilized to perform tool localization and generate tool presence confidences. Finally, a long short-term memory (LSTM) network was employed to model temporal information and perform tool classification and phase recognition. The proposed approach was evaluated on the Cholec80 dataset. The experimental results (i.e., 88.5% and 89.0% mean precision and recall for phase recognition, respectively, 95.6% mean average precision for tool presence detection, and a 70.1% F1-score for tool localization) demonstrated the ability of the model to learn discriminative features for all tasks. The performance revealed the importance of integrating attention modules and multi-stage feature fusion for more robust and precise detection of surgical phases and tools.
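A compact sketch of the overall CNN-plus-LSTM pattern the abstract describes (ResNet-50 features feeding an LSTM with phase and tool heads); the attention modules and multi-map localization layer are omitted, and all dimensions are assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class PhaseToolNet(nn.Module):
    def __init__(self, n_phases=7, n_tools=7, hidden=512):
        super().__init__()
        cnn = resnet50(weights=None)
        cnn.fc = nn.Identity()                 # keep 2048-d pooled features
        self.cnn = cnn
        self.lstm = nn.LSTM(2048, hidden, batch_first=True)
        self.phase_fc = nn.Linear(hidden, n_phases)  # per-frame phase logits
        self.tool_fc = nn.Linear(hidden, n_tools)    # per-frame tool logits

    def forward(self, clips):                  # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.lstm(feats)              # temporal context over frames
        return self.phase_fc(seq), self.tool_fc(seq)

model = PhaseToolNet()
phases, tools = model(torch.randn(2, 8, 3, 224, 224))  # placeholder clips
```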
Collapse
Affiliation(s)
- Nour Aldeen Jalal
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054 Villingen-Schwenningen, Germany
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103 Leipzig, Germany
| | - Tamer Abdulbaki Alshirbaji
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054 Villingen-Schwenningen, Germany
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103 Leipzig, Germany
| | - Paul David Docherty
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054 Villingen-Schwenningen, Germany
- Department of Mechanical Engineering, University of Canterbury, Christchurch 8041, New Zealand
| | - Herag Arabian
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054 Villingen-Schwenningen, Germany
| | - Bernhard Laufer
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054 Villingen-Schwenningen, Germany
| | - Sabine Krueger-Ziolek
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054 Villingen-Schwenningen, Germany
| | - Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103 Leipzig, Germany
| | - Knut Moeller
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054 Villingen-Schwenningen, Germany
- Department of Mechanical Engineering, University of Canterbury, Christchurch 8041, New Zealand
- Department of Microsystems Engineering, University of Freiburg, 79110 Freiburg, Germany
| |
Collapse
|
78
|
Rosenkranz M, Cetin T, Uslar VN, Bleichner MG. Investigating the attentional focus to workplace-related soundscapes in a complex audio-visual-motor task using EEG. FRONTIERS IN NEUROERGONOMICS 2023; 3:1062227. [PMID: 38235454 PMCID: PMC10790850 DOI: 10.3389/fnrgo.2022.1062227] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Accepted: 12/16/2022] [Indexed: 01/19/2024]
Abstract
Introduction In demanding work situations (e.g., during a surgery), the processing of complex soundscapes varies over time and can be a burden for medical personnel. Here we study, using mobile electroencephalography (EEG), how humans process workplace-related soundscapes while performing a complex audio-visual-motor task (3D Tetris). Specifically, we wanted to know how the attentional focus changes the processing of the soundscape as a whole. Method Participants played a game of 3D Tetris in which they had to use both hands to control falling blocks. At the same time, they listened to a complex soundscape, similar to what is found in an operating room (i.e., the sound of machinery, people talking in the background, alarm sounds, and instructions). In this within-subject design, participants had to react to instructions (e.g., "place the next block in the upper left corner") and, depending on the experimental condition, to either a specific alarm sound originating from a fixed location or a beep sound originating from varying locations. Attention to the alarm reflected a narrow attentional focus, as the alarm was easy to detect and most of the soundscape could be ignored. Attention to the beep reflected a wide attentional focus, as it required participants to monitor multiple different sound streams. Results and discussion Results show the robustness of the N1 and P3 event-related potential responses during this dynamic task with a complex auditory soundscape. Furthermore, we used temporal response functions to study auditory processing of the whole soundscape. This work is a step toward studying workplace-related sound processing in the operating room using mobile EEG.
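A minimal sketch of the ERP computation behind N1/P3 analyses like this one: epoch the continuous EEG around event onsets, baseline-correct, and average. Sampling rate, window, and data are assumptions, not the study's preprocessing.

```python
import numpy as np

def erp(eeg, onsets, sfreq=500, tmin=-0.2, tmax=0.8):
    """eeg: (n_channels, n_samples); onsets: event times in seconds."""
    pre, post = int(-tmin * sfreq), int(tmax * sfreq)
    # Cut a window around each event onset.
    epochs = np.stack([eeg[:, int(t * sfreq) - pre : int(t * sfreq) + post]
                       for t in onsets])
    # Subtract the pre-stimulus baseline, then average over trials.
    baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
    return (epochs - baseline).mean(axis=0)

eeg = np.random.randn(32, 60 * 500)           # 32 channels, 60 s at 500 Hz
evoked = erp(eeg, onsets=[5.0, 12.3, 20.1])   # average response to 3 events
```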
Collapse
Affiliation(s)
- Marc Rosenkranz
- Neurophysiology of Everyday Life Group, Department of Psychology, University of Oldenburg, Oldenburg, Germany
| | - Timur Cetin
- Pius-Hospital Oldenburg, University Hospital for Visceral Surgery, University of Oldenburg, Oldenburg, Germany
| | - Verena N. Uslar
- Pius-Hospital Oldenburg, University Hospital for Visceral Surgery, University of Oldenburg, Oldenburg, Germany
| | - Martin G. Bleichner
- Neurophysiology of Everyday Life Group, Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Research Center for Neurosensory Science, University of Oldenburg, Oldenburg, Germany
| |
Collapse
|
79
|
Jalal NA, Abdulbaki Alshirbaji T, Laufer B, Docherty PD, Neumuth T, Moeller K. Analysing multi-perspective patient-related data during laparoscopic gynaecology procedures. Sci Rep 2023; 13:1604. [PMID: 36709360 PMCID: PMC9884204 DOI: 10.1038/s41598-023-28652-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Accepted: 01/23/2023] [Indexed: 01/29/2023] Open
Abstract
Fusing data from different medical perspectives inside the operating room (OR) sets the stage for developing intelligent context-aware systems. These systems aim to promote better awareness inside the OR by keeping every medical team well informed about the work of other teams and thus mitigate conflicts resulting from different targets. In this research, a descriptive analysis of data collected from anaesthesiology and surgery was performed to investigate the relationships between the intra-abdominal pressure (IAP) and lung mechanics for patients during laparoscopic procedures. Data from nineteen patients who underwent laparoscopic gynaecology were included. Statistical analysis of all subjects showed a strong relationship between the IAP and dynamic lung compliance (r = 0.91). Additionally, the peak airway pressure was also strongly correlated to the IAP in volume-controlled ventilated patients (r = 0.928). Statistical results obtained by this study demonstrate the importance of analysing the relationship between surgical actions and physiological responses. Moreover, these results form the basis for developing medical decision support models, e.g., automatic compensation of IAP effects on lung function.
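The reported relationships are correlation analyses; a minimal sketch with made-up numbers shows the computation (the values below are illustrative and do not reproduce the study's r = 0.91).

```python
from scipy.stats import pearsonr

iap =        [8, 10, 12, 14, 15]   # intra-abdominal pressure, mmHg (illustrative)
compliance = [52, 46, 40, 34, 31]  # dynamic lung compliance, mL/cmH2O (illustrative)

r, p = pearsonr(iap, compliance)   # Pearson correlation coefficient and p-value
print(f"r = {r:.2f}, p = {p:.3f}")
```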
Collapse
Affiliation(s)
- Nour Aldeen Jalal
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054, Villingen-Schwenningen, Germany.
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany.
| | - Tamer Abdulbaki Alshirbaji
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054, Villingen-Schwenningen, Germany
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
| | - Bernhard Laufer
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054, Villingen-Schwenningen, Germany
| | - Paul D Docherty
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054, Villingen-Schwenningen, Germany
- Department of Mechanical Engineering, University of Canterbury, Christchurch, 8041, New Zealand
| | - Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
| | - Knut Moeller
- Institute of Technical Medicine (ITeM), Furtwangen University, 78054, Villingen-Schwenningen, Germany
| |
Collapse
|
80
|
Carstens M, Rinner FM, Bodenstedt S, Jenke AC, Weitz J, Distler M, Speidel S, Kolbinger FR. The Dresden Surgical Anatomy Dataset for Abdominal Organ Segmentation in Surgical Data Science. Sci Data 2023; 10:3. [PMID: 36635312 PMCID: PMC9837071 DOI: 10.1038/s41597-022-01719-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Accepted: 09/26/2022] [Indexed: 01/14/2023] Open
Abstract
Laparoscopy is an imaging technique that enables minimally-invasive procedures in various medical disciplines including abdominal surgery, gynaecology and urology. To date, publicly available laparoscopic image datasets are mostly limited to general classifications of data, semantic segmentations of surgical instruments and low-volume weak annotations of specific abdominal organs. The Dresden Surgical Anatomy Dataset provides semantic segmentations of eight abdominal organs (colon, liver, pancreas, small intestine, spleen, stomach, ureter, vesicular glands), the abdominal wall and two vessel structures (inferior mesenteric artery, intestinal veins) in laparoscopic view. In total, this dataset comprises 13195 laparoscopic images. For each anatomical structure, we provide over a thousand images with pixel-wise segmentations. Annotations comprise semantic segmentations of single organs and one multi-organ-segmentation dataset including segments for all eleven anatomical structures. Moreover, we provide weak annotations of organ presence for every single image. This dataset markedly expands the horizon for surgical data science applications of computer vision in laparoscopic surgery and could thereby contribute to a reduction of risks and faster translation of Artificial Intelligence into surgical practice.
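A small helper of the kind typically used with pixel-wise segmentations such as these: Dice overlap between a predicted and a reference binary mask. The masks below are synthetic, not drawn from the dataset.

```python
import numpy as np

def dice(pred, ref):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2 * inter / denom if denom else 1.0

pred = np.zeros((64, 64)); pred[10:30, 10:30] = 1   # synthetic prediction
ref  = np.zeros((64, 64)); ref[15:35, 12:32] = 1    # synthetic reference mask
print(dice(pred, ref))
```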
Collapse
Affiliation(s)
- Matthias Carstens
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
| | - Franziska M Rinner
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
| | - Sebastian Bodenstedt
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC) Dresden, Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany
| | - Alexander C Jenke
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC) Dresden, Dresden, Germany
| | - Jürgen Weitz
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany
- Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Dresden, Germany
| | - Marius Distler
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany
- Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Dresden, Germany
| | - Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC) Dresden, Dresden, Germany.
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Technische Universität Dresden, Dresden, Germany.
- Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Dresden, Germany.
| | - Fiona R Kolbinger
- Department of Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany.
- Else Kröner Fresenius Center for Digital Health (EKFZ), Technische Universität Dresden, Dresden, Germany.
| |
Collapse
|
81
|
Das A, Khan DZ, Hanrahan JG, Marcus HJ, Stoyanov D. Automatic generation of operation notes in endoscopic pituitary surgery videos using workflow recognition. INTELLIGENCE-BASED MEDICINE 2023; 8:100107. [PMID: 38523618 PMCID: PMC10958393 DOI: 10.1016/j.ibmed.2023.100107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/01/2023] [Revised: 04/20/2023] [Accepted: 07/27/2023] [Indexed: 03/26/2024]
Abstract
Operation notes are a crucial component of patient care. However, writing them manually is prone to human error, particularly in high-pressure clinical environments. Automatic generation of operation notes from video recordings can alleviate some of the administrative burden, improve accuracy, and provide additional information. To achieve this for endoscopic pituitary surgery, 27 steps were identified via expert consensus. Then, for the 97 videos recorded for this study, a timestamp of each step was annotated by an expert surgeon. To automatically determine whether a step is present in a video, a three-stage architecture was created. Firstly, for each step, a convolutional neural network was used for binary image classification on each frame of a video. Secondly, for each step, the binary frame classifications were passed to a discriminator for binary video classification. Thirdly, for each video, the binary video classifications were passed to an accumulator for multi-label step classification. The architecture was trained on 77 videos and tested on 20 videos, where a weighted F1 score of 0.80 was achieved. The classifications were input into a clinically based predefined template and further enriched with additional video analytics. This work therefore demonstrates that automatic generation of operative notes from surgical videos is feasible and can assist surgeons during documentation.
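A schematic of the three-stage decision logic described above, with the paper's learned discriminator replaced by a simple frame-fraction threshold for illustration; the thresholds and shapes are assumptions.

```python
import numpy as np

def video_has_step(frame_probs, frame_thresh=0.5, min_fraction=0.05):
    """Stage 2 (simplified): call a step present if enough frames fire.

    The paper uses a learned discriminator here; a threshold rule stands
    in for it in this sketch.
    """
    return (np.asarray(frame_probs) > frame_thresh).mean() > min_fraction

def steps_present(per_step_frame_probs):
    """Stage 3: accumulate binary video decisions into a multi-label vector."""
    return [video_has_step(p) for p in per_step_frame_probs]

# Stage 1 output (assumed): per-step frame probabilities for one video.
probs = [np.random.rand(300) for _ in range(27)]  # 27 steps x 300 frames
present = steps_present(probs)                    # feeds the note template
```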
Collapse
Affiliation(s)
- Adrito Das
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, United Kingdom
| | - Danyal Z. Khan
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, United Kingdom
- National Hospital for Neurology and Neurosurgery, University College London, United Kingdom
| | - John G. Hanrahan
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, United Kingdom
- National Hospital for Neurology and Neurosurgery, University College London, United Kingdom
| | - Hani J. Marcus
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, United Kingdom
- National Hospital for Neurology and Neurosurgery, University College London, United Kingdom
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, United Kingdom
| |
Collapse
|
82
|
Nagaraj MB, Namazi B, Sankaranarayanan G, Scott DJ. Developing artificial intelligence models for medical student suturing and knot-tying video-based assessment and coaching. Surg Endosc 2023; 37:402-411. [PMID: 35982284 PMCID: PMC9388210 DOI: 10.1007/s00464-022-09509-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Accepted: 07/23/2022] [Indexed: 01/20/2023]
Abstract
BACKGROUND Early introduction and distributed learning have been shown to improve student comfort with basic requisite suturing skills. The need for more frequent and directed feedback, however, remains an enduring concern for both remote and in-person training. A previous in-person curriculum for our second-year medical students transitioning to clerkships was adapted to an at-home video-based assessment model due to the social distancing implications of COVID-19. We aimed to develop an artificial intelligence (AI) model to perform video-based assessment. METHODS Second-year medical students were asked to submit a video of a simple interrupted knot on a Penrose drain using instrument tying technique after self-training to proficiency. Proficiency was defined as performing the task in under two minutes with no critical errors. All videos were first manually given a pass-fail rating and then underwent task segmentation. We developed and trained two AI models based on convolutional neural networks to identify errors (instrument holding and knot tying) and provide automated ratings. RESULTS A total of 229 medical student videos were reviewed (150 pass, 79 fail). Of those who failed, the critical error distribution was 15 knot-tying, 47 instrument-holding, and 17 multiple. A total of 216 videos were used to train the models after excluding low-quality videos. K-fold cross-validation (k = 10) was used. The accuracy of the instrument-holding model was 89% with an F1 score of 74%. For the knot-tying model, the accuracy was 91% with an F1 score of 54%. CONCLUSIONS Medical students require assessment and directed feedback to better acquire surgical skill, but this is often time-consuming and inadequately done. AI techniques can instead be employed to perform automated surgical video analysis. Future work will optimize the current model to identify discrete errors in order to supplement video-based ratings with specific feedback.
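A minimal sketch of the k-fold validation protocol mentioned above (k = 10), with a generic classifier standing in for the authors' CNNs; the placeholder features and labels are illustrative.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Placeholder data: 216 videos reduced to 16 features each, binary labels.
X, y = np.random.rand(216, 16), np.random.randint(0, 2, 216)

scores = []
for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    scores.append(f1_score(y[te], clf.predict(X[te])))  # score each held-out fold
print(np.mean(scores))                                  # average over the 10 folds
```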
Collapse
Affiliation(s)
- Madhuri B Nagaraj
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-9159, USA.
- University of Texas Southwestern Simulation Center, 2001 Inwood Road, Dallas, TX, 75390-9092, USA.
| | - Babak Namazi
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-9159, USA
| | - Ganesh Sankaranarayanan
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-9159, USA
| | - Daniel J Scott
- Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-9159, USA
- University of Texas Southwestern Simulation Center, 2001 Inwood Road, Dallas, TX, 75390-9092, USA
| |
Collapse
|
83
|
Reiter W. Domain generalization improves end-to-end object detection for real-time surgical tool detection. Int J Comput Assist Radiol Surg 2022; 18:939-944. [PMID: 36581742 DOI: 10.1007/s11548-022-02823-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Accepted: 12/20/2022] [Indexed: 12/31/2022]
Abstract
PURPOSE Computer assistance for endoscopic surgery depends on knowledge about the contents of an endoscopic scene. An important step in analysing the video contents is real-time surgical tool detection. Most methods for tool detection, however, depend on multi-step algorithms built upon prior knowledge such as anchor boxes or non-maximum suppression, which ultimately decreases performance. A real-world difficulty encountered by learning-based methods is limited datasets: training a neural network on data matching a specific distribution (e.g. from a single hospital or showing a specific type of surgery) can result in a lack of generalization. METHODS In this paper, we propose the application of a transformer-based architecture for end-to-end tool detection. This architecture promises state-of-the-art accuracy while decreasing complexity, resulting in improved run-time performance. To address the lack of cross-domain generalization due to limited datasets, we enhance the architecture with a latent feature space via variational encoding to capture common intra-domain information. This feature space models the linear dependencies between domains by constraining their rank. RESULTS The trained neural networks show a distinct improvement on out-of-domain data, indicating better generalization to unseen domains. Inference with the end-to-end architecture can be performed at up to 138 frames per second (FPS), a speedup over older approaches. CONCLUSIONS Experimental results on three representative datasets demonstrate the performance of the method. We also show that our approach leads to better domain generalization.
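A sketch of a variational latent bottleneck (reparameterization trick) of the kind the abstract describes for capturing common intra-domain information; its placement, dimensions, and the omitted rank constraint are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class VariationalBottleneck(nn.Module):
    def __init__(self, dim_in, dim_latent):
        super().__init__()
        self.mu = nn.Linear(dim_in, dim_latent)       # mean of latent code
        self.logvar = nn.Linear(dim_in, dim_latent)   # log-variance of latent code

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        # Reparameterization: sample z = mu + sigma * eps, keeping gradients.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # KL divergence to a standard normal prior, used as a regularizer.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        return z, kl

# Example: compress 256-d detector features into a 64-d latent space.
z, kl = VariationalBottleneck(256, 64)(torch.randn(8, 256))
```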
Collapse
|
84
|
Gerats BG, Wolterink JM, Broeders IA. 3D human pose estimation in multi-view operating room videos using differentiable camera projections. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2022. [DOI: 10.1080/21681163.2022.2155580] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Affiliation(s)
- Beerend G.A. Gerats
- Centre for Artificial Intelligence, Meander Medisch Centrum, Amersfoort, the Netherlands
- Robotics and Mechatronics, University of Twente, Enschede, The Netherlands
| | - Jelmer M. Wolterink
- Department of Applied Mathematics & Technical Medical Center, University of Twente, Enschede, The Netherlands
| | - Ivo A.M.J. Broeders
- Centre for Artificial Intelligence, Meander Medisch Centrum, Amersfoort, the Netherlands
- Robotics and Mechatronics, University of Twente, Enschede, The Netherlands
| |
Collapse
|
85
|
Bastian L, Czempiel T, Heiliger C, Karcz K, Eck U, Busam B, Navab N. Know your sensors — a modality study for surgical action classification. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2022. [DOI: 10.1080/21681163.2022.2152377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Affiliation(s)
- Lennart Bastian
- Chair for Computer Aided Medical Procedures, TU Munich, Munich, Germany
| | - Tobias Czempiel
- Chair for Computer Aided Medical Procedures, TU Munich, Munich, Germany
| | - Christian Heiliger
- Department of General, Visceral, and Transplant Surgery, University Hospital, LMU Munich, Munich, Germany
| | - Konrad Karcz
- Department of General, Visceral, and Transplant Surgery, University Hospital, LMU Munich, Munich, Germany
| | - Ulrich Eck
- Chair for Computer Aided Medical Procedures, TU Munich, Munich, Germany
| | - Benjamin Busam
- Chair for Computer Aided Medical Procedures, TU Munich, Munich, Germany
| | - Nassir Navab
- Chair for Computer Aided Medical Procedures, TU Munich, Munich, Germany
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, Maryland, USA
| |
Collapse
|
86
|
Park JJ, Tiefenbach J, Demetriades AK. The role of artificial intelligence in surgical simulation. FRONTIERS IN MEDICAL TECHNOLOGY 2022; 4:1076755. [PMID: 36590155 PMCID: PMC9794840 DOI: 10.3389/fmedt.2022.1076755] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Accepted: 11/21/2022] [Indexed: 12/15/2022] Open
Abstract
Artificial Intelligence (AI) plays an integral role in enhancing the quality of surgical simulation, which is an increasingly popular tool for enriching a surgeon's training experience. This spans the spectrum from facilitating preoperative planning to intraoperative visualisation and guidance, ultimately with the aim of improving patient safety. Although arguably still in the early stages of widespread clinical application, AI technology enables personal evaluation and provides personalised feedback in surgical training simulations. Several forms of surgical visualisation technology currently in use for anatomical education and presurgical assessment rely on different AI algorithms. However, while it is promising to see clinical examples and technological reports attesting to the efficacy of AI-supported surgical simulators, the barriers to widespread commercialisation of such devices and software remain complex and multifactorial: high implementation and production costs, scarcity of reports evidencing the superiority of such technology, and intrinsic technological limitations are at the forefront. As AI technology is key to driving the future of surgical simulation, this paper reviews the literature delineating its current state, challenges, and prospects. In addition, a consolidated list of FDA/CE-approved AI-powered medical devices for surgical simulation is presented, to shed light on the existing gap between academic achievements and the universal commercialisation of AI-enabled simulators. We call for further clinical assessment of AI-supported surgical simulators to support novel regulatory-body-approved devices and usher surgery into a new era of surgical education.
Affiliation(s)
- Jay J. Park: Department of General Surgery, Norfolk and Norwich University Hospital, Norwich, United Kingdom; Edinburgh Medical School, University of Edinburgh, Edinburgh, United Kingdom
- Jakov Tiefenbach: Neurological Institute, Cleveland Clinic, Cleveland, OH, United States
- Andreas K. Demetriades: Edinburgh Medical School, University of Edinburgh, Edinburgh, United Kingdom; Department of Neurosurgery, Royal Infirmary of Edinburgh, Edinburgh, United Kingdom
87
Villarreal JA, Forrester JD. Novel Use of a Real-Time Prediction Model to Enhance Early Detection of Need for Massive Transfusion-Artificial Intelligence Behind the Drapes. JAMA Netw Open 2022; 5:e2246648. PMID: 36515953; DOI: 10.1001/jamanetworkopen.2022.46648.
88
Neumann J, Uciteli A, Meschke T, Bieck R, Franke S, Herre H, Neumuth T. Ontology-based surgical workflow recognition and prediction. J Biomed Inform 2022; 136:104240. DOI: 10.1016/j.jbi.2022.104240.
89
Wagner M, Brandenburg JM, Bodenstedt S, Schulze A, Jenke AC, Stern A, Daum MTJ, Mündermann L, Kolbinger FR, Bhasker N, Schneider G, Krause-Jüttler G, Alwanni H, Fritz-Kebede F, Burgert O, Wilhelm D, Fallert J, Nickel F, Maier-Hein L, Dugas M, Distler M, Weitz J, Müller-Stich BP, Speidel S. Surgomics: personalized prediction of morbidity, mortality and long-term outcome in surgery using machine learning on multimodal data. Surg Endosc 2022; 36:8568-8591. PMID: 36171451; PMCID: PMC9613751; DOI: 10.1007/s00464-022-09611-1.
Abstract
BACKGROUND Personalized medicine requires the integration and analysis of vast amounts of patient data to realize individualized care. With Surgomics, we aim to facilitate personalized therapy recommendations in surgery by integrating intraoperative surgical data and analysing them with machine learning methods, leveraging the potential of these data in analogy to Radiomics and Genomics. METHODS We defined Surgomics as the entirety of surgomic features: process characteristics of a surgical procedure automatically derived from multimodal intraoperative data to quantify processes in the operating room. In a multidisciplinary team we discussed potential data sources, such as endoscopic videos, vital-sign monitoring, and medical devices and instruments, and the surgomic features they could yield. Subsequently, an online questionnaire was sent to experts from surgery and (computer) science at multiple centers to rate the features' clinical relevance and technical feasibility. RESULTS In total, 52 surgomic features were identified and assigned to eight feature categories. Based on the expert survey (n = 66 participants), the feature category with the highest clinical relevance as rated by surgeons was "surgical skill and quality of performance", both for morbidity and mortality (9.0 ± 1.3 on a numerical rating scale from 1 to 10) and for long-term (oncological) outcome (8.2 ± 1.8). The feature category rated by (computer) scientists as most feasible to extract automatically was "Instrument" (8.5 ± 1.7). Among the surgomic features ranked most relevant in their respective categories were "intraoperative adverse events", "action performed with instruments", "vital sign monitoring", and "difficulty of surgery". CONCLUSION Surgomics is a promising concept for the analysis of intraoperative data. Surgomics may be used together with preoperative features from clinical data and Radiomics to predict postoperative morbidity, mortality and long-term outcome, as well as to provide tailored feedback for surgeons.
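The survey aggregation reported above (mean ± SD per feature category on a 1-10 rating scale) is straightforward to reproduce; in this small Python sketch the category names and ratings are invented stand-ins, not the study's data.

```python
from statistics import mean, stdev

# Hypothetical expert ratings (1-10) per feature category
ratings = {
    "surgical skill and quality of performance": [9, 10, 8, 9, 9],
    "instrument": [8, 9, 10, 7, 8],
}

for category, scores in ratings.items():
    print(f"{category}: {mean(scores):.1f} +/- {stdev(scores):.1f}")
```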
Affiliation(s)
- Martin Wagner: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Johanna M Brandenburg: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Sebastian Bodenstedt: Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany; Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI), Technische Universität Dresden, 01062 Dresden, Germany
- André Schulze: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Alexander C Jenke: Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Antonia Stern: Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Marie T J Daum: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Lars Mündermann: Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Fiona R Kolbinger: Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Else Kröner Fresenius Center for Digital Health, Technische Universität Dresden, Dresden, Germany
- Nithya Bhasker: Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- Gerd Schneider: Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Grit Krause-Jüttler: Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Hisham Alwanni: Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Fleur Fritz-Kebede: Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Oliver Burgert: Research Group Computer Assisted Medicine (CaMed), Reutlingen University, Reutlingen, Germany
- Dirk Wilhelm: Department of Surgery, Faculty of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
- Johannes Fallert: Corporate Research and Technology, Karl Storz SE & Co KG, Tuttlingen, Germany
- Felix Nickel: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Lena Maier-Hein: Department of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Dugas: Institute of Medical Informatics, Heidelberg University Hospital, Heidelberg, Germany
- Marius Distler: Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; National Center for Tumor Diseases (NCT/UCC), Dresden, Germany; German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany
- Jürgen Weitz: Department of Visceral-, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; National Center for Tumor Diseases (NCT/UCC), Dresden, Germany; German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany
- Beat-Peter Müller-Stich: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Stefanie Speidel: Department of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany; Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI), Technische Universität Dresden, 01062 Dresden, Germany
90
Moglia A, Georgiou K, Morelli L, Toutouzas K, Satava RM, Cuschieri A. Breaking down the silos of artificial intelligence in surgery: glossary of terms. Surg Endosc 2022; 36:7986-7997. PMID: 35729406; PMCID: PMC9613746; DOI: 10.1007/s00464-022-09371-y.
Abstract
BACKGROUND The literature on artificial intelligence (AI) in surgery has advanced rapidly during the past few years. However, published studies on AI are mostly reported by computer scientists in their own jargon, which is unfamiliar to surgeons. METHODS A literature search was conducted in PubMed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The primary outcome of this review is a glossary with definitions of the AI terms commonly used in surgery, to improve surgeons' understanding of them. RESULTS One hundred ninety-five studies were included in this review, and 38 surgery-related AI terms were retrieved. Convolutional neural network was the term most frequently retrieved by the search, accounting for 74 studies on AI in surgery, followed by classification task (n = 62), artificial neural network (n = 53), and regression (n = 49). The next most frequent expressions were supervised learning (reported in 24 articles), support vector machine (SVM) (in 21), and logistic regression (in 16). The remainder of the 38 terms were seldom mentioned. CONCLUSIONS The proposed glossary can be used by several stakeholders: first and foremost, residents and attending consultant surgeons, who need to understand the fundamentals of AI when reading such articles; second, junior researchers at the start of their career in Surgical Data Science; and third, experts working in the regulatory sections of companies involved in AI-based Software as a Medical Device (SaMD), preparing documents for submission to the Food and Drug Administration (FDA) or other agencies for approval.
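The frequency ranking described above amounts to counting, per glossary term, the number of included studies that mention it. A minimal Python sketch of that counting step, with invented terms and study texts, might look like this:

```python
from collections import Counter

# Hypothetical glossary terms and study texts
glossary = ["convolutional neural network", "support vector machine", "regression"]
studies = [
    "we trained a convolutional neural network for phase recognition",
    "a support vector machine and logistic regression were compared",
]

# Each study contributes at most one count per term it mentions
counts = Counter(
    term for text in studies for term in glossary if term in text.lower()
)
for term, n in counts.most_common():
    print(f"{term}: mentioned in {n} studies")
```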
Affiliation(s)
- Andrea Moglia: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Konstantinos Georgiou: 1st Propaedeutic Surgical Unit, Hippocrateion Athens General Hospital, Athens Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Luca Morelli: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy; Department of General Surgery, University of Pisa, Pisa, Italy
- Konstantinos Toutouzas: 1st Propaedeutic Surgical Unit, Hippocrateion Athens General Hospital, Athens Medical School, National and Kapodistrian University of Athens, Athens, Greece
- Richard M Satava: Department of Surgery, University of Washington Medical Center, Seattle, WA, USA
- Alfred Cuschieri: Scuola Superiore Sant'Anna of Pisa, 56214 Pisa, Italy; Institute for Medical Science and Technology, University of Dundee, Dundee DD2 1FD, UK
91
Mascagni P, Alapatt D, Laracca GG, Guerriero L, Spota A, Fiorillo C, Vardazaryan A, Quero G, Alfieri S, Baldari L, Cassinotti E, Boni L, Cuccurullo D, Costamagna G, Dallemagne B, Padoy N. Multicentric validation of EndoDigest: a computer vision platform for video documentation of the critical view of safety in laparoscopic cholecystectomy. Surg Endosc 2022; 36:8379-8386. PMID: 35171336; DOI: 10.1007/s00464-022-09112-1.
Abstract
BACKGROUND A computer vision (CV) platform named EndoDigest was recently developed to facilitate the use of surgical videos. Specifically, EndoDigest automatically provides short video clips to effectively document the critical view of safety (CVS) in laparoscopic cholecystectomy (LC). The aim of the present study is to validate EndoDigest on a multicentric dataset of LC videos. METHODS LC videos from 4 centers were manually annotated with the time of the cystic duct division and an assessment of CVS criteria. Incomplete recordings, bailout procedures and procedures with an intraoperative cholangiogram were excluded. EndoDigest leveraged predictions of deep learning models for workflow analysis in a rule-based inference system designed to estimate the time of the cystic duct division. Performance was assessed by computing the error in estimating the manually annotated time of the cystic duct division. To provide concise video documentation of CVS, EndoDigest extracted video clips showing the 2 min preceding and the 30 s following the predicted cystic duct division. The relevance of the documentation was evaluated by assessing CVS in automatically extracted 2.5-min-long video clips. RESULTS 144 of the 174 LC videos from 4 centers were analyzed. EndoDigest located the time of the cystic duct division with a mean error of 124.0 ± 270.6 s despite the use of fluorescent cholangiography in 27 procedures and great variations in surgical workflows across centers. The surgical evaluation found that 108 (75.0%) of the automatically extracted short video clips documented CVS effectively. CONCLUSIONS EndoDigest was robust enough to reliably locate the time of the cystic duct division and efficiently video document CVS despite the highly variable workflows. Training specifically on data from each center could improve results; however, this multicentric validation shows the potential for clinical translation of this surgical data science tool to efficiently document surgical safety.
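The documentation step described above, cutting a clip from 2 min before to 30 s after the predicted cystic duct division, can be reproduced with standard tooling. The sketch below shells out to ffmpeg (assumed installed) and also computes a mean absolute error between predicted and annotated division times; file names and times are placeholders, and this is an illustration of the idea rather than the EndoDigest implementation.

```python
import subprocess

def extract_cvs_clip(video_path: str, t_division_s: float, out_path: str) -> None:
    """Cut the 2.5-min window around the predicted cystic duct division."""
    start = max(0.0, t_division_s - 120.0)      # 2 min before the division...
    duration = (t_division_s - start) + 30.0    # ...to 30 s after it
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-i", video_path,
         "-t", str(duration), "-c", "copy", out_path],
        check=True,
    )

def mean_abs_error(predicted_s, annotated_s) -> float:
    """Mean absolute error (seconds) between predicted and annotated times."""
    return sum(abs(p - a) for p, a in zip(predicted_s, annotated_s)) / len(predicted_s)

extract_cvs_clip("lc_case_001.mp4", t_division_s=1510.0, out_path="cvs_doc_001.mp4")
print(mean_abs_error([1510.0, 900.0], [1490.0, 960.0]), "seconds")
```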
Affiliation(s)
- Pietro Mascagni: ICube, University of Strasbourg, CNRS, c/o IHU-Strasbourg, 1 place de l'hôpital, 67000 Strasbourg, France; Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Deepak Alapatt: ICube, University of Strasbourg, CNRS, c/o IHU-Strasbourg, 1 place de l'hôpital, 67000 Strasbourg, France
- Giovanni Guglielmo Laracca: Department of Medical Surgical Science and Translational Medicine, Sant'Andrea Hospital, Sapienza University of Rome, Rome, Italy
- Ludovica Guerriero: Department of Laparoscopic and Robotic General Surgery, Monaldi Hospital, AORN dei Colli, Naples, Italy
- Andrea Spota: Scuola di Specializzazione in Chirurgia Generale, University of Milan, Milan, Italy
- Claudio Fiorillo: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Armine Vardazaryan: ICube, University of Strasbourg, CNRS, c/o IHU-Strasbourg, 1 place de l'hôpital, 67000 Strasbourg, France
- Giuseppe Quero: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Sergio Alfieri: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Ludovica Baldari: Department of Surgery, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico di Milano, University of Milan, Milan, Italy
- Elisa Cassinotti: Department of Surgery, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico di Milano, University of Milan, Milan, Italy
- Luigi Boni: Department of Surgery, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico di Milano, University of Milan, Milan, Italy
- Diego Cuccurullo: Department of Laparoscopic and Robotic General Surgery, Monaldi Hospital, AORN dei Colli, Naples, Italy
- Guido Costamagna: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Bernard Dallemagne: Institute for Research Against Digestive Cancer (IRCAD), Strasbourg, France; IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
- Nicolas Padoy: ICube, University of Strasbourg, CNRS, c/o IHU-Strasbourg, 1 place de l'hôpital, 67000 Strasbourg, France; IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
92
Mascagni P, Alapatt D, Sestini L, Altieri MS, Madani A, Watanabe Y, Alseidi A, Redan JA, Alfieri S, Costamagna G, Boškoski I, Padoy N, Hashimoto DA. Computer vision in surgery: from potential to clinical value. NPJ Digit Med 2022; 5:163. PMID: 36307544; PMCID: PMC9616906; DOI: 10.1038/s41746-022-00707-5.
Abstract
Hundreds of millions of operations are performed worldwide each year, and the rising uptake in minimally invasive surgery has enabled fiber optic cameras and robots to become both important tools to conduct surgery and sensors from which to capture information about surgery. Computer vision (CV), the application of algorithms to analyze and interpret visual data, has become a critical technology through which to study the intraoperative phase of care with the goals of augmenting surgeons' decision-making processes, supporting safer surgery, and expanding access to surgical care. While much work has been performed on potential use cases, there are currently no CV tools widely used for diagnostic or therapeutic applications in surgery. Using laparoscopic cholecystectomy as an example, we reviewed current CV techniques that have been applied to minimally invasive surgery and their clinical applications. Finally, we discuss the challenges and obstacles that remain to be overcome for broader implementation and adoption of CV in surgery.
Affiliation(s)
- Pietro Mascagni: Gemelli Hospital, Catholic University of the Sacred Heart, Rome, Italy; IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France; Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada
- Deepak Alapatt: ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France
- Luca Sestini: ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Maria S Altieri: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Amin Madani: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University Health Network, Toronto, ON, Canada
- Yusuke Watanabe: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University of Hokkaido, Hokkaido, Japan
- Adnan Alseidi: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University of California San Francisco, San Francisco, CA, USA
- Jay A Redan: Department of Surgery, AdventHealth-Celebration Health, Celebration, FL, USA
- Sergio Alfieri: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Guido Costamagna: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Ivo Boškoski: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Nicolas Padoy: IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France; ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France
- Daniel A Hashimoto: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
93
Gholinejad M, Pelanis E, Aghayan D, Fretland ÅA, Edwin B, Terkivatan T, Elle OJ, Loeve AJ, Dankelman J. Generic surgical process model for minimally invasive liver treatment methods. Sci Rep 2022; 12:16684. PMID: 36202857; PMCID: PMC9537522; DOI: 10.1038/s41598-022-19891-1.
Abstract
Surgical process modelling is an innovative approach that aims to simplify the challenge of improving surgeries through quantitative analysis of a well-established model of surgical activities. In this paper, surgical process modelling strategies are applied to the analysis of different Minimally Invasive Liver Treatments (MILTs), including ablation and surgical resection of liver lesions, and a generic surgical process model accommodating these variations in MILTs is introduced. The generic model was established at three granularity levels and encompasses thirteen phases; it was verified against videos of MILT procedures and interviews with surgeons. The established model covers all surgical and interventional activities and the connections between them, and provides a foundation for extensive quantitative analysis and simulation of MILT procedures for improving computer-assisted surgery systems, surgeon training and evaluation, surgeon guidance and planning systems, and the evaluation of new technologies.
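One way to picture a process model defined at several granularity levels is as a simple phase hierarchy that can be flattened to any level. The phases below are illustrative placeholders, not the thirteen phases defined in the paper.

```python
# Illustrative hierarchy: coarse phases decompose into finer-grained steps
process_model = {
    "preparation": {
        "patient positioning": ["align table", "secure patient"],
        "port placement": ["insert trocar", "insufflate"],
    },
    "lesion treatment": {
        "resection": ["mobilise liver", "transect parenchyma"],
        "ablation": ["place probe", "apply energy"],
    },
}

def phases_at_level(model, level: int):
    """Flatten the model to the requested granularity (0 = coarsest)."""
    if level == 0:
        return list(model)
    return [p for sub in model.values() for p in phases_at_level(sub, level - 1)]

print(phases_at_level(process_model, 1))
# ['patient positioning', 'port placement', 'resection', 'ablation']
```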
Affiliation(s)
- Maryam Gholinejad: Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
- Egidius Pelanis: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, Medical Faculty, University of Oslo, Oslo, Norway
- Davit Aghayan: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Surgery N1, Yerevan State Medical University after M. Heratsi, Yerevan, Armenia
- Åsmund Avdem Fretland: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of HPB Surgery, Oslo University Hospital, Oslo, Norway
- Bjørn Edwin: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, Medical Faculty, University of Oslo, Oslo, Norway; Department of HPB Surgery, Oslo University Hospital, Oslo, Norway
- Turkan Terkivatan: Department of Surgery, Division of HPB and Transplant Surgery, Erasmus MC, University Medical Centre Rotterdam, Rotterdam, The Netherlands
- Ole Jakob Elle: The Intervention Centre, Oslo University Hospital, Oslo, Norway
- Arjo J Loeve: Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
- Jenny Dankelman: Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
94
Jin Y, Long Y, Gao X, Stoyanov D, Dou Q, Heng PA. Trans-SVNet: hybrid embedding aggregation Transformer for surgical workflow analysis. Int J Comput Assist Radiol Surg 2022; 17:2193-2202. PMID: 36129573; DOI: 10.1007/s11548-022-02743-8.
Abstract
PURPOSE Real-time surgical workflow analysis is a key component of computer-assisted intervention systems for improving cognitive assistance. Most existing methods rely solely on conventional temporal models and encode features in a fixed spatial-then-temporal arrangement, so the supportive benefits of intermediate features are partially lost from both the visual and the temporal aspects. In this paper, we rethink feature encoding to attend to and preserve the information critical for accurate workflow recognition and anticipation. METHODS We introduce Transformers to surgical workflow analysis to reconsider the complementary effects of spatial and temporal representations. We propose a hybrid embedding aggregation Transformer, named Trans-SVNet, in which the designed spatial and temporal embeddings interact effectively by employing the spatial embedding to query the temporal embedding sequence. We optimize jointly with loss objectives from both analysis tasks to leverage their high correlation. RESULTS We extensively evaluated our method on three large surgical video datasets. Our method consistently outperforms the state of the art on the workflow recognition task across all three datasets, and recognition results gain a large improvement when learned jointly with anticipation. Our approach is also effective on anticipation, with promising performance, and our model achieves a real-time inference speed of 0.0134 seconds per frame. CONCLUSION Experimental results demonstrate the efficacy of our hybrid embedding integration, which rediscovers crucial cues from complementary spatial-temporal embeddings. The improved performance under multi-task learning indicates that the anticipation task brings additional knowledge to the recognition task. The effectiveness and efficiency of our method also show its potential for use in the operating room.
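The core interaction described above, a spatial embedding querying a sequence of temporal embeddings, is ordinary cross-attention. The PyTorch sketch below shows that queries-from-space, keys/values-from-time arrangement together with a two-head multi-task loss; all dimensions, head sizes, and targets are invented for illustration, and this is not the authors' full Trans-SVNet.

```python
import torch
import torch.nn as nn

embed_dim, n_heads, seq_len, batch = 128, 4, 30, 2
cross_attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)

spatial = torch.randn(batch, 1, embed_dim)          # current-frame spatial embedding
temporal = torch.randn(batch, seq_len, embed_dim)   # past temporal embedding sequence

# The spatial embedding queries the temporal sequence (keys and values)
fused, _ = cross_attn(query=spatial, key=temporal, value=temporal)
fused = fused.squeeze(1)                            # (batch, embed_dim)

recognition_head = nn.Linear(embed_dim, 7)          # e.g. 7 surgical phases
anticipation_head = nn.Linear(embed_dim, 5)         # e.g. 5 instrument signals
phase_logits = recognition_head(fused)
antic_pred = anticipation_head(fused)

# Joint optimization over both analysis tasks (random targets as stand-ins)
loss = nn.CrossEntropyLoss()(phase_logits, torch.randint(0, 7, (batch,))) \
     + nn.SmoothL1Loss()(antic_pred, torch.rand(batch, 5))
```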
Affiliation(s)
- Yueming Jin: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Department of Computer Science, University College London, London, UK
- Yonghao Long: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China
- Xiaojie Gao: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), Department of Computer Science, University College London, London, UK
- Qi Dou: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China; Institute of Medical Intelligence and XR, The Chinese University of Hong Kong, Shatin, Hong Kong, China
- Pheng-Ann Heng: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China; Institute of Medical Intelligence and XR, The Chinese University of Hong Kong, Shatin, Hong Kong, China
95
Protecting procedural care-cybersecurity considerations for robotic surgery. NPJ Digit Med 2022; 5:148. PMID: 36127420; PMCID: PMC9489690; DOI: 10.1038/s41746-022-00693-8.
96
Quero G, Mascagni P, Kolbinger FR, Fiorillo C, De Sio D, Longo F, Schena CA, Laterza V, Rosa F, Menghi R, Papa V, Tondolo V, Cina C, Distler M, Weitz J, Speidel S, Padoy N, Alfieri S. Artificial Intelligence in Colorectal Cancer Surgery: Present and Future Perspectives. Cancers (Basel) 2022; 14:3803. PMID: 35954466; PMCID: PMC9367568; DOI: 10.3390/cancers14153803.
Abstract
Artificial intelligence (AI) and computer vision (CV) are beginning to impact medicine. While evidence on the clinical value of AI-based solutions for the screening and staging of colorectal cancer (CRC) is mounting, CV and AI applications to enhance the surgical treatment of CRC are still in their early stage. This manuscript introduces key AI concepts to a surgical audience, illustrates fundamental steps to develop CV for surgical applications, and provides a comprehensive overview on the state-of-the-art of AI applications for the treatment of CRC. Notably, studies show that AI can be trained to automatically recognize surgical phases and actions with high accuracy even in complex colorectal procedures such as transanal total mesorectal excision (TaTME). In addition, AI models were trained to interpret fluorescent signals and recognize correct dissection planes during total mesorectal excision (TME), suggesting CV as a potentially valuable tool for intraoperative decision-making and guidance. Finally, AI could have a role in surgical training, providing automatic surgical skills assessment in the operating room. While promising, these proofs of concept require further development, validation in multi-institutional data, and clinical studies to confirm AI as a valuable tool to enhance CRC treatment.
Affiliation(s)
- Giuseppe Quero: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Pietro Mascagni: Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy; Institute of Image-Guided Surgery, IHU-Strasbourg, 67000 Strasbourg, France
- Fiona R. Kolbinger: Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Claudio Fiorillo: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Davide De Sio: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Fabio Longo: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Carlo Alberto Schena: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Vito Laterza: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Fausto Rosa: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Roberta Menghi: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Valerio Papa: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Vincenzo Tondolo: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Caterina Cina: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Marius Distler: Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Juergen Weitz: Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Stefanie Speidel: National Center for Tumor Diseases (NCT), Partner Site Dresden, 01307 Dresden, Germany
- Nicolas Padoy: Institute of Image-Guided Surgery, IHU-Strasbourg, 67000 Strasbourg, France; ICube, Centre National de la Recherche Scientifique (CNRS), University of Strasbourg, 67000 Strasbourg, France
- Sergio Alfieri: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
97
Müller LR, Petersen J, Yamlahi A, Wise P, Adler TJ, Seitel A, Kowalewski KF, Müller B, Kenngott H, Nickel F, Maier-Hein L. Robust hand tracking for surgical telestration. Int J Comput Assist Radiol Surg 2022; 17:1477-1486. PMID: 35624404; PMCID: PMC9307534; DOI: 10.1007/s11548-022-02637-9.
Abstract
PURPOSE As human failure has been shown to be a primary cause of post-operative death, surgical training is of the utmost socioeconomic importance. In this context, the concept of surgical telestration has been introduced to enable experienced surgeons to mentor trainees efficiently, effectively and intuitively. While previous approaches to telestration have concentrated on overlaying drawings on surgical videos, we explore augmented reality (AR) visualization of the surgeon's hands to imitate direct interaction with the situs. METHODS We present a real-time hand-tracking pipeline specifically designed for surgical telestration. It comprises three modules, dedicated to (1) coarse localization of the expert's hand, (2) segmentation of the hand for AR visualization in the trainee's field of view, and (3) regression of the keypoints making up the hand's skeleton. This semantic representation enables structured reporting of the motions performed as part of the teaching. RESULTS According to a comprehensive validation based on a large data set comprising more than 14,000 annotated images with varying application-relevant conditions, our algorithm enables real-time hand tracking and is sufficiently accurate for surgical telestration. In a retrospective validation study, a mean detection accuracy of 98%, a mean keypoint regression error of 10.0 px and a mean Dice Similarity Coefficient of 0.95 were achieved. In a prospective validation study, performance was uncompromised when the sensor, operator or gesture varied. CONCLUSION Owing to its high accuracy and fast inference time, our neural-network-based approach to hand tracking is well suited to AR-based surgical telestration. Future work should evaluate the clinical value of the approach.
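Two of the metrics reported above are easy to make concrete. This NumPy sketch computes a Dice Similarity Coefficient for binary hand masks and a mean keypoint error in pixels; the arrays are random stand-ins for real predictions and annotations.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def mean_keypoint_error(pred_xy: np.ndarray, truth_xy: np.ndarray) -> float:
    """Mean Euclidean distance in pixels over (K, 2) keypoint arrays."""
    return float(np.linalg.norm(pred_xy - truth_xy, axis=1).mean())

rng = np.random.default_rng(0)
mask_pred = rng.random((256, 256)) > 0.5
mask_true = rng.random((256, 256)) > 0.5
kp_pred, kp_true = rng.random((21, 2)) * 256, rng.random((21, 2)) * 256  # 21-point hand skeleton

print(f"Dice: {dice(mask_pred, mask_true):.2f}")
print(f"Keypoint error: {mean_keypoint_error(kp_pred, kp_true):.1f} px")
```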
Affiliation(s)
- Lucas-Raphael Müller: Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Jens Petersen: Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Amine Yamlahi: Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Philipp Wise: Department for General, Visceral and Transplantation Surgery, Mannheim University Hospital, Heidelberg, Germany
- Tim J Adler: Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Alexander Seitel: Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Karl-Friedrich Kowalewski: Department of Urology and Urosurgery, Medical Faculty Mannheim, Heidelberg University Hospital, Heidelberg, Germany
- Beat Müller: Department for General, Visceral and Transplantation Surgery, Mannheim University Hospital, Heidelberg, Germany
- Hannes Kenngott: Department for General, Visceral and Transplantation Surgery, Mannheim University Hospital, Heidelberg, Germany
- Felix Nickel: Department for General, Visceral and Transplantation Surgery, Mannheim University Hospital, Heidelberg, Germany
- Lena Maier-Hein: Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
98
Lam K, Abràmoff MD, Balibrea JM, Bishop SM, Brady RR, Callcut RA, Chand M, Collins JW, Diener MK, Eisenmann M, Fermont K, Neto MG, Hager GD, Hinchliffe RJ, Horgan A, Jannin P, Langerman A, Logishetty K, Mahadik A, Maier-Hein L, Antona EM, Mascagni P, Mathew RK, Müller-Stich BP, Neumuth T, Nickel F, Park A, Pellino G, Rudzicz F, Shah S, Slack M, Smith MJ, Soomro N, Speidel S, Stoyanov D, Tilney HS, Wagner M, Darzi A, Kinross JM, Purkayastha S. A Delphi consensus statement for digital surgery. NPJ Digit Med 2022; 5:100. PMID: 35854145; PMCID: PMC9296639; DOI: 10.1038/s41746-022-00641-6.
Abstract
The use of digital technology is increasing rapidly across surgical specialties, yet there is no consensus on the term 'digital surgery'. This is critical, as digital health technologies present technical, governance, and legal challenges unique to the surgeon and the surgical patient. We aim to define the term digital surgery and the ethical issues surrounding its clinical application, and to identify barriers and research goals for future practice. Thirty-eight international experts, across the fields of surgery, AI, industry, law, ethics and policy, participated in a four-round Delphi exercise. Issues were generated by an expert panel and a public panel through a scoping questionnaire around key themes identified from the literature, and were voted upon in two subsequent questionnaire rounds. Consensus was defined as >70% of the panel deeming a statement important and <30% deeming it unimportant. A final online meeting was held to discuss the consensus statements. The definition of digital surgery as the use of technology for the enhancement of preoperative planning, surgical performance, therapeutic support, or training, to improve outcomes and reduce harm, achieved 100% consensus agreement. We highlight key ethical issues concerning data, privacy, confidentiality and public trust, consent, law, litigation and liability, and commercial partnerships within digital surgery, and identify barriers and research goals for future practice. Developers and users of digital surgery must be aware not only of the ethical issues surrounding digital applications in healthcare, but also of the ethical considerations unique to digital surgery. Future research into these issues must involve all digital surgery stakeholders, including patients.
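The consensus rule used in this Delphi exercise (>70% of the panel rate a statement important and <30% rate it unimportant) is simple to encode; a short sketch with made-up votes for a 38-person panel:

```python
def reaches_consensus(votes: list[str]) -> bool:
    """Votes are 'important', 'unimportant', or 'neutral'."""
    n = len(votes)
    important = votes.count("important") / n
    unimportant = votes.count("unimportant") / n
    return important > 0.70 and unimportant < 0.30

votes = ["important"] * 30 + ["neutral"] * 5 + ["unimportant"] * 3
print(reaches_consensus(votes))  # True: 79% important, 8% unimportant
```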
Affiliation(s)
- Kyle Lam: Department of Surgery and Cancer, Imperial College, London, UK; Institute of Global Health Innovation, Imperial College London, London, UK
- Michael D Abràmoff: Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, USA; Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
- José M Balibrea: Department of Gastrointestinal Surgery, Hospital Clínic de Barcelona, Barcelona, Spain; Universitat de Barcelona, Barcelona, Spain
- Richard R Brady: Newcastle Centre for Bowel Disease Research Hub, Newcastle University, Newcastle, UK; Department of Colorectal Surgery, Newcastle Hospitals, Newcastle, UK
- Manish Chand: Department of Surgery and Interventional Sciences, University College London, London, UK
- Justin W Collins: CMR Surgical Limited, Cambridge, UK; Department of Surgery and Interventional Sciences, University College London, London, UK
- Markus K Diener: Department of General and Visceral Surgery, University of Freiburg, Freiburg im Breisgau, Germany; Faculty of Medicine, University of Freiburg, Freiburg im Breisgau, Germany
- Matthias Eisenmann: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Kelly Fermont: Solicitor of the Senior Courts of England and Wales, Independent Researcher, Bristol, UK
- Manoel Galvao Neto: Endovitta Institute, Sao Paulo, Brazil; FMABC Medical School, Santo Andre, Brazil
- Gregory D Hager: The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, MD, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA
- Alan Horgan: Department of Colorectal Surgery, Newcastle Hospitals, Newcastle, UK
- Pierre Jannin: LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Alexander Langerman: Otolaryngology, Head & Neck Surgery and Radiology & Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; International Centre for Surgical Safety, Li Ka Shing Knowledge Institute, St. Michael's Hospital, University of Toronto, Toronto, ON, Canada
- Lena Maier-Hein: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany; LKSK Institute of St. Michael's Hospital, Toronto, ON, Canada
- Pietro Mascagni: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France; ICube, University of Strasbourg, Strasbourg, France
- Ryan K Mathew: School of Medicine, University of Leeds, Leeds, UK; Department of Neurosurgery, Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Beat P Müller-Stich: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany; National Center for Tumor Diseases, Heidelberg, Germany
- Thomas Neumuth: Innovation Center Computer Assisted Surgery (ICCAS), Universität Leipzig, Leipzig, Germany
- Felix Nickel: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Adrian Park: Department of Surgery, Anne Arundel Medical Center, School of Medicine, Johns Hopkins University, Annapolis, MD, USA
- Gianluca Pellino: Department of Advanced Medical and Surgical Sciences, Università degli Studi della Campania "Luigi Vanvitelli", Naples, Italy; Colorectal Surgery, Vall d'Hebron University Hospital, Barcelona, Spain
- Frank Rudzicz: Department of Computer Science, University of Toronto, Toronto, ON, Canada; Vector Institute for Artificial Intelligence, Toronto, ON, Canada; Unity Health Toronto, Toronto, ON, Canada; Surgical Safety Technologies Inc, Toronto, ON, Canada
- Sam Shah: Faculty of Future Health, College of Medicine and Dentistry, Ulster University, Birmingham, UK
- Mark Slack: CMR Surgical Limited, Cambridge, UK; Department of Urogynaecology, Addenbrooke's Hospital, Cambridge, UK; University of Cambridge, Cambridge, UK
- Myles J Smith: The Royal Marsden Hospital, London, UK; Institute of Cancer Research, London, UK
- Naeem Soomro: Department of Urology, Newcastle Upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- Stefanie Speidel: Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC), Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Henry S Tilney: Department of Surgery and Cancer, Imperial College, London, UK; Department of Colorectal Surgery, Frimley Health NHS Foundation Trust, Frimley, UK
- Martin Wagner: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany; National Center for Tumor Diseases, Heidelberg, Germany
- Ara Darzi: Department of Surgery and Cancer, Imperial College, London, UK; Institute of Global Health Innovation, Imperial College London, London, UK
- James M Kinross: Department of Surgery and Cancer, Imperial College, London, UK
99

100
Torkamani-Azar M, Lee A, Bednarik R. Methods and Measures for Mental Stress Assessment in Surgery: A Systematic Review of 20 Years of Literature. IEEE J Biomed Health Inform 2022; 26:4436-4449. PMID: 35696473; DOI: 10.1109/jbhi.2022.3182869.
Abstract
Real-time mental stress monitoring of surgeons and surgical staff in operating rooms may reduce surgical injuries, improve performance and quality of medical care, and accelerate the implementation of stress-management strategies. Motivated by the increasing use of objective and subjective metrics for cognitive monitoring, and by the gap in reviews of experimental design setups and data analytics, a systematic review of 71 studies on mental stress and workload measurement in surgical settings, published in 2001-2020, is presented. Almost 61% of the selected papers used both objective and subjective measures, followed by 25% that administered only subjective tools, mostly consisting of validated instruments and customized surveys. The total number of publications on intraoperative stress assessment has increased since the mid-2010s, along with momentum in the use of both subjective and real-time objective measures. Cardiac activity (including heart-rate-variability metrics), stress hormones, and eye-tracking metrics were the most frequently used objective measures, and electroencephalography (EEG) was the least frequently used. Around 40% of the selected papers collected at least two objective measures, 41% used wearable devices, 23% performed synchronization and annotation, and 76% conducted baseline or multi-point data acquisition. Furthermore, 93% used a variety of statistical techniques, 14% applied regression models, and only one study released a public, anonymized dataset. This review of data modalities, experimental setups, and analysis techniques for intraoperative stress monitoring highlights the initiatives of surgical data science and motivates research on computational techniques for mental and surgical skills assessment and cognition-guided surgery.
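As an example of the heart-rate-variability metrics that dominate the objective measures surveyed above, RMSSD (root mean square of successive RR-interval differences) takes a few lines of NumPy; the RR intervals below are simulated, not data from any reviewed study.

```python
import numpy as np

def rmssd(rr_ms: np.ndarray) -> float:
    """Root mean square of successive differences of RR intervals (ms),
    a common time-domain heart-rate-variability metric."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

rng = np.random.default_rng(1)
rr = 800 + rng.normal(0, 25, size=300)   # simulated RR intervals around 75 bpm
print(f"RMSSD: {rmssd(rr):.1f} ms")
```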