1
Younis R, Yamlahi A, Bodenstedt S, Scheikl PM, Kisilenko A, Daum M, Schulze A, Wise PA, Nickel F, Mathis-Ullrich F, Maier-Hein L, Müller-Stich BP, Speidel S, Distler M, Weitz J, Wagner M. A surgical activity model of laparoscopic cholecystectomy for co-operation with collaborative robots. Surg Endosc 2024; 38:4316-4328. [PMID: 38872018] [PMCID: PMC11289174] [DOI: 10.1007/s00464-024-10958-w] [Received: 03/15/2024] [Accepted: 05/24/2024]
Abstract
BACKGROUND Laparoscopic cholecystectomy is a very frequent surgical procedure. However, in an ageing society, fewer surgical staff will need to perform surgery on more patients. Collaborative surgical robots (cobots) could address surgical staff shortages and workload. To achieve context-awareness for surgeon-robot collaboration, recognition of the intraoperative action workflow is a key challenge. METHODS A surgical process model was developed for intraoperative surgical activities, including actor, instrument, action and target, in laparoscopic cholecystectomy (excluding camera guidance). These activities, as well as instrument presence and surgical phases, were annotated in videos of laparoscopic cholecystectomy performed on human patients (n = 10) and on explanted porcine livers (n = 10). The machine learning algorithm Distilled-Swin was trained on our own annotated dataset and the CholecT45 dataset. The model was validated using fivefold cross-validation. RESULTS In total, 22,351 activities were annotated, with a cumulative duration of 24.9 h of video segments. The machine learning algorithm trained and validated on our own dataset scored a mean average precision (mAP) of 25.7% and a top-K (K = 5) accuracy of 85.3%. With training and validation on our dataset and CholecT45, the algorithm scored a mAP of 37.9%. CONCLUSIONS An activity model was developed and applied for the fine-granular annotation of laparoscopic cholecystectomies in two surgical settings. A machine recognition algorithm trained on our own annotated dataset and CholecT45 achieved higher performance than training on CholecT45 alone, and recognizes frequently occurring activities well, but not infrequent ones. The analysis of an annotated dataset allowed the potential of collaborative surgical robots to address the workload of surgical staff to be quantified.
If collaborative surgical robots could grasp and hold tissue, up to 83.5% of the assistant's tissue-interacting tasks (i.e., excluding camera guidance) could be performed by robots.
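The abstract reports top-K (K = 5) accuracy alongside mAP. As an illustrative sketch only (not the authors' code, and with made-up scores), top-K accuracy for a multi-class activity recognizer counts a prediction as correct when the true class is among the K highest-scored classes:

```python
import numpy as np

def top_k_accuracy(scores, labels, k=5):
    """Fraction of samples whose true class is among the k highest-scored classes."""
    # Indices of the k largest scores per sample (order within the top k is irrelevant)
    top_k = np.argsort(scores, axis=1)[:, -k:]
    return np.any(top_k == labels[:, None], axis=1).mean()

# Hypothetical scores for 2 video frames over 4 activity classes
scores = np.array([[0.1, 0.6, 0.2, 0.1],   # true class 1: a top-1 hit
                   [0.4, 0.3, 0.2, 0.1]])  # true class 2: only a top-3 hit
labels = np.array([1, 2])
print(top_k_accuracy(scores, labels, k=1))  # 0.5
print(top_k_accuracy(scores, labels, k=3))  # 1.0
```

Top-K accuracy is far more forgiving than mAP on fine-granular activity vocabularies, which is consistent with the large gap the study reports (85.3% top-5 vs. 25.7% mAP).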
Affiliation(s)
- R Younis
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- A Yamlahi
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- S Bodenstedt
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- P M Scheikl
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- A Kisilenko
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- M Daum
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- A Schulze
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- P A Wise
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- F Nickel
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- F Mathis-Ullrich
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- L Maier-Hein
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- B P Müller-Stich
- Department for Abdominal Surgery, University Center for Gastrointestinal and Liver Diseases, Basel, Switzerland
- S Speidel
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- M Distler
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- J Weitz
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- M Wagner
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
2
Uribe Rivera AK, Seeliger B, Goffin L, García-Vázquez A, Mutter D, Giménez ME. Robotic Assistance in Percutaneous Liver Ablation Therapies: A Systematic Review and Meta-Analysis. Ann Surg Open 2024; 5:e406. [PMID: 38911657] [PMCID: PMC11191991] [DOI: 10.1097/as9.0000000000000406] [Received: 09/29/2023] [Accepted: 02/19/2024]
Abstract
Objective The aim of this systematic review and meta-analysis is to identify current robotic assistance systems for percutaneous liver ablations, compare approaches, and determine how to achieve standardization of procedural concepts for optimized ablation outcomes. Background Image-guided surgical approaches are increasingly common. Assistance by navigation and robotic systems makes it possible to optimize procedural accuracy, with the aim of consistently obtaining adequate ablation volumes. Methods Several databases (PubMed/MEDLINE, ProQuest, Science Direct, Research Rabbit, and IEEE Xplore) were systematically searched for preclinical and clinical studies of robotic percutaneous liver ablation, and relevant original manuscripts were included according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The endpoints were the type of device, insertion technique (freehand or robotic), and planning, execution, and confirmation of the procedure. A meta-analysis was performed on comparative studies of freehand and robotic techniques in terms of radiation dose, accuracy, and Euclidean error. Results The inclusion criteria were met by 33/755 studies. There were 24 robotic devices reported for percutaneous liver surgery. The most used were the MAXIO robot (8/33, 24.2%), Zerobot, and AcuBot (each 2/33, 6.1%). The most common tracking system was optical (25/33, 75.8%). In the meta-analysis, the robotic approach was superior to the freehand technique in terms of individual radiation dose (0.5582, 95% confidence interval [CI] = 0.0167-1.0996, dose-length product range 79-2216 mGy·cm), accuracy (0.6260, 95% CI = 0.1423-1.1097), and Euclidean error (0.8189, 95% CI = -0.1020 to 1.7399). Conclusions Robotic assistance in percutaneous ablation for liver tumors achieves superior results and reduces errors compared with manual applicator insertion.
Standardization of concepts and reporting is necessary and suggested to facilitate the comparison of the different parameters used to measure liver ablation results. The increasing use of image-guided surgery has encouraged robotic assistance for percutaneous liver ablations. This systematic review analyzed 33 studies and identified 24 robotic devices, with optical tracking prevailing. The meta-analysis favored robotic assistance, showing increased accuracy and reduced errors compared with the freehand technique, emphasizing the need for conceptual standardization.
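The pooled estimates quoted above are standardized mean differences (SMDs) with 95% confidence intervals. As a hedged sketch of how a single study contributes such an effect size (standard Cohen's d with pooled SD; the study numbers below are hypothetical, not taken from the review):

```python
import math

def smd_with_ci(mean1, sd1, n1, mean2, sd2, n2, z=1.96):
    """Cohen's d (pooled SD) for one two-arm study, with a large-sample 95% CI."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    # Large-sample variance of d, as used in standard meta-analysis texts
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, d - z * se, d + z * se

# Hypothetical study: freehand vs. robotic applicator placement error (mm), 20 patients per arm
d, lo, hi = smd_with_ci(mean1=4.0, sd1=2.0, n1=20, mean2=2.0, sd2=2.0, n2=20)
print(round(d, 3), round(lo, 3), round(hi, 3))
```

Note that an interval crossing zero, as for the Euclidean-error estimate above (95% CI = -0.1020 to 1.7399), means that endpoint's pooled difference is not statistically significant on its own.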
Affiliation(s)
- Ana K Uribe Rivera
- From the IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
- Barbara Seeliger
- From the IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
- Department of Visceral and Digestive Surgery, University Hospitals of Strasbourg, Strasbourg, France
- IRCAD, Research Institute Against Digestive Cancer, Strasbourg, France
- ICube, UMR 7357 CNRS, INSERM U1328 RODIN, University of Strasbourg, Strasbourg, France
- Inserm U1110, Institute for Viral and Liver Diseases, Strasbourg, France
- Trustworthy AI Lab, Centre National de la Recherche Scientifique (CNRS), France
- Laurent Goffin
- ICube, UMR 7357 CNRS, INSERM U1328 RODIN, University of Strasbourg, Strasbourg, France
- Trustworthy AI Lab, Centre National de la Recherche Scientifique (CNRS), France
- Computational Surgery SAS, Schiltigheim, France
- Didier Mutter
- From the IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
- Department of Visceral and Digestive Surgery, University Hospitals of Strasbourg, Strasbourg, France
- IRCAD, Research Institute Against Digestive Cancer, Strasbourg, France
- Mariano E Giménez
- From the IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
- IRCAD, Research Institute Against Digestive Cancer, Strasbourg, France
- DAICIM Foundation (Training, Research and Clinical Activity in Minimally Invasive Surgery), Buenos Aires, Argentina
3
Wilkens U, Lupp D, Langholf V. Configurations of human-centered AI at work: seven actor-structure engagements in organizations. Front Artif Intell 2023; 6:1272159. [PMID: 38028670] [PMCID: PMC10664146] [DOI: 10.3389/frai.2023.1272159] [Received: 08/03/2023] [Accepted: 09/29/2023]
Abstract
Purpose The discourse on the human-centricity of AI at work needs contextualization. The aim of this study is to distinguish the prevalent criteria of human-centricity for AI applications in the scientific discourse and to relate them to the work contexts for which they are specifically intended. This leads to configurations of actor-structure engagements that foster human-centricity in the workplace. Theoretical foundation The study applies configurational theory to the sociotechnical systems analysis of work settings. The assumption is that different approaches to promoting human-centricity coexist, depending on the stakeholders responsible for their application. Method The exploration of criteria indicating human-centricity and their synthesis into configurations is based on a cross-disciplinary literature review following a systematic search strategy and a deductive-inductive qualitative content analysis of 101 research articles. Results The article outlines eight criteria of human-centricity: two concern challenges of human-centered technology development (trustworthiness and explainability), three concern challenges of human-centered employee development (prevention of job loss, health, and human agency and augmentation), and three concern challenges of human-centered organizational development (compensation of systems' weaknesses, integration of user-domain knowledge, and accountability and safety culture). Configurational theory allows contextualization of these criteria from a higher-order perspective and leads to seven configurations of actor-structure engagements, in terms of engagement for (1) data and technostructure, (2) operational process optimization, (3) operators' employment, (4) employees' wellbeing, (5) proficiency, (6) accountability, and (7) interactive cross-domain design. Each has one criterion of human-centricity in the foreground. Trustworthiness does not form its own configuration but is proposed to be a necessary condition in all seven configurations.
Discussion The article contextualizes the overall debate on human-centricity and makes it possible to specify stakeholder-related engagements and how they complement each other. This is of high value for practitioners bringing human-centricity to the workplace and allows them to compare which criteria are considered in transnational declarations, international norms and standards, or company guidelines.
Affiliation(s)
- Uta Wilkens
- Institute of Work Science, Ruhr University Bochum, Bochum, Germany
4
Pai SN, Jeyaraman M, Jeyaraman N, Nallakumarasamy A, Yadav S. In the Hands of a Robot, From the Operating Room to the Courtroom: The Medicolegal Considerations of Robotic Surgery. Cureus 2023; 15:e43634. [PMID: 37719624] [PMCID: PMC10504870] [DOI: 10.7759/cureus.43634] [Accepted: 08/17/2023]
Abstract
Robotic surgery has rapidly evolved as a groundbreaking field in medicine, revolutionizing surgical practices across various specialties. Despite its numerous benefits, the adoption of robotic surgery faces significant medicolegal challenges. This article delves into the underexplored legal implications of robotic surgery and identifies three distinct medicolegal problems. First, the lack of standardized training and credentialing for robotic surgery poses potential risks to patient safety and surgeon competence. Second, informed consent processes require additional considerations to ensure patients are fully aware of the technology's capabilities and potential risks. Finally, the issue of legal liability becomes complex due to the involvement of multiple stakeholders in the functioning of robotic systems. The article highlights the need for comprehensive guidelines, regulations, and training programs to navigate the medicolegal aspects of robotic surgery effectively, thereby unlocking its full potential for the future.
Affiliation(s)
- Satvik N Pai
- Orthopaedic Surgery, Hospital for Orthopedics, Sports Medicine, Arthritis, and Trauma (HOSMAT) Hospital, Bangalore, IND
- Madhan Jeyaraman
- Orthopaedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Naveen Jeyaraman
- Orthopaedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Arulkumar Nallakumarasamy
- Orthopaedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Sankalp Yadav
- Medicine, Shri Madan Lal Khurana Chest Clinic, New Delhi, IND
5
Shi C, Zheng Y, Fey AM. Recognition and Prediction of Surgical Gestures and Trajectories Using Transformer Models in Robot-Assisted Surgery. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2022; 2022:8017-8024. [PMID: 37363719] [PMCID: PMC10288529] [DOI: 10.1109/iros47612.2022.9981611]
Abstract
Surgical activity recognition and prediction can provide important context in many Robot-Assisted Surgery (RAS) applications, for example, surgical progress monitoring and estimation, surgical skill evaluation, and shared control strategies during teleoperation. Transformer models were first developed for Natural Language Processing (NLP) to model word sequences, and the method soon gained popularity for general sequence modeling tasks. In this paper, we propose the novel use of a Transformer model for three tasks: gesture recognition, gesture prediction, and trajectory prediction during RAS. We modify the original Transformer architecture to generate estimates of the current gesture sequence, future gesture sequence, and future trajectory sequence using only the current kinematic data of the surgical robot end-effectors. We evaluate our proposed models on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and use Leave-One-User-Out (LOUO) cross-validation to ensure the generalizability of our results. Our models achieve up to 89.3% gesture recognition accuracy, 84.6% gesture prediction accuracy (1 second ahead) and 2.71 mm trajectory prediction error (1 second ahead). Our models are comparable to and able to outperform state-of-the-art methods while using only the kinematic data channel. This approach can enable near-real-time surgical activity recognition and prediction.
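Leave-One-User-Out (LOUO) cross-validation, the protocol used above, holds out every trial of one user per fold so the model is always tested on an unseen surgeon. A minimal sketch of the split logic (the user/trial index below is hypothetical, merely shaped like JIGSAWS metadata):

```python
def louo_splits(samples):
    """Leave-One-User-Out folds: each fold's test set is every trial of one user."""
    users = sorted({user for user, _ in samples})
    folds = []
    for held_out in users:
        test = [i for i, (user, _) in enumerate(samples) if user == held_out]
        train = [i for i, (user, _) in enumerate(samples) if user != held_out]
        folds.append((train, test))
    return folds

# Hypothetical recording index: (user_id, trial_number) per kinematic sequence
data = [("B", 1), ("B", 2), ("C", 1), ("C", 2), ("D", 1), ("D", 2)]
folds = louo_splits(data)
for train, test in folds:
    print(train, test)
```

Splitting by user rather than by trial prevents a model from scoring well merely by memorizing one surgeon's motion style, which is why LOUO is the standard generalizability check on JIGSAWS.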
Affiliation(s)
- Chang Shi
- Walker Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Yi Zheng
- Walker Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Ann Majewicz Fey
- Walker Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Surgery, UT Southwestern Medical Center, Dallas, TX 75390, USA
6
Hartwig R, Berlet M, Czempiel T, Fuchtmann J, Rückert T, Feussner H, Wilhelm D. [Image-based supportive measures for future application in surgery]. Chirurgie (Heidelberg) 2022; 93:956-965. [PMID: 35737019] [DOI: 10.1007/s00104-022-01668-x] [Accepted: 06/02/2022]
Abstract
BACKGROUND The development of assistive technologies will become increasingly important in the coming years, and not only in surgery. Comprehensive perception of the current situation is the basis of every autonomous action. Different sensor systems can be used for this purpose, of which video-based systems have particular potential. METHOD Based on the available literature and on our own research projects, central aspects of image-based support systems for surgery are presented. In this context, not only the potential but also the limitations of the methods are explained. RESULTS An established application is the phase detection of surgical interventions, for which surgical videos are analyzed using neural networks. Through temporal and transformer-based analysis, the prediction results have only recently been significantly improved. Robotic camera guidance systems will also use image data to autonomously navigate laparoscopes in the near future. The reliability of these systems needs to be brought up to the high requirements of surgery by means of additional information. A comparable multimodal approach has already been implemented for navigation and localization during laparoscopic procedures. For this purpose, video data are analyzed using various methods and fused with other sensor modalities. DISCUSSION Image-based support methods are already available for various tasks and will become an important aspect of the surgery of the future; however, to be reliably usable for autonomous functions, they must in future be embedded in multimodal approaches in order to provide the necessary safety.
Affiliation(s)
- R Hartwig
- Forschungsgruppe MITI, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
- M Berlet
- Forschungsgruppe MITI, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
- Fakultät für Medizin, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
- T Czempiel
- Computer Aided Medical Procedures, Technische Universität München, München, Deutschland
- J Fuchtmann
- Forschungsgruppe MITI, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
- T Rückert
- Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Regensburg, Deutschland
- H Feussner
- Forschungsgruppe MITI, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
- D Wilhelm
- Forschungsgruppe MITI, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
- Fakultät für Medizin, Klinik und Poliklinik für Chirurgie, Klinikum rechts der Isar, Technische Universität München, München, Deutschland
7
Daetwyler S, Mazloom-Farsibaf H, Danuser G, Craig R. U-Hack Med Gap Year-A Virtual Undergraduate Internship Program in Computer-Assisted Healthcare and Biomedical Research. Front Bioinform 2021; 1:727066. [PMID: 36303739] [PMCID: PMC9581059] [DOI: 10.3389/fbinf.2021.727066] [Received: 06/18/2021] [Accepted: 09/23/2021]
Abstract
The COVID-19 healthcare crisis dramatically changed educational opportunities for undergraduate students. To overcome the lack of exposure to lab research and provide an alternative to cancelled classes and online lectures, the Lyda Hill Department of Bioinformatics at UT Southwestern Medical Center established an innovative, fully remote and paid "U-Hack Med Gap Year" internship program. At the core of the internship program were dedicated biomedical research projects spanning nine months in fields as diverse as computational microscopy, bioimage analysis, genome sequence analysis and establishment of a surgical skill analysis platform. To complement the project work, a biweekly Gap Year lab meeting was devised with opportunities to develop important skills in presenting, data sharing and analysis of new research. Despite a challenging year, all selected students completed the full internship period and over 30% will continue their project remotely after the end of the program.
Affiliation(s)
- Rebekah Craig
- Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, United States