1
Huaulmé A, Harada K, Nguyen QM, Park B, Hong S, Choi MK, Peven M, Li Y, Long Y, Dou Q, Kumar S, Lalithkumar S, Hongliang R, Matsuzaki H, Ishikawa Y, Harai Y, Kondo S, Mitsuishi M, Jannin P. PEg TRAnsfer Workflow recognition challenge report: Do multimodal data improve recognition? Comput Methods Programs Biomed 2023; 236:107561. [PMID: 37119774] [DOI: 10.1016/j.cmpb.2023.107561]
Abstract
BACKGROUND AND OBJECTIVE In order to be context-aware, computer-assisted surgical systems require accurate, real-time automatic surgical workflow recognition. In the past several years, surgical video has been the most commonly used modality for surgical workflow recognition. However, with the democratization of robot-assisted surgery, new modalities, such as kinematics, are now accessible. Some previous methods use these new modalities as input for their models, but their added value has rarely been studied. This paper presents the design and results of the "PEg TRAnsfer Workflow recognition" (PETRAW) challenge, whose objective was to develop surgical workflow recognition methods based on one or more modalities and to study their added value. METHODS The PETRAW challenge included a data set of 150 peg transfer sequences performed on a virtual simulator. This data set included videos, kinematic data, semantic segmentation data, and annotations describing the workflow at three levels of granularity: phase, step, and activity. Five tasks were proposed to the participants: three were related to recognition at all granularities simultaneously using a single modality, and two addressed recognition using multiple modalities. The mean application-dependent balanced accuracy (AD-Accuracy) was used as the evaluation metric; it takes class imbalance into account and is more clinically relevant than a frame-by-frame score. RESULTS Seven teams participated in at least one task, and four participated in every task. The best results were obtained by combining video and kinematic data (AD-Accuracy between 90% and 93% for the four teams that participated in all tasks). CONCLUSION For all teams, surgical workflow recognition methods using multiple modalities improved significantly over unimodal methods. However, the longer execution time required by video/kinematic-based methods (compared with kinematic-only methods) must be considered: one must ask whether it is wise to increase computing time by 2000 to 20,000% for a gain of only 3% in accuracy. The PETRAW data set is publicly available at www.synapse.org/PETRAW to encourage further research in surgical workflow recognition.
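The abstract does not spell out how AD-Accuracy is computed. As a rough illustrative sketch only (not the challenge's exact metric), a class-balanced accuracy can be obtained by averaging per-class recalls, which prevents a dominant class from masking errors on rare phases; the class labels below are invented:

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recalls; robust to class imbalance,
    # unlike a plain frame-by-frame accuracy.
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Imbalanced example: the hypothetical "idle" class dominates.
y_true = ["idle"] * 8 + ["cut", "cut"]
y_pred = ["idle"] * 8 + ["idle", "cut"]
print(balanced_accuracy(y_true, y_pred))  # 0.75 (plain accuracy would be 0.9)
```

Here plain accuracy is flattered by the majority class, while the balanced score exposes that half of the rare "cut" frames were missed.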
Affiliation(s)
- Arnaud Huaulmé
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France.
- Kanako Harada
- Department of Mechanical Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Bogyu Park
- VisionAI hutom, Seoul, Republic of Korea
- Yonghao Long
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong
- Qi Dou
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong
- Ren Hongliang
- National University of Singapore, Singapore; The Chinese University of Hong Kong, Hong Kong
- Hiroki Matsuzaki
- National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Yuto Ishikawa
- National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Yuriko Harai
- National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Mamoru Mitsuishi
- Department of Mechanical Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Pierre Jannin
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France.
2
Nyangoh Timoh K, Huaulme A, Cleary K, Zaheer MA, Lavoué V, Donoho D, Jannin P. A systematic review of annotation for surgical process model analysis in minimally invasive surgery based on video. Surg Endosc 2023. [PMID: 37157035] [DOI: 10.1007/s00464-023-10041-w]
Abstract
BACKGROUND Annotated data are foundational to applications of supervised machine learning. However, there seems to be a lack of common language in the field of surgical data science. The aim of this study is to review the annotation process and semantics used in the creation of surgical process models (SPMs) for minimally invasive surgery videos. METHODS For this systematic review, we reviewed articles indexed in the MEDLINE database from January 2000 until March 2022. We selected articles using surgical video annotations to describe a surgical process model in the field of minimally invasive surgery. We excluded studies focusing only on instrument detection or recognition of anatomical areas. The risk of bias was evaluated with the Newcastle-Ottawa Quality Assessment tool. Data from the studies were presented in a table using the SPIDER tool. RESULTS Of the 2806 articles identified, 34 were selected for review. Twenty-two were in the field of digestive surgery, six in ophthalmologic surgery, one in neurosurgery, three in gynecologic surgery, and two in mixed fields. Thirty-one studies (88.2%) were dedicated to phase, step, or action recognition and mainly relied on a very simple formalization (29, 85.2%). Clinical information was lacking in studies using available public data sets. The annotation process for surgical process models was poorly described, and descriptions of the surgical procedures varied widely between studies. CONCLUSION Surgical video annotation lacks a rigorous and reproducible framework. This leads to difficulties in sharing videos between institutions and hospitals because of the different languages used. There is a need to develop and use a common ontology to improve libraries of annotated surgical videos.
Affiliation(s)
- Krystel Nyangoh Timoh
- Department of Gynecology and Obstetrics and Human Reproduction, CHU Rennes, Rennes, France.
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France.
- Laboratoire d'Anatomie et d'Organogenèse, Faculté de Médecine, Centre Hospitalier Universitaire de Rennes, 2 Avenue du Professeur Léon Bernard, 35043, Rennes Cedex, France.
- Department of Obstetrics and Gynecology, Rennes Hospital, Rennes, France.
- Arnaud Huaulme
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France
- Kevin Cleary
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, 20010, USA
- Myra A Zaheer
- George Washington University School of Medicine and Health Sciences, Washington, DC, USA
- Vincent Lavoué
- Department of Gynecology and Obstetrics and Human Reproduction, CHU Rennes, Rennes, France
- Dan Donoho
- Division of Neurosurgery, Center for Neuroscience, Children's National Hospital, Washington, DC, 20010, USA
- Pierre Jannin
- INSERM, LTSI - UMR 1099, University Rennes 1, Rennes, France
3
Huaulmé A, Sarikaya D, Le Mut K, Despinoy F, Long Y, Dou Q, Chng CB, Lin W, Kondo S, Bravo-Sánchez L, Arbeláez P, Reiter W, Mitsuishi M, Harada K, Jannin P. MIcro-surgical anastomose workflow recognition challenge report. Comput Methods Programs Biomed 2021; 212:106452. [PMID: 34688174] [DOI: 10.1016/j.cmpb.2021.106452]
Abstract
BACKGROUND AND OBJECTIVE Automatic surgical workflow recognition is an essential step in developing context-aware computer-assisted surgical systems. Video recordings of surgeries are becoming widely accessible, as the operative field view is captured during laparoscopic surgeries. Head- and ceiling-mounted cameras are also increasingly being used to record videos in open surgeries. This makes video a common choice for surgical workflow recognition. Additional modalities, such as kinematic data captured during robot-assisted surgeries, could also improve workflow recognition. This paper presents the design and results of the MIcro-Surgical Anastomose Workflow recognition on training sessions (MISAW) challenge, whose objective was to develop workflow recognition models based on kinematic data and/or videos. METHODS The MISAW challenge provided a data set of 27 sequences of micro-surgical anastomosis on artificial blood vessels. This data set was composed of videos, kinematics, and workflow annotations. The latter described the sequences at three granularity levels: phase, step, and activity. Four tasks were proposed to the participants: three were related to the recognition of surgical workflow at one of the three granularity levels, while the last addressed the recognition of all granularity levels in the same model. We used the average application-dependent balanced accuracy (AD-Accuracy) as the evaluation metric; it takes unbalanced classes into account and is more clinically relevant than a frame-by-frame score. RESULTS Six teams participated in at least one task. All teams employed deep learning models, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), or a combination of both. The best models achieved accuracy above 95%, 80%, 60%, and 75% for recognition of phases, steps, activities, and multi-granularity, respectively. RNN-based models outperformed CNN-based ones, and models dedicated to a single granularity outperformed the multi-granularity model, except for activity recognition. CONCLUSION For high levels of granularity, the best models had a recognition rate that may be sufficient for applications such as predicting the remaining surgical time. For activities, however, the recognition rate was still too low for clinical applications. The MISAW data set is publicly available at http://www.synapse.org/MISAW to encourage further research in surgical workflow recognition.
Affiliation(s)
- Arnaud Huaulmé
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France.
- Duygu Sarikaya
- Gazi University, Faculty of Engineering, Department of Computer Engineering, Ankara, Turkey
- Kévin Le Mut
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France
- Yonghao Long
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, China; T Stone Robotics Institute, The Chinese University of Hong Kong, China
- Qi Dou
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, China; T Stone Robotics Institute, The Chinese University of Hong Kong, China
- Chin-Boon Chng
- National University of Singapore (NUS), Singapore; Southern University of Science and Technology (SUSTech), Shenzhen, China
- Wenjun Lin
- National University of Singapore (NUS), Singapore; Southern University of Science and Technology (SUSTech), Shenzhen, China
- Laura Bravo-Sánchez
- Center for Research and Formation in Artificial Intelligence, Department of Biomedical Engineering, Universidad de los Andes, Bogotá, Colombia
- Pablo Arbeláez
- Center for Research and Formation in Artificial Intelligence, Department of Biomedical Engineering, Universidad de los Andes, Bogotá, Colombia
- Mamoru Mitsuishi
- Department of Mechanical Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Kanako Harada
- Department of Mechanical Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Pierre Jannin
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France.
4
Huaulmé A, Jannin P, Reche F, Faucheron JL, Moreau-Gaudry A, Voros S. Offline identification of surgical deviations in laparoscopic rectopexy. Artif Intell Med 2020; 104:101837. [PMID: 32499005] [DOI: 10.1016/j.artmed.2020.101837]
Abstract
OBJECTIVE According to a meta-analysis of 7 studies, the median rate of patients with at least one adverse event during surgery is 14.4%, and a third of those adverse events were preventable. The occurrence of adverse events forces surgeons to implement corrective strategies and, thus, deviate from the standard surgical process. The automatic identification of adverse events is therefore a major challenge for patient safety. In this paper, we propose a method for identifying such deviations. We focus on identifying surgeons' deviations from standard surgical processes due to surgical events rather than anatomic specificities. This is particularly challenging, given the high variability of typical surgical procedure workflows. METHODS We introduce a new approach designed to automatically detect and distinguish surgical process deviations based on multi-dimensional non-linear temporal scaling with a hidden semi-Markov model, using manual annotation of surgical processes. The approach was then evaluated using cross-validation. RESULTS The best results achieved over 90% accuracy. Recall and precision for event deviations, i.e., those related to adverse events, were below 80% and 40%, respectively. To understand these results, we provide a detailed analysis of the incorrectly detected observations. CONCLUSION Multi-dimensional non-linear temporal scaling with a hidden semi-Markov model provides promising results for detecting deviations. Our error analysis of the incorrectly detected observations offers several leads for further improving our method. SIGNIFICANCE Our method demonstrates the feasibility of automatically detecting surgical deviations, which could be used both for skill analysis and for developing situation-awareness-based computer-assisted surgical systems.
5
Huaulmé A, Despinoy F, Perez SAH, Harada K, Mitsuishi M, Jannin P. Automatic annotation of surgical activities using virtual reality environments. Int J Comput Assist Radiol Surg 2019; 14:1663-1671. [PMID: 31177422] [DOI: 10.1007/s11548-019-02008-x]
Abstract
PURPOSE Annotation of surgical activities is becoming increasingly important for many recent applications, such as surgical workflow analysis, surgical situation awareness, and the design of the operating room of the future, especially for training machine learning methods to develop intelligent assistance. Currently, annotation is mostly performed by observers with a medical background and is incredibly costly and time-consuming, creating a major bottleneck for the above-mentioned technologies. In this paper, we propose a way to eliminate, or at least limit, human intervention in the annotation process. METHODS Meaningful information about interactions between objects is inherently available in virtual reality environments. We propose a strategy to automatically convert this information into annotations, producing individual surgical process models as output. VALIDATION We implemented our approach in a peg-transfer task simulator and compared it to manual annotations. To assess the impact of our contribution, we studied both intra- and inter-observer variability. RESULTS AND CONCLUSION On average, manual annotation took more than 12 min per 1 min of video to achieve low-level physical activity annotation, whereas automatic annotation was achieved in less than a second for the same video period. We also demonstrated that manual annotation introduced mistakes as well as intra- and inter-observer variability, which our method suppresses thanks to its high precision and reproducibility.
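As a minimal sketch of the idea (the simulator's actual event format and activity vocabulary are not given in the abstract, so the event tuples and the derived "hold" activity below are hypothetical), interaction events logged by a virtual environment can be paired into annotated activity intervals without any human observer:

```python
# Hypothetical event log from a VR simulator: (time_s, tool, action, target).
events = [
    (0.0, "left_grasper", "touch", "peg_1"),
    (0.4, "left_grasper", "close", "block_A"),
    (2.1, "left_grasper", "open", "block_A"),
]

def events_to_activities(events):
    # Pair each tool's "close" with the next "open" on the same tool
    # to emit an annotated activity interval (start, end, tool, verb, target).
    open_grasp = {}
    activities = []
    for t, tool, action, target in events:
        if action == "close":
            open_grasp[tool] = (t, target)
        elif action == "open" and tool in open_grasp:
            start, grasped = open_grasp.pop(tool)
            activities.append((start, t, tool, "hold", grasped))
    return activities

print(events_to_activities(events))
# [(0.4, 2.1, 'left_grasper', 'hold', 'block_A')]
```

Because the intervals come straight from simulator state, they are exactly reproducible, which is the property the abstract contrasts with observer variability.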
Affiliation(s)
- Arnaud Huaulmé
- INSERM, LTSI - UMR 1099, Univ Rennes, 35000, Rennes, France.
- Saul Alexis Heredia Perez
- Department of Mechanical Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Kanako Harada
- Department of Mechanical Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Mamoru Mitsuishi
- Department of Mechanical Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Pierre Jannin
- INSERM, LTSI - UMR 1099, Univ Rennes, 35000, Rennes, France
6
Forestier G, Riffaud L, Petitjean F, Henaux PL, Jannin P. Surgical skills: Can learning curves be computed from recordings of surgical activities? Int J Comput Assist Radiol Surg 2018; 13:629-636. [PMID: 29502229] [DOI: 10.1007/s11548-018-1713-y]
Abstract
PURPOSE Surgery is one of the riskiest and most important medical acts performed today. The need to improve patient outcomes and surgeon training, and to reduce the costs of surgery, has motivated equipping operating rooms with sensors that record surgical interventions. The richness and complexity of the data collected call for new methods to support computer-assisted surgery. The aim of this paper is to support the monitoring of junior surgeons learning their surgical skill sets. METHODS Our method is fully automatic and takes as input a series of surgical interventions, each represented by a low-level recording of all activities performed by the surgeon during the intervention (e.g., cut the skin with a scalpel). It produces a curve describing the progressive standardization of a junior surgeon's behavior. Because junior surgeons receive constant feedback from senior surgeons during surgery, these curves can be directly interpreted as learning curves. RESULTS Our method was assessed on the behavior of a junior surgeon in anterior cervical discectomy and fusion surgery over his first three years after residency. The results revealed the ability of the method to accurately represent the evolution of surgical skill. We also showed that learning curves can be computed per phase, allowing a finer evaluation of skill progression. CONCLUSION Preliminary results suggest that our approach constitutes a useful addition to surgical training monitoring.
Affiliation(s)
- Germain Forestier
- IRIMAS, University of Haute-Alsace, Mulhouse, France.
- Faculty of Information Technology, Monash University, Melbourne, Australia.
- Laurent Riffaud
- Department of Neurosurgery, Univ. Hospital, Univ Rennes, Inserm, LTSI (Laboratoire Traitement du Signal et de l'Image) - UMR_S 1099, 35000, Rennes, France
- François Petitjean
- Faculty of Information Technology, Monash University, Melbourne, Australia
- Pierre-Louis Henaux
- Department of Neurosurgery, Univ. Hospital, Univ Rennes, Inserm, LTSI (Laboratoire Traitement du Signal et de l'Image) - UMR_S 1099, 35000, Rennes, France
- Pierre Jannin
- Univ Rennes, Inserm, LTSI (Laboratoire Traitement du Signal et de l'Image) - UMR_S 1099, 35000, Rennes, France
7
Huaulmé A, Voros S, Riffaud L, Forestier G, Moreau-Gaudry A, Jannin P. Distinguishing surgical behavior by sequential pattern discovery. J Biomed Inform 2017; 67:34-41. [PMID: 28179119] [DOI: 10.1016/j.jbi.2017.02.001]
Abstract
OBJECTIVE Each surgical procedure is unique due to the particularities of both the patient and the surgeon. In this study, we propose a new approach to distinguish surgical behaviors between surgical sites, levels of expertise, and individual surgeons using a pattern discovery method. METHODS The developed approach distinguishes surgical behaviors based on the shared longest frequent sequential patterns between surgical process models. To allow clustering, we propose a new metric called SLFSP. The approach is validated by comparison with a clustering method that uses Dynamic Time Warping as a metric to characterize the similarity between surgical process models. RESULTS Our method outperformed the existing approach. It made a perfect distinction between surgical sites (accuracy of 100%) and reached accuracies above 90% and 85% for distinguishing levels of expertise and individual surgeons, respectively. CONCLUSION Clustering based on shared longest frequent sequential patterns outperformed the previous approach based on time analysis. SIGNIFICANCE The proposed method shows the feasibility of comparing surgical process models not only by their duration but also by their structure of activities. Furthermore, patterns may reveal risky behaviors, which could be valuable information in surgical training to prevent adverse events.
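The SLFSP metric itself is not defined in the abstract. As an illustrative stand-in for structure-based (rather than duration-based) comparison, a similarity between two activity sequences can be built on their longest common subsequence; the activity vocabulary below is invented:

```python
def lcs_len(a, b):
    # Dynamic-programming length of the longest common subsequence.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def seq_similarity(a, b):
    # Normalized shared-subsequence similarity in [0, 1]:
    # 1.0 for identical sequences, lower as structure diverges.
    return lcs_len(a, b) / max(len(a), len(b))

s1 = ["grasp", "cut", "coagulate", "release"]
s2 = ["grasp", "coagulate", "release"]
print(seq_similarity(s1, s2))  # 0.75
```

Such a pairwise similarity matrix could then feed any standard clustering algorithm, which is the role SLFSP plays in the study.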
8
Glaser B, Schellenberg T, Koch L, Hofer M, Modemann S, Dubach P, Neumuth T. Design and evaluation of an interactive training system for scrub nurses. Int J Comput Assist Radiol Surg 2016; 11:1527-36. [PMID: 26872806] [DOI: 10.1007/s11548-016-1356-9]
Abstract
OBJECTIVE The current trend toward increasingly integrated technological support systems and the rise of streamlined processes in the OR have led to a growing demand for personnel with higher levels of training. Although simulation systems are widely used and accepted in surgical training, they are practically non-existent for perioperative nursing, especially scrub nursing. This paper describes and evaluates an interactive OR environment simulation to help train scrub nurses. METHODS A system comprising multiple computers and monitors, including an interactive table and a touchscreen combined with a client-server software solution, was designed to simulate a scrub nurse's workplace. The resulting demonstrator was evaluated under laboratory conditions in a multicenter interview study involving three participating ear, nose, and throat (ENT) departments in Germany and Switzerland. RESULTS The participant group of 15 scrub nurses had an average of 12.8 years of hands-on experience in the OR. A series of 22 questions was used to evaluate various aspects of the demonstrator system and its suitability for training novices. DISCUSSION The system received very positive feedback. The participants stated that familiarization with instrument names and learning the instrument table setup were the two most important technical topics for beginners. They found the system useful for acquiring these skills as well as certain non-technical aspects. CONCLUSIONS Interactive training through simulation is a new approach for preparing novice scrub nurses for the challenges at the instrument table in the OR. It can also improve the lifelong training of perioperative personnel. The proposed system, currently unique of its kind, can be used to train both technical and non-technical skills and, therefore, contributes to patient safety. Moreover, it is not dependent on a specific type of surgical intervention or medical discipline.
Affiliation(s)
- Bernhard Glaser
- Innovation Center Computer Assisted Surgery (ICCAS), Faculty of Medicine, University of Leipzig, Semmelweisstr. 14, 04103, Leipzig, Germany.
- Tobias Schellenberg
- Innovation Center Computer Assisted Surgery (ICCAS), Faculty of Medicine, University of Leipzig, Semmelweisstr. 14, 04103, Leipzig, Germany
- Lucas Koch
- Innovation Center Computer Assisted Surgery (ICCAS), Faculty of Medicine, University of Leipzig, Semmelweisstr. 14, 04103, Leipzig, Germany
- Mathias Hofer
- ENT Department, Leipzig University Hospital, Leipzig, Germany
- Patrick Dubach
- Innovation Center Computer Assisted Surgery (ICCAS), Faculty of Medicine, University of Leipzig, Semmelweisstr. 14, 04103, Leipzig, Germany
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital of Bern, Bern, Switzerland
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), Faculty of Medicine, University of Leipzig, Semmelweisstr. 14, 04103, Leipzig, Germany
9
Uemura M, Jannin P, Yamashita M, Tomikawa M, Akahoshi T, Obata S, Souzaki R, Ieiri S, Hashizume M. Procedural surgical skill assessment in laparoscopic training environments. Int J Comput Assist Radiol Surg 2015; 11:543-52. [PMID: 26253582] [DOI: 10.1007/s11548-015-1274-2]
Abstract
PURPOSE This study aimed to identify detailed differences in laparoscopic surgical processes between expert and novice surgeons in a training environment and to demonstrate that surgical process modeling can be used for such detailed analysis. METHODS Eleven expert surgeons, each of whom had performed [Formula: see text] laparoscopic procedures, were compared with 10 young surgeons, each of whom had performed [Formula: see text] laparoscopic procedures, and five medical students. Each examinee performed a specific skill assessment task. During the tasks, instrument motion was monitored using a video capture system. From the video, the corresponding workflow was recorded by labeling the surgeons' activities according to a predefined terminology. Activities represented manual work steps performed during the task, described by a combination of a verb (representing the action), a tool, and the involved structure. The results were described as the number of occurrences (times), average duration (seconds), total duration (seconds), minimal duration (seconds), maximal duration (seconds), and occupancy percentage (%). RESULTS The terminology for describing the processes of this task included 10 actions, six tools, four structures, and three events for each hand. There were 63 combinations of different possible activities; significant differences in 12 activities were observed between the expert and novice groups (young surgeons and medical students). With the left hand, the expert group performed the task with fewer occurrences and shorter durations than the novice group. CONCLUSIONS We identified differences in surgical process between experts and novices in laparoscopic surgical simulation. Our proposed method would be useful for education and training in laparoscopic surgery.
Affiliation(s)
- Munenori Uemura
- Department of Advanced Medical Initiatives, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan.
- Pierre Jannin
- INSERM, U1099, 35000, Rennes, France
- LTSI, Université de Rennes 1, 35000, Rennes, France
- Makoto Yamashita
- Department of Advanced Medical Initiatives, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Morimasa Tomikawa
- Department of Advanced Medicine and Innovative Technology, Kyushu University Hospital, Fukuoka, Japan
- Tomohiko Akahoshi
- Department of Advanced Medical Initiatives, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Satoshi Obata
- Department of Advanced Medicine and Innovative Technology, Kyushu University Hospital, Fukuoka, Japan
- Ryota Souzaki
- Department of Advanced Medicine and Innovative Technology, Kyushu University Hospital, Fukuoka, Japan
- Satoshi Ieiri
- Department of Advanced Medicine and Innovative Technology, Kyushu University Hospital, Fukuoka, Japan
- Makoto Hashizume
- Department of Advanced Medical Initiatives, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Department of Advanced Medicine and Innovative Technology, Kyushu University Hospital, Fukuoka, Japan
10
Schumann S, Bühligen U, Neumuth T. Outcome quality assessment by surgical process compliance measures in laparoscopic surgery. Artif Intell Med 2015; 63:85-90. [PMID: 25739791] [DOI: 10.1016/j.artmed.2014.10.008]
Abstract
OBJECTIVE The effective and efficient assessment, management, and evolution of surgical processes are intrinsic to excellent patient care. Hence, in addition to economic interests, the quality of the outcome is of great importance. Process benchmarking examines the compliance of an intraoperative surgical process with another process that is considered best practice. The objective of this work is to assess the relationship between the course and the outcome of surgical processes. MATERIALS AND METHODS By assessing 450 skill practices on rapid prototyping models in minimally invasive surgery training, we extracted descriptions of surgical processes and examined the hypothesis that a significant relationship exists between the course of a surgical process and the quality of its outcome. RESULTS The results showed a significant correlation, with Pearson correlation coefficients > 0.05, between the quality of the process outcome and process compliance for simple and complex suturing tasks. CONCLUSIONS We conclude that high process compliance supports good-quality outcomes and, therefore, excellent patient care. We also showed that deviation from best training processes led to decreased outcome quality. This is relevant for identifying requirements for surgical processes, for generating feedback for the surgeon with regard to human factors, and for inducing changes in the workflow to improve outcome quality.
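The compliance-outcome relationship above is quantified with Pearson correlation. As a minimal sketch of that computation (the compliance and outcome scores below are made up for illustration, not taken from the study):

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two paired samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-trial process-compliance scores and outcome-quality ratings.
compliance = [0.9, 0.8, 0.7, 0.95, 0.6]
outcome = [8.5, 7.9, 7.0, 9.1, 6.2]
print(round(pearson(compliance, outcome), 3))
```

A coefficient near +1 on such data would support the paper's conclusion that higher process compliance accompanies better outcome quality.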
Affiliation(s)
- Sandra Schumann
- Innovation Center Computer Assisted Surgery, Universität Leipzig, Semmelweisstr. 14, D-04103 Leipzig, Germany
- Ulf Bühligen
- Department of Pediatric Surgery, University Medical Center, Liebigstr. 20a, D-04103 Leipzig, Germany
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery, Universität Leipzig, Semmelweisstr. 14, D-04103 Leipzig, Germany.
11
Unger M, Chalopin C, Neumuth T. Vision-based online recognition of surgical activities. Int J Comput Assist Radiol Surg 2014; 9:979-86. [PMID: 24664268 DOI: 10.1007/s11548-014-0994-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Received: 12/11/2013] [Accepted: 03/07/2014] [Indexed: 10/25/2022]
Abstract
PURPOSE Surgical processes are complex entities characterized by expressive models and data. Recognizable activities define each surgical process. The principal limitation of current vision-based recognition methods is inefficiency due to the large amount of information captured during a surgical procedure. To overcome this technical challenge, we introduce a surgical gesture recognition system using temperature-based recognition. METHODS An infrared thermal camera was combined with a hierarchical temporal memory and used during surgical procedures. The recordings were analyzed for recognition of surgical activities. The acquired image sequences included hand temperatures; these data were analyzed to perform gesture extraction and recognition based on heat differences between the surgeon's warm hands and the colder background of the environment. RESULTS The system was validated by simulating a functional endoscopic sinus surgery, a common type of otolaryngologic surgery. The thermal camera was directed toward the hands of the surgeon while handling different instruments. The system achieved an online recognition accuracy of 96% with high precision and recall rates of approximately 60%. CONCLUSION Vision-based recognition methods are the current best-practice approach for monitoring surgical processes. Problems of information overflow and extended recognition times in vision-based approaches were overcome by changing the spectral range to infrared. This change enables the real-time recognition of surgical activities and provides online monitoring information to surgical assistance systems and workflow management systems.
Affiliation(s)
- Michael Unger
- Innovation Center Computer Assisted Surgery, University of Leipzig, Semmelweisstr. 14, Leipzig, 04103, Germany
- Claire Chalopin
- Innovation Center Computer Assisted Surgery, University of Leipzig, Semmelweisstr. 14, Leipzig, 04103, Germany
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery, University of Leipzig, Semmelweisstr. 14, Leipzig, 04103, Germany
12
Neumuth T, Wiedemann R, Foja C, Meier P, Schlomberg J, Neumuth D, Wiedemann P. Identification of surgeon-individual treatment profiles to support the provision of an optimum treatment service for cataract patients. J Ocul Biol Dis Infor 2011; 3:73-83. [PMID: 22500196 DOI: 10.1007/s12177-011-9058-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Received: 12/21/2010] [Accepted: 03/21/2011] [Indexed: 11/30/2022]
Abstract
One objective of ophthalmological departments is the optimization of patient treatment services. One strategy for optimization is the identification of individual potential for advanced training of surgeons based on their daily working results. The objective of this feasibility study was the presentation and evaluation of a strategy for the computation of surgeon-individual treatment profiles (SiTPs). We observed experienced surgeons during their standard daily performance of cataract procedures in the Ophthalmological Department of the University Medical Center Leipzig, Germany. One hundred five cases of cataract procedures were recorded as Surgical Process Models (SPMs) at a resolution detailed to the second. The procedures were performed by three different surgeons during their daily work. Subsequently, SiTPs were computed and analyzed from the SPMs as statistical 'mean' treatment strategies for each of the surgeons. The feasibility study demonstrated that it is possible to identify differences in surgeon-individual treatment profiles beyond the resolution of cut-suture times. Surgeon-individual workflows, activity frequencies, and average performance durations of surgical activities during cataract procedures were analyzed. Highly significant (p < 0.001) workflow differences were found between the treatment profiles of the three surgeons. In conclusion, the generation of SiTPs is a convenient strategy for identifying surgeon-individual training potential in cataract surgery. Concrete recommendations for further education can be derived from the profiles.