1. Tronchot A, Casy T, Vallee N, Common H, Thomazeau H, Jannin P, Huaulmé A. Virtual reality simulation training improve diagnostic knee arthroscopy and meniscectomy skills: a prospective transfer validity study. J Exp Orthop 2023; 10:138. PMID: 38095746; PMCID: PMC10721743; DOI: 10.1186/s40634-023-00688-8.
Abstract
PURPOSE Limited data exist on the actual transfer of skills learned on a virtual reality (VR) simulator for arthroscopy training, because studies have mainly focused on VR performance improvement rather than on transfer to the real world (transfer validity). The purpose of this single-blinded, controlled trial was to objectively investigate transfer validity in the context of initial knee arthroscopy training. METHODS For this study, 36 junior orthopaedic surgery residents (postgraduate years one and two) without prior experience in arthroscopic surgery were enrolled to receive either standard knee arthroscopy training (NON-VR group) or standard training plus training on a hybrid virtual reality knee arthroscopy simulator (1 h/month) (VR group). At inclusion, all participants completed a questionnaire on their current arthroscopic technical skills. After 6 months of training, both groups performed three exercises that were evaluated independently by two blinded trainers: i) arthroscopic partial meniscectomy on a bench-top knee simulator; ii) supervised diagnostic knee arthroscopy on a cadaveric knee; and iii) supervised partial knee meniscectomy on a cadaveric knee. Training level was determined with the Arthroscopic Surgical Skill Evaluation Tool (ASSET) score. RESULTS Overall, performance (ASSET scores) was better in the VR group than in the NON-VR group (difference in global scores: p < 0.001; bench-top meniscectomy scores: p = 0.03; diagnostic knee arthroscopy on a cadaveric knee scores: p = 0.04; partial meniscectomy on a cadaveric knee scores: p = 0.02). Subgroup analysis by postgraduate year showed that the year-one NON-VR subgroup performed worse than the other subgroups, regardless of the exercise. CONCLUSION This study showed the transferability of the technical skills acquired by novice residents on a hybrid virtual reality simulator to bench-top and cadaveric models. Surgical skills acquired with a VR arthroscopy simulator might safely improve arthroscopy competencies in the operating room, while also helping to standardise resident training and follow residents' progress. LEVEL OF EVIDENCE: 2
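The abstract reports between-group differences in ASSET scores only as p-values. As a minimal sketch of how such a two-group comparison can be run, the snippet below applies a Mann-Whitney U test to made-up scores; the choice of test, the group sizes, and all score values are assumptions for illustration, not taken from the study.

```python
# Hypothetical two-group comparison of ASSET scores (illustration only).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Fabricated scores for illustration only; these are NOT study data.
vr_scores = rng.normal(loc=28, scale=4, size=18)      # VR-trained group
non_vr_scores = rng.normal(loc=22, scale=4, size=18)  # standard-training group

stat, p_value = mannwhitneyu(vr_scores, non_vr_scores, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```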
Affiliations
- Alexandre Tronchot: University Rennes, CHU Rennes, Inserm, LTSI, Equipe MediCIS - UMR 1099, 35000 Rennes, France; Orthopaedics and Trauma Department, Rennes University Hospital, 2 Rue Henri Le Guilloux, 35000 Rennes, France
- Tiphaine Casy: University Rennes, CHU Rennes, Inserm, LTSI, Equipe MediCIS - UMR 1099, 35000 Rennes, France
- Nicolas Vallee: University Rennes, CHU Rennes, Inserm, LTSI, Equipe MediCIS - UMR 1099, 35000 Rennes, France; Orthopaedics and Trauma Department, Rennes University Hospital, 2 Rue Henri Le Guilloux, 35000 Rennes, France
- Harold Common: Orthopaedics and Trauma Department, Rennes University Hospital, 2 Rue Henri Le Guilloux, 35000 Rennes, France
- Hervé Thomazeau: University Rennes, CHU Rennes, Inserm, LTSI, Equipe MediCIS - UMR 1099, 35000 Rennes, France; Orthopaedics and Trauma Department, Rennes University Hospital, 2 Rue Henri Le Guilloux, 35000 Rennes, France
- Pierre Jannin: University Rennes, CHU Rennes, Inserm, LTSI, Equipe MediCIS - UMR 1099, 35000 Rennes, France
- Arnaud Huaulmé: University Rennes, CHU Rennes, Inserm, LTSI, Equipe MediCIS - UMR 1099, 35000 Rennes, France
2. Galuret S, Vallée N, Tronchot A, Thomazeau H, Jannin P, Huaulmé A. Gaze behavior is related to objective technical skills assessment during virtual reality simulator-based surgical training: a proof of concept. Int J Comput Assist Radiol Surg 2023; 18:1697-1705. PMID: 37286642; DOI: 10.1007/s11548-023-02961-8.
Abstract
PURPOSE Simulation-based training allows surgical skills to be learned safely. Most virtual reality-based surgical simulators address technical skills without considering non-technical skills, such as gaze use. In this study, we investigated surgeons' visual behavior during virtual reality-based surgical training in which visual guidance is provided. Our hypothesis was that the gaze distribution in the environment is correlated with the simulator's technical skills assessment. METHODS We recorded 25 surgical training sessions on an arthroscopic simulator. Trainees were equipped with a head-mounted eye-tracking device. A U-Net was trained on two sessions to segment three simulator-specific areas of interest (AoI) and the background, to quantify gaze distribution. We tested whether the percentage of gazes in those areas was correlated with the simulator's scores. RESULTS The neural network was able to segment all AoI with a mean Intersection over Union above 94% for each area. The gaze percentage in the AoI differed among trainees. Despite several sources of data loss, we found significant correlations between gaze position and the simulator scores. For instance, trainees obtained better procedural scores when their gaze focused on the virtual assistance (Spearman correlation test, N = 7, r = 0.800, p = 0.031). CONCLUSION Our findings suggest that visual behavior should be quantified for assessing surgical expertise in simulation-based training environments, especially when visual guidance is provided. Ultimately, visual behavior could be used to quantitatively assess surgeons' learning curves and expertise while training on VR simulators, in a way that complements existing metrics.
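A minimal sketch of the analysis this abstract describes, once gaze samples have been assigned to segmented areas of interest: compute each trainee's gaze fraction per AoI, then correlate one of those fractions with the simulator score using Spearman's test. All names and values below are illustrative assumptions, not study data.

```python
# Correlating gaze-in-AoI fractions with simulator scores (toy data).
import numpy as np
from scipy.stats import spearmanr

# One value per trainee: fraction of gaze samples falling in a given AoI
# (e.g., the on-screen virtual assistance). Values are invented.
gaze_in_assistance = np.array([0.12, 0.30, 0.22, 0.05, 0.18, 0.27, 0.33])
procedural_scores = np.array([54, 81, 70, 40, 66, 77, 85])  # toy scores

r, p = spearmanr(gaze_in_assistance, procedural_scores)
print(f"Spearman r = {r:.3f}, p = {p:.3f} (N = {len(procedural_scores)})")
```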
Affiliations
- Soline Galuret: LTSI - UMR 1099, Univ. Rennes, Inserm, 35000 Rennes, France
- Nicolas Vallée: LTSI - UMR 1099, Univ. Rennes, Inserm, 35000 Rennes, France; Orthopedics and Trauma Department, Rennes University Hospital, 35000 Rennes, France
- Alexandre Tronchot: LTSI - UMR 1099, Univ. Rennes, Inserm, 35000 Rennes, France; Orthopedics and Trauma Department, Rennes University Hospital, 35000 Rennes, France
- Hervé Thomazeau: LTSI - UMR 1099, Univ. Rennes, Inserm, 35000 Rennes, France; Orthopedics and Trauma Department, Rennes University Hospital, 35000 Rennes, France
- Pierre Jannin: LTSI - UMR 1099, Univ. Rennes, Inserm, 35000 Rennes, France
- Arnaud Huaulmé: LTSI - UMR 1099, Univ. Rennes, Inserm, 35000 Rennes, France
3. Huaulmé A, Harada K, Nguyen QM, Park B, Hong S, Choi MK, Peven M, Li Y, Long Y, Dou Q, Kumar S, Lalithkumar S, Hongliang R, Matsuzaki H, Ishikawa Y, Harai Y, Kondo S, Mitsuishi M, Jannin P. PEg TRAnsfer Workflow recognition challenge report: Do multimodal data improve recognition? Comput Methods Programs Biomed 2023; 236:107561. PMID: 37119774; DOI: 10.1016/j.cmpb.2023.107561.
Abstract
BACKGROUND AND OBJECTIVE To be context-aware, computer-assisted surgical systems require accurate, real-time automatic surgical workflow recognition. In the past several years, surgical video has been the most commonly used modality for surgical workflow recognition. With the democratization of robot-assisted surgery, however, new modalities, such as kinematics, are now accessible. Some previous methods use these new modalities as input for their models, but their added value has rarely been studied. This paper presents the design and results of the "PEg TRAnsfer Workflow recognition" (PETRAW) challenge, whose objective was to develop surgical workflow recognition methods based on one or more modalities and to study their added value. METHODS The PETRAW challenge included a data set of 150 peg transfer sequences performed on a virtual simulator. This data set included videos, kinematic data, semantic segmentation data, and annotations, which described the workflow at three levels of granularity: phase, step, and activity. Five tasks were proposed to the participants: three were related to recognition at all granularities simultaneously using a single modality, and two addressed recognition using multiple modalities. The mean application-dependent balanced accuracy (AD-Accuracy) was used as the evaluation metric because it takes class imbalance into account and is more clinically relevant than a frame-by-frame score. RESULTS Seven teams participated in at least one task, with four participating in every task. The best results were obtained by combining video and kinematic data (AD-Accuracy between 90% and 93% for the four teams that participated in all tasks). CONCLUSION The improvement of surgical workflow recognition methods using multiple modalities compared with unimodal methods was significant for all teams. However, the longer execution time required for video/kinematic-based methods (compared with kinematic-only methods) must be considered. Indeed, one must ask whether it is wise to increase computing time by 2000 to 20,000% only to increase accuracy by 3%. The PETRAW data set is publicly available at www.synapse.org/PETRAW to encourage further research in surgical workflow recognition.
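The AD-Accuracy metric named above is defined in the paper itself; as a rough approximation of its spirit, the sketch below averages class-balanced, frame-wise accuracy over granularity levels. The exact application-dependent weighting is not reproduced here, and the toy label sequences are assumptions.

```python
# Approximate flavor of a multi-granularity balanced accuracy (toy data).
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def mean_balanced_accuracy(references: dict, predictions: dict) -> float:
    """Average class-balanced frame-wise accuracy over granularity levels."""
    scores = [
        balanced_accuracy_score(references[level], predictions[level])
        for level in references  # e.g., "phase", "step", "activity"
    ]
    return float(np.mean(scores))

# Toy frame-wise label sequences (one label per video frame).
ref = {"phase": [0, 0, 1, 1, 1, 2, 2], "step": [0, 1, 1, 2, 2, 3, 3]}
pred = {"phase": [0, 0, 1, 2, 1, 2, 2], "step": [0, 1, 1, 2, 3, 3, 3]}
print(f"mean balanced accuracy = {mean_balanced_accuracy(ref, pred):.3f}")
```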
Affiliations
- Arnaud Huaulmé: Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France
- Kanako Harada: Department of Mechanical Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Bogyu Park: VisionAI hutom, Seoul, Republic of Korea
- Yonghao Long: Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong
- Qi Dou: Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong
- Ren Hongliang: National University of Singapore, Singapore; The Chinese University of Hong Kong, Hong Kong
- Hiroki Matsuzaki: National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Yuto Ishikawa: National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Yuriko Harai: National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Mamoru Mitsuishi: Department of Mechanical Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Pierre Jannin: Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France
4. Tronchot A, Berthelemy J, Thomazeau H, Huaulmé A, Walbron P, Sirveaux F, Jannin P. Validation of virtual reality arthroscopy simulator relevance in characterising experienced surgeons. Orthop Traumatol Surg Res 2021; 107:103079. PMID: 34597826; DOI: 10.1016/j.otsr.2021.103079.
Abstract
BACKGROUND Virtual reality (VR) simulation is particularly suitable for learning arthroscopy skills. Despite significant research, one drawback often outlined is the difficulty of distinguishing performance levels (construct validity) among experienced surgeons. It therefore seems worthwhile to search for new performance measurements based on probe trajectories instead of commonly used metrics. HYPOTHESIS It was hypothesized that greater experience in shoulder arthroscopy would be correlated with better performance on a VR shoulder arthroscopy simulator, and that experienced operators would share similar probe trajectories. MATERIALS & METHODS After answering standardized questionnaires, 104 trajectories from 52 surgeons divided into 2 cohorts (26 intermediates and 26 experts) were recorded on a shoulder arthroscopy simulator. The procedure analysed was "loose body removal" in a right shoulder joint. Ten metrics were computed on the trajectories, including procedure duration, overall path length, economy of motion, and smoothness. Additionally, Dynamic Time Warping (DTW) was computed on the trajectories for unsupervised hierarchical clustering of the surgeons. RESULTS Experts were significantly faster (median 70.9 s, interquartile range [56.4-86.3], vs. 116.1 s [82.8-154.2], p < 0.01), more fluid (4.6 × 10⁵ mm·s⁻³ [3.1 × 10⁵-7.2 × 10⁵] vs. 1.5 × 10⁶ mm·s⁻³ [2.6 × 10⁶-3.5 × 10⁶], p = 0.05), and more economical in their motion (19.3 mm² [9.1-25.9] vs. 33.8 mm² [14.8-50.5], p < 0.01), but there was no significant difference in path length (671.4 mm [503.8-846.1] vs. 694.6 mm [467.0-1090.1], p = 0.62). The DTW clustering differentiated two expertise-related groups of trajectories with performance similarities, the first including 48 expert trajectories and the second including 52 intermediate and 4 expert trajectories (sensitivity 92%, specificity 100%). Hierarchical clustering with DTW significantly distinguished expert from intermediate operators and found trajectory similarities among 24/26 experts. CONCLUSION This study demonstrated the construct validity of the VR shoulder arthroscopy simulator within groups of experienced surgeons. With new types of metrics based simply on the simulator's raw trajectories, it was possible to significantly distinguish levels of expertise. We demonstrated that clustering analysis with Dynamic Time Warping was able to reliably discriminate between expert and intermediate operators. CLINICAL RELEVANCE The results have implications for the future of arthroscopic surgical training and post-graduate accreditation programs using virtual reality simulation. LEVEL OF EVIDENCE III; prospective comparative study.
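A minimal sketch of the trajectory analysis this abstract describes: a plain DTW distance between probe trajectories feeding agglomerative hierarchical clustering. The trajectories are random stand-ins, and implementation details (Euclidean local cost, average linkage) are assumptions, not the authors' exact choices.

```python
# DTW distances between 3-D trajectories + hierarchical clustering (toy data).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two (T, 3) trajectories, Euclidean local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

rng = np.random.default_rng(0)
trajectories = [rng.standard_normal((50, 3)).cumsum(axis=0) for _ in range(8)]

n = len(trajectories)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(trajectories[i], trajectories[j])

Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")  # two expertise-related groups
print(labels)
```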
Affiliations
- Alexandre Tronchot: University Rennes, Inserm, LTSI - UMR 1099, 35000 Rennes, France; Orthopaedics and Trauma Department, Rennes University Hospital, 2 rue Henri Le Guilloux, 35000 Rennes, France
- Hervé Thomazeau: University Rennes, Inserm, LTSI - UMR 1099, 35000 Rennes, France; Orthopaedics and Trauma Department, Rennes University Hospital, 2 rue Henri Le Guilloux, 35000 Rennes, France
- Arnaud Huaulmé: University Rennes, Inserm, LTSI - UMR 1099, 35000 Rennes, France
- Paul Walbron: Orthopaedics Department, Nancy University Hospital, Centre Chirurgical Emile Gallé, 49 rue Hermite, 54000 Nancy, France
- François Sirveaux: Orthopaedics Department, Nancy University Hospital, Centre Chirurgical Emile Gallé, 49 rue Hermite, 54000 Nancy, France
- Pierre Jannin: University Rennes, Inserm, LTSI - UMR 1099, 35000 Rennes, France
5. Huaulmé A, Sarikaya D, Le Mut K, Despinoy F, Long Y, Dou Q, Chng CB, Lin W, Kondo S, Bravo-Sánchez L, Arbeláez P, Reiter W, Mitsuishi M, Harada K, Jannin P. MIcro-surgical anastomose workflow recognition challenge report. Comput Methods Programs Biomed 2021; 212:106452. PMID: 34688174; DOI: 10.1016/j.cmpb.2021.106452.
Abstract
BACKGROUND AND OBJECTIVE Automatic surgical workflow recognition is an essential step in developing context-aware computer-assisted surgical systems. Video recordings of surgeries are becoming widely accessible, as the operative field view is captured during laparoscopic surgeries. Head- and ceiling-mounted cameras are also increasingly being used to record videos in open surgeries. This makes video a common choice for surgical workflow recognition. Additional modalities, such as kinematic data captured during robot-assisted surgeries, could also improve workflow recognition. This paper presents the design and results of the MIcro-Surgical Anastomose Workflow recognition on training sessions (MISAW) challenge, whose objective was to develop workflow recognition models based on kinematic data and/or videos. METHODS The MISAW challenge provided a data set of 27 sequences of micro-surgical anastomosis on artificial blood vessels. This data set was composed of videos, kinematics, and workflow annotations. The latter described the sequences at three different granularity levels: phase, step, and activity. Four tasks were proposed to the participants: three of them were related to the recognition of surgical workflow at one of the three granularity levels, while the last one addressed the recognition of all granularity levels in the same model. We used the average application-dependent balanced accuracy (AD-Accuracy) as the evaluation metric, which takes unbalanced classes into account and is more clinically relevant than a frame-by-frame score. RESULTS Six teams participated in at least one task. All models employed deep learning, such as convolutional neural networks (CNN), recurrent neural networks (RNN), or a combination of both. The best models achieved accuracy above 95%, 80%, 60%, and 75%, respectively, for recognition of phases, steps, activities, and multi-granularity. The RNN-based models outperformed the CNN-based ones, and the dedicated single-granularity models outperformed the multi-granularity model, except for activity recognition. CONCLUSION For high granularity levels, the best models had a recognition rate that may be sufficient for applications such as prediction of remaining surgical time. For activities, however, the recognition rate was still too low for clinical applications. The MISAW data set is publicly available at http://www.synapse.org/MISAW to encourage further research in surgical workflow recognition.
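As a hedged illustration of the kind of multi-granularity recurrent model such challenge entries use, the sketch below shares one LSTM encoder across three classification heads (phase, step, activity). The architecture and all layer sizes are assumptions; no participant's actual model is reproduced.

```python
# Shared-encoder, multi-head recurrent workflow recognizer (illustrative).
import torch
import torch.nn as nn

class MultiGranularityLSTM(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_phases=3, n_steps=6, n_acts=20):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.phase_head = nn.Linear(hidden, n_phases)
        self.step_head = nn.Linear(hidden, n_steps)
        self.activity_head = nn.Linear(hidden, n_acts)

    def forward(self, x):            # x: (batch, time, in_dim) frame features
        h, _ = self.lstm(x)          # (batch, time, hidden)
        return self.phase_head(h), self.step_head(h), self.activity_head(h)

model = MultiGranularityLSTM()
feats = torch.randn(2, 100, 32)      # e.g., kinematics or video embeddings
phase_logits, step_logits, act_logits = model(feats)
print(phase_logits.shape)            # torch.Size([2, 100, 3])
```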
Affiliations
- Arnaud Huaulmé: Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France
- Duygu Sarikaya: Department of Computer Engineering, Faculty of Engineering, Gazi University, Ankara, Turkey
- Kévin Le Mut: Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France
- Yonghao Long: Department of Computer Science & Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, China
- Qi Dou: Department of Computer Science & Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, China
- Chin-Boon Chng: National University of Singapore (NUS), Singapore; Southern University of Science and Technology (SUSTech), Shenzhen, China
- Wenjun Lin: National University of Singapore (NUS), Singapore; Southern University of Science and Technology (SUSTech), Shenzhen, China
- Laura Bravo-Sánchez: Center for Research and Formation in Artificial Intelligence, Department of Biomedical Engineering, Universidad de los Andes, Bogotá, Colombia
- Pablo Arbeláez: Center for Research and Formation in Artificial Intelligence, Department of Biomedical Engineering, Universidad de los Andes, Bogotá, Colombia
- Mamoru Mitsuishi: Department of Mechanical Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Kanako Harada: Department of Mechanical Engineering, The University of Tokyo, Tokyo 113-8656, Japan
- Pierre Jannin: Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France
6. Huaulmé A, Despinoy F, Perez SAH, Harada K, Mitsuishi M, Jannin P. Automatic annotation of surgical activities using virtual reality environments. Int J Comput Assist Radiol Surg 2019; 14:1663-1671. PMID: 31177422; DOI: 10.1007/s11548-019-02008-x.
Abstract
PURPOSE Annotation of surgical activities is becoming increasingly important for many recent applications, such as surgical workflow analysis, surgical situation awareness, and the design of the operating room of the future, especially for training machine learning methods to develop intelligent assistance. Currently, annotation is mostly performed by observers with a medical background and is incredibly costly and time-consuming, creating a major bottleneck for the above-mentioned technologies. In this paper, we propose a way to eliminate, or at least limit, human intervention in the annotation process. METHODS Meaningful information about interactions between objects is inherently available in virtual reality environments. We propose a strategy to automatically convert this information into annotations in order to produce individual surgical process models as output. VALIDATION We implemented our approach in a peg-transfer task simulator and compared it to manual annotations. To assess the impact of our contribution, we studied both intra- and inter-observer variability. RESULTS AND CONCLUSION On average, manual annotation took more than 12 min per minute of video to achieve low-level physical activity annotation, whereas automatic annotation was achieved in less than a second for the same video period. We also demonstrated that manual annotation introduced mistakes as well as intra- and inter-observer variability, which our method is able to suppress thanks to its high precision and reproducibility.
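The core idea, converting interactions that the virtual environment already tracks into annotations without a human observer, can be sketched as below. The event record format and the <action, instrument, target> annotation structure are assumptions for illustration; the paper's actual data model may differ.

```python
# Turning simulator interaction events into activity annotations (illustrative).
from dataclasses import dataclass

@dataclass
class Event:
    t_start: float
    t_end: float
    instrument: str
    action: str
    target: str

def events_to_annotations(events):
    """Convert interaction events into timestamped activity annotations,
    ordered by start time."""
    return [
        (e.t_start, e.t_end, f"{e.action}({e.instrument}, {e.target})")
        for e in sorted(events, key=lambda e: e.t_start)
    ]

log = [  # hypothetical events emitted by a peg-transfer simulator
    Event(3.2, 4.9, "grasper_left", "grasp", "peg_block"),
    Event(0.0, 2.1, "grasper_right", "reach", "peg_block"),
]
for start, end, activity in events_to_annotations(log):
    print(f"{start:5.1f}-{end:5.1f} s  {activity}")
```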
Affiliations
- Arnaud Huaulmé: INSERM, LTSI - UMR 1099, Univ Rennes, 35000 Rennes, France
- Saul Alexis Heredia Perez: Department of Mechanical Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Kanako Harada: Department of Mechanical Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Mamoru Mitsuishi: Department of Mechanical Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Pierre Jannin: INSERM, LTSI - UMR 1099, Univ Rennes, 35000 Rennes, France
7. Kobayashi S, Cho B, Huaulmé A, Tatsugami K, Honda H, Jannin P, Hashizume M, Eto M. Assessment of surgical skills by using surgical navigation in robot-assisted partial nephrectomy. Int J Comput Assist Radiol Surg 2019; 14:1449-1459. PMID: 31119486; DOI: 10.1007/s11548-019-01980-8.
Abstract
PURPOSE To assess surgical skills in robot-assisted partial nephrectomy (RAPN) with and without surgical navigation (SN). METHODS We employed an SN system that synchronizes the real-time endoscopic image with a virtual reality three-dimensional (3D) model for RAPN, and we evaluated the skills of two expert surgeons with regard to the identification and dissection of the renal artery (non-SN group, n = 21 [first surgeon n = 9, second surgeon n = 12]; SN group, n = 32 [first surgeon n = 11, second surgeon n = 21]). We converted all movements of the robotic forceps during RAPN into a dedicated vocabulary. Using RAPN videos, we classified all movements of the robotic forceps into direct actions (defined as movements of the robotic forceps that directly affect tissues) and connected motions (defined as movements that link actions). In addition, we analyzed the frequency, duration, and occupancy rate of the connected motions. RESULTS In the SN group, the R.E.N.A.L. nephrometry score was lower (7 vs. 6, P = 0.019) and the time to identify and dissect the renal artery was significantly shorter (16 vs. 9 min, P = 0.008). The inefficient "insert," "pull," and "rotate" connected motions were significantly improved by SN. SN significantly improved the frequency, duration, and occupancy rate of the connected motions of the first surgeon's right hand and of both hands of the second surgeon. The improvements in connected motions were positively associated with SN for both surgeons. CONCLUSION This is the first study to investigate SN for nephron-sparing surgery. SN with 3D models might help improve the connected motions of expert surgeons to ensure efficient RAPN.
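A small sketch of the connected-motion statistics mentioned in the abstract (frequency, duration, and occupancy rate), computed from a labelled list of motion segments. The segment labels and timings below are invented for illustration, not study measurements.

```python
# Frequency, duration, and occupancy rate of "connected motion" segments.
segments = [  # (label, start_s, end_s) for one hand during one dissection
    ("direct_action", 0.0, 4.0),
    ("connected_motion", 4.0, 5.5),   # e.g., an "insert" or "pull" link
    ("direct_action", 5.5, 11.0),
    ("connected_motion", 11.0, 14.0),
]

connected = [(s, e) for lab, s, e in segments if lab == "connected_motion"]
total_time = segments[-1][2] - segments[0][1]  # end of last - start of first
duration = sum(e - s for s, e in connected)

print(f"frequency: {len(connected)} occurrences")
print(f"duration:  {duration:.1f} s")
print(f"occupancy: {100 * duration / total_time:.1f} % of task time")
```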
Affiliations
- Satoshi Kobayashi: Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan; Department of Urology, Kyushu University, Fukuoka, Japan
- Byunghyun Cho: Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
- Arnaud Huaulmé: Faculty of Medicine, National Institute of Health and Scientific Research, University of Rennes 1, Rennes, France
- Hiroshi Honda: Department of Radiology, Kyushu University, Fukuoka, Japan
- Pierre Jannin: Faculty of Medicine, National Institute of Health and Scientific Research, University of Rennes 1, Rennes, France
- Makoto Hashizume: Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
- Masatoshi Eto: Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan; Department of Urology, Kyushu University, Fukuoka, Japan
8. Forestier G, Petitjean F, Senin P, Despinoy F, Huaulmé A, Fawaz HI, Weber J, Idoumghar L, Muller PA, Jannin P. Surgical motion analysis using discriminative interpretable patterns. Artif Intell Med 2018; 91:3-11. PMID: 30172445; DOI: 10.1016/j.artmed.2018.08.002.
Abstract
OBJECTIVE The analysis of surgical motion has received growing interest with the development of devices allowing its automatic capture. In this context, the use of advanced surgical training systems makes automated assessment of surgical trainees possible. Automatic and quantitative evaluation of surgical skills is a very important step towards improving surgical patient care. MATERIAL AND METHOD In this paper, we present an approach for the discovery and ranking of discriminative and interpretable patterns of surgical practice from recordings of surgical motions. A pattern is defined as a series of actions or events in the kinematic data that together are distinctive of a specific gesture or skill level. Our approach is based on the decomposition of continuous kinematic data into a set of overlapping gestures represented by strings (bag of words), for which we compute a comparative numerical statistic (tf-idf) enabling discriminative gesture discovery via relative occurrence frequency. RESULTS We carried out experiments on three surgical motion datasets. The results show that the patterns identified by the proposed method can be used to accurately classify individual gestures, skill levels, and surgical interfaces. We also show how the patterns provide detailed feedback on trainee skill assessment. CONCLUSIONS The proposed approach is an interesting addition to existing learning tools for surgery, as it provides a way to obtain feedback on which parts of an exercise were used to classify the attempt as correct or incorrect.
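A minimal sketch of the bag-of-words/tf-idf ranking idea described above, using scikit-learn's TfidfVectorizer on made-up gesture strings. Real "words" would come from discretizing kinematic signals (e.g., into SAX-style symbols), and the expert/novice split below is an assumption for illustration.

```python
# Ranking gesture "words" by tf-idf to surface discriminative patterns.
from sklearn.feature_extraction.text import TfidfVectorizer

trials = [
    "abc abd abc cde",   # novice trial, as a space-separated bag of words
    "abc abd abd cde",   # novice
    "xyz xzy abc xyz",   # expert
    "xyz xzy xzy abc",   # expert
]
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(trials)

# Rank words by mean tf-idf within the expert trials: high-scoring words are
# candidate discriminative patterns of expert practice.
expert_mean = tfidf[2:].mean(axis=0).A1
for word, score in sorted(
    zip(vectorizer.get_feature_names_out(), expert_mean),
    key=lambda pair: -pair[1],
):
    print(f"{word}: {score:.3f}")
```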
Affiliations
- Germain Forestier: IRIMAS, Université de Haute-Alsace, Mulhouse, France; Faculty of Information Technology, Monash University, Melbourne, Australia
- François Petitjean: Faculty of Information Technology, Monash University, Melbourne, Australia
- Pavel Senin: Los Alamos National Laboratory, University of Hawai'i at Mānoa, United States
- Fabien Despinoy: Univ Rennes, Inserm, LTSI - UMR_S 1099, F-35000 Rennes, France
- Arnaud Huaulmé: Univ Rennes, Inserm, LTSI - UMR_S 1099, F-35000 Rennes, France
- Pierre Jannin: Univ Rennes, Inserm, LTSI - UMR_S 1099, F-35000 Rennes, France
9. Dergachyova O, Bouget D, Huaulmé A, Morandi X, Jannin P. Automatic data-driven real-time segmentation and recognition of surgical workflow. Int J Comput Assist Radiol Surg 2016; 11:1081-9. PMID: 26995598; DOI: 10.1007/s11548-016-1371-x.
Abstract
PURPOSE With the intention of extending the perception and action of surgical staff inside the operating room, the medical community has expressed growing interest in context-aware systems. Requiring accurate identification of the surgical workflow, such systems make use of data from a diverse set of available sensors. In this paper, we propose a fully data-driven and real-time method for segmentation and recognition of surgical phases using a combination of video data and instrument usage signals, exploiting no prior knowledge. We also introduce new validation metrics for the assessment of workflow detection. METHODS The segmentation and recognition are based on a four-stage process. First, during the learning stage, a Surgical Process Model is automatically constructed from data annotations to guide the following process. Second, data samples are described using a combination of low-level visual cues and instrument information. In the third stage, these descriptions are employed to train a set of AdaBoost classifiers, each capable of distinguishing one surgical phase from the others. Finally, the AdaBoost responses are used as input to a Hidden semi-Markov Model in order to obtain the final decision. RESULTS On the MICCAI EndoVis challenge laparoscopic dataset, we achieved a precision and recall of 91% in the classification of 7 phases. CONCLUSION Compared to an analysis based on one data type only, the combination of visual features and instrument signals allows better segmentation, reduction of the detection delay, and discovery of the correct phase order.
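A hedged sketch of the third stage described above: one AdaBoost classifier per phase in a one-vs-rest setup over per-frame feature vectors. The Hidden semi-Markov Model decision stage is not reproduced here; a per-frame argmax over the AdaBoost responses stands in for it, which is a deliberate simplification. Features and labels are random stand-ins, not challenge data.

```python
# One-vs-rest AdaBoost phase classifiers over per-frame features (toy data).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 24))        # visual cues + instrument signals
y = rng.integers(0, 7, size=500)          # 7 surgical phases

clf = OneVsRestClassifier(AdaBoostClassifier(n_estimators=50))
clf.fit(X[:400], y[:400])

scores = clf.decision_function(X[400:])   # per-phase responses; in the paper,
phases = scores.argmax(axis=1)            # these feed a Hidden semi-Markov Model
print(phases[:20])
```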
Affiliations
- Olga Dergachyova: INSERM, U1099, Rennes, 35000, France; Université de Rennes 1, LTSI, Rennes, 35000, France
- David Bouget: INSERM, U1099, Rennes, 35000, France; Université de Rennes 1, LTSI, Rennes, 35000, France
- Arnaud Huaulmé: INSERM, U1099, Rennes, 35000, France; Université de Rennes 1, LTSI, Rennes, 35000, France; Université Joseph Fourier, TIMC-IMAG UMR 5525, Grenoble, 38041, France
- Xavier Morandi: INSERM, U1099, Rennes, 35000, France; Université de Rennes 1, LTSI, Rennes, 35000, France; CHU Rennes, Département de Neurochirurgie, Rennes, 35000, France
- Pierre Jannin: INSERM, U1099, Rennes, 35000, France; Université de Rennes 1, LTSI, Rennes, 35000, France