51
Alnafisee N, Zafar S, Vedula SS, Sikder S. Current methods for assessing technical skill in cataract surgery. J Cataract Refract Surg 2021; 47:256-264. [PMID: 32675650] [DOI: 10.1097/j.jcrs.0000000000000322]
Abstract
Surgery is a major source of errors in patient care. Preventing complications from surgical errors in the operating room is estimated to avert up to 41,846 readmissions and save $620.3 million per year. Poor technical skill is now established to be associated with an increased risk of severe postoperative adverse events, and traditional models for training surgeons are being challenged by rapid advances in technology, an intensified patient-safety culture, and the need for value-driven health systems. This review discusses the methods currently available for evaluating technical skills in cataract surgery and the recent technological advances that have enabled the capture and analysis of large amounts of complex surgical data for more automated, objective skills assessment.
Affiliation(s)
- Nouf Alnafisee
- From The Wilmer Eye Institute, Johns Hopkins University School of Medicine (Alnafisee, Zafar, Sikder), Baltimore, and the Department of Computer Science, Malone Center for Engineering in Healthcare, The Johns Hopkins University Whiting School of Engineering (Vedula), Baltimore, Maryland, USA
52
Castillo-Segura P, Fernández-Panadero C, Alario-Hoyos C, Muñoz-Merino PJ, Delgado Kloos C. Objective and automated assessment of surgical technical skills with IoT systems: A systematic literature review. Artif Intell Med 2021; 112:102007. [PMID: 33581827] [DOI: 10.1016/j.artmed.2020.102007]
Abstract
The assessment of the surgical technical skills to be acquired by novice surgeons has traditionally been done by an expert surgeon and is therefore subjective in nature. Nevertheless, recent advances in IoT (Internet of Things), the possibility of incorporating sensors into objects and environments to collect large amounts of data, and progress in machine learning are facilitating a more objective and automated assessment of surgical technical skills. This paper presents a systematic literature review of papers published after 2013 discussing the objective and automated assessment of surgical technical skills. Of an initial list of 537 papers, 101 were analyzed to identify: 1) the sensors used; 2) the data collected by these sensors and the relationship between these data, surgical technical skills, and surgeons' levels of expertise; 3) the statistical methods and algorithms used to process these data; and 4) the feedback provided based on the outputs of these statistical methods and algorithms. In particular: 1) mechanical and electromagnetic sensors are widely used for tool tracking, while inertial measurement units are widely used for body tracking; 2) path length, number of sub-movements, smoothness, fixation, saccade, and total time are the main indicators obtained from raw data and serve to assess surgical technical skills such as economy, efficiency, hand tremor, or mind control, and to distinguish between two or three levels of expertise (novice/intermediate/advanced surgeons); 3) SVMs (Support Vector Machines) and neural networks are the preferred statistical methods and algorithms for processing the collected data, while new opportunities are opening up to combine various algorithms and use deep learning; and 4) feedback is provided by matching performance indicators to a lexicon of words and visualizations, although there is considerable room for research on feedback and visualizations, taking, for example, ideas from learning analytics.
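For readers who want to experiment with the classification approach this review highlights, the following is a minimal sketch of an SVM separating novices from experts using the indicator types named above (path length, sub-movements, smoothness, total time); the feature values and class structure are synthetic assumptions, not data from any reviewed study.

```python
# Illustrative sketch: classifying expertise from sensor-derived metrics
# with an SVM, which the review identifies as a preferred method.
# All numbers below are synthetic assumptions for illustration only.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 60  # 30 novice and 30 expert trials (synthetic)
# Columns: path_length (m), sub_movements, smoothness, total_time (s)
novices = rng.normal([4.0, 120, 0.4, 300], [0.8, 25, 0.1, 60], (n // 2, 4))
experts = rng.normal([2.5, 70, 0.7, 180], [0.6, 15, 0.1, 40], (n // 2, 4))
X = np.vstack([novices, experts])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # 0 = novice, 1 = expert

# Standardize features before the RBF-kernel SVM, then cross-validate.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```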
Affiliation(s)
- Pablo Castillo-Segura
- Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain.
- Carlos Alario-Hoyos
- Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain.
- Pedro J Muñoz-Merino
- Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain.
- Carlos Delgado Kloos
- Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain.
53
Tontini GE, Neumann H. Artificial intelligence: Thinking outside the box. Best Pract Res Clin Gastroenterol 2020; 52-53:101720. [PMID: 34172247] [DOI: 10.1016/j.bpg.2020.101720]
Abstract
Artificial intelligence (AI) for luminal gastrointestinal endoscopy is rapidly evolving. To date, most applications have focused on colon polyp detection and characterization. However, the potential of AI to revolutionize our current practice in endoscopy is much more broadly positioned. In this review article, the authors provide new ideas on how AI might help endoscopists rediscover endoscopy practice in the future.
Affiliation(s)
- Gian Eugenio Tontini
- Gastroenterology and Endoscopy Unit, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy; Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Helmut Neumann
- Department of Interdisciplinary Endoscopy, University Hospital Mainz, Mainz, Germany.
54
Rogers MP, DeSantis AJ, Janjua H, Barry TM, Kuo PC. The future surgical training paradigm: Virtual reality and machine learning in surgical education. Surgery 2020; 169:1250-1252. [PMID: 33280858] [DOI: 10.1016/j.surg.2020.09.040]
Abstract
Surgical training has undergone substantial change in the last few decades. As technology and patient complexity continue to increase, demands for novel approaches to ensure competency have arisen. Virtual reality systems augmented with machine learning represent one such approach. The ability to offer on-demand training, integrate checklists, and provide personalized, surgeon-specific feedback is paving the way to a new era of surgical training. Machine learning algorithms that improve over time as they acquire more data will continue to refine the education they provide. Further, fully immersive simulated environments coupled with machine learning analytics provide real-world training opportunities in a safe atmosphere, away from the potential to harm patients. Careful implementation of these technologies has the potential to increase access to, and improve the quality of, surgical training and patient care, and is poised to change the landscape of current surgical training. Herein, we describe the current state of virtual reality coupled with machine learning for surgical training, future directions, and existing limitations of this technology.
Affiliation(s)
- Michael P Rogers
- OnetoMAP Data Analytics and Machine Learning, Department of General Surgery, University of South Florida Morsani College of Medicine, Tampa, FL
- Anthony J DeSantis
- OnetoMAP Data Analytics and Machine Learning, Department of General Surgery, University of South Florida Morsani College of Medicine, Tampa, FL
- Haroon Janjua
- OnetoMAP Data Analytics and Machine Learning, Department of General Surgery, University of South Florida Morsani College of Medicine, Tampa, FL
- Tara M Barry
- OnetoMAP Data Analytics and Machine Learning, Department of General Surgery, University of South Florida Morsani College of Medicine, Tampa, FL
- Paul C Kuo
- OnetoMAP Data Analytics and Machine Learning, Department of General Surgery, University of South Florida Morsani College of Medicine, Tampa, FL.
55
Wang WA, Dong P, Zhang A, Wang WJ, Guo CA, Wang J, Liu HB. Artificial intelligence: A new budding star in gastric cancer. Artif Intell Gastroenterol 2020; 1:60-70. [DOI: 10.35712/aig.v1.i4.60]
Abstract
The pursuit of health has always been a driving force for the advancement of human society, and social development is profoundly affected by every breakthrough in medicine. With the arrival of the information technology revolution, artificial intelligence (AI) has developed rapidly. AI has been combined with medicine broadly but remains less studied in gastric cancer (GC). AI is a budding star in GC, and its contribution so far is mainly focused on diagnosis and treatment. For early GC, AI's impact is reflected not only in its high accuracy but also in its ability to quickly train primary doctors, improve the diagnosis rate of early GC, and reduce missed cases; it can also reduce the possibility of missed diagnosis of advanced GC in the cardia. Furthermore, AI is used to assist imaging physicians in locating lymph nodes and, more importantly, can more effectively judge lymph node metastasis of GC, which informs patient prognosis. AI also has great potential in the surgical treatment of GC. Robotic surgery is the latest technology in GC surgery and a bright star for the minimally invasive treatment of GC; together with laparoscopic surgery, it has become a common treatment for GC. Through machine learning, robotic systems can reduce operator errors and patient trauma, and can help predict the prognosis of GC patients. Over centuries of development, surgery has gradually shifted from traumatic to minimally invasive approaches. In the future, AI will help reduce surgical trauma for GC patients and further improve the efficiency of minimally invasive treatment of GC.
Affiliation(s)
- Wen-An Wang
- Graduate School, Gansu University of Traditional Chinese Medicine, Lanzhou 730000, Gansu Province, China
- Department of General Surgery, The 940th Hospital of Joint Logistics Support Force of Chinese People’s Liberation Army, Lanzhou 730050, Gansu Province, China
- Peng Dong
- Department of General Surgery, Lanzhou University Second Hospital, Lanzhou 730000, Gansu Province, China
- An Zhang
- Graduate School, Gansu University of Traditional Chinese Medicine, Lanzhou 730000, Gansu Province, China
- Department of General Surgery, The 940th Hospital of Joint Logistics Support Force of Chinese People’s Liberation Army, Lanzhou 730050, Gansu Province, China
- Wen-Jie Wang
- Department of General Surgery, Lanzhou University Second Hospital, Lanzhou 730000, Gansu Province, China
- Chang-An Guo
- Department of Emergency Medicine, Lanzhou University Second Hospital, Lanzhou 730000, Gansu Province, China
- Jing Wang
- Graduate School, Gansu University of Traditional Chinese Medicine, Lanzhou 730000, Gansu Province, China
- Department of General Surgery, The 940th Hospital of Joint Logistics Support Force of Chinese People’s Liberation Army, Lanzhou 730050, Gansu Province, China
- Hong-Bin Liu
- Department of General Surgery, The 940th Hospital of Joint Logistics Support Force of Chinese People’s Liberation Army, Lanzhou 730050, Gansu Province, China
56
Rogers MP, DeSantis AJ, Janjua H, Kuo PC. The present and future state of machine learning for predictive analytics in surgery. Am J Surg 2020; 221:1298-1299. [PMID: 33223076] [DOI: 10.1016/j.amjsurg.2020.11.023]
Affiliation(s)
- Michael P Rogers
- OnetoMAP Data Analytics and Machine Learning, Department of Surgery, University of South Florida Morsani College of Medicine, Tampa, FL, USA
- Anthony J DeSantis
- OnetoMAP Data Analytics and Machine Learning, Department of Surgery, University of South Florida Morsani College of Medicine, Tampa, FL, USA
- Haroon Janjua
- OnetoMAP Data Analytics and Machine Learning, Department of Surgery, University of South Florida Morsani College of Medicine, Tampa, FL, USA
- Paul C Kuo
- OnetoMAP Data Analytics and Machine Learning, Department of Surgery, University of South Florida Morsani College of Medicine, Tampa, FL, USA.
57
Rahman MM, Balakuntala MV, Gonzalez G, Agarwal M, Kaur U, Venkatesh VLN, Sanchez-Tamayo N, Xue Y, Voyles RM, Aggarwal V, Wachs J. SARTRES: a semi-autonomous robot teleoperation environment for surgery. Comput Methods Biomech Biomed Eng Imaging Vis 2020. [DOI: 10.1080/21681163.2020.1834878]
Affiliation(s)
- Md Masudur Rahman
- Department of Computer Science, Purdue University, West Lafayette, IN, USA
- Glebys Gonzalez
- School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
- Mridul Agarwal
- School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Upinder Kaur
- School of Engineering Technology, Purdue University, West Lafayette, IN, USA
- Yexiang Xue
- Department of Computer Science, Purdue University, West Lafayette, IN, USA
- Richard M. Voyles
- School of Engineering Technology, Purdue University, West Lafayette, IN, USA
- Vaneet Aggarwal
- School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
- School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Juan Wachs
- School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
58
Ma R, Vanstrum EB, Lee R, Chen J, Hung AJ. Machine learning in the optimization of robotics in the operative field. Curr Opin Urol 2020; 30:808-816. [PMID: 32925312] [PMCID: PMC7735438] [DOI: 10.1097/mou.0000000000000816]
Abstract
PURPOSE OF REVIEW The increasing use of robotics in urologic surgery facilitates the collection of 'big data'. Machine learning enables computers to infer patterns from large datasets. This review aims to highlight recent findings and applications of machine learning in robotic-assisted urologic surgery. RECENT FINDINGS Machine learning has been used in surgical performance assessment and skill training, surgical candidate selection, and autonomous surgery. Autonomous segmentation and classification of surgical data have been explored, serving as the stepping-stone for providing real-time surgical assessment and, ultimately, improving surgical safety and quality. Predictive machine learning models have been created to guide appropriate surgical candidate selection, whereas intraoperative machine learning algorithms have been designed to provide 3-D augmented reality and real-time surgical margin checks. Reinforcement-learning strategies have been utilized in autonomous robotic surgery, and the combination of expert demonstrations and trial-and-error learning by the robot itself is a promising approach towards autonomy. SUMMARY Robot-assisted urologic surgery coupled with machine learning is a burgeoning area of study that demonstrates exciting potential. However, further validation and clinical trials are required to ensure the safety and efficacy of incorporating machine learning into surgical practice.
Affiliation(s)
- Runzhuo Ma
- Center for Robotic Simulation & Education, Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California, USA
59
Zhao S, Xiao X, Wang Q, Zhang X, Li W, Soghier L, Hahn J. An Intelligent Augmented Reality Training Framework for Neonatal Endotracheal Intubation. Proc IEEE Int Symp Mixed Augment Real (ISMAR) 2020; 2020:672-681. [PMID: 33935605] [PMCID: PMC8084704] [DOI: 10.1109/ismar50242.2020.00097]
Abstract
Neonatal Endotracheal Intubation (ETI) is a critical resuscitation skill that requires tremendous practice by trainees before clinical exposure. However, the current manikin-based training regimen is ineffective at providing satisfactory real-time procedural guidance for accurate assessment because it lacks see-through visualization within the manikin. Training efficiency is further reduced by the limited availability of expert instructors, which inevitably results in a long learning curve for trainees. To this end, we propose an intelligent Augmented Reality (AR) training framework that provides trainees with a complete visualization of the ETI procedure for real-time guidance and assessment. Specifically, the proposed framework captures the motions of the laryngoscope and the manikin and offers 3D see-through visualization rendered to a head-mounted display (HMD). Furthermore, an attention-based Convolutional Neural Network (CNN) model is developed to automatically assess ETI performance from the captured motions and to identify the regions of motion that contribute most to the performance evaluation. Lastly, user-friendly augmented feedback is delivered, with results interpretable against the ETI scoring rubric through a color-coded motion trajectory that highlights the regions needing more practice. The classification accuracy of our machine learning model is 84.6%.
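As a rough illustration of the attention-based CNN idea described above, here is a minimal sketch of attention-weighted temporal pooling over motion sequences; the channel counts, architecture, and data are assumptions for illustration, not the authors' implementation.

```python
# Sketch: a 1D CNN with a per-timestep attention head over motion data.
# The attention weights indicate which regions of the motion drive the
# prediction, mirroring the interpretability goal described above.
import torch
import torch.nn as nn

class AttnMotionCNN(nn.Module):
    def __init__(self, in_channels=12, n_classes=2):  # assumed sizes
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.attn = nn.Conv1d(64, 1, kernel_size=1)  # per-timestep score
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                            # x: (batch, channels, time)
        h = self.conv(x)                             # (batch, 64, time)
        w = torch.softmax(self.attn(h), dim=-1)      # attention over time
        pooled = (h * w).sum(dim=-1)                 # attention-weighted pooling
        return self.fc(pooled), w.squeeze(1)         # logits + weights

model = AttnMotionCNN()
logits, weights = model(torch.randn(4, 12, 200))  # 4 trials, 12 channels, 200 steps
print(logits.shape, weights.shape)
```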
Affiliation(s)
- Wei Li
- George Washington University
60
What reveals about depression level? The role of multimodal features at the level of interview questions. Information & Management 2020. [DOI: 10.1016/j.im.2020.103349]
61
Brown KC, Bhattacharyya KD, Kulason S, Zia A, Jarc A. How to Bring Surgery to the Next Level: Interpretable Skills Assessment in Robotic-Assisted Surgery. Visc Med 2020; 36:463-470. [PMID: 33447602] [DOI: 10.1159/000512437]
Abstract
Introduction A surgeon's technical skills are an important factor in delivering optimal patient care. Most existing methods to estimate technical skills remain subjective and resource intensive. Robotic-assisted surgery (RAS) provides a unique opportunity to develop objective metrics from key elements of intraoperative surgeon behavior that can be captured unobtrusively, such as instrument positions and button presses. Recent studies have shown that objective metrics based on these data (referred to as objective performance indicators [OPIs]) correlate with select clinical outcomes during robotic-assisted radical prostatectomy. However, current OPIs remain difficult to interpret directly and, therefore, to use within structured feedback to improve surgical efficiencies. Methods We analyzed kinematic and event data from da Vinci surgical systems (Intuitive Surgical, Inc., Sunnyvale, CA, USA) to calculate values that summarize the use of robotic instruments, referred to as OPIs. These indicators were mapped to the broader technical skill categories of established training protocols. A data-driven approach was then applied to further select OPIs that distinguish skill for each technical skill category within each training task. This subset of OPIs was used to build a set of logistic regression classifiers that predict the probability of expertise in each skill, to identify targets for improvement and practice. The final, proposed feedback using OPIs was based on the coefficients of the logistic regression model to highlight specific actions that can be taken to improve. Results We determined that for the majority of skills, only a small subset of OPIs (2-10) is required to achieve the highest model accuracies (80-95%) for estimating technical skills within clinical-like tasks on a porcine model. The majority of the skill models have accuracy similar to that of models predicting overall expertise for a task (80-98%). Skill models can divide a prediction into interpretable categories for simpler, targeted feedback. Conclusion We define and validate a methodology to create interpretable metrics for key technical skills during clinical-like tasks in RAS. Using this framework for evaluating technical skills, we believe that surgical trainees can better understand both what can be improved and how to improve.
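The general recipe described above (logistic regression on a small subset of OPIs, with signed coefficients read as targeted feedback) can be sketched as follows; the OPI names and data here are hypothetical placeholders, and the model is a generic logistic regression rather than the authors' validated classifiers.

```python
# Hedged sketch: probability-of-expertise classifier over a few OPIs,
# with coefficient-based feedback. OPI names and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

opi_names = ["camera_movement_freq", "wrist_articulation", "idle_time_ratio"]
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 3))
y = (0.9 * X[:, 0] + 0.7 * X[:, 1] - 0.8 * X[:, 2]
     + rng.normal(0, 0.5, 80)) > 0  # synthetic expert/novice labels

Xs = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(Xs, y.astype(int))

# Probability of expertise for one trial, plus coefficient-based feedback.
p = model.predict_proba(Xs[:1])[0, 1]
print(f"P(expert) = {p:.2f}")
for name, coef in zip(opi_names, model.coef_[0]):
    direction = "increase" if coef > 0 else "decrease"
    print(f"  {name}: coef {coef:+.2f} -> {direction} to raise expertise score")
```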
Affiliation(s)
- Kristen C Brown
- Advanced Product Development, Intuitive Surgical, Inc., Norcross, Georgia, USA
- Sue Kulason
- Advanced Product Development, Intuitive Surgical, Inc., Norcross, Georgia, USA
- Aneeq Zia
- Advanced Product Development, Intuitive Surgical, Inc., Norcross, Georgia, USA
- Anthony Jarc
- Advanced Product Development, Intuitive Surgical, Inc., Norcross, Georgia, USA
62
Lefor AK, Harada K, Dosis A, Mitsuishi M. Motion analysis of the JHU-ISI Gesture and Skill Assessment Working Set using Robotics Video and Motion Assessment Software. Int J Comput Assist Radiol Surg 2020; 15:2017-2025. [PMID: 33025366] [PMCID: PMC7671974] [DOI: 10.1007/s11548-020-02259-z]
Abstract
Purpose The JIGSAWS dataset is a fixed dataset of robot-assisted surgery kinematic data used to develop predictive models of skill. The purpose of this study is to analyze the relationships of self-defined skill level with global rating scale scores and kinematic data (time, path length, and movements) from three exercises (suturing, knot-tying, and needle passing) performed with the right and left hands in the JIGSAWS dataset. Methods Global rating scale scores are reported in the JIGSAWS dataset, and kinematic data were calculated using ROVIMAS software. Self-defined skill levels (novice, intermediate, expert) are reported in the dataset. Correlation coefficients (global rating scale-skill level and global rating scale-kinematic parameters) were calculated, and kinematic parameters were compared among skill levels. Results Global rating scale scores correlated with skill in the knot-tying exercise (r = 0.55, p = 0.0005). In the suturing exercise, time, path length (left), and movements (left) were significantly different (p < 0.05) between novices and experts. For knot-tying, time, path length (right and left), and movements (right) differed significantly between novices and experts. For needle passing, no kinematic parameter differed significantly between novices and experts. The only kinematic parameter that correlated with global rating scale scores was time in the knot-tying exercise. Conclusion Global rating scale scores correlate weakly with skill level and kinematic parameters. The ability of kinematic parameters to differentiate among self-defined skill levels is inconsistent. Additional data are needed to enhance the dataset and facilitate subset analyses and future model development.
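The kinematic parameters analyzed above (time, path length, movement count) can be computed from a tool-tip trajectory along the following lines; the sampling rate, movement threshold, and data are assumed values, and the correlation step uses a rank correlation as one reasonable choice rather than the study's exact procedure.

```python
# Sketch: time, path length, and a threshold-based movement count from a
# synthetic tool-tip trajectory, plus a rank correlation against scores.
import numpy as np
from scipy.stats import spearmanr

def kinematic_params(traj, hz=30.0, speed_thresh=0.1):
    """traj: (T, 3) array of tool-tip positions in metres; hz = sample rate."""
    steps = np.diff(traj, axis=0)
    dists = np.linalg.norm(steps, axis=1)
    total_time = len(traj) / hz                     # seconds
    path_length = dists.sum()                       # metres
    moving = dists * hz > speed_thresh              # speed above threshold
    movements = int(np.count_nonzero(np.diff(moving.astype(int)) == 1))
    return total_time, path_length, movements

rng = np.random.default_rng(2)
traj = np.cumsum(rng.normal(0, 0.002, (900, 3)), axis=0)  # 30 s synthetic path
print(kinematic_params(traj))

# Correlating one parameter with global rating scale scores (synthetic data):
times = rng.normal(120, 30, 39)
grs_scores = rng.normal(18, 4, 39)
rho, p = spearmanr(times, grs_scores)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```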
Affiliation(s)
- Alan Kawarai Lefor
- Department of Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan.
- Kanako Harada
- Department of Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan
- Department of Mechanical Engineering, School of Engineering, The University of Tokyo, Tokyo, Japan
- Mamoru Mitsuishi
- Department of Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan
- Department of Mechanical Engineering, School of Engineering, The University of Tokyo, Tokyo, Japan
63
Lee JA, Close MF, Liu YF, Rowley MA, Isaac MJ, Costello MS, Nguyen SA, Meyer TA. Using Intraoperative Recordings to Evaluate Surgical Technique and Performance in Mastoidectomy. JAMA Otolaryngol Head Neck Surg 2020; 146:893-899. [PMID: 32780790] [DOI: 10.1001/jamaoto.2020.2063]
Abstract
Importance Otolaryngology residency programs currently lack rigorous methods for assessing surgical skill and often rely on biased tools of evaluation. Objectives To evaluate which techniques used in mastoidectomy can serve as indicators of surgeon level (defined as the level of training) and whether these determinations of technique can be made based solely on the movement of the drill head or suction. Design, Setting, and Participants In this prospective, observational study conducted from January 1, 2015, to December 31, 2019, at a single tertiary care institution, 3 independent observers made blinded evaluations on 24 intraoperative recordings of surgeons (6 junior residents, 4 senior residents, and 2 attending surgeons) performing mastoidectomies. Main Outcomes and Measures Observers assessed drill stroke count, drilling efficiency, stroke pattern, use of suction and irrigation, and estimated surgeon level. Assessments were made on both original videos and animated videos that show only the path of the burr head or suction as dots against a white background. Results Among the 24 recorded mastoidectomies performed by the 12 study surgeons, intraclass correlation was excellent for original video assessment of drill stroke count (0.98 [95% CI, 0.97-1.00]), use of suction (0.75 [95% CI, 0.52-0.89]), use of irrigation (0.83 [95% CI, 0.66-0.92]), and estimated surgeon level (0.82 [95% CI, 0.64-0.92]) and fair for drilling efficiency (0.54 [95% CI, 0.09-0.79]) and stroke pattern (0.49 [95% CI, -0.02 to 0.76]). Intraclass correlation was excellent for animated video assessment of drill stroke count per unit time (0.98 [95% CI, 0.96-0.99]) and drilling efficiency (0.80 [95% CI, 0.60-0.91]), good for stroke pattern (0.68 [95% CI, 0.38-0.85]) and estimated surgeon level (based on path of drill) (0.69 [95% CI, 0.38-0.85]), and fair for use of suction (0.58 [95% CI, 0.16-0.80]) and estimated surgeon level (based on path of suction) (0.58 [95% CI, 0.17-0.80]). On evaluation of original videos, junior residents had lower drill stroke count compared with senior residents and attending surgeons (6.0 [interquartile range (IQR), 3.0-8.0] vs 9.5 [IQR, 5.0-13.0] vs 10.5 [IQR, 5.0-17.8]; η2 = 0.14 [95% CI, 0.01-0.28]). On evaluation of animated videos, junior residents also had lower drill stroke count compared with senior residents and attending surgeons (6.0 [IQR, 4.0-9.0] vs 10.5 [IQR, 10.0-13.8] vs 10.5 [IQR, 4.3-21.0]; η2 = 0.19 [95% CI, 0.04-0.33]). Compared with junior and senior residents, attending surgeons had higher median ratings of drilling efficiency (original videos: junior residents, 4.0 [IQR, 3.0-4.0]; senior residents, 4.0 [IQR, 3.0-4.8]; attending surgeons, 5.0 [IQR, 4.3-5.0]; η2 = 0.23 [95% CI, 0.06-0.37]; animated videos: junior residents, 4.0 [IQR, 3.0-4.0]; senior residents, 3.0 [IQR, 2.0-4.0]; attending surgeons, 5.0 [IQR, 4.0-5.0]; η2 = 0.25 [95% CI, 0.08-0.39]) and stroke pattern (original videos: junior residents, 4.0 [IQR, 3.0-4.0]; senior residents, 4.0 [IQR, 3.0-4.8]; attending surgeons, 5.0 [IQR, 5.0-5.0]; η2 = 0.17 [95% CI, 0.03-0.31]; animated videos: junior residents, 4.0 [IQR, 3.0-4.0]; senior residents, 4.0 [IQR, 2.0-4.0]; attending surgeons, 5.0 [IQR, 5.0-5.0]; η2 = 0.15 [95% CI, 0.02-0.29]). Conclusions and Relevance This study suggests that observation of intraoperative mastoidectomy recordings is a feasible method of evaluating surgeon level. Reasonable indicators of surgeon level include the drill stroke count, drilling efficiency, stroke pattern, and use of the suction irrigator. Observing the path of the drill alone is sufficient to appreciate differences in drilling technique but not sufficient to accurately determine surgeon level. Intraoperative recordings can serve as a useful addition to resident education and evaluation.
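The inter-observer agreement statistic reported above can be reproduced in principle with a standard two-way random-effects ICC; the sketch below implements the Shrout-Fleiss ICC(2,1) on a synthetic ratings matrix and is not the authors' analysis code.

```python
# Sketch: two-way random-effects, single-rater ICC(2,1) (Shrout & Fleiss)
# on a synthetic (recordings x observers) ratings matrix.
import numpy as np

def icc_2_1(ratings):
    """ratings: (n_subjects, k_raters) matrix of scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_r = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # rows
    ms_c = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # columns
    sse = ((ratings - ratings.mean(axis=1, keepdims=True)
            - ratings.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))                                  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

rng = np.random.default_rng(3)
true_skill = rng.normal(10, 3, 24)                         # 24 recordings
ratings = true_skill[:, None] + rng.normal(0, 1, (24, 3))  # 3 observers
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```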
Affiliation(s)
- Joshua A Lee
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston
- Michaela F Close
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston
- Yuan F Liu
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston
- M Andrew Rowley
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston
- Mitchell J Isaac
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston
- Mark S Costello
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston
- Shaun A Nguyen
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston
- Ted A Meyer
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston
64
Fong Y, Buell JF, Collins J, Martinie J, Bruns C, Tsung A, Clavien PA, Nachmany I, Edwin B, Pratschke J, Solomonov E, Koenigsrainer A, Giulianotti PC. Applying the Delphi process for development of a hepatopancreaticobiliary robotic surgery training curriculum. Surg Endosc 2020; 34:4233-4244. [PMID: 32767146] [DOI: 10.1007/s00464-020-07836-6]
Abstract
BACKGROUND Robotic hepatopancreaticobiliary (HPB) procedures are performed worldwide, and establishing processes for the safe adoption of this technology is essential for patient benefit. We report results of a Delphi process to define and optimize robotic training procedures for HPB surgeons. METHODS In 2019, a robotic HPB surgery panel with an interest in surgical training from the Americas and Europe was created and met. An e-consensus-finding exercise using the Delphi process was applied, and consensus was defined as 80% agreement on each question. Iterations of anonymous voting continued over three rounds. RESULTS Members agreed on several points: there is a need for a standardized robotic training curriculum for HPB surgery that considers surgeons' experience and, based on a robotic hepatectomy, includes a common approach for "basic robotic skills" training (an e-learning module including hardware description, patient selection, port placement, docking, troubleshooting, fundamentals of robotic surgery, team training and efficiency, and emergencies) and an "advanced technical skills curriculum" (e-learning including patient selection information, cognitive skills, and recommended operative equipment lists). A modular approach to index procedures should be used, with video demonstrations, port placement for the index procedure, troubleshooting, and emergency scenario management information. Inexperienced surgeons should undergo training in basic robotic skills and console proficiency, transitioning to full procedure training by e-learning (video demonstration, simulation training, case observation, and final evaluation). Experienced surgeons should undergo basic training when using a new system (e-learning, dry lab, operating room (OR) team training, virtual reality modules, and wet lab; case observations were unnecessary for basic training) and should complete the advanced index procedural robotic curriculum with assessment by wet lab, case observation, and OR team training. CONCLUSIONS Optimization and standardization of the training and education of HPB surgeons in robotic procedures were agreed upon. The results are being incorporated into future curricula for education in robotic surgery.
Affiliation(s)
- Yuman Fong
- Department of Surgery, City of Hope Medical Center, 1500 East Duarte Road, Duarte, CA, 91011, USA.
- Joseph F Buell
- Department of Surgery, Mission Healthcare, HCA Healthcare, North Carolina Division, MAHEC University of North Carolina, Asheville, NC, USA
- Justin Collins
- Department of Molecular Medicine and Surgery, Karolinska Institutet, Stockholm, Sweden
- John Martinie
- Department of General Surgery, Carolinas Medical Center, Charlotte, NC, USA
- Christiane Bruns
- Department of General, Visceral, Cancer and Transplantation Surgery, University Hospital of Cologne, Cologne, Germany
- Allan Tsung
- Department of Surgical Oncology, The Ohio State University Comprehensive Cancer Center, Columbus, OH, USA
- Pierre-Alain Clavien
- Department of Surgery and Transplantation, University Hospital of Zurich, Zurich, Switzerland
- Ido Nachmany
- Department of "Surgery B", Tel Aviv Sourasky Medical Center, Tel Aviv & The Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Bjørn Edwin
- The Intervention Centre and Department of HPB Surgery, Oslo University Hospital and Institute of Clinical Medicine, Oslo University, Oslo, Norway
- Johann Pratschke
- Department of Surgery, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Evgeny Solomonov
- Department of General and Hepato-Pancreatico-Biliary and Transplant Surgery, Ziv Medical Centre, Zefat (Safed), Israel
- Alfred Koenigsrainer
- Department of General, Visceral, Cancer and Surgery, University of Tuebingen, Tuebingen, Germany
65
The Application of Artificial Intelligence in Prostate Cancer Management—What Improvements Can Be Expected? A Systematic Review. Appl Sci (Basel) 2020. [DOI: 10.3390/app10186428]
Abstract
Artificial Intelligence (AI) is progressively remodeling our daily life. The large amount of information from "big data" now enables machines to perform predictions and improve our healthcare system. AI has the potential to reshape prostate cancer (PCa) management thanks to growing applications in the field. The purpose of this review is to provide a global overview of AI in PCa for urologists, pathologists, radiotherapists, and oncologists to consider future changes in their daily practice. A systematic review was performed based on the PubMed MEDLINE, Google Scholar, and DBLP databases for original studies published in English from January 2009 to January 2019 relevant to PCa, AI, Machine Learning, Artificial Neural Networks, Convolutional Neural Networks, and Natural-Language Processing. Only articles with accessible full text were considered. A total of 1008 articles were reviewed, and 48 articles were included. AI has potential applications in all fields of PCa management: analysis of genetic predispositions, and diagnosis in imaging and pathology to detect PCa or to differentiate between significant and non-significant PCa. AI also applies to PCa treatment, whether surgical intervention or radiotherapy, and to skills training and assessment, to improve treatment modalities and outcome prediction. AI in PCa management has the potential to play a useful role by predicting PCa more accurately, using a multiomic approach, and risk-stratifying patients to provide personalized medicine.
66
Sardari F, Paiement A, Hannuna S, Mirmehdi M. VI-Net - View-Invariant Quality of Human Movement Assessment. Sensors (Basel) 2020; 20:5258. [PMID: 32942561] [PMCID: PMC7570706] [DOI: 10.3390/s20185258]
Abstract
We propose a view-invariant method for assessing the quality of human movement that does not rely on skeleton data. Our end-to-end convolutional neural network consists of two stages: first, a view-invariant trajectory descriptor for each body joint is generated from RGB images, and then the collection of trajectories for all joints is processed by an adapted, pre-trained 2D convolutional neural network (CNN) (e.g., VGG-19 or ResNeXt-50) to learn the relationship amongst the different body parts and deliver a score for the movement quality. We release the only publicly available multi-view, non-skeleton, non-mocap rehabilitation movement dataset (QMAR) and provide results for both cross-subject and cross-view scenarios on this dataset. We show that VI-Net achieves an average rank correlation of 0.66 on cross-subject and 0.65 on unseen views when trained on only two views. We also evaluate the proposed method on the single-view rehabilitation dataset KIMORE and obtain a rank correlation of 0.66 against a baseline of 0.62.
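The evaluation metric quoted above (rank correlation between predicted and annotated movement-quality scores) reduces to a short computation; the score arrays below are placeholders standing in for model outputs and ground-truth annotations.

```python
# Sketch: Spearman rank correlation between predicted quality scores and
# ground-truth labels, the metric reported for VI-Net above.
from scipy.stats import spearmanr

predicted = [0.82, 0.41, 0.67, 0.15, 0.90, 0.55, 0.33, 0.74]
ground_truth = [0.80, 0.35, 0.70, 0.20, 0.95, 0.50, 0.40, 0.65]

rho, p_value = spearmanr(predicted, ground_truth)
print(f"rank correlation = {rho:.2f} (p = {p_value:.3f})")
```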
Affiliation(s)
- Faegheh Sardari
- Department of Computer Science, University of Bristol, Bristol BS8 1UB, UK
- Correspondence: Tel.: +44 (0)117 954 5139
- Adeline Paiement
- Université de Toulon, Aix Marseille Univ, CNRS, LIS, Marseille, France
- Sion Hannuna
- Department of Computer Science, University of Bristol, Bristol BS8 1UB, UK
- Majid Mirmehdi
- Department of Computer Science, University of Bristol, Bristol BS8 1UB, UK
67
Ferguson JM, Pitt B, Kuntz A, Granna J, Kavoussi NL, Nimmagadda N, Barth EJ, Herrell SD, Webster RJ. Comparing the accuracy of the da Vinci Xi and da Vinci Si for image guidance and automation. Int J Med Robot 2020; 16:1-10. [DOI: 10.1002/rcs.2149]
Affiliation(s)
- James M. Ferguson
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Bryn Pitt
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Alan Kuntz
- Robotics Center and School of Computing, University of Utah, Salt Lake City, Utah, USA
- Josephine Granna
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Nicholas L. Kavoussi
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Naren Nimmagadda
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Eric J. Barth
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Stanley Duke Herrell
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Robert J. Webster
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Vanderbilt University Medical Center, Nashville, Tennessee, USA
68
Close MF, Mehta CH, Liu Y, Isaac MJ, Costello MS, Kulbarsh KD, Meyer TA. Subjective vs Computerized Assessment of Surgeon Skill Level During Mastoidectomy. Otolaryngol Head Neck Surg 2020; 163:1255-1257. [PMID: 32600121] [DOI: 10.1177/0194599820933882]
Abstract
This pilot study examines the use of surgical instrument tracking and motion analysis in objectively measuring surgical performance. Accuracy of objective measures in distinguishing between surgeons of different levels was compared to that of subjective assessments. Twenty-four intraoperative video clips of mastoidectomies performed by junior residents (n = 12), senior residents (n = 8), and faculty (n = 4) were sent to otolaryngology programs via survey, yielding 708 subjective ratings of surgical experience level. Tracking software captured the total distance traveled by the drill, suction irrigator, and patient's head. Measurements were used to predict surgeon level of training, and accuracy was estimated via area under the curve (AUC) of receiver operating characteristic curves. Key objective metrics proved more accurate than subjective evaluations in determining both faculty vs resident level and senior vs junior resident level. The findings of this study suggest that objective analysis using computer software has the potential to improve the accuracy of surgical skill assessment.
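The AUC-based accuracy estimate described above can be sketched as follows; the tracked-distance values are synthetic stand-ins for the study's measurements, and roc_auc_score is one standard way to compute the area under the ROC curve.

```python
# Sketch: AUC of an objective metric (e.g., total drill distance) for
# predicting faculty vs resident status. Data are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
is_faculty = np.array([0] * 20 + [1] * 4)  # 20 residents, 4 faculty clips
# Assume faculty cover more drill distance per clip (illustrative only).
drill_distance = np.where(is_faculty,
                          rng.normal(9, 2, 24),
                          rng.normal(6, 2, 24))

auc = roc_auc_score(is_faculty, drill_distance)
print(f"AUC = {auc:.2f}")
```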
Affiliation(s)
- Michaela F Close
- Medical University of South Carolina, Charleston, South Carolina, USA
- Charmee H Mehta
- Medical University of South Carolina, Charleston, South Carolina, USA
- Yuan Liu
- Medical University of South Carolina, Charleston, South Carolina, USA
- Mitchell J Isaac
- Medical University of South Carolina, Charleston, South Carolina, USA
- Mark S Costello
- Medical University of South Carolina, Charleston, South Carolina, USA
- Kyle D Kulbarsh
- Medical University of South Carolina, Charleston, South Carolina, USA
- Ted A Meyer
- Medical University of South Carolina, Charleston, South Carolina, USA
69
Zhang D, Wu Z, Chen J, Gao A, Chen X, Li P, Wang Z, Yang G, Lo B, Yang GZ. Automatic Microsurgical Skill Assessment Based on Cross-Domain Transfer Learning. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2989075]
70
Xiao X, Zhao S, Zhang X, Soghier L, Hahn J. Automated Assessment of Neonatal Endotracheal Intubation Measured by a Virtual Reality Simulation System. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:2429-2433. [PMID: 33018497] [PMCID: PMC7538655] [DOI: 10.1109/embc44109.2020.9176629]
Abstract
Manual assessment by experts in neonatal endotracheal intubation (ETI) training is a time-consuming and tedious process. Such a subjective, highly variable, and resource-intensive assessment method may not only introduce inter-rater/intra-rater variability but also represent a serious limitation in many large-scale training programs. Moreover, poor visualization during the procedure prevents instructors from observing the events occurring within the manikin or the patient, which introduces an additional source of error into the assessment. In this paper, we propose a physics-based virtual reality (VR) ETI simulation system that captures the entire motion of the laryngoscope and the endotracheal tube (ETT) in relation to the internal anatomy of the virtual patient. Our system provides a complete visualization of the procedure, offering instructors comprehensive information for accurate assessment. More importantly, an interpretable machine learning algorithm was developed to automatically assess ETI performance by training on performance parameters extracted from the motions and on scores rated by experts. Our results show that the leave-one-out cross-validation (LOOCV) classification accuracy of the automated assessment algorithm is 80%, which indicates that our system can reliably conduct consistent and standardized assessment for ETI training.
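The LOOCV accuracy figure quoted above corresponds to the following generic pattern; the classifier and synthetic features are assumptions, not the authors' interpretable model.

```python
# Sketch: leave-one-out cross-validation of a skill classifier over
# performance parameters. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(30, 6))  # performance parameters per trainee attempt
y = (X[:, :3].sum(axis=1) + rng.normal(0, 0.8, 30) > 0).astype(int)

# Each fold holds out exactly one attempt, trains on the rest, and scores it.
scores = cross_val_score(LogisticRegression(), X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.2f}")
```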
71
Jin P, Ji X, Kang W, Li Y, Liu H, Ma F, Ma S, Hu H, Li W, Tian Y. Artificial intelligence in gastric cancer: a systematic review. J Cancer Res Clin Oncol 2020; 146:2339-2350. [PMID: 32613386] [DOI: 10.1007/s00432-020-03304-9]
Abstract
OBJECTIVE This study aims to systematically review the application of artificial intelligence (AI) techniques in gastric cancer and to discuss the potential limitations and future directions of AI in gastric cancer. METHODS A systematic review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. PubMed, EMBASE, the Web of Science, and the Cochrane Library were searched for gastric cancer publications with an emphasis on AI published up to June 2020. The terms "artificial intelligence" and "gastric cancer" were used to search for the publications. RESULTS A total of 64 articles were included in this review. In gastric cancer, AI is mainly used for molecular bio-information analysis and for endoscopic detection of Helicobacter pylori infection, chronic atrophic gastritis, early gastric cancer, invasion depth, and pathology recognition. AI may also be used to establish predictive models for evaluating lymph node metastasis, response to drug treatments, and prognosis. In addition, AI can be used for surgical training, skill assessment, and surgery guidance. CONCLUSIONS In the foreseeable future, AI applications can play an important role in gastric cancer management in the era of precision medicine.
Affiliation(s)
- Peng Jin
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
- Xiaoyan Ji
- Department of Emergency Ward, First Teaching Hospital of Tianjin University of Traditional Chinese Medicine, Tianjin, 300193, China
- Wenzhe Kang
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
- Yang Li
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
- Hao Liu
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
- Fuhai Ma
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
- Shuai Ma
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
- Haitao Hu
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
- Weikun Li
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China
- Yantao Tian
- Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17, Panjiayuan Nanli, Chaoyang District, Beijing, 100021, China.
72
Gorantla KR, Esfahani ET. Surgical Skill Assessment using Motor Control Features and Hidden Markov Model. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:5842-5845. [PMID: 31947180] [DOI: 10.1109/embc.2019.8857629]
Abstract
Surgical skill assessment has attracted increasing interest as a way to provide training and objective feedback to surgeons based on task performance. In this paper, motor control features, which are part of psychomotor learning, are developed from the camera-plane coordinates of the tool tips in videos of surgeons performing the Urethro-Vesicle Anastomosis (UVA) surgical task. Classification into Novices (N) and Experts (E), when compared to the manual encoding of subject expertise based on the Dreyfus model, resulted in high accuracy. Additionally, this study could form a basis for closed-loop surgical training, specifically for novice surgeons.
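A minimal sketch of the hidden Markov model component named in the title might look like the following, assuming per-group Gaussian HMMs over tool-tip motion features and likelihood-based classification; this is a generic construction using hmmlearn, not the paper's exact pipeline or features.

```python
# Sketch: fit one Gaussian HMM per expertise group on motion features and
# assign a test trial to the group with the higher log-likelihood.
# Feature choice, state count, and data are assumptions for illustration.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(6)
novice_feats = rng.normal(0, 1.0, (400, 2))  # per-step velocities, jerkier
expert_feats = rng.normal(0, 0.3, (400, 2))  # smoother motion

hmm_novice = GaussianHMM(n_components=3, covariance_type="diag",
                         n_iter=50, random_state=0).fit(novice_feats)
hmm_expert = GaussianHMM(n_components=3, covariance_type="diag",
                         n_iter=50, random_state=0).fit(expert_feats)

test = rng.normal(0, 0.35, (200, 2))  # unseen trial, expert-like smoothness
label = "expert" if hmm_expert.score(test) > hmm_novice.score(test) else "novice"
print(f"classified as: {label}")
```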
73
74
Prince SW, Kang C, Simonelli J, Lee Y, Gerber MJ, Lim C, Chu K, Dutson EP, Tsao T. A robotic system for telementoring and training in laparoscopic surgery. Int J Med Robot 2019; 16:e2040. [DOI: 10.1002/rcs.2040]
Affiliation(s)
- Stephen W. Prince
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- Christopher Kang
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- James Simonelli
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- Yu-Hsiu Lee
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- Matthew J. Gerber
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- Christopher Lim
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- Kevin Chu
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- Erik P. Dutson
- Center for Advanced Surgical and Interventional Technology, University of California, Los Angeles, California
- Tsu-Chin Tsao
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
75
Andras I, Mazzone E, van Leeuwen FWB, De Naeyer G, van Oosterom MN, Beato S, Buckle T, O'Sullivan S, van Leeuwen PJ, Beulens A, Crisan N, D'Hondt F, Schatteman P, van Der Poel H, Dell'Oglio P, Mottrie A. Artificial intelligence and robotics: a combination that is changing the operating room. World J Urol 2019; 38:2359-2366. [PMID: 31776737] [DOI: 10.1007/s00345-019-03037-6]
Abstract
PURPOSE The aim of this narrative review was to summarize the available evidence in the literature on artificial intelligence (AI) methods that have been applied during robotic surgery. METHODS A narrative review of the literature was performed in the MEDLINE/PubMed and Scopus databases on the topics of artificial intelligence, autonomous surgery, machine learning, robotic surgery, and surgical navigation, focusing on articles published between January 2015 and June 2019. All available evidence was analyzed and summarized herein after an interactive peer-review process by the panel. LITERATURE REVIEW The preliminary results of the implementation of AI in the clinical setting are encouraging. By providing a readout of the full telemetry and a sophisticated viewing console, robot-assisted surgery can be used to study and refine the application of AI in surgical practice. Machine learning approaches strengthen feedback regarding surgical skills acquisition, the efficiency of the surgical process, surgical guidance, and the prediction of postoperative outcomes. Tension sensors on the robotic arms and the integration of augmented reality methods can help enhance the surgical experience and monitor organ movements. CONCLUSIONS The use of AI in robotic surgery is expected to have a significant impact on future surgical training as well as to enhance the surgical experience during a procedure. Both aim to realize precision surgery and thus to increase the quality of surgical care. Implementation of AI in master-slave robotic surgery may allow for the careful, step-by-step consideration of autonomous robotic surgery.
Affiliation(s)
- Iulia Andras: ORSI Academy, Melle, Belgium; Department of Urology, Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca, Romania
- Elio Mazzone: ORSI Academy, Melle, Belgium; Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium; Department of Urology and Division of Experimental Oncology, URI, Urological Research Institute, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Fijs W B van Leeuwen: ORSI Academy, Melle, Belgium; Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Centre, Leiden, The Netherlands; Department of Urology, Antoni van Leeuwenhoek Hospital, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Geert De Naeyer: ORSI Academy, Melle, Belgium; Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium
- Matthias N van Oosterom: Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Centre, Leiden, The Netherlands; Department of Urology, Antoni van Leeuwenhoek Hospital, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Tessa Buckle: Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Centre, Leiden, The Netherlands
- Shane O'Sullivan: Department of Pathology, Faculdade de Medicina, Universidade de São Paulo, São Paulo, Brazil
- Pim J van Leeuwen: Department of Urology, Antoni van Leeuwenhoek Hospital, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Alexander Beulens: Department of Urology, Catharina Hospital, Eindhoven, The Netherlands; Netherlands Institute for Health Services Research (NIVEL), Utrecht, The Netherlands
- Nicolae Crisan: Department of Urology, Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca, Romania
- Frederiek D'Hondt: ORSI Academy, Melle, Belgium; Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium
- Peter Schatteman: ORSI Academy, Melle, Belgium; Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium
- Henk van der Poel: Department of Urology, Antoni van Leeuwenhoek Hospital, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Paolo Dell'Oglio: ORSI Academy, Melle, Belgium; Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium; Interventional Molecular Imaging Laboratory, Department of Radiology, Leiden University Medical Centre, Leiden, The Netherlands; Department of Urology, Antoni van Leeuwenhoek Hospital, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Alexandre Mottrie: ORSI Academy, Melle, Belgium; Department of Urology, Onze Lieve Vrouw Hospital, Aalst, Belgium
76
Lei Q, Du JX, Zhang HB, Ye S, Chen DS. A survey of vision-based human action evaluation methods. Sensors (Basel) 2019; 19:4129. [PMID: 31554229 PMCID: PMC6806217 DOI: 10.3390/s19194129]
Abstract
The field of human activity analysis has recently begun to diversify. Many researchers have focused on developing action recognition or action prediction methods. Research on human action evaluation differs in that it aims to design computational models and evaluation approaches for automatically assessing the quality of human actions. This line of study has become popular because of its rapidly emerging real-world applications, such as physical rehabilitation, assisted living for elderly people, skill training on self-learning platforms, and sports activity scoring. This paper presents a comprehensive survey of approaches and techniques in action evaluation research, including motion detection and preprocessing using skeleton data, handcrafted feature representation methods, and deep learning-based feature representation methods. The benchmark datasets from this research field and the evaluation criteria employed to validate the algorithms' performance are introduced. Finally, several promising directions for future research are discussed.
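To make the handcrafted-feature family surveyed here concrete, the following is a minimal sketch that computes two classic motion-quality indicators, path length and an amplitude-normalized dimensionless jerk, from a single joint trajectory. The 30 Hz frame rate and the synthetic wrist track are assumptions for illustration, not details from the paper.

```python
import numpy as np

def path_length(traj):
    # traj: (T, 3) array of joint positions in metres
    return np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))

def dimensionless_jerk(traj, fs):
    # Amplitude-normalised jerk; lower values indicate smoother motion.
    dt = 1.0 / fs
    vel = np.gradient(traj, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    duration = (traj.shape[0] - 1) * dt
    amp = path_length(traj)
    return np.trapz(np.sum(jerk ** 2, axis=1), dx=dt) * duration ** 5 / amp ** 2

if __name__ == "__main__":
    fs = 30.0                                   # assumed camera frame rate (Hz)
    t = np.linspace(0, 2, int(2 * fs))
    wrist = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)  # synthetic track
    print(f"path length: {path_length(wrist):.3f} m")
    print(f"dimensionless jerk: {dimensionless_jerk(wrist, fs):.1f}")
```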
Affiliation(s)
- Qing Lei, Ji-Xiang Du, Hong-Bo Zhang, Shuang Ye, Duan-Sheng Chen: Department of Computer Science and Technology, Huaqiao University, Xiamen 361000, China; Xiamen Key Laboratory of Computer Vision and Pattern Recognition, Huaqiao University, Xiamen 361000, China
77
Real-time surgical needle detection using region-based convolutional neural networks. Int J Comput Assist Radiol Surg 2019; 15:41-47. [PMID: 31422553 DOI: 10.1007/s11548-019-02050-9]
Abstract
OBJECTIVE Conventional surgical assistance and skill analysis for suturing mostly focus on the motions of the tools. Because the quality of a suture is determined by needle motions relative to the tissues, knowledge of the needle motion would be useful for surgical assistance and skill analysis. As a first step toward demonstrating this usefulness, we developed a needle detection algorithm. METHODS Owing to the small size of the needle, attaching sensors to it is difficult. We therefore developed a real-time, video-based needle detection algorithm using a region-based convolutional neural network. RESULTS Our method detected the needle with an average precision of 89.2%. The needle was robustly detected even when it was heavily occluded by the tools and/or blood vessels during microvascular anastomosis. There were, however, some incorrect detections, including partial detections. CONCLUSION To the best of our knowledge, this is the first time deep neural networks have been applied to real-time needle detection. In future work, we will develop a needle pose estimation algorithm using the predicted needle location, toward computer-aided surgical assistance and surgical skill analysis.
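The paper does not publish an implementation; the sketch below shows how a region-based detector of this kind can be assembled from torchvision's Faster R-CNN, re-headed for a single foreground class ("needle"). The training loop and dataset are omitted, and the 0.5 confidence threshold is an assumption.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a COCO-pretrained region-based CNN and re-head it for one
# foreground class; class 0 is reserved for background. Without fine-tuning
# on annotated needle frames, the detections below are not yet meaningful.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

model.eval()
with torch.no_grad():
    frame = torch.rand(3, 480, 640)        # stand-in for one video frame
    detections = model([frame])[0]         # dict of boxes, labels, scores
    keep = detections["scores"] > 0.5      # simple confidence threshold
    print(detections["boxes"][keep])
```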
78
Funke I, Mees ST, Weitz J, Speidel S. Video-based surgical skill assessment using 3D convolutional neural networks. Int J Comput Assist Radiol Surg 2019; 14:1217-1225. [PMID: 31104257 DOI: 10.1007/s11548-019-01995-1]
Abstract
PURPOSE Thorough education of novice surgeons is crucial to ensure that surgical interventions are effective and safe. One important aspect is the teaching of technical skills for minimally invasive or robot-assisted procedures, which includes objective and, preferably, automatic assessment of surgical skill. Recent studies have reported good results for automatic, objective skill evaluation based on motion data such as trajectories of surgical instruments. However, obtaining motion data generally requires additional equipment for instrument tracking or access to a robotic surgery system that captures kinematic data. In contrast, we investigate a method for automatic, objective skill assessment that requires only video data, which can be collected effortlessly during minimally invasive and robot-assisted training scenarios. METHODS Our method builds on recent advances in deep learning-based video classification. Specifically, we use an inflated 3D ConvNet to classify snippets, i.e., stacks of a few consecutive frames, extracted from surgical video. The network is extended into a temporal segment network during training. RESULTS We evaluate the method on the publicly available JIGSAWS dataset, which consists of recordings of basic robot-assisted surgery tasks performed on a dry-lab bench-top model. Our approach achieves high skill classification accuracies, ranging from 95.1% to 100.0%. CONCLUSIONS Our results demonstrate the feasibility of deep learning-based assessment of technical skill from surgical video. Notably, the 3D ConvNet learns meaningful patterns directly from the data, alleviating the need for manual feature engineering. Further evaluation will require more annotated data for training and testing.
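As a rough stand-in for the inflated 3D ConvNet used in the paper, the sketch below classifies a 16-frame video snippet with torchvision's r3d_18, re-headed for three skill classes. The snippet size and backbone are assumptions; in the original work, snippet-level predictions are further aggregated by a temporal segment network during training.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

NUM_SKILL_CLASSES = 3  # e.g., novice / intermediate / expert

# A 3D ConvNet backbone re-headed for skill classification. The paper uses
# an inflated 3D ConvNet (I3D); r3d_18 is a convenient, smaller stand-in.
model = r3d_18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_SKILL_CLASSES)

snippet = torch.rand(1, 3, 16, 112, 112)   # (batch, channels, frames, H, W)
logits = model(snippet)
print(logits.softmax(dim=1))               # per-class skill probabilities
```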
Affiliation(s)
- Isabel Funke: Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Partner Site Dresden, Dresden, Germany
- Sören Torge Mees: Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TU Dresden, Dresden, Germany
- Jürgen Weitz: Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TU Dresden, Dresden, Germany
- Stefanie Speidel: Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Partner Site Dresden, Dresden, Germany
79
Holden MS, Xia S, Lia H, Keri Z, Bell C, Patterson L, Ungi T, Fichtinger G. Machine learning methods for automated technical skills assessment with instructional feedback in ultrasound-guided interventions. Int J Comput Assist Radiol Surg 2019; 14:1993-2003. [PMID: 31006107 DOI: 10.1007/s11548-019-01977-3]
Abstract
OBJECTIVE There is currently a worldwide shift toward competency-based medical education, which necessitates automated skills assessment during self-guided interventions training. Assessment methods that are transparent and configurable allow their outputs to be translated into instructional feedback. The purpose of this work is to develop and validate transparent, configurable skills assessment methods for ultrasound-guided interventions. METHODS We implemented two methods for technical skills assessment, one based on decision trees and one based on fuzzy inference systems. We then validated their ability to predict operators' scores on a 25-point global rating scale for ultrasound-guided needle insertions and to provide feedback that is useful for training. RESULTS Decision tree and fuzzy rule-based assessment performed comparably to state-of-the-art assessment methods, producing median errors (on the 25-point scale) of 1.7 and 1.8 for in-plane insertions and 1.5 and 3.0 for out-of-plane insertions, respectively. In addition, these methods provided feedback that was useful for trainee learning: decision tree assessment produced feedback with a median usefulness of 7 out of 7, and fuzzy rule-based assessment a median usefulness of 6 out of 7. CONCLUSION Transparent and configurable assessment methods are comparable to the state of the art and can, in addition, provide useful feedback, demonstrating their value in self-guided interventions training curricula.
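A minimal sketch of the decision-tree half of this approach, using scikit-learn. The three motion metrics, the synthetic scores, and the tree depth are assumptions for illustration; the printed tree shows why such models are easy to read back to a trainee as instructional feedback.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical per-trial motion metrics standing in for the study's
# tracked-needle features; scores are synthetic, on a 25-point scale.
rng = np.random.default_rng(0)
X = rng.uniform(size=(60, 3))                          # 60 training insertions
y = np.clip(25 - 15 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 1, 60), 1, 25)

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)

# Transparency is the point: every split of the fitted tree can be
# inspected and turned into a concrete instruction.
print(export_text(tree, feature_names=["path_length", "total_time", "tremor"]))
print("predicted global rating:", tree.predict([[0.4, 0.3, 0.2]])[0])
```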
Affiliation(s)
- Matthew S Holden: Laboratory for Percutaneous Surgery, School of Computing, Queen's University, Kingston, ON, Canada
- Sean Xia: Laboratory for Percutaneous Surgery, School of Computing, Queen's University, Kingston, ON, Canada
- Hillary Lia: Laboratory for Percutaneous Surgery, School of Computing, Queen's University, Kingston, ON, Canada
- Zsuzsanna Keri: Laboratory for Percutaneous Surgery, School of Computing, Queen's University, Kingston, ON, Canada
- Colin Bell: Department of Emergency Medicine, School of Medicine, Queen's University, Kingston, ON, Canada
- Lindsey Patterson: Department of Anesthesiology and Perioperative Medicine, School of Medicine, Queen's University, Kingston, ON, Canada
- Tamas Ungi: Laboratory for Percutaneous Surgery, School of Computing, Queen's University, Kingston, ON, Canada
- Gabor Fichtinger: Laboratory for Percutaneous Surgery, School of Computing, Queen's University, Kingston, ON, Canada
80
Zhang D, Xiao B, Huang B, Zhang L, Liu J, Yang GZ. A self-adaptive motion scaling framework for surgical robot remote control. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2018.2890200]
81
Kowalewski KF, Garrow CR, Schmidt MW, Benner L, Müller-Stich BP, Nickel F. Sensor-based machine learning for workflow detection and as key to detect expert level in laparoscopic suturing and knot-tying. Surg Endosc 2019; 33:3732-3740. [DOI: 10.1007/s00464-019-06667-4]
82
Ershad M, Rege R, Majewicz Fey A. Automatic and near real-time stylistic behavior assessment in robotic surgery. Int J Comput Assist Radiol Surg 2019; 14:635-643. [PMID: 30779023 DOI: 10.1007/s11548-019-01920-6]
Abstract
PURPOSE Automatic skill evaluation is of great importance in surgical robotic training. Extensive research has been done on surgical skill evaluation, and a variety of quantitative metrics have been proposed. However, these methods primarily use expert-selected features, which may not capture latent information in the movement data. In addition, such features are calculated over the entire task and reported to the user only after the task is complete, so they do not tell users how to modify their movements to improve performance in real time. This study focuses on automatic recognition of stylistic behavior that has the potential to be implemented in near real time. METHODS We propose a sparse coding framework for automatic stylistic behavior recognition in short time intervals, using only position data from the hands, wrist, elbow, and shoulder. A codebook is built for each stylistic adjective from the positive and negative labels provided for each trial through crowdsourcing. Sparse code coefficients are obtained for short time intervals (0.25 s) in a trial using this codebook, and a support vector machine classifier is trained and validated through tenfold cross-validation on the sparse codes from the training set. RESULTS The results indicate that the proposed dictionary learning method assesses stylistic behavior in near real time from the user's joint position data, with improved accuracy compared to PCA features or raw data. CONCLUSION The ability to automatically evaluate a trainee's style of movement in short time intervals could provide the user with online, customized feedback and thus improve performance during surgical tasks.
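A minimal sketch of the sparse-coding pipeline described above, under assumed data: random stand-ins for the 0.25 s windows of joint positions and for the crowd-sourced style labels. Only the overall shape of the method (codebook learning, sparse codes, SVM) follows the abstract; the sampling rate, codebook size, and sparsity penalty are assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs = 100                                    # assumed tracker sampling rate (Hz)
win = int(0.25 * fs)                        # 0.25 s windows, as in the paper

# Stand-in for flattened joint-position windows: 4 joints x 3 coordinates
# (hand, wrist, elbow, shoulder), plus stand-in binary style labels.
X = rng.normal(size=(200, win * 12))
y = rng.integers(0, 2, size=200)

# Learn a codebook, then encode each window as sparse coefficients.
dico = DictionaryLearning(n_components=32, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, random_state=0)
codes = dico.fit_transform(X)

clf = SVC(kernel="linear").fit(codes, y)    # per-window style classifier
print("training accuracy:", clf.score(codes, y))
```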
Affiliation(s)
- M Ershad: Department of Electrical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA
- R Rege: Department of Surgery, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Ann Majewicz Fey: Department of Surgery, UT Southwestern Medical Center, Dallas, TX 75390, USA; Department of Mechanical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA
83
Wang Z, Majewicz Fey A. Deep learning with convolutional neural network for objective skill evaluation in robot-assisted surgery. Int J Comput Assist Radiol Surg 2018; 13:1959-1970. [PMID: 30255463 DOI: 10.1007/s11548-018-1860-1]
Abstract
PURPOSE With the advent of robot-assisted surgery, data-driven approaches that integrate statistics and machine learning are growing rapidly, with prominent interest in objective surgical skill assessment. However, most existing work requires translating robot motion kinematics into intermediate features or gesture segments that are expensive to extract, lack efficiency, and require significant domain-specific knowledge. METHODS We propose an analytical deep learning framework for skill assessment in surgical training. A deep convolutional neural network is implemented to map multivariate time series of motion kinematics to individual skill levels. RESULTS We performed experiments on the public minimally invasive surgical robotic dataset JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Our model achieved competitive accuracies of 92.5%, 95.4%, and 91.3% in the standard training tasks Suturing, Needle-passing, and Knot-tying, respectively. Without engineered features or carefully tuned gesture segmentation, the model successfully decodes skill information from raw motion profiles via end-to-end learning. Moreover, it can reliably interpret skills within a 1-3 second window, without observing an entire training trial. CONCLUSION This study highlights the potential of deep architectures for efficient online skill assessment in modern surgical training.
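A minimal sketch, not the authors' architecture: a small 1D ConvNet that maps a window of multivariate kinematics directly to skill-level logits, with no hand-engineered features. The 76 input channels match the JIGSAWS kinematic dimensionality; the window length and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class SkillNet(nn.Module):
    # Raw multivariate kinematics in, skill-level logits out.
    def __init__(self, channels=76, classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),   # pool over time
            nn.Linear(64, classes),
        )

    def forward(self, x):              # x: (batch, channels, time)
        return self.net(x)

window = torch.rand(1, 76, 90)         # roughly 3 s of 30 Hz kinematics
print(SkillNet()(window).softmax(dim=1))
```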
Affiliation(s)
- Ziheng Wang: Department of Mechanical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA
- Ann Majewicz Fey: Department of Mechanical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA; Department of Surgery, UT Southwestern Medical Center, Dallas, TX 75390, USA
84
Forestier G, Petitjean F, Senin P, Despinoy F, Huaulmé A, Fawaz HI, Weber J, Idoumghar L, Muller PA, Jannin P. Surgical motion analysis using discriminative interpretable patterns. Artif Intell Med 2018; 91:3-11. [PMID: 30172445 DOI: 10.1016/j.artmed.2018.08.002]
Abstract
OBJECTIVE The analysis of surgical motion has received growing interest with the development of devices that allow its automatic capture. In this context, advanced surgical training systems make automated assessment of surgical trainees possible, and automatic, quantitative evaluation of surgical skills is an important step toward improving surgical patient care. MATERIAL AND METHOD In this paper, we present an approach for discovering and ranking discriminative, interpretable patterns of surgical practice from recordings of surgical motions. A pattern is defined as a series of actions or events in the kinematic data that together are distinctive of a specific gesture or skill level. Our approach decomposes continuous kinematic data into a set of overlapping gestures represented as strings (a bag of words), for which we compute a comparative numerical statistic (tf-idf) that enables discriminative gesture discovery via relative occurrence frequency. RESULTS We carried out experiments on three surgical motion datasets. The results show that the patterns identified by the proposed method can be used to accurately classify individual gestures, skill levels, and surgical interfaces. We also show how the patterns provide detailed feedback for trainee skill assessment. CONCLUSIONS The proposed approach is a useful addition to existing learning tools for surgery, as it provides feedback on which parts of an exercise were used to classify the attempt as correct or incorrect.
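A minimal sketch of the bag-of-words/tf-idf idea, under the assumption that each trial has already been discretized into a string of gesture symbols (e.g., via a SAX-style symbolic decomposition); the symbols and labels below are toy examples, not data from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Each trial is a string of gesture symbols; tf-idf surfaces the patterns
# that are distinctive of a skill level.
trials = [
    "g1 g2 g2 g5 g3",       # expert
    "g1 g2 g5 g3 g3",       # expert
    "g1 g4 g4 g4 g2 g6",    # novice: repeated g4 is the tell-tale pattern
    "g4 g4 g1 g4 g6 g2",    # novice
]

vec = TfidfVectorizer(ngram_range=(1, 2), token_pattern=r"\S+")
tfidf = vec.fit_transform(trials)

# Rank patterns by their average weight in the novice trials: candidates
# for the kind of interpretable feedback the paper describes.
novice_weights = tfidf[2:].toarray().mean(axis=0)
ranked = sorted(zip(novice_weights, vec.get_feature_names_out()), reverse=True)
for weight, pattern in ranked[:5]:
    print(f"{pattern!r}: {weight:.2f}")
```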
Affiliation(s)
- Germain Forestier: IRIMAS, Université de Haute-Alsace, Mulhouse, France; Faculty of Information Technology, Monash University, Melbourne, Australia
- François Petitjean: Faculty of Information Technology, Monash University, Melbourne, Australia
- Pavel Senin: Los Alamos National Laboratory, University of Hawai'i at Mānoa, United States
- Fabien Despinoy: Univ Rennes, Inserm, LTSI - UMR_S 1099, F-35000 Rennes, France
- Arnaud Huaulmé: Univ Rennes, Inserm, LTSI - UMR_S 1099, F-35000 Rennes, France
- Pierre Jannin: Univ Rennes, Inserm, LTSI - UMR_S 1099, F-35000 Rennes, France
85
Moglia A. Automated, objective and predictive evaluation of technical skills in robot-assisted surgery. J Robot Surg 2018; 13:189-190. [PMID: 29873022 DOI: 10.1007/s11701-018-0833-2]
Affiliation(s)
- Andrea Moglia: EndoCAS, Center for Computer Assisted Surgery, University of Pisa, Edificio 102, via Paradisa 2, 56124 Pisa, Italy
86
Ismail Fawaz H, Forestier G, Weber J, Idoumghar L, Muller PA. Evaluating surgical skills from kinematic data using convolutional neural networks. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018; 2018. [DOI: 10.1007/978-3-030-00937-3_25]