1
Rasheed B, Bjelland Ø, Dalen AF, Schaarschmidt U, Schaathun HG, Pedersen MD, Steinert M, Bye RT. Intraoperative identification of patient-specific elastic modulus of the meniscus during arthroscopy. Comput Methods Programs Biomed 2024; 254:108269. PMID: 38861877. DOI: 10.1016/j.cmpb.2024.108269.
Abstract
BACKGROUND AND OBJECTIVE: Degenerative meniscus tissue has been associated with a lower elastic modulus and can lead to the development of arthrosis. Safe intraoperative measurement of the in vivo elastic modulus of the human meniscus could contribute to a better understanding of meniscus health and to the development of surgical simulators in which novice surgeons learn to distinguish healthy from degenerative meniscus tissue. Such measurement can also support intraoperative decision-making by providing a quantitative measure of meniscus health. The objective of this study is to demonstrate a method for intraoperative identification of meniscus elastic modulus during arthroscopic probing using an adaptive observer.

METHODS: Ex vivo arthroscopic examinations were performed on five cadaveric knees to estimate the elastic modulus of the anterior, mid-body, and posterior regions of the lateral and medial menisci. Real-time intraoperative force-displacement data were obtained and used for modulus estimation with an adaptive observer. To validate the arthroscopic elastic moduli, an inverse parameter identification approach based on biomechanical indentation tests, finite element analyses, and optimization was employed. Experimental force-displacement data at various anatomical locations were measured by indentation. An iterative optimization algorithm tuned the elastic moduli and Poisson's ratios by comparing experimental force values at maximum displacement with the corresponding forces from linear elastic, region-specific finite element models. Finally, the elastic modulus estimates from ex vivo arthroscopy were compared against the optimized values using a paired t-test.

RESULTS: The elastic moduli obtained from ex vivo arthroscopy and optimization showed subject-specific material properties, and the results emphasized anatomical and regional specificity within the menisci. The anterior region of the medial menisci exhibited the highest elastic modulus among the anatomical locations studied (9.97 ± 3.20 MPa from arthroscopy and 5.05 ± 1.97 MPa from finite element-based inverse parameter identification). The paired t-test indicated no statistically significant difference between the elastic moduli obtained from arthroscopy and inverse parameter identification, suggesting the feasibility of stiffness estimation through arthroscopic examination.

CONCLUSIONS: This study has demonstrated the feasibility of intraoperative identification of a patient-specific elastic modulus for meniscus tissue during arthroscopy.
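The validation step compares two paired sets of modulus estimates (arthroscopy vs. inverse finite element identification) with a paired t-test. A minimal, self-contained sketch of the statistic; the modulus values below are illustrative, not the study's data:

```python
import math

def paired_t(a, b):
    """Paired t-test: returns (t statistic, degrees of freedom) for paired
    samples a and b. Significance would then be read from a t-distribution
    table or CDF; here only the statistic itself is computed."""
    assert len(a) == len(b) and len(a) > 1
    d = [x - y for x, y in zip(a, b)]                     # per-specimen differences
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)   # sample variance
    se = math.sqrt(var_d / n)                             # standard error of mean difference
    return mean_d / se, n - 1

# Hypothetical paired moduli (MPa): arthroscopy vs. inverse FE identification
arthroscopy = [9.9, 8.2, 11.3, 7.5, 10.1]
inverse_fe = [5.1, 4.8, 6.9, 3.9, 5.6]
t_stat, df = paired_t(arthroscopy, inverse_fe)
```

If |t| stays below the critical value for the given degrees of freedom, the null hypothesis of equal means is not rejected, which is the "no statistically significant difference" conclusion reported above.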
Affiliation(s)
- Bismi Rasheed
- Cyber-Physical Systems Laboratory, Department of ICT and Natural Sciences, Norwegian University of Science and Technology, Ålesund, 6025, Norway; Ålesund Biomechanics Lab, Department of Research and Innovation, Møre and Romsdal Hospital Trust, Ålesund, 6017, Norway
- Øystein Bjelland
- Cyber-Physical Systems Laboratory, Department of ICT and Natural Sciences, Norwegian University of Science and Technology, Ålesund, 6025, Norway; Ålesund Biomechanics Lab, Department of Research and Innovation, Møre and Romsdal Hospital Trust, Ålesund, 6017, Norway
- Andreas F Dalen
- Ålesund Biomechanics Lab, Department of Research and Innovation, Møre and Romsdal Hospital Trust, Ålesund, 6017, Norway; Department of Orthopaedic Surgery, Møre and Romsdal Hospital Trust, Ålesund, 6017, Norway
- Ute Schaarschmidt
- Cyber-Physical Systems Laboratory, Department of ICT and Natural Sciences, Norwegian University of Science and Technology, Ålesund, 6025, Norway
- Hans Georg Schaathun
- Cyber-Physical Systems Laboratory, Department of ICT and Natural Sciences, Norwegian University of Science and Technology, Ålesund, 6025, Norway
- Morten D Pedersen
- Department of Engineering Cybernetics, Norwegian University of Science and Technology, Trondheim, 7491, Norway
- Martin Steinert
- Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Trondheim, 7491, Norway
- Robin T Bye
- Cyber-Physical Systems Laboratory, Department of ICT and Natural Sciences, Norwegian University of Science and Technology, Ålesund, 6025, Norway
2
Yanik E, Schwaitzberg S, Yang G, Intes X, Norfleet J, Hackett M, De S. One-shot skill assessment in high-stakes domains with limited data via meta learning. Comput Biol Med 2024; 174:108470. PMID: 38636326. DOI: 10.1016/j.compbiomed.2024.108470.
Abstract
Deep learning (DL) has achieved robust competency assessment in various high-stakes fields. However, the applicability of DL models is often hampered by their substantial data requirements and confinement to specific training domains, which prevents them from transitioning to new tasks where data is scarce. Domain adaptation therefore emerges as a critical element for the practical implementation of DL in real-world scenarios. Herein, we introduce A-VBANet, a novel meta-learning model capable of delivering domain-agnostic skill assessment via one-shot learning. Our methodology has been tested by assessing surgical skills on five laparoscopic and robotic simulators and in real-life laparoscopic cholecystectomy. Our model successfully adapted with accuracies up to 99.5% in one-shot and 99.9% in few-shot settings for simulated tasks, and 89.7% for laparoscopic cholecystectomy. This study marks the first instance of a domain-agnostic methodology for skill assessment in critical fields, setting a precedent for the broad application of DL across diverse real-life domains with limited data.
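The abstract does not describe A-VBANet's internals, but the core one-shot idea can be illustrated generically: classify a query against a single labelled example per class, here with cosine similarity over feature vectors. All names and numbers below are hypothetical, not the paper's method:

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero feature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return dot / (norm_u * norm_v)

def one_shot_classify(support, query):
    """support: {label: feature_vector} with exactly ONE example per label.
    Returns the label whose single support example is most similar."""
    return max(support, key=lambda label: cosine(support[label], query))

# Hypothetical skill embeddings from some pretrained feature extractor
support = {"novice": [0.9, 0.1, 0.2], "expert": [0.1, 0.8, 0.7]}
label = one_shot_classify(support, [0.85, 0.2, 0.1])  # near the novice example
```

Meta-learning methods train the feature extractor so that this kind of single-example comparison transfers to domains never seen during training.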
Affiliation(s)
- Erim Yanik
- College of Engineering, Florida A&M University and the Florida State University, USA
- Gene Yang
- School of Medicine and Biomedical Sciences, University at Buffalo, USA
- Xavier Intes
- Biomedical Engineering Department, Rensselaer Polytechnic Institute, USA
- Jack Norfleet
- U.S. Army Combat Capabilities Development Command Soldier Center STTC, USA
- Matthew Hackett
- U.S. Army Combat Capabilities Development Command Soldier Center STTC, USA
- Suvranu De
- College of Engineering, Florida A&M University and the Florida State University, USA
3
Boal MWE, Anastasiou D, Tesfai F, Ghamrawi W, Mazomenos E, Curtis N, Collins JW, Sridhar A, Kelly J, Stoyanov D, Francis NK. Evaluation of objective tools and artificial intelligence in robotic surgery technical skills assessment: a systematic review. Br J Surg 2024; 111:znad331. PMID: 37951600. PMCID: PMC10771126. DOI: 10.1093/bjs/znad331.
Abstract
BACKGROUND: There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula.

METHODS: A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed, and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical skills, or laparoscopic or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool was used to analyse artificial intelligence (AI) studies, and the Modified Downs-Black checklist was used to assess risk of bias.

RESULTS: Two hundred and forty-seven studies were analysed, identifying 8 global rating scales, 26 procedure- or task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs, and 53 AI studies. The Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods, and 1 non-simulator APM reached LoR 2. AI models estimated outcomes (skill or clinical) with superior accuracy in the laboratory, where 60 per cent of methods reported accuracies over 90 per cent, compared with accuracies ranging from 67 to 100 per cent in real surgery.

CONCLUSIONS: Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.
Affiliation(s)
- Matthew W E Boal
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Dimitrios Anastasiou
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Freweini Tesfai
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK
- Walaa Ghamrawi
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Evangelos Mazomenos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Nathan Curtis
- Department of General Surgery, Dorset County Hospital NHS Foundation Trust, Dorchester, UK
- Justin W Collins
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Ashwin Sridhar
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- John Kelly
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), London, UK
- Computer Science, UCL, London, UK
- Nader K Francis
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Yeovil District Hospital, Somerset Foundation NHS Trust, Yeovil, Somerset, UK
4
Xu J, Anastasiou D, Booker J, Burton OE, Layard Horsfall H, Salvadores Fernandez C, Xue Y, Stoyanov D, Tiwari MK, Marcus HJ, Mazomenos EB. A Deep Learning Approach to Classify Surgical Skill in Microsurgery Using Force Data from a Novel Sensorised Surgical Glove. Sensors (Basel) 2023; 23:8947. PMID: 37960645. PMCID: PMC10650455. DOI: 10.3390/s23218947.
Abstract
Microsurgery serves as the foundation for numerous operative procedures. Given its highly technical nature, the assessment of surgical skill becomes an essential component of clinical practice and microsurgery education. The interaction forces between surgical tools and tissues play a pivotal role in surgical success, making them a valuable indicator of surgical skill. In this study, we employ six distinct deep learning architectures (LSTM, GRU, Bi-LSTM, CLDNN, TCN, Transformer) specifically designed for the classification of surgical skill levels. We use force data obtained from a novel sensorized surgical glove utilized during a microsurgical task. To enhance the performance of our models, we propose six data augmentation techniques. The proposed frameworks are accompanied by a comprehensive analysis, both quantitative and qualitative, including experiments conducted with two cross-validation schemes and interpretable visualizations of the network's decision-making process. Our experimental results show that CLDNN and TCN are the top-performing models, achieving impressive accuracy rates of 96.16% and 97.45%, respectively. This not only underscores the effectiveness of our proposed architectures, but also serves as compelling evidence that the force data obtained through the sensorized surgical glove contains valuable information regarding surgical skill.
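The abstract does not enumerate the six augmentation techniques, but two common time-series augmentations, jittering and magnitude scaling, give the flavour of how force traces can be expanded for training. The force values below are invented for illustration:

```python
import random

def jitter(series, sigma=0.05, seed=None):
    """Add zero-mean Gaussian noise to each force sample."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in series]

def magnitude_scale(series, factor):
    """Scale the whole force trace by a constant factor."""
    return [x * factor for x in series]

# Hypothetical glove force trace (N), sampled during a microsurgical task
force_trace = [0.10, 0.12, 0.15, 0.14, 0.11]
augmented = [jitter(force_trace, sigma=0.01, seed=0),
             magnitude_scale(force_trace, 1.1)]
```

Each augmented copy keeps the skill label of the original trace, which is what lets a small sensorized-glove dataset train deeper sequence models such as the TCN and CLDNN mentioned above.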
Affiliation(s)
- Jialang Xu
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- Dimitrios Anastasiou
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- James Booker
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Oliver E. Burton
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Hugo Layard Horsfall
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Carmen Salvadores Fernandez
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Nanoengineered Systems Laboratory, UCL Mechanical Engineering, University College London, London WC1E 7JE, UK
- Yang Xue
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Nanoengineered Systems Laboratory, UCL Mechanical Engineering, University College London, London WC1E 7JE, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Department of Computer Science, University College London, London WC1E 6BT, UK
- Manish K. Tiwari
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Nanoengineered Systems Laboratory, UCL Mechanical Engineering, University College London, London WC1E 7JE, UK
- Hani J. Marcus
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- Evangelos B. Mazomenos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
5
Pedrett R, Mascagni P, Beldi G, Padoy N, Lavanchy JL. Technical skill assessment in minimally invasive surgery using artificial intelligence: a systematic review. Surg Endosc 2023; 37:7412-7424. PMID: 37584774. PMCID: PMC10520175. DOI: 10.1007/s00464-023-10335-z.
Abstract
BACKGROUND: Technical skill assessment in surgery relies on expert opinion; it is therefore time-consuming, costly, and often lacks objectivity. Analysis of intraoperative data by artificial intelligence (AI) has the potential for automated technical skill assessment. The aim of this systematic review was to analyze the performance, external validity, and generalizability of AI models for technical skill assessment in minimally invasive surgery.

METHODS: A systematic search of Medline, Embase, Web of Science, and IEEE Xplore was performed to identify original articles reporting the use of AI in the assessment of technical skill in minimally invasive surgery. Risk of bias (RoB) and quality of the included studies were analyzed according to the Quality Assessment of Diagnostic Accuracy Studies criteria and the modified Joanna Briggs Institute checklists, respectively. Findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement.

RESULTS: In total, 1958 articles were identified; 50 met the eligibility criteria and were analyzed. Motion data extracted from surgical videos (n = 25) or kinematic data from robotic systems or sensors (n = 22) were the most frequent input data for AI. Most studies used deep learning (n = 34) and predicted technical skills using an ordinal assessment scale (n = 36), with good accuracies in simulated settings. However, all proposed models were in the development stage; only 4 studies were externally validated, and 8 showed a low RoB.

CONCLUSION: AI showed good performance in technical skill assessment in minimally invasive surgery. However, models often lacked external validity and generalizability. Models should therefore be benchmarked using predefined performance metrics and tested in clinical implementation studies.
Affiliation(s)
- Romina Pedrett
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Pietro Mascagni
- IHU Strasbourg, Strasbourg, France
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Guido Beldi
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Nicolas Padoy
- IHU Strasbourg, Strasbourg, France
- ICube, CNRS, University of Strasbourg, Strasbourg, France
- Joël L Lavanchy
- Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- IHU Strasbourg, Strasbourg, France
- University Digestive Health Care Center Basel - Clarunis, PO Box, 4002, Basel, Switzerland
6
Liu Z, Hitchcock DB, Singapogu RB. Cannulation Skill Assessment Using Functional Data Analysis. IEEE J Biomed Health Inform 2023; 27:4512-4523. PMID: 37310836. PMCID: PMC10519736. DOI: 10.1109/jbhi.2023.3283188.
Abstract
OBJECTIVE: A clinician's operative skill, the ability to safely and effectively perform a procedure, directly impacts patient outcomes and well-being. It is therefore necessary to accurately assess skill progression during medical training and to develop methods that train healthcare professionals most efficiently.

METHODS: In this study, we explore whether time-series needle-angle data recorded during cannulation on a simulator can be analyzed using functional data analysis methods to (1) identify skilled versus unskilled performance and (2) relate angle profiles to the degree of success of the procedure.

RESULTS: Our methods successfully differentiated between types of needle-angle profiles, and the identified profile types were associated with degrees of skilled and unskilled behavior. Furthermore, analysis of the types of variability in the dataset provided insight into the overall range of needle angles used as well as the rate of change of angle as cannulation progressed in time. Finally, cannulation angle profiles demonstrated an observable correlation with degree of cannulation success, a metric closely related to clinical outcome.

CONCLUSION: The methods presented here enable rich assessment of clinical skill because the functional (i.e., dynamic) nature of the data is duly considered.
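Functional data analysis treats each needle-angle time series as a curve. Two of the quantities discussed above, the mean angle profile across trials and a profile's rate of change, can be sketched with plain finite differences; the angle values below are made up for illustration:

```python
def mean_curve(profiles):
    """Pointwise mean across equally sampled angle profiles."""
    n = len(profiles)
    return [sum(p[i] for p in profiles) / n for i in range(len(profiles[0]))]

def rate_of_change(profile, dt=1.0):
    """First-difference approximation of the angle's time derivative."""
    return [(profile[i + 1] - profile[i]) / dt for i in range(len(profile) - 1)]

# Hypothetical needle-angle profiles (degrees) from three cannulation trials
trials = [[25, 22, 18, 15], [30, 26, 20, 16], [20, 18, 16, 14]]
mean_profile = mean_curve(trials)      # average insertion behaviour
velocity = rate_of_change(trials[0])   # how fast the angle changes over time
```

Full functional data analysis would additionally smooth each profile with a basis expansion and decompose variability with functional principal components; this sketch only shows the raw curve summaries.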
7
Liu Z, Bible J, Petersen L, Zhang Z, Roy-Chaudhury P, Singapogu R. Relating process and outcome metrics for meaningful and interpretable cannulation skill assessment: A machine learning paradigm. Comput Methods Programs Biomed 2023; 236:107429. PMID: 37119772. PMCID: PMC10291517. DOI: 10.1016/j.cmpb.2023.107429.
Abstract
BACKGROUND AND OBJECTIVES: The quality of healthcare delivery depends directly on the skills of clinicians. For patients on hemodialysis, medical errors or injuries caused during cannulation can lead to adverse outcomes, including potential death. To promote objective skill assessment and effective training, we present a machine learning approach that utilizes a highly sensorized cannulation simulator and a set of objective process and outcome metrics.

METHODS: In this study, 52 clinicians were recruited to perform a set of pre-defined cannulation tasks on the simulator. The feature space was constructed from force, motion, and infrared sensor data collected during task performance. Three machine learning models (support vector machine (SVM), support vector regression (SVR), and elastic net (EN)) were then constructed to relate the feature space to the objective outcome metrics. Our models utilize classification based on the conventional skill classification labels as well as a new method that represents skill on a continuum.

RESULTS: With less than 5% of trials misplaced by two classes, the SVM model was effective in predicting skill from the feature space. The SVR model effectively places both skill and outcome on a fine-grained continuum (rather than discrete divisions) that is representative of reality. As importantly, the elastic net model enabled the identification of a set of process metrics that strongly influence the outcomes of the cannulation task, including smoothness of motion, needle angles, and pinch forces.

CONCLUSIONS: The proposed cannulation simulator, paired with machine learning assessment, demonstrates definite advantages over current cannulation training practices. The methods presented here can be adopted to substantially increase the effectiveness of skill assessment and training, thereby potentially improving clinical outcomes of hemodialysis treatment.
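An elastic net, which the study uses to link process metrics to outcomes, can be sketched with plain coordinate descent and soft-thresholding. The metric names and data below are illustrative only, not the study's features:

```python
def soft_threshold(rho, l1):
    """Soft-thresholding operator used by the L1 part of the penalty."""
    if rho > l1:
        return rho - l1
    if rho < -l1:
        return rho + l1
    return 0.0

def elastic_net(X, y, l1=0.0, l2=0.0, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + l1*||b||_1 + (l2/2)*||b||^2.
    X is a list of feature rows, y the targets; no intercept (centred data)."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # residual excluding feature j's current contribution
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n + l2
            beta[j] = soft_threshold(rho, l1) / z
    return beta

# Hypothetical single process metric (motion smoothness) vs. an outcome score
X = [[1.0], [2.0], [3.0]]
y = [2.0, 4.0, 6.0]
beta_ols = elastic_net(X, y)           # l1 = l2 = 0 reduces to least squares
beta_reg = elastic_net(X, y, l1=0.5)   # the L1 penalty shrinks the coefficient
```

The L1 term drives uninformative coefficients exactly to zero, which is why the elastic net can point to a short list of process metrics (smoothness, needle angles, pinch forces) that drive outcomes.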
Affiliation(s)
- Zhanhe Liu
- Department of Bioengineering, Clemson University, 301 Rhodes Research Center, Clemson, 29634, SC, USA
- Joe Bible
- School of Mathematical and Statistical Sciences, Clemson University, O-110 Martin Hall, Clemson, 29634, SC, USA
- Lydia Petersen
- Department of Bioengineering, Clemson University, 301 Rhodes Research Center, Clemson, 29634, SC, USA
- Ziyang Zhang
- Department of Bioengineering, Clemson University, 301 Rhodes Research Center, Clemson, 29634, SC, USA
- Prabir Roy-Chaudhury
- UNC Kidney Center, University of North Carolina, Chapel Hill, NC, 28144, USA; (Bill Hefner) VA Medical Center, Salisbury, NC, 28144, USA
- Ravikiran Singapogu
- Department of Bioengineering, Clemson University, 301 Rhodes Research Center, Clemson, 29634, SC, USA
8
Pan M, Wang S, Li J, Li J, Yang X, Liang K. An Automated Skill Assessment Framework Based on Visual Motion Signals and a Deep Neural Network in Robot-Assisted Minimally Invasive Surgery. Sensors (Basel) 2023; 23:4496. PMID: 37177699. PMCID: PMC10181496. DOI: 10.3390/s23094496.
Abstract
Surgical skill assessment can quantify the quality of a surgical operation via the motion state of the surgical instrument tip (SIT) and is considered an effective means of improving the accuracy of surgical operations. Traditional methods have displayed promising results in skill assessment. However, this success is predicated on sensors at the SIT, making these approaches impractical for minimally invasive surgical robots with such tiny end sizes. To address the assessment of operation quality in robot-assisted minimally invasive surgery (RAMIS), this paper proposes a new automatic framework for assessing surgical skills based on visual motion tracking and deep learning, innovatively combining vision and kinematics. The kernel correlation filter (KCF) is introduced to obtain the key motion signals of the SIT, which are then classified using a residual neural network (ResNet), realizing automated skill assessment in RAMIS. To verify its effectiveness and accuracy, the proposed method is applied to the public minimally invasive surgical robot dataset JIGSAWS. The results show that a method based on visual motion tracking and a deep neural network can effectively and accurately assess the skill of robot-assisted surgery in near real time. With a computational processing time of 3 to 5 s, the method achieves average accuracies of 92.04% and 84.80% in distinguishing two and three skill levels, respectively. This study makes an important contribution to the safe and high-quality development of RAMIS.
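Once the instrument tip is tracked in video, skill-relevant motion signals can be derived from the 2-D trajectory. A minimal sketch with invented coordinates (the study's actual KCF tracker and ResNet classifier are not reproduced here):

```python
import math

def path_length(points):
    """Total distance travelled by the tracked instrument tip."""
    return sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))

def mean_speed(points, dt):
    """Average tip speed given a fixed sampling interval dt (seconds)."""
    return path_length(points) / (dt * (len(points) - 1))

# Hypothetical tip positions (pixels) from a video tracker, sampled at 30 Hz
tip = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
length = path_length(tip)        # total path in pixels
speed = mean_speed(tip, 1 / 30)  # pixels per second
```

Signals like these (path length, speed, and their time series) are the kind of kinematic input that a downstream classifier can map to skill levels.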
Affiliation(s)
- Mingzhang Pan
- College of Mechanical Engineering, Guangxi University, Nanning 530004, China
- State Key Laboratory for Conservation and Utilization of Subtropical Agro-Bioresources, Nanning 530004, China
- Shuo Wang
- College of Mechanical Engineering, Guangxi University, Nanning 530004, China
- Jingao Li
- College of Mechanical Engineering, Guangxi University, Nanning 530004, China
- Jing Li
- College of Mechanical Engineering, Guangxi University, Nanning 530004, China
- Xiuze Yang
- College of Mechanical Engineering, Guangxi University, Nanning 530004, China
- Ke Liang
- College of Mechanical Engineering, Guangxi University, Nanning 530004, China
- Guangxi Key Laboratory of Manufacturing System & Advanced Manufacturing Technology, School of Mechanical Engineering, Guangxi University, Nanning 530004, China
9
Srinivas S, Young AJ. Machine Learning and Artificial Intelligence in Surgical Research. Surg Clin North Am 2023; 103:299-316. PMID: 36948720. DOI: 10.1016/j.suc.2022.11.002.
Abstract
Machine learning, a subtype of artificial intelligence, is an emerging field of surgical research dedicated to predictive modeling. From its inception, machine learning has been of interest in medical and surgical research. Built on traditional research metrics for optimal success, avenues of research include diagnostics, prognosis, operative timing, and surgical education, across a variety of surgical subspecialties. Machine learning represents an exciting and developing future for surgical research that will allow for more personalized and comprehensive medical care.
Affiliation(s)
- Shruthi Srinivas
- Department of Surgery, The Ohio State University, 370 West 9th Avenue, Columbus, OH 43210, USA
- Andrew J Young
- Division of Trauma, Critical Care, and Burn, The Ohio State University, 181 Taylor Avenue, Suite 1102K, Columbus, OH 43203, USA
10
Automated Capture of Intraoperative Adverse Events Using Artificial Intelligence: A Systematic Review and Meta-Analysis. J Clin Med 2023; 12:1687. PMID: 36836223. PMCID: PMC9963108. DOI: 10.3390/jcm12041687.
Abstract
Intraoperative adverse events (iAEs) impact the outcomes of surgery, and yet are not routinely collected, graded, and reported. Advancements in artificial intelligence (AI) have the potential to power real-time, automatic detection of these events and disrupt the landscape of surgical safety through the prediction and mitigation of iAEs. We sought to understand the current implementation of AI in this space. A literature review was performed to PRISMA-DTA standards. Included articles were from all surgical specialties and reported the automatic identification of iAEs in real-time. Details on surgical specialty, adverse events, technology used for detecting iAEs, AI algorithm/validation, and reference standards/conventional parameters were extracted. A meta-analysis of algorithms with available data was conducted using a hierarchical summary receiver operating characteristic curve (ROC). The QUADAS-2 tool was used to assess the article risk of bias and clinical applicability. A total of 2982 studies were identified by searching PubMed, Scopus, Web of Science, and IEEE Xplore, with 13 articles included for data extraction. The AI algorithms detected bleeding (n = 7), vessel injury (n = 1), perfusion deficiencies (n = 1), thermal damage (n = 1), and EMG abnormalities (n = 1), among other iAEs. Nine of the thirteen articles described at least one validation method for the detection system; five explained using cross-validation and seven divided the dataset into training and validation cohorts. Meta-analysis showed the algorithms were both sensitive and specific across included iAEs (detection OR 14.74, CI 4.7-46.2). There was heterogeneity in reported outcome statistics and article bias risk. There is a need for standardization of iAE definitions, detection, and reporting to enhance surgical care for all patients. The heterogeneous applications of AI in the literature highlights the pluripotent nature of this technology. 
Applications of these algorithms across a breadth of urologic procedures should be investigated to assess the generalizability of these data.
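The pooled detection odds ratio quoted above (OR 14.74, CI 4.7-46.2) is the standard effect measure in diagnostic meta-analysis. As a minimal sketch (not the authors' analysis code; the cell counts below are hypothetical), a per-study diagnostic odds ratio with a log-normal confidence interval can be computed from a 2x2 confusion table:

```python
import math

def diagnostic_odds_ratio(tp, fp, fn, tn, z=1.96):
    """Diagnostic odds ratio with a 95% CI (log-normal approximation).

    Returns (OR, lower, upper). A 0.5 continuity correction is applied
    when any cell is zero, as is common in diagnostic meta-analysis.
    """
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    odds_ratio = (tp * tn) / (fp * fn)
    se_log = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    lower = math.exp(math.log(odds_ratio) - z * se_log)
    upper = math.exp(math.log(odds_ratio) + z * se_log)
    return odds_ratio, lower, upper
```

For example, a detector with 45 true positives, 5 false positives, 10 false negatives, and 40 true negatives yields OR = (45 * 40) / (5 * 10) = 36, with an asymmetric interval around it on the log scale.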
|
11
|
Video-based formative and summative assessment of surgical tasks using deep learning. Sci Rep 2023; 13:1038. [PMID: 36658186 PMCID: PMC9852463 DOI: 10.1038/s41598-022-26367-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2022] [Accepted: 12/13/2022] [Indexed: 01/20/2023] Open
Abstract
To ensure satisfactory clinical outcomes, surgical skill assessment must be objective, time-efficient, and preferably automated; none of these is currently achievable. Video-based assessment (VBA) is being deployed in intraoperative and simulation settings to evaluate technical skill execution. However, VBA is manual, time-intensive, and prone to subjective interpretation and poor inter-rater reliability. Herein, we propose a deep learning (DL) model that can automatically and objectively provide a high-stakes summative assessment of surgical skill execution based on video feeds, as well as a low-stakes formative assessment to guide surgical skill acquisition. Formative assessment is generated using heatmaps of visual features that correlate with surgical performance. Hence, the DL model paves the way for the quantitative and reproducible evaluation of surgical tasks from videos, with the potential for broad dissemination in surgical training, certification, and credentialing.
|
12
|
Ota R, Yamashita F. Application of machine learning techniques to the analysis and prediction of drug pharmacokinetics. J Control Release 2022; 352:961-969. [PMID: 36370876 DOI: 10.1016/j.jconrel.2022.11.014] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2022] [Revised: 10/23/2022] [Accepted: 11/07/2022] [Indexed: 11/17/2022]
Abstract
In this review, we describe the current status and challenges in applying machine-learning techniques to the analysis and prediction of pharmacokinetic data. The theory of pharmacokinetics has been developed over decades on the basis of physiology and reaction kinetics. Mathematical models allow the reduction of pharmacokinetic data to parameter values, giving insight and understanding into ADME processes and predicting the outcome of different dosing scenarios. However, much information hidden in the data is lost through conceptual simplification with models. It is difficult to use mechanistic models alone to predict diverse pharmacokinetic time profiles, including inter-drug and inter-individual differences, in a cross-sectional manner. Machine learning is a prediction platform that can handle complex phenomena through data-driven analysis. As a result, machine learning has been successfully adopted in various fields, including image recognition and language processing, and has been used for over two decades in pharmacokinetic research, primarily in the area of quantitative structure-activity relationships for pharmacokinetic parameters. Machine-learning models are generally known to provide better predictive performance than conventional linear models. Owing to the recent success in deep learning, models with new structures are being consistently proposed. These models include transfer learning and generative adversarial networks, which contribute to the effective use of a limited amount of data by diverting existing similar models or generating pseudo-data. How to make such newly emerging machine learning technologies applicable to meet challenges in the pharmacokinetics/pharmacodynamics field is now the key issue.
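The mechanistic models that the review contrasts with machine learning reduce concentration-time data to a handful of parameters. As a hedged, self-contained sketch (parameter values below are illustrative, not taken from the review), the classic one-compartment model with first-order oral absorption can be written directly:

```python
import math

def one_compartment_oral(dose, bioavailability, ka, ke, volume, t):
    """Plasma concentration after a single oral dose (Bateman equation)
    in a one-compartment model with first-order absorption rate ka and
    first-order elimination rate ke; assumes ka != ke."""
    coeff = (bioavailability * dose * ka) / (volume * (ka - ke))
    return coeff * (math.exp(-ke * t) - math.exp(-ka * t))

def t_max(ka, ke):
    """Time of peak concentration, which follows analytically."""
    return math.log(ka / ke) / (ka - ke)
```

A data-driven model would instead learn such profiles from observed concentration-time points, at the cost of the parameter-level interpretability this closed form provides.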
Affiliation(s)
- Ryosaku Ota
- Department of Drug Delivery Research, Graduate School of Pharmaceutical Sciences, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan
| | - Fumiyoshi Yamashita
- Department of Drug Delivery Research, Graduate School of Pharmaceutical Sciences, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan; Department of Applied Pharmacy and Pharmacokinetics, Graduate School of Pharmaceutical Sciences, Kyoto University, Sakyo-ku, Kyoto 606-8501, Japan.
| |
|
13
|
Wang WK, Chen I, Hershkovich L, Yang J, Shetty A, Singh G, Jiang Y, Kotla A, Shang JZ, Yerrabelli R, Roghanizad AR, Shandhi MMH, Dunn J. A Systematic Review of Time Series Classification Techniques Used in Biomedical Applications. SENSORS (BASEL, SWITZERLAND) 2022; 22:8016. [PMID: 36298367 PMCID: PMC9611376 DOI: 10.3390/s22208016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Revised: 09/23/2022] [Accepted: 10/17/2022] [Indexed: 05/06/2023]
Abstract
Background: Digital clinical measures collected via various digital sensing technologies such as smartphones, smartwatches, wearables, and ingestible and implantable sensors are increasingly used by individuals and clinicians to capture the health outcomes or behavioral and physiological characteristics of individuals. Time series classification (TSC) is very commonly used for modeling digital clinical measures. While deep learning models for TSC are very common and powerful, there exist some fundamental challenges. This review presents the non-deep learning models that are commonly used for time series classification in biomedical applications that can achieve high performance. Objective: We performed a systematic review to characterize the techniques that are used in time series classification of digital clinical measures throughout all the stages of data processing and model building. Methods: We conducted a literature search on PubMed, as well as the Institute of Electrical and Electronics Engineers (IEEE), Web of Science, and SCOPUS databases using a range of search terms to retrieve peer-reviewed articles that report on the academic research about digital clinical measures from a five-year period between June 2016 and June 2021. We identified and categorized the research studies based on the types of classification algorithms and sensor input types. Results: We found 452 papers in total from four different databases: PubMed, IEEE, Web of Science Database, and SCOPUS. After removing duplicates and irrelevant papers, 135 articles remained for detailed review and data extraction. Among these, engineered features using time series methods that were subsequently fed into widely used machine learning classifiers were the most commonly used technique, and also most frequently achieved the best performance metrics (77 out of 135 articles). 
Statistical modeling (24 out of 135 articles) algorithms were the second most common and the second-best-performing classification technique. Conclusions: In this review, time series classification models and interpretation methods for biomedical applications are summarized and categorized. While high time series classification performance has been achieved for digital clinical, physiological, or biomedical measures, no standard benchmark datasets, modeling methods, or reporting methodology exist. There is no single widely used method for time series model development or feature interpretation; however, many different methods have proven successful.
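The dominant pattern reported above, engineered time series features fed into a conventional classifier, can be sketched in a few lines. This is a toy illustration of the general technique, not the pipeline of any reviewed study:

```python
import statistics
from collections import defaultdict

def extract_features(series):
    """Simple engineered summary features for a univariate time series."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    return [
        statistics.fmean(series),                  # level
        statistics.pstdev(series),                 # variability
        max(series) - min(series),                 # range
        statistics.fmean(abs(d) for d in diffs),   # mean absolute change
    ]

def nearest_centroid_classify(train_features, train_labels, x):
    """Assign x to the class whose mean feature vector is closest."""
    groups = defaultdict(list)
    for f, y in zip(train_features, train_labels):
        groups[y].append(f)
    centroids = {y: [statistics.fmean(col) for col in zip(*fs)]
                 for y, fs in groups.items()}

    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    return min(centroids, key=lambda y: sq_dist(centroids[y], x))
```

In practice the reviewed studies swap in richer feature sets and widely used classifiers (e.g., random forests or SVMs); the structure, features first, then a generic classifier, is the same.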
Affiliation(s)
| | | | | | | | | | | | | | | | | | | | | | | | - Jessilyn Dunn
- Biomedical Engineering Department, Duke University, Durham, NC 27708, USA
| |
|
14
|
Hybrid Spatiotemporal Contrastive Representation Learning for Content-Based Surgical Video Retrieval. ELECTRONICS 2022. [DOI: 10.3390/electronics11091353] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
In the medical field, due to their economic and clinical benefits, there is a growing interest in minimally invasive surgeries and microscopic surgeries. These types of surgeries are often recorded during operations, and these recordings have become a key resource for education, patient disease analysis, surgical error analysis, and surgical skill assessment. However, manually searching through these collections of lengthy surgical videos is an extremely labor-intensive and time-consuming task, requiring an effective content-based video analysis system. Previous methods for surgical video retrieval are based on handcrafted features, which do not represent the video effectively. Deep learning-based solutions, on the other hand, have proven effective in both surgical image and video analysis, with CNN-, LSTM- and CNN-LSTM-based methods proposed for most surgical video analysis tasks. In this paper, we propose a hybrid spatiotemporal embedding method to enhance spatiotemporal representations using an adaptive fusion layer on top of the LSTM and temporal causal convolutional modules. To learn surgical video representations, we explore a supervised contrastive learning approach that leverages label information in addition to augmented versions. By validating our approach on a video retrieval task on two datasets, Surgical Actions 160 and Cataract-101, we significantly improve on previous results in terms of mean average precision, 30.012 ± 1.778 vs. 22.54 ± 1.557 for Surgical Actions 160 and 81.134 ± 1.28 vs. 33.18 ± 1.311 for Cataract-101. We also validate the proposed method's suitability for the surgical phase recognition task using the benchmark Cholec80 surgical dataset, where our approach outperforms the state of the art with 90.2% accuracy.
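Mean average precision, the retrieval metric reported above, rewards rankings that place relevant clips near the top. A minimal reference computation (the relevance flags below are toy data, not the paper's evaluation code):

```python
def average_precision(ranked_relevance):
    """AP for one query: ranked_relevance is a list of 0/1 relevance
    flags in rank order. Precision is accumulated at each hit."""
    hits, precisions = 0, []
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def mean_average_precision(per_query_relevance):
    """mAP: the mean of per-query average precisions."""
    return sum(average_precision(q) for q in per_query_relevance) / len(per_query_relevance)
```

For a query whose relevant items land at ranks 1 and 3, AP = (1/1 + 2/3) / 2 = 5/6; averaging such values over all queries gives the mAP figures quoted in the abstract.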
|
15
|
Lam K, Chen J, Wang Z, Iqbal FM, Darzi A, Lo B, Purkayastha S, Kinross JM. Machine learning for technical skill assessment in surgery: a systematic review. NPJ Digit Med 2022; 5:24. [PMID: 35241760 PMCID: PMC8894462 DOI: 10.1038/s41746-022-00566-0] [Citation(s) in RCA: 40] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Accepted: 01/21/2022] [Indexed: 12/18/2022] Open
Abstract
Accurate and objective performance assessment is essential for both trainees and certified surgeons. However, existing methods can be time consuming, labor intensive, and subject to bias. Machine learning (ML) has the potential to provide rapid, automated, and reproducible feedback without the need for expert reviewers. We aimed to systematically review the literature and determine the ML techniques used for technical surgical skill assessment and identify challenges and barriers in the field. A systematic literature search, in accordance with the PRISMA statement, was performed to identify studies detailing the use of ML for technical skill assessment in surgery. Of the 1896 studies that were retrieved, 66 studies were included. The most common ML methods used were Hidden Markov Models (HMM, 14/66), Support Vector Machines (SVM, 17/66), and Artificial Neural Networks (ANN, 17/66). 40/66 studies used kinematic data, 19/66 used video or image data, and 7/66 used both. Studies assessed the performance of benchtop tasks (48/66), simulator tasks (10/66), and real-life surgery (8/66). Accuracy rates of over 80% were achieved, although tasks and participants varied between studies. Barriers to progress in the field included a focus on basic tasks, lack of standardization between studies, and lack of datasets. ML has the potential to produce accurate and objective surgical skill assessment through the use of methods including HMM, SVM, and ANN. Future ML-based assessment tools should move beyond the assessment of basic tasks and towards real-life surgery and provide interpretable feedback with clinical value for the surgeon. PROSPERO: CRD42020226071.
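Kinematic data, the most common input modality in this review, is typically reduced to motion-efficiency features before classification with an HMM, SVM, or ANN. A hypothetical sketch of two such features (path length and mean speed) from a sampled tool-tip trajectory; the exact feature set varies by study:

```python
import math

def kinematic_features(trajectory, dt):
    """Summary motion features from a 3D tool-tip trajectory.

    trajectory: list of (x, y, z) points sampled every dt seconds.
    Shorter paths and steadier speeds are commonly associated with
    higher expertise in the skill-assessment literature.
    """
    steps = [math.dist(a, b) for a, b in zip(trajectory, trajectory[1:])]
    path_length = sum(steps)
    duration = dt * (len(trajectory) - 1)
    return {"path_length": path_length, "mean_speed": path_length / duration}
```

Vectors of such features, computed per trial, are what the classifiers compared in the review consume as input.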
Affiliation(s)
- Kyle Lam
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Junhong Chen
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Zeyu Wang
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Fahad M Iqbal
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Ara Darzi
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Benny Lo
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Sanjay Purkayastha
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK.
| | - James M Kinross
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| |
|
16
|
Kirubarajan A, Young D, Khan S, Crasto N, Sobel M, Sussman D. Artificial Intelligence and Surgical Education: A Systematic Scoping Review of Interventions. JOURNAL OF SURGICAL EDUCATION 2022; 79:500-515. [PMID: 34756807 DOI: 10.1016/j.jsurg.2021.09.012] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 07/21/2021] [Accepted: 09/16/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE To synthesize peer-reviewed evidence related to the use of artificial intelligence (AI) in surgical education. DESIGN: We conducted and reported a scoping review according to the standards outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analysis with extension for Scoping Reviews guideline and the fourth edition of the Joanna Briggs Institute Reviewer's Manual. We systematically searched eight interdisciplinary databases including MEDLINE-Ovid, ERIC, EMBASE, CINAHL, Web of Science: Core Collection, Compendex, Scopus, and IEEE Xplore. Databases were searched from inception until the date of search on April 13, 2021. SETTING/PARTICIPANTS We only examined original, peer-reviewed interventional studies that self-described as AI interventions, focused on medical education, and were relevant to surgical trainees (defined as medical or dental students, postgraduate residents, or surgical fellows) within the title and abstract (see Table 2). Animal, cadaveric, and in vivo studies were not eligible for inclusion. RESULTS After systematically searching eight databases and 4255 citations, our scoping review identified 49 studies relevant to artificial intelligence in surgical education. We found diverse interventions related to the evaluation of surgical competency, personalization of surgical education, and improvement of surgical education materials across surgical specialties. Many studies used existing surgical education materials, such as the Objective Structured Assessment of Technical Skills framework or the JHU-ISI Gesture and Skill Assessment Working Set database. Though most studies did not provide outcomes related to the implementation in medical schools (such as cost-effective analyses or trainee feedback), there are numerous promising interventions. In particular, many studies noted high accuracy in the objective characterization of surgical skill sets.
These interventions could be further used to identify at-risk surgical trainees or evaluate teaching methods. CONCLUSIONS There are promising applications for AI in surgical education, particularly for the assessment of surgical competencies, though further evidence is needed regarding implementation and applicability.
Affiliation(s)
| | - Dylan Young
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada
| | - Shawn Khan
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
| | - Noelle Crasto
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada
| | - Mara Sobel
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST) at Ryerson University and St. Michael's Hospital, Toronto, Ontario, Canada
| | - Dafna Sussman
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST) at Ryerson University and St. Michael's Hospital, Toronto, Ontario, Canada; Department of Obstetrics and Gynaecology, University of Toronto, Toronto, Ontario, Canada; The Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, Ontario, Canada
| |
|
17
|
Iqbal U, Jing Z, Ahmed Y, Elsayed AS, Rogers C, Boris R, Porter J, Allaf M, Badani K, Stifelman M, Kaouk J, Terakawa T, Hinata N, Aboumohamed AA, Kauffman E, Li Q, Abaza R, Guru KA, Hussein AA, Eun D. Development and Validation of an Objective Scoring Tool for Robot-Assisted Partial Nephrectomy: Scoring for Partial Nephrectomy. J Endourol 2021; 36:647-653. [PMID: 34809491 DOI: 10.1089/end.2021.0706] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Objective: To develop a structured and objective scoring tool for assessment of robot-assisted partial nephrectomy (RAPN): Scoring for Partial Nephrectomy (SPaN). Materials and Methods: Content development: RAPN was deconstructed into 6 domains by a multi-institutional panel of 10 expert robotic surgeons. Performance on each domain was represented on a Likert scale of 1 to 5, with specific descriptions of anchors 1, 3, and 5. Content validation: The Delphi methodology was utilized to achieve consensus about the description of each anchor for each domain in terms of appropriateness of the skill assessed, objectiveness, clarity, and unambiguous wording. A content validity index (CVI) of ≥0.75 was set as the cutoff for consensus. Reliability: 15 de-identified videos of RAPN were utilized to determine the inter-rater reliability using linearly weighted percent agreement, and construct validation of SPaN was described in terms of median scores and odds ratios. Results: The expert panel reached consensus (CVI ≥0.75) after 2 rounds. Consensus was achieved for 36 (67%) statements in the first round and 18 (33%) after the second round. The final six-domain SPaN included Exposure of the kidney; Identification and dissection of the ureter and gonadal vessels; Dissection of the hilum; Tumor localization and exposure; Clamping and tumor resection; and Renorrhaphy. The linearly weighted percent agreement was >0.75 for all domains. There was no difference between median scores for any domain between attendings and trainees. Conclusion: Despite the lack of significant construct validity, SPaN is a structured, reliable, and procedure-specific tool that can objectively assess technical proficiency for RAPN.
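The Delphi consensus described above hinges on the content validity index: the proportion of expert panelists rating an item as relevant. A minimal sketch of that computation against the study's 0.75 cutoff (the ratings below are invented for illustration):

```python
def content_validity_index(ratings, relevant_levels=(3, 4)):
    """Item-level CVI: the fraction of expert ratings at a 'relevant'
    level (e.g., 3 or 4 on a 4-point relevance scale)."""
    return sum(r in relevant_levels for r in ratings) / len(ratings)

def reaches_consensus(ratings, cutoff=0.75):
    """True when the item meets or exceeds the CVI cutoff."""
    return content_validity_index(ratings) >= cutoff
```

Items failing the cutoff are reworded and re-rated in the next Delphi round, which is why the study reports consensus arriving over two rounds.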
Affiliation(s)
- Umar Iqbal
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA
| | - Zhe Jing
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA
| | - Youssef Ahmed
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA
| | - Ahmed S Elsayed
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA.,Cairo University, Cairo, Egypt
| | - Craig Rogers
- Henry Ford Health Systems, Detroit, Michigan, USA
| | - Ronald Boris
- Indiana University School of Medicine, Indianapolis, Indiana, USA
| | - James Porter
- Swedish Medical Center, Seattle, Washington, USA
| | - Mohammad Allaf
- Johns Hopkins University Hospital, Baltimore, Maryland, USA
| | - Ketan Badani
- Icahn School of Medicine at Mount Sinai, New York, New York, USA
| | | | | | | | - Nobuyuki Hinata
- Hiroshima University Graduate School of Biomedical and Health Sciences, Hiroshima, Japan
| | | | - Eric Kauffman
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA
| | - Qiang Li
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA
| | | | - Khurshid A Guru
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA
| | - Ahmed A Hussein
- A.T.L.A.S. (Applied Technology Laboratory for Advanced Surgery), Roswell Park Comprehensive Cancer Center, Buffalo, New York, USA.,Cairo University, Cairo, Egypt
| | - Daniel Eun
- Temple University Hospital, Philadelphia, Pennsylvania, USA
| |
|
18
|
Bilgic E, Gorgy A, Yang A, Cwintal M, Ranjbar H, Kahla K, Reddy D, Li K, Ozturk H, Zimmermann E, Quaiattini A, Abbasgholizadeh-Rahimi S, Poenaru D, Harley JM. Exploring the roles of artificial intelligence in surgical education: A scoping review. Am J Surg 2021; 224:205-216. [PMID: 34865736 DOI: 10.1016/j.amjsurg.2021.11.023] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 11/19/2021] [Accepted: 11/22/2021] [Indexed: 01/02/2023]
Abstract
BACKGROUND Technology-enhanced teaching and learning, including Artificial Intelligence (AI) applications, has started to evolve in surgical education. Hence, the purpose of this scoping review is to explore the current and future roles of AI in surgical education. METHODS Nine bibliographic databases were searched from January 2010 to January 2021. Full-text articles were included if they focused on AI in surgical education. RESULTS Out of 14,008 unique sources of evidence, 93 were included. Out of 93, 84 were conducted in the simulation setting, and 89 targeted technical skills. Fifty-six studies focused on skills assessment/classification, and 36 used multiple AI techniques. Also, increasing sample size, having balanced data, and using AI to provide feedback were major future directions mentioned by authors. CONCLUSIONS AI can help optimize the education of trainees and our results can help educators and researchers identify areas that need further investigation.
Affiliation(s)
- Elif Bilgic
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Andrew Gorgy
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Alison Yang
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Michelle Cwintal
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Hamed Ranjbar
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Kalin Kahla
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Dheeksha Reddy
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Kexin Li
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Helin Ozturk
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Eric Zimmermann
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Andrea Quaiattini
- Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, Canada; Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada
| | - Samira Abbasgholizadeh-Rahimi
- Department of Family Medicine, McGill University, Montreal, Quebec, Canada; Department of Electrical and Computer Engineering, McGill University, Montreal, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Canada; Mila Quebec AI Institute, Montreal, Canada
| | - Dan Poenaru
- Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada; Department of Pediatric Surgery, McGill University, Canada
| | - Jason M Harley
- Department of Surgery, McGill University, Montreal, Quebec, Canada; Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada; Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada; Steinberg Centre for Simulation and Interactive Learning, McGill University, Montreal, Quebec, Canada.
| |
|
19
|
Motaharifar M, Norouzzadeh A, Abdi P, Iranfar A, Lotfi F, Moshiri B, Lashay A, Mohammadi SF, Taghirad HD. Applications of Haptic Technology, Virtual Reality, and Artificial Intelligence in Medical Training During the COVID-19 Pandemic. Front Robot AI 2021; 8:612949. [PMID: 34476241 PMCID: PMC8407078 DOI: 10.3389/frobt.2021.612949] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Accepted: 07/29/2021] [Indexed: 12/15/2022] Open
Abstract
This paper examines how haptic technology, virtual reality, and artificial intelligence help to reduce physical contact in medical training during the COVID-19 pandemic. Notably, any mistake made by trainees during the education process might lead to undesired complications for the patient. Therefore, teaching medical skills to trainees has always been a challenging issue for expert surgeons, and it is even more challenging during pandemics. The current method of surgical training requires novice surgeons to attend courses, watch procedures, and conduct their initial operations under the direct supervision of an expert surgeon. Owing to the physical contact this method of medical training requires, the people involved, including the novice and expert surgeons, face a potential risk of viral infection. This survey paper reviews recent technological breakthroughs along with new areas in which assistive technologies might provide a viable solution for reducing physical contact in medical institutes during the COVID-19 pandemic and similar crises.
Affiliation(s)
- Mohammad Motaharifar
- Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Department of Electrical Engineering, University of Isfahan, Isfahan, Iran
| | - Alireza Norouzzadeh
- Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
| | - Parisa Abdi
- Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Arash Iranfar
- School of Electrical and Computer Engineering, University College of Engineering, University of Tehran, Tehran, Iran
| | - Faraz Lotfi
- Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
| | - Behzad Moshiri
- School of Electrical and Computer Engineering, University College of Engineering, University of Tehran, Tehran, Iran
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada
| | - Alireza Lashay
- Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Seyed Farzad Mohammadi
- Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Hamid D. Taghirad
- Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
| |
|
20
|
Lajkó G, Nagyné Elek R, Haidegger T. Endoscopic Image-Based Skill Assessment in Robot-Assisted Minimally Invasive Surgery. SENSORS (BASEL, SWITZERLAND) 2021; 21:5412. [PMID: 34450854 PMCID: PMC8398563 DOI: 10.3390/s21165412] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Revised: 08/02/2021] [Accepted: 08/05/2021] [Indexed: 02/06/2023]
Abstract
Objective skill assessment-based personal performance feedback is a vital part of surgical training. Either kinematic data (acquired through surgical robotic systems, sensors mounted on tooltips, or wearable sensors) or visual input data can be employed to perform objective, algorithm-driven skill assessment. Kinematic data have been successfully linked with the expertise of surgeons performing Robot-Assisted Minimally Invasive Surgery (RAMIS) procedures, but for traditional, manual Minimally Invasive Surgery (MIS) they are not readily available. Evaluation methods based on 3D visual features tend to outperform 2D methods, but their utility is limited and not suited to MIS training; therefore, our proposed solution relies on 2D features. The application of additional sensors potentially enhances the performance of either approach. This paper introduces a general 2D image-based solution that enables the creation and application of surgical skill assessment in any training environment. The 2D features were processed using the feature extraction techniques of a previously published benchmark to assess the attainable accuracy. We relied on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) dataset, co-developed by Johns Hopkins University and Intuitive Surgical Inc. Using this well-established set gives us the opportunity to comparatively evaluate different feature extraction techniques. The algorithm reached up to 95.74% accuracy in individual trials. The highest mean accuracy, averaged over five cross-validation trials, was 83.54% for the surgical subtask of Knot-Tying, 84.23% for Needle-Passing, and 81.58% for Suturing. The proposed method measured well against the state of the art in 2D visual-based skill assessment, with more than 80% accuracy for all three surgical subtasks available in JIGSAWS (Knot-Tying, Suturing and Needle-Passing).
By introducing new visual features (such as image-based orientation and image-based collision detection) or, on the evaluation side, utilising other Support Vector Machine kernel methods, tuning the hyperparameters, or using other classification methods (e.g., the boosted trees algorithm), classification accuracy can be further improved. We showed the potential use of optical flow as an input for RAMIS skill assessment, highlighting the maximum accuracy achievable with these data by evaluating them against an established skill assessment benchmark, with each of its methods evaluated independently. The highest-performing method, the Residual Neural Network, reached mean accuracies of 81.89%, 84.23%, and 83.54% for the skills of Suturing, Needle-Passing, and Knot-Tying, respectively.
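The per-skill figures above are means over five cross-validation trials. A generic sketch of that protocol (fold assignment and accuracy averaging only; no claim is made about the authors' exact splits):

```python
import statistics

def kfold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation,
    assigning samples to folds round-robin."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

def cross_validated_accuracy(fold_accuracies):
    """Mean accuracy over folds, as reported per surgical subtask."""
    return statistics.fmean(fold_accuracies)
```

Each fold serves as the held-out test set exactly once, so the reported mean reflects performance on trials the model never saw during training.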
Affiliation(s)
- Gábor Lajkó
- Autonomous Systems Track, Double Degree Programme, EIT Digital Master School, Technische Universität Berlin, Straße des 17. Juni 135, 10623 Berlin, Germany
- ELTE Faculty of Informatics, Pázmány Péter Sétány 1/C, Eötvös Loránd University, Egyetem tér 1-3, 1117 Budapest, Hungary
- Renáta Nagyné Elek
- Antal Bejczy Center for Intelligent Robotics, University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary
- Doctoral School of Applied Informatics and Applied Mathematics, Óbuda University, Bécsi út 96/b, 1034 Budapest, Hungary
- John von Neumann Faculty of Informatics, Óbuda University, Bécsi út 96/b, 1034 Budapest, Hungary
- Tamás Haidegger
- Antal Bejczy Center for Intelligent Robotics, University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary
- Austrian Center for Medical Innovation and Technology, Viktor Kaplan-Straße 2/1, 2700 Wiener Neustadt, Austria
21
Battaglia E, Boehm J, Zheng Y, Jamieson AR, Gahan J, Majewicz Fey A. Rethinking Autonomous Surgery: Focusing on Enhancement over Autonomy. Eur Urol Focus 2021; 7:696-705. [PMID: 34246619 PMCID: PMC10394949 DOI: 10.1016/j.euf.2021.06.009] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Revised: 05/28/2021] [Accepted: 06/17/2021] [Indexed: 12/12/2022]
Abstract
CONTEXT As robot-assisted surgery is increasingly used in surgical care, the engineering research effort towards surgical automation has also increased significantly. Automation promises to enhance surgical outcomes, offload mundane or repetitive tasks, and improve workflow. However, we must ask an important question: should autonomous surgery be our long-term goal? OBJECTIVE To provide an overview of the engineering requirements for automating control systems, summarize technical challenges in automated robotic surgery, and review sensing and modeling techniques to capture real-time human behaviors for integration into the robotic control loop for enhanced shared or collaborative control. EVIDENCE ACQUISITION We performed a nonsystematic search of the English language literature up to March 25, 2021. We included original studies related to automation in robot-assisted laparoscopic surgery and human-centered sensing and modeling. EVIDENCE SYNTHESIS We identified four comprehensive review papers that present techniques for automating portions of surgical tasks. Sixteen studies relate to human-centered sensing technologies and 23 to computer vision and/or advanced artificial intelligence or machine learning methods for skill assessment. Twenty-two studies evaluate or review the role of haptic or adaptive guidance during some learning task, with only a few applied to robotic surgery. Finally, only three studies discuss the role of some form of training in patient outcomes and none evaluated the effects of full or semi-autonomy on patient outcomes. CONCLUSIONS Rather than focusing on autonomy, which eliminates the surgeon from the loop, research centered on more fully understanding the surgeon's behaviors, goals, and limitations could facilitate a superior class of collaborative surgical robots that could be more effective and intelligent than automation alone. 
PATIENT SUMMARY We reviewed the literature for studies on automation in surgical robotics and on modeling of human behavior in human-machine interaction. The main application is to enhance the ability of surgical robotic systems to collaborate more effectively and intelligently with human surgeon operators.
Affiliation(s)
- Edoardo Battaglia
- Department of Mechanical Engineering, University of Texas at Austin, Austin, TX, USA
- Jacob Boehm
- Department of Mechanical Engineering, University of Texas at Austin, Austin, TX, USA
- Yi Zheng
- Department of Mechanical Engineering, University of Texas at Austin, Austin, TX, USA
- Andrew R Jamieson
- Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
- Jeffrey Gahan
- Department of Urology, UT Southwestern Medical Center, Dallas, TX, USA
- Ann Majewicz Fey
- Department of Mechanical Engineering, University of Texas at Austin, Austin, TX, USA
22
Lefor AK, Harada K, Dosis A, Mitsuishi M. Motion analysis of the JHU-ISI Gesture and Skill Assessment Working Set II: learning curve analysis. Int J Comput Assist Radiol Surg 2021; 16:589-595. [PMID: 33723706 DOI: 10.1007/s11548-021-02339-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Accepted: 02/25/2021] [Indexed: 01/12/2023]
Abstract
PURPOSE The Johns Hopkins-Intuitive Gesture and Skill Assessment Working Set (JIGSAWS) dataset is used to develop robotic surgery skill assessment tools, but there has been no detailed analysis of this dataset. The aim of this study is to perform a learning curve analysis of the existing JIGSAWS dataset. METHODS Five trials were performed in JIGSAWS by eight participants (four novices, two intermediates and two experts) for three exercises (suturing, knot-tying and needle-passing). Global Rating Scale scores, time, path length and movements were analyzed quantitatively and qualitatively by graphical analysis. RESULTS There are no significant differences in Global Rating Scale scores over time. Time in the suturing exercise and path length in needle-passing showed significant differences; other kinematic parameters were not significantly different. Qualitative analysis shows a learning curve only for suturing. Cumulative sum analysis suggests completion of the learning curve for suturing by trial 4. CONCLUSIONS The existing JIGSAWS dataset does not show a quantitative learning curve for Global Rating Scale scores or for most kinematic parameters, which may be due in part to the limited size of the dataset. Qualitative analysis shows a learning curve for suturing, and cumulative sum analysis suggests its completion by trial 4. An expanded dataset is needed to facilitate subset analyses.
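The cumulative sum (CUSUM) technique mentioned above can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis: the per-trial completion times and the proficiency target are invented. The idea is that deviations of each trial from the target are accumulated, and the point where the CUSUM curve stops rising marks the completion of the learning curve.

```python
# Sketch: CUSUM learning-curve analysis over a sequence of trial times.
def cusum(observations, target):
    """Running cumulative sum of (observation - target) deviations."""
    total, curve = 0.0, []
    for obs in observations:
        total += obs - target
        curve.append(total)
    return curve

# Invented suturing completion times (s) over five trials for one participant.
times = [120.0, 105.0, 95.0, 88.0, 87.0]
target = 90.0  # assumed proficiency benchmark
curve = cusum(times, target)
print(curve)  # rises while trials exceed the target, then turns down
```

Here the curve peaks at trial 3 and declines from trial 4 onward, the kind of pattern that, in the study above, was read as completion of the suturing learning curve by trial 4.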
Affiliation(s)
- Alan Kawarai Lefor
- Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan
- Kanako Harada
- Mechanical Engineering, School of Engineering, The University of Tokyo, Tokyo, Japan
- Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan
- Mamoru Mitsuishi
- Mechanical Engineering, School of Engineering, The University of Tokyo, Tokyo, Japan
- Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan
23
Is Experience in Hemodialysis Cannulation Related to Expertise? A Metrics-based Investigation for Skills Assessment. Ann Biomed Eng 2021; 49:1688-1700. [PMID: 33417054 DOI: 10.1007/s10439-020-02708-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Accepted: 12/08/2020] [Indexed: 12/19/2022]
Abstract
Cannulation is not only one of the most common medical procedures but also fraught with complications. The skill of the clinician performing cannulation directly impacts cannulation outcomes. However, current methods of teaching this skill are deficient, relying on subjective demonstrations and unrealistic manikins that have limited utility for skills training. Furthermore, one of the factors that hinder effective continuing medical education is the assumption that clinical experience results in expertise. In this work, we examine whether objective metrics acquired from a novel cannulation simulator can distinguish between experienced clinicians and established experts, enabling the measurement of true expertise. Twenty-two healthcare professionals with varying cannulation experience performed a simulated arteriovenous fistula cannulation task on the simulator. Four clinicians were peer-identified as experts, while the others were assigned to the experienced group. The simulator tracked the motion of the needle (via an electromagnetic sensor), rendered a blood flashback function (via an infrared light sensor), and recorded pinch forces exerted on the needle (via force-sensing elements). Metrics were computed based on motion, force, and other sensor data. Results indicated that, with nearly 80% accuracy using both logistic regression and linear discriminant analysis, the objective metrics differentiated between the experts and the experienced, identifying needle motion and finger force as two prominent features that distinguished the groups. Furthermore, results indicated that expertise was not correlated with years of experience, validating the central hypothesis of the study. These insights contribute to structured and standardized medical skills training by enabling a meaningful definition of expertise and could potentially lead to more effective skills training methods.
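The logistic-regression side of the classification above can be sketched in a few lines. This is not the study's code: the two features (standing in for needle motion and pinch force), the labels, and the learning-rate/epoch settings are all invented, and plain batch-free stochastic gradient descent replaces whatever fitting procedure the authors used.

```python
# Sketch: logistic regression separating "expert" (1) from "experienced" (0)
# using two invented simulator metrics per clinician.
import math

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Per-sample gradient descent on the logistic loss; returns (weights, bias)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(expert)
            g = p - yi                       # gradient of the log-loss wrt z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def classify(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z >= 0 else 0

# Invented metrics: experts show less extraneous motion and lighter pinch force.
X = [(0.2, 0.3), (0.25, 0.2), (0.3, 0.35), (0.8, 0.9), (0.9, 0.8), (0.85, 0.95)]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)
acc = sum(classify(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
print(f"training accuracy: {acc:.2f}")
```

On real data one would of course report held-out rather than training accuracy, as the study's roughly 80% figures are; the toy clusters here are separable, so the sketch only shows the mechanics.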
24
Aziz Kalteh A, Babouei S. Control chart patterns recognition using ANFIS with new training algorithm and intelligent utilization of shape and statistical features. ISA TRANSACTIONS 2020; 102:12-22. [PMID: 31848018 DOI: 10.1016/j.isatra.2019.12.001] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/28/2018] [Revised: 12/03/2019] [Accepted: 12/08/2019] [Indexed: 06/10/2023]
Abstract
This paper presents a new method for recognition of nine control chart patterns (CCPs) based on the intelligent use of shape and statistical features and an optimized fuzzy system. The proposed technique contains three levels of separation. At each level, an effective set of shape and statistical features is used as the classifier input for recognizing a subset of the patterns. Owing to the good performance of the adaptive neuro-fuzzy inference system (ANFIS) in pattern recognition problems, the proposed method uses an ANFIS as the classifier at each level of separation, trained by the chaotic whale optimization algorithm (CWOA). The intelligent utilization of newly extracted features, the improved robustness of the ANFIS, and the consideration of nine patterns in the CCP recognition problem are the main contributions of the proposed method. Simulation results showed that the proposed method performs better than other similar methods and can recognize the pattern type with 99.77% accuracy.
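The "chaotic" ingredient of chaotic metaheuristics such as CWOA is typically a chaotic map that replaces uniform random draws, so the search samples the unit interval deterministically but non-repetitively. A minimal sketch of one common choice, the logistic map, is below; the parameters are illustrative and not taken from the paper.

```python
# Sketch: logistic-map chaotic sequence, as often used to drive the random
# coefficients of chaotic variants of metaheuristics (e.g. chaotic WOA).
def logistic_map(x0, r=4.0, n=5):
    """Iterate x <- r*x*(1-x); with r = 4 the orbit is chaotic on (0, 1)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

print(logistic_map(0.3))  # values wander over (0, 1) without repeating
```

In a chaotic optimizer, each call that would normally draw a uniform random number instead takes the next element of such a sequence.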
Affiliation(s)
- Abdol Aziz Kalteh
- Department of Electrical Engineering, Aliabad Katoul Branch, Islamic Azad University, Aliabad Katoul, Iran
- Sajjad Babouei
- Department of Electrical Engineering, Aliabad Katoul Branch, Islamic Azad University, Aliabad Katoul, Iran
25
Tanzi L, Piazzolla P, Vezzetti E. Intraoperative surgery room management: A deep learning perspective. Int J Med Robot 2020; 16:1-12. [PMID: 32510857 DOI: 10.1002/rcs.2136] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2020] [Revised: 04/21/2020] [Accepted: 06/03/2020] [Indexed: 12/22/2022]
Abstract
PURPOSE The current study aimed to systematically review the literature addressing the use of deep learning (DL) methods in intraoperative surgery applications, focusing on data collection, the objectives of these tools and, more technically, the DL-based paradigms utilized. METHODS A literature search of standard databases was performed: using specific keywords, we identified a total of 996 papers. Among them, we selected 52 for detailed analysis, focusing on articles published after January 2015. RESULTS The preliminary results of implementing DL in clinical settings are encouraging. Almost all surgical sub-fields have seen the advent of artificial intelligence (AI) applications, and the results outperformed previous techniques in the majority of cases. From these results, a conceptualization of an intelligent operating room (IOR) is also presented. CONCLUSION This evaluation outlined how AI and, in particular, DL are revolutionizing the surgical field, with numerous applications such as context detection and room management. This process is evolving year by year towards the realization of an IOR, equipped with technologies suited to drastically improving the surgical workflow.