1. Cizmic A, Häberle F, Wise PA, Müller F, Gabel F, Mascagni P, Namazi B, Wagner M, Hashimoto DA, Madani A, Alseidi A, Hackert T, Müller-Stich BP, Nickel F. Structured feedback and operative video debriefing with critical view of safety annotation in training of laparoscopic cholecystectomy: a randomized controlled study. Surg Endosc 2024; 38:3241-3252. PMID: 38653899; PMCID: PMC11133174; DOI: 10.1007/s00464-024-10843-6.
Abstract
BACKGROUND The learning curve in minimally invasive surgery (MIS) is longer than in open surgery. Structured feedback and training in teams of two trainees have been reported to improve MIS training and performance, and annotation of surgical images and videos may benefit surgical training. This study investigated whether structured feedback and video debriefing, including annotation of the critical view of safety (CVS), have beneficial learning effects in a predefined, multi-modal MIS training curriculum with trainees working in teams of two. METHODS This randomized controlled single-center study included 80 medical students without MIS experience. Participants first completed a standardized, structured multi-modal MIS training curriculum. They were then randomized into two groups (n = 40 each), and each participant performed four laparoscopic cholecystectomies (LCs) on ex-vivo porcine livers. Students in the intervention group received structured feedback after each LC, consisting of LC performance evaluation through joint tutor-trainee video debriefing and CVS video annotation. Performance was evaluated using global and LC-specific Objective Structured Assessment of Technical Skills (OSATS) and Global Operative Assessment of Laparoscopic Skills (GOALS) scores. RESULTS Participants in the intervention group achieved higher global and LC-specific OSATS as well as global and LC-specific GOALS scores than the control group (25.5 ± 7.3 vs. 23.4 ± 5.1, p = 0.003; 47.6 ± 12.9 vs. 36 ± 12.8, p < 0.001; 17.5 ± 4.4 vs. 16 ± 3.8, p < 0.001; 6.6 ± 2.3 vs. 5.9 ± 2.1, p = 0.005). The intervention group also achieved the CVS more often than the control group (1st LC: 20 vs. 10 participants, p = 0.037; 2nd LC: 24 vs. 8, p = 0.001; 3rd LC: 31 vs. 8, p < 0.001; 4th LC: 31 vs. 10, p < 0.001).
CONCLUSIONS Structured feedback and video debriefing with CVS annotation improve CVS achievement and ex-vivo porcine LC training performance as measured by OSATS and GOALS scores.
Affiliation(s)
- Amila Cizmic
  - Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20251, Hamburg, Germany
- Frida Häberle
  - Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Philipp A Wise
  - Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Felix Müller
  - Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Felix Gabel
  - Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Pietro Mascagni
  - Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
  - Institute of Image-Guided Surgery, IHU-Strasbourg, Strasbourg, France
- Babak Namazi
  - Center for Evidence-Based Simulation, Baylor University Medical Center, Dallas, USA
- Martin Wagner
  - Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Daniel A Hashimoto
  - Penn Computer Assisted Surgery and Outcomes (PCASO) Laboratory, Department of Surgery, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, USA
- Amin Madani
  - Surgical Artificial Intelligence Research Academy (SARA), Department of Surgery, University Health Network, Toronto, Canada
- Adnan Alseidi
  - Department of Surgery, University of California - San Francisco, San Francisco, USA
- Thilo Hackert
  - Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20251, Hamburg, Germany
- Beat P Müller-Stich
  - Department of Surgery, Clarunis - University Centre for Gastrointestinal and Liver Diseases, Basel, Switzerland
- Felix Nickel
  - Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20251, Hamburg, Germany
  - Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
  - HIDSS4Health - Helmholtz Information and Data Science School for Health, Karlsruhe, Heidelberg, Germany
2. Matsumoto S, Kawahira H, Fukata K, Doi Y, Kobayashi N, Hosoya Y, Sata N. Laparoscopic distal gastrectomy skill evaluation from video: a new artificial intelligence-based instrument identification system. Sci Rep 2024; 14:12432. PMID: 38816459; PMCID: PMC11139867; DOI: 10.1038/s41598-024-63388-y.
Abstract
The advent of artificial intelligence (AI)-based object detection has made it possible to identify the position coordinates of surgical instruments from video. This study aimed to identify kinematic differences by surgical skill level. An AI algorithm was developed to accurately identify the X and Y coordinates of surgical instrument tips from video. Kinematic analysis, including fluctuation analysis, was performed on 18 laparoscopic distal gastrectomy videos from three expert and three novice surgeons (3 videos/surgeon; 11.6 h; 1,254,010 frames). The analysis showed that the expert cohort moved more efficiently and regularly, with significantly shorter operation time and total travel distance. Instrument tip movement did not differ in velocity, acceleration, or jerk between skill levels. The fluctuation evaluation index β was significantly higher in experts, and a ROC curve cutoff of 1.4 discriminated experts from novices with a sensitivity and specificity of 77.8%. Despite the small sample, this study suggests that AI-based object detection with fluctuation analysis is promising, because skill evaluation can be computed in real time, with potential for peri-operational evaluation.
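The kinematic quantities named above (total travel distance, velocity, acceleration, jerk) can be derived from detected tip coordinates by finite differences. A minimal sketch, assuming a fixed frame rate and pixel coordinates; the paper's exact pre-processing and the fluctuation-analysis index β are not reproduced here:

```python
import numpy as np

def kinematic_metrics(xy, fps=30.0):
    """Basic kinematic measures from a sequence of instrument-tip
    positions (N x 2 array of X/Y coordinates) sampled at `fps`.
    Returns total travel distance and mean speed/acceleration/jerk
    magnitudes, computed by successive finite differences."""
    xy = np.asarray(xy, dtype=float)
    dt = 1.0 / fps
    v = np.gradient(xy, dt, axis=0)   # velocity (units/s)
    a = np.gradient(v, dt, axis=0)    # acceleration (units/s^2)
    j = np.gradient(a, dt, axis=0)    # jerk (units/s^3)
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    return {
        "total_distance": step.sum(),
        "mean_speed": np.linalg.norm(v, axis=1).mean(),
        "mean_accel": np.linalg.norm(a, axis=1).mean(),
        "mean_jerk": np.linalg.norm(j, axis=1).mean(),
    }
```

In practice the raw detections would be smoothed (e.g. low-pass filtered) before differentiation, since each derivative amplifies detection noise.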
Affiliation(s)
- Shiro Matsumoto
  - Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University, Tochigi, Japan
- Hiroshi Kawahira
  - Medical Simulation Center, Jichi Medical University, Tochigi, Japan
- Yoshinori Hosoya
  - Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University, Tochigi, Japan
- Naohiro Sata
  - Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University, Tochigi, Japan
3. Preukschas AA, Wise PA, Bettscheider L, Pfeiffer M, Wagner M, Huber M, Golriz M, Fischer L, Mehrabi A, Rössler F, Speidel S, Hackert T, Müller-Stich BP, Nickel F, Kenngott HG. Comparing a virtual reality head-mounted display to on-screen three-dimensional visualization and two-dimensional computed tomography data for training in decision making in hepatic surgery: a randomized controlled study. Surg Endosc 2024; 38:2483-2496. PMID: 38456945; PMCID: PMC11078809; DOI: 10.1007/s00464-023-10615-8.
Abstract
OBJECTIVE To evaluate the benefits of a virtual reality (VR) environment with a head-mounted display (HMD) for decision-making in liver surgery. BACKGROUND Training in liver surgery involves appraising radiologic images and considering the patient's clinical information. Accurate assessment of 2D tomography images is complex, requires considerable experience, and the images are often divorced from the clinical information. We present a comprehensive, interactive tool for visualizing operation planning data in a VR environment using an HMD and compare it to 3D visualization and 2D tomography. METHODS Ninety medical students were randomized into three groups (1:1:1 ratio). All participants analyzed three liver surgery patient cases of increasing difficulty. The cases were analyzed using 2D tomography data (group "2D"), a 3D visualization on a 2D display (group "3D"), or a VR environment (group "VR") displayed with the Oculus Rift HMD. Participants answered 11 questions on anatomy, tumor involvement, and surgical decision-making, and 18 evaluative questions (Likert scale). RESULTS The sum of correct answers was significantly higher in the 3D (7.1 ± 1.4, p < 0.001) and VR (7.1 ± 1.4, p < 0.001) groups than in the 2D group (5.4 ± 1.4), with no difference between 3D and VR (p = 0.987). Times to answer were significantly faster in the 3D (6:44 ± 2:22 min, p < 0.001) and VR (6:24 ± 2:43 min, p < 0.001) groups than in the 2D group (9:13 ± 3:10 min), again with no difference between 3D and VR (p = 0.419). In the questionnaire, the VR environment was rated most useful for identifying anatomic anomalies, risk and target structures, and for transferring anatomical and pathological information to the intraoperative situation.
CONCLUSIONS A VR environment with 3D visualization using an HMD is a useful surgical training tool for accurately and quickly determining liver anatomy and tumor involvement in surgery.
Affiliation(s)
- Anas Amin Preukschas
  - Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120, Heidelberg, Germany
  - Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Philipp Anthony Wise
  - Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120, Heidelberg, Germany
- Lisa Bettscheider
  - Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120, Heidelberg, Germany
- Micha Pfeiffer
  - Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Kaiserstrasse 12, 76131, Karlsruhe, Germany
  - Department for Translational Surgical Oncology, National Center for Tumor Diseases, Fiedlerstraße 23, 01307, Dresden, Germany
- Martin Wagner
  - Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120, Heidelberg, Germany
- Matthias Huber
  - Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Kaiserstrasse 12, 76131, Karlsruhe, Germany
- Mohammad Golriz
  - Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120, Heidelberg, Germany
- Lars Fischer
  - Department of Surgery, Hospital Mittelbaden, Balgerstrasse 50, 76532, Baden-Baden, Germany
- Arianeb Mehrabi
  - Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120, Heidelberg, Germany
- Fabian Rössler
  - Department of Surgery and Transplantation, University Hospital of Zürich, Rämistrasse 100, 8091, Zurich, Switzerland
- Stefanie Speidel
  - Department for Translational Surgical Oncology, National Center for Tumor Diseases, Fiedlerstraße 23, 01307, Dresden, Germany
- Thilo Hackert
  - Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120, Heidelberg, Germany
  - Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Beat Peter Müller-Stich
  - Division of Abdominal Surgery, Clarunis Academic Centre of Gastrointestinal Diseases, St. Clara and University Hospital of Basel, Petersgraben 4, 4051, Basel, Switzerland
- Felix Nickel
  - Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120, Heidelberg, Germany
  - Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Martinistraße 52, 20246, Hamburg, Germany
- Hannes Götz Kenngott
  - Department of General, Visceral and Transplantation Surgery, University of Heidelberg, Im Neuenheimer Feld 672, 69120, Heidelberg, Germany
4. Ryder CY, Mott NM, Gross CL, Anidi C, Shigut L, Bidwell SS, Kim E, Zhao Y, Ngam BN, Snell MJ, Yu BJ, Forczmanski P, Rooney DM, Jeffcoach DR, Kim GJ. Using Artificial Intelligence to Gauge Competency on a Novel Laparoscopic Training System. J Surg Educ 2024; 81:267-274. PMID: 38160118; DOI: 10.1016/j.jsurg.2023.10.007.
Abstract
OBJECTIVE Laparoscopic surgical skill assessment and machine learning are often inaccessible in low- and middle-income countries (LMICs). Our team developed a low-cost laparoscopic training system to teach and assess the psychomotor skills required in laparoscopic salpingostomy in LMICs, and performed AI-based video review to assess global surgical technique. The objective of this study was to assess the validity of artificial intelligence (AI)-generated scores of laparoscopic simulation videos by comparing their accuracy to human-generated scores. DESIGN Seventy-four surgical simulation videos were collected and graded by human raters using a modified OSATS (Objective Structured Assessment of Technical Skills). The videos were then analyzed via AI using three time- and distance-based measures of the laparoscopic instruments: path length, dimensionless jerk, and standard deviation of tool position. Predicted scores were generated using 5-fold cross-validation with K-Nearest-Neighbors classifiers. SETTING Surgical novices and experts from hospitals in Ethiopia, Cameroon, Kenya, and the United States contributed 74 laparoscopic salpingostomy simulation videos. RESULTS Overall accuracy of AI relative to human assessment ranged from 65% to 77%. There were no statistical differences in rank mean scores for three domains (Flow of Operation, Respect for Tissue, and Economy of Motion), while there were significant differences for Instrument Handling, Overall Performance, and the total summed score of all five domains (Summed). Estimated effect sizes were all below 0.11, indicating a very small practical effect. The estimated intraclass correlation coefficient (ICC) for Summed was 0.72, indicating moderate correlation between AI and human scores. CONCLUSIONS In our laparoscopic training system, AI-based video review of global characteristics was similar to human review. Machine learning may help fill an educational gap in LMICs where direct apprenticeship may not be feasible.
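The scoring pipeline described in the DESIGN section (three motion features, K-Nearest-Neighbors, 5-fold cross-validation) can be sketched with scikit-learn. The feature values below are synthetic stand-ins for illustration, not the study's data:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for the three motion features named in the abstract:
# path length, dimensionless jerk, standard deviation of tool position.
# Real values would be computed from tracked instrument coordinates.
novice = rng.normal([120.0, 9.0, 30.0], [15.0, 1.5, 5.0], size=(40, 3))
expert = rng.normal([80.0, 6.0, 18.0], [15.0, 1.5, 5.0], size=(40, 3))
X = np.vstack([novice, expert])
y = np.array([0] * 40 + [1] * 40)  # 0 = novice, 1 = expert

clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"5-fold accuracy: {scores.mean():.2f}")
```

Standardizing the features before KNN matters here, because path length and dimensionless jerk live on very different numeric scales and KNN distances would otherwise be dominated by the largest-valued feature.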
Affiliation(s)
- Nicole M Mott
  - University of Michigan Medical School, Ann Arbor, Michigan
- Chioma Anidi
  - University of Michigan Medical School, Ann Arbor, Michigan
- Leul Shigut
  - Department of Surgery, Soddo Christian General Hospital, Soddo, Ethiopia
- Erin Kim
  - University of Michigan Medical School, Ann Arbor, Michigan
- Yimeng Zhao
  - University of Michigan Medical School, Ann Arbor, Michigan
- Mark J Snell
  - Department of Surgery, Mbingo Baptist Hospital, Mbingo, Cameroon
- B Joon Yu
  - Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Pawel Forczmanski
  - Department of Computer Science and Information Technology, West Pomeranian University of Technology in Szczecin, Szczecin, Poland
- Deborah M Rooney
  - Department of Learning Sciences, University of Michigan, Ann Arbor, Michigan
- David R Jeffcoach
  - Department of Surgery, Community Regional Medical Center, Fresno, California
- Grace J Kim
  - Department of Surgery, University of Michigan, Ann Arbor, Michigan
5. Pedrett R, Mascagni P, Beldi G, Padoy N, Lavanchy JL. Technical skill assessment in minimally invasive surgery using artificial intelligence: a systematic review. Surg Endosc 2023; 37:7412-7424. PMID: 37584774; PMCID: PMC10520175; DOI: 10.1007/s00464-023-10335-z.
Abstract
BACKGROUND Technical skill assessment in surgery relies on expert opinion; it is therefore time-consuming, costly, and often lacks objectivity. Analysis of intraoperative data by artificial intelligence (AI) has the potential to automate technical skill assessment. The aim of this systematic review was to analyze the performance, external validity, and generalizability of AI models for technical skill assessment in minimally invasive surgery. METHODS A systematic search of Medline, Embase, Web of Science, and IEEE Xplore was performed to identify original articles reporting the use of AI for technical skill assessment in minimally invasive surgery. Risk of bias (RoB) and quality of the included studies were analyzed using the Quality Assessment of Diagnostic Accuracy Studies criteria and the modified Joanna Briggs Institute checklists, respectively. Findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. RESULTS In total, 1958 articles were identified; 50 met the eligibility criteria and were analyzed. Motion data extracted from surgical videos (n = 25) or kinematic data from robotic systems or sensors (n = 22) were the most frequent AI inputs. Most studies used deep learning (n = 34) and predicted technical skill on an ordinal assessment scale (n = 36) with good accuracy in simulated settings. However, all proposed models were in the development stage; only 4 studies were externally validated, and 8 showed a low RoB. CONCLUSION AI showed good performance in technical skill assessment in minimally invasive surgery, but models often lacked external validity and generalizability. Models should therefore be benchmarked using predefined performance metrics and tested in clinical implementation studies.
Affiliation(s)
- Romina Pedrett
  - Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Pietro Mascagni
  - IHU Strasbourg, Strasbourg, France
  - Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Guido Beldi
  - Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Nicolas Padoy
  - IHU Strasbourg, Strasbourg, France
  - ICube, CNRS, University of Strasbourg, Strasbourg, France
- Joël L Lavanchy
  - Department of Visceral Surgery and Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
  - IHU Strasbourg, Strasbourg, France
  - University Digestive Health Care Center Basel - Clarunis, PO Box, 4002, Basel, Switzerland
6. Rodriguez Peñaranda N, Eissa A, Ferretti S, Bianchi G, Di Bari S, Farinha R, Piazza P, Checcucci E, Belenchón IR, Veccia A, Gomez Rivas J, Taratkin M, Kowalewski KF, Rodler S, De Backer P, Cacciamani GE, De Groote R, Gallagher AG, Mottrie A, Micali S, Puliatti S. Artificial Intelligence in Surgical Training for Kidney Cancer: A Systematic Review of the Literature. Diagnostics (Basel) 2023; 13:3070. PMID: 37835812; PMCID: PMC10572445; DOI: 10.3390/diagnostics13193070.
Abstract
The prevalence of renal cell carcinoma (RCC) is increasing due to advanced imaging techniques. Surgical resection is the standard treatment, involving complex radical and partial nephrectomy procedures that demand extensive training and planning. This review explores how artificial intelligence (AI) can aid the training process and create a framework for kidney cancer surgery that addresses training difficulties. Following PRISMA 2020 criteria, an exhaustive search of the PubMed and SCOPUS databases was conducted without filters or restrictions. Inclusion criteria encompassed original English-language articles focusing on AI's role in kidney cancer surgical training; non-original articles and articles in other languages were excluded. Two independent reviewers assessed the articles, with a third settling any disagreement. Study specifics, AI tools, methodologies, endpoints, and outcomes were extracted by the same authors, and the Oxford Centre for Evidence-Based Medicine's evidence levels were used to assess the studies. Of 468 identified records, 14 eligible studies were selected. Potential AI applications in kidney cancer surgical training include analyzing surgical workflow, annotating instruments, identifying tissues, and 3D reconstruction. AI is capable of appraising surgical skills, including identifying procedural steps and tracking instruments. While AI and augmented reality (AR) enhance training, challenges persist in real-time tracking and registration. AI-driven 3D reconstruction is beneficial for intraoperative guidance and preoperative preparation. AI shows potential for advancing surgical training by providing unbiased evaluations, personalized feedback, and enhanced learning processes, yet challenges such as consistent metric measurement, ethical concerns, and data privacy must be addressed. The integration of AI into kidney cancer surgical training offers solutions to training difficulties and a boost to surgical education; however, additional studies are needed to fully harness its potential.
Affiliation(s)
- Natali Rodriguez Peñaranda
  - Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Ahmed Eissa
  - Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
  - Department of Urology, Faculty of Medicine, Tanta University, Tanta 31527, Egypt
- Stefania Ferretti
  - Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Giampaolo Bianchi
  - Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Stefano Di Bari
  - Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Rui Farinha
  - Orsi Academy, 9090 Melle, Belgium
  - Urology Department, Lusíadas Hospital, 1500-458 Lisbon, Portugal
- Pietro Piazza
  - Division of Urology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Enrico Checcucci
  - Department of Surgery, FPO-IRCCS Candiolo Cancer Institute, 10060 Turin, Italy
- Inés Rivero Belenchón
  - Urology and Nephrology Department, Virgen del Rocío University Hospital, 41013 Seville, Spain
- Alessandro Veccia
  - Department of Urology, University of Verona, Azienda Ospedaliera Universitaria Integrata, 37126 Verona, Italy
- Juan Gomez Rivas
  - Department of Urology, Hospital Clinico San Carlos, 28040 Madrid, Spain
- Mark Taratkin
  - Institute for Urology and Reproductive Health, Sechenov University, 119435 Moscow, Russia
- Karl-Friedrich Kowalewski
  - Department of Urology and Urosurgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Severin Rodler
  - Department of Urology, University Hospital LMU Munich, 80336 Munich, Germany
- Pieter De Backer
  - Orsi Academy, 9090 Melle, Belgium
  - Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, 9000 Ghent, Belgium
- Giovanni Enrico Cacciamani
  - USC Institute of Urology, Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90089, USA
  - AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA 90089, USA
- Ruben De Groote
  - Orsi Academy, 9090 Melle, Belgium
- Anthony G. Gallagher
  - Orsi Academy, 9090 Melle, Belgium
  - Faculty of Life and Health Sciences, Ulster University, Derry BT48 7JL, UK
- Alexandre Mottrie
  - Orsi Academy, 9090 Melle, Belgium
- Salvatore Micali
  - Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Stefano Puliatti
  - Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
7. Kinoshita T, Komatsu M. Artificial Intelligence in Surgery and Its Potential for Gastric Cancer. J Gastric Cancer 2023; 23:400-409. PMID: 37553128; PMCID: PMC10412972; DOI: 10.5230/jgc.2023.23.e27.
Abstract
Artificial intelligence (AI) has made significant progress in recent years, and many medical fields are attempting to introduce AI technology into clinical practice. Currently, much research is being conducted to evaluate whether AI can be incorporated into surgical procedures to make them safer and more efficient and thereby obtain better outcomes for patients. In this paper, we review basic AI research on surgery and discuss the potential for implementing AI technology in gastric cancer surgery. At present, research and development is focused on AI technologies that assist the surgeon's understanding and judgment during surgery, such as anatomical navigation. AI systems are also being developed to recognize which surgical phase is ongoing. Such surgical phase recognition systems are expected to support efficient storage of surgical videos and education, and, in the future, systems that objectively evaluate surgeons' skills. At this time, and also from an ethical standpoint, it is not considered practical to let AI make intraoperative decisions or move forceps automatically. AI research on surgery currently has various limitations, and it is desirable to develop practical systems that will truly benefit clinical practice in the future.
Affiliation(s)
- Takahiro Kinoshita
  - Gastric Surgery Division, National Cancer Center Hospital East, Kashiwa, Japan
- Masaru Komatsu
  - Gastric Surgery Division, National Cancer Center Hospital East, Kashiwa, Japan
8. Felinska EA, Fuchs TE, Kogkas A, Chen ZW, Otto B, Kowalewski KF, Petersen J, Müller-Stich BP, Mylonas G, Nickel F. Telestration with augmented reality improves surgical performance through gaze guidance. Surg Endosc 2023; 37:3557-3566. PMID: 36609924; PMCID: PMC10156835; DOI: 10.1007/s00464-022-09859-7.
Abstract
BACKGROUND In minimally invasive surgery (MIS), trainees need to learn how to interpret the operative field displayed on the laparoscopic screen. Experts currently guide trainees mainly verbally during laparoscopic procedures. A newly developed telestration system with augmented reality (iSurgeon) allows the instructor to display hand gestures on the laparoscopic screen in real time to provide visual expert guidance (telestration). This study analyzed the effect of telestration-guided instruction on gaze behaviour during MIS training. METHODS In a randomized controlled crossover study, 40 MIS-naive medical students performed 8 laparoscopic tasks with telestration or with verbal instructions only. Pupil Core eye-tracking glasses were used to capture the instructor's and trainees' gazes. Gaze behaviour measures for tasks 1-7 were gaze latency, gaze convergence, and collaborative gaze convergence. Performance measures included the number of errors in tasks 1-7 and trainees' ratings on structured and standardized performance scores in task 8 (ex-vivo porcine laparoscopic cholecystectomy). RESULTS With iSurgeon instruction there was a significant improvement in tasks 1-7 in gaze latency [F(1,39) = 762.5, p < 0.01, ηp² = 0.95], gaze convergence [F(1,39) = 482.8, p < 0.01, ηp² = 0.93], and collaborative gaze convergence [F(1,39) = 408.4, p < 0.01, ηp² = 0.91]. The number of errors was significantly lower in tasks 1-7 (0.18 ± 0.56 vs. 1.94 ± 1.80, p < 0.01), and score ratings for laparoscopic cholecystectomy were significantly higher with telestration (global OSATS: 29 ± 2.5 vs. 25 ± 5.5, p < 0.01; task-specific OSATS: 60 ± 3 vs. 50 ± 6, p < 0.01). CONCLUSIONS Telestration with augmented reality successfully improved surgical performance. Trainees' gaze behaviour improved: the time from instruction to fixation on targets decreased, the convergence of the instructor's and trainees' gazes increased, and the convergence of trainees' gaze with target areas also increased. This confirms that augmented reality-based telestration works by means of gaze guidance in MIS and could be used to improve training outcomes.
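The gaze measures named above can be operationalized simply: gaze latency as the delay from an instruction to the first fixation inside the target area, and gaze convergence as the fraction of time-aligned samples in which two gaze points lie within a distance threshold. A hypothetical sketch; the thresholds, data layout, and function names are illustrative assumptions, not the study's definitions:

```python
import numpy as np

def gaze_latency(instruction_t, fixations, target, radius):
    """Time from an instruction timestamp to the first fixation landing
    within `radius` of the target point. `fixations` is an ordered list
    of (t, x, y) tuples; returns None if the target is never fixated."""
    tx, ty = target
    for t, x, y in fixations:
        if t >= instruction_t and np.hypot(x - tx, y - ty) <= radius:
            return t - instruction_t
    return None

def gaze_convergence(gaze_a, gaze_b, threshold):
    """Fraction of time-aligned samples in which two gaze traces
    (N x 2 arrays of screen coordinates) lie within `threshold`."""
    d = np.linalg.norm(np.asarray(gaze_a) - np.asarray(gaze_b), axis=1)
    return float(np.mean(d <= threshold))
```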
Collapse
Affiliation(s)
- Eleni Amelia Felinska
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- Thomas Ewald Fuchs
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- Alexandros Kogkas
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, SW7 2AZ, UK
- Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London, SW7 2AZ, UK
- Zi-Wei Chen
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- Benjamin Otto
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- Karl-Friedrich Kowalewski
- Department of Urology and Urological Surgery, University Medical Center Mannheim, Heidelberg University, 68167, Mannheim, Germany
- Jens Petersen
- Department of Medical Image Computing, German Cancer Research Center, 69120, Heidelberg, Germany
- Beat Peter Müller-Stich
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
- George Mylonas
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, SW7 2AZ, UK
- Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London, SW7 2AZ, UK
- Felix Nickel
- Department of General, Visceral and Transplant Surgery, Heidelberg University Hospital, 69120, Heidelberg, Germany
9
Singh R, Godiyal AK, Chavakula P, Suri A. Craniotomy Simulator with Force Myography and Machine Learning-Based Skills Assessment. Bioengineering (Basel) 2023; 10:bioengineering10040465. [PMID: 37106652 PMCID: PMC10136274 DOI: 10.3390/bioengineering10040465] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2023] [Revised: 02/24/2023] [Accepted: 02/26/2023] [Indexed: 04/29/2023] Open
Abstract
Craniotomy is a fundamental component of neurosurgery that involves the removal of a skull bone flap. Simulation-based training of craniotomy is an efficient method to develop competent skills outside the operating room. Traditionally, an expert surgeon evaluates surgical skills using rating scales, but this method is subjective, time-consuming, and tedious. Accordingly, the objective of the present study was to develop an anatomically accurate craniotomy simulator with realistic haptic feedback and objective evaluation of surgical skills. A CT-scan-segmentation-based craniotomy simulator with two bone flaps for the drilling task was developed using 3D-printed bone matrix material. Force myography (FMG) and machine learning were used to automatically evaluate surgical skills. Twenty-two neurosurgeons participated in this study, including novices (n = 8), intermediates (n = 8), and experts (n = 6), and they performed the defined drilling experiments. They rated the effectiveness of the simulator on a 1-10 Likert scale questionnaire. The data acquired from the FMG band were used to classify surgical expertise into novice, intermediate, and expert categories. The study employed naïve Bayes, linear discriminant analysis (LDA), support vector machine (SVM), and decision tree (DT) classifiers with leave-one-out cross-validation. The neurosurgeons' feedback indicates that the developed simulator is an effective tool for honing drilling skills. In addition, the bone matrix material provided good haptic feedback (average score 7.1). For FMG-data-based skills evaluation, we achieved maximum accuracy using the naïve Bayes classifier (90.0 ± 14.8%). DT had a classification accuracy of 86.22 ± 20.8%, LDA an accuracy of 81.9 ± 23.6%, and SVM an accuracy of 76.7 ± 32.9%.
The findings of this study indicate that materials with comparable biomechanical properties to those of real tissues are more effective for surgical simulation. In addition, force myography and machine learning provide objective and automated assessment of surgical drilling skills.
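The skills-classification step described above (FMG features in, expertise class out, validated with leave-one-out cross-validation) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' pipeline: the nearest-centroid classifier, the two synthetic features, and the toy group values are all assumptions standing in for the real FMG band data and the naïve Bayes/LDA/SVM/DT models.

```python
import math

def loocv_accuracy(samples):
    """Leave-one-out cross-validation for a simple nearest-centroid classifier.

    samples: list of (feature_vector, label) pairs.
    Returns the fraction of held-out samples classified correctly.
    """
    def centroid(vectors):
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    correct = 0
    for i, (x, y) in enumerate(samples):
        # Hold out sample i; fit centroids on the rest.
        train = samples[:i] + samples[i + 1:]
        labels = {lbl for _, lbl in train}
        cents = {lbl: centroid([v for v, l in train if l == lbl]) for lbl in labels}
        pred = min(cents, key=lambda lbl: math.dist(x, cents[lbl]))
        correct += (pred == y)
    return correct / len(samples)

# Synthetic "FMG feature" data: three well-separated expertise groups.
data = (
    [([0.0 + 0.1 * i, 0.0], "novice") for i in range(4)]
    + [([5.0 + 0.1 * i, 5.0], "intermediate") for i in range(4)]
    + [([10.0 + 0.1 * i, 10.0], "expert") for i in range(4)]
)
print(loocv_accuracy(data))  # well-separated clusters, so every held-out sample is correct -> 1.0
```

With only 22 participants, leave-one-out is a natural choice because every model fit uses all but one sample; the ± values reported in the abstract reflect the variability across those held-out predictions.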
Affiliation(s)
- Ramandeep Singh
- Neuro-Engineering Lab, Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi 110029, India
- Anoop Kant Godiyal
- Department of Physical Medicine and Rehabilitation, All India Institute of Medical Sciences, New Delhi 110029, India
- Parikshith Chavakula
- Neuro-Engineering Lab, Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi 110029, India
- Ashish Suri
- Neuro-Engineering Lab, Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi 110029, India
10
Lang F, Gerhäuser AS, Wild C, Wennberg E, Schmidt MW, Wagner M, Müller-Stich BP, Nickel F. Video-based learning of coping strategies for common errors improves laparoscopy training-a randomized study. Surg Endosc 2023; 37:4054-4064. [PMID: 36944741 PMCID: PMC10156798 DOI: 10.1007/s00464-023-09969-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2022] [Accepted: 02/19/2023] [Indexed: 03/23/2023]
Abstract
AIMS The aim of this study was to investigate whether shifting the focus to solution orientation and developing coping strategies for common errors could increase the efficiency of laparoscopic training and influence learning motivation. The concept of coping was notably defined by the psychologist Richard Lazarus [Lazarus and Folkman in Stress, appraisal, and coping, Springer publishing company, New York, 1984]. Based on this model, we examined observational learning with a coping model for its effectiveness as a basic teaching model in laparoscopic training. METHODS 55 laparoscopically naive medical students learned a standardized laparoscopic knot tying technique with video-based instructions. The control group was offered only a mastery video that showed the ideal technique, free from mistakes. The intervention group was instructed in active error analysis and, in addition to the mastery videos, watched freely selectable videos of common errors including solution strategies (coping model). RESULTS There was no statistically significant difference between the intervention and control groups in the number of knot tying attempts until proficiency was reached (18.8 ± 5.5 vs. 21.3 ± 6.5, p = 0.142). However, a significantly higher fraction of knots achieved technical proficiency in the intervention group after first use of the coping model (0.7 ± 0.1 vs. 0.6 ± 0.2, p = 0.026). Additionally, the proportion of blinded attempts that met the criteria for technical proficiency was significantly higher in the intervention group (60.9% vs. 38.0% in the control group, p = 0.021). The motivational subscore "interest" of the validated questionnaire on current motivation (QCM) was significantly higher in the intervention group (p = 0.032), as were subjective learning benefit (p = 0.002) and error awareness (p < 0.001).
CONCLUSION Video-based learning of coping strategies for common errors improves learning motivation and understanding of the technique, with a significant difference in its qualitative implementation in laparoscopy training. The ability to think in a solution-oriented, independent way is necessary in surgery in order to recognize and adequately deal with technical difficulties and complications.
Affiliation(s)
- F Lang
- Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- A S Gerhäuser
- Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- C Wild
- Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- E Wennberg
- Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- M W Schmidt
- Department of Gynecology and Obstetrics, University Medical Center of Johannes Gutenberg University, Mainz, Germany
- M Wagner
- Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- B P Müller-Stich
- Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- F Nickel
- Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
11
Karadza E, Haney CM, Limen EF, Müller PC, Kowalewski KF, Sandini M, Wennberg E, Schmidt MW, Felinska EA, Lang F, Salg G, Kenngott HG, Rangelova E, Mieog S, Vissers F, Korrel M, Zwart M, Sauvanet A, Loos M, Mehrabi A, de Santibanes M, Shrikhande SV, Abu Hilal M, Besselink MG, Müller-Stich BP, Hackert T, Nickel F. Development of biotissue training models for anastomotic suturing in pancreatic surgery. HPB (Oxford) 2023:S1365-182X(23)00041-2. [PMID: 36828741 DOI: 10.1016/j.hpb.2023.02.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 12/11/2022] [Accepted: 02/06/2023] [Indexed: 02/26/2023]
Abstract
BACKGROUND Anastomotic suturing is the Achilles heel of pancreatic surgery. Especially in laparoscopic and robotically assisted surgery, the pancreatic anastomosis should first be trained outside the operating room. Realistic training models are therefore needed. METHODS Models of the pancreas, small bowel, stomach, bile duct, and a realistic training torso were developed for training of anastomoses in pancreatic surgery. Pancreas models with soft and hard textures and small and large ducts were incrementally developed and evaluated. Experienced pancreatic surgeons (n = 44) evaluated haptic realism, rigidity, fragility of tissues, and realism of suturing and knot tying. RESULTS In the iterative development process, the pancreas models showed high haptic realism and the highest realism in suturing (4.6 ± 0.7 and 4.9 ± 0.5 on a 1-5 Likert scale, soft pancreas). The small bowel model showed the highest haptic realism (4.8 ± 0.4) and optimal wall thickness (0.1 ± 0.4 on a -2 to +2 Likert scale) and suturing behavior (0.1 ± 0.4). The bile duct models showed optimal wall thickness (0.3 ± 0.8 and 0.4 ± 0.8 on a -2 to +2 Likert scale) and optimal tissue fragility (0 ± 0.9 and 0.3 ± 0.7). CONCLUSION The biotissue training models showed high haptic realism and realistic suturing behavior. They are suitable for realistic training of anastomoses in pancreatic surgery, which may improve patient outcomes.
Affiliation(s)
- Emir Karadza
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
- Caelan M Haney
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
- Eldridge F Limen
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
- Philip C Müller
- Department of Surgery and Transplantation, Swiss HPB and Transplantation Center, University Hospital Zürich, Zürich, Switzerland
- Karl-Friedrich Kowalewski
- Department of Urology and Urooncological Surgery, University Medical Center Mannheim, Mannheim, Germany
- Marta Sandini
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
- Erica Wennberg
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
- Mona W Schmidt
- Department of Gynecology and Obstetrics, University Medical Center Mainz, Mainz, Germany
- Eleni A Felinska
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
- Franziska Lang
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
- Gabriel Salg
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
- Hannes G Kenngott
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
- Elena Rangelova
- Section for Upper Abdominal Surgery at Department of Surgery, Sahlgrenska University Hospital, Gothenburg, Sweden
- Sven Mieog
- Department of Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Frederique Vissers
- Department of Surgery, Amsterdam UMC, University of Amsterdam, Cancer Center Amsterdam, the Netherlands
- Maarten Korrel
- Department of Surgery, Amsterdam UMC, University of Amsterdam, Cancer Center Amsterdam, the Netherlands
- Maurice Zwart
- Department of Surgery, Amsterdam UMC, University of Amsterdam, Cancer Center Amsterdam, the Netherlands
- Alain Sauvanet
- Department of HPB Surgery, Hôpital Beaujon, Clichy-Paris, France
- Martin Loos
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
- Arianeb Mehrabi
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
- Martin de Santibanes
- Department of Surgery, Hospital Italiano de Buenos Aires, Buenos Aires, Argentina
- Mohammad Abu Hilal
- Department of Surgery, Instituto Fondazione Poliambulanza, Brescia, Italy
- Marc G Besselink
- Department of Surgery, Amsterdam UMC, University of Amsterdam, Cancer Center Amsterdam, the Netherlands
- Beat P Müller-Stich
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
- Thilo Hackert
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
- Felix Nickel
- Department of General, Visceral and Transplantation Surgery at Heidelberg University Hospital, Heidelberg, Germany
12
Jørgensen RJ, Olsen RG, Svendsen MBS, Stadeager M, Konge L, Bjerrum F. Comparing Simulator Metrics and Rater Assessment of Laparoscopic Suturing Skills. JOURNAL OF SURGICAL EDUCATION 2023; 80:302-310. [PMID: 37683093 DOI: 10.1016/j.jsurg.2022.09.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Revised: 08/17/2022] [Accepted: 09/25/2022] [Indexed: 09/10/2023]
Abstract
BACKGROUND Laparoscopic intracorporeal suturing is important to master, and competence should be ensured using an optimal method in a simulated environment before proceeding to real operations. The objectives of this study were to gather validity evidence for two tools for assessing laparoscopic intracorporeal knot tying and to compare rater-based assessment of laparoscopic intracorporeal suturing with assessment based on simulator metrics. METHODS Twenty-eight novices and 19 experienced surgeons performed four laparoscopic sutures on a Simball Box simulator twice. Two surgeons used the Intracorporeal Suturing Assessment Tool (ISAT) for blinded video rating. RESULTS The Composite Simulator Score (CSS) had higher test-retest reliability than the ISAT. The correlation between the number of performed procedures including suturing and the score was 0.51 (p < 0.001) for ISAT and 0.59 (p < 0.001) for CSS. Inter-rater reliability was 0.72 (p < 0.001) for test 1 and 0.53 (p < 0.001) for test 2. The pass/fail rates for ISAT and CSS were similar. CONCLUSION CSS and ISAT provide similar results for assessing laparoscopic suturing but assess different aspects of performance. Using simulator metrics and raters' assessments in combination should be considered for a more comprehensive evaluation of laparoscopic knot-tying competency.
Affiliation(s)
- Rikke Jeong Jørgensen
- Copenhagen Academy for Medical Education and Simulation, Centre for HR and Education, Capital Region, Copenhagen, Denmark
- Rikke Groth Olsen
- Copenhagen Academy for Medical Education and Simulation, Centre for HR and Education, Capital Region, Copenhagen, Denmark
- Morten Bo Søndergaard Svendsen
- Copenhagen Academy for Medical Education and Simulation, Centre for HR and Education, Capital Region, Copenhagen, Denmark
- Morten Stadeager
- Copenhagen Academy for Medical Education and Simulation, Centre for HR and Education, Capital Region, Copenhagen, Denmark; Department of Surgery, Hvidovre Hospital, Copenhagen University Hospital, Copenhagen, Denmark
- Lars Konge
- Copenhagen Academy for Medical Education and Simulation, Centre for HR and Education, Capital Region, Copenhagen, Denmark; University of Copenhagen, Copenhagen, Denmark
- Flemming Bjerrum
- Copenhagen Academy for Medical Education and Simulation, Centre for HR and Education, Capital Region, Copenhagen, Denmark; Department of Surgery, Herlev-Gentofte Hospital, Herlev, Denmark
13
Deep neural network architecture for automated soft surgical skills evaluation using objective structured assessment of technical skills criteria. Int J Comput Assist Radiol Surg 2023; 18:929-937. [PMID: 36694051 DOI: 10.1007/s11548-022-02827-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 12/22/2022] [Indexed: 01/26/2023]
Abstract
PURPOSE Classic methods of surgical skills evaluation tend to classify surgeon performance into discrete multi-categorical classes. Although this classification scheme has proven effective, it does not provide in-between evaluation levels. If such intermediate scoring levels were available, they would provide a more accurate evaluation of surgeon trainees. METHODS We propose a novel approach to assess surgical skills on a continuous scale ranging from 1 to 5. We show that the proposed approach is flexible enough to be used either for scores of global performance or for several sub-scores based on a surgical criteria set called Objective Structured Assessment of Technical Skills (OSATS). We established a combined CNN+BiLSTM architecture to take advantage of both temporal and spatial features of kinematic data. Our experimental validation relies on real-world data obtained from the JIGSAWS database. The surgeons are evaluated on three tasks: Knot-Tying, Needle-Passing and Suturing. The proposed neural network framework takes as input a sequence of 76 kinematic variables and produces a continuous output score ranging from 1 to 5, reflecting the quality of the performed surgical task. RESULTS Our proposed model achieves high-quality OSATS score predictions, with mean Spearman correlation coefficients between predicted and ground-truth outputs of 0.82, 0.60 and 0.65 for Knot-Tying, Needle-Passing and Suturing, respectively. To our knowledge, we are the first to achieve this regression performance using the OSATS criteria and the JIGSAWS kinematic data. CONCLUSION An effective deep learning tool was created for the purpose of surgical skills assessment. Our method could be a promising surgical skills evaluation tool for surgical training programs.
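The evaluation metric reported above, Spearman's rank correlation between the model's continuous score and the ground-truth OSATS score, can be computed from scratch by ranking both vectors and correlating the ranks. A minimal sketch; the score vectors below are invented for illustration and are not JIGSAWS data:

```python
def rank(values):
    """1-based ranks; tied values share the mean of their rank positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over the tie group
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

predicted = [2.1, 3.4, 4.8, 1.5, 3.9]  # hypothetical model outputs on the 1-5 scale
truth = [2, 3, 5, 1, 4]                # hypothetical ground-truth OSATS scores
print(round(spearman(predicted, truth), 2))  # ranks agree perfectly -> 1.0
```

Spearman (rather than Pearson) is the natural choice here because OSATS is an ordinal scale: the metric rewards getting the ordering of trainees right, not matching the exact score values.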
14
Ebina K, Abe T, Hotta K, Higuchi M, Furumido J, Iwahara N, Kon M, Miyaji K, Shibuya S, Lingbo Y, Komizunai S, Kurashima Y, Kikuchi H, Matsumoto R, Osawa T, Murai S, Tsujita T, Sase K, Chen X, Konno A, Shinohara N. Automatic assessment of laparoscopic surgical skill competence based on motion metrics. PLoS One 2022; 17:e0277105. [PMID: 36322585 PMCID: PMC9629630 DOI: 10.1371/journal.pone.0277105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 10/19/2022] [Indexed: 11/17/2022] Open
Abstract
The purpose of this study was to characterize the motion features of surgical devices associated with laparoscopic surgical competency and to build an automatic skill-credential system for porcine cadaver organ simulation training. Participants performed tissue dissection around the aorta, dividing vascular pedicles after applying Hem-o-lok clips (tissue dissection task), and parenchymal closure of the kidney (suturing task). Movements of the surgical devices were tracked by a motion capture (Mocap) system, and Mocap metrics were compared according to the level of surgical experience (experts: ≥50 laparoscopic surgeries, intermediates: 10-49, novices: 0-9) using the Kruskal-Wallis test and principal component analysis (PCA). Three machine-learning algorithms, support vector machine (SVM), PCA-SVM, and gradient boosting decision tree (GBDT), were utilized for discrimination of the surgical experience level. The accuracy of each model was evaluated by nested and repeated k-fold cross-validation. A total of 32 experts, 18 intermediates, and 20 novices participated in the present study. PCA revealed that efficiency-related metrics (e.g., path length) contributed significantly to PC 1 in both tasks. Regarding PC 2, speed-related metrics (e.g., velocity, acceleration, jerk) of right-hand devices contributed largely in the tissue dissection task, while those of left-hand devices did in the suturing task. For the three-group discrimination in the tissue dissection task, the GBDT method was superior to the other methods (median accuracy: 68.6%). In the suturing task, the SVM and PCA-SVM methods were superior to the GBDT method (57.4% and 58.4%, respectively). For the two-group discrimination (experts vs. intermediates/novices), the GBDT method achieved a median accuracy of 72.9% in the tissue dissection task, and the PCA-SVM method achieved a median accuracy of 69.2% in the suturing task.
Overall, the Mocap-based credential system using machine-learning classifiers provides a correct judgment rate of around 70% (two-group discrimination). Together with motion analysis, wet-lab simulation training could be a practical method for objectively assessing the surgical competence of trainees.
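The nested cross-validation used above keeps hyperparameter selection (inner folds) strictly separate from accuracy estimation (outer folds), so the reported accuracy is not inflated by tuning on the test data. The sketch below is a stand-in rather than the authors' code: it tunes the neighbourhood size k of a one-dimensional k-nearest-neighbour classifier on synthetic "path-length-like" values, whereas the study tuned SVM/GBDT models on multivariate Mocap metrics.

```python
import statistics

def knn_predict(train, x, k):
    """Majority vote among the k nearest training samples (1-D features)."""
    neighbours = sorted(train, key=lambda s: abs(s[0] - x))[:k]
    labels = [lbl for _, lbl in neighbours]
    return max(set(labels), key=labels.count)

def folds(samples, n):
    """Split into n folds by striding (deterministic, keeps both classes mixed)."""
    return [samples[i::n] for i in range(n)]

def nested_cv_accuracy(samples, outer_k=5, inner_k=3, k_grid=(1, 3, 5)):
    """Outer folds estimate accuracy; inner folds choose k without touching the test fold."""
    outer = folds(samples, outer_k)
    accs = []
    for i, test in enumerate(outer):
        train = [s for j, f in enumerate(outer) if j != i for s in f]

        def inner_acc(k):  # evaluate candidate k on the training data only
            inner = folds(train, inner_k)
            hits = total = 0
            for a, val in enumerate(inner):
                fit = [s for b, f in enumerate(inner) if b != a for s in f]
                hits += sum(knn_predict(fit, x, k) == y for x, y in val)
                total += len(val)
            return hits / total

        best_k = max(k_grid, key=inner_acc)
        accs.append(sum(knn_predict(train, x, best_k) == y for x, y in test) / len(test))
    return statistics.mean(accs)

# Synthetic 1-D "path length" values: experts short paths, novices long paths.
data = [(float(v), "expert") for v in range(10)] + [(float(v), "novice") for v in range(100, 110)]
print(nested_cv_accuracy(data))  # cleanly separated groups -> 1.0
```

Repeating this whole procedure with different fold assignments (the "repeated" part of the study's protocol) and taking the median is what yields the ~70% figures quoted above.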
Affiliation(s)
- Koki Ebina
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Takashige Abe
- Department of Urology, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Kiyohiko Hotta
- Department of Urology, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Madoka Higuchi
- Department of Urology, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Jun Furumido
- Department of Urology, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Naoya Iwahara
- Department of Urology, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Masafumi Kon
- Department of Urology, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Kou Miyaji
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Sayaka Shibuya
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Yan Lingbo
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Shunsuke Komizunai
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Yo Kurashima
- Hokkaido University Clinical Simulation Center, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Hiroshi Kikuchi
- Department of Urology, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Ryuji Matsumoto
- Department of Urology, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Takahiro Osawa
- Department of Urology, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Sachiyo Murai
- Department of Urology, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Teppei Tsujita
- Department of Mechanical Engineering, National Defense Academy of Japan, Yokosuka, Japan
- Kazuya Sase
- Department of Mechanical Engineering and Intelligent Systems, Tohoku Gakuin University, Tagajo, Japan
- Xiaoshuai Chen
- Graduate School of Science and Technology, Hirosaki University, Hirosaki, Japan
- Atsushi Konno
- Graduate School of Information Science and Technology, Hokkaido University, Sapporo, Japan
- Nobuo Shinohara
- Department of Urology, Hokkaido University Graduate School of Medicine, Sapporo, Japan
15
Romero P, Gerhaeuser A, Carstensen L, Kössler-Ebs J, Wennberg E, Schmidt MW, Müller-Stich BP, Günther P, Nickel F. Learning of Intracorporal Knot Tying in Minimally Invasive Surgery by Video or Expert Instruction. Eur J Pediatr Surg 2022; 33:228-233. [PMID: 35668643 DOI: 10.1055/a-1868-6050] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
INTRODUCTION Minimally invasive surgery skill laboratories are indispensable in training, especially for complex procedural skills such as intracorporal suturing and knot tying (ICKT). However, maintaining a laboratory is expensive, and specially trained teachers are in short supply. During the COVID-19 pandemic, in-person instruction was reduced to almost zero, while model learning via video instruction became an integral part of medical education. The aim of this study was to compare the learning effectiveness and efficiency of ICKT taught to laparoscopically inexperienced medical students through video versus direct expert instruction. MATERIALS AND METHODS A secondary analysis of two randomized controlled trials was performed. We drew data from students who were trained in ICKT with expert instruction (EXP, n = 30) and from students who were trained via video instruction (VID, n = 30). A laparoscopic box trainer including a laparoscope was used for ICKT. Objective Structured Assessment of Technical Skills (OSATS), knot quality, and total ICKT time were the assessment parameters in this study. Proficiency criteria were also defined for these parameters. RESULTS Students in the EXP group performed significantly better on the OSATS procedure-specific checklist (PSC) and in knot quality compared with students in the VID group, with no difference in task time. Of the students who reached the proficiency criteria for OSATS-PSC and knot quality, those in the EXP group required fewer attempts to do so than those in the VID group. Students in both groups improved significantly in all parameters over the first hour of evaluation. CONCLUSION For the laparoscopically inexperienced, training in ICKT through expert instruction presents an advantage over video-based self-study in the form of faster understanding of the procedure and the associated consistent achievement of good knot quality. Both teaching methods significantly improved participants' ICKT skills.
Affiliation(s)
- Philipp Romero
- Department of Surgery, Division of Pediatric Surgery, University of Heidelberg, Heidelberg, Germany
- Annabelle Gerhaeuser
- Department of General, Visceral, and Transplantation Surgery, University of Heidelberg, Heidelberg, Germany
- Leonie Carstensen
- Department of Surgery, Division of Pediatric Surgery, University of Heidelberg, Heidelberg, Germany
- Julia Kössler-Ebs
- Department of Surgery, Division of Pediatric Surgery, University of Heidelberg, Heidelberg, Germany
- Erica Wennberg
- Lady Davis Institute for Medical Research, Jewish General Hospital/McGill University, Montréal, Quebec, Canada
- Mona W Schmidt
- Department of Gynecology, University of Mainz, Mainz, Germany
- Beat P Müller-Stich
- Department of General, Visceral, and Transplantation Surgery, University of Heidelberg, Heidelberg, Germany
- Patrick Günther
- Department of Surgery, Division of Pediatric Surgery, University of Heidelberg, Heidelberg, Germany
- Felix Nickel
- Department of General, Visceral, and Transplantation Surgery, University of Heidelberg, Heidelberg, Germany
16
Pooransari P, Mehrabi S, Mirzamoradi M, Salehgargari S, Afrakhteh M. Comparison of Parameters of Fetal Doppler Echocardiography Between Mothers with and Without Diabetes. Int J Endocrinol Metab 2022; 20:e117524. [PMID: 36741331 PMCID: PMC9884331 DOI: 10.5812/ijem-117524] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Revised: 08/26/2022] [Accepted: 08/29/2022] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND The current study aimed to compare fetal myocardial function and ventricular thickness in diabetic and normal pregnancies. METHODS Women with singleton pregnancies in the second or third trimester who were referred for routine prenatal or anomaly ultrasounds from March 2020 to February 2021 were enrolled in the study. Women with a positive history of overt or gestational diabetes mellitus (GDM) were considered the case group (n = 50), and women without GDM were considered the control group (n = 50). The study did not include women with multifetal pregnancy, hypertension, intrauterine growth retardation, or polyhydramnios. A complete fetal Doppler echocardiography was performed to measure isovolumic relaxation time (IVRT), left myocardial performance index (MPI), E/A ratio, right and left ventricular wall thickness, and end-diastolic interventricular septal thickness (IVST). The data were analyzed using three types of decision tree (DT) algorithms, and the performance of each DT was measured on the testing dataset. RESULTS The frequency of IVRT > 41 milliseconds was significantly higher in the case group than in the control group. The mean MPI values were 0.53 ± 0.15 and 0.43 ± 0.09 (P < 0.05), and the mean IVST values were 3.3 ± 1.11 and 2.49 ± 0.55 mm (P < 0.05) in the case and control groups, respectively, but did not differ between subjects with overt diabetes and those with GDM (P > 0.05). Additionally, in the case group, the mean left MPI values were 0.57 ± 0.18 and 0.49 ± 0.12 in participants with poor and good glycemic control, respectively (P = 0.12). CONCLUSIONS Complete prenatal echocardiography performed in the second or third trimester is an appropriate tool for the diagnosis of fetal cardiac dysfunction in diabetic mothers and is suggested for diabetic mothers, even those with good glycemic control.
Affiliation(s)
- Parichehr Pooransari
- Department of Obstetrics and Gynecology, Shohada Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Sahar Mehrabi
- Department of Obstetrics and Gynecology, Shohada Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Corresponding Author: Department of Obstetrics and Gynecology, Shohada Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Masoumeh Mirzamoradi
- Department of Obstetrics and Gynecology, Mahdiyeh Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Soraya Salehgargari
- Department of Obstetrics and Gynecology, Shohada Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Maryam Afrakhteh
- Department of Obstetrics and Gynecology, Shohada Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
17
Artificial intelligence for renal cancer: From imaging to histology and beyond. Asian J Urol 2022; 9:243-252. [PMID: 36035341 PMCID: PMC9399557 DOI: 10.1016/j.ajur.2022.05.003] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2022] [Revised: 04/07/2022] [Accepted: 05/07/2022] [Indexed: 12/24/2022] Open
Abstract
Artificial intelligence (AI) has made considerable progress within the last decade and is the subject of contemporary literature. This trend is driven by improved computational abilities and increasing amounts of complex data that allow for new approaches in analysis and interpretation. Renal cell carcinoma (RCC) has a rising incidence since most tumors are now detected at an earlier stage due to improved imaging. This creates considerable challenges as approximately 10%–17% of kidney tumors are designated as benign in histopathological evaluation; however, certain co-morbid populations (the obese and elderly) have an increased peri-interventional risk. AI offers an alternative solution by helping to optimize precision and guidance for diagnostic and therapeutic decisions. This narrative review introduces basic principles and provides a comprehensive overview of current AI techniques for RCC. Currently, AI applications can be found in any aspect of RCC management including diagnostics, perioperative care, pathology, and follow-up. The most commonly applied models include neural networks, random forest, support vector machines, and regression. However, for implementation in daily practice, health care providers need to develop a basic understanding and establish interdisciplinary collaborations in order to standardize datasets, define meaningful endpoints, and unify interpretation.
|
18
|
Fuchs R, Van Praet KM, Bieck R, Kempfert J, Holzhey D, Kofler M, Borger MA, Jacobs S, Falk V, Neumuth T. A system for real-time multivariate feature combination of endoscopic mitral valve simulator training data. Int J Comput Assist Radiol Surg 2022; 17:1619-1631. [PMID: 35294716 PMCID: PMC9463288 DOI: 10.1007/s11548-022-02588-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Accepted: 02/24/2022] [Indexed: 11/29/2022]
Abstract
Purpose For an in-depth analysis of the learning benefits that a stereoscopic view presents during endoscopic training, surgeons required a custom surgical evaluation system enabling simulator-independent evaluation of endoscopic skills. Automated surgical skill assessment is urgently needed, since supervised training sessions and video analysis of recorded endoscope data are very time-consuming. This paper presents a first step towards a multimodal training evaluation system that is not restricted to certain training setups and fixed evaluation metrics. Methods With our system we performed data fusion of motion and muscle-action measurements during multiple endoscopic exercises. The exercises were performed by medical experts with different surgical skill levels, using either two- or three-dimensional endoscopic imaging. Based on the multimodal measurements, training features were calculated and their significance assessed by distance and variance analysis. Finally, the features were used for automatic classification of the endoscope mode used. Results During the study, 324 datasets from 12 participating volunteers were recorded, consisting of spatial information from the participants' joints and electromyographic information from the right forearm. Feature significance analysis showed distinct differences in significance, with amplitude-related muscle information and velocity information from hand and wrist being among the most significant features. The generated classification models predicted the endoscope type used with an accuracy exceeding 90%. Conclusion The results support the validity of our setup and feature calculation, while their analysis shows significant distinctions and can be used to identify the endoscopic view mode used, something not apparent when analyzing the completion times of each exercise attempt. 
The presented work is therefore a first step toward future developments, with which multivariate feature vectors can be classified automatically in real-time to evaluate endoscopic training and track learning progress. Supplementary Information The online version contains supplementary material available at 10.1007/s11548-022-02588-1.
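The fusion step described above, combining motion-derived velocity features with forearm-EMG amplitude features into one vector per exercise window, can be sketched roughly as follows; the sampling rate and all numbers are illustrative assumptions, not the paper's data:

```python
import math

# Sketch of multimodal feature fusion: combine hand-motion speed with
# forearm-EMG amplitude into a single feature vector per exercise window.
# Sampling rate, window contents, and values are hypothetical illustrations.

def mean_speed(positions, dt):
    """Mean 3D speed (units/s) from consecutive position samples."""
    speeds = [math.dist(p0, p1) / dt for p0, p1 in zip(positions, positions[1:])]
    return sum(speeds) / len(speeds)

def emg_rms(signal):
    """Root-mean-square amplitude of an EMG window."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

# Hypothetical window: wrist positions (metres) sampled at 10 Hz, plus an EMG trace.
positions = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0), (0.02, 0.01, 0.0)]
emg = [0.1, -0.2, 0.15, -0.05]

feature_vector = (mean_speed(positions, dt=0.1), emg_rms(emg))
print(feature_vector)
```

Vectors of this kind, one per window and modality, are what a downstream classifier would consume to predict the endoscope mode.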
Affiliation(s)
- Reinhard Fuchs
- Innovation Center Computer Assisted Surgery, University of Leipzig, Leipzig, Germany.
| | - Karel M Van Praet
- Department of Cardiothoracic and Vascular Surgery, German Heart Center Berlin, Berlin, Germany.,DZHK (German Centre for Cardiovascular Research), Partner Site Berlin, Berlin, Germany
| | - Richard Bieck
- Innovation Center Computer Assisted Surgery, University of Leipzig, Leipzig, Germany
| | - Jörg Kempfert
- Department of Cardiothoracic and Vascular Surgery, German Heart Center Berlin, Berlin, Germany.,DZHK (German Centre for Cardiovascular Research), Partner Site Berlin, Berlin, Germany
| | - David Holzhey
- Department of Cardiovascular Surgery, Heart Center Leipzig, Leipzig, Germany
| | - Markus Kofler
- Department of Cardiothoracic and Vascular Surgery, German Heart Center Berlin, Berlin, Germany.,DZHK (German Centre for Cardiovascular Research), Partner Site Berlin, Berlin, Germany
| | - Michael A Borger
- Department of Cardiovascular Surgery, Heart Center Leipzig, Leipzig, Germany
| | - Stephan Jacobs
- Department of Cardiothoracic and Vascular Surgery, German Heart Center Berlin, Berlin, Germany.,DZHK (German Centre for Cardiovascular Research), Partner Site Berlin, Berlin, Germany
| | - Volkmar Falk
- Department of Cardiothoracic and Vascular Surgery, German Heart Center Berlin, Berlin, Germany.,DZHK (German Centre for Cardiovascular Research), Partner Site Berlin, Berlin, Germany.,Department of Cardiovascular Surgery, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany.,Translational Cardiovascular Technologies, Institute of Translational Medicine, Department of Health Sciences and Technology, Swiss Federal Institute of Technology (ETH) Zurich, Zurich, Switzerland
| | - Thomas Neumuth
- Innovation Center Computer Assisted Surgery, University of Leipzig, Leipzig, Germany
| |
|
19
|
Lam K, Chen J, Wang Z, Iqbal FM, Darzi A, Lo B, Purkayastha S, Kinross JM. Machine learning for technical skill assessment in surgery: a systematic review. NPJ Digit Med 2022; 5:24. [PMID: 35241760 PMCID: PMC8894462 DOI: 10.1038/s41746-022-00566-0] [Citation(s) in RCA: 36] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Accepted: 01/21/2022] [Indexed: 12/18/2022] Open
Abstract
Accurate and objective performance assessment is essential for both trainees and certified surgeons. However, existing methods can be time consuming, labor intensive, and subject to bias. Machine learning (ML) has the potential to provide rapid, automated, and reproducible feedback without the need for expert reviewers. We aimed to systematically review the literature and determine the ML techniques used for technical surgical skill assessment and identify challenges and barriers in the field. A systematic literature search, in accordance with the PRISMA statement, was performed to identify studies detailing the use of ML for technical skill assessment in surgery. Of the 1896 studies that were retrieved, 66 studies were included. The most common ML methods used were Hidden Markov Models (HMM, 14/66), Support Vector Machines (SVM, 17/66), and Artificial Neural Networks (ANN, 17/66). 40/66 studies used kinematic data, 19/66 used video or image data, and 7/66 used both. Studies assessed the performance of benchtop tasks (48/66), simulator tasks (10/66), and real-life surgery (8/66). Accuracy rates of over 80% were achieved, although tasks and participants varied between studies. Barriers to progress in the field included a focus on basic tasks, lack of standardization between studies, and lack of datasets. ML has the potential to produce accurate and objective surgical skill assessment through the use of methods including HMM, SVM, and ANN. Future ML-based assessment tools should move beyond the assessment of basic tasks and towards real-life surgery and provide interpretable feedback with clinical value for the surgeon. PROSPERO: CRD42020226071
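Of the model families the review counts (HMM, SVM, ANN), the HMM approach is the most distinctive for skill assessment: one model is trained per skill level, and a new gesture sequence is assigned to whichever model gives it the higher likelihood. A toy sketch of that scheme, with entirely made-up probabilities and gesture codes:

```python
# Sketch of HMM-based skill classification as surveyed in the review: one HMM
# per skill level scores an observed gesture sequence; the higher-likelihood
# model wins. All probabilities and gesture codes are made-up illustrations.

def forward_likelihood(obs, start, trans, emit):
    """Probability of a discrete observation sequence under an HMM (forward algorithm)."""
    alpha = {s: start[s] * emit[s][obs[0]] for s in start}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in alpha) * emit[s][o]
                 for s in start}
    return sum(alpha.values())

# Hypothetical novice model: uninformative emissions, sluggish transitions.
novice = dict(
    start={"reach": 0.5, "grasp": 0.5},
    trans={"reach": {"reach": 0.7, "grasp": 0.3}, "grasp": {"reach": 0.3, "grasp": 0.7}},
    emit={"reach": {"A": 0.5, "B": 0.5}, "grasp": {"A": 0.5, "B": 0.5}},
)
# Hypothetical expert model: decisive transitions and clean emissions.
expert = dict(
    start={"reach": 0.9, "grasp": 0.1},
    trans={"reach": {"reach": 0.2, "grasp": 0.8}, "grasp": {"reach": 0.8, "grasp": 0.2}},
    emit={"reach": {"A": 0.9, "B": 0.1}, "grasp": {"A": 0.1, "B": 0.9}},
)

obs = ["A", "B", "A", "B"]  # hypothetical tool-motion gesture codes
label = ("expert" if forward_likelihood(obs, **expert) > forward_likelihood(obs, **novice)
         else "novice")
print(label)
```

In the reviewed studies the observations would be quantized kinematic gestures rather than two symbolic codes, but the likelihood-comparison structure is the same.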
Affiliation(s)
- Kyle Lam
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Junhong Chen
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Zeyu Wang
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Fahad M Iqbal
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Ara Darzi
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Benny Lo
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| | - Sanjay Purkayastha
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK.
| | - James M Kinross
- Department of Surgery and Cancer, 10th Floor Queen Elizabeth the Queen Mother Building, St Mary's Hospital, Imperial College, London, W2 1NY, UK
| |
|
20
|
Kirubarajan A, Young D, Khan S, Crasto N, Sobel M, Sussman D. Artificial Intelligence and Surgical Education: A Systematic Scoping Review of Interventions. JOURNAL OF SURGICAL EDUCATION 2022; 79:500-515. [PMID: 34756807 DOI: 10.1016/j.jsurg.2021.09.012] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 07/21/2021] [Accepted: 09/16/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE To synthesize peer-reviewed evidence related to the use of artificial intelligence (AI) in surgical education. DESIGN: We conducted and reported a scoping review according to the standards outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews guideline and the fourth edition of the Joanna Briggs Institute Reviewer's Manual. We systematically searched eight interdisciplinary databases including MEDLINE-Ovid, ERIC, EMBASE, CINAHL, Web of Science: Core Collection, Compendex, Scopus, and IEEE Xplore. Databases were searched from inception until the date of search on April 13, 2021. SETTING/PARTICIPANTS We only examined original, peer-reviewed interventional studies that self-described as AI interventions, focused on medical education, and were relevant to surgical trainees (defined as medical or dental students, postgraduate residents, or surgical fellows) within the title and abstract (see Table 2). Animal, cadaveric, and in vivo studies were not eligible for inclusion. RESULTS After systematically searching eight databases and 4255 citations, our scoping review identified 49 studies relevant to artificial intelligence in surgical education. We found diverse interventions related to the evaluation of surgical competency, personalization of surgical education, and improvement of surgical education materials across surgical specialties. Many studies used existing surgical education materials, such as the Objective Structured Assessment of Technical Skills framework or the JHU-ISI Gesture and Skill Assessment Working Set database. Though most studies did not provide outcomes related to the implementation in medical schools (such as cost-effectiveness analyses or trainee feedback), there are numerous promising interventions. In particular, many studies noted high accuracy in the objective characterization of surgical skill sets. 
These interventions could be further used to identify at-risk surgical trainees or evaluate teaching methods. CONCLUSIONS There are promising applications for AI in surgical education, particularly for the assessment of surgical competencies, though further evidence is needed regarding implementation and applicability.
Affiliation(s)
| | - Dylan Young
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada
| | - Shawn Khan
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
| | - Noelle Crasto
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada
| | - Mara Sobel
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST) at Ryerson University and St. Michael's Hospital, Toronto, Ontario, Canada
| | - Dafna Sussman
- Department of Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, Ontario, Canada; Institute for Biomedical Engineering, Science and Technology (iBEST) at Ryerson University and St. Michael's Hospital, Toronto, Ontario, Canada; Department of Obstetrics and Gynaecology, University of Toronto, Toronto, Ontario, Canada; The Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, Ontario, Canada
| |
|
21
|
Junger D, Frommer SM, Burgert O. State-of-the-art of situation recognition systems for intraoperative procedures. Med Biol Eng Comput 2022; 60:921-939. [PMID: 35178622 PMCID: PMC8933302 DOI: 10.1007/s11517-022-02520-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2020] [Accepted: 01/30/2022] [Indexed: 11/05/2022]
Abstract
One of the key challenges for automatic assistance is the support of actors in the operating room depending on the status of the procedure. Therefore, context information collected in the operating room is used to gain knowledge about the current situation. In the literature, solutions already exist for specific use cases, but it is unclear to what extent these approaches can be transferred to other conditions. We conducted a comprehensive literature search on existing situation recognition systems for the intraoperative area, covering 274 articles and 95 cross-references published between 2010 and 2019. We contrasted and compared 58 identified approaches based on defined aspects such as used sensor data or application area. In addition, we discussed applicability and transferability. Most of the papers focus on video data for recognizing situations within laparoscopic and cataract surgeries. Not all of the approaches can be used online for real-time recognition. Using different methods, good results with recognition accuracies above 90% could be achieved. Overall, transferability is less addressed. The applicability of approaches to other circumstances seems to be possible to a limited extent. Future research should place a stronger focus on adaptability. The literature review shows differences within existing approaches for situation recognition and outlines research trends.
Affiliation(s)
- D Junger
- School of Informatics, Research Group Computer Assisted Medicine (CaMed), Reutlingen University, Alteburgstr. 150, 72762, Reutlingen, Germany.
| | - S M Frommer
- School of Informatics, Research Group Computer Assisted Medicine (CaMed), Reutlingen University, Alteburgstr. 150, 72762, Reutlingen, Germany
| | - O Burgert
- School of Informatics, Research Group Computer Assisted Medicine (CaMed), Reutlingen University, Alteburgstr. 150, 72762, Reutlingen, Germany
| |
|
22
|
Stenmark M, Omerbašić E, Magnusson M, Andersson V, Abrahamsson M, Tran PK. Vision-based Tracking of Surgical Motion during Live Open-Heart Surgery. J Surg Res 2021; 271:106-116. [PMID: 34879315 DOI: 10.1016/j.jss.2021.10.025] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2021] [Revised: 09/01/2021] [Accepted: 10/10/2021] [Indexed: 01/01/2023]
Abstract
BACKGROUND Motion tracking during live surgeries may be used to assess surgeons' intra-operative performance, provide feedback, and predict outcome. Current assessment protocols rely on human observations, controlled laboratory settings, or tracking technologies not suitable for live operating theatres. In this study, a novel method for motion tracking during live open-heart surgery was developed and evaluated. MATERIALS AND METHODS A 3D-printed 'tracking die' with miniature markers was fitted to DeBakey forceps. The surgical field was recorded with a video camera mounted above the operating table. Software was developed for tracking the die from the recordings. The system was tested on five open-heart procedures. Surgeons were asked to report subjective system-related concerns during live surgery and to assess the weight of the die in a blind test. The accuracy of the system was evaluated against ground truth generated by a robot. RESULTS The 3D-printed die weighed 6 g, adding approximately 13% to the mass of the forceps, and tolerated sterilization with hydrogen peroxide. Surgeons sensed a shift in the balance of the instrument but could not correctly identify the change in weight in a blind test. When two or more markers were detected, the 3D position estimate was on average within 2-3 mm and 1.1-2.6 degrees of ground truth. Computational time was 30-50 ms per frame on a standard laptop. CONCLUSIONS The vision-based motion tracking system was applicable for live surgeries with negligible inconvenience to the surgeons. Motion data were extracted with acceptable accuracy and speed at low computational cost.
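The accuracy evaluation described above reduces to comparing estimated marker positions against robot-generated ground-truth positions and reporting the mean Euclidean error. A minimal sketch, with hypothetical coordinates in millimetres rather than the study's data:

```python
import math

# Sketch of the accuracy-evaluation step: compare estimated 3D positions
# against robot-generated ground truth and report the mean Euclidean error.
# All coordinates (millimetres) are hypothetical illustrations.

def mean_position_error(estimates, ground_truth):
    """Mean Euclidean distance (same units as input) between paired 3D positions."""
    errors = [math.dist(e, g) for e, g in zip(estimates, ground_truth)]
    return sum(errors) / len(errors)

ground_truth = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 10.0, 0.0)]
estimates = [(1.0, 0.0, 0.0), (10.0, 2.0, 0.0), (10.0, 10.0, 3.0)]

print(mean_position_error(estimates, ground_truth))  # mean error in mm
```

The angular accuracy quoted in the abstract would be computed analogously, over estimated versus ground-truth orientations.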
Affiliation(s)
- Maj Stenmark
- Pediatric Cardiac Surgery, Children's Heart Center, Skåne University Hospital, Lund, Sweden.
| | - Edin Omerbašić
- Pediatric Cardiac Surgery, Children's Heart Center, Skåne University Hospital, Lund, Sweden
| | - Måns Magnusson
- Pediatric Cardiac Surgery, Children's Heart Center, Skåne University Hospital, Lund, Sweden
| | - Viktor Andersson
- Pediatric Cardiac Surgery, Children's Heart Center, Skåne University Hospital, Lund, Sweden
| | - Martin Abrahamsson
- Pediatric Cardiac Surgery, Children's Heart Center, Skåne University Hospital, Lund, Sweden
| | - Phan-Kiet Tran
- Pediatric Cardiac Surgery, Children's Heart Center, Skåne University Hospital, Lund, Sweden; Department of Clinical Sciences, Lund University, Lund, Sweden
| |
|
23
|
Bilgic E, Gorgy A, Yang A, Cwintal M, Ranjbar H, Kahla K, Reddy D, Li K, Ozturk H, Zimmermann E, Quaiattini A, Abbasgholizadeh-Rahimi S, Poenaru D, Harley JM. Exploring the roles of artificial intelligence in surgical education: A scoping review. Am J Surg 2021; 224:205-216. [PMID: 34865736 DOI: 10.1016/j.amjsurg.2021.11.023] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 11/19/2021] [Accepted: 11/22/2021] [Indexed: 01/02/2023]
Abstract
BACKGROUND Technology-enhanced teaching and learning, including Artificial Intelligence (AI) applications, has started to evolve in surgical education. Hence, the purpose of this scoping review is to explore the current and future roles of AI in surgical education. METHODS Nine bibliographic databases were searched from January 2010 to January 2021. Full-text articles were included if they focused on AI in surgical education. RESULTS Out of 14,008 unique sources of evidence, 93 were included. Of the 93, 84 were conducted in the simulation setting, and 89 targeted technical skills. Fifty-six studies focused on skills assessment/classification, and 36 used multiple AI techniques. Also, increasing sample size, having balanced data, and using AI to provide feedback were major future directions mentioned by the authors. CONCLUSIONS AI can help optimize the education of trainees, and our results can help educators and researchers identify areas that need further investigation.
Affiliation(s)
- Elif Bilgic
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Andrew Gorgy
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Alison Yang
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Michelle Cwintal
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Hamed Ranjbar
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Kalin Kahla
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Dheeksha Reddy
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Kexin Li
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Helin Ozturk
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Eric Zimmermann
- Department of Surgery, McGill University, Montreal, Quebec, Canada
| | - Andrea Quaiattini
- Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, Canada; Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada
| | - Samira Abbasgholizadeh-Rahimi
- Department of Family Medicine, McGill University, Montreal, Quebec, Canada; Department of Electrical and Computer Engineering, McGill University, Montreal, Canada; Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Canada; Mila Quebec AI Institute, Montreal, Canada
| | - Dan Poenaru
- Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada; Department of Pediatric Surgery, McGill University, Canada
| | - Jason M Harley
- Department of Surgery, McGill University, Montreal, Quebec, Canada; Institute of Health Sciences Education, McGill University, Montreal, Quebec, Canada; Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada; Steinberg Centre for Simulation and Interactive Learning, McGill University, Montreal, Quebec, Canada.
| |
|
24
|
Bamba Y, Ogawa S, Itabashi M, Kameoka S, Okamoto T, Yamamoto M. Automated recognition of objects and types of forceps in surgical images using deep learning. Sci Rep 2021; 11:22571. [PMID: 34799625 PMCID: PMC8604928 DOI: 10.1038/s41598-021-01911-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Accepted: 10/26/2021] [Indexed: 12/15/2022] Open
Abstract
Analysis of operative data with convolutional neural networks (CNNs) is expected to improve the knowledge and professional skills of surgeons. Identification of objects in videos recorded during surgery can be used for surgical skill assessment and surgical navigation. The objectives of this study were to recognize objects and types of forceps in surgical videos acquired during colorectal surgeries and evaluate detection accuracy. Images (n = 1818) were extracted from 11 surgical videos for model training, and another 500 images were extracted from 6 additional videos for validation. The following 5 types of forceps were selected for annotation: ultrasonic scalpel, grasping, clip, angled (Maryland and right-angled), and spatula. IBM Visual Insights software was used, which incorporates the most popular open-source deep-learning CNN frameworks. In total, 1039/1062 (97.8%) forceps were correctly identified among 500 test images. Calculated recall and precision values were as follows: grasping forceps, 98.1% and 98.0%; ultrasonic scalpel, 99.4% and 93.9%; clip forceps, 96.2% and 92.7%; angled forceps, 94.9% and 100%; and spatula forceps, 98.1% and 94.5%, respectively. Forceps recognition can be achieved with high accuracy using deep-learning models, providing the opportunity to evaluate how forceps are used in various operations.
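The per-class recall and precision figures quoted above come from counting, for each forceps type, correct detections against missed and spurious ones. A minimal sketch of that bookkeeping, with made-up labels rather than the study's 500 test images:

```python
# Sketch of the per-class recall/precision evaluation used for forceps
# recognition. Labels below are hypothetical illustrations, not study data.

def precision_recall(true_labels, predicted_labels, cls):
    """Precision and recall for one class from paired true/predicted labels."""
    pairs = list(zip(true_labels, predicted_labels))
    tp = sum(1 for t, p in pairs if t == cls and p == cls)  # correct detections
    fp = sum(1 for t, p in pairs if t != cls and p == cls)  # spurious detections
    fn = sum(1 for t, p in pairs if t == cls and p != cls)  # missed instances
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

true_labels = ["grasping", "clip", "grasping", "spatula", "clip", "grasping"]
predicted = ["grasping", "clip", "clip", "spatula", "clip", "grasping"]

p, r = precision_recall(true_labels, predicted, "grasping")
print(p, r)
```

Repeating this per class over all annotated test images yields a recall/precision table like the one reported in the abstract.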
Affiliation(s)
- Yoshiko Bamba
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan.
| | - Shimpei Ogawa
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
| | - Michio Itabashi
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
| | | | - Takahiro Okamoto
- Department of Surgery 2, Tokyo Women's Medical University, Tokyo, Japan
| | - Masakazu Yamamoto
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
| |
|
25
|
Romero P, Carstensen L, Kössler‐Ebs J, Wennberg E, Müller‐Stich BP, Nickel F, Günther P. Learning and application of intracorporal slipping knot techniques in minimally invasive surgery. SURGICAL PRACTICE 2021. [DOI: 10.1111/1744-1633.12534] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Affiliation(s)
- Philipp Romero
- Department of Surgery, Division of Pediatric Surgery University of Heidelberg Heidelberg Germany
| | - Leonie Carstensen
- Department of Surgery, Division of Pediatric Surgery University of Heidelberg Heidelberg Germany
| | - Julia Kössler‐Ebs
- Department of Surgery, Division of Pediatric Surgery University of Heidelberg Heidelberg Germany
| | - Erica Wennberg
- Lady Davis Institute for Medical Research Jewish General Hospital/McGill University Montréal Québec Canada
| | - Beat P. Müller‐Stich
- Department of General, Visceral, and Transplantation Surgery University of Heidelberg Heidelberg Germany
| | - Felix Nickel
- Department of General, Visceral, and Transplantation Surgery University of Heidelberg Heidelberg Germany
| | - Patrick Günther
- Department of Surgery, Division of Pediatric Surgery University of Heidelberg Heidelberg Germany
| |
|
26
|
Blackham RE, Hamdorf JM. Video-Rated Performance Assessment of Simulated Laparoscopic Sleeve Gastrectomy: Validation of a Sleeve Gastrectomy Rating Scale. Obes Surg 2021; 31:3188-3193. [PMID: 33895975 DOI: 10.1007/s11695-021-05422-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2020] [Revised: 04/06/2021] [Accepted: 04/07/2021] [Indexed: 11/24/2022]
Abstract
PURPOSE The global rise in obesity has been accompanied by widespread uptake of the procedure of laparoscopic sleeve gastrectomy. Despite this, the key components for performance assessment have not been standardized for this procedure. The aim of this study was to develop and demonstrate the validity of a Sleeve Objective Structured Assessment of Technical Skill (SOSATS) scale for learning the procedure of laparoscopic sleeve gastrectomy (LSG). MATERIALS AND METHODS The SOSATS evaluation tool was based upon critical steps of the LSG procedure. Both the SOSATS and the Global Rating Scale (GRS) component of the Objective Structured Assessment of Technical Skill (OSATS) tools were utilized in a prospective single-blinded observational study design of 26 video recordings of surgeons performing sleeve gastrectomies using a novel simulation. The surgeons were allocated into "novice" or "experienced" groups dependent on case-volume criteria. Surgical performance was assessed using both the GRS and SOSATS scales by blinded assessors of the video recordings. RESULTS Face and content validity were demonstrated for key components of the simulated model. An overall positive correlation between the accepted OSATS Global Rating Scale and the SOSATS procedural scale indicated concurrent validity. Construct validity was established for a number of areas of the SOSATS scale. CONCLUSION The SOSATS scale is shown to exhibit construct and concurrent validity in the simulated setting for the procedure of sleeve gastrectomy. Utilizing this scale to review surgical performance is potentially feasible and reliable but would require further research prior to use in high-stakes assessment processes such as credentialing.
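Concurrent validity of the kind reported above is typically quantified by correlating each video's score on the established scale (GRS) with its score on the new scale (SOSATS). A minimal sketch with hypothetical scores, not the study's ratings:

```python
import math

# Sketch of a concurrent-validity check: correlate each recording's GRS score
# with its SOSATS score. All scores below are hypothetical illustrations.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

grs_scores = [12, 18, 22, 27, 31]      # hypothetical OSATS Global Rating Scale scores
sosats_scores = [15, 20, 24, 30, 33]   # hypothetical SOSATS procedural scores

r = pearson_r(grs_scores, sosats_scores)
print(round(r, 3))
```

A strongly positive coefficient across the rated videos is what supports the "overall positive correlation" claim; rank-based alternatives such as Spearman's rho are common when scores are ordinal.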
Affiliation(s)
- Ruth E Blackham
- CTEC, Medical School, The University of Western Australia, Perth, Western Australia. .,Western Surgical Health, Nedlands, Western Australia.
| | - Jeffrey M Hamdorf
- CTEC, Medical School, The University of Western Australia, Perth, Western Australia.,Western Surgical Health, Nedlands, Western Australia
| |
|
27
|
Garrow CR, Kowalewski KF, Li L, Wagner M, Schmidt MW, Engelhardt S, Hashimoto DA, Kenngott HG, Bodenstedt S, Speidel S, Müller-Stich BP, Nickel F. Machine Learning for Surgical Phase Recognition: A Systematic Review. Ann Surg 2021; 273:684-693. [PMID: 33201088 DOI: 10.1097/sla.0000000000004425] [Citation(s) in RCA: 110] [Impact Index Per Article: 36.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
OBJECTIVE To provide an overview of ML models and data streams utilized for automated surgical phase recognition. BACKGROUND Phase recognition identifies different steps and phases of an operation. ML is an evolving technology that allows analysis and interpretation of huge data sets. Automation of phase recognition based on data inputs is essential for optimization of workflow, surgical training, intraoperative assistance, patient safety, and efficiency. METHODS A systematic review was performed according to the Cochrane recommendations and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. PubMed, Web of Science, IEEE Xplore, Google Scholar, and CiteSeerX were searched. Literature describing phase recognition based on ML models and the capture of intraoperative signals during general surgery procedures was included. RESULTS A total of 2254 titles/abstracts were screened, and 35 full-texts were included. The most commonly used ML models were Hidden Markov Models and Artificial Neural Networks, with a trend towards higher complexity over time. The most frequently used data types were feature learning from surgical videos and manual annotation of instrument use. Laparoscopic cholecystectomy was used most commonly, often achieving accuracy rates over 90%, though there was no consistent standardization of defined phases. CONCLUSIONS ML for surgical phase recognition can be performed with high accuracy, depending on the model, data type, and complexity of surgery. Different intraoperative data inputs such as video and instrument type can successfully be used. Most ML models still require significant amounts of manual expert annotations for training. The ML models may drive surgical workflow towards standardization, efficiency, and objectivity to improve patient outcome in the future. REGISTRATION PROSPERO CRD42018108907.
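Frame-wise phase predictions from the models the review surveys are typically noisy, so a common post-processing step is temporal smoothing before reporting a phase sequence. A toy sketch using a sliding majority vote; the phase names and window size are illustrative assumptions, not a method from any specific included study:

```python
from collections import Counter

# Sketch of a common post-processing step in video-based phase recognition:
# smooth noisy per-frame phase predictions with a sliding majority vote.
# Phase names and window size are illustrative assumptions.

def majority_smooth(frame_phases, window=3):
    """Replace each frame's phase with the majority label in a centered window."""
    half = window // 2
    smoothed = []
    for i in range(len(frame_phases)):
        neighborhood = frame_phases[max(0, i - half): i + half + 1]
        smoothed.append(Counter(neighborhood).most_common(1)[0][0])
    return smoothed

# A noisy hypothetical prediction stream for laparoscopic cholecystectomy phases.
frames = ["prep", "prep", "dissection", "prep", "dissection",
          "dissection", "clipping", "dissection", "clipping", "clipping"]
print(majority_smooth(frames))
```

The smoothed stream recovers three contiguous phase segments from the flickering raw predictions, which is closer to how an operation actually unfolds.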
Affiliation(s)
- Carly R Garrow
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| | - Karl-Friedrich Kowalewski
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
- Department of Urology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
| | - Linhong Li
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| | - Martin Wagner
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| | - Mona W Schmidt
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| | - Sandy Engelhardt
- Department of Computer Science, Mannheim University of Applied Sciences, Mannheim, Germany
| | - Daniel A Hashimoto
- Department of Surgery, Massachusetts General Hospital, Boston, Massachusetts
| | - Hannes G Kenngott
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| | - Sebastian Bodenstedt
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
| | - Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
| | - Beat P Müller-Stich
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| | - Felix Nickel
- Department of General, Visceral, and Transplantation Surgery, University Hospital of Heidelberg, Heidelberg, Germany
| |
Collapse
|
28
Lefor AK, Harada K, Dosis A, Mitsuishi M. Motion analysis of the JHU-ISI Gesture and Skill Assessment Working Set II: learning curve analysis. Int J Comput Assist Radiol Surg 2021; 16:589-595. [PMID: 33723706 DOI: 10.1007/s11548-021-02339-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 10/06/2020] [Accepted: 02/25/2021] [Indexed: 01/12/2023]
Abstract
PURPOSE The JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) dataset is used to develop robotic surgery skill assessment tools, but there has been no detailed analysis of this dataset. The aim of this study was to perform a learning curve analysis of the existing JIGSAWS dataset. METHODS Five trials were performed in JIGSAWS by eight participants (four novices, two intermediates, and two experts) for three exercises (suturing, knot-tying, and needle passing). Global Rating Scale scores and time, path length, and movements were analyzed quantitatively and, by graphical analysis, qualitatively. RESULTS There were no significant differences in Global Rating Scale scores over time. Time in the suturing exercise and path length in needle passing showed significant differences; other kinematic parameters did not. Qualitative analysis shows a learning curve only for suturing, and cumulative sum analysis suggests completion of the suturing learning curve by trial 4. CONCLUSIONS The existing JIGSAWS dataset does not show a quantitative learning curve for Global Rating Scale scores or most kinematic parameters, which may be due in part to the limited size of the dataset. Qualitative analysis shows a learning curve for suturing, with cumulative sum analysis suggesting its completion by trial 4. An expanded dataset is needed to facilitate subset analyses.
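The cumulative sum (CUSUM) analysis referenced for the suturing learning curve can be sketched as a running sum of deviations from a target time: the curve rises while trials exceed the target and turns down once the trainee consistently beats it. The target and trial times below are hypothetical, not drawn from JIGSAWS.

```python
# Minimal CUSUM learning-curve sketch: the running sum of deviations from a
# target completion time. A rising curve means trials still exceed the target;
# a downturn suggests the learning curve is being completed. Times are hypothetical.

def cusum(times, target):
    """Return the cumulative sums S_i = sum_{j<=i} (t_j - target)."""
    total, out = 0.0, []
    for t in times:
        total += t - target
        out.append(total)
    return out

# Hypothetical suturing times (seconds) over five trials, target 120 s.
trial_times = [180, 150, 130, 115, 110]
curve = cusum(trial_times, target=120.0)
print(curve)
```

In practice the inflection point of this curve, rather than raw trial times, is what is read off as "learning curve completed".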
Affiliation(s)
- Alan Kawarai Lefor: Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan
- Kanako Harada: Mechanical Engineering, School of Engineering, The University of Tokyo, Tokyo, Japan; Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan
- Mamoru Mitsuishi: Mechanical Engineering, School of Engineering, The University of Tokyo, Tokyo, Japan; Bioengineering, School of Engineering, The University of Tokyo, Tokyo, Japan
29
Willuth E, Hardon SF, Lang F, Haney CM, Felinska EA, Kowalewski KF, Müller-Stich BP, Horeman T, Nickel F. Robotic-assisted cholecystectomy is superior to laparoscopic cholecystectomy in the initial training for surgical novices in an ex vivo porcine model: a randomized crossover study. Surg Endosc 2021; 36:1064-1079. [PMID: 33638104 PMCID: PMC8758618 DOI: 10.1007/s00464-021-08373-6] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Received: 11/22/2020] [Accepted: 02/09/2021] [Indexed: 12/11/2022]
Abstract
Background Robotic-assisted surgery (RAS) potentially reduces workload and shortens the surgical learning curve compared to conventional laparoscopy (CL). The present study aimed to compare robotic-assisted cholecystectomy (RAC) to laparoscopic cholecystectomy (LC) in the initial learning phase for novices. Methods In a randomized crossover study, medical students (n = 40) in their clinical years performed both LC and RAC on a cadaveric porcine model. After standardized instructions and basic skill training, group 1 started with RAC and then performed LC, while group 2 started with LC and then performed RAC. The primary endpoint was surgical performance measured with the Objective Structured Assessment of Technical Skills (OSATS) score; secondary endpoints included operating time, complications (liver damage, gallbladder perforations, vessel damage), force applied to tissue, and subjective workload assessment. Results Surgical performance was better for RAC than for LC for total OSATS (RAC = 77.4 ± 7.9 vs. LC = 73.8 ± 9.4; p = 0.025), global OSATS (RAC = 27.2 ± 1.0 vs. LC = 26.5 ± 1.6; p = 0.012), and task-specific OSATS scores (RAC = 50.5 ± 7.5 vs. LC = 47.1 ± 8.5; p = 0.037). There were fewer complications with RAC than with LC (10 (25.6%) vs. 26 (65.0%), p = 0.006) but no difference in operating times (RAC = 77.0 ± 15.3 vs. LC = 75.5 ± 15.3 min; p = 0.517). Force applied to tissue was similar. Students found RAC less physically demanding and less frustrating than LC. Conclusions Novices performed their first cholecystectomies with better performance and fewer complications with RAS than with CL, while operating time showed no differences. Students perceived less subjective workload for RAS than for CL. Contrary to our expectations, the lack of haptic feedback on the robotic system did not lead to higher force application during RAC than LC and did not increase tissue damage. These results show potential advantages of RAS over CL for surgical novices performing their first RAC and LC in an ex vivo cadaveric porcine model. Registration number: researchregistry6029.
Affiliation(s)
- E Willuth: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- S F Hardon: Department of Surgery, Amsterdam UMC-VU University Medical Center, Amsterdam, The Netherlands; Department of BioMechanical Engineering, Delft University of Technology, Delft, The Netherlands
- F Lang: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- C M Haney: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- E A Felinska: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- K F Kowalewski: Department of Urology and Urological Surgery, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- B P Müller-Stich: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- T Horeman: Department of BioMechanical Engineering, Delft University of Technology, Delft, The Netherlands
- F Nickel: Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
30
Alnafisee N, Zafar S, Vedula SS, Sikder S. Current methods for assessing technical skill in cataract surgery. J Cataract Refract Surg 2021; 47:256-264. [PMID: 32675650 DOI: 10.1097/j.jcrs.0000000000000322] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 03/05/2020] [Accepted: 06/19/2020] [Indexed: 12/18/2022]
Abstract
Surgery is a major source of errors in patient care. Preventing complications from surgical errors in the operating room is estimated to lead to a reduction of up to 41,846 readmissions and savings of $620.3 million per year. It is now established that poor technical skill is associated with an increased risk of severe postoperative adverse events, and traditional models for training surgeons are being challenged by rapid advances in technology, an intensified patient-safety culture, and a need for value-driven health systems. This review discusses the current methods available for evaluating technical skills in cataract surgery and the recent technological advancements that have enabled the capture and analysis of large amounts of complex surgical data for more automated, objective skills assessment.
Affiliation(s)
- Nouf Alnafisee: From the Wilmer Eye Institute, Johns Hopkins University School of Medicine (Alnafisee, Zafar, Sikder), Baltimore, and the Department of Computer Science, Malone Center for Engineering in Healthcare, The Johns Hopkins University Whiting School of Engineering (Vedula), Baltimore, Maryland, USA
31
Castillo-Segura P, Fernández-Panadero C, Alario-Hoyos C, Muñoz-Merino PJ, Delgado Kloos C. Objective and automated assessment of surgical technical skills with IoT systems: A systematic literature review. Artif Intell Med 2021; 112:102007. [PMID: 33581827 DOI: 10.1016/j.artmed.2020.102007] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 08/27/2020] [Revised: 11/25/2020] [Accepted: 12/28/2020] [Indexed: 11/18/2022]
Abstract
The assessment of the surgical technical skills to be acquired by novice surgeons has traditionally been done by an expert surgeon and is therefore of a subjective nature. Nevertheless, recent advances in IoT (Internet of Things), the possibility of incorporating sensors into objects and environments in order to collect large amounts of data, and progress in machine learning are facilitating a more objective and automated assessment of surgical technical skills. This paper presents a systematic literature review of papers published after 2013 discussing the objective and automated assessment of surgical technical skills. Of an initial list of 537 papers, 101 were analyzed to identify: 1) the sensors used; 2) the data collected by these sensors and the relationship between these data, surgical technical skills, and surgeons' levels of expertise; 3) the statistical methods and algorithms used to process these data; and 4) the feedback provided based on the outputs of these statistical methods and algorithms.
In particular: 1) mechanical and electromagnetic sensors are widely used for tool tracking, while inertial measurement units are widely used for body tracking; 2) path length, number of sub-movements, smoothness, fixation, saccade, and total time are the main indicators obtained from raw data and serve to assess surgical technical skills such as economy, efficiency, hand tremor, or mind control, and to distinguish between two or three levels of expertise (novice/intermediate/advanced surgeons); 3) SVMs (Support Vector Machines) and Neural Networks are the preferred statistical methods and algorithms for processing the data collected, while new opportunities open up to combine various algorithms and use deep learning; and 4) feedback is provided by matching performance indicators to a lexicon of words and visualizations, although there is considerable room for research in the context of feedback and visualizations, taking, for example, ideas from learning analytics.
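As a sketch of how such indicators can separate expertise levels, the snippet below uses a deliberately simplified nearest-centroid classifier in place of the SVMs and neural networks the review reports; all indicator values (path lengths, sub-movement counts) are invented for illustration.

```python
import math

# Hypothetical motion indicators per performance: (path length in m,
# number of sub-movements). A nearest-centroid classifier stands in for
# the SVMs the review reports; all training values are invented.

TRAIN = {
    "novice":       [(14.2, 310), (13.1, 290), (15.0, 335)],
    "intermediate": [(9.8, 210), (10.5, 230), (9.1, 195)],
    "expert":       [(6.2, 120), (5.8, 110), (6.9, 140)],
}

def centroid(points):
    """Component-wise mean of a list of equally sized tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

CENTROIDS = {label: centroid(pts) for label, pts in TRAIN.items()}

def classify(sample):
    """Assign the expertise label whose centroid is nearest (Euclidean)."""
    return min(CENTROIDS, key=lambda lbl: math.dist(sample, CENTROIDS[lbl]))

print(classify((6.0, 125)))   # near the expert centroid
print(classify((13.5, 300)))  # near the novice centroid
```

A real pipeline would normalize each indicator before computing distances, since path length and sub-movement counts live on very different scales.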
Affiliation(s)
- Pablo Castillo-Segura: Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain
- Carlos Alario-Hoyos: Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain
- Pedro J Muñoz-Merino: Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain
- Carlos Delgado Kloos: Universidad Carlos III de Madrid, Av. Universidad 30, 28911, Leganés, Madrid, Spain
32
Pastewski J, Baker D, Somerset A, Leonard K, Azzie G, Roach VA, Ziegler K, Brahmamdam P. Analysis of Instrument Motion and the Impact of Residency Level and Concurrent Distraction on Laparoscopic Skills. J Surg Educ 2021; 78:265-274. [PMID: 32741690 DOI: 10.1016/j.jsurg.2020.07.012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 04/08/2020] [Revised: 06/15/2020] [Accepted: 07/13/2020] [Indexed: 06/11/2023]
Abstract
OBJECTIVE Using a laparoscopic box trainer fitted with motion analysis trackers and software, we aim to identify differences between junior and senior residents performing the peg transfer task, and the impact of a distracting secondary task on performance. DESIGN General surgery residents were asked to perform the laparoscopic peg transfer task on a trainer equipped with a motion tracker, both alone and while completing a secondary task. Extreme velocity and acceleration events of instrument movement in the 3 rotational degrees of freedom were measured during task completion. The number of extreme events, defined as velocity or acceleration exceeding 1 SD above or below a resident's own mean, was tabulated, and the performance of junior residents was compared to that of senior residents. SETTING Simulation learning institute, Beaumont Hospital, Royal Oak, Michigan. PARTICIPANTS Thirty-seven general surgery residents from Beaumont Hospital, Royal Oak. RESULTS When completing the primary task alone, senior residents executed significantly fewer extreme motion events specific to acceleration in pitch (16.63 vs. 20.69, p = 0.04) and more extreme motion events specific to velocity in roll (16.14 vs. 15.11, p = 0.038) compared to junior residents. With the addition of a secondary task, senior residents had fewer extreme acceleration events specific to pitch (14.69 vs. 22.22, p < 0.001). CONCLUSIONS While junior and senior residents completed the peg transfer task in similar times, motion analysis identified differences in extreme motion events between the groups, even when a secondary task was added. Motion analysis may prove useful for real-time feedback during laparoscopic skill acquisition.
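The extreme-event definition in this abstract (samples beyond 1 SD above or below the performer's own mean) translates directly into a short tabulation; the velocity samples below are hypothetical, not data from the study.

```python
import statistics

# Sketch of the paper's extreme-event count: tabulate samples of a motion
# signal (e.g. angular velocity in pitch) that fall more than 1 SD above
# or below that trainee's own mean. The sample values are hypothetical.

def extreme_events(signal):
    """Count samples beyond mean ± 1 SD of the signal itself."""
    mean = statistics.fmean(signal)
    sd = statistics.pstdev(signal)
    return sum(1 for v in signal if abs(v - mean) > sd)

velocity = [0.2, 0.1, 0.3, 2.5, 0.2, -2.0, 0.1, 0.3]
print(extreme_events(velocity))
```

Because the threshold is each performer's own mean and SD, the count is self-normalizing: it flags jerky outliers relative to that person's baseline rather than against a fixed cutoff.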
Affiliation(s)
- Dustin Baker: Department of Surgery, Beaumont Health, Royal Oak, Michigan
- Amy Somerset: Department of Surgery, Beaumont Health, Royal Oak, Michigan
- Kelsey Leonard: Department of Foundational Medical Studies, Oakland University William Beaumont School of Medicine, Rochester, Michigan
- Georges Azzie: Division of General and Thoracic Surgery, Hospital for Sick Children, Toronto, Canada
- Victoria A Roach: Department of Foundational Medical Studies, Oakland University William Beaumont School of Medicine, Rochester, Michigan; Department of Surgery, Oakland University William Beaumont School of Medicine, Rochester, Michigan
- Kathryn Ziegler: Department of Surgery, Beaumont Health, Royal Oak, Michigan; Department of Surgery, Oakland University William Beaumont School of Medicine, Rochester, Michigan
- Pavan Brahmamdam: Department of Surgery, Beaumont Health, Royal Oak, Michigan; Department of Surgery, Oakland University William Beaumont School of Medicine, Rochester, Michigan
33
Ganni S, Botden SMBI, Chmarra M, Li M, Goossens RHM, Jakimowicz JJ. Validation of Motion Tracking Software for Evaluation of Surgical Performance in Laparoscopic Cholecystectomy. J Med Syst 2020; 44:56. [PMID: 31980955 PMCID: PMC6981315 DOI: 10.1007/s10916-020-1525-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 10/16/2019] [Accepted: 01/16/2020] [Indexed: 01/22/2023]
Abstract
Motion tracking software for assessing laparoscopic surgical proficiency has been proven effective in differentiating between expert and novice performances. However, with several indices that can be generated from the software, there is no set threshold that can be used to benchmark performances. The aim of this study was to identify the best possible algorithm for benchmarking expert, intermediate, and novice performances for objective evaluation of psychomotor skills. Twelve video recordings of various surgeons were collected in a blinded fashion. Data from our previous study of 6 experts and 23 novices were also included in the analysis to determine thresholds for performance. Video recordings were analyzed both by the Kinovea 0.8.15 software and by a blinded expert observer using the CAT form. Multiple algorithms were tested to accurately identify expert and novice performances. A weighted score of ½ L + [Formula: see text] A + [Formula: see text] J (path length, average movement, and jerk index, respectively) correctly identified 23/24 performances. Comparing the algorithm to CAT assessment yielded a linear regression coefficient R² of 0.844. The value of motion tracking software in providing objective clinical evaluation and retrospective analysis is evident. Given the prospective use of this tool, the algorithm developed in this study proves effective in benchmarking performances for psychomotor skills evaluation.
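A composite motion score in the spirit of the ½ L weighting above can be sketched from a tracked instrument trajectory. Since the abstract's remaining weights are elided ("[Formula: see text]"), the 0.25 weights below are placeholders, and the trajectory itself is invented.

```python
import math

# Sketch of a composite motion score: weighted sum of path length (L),
# average movement (A), and a discrete jerk index (J). The 0.25 weights
# stand in for the abstract's elided coefficients; the 2D trajectory is
# an invented instrument-tip trace, one sample per time step.

def path_length(points):
    """Total Euclidean distance travelled by the instrument tip."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def average_movement(points):
    """Mean per-sample displacement."""
    return path_length(points) / (len(points) - 1)

def jerk_index(points, dt=1.0):
    """Mean magnitude of the third finite difference of position."""
    def diff(seq):
        return [(b[0] - a[0], b[1] - a[1]) for a, b in zip(seq, seq[1:])]
    jerk = diff(diff(diff(points)))  # third difference approximates jerk
    return sum(math.hypot(dx, dy) for dx, dy in jerk) / (len(jerk) * dt**3)

def composite_score(points, w_l=0.5, w_a=0.25, w_j=0.25):
    return (w_l * path_length(points)
            + w_a * average_movement(points)
            + w_j * jerk_index(points))

trajectory = [(0, 0), (1, 0), (2, 1), (2, 3), (3, 3), (5, 4)]
print(round(composite_score(trajectory), 3))
```

Lower scores correspond to shorter, smoother paths, which is why such a composite can separate expert from novice performances once thresholds are calibrated against reference data.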
Affiliation(s)
- Sandeep Ganni: Delft University of Technology, Industrial Design Engineering, Medisign, Delft, The Netherlands; GSL Medical College, Department of Surgery, Rajahmundry, India; Catharina Hospital, Research and Education, Michelangelolaan 2, 5653 EJ, Eindhoven, The Netherlands
- Sanne M B I Botden: Department of Pediatric Surgery, Radboudumc - Amalia Children's Hospital, Nijmegen, the Netherlands
- Magdalena Chmarra: Delft University of Technology, Industrial Design Engineering, Medisign, Delft, The Netherlands
- Meng Li: Delft University of Technology, Industrial Design Engineering, Medisign, Delft, The Netherlands; Catharina Hospital, Research and Education, Michelangelolaan 2, 5653 EJ, Eindhoven, The Netherlands
- Richard H M Goossens: Delft University of Technology, Industrial Design Engineering, Medisign, Delft, The Netherlands
- Jack J Jakimowicz: Delft University of Technology, Industrial Design Engineering, Medisign, Delft, The Netherlands; Catharina Hospital, Research and Education, Michelangelolaan 2, 5653 EJ, Eindhoven, The Netherlands