1. Jarry C, Varas Cohen J. Distance simulation in surgical education. Surgery 2025;180:109097. PMID: 39787674. DOI: 10.1016/j.surg.2024.109097.
Abstract
Distance and remote simulation have emerged as vital tools in modern surgical education, offering solutions to challenges such as limited operating hours, growing clinical demands, and the need for consistent, high-quality training. This review examines the benefits, limitations, and strategies for implementing sustainable distance simulation, structured around 3 foundational pillars: (1) effective hardware and infrastructure, including simulators and realistic scenarios that enable trainees to develop essential skills; (2) validated training programs grounded in educational theory with a clear focus on skill transfer and predictive validity; and (3) timely access to effective feedback. Distance simulation permits adaptable, scalable training environments, but the addition of remote and deferred feedback has further broadened its impact, helping to overcome the challenges posed by faculty availability and clinician time constraints. Remote-asynchronous feedback not only makes teaching time more efficient but also allows instructors to organize their time flexibly, maximizing their impact. Furthermore, web-based, feedback-oriented platforms facilitate the creation of a sustainable teaching network through train-the-trainer initiatives, enabling near-peer and nonclinical experts to provide standardized, high-quality teaching. This scalable model reduces the reliance on senior faculty, building a culture of mentorship and support within the surgical education community. In addition, distance simulation platforms are increasingly incorporating artificial intelligence-enhanced assessment tools capable of detecting errors, analyzing procedural steps, and generating automated feedback. Integrating artificial intelligence into innovative simulation modalities not only enhances access to quality feedback but could also provide deeper insights into the learning process as learners progress through these enriched learning curves.
Growing evidence shows how tools for sustainable distance simulation can positively impact education at multiple levels, benefiting undergraduate and postgraduate students, residents, and faculty across a spectrum of skills from basic tasks to complex surgical procedures. Moreover, its applications extend beyond simulated environments, providing frameworks that can be adapted to teach real surgical performance in clinical settings. As surgical education evolves, distance simulation demonstrates immense value in supporting accessible, high-quality training, particularly in resource-limited environments.
Affiliation(s)
- Cristián Jarry
- Experimental Surgery and Simulation Center, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile; Colorectal Surgery Unit, Department of Digestive Surgery, Pontificia Universidad Católica de Chile, Santiago, Chile. https://twitter.com/@cjarryt
- Julián Varas Cohen
- Experimental Surgery and Simulation Center, Faculty of Medicine, Pontificia Universidad Católica de Chile, Santiago, Chile; Department of Digestive Surgery, Pontificia Universidad Católica de Chile, Santiago, Chile.
2. Mairi A, Hamza L, Touati A. Artificial intelligence and its application in clinical microbiology. Expert Rev Anti Infect Ther 2025:1-22. PMID: 40131188. DOI: 10.1080/14787210.2025.2484284.
Abstract
INTRODUCTION Traditional microbiological diagnostics face challenges in pathogen identification speed and antimicrobial resistance (AMR) evaluation. Artificial intelligence (AI) offers transformative solutions, necessitating a comprehensive review of its applications, advancements, and integration challenges in clinical microbiology. AREAS COVERED This review examines AI-driven methodologies, including machine learning (ML), deep learning (DL), and convolutional neural networks (CNNs), for enhancing pathogen detection, AMR prediction, and diagnostic imaging. Applications in virology (e.g. COVID-19 RT-PCR optimization), parasitology (e.g. malaria detection), and bacteriology (e.g. automated colony counting) are analyzed. A literature search was conducted using PubMed, Scopus, and Web of Science (2018-2024), prioritizing peer-reviewed studies on AI's diagnostic accuracy, workflow efficiency, and clinical validation. EXPERT OPINION AI significantly improves diagnostic precision and operational efficiency but requires robust validation to address data heterogeneity, model interpretability, and ethical concerns. Future success hinges on interdisciplinary collaboration to develop standardized, equitable AI tools tailored for global healthcare settings. Advancing explainable AI and federated learning frameworks will be critical for bridging current implementation gaps and maximizing AI's potential in combating infectious diseases.
Affiliation(s)
- Assia Mairi
- Université de Bejaia, Laboratoire d'Ecologie Microbienne, Bejaia, Algeria
- Lamia Hamza
- Université de Bejaia, Département d'informatique, Laboratoire d'Informatique MEDicale (LIMED), Bejaia, Algeria
- Abdelaziz Touati
- Université de Bejaia, Laboratoire d'Ecologie Microbienne, Bejaia, Algeria
3. Restrepo-Rodas G, Barajas-Gamboa JS, Ortiz Aparicio FM, Pantoja JP, Abril C, Al-Baqain S, Rodriguez J, Guerron AD. The Role of AI in Modern Hernia Surgery: A Review and Practical Insights. Surg Innov 2025:15533506251328481. PMID: 40104921. DOI: 10.1177/15533506251328481.
Abstract
BACKGROUND Artificial intelligence (AI) is revolutionizing various aspects of health care, particularly in the surgical field, where it offers significant potential for improving surgical risk assessment, predictive analytics, and research advancement. Despite the development of numerous AI models in surgery, there remains a notable gap in understanding their specific application within the context of hernia surgery. PURPOSE This review aims to explore the evolution of AI utilization in hernia surgery over the past 2 decades, focusing on the contributions of machine learning (ML), natural language processing (NLP), computer vision (CV), and robotics. RESULTS We discuss how these AI fields enhance surgical outcomes and advance research in the domain of hernia surgery. ML focuses on developing and training prediction models, while NLP enables seamless human-computer interaction through the use of large language models (LLMs). CV assists in critical view detection, which is crucial in procedures such as inguinal hernia repair, and robotics improves minimally invasive techniques, dexterity, and precision. We examine recent evidence and the applicability of various AI models to hernia patients, considering the strengths, limitations, and future possibilities within each field. CONCLUSION By consolidating the impact of AI models on hernia surgery, this review provides insights into the potential of AI for advancing patient care and surgical techniques in this field, ultimately contributing to the ongoing evolution of surgical practice.
Affiliation(s)
- Gabriela Restrepo-Rodas
- Hernia and Core Health Center, Department of General Surgery, Digestive Disease Institute, Abu Dhabi, United Arab Emirates
- Juan S Barajas-Gamboa
- Hernia and Core Health Center, Department of General Surgery, Digestive Disease Institute, Abu Dhabi, United Arab Emirates
- Freddy Miguel Ortiz Aparicio
- Hernia and Core Health Center, Department of General Surgery, Digestive Disease Institute, Abu Dhabi, United Arab Emirates
- Juan Pablo Pantoja
- Hernia and Core Health Center, Department of General Surgery, Digestive Disease Institute, Abu Dhabi, United Arab Emirates
- Carlos Abril
- Hernia and Core Health Center, Department of General Surgery, Digestive Disease Institute, Abu Dhabi, United Arab Emirates
- Suleiman Al-Baqain
- Hernia and Core Health Center, Department of General Surgery, Digestive Disease Institute, Abu Dhabi, United Arab Emirates
- John Rodriguez
- Hernia and Core Health Center, Department of General Surgery, Digestive Disease Institute, Abu Dhabi, United Arab Emirates
- Alfredo D Guerron
- Hernia and Core Health Center, Department of General Surgery, Digestive Disease Institute, Abu Dhabi, United Arab Emirates
4. Hla DA, Hindin DI. Generative AI & machine learning in surgical education. Curr Probl Surg 2025;63:101701. PMID: 39922636. DOI: 10.1016/j.cpsurg.2024.101701.
Affiliation(s)
- Diana A Hla
- Mayo Clinic Alix School of Medicine, Rochester, MN
- David I Hindin
- Division of General Surgery, Department of Surgery, Stanford University, Stanford, CA.
5. Griewing S, Gremke N, Wagner U, Wallwiener M, Kuhn S. Current Developments from Silicon Valley - How Artificial Intelligence is Changing Gynecology and Obstetrics. Geburtshilfe Frauenheilkd 2024;84:1118-1125. PMID: 39649123. PMCID: PMC11623998. DOI: 10.1055/a-2335-6122.
Abstract
Artificial intelligence (AI) has become an omnipresent topic in the media. Lively discussions are being held on how AI could revolutionize the global healthcare landscape. The development of innovative AI models, including in the medical sector, is increasingly dominated by large high-tech companies. As a global technology epicenter, Silicon Valley hosts many of these technological giants, which are muscling their way into healthcare provision with their advanced technologies. The annual conference of the American College of Obstetricians and Gynecologists (ACOG) was held in San Francisco from 17 to 19 May 2024. ACOG celebrated its AI premiere, hosting two sessions on current AI topics in gynecology at its annual conference. This paper provides an overview of the topics discussed and offers an insight into the thinking in Silicon Valley, showing how technology companies grow and fail there and examining how our American colleagues perceive the increased integration of AI in gynecological and obstetric care. In addition to classifying various currently popular AI terms, the article presents three areas where artificial intelligence is being used in gynecology and looks at the current state of development in the context of existing obstacles to implementation and the current digitalization status of the German healthcare system.
Affiliation(s)
- Sebastian Griewing
- Stanford Center for Biomedical Informatics Research, Stanford University School of Medicine, Palo Alto, CA, USA
- Institut für Digitale Medizin, Universitätsklinikum Marburg, Philipps-Universität Marburg, Marburg, Germany
- Klinik für Gynäkologie und Geburtshilfe Marburg, Philipps-Universität Marburg, Marburg, Germany
- Kommission Digitale Medizin der Deutschen Gesellschaft für Gynäkologie und Geburtshilfe, Berlin, Germany
- Niklas Gremke
- Klinik für Gynäkologie und Geburtshilfe Marburg, Philipps-Universität Marburg, Marburg, Germany
- Uwe Wagner
- Klinik für Gynäkologie und Geburtshilfe Marburg, Philipps-Universität Marburg, Marburg, Germany
- Kommission Digitale Medizin der Deutschen Gesellschaft für Gynäkologie und Geburtshilfe, Berlin, Germany
- Markus Wallwiener
- Kommission Digitale Medizin der Deutschen Gesellschaft für Gynäkologie und Geburtshilfe, Berlin, Germany
- Klinik für Gynäkologie und Geburtshilfe Halle, Martin-Luther-Universität Halle-Wittenberg, Halle (Saale), Germany
- Sebastian Kuhn
- Institut für Digitale Medizin, Universitätsklinikum Marburg, Philipps-Universität Marburg, Marburg, Germany
6. You J, Cai H, Wang Y, Bian A, Cheng K, Meng L, Wang X, Gao P, Chen S, Cai Y, Peng B. Artificial intelligence automated surgical phases recognition in intraoperative videos of laparoscopic pancreatoduodenectomy. Surg Endosc 2024;38:4894-4905. PMID: 38958719. DOI: 10.1007/s00464-024-10916-6.
Abstract
BACKGROUND Laparoscopic pancreatoduodenectomy (LPD) is one of the most challenging operations and has a long learning curve. Artificial intelligence (AI) automated surgical phase recognition in intraoperative videos has many potential applications in surgical education, helping shorten the learning curve, but no study has made this breakthrough in LPD. Herein, we aimed to build AI models to recognize the surgical phase in LPD and explore the performance characteristics of the AI models. METHODS Among 69 LPD videos from a single surgical team, we used 42 in the building group to establish the models and the remaining 27 videos in the analysis group to assess the models' performance characteristics. We annotated 13 surgical phases of LPD, including 4 key phases and 9 necessary phases. Two minimally invasive pancreatic surgeons annotated all the videos. We built two AI models, based on convolutional neural networks, for key-phase and necessary-phase recognition. The overall performance of the AI models was determined mainly by mean average precision (mAP). RESULTS Overall mAPs of the AI models in the test set of the building group were 89.7% and 84.7% for key phases and necessary phases, respectively. In the 27-video analysis group, overall mAPs were 86.8% and 71.2%, with maximum mAPs of 98.1% and 93.9%. We found commonalities between the errors of model recognition and the differences in surgeon annotation, and the AI models performed poorly in cases with anatomic variation or lesion involvement of adjacent organs. CONCLUSIONS AI automated surgical phase recognition can be achieved in LPD, with outstanding performance in selected cases. This breakthrough may be the first step toward AI- and video-based surgical education in more complex surgeries.
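The headline metric above, mean average precision (mAP), averages one average-precision value per annotated phase. A minimal, illustrative computation is sketched below; the frame labels and scores are invented for the example, and this is not the authors' evaluation code:

```python
def average_precision(y_true, y_score):
    """AP for one phase: mean of the precision at each true positive,
    with frames ranked by descending predicted score."""
    order = sorted(range(len(y_score)), key=lambda i: -y_score[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if y_true[i]:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def mean_average_precision(per_phase):
    """mAP: average of per-phase APs, e.g. over the 13 annotated LPD phases."""
    return sum(average_precision(t, s) for t, s in per_phase) / len(per_phase)

# Hypothetical example: two phases, four frames each.
per_phase = [
    ([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1]),  # AP = (1/1 + 2/3) / 2
    ([0, 1, 0, 0], [0.2, 0.9, 0.1, 0.3]),  # AP = 1.0
]
print(round(mean_average_precision(per_phase), 3))  # → 0.917
```

Real pipelines compute AP per phase over interpolated precision-recall curves, but the ranking-based form above captures the idea.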
Affiliation(s)
- Jiaying You
- WestChina-California Research Center for Predictive Intervention, Sichuan University West China Hospital, Chengdu, China
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- He Cai
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Yuxian Wang
- Chengdu Withai Innovations Technology Company, Chengdu, China
- Ang Bian
- College of Computer Science, Sichuan University, Chengdu, China
- Ke Cheng
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Lingwei Meng
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Xin Wang
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Pan Gao
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Sirui Chen
- Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology of China, Mianyang, China
- Yunqiang Cai
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
- Bing Peng
- Division of Pancreatic Surgery, Department of General Surgery, Sichuan University West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China
7. Nardone V, Marmorino F, Germani MM, Cichowska-Cwalińska N, Menditti VS, Gallo P, Studiale V, Taravella A, Landi M, Reginelli A, Cappabianca S, Girnyi S, Cwalinski T, Boccardi V, Goyal A, Skokowski J, Oviedo RJ, Abou-Mrad A, Marano L. The Role of Artificial Intelligence on Tumor Boards: Perspectives from Surgeons, Medical Oncologists and Radiation Oncologists. Curr Oncol 2024;31:4984-5007. PMID: 39329997. PMCID: PMC11431448. DOI: 10.3390/curroncol31090369.
Abstract
The integration of multidisciplinary tumor boards (MTBs) is fundamental in delivering state-of-the-art cancer treatment, facilitating collaborative diagnosis and management by a diverse team of specialists. Despite the clear benefits in personalized patient care and improved outcomes, the increasing burden on MTBs due to rising cancer incidence and financial constraints necessitates innovative solutions. The advent of artificial intelligence (AI) in the medical field offers a promising avenue to support clinical decision-making. This review explores the perspectives of clinicians dedicated to the care of cancer patients (surgeons, medical oncologists, and radiation oncologists) on the application of AI within MTBs. Additionally, it examines the role of AI across various clinical specialties involved in cancer diagnosis and treatment. By analyzing both the potential and the challenges, this study underscores how AI can enhance multidisciplinary discussions and optimize treatment plans. The findings highlight the transformative role that AI may play in refining oncology care and sustaining the efficacy of MTBs amidst growing clinical demands.
Affiliation(s)
- Valerio Nardone
- Department of Precision Medicine, University of Campania “L. Vanvitelli”, 80131 Naples, Italy
- Federica Marmorino
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Marco Maria Germani
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Vittorio Salvatore Menditti
- Department of Precision Medicine, University of Campania “L. Vanvitelli”, 80131 Naples, Italy
- Paolo Gallo
- Department of Precision Medicine, University of Campania “L. Vanvitelli”, 80131 Naples, Italy
- Vittorio Studiale
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Ada Taravella
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Matteo Landi
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Alfonso Reginelli
- Department of Precision Medicine, University of Campania “L. Vanvitelli”, 80131 Naples, Italy
- Salvatore Cappabianca
- Department of Precision Medicine, University of Campania “L. Vanvitelli”, 80131 Naples, Italy
- Sergii Girnyi
- Department of General Surgery and Surgical Oncology, “Saint Wojciech” Hospital, “Nicolaus Copernicus” Health Center, 80-462 Gdańsk, Poland
- Tomasz Cwalinski
- Department of General Surgery and Surgical Oncology, “Saint Wojciech” Hospital, “Nicolaus Copernicus” Health Center, 80-462 Gdańsk, Poland
- Virginia Boccardi
- Division of Gerontology and Geriatrics, Department of Medicine and Surgery, University of Perugia, 06123 Perugia, Italy
- Aman Goyal
- Adesh Institute of Medical Sciences and Research, Bathinda 151109, Punjab, India
- Jaroslaw Skokowski
- Department of General Surgery and Surgical Oncology, “Saint Wojciech” Hospital, “Nicolaus Copernicus” Health Center, 80-462 Gdańsk, Poland
- Department of Medicine, Academy of Applied Medical and Social Sciences-AMiSNS: Akademia Medycznych I Spolecznych Nauk Stosowanych, 82-300 Elbląg, Poland
- Rodolfo J. Oviedo
- Nacogdoches Medical Center, Nacogdoches, TX 75965, USA
- Tilman J. Fertitta Family College of Medicine, University of Houston, Houston, TX 77021, USA
- College of Osteopathic Medicine, Sam Houston State University, Conroe, TX 77304, USA
- Adel Abou-Mrad
- Centre Hospitalier Universitaire d’Orléans, 45100 Orléans, France
- Luigi Marano
- Department of General Surgery and Surgical Oncology, “Saint Wojciech” Hospital, “Nicolaus Copernicus” Health Center, 80-462 Gdańsk, Poland
- Department of Medicine, Academy of Applied Medical and Social Sciences-AMiSNS: Akademia Medycznych I Spolecznych Nauk Stosowanych, 82-300 Elbląg, Poland
8. Yilmaz R, Bakhaidar M, Alsayegh A, Abou Hamdan N, Fazlollahi AM, Tee T, Langleben I, Winkler-Schwartz A, Laroche D, Santaguida C, Del Maestro RF. Real-time multifaceted artificial intelligence vs in-person instruction in teaching surgical technical skills: a randomized controlled trial. Sci Rep 2024;14:15130. PMID: 38956112. PMCID: PMC11219907. DOI: 10.1038/s41598-024-65716-8.
Abstract
Trainees develop surgical technical skills by learning from experts who provide context for successful task completion, identify potential risks, and guide correct instrument handling. This expert-guided training faces significant limitations in objectively assessing skills in real time and tracking learning. It is unknown whether AI systems can effectively replicate the nuanced real-time feedback, risk identification, and guidance in mastering surgical technical skills that expert instructors offer. This randomized controlled trial compared real-time AI feedback to in-person expert instruction. Ninety-seven medical trainees completed a 90-min simulation training with five practice tumor resections followed by a realistic brain tumor resection. They were randomly assigned to (1) real-time AI feedback, (2) in-person expert instruction, or (3) no real-time feedback. Performance was assessed using a composite score and an Objective Structured Assessment of Technical Skills (OSATS) rating, rated by blinded experts. Training with real-time AI feedback (n = 33) resulted in significantly better performance outcomes than no real-time feedback (n = 32) and in-person instruction (n = 32): 0.266 (95% CI 0.107 to 0.425), p < .001, and 0.332 (95% CI 0.173 to 0.491), p = .005, respectively. Learning from AI resulted in similar OSATS ratings (4.30 vs 4.11, p = 1) compared to in-person training with expert instruction. Intelligent systems may refine the way operating skills are taught, providing tailored, quantifiable feedback and actionable instructions in real time.
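The reported contrasts (e.g. 0.266, 95% CI 0.107 to 0.425) are between-arm differences in composite scores with confidence intervals. A generic Welch-style mean difference with a normal-approximation 95% CI can be sketched as below; the group means, SDs, and the use of this particular estimator are illustrative assumptions, not the trial's actual analysis:

```python
import math

def mean_diff_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    """Difference in two group means with a normal-approximation 95% CI,
    using the Welch standard error from per-group SDs and sample sizes."""
    diff = m1 - m2
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical composite scores: AI-feedback arm vs no-feedback arm.
diff, (lo, hi) = mean_diff_ci(0.62, 0.18, 33, 0.35, 0.15, 32)
print(f"diff={diff:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A published trial would typically use a regression or t-based interval rather than the fixed z = 1.96 shortcut used here.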
Affiliation(s)
- Recai Yilmaz
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada.
- Mohamad Bakhaidar
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC, Canada
- Division of Neurosurgery, Department of Surgery, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Ahmad Alsayegh
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC, Canada
- Division of Neurosurgery, Department of Surgery, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Nour Abou Hamdan
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Ali M Fazlollahi
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Trisha Tee
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Ian Langleben
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Alexander Winkler-Schwartz
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC, Canada
- Denis Laroche
- National Research Council Canada, Boucherville, QC, Canada
- Carlo Santaguida
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC, Canada
- Rolando F Del Maestro
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC, Canada
9. Jarry Trujillo C, Vela Ulloa J, Escalona Vivas G, Grasset Escobar E, Villagrán Gutiérrez I, Achurra Tirado P, Varas Cohen J. Surgeons vs ChatGPT: Assessment and Feedback Performance Based on Real Surgical Scenarios. J Surg Educ 2024;81:960-966. PMID: 38749814. DOI: 10.1016/j.jsurg.2024.03.012.
Abstract
INTRODUCTION Artificial intelligence tools are being progressively integrated into medicine and surgical education. Large language models, such as ChatGPT, could provide relevant feedback aimed at improving surgical skills. The purpose of this study is to assess ChatGPT's ability to provide feedback based on surgical scenarios. METHODS Surgical situations were transformed into texts using a neutral narrative. Texts were evaluated by ChatGPT 4.0 and 3 surgeons (A, B, C) after a brief instruction was delivered: identify errors and provide feedback accordingly. Surgical residents were provided with each of the situations and the feedback obtained during the first stage, as written by each surgeon and ChatGPT, and were asked to assess the utility of feedback (FCUR) and its quality (FQ). As a control measurement, an Education Expert (EE) and a Clinical Expert (CE) were asked to assess FCUR and FQ. RESULTS Regarding residents' evaluations, 96.43% of the time, outputs provided by ChatGPT were considered useful, comparable to what surgeons B and C obtained. Assessing FQ, ChatGPT and all surgeons received similar scores. Regarding the EE's assessment, ChatGPT obtained a significantly higher FQ score when compared to surgeons A and B (p = 0.019; p = 0.033), with a median score of 8 vs 7 and 7.5, respectively, and no difference with respect to surgeon C (score of 8; p = 0.2). Regarding the CE's assessment, surgeon B obtained the highest FQ score, while ChatGPT received scores comparable to those of surgeons A and C. When participants were asked to identify the source of the feedback, residents, the CE, and the EE perceived ChatGPT's outputs as human-provided in 33.9%, 28.5%, and 14.3% of cases, respectively.
CONCLUSION When given brief written surgical situations, ChatGPT was able to identify errors with a detection rate comparable to that of experienced surgeons and to generate feedback considered useful for skill improvement in a surgical context, performing as well as surgical instructors across assessments made by general surgery residents, an experienced surgeon, and a nonsurgeon feedback expert.
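The study's setup (a neutral textual narrative of a surgical situation, plus a brief instruction to identify errors and give feedback) can be sketched as a chat-style prompt builder. The instruction wording and message format below are assumptions for illustration, not the authors' exact prompt:

```python
def build_feedback_messages(scenario_text):
    """Assemble a chat-style message list asking a large language model to
    identify errors in a written surgical scenario and provide feedback.
    The instruction wording is a hypothetical stand-in for the study's brief."""
    instruction = (
        "You will read a neutral narrative of a surgical situation. "
        "Identify any errors committed and provide feedback aimed at "
        "improving the trainee's surgical skills."
    )
    return [
        {"role": "system", "content": instruction},
        {"role": "user", "content": scenario_text},
    ]

messages = build_feedback_messages(
    "During laparoscopic dissection, the trainee applied traction "
    "without first confirming the critical view of safety."
)
print(messages[0]["role"], len(messages))  # → system 2
```

A message list of this shape is what chat-completion APIs typically accept; the model's reply would then be rated for utility and quality as in the study.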
Affiliation(s)
- Cristián Jarry Trujillo
- Experimental Surgery and Simulation Center, Department of Digestive Surgery, Pontificia Universidad Católica de Chile, Santiago, Chile
- Javier Vela Ulloa
- Experimental Surgery and Simulation Center, Department of Digestive Surgery, Pontificia Universidad Católica de Chile, Santiago, Chile
- Gabriel Escalona Vivas
- Experimental Surgery and Simulation Center, Department of Digestive Surgery, Pontificia Universidad Católica de Chile, Santiago, Chile
- Pablo Achurra Tirado
- Experimental Surgery and Simulation Center, Department of Digestive Surgery, Pontificia Universidad Católica de Chile, Santiago, Chile
- Julián Varas Cohen
- Experimental Surgery and Simulation Center, Department of Digestive Surgery, Pontificia Universidad Católica de Chile, Santiago, Chile
10
|
Zhang J, Luo Z, Zhang R, Ding Z, Fang Y, Han C, Wu W, Cen G, Qiu Z, Huang C. The transition of surgical simulation training and its learning curve: a bibliometric analysis from 2000 to 2023. Int J Surg 2024; 110:3326-3337. PMID: 38729115; PMCID: PMC11175803; DOI: 10.1097/js9.0000000000001579.
Abstract
BACKGROUND Proficient surgical skills are essential for surgeons, making surgical training an important part of surgical education. The development of technology has diversified the types of surgical training available. This study analyzes changes in surgical training patterns from a bibliometric perspective and uses learning curves as a measure of teaching effectiveness. METHOD Related papers were searched in the Web of Science database using the following formula: TS=[(training OR simulation) AND (learning curve) AND (surgical)]. Two researchers browsed the papers to ensure that the articles focused on the impact of surgical simulation training on the learning curve. CiteSpace, VOSviewer, and R packages were applied to analyze the publication trends, countries, authors, keywords, and references of the selected articles. RESULT Ultimately, 2461 documents were screened and analyzed. The USA is the most productive and influential country in this field. Surgical Endoscopy and Other Interventional Techniques publishes the most articles and is also the most cited journal. Rajesh Aggarwal is the most productive and influential author. Keyword and reference analyses reveal that laparoscopic surgery, robotic surgery, virtual reality, and artificial intelligence are the hotspots in the field. CONCLUSION This study provides a global overview of the current state and future trends of the surgical education field. By comparing and analyzing learning curves, it assesses the applicability of different surgical simulation types, which is helpful for the development of this field.
Affiliation(s)
- Jun Zhang
- Department of Gastrointestinal Surgery, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, the People's Republic of China
- Zai Luo
- Department of Gastrointestinal Surgery, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, the People's Republic of China
- Renchao Zhang
- Department of Gastrointestinal Surgery, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, the People's Republic of China
- Zehao Ding
- Department of Gastrointestinal Surgery, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, the People's Republic of China
- The Affiliated Chuzhou Hospital of Anhui Medical University, Anhui, the People's Republic of China
- Yuan Fang
- Department of Gastrointestinal Surgery, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, the People's Republic of China
- Chao Han
- Department of Gastrointestinal Surgery, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, the People's Republic of China
- Weidong Wu
- Department of Gastrointestinal Surgery, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, the People's Republic of China
- Gang Cen
- The Affiliated Chuzhou Hospital of Anhui Medical University, Anhui, the People's Republic of China
- Zhengjun Qiu
- Department of Gastrointestinal Surgery, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, the People's Republic of China
- Chen Huang
- Department of Gastrointestinal Surgery, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, the People's Republic of China
- The Affiliated Chuzhou Hospital of Anhui Medical University, Anhui, the People's Republic of China

11
Morris MX, Fiocco D, Caneva T, Yiapanis P, Orgill DP. Current and future applications of artificial intelligence in surgery: implications for clinical practice and research. Front Surg 2024; 11:1393898. PMID: 38783862; PMCID: PMC11111929; DOI: 10.3389/fsurg.2024.1393898.
Abstract
Surgeons are skilled at making complex decisions about invasive procedures that can save lives, alleviate pain, and avoid complications in patients. The knowledge needed to make these decisions is accumulated over years of schooling and practice. Their experience is in turn shared with others, including via peer-reviewed articles, which are published in ever greater numbers every year. In this work, we review the literature on the use of Artificial Intelligence (AI) in surgery. We focus on what is currently available and what is likely to come in the near future in both clinical care and research. We show that AI has the potential to be a key tool for elevating the effectiveness of training and decision-making in surgery and for discovering relevant and valid scientific knowledge in the surgical domain. We also address concerns about AI technology, including users' inability to interpret algorithms and the risk of incorrect predictions. A better understanding of AI will allow surgeons to use new tools wisely for the benefit of their patients.
Affiliation(s)
- Miranda X. Morris
- Duke University School of Medicine, Duke University Hospital, Durham, NC, United States
- Davide Fiocco
- Department of Artificial Intelligence, Frontiers Media SA, Lausanne, Switzerland
- Tommaso Caneva
- Department of Artificial Intelligence, Frontiers Media SA, Lausanne, Switzerland
- Paris Yiapanis
- Department of Artificial Intelligence, Frontiers Media SA, Lausanne, Switzerland
- Dennis P. Orgill
- Harvard Medical School, Brigham and Women's Hospital, Boston, MA, United States

12
Gordon M, Daniel M, Ajiboye A, Uraiby H, Xu NY, Bartlett R, Hanson J, Haas M, Spadafore M, Grafton-Clarke C, Gasiea RY, Michie C, Corral J, Kwan B, Dolmans D, Thammasitboon S. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Med Teach 2024; 46:446-470. PMID: 38423127; DOI: 10.1080/0142159x.2024.2314198.
Abstract
BACKGROUND Artificial Intelligence (AI) is rapidly transforming healthcare, and there is a critical need for a nuanced understanding of how AI is reshaping teaching, learning, and educational practice in medical education. This review aimed to map the literature on AI applications in medical education, core areas of findings, potential candidates for formal systematic review, and gaps for future research. METHODS This rapid scoping review, conducted over 16 weeks, employed Arksey and O'Malley's framework and adhered to STORIES and BEME guidelines. A systematic and comprehensive search across PubMed/MEDLINE, EMBASE, and MedEdPublish was conducted without date or language restrictions. Publications included in the review spanned undergraduate, graduate, and continuing medical education, encompassing both original studies and perspective pieces. Data were charted by multiple author pairs and synthesized into thematic maps and charts, ensuring a broad and detailed representation of the current landscape. RESULTS The review synthesized 278 publications, with a majority (68%) from North American and European regions. The studies covered diverse AI applications in medical education, such as AI for admissions, teaching, assessment, and clinical reasoning. The review highlighted AI's varied roles, from augmenting traditional educational methods to introducing innovative practices, and underscores the urgent need for ethical guidelines on AI's application in medical education. CONCLUSION The current literature has been charted. The findings underscore the need for ongoing research to explore uncharted areas and address potential risks associated with AI use in medical education. This work serves as a foundational resource for educators, policymakers, and researchers in navigating AI's evolving role in medical education. A framework to support future high-utility reporting, the FACETS framework, is proposed.
Affiliation(s)
- Morris Gordon
- School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
- Blackpool Hospitals NHS Foundation Trust, Blackpool, UK
- Michelle Daniel
- School of Medicine, University of California, San Diego, San Diego, CA, USA
- Aderonke Ajiboye
- School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
- Hussein Uraiby
- Department of Cellular Pathology, University Hospitals of Leicester NHS Trust, Leicester, UK
- Nicole Y Xu
- School of Medicine, University of California, San Diego, San Diego, CA, USA
- Rangana Bartlett
- Department of Cognitive Science, University of California, San Diego, CA, USA
- Janice Hanson
- Department of Medicine and Office of Education, School of Medicine, Washington University in Saint Louis, Saint Louis, MO, USA
- Mary Haas
- Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Maxwell Spadafore
- Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Colin Michie
- School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
- Janet Corral
- Department of Medicine, University of Nevada Reno, School of Medicine, Reno, NV, USA
- Brian Kwan
- School of Medicine, University of California, San Diego, San Diego, CA, USA
- Diana Dolmans
- School of Health Professions Education, Faculty of Health, Maastricht University, Maastricht, the Netherlands
- Satid Thammasitboon
- Center for Research, Innovation and Scholarship in Health Professions Education, Baylor College of Medicine, Houston, TX, USA

13
Feinstein M, Katz D, Demaria S, Hofer IS. Remote Monitoring and Artificial Intelligence: Outlook for 2050. Anesth Analg 2024; 138:350-357. PMID: 38215713; PMCID: PMC10794024; DOI: 10.1213/ane.0000000000006712.
Abstract
Remote monitoring and artificial intelligence will become common and intertwined in anesthesiology by 2050. In the intraoperative period, technology will lead to unified monitoring systems that integrate multiple data streams and allow anesthesiologists to track patients more effectively. This will free anesthesiologists to focus on more complex tasks, such as managing risk and making value-based decisions. It will also enable the continued integration of remote monitoring and control towers, with profound effects on coverage and practice models. In the PACU and ICU, the technology will lead to early warning systems that can identify patients at risk of complications, enabling early interventions and more proactive care. The integration of augmented reality will allow for better integration of diverse types of data and better decision-making. Postoperatively, the proliferation of wearable devices that can monitor patient vital signs and track their progress will allow patients to be discharged from the hospital sooner and receive care at home. This will require increased use of telemedicine, allowing patients to consult with doctors remotely. All of these advances will require changes to legal and regulatory frameworks to enable new workflows that differ from those familiar to today's providers.
Affiliation(s)
- Max Feinstein
- Department of Anesthesiology Pain and Perioperative Medicine, Icahn School of Medicine at Mount Sinai
- Daniel Katz
- Department of Anesthesiology Pain and Perioperative Medicine, Icahn School of Medicine at Mount Sinai
- Samuel Demaria
- Department of Anesthesiology Pain and Perioperative Medicine, Icahn School of Medicine at Mount Sinai
- Ira S. Hofer
- Department of Anesthesiology Pain and Perioperative Medicine, Icahn School of Medicine at Mount Sinai

14
Ryder CY, Mott NM, Gross CL, Anidi C, Shigut L, Bidwell SS, Kim E, Zhao Y, Ngam BN, Snell MJ, Yu BJ, Forczmanski P, Rooney DM, Jeffcoach DR, Kim GJ. Using Artificial Intelligence to Gauge Competency on a Novel Laparoscopic Training System. J Surg Educ 2024; 81:267-274. PMID: 38160118; DOI: 10.1016/j.jsurg.2023.10.007.
Abstract
OBJECTIVE Laparoscopic surgical skill assessment and machine learning are often inaccessible in low- and middle-income countries (LMICs). Our team developed a low-cost laparoscopic training system to teach and assess the psychomotor skills required in laparoscopic salpingostomy in LMICs. We performed video review using AI to assess global surgical techniques. The objective of this study was to assess the validity of artificial intelligence (AI)-generated scoring of laparoscopic simulation videos by comparing the accuracy of AI results to human-generated scores. DESIGN Seventy-four surgical simulation videos were collected and graded by human raters using a modified OSATS (Objective Structured Assessment of Technical Skills). The videos were then analyzed via AI using 3 different time- and distance-based calculations on the laparoscopic instruments: path length, dimensionless jerk, and standard deviation of tool position. Predicted scores were generated using 5-fold cross-validation and K-Nearest-Neighbors classifiers. SETTING Surgical novices and experts from a variety of hospitals in Ethiopia, Cameroon, Kenya, and the United States contributed 74 laparoscopic salpingostomy simulation videos. RESULTS Complete accuracy of AI compared with human assessment ranged from 65% to 77%. There were no statistical differences in rank mean scores for 3 domains (Flow of Operation, Respect for Tissue, and Economy of Motion), while there were significant differences in ratings for Instrument Handling, Overall Performance, and the total summed score of all 5 domains (Summed). Estimated effect sizes were all less than 0.11, indicating a very small practical effect. The estimated intraclass correlation coefficient (ICC) of Summed was 0.72, indicating moderate correlation between AI and human scores. CONCLUSIONS AI video review of global characteristics was similar to human review in our laparoscopic training system. Machine learning may help fill an educational gap in LMICs where direct apprenticeship may not be feasible.
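The pipeline this abstract describes (motion metrics from tool trajectories fed to a 5-fold cross-validated K-Nearest-Neighbors classifier) can be sketched as follows. This is a minimal illustration on synthetic trajectories, not the study's implementation: the feature formulas (including the dimensionless-jerk normalization), the trajectory generator, and all parameters are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def motion_features(traj):
    """Path length, dimensionless jerk, and positional spread for an
    (N, 3) tool trajectory sampled at unit time steps. The jerk
    normalization used here is one common convention (assumption)."""
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    path_length = steps.sum()
    jerk = np.diff(traj, n=3, axis=0)  # third difference ~ jerk
    duration = len(traj) - 1
    dimensionless_jerk = np.sum(jerk ** 2) * duration ** 5 / path_length ** 2
    pos_std = traj.std(axis=0).mean()
    return [path_length, dimensionless_jerk, pos_std]

rng = np.random.default_rng(0)

def synthetic_trial(expert):
    """Toy stand-in for a recorded trial: a smooth arc plus tremor,
    with novices given larger random jitter."""
    t = np.linspace(0.0, 1.0, 200)[:, None]
    base = np.hstack([np.sin(3 * t), np.cos(3 * t), t])
    return base + rng.normal(scale=0.01 if expert else 0.08, size=base.shape)

# 74 trials, matching the study's video count; labels alternate.
y = np.array([i % 2 for i in range(74)])  # 0 = expert, 1 = novice
X = np.array([motion_features(synthetic_trial(expert=label == 0)) for label in y])

# 5-fold cross-validated KNN on standardized features, mirroring the
# abstract's evaluation setup.
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Standardizing the three features before KNN matters here because dimensionless jerk is orders of magnitude larger than the other two metrics and would otherwise dominate the distance computation.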
Affiliation(s)
- Nicole M Mott
- University of Michigan Medical School, Ann Arbor, Michigan
- Chioma Anidi
- University of Michigan Medical School, Ann Arbor, Michigan
- Leul Shigut
- Department of Surgery, Soddo Christian General Hospital, Soddo, Ethiopia
- Erin Kim
- University of Michigan Medical School, Ann Arbor, Michigan
- Yimeng Zhao
- University of Michigan Medical School, Ann Arbor, Michigan
- Mark J Snell
- Department of Surgery, Mbingo Baptist Hospital, Mbingo, Cameroon
- B Joon Yu
- Department of Surgery, University of Michigan, Ann Arbor, Michigan
- Pawel Forczmanski
- Department of Computer Science and Information Technology, West Pomeranian University of Technology in Szczecin, Szczecin, Poland
- Deborah M Rooney
- Department of Learning Sciences, University of Michigan, Ann Arbor, Michigan
- David R Jeffcoach
- Department of Surgery, Community Regional Medical Center, Fresno, California
- Grace J Kim
- Department of Surgery, University of Michigan, Ann Arbor, Michigan.

15
Anteby R, Nachmany I. Artificial Intelligence in Surgical Education: Consensus Has Been Reached, What's Next? J Am Coll Surg 2024; 238:144-145. PMID: 38099587; DOI: 10.1097/xcs.0000000000000125.
16
Li Y, Xia T, Luo H, He B, Jia F. MT-FiST: A Multi-Task Fine-Grained Spatial-Temporal Framework for Surgical Action Triplet Recognition. IEEE J Biomed Health Inform 2023; 27:4983-4994. PMID: 37498758; DOI: 10.1109/jbhi.2023.3299321.
Abstract
Surgical action triplet recognition plays a significant role in helping surgeons facilitate scene analysis and decision-making in computer-assisted surgeries. Compared to traditional context-aware tasks such as phase recognition, surgical action triplets, comprising the instrument, verb, and target, can offer more comprehensive and detailed information. However, current triplet recognition methods fall short in distinguishing fine-grained subclasses and disregard temporal correlation in action triplets. In this article, we propose a multi-task fine-grained spatial-temporal framework for surgical action triplet recognition named MT-FiST. The proposed method utilizes a multi-label mutual channel loss, which consists of diversity and discriminative components. This loss function decouples global task features into class-aligned features, enabling the learning of more local details from the surgical scene. The framework utilizes partially shared-parameter LSTM units to capture temporal correlations between adjacent frames. We conducted experiments on the CholecT50 dataset proposed in the MICCAI 2021 Surgical Action Triplet Recognition Challenge. Our framework is evaluated on the private test set of the challenge to ensure fair comparison. Our model outperformed state-of-the-art models in instrument, verb, target, and action triplet recognition tasks, with mAPs of 82.1% (+4.6%), 51.5% (+4.0%), 45.5% (+7.8%), and 35.8% (+3.1%), respectively. The proposed MT-FiST boosts the recognition of surgical action triplets in a context-aware surgical assistant system, further addressing multi-task recognition through effective temporal aggregation and fine-grained features.
17
Rodriguez Peñaranda N, Eissa A, Ferretti S, Bianchi G, Di Bari S, Farinha R, Piazza P, Checcucci E, Belenchón IR, Veccia A, Gomez Rivas J, Taratkin M, Kowalewski KF, Rodler S, De Backer P, Cacciamani GE, De Groote R, Gallagher AG, Mottrie A, Micali S, Puliatti S. Artificial Intelligence in Surgical Training for Kidney Cancer: A Systematic Review of the Literature. Diagnostics (Basel) 2023; 13:3070. PMID: 37835812; PMCID: PMC10572445; DOI: 10.3390/diagnostics13193070.
Abstract
The prevalence of renal cell carcinoma (RCC) is increasing due to advanced imaging techniques. Surgical resection is the standard treatment, involving complex radical and partial nephrectomy procedures that demand extensive training and planning. This review explores how artificial intelligence (AI) can provide a framework for kidney cancer surgery that addresses training difficulties. Following PRISMA 2020 criteria, an exhaustive search of the PubMed and SCOPUS databases was conducted without filters or restrictions. Inclusion criteria encompassed original English-language articles focusing on AI's role in kidney cancer surgical training; all non-original articles and articles published in languages other than English were excluded. Two independent reviewers assessed the articles, with a third settling any disagreement. Study specifics, AI tools, methodologies, endpoints, and outcomes were extracted by the same authors. The Oxford Centre for Evidence-Based Medicine's levels of evidence were employed to assess the studies. Out of 468 identified records, 14 eligible studies were selected. Potential AI applications in kidney cancer surgical training include analyzing surgical workflow, annotating instruments, identifying tissues, and 3D reconstruction. AI is capable of appraising surgical skills, including identifying procedural steps and tracking instruments. While AI and augmented reality (AR) enhance training, challenges persist in real-time tracking and registration. AI-driven 3D reconstruction proves beneficial for intraoperative guidance and preoperative preparation. AI shows potential for advancing surgical training by providing unbiased evaluations, personalized feedback, and enhanced learning processes. Yet challenges such as consistent metric measurement, ethical concerns, and data privacy must be addressed. The integration of AI into kidney cancer surgical training offers solutions to training difficulties and a boost to surgical education. However, to fully harness its potential, additional studies are imperative.
Affiliation(s)
- Natali Rodriguez Peñaranda
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Ahmed Eissa
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Department of Urology, Faculty of Medicine, Tanta University, Tanta 31527, Egypt
- Stefania Ferretti
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Giampaolo Bianchi
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Stefano Di Bari
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Rui Farinha
- Orsi Academy, 9090 Melle, Belgium
- Urology Department, Lusíadas Hospital, 1500-458 Lisbon, Portugal
- Pietro Piazza
- Division of Urology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Enrico Checcucci
- Department of Surgery, FPO-IRCCS Candiolo Cancer Institute, 10060 Turin, Italy
- Inés Rivero Belenchón
- Urology and Nephrology Department, Virgen del Rocío University Hospital, 41013 Seville, Spain
- Alessandro Veccia
- Department of Urology, University of Verona, Azienda Ospedaliera Universitaria Integrata, 37126 Verona, Italy
- Juan Gomez Rivas
- Department of Urology, Hospital Clinico San Carlos, 28040 Madrid, Spain
- Mark Taratkin
- Institute for Urology and Reproductive Health, Sechenov University, 119435 Moscow, Russia
- Karl-Friedrich Kowalewski
- Department of Urology and Urosurgery, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Severin Rodler
- Department of Urology, University Hospital LMU Munich, 80336 Munich, Germany
- Pieter De Backer
- Orsi Academy, 9090 Melle, Belgium
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, 9000 Ghent, Belgium
- Giovanni Enrico Cacciamani
- USC Institute of Urology, Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90089, USA
- AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA 90089, USA
- Ruben De Groote
- Orsi Academy, 9090 Melle, Belgium
- Anthony G. Gallagher
- Orsi Academy, 9090 Melle, Belgium
- Faculty of Life and Health Sciences, Ulster University, Derry BT48 7JL, UK
- Alexandre Mottrie
- Orsi Academy, 9090 Melle, Belgium
- Salvatore Micali
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy
- Stefano Puliatti
- Department of Urology, Azienda Ospedaliero-Universitaria di Modena, Via Pietro Giardini, 1355, 41126 Baggiovara, Italy

18
Fischer E, Jawed KJ, Cleary K, Balu A, Donoho A, Thompson Gestrich W, Donoho DA. A methodology for the annotation of surgical videos for supervised machine learning applications. Int J Comput Assist Radiol Surg 2023; 18:1673-1678. PMID: 37245179; DOI: 10.1007/s11548-023-02923-0.
Abstract
PURPOSE Surgical data science is an emerging field focused on quantitative analysis of pre-, intra-, and postoperative patient data (Maier-Hein et al. in Med Image Anal 76:102306, 2022). Data science approaches can decompose complex procedures, train surgical novices, assess the outcomes of actions, and create predictive models of surgical outcomes (Marcus et al. in Pituitary 24:839-853, 2021; Rädsch et al. in Nat Mach Intell, 2022). Surgical videos contain powerful signals of events that may impact patient outcomes. A necessary step before the deployment of supervised machine learning methods is the development of labels for objects and anatomy. We describe a complete method for annotating videos of transsphenoidal surgery. METHODS Endoscopic video recordings of transsphenoidal pituitary tumor removal surgeries were collected from a multicenter research collaborative. These videos were anonymized, stored in a cloud-based platform, and uploaded to an online annotation platform. An annotation framework was developed based on a literature review and surgical observations to ensure proper understanding of the tools, anatomy, and steps present. A user guide was developed to train annotators and ensure standardization. RESULTS A fully annotated video of a transsphenoidal pituitary tumor removal surgery, comprising over 129,826 frames, was produced. To prevent missing annotations, all frames were later reviewed by highly experienced annotators and a surgeon reviewer. Iterations on the annotated video allowed for the creation of an annotated video complete with labeled surgical tools, anatomy, and phases. In addition, a user guide was developed for training novice annotators, providing information about the annotation software to ensure the production of standardized annotations. CONCLUSIONS A standardized and reproducible workflow for managing surgical video data is a necessary prerequisite to surgical data science applications. We developed a standard methodology for annotating surgical videos that may facilitate quantitative analysis using machine learning. Future work will demonstrate the clinical relevance and impact of this workflow by developing process modeling and outcome predictors.
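The per-frame label structure this abstract describes (surgical tools, anatomy, and phase attached to each video frame, drawn from a controlled vocabulary defined in a user guide) can be sketched as a minimal data model. The field names and the phase/tool vocabularies below are hypothetical illustrations, not the authors' actual schema.

```python
from dataclasses import asdict, dataclass, field

# Hypothetical controlled vocabularies; the study's real label sets are
# defined in its user guide and are not reproduced here.
PHASES = ("nasal", "sphenoid", "sellar", "closure")
TOOLS = ("suction", "pituitary_rongeur", "cottonoid", "curette")

@dataclass
class FrameAnnotation:
    """One labeled video frame: phase plus visible tools and anatomy."""
    frame_index: int
    phase: str
    tools: list = field(default_factory=list)
    anatomy: list = field(default_factory=list)

    def __post_init__(self):
        # Enforce the controlled vocabulary, as a standardized
        # annotation workflow would.
        if self.phase not in PHASES:
            raise ValueError(f"unknown phase: {self.phase}")
        for tool in self.tools:
            if tool not in TOOLS:
                raise ValueError(f"unknown tool: {tool}")

ann = FrameAnnotation(frame_index=41023, phase="sellar",
                      tools=["suction", "curette"],
                      anatomy=["pituitary_gland", "sella"])
record = asdict(ann)  # plain dict, ready for JSON export to an ML pipeline
```

Validating each record against a fixed vocabulary at creation time is one way to get the standardization the authors achieve through their user guide and expert review passes.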
Affiliation(s)
- Elizabeth Fischer
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA.
- Kochai Jan Jawed
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- Kevin Cleary
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- Alan Balu
- Division of Neurosurgery, Center for Neuroscience and Behavioral Medicine, Children's National Hospital, Washington, DC, USA
- Georgetown University School of Medicine, Washington, DC, USA
- Daniel A Donoho
- Georgetown University School of Medicine, Washington, DC, USA
- Department of Neurosurgery and Pediatrics, School of Medicine and Health Sciences, George Washington University, Washington, DC, USA

19
Varas J, Coronel BV, Villagrán I, Escalona G, Hernandez R, Schuit G, Durán V, Lagos-Villaseca A, Jarry C, Neyem A, Achurra P. Innovations in surgical training: exploring the role of artificial intelligence and large language models (LLM). Rev Col Bras Cir 2023; 50:e20233605. PMID: 37646729; PMCID: PMC10508667; DOI: 10.1590/0100-6991e-20233605-en.
Abstract
The landscape of surgical training is rapidly evolving with the advent of artificial intelligence (AI) and its integration into education and simulation. This manuscript explores the potential applications and benefits of AI-assisted surgical training, particularly the use of large language models (LLMs), in enhancing communication, personalizing feedback, and promoting skill development. We discuss advancements in simulation-based training, AI-driven assessment tools, video-based assessment systems, virtual reality (VR) and augmented reality (AR) platforms, and the potential role of LLMs in the transcription, translation, and summarization of feedback. Despite the promising opportunities presented by AI integration, several challenges must be addressed, including accuracy and reliability, ethical and privacy concerns, bias in AI models, integration with existing training systems, and the training and adoption of AI-assisted tools. By proactively addressing these challenges and harnessing the potential of AI, the future of surgical training may be reshaped to provide a more comprehensive, safe, and effective learning experience for trainees, ultimately leading to better patient outcomes.
Collapse
Affiliation(s)
- Julian Varas
  - Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Brandon Valencia Coronel
  - Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Ignacio Villagrán
  - Pontificia Universidad Católica de Chile, Carrera de Kinesiología, Departamento de Ciencias de la Salud, Facultad de Medicina, Santiago, Región Metropolitana, Chile
- Gabriel Escalona
  - Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Rocio Hernandez
  - Pontificia Universidad Católica de Chile, Computer Science Department, School of Engineering, Santiago, Región Metropolitana, Chile
- Gregory Schuit
  - Pontificia Universidad Católica de Chile, Computer Science Department, School of Engineering, Santiago, Región Metropolitana, Chile
- Valentina Durán
  - Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Antonia Lagos-Villaseca
  - Pontificia Universidad Católica de Chile, Department of Otolaryngology, Santiago, Región Metropolitana, Chile
- Cristian Jarry
  - Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
- Andres Neyem
  - Pontificia Universidad Católica de Chile, Computer Science Department, School of Engineering, Santiago, Región Metropolitana, Chile
- Pablo Achurra
  - Pontificia Universidad Católica de Chile, Experimental Surgery and Simulation Center, Department of Digestive Surgery, Santiago, Región Metropolitana, Chile
20
Moodi Ghalibaf A, Moghadasin M, Emadzadeh A, Mastour H. Psychometric properties of the Persian version of the Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS). BMC Med Educ 2023; 23:577. PMID: 37582816; PMCID: PMC10428571; DOI: 10.1186/s12909-023-04553-1.
Abstract
INTRODUCTION There are numerous cases where artificial intelligence (AI) can be applied to improve the outcomes of medical education. The extent to which medical practitioners and students are ready to work with and leverage this paradigm is unclear in Iran. This study investigated the psychometric properties of a Persian version of the Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS) developed by Karaca et al. in 2021. In future studies, the medical AI readiness of Iranian medical students could be investigated using this scale, and effective interventions might be planned and implemented according to the results. METHODS In this study, 502 medical students (mean age 22.66 (±2.767); 55% female) responded to the Persian questionnaire in an online survey. The original questionnaire was translated into Persian using a back-translation procedure, and all participants completed the demographic component and the entire MAIRS-MS. Internal and external consistency, factor analysis, construct validity, and confirmatory factor analysis were examined to analyze the collected data. P ≤ 0.05 was considered statistically significant. RESULTS Four subscales emerged from the exploratory factor analysis (Cognition, Ability, Vision, and Ethics), and confirmatory factor analysis confirmed the four subscales. The Cronbach alpha value for internal consistency was 0.944 for the total scale and 0.886, 0.905, 0.865, and 0.856 for cognition, ability, vision, and ethics, respectively. CONCLUSIONS The Persian version of MAIRS-MS was fairly equivalent to the original conceptually and linguistically. This study also confirmed the validity and reliability of the Persian version of MAIRS-MS. Therefore, the Persian version can be a suitable and brief instrument to assess Iranian medical students' readiness for medical artificial intelligence.
Affiliation(s)
- AmirAli Moodi Ghalibaf
  - Student Research Committee, Faculty of Medicine, Birjand University of Medical Sciences, Birjand, Iran
- Maryam Moghadasin
  - Department of Clinical Psychology, Faculty of Psychology and Education, Kharazmi University, Tehran, Iran
- Ali Emadzadeh
  - Department of Medical Education, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Haniye Mastour
  - Department of Medical Education, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
21
Kufel J, Bargieł-Łączek K, Kocot S, Koźlik M, Bartnikowska W, Janik M, Czogalik Ł, Dudek P, Magiera M, Lis A, Paszkiewicz I, Nawrat Z, Cebula M, Gruszczyńska K. What Is Machine Learning, Artificial Neural Networks and Deep Learning? Examples of Practical Applications in Medicine. Diagnostics (Basel) 2023; 13:2582. PMID: 37568945; PMCID: PMC10417718; DOI: 10.3390/diagnostics13152582.
Abstract
Machine learning (ML), artificial neural networks (ANNs), and deep learning (DL) are all topics that fall under the heading of artificial intelligence (AI) and have gained popularity in recent years. ML involves the application of algorithms to automate decision-making processes using models that have not been manually programmed but have been trained on data. ANNs, a subset of ML, aim to simulate the structure and function of the human brain. DL, in turn, uses multiple layers of interconnected neurons, enabling the processing and analysis of large and complex databases. In medicine, these techniques are being introduced to improve the speed and efficiency of disease diagnosis and treatment. Each of the AI techniques presented in the paper is supported with an example of a possible medical application. Given the rapid development of technology, the use of AI in medicine shows promising results in the context of patient care. It is particularly important to keep a close eye on this issue and conduct further research in order to fully explore the potential of ML, ANNs, and DL, and bring further applications into clinical use in the future.
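As a toy illustration of the concepts this abstract surveys (illustrative only, not code from the paper): a single artificial neuron (a perceptron) is the building block of ANNs, and even alone it can learn a simple decision rule, here the logical AND, from labeled examples using the classic perceptron update rule.

```python
def step(x):
    # Threshold activation: fire (1) if the weighted sum is non-negative.
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    # Weights and bias start at zero; each error nudges them toward
    # the target, per the perceptron learning rule.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy training data: the AND function.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND]
# preds now reproduces the AND truth table: [0, 0, 0, 1]
```

Deep learning stacks many such units into layers; ML frameworks automate exactly this kind of error-driven weight adjustment at scale.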
Affiliation(s)
- Jakub Kufel
  - Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, 41-808 Zabrze, Poland
- Katarzyna Bargieł-Łączek
  - Paediatric Radiology Students’ Scientific Association at the Division of Diagnostic Imaging, Department of Radiology and Nuclear Medicine, Faculty of Medical Science in Katowice, Medical University of Silesia, 40-752 Katowice, Poland
- Szymon Kocot
  - Bright Coders’ Factory, Technologiczna 2, 45-839 Opole, Poland
- Maciej Koźlik
  - Division of Cardiology and Structural Heart Disease, Medical University of Silesia, 40-635 Katowice, Poland
- Wiktoria Bartnikowska
  - Paediatric Radiology Students’ Scientific Association at the Division of Diagnostic Imaging, Department of Radiology and Nuclear Medicine, Faculty of Medical Science in Katowice, Medical University of Silesia, 40-752 Katowice, Poland
- Michał Janik
  - Student Scientific Association Named after Professor Zbigniew Religa at the Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Łukasz Czogalik
  - Student Scientific Association Named after Professor Zbigniew Religa at the Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Piotr Dudek
  - Student Scientific Association Named after Professor Zbigniew Religa at the Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Mikołaj Magiera
  - Student Scientific Association Named after Professor Zbigniew Religa at the Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Anna Lis
  - Cardiology Students’ Scientific Association at the III Department of Cardiology, Faculty of Medical Sciences in Katowice, Medical University of Silesia, 40-635 Katowice, Poland
- Iga Paszkiewicz
  - Student Scientific Association Named after Professor Zbigniew Religa at the Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Zbigniew Nawrat
  - Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, 41-808 Zabrze, Poland
- Maciej Cebula
  - Individual Specialist Medical Practice Maciej Cebula, 40-754 Katowice, Poland
- Katarzyna Gruszczyńska
  - Department of Radiodiagnostics, Invasive Radiology and Nuclear Medicine, Department of Radiology and Nuclear Medicine, School of Medicine in Katowice, Medical University of Silesia, Medyków 14, 40-752 Katowice, Poland
22
Hassan AM, Nelson JA, Coert JH, Mehrara BJ, Selber JC. Exploring the Potential of Artificial Intelligence in Surgery: Insights from a Conversation with ChatGPT. Ann Surg Oncol 2023; 30:3875-3878. PMID: 37017834; DOI: 10.1245/s10434-023-13347-0.
Affiliation(s)
- Abbas M Hassan
  - Division of Plastic & Reconstructive Surgery, Indiana University School of Medicine, Indianapolis, IN, USA
- Jonas A Nelson
  - Department of Plastic and Reconstructive Surgery, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- J Henk Coert
  - Department of Plastic and Reconstructive Surgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Babak J Mehrara
  - Department of Plastic and Reconstructive Surgery, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Jesse C Selber
  - Department of Plastic Surgery, Corewell Health, Grand Rapids, MI, USA
23
Kiyasseh D, Laca J, Haque TF, Otiato M, Miles BJ, Wagner C, Donoho DA, Trinh QD, Anandkumar A, Hung AJ. Human visual explanations mitigate bias in AI-based assessment of surgeon skills. NPJ Digit Med 2023; 6:54. PMID: 36997642; PMCID: PMC10063676; DOI: 10.1038/s41746-023-00766-2.
Abstract
Artificial intelligence (AI) systems can now reliably assess surgeon skills through videos of intraoperative surgical activity. With such systems informing future high-stakes decisions such as whether to credential surgeons and grant them the privilege to operate on patients, it is critical that they treat all surgeons fairly. However, it remains an open question whether surgical AI systems exhibit bias against surgeon sub-cohorts, and, if so, whether such bias can be mitigated. Here, we examine and mitigate the bias exhibited by a family of surgical AI systems, SAIS, deployed on videos of robotic surgeries from three geographically diverse hospitals (USA and EU). We show that SAIS exhibits an underskilling bias, erroneously downgrading surgical performance, and an overskilling bias, erroneously upgrading surgical performance, at different rates across surgeon sub-cohorts. To mitigate such bias, we leverage a strategy, TWIX, which teaches an AI system to provide a visual explanation for its skill assessment that otherwise would have been provided by human experts. We show that whereas baseline strategies inconsistently mitigate algorithmic bias, TWIX can effectively mitigate the underskilling and overskilling bias while simultaneously improving the performance of these AI systems across hospitals. We discovered that these findings carry over to the training environment where we assess medical students' skills today. Our study is a critical prerequisite to the eventual implementation of AI-augmented global surgeon credentialing programs, ensuring that all surgeons are treated fairly.
Affiliation(s)
- Dani Kiyasseh
  - Department of Computing and Mathematical Sciences, California Institute of Technology, California, CA, USA
- Jasper Laca
  - Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, California, CA, USA
- Taseen F Haque
  - Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, California, CA, USA
- Maxwell Otiato
  - Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, California, CA, USA
- Brian J Miles
  - Department of Urology, Houston Methodist Hospital, Texas, TX, USA
- Christian Wagner
  - Department of Urology, Pediatric Urology and Uro-Oncology, Prostate Center Northwest, St. Antonius-Hospital, Gronau, Germany
- Daniel A Donoho
  - Division of Neurosurgery, Center for Neuroscience, Children's National Hospital, Washington DC, WA, USA
- Quoc-Dien Trinh
  - Center for Surgery & Public Health, Department of Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Animashree Anandkumar
  - Department of Computing and Mathematical Sciences, California Institute of Technology, California, CA, USA
- Andrew J Hung
  - Center for Robotic Simulation and Education, Catherine & Joseph Aresty Department of Urology, University of Southern California, California, CA, USA
24
Filicori F, Bitner DP, Fuchs HF, Anvari M, Sankaranaraynan G, Bloom MB, Hashimoto DA, Madani A, Mascagni P, Schlachta CM, Talamini M, Meireles OR. SAGES video acquisition framework: analysis of available OR recording technologies by the SAGES AI task force. Surg Endosc 2023. PMID: 36729231; DOI: 10.1007/s00464-022-09825-3.
Abstract
BACKGROUND Surgical video recording provides the opportunity to acquire intraoperative data that can subsequently be used for a variety of quality improvement, research, and educational applications. Various recording devices are available for standard operating room camera systems. Some allow for collateral data acquisition including activities of the OR staff, kinematic measurements (motion of surgical instruments), and recording of the endoscopic video streams. Additional analysis through computer vision (CV), which allows software to understand and perform predictive tasks on images, can allow for automatic phase segmentation, instrument tracking, and derivative performance-geared metrics. With this survey, we summarize available surgical video acquisition technologies and associated performance analysis platforms. METHODS In an effort promoted by the SAGES Artificial Intelligence Task Force, we surveyed the available video recording technology companies. Of thirteen companies approached, nine were interviewed, each over an hour-long video conference. A standard set of 17 questions was administered. Questions spanned data acquisition capacity, quality, and synchronization of video with other data, availability of analytic tools, privacy, and access. RESULTS Most platforms (89%) store video in full-HD (1080p) resolution at a frame rate of 30 fps. Most (67%) of the available platforms store data in a Cloud-based databank as opposed to institutional hard drives. CV-powered analysis is featured in some platforms: phase segmentation in 44% of platforms, out-of-body blurring or tool tracking in 33%, and suture time in 11%. Kinematic data are provided by 22% and perfusion imaging by one device. CONCLUSION Video acquisition platforms on the market allow for in-depth performance analysis through manual and automated review. Most of these devices will be integrated in upcoming robotic surgical platforms. Platform analytic supplementation, including CV, may allow for more refined performance analysis for surgeons and trainees. Most current AI features are related to phase segmentation, instrument tracking, and video blurring.
Affiliation(s)
- Filippo Filicori
  - Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, New York, NY, USA
  - Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Daniel P Bitner
  - Intraoperative Performance Analytics Laboratory (IPAL), Department of General Surgery, Northwell Health, Lenox Hill Hospital, New York, NY, USA
  - Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Hans F Fuchs
  - Department of Surgery, Division of Surgical Robotics and Artificial Intelligence, University of Cologne, Cologne, Germany
- Mehran Anvari
  - Center for Surgical Invention and Innovation, Department of Surgery, McMaster University, Hamilton, ON, Canada
- Ganesh Sankaranaraynan
  - Artificial Intelligence and Medical Simulation (AIMS) Lab, Department of Surgery, UT Southwestern Medical Center, Dallas, TX, USA
- Matthew B Bloom
  - Minimally Invasive Surgery Laboratory, Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Daniel A Hashimoto
  - Department of Surgery, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Amin Madani
  - Surgical Artificial Intelligence Research Academy, Department of Surgery, University Health Network, Toronto, ON, Canada
- Pietro Mascagni
  - Fondazione Policlinico Universitario A. Gemelli, Rome, Italy
  - Institute of Image-Guided Surgery, IHU-Strasbourg, Strasbourg, France
- Christopher M Schlachta
  - Canadian Surgical Technologies & Advanced Robotics (CSTAR), London Health Sciences Centre, London, ON, Canada
- Mark Talamini
  - Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Ozanan R Meireles
  - Surgical Artificial Intelligence and Innovation Laboratory (SAIIL), Department of General Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC 339, Boston, MA, 02139, USA
25
Avram MF, Lazăr DC, Mariş MI, Olariu S. Artificial intelligence in improving the outcome of surgical treatment in colorectal cancer. Front Oncol 2023; 13:1116761. PMID: 36733307; PMCID: PMC9886660; DOI: 10.3389/fonc.2023.1116761.
Abstract
Background A considerable number of recent studies have used artificial intelligence (AI) in the area of colorectal cancer (CRC). Surgical treatment of CRC remains the most important curative component. Artificial intelligence in CRC surgery is not nearly as advanced as it is in screening (colonoscopy), diagnosis, and prognosis, especially due to the increased complexity and variability of structures and elements in all fields of view, as well as a general shortage of annotated video banks for utilization. Methods A literature search was performed and relevant studies were included in this minireview. Results The intraoperative steps which, at this moment, can benefit from AI in CRC are: phase and action recognition, excision plane navigation, endoscopy control, real-time circulation analysis, knot tying, automatic optical biopsy, and hyperspectral imaging. This minireview also analyses the current advances in robotic treatment of CRC as well as the present possibility of automated CRC robotic surgery. Conclusions The use of AI in CRC surgery is still in its beginnings. The development of AI models capable of reproducing an expert colorectal surgeon's skill, the creation of large and complex datasets, and the standardization of surgical colorectal procedures will contribute to the widespread use of AI in CRC surgical treatment.
Affiliation(s)
- Mihaela Flavia Avram
  - Department of Surgery X, 1st Surgery Discipline, “Victor Babeş” University of Medicine and Pharmacy Timişoara, Timişoara, Romania
  - Department of Mathematics, Politehnica University Timisoara, Timişoara, Romania
- Daniela Cornelia Lazăr
  - Department V of Internal Medicine I, Discipline of Internal Medicine IV, “Victor Babeş” University of Medicine and Pharmacy Timişoara, Timişoara, Romania
- Mihaela Ioana Mariş
  - Department of Functional Sciences, Division of Physiopathology, “Victor Babes” University of Medicine and Pharmacy Timisoara, Timisoara, Romania
  - Center for Translational Research and Systems Medicine, “Victor Babes” University of Medicine and Pharmacy Timisoara, Timisoara, Romania
- Sorin Olariu
  - Department of Surgery X, 1st Surgery Discipline, “Victor Babeş” University of Medicine and Pharmacy Timişoara, Timişoara, Romania
26
Rajesh A, Chartier C, Asaad M, Butler CE. A Synopsis of Artificial Intelligence and its Applications in Surgery. Am Surg 2023; 89:20-24. PMID: 35713389; DOI: 10.1177/00031348221109450.
Abstract
Artificial intelligence (AI) has made steady inroads into healthcare over the last decade. While widespread adoption into clinical practice remains elusive, the outreach of this discipline has progressed beyond the physician scientist, and different facets of this technology have been incorporated into the care of surgical patients. New AI applications are developing at a rapid pace, and it is imperative that the general surgeon be aware of the broad utility of AI as applicable in his or her day-to-day practice, so that healthcare continues to remain up-to-date and evidence-based. This review provides a broad account of the tip of the AI iceberg and highlights its potential for positively impacting surgical care.
Affiliation(s)
- Aashish Rajesh
  - Department of Surgery, University of Texas Health Science Center, San Antonio, TX, USA
- Malke Asaad
  - Department of Plastic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Charles E Butler
  - Department of Plastic & Reconstructive Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
27
Nagaraj MB, Namazi B, Sankaranarayanan G, Scott DJ. Developing artificial intelligence models for medical student suturing and knot-tying video-based assessment and coaching. Surg Endosc 2023; 37:402-411. PMID: 35982284; PMCID: PMC9388210; DOI: 10.1007/s00464-022-09509-y.
Abstract
BACKGROUND Early introduction and distributed learning have been shown to improve student comfort with basic requisite suturing skills. The need for more frequent and directed feedback, however, remains an enduring concern for both remote and in-person training. A previous in-person curriculum for our second-year medical students transitioning to clerkships was adapted to an at-home video-based assessment model due to the social distancing implications of COVID-19. We aimed to develop an Artificial Intelligence (AI) model to perform video-based assessment. METHODS Second-year medical students were asked to submit a video of a simple interrupted knot on a Penrose drain with instrument tying technique after self-training to proficiency. Proficiency was defined as performing the task in under two minutes with no critical errors. All the videos were first manually given a pass-fail rating and then underwent task segmentation. We developed and trained two AI models based on convolutional neural networks to identify errors (instrument holding and knot-tying) and provide automated ratings. RESULTS A total of 229 medical student videos were reviewed (150 pass, 79 fail). Of those who failed, the critical error distribution was 15 knot-tying, 47 instrument-holding, and 17 multiple. A total of 216 videos were used to train the models after excluding the low-quality videos. A k-fold cross-validation (k = 10) was used. The accuracy of the instrument holding model was 89% with an F-1 score of 74%. For the knot-tying model, the accuracy was 91% with an F-1 score of 54%. CONCLUSIONS Medical students require assessment and directed feedback to better acquire surgical skill, but this is often time-consuming and inadequately done. AI techniques can instead be employed to perform automated surgical video analysis. Future work will optimize the current model to identify discrete errors in order to supplement video-based rating with specific feedback.
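The accuracy and F-1 figures this abstract reports come from standard confusion-matrix arithmetic. A minimal sketch (the counts below are hypothetical, not the study's data) shows why the two metrics can diverge, as in the knot-tying model above (91% accuracy but 54% F-1):

```python
def accuracy(tp, tn, fp, fn):
    # Fraction of all predictions that were correct.
    return (tp + tn) / (tp + tn + fp + fn)

def f1(tp, fp, fn):
    # Harmonic mean of precision and recall; ignores true negatives,
    # so it is sensitive to performance on the rare positive class.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Balanced hypothetical case: accuracy and F-1 agree.
balanced_f1 = f1(tp=8, fp=2, fn=2)            # precision = recall = 0.8

# Imbalanced hypothetical case: many true negatives inflate accuracy
# while F-1 stays low, mirroring the knot-tying result's pattern.
imbalanced_acc = accuracy(tp=3, tn=90, fp=1, fn=4)
imbalanced_f1 = f1(tp=3, fp=1, fn=4)
```

With class imbalance, which is common in error-detection datasets like this one (few failing videos among many passing ones), F-1 is the more informative number for the minority class.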
Affiliation(s)
- Madhuri B Nagaraj
  - Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-9159, USA
  - University of Texas Southwestern Simulation Center, 2001 Inwood Road, Dallas, TX, 75390-9092, USA
- Babak Namazi
  - Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-9159, USA
- Ganesh Sankaranarayanan
  - Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-9159, USA
- Daniel J Scott
  - Department of Surgery, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-9159, USA
  - University of Texas Southwestern Simulation Center, 2001 Inwood Road, Dallas, TX, 75390-9092, USA
28
Robotically assisted augmented reality system for identification of targeted lymph nodes in laparoscopic gynecological surgery: a first step toward the identification of sentinel node: augmented reality in gynecological surgery. Surg Endosc 2022; 36:9224-9233. PMID: 35831676; DOI: 10.1007/s00464-022-09409-1.
Abstract
BACKGROUND To prove feasibility of multimodal and temporal fusion of laparoscopic images with preoperative computed tomography scans for real-time in vivo targeted lymph node (TLN) detection during minimally invasive pelvic lymphadenectomy and to validate and enable such guidance for safe and accurate sentinel lymph node dissection, including anatomical landmarks, in an experimental model. METHODS A measurement campaign determined the most accurate tracking system (UR5-Cobot versus NDI Polaris). The subsequent interventions on two pigs consisted of an identification of artificial TLN and anatomical landmarks without and with augmented reality (AR) assistance. The AR overlay on target structures was quantitatively evaluated. The clinical relevance of our system was assessed via a questionnaire completed by experienced and trainee surgeons. RESULTS An AR-based robotic assistance system that performed real-time multimodal and temporal fusion of laparoscopic images with preoperative medical images was developed and tested. It enabled the detection of TLN and their surrounding anatomical structures during pelvic lymphadenectomy. Accuracy of the CT overlay was > 90%, with overflow rates < 6%. When comparing AR to direct vision, we found that scores were significantly higher with AR for all target structures. AR aided both experienced surgeons and trainees, whether for TLN, ureter, or vessel identification. CONCLUSION This computer-assisted system was reliable, safe, and accurate, and the present achievements represent a first step toward a clinical study.
29
Mascagni P, Alapatt D, Laracca GG, Guerriero L, Spota A, Fiorillo C, Vardazaryan A, Quero G, Alfieri S, Baldari L, Cassinotti E, Boni L, Cuccurullo D, Costamagna G, Dallemagne B, Padoy N. Multicentric validation of EndoDigest: a computer vision platform for video documentation of the critical view of safety in laparoscopic cholecystectomy. Surg Endosc 2022; 36:8379-8386. PMID: 35171336; DOI: 10.1007/s00464-022-09112-1.
Abstract
BACKGROUND A computer vision (CV) platform named EndoDigest was recently developed to facilitate the use of surgical videos. Specifically, EndoDigest automatically provides short video clips to effectively document the critical view of safety (CVS) in laparoscopic cholecystectomy (LC). The aim of the present study is to validate EndoDigest on a multicentric dataset of LC videos. METHODS LC videos from 4 centers were manually annotated with the time of the cystic duct division and an assessment of CVS criteria. Incomplete recordings, bailout procedures, and procedures with an intraoperative cholangiogram were excluded. EndoDigest leveraged predictions of deep learning models for workflow analysis in a rule-based inference system designed to estimate the time of the cystic duct division. Performance was assessed by computing the error in estimating the manually annotated time of the cystic duct division. To provide concise video documentation of CVS, EndoDigest extracted video clips showing the 2 min preceding and the 30 s following the predicted cystic duct division. The relevance of the documentation was evaluated by assessing CVS in the automatically extracted 2.5-min-long video clips. RESULTS 144 of the 174 LC videos from 4 centers were analyzed. EndoDigest located the time of the cystic duct division with a mean error of 124.0 ± 270.6 s despite the use of fluorescent cholangiography in 27 procedures and great variations in surgical workflows across centers. The surgical evaluation found that 108 (75.0%) of the automatically extracted short video clips documented CVS effectively. CONCLUSIONS EndoDigest was robust enough to reliably locate the time of the cystic duct division and efficiently produce video documentation of CVS despite the highly variable workflows. Training specifically on data from each center could improve results; however, this multicentric validation shows the potential for clinical translation of this surgical data science tool to efficiently document surgical safety.
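The clip-extraction rule this abstract describes is a fixed window around the predicted cystic duct division: 2 min before to 30 s after. A minimal sketch of that windowing step (function name and signature are ours, not EndoDigest's actual API), clamped to the bounds of the recording:

```python
def cvs_clip_bounds(predicted_division_s, video_length_s,
                    before_s=120.0, after_s=30.0):
    """Return (start, end) in seconds for a CVS documentation clip:
    a fixed window around the predicted cystic duct division time,
    clamped so it never extends outside the recording."""
    start = max(0.0, predicted_division_s - before_s)
    end = min(video_length_s, predicted_division_s + after_s)
    return start, end

# Division predicted at 10 min into a 1-h video: clip spans 8:00-10:30.
clip = cvs_clip_bounds(600.0, 3600.0)

# Prediction near the start of the recording: the window is clamped at 0.
early_clip = cvs_clip_bounds(60.0, 3600.0)
```

The same clamping logic explains the abstract's "2.5-min-long" clips: away from the video boundaries the window is always 120 + 30 = 150 s.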
Affiliation(s)
- Pietro Mascagni
  - ICube, University of Strasbourg, CNRS, c/o IHU-Strasbourg, 1, place de l'hôpital, 67000, Strasbourg, France
  - Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Deepak Alapatt
  - ICube, University of Strasbourg, CNRS, c/o IHU-Strasbourg, 1, place de l'hôpital, 67000, Strasbourg, France
- Giovanni Guglielmo Laracca
  - Department of Medical Surgical Science and Translational Medicine, Sant'Andrea Hospital, Sapienza University of Rome, Rome, Italy
- Ludovica Guerriero
  - Department of Laparoscopic and Robotic General Surgery, Monaldi Hospital, AORN dei Colli, Naples, Italy
- Andrea Spota
  - Scuola di Specializzazione in Chirurgia Generale, University of Milan, Milan, Italy
- Claudio Fiorillo
  - Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Armine Vardazaryan
  - ICube, University of Strasbourg, CNRS, c/o IHU-Strasbourg, 1, place de l'hôpital, 67000, Strasbourg, France
- Giuseppe Quero
  - Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Sergio Alfieri
  - Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Ludovica Baldari
  - Department of Surgery, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico di Milano, University of Milan, Milan, Italy
- Elisa Cassinotti
  - Department of Surgery, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico di Milano, University of Milan, Milan, Italy
- Luigi Boni
  - Department of Surgery, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico di Milano, University of Milan, Milan, Italy
- Diego Cuccurullo
  - Department of Laparoscopic and Robotic General Surgery, Monaldi Hospital, AORN dei Colli, Naples, Italy
- Guido Costamagna
  - Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Bernard Dallemagne
  - Institute for Research Against Digestive Cancer (IRCAD), Strasbourg, France
  - IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
- Nicolas Padoy
  - ICube, University of Strasbourg, CNRS, c/o IHU-Strasbourg, 1, place de l'hôpital, 67000, Strasbourg, France
  - IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
| |
Collapse
|
30
|
Mascagni P, Alapatt D, Sestini L, Altieri MS, Madani A, Watanabe Y, Alseidi A, Redan JA, Alfieri S, Costamagna G, Boškoski I, Padoy N, Hashimoto DA. Computer vision in surgery: from potential to clinical value. NPJ Digit Med 2022; 5:163. [PMID: 36307544] [PMCID: PMC9616906] [DOI: 10.1038/s41746-022-00707-5] [Citation(s) in RCA: 51] [Impact Index Per Article: 17.0] [Received: 07/15/2022] [Accepted: 10/10/2022] [Indexed: 11/09/2022] Open
Abstract
Hundreds of millions of operations are performed worldwide each year, and the rising uptake in minimally invasive surgery has enabled fiber optic cameras and robots to become both important tools to conduct surgery and sensors from which to capture information about surgery. Computer vision (CV), the application of algorithms to analyze and interpret visual data, has become a critical technology through which to study the intraoperative phase of care with the goals of augmenting surgeons' decision-making processes, supporting safer surgery, and expanding access to surgical care. While much work has been performed on potential use cases, there are currently no CV tools widely used for diagnostic or therapeutic applications in surgery. Using laparoscopic cholecystectomy as an example, we reviewed current CV techniques that have been applied to minimally invasive surgery and their clinical applications. Finally, we discuss the challenges and obstacles that remain to be overcome for broader implementation and adoption of CV in surgery.
Affiliation(s)
- Pietro Mascagni: Gemelli Hospital, Catholic University of the Sacred Heart, Rome, Italy; IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France; Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada
- Deepak Alapatt: ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France
- Luca Sestini: ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Maria S Altieri: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Amin Madani: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University Health Network, Toronto, ON, Canada
- Yusuke Watanabe: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University of Hokkaido, Hokkaido, Japan
- Adnan Alseidi: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University of California San Francisco, San Francisco, CA, USA
- Jay A Redan: Department of Surgery, AdventHealth-Celebration Health, Celebration, FL, USA
- Sergio Alfieri: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Guido Costamagna: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Ivo Boškoski: Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Nicolas Padoy: IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France; ICube, University of Strasbourg, CNRS, IHU, Strasbourg, France
- Daniel A Hashimoto: Global Surgical Artificial Intelligence Collaborative, Toronto, ON, Canada; Department of Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA

31
Ward TM, Hashimoto DA, Ban Y, Rosman G, Meireles OR. Artificial intelligence prediction of cholecystectomy operative course from automated identification of gallbladder inflammation. Surg Endosc 2022; 36:6832-6840. [PMID: 35031869] [DOI: 10.1007/s00464-022-09009-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 08/29/2021] [Accepted: 01/03/2022] [Indexed: 12/24/2022]
Abstract
BACKGROUND Operative courses of laparoscopic cholecystectomies vary widely due to differing pathologies. Efforts to assess intra-operative difficulty include the Parkland grading scale (PGS), which scores inflammation from the initial view of the gallbladder on a 1-5 scale. We investigated the impact of PGS on intra-operative outcomes, including laparoscopic duration, attainment of the critical view of safety (CVS), and gallbladder injury. We additionally trained an artificial intelligence (AI) model to identify PGS. METHODS One surgeon labeled surgical phases, PGS, CVS attainment, and gallbladder injury in 200 cholecystectomy videos. We used multilevel Bayesian regression models to analyze the PGS's effect on intra-operative outcomes. We trained AI models to identify PGS from an initial view of the gallbladder and compared model performance to annotations by a second surgeon. RESULTS Slightly inflamed gallbladders (PGS-2) minimally increased duration, adding 2.7 [95% compatibility interval (CI) 0.3-7.0] minutes to an operation. This contrasted with maximally inflamed gallbladders (PGS-5), where on average 16.9 (95% CI 4.4-33.9) minutes were added, with 31.3 (95% CI 8.0-67.5) minutes added for the most affected surgeon. Inadvertent gallbladder injury occurred in 25% of cases, with a minimal increase in gallbladder injury observed with added inflammation. However, up to a 28% (95% CI −2 to 63) increase in the probability of a gallbladder hole during PGS-5 cases was observed for some surgeons. Inflammation had no substantial effect on whether or not a surgeon attained the CVS. An AI model could reliably (Krippendorff's α = 0.71, 95% CI 0.65-0.77) quantify inflammation when compared to a second surgeon (α = 0.82, 95% CI 0.75-0.87). CONCLUSIONS An AI model can identify the degree of gallbladder inflammation, which is predictive of cholecystectomy intra-operative course. This automated assessment could be useful for operating room workflow optimization and for targeted per-surgeon and per-resident feedback to accelerate acquisition of operative skills.
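The inter-rater reliability figures in this abstract are Krippendorff's α values. As a rough illustration of the statistic, here is a minimal sketch for the special case of two raters, complete data, and an interval distance metric; the paper does not describe its implementation, and an ordinal metric may be more appropriate for the 1-5 PGS scale:

```python
def krippendorff_alpha_interval(rater1, rater2):
    """Krippendorff's alpha for two raters with complete data and an
    interval distance metric delta(a, b) = (a - b)**2."""
    n = len(rater1)
    # Observed disagreement: mean squared difference within each unit.
    d_obs = sum((a - b) ** 2 for a, b in zip(rater1, rater2)) / n
    # Expected disagreement: mean squared difference over all ordered
    # pairs of distinct values in the pooled ratings.
    pooled = list(rater1) + list(rater2)
    m = len(pooled)
    d_exp = sum((pooled[i] - pooled[j]) ** 2
                for i in range(m) for j in range(m) if i != j) / (m * (m - 1))
    return 1.0 - d_obs / d_exp

print(krippendorff_alpha_interval([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # 1.0
```

Perfect agreement yields α = 1, chance-level agreement yields α ≈ 0, and systematic disagreement yields negative values.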
Affiliation(s)
- Thomas M Ward: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman St., WAC 460, Boston, MA, 02114, USA
- Daniel A Hashimoto: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman St., WAC 460, Boston, MA, 02114, USA
- Yutong Ban: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman St., WAC 460, Boston, MA, 02114, USA; Distributed Robotics Laboratory, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Guy Rosman: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman St., WAC 460, Boston, MA, 02114, USA; Distributed Robotics Laboratory, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Ozanan R Meireles: Surgical Artificial Intelligence and Innovation Laboratory, Department of Surgery, Massachusetts General Hospital, 15 Parkman St., WAC 460, Boston, MA, 02114, USA

32
Quero G, Mascagni P, Kolbinger FR, Fiorillo C, De Sio D, Longo F, Schena CA, Laterza V, Rosa F, Menghi R, Papa V, Tondolo V, Cina C, Distler M, Weitz J, Speidel S, Padoy N, Alfieri S. Artificial Intelligence in Colorectal Cancer Surgery: Present and Future Perspectives. Cancers (Basel) 2022; 14:3803. [PMID: 35954466] [PMCID: PMC9367568] [DOI: 10.3390/cancers14153803] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Received: 06/28/2022] [Revised: 07/29/2022] [Accepted: 08/03/2022] [Indexed: 02/05/2023] Open
Abstract
Artificial intelligence (AI) and computer vision (CV) are beginning to impact medicine. While evidence on the clinical value of AI-based solutions for the screening and staging of colorectal cancer (CRC) is mounting, CV and AI applications to enhance the surgical treatment of CRC are still in their early stage. This manuscript introduces key AI concepts to a surgical audience, illustrates fundamental steps to develop CV for surgical applications, and provides a comprehensive overview on the state-of-the-art of AI applications for the treatment of CRC. Notably, studies show that AI can be trained to automatically recognize surgical phases and actions with high accuracy even in complex colorectal procedures such as transanal total mesorectal excision (TaTME). In addition, AI models were trained to interpret fluorescent signals and recognize correct dissection planes during total mesorectal excision (TME), suggesting CV as a potentially valuable tool for intraoperative decision-making and guidance. Finally, AI could have a role in surgical training, providing automatic surgical skills assessment in the operating room. While promising, these proofs of concept require further development, validation in multi-institutional data, and clinical studies to confirm AI as a valuable tool to enhance CRC treatment.
Affiliation(s)
- Giuseppe Quero: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Pietro Mascagni: Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy; Institute of Image-Guided Surgery, IHU-Strasbourg, 67000 Strasbourg, France
- Fiona R. Kolbinger: Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Claudio Fiorillo: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Davide De Sio: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Fabio Longo: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Carlo Alberto Schena: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Vito Laterza: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Fausto Rosa: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Roberta Menghi: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Valerio Papa: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Vincenzo Tondolo: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Caterina Cina: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Marius Distler: Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Juergen Weitz: Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
- Stefanie Speidel: National Center for Tumor Diseases (NCT), Partner Site Dresden, 01307 Dresden, Germany
- Nicolas Padoy: Institute of Image-Guided Surgery, IHU-Strasbourg, 67000 Strasbourg, France; ICube, Centre National de la Recherche Scientifique (CNRS), University of Strasbourg, 67000 Strasbourg, France
- Sergio Alfieri: Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy; Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy

33
Abstract
BACKGROUND Artificial intelligence (AI) applications aiming to support surgical decision-making processes are generating novel threats to ethical surgical care. To understand and address these threats, we summarize the main ethical issues that may arise from applying AI to surgery, starting from the Ethics Guidelines for Trustworthy Artificial Intelligence framework recently promoted by the European Commission. STUDY DESIGN A modified Delphi process has been employed to achieve expert consensus. RESULTS The main ethical issues that arise from applying AI to surgery, described in detail here, relate to human agency, accountability for errors, technical robustness, privacy and data governance, transparency, diversity, non-discrimination, and fairness. It may be possible to address many of these ethical issues by expanding the breadth of surgical AI research to focus on implementation science. The potential for AI to disrupt surgical practice suggests that formal digital health education is becoming increasingly important for surgeons and surgical trainees. CONCLUSIONS A multidisciplinary focus on implementation science and digital health education is desirable to balance opportunities offered by emerging AI technologies and respect for the ethical principles of a patient-centric philosophy.
34
Vedula SS, Ghazi A, Collins JW, Pugh C, Stefanidis D, Meireles O, Hung AJ, Schwaitzberg S, Levy JS, Sachdeva AK. Artificial Intelligence Methods and Artificial Intelligence-Enabled Metrics for Surgical Education: A Multidisciplinary Consensus. J Am Coll Surg 2022; 234:1181-1192. [PMID: 35703817] [PMCID: PMC10634198] [DOI: 10.1097/xcs.0000000000000190] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Indexed: 11/26/2022]
Abstract
BACKGROUND Artificial intelligence (AI) methods and AI-enabled metrics hold tremendous potential to advance surgical education. Our objective was to generate consensus guidance on specific needs for AI methods and AI-enabled metrics for surgical education. STUDY DESIGN The study included a systematic literature search, a virtual conference, and a 3-round Delphi survey of 40 representative multidisciplinary stakeholders with domain expertise selected through purposeful sampling. The accelerated Delphi process was completed within 10 days. The survey covered overall utility, the anticipated future (10-year time horizon), and applications for surgical training, assessment, and feedback. Consensus was defined as agreement among 80% or more of respondents. We coded survey questions into 11 themes and descriptively analyzed the responses. RESULTS The respondents included surgeons (40%), engineers (15%), affiliates of industry (27.5%), professional societies (7.5%), regulatory agencies (7.5%), and a lawyer (2.5%). The survey included 155 questions; consensus was achieved on 136 (87.7%). The panel listed 6 deliverables each for AI-enhanced learning curve analytics and surgical skill assessment. For feedback, the panel identified 10 priority deliverables spanning 2-year (n = 2), 5-year (n = 4), and 10-year (n = 4) timeframes. Within 2 years, the panel expects development of methods to recognize anatomy in images of the surgical field and to provide surgeons with performance feedback immediately after an operation. The panel also identified 5 essential elements that should be included in operative performance reports for surgeons. CONCLUSIONS The Delphi panel consensus provides a specific, bold, and forward-looking roadmap for AI methods and AI-enabled metrics for surgical education.
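The 80%-agreement consensus rule used in this Delphi survey is simple to operationalize. A minimal sketch follows; the item names and vote counts are invented for illustration:

```python
def delphi_consensus(votes_by_item, threshold=0.80):
    """votes_by_item: mapping of survey item -> list of booleans
    (True = respondent agrees). Returns the set of items on which
    at least `threshold` of respondents agree."""
    return {item for item, votes in votes_by_item.items()
            if votes and sum(votes) / len(votes) >= threshold}

# 40 hypothetical respondents per item:
survey = {
    "auto-anatomy-recognition": [True] * 34 + [False] * 6,   # 85% agree
    "real-time-risk-alerts": [True] * 30 + [False] * 10,     # 75% agree
}
print(delphi_consensus(survey))  # {'auto-anatomy-recognition'}
```

Items below the threshold would typically be reworded and re-voted in the next Delphi round rather than discarded.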
Affiliation(s)
- S Swaroop Vedula: Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD
- Ahmed Ghazi: Department of Urology, University of Rochester Medical Center, Rochester, NY
- Justin W Collins: Division of Surgery and Interventional Science, Research Department of Targeted Intervention and Wellcome/Engineering and Physical Sciences Research Council Center for Interventional and Surgical Sciences, University College London, London, UK
- Carla Pugh: Department of Surgery, Stanford University, Stanford, CA
- Ozanan Meireles: Department of Surgery, Massachusetts General Hospital, Boston, MA
- Andrew J Hung: Artificial Intelligence Center at University of Southern California Urology, Department of Urology, University of Southern California, Los Angeles, CA
- Jeffrey S Levy: Institute for Surgical Excellence, Washington, DC
- Ajit K Sachdeva: Division of Education, American College of Surgeons, Chicago, IL

35
Unadkat V, Pangal DJ, Kugener G, Roshannai A, Chan J, Zhu Y, Markarian N, Zada G, Donoho DA. Code-free machine learning for object detection in surgical video: a benchmarking, feasibility, and cost study. Neurosurg Focus 2022; 52:E11. [PMID: 35364576] [DOI: 10.3171/2022.1.focus21652] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 10/29/2021] [Accepted: 01/25/2022] [Indexed: 11/06/2022]
Abstract
OBJECTIVE While the utilization of machine learning (ML) for data analysis typically requires significant technical expertise, novel platforms can deploy ML methods without requiring the user to have any coding experience (termed AutoML). The potential for these methods to be applied to neurosurgical video and surgical data science is unknown. METHODS AutoML, a code-free ML (CFML) system, was used to identify surgical instruments contained within each frame of endoscopic, endonasal intraoperative video obtained from a previously validated internal carotid injury training exercise performed on a high-fidelity cadaver model. Instrument-detection performance using CFML was compared with two state-of-the-art ML models built using the Python coding language on the same intraoperative video data set. RESULTS The CFML system successfully ingested surgical video without the use of any code. A total of 31,443 images were used to develop this model: 27,223 images were uploaded for training, 2292 images for validation, and 1928 images for testing. The mean average precision on the test set across all instruments was 0.708. The CFML model outperformed two standard object detection networks, RetinaNet and YOLOv3, which had mean average precisions of 0.669 and 0.527, respectively, in analyzing the same data set. Significant advantages of the CFML system included ease of use, relatively low cost, displays of true/false positives and negatives in a user-friendly interface, and the ability to deploy models for further analysis with ease. Significant drawbacks of the CFML model included an inability to view the structure of the trained model, an inability to update the ML model once trained with new examples, and an inability to perform robust downstream analysis of model performance and error modes. CONCLUSIONS This first report describes the baseline performance of CFML in an object detection task using a publicly available surgical video data set as a test bed. Compared with standard, code-based object detection networks, CFML exceeded their performance. This finding is encouraging for surgeon-scientists seeking to perform object detection tasks to answer clinical questions, perform quality improvement, and develop novel research ideas. The limited interpretability and customization of CFML models remain ongoing challenges. With the further development of code-free platforms, CFML will become increasingly important across biomedical research. Using CFML, surgeons without significant coding experience can perform exploratory ML analyses rapidly and efficiently.
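The mean average precision figures in this abstract build on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal sketch of that building block, assuming axis-aligned boxes in (x1, y1, x2, y2) format (the paper does not specify its evaluation code):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Overlap is zero when the boxes are disjoint in either axis.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.142857...
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5); average precision is then computed from the resulting precision-recall curve per instrument class.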
Affiliation(s)
- Vyom Unadkat: Department of Computer Science, USC Viterbi School of Engineering, Los Angeles, California; Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
- Dhiraj J Pangal: Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
- Guillaume Kugener: Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
- Arman Roshannai: Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
- Justin Chan: Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
- Yichao Zhu: Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
- Nicholas Markarian: Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
- Gabriel Zada: Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
- Daniel A Donoho: Division of Neurosurgery, Center for Neurosciences, Children's National Hospital, Washington, DC

36
Chen Z, Zhang Y, Yan Z, Dong J, Cai W, Ma Y, Jiang J, Dai K, Liang H, He J. Artificial intelligence assisted display in thoracic surgery: development and possibilities. J Thorac Dis 2022; 13:6994-7005. [PMID: 35070382] [PMCID: PMC8743398] [DOI: 10.21037/jtd-21-1240] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Received: 07/29/2021] [Accepted: 11/02/2021] [Indexed: 12/24/2022]
Abstract
In this golden age of rapid development of artificial intelligence (AI), researchers and surgeons have realized that AI can contribute to healthcare in all aspects, especially in surgery. The popularity of low-dose computed tomography (LDCT) and improvements in video-assisted thoracoscopic surgery (VATS) bring not only opportunities for thoracic surgery but also challenges on the way forward. Precisely localizing lung nodules preoperatively, accurately identifying anatomical structures intraoperatively, and avoiding complications all require a visual display of an individual's specific anatomy for surgical simulation and assistance. With the advance of AI-assisted display technologies, including 3D reconstruction/3D printing, virtual reality (VR), augmented reality (AR), and mixed reality (MR), computed tomography (CT) imaging in thoracic surgery has been fully utilized to transform 2D images into 3D models, which facilitates surgical teaching, planning, and simulation. AI-assisted display based on surgical videos is a new surgical application that is still in its infancy. Notably, it has potential applications in thoracic surgery education, surgical quality evaluation, intraoperative assistance, and postoperative analysis. In this review, we illustrate current AI-assisted display applications based on CT in thoracic surgery, focus on emerging AI applications based on surgical videos by reviewing related research in other surgical fields, and anticipate their potential development in thoracic surgery.
Affiliation(s)
- Zhuxing Chen: Department of Thoracic Surgery and Oncology, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, Guangzhou, China
- Yudong Zhang: Department of Thoracic Surgery, the First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Zeping Yan: Department of Thoracic Surgery and Oncology, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, Guangzhou, China; Guangdong Association of Thoracic Diseases, Guangzhou, China
- Junguo Dong: Department of Thoracic Surgery and Oncology, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, Guangzhou, China
- Weipeng Cai: Department of Thoracic Surgery and Oncology, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, Guangzhou, China
- Yongfu Ma: Department of Thoracic Surgery, the First Medical Centre, Chinese PLA General Hospital, Beijing, China
- Jipeng Jiang: Department of Thoracic Surgery, the First Medical Centre, Chinese PLA General Hospital, Beijing, China
- Keyao Dai: Department of Cardiothoracic Surgery, The Affiliated Hospital of Guangdong Medical University, Zhanjiang, China
- Hengrui Liang: Department of Thoracic Surgery and Oncology, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, Guangzhou, China
- Jianxing He: Department of Thoracic Surgery and Oncology, the First Affiliated Hospital of Guangzhou Medical University, National Center for Respiratory Medicine, State Key Laboratory of Respiratory Disease, National Clinical Research Center for Respiratory Disease, Guangzhou Institute of Respiratory Health, Guangzhou, China

37
Gumbs AA, Frigerio I, Spolverato G, Croner R, Illanes A, Chouillard E, Elyan E. Artificial Intelligence Surgery: How Do We Get to Autonomous Actions in Surgery? Sensors (Basel) 2021; 21:5526. [PMID: 34450976] [PMCID: PMC8400539] [DOI: 10.3390/s21165526] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Received: 07/06/2021] [Revised: 08/03/2021] [Accepted: 08/11/2021] [Indexed: 12/30/2022] Open
Abstract
Most surgeons are skeptical as to the feasibility of autonomous actions in surgery. Interestingly, many examples of autonomous actions already exist and have been around for years. Since the beginning of this millennium, the field of artificial intelligence (AI) has grown exponentially with the development of machine learning (ML), deep learning (DL), computer vision (CV), and natural language processing (NLP). All of these facets of AI will be fundamental to the development of more autonomous actions in surgery; unfortunately, only a limited number of surgeons have or seek expertise in this rapidly evolving field. As opposed to AI in medicine, AI surgery (AIS) involves autonomous movements. Fortuitously, as the field of robotics in surgery has improved, more surgeons are becoming interested in technology and the potential of autonomous actions in procedures such as interventional radiology, endoscopy, and surgery. The lack of haptics, or the sensation of touch, has hindered the wider adoption of robotics by many surgeons; however, now that the true potential of robotics can be comprehended, the embracing of AI by the surgical community is more important than ever before. Although current complete surgical systems are mainly examples of tele-manipulation, haptics is perhaps not the most important aspect on the path to more autonomously functioning robots. If the goal is for robots to ultimately become more and more independent, perhaps research should focus not on haptics as perceived by humans but on haptics as perceived by robots/computers. This article discusses aspects of ML, DL, CV, and NLP as they pertain to the modern practice of surgery, with a focus on current AI issues and advances that will enable more autonomous actions in surgery. Ultimately, a paradigm shift may need to occur in the surgical community, as more surgeons with expertise in AI may be needed to fully unlock the potential of AIS in a safe, efficacious, and timely manner.
Affiliation(s)
- Andrew A. Gumbs: Centre Hospitalier Intercommunal de POISSY/SAINT-GERMAIN-EN-LAYE 10, Rue Champ de Gaillard, 78300 Poissy, France
- Isabella Frigerio: Department of Hepato-Pancreato-Biliary Surgery, Pederzoli Hospital, 37019 Peschiera del Garda, Italy
- Gaya Spolverato: Department of Surgical, Oncological and Gastroenterological Sciences, University of Padova, 35122 Padova, Italy
- Roland Croner: Department of General-, Visceral-, Vascular- and Transplantation Surgery, University of Magdeburg, Haus 60a, Leipziger Str. 44, 39120 Magdeburg, Germany
- Alfredo Illanes: INKA–Innovation Laboratory for Image Guided Therapy, Medical Faculty, Otto-von-Guericke University Magdeburg, 39120 Magdeburg, Germany
- Elie Chouillard: Centre Hospitalier Intercommunal de POISSY/SAINT-GERMAIN-EN-LAYE 10, Rue Champ de Gaillard, 78300 Poissy, France
- Eyad Elyan: School of Computing, Robert Gordon University, Aberdeen AB10 7JG, UK