1. Kodama T, Arimura H, Tokuda T, Tanaka K, Yabuuchi H, Gowdh NFM, Liam CK, Chai CS, Ng KH. Topological radiogenomics based on persistent lifetime images for identification of epidermal growth factor receptor mutation in patients with non-small cell lung tumors. Comput Biol Med 2025; 185:109519. PMID: 39667057; DOI: 10.1016/j.compbiomed.2024.109519.
Abstract
We hypothesized that persistent lifetime (PLT) images could represent tumor imaging traits, locations, and persistent contrasts of topological components (connected and hole components) corresponding to gene mutations such as epidermal growth factor receptor (EGFR) mutant signs. We aimed to develop a topological radiogenomic approach using PLT images to identify EGFR mutation-positive patients with non-small cell lung cancer (NSCLC). The PLT image was newly proposed to visualize the locations and persistent contrasts of the topological components for a sequence of binary images produced by consecutive thresholding of an original computed tomography (CT) image. This study included 226 patients with NSCLC (94 mutant and 132 wild-type) whose pretreatment contrast-enhanced CT images were obtained from four datasets from different countries for training and testing the prediction models. Two-dimensional (2D) and three-dimensional (3D) PLT images were assumed to characterize specific imaging traits (e.g., air bronchogram sign, cavitation, and ground-glass nodule) of EGFR-mutant tumors. Seven types of machine learning classification models were constructed to predict EGFR mutations using significant features selected from 2D-PLT, 3D-PLT, and conventional radiogenomic features. Among all radiogenomic approaches evaluated in a four-fold cross-validation test, the 2D-PLT features showed the highest mean test area under the receiver operating characteristic curve (AUC) with the lowest standard deviation (0.927 ± 0.08). The best radiogenomic approaches with the highest AUC were the random forest model trained with Betti number (BN) map features (AUC = 0.984) in the internal test and the adaptive boosting model trained with BN map features (AUC = 0.717) in the external test. PLT features can be used as radiogenomic imaging biomarkers for identifying EGFR mutation status in patients with NSCLC.
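To make the central construction concrete, the sketch below counts connected components (Betti-0) of a CT region across a sequence of consecutive thresholds, the ingredient from which persistence-style lifetimes are built; the array sizes, threshold grid, and function names are illustrative assumptions rather than the authors' implementation, which also tracks hole components and renders the results as PLT images.
```python
# Minimal sketch (not the authors' implementation): for each threshold in a
# consecutive sequence, binarize a CT slice and count connected components
# (Betti-0). How long a component persists across thresholds is the kind of
# "lifetime" information that PLT images visualize; full persistent homology
# would also track hole components (Betti-1) and pair births with deaths.
import numpy as np
from scipy import ndimage

def betti0_curve(ct_slice: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """Number of connected foreground components at each intensity threshold."""
    counts = []
    for t in thresholds:
        binary = ct_slice >= t                    # superlevel-set binarization
        _, n_components = ndimage.label(binary)   # Betti-0 of the binary image
        counts.append(n_components)
    return np.asarray(counts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    roi = rng.normal(0.0, 1.0, size=(64, 64))     # stand-in for a tumor ROI
    thresholds = np.linspace(roi.min(), roi.max(), 32)
    print(betti0_curve(roi, thresholds))
```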
Affiliation(s)
- Takumi Kodama
- Division of Medical Quantum Science, Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan.
- Hidetaka Arimura
- Division of Medical Quantum Science, Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan.
- Tomoki Tokuda
- Joint Graduate School of Mathematics for Innovation, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka, 819-0395, Japan.
- Kentaro Tanaka
- Department of Pulmonary Medicine, Graduate School of Medical and Dental Sciences, Kagoshima University, 8-35-1, Sakuragaoka, Kagoshima, 890-8544, Japan.
- Hidetake Yabuuchi
- Division of Medical Quantum Science, Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan.
- Nadia Fareeda Muhammad Gowdh
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Lembah Pantai, 50603, Kuala Lumpur, Malaysia.
- Chong-Kin Liam
- Department of Medicine, Faculty of Medicine, University of Malaya, Lembah Pantai, 50603, Kuala Lumpur, Malaysia.
- Chee-Shee Chai
- Department of Medicine, Faculty of Medicine and Health Science, University of Malaysia, Sarawak, 94300, Kota Samarahan, Sarawak, Malaysia.
- Kwan Hoong Ng
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Lembah Pantai, 50603, Kuala Lumpur, Malaysia.
2. Kiwan O, Al-Kalbani M, Rafie A, Hijazi Y. Artificial intelligence in plastic surgery, where do we stand? JPRAS Open 2024; 42:234-243. PMID: 39435018; PMCID: PMC11491964; DOI: 10.1016/j.jpra.2024.09.003.
Abstract
Since the pandemic, artificial intelligence (AI) has been integrated into many fields and into everyday life; healthcare is no exception. Plastic surgery is a key focus area of this technological revolution, with hundreds of studies and reviews already published on the use of AI in the specialty. This review summarizes the available literature published since 2020 to provide a comprehensive overview of AI innovation in plastic surgery. A systematic literature review (following the PRISMA guidelines) of all studies and papers that examined the application of AI in plastic surgery was carried out using Medline, Cochrane, Embase, and Google Scholar. Outcomes of interest included the growing role of AI in clinical consultations, diagnostic potential, surgical planning, and intraoperative and post-operative uses. Ninety-six studies were included in this review: six examined the role of AI in consultations, fifteen used AI in diagnoses and assessments, seventeen involved AI in surgical planning, fifteen reported on AI use in post-operative predictions and management, and nine involved administration and documentation. This comprehensive review of the available literature found AI to be capable of transforming care throughout the entire patient journey. Certain challenges and concerns persist, but a collaborative effort can solve these issues and bring about a new era of medicine in which AI aids doctors in the pursuit of optimal patient care.
Affiliation(s)
- Omar Kiwan
- Faculty of Biology, Medicine and Health, University of Manchester, United Kingdom
- Mohammed Al-Kalbani
- Faculty of Biology, Medicine and Health, University of Manchester, United Kingdom
- Arash Rafie
- Plastic and Reconstructive Department, Lancashire Teaching Hospitals NHS Foundation Trust, United Kingdom
- Yasser Hijazi
- Plastic and Reconstructive Department, Lancashire Teaching Hospitals NHS Foundation Trust, United Kingdom
3. Laterza V, Marchegiani F, Aisoni F, Ammendola M, Schena CA, Lavazza L, Ravaioli C, Carra MC, Costa V, De Franceschi A, De Simone B, de’Angelis N. Smart Operating Room in Digestive Surgery: A Narrative Review. Healthcare (Basel) 2024; 12:1530. PMID: 39120233; PMCID: PMC11311806; DOI: 10.3390/healthcare12151530.
Abstract
The introduction of new technologies into current digestive surgical practice is progressively reshaping the operating room, defining the fourth surgical revolution. The implementation of black boxes and control towers aims to streamline workflow and reduce surgical error through early identification and analysis, while augmented reality and artificial intelligence augment surgeons' perceptual and technical skills by superimposing three-dimensional models onto real-time surgical images. Moreover, the operating room architecture is transitioning toward an integrated digital environment to improve efficiency and, ultimately, patients' outcomes. This narrative review describes the most recent evidence regarding the role of these technologies in transforming current digestive surgical practice, underlining their potential benefits and drawbacks in terms of efficiency and patients' outcomes, in an attempt to foresee the digestive surgical practice of tomorrow.
Affiliation(s)
- Vito Laterza
- Department of Digestive Surgical Oncology and Liver Transplantation, University Hospital of Besançon, 3 Boulevard Alexandre Fleming, 25000 Besancon, France;
- Francesco Marchegiani
- Unit of Colorectal and Digestive Surgery, DIGEST Department, Beaujon University Hospital, AP-HP, University of Paris Cité, Clichy, 92110 Paris, France
- Filippo Aisoni
- Unit of Emergency Surgery, Department of Surgery, Ferrara University Hospital, 44124 Ferrara, Italy;
- Michele Ammendola
- Digestive Surgery Unit, Health of Science Department, University Hospital “R.Dulbecco”, 88100 Catanzaro, Italy;
- Carlo Alberto Schena
- Unit of Robotic and Minimally Invasive Surgery, Department of Surgery, Ferrara University Hospital, 44124 Ferrara, Italy; (C.A.S.); (N.d.)
- Luca Lavazza
- Hospital Network Coordinator of Azienda Ospedaliero, Universitaria and Azienda USL di Ferrara, 44121 Ferrara, Italy;
- Cinzia Ravaioli
- Azienda Ospedaliero, Universitaria di Ferrara, 44121 Ferrara, Italy;
- Maria Clotilde Carra
- Rothschild Hospital (AP-HP), 75012 Paris, France;
- INSERM-Sorbonne Paris Cité, Epidemiology and Statistics Research Centre, 75004 Paris, France
- Vittore Costa
- Unit of Orthopedics, Humanitas Hospital, 24125 Bergamo, Italy;
- Belinda De Simone
- Department of Emergency Surgery, Academic Hospital of Villeneuve St Georges, 91560 Villeneuve St. Georges, France;
- Nicola de’Angelis
- Unit of Robotic and Minimally Invasive Surgery, Department of Surgery, Ferrara University Hospital, 44124 Ferrara, Italy; (C.A.S.); (N.d.)
- Department of Translational Medicine, University of Ferrara, 44121 Ferrara, Italy
4. Yilmaz R, Bakhaidar M, Alsayegh A, Abou Hamdan N, Fazlollahi AM, Tee T, Langleben I, Winkler-Schwartz A, Laroche D, Santaguida C, Del Maestro RF. Real-time multifaceted artificial intelligence vs in-person instruction in teaching surgical technical skills: a randomized controlled trial. Sci Rep 2024; 14:15130. PMID: 38956112; PMCID: PMC11219907; DOI: 10.1038/s41598-024-65716-8.
Abstract
Trainees develop surgical technical skills by learning from experts who provide context for successful task completion, identify potential risks, and guide correct instrument handling. This expert-guided training faces significant limitations in objectively assessing skills in real time and tracking learning. It is unknown whether AI systems can effectively replicate the nuanced real-time feedback, risk identification, and guidance in mastering surgical technical skills that expert instructors offer. This randomized controlled trial compared real-time AI feedback to in-person expert instruction. Ninety-seven medical trainees completed a 90-min simulation training with five practice tumor resections followed by a realistic brain tumor resection. They were randomly assigned to (1) real-time AI feedback, (2) in-person expert instruction, or (3) no real-time feedback. Performance was assessed by blinded experts using a composite score and the Objective Structured Assessment of Technical Skills (OSATS) rating. Training with real-time AI feedback (n = 33) resulted in significantly better performance outcomes than no real-time feedback (n = 32) and in-person instruction (n = 32) (mean differences 0.266, 95% CI 0.107-0.425, p < .001, and 0.332, 95% CI 0.173-0.491, p = .005, respectively). Learning from AI resulted in OSATS ratings similar to in-person training with expert instruction (4.30 vs 4.11, p = 1). Intelligent systems may refine the way operating skills are taught, providing tailored, quantifiable feedback and actionable instructions in real time.
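For readers unfamiliar with the effect-size format used here (a group difference with a 95% confidence interval), the sketch below computes a percentile-bootstrap interval for the difference in mean composite scores between two groups; the simulated scores, group sizes, and resampling count are illustrative assumptions and do not reproduce the trial's statistical analysis.
```python
# Illustrative sketch only: a percentile-bootstrap 95% confidence interval for
# the difference in mean composite scores between two training groups.
# All numbers are simulated; this is not the trial's analysis pipeline.
import numpy as np

def bootstrap_mean_diff(a, b, n_boot=10_000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        diffs[i] = (rng.choice(a, size=a.size, replace=True).mean()
                    - rng.choice(b, size=b.size, replace=True).mean())
    lo, hi = np.quantile(diffs, [alpha / 2.0, 1.0 - alpha / 2.0])
    return a.mean() - b.mean(), (lo, hi)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ai_feedback = rng.normal(0.60, 0.25, size=33)   # simulated composite scores
    no_feedback = rng.normal(0.35, 0.25, size=32)   # simulated composite scores
    diff, (lo, hi) = bootstrap_mean_diff(ai_feedback, no_feedback)
    print(f"mean difference {diff:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```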
Affiliation(s)
- Recai Yilmaz
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada.
- Mohamad Bakhaidar
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC, Canada
- Division of Neurosurgery, Department of Surgery, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Ahmad Alsayegh
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC, Canada
- Division of Neurosurgery, Department of Surgery, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Nour Abou Hamdan
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Ali M Fazlollahi
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Trisha Tee
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Ian Langleben
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Alexander Winkler-Schwartz
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC, Canada
- Denis Laroche
- National Research Council Canada, Boucherville, QC, Canada
- Carlo Santaguida
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC, Canada
- Rolando F Del Maestro
- Neurosurgical Simulation and Artificial Intelligence Learning Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 300 Rue Léo Pariseau, Suite 2210, Montreal, QC, H2X 4B3, Canada
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Canada
- Department of Neurology and Neurosurgery, Montreal Neurological Institute and Hospital, McGill University, Montreal, QC, Canada
5. Kang D, Wu H, Yuan L, Shi Y, Jin K, Grzybowski A. A Beginner's Guide to Artificial Intelligence for Ophthalmologists. Ophthalmol Ther 2024; 13:1841-1855. PMID: 38734807; PMCID: PMC11178755; DOI: 10.1007/s40123-024-00958-3.
Abstract
The integration of artificial intelligence (AI) in ophthalmology has promoted the development of the discipline, offering opportunities for enhancing diagnostic accuracy, patient care, and treatment outcomes. This paper aims to provide a foundational understanding of AI applications in ophthalmology, with a focus on interpreting studies related to AI-driven diagnostics. The core of our discussion is to explore various AI methods, including deep learning (DL) frameworks for detecting and quantifying ophthalmic features in imaging data, as well as using transfer learning for effective model training in limited datasets. The paper highlights the importance of high-quality, diverse datasets for training AI models and the need for transparent reporting of methodologies to ensure reproducibility and reliability in AI studies. Furthermore, we address the clinical implications of AI diagnostics, emphasizing the balance between minimizing false negatives to avoid missed diagnoses and reducing false positives to prevent unnecessary interventions. The paper also discusses the ethical considerations and potential biases in AI models, underscoring the importance of continuous monitoring and improvement of AI systems in clinical settings. In conclusion, this paper serves as a primer for ophthalmologists seeking to understand the basics of AI in their field, guiding them through the critical aspects of interpreting AI studies and the practical considerations for integrating AI into clinical practice.
Affiliation(s)
- Daohuan Kang
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Hongkang Wu
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Lu Yuan
- Department of Ophthalmology, The Children's Hospital, Zhejiang University School of Medicine, National Clinical Research Center for Child Health, Hangzhou, China
- Yu Shi
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Zhejiang University School of Medicine, Hangzhou, China
- Kai Jin
- Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China.
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
6. Rodler S, Ganjavi C, De Backer P, Magoulianitis V, Ramacciotti LS, De Castro Abreu AL, Gill IS, Cacciamani GE. Generative artificial intelligence in surgery. Surgery 2024; 175:1496-1502. PMID: 38582732; DOI: 10.1016/j.surg.2024.02.019.
Abstract
Generative artificial intelligence (GAI) is able to collect, extract, digest, and generate information in a way that is understandable to humans. As the first surgical applications of GAI emerge, this perspective paper aims to provide a comprehensive overview of current applications and future perspectives for GAI in surgery, from preoperative planning to training. GAI can be used before surgery for planning and decision support, by extracting patient information and providing patients with information and simulation regarding the procedure. Intraoperatively, GAI can document data that are normally not captured, such as intraoperative adverse events, or provide information to help decision-making. Postoperatively, GAI can help with patient discharge and follow-up. The ability to provide real-time feedback and store it for later review is an important capability of GAI. GAI applications are emerging as highly specialized, task-specific tools for tasks such as data extraction, synthesis, presentation, and communication within the realm of surgery. GAI has the potential to play a pivotal role in facilitating interaction between surgeons and artificial intelligence.
Affiliation(s)
- Severin Rodler
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA; Department of Urology, University Hospital of LMU Munich, Germany; Young Academic Working Group in Urologic Technology of the European Association of Urology, Arnhem, The Netherlands
- Conner Ganjavi
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA
- Pieter De Backer
- Young Academic Working Group in Urologic Technology of the European Association of Urology, Arnhem, The Netherlands; Department of Urology, Onze-Lieve-Vrouwziekenhuis Hospital, Aalst, Belgium; ORSI Academy, Ghent, Belgium
- Vasileios Magoulianitis
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA
- Lorenzo Storino Ramacciotti
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA
- Andre Luis De Castro Abreu
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA
- Inderbir S Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA
- Giovanni E Cacciamani
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA; Young Academic Working Group in Urologic Technology of the European Association of Urology, Arnhem, The Netherlands.
7. Varghese C, Harrison EM, O'Grady G, Topol EJ. Artificial intelligence in surgery. Nat Med 2024; 30:1257-1268. PMID: 38740998; DOI: 10.1038/s41591-024-02970-3.
Abstract
Artificial intelligence (AI) is rapidly emerging in healthcare, yet applications in surgery remain relatively nascent. Here we review the integration of AI in the field of surgery, centering our discussion on multifaceted improvements in surgical care in the preoperative, intraoperative and postoperative space. The emergence of foundation model architectures, wearable technologies and improving surgical data infrastructures is enabling rapid advances in AI interventions and utility. We discuss how maturing AI methods hold the potential to improve patient outcomes, facilitate surgical education and optimize surgical care. We review the current applications of deep learning approaches and outline a vision for future advances through multimodal foundation models.
Affiliation(s)
- Chris Varghese
- Department of Surgery, University of Auckland, Auckland, New Zealand
- Ewen M Harrison
- Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, UK
- Greg O'Grady
- Department of Surgery, University of Auckland, Auckland, New Zealand
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Eric J Topol
- Scripps Research Translational Institute, La Jolla, CA, USA.
8. Gordon M, Daniel M, Ajiboye A, Uraiby H, Xu NY, Bartlett R, Hanson J, Haas M, Spadafore M, Grafton-Clarke C, Gasiea RY, Michie C, Corral J, Kwan B, Dolmans D, Thammasitboon S. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Med Teach 2024; 46:446-470. PMID: 38423127; DOI: 10.1080/0142159x.2024.2314198.
Abstract
BACKGROUND: Artificial intelligence (AI) is rapidly transforming healthcare, and there is a critical need for a nuanced understanding of how AI is reshaping teaching, learning, and educational practice in medical education. This review aimed to map the literature regarding AI applications in medical education, core areas of findings, potential candidates for formal systematic review, and gaps for future research. METHODS: This rapid scoping review, conducted over 16 weeks, employed Arksey and O'Malley's framework and adhered to the STORIES and BEME guidelines. A systematic and comprehensive search across PubMed/MEDLINE, EMBASE, and MedEdPublish was conducted without date or language restrictions. Publications included in the review spanned undergraduate, graduate, and continuing medical education, encompassing both original studies and perspective pieces. Data were charted by multiple author pairs and synthesized into various thematic maps and charts, ensuring a broad and detailed representation of the current landscape. RESULTS: The review synthesized 278 publications, with a majority (68%) from North American and European regions. The studies covered diverse AI applications in medical education, such as AI for admissions, teaching, assessment, and clinical reasoning. The review highlighted AI's varied roles, from augmenting traditional educational methods to introducing innovative practices, and underscored the urgent need for ethical guidelines on AI's application in medical education. CONCLUSION: The current literature has been charted. The findings underscore the need for ongoing research to explore uncharted areas and address potential risks associated with AI use in medical education. This work serves as a foundational resource for educators, policymakers, and researchers in navigating AI's evolving role in medical education. The FACETS framework is proposed to support high-utility reporting in future studies.
Affiliation(s)
- Morris Gordon
- School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
- Blackpool Hospitals NHS Foundation Trust, Blackpool, UK
- Michelle Daniel
- School of Medicine, University of California, San Diego, San Diego, CA, USA
- Aderonke Ajiboye
- School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
- Hussein Uraiby
- Department of Cellular Pathology, University Hospitals of Leicester NHS Trust, Leicester, UK
- Nicole Y Xu
- School of Medicine, University of California, San Diego, San Diego, CA, USA
- Rangana Bartlett
- Department of Cognitive Science, University of California, San Diego, CA, USA
- Janice Hanson
- Department of Medicine and Office of Education, School of Medicine, Washington University in Saint Louis, Saint Louis, MO, USA
- Mary Haas
- Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Maxwell Spadafore
- Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Colin Michie
- School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
- Janet Corral
- Department of Medicine, University of Nevada Reno, School of Medicine, Reno, NV, USA
- Brian Kwan
- School of Medicine, University of California, San Diego, San Diego, CA, USA
- Diana Dolmans
- School of Health Professions Education, Faculty of Health, Maastricht University, Maastricht, the Netherlands
- Satid Thammasitboon
- Center for Research, Innovation and Scholarship in Health Professions Education, Baylor College of Medicine, Houston, TX, USA
9. Cevik J, Lim B, Seth I, Sofiadellis F, Ross RJ, Cuomo R, Rozen WM. Assessment of the bias of artificial intelligence generated images and large language models on their depiction of a surgeon. ANZ J Surg 2024; 94:287-294. PMID: 38087912; DOI: 10.1111/ans.18792.
Affiliation(s)
- Jevan Cevik
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3199, Australia
- The Alfred Centre, Central Clinical School at Monash University, 99 Commercial Rd, Melbourne, Victoria, 3004, Australia
- Bryan Lim
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3199, Australia
- The Alfred Centre, Central Clinical School at Monash University, 99 Commercial Rd, Melbourne, Victoria, 3004, Australia
- Ishith Seth
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3199, Australia
- The Alfred Centre, Central Clinical School at Monash University, 99 Commercial Rd, Melbourne, Victoria, 3004, Australia
- Foti Sofiadellis
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3199, Australia
- Richard J Ross
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3199, Australia
- Roberto Cuomo
- Plastic Surgery Unit, Department of Medicine, Surgery and Neuroscience, University of Siena, Siena, 53100, Italy
- Warren M Rozen
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, 3199, Australia
- The Alfred Centre, Central Clinical School at Monash University, 99 Commercial Rd, Melbourne, Victoria, 3004, Australia
10. Knudsen JE, Ghaffar U, Ma R, Hung AJ. Clinical applications of artificial intelligence in robotic surgery. J Robot Surg 2024; 18:102. PMID: 38427094; PMCID: PMC10907451; DOI: 10.1007/s11701-024-01867-0.
Abstract
Artificial intelligence (AI) is revolutionizing nearly every aspect of modern life. In the medical field, robotic surgery is the sector with some of the most innovative and impactful advancements. In this narrative review, we outline recent contributions of AI to the field of robotic surgery, with a particular focus on intraoperative enhancement. AI modeling is allowing surgeons to have advanced intraoperative metrics such as force and tactile measurements, enhanced detection of positive surgical margins, and even allowing for the complete automation of certain steps in surgical procedures. AI is also revolutionizing the field of surgical education. AI modeling applied to intraoperative surgical video feeds and instrument kinematics data is allowing for the generation of automated skills assessments. AI also shows promise for the generation and delivery of highly specialized intraoperative surgical feedback for training surgeons. Although the adoption and integration of AI show promise in robotic surgery, they raise important, complex ethical questions. Frameworks for thinking through ethical dilemmas raised by AI are outlined in this review. AI enhancement in robotic surgery is among the most groundbreaking research happening today, and the studies outlined in this review represent some of the most exciting innovations of recent years.
Affiliation(s)
- J Everett Knudsen
- Keck School of Medicine, University of Southern California, Los Angeles, USA
- Runzhuo Ma
- Cedars-Sinai Medical Center, Los Angeles, USA
11. Haque TF, Knudsen JE, You J, Hui A, Djaladat H, Ma R, Cen S, Goldenberg M, Hung AJ. Competency in Robotic Surgery: Standard Setting for Robotic Suturing Using Objective Assessment and Expert Evaluation. J Surg Educ 2024; 81:422-430. PMID: 38290967; PMCID: PMC10923136; DOI: 10.1016/j.jsurg.2023.12.002.
Abstract
OBJECTIVE: Surgical skill assessment tools such as the End-to-End Assessment of Suturing Expertise (EASE) can differentiate a surgeon's experience level. In this simulation-based study, we define a competency benchmark for intraoperative robotic suturing using EASE as a validated measure of performance. DESIGN: Participants completed a dry-lab vesicourethral anastomosis (VUA) exercise. Videos were each independently scored by 2 trained, blinded reviewers using EASE. Inter-rater reliability was measured with prevalence-adjusted bias-adjusted kappa (PABAK) using 2 example videos. All videos were reviewed by an expert surgeon, who determined whether the suturing skills exhibited were at the competency level expected for residency graduation (pass or fail). The contrasting groups (CG) method was then used to set a pass/fail score at the intercept of the pass and fail cohorts' EASE score distributions. SETTING: Keck School of Medicine, University of Southern California. PARTICIPANTS: Twenty-six participants: 8 medical students, 8 junior residents (PGY 1-2), 7 senior residents (PGY 3-5), and 3 attending urologists. RESULTS: After 1 round of consensus-building, average PABAK across EASE subskills was 0.90 (range 0.67-1.0). The CG method produced a competency benchmark EASE score of >35/39, with a pass rate of 10/26 (38%); 27% were deemed competent by expert evaluation. False positives and false negatives were defined as medical students who passed and attendings who failed the assessment, respectively. This pass/fail score produced no false positives or false negatives, and fewer junior residents than senior residents were considered competent by both the expert and the CG benchmark. CONCLUSIONS: Using an absolute standard-setting method, competency scores were set to identify trainees who could competently execute a standardized dry-lab robotic suturing exercise. This standard can be used for high-stakes decisions regarding a trainee's technical readiness for independent practice. Future work includes validation of this standard in the clinical environment through correlation with clinical outcomes.
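Two quantities in this abstract have simple, standard definitions: PABAK equals 2 × observed agreement − 1, and the contrasting groups cut score is typically placed where the passing and failing cohorts' score distributions intersect. The sketch below illustrates both under an assumed normal fit to each cohort's EASE scores; the rater data and score values are made up, not the study's.
```python
# Illustrative sketch: PABAK from two raters' binary item scores, and a
# contrasting-groups cut score placed where normal fits to the "competent"
# and "not competent" EASE score distributions cross. All data are made up.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def pabak(rater_a, rater_b) -> float:
    """Prevalence-adjusted bias-adjusted kappa: 2 * observed agreement - 1."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    return 2.0 * float(np.mean(rater_a == rater_b)) - 1.0

def contrasting_groups_cutoff(pass_scores, fail_scores) -> float:
    """Score at which normal densities fitted to the two cohorts intersect."""
    mu_p, sd_p = np.mean(pass_scores), np.std(pass_scores, ddof=1)
    mu_f, sd_f = np.mean(fail_scores), np.std(fail_scores, ddof=1)
    diff = lambda x: norm.pdf(x, mu_p, sd_p) - norm.pdf(x, mu_f, sd_f)
    return brentq(diff, mu_f, mu_p)   # search between the two cohort means

if __name__ == "__main__":
    a = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 1])   # rater A, pass/fail per subskill
    b = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 1])   # rater B
    print(f"PABAK = {pabak(a, b):.2f}")
    competent = [36.0, 37.0, 38.0, 35.5, 39.0, 36.5]       # assumed EASE scores
    not_competent = [29.0, 30.0, 31.0, 32.0, 33.0, 34.0]   # assumed EASE scores
    print(f"estimated cut score = {contrasting_groups_cutoff(competent, not_competent):.1f}")
```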
Affiliation(s)
- Taseen F Haque
- Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California
- J Everett Knudsen
- Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California
- Jonathan You
- Department of Urology, Cedars-Sinai Medical Center, Los Angeles, California
- Alvin Hui
- Department of Urology, Cedars-Sinai Medical Center, Los Angeles, California
- Hooman Djaladat
- Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California
- Runzhuo Ma
- Department of Urology, Cedars-Sinai Medical Center, Los Angeles, California
- Steven Cen
- Department of Radiology, University of Southern California, Los Angeles, California
- Mitchell Goldenberg
- Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California
- Andrew J Hung
- Department of Urology, Cedars-Sinai Medical Center, Los Angeles, California.
12. Boal MWE, Anastasiou D, Tesfai F, Ghamrawi W, Mazomenos E, Curtis N, Collins JW, Sridhar A, Kelly J, Stoyanov D, Francis NK. Evaluation of objective tools and artificial intelligence in robotic surgery technical skills assessment: a systematic review. Br J Surg 2024; 111:znad331. PMID: 37951600; PMCID: PMC10771126; DOI: 10.1093/bjs/znad331.
Abstract
BACKGROUND: There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula. METHODS: A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical skills, laparoscopy or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool was used to analyse artificial intelligence (AI) studies. The Modified Downs-Black checklist was used to assess risk of bias. RESULTS: Two hundred and forty-seven studies were analysed, identifying 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs and 53 AI studies. The Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools, at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods and 1 non-simulator APM reached LoR 2. AI models estimated outcomes (skill or clinical), demonstrating superior accuracy in the laboratory, where 60 per cent of methods reported accuracies over 90 per cent, compared with accuracies ranging from 67 to 100 per cent in real surgery. CONCLUSIONS: Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.
Affiliation(s)
- Matthew W E Boal
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Dimitrios Anastasiou
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Freweini Tesfai
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Walaa Ghamrawi
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Evangelos Mazomenos
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Medical Physics and Biomedical Engineering, UCL, London, UK
- Nathan Curtis
- Department of General Surgery, Dorset County Hospital NHS Foundation Trust, Dorchester, UK
- Justin W Collins
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Ashwin Sridhar
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- John Kelly
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- University College London Hospitals NHS Foundation Trust, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional Surgical Sciences (WEISS), University College London (UCL), London, UK
- Computer Science, UCL, London, UK
- Nader K Francis
- The Griffin Institute, Northwick Park & St Mark's Hospital, London, UK
- Division of Surgery and Interventional Science, Research Department of Targeted Intervention, UCL, London, UK
- Yeovil District Hospital, Somerset NHS Foundation Trust, Yeovil, Somerset, UK
13. Ibrahim H, Juve AM, Amin A, Railey K, Andolsek KM. Expanding the Study of Bias in Medical Education Assessment. J Grad Med Educ 2023; 15:623-626. PMID: 38045936; PMCID: PMC10686652; DOI: 10.4300/jgme-d-23-00027.1.
Affiliation(s)
- Halah Ibrahim
- Halah Ibrahim, MD, MEHP, is Associate Professor, Department of Medicine, Khalifa University College of Medicine and Health Sciences, Abu Dhabi, United Arab Emirates, and Associate Editor, Journal of Graduate Medical Education (JGME)
- Amy Miller Juve
- Amy Miller Juve, EdD, is Professor of Anesthesiology and Perioperative Medicine and Professional Development, and Program Improvement Specialist for Graduate Medical Education, Oregon Health & Science University, Portland, Oregon, USA
- Alpesh Amin
- Alpesh Amin, MD, MBA, MACP, is Professor, Department of Medicine, University of California Irvine, Irvine, California, USA
- Kenyon Railey
- Kenyon Railey, MD, is Associate Professor, Department of Family Medicine and Community Health, School of Medicine, Duke University School of Medicine, Durham, North Carolina, USA; and
- Kathryn M. Andolsek
- Kathryn M. Andolsek, MD, MPH, is Assistant Dean for Premedical Education and Professor, Department of Family Medicine and Community Health, Duke University School of Medicine, Durham, North Carolina, USA, and Associate Editor, JGME
14. Endsley MR. Ironies of artificial intelligence. Ergonomics 2023; 66:1656-1668. PMID: 37534468; DOI: 10.1080/00140139.2023.2243404.
Abstract
Bainbridge's Ironies of Automation was a prescient description of automation-related challenges for human performance that have characterised much of the 40 years since its publication. Today a new wave of automation based on artificial intelligence (AI) is being introduced across a wide variety of domains and applications. Not only are Bainbridge's original warnings still pertinent to AI, but AI's very nature and its focus on cognitive tasks have introduced many new challenges for the people who interact with it. Five ironies of AI are presented, including difficulties in understanding AI and forming adaptations to it, opaqueness in AI limitations and biases that can drive human decision biases, and difficulties in understanding AI reliability, despite the fact that AI remains insufficiently intelligent for many of its intended applications. Future directions are provided for creating more human-centered AI applications that can address these challenges.
15. Mittermaier M, Raza M, Kvedar JC. Collaborative strategies for deploying AI-based physician decision support systems: challenges and deployment approaches. NPJ Digit Med 2023; 6:137. PMID: 37543707; PMCID: PMC10404285; DOI: 10.1038/s41746-023-00889-6.
Affiliation(s)
- Mirja Mittermaier
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Infectious Diseases, Respiratory Medicine and Critical Care, Berlin, Germany.
- Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany.
16. Mittermaier M, Raza MM, Kvedar JC. Bias in AI-based models for medical applications: challenges and mitigation strategies. NPJ Digit Med 2023; 6:113. PMID: 37311802; DOI: 10.1038/s41746-023-00858-z.
Affiliation(s)
- Mirja Mittermaier
- Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Infectious Diseases, Respiratory Medicine and Critical Care, Berlin, Germany.
- Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany.