1. Saadya A, Davis CR. Revolutionizing Plastic Surgery Education: Leveraging Artificial Intelligence for an Innovative Podcast Learning Platform. Plast Reconstr Surg 2024; 154:847e-848e. [PMID: 38635465] [DOI: 10.1097/prs.0000000000011477]
Affiliation(s)
- Ahmad Saadya
- Department of Plastic and Reconstructive Surgery, Queen Victoria Hospital, East Grinstead, United Kingdom
2. Nardone V, Marmorino F, Germani MM, Cichowska-Cwalińska N, Menditti VS, Gallo P, Studiale V, Taravella A, Landi M, Reginelli A, Cappabianca S, Girnyi S, Cwalinski T, Boccardi V, Goyal A, Skokowski J, Oviedo RJ, Abou-Mrad A, Marano L. The Role of Artificial Intelligence on Tumor Boards: Perspectives from Surgeons, Medical Oncologists and Radiation Oncologists. Curr Oncol 2024; 31:4984-5007. [PMID: 39329997] [PMCID: PMC11431448] [DOI: 10.3390/curroncol31090369]
Abstract
The integration of multidisciplinary tumor boards (MTBs) is fundamental in delivering state-of-the-art cancer treatment, facilitating collaborative diagnosis and management by a diverse team of specialists. Despite the clear benefits in personalized patient care and improved outcomes, the increasing burden on MTBs due to rising cancer incidence and financial constraints necessitates innovative solutions. The advent of artificial intelligence (AI) in the medical field offers a promising avenue to support clinical decision-making. This review explores the perspectives of clinicians dedicated to the care of cancer patients (surgeons, medical oncologists, and radiation oncologists) on the application of AI within MTBs. Additionally, it examines the role of AI across various clinical specialties involved in cancer diagnosis and treatment. By analyzing both the potential and the challenges, this study underscores how AI can enhance multidisciplinary discussions and optimize treatment plans. The findings highlight the transformative role that AI may play in refining oncology care and sustaining the efficacy of MTBs amidst growing clinical demands.
Affiliation(s)
- Valerio Nardone
- Department of Precision Medicine, University of Campania "L. Vanvitelli", 80131 Naples, Italy
- Federica Marmorino
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Marco Maria Germani
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Paolo Gallo
- Department of Precision Medicine, University of Campania "L. Vanvitelli", 80131 Naples, Italy
- Vittorio Studiale
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Ada Taravella
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Matteo Landi
- Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy
- Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Alfonso Reginelli
- Department of Precision Medicine, University of Campania "L. Vanvitelli", 80131 Naples, Italy
- Salvatore Cappabianca
- Department of Precision Medicine, University of Campania "L. Vanvitelli", 80131 Naples, Italy
- Sergii Girnyi
- Department of General Surgery and Surgical Oncology, "Saint Wojciech" Hospital, "Nicolaus Copernicus" Health Center, 80-462 Gdańsk, Poland
- Tomasz Cwalinski
- Department of General Surgery and Surgical Oncology, "Saint Wojciech" Hospital, "Nicolaus Copernicus" Health Center, 80-462 Gdańsk, Poland
- Virginia Boccardi
- Division of Gerontology and Geriatrics, Department of Medicine and Surgery, University of Perugia, 06123 Perugia, Italy
- Aman Goyal
- Adesh Institute of Medical Sciences and Research, Bathinda 151109, Punjab, India
- Jaroslaw Skokowski
- Department of General Surgery and Surgical Oncology, "Saint Wojciech" Hospital, "Nicolaus Copernicus" Health Center, 80-462 Gdańsk, Poland
- Department of Medicine, Academy of Applied Medical and Social Sciences-AMiSNS: Akademia Medycznych I Spolecznych Nauk Stosowanych, 82-300 Elbląg, Poland
- Rodolfo J Oviedo
- Nacogdoches Medical Center, Nacogdoches, TX 75965, USA
- Tilman J. Fertitta Family College of Medicine, University of Houston, Houston, TX 77021, USA
- College of Osteopathic Medicine, Sam Houston State University, Conroe, TX 77304, USA
- Adel Abou-Mrad
- Centre Hospitalier Universitaire d'Orléans, 45100 Orléans, France
- Luigi Marano
- Department of General Surgery and Surgical Oncology, "Saint Wojciech" Hospital, "Nicolaus Copernicus" Health Center, 80-462 Gdańsk, Poland
- Department of Medicine, Academy of Applied Medical and Social Sciences-AMiSNS: Akademia Medycznych I Spolecznych Nauk Stosowanych, 82-300 Elbląg, Poland
3. Gomez-Cabello CA, Borna S, Pressman SM, Haider SA, Forte AJ. Large Language Models for Intraoperative Decision Support in Plastic Surgery: A Comparison between ChatGPT-4 and Gemini. Medicina (Kaunas) 2024; 60:957. [PMID: 38929573] [PMCID: PMC11205293] [DOI: 10.3390/medicina60060957]
Abstract
Background and Objectives: Large language models (LLMs) are emerging as valuable tools in plastic surgery, potentially reducing surgeons' cognitive loads and improving patients' outcomes. This study aimed to assess and compare the current state of the two most common and readily available LLMs, OpenAI's ChatGPT-4 and Google's Gemini Pro (1.0 Pro), in providing intraoperative decision support in plastic and reconstructive surgery procedures. Materials and Methods: We presented each LLM with 32 independent intraoperative scenarios spanning 5 procedures. We utilized a 5-point and a 3-point Likert scale for medical accuracy and relevance, respectively. We determined the readability of the responses using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) score. Additionally, we measured the models' response time. We compared the performance using the Mann-Whitney U test and Student's t-test. Results: ChatGPT-4 significantly outperformed Gemini in providing accurate (3.59 ± 0.84 vs. 3.13 ± 0.83, p-value = 0.022) and relevant (2.28 ± 0.77 vs. 1.88 ± 0.83, p-value = 0.032) responses. Conversely, Gemini provided more concise and readable responses, with an average FKGL (12.80 ± 1.56) significantly lower than ChatGPT-4's (15.00 ± 1.89) (p < 0.0001). However, there was no difference in the FRE scores (p = 0.174). Moreover, Gemini's average response time was significantly faster (8.15 ± 1.42 s) than ChatGPT-4's (13.70 ± 2.87 s) (p < 0.0001). Conclusions: Although ChatGPT-4 provided more accurate and relevant responses, both models demonstrated potential as intraoperative tools. Nevertheless, their performance inconsistency across the different procedures underscores the need for further training and optimization to ensure their reliability as intraoperative decision-support tools.
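The readability scores reported above follow the standard published Flesch formulas. As a point of reference, a minimal sketch of both metrics from word, sentence, and syllable counts (the counts are assumed inputs; this is not the scoring pipeline used in the study):

```python
def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    # FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # FRE = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Hypothetical 100-word response in 5 sentences with 180 syllables:
fkgl = flesch_kincaid_grade(100, 5, 180)   # higher grade = harder to read
fre = flesch_reading_ease(100, 5, 180)     # lower score = harder to read
```

Note that FKGL and FRE weight the same two ratios differently, which is why the study can find a significant FKGL gap between the models without a corresponding FRE difference.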
Affiliation(s)
- Cesar A. Gomez-Cabello
- Division of Plastic Surgery, Mayo Clinic, 4500 San Pablo Rd S, Jacksonville, FL 32224, USA
- Sahar Borna
- Division of Plastic Surgery, Mayo Clinic, 4500 San Pablo Rd S, Jacksonville, FL 32224, USA
- Sophia M. Pressman
- Division of Plastic Surgery, Mayo Clinic, 4500 San Pablo Rd S, Jacksonville, FL 32224, USA
- Syed Ali Haider
- Division of Plastic Surgery, Mayo Clinic, 4500 San Pablo Rd S, Jacksonville, FL 32224, USA
- Antonio J. Forte
- Division of Plastic Surgery, Mayo Clinic, 4500 San Pablo Rd S, Jacksonville, FL 32224, USA
- Center for Digital Health, Mayo Clinic, 200 First St. SW, Rochester, MN 55905, USA
4. Wise PA, Studier-Fischer A, Nickel F, Hackert T. [Status Quo of Surgical Navigation]. Zentralbl Chir 2023. [PMID: 38056501] [DOI: 10.1055/a-2211-4898]
Abstract
Surgical navigation, also referred to as computer-assisted or image-guided surgery, is a technique that employs a variety of methods, such as 3D imaging, tracking systems, specialised software, and robotics, to support surgeons during surgical interventions. These emerging technologies aim not only to enhance the accuracy and precision of surgical procedures, but also to enable less invasive approaches, with the objective of reducing complications and improving operative outcomes for patients. By harnessing the integration of emerging digital technologies, surgical navigation holds the promise of assisting complex procedures across various medical disciplines. In recent years, the field of surgical navigation has witnessed significant advances. Abdominal surgical navigation, particularly in endoscopic, laparoscopic, and robot-assisted surgery, is currently undergoing a phase of rapid evolution. Emphases include image-guided navigation, instrument tracking, and the potential integration of augmented and mixed reality (AR, MR). This article comprehensively examines the latest developments in surgical navigation, from state-of-the-art intraoperative technologies such as hyperspectral and fluorescence imaging to the integration of preoperative radiological imaging into the intraoperative setting.
Affiliation(s)
- Philipp Anthony Wise
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsklinikum Heidelberg, Heidelberg, Germany
- Alexander Studier-Fischer
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsklinikum Heidelberg, Heidelberg, Germany
- Felix Nickel
- Klinik für Allgemein-, Viszeral- und Thoraxchirurgie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany
- Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsklinikum Heidelberg, Heidelberg, Germany
- Thilo Hackert
- Klinik für Allgemein-, Viszeral- und Thoraxchirurgie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany
5. Patel N, Chaudhari K, Jyotsna G, Joshi JS. Surgical Frontiers: A Comparative Review of Robotics Versus Laparoscopy in Gynecological Interventions. Cureus 2023; 15:e49752. [PMID: 38161931] [PMCID: PMC10757673] [DOI: 10.7759/cureus.49752]
Abstract
This review comprehensively examines the current state and future directions of gynecological surgery, focusing on the comparative analysis of laparoscopy and robotic surgery. The overview highlights the evolution of these surgical techniques, emphasizing their impact on patient outcomes, procedural efficiency, and safety profiles. The analysis encompasses critical factors such as cost-effectiveness, learning curves, and implications for postoperative recovery. The future of gynecological surgery is envisioned through emerging technologies, including augmented reality, single-incision laparoscopy, and artificial intelligence. The coexistence of laparoscopy and robotics is explored, acknowledging their respective strengths and roles in shaping women's healthcare. In conclusion, the dynamic nature of the field is underscored, emphasizing the need for a patient-centered and adaptable approach. Collaboration between healthcare professionals, engineers, and researchers is pivotal in unlocking these innovations' full potential, ensuring continued advancements in gynecological surgery for improved outcomes and enhanced patient care.
Affiliation(s)
- Nainita Patel
- Obstetrics and Gynaecology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Kamlesh Chaudhari
- Obstetrics and Gynaecology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Garapati Jyotsna
- Obstetrics and Gynaecology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Jalormy S Joshi
- Obstetrics and Gynaecology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
6. Han J, Montagna M, Grammenos A, Xia T, Bondareva E, Siegele-Brown C, Chauhan J, Dang T, Spathis D, Floto A, Cicuta P, Mascolo C. Evaluating Listening Performance for COVID-19 Detection by Clinicians and Machine Learning: A Comparative Study. J Med Internet Res 2023; 25:e44804. [PMID: 37126593] [DOI: 10.2196/44804]
Abstract
BACKGROUND To date, performance comparisons between humans and machines have been carried out in many health domains. Yet machine learning model and human performance comparisons in audio-based respiratory diagnosis remain largely unexplored. OBJECTIVE The primary objective of this study is to compare human clinicians and a machine learning model in predicting COVID-19 from respiratory sound recordings. METHODS Prediction performance on 24 audio samples (12 tested positive) made by 36 clinicians with experience in treating COVID-19 or other respiratory illnesses is compared with predictions made by a machine learning model trained on 1,162 samples. Each sample consists of voice, cough, and breathing sound recordings from one subject, and the length of each sample is around 20 seconds. We also investigated whether combining the predictions of the model and the human experts could further enhance the performance, in terms of both accuracy and confidence. RESULTS The machine learning model outperformed the clinicians, yielding a sensitivity of 0.75 and a specificity of 0.83, whereas the best performance achieved by the clinicians was a sensitivity of 0.67 and a specificity of 0.75. Integrating the clinicians' and the model's predictions enhanced performance further, achieving a sensitivity of 0.83 and a specificity of 0.92. CONCLUSIONS Our findings suggest that clinicians and the machine learning model could make better clinical decisions via a cooperative approach and achieve higher confidence in audio-based respiratory diagnosis.
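On a 24-sample test set, the sensitivity and specificity figures above reduce to simple counts. A minimal sketch of the two metrics (the illustrative predictions below are constructed to reproduce the model's reported 0.75/0.83 on 12 positives and 12 negatives; they are not the study's actual outputs):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# 12 COVID-positive then 12 COVID-negative samples, as in the study's test set
y_true = [1] * 12 + [0] * 12
y_pred = [1] * 9 + [0] * 3 + [0] * 10 + [1] * 2   # 9/12 true positives, 10/12 true negatives
sens, spec = sensitivity_specificity(y_true, y_pred)
```

With only 24 samples, each misclassification moves either metric by about 0.08, which is worth keeping in mind when comparing the clinician and model figures.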
Affiliation(s)
- Jing Han
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
- Andreas Grammenos
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
- Tong Xia
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
- Erika Bondareva
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
- Ting Dang
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
- Dimitris Spathis
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
- Andres Floto
- Department of Medicine, University of Cambridge, Cambridge, GB
- Pietro Cicuta
- Department of Physics, University of Cambridge, Cambridge, GB
- Cecilia Mascolo
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
7. Ozer E, Bilecen AE, Ozer NB, Yanikoglu B. Intraoperative cytological diagnosis of brain tumours: A preliminary study using a deep learning model. Cytopathology 2023; 34:113-119. [PMID: 36458464] [DOI: 10.1111/cyt.13192]
Abstract
BACKGROUND Intraoperative pathological diagnosis of central nervous system (CNS) tumours is essential to planning patient management in neuro-oncology. Frozen section slides and cytological preparations provide architectural and cellular information that is analysed by pathologists to reach an intraoperative diagnosis. Progress in the fields of artificial intelligence and machine learning means that AI systems have significant potential for the provision of highly accurate real-time diagnosis in cytopathology. OBJECTIVE To investigate the efficiency of machine-learning models in the intraoperative cytological diagnosis of CNS tumours. MATERIALS AND METHODS We trained a deep neural network to classify biopsy material for intraoperative tissue diagnosis of four major brain lesions. Overall, 205 medical images were obtained from squash smear slides of histologically correlated cases, with 18 high-grade and 11 low-grade gliomas, 17 metastatic carcinomas, and 9 non-neoplastic pathological brain tissue samples. The neural network model was trained and evaluated using 5-fold cross-validation. RESULTS The model achieved 95% and 97% diagnostic accuracy in the patch-level classification and patient-level classification tasks, respectively. CONCLUSIONS We conclude that deep learning-based classification of cytological preparations may be a promising complementary method for the rapid and accurate intraoperative diagnosis of CNS tumours.
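The patient-level accuracy above is typically derived by aggregating the patch-level predictions belonging to each case. A minimal majority-vote sketch (an assumed aggregation rule for illustration; the paper's exact rule may differ):

```python
from collections import Counter, defaultdict

def patient_level_prediction(patch_preds):
    """Aggregate (patient_id, predicted_class) patch predictions into one
    majority-vote class per patient."""
    by_patient = defaultdict(list)
    for pid, cls in patch_preds:
        by_patient[pid].append(cls)
    # most_common(1) returns the single most frequent class for each patient
    return {pid: Counter(classes).most_common(1)[0][0]
            for pid, classes in by_patient.items()}

# Hypothetical case: three patches vote high-grade glioma, one votes metastasis
preds = [("case1", "HGG"), ("case1", "HGG"),
         ("case1", "metastasis"), ("case1", "HGG")]
patient_calls = patient_level_prediction(preds)
```

Aggregation of this kind explains why patient-level accuracy (97%) can exceed patch-level accuracy (95%): isolated patch errors are outvoted by correct patches from the same smear.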
Affiliation(s)
- Erdener Ozer
- Department of Pathology, Dokuz Eylul University School of Medicine, Izmir, Turkey
- Division of Anatomical Pathology, Sidra Medicine and Research Center, Doha, Qatar
- Ali Enver Bilecen
- Faculty of Engineering and Natural Sciences, Sabanci University, Istanbul, Turkey
- Nur Basak Ozer
- Faculty of Engineering and Natural Sciences, Sabanci University, Istanbul, Turkey
- Berrin Yanikoglu
- Faculty of Engineering and Natural Sciences, Sabanci University, Istanbul, Turkey
- Center of Excellence in Data Analytics (VERIM), Sabanci University, Istanbul, Turkey
8. Cognitive Hybrid Intelligent Diagnostic System: Typical Architecture. Computation 2022. [DOI: 10.3390/computation10050066]
Abstract
This research concerns the modeling of meaningful and relatively stable visual-figurative and verbal-sign representations of real problems in the medical diagnostics of human organs and systems. The core results of the research are presented. A new visual metalanguage is proposed that describes the solution of a diagnostic problem by combining several interconnected reasoning processes in different languages defining "a state of human organs and systems", "a diagnostic problem", and the elements of its decomposition. The paper provides a subject-figurative model of the cognitive hybrid intelligent diagnostic system, its typical architecture, and a synthesis algorithm. By integrating an imitation of the internal subject-figurative view of medical diagnostic problems and the corresponding communication of individual diagnoses with an imitation of the behavior of medical councils in problem situations, future prototypes of such systems are expected to reduce the number of medical errors. The next stage of this research is the validation of these solutions on the problem of diagnosing diseases of the pancreas, using materials from the Kaliningrad Regional Clinical Hospital, together with an experimental study of the system. The research is limited to the medical subject area but can be generalized to other areas.
9. Maier-Hein L, Eisenmann M, Sarikaya D, März K, Collins T, Malpani A, Fallert J, Feussner H, Giannarou S, Mascagni P, Nakawala H, Park A, Pugh C, Stoyanov D, Vedula SS, Cleary K, Fichtinger G, Forestier G, Gibaud B, Grantcharov T, Hashizume M, Heckmann-Nötzel D, Kenngott HG, Kikinis R, Mündermann L, Navab N, Onogur S, Roß T, Sznitman R, Taylor RH, Tizabi MD, Wagner M, Hager GD, Neumuth T, Padoy N, Collins J, Gockel I, Goedeke J, Hashimoto DA, Joyeux L, Lam K, Leff DR, Madani A, Marcus HJ, Meireles O, Seitel A, Teber D, Ückert F, Müller-Stich BP, Jannin P, Speidel S. Surgical data science - from concepts toward clinical translation. Med Image Anal 2022; 76:102306. [PMID: 34879287] [PMCID: PMC9135051] [DOI: 10.1016/j.media.2021.102306]
Abstract
Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.
Affiliation(s)
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
- Matthias Eisenmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Duygu Sarikaya
- Department of Computer Engineering, Faculty of Engineering, Gazi University, Ankara, Turkey; LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Keno März
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Anand Malpani
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
- Hubertus Feussner
- Department of Surgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Stamatia Giannarou
- The Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom
- Pietro Mascagni
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
- Adrian Park
- Department of Surgery, Anne Arundel Health System, Annapolis, Maryland, USA; Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Carla Pugh
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Swaroop S Vedula
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
- Kevin Cleary
- The Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, D.C., USA
- Germain Forestier
- L'Institut de Recherche en Informatique, Mathématiques, Automatique et Signal (IRIMAS), University of Haute-Alsace, Mulhouse, France; Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- Bernard Gibaud
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Teodor Grantcharov
- University of Toronto, Toronto, Ontario, Canada; The Li Ka Shing Knowledge Institute of St. Michael's Hospital, Toronto, Ontario, Canada
- Makoto Hashizume
- Kyushu University, Fukuoka, Japan; Kitakyushu Koga Hospital, Fukuoka, Japan
- Doreen Heckmann-Nötzel
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Hannes G Kenngott
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Ron Kikinis
- Department of Radiology, Brigham and Women's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Sinan Onogur
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tobias Roß
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Russell H Taylor
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Minu D Tizabi
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Wagner
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Gregory D Hager
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
- Justin Collins
- Division of Surgery and Interventional Science, University College London, London, United Kingdom
- Ines Gockel
- Department of Visceral, Transplant, Thoracic and Vascular Surgery, Leipzig University Hospital, Leipzig, Germany
- Jan Goedeke
- Pediatric Surgery, Dr. von Hauner Children's Hospital, Ludwig-Maximilians-University, Munich, Germany
- Daniel A Hashimoto
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA; Surgical AI and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Luc Joyeux
- My FetUZ Fetal Research Center, Department of Development and Regeneration, Biomedical Sciences, KU Leuven, Leuven, Belgium; Center for Surgical Technologies, Faculty of Medicine, KU Leuven, Leuven, Belgium; Department of Obstetrics and Gynecology, Division Woman and Child, Fetal Medicine Unit, University Hospitals Leuven, Leuven, Belgium; Michael E. DeBakey Department of Surgery, Texas Children's Hospital and Baylor College of Medicine, Houston, Texas, USA
- Kyle Lam
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Daniel R Leff
- Department of BioSurgery and Surgical Technology, Imperial College London, London, United Kingdom; Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Breast Unit, Imperial Healthcare NHS Trust, London, United Kingdom
- Amin Madani
- Department of Surgery, University Health Network, Toronto, Ontario, Canada
- Hani J Marcus
- National Hospital for Neurology and Neurosurgery, and UCL Queen Square Institute of Neurology, London, United Kingdom
- Ozanan Meireles
- Massachusetts General Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Alexander Seitel
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Dogu Teber
- Department of Urology, City Hospital Karlsruhe, Karlsruhe, Germany
- Frank Ückert
- Institute for Applied Medical Informatics, Hamburg University Hospital, Hamburg, Germany
- Beat P Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Pierre Jannin
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC) Dresden, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
10. Artificial Intelligence in Surgery: A Research Team Perspective. Curr Probl Surg 2022; 59:101125. [DOI: 10.1016/j.cpsurg.2022.101125]
11. Gutierrez L, Lim JS, Foo LL, Ng WY, Yip M, Lim GYS, Wong MHY, Fong A, Rosman M, Mehta JS, Lin H, Ting DSJ, Ting DSW. Application of artificial intelligence in cataract management: current and future directions. Eye Vis (Lond) 2022; 9:3. [PMID: 34996524] [PMCID: PMC8739505] [DOI: 10.1186/s40662-021-00273-z]
Abstract
The rise of artificial intelligence (AI) has brought breakthroughs in many areas of medicine. In ophthalmology, AI has delivered robust results in the screening and detection of diabetic retinopathy, age-related macular degeneration, glaucoma, and retinopathy of prematurity. Cataract management is another field that can benefit from greater AI application. Cataract is the leading cause of reversible visual impairment with a rising global clinical burden. Improved diagnosis, monitoring, and surgical management are necessary to address this challenge. In addition, patients in large developing countries often suffer from limited access to tertiary care, a problem further exacerbated by the ongoing COVID-19 pandemic. AI, on the other hand, can help transform cataract management by improving automation, efficacy and overcoming geographical barriers. First, AI can be applied as a telediagnostic platform to screen and diagnose patients with cataract using slit-lamp and fundus photographs. This utilizes a deep-learning, convolutional neural network (CNN) to detect and classify referable cataracts appropriately. Second, some of the latest intraocular lens formulas have used AI to enhance prediction accuracy, achieving superior postoperative refractive results compared to traditional formulas. Third, AI can be used to augment cataract surgical skill training by identifying different phases of cataract surgery on video and to optimize operating theater workflows by accurately predicting the duration of surgical procedures. Fourth, some AI CNN models are able to effectively predict the progression of posterior capsule opacification and the eventual need for YAG laser capsulotomy. These advances in AI could transform cataract management and enable delivery of efficient ophthalmic services. The key challenges include ethical management of data, ensuring data security and privacy, demonstrating clinically acceptable performance, improving the generalizability of AI models across heterogeneous populations, and improving the trust of end-users.
Affiliation(s)
- Jane Sujuan Lim
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Li Lian Foo
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Wei Yan Ng
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Michelle Yip
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Melissa Hsing Yi Wong
- Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Allan Fong
- Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Mohamad Rosman
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Jodhbir Singh Mehta
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Haotian Lin
- Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Darren Shu Jeng Ting
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, UK
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore, Singapore; Singapore National Eye Center, 11 Third Hospital Avenue, Singapore, 168751, Singapore
|
12
|
AIM in Interventional Radiology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_283] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
13
|
Bamba Y, Ogawa S, Itabashi M, Kameoka S, Okamoto T, Yamamoto M. Automated recognition of objects and types of forceps in surgical images using deep learning. Sci Rep 2021; 11:22571. [PMID: 34799625 PMCID: PMC8604928 DOI: 10.1038/s41598-021-01911-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Accepted: 10/26/2021] [Indexed: 12/15/2022] Open
Abstract
Analysis of operative data with convolutional neural networks (CNNs) is expected to improve the knowledge and professional skills of surgeons. Identification of objects in videos recorded during surgery can be used for surgical skill assessment and surgical navigation. The objectives of this study were to recognize objects and types of forceps in surgical videos acquired during colorectal surgeries and evaluate detection accuracy. Images (n = 1818) were extracted from 11 surgical videos for model training, and another 500 images were extracted from 6 additional videos for validation. The following 5 types of forceps were selected for annotation: ultrasonic scalpel, grasping, clip, angled (Maryland and right-angled), and spatula. IBM Visual Insights software was used, which incorporates the most popular open-source deep-learning CNN frameworks. In total, 1039/1062 (97.8%) forceps were correctly identified among 500 test images. Calculated recall and precision values were as follows: grasping forceps, 98.1% and 98.0%; ultrasonic scalpel, 99.4% and 93.9%; clip forceps, 96.2% and 92.7%; angled forceps, 94.9% and 100%; and spatula forceps, 98.1% and 94.5%, respectively. Forceps recognition can be achieved with high accuracy using deep-learning models, providing the opportunity to evaluate how forceps are used in various operations.
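The per-class recall and precision figures reported above follow directly from true-positive, false-positive, and false-negative counts. As an illustration only, a minimal sketch of that calculation (the counts below are hypothetical examples, not data from the study):

```python
# Minimal sketch of per-class precision/recall as reported in detection studies.
# The counts used here are hypothetical, not taken from the study.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical detection counts for one forceps class:
p, r = precision_recall(tp=98, fp=2, fn=2)
print(f"precision={p:.1%}, recall={r:.1%}")  # precision=98.0%, recall=98.0%
```

A detector that misses instances (higher FN) loses recall, while one that fires on the wrong objects (higher FP) loses precision, which is why studies such as this report both per class.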
Affiliation(s)
- Yoshiko Bamba
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
- Shimpei Ogawa
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
- Michio Itabashi
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
- Takahiro Okamoto
- Department of Surgery 2, Tokyo Women's Medical University, Tokyo, Japan
- Masakazu Yamamoto
- Department of Surgery, Institute of Gastroenterology, Tokyo Women's Medical University, 8-1, Kawadacho Shinjuku-ku, Tokyo, 162-8666, Japan
|
14
|
Birkhoff DC, van Dalen ASH, Schijven MP. A Review on the Current Applications of Artificial Intelligence in the Operating Room. Surg Innov 2021; 28:611-619. [PMID: 33625307 PMCID: PMC8450995 DOI: 10.1177/1553350621996961] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Background. Artificial intelligence (AI) is an upcoming field in medicine and, more recently, in the operating room (OR). Existing literature elaborates mainly on the future possibilities and expectations for AI in surgery. The aim of this study is to systematically provide an overview of the AI applications currently used to support processes inside the OR. Methods. PubMed, Embase, Cochrane Library, and IEEE Xplore were searched using inclusion criteria for relevant articles up to August 25th, 2020. No study types were excluded beforehand. Articles describing current AI applications for surgical purposes inside the OR were reviewed. Results. Nine studies were included. An overview of the researched and described applications of AI in the OR is provided, including procedure duration prediction, gesture recognition, intraoperative cancer detection, intraoperative video analysis, workflow recognition, an endoscopic guidance system, knot-tying, and automatic registration and tracking of the bone in orthopedic surgery. These technologies are compared to their, often non-AI, baseline alternatives. Conclusions. Applications of AI currently described in the OR remain limited to date. They may, however, have a promising future in improving surgical precision, reducing manpower requirements, supporting intraoperative decision-making, and increasing surgical safety. Nonetheless, the application and implementation of AI inside the OR still face several challenges. Clear regulatory, organizational, and clinical conditions are imperative for AI to redeem its promise. Future research on the use of AI in the OR should therefore focus on clinical validation of AI applications, on legal and ethical considerations, and on evaluation of the implementation trajectory.
Affiliation(s)
- David C. Birkhoff
- Department of Surgery, Amsterdam UMC, University of Amsterdam, The Netherlands
- Marlies P. Schijven
- Department of Surgery, Amsterdam Gastroenterology and Metabolism, University of Amsterdam, The Netherlands
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Canada
|
15
|
Ward TM, Mascagni P, Madani A, Padoy N, Perretta S, Hashimoto DA. Surgical data science and artificial intelligence for surgical education. J Surg Oncol 2021; 124:221-230. [PMID: 34245578 DOI: 10.1002/jso.26496] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Revised: 03/29/2021] [Accepted: 04/02/2021] [Indexed: 11/11/2022]
Abstract
Surgical data science (SDS) aims to improve the quality of interventional healthcare and its value through the capture, organization, analysis, and modeling of procedural data. As data capture has increased and artificial intelligence (AI) has advanced, SDS can help to unlock augmented and automated coaching, feedback, assessment, and decision support in surgery. We review major concepts in SDS and AI as applied to surgical education and surgical oncology.
Affiliation(s)
- Thomas M Ward
- Department of Surgery, Surgical AI & Innovation Laboratory, Massachusetts General Hospital, Boston, Massachusetts
- Pietro Mascagni
- ICube, University of Strasbourg, CNRS, France; Fondazione Policlinico A. Gemelli IRCCS, Rome, Italy; IHU Strasbourg, Strasbourg, France
- Amin Madani
- Department of Surgery, University Health Network, Toronto, Canada
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
- Daniel A Hashimoto
- Department of Surgery, Surgical AI & Innovation Laboratory, Massachusetts General Hospital, Boston, Massachusetts
|
16
|
Darbari A, Kumar K, Darbari S, Patil PL. Requirement of artificial intelligence technology awareness for thoracic surgeons. The Cardiothoracic Surgeon 2021; 29:13. [PMID: 38624757 PMCID: PMC8254051 DOI: 10.1186/s43057-021-00053-4] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 06/26/2021] [Indexed: 12/15/2022] Open
Abstract
Background We have recently witnessed incredible interest in computer-based, internet-dependent mechanisms and the emergence of artificial intelligence (AI)-dependent techniques in our day-to-day lives. During the recent COVID-19 pandemic, this nonhuman, machine-based technology has gained considerable momentum. Main body of the abstract Supercomputers and robotics with AI technology have shown the potential to equal or even surpass the accuracy of human experts in some tasks in the future. AI is prompting the interweaving of massive data from many digital sources, such as medical imaging and electronic health records, and is transforming healthcare delivery. But in the thoracic surgical field and its counterpart pulmonary medical field, the main applications of AI are still limited to interpretation of thoracic imaging, evaluation of lung histopathological slides, interpretation of physiological data, and biosignal testing. The question arises whether AI-enabled or autonomous robots could ever perform thoracic surgical procedures better than current surgeons, but this seems unlikely at present. Short conclusion This review article aims to provide information on the use of AI pertinent to thoracic surgical specialists. We describe AI and related terminologies, current utilisation, challenges, potential, and the need for awareness of this technology.
Affiliation(s)
- Krishan Kumar
- CSE Department, National Institute of Technology, Srinagar, Uttarakhand 246174, India
|
17
|
Seibold M, Maurer S, Hoch A, Zingg P, Farshad M, Navab N, Fürnstahl P. Real-time acoustic sensing and artificial intelligence for error prevention in orthopedic surgery. Sci Rep 2021; 11:3993. [PMID: 33597615 PMCID: PMC7889943 DOI: 10.1038/s41598-021-83506-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2020] [Accepted: 02/03/2021] [Indexed: 11/24/2022] Open
Abstract
In this work, we developed and validated a computer method capable of robustly detecting drill breakthrough events and show the potential of deep learning-based acoustic sensing for surgical error prevention. Bone drilling is an essential part of orthopedic surgery and has a high risk of injuring vital structures when over-drilling into adjacent soft tissue. We acquired a dataset consisting of structure-borne audio recordings of drill breakthrough sequences with custom piezo contact microphones in an experimental setup using six human cadaveric hip specimens. In the following step, we developed a deep learning-based method for the automated detection of drill breakthrough events in a fast and accurate fashion. We evaluated the proposed network regarding breakthrough detection sensitivity and latency. The best performing variant yields a sensitivity of 93.64 ± 2.42% for drill breakthrough detection in a total execution time of 139.29 ms. The validation and performance evaluation of our solution demonstrates promising results for surgical error prevention by automated acoustic-based drill breakthrough detection in a realistic experiment while being multiple times faster than a surgeon's reaction time. Furthermore, our proposed method represents an important step for the translation of acoustic-based breakthrough detection towards surgical use.
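Sensitivity figures of the form "mean ± standard deviation" are typically aggregated over repeated evaluation runs or cross-validation folds. A minimal sketch of that aggregation (the per-fold values below are invented for illustration, not the study's data):

```python
# Aggregate per-fold detection sensitivities into a mean ± std summary,
# as commonly reported for deep-learning detection models.
# The fold values below are hypothetical.
import statistics

def summarize(sensitivities: list[float]) -> tuple[float, float]:
    """Return (mean, sample standard deviation) of per-fold sensitivities."""
    return statistics.mean(sensitivities), statistics.stdev(sensitivities)

folds = [91.0, 93.0, 95.0, 94.0, 92.0]  # hypothetical per-fold sensitivity (%)
mean, std = summarize(folds)
print(f"sensitivity = {mean:.2f} ± {std:.2f} %")  # sensitivity = 93.00 ± 1.58 %
```

Reporting the spread across folds, not just the mean, indicates how stable the detector is across specimens, which matters for a safety-critical application like breakthrough detection.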
Affiliation(s)
- Matthias Seibold
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany; Research in Orthopedic Computer Science (ROCS), University Hospital Balgrist, University of Zurich, Balgrist Campus, 8008, Zurich, Switzerland
- Steven Maurer
- Balgrist University Hospital, 8008, Zurich, Switzerland
- Armando Hoch
- Balgrist University Hospital, 8008, Zurich, Switzerland
- Patrick Zingg
- Balgrist University Hospital, 8008, Zurich, Switzerland
- Mazda Farshad
- Balgrist University Hospital, 8008, Zurich, Switzerland
- Nassir Navab
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany
- Philipp Fürnstahl
- Research in Orthopedic Computer Science (ROCS), University Hospital Balgrist, University of Zurich, Balgrist Campus, 8008, Zurich, Switzerland; Balgrist University Hospital, 8008, Zurich, Switzerland
|
18
|
Liu PR, Lu L, Zhang JY, Huo TT, Liu SX, Ye ZW. Application of Artificial Intelligence in Medicine: An Overview. Curr Med Sci 2021; 41:1105-1115. [PMID: 34874486 PMCID: PMC8648557 DOI: 10.1007/s11596-021-2474-3] [Citation(s) in RCA: 64] [Impact Index Per Article: 21.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Accepted: 12/01/2020] [Indexed: 02/06/2023]
Abstract
Artificial intelligence (AI) is a new technical discipline that uses computer technology to research and develop the theory, methods, techniques, and application systems for the simulation, extension, and expansion of human intelligence. With the assistance of new AI technology, the traditional medical environment has changed considerably. For example, patient diagnosis based on radiological, pathological, endoscopic, ultrasonographic, and biochemical examinations has been effectively improved, with higher accuracy and a lower human workload. Medical treatment during the perioperative period, including preoperative preparation, the surgical period, and the postoperative recovery period, has been significantly enhanced, with better surgical outcomes. In addition, AI technology has also played a crucial role in medical drug production, medical management, and medical education, taking them in a new direction. The purpose of this review is to introduce the application of AI in medicine and to provide an outlook on future trends.
Affiliation(s)
- Peng-ran Liu
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Lin Lu
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Jia-yao Zhang
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Tong-tong Huo
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Song-xiang Liu
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Zhe-wei Ye
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
|
19
|
Datta S. AIM in Interventional Radiology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_283-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|