1. Vimalesvaran K, Robert D, Kumar S, Kumar A, Narbone M, Dharmadhikari R, Harrison M, Ather S, Novak A, Grzeda M, Gooch J, Woznitza N, Hall M, Shuaib H, Lowe DJ. Assessing the effectiveness of artificial intelligence (AI) in prioritising CT head interpretation: study protocol for a stepped-wedge cluster randomised trial (ACCEPT-AI). BMJ Open 2024; 14:e078227. PMID: 38885990; PMCID: PMC11184206; DOI: 10.1136/bmjopen-2023-078227.
Abstract
INTRODUCTION Diagnostic imaging is vital in emergency departments (EDs); its accessibility and reporting impact ED workflow and patient care. With radiology workforce shortages, reporting capacity is limited, leading to delays in image interpretation, and turnaround times for image reporting are an ED bottleneck. Artificial intelligence (AI) algorithms can improve productivity, efficiency and accuracy in diagnostic radiology, contingent on their clinical efficacy, including a positive impact on patient care and clinical workflow. The ACCEPT-AI study will evaluate Qure.ai's qER software in identifying and prioritising patients with critical findings from AI analysis of non-contrast head CT (NCCT) scans. METHODS AND ANALYSIS This is a multicentre trial, spanning four diverse sites over 13 months. It will include all individuals above the age of 18 years who present to the ED and are referred for an NCCT. The project will be divided into three consecutive phases (pre-implementation, implementation and post-implementation of the qER solution) in a stepped-wedge design to control for adoption bias and adjust for time-based changes in background patient characteristics. The pre-implementation phase collects baseline data on standard care to support the primary and secondary outcomes. The implementation phase includes staff training and, if necessary, adjustment of the qER solution's thresholds for detecting target abnormalities. The post-implementation phase will introduce a notification (prioritised flag) in the radiology information system; the radiologist can agree with the qER findings or disregard them according to their clinical judgement before writing and signing off the report. Non-qER-processed scans will be handled as per standard care. ETHICS AND DISSEMINATION The study will be conducted in accordance with the principles of Good Clinical Practice. The protocol was approved by the East Midlands (Leicester Central) Research Ethics Committee in May 2023 (REC reference 23/EM/0108). Results will be published in peer-reviewed journals and disseminated at scientific meetings. TRIAL REGISTRATION NUMBER NCT06027411 (ClinicalTrials.gov).
Affiliation(s)
- Kavitha Vimalesvaran
- Clinical Scientific Computing, Guy's and St Thomas' NHS Foundation Trust, London, UK
- King's College London, London, UK
- Mark Harrison
- Northumbria Healthcare NHS Foundation Trust, North Shields, UK
- Sarim Ather
- Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Alex Novak
- Emergency Medicine Research Oxford (EMROx), Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Nicholas Woznitza
- Department of Radiology, Homerton University Hospital NHS Foundation Trust, London, UK
- School of Allied & Public Health, Canterbury Christ Church University, Canterbury, UK
- Mark Hall
- NHS Greater Glasgow and Clyde, Glasgow, UK
- Haris Shuaib
- Guy's and St Thomas' NHS Foundation Trust, London, UK
- David J Lowe
- Emergency Medicine, Queen Elizabeth University Hospital, Glasgow, UK
2. Yoon S, Goh H, Lee PC, Tan HC, Teh MM, Lim DST, Kwee A, Suresh C, Carmody D, Swee DS, Tan SYT, Wong AJW, Choo CHM, Wee Z, Bee YM. Assessing the Utility, Impact, and Adoption Challenges of an Artificial Intelligence-Enabled Prescription Advisory Tool for Type 2 Diabetes Management: Qualitative Study. JMIR Hum Factors 2024; 11:e50939. PMID: 38869934; PMCID: PMC11211700; DOI: 10.2196/50939.
Abstract
BACKGROUND The clinical management of type 2 diabetes mellitus (T2DM) presents a significant challenge due to constantly evolving clinical practice guidelines and a growing array of available drug classes. Evidence suggests that artificial intelligence (AI)-enabled clinical decision support systems (CDSSs) can be effective in assisting clinicians with informed decision-making. Despite the merits of AI-driven CDSSs, a significant research gap exists concerning the early-stage implementation and adoption of AI-enabled CDSSs in T2DM management. OBJECTIVE This study aimed to explore clinicians' perspectives on the use and impact of the AI-enabled Prescription Advisory (APA) tool, developed using a multi-institution diabetes registry and implemented in specialist endocrinology clinics, and the challenges to its adoption and application. METHODS We conducted focus group discussions using a semistructured interview guide with purposively selected endocrinologists from a tertiary hospital. The focus group discussions were audio-recorded and transcribed verbatim. Data were thematically analyzed. RESULTS A total of 13 clinicians participated in 4 focus group discussions. Our findings suggest that the APA tool offered several useful features to assist clinicians in effectively managing T2DM. Specifically, clinicians viewed the AI-generated medication alterations as a good knowledge resource supporting decision-making on drug modifications at the point of care, particularly for patients with comorbidities. The complication risk prediction was seen as positively impacting patient care by facilitating early doctor-patient communication and initiating prompt clinical responses. However, the interpretability of the risk scores, concerns about overreliance and automation bias, and issues surrounding accountability and liability hindered the adoption of the APA tool in clinical practice.
CONCLUSIONS Although the APA tool holds great potential as a valuable resource for improving patient care, further efforts are required to address clinicians' concerns and improve the tool's acceptance and applicability in relevant contexts.
Affiliation(s)
- Sungwon Yoon
- Health Services and Systems Research, Duke-NUS Medical School, Singapore, Singapore
- Centre for Population Health Research and Implementation, SingHealth Regional Health System, SingHealth, Singapore, Singapore
- Hendra Goh
- Health Services and Systems Research, Duke-NUS Medical School, Singapore, Singapore
- Phong Ching Lee
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
- Hong Chang Tan
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
- Ming Ming Teh
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
- Dawn Shao Ting Lim
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
- Ann Kwee
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
- Chandran Suresh
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
- David Carmody
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
- Du Soon Swee
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
- Sarah Ying Tse Tan
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
- Andy Jun-Wei Wong
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
- Zongwen Wee
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
- Yong Mong Bee
- Department of Endocrinology, Singapore General Hospital, Singapore, Singapore
3. Danilatou V, Dimopoulos D, Kostoulas T, Douketis J. Machine Learning-Based Predictive Models for Patients with Venous Thromboembolism: A Systematic Review. Thromb Haemost 2024. PMID: 38574756; DOI: 10.1055/a-2299-4758.
Abstract
BACKGROUND Venous thromboembolism (VTE) is a chronic disorder with a significant health and economic burden. Several VTE-specific clinical prediction models (CPMs) have been used to assist physicians in decision-making but have several limitations. This systematic review explores whether machine learning (ML) can enhance CPMs by analyzing extensive patient data derived from electronic health records. We aimed to explore the applications of ML-based CPMs in VTE for risk stratification, outcome prediction, diagnosis, and treatment. METHODS Three databases were searched: PubMed, Google Scholar, and the IEEE electronic library. Inclusion criteria focused on studies using structured data, excluding non-English publications, studies on non-humans, and certain data types such as natural language processing and image processing. Studies involving pregnant women, cancer patients, and children were also excluded. After excluding irrelevant studies, a total of 77 studies were included. RESULTS Most studies report that ML-based CPMs outperformed traditional CPMs in terms of the area under the receiver operating characteristic curve in the four clinical domains explored. However, the majority of the studies were retrospective, monocentric, and lacked detailed model architecture descriptions and external validation, which are essential for quality audit. This review identified research gaps and highlighted challenges related to standardized reporting, reproducibility, and model comparison. CONCLUSION ML-based CPMs show promise in improving risk assessment and individualized treatment recommendations in VTE. There is an urgent need for standardized reporting and methodology for ML models, external validation, prospective and real-world data studies, and interventional studies to evaluate the impact of artificial intelligence in VTE.
Affiliation(s)
- Vasiliki Danilatou
- School of Medicine, European University of Cyprus, Nicosia, Cyprus
- Healthcare Division, Sphynx Technology Solutions, Nicosia, Cyprus
- Dimitrios Dimopoulos
- School of Engineering, Department of Information and Communication Systems Engineering, University of the Aegean, North Aegean, Greece
- Theodoros Kostoulas
- School of Engineering, Department of Information and Communication Systems Engineering, University of the Aegean, North Aegean, Greece
- James Douketis
- Department of Medicine, McMaster University, Hamilton, Canada
- Department of Medicine, St. Joseph's Healthcare Hamilton, Ontario, Canada
4. van Genderen ME, van de Sande D, Hooft L, Reis AA, Cornet AD, Oosterhoff JHF, van der Ster BJP, Huiskens J, Townsend R, van Bommel J, Gommers D, van den Hoven J. Charting a new course in healthcare: early-stage AI algorithm registration to enhance trust and transparency. NPJ Digit Med 2024; 7:119. PMID: 38720011; PMCID: PMC11078921; DOI: 10.1038/s41746-024-01104-w.
Affiliation(s)
- Michel E van Genderen
- Erasmus MC University Medical Center, Department of Adult Intensive Care, Rotterdam, The Netherlands.
- Davy van de Sande
- Erasmus MC University Medical Center, Department of Adult Intensive Care, Rotterdam, The Netherlands
- Lotty Hooft
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Andreas Alois Reis
- Department of Research for Health, Division of the Chief Scientist, World Health Organization, Geneva, Switzerland
- Alexander D Cornet
- Section editor Intensive Care at Nederlands Tijdschrift voor Geneeskunde (Dutch Journal of Medicine), Amsterdam, The Netherlands
- Department of Intensive Care, Medisch Spectrum Twente, Enschede, The Netherlands
- Jacobien H F Oosterhoff
- Delft University of Technology, Faculty of Technology, Policy and Management, Delft, The Netherlands
- Björn J P van der Ster
- Erasmus MC University Medical Center, Department of Adult Intensive Care, Rotterdam, The Netherlands
- Reggie Townsend
- Vice President Data Ethics Practice, SAS Worldwide Headquarters, Cary, N.C., USA
- National Artificial Intelligence Advisory Committee, Executive Office of the President of the United States, Washington, D.C., USA
- Jasper van Bommel
- Erasmus MC University Medical Center, Department of Adult Intensive Care, Rotterdam, The Netherlands
- Diederik Gommers
- Erasmus MC University Medical Center, Department of Adult Intensive Care, Rotterdam, The Netherlands
- Jeroen van den Hoven
- Delft University of Technology, Faculty of Technology, Policy and Management, Delft, The Netherlands
5. Dickinson H, Feifel J, Muylle K, Ochi T, Vallejo-Yagüe E. Learning with an evolving medicine label: how artificial intelligence-based medication recommendation systems must adapt to changing medication labels. Expert Opin Drug Saf 2024; 23:547-552. PMID: 38597245; DOI: 10.1080/14740338.2024.2338252.
Abstract
INTRODUCTION Artificial intelligence and machine learning (AI/ML) based systems can help personalize prescribing decisions for individual patients. The recommendations of these clinical decision support systems must relate to the "label" of the medicines involved. The label of a medicine is an approved guide that indicates how to prescribe the drug in a safe and effective manner. AREAS COVERED The label for a medicine may evolve as new information on drug safety and effectiveness emerges, leading to the addition or removal of warnings and drug-drug interactions, or to the approval of new indications. However, these updates may reach AI/ML recommendation systems only after a delay, which could influence the safety of prescribing decisions. This article explores the need to keep AI/ML tools "in sync" with label changes. Additionally, challenges relating to medicine availability and geographical suitability are discussed. EXPERT OPINION These considerations highlight the important role that pharmacoepidemiologists and drug safety professionals must play in the monitoring and use of these tools. Furthermore, these issues highlight the guiding role that regulators need to have in the planning and oversight of these tools.
Affiliation(s)
- Jan Feifel
- Clinical Measurements Sciences, Merck KGaA, Darmstadt, Germany
- Katoo Muylle
- Real World Evidence, AstraZeneca Belux, Groot-Bijgaarden, Belgium
- Taichi Ochi
- Department of PharmacoTherapy, -Epidemiology & -Economics, Groningen Research Institute of Pharmacy, University of Groningen, Groningen, Netherlands
6. Shuaib A. Transforming Healthcare with AI: Promises, Pitfalls, and Pathways Forward. Int J Gen Med 2024; 17:1765-1771. PMID: 38706749; PMCID: PMC11070153; DOI: 10.2147/ijgm.s449598.
Abstract
This perspective paper provides a comprehensive examination of artificial intelligence (AI) in healthcare, focusing on its transformative impact on clinical practices, decision-making, and physician-patient relationships. By integrating insights from evidence, research, and real-world examples, it offers a balanced analysis of AI's capabilities and limitations, emphasizing its role in streamlining administrative processes, enhancing patient care, and reducing physician burnout while maintaining a human-centric approach in medicine. The research underscores AI's capacity to augment clinical decision-making and improve patient interactions, but it also highlights the variable impact of AI in different healthcare settings. The need for context-specific adaptations and careful integration of AI technologies into existing healthcare workflows is emphasized to maximize benefits and minimize unintended consequences. Significant attention is given to the implications of AI for the roles and competencies of healthcare professionals. The emergence of AI necessitates new skills in data literacy and technology use, prompting a shift in educational curricula towards digital health and AI training. Ethical considerations are a pivotal aspect of the discussion. The paper explores the challenges posed by data privacy concerns, algorithmic biases, and ensuring equitable access to AI-driven healthcare. It advocates for the development of comprehensive ethical frameworks and ongoing research to guide the responsible use of AI in healthcare. In conclusion, the paper advocates for a balanced approach to AI adoption in healthcare, highlighting the importance of ongoing research, strategic implementation, and the synergistic combination of human expertise with AI technologies for optimal patient care.
Affiliation(s)
- Ali Shuaib
- Biomedical Engineering Unit, Department of Physiology, Faculty of Medicine, Kuwait University, Safat, 13110, Kuwait
7. Stogiannos N, O'Regan T, Scurr E, Litosseliti L, Pogose M, Harvey H, Kumar A, Malik R, Barnes A, McEntee MF, Malamateniou C. AI implementation in the UK landscape: Knowledge of AI governance, perceived challenges and opportunities, and ways forward for radiographers. Radiography (Lond) 2024; 30:612-621. PMID: 38325103; DOI: 10.1016/j.radi.2024.01.019.
Abstract
INTRODUCTION Despite the rapid increase of AI-enabled applications deployed in clinical practice, many challenges exist around AI implementation, including unclear governance frameworks, difficulties in validating AI models, and the need for customised training for radiographers. This study aimed to explore the perceptions of diagnostic and therapeutic radiographers with existing theoretical and/or practical knowledge of AI on issues of relevance to the field, including knowledge of AI governance and procurement, perceptions of enablers and challenges, and future priorities for AI adoption. METHODS An online survey was designed and distributed to UK-based qualified radiographers who work in medical imaging and/or radiotherapy and have some previous theoretical and/or practical knowledge of working with AI. Participants were recruited through the researchers' professional networks on social media, with support from the AI advisory group of the Society and College of Radiographers. Survey questions related to AI training/education, knowledge of AI governance frameworks, data privacy procedures, AI implementation considerations, and priorities for AI adoption. Descriptive statistics were employed to analyse the data, and chi-square tests were used to explore significant relationships between variables. RESULTS In total, 88 valid responses were received. Most radiographers (56.6%) had not received any AI-related training. Although approximately 63% used an evaluation framework to assess AI models' performance before implementation, many (36.9%) were still unsure about suitable evaluation methods. Radiographers requested clearer guidance on AI governance, ample time to implement AI in their practice safely, adequate funding, effective leadership, and targeted support from AI champions. AI training, robust governance frameworks, and patient and public involvement were seen as priorities for the successful implementation of AI by radiographers. CONCLUSION AI implementation is progressing within radiography, but without customised training, clearer governance, key stakeholder engagement and suitable new roles created, it will be hard to harness its benefits and minimise related risks. IMPLICATIONS FOR PRACTICE The results of this study highlight some of the priorities and challenges for radiographers in relation to AI adoption, namely the need to develop robust AI governance frameworks and provide optimal AI training.
Affiliation(s)
- N Stogiannos
- Division of Midwifery & Radiography, City, University of London, UK; Medical Imaging Department, Corfu General Hospital, Greece.
- T O'Regan
- The Society and College of Radiographers, London, UK.
- E Scurr
- The Royal Marsden NHS Foundation Trust, UK.
- L Litosseliti
- School of Health & Psychological Sciences, City, University of London, UK.
- M Pogose
- Quality Assurance and Regulatory Affairs, Hardian Health, UK.
- A Kumar
- Frimley Health NHS Foundation Trust, UK.
- R Malik
- Bolton NHS Foundation Trust, UK.
- A Barnes
- King's Technology Evaluation Centre (KiTEC), School of Biomedical Engineering & Imaging Science, King's College London, UK.
- M F McEntee
- Discipline of Medical Imaging and Radiation Therapy, University College Cork, Ireland.
- C Malamateniou
- Division of Midwifery & Radiography, City, University of London, UK; Society and College of Radiographers AI Advisory Group, London, UK; European Society of Medical Imaging Informatics, Vienna, Austria; European Federation of Radiographer Societies, Cumieira, Portugal.
8. Oniani D, Hilsman J, Peng Y, Poropatich RK, Pamplin JC, Legault GL, Wang Y. Adopting and expanding ethical principles for generative artificial intelligence from military to healthcare. NPJ Digit Med 2023; 6:225. PMID: 38042910; PMCID: PMC10693640; DOI: 10.1038/s41746-023-00965-x.
Abstract
In 2020, the U.S. Department of Defense officially disclosed a set of ethical principles to guide the use of Artificial Intelligence (AI) technologies on future battlefields. Despite stark differences, there are core similarities between the military and medical service. Warriors on battlefields often face life-altering circumstances that require quick decision-making. Medical providers experience similar challenges in a rapidly changing healthcare environment, such as in the emergency department or when treating a life-threatening condition during surgery. Generative AI, an emerging technology designed to efficiently generate valuable information, holds great promise. As computing power becomes more accessible and the abundance of health data, such as electronic health records, electrocardiograms, and medical images, increases, it is inevitable that healthcare will be revolutionized by this technology. Recently, generative AI has garnered considerable attention in the medical research community, leading to debates about its application in the healthcare sector, mainly due to concerns about transparency and related issues. Meanwhile, questions about the potential exacerbation of health disparities due to modeling biases have raised notable ethical concerns regarding the use of this technology in healthcare. However, the ethical principles for generative AI in healthcare have been understudied: there are no clear solutions to address the resulting ethical concerns, and decision-makers often neglect to consider the significance of ethical principles before implementing generative AI in clinical practice. In an attempt to address these issues, we explore ethical principles from the military perspective and propose the "GREAT PLEA" ethical principles for generative AI in healthcare: Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy, and Eutonomy. Furthermore, by contrasting the ethical concerns and risks of the two domains, we introduce a practical framework for adopting and expanding these ethical principles that has been useful in the military and can be applied to healthcare for generative AI. Ultimately, we aim to proactively address the ethical dilemmas and challenges posed by the integration of generative AI into healthcare practice.
Affiliation(s)
- David Oniani
- Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA
- Jordan Hilsman
- Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA
- Yifan Peng
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, USA
- Ronald K Poropatich
- Division of Pulmonary, Allergy, Critical Care & Sleep Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Center for Military Medicine Research, University of Pittsburgh, Pittsburgh, PA, USA
- Jeremy C Pamplin
- Telemedicine & Advanced Technology Research Center, US Army, Fort Detrick, Frederick, MD, USA
- Gary L Legault
- Department of Surgery, Uniformed Services University, Bethesda, MD, USA
- Virtual Medical Center, Brooke Army Medical Center, San Antonio, TX, USA
- Yanshan Wang
- Department of Health Information Management, University of Pittsburgh, Pittsburgh, PA, USA.
- Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA, USA.
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA, USA.
- Clinical and Translational Science Institute, University of Pittsburgh, Pittsburgh, PA, USA.
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA.
9. Hummelsberger P, Koch TK, Rauh S, Dorn J, Lermer E, Raue M, Hudecek MFC, Schicho A, Colak E, Ghassemi M, Gaube S. Insights on the Current State and Future Outlook of AI in Health Care: Expert Interview Study. JMIR AI 2023; 2:e47353. PMID: 38875571; PMCID: PMC11041415; DOI: 10.2196/47353.
Abstract
BACKGROUND Artificial intelligence (AI) is often promoted as a potential solution for many challenges health care systems face worldwide. However, its implementation in clinical practice lags behind its technological development. OBJECTIVE This study aims to gain insights into the current state and prospects of AI technology from the stakeholders most directly involved in its adoption in the health care sector, whose perspectives have received limited attention in research to date. METHODS For this purpose, the perspectives of AI researchers and health care IT professionals in North America and Western Europe were collected and compared for profession-specific and regional differences. In this preregistered, mixed methods, cross-sectional study, 23 experts were interviewed using a semistructured guide. Data from the interviews were analyzed using deductive and inductive qualitative methods for the thematic analysis, along with topic modeling to identify latent topics. RESULTS Through our thematic analysis, four major categories emerged: (1) the current state of AI systems in health care, (2) the criteria and requirements for implementing AI systems in health care, (3) the challenges in implementing AI systems in health care, and (4) the prospects of the technology. Experts discussed the capabilities and limitations of current AI systems in health care in addition to their prevalence and regional differences. Several criteria and requirements deemed necessary for the successful implementation of AI systems were identified, including the technology's performance and security, smooth system integration and human-AI interaction, costs, stakeholder involvement, and employee training. However, regulatory, logistical, and technical issues were identified as the most critical barriers to an effective technology implementation process. Looking ahead, the experts predicted both threats and opportunities related to AI technology in the health care sector. CONCLUSIONS Our work provides new insights into the current state, criteria, challenges, and outlook for implementing AI technology in health care from the perspective of AI researchers and IT professionals in North America and Western Europe. For the full potential of AI-enabled technologies to be exploited and for them to contribute to solving current health care challenges, critical implementation criteria must be met, and all groups involved in the process must work together.
Affiliation(s)
- Pia Hummelsberger
- LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany
- Timo K Koch
- LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany
- Department of Psychology, LMU Munich, Munich, Germany
- Sabrina Rauh
- LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany
- Julia Dorn
- LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany
- Eva Lermer
- LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany
- Department of Business Psychology, Technical University of Applied Sciences Augsburg, Augsburg, Germany
- Martina Raue
- MIT AgeLab, Massachusetts Institute of Technology, Cambridge, MA, United States
- Matthias F C Hudecek
- Department of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Andreas Schicho
- Department of Radiology, University Hospital Regensburg, Regensburg, Germany
- Errol Colak
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, ON, Canada
- Department of Medical Imaging, St. Michael's Hospital, Unity Health Toronto, Toronto, ON, Canada
- Department of Medical Imaging, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Marzyeh Ghassemi
- Electrical Engineering and Computer Science, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, United States
- Vector Institute, Toronto, ON, Canada
- Susanne Gaube
- UCL Global Business School for Health, University College London, London, United Kingdom
10. Nashwan AJ, Gharib S, Alhadidi M, El-Ashry AM, Alamgir A, Al-Hassan M, Khedr MA, Dawood S, Abufarsakh B. Harnessing Artificial Intelligence: Strategies for Mental Health Nurses in Optimizing Psychiatric Patient Care. Issues Ment Health Nurs 2023; 44:1020-1034. PMID: 37850937; DOI: 10.1080/01612840.2023.2263579.
Abstract
This narrative review explores the transformative impact of Artificial Intelligence (AI) on mental health nursing, particularly in enhancing psychiatric patient care. AI technologies present new strategies for early detection, risk assessment, and improved treatment adherence in mental health. They also facilitate remote patient monitoring, bridge geographical gaps, and support clinical decision-making. The evolution of virtual mental health assistants and AI-enhanced therapeutic interventions is also discussed. These technological advancements reshape nurse-patient interactions while ensuring personalized, efficient, and high-quality care. The review also addresses AI's ethical and responsible use in mental health nursing, emphasizing patient privacy, data security, and the balance between human interaction and AI tools. As AI applications in mental health care continue to evolve, this review encourages continued innovation while advocating for responsible implementation, thereby optimally leveraging the potential of AI in mental health nursing.
Affiliation(s)
- Abdulqadir J Nashwan
- Nursing Department, Hamad Medical Corporation, Doha, Qatar
- Department of Public Health, College of Health Sciences, QU Health, Qatar University, Doha, Qatar
- Suzan Gharib
- Nursing Department, Al-Khaldi Hospital, Amman, Jordan
- Majdi Alhadidi
- Psychiatric & Mental Health Nursing, Faculty of Nursing, Al-Zaytoonah University of Jordan, Amman, Jordan
- Shaimaa Dawood
- Faculty of Nursing, Alexandria University, Alexandria, Egypt
11
Hamed E, Sharif A, Eid A, Alfehaidi A, Alberry M. Advancing Artificial Intelligence for Clinical Knowledge Retrieval: A Case Study Using ChatGPT-4 and Link Retrieval Plug-In to Analyze Diabetic Ketoacidosis Guidelines. Cureus 2023; 15:e41916. [PMID: 37457604 PMCID: PMC10349539 DOI: 10.7759/cureus.41916] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/15/2023] [Indexed: 07/18/2023] Open
Abstract
Introduction This case study aimed to enhance the traceability and retrieval accuracy of ChatGPT-4 in medical text by employing a step-by-step systematic approach. The focus was on retrieving clinical answers from three international guidelines on diabetic ketoacidosis (DKA). Methods A systematic methodology was developed to guide the retrieval process. One question was asked per guideline to ensure accuracy and maintain referencing. ChatGPT-4 was used to retrieve answers, and the 'Link Reader' plug-in was integrated to give the model direct access to the webpages hosting the guidelines. ChatGPT-4 then compiled the answers while citing the sources. This process was repeated 30 times per question to assess consistency. We report our observations on retrieval accuracy, consistency of responses, and the challenges encountered. Results Integrating ChatGPT-4 with the 'Link Reader' plug-in yielded notable gains in traceability and retrieval accuracy. The model provided relevant and accurate clinical answers based on the analyzed guidelines. Despite occasional difficulties with webpage access and minor memory drift, the overall performance of the integrated system was promising, and the compiled answers hold promise for further trials. Conclusion The findings of this case study support the use of AI text-generation models as tools for medical professionals and researchers. The systematic approach employed here, together with the 'Link Reader' plug-in, offers a framework for automating medical text synthesis: asking one question at a time before compiling answers from different sources improved the model's traceability and retrieval accuracy. Further refinement of AI models and integration with other software utilities hold promise for enhancing the utility and applicability of AI-generated recommendations in medicine and academic research, with the potential to drive meaningful improvements in everyday medical practice.
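The per-question protocol described in this abstract (one question per guideline, repeated 30 times, responses tallied for consistency) can be sketched as follows. This is a minimal illustration, not the study's actual code: `ask_model` is a hypothetical stand-in for a ChatGPT-4 call with the 'Link Reader' plug-in enabled, and the guideline URL is a placeholder.

```python
from collections import Counter

def ask_model(question: str, guideline_url: str) -> str:
    # Placeholder: a real implementation would call the chat API with the
    # Link Reader plug-in enabled so the model can fetch guideline_url
    # and answer the question from that single source.
    return f"Answer drawn from {guideline_url}"

def consistency_check(question: str, guideline_url: str, n_runs: int = 30) -> Counter:
    """Ask the same question against one guideline n_runs times and tally
    the distinct answers, mirroring the 30-iteration consistency protocol."""
    return Counter(ask_model(question, guideline_url) for _ in range(n_runs))

# One question per guideline; divergent answers across runs would flag
# inconsistency before the answers are compiled with citations.
runs = consistency_check(
    "What is the recommended initial fluid therapy for adult DKA?",
    "https://example.org/dka-guideline",
)
```

With a deterministic stub, all 30 runs agree; with a real model, the tally would expose how often responses drift between iterations.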
Affiliation(s)
- Ehab Hamed
- Family Medicine, Qatar University Health Centre, Primary Health Care Corporation, Doha, QAT
- Anna Sharif
- Family Medicine, Primary Health Care Corporation, Doha, QAT
- Ahmad Eid
- Family Medicine, Primary Health Care Corporation, Doha, QAT
- Medhat Alberry
- Obstetrics and Gynecology, Weill Cornell Medicine - Qatar, Doha, QAT
- Fetal and Maternal Medicine, Sidra Medicine, Doha, QAT