1
Mohseni A, Ghotbi E, Kazemi F, Shababi A, Jahan SC, Mohseni A, Shababi N. Artificial Intelligence in Radiology: What Is Its True Role at Present, and Where Is the Evidence? Radiol Clin North Am 2024; 62:935-947. [PMID: 39393852] [DOI: 10.1016/j.rcl.2024.03.008]
Abstract
The integration of artificial intelligence (AI) in radiology has brought substantial advancements and transformative potential to diagnostic imaging practice. This study presents an overview of current research on the application of AI in radiology, highlighting key insights from recent studies and surveys, which have explored the expected impact of AI, encompassing machine learning and deep learning, on the work volume of diagnostic radiologists. The present and future role of AI in radiology holds great promise for enhancing diagnostic capabilities, improving workflow efficiency, and ultimately advancing patient care.
Affiliation(s)
- Alireza Mohseni
- Johns Hopkins University School of Medicine, 600 N. Wolfe Street / Phipps 446, Baltimore, MD 21287, USA.
- Elena Ghotbi
- Johns Hopkins University School of Medicine, 600 N. Wolfe Street / Phipps 446, Baltimore, MD 21287, USA
- Foad Kazemi
- Johns Hopkins University School of Medicine, 600 N. Wolfe Street / Phipps 446, Baltimore, MD 21287, USA
- Amirali Shababi
- School of Medicine, Iran University of Medical Sciences, Hemat Highway next to Milad Tower 14535, Tehran, Iran
- Shayan Chashm Jahan
- Department of Computer Science, University of Maryland, 8125 Paint Branch Drive, College Park, MD 20742, USA
- Anita Mohseni
- Azad University Tehran Medical Branch, Danesh, Shariati Street, Tehran, Iran 19395/1495
- Niloufar Shababi
- Johns Hopkins University School of Medicine, 600 N. Wolfe Street / Phipps 446, Baltimore, MD 21287, USA
2
Arkoh S, Akudjedu TN, Amedu C, Antwi WK, Elshami W, Ohene-Botwe B. Current radiology workforce perspective on the integration of artificial intelligence in clinical practice: A systematic review. J Med Imaging Radiat Sci 2024; 56:101769. [PMID: 39437624] [DOI: 10.1016/j.jmir.2024.101769]
Abstract
INTRODUCTION Artificial Intelligence (AI) represents the application of computer systems to tasks traditionally performed by humans. The medical imaging profession has experienced a transformative shift through the integration of AI. While several independent primary studies have described various aspects of AI, the current review employs a systematic approach to describe the perspectives of radiologists and radiographers on the integration of AI in clinical practice. This review provides a holistic view from a professional standpoint of how the broad spectrum of AI tools is perceived as a unit in medical imaging practice. METHODS The study utilised a systematic review approach to collect data from quantitative, qualitative, and mixed-methods studies. Inclusion criteria encompassed articles concentrating on the viewpoints of either radiographers or radiologists regarding the incorporation of AI in medical imaging practice. A stepwise approach was employed in the systematic search across various databases. The included studies underwent quality assessment using the Quality Assessment Tool for Studies with Diverse Designs (QATSDD) checklist. A parallel-results convergent synthesis approach was employed to independently synthesise qualitative and quantitative evidence and to integrate the findings during the discussion phase. RESULTS Forty-one articles were included, all of which employed a cross-sectional study design. The main findings were themed around considerations and perspectives relating to AI education, impact on image quality and radiation dose, ethical and medico-legal implications of AI use, patient considerations and the perceived significance of AI for their care, and factors that influence development, implementation and job security. Despite varying emphasis, these themes collectively provide a global perspective on AI in medical imaging practice.
CONCLUSION Although levels of expertise varied, both radiographers and radiologists were generally optimistic about the incorporation of AI in medical imaging practice. However, low levels of AI education and knowledge remain a critical barrier. Furthermore, equipment errors, cost, data security and operational difficulties, ethical constraints, job displacement concerns and insufficient implementation efforts are integration challenges that merit the attention of stakeholders.
Affiliation(s)
- Samuel Arkoh
- Department of Radiography, Scarborough Hospital, York and Scarborough NHS Foundation Trust, UK.
- Theophilus N Akudjedu
- Institute of Medical Imaging and Visualisation, Department of Medical Science & Public Health, Faculty of Health and Social Sciences, Bournemouth University, UK
- Cletus Amedu
- Diagnostic Radiography, Department of Midwifery & Radiography, School of Health & Psychological Sciences, City St George's, University of London, Northampton Square, London EC1V 0HB, UK
- William K Antwi
- Department of Radiography, School of Biomedical & Allied Health Sciences, College of Health Sciences, University of Ghana, Ghana
- Wiam Elshami
- Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, United Arab Emirates
- Benard Ohene-Botwe
- Diagnostic Radiography, Department of Midwifery & Radiography, School of Health & Psychological Sciences, City St George's, University of London, Northampton Square, London EC1V 0HB, UK
3
Hamd ZY, Alorainy AI, Aldhahi MI, Gareeballah A, Alsubaie NF, Alshanaiber SA, Almudayhesh NS, Alyousef RA, AlNiwaider RA, Bin Moammar LA, Abuzaid MM. Evaluation of the Impact of Artificial Intelligence on Clinical Practice of Radiology in Saudi Arabia. J Multidiscip Healthc 2024; 17:4745-4756. [PMID: 39411200] [PMCID: PMC11476743] [DOI: 10.2147/JMDH.S465508]
Abstract
Background Artificial Intelligence (AI) is becoming integral to the health sector, particularly radiology, because it enhances diagnostic accuracy and optimizes patient care. This study aims to assess the awareness and acceptance of AI among radiology professionals in Saudi Arabia, identifying the educational and training needs to bridge knowledge gaps and enhance AI-related competencies. Methods This cross-sectional observational study surveyed radiology professionals across various hospitals in Saudi Arabia. Participants were recruited through multiple channels, including direct invitations, emails, social media, and professional societies. The survey comprised four sections: demographic details, perceptions of AI, knowledge about AI, and willingness to adopt AI in clinical practice. Results Out of 374 radiology professionals surveyed, 45.2% acknowledged AI's significant impact on their field. Approximately 44% showed enthusiasm for AI adoption. However, 58.6% reported limited AI knowledge and inadequate training, with 43.6% identifying skill development and the complexity of AI educational programs as major barriers to implementation. Conclusion While radiology professionals in Saudi Arabia are generally positive about integrating AI into clinical practice, significant gaps in knowledge and training need to be addressed. Tailored educational programs are essential to fully leverage AI's potential in improving medical imaging practices and patient care outcomes.
Affiliation(s)
- Zuhal Y Hamd
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
- Amal I Alorainy
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
- Monira I Aldhahi
- Department of Rehabilitation Sciences, College of Health and Rehabilitation Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
- Awadia Gareeballah
- Department of Diagnostic Radiology, College of Applied Medical Science, Taibah University, Al-Madinah Al-Munawwarah, Saudi Arabia
- Naifah F Alsubaie
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
- Shahad A Alshanaiber
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
- Nehal S Almudayhesh
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
- Raneem A Alyousef
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
- Reem A AlNiwaider
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
- Lamia A Bin Moammar
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, 11671, Saudi Arabia
- Mohamed M Abuzaid
- Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
- Research Institute for Medical and Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
4
Peeters D, Alves N, Venkadesh KV, Dinnessen R, Saghir Z, Scholten ET, Schaefer-Prokop C, Vliegenthart R, Prokop M, Jacobs C. Enhancing a deep learning model for pulmonary nodule malignancy risk estimation in chest CT with uncertainty estimation. Eur Radiol 2024; 34:6639-6651. [PMID: 38536463] [PMCID: PMC11399205] [DOI: 10.1007/s00330-024-10714-7]
Abstract
OBJECTIVE To investigate the effect of uncertainty estimation on the performance of a Deep Learning (DL) algorithm for estimating malignancy risk of pulmonary nodules. METHODS AND MATERIALS In this retrospective study, we integrated an uncertainty estimation method into a previously developed DL algorithm for nodule malignancy risk estimation. Uncertainty thresholds were developed using CT data from the Danish Lung Cancer Screening Trial (DLCST), containing 883 nodules (65 malignant) collected between 2004 and 2010. We used thresholds on the 90th and 95th percentiles of the uncertainty score distribution to categorize nodules into certain and uncertain groups. External validation was performed on clinical CT data from a tertiary academic center containing 374 nodules (207 malignant) collected between 2004 and 2012. DL performance was measured using area under the ROC curve (AUC) for the full set of nodules, for the certain cases and for the uncertain cases. Additionally, nodule characteristics were compared to identify trends for inducing uncertainty. RESULTS The DL algorithm performed significantly worse in the uncertain group compared to the certain group of DLCST (AUC 0.62 (95% CI: 0.49, 0.76) vs 0.93 (95% CI: 0.88, 0.97); p < .001) and the clinical dataset (AUC 0.62 (95% CI: 0.50, 0.73) vs 0.90 (95% CI: 0.86, 0.94); p < .001). The uncertain group included larger benign nodules as well as more part-solid and non-solid nodules than the certain group. CONCLUSION The integrated uncertainty estimation showed excellent performance for identifying uncertain cases in which the DL-based nodule malignancy risk estimation algorithm had significantly worse performance. CLINICAL RELEVANCE STATEMENT Deep Learning algorithms often lack the ability to gauge and communicate uncertainty. For safe clinical implementation, uncertainty estimation is of pivotal importance to identify cases where the deep learning algorithm harbors doubt in its prediction. 
KEY POINTS • Deep learning (DL) algorithms often lack uncertainty estimation, a capability that could reduce the risk of errors and improve safety during clinical adoption of DL algorithms. • Uncertainty estimation identifies pulmonary nodules on which the discriminative performance of the DL algorithm is significantly worse. • Uncertainty estimation can further enhance the benefits of the DL algorithm and improve its safety and trustworthiness.
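The certain/uncertain split described in this abstract can be sketched in a few lines: an uncertainty threshold is set at a chosen percentile of the uncertainty score distribution (the study used the 90th and 95th), and discrimination (AUC) is then reported separately for the certain and uncertain groups. The following is an illustrative reconstruction with synthetic numbers, not the authors' code:

```python
from typing import List, Sequence, Tuple

def auc(labels: Sequence[int], scores: Sequence[float]) -> float:
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random malignant case outscores a benign one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def split_by_uncertainty(unc: Sequence[float], pct: float) -> Tuple[List[int], List[int]]:
    """Return (certain, uncertain) index lists; the threshold is the
    pct-th percentile of the uncertainty score distribution."""
    s = sorted(unc)
    threshold = s[min(int(len(s) * pct / 100.0), len(s) - 1)]
    certain = [i for i, u in enumerate(unc) if u < threshold]
    uncertain = [i for i, u in enumerate(unc) if u >= threshold]
    return certain, uncertain

if __name__ == "__main__":
    # Synthetic example: malignancy labels, model risk scores, uncertainty scores.
    labels = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
    risk = [0.1, 0.2, 0.15, 0.4, 0.3, 0.7, 0.8, 0.9, 0.6, 0.35]
    unc = [0.05, 0.1, 0.02, 0.6, 0.2, 0.9, 0.1, 0.3, 0.8, 0.7]

    certain, uncertain = split_by_uncertainty(unc, 70.0)
    print("overall AUC:", auc(labels, risk))
    print("certain AUC:", auc([labels[i] for i in certain],
                              [risk[i] for i in certain]))
```

In the paper the uncertainty scores come from the DL model itself; here they are arbitrary numbers chosen only to exercise the split, and the percentile handling is a simplification of a proper percentile definition.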
Affiliation(s)
- Dré Peeters
- Diagnostic Imaging Analysis Group, Medical Imaging Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands.
- Natália Alves
- Diagnostic Imaging Analysis Group, Medical Imaging Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
- Kiran V Venkadesh
- Diagnostic Imaging Analysis Group, Medical Imaging Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
- Renate Dinnessen
- Diagnostic Imaging Analysis Group, Medical Imaging Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
- Zaigham Saghir
- Department of Medicine, Section of Pulmonary Medicine, Herlev-Gentofte Hospital, Hellerup, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Ernst T Scholten
- Diagnostic Imaging Analysis Group, Medical Imaging Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
- Cornelia Schaefer-Prokop
- Diagnostic Imaging Analysis Group, Medical Imaging Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
- Radiology Department, Meander Medical Center, Maatweg 3, 3813 TZ, Amersfoort, The Netherlands
- Rozemarijn Vliegenthart
- Department of Radiology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9700RB, Groningen, The Netherlands
- Mathias Prokop
- Diagnostic Imaging Analysis Group, Medical Imaging Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
- Department of Radiology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9700RB, Groningen, The Netherlands
- Colin Jacobs
- Diagnostic Imaging Analysis Group, Medical Imaging Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, the Netherlands
5
Liao X, Yao C, Jin F, Zhang J, Liu L. Barriers and facilitators to implementing imaging-based diagnostic artificial intelligence-assisted decision-making software in hospitals in China: a qualitative study using the updated Consolidated Framework for Implementation Research. BMJ Open 2024; 14:e084398. [PMID: 39260855] [PMCID: PMC11409362] [DOI: 10.1136/bmjopen-2024-084398]
Abstract
OBJECTIVES To identify the barriers and facilitators to the successful implementation of imaging-based diagnostic artificial intelligence (AI)-assisted decision-making software in China, using the updated Consolidated Framework for Implementation Research (CFIR) as a theoretical basis to develop strategies that promote effective implementation. DESIGN This qualitative study involved semistructured interviews with key stakeholders from both clinical settings and industry. Interview guide development, coding, analysis and reporting of findings were thoroughly informed by the updated CFIR. SETTING Four healthcare institutions in Beijing and Shanghai and two vendors of AI-assisted decision-making software for lung nodule detection and diabetic retinopathy screening were selected based on purposive sampling. PARTICIPANTS A total of 23 healthcare practitioners, 6 hospital informatics specialists, 4 hospital administrators and 7 vendors of the selected AI-assisted decision-making software were included in the study. RESULTS Within the 5 CFIR domains, 10 constructs were identified as barriers, 8 as facilitators and 3 as both barriers and facilitators. Major barriers included unsatisfactory clinical performance (innovation); lack of a collaborative network between primary and tertiary hospitals, and lack of information security measures and certification (outer setting); suboptimal data quality, and misalignment between software functions and the goals of healthcare institutions (inner setting); and unmet clinical needs (individuals). Key facilitators were strong empirical evidence of effectiveness and improved clinical efficiency (innovation); national guidelines related to AI and deployment of AI software in peer hospitals (outer setting); integration of AI software into existing hospital systems (inner setting); and involvement of clinicians (implementation process).
CONCLUSIONS The study findings contributed to the ongoing exploration of AI integration in healthcare from the perspective of China, emphasising the need for a comprehensive approach considering both innovation-specific factors and the broader organisational and contextual dynamics. As China and other developing countries continue to advance in adopting AI technologies, the derived insights could further inform healthcare practitioners, industry stakeholders and policy-makers, guiding policies and practices that promote the successful implementation of imaging-based diagnostic AI-assisted decision-making software in healthcare for optimal patient care.
Affiliation(s)
- Xiwen Liao
- Peking University First Hospital, Beijing, China
- Clinical Research Institute, Institute of Advanced Clinical Medicine, Peking University, Beijing, China
- Chen Yao
- Peking University First Hospital, Beijing, China
- Clinical Research Institute, Institute of Advanced Clinical Medicine, Peking University, Beijing, China
- Feifei Jin
- Trauma Medicine Center, Peking University People's Hospital, Beijing, China
- Key Laboratory of Trauma Treatment and Neural Regeneration, Peking University, Ministry of Education, Beijing, China
- Jun Zhang
- MSD R&D (China) Co., Ltd, Beijing, China
- Larry Liu
- Merck & Co Inc, Rahway, New Jersey, USA
- Weill Cornell Medical College, New York City, New York, USA
6
Sabeghi P, Kinkar KK, Castaneda GDR, Eibschutz LS, Fields BKK, Varghese BA, Patel DB, Gholamrezanezhad A. Artificial intelligence and machine learning applications for the imaging of bone and soft tissue tumors. Front Radiol 2024; 4:1332535. [PMID: 39301168] [PMCID: PMC11410694] [DOI: 10.3389/fradi.2024.1332535]
Abstract
Recent advancements in artificial intelligence (AI) and machine learning offer numerous opportunities in musculoskeletal radiology to potentially bolster diagnostic accuracy, workflow efficiency, and predictive modeling. AI tools can assist radiologists with a range of tasks, including image segmentation and lesion detection. In bone and soft tissue tumor imaging, radiomics and deep learning show promise for malignancy stratification, grading, prognostication, and treatment planning. However, challenges such as standardization, data integration, and ethical concerns regarding patient data need to be addressed ahead of clinical translation. In the realm of musculoskeletal oncology, AI also faces obstacles to robust algorithm development due to limited disease incidence. While many initiatives aim to develop multitasking AI systems, multidisciplinary collaboration is crucial for successful AI integration into clinical practice. Robust approaches that address these challenges and embody ethical practices are warranted to fully realize AI's potential for enhancing diagnostic accuracy and advancing patient care.
Affiliation(s)
- Paniz Sabeghi
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Ketki K Kinkar
- Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Liesl S Eibschutz
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Brandon K K Fields
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- Bino A Varghese
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Dakshesh B Patel
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Ali Gholamrezanezhad
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
7
Hassan M, Kushniruk A, Borycki E. Barriers to and Facilitators of Artificial Intelligence Adoption in Health Care: Scoping Review. JMIR Hum Factors 2024; 11:e48633. [PMID: 39207831] [PMCID: PMC11393514] [DOI: 10.2196/48633]
Abstract
BACKGROUND Artificial intelligence (AI) use cases in health care are on the rise, with the potential to improve operational efficiency and care outcomes. However, the translation of AI into practical, everyday use has been limited, as its effectiveness relies on successful implementation and adoption by clinicians, patients, and other health care stakeholders. OBJECTIVE As adoption is a key factor in the successful proliferation of an innovation, this scoping review aimed at presenting an overview of the barriers to and facilitators of AI adoption in health care. METHODS A scoping review was conducted using the guidance provided by the Joanna Briggs Institute and the framework proposed by Arksey and O'Malley. MEDLINE, IEEE Xplore, and ScienceDirect databases were searched to identify publications in English that reported on the barriers to or facilitators of AI adoption in health care. This review focused on articles published between January 2011 and December 2023. The review did not have any limitations regarding the health care setting (hospital or community) or the population (patients, clinicians, physicians, or health care administrators). A thematic analysis was conducted on the selected articles to map factors associated with the barriers to and facilitators of AI adoption in health care. RESULTS A total of 2514 articles were identified in the initial search. After title and abstract reviews, 50 (1.99%) articles were included in the final analysis. These articles were reviewed for the barriers to and facilitators of AI adoption in health care. Most articles were empirical studies, literature reviews, reports, and thought articles. Approximately 18 categories of barriers and facilitators were identified. These were organized sequentially to provide considerations for AI development, implementation, and the overall structure needed to facilitate adoption. 
CONCLUSIONS The literature review revealed that trust is a significant catalyst of adoption, and it was found to be impacted by several barriers identified in this review. A governance structure can be a key facilitator, among others, in ensuring all the elements identified as barriers are addressed appropriately. The findings demonstrate that the implementation of AI in health care is still, in many ways, dependent on the establishment of regulatory and legal frameworks. Further research into a combination of governance and implementation frameworks, models, or theories to enhance trust that would specifically enable adoption is needed to provide the necessary guidance to those translating AI research into practice. Future research could also be expanded to include attempts at understanding patients' perspectives on complex, high-risk AI use cases and how the use of AI applications affects clinical practice and patient care, including sociotechnical considerations, as more algorithms are implemented in actual clinical environments.
Affiliation(s)
- Masooma Hassan
- Department of Health Information Science, University of Victoria, Victoria, BC, Canada
- Andre Kushniruk
- Department of Health Information Science, University of Victoria, Victoria, BC, Canada
- Elizabeth Borycki
- Department of Health Information Science, University of Victoria, Victoria, BC, Canada
8
Visser JJ. The unquestionable marriage between AI and structured reporting. Eur Radiol 2024. Online ahead of print. [PMID: 39191995] [DOI: 10.1007/s00330-024-11038-2]
Affiliation(s)
- Jacob J Visser
- Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands.
9
Nair M, Svedberg P, Larsson I, Nygren JM. A comprehensive overview of barriers and strategies for AI implementation in healthcare: Mixed-method design. PLoS One 2024; 19:e0305949. [PMID: 39121051] [PMCID: PMC11315296] [DOI: 10.1371/journal.pone.0305949]
Abstract
Implementation of artificial intelligence systems for healthcare is challenging. Understanding the barriers and implementation strategies can impact their adoption and allows for better anticipation and planning. This study's objective was to create a detailed inventory of barriers to and strategies for AI implementation in healthcare, to support advancements in implementation methods and processes. A sequential explanatory mixed-method design was used. Firstly, scoping reviews and systematic literature reviews were identified using PubMed. Selected studies included empirical cases of AI implementation and use in clinical practice. As the reviews were deemed insufficient to fulfil the aim of the study, data collection shifted to the primary studies included in those reviews. The primary studies were screened by title and abstract, and thereafter read in full text. Data on barriers to and strategies for AI implementation were then extracted from the included articles, thematically coded by inductive analysis, and summarized. Subsequently, a directed qualitative content analysis of 69 interviews with healthcare leaders and healthcare professionals confirmed and added to the results from the literature review. Thirty-eight empirical cases from the six identified scoping and literature reviews met the inclusion and exclusion criteria. Barriers to and strategies for AI implementation were grouped under three phases of implementation (planning, implementing, and sustaining the use) and were categorized into eleven concepts: Leadership, Buy-in, Change management, Engagement, Workflow, Finance and human resources, Legal, Training, Data, Evaluation and monitoring, and Maintenance. Ethics emerged as a twelfth concept through qualitative analysis of the interviews. This study illustrates the inherent challenges and useful strategies in implementing AI in healthcare practice.
Future research should explore various aspects of leadership, collaboration and contracts among key stakeholders, legal strategies surrounding clinicians' liability, solutions to ethical dilemmas, infrastructure for efficient integration of AI in workflows, and define decision points in the implementation process.
Affiliation(s)
- Monika Nair
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Petra Svedberg
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Ingrid Larsson
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Jens M. Nygren
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
10
Kamel Rahimi A, Pienaar O, Ghadimi M, Canfell OJ, Pole JD, Shrapnel S, van der Vegt AH, Sullivan C. Implementing AI in Hospitals to Achieve a Learning Health System: Systematic Review of Current Enablers and Barriers. J Med Internet Res 2024; 26:e49655. [PMID: 39094106] [PMCID: PMC11329852] [DOI: 10.2196/49655]
Abstract
BACKGROUND Efforts are underway to capitalize on the computational power of the data collected in electronic medical records (EMRs) to achieve a learning health system (LHS). Artificial intelligence (AI) in health care has promised to improve clinical outcomes, and many researchers are developing AI algorithms on retrospective data sets. Integrating these algorithms with real-time EMR data is rare, and the enablers of and barriers to this shift from data set-based development to real-time implementation of AI in health systems are poorly understood. Exploring these factors holds promise for uncovering actionable insights toward the successful integration of AI into clinical workflows. OBJECTIVE The first objective was to conduct a systematic literature review to identify the evidence on enablers of and barriers to the real-world implementation of AI in hospital settings. The second objective was to map the identified enablers and barriers to a 3-horizon framework to enable the successful digital health transformation of hospitals to achieve an LHS. METHODS The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were adhered to. PubMed, Scopus, Web of Science, and IEEE Xplore were searched for studies published between January 2010 and January 2022. Articles with case studies and guidelines on the implementation of AI analytics in hospital settings using EMR data were included; studies conducted in primary and community care settings were excluded. Quality assessment of the identified papers was conducted using the Mixed Methods Appraisal Tool and ADAPTE frameworks. We coded evidence from the included studies that related to enablers of and barriers to AI implementation, and the findings were mapped to the 3-horizon framework to provide a road map for hospitals to integrate AI analytics. RESULTS Of the 1247 studies screened, 26 (2.09%) met the inclusion criteria.
In total, 65% (17/26) of the studies implemented AI analytics for enhancing the care of hospitalized patients, whereas the remaining 35% (9/26) provided implementation guidelines. Of the final 26 papers, the quality of 21 (81%) was assessed as poor. A total of 28 enablers were identified; 8 (29%) were new in this study. A total of 18 barriers were identified; 5 (28%) were newly found. Most of these newly identified factors were related to information and technology. Actionable recommendations for the implementation of AI toward achieving an LHS were provided by mapping the findings to a 3-horizon framework. CONCLUSIONS Significant issues exist in implementing AI in health care. Shifting from validating data sets to working with live data is challenging. This review incorporated the identified enablers and barriers into a 3-horizon framework, offering actionable recommendations for implementing AI analytics to achieve an LHS. The findings of this study can assist hospitals in steering their strategic planning toward successful adoption of AI.
Affiliation(s)
- Amir Kamel Rahimi: Queensland Digital Health Centre, Faculty of Medicine, The University of Queensland, Brisbane, Australia; Digital Health Cooperative Research Centre, Australian Government, Sydney, Australia
- Oliver Pienaar: The School of Mathematics and Physics, The University of Queensland, Brisbane, Australia
- Moji Ghadimi: The School of Mathematics and Physics, The University of Queensland, Brisbane, Australia
- Oliver J Canfell: Queensland Digital Health Centre, Faculty of Medicine, The University of Queensland, Brisbane, Australia; Digital Health Cooperative Research Centre, Australian Government, Sydney, Australia; Business School, The University of Queensland, Brisbane, Australia; Department of Nutritional Sciences, Faculty of Life Sciences and Medicine, King's College London, London, United Kingdom
- Jason D Pole: Queensland Digital Health Centre, Faculty of Medicine, The University of Queensland, Brisbane, Australia; Dalla Lana School of Public Health, The University of Toronto, Toronto, ON, Canada; ICES, Toronto, ON, Canada
- Sally Shrapnel: Queensland Digital Health Centre, Faculty of Medicine, The University of Queensland, Brisbane, Australia; The School of Mathematics and Physics, The University of Queensland, Brisbane, Australia
- Anton H van der Vegt: Queensland Digital Health Centre, Faculty of Medicine, The University of Queensland, Brisbane, Australia
- Clair Sullivan: Queensland Digital Health Centre, Faculty of Medicine, The University of Queensland, Brisbane, Australia; Metro North Hospital and Health Service, Department of Health, Queensland Government, Brisbane, Australia
11
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Pinto Dos Santos D, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA. J Am Coll Radiol 2024; 21:1292-1310. [PMID: 38276923 DOI: 10.1016/j.jacr.2023.12.005]
Abstract
Artificial intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Affiliation(s)
- Bibb Allen: Department of Radiology, Grandview Medical Center, Birmingham, Alabama; American College of Radiology Data Science Institute, Reston, Virginia
- Jaron Chong: Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter: Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler: Radiology Partners, El Segundo, California; Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, California
- John Mongan: Department of Radiology and Biomedical Imaging, University of California, San Francisco, California
- Lauren Oakden-Rayner: Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto Dos Santos: Department of Radiology, University Hospital of Cologne, Cologne, Germany; Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang: Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald: Department of Radiology, Lahey Hospital & Medical Center, Burlington, Massachusetts; Tufts University Medical School, Boston, Massachusetts; Commission on Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia
- John Slavotinek: South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia; College of Medicine and Public Health, Flinders University, Adelaide, Australia
12
Pesapane F, Gnocchi G, Quarrella C, Sorce A, Nicosia L, Mariano L, Bozzini AC, Marinucci I, Priolo F, Abbate F, Carrafiello G, Cassano E. Errors in Radiology: A Standard Review. J Clin Med 2024; 13:4306. [PMID: 39124573 PMCID: PMC11312890 DOI: 10.3390/jcm13154306]
Abstract
Radiological interpretations, while essential, are not infallible and are best understood as expert opinions formed through the evaluation of available evidence. Acknowledging the inherent possibility of error is crucial, as it frames the discussion on improving diagnostic accuracy and patient care. A comprehensive review of error classifications highlights the complexity of diagnostic errors, drawing on recent frameworks to categorize them into perceptual and cognitive errors, among others. This classification underpins an analysis of specific error types, their prevalence, and implications for clinical practice. Additionally, we address the psychological impact of radiological practice, including the effects of mental health and burnout on diagnostic accuracy. The potential of artificial intelligence (AI) in mitigating errors is discussed, alongside ethical and regulatory considerations in its application. This research contributes to the body of knowledge on radiological errors, offering insights into preventive strategies and the integration of AI to enhance diagnostic practices. It underscores the importance of a nuanced understanding of errors in radiology, aiming to foster improvements in patient care and radiological accuracy.
Affiliation(s)
- Filippo Pesapane: Breast Imaging Division, Radiology Department, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti 435, 20141 Milan, Italy
- Giulia Gnocchi: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Cettina Quarrella: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Adriana Sorce: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Luca Nicosia: Breast Imaging Division, Radiology Department, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti 435, 20141 Milan, Italy
- Luciano Mariano: Breast Imaging Division, Radiology Department, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti 435, 20141 Milan, Italy
- Anna Carla Bozzini: Breast Imaging Division, Radiology Department, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti 435, 20141 Milan, Italy
- Irene Marinucci: Breast Imaging Division, Radiology Department, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti 435, 20141 Milan, Italy
- Francesca Priolo: Breast Imaging Division, Radiology Department, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti 435, 20141 Milan, Italy
- Francesca Abbate: Breast Imaging Division, Radiology Department, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti 435, 20141 Milan, Italy
- Gianpaolo Carrafiello: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy; Radiology Department, Fondazione IRCCS Cà Granda, Policlinico di Milano Ospedale Maggiore, Università di Milano, 20122 Milan, Italy
- Enrico Cassano: Breast Imaging Division, Radiology Department, IEO European Institute of Oncology IRCCS, Via Giuseppe Ripamonti 435, 20141 Milan, Italy
13
Champendal M, Ribeiro RST, Müller H, Prior JO, Sá Dos Reis C. Nuclear medicine technologists practice impacted by AI denoising applications in PET/CT images. Radiography (Lond) 2024; 30:1232-1239. [PMID: 38917681 DOI: 10.1016/j.radi.2024.06.010]
Abstract
PURPOSE Artificial intelligence (AI) in positron emission tomography/computed tomography (PET/CT) can be used to improve image quality when it is useful to reduce the injected activity or the acquisition time. Particular attention must be paid to ensure that users adopt this technological innovation when outcomes can be improved by its use. The aim of this study was to identify the aspects that need to be analysed and discussed before implementing an AI denoising PET/CT algorithm in clinical practice, based on the representations of Nuclear Medicine Technologists (NMT) from Western Switzerland, highlighting the associated barriers and facilitators. METHODS Two focus groups were organised in June and September 2023, involving ten voluntary participants recruited from all types of medical imaging departments, forming a diverse sample of NMT. The interview guide followed the first stage of the revised Ottawa Model of Research Use. A content analysis was performed following the three-stage approach described by Wanlin. The study received ethics clearance. RESULTS Clinical practice, workload, knowledge and resources were the four themes identified by the ten NMT participants (aged 31-60, none familiar with this AI tool) as needing consideration before implementing an AI denoising PET/CT algorithm. The main barriers to implementing this algorithm included workflow challenges, resistance from professionals and lack of education, while the main facilitators were clear explanations and the availability of support for questions, such as a "local champion". CONCLUSION To implement a denoising algorithm in PET/CT, several aspects of clinical practice need to be considered to reduce the barriers to its implementation, such as the procedures, the workload and the available resources. Participants also emphasised the importance of clear explanations, education, and support for successful implementation.
IMPLICATIONS FOR PRACTICE To facilitate the implementation of AI tools in clinical practice, it is important to identify the barriers and propose strategies that can mitigate them.
Affiliation(s)
- M Champendal: School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- R S T Ribeiro: School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland
- H Müller: Informatics Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Medical Faculty, University of Geneva, Switzerland
- J O Prior: Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland; Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV), Lausanne, Switzerland
- C Sá Dos Reis: School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland
14
Wilkinson LS, Dunbar JK, Lip G. Clinical Integration of Artificial Intelligence for Breast Imaging. Radiol Clin North Am 2024; 62:703-716. [PMID: 38777544 DOI: 10.1016/j.rcl.2023.12.006]
Abstract
This article describes an approach to planning and implementing artificial intelligence products in a breast screening service. It highlights the importance of an in-depth understanding of the end-to-end workflow and effective project planning by a multidisciplinary team. It discusses the need for monitoring to ensure that performance is stable and meets expectations, as well as focusing on the potential for inadvertently generating inequality. New cross-discipline roles and expertise will be needed to enhance service delivery.
Affiliation(s)
- Louise S Wilkinson: Oxford Breast Imaging Centre, Churchill Hospital, Old Road, Headington, Oxford OX3 7LE, UK
- J Kevin Dunbar: Regional Head of Screening Quality Assurance Service (SQAS) - South, NHS England, England, UK
- Gerald Lip: North East Scotland Breast Screening Service, Aberdeen Royal Infirmary, Foresterhill Road, Aberdeen AB25 2XF, UK
15
Malamateniou C. Technology-enabled patient care in medical radiation sciences: the two sides of the coin. J Med Radiat Sci 2024. [PMID: 38923225 DOI: 10.1002/jmrs.807]
Abstract
This is an exciting time to be working in healthcare and medical radiation sciences. This article discusses the potential benefits and risks of new technological interventions for patient benefit and outlines the need for co-production, governance and education to ensure these are used for advancing patients' well-being.
Affiliation(s)
- Christina Malamateniou: Department of Midwifery & Radiography, School of Health and Psychological Sciences, City, University of London, London, UK; Discipline of Medical Imaging and Radiation Therapy, College of Medicine and Health, University College Cork, Cork, Ireland; European Federation of Radiographer Societies, Cumiera, Portugal; European Society of Medical Imaging Informatics, Vienna, Austria
16
Kutaiba N, Chung W, Goodwin M, Testro A, Egan G, Lim R. The impact of hepatic and splenic volumetric assessment in imaging for chronic liver disease: a narrative review. Insights Imaging 2024; 15:146. [PMID: 38886297 PMCID: PMC11183036 DOI: 10.1186/s13244-024-01727-3]
Abstract
Chronic liver disease is responsible for significant morbidity and mortality worldwide. Abdominal computed tomography (CT) and magnetic resonance imaging (MRI) can fully visualise the liver and adjacent structures in the upper abdomen providing a reproducible assessment of the liver and biliary system and can detect features of portal hypertension. Subjective interpretation of CT and MRI in the assessment of liver parenchyma for early and advanced stages of fibrosis (pre-cirrhosis), as well as severity of portal hypertension, is limited. Quantitative and reproducible measurements of hepatic and splenic volumes have been shown to correlate with fibrosis staging, clinical outcomes, and mortality. In this review, we will explore the role of volumetric measurements in relation to diagnosis, assessment of severity and prediction of outcomes in chronic liver disease patients. We conclude that volumetric analysis of the liver and spleen can provide important information in such patients, has the potential to stratify patients' stage of hepatic fibrosis and disease severity, and can provide critical prognostic information. CRITICAL RELEVANCE STATEMENT: This review highlights the role of volumetric measurements of the liver and spleen using CT and MRI in relation to diagnosis, assessment of severity, and prediction of outcomes in chronic liver disease patients. KEY POINTS: Volumetry of the liver and spleen using CT and MRI correlates with hepatic fibrosis stages and cirrhosis. Volumetric measurements correlate with chronic liver disease outcomes. Fully automated methods for volumetry are required for implementation into routine clinical practice.
Affiliation(s)
- Numan Kutaiba: Department of Radiology, Austin Health, 145 Studley Road, Heidelberg, VIC, 3084, Australia; The University of Melbourne, Parkville, Melbourne, VIC, Australia
- William Chung: The University of Melbourne, Parkville, Melbourne, VIC, Australia; Department of Gastroenterology, Austin Health, 145 Studley Road, Heidelberg, VIC, 3084, Australia
- Mark Goodwin: Department of Radiology, Austin Health, 145 Studley Road, Heidelberg, VIC, 3084, Australia; The University of Melbourne, Parkville, Melbourne, VIC, Australia
- Adam Testro: The University of Melbourne, Parkville, Melbourne, VIC, Australia; Department of Gastroenterology, Austin Health, 145 Studley Road, Heidelberg, VIC, 3084, Australia
- Gary Egan: Monash Biomedical Imaging, Monash University, Clayton, VIC, 3800, Australia
- Ruth Lim: Department of Radiology, Austin Health, 145 Studley Road, Heidelberg, VIC, 3084, Australia; The University of Melbourne, Parkville, Melbourne, VIC, Australia
17
Wenderott K, Krups J, Luetkens JA, Weigl M. Radiologists' perspectives on the workflow integration of an artificial intelligence-based computer-aided detection system: A qualitative study. Appl Ergon 2024; 117:104243. [PMID: 38306741 DOI: 10.1016/j.apergo.2024.104243]
Abstract
In healthcare, artificial intelligence (AI) is expected to improve work processes, yet most research focuses on the technical features of AI rather than its real-world clinical implementation. To evaluate the implementation process of an AI-based computer-aided detection system (AI-CAD) for prostate MRI readings, we interviewed German radiologists in a pre-post design. We embedded our findings in the Model of Workflow Integration and the Technology Acceptance Model to analyze workflow effects, facilitators, and barriers. The most prominent barriers were: (i) a time delay in the work process, (ii) additional work steps to be taken, and (iii) an unstable performance of the AI-CAD. Most frequently named facilitators were (i) good self-organization, and (ii) good usability of the software. Our results underline the importance of a holistic approach to AI implementation considering the sociotechnical work system and provide valuable insights into key factors of the successful adoption of AI technologies in work systems.
Affiliation(s)
- Katharina Wenderott: Institute for Patient Safety, University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
- Jim Krups: Institute for Patient Safety, University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
- Julian A Luetkens: Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Germany; Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn, Germany
- Matthias Weigl: Institute for Patient Safety, University Hospital Bonn, Venusberg-Campus 1, 53127 Bonn, Germany
18
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA. Can Assoc Radiol J 2024; 75:226-244. [PMID: 38251882 DOI: 10.1177/08465371231222229]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Affiliation(s)
- Bibb Allen: Department of Radiology, Grandview Medical Center, Birmingham, AL, USA; Data Science Institute, American College of Radiology, Reston, VA, USA
- Jaron Chong: Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter: Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler: Radiology Partners, El Segundo, CA, USA; Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
- John Mongan: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA, USA
- Lauren Oakden-Rayner: Australian Institute for Machine Learning, University of Adelaide, Adelaide, SA, Australia
- Daniel Pinto Dos Santos: Department of Radiology, University Hospital of Cologne, Cologne, Germany; Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang: Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada
- Christoph Wald: Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA; Tufts University Medical School, Boston, MA, USA; American College of Radiology, Reston, VA, USA
- John Slavotinek: South Australia Medical Imaging, Flinders Medical Centre Adelaide, SA, Australia; College of Medicine and Public Health, Flinders University, Adelaide, SA, Australia
19
Maris MT, Koçar A, Willems DL, Pols J, Tan HL, Lindinger GL, Bak MAR. Ethical use of artificial intelligence to prevent sudden cardiac death: an interview study of patient perspectives. BMC Med Ethics 2024; 25:42. [PMID: 38575931 PMCID: PMC10996273 DOI: 10.1186/s12910-024-01042-y]
Abstract
BACKGROUND The emergence of artificial intelligence (AI) in medicine has prompted the development of numerous ethical guidelines, while the involvement of patients in the creation of these documents lags behind. As part of the European PROFID project we explore patient perspectives on the ethical implications of AI in care for patients at increased risk of sudden cardiac death (SCD). AIM Explore perspectives of patients on the ethical use of AI, particularly in clinical decision-making regarding the implantation of an implantable cardioverter-defibrillator (ICD). METHODS Semi-structured, future scenario-based interviews were conducted among patients who had either an ICD and/or a heart condition with increased risk of SCD in Germany (n = 9) and the Netherlands (n = 15). We used the principles of the European Commission's Ethics Guidelines for Trustworthy AI to structure the interviews. RESULTS Six themes arose from the interviews: the ability of AI to rectify human doctors' limitations; the objectivity of data; whether AI can serve as second opinion; AI explainability and patient trust; the importance of the 'human touch'; and the personalization of care. Overall, our results reveal a strong desire among patients for more personalized and patient-centered care in the context of ICD implantation. Participants in our study express significant concerns about the further loss of the 'human touch' in healthcare when AI is introduced in clinical settings. They believe that this aspect of care is currently inadequately recognized in clinical practice. Participants attribute to doctors the responsibility of evaluating AI recommendations for clinical relevance and aligning them with patients' individual contexts and values, in consultation with the patient. CONCLUSION The 'human touch' patients exclusively ascribe to human medical practitioners extends beyond sympathy and kindness, and has clinical relevance in medical decision-making. 
Because this cannot be replaced by AI, we suggest that normative research into the 'right to a human doctor' is needed. Furthermore, policies on patient-centered AI integration in clinical practice should encompass the ethics of everyday practice rather than only principle-based ethics. We suggest that an empirical ethics approach grounded in ethnographic research is exceptionally well-suited to pave the way forward.
Affiliation(s)
- Menno T Maris: Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Ayca Koçar: Institute for Healthcare Management and Health Sciences, University of Bayreuth, Bayreuth, Germany
- Dick L Willems: Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Jeannette Pols: Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands; Department of Anthropology, University of Amsterdam, Amsterdam, The Netherlands
- Hanno L Tan: Department of Clinical and Experimental Cardiology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands; Netherlands Heart Institute, Utrecht, The Netherlands
- Georg L Lindinger: Institute for Healthcare Management and Health Sciences, University of Bayreuth, Bayreuth, Germany
- Marieke A R Bak: Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands; Institute of History and Ethics in Medicine, TUM School of Medicine, Technical University of Munich, Munich, Germany
20
Stewart J, Freeman S, Eroglu E, Dumitrascu N, Lu J, Goudie A, Sprivulis P, Akhlaghi H, Tran V, Sanfilippo F, Celenza A, Than M, Fatovich D, Walker K, Dwivedi G. Attitudes towards artificial intelligence in emergency medicine. Emerg Med Australas 2024; 36:252-265. [PMID: 38044755 DOI: 10.1111/1742-6723.14345]
Abstract
OBJECTIVE To assess Australian and New Zealand emergency clinicians' attitudes towards the use of artificial intelligence (AI) in emergency medicine. METHODS We undertook a qualitative interview-based study based on grounded theory. Participants were recruited through ED internal mailing lists, the Australasian College for Emergency Medicine Bulletin, and the research teams' personal networks. Interviews were transcribed, coded and themes presented. RESULTS Twenty-five interviews were conducted between July 2021 and May 2022. Thematic saturation was achieved after 22 interviews. Most participants were from either Western Australia (52%) or Victoria (16%) and were consultants (96%). More participants reported feeling optimistic (10/25) than neutral (6/25), pessimistic (2/25) or mixed (7/25) towards the use of AI in the ED. A minority expressed scepticism regarding the feasibility or value of implementing AI into the ED. Multiple potential risks and ethical issues were discussed by participants including skill loss from overreliance on AI, algorithmic bias, patient privacy and concerns over liability. Participants also discussed perceived inadequacies in existing information technology systems. Participants felt that AI technologies would be used as decision support tools and not replace the roles of emergency clinicians. Participants were not concerned about the impact of AI on their job security. Most (17/25) participants thought that AI would impact emergency medicine within the next 10 years. CONCLUSIONS Emergency clinicians interviewed were generally optimistic about the use of AI in emergency medicine, so long as it is used as a decision support tool and they maintain the ability to override its recommendations.
Affiliation(s)
- Jonathon Stewart: School of Medicine, The University of Western Australia, Perth, Western Australia, Australia; Department of Advanced Clinical and Translational Cardiovascular Imaging, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia
- Samuel Freeman: SensiLab, Monash University, Melbourne, Victoria, Australia; Department of Emergency Medicine, St Vincent's Hospital Melbourne, Melbourne, Victoria, Australia
- Ege Eroglu: School of Medicine, The University of Notre Dame Australia, Fremantle, Western Australia, Australia
- Nicole Dumitrascu: School of Medicine, The University of Notre Dame Australia, Fremantle, Western Australia, Australia
- Juan Lu: Department of Advanced Clinical and Translational Cardiovascular Imaging, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia; Department of Computer Science and Software Engineering, The University of Western Australia, Perth, Western Australia, Australia
- Adrian Goudie: Department of Emergency Medicine, Fiona Stanley Hospital, Perth, Western Australia, Australia
- Peter Sprivulis: Strategy and Governance Division, Western Australia Department of Health, Perth, Western Australia, Australia
- Hamed Akhlaghi: Department of Emergency Medicine, St Vincent's Hospital Melbourne, Melbourne, Victoria, Australia
- Viet Tran: School of Medicine, University of Tasmania, Hobart, Tasmania, Australia; Department of Emergency Medicine, Royal Hobart Hospital, Hobart, Tasmania, Australia
- Frank Sanfilippo: School of Population and Global Health, The University of Western Australia, Perth, Western Australia, Australia
- Antonio Celenza: School of Medicine, The University of Western Australia, Perth, Western Australia, Australia; Department of Emergency Medicine, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Martin Than: Department of Emergency Medicine, Christchurch Hospital, Christchurch, New Zealand
- Daniel Fatovich: Emergency Medicine, Royal Perth Hospital, The University of Western Australia, Perth, Western Australia, Australia; Centre for Clinical Research in Emergency Medicine, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia
- Katie Walker: School of Clinical Sciences at Monash Health, Monash University, Melbourne, Victoria, Australia
- Girish Dwivedi: School of Medicine, The University of Western Australia, Perth, Western Australia, Australia; Department of Advanced Clinical and Translational Cardiovascular Imaging, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia; Department of Cardiology, Fiona Stanley Hospital, Perth, Western Australia, Australia
21
Flory MN, Napel S, Tsai EB. Artificial Intelligence in Radiology: Opportunities and Challenges. Semin Ultrasound CT MR 2024; 45:152-160. [PMID: 38403128] [DOI: 10.1053/j.sult.2024.02.004]
Abstract
Artificial intelligence's (AI) emergence in radiology elicits both excitement and uncertainty. AI holds promise for improving radiology with regard to clinical practice, education, and research opportunities. Yet AI systems are trained on select datasets that can contain bias and inaccuracies. Radiologists must understand these limitations and engage with AI developers at every step of the process - from algorithm initiation and design to development and implementation - to maximize the benefit and minimize the harm this technology can enable.
Affiliation(s)
- Marta N Flory: Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
- Sandy Napel: Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
- Emily B Tsai: Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
22
Marco-Ruiz L, Hernández MÁT, Ngo PD, Makhlysheva A, Svenning TO, Dyb K, Chomutare T, Llatas CF, Muñoz-Gama J, Tayefi M. A multinational study on artificial intelligence adoption: Clinical implementers' perspectives. Int J Med Inform 2024; 184:105377. [PMID: 38377725] [DOI: 10.1016/j.ijmedinf.2024.105377]
Abstract
BACKGROUND Despite substantial progress in AI research for healthcare, translating research achievements to AI systems in clinical settings is challenging and, in many cases, unsatisfactory. As a result, many AI investments have stalled at the prototype level, never reaching clinical settings. OBJECTIVE To improve the chances of future AI implementation projects succeeding, we analyzed the experiences of clinical AI system implementers to better understand the challenges and success factors in their implementations. METHODS Thirty-seven implementers of clinical AI from European and North and South American countries were interviewed. Semi-structured interviews were transcribed and analyzed qualitatively with the framework method, identifying the success factors and the reasons for challenges as well as documenting proposals from implementers to improve AI adoption in clinical settings. RESULTS We gathered the implementers' requirements for facilitating AI adoption in the clinical setting. The main findings include 1) the lesser importance of AI explainability in favor of proper clinical validation studies, 2) the need to actively involve clinical practitioners, and not only clinical researchers, in the inception of AI research projects, 3) the need for better information structures and processes to manage data access and the ethical approval of AI projects, 4) the need for better support for regulatory compliance and avoidance of duplications in data management approval bodies, 5) the need to increase both clinicians' and citizens' literacy with respect to the benefits and limitations of AI, and 6) the need for better funding schemes to support the implementation, embedding, and validation of AI in the clinical workflow, beyond pilots. CONCLUSION Participants in the interviews are positive about the future of AI in clinical settings. At the same time, they propose numerous measures to transfer research advances into implementations that will benefit healthcare personnel.
Transferring AI research into benefits for healthcare workers and patients requires adjustments in regulations, data access procedures, education, funding schemes, and validation of AI systems.
Affiliation(s)
- Luis Marco-Ruiz: Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
- Phuong Dinh Ngo: Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
- Alexandra Makhlysheva: Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
- Therese Olsen Svenning: Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
- Kari Dyb: Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
- Taridzo Chomutare: Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
- Carlos Fernández Llatas: Instituto de las Tecnologías de la Información y las Comunicaciones (ITACA), Universitat Politècnica de València (UPV), Valencia, Spain
- Jorge Muñoz-Gama: Department of Computer Science, Pontificia Universidad Católica de Chile, Santiago, Chile
- Maryam Tayefi: Norwegian Centre for E-Health Research, University Hospital of North Norway, Tromsø, Norway
23
Langius-Wiffen E, Slotman DJ, Groeneveld J, Ac van Osch J, Nijholt IM, de Boer E, Nijboer-Oosterveld J, Veldhuis WB, de Jong PA, Boomsma MF. External validation of the RSNA 2020 pulmonary embolism detection challenge winning deep learning algorithm. Eur J Radiol 2024; 173:111361. [PMID: 38401407] [DOI: 10.1016/j.ejrad.2024.111361]
Abstract
PURPOSE To evaluate the diagnostic performance and generalizability of the winning DL algorithm of the RSNA 2020 PE detection challenge in a local population using CTPA data from two hospitals. MATERIALS AND METHODS Consecutive CTPA images from patients referred for suspected PE were retrospectively analysed. The winning RSNA 2020 DL algorithm was retrained on the RSNA-STR Pulmonary Embolism CT (RSPECT) dataset. The algorithm was tested in hospital A on multidetector CT (MDCT) images of 238 patients and in hospital B on spectral detector CT (SDCT) and virtual monochromatic images (VMI) of 114 patients. The output of the DL algorithm was compared with a reference standard, which included a consensus reading by at least two experienced cardiothoracic radiologists for both hospitals. Areas under the receiver operating characteristic curve (AUCs) were calculated. Sensitivity and specificity were determined using the maximum Youden index. RESULTS According to the reference standard, PE was present in 73 patients (30.7%) in hospital A and 33 patients (29.0%) in hospital B. For the DL algorithm, the AUC was 0.96 (95% CI 0.92-0.98) in hospital A, 0.89 (95% CI 0.81-0.94) for conventional reconstruction in hospital B and 0.87 (95% CI 0.80-0.93) for VMI. CONCLUSION The RSNA 2020 pulmonary embolism detection on CTPA challenge winning DL algorithm, retrained on the RSPECT dataset, showed high diagnostic accuracy on MDCT images. A somewhat lower performance was observed on SDCT images, which suggests that additional training on novel CT technology may improve the generalizability of this DL algorithm.
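The operating point reported above, sensitivity and specificity at the maximum Youden index (J = sensitivity + specificity - 1), can be sketched in a few lines of Python. The scores and labels below are invented for illustration and are not taken from the study.

```python
def youden_threshold(scores, labels):
    """Return (threshold, sensitivity, specificity) at the maximum Youden index.

    scores: model probabilities; labels: 1 = PE present, 0 = PE absent.
    A case is called positive when its score is >= the candidate threshold.
    """
    best = (None, 0.0, 0.0, -1.0)  # (threshold, sens, spec, J)
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best[3]:
            best = (t, sens, spec, j)
    return best[:3]

# Invented example data: three PE-positive and three PE-negative cases.
scores = [0.1, 0.3, 0.35, 0.8, 0.9, 0.6]
labels = [0, 0, 1, 1, 1, 0]
t, sens, spec = youden_threshold(scores, labels)
```

In practice the AUC itself would be computed over all such thresholds (e.g. by the trapezoidal rule over the ROC points); the Youden criterion only picks the single reporting threshold.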
Affiliation(s)
- Derk J Slotman: Department of Radiology, Isala Hospital, Zwolle, the Netherlands; Department of Radiology, University Medical Centre Utrecht, Utrecht University, Utrecht, the Netherlands
- Jorik Groeneveld: Department of Radiology, Isala Hospital, Zwolle, the Netherlands
- Ingrid M Nijholt: Department of Radiology, Isala Hospital, Zwolle, the Netherlands; Department of Radiology, University Medical Centre Utrecht, Utrecht University, Utrecht, the Netherlands
- Erwin de Boer: Department of Radiology, Isala Hospital, Zwolle, the Netherlands
- Wouter B Veldhuis: Department of Radiology, University Medical Centre Utrecht, Utrecht University, Utrecht, the Netherlands
- Pim A de Jong: Department of Radiology, University Medical Centre Utrecht, Utrecht University, Utrecht, the Netherlands
- Martijn F Boomsma: Department of Radiology, Isala Hospital, Zwolle, the Netherlands; Department of Radiology, University Medical Centre Utrecht, Utrecht University, Utrecht, the Netherlands
24
Daher H, Punchayil SA, Ismail AAE, Fernandes RR, Jacob J, Algazzar MH, Mansour M. Advancements in Pancreatic Cancer Detection: Integrating Biomarkers, Imaging Technologies, and Machine Learning for Early Diagnosis. Cureus 2024; 16:e56583. [PMID: 38646386] [PMCID: PMC11031195] [DOI: 10.7759/cureus.56583]
Abstract
Artificial intelligence (AI) has come to play a pivotal role in revolutionizing medical practices, particularly in the field of pancreatic cancer detection and management. As a leading cause of cancer-related deaths, pancreatic cancer warrants innovative approaches due to its typically advanced stage at diagnosis and dismal survival rates. Present detection methods, constrained by limitations in accuracy and efficiency, underscore the necessity for novel solutions. AI-driven methodologies present promising avenues for enhancing early detection and prognosis forecasting. Through the analysis of imaging data, biomarker profiles, and clinical information, AI algorithms excel in discerning subtle abnormalities indicative of pancreatic cancer with remarkable precision. Moreover, machine learning (ML) algorithms facilitate the amalgamation of diverse data sources to optimize patient care. However, despite its huge potential, the implementation of AI in pancreatic cancer detection faces various challenges. Issues such as the scarcity of comprehensive datasets, biases in algorithm development, and concerns regarding data privacy and security necessitate thorough scrutiny. While AI offers immense promise in transforming pancreatic cancer detection and management, ongoing research and collaborative efforts are indispensable in overcoming technical hurdles and ethical dilemmas. This review delves into the evolution of AI, its application in pancreatic cancer detection, and the challenges and ethical considerations inherent in its integration.
Affiliation(s)
- Hisham Daher: Internal Medicine, University of Debrecen, Debrecen, HUN
- Sneha A Punchayil: Internal Medicine, University Hospital of North Tees, Stockton-on-Tees, GBR
- Joel Jacob: General Medicine, Diana Princess of Wales Hospital, Grimsby, GBR
- Mohammad Mansour: General Medicine, University of Debrecen, Debrecen, HUN; General Medicine, Jordan University Hospital, Amman, JOR
25
Martindale APL, Llewellyn CD, de Visser RO, Ng B, Ngai V, Kale AU, di Ruffano LF, Golub RM, Collins GS, Moher D, McCradden MD, Oakden-Rayner L, Rivera SC, Calvert M, Kelly CJ, Lee CS, Yau C, Chan AW, Keane PA, Beam AL, Denniston AK, Liu X. Concordance of randomised controlled trials for artificial intelligence interventions with the CONSORT-AI reporting guidelines. Nat Commun 2024; 15:1619. [PMID: 38388497] [PMCID: PMC10883966] [DOI: 10.1038/s41467-024-45355-3]
Abstract
The Consolidated Standards of Reporting Trials extension for Artificial Intelligence interventions (CONSORT-AI) was published in September 2020. Since its publication, several randomised controlled trials (RCTs) of AI interventions have been published but their completeness and transparency of reporting is unknown. This systematic review assesses the completeness of reporting of AI RCTs following publication of CONSORT-AI and provides a comprehensive summary of RCTs published in recent years. 65 RCTs were identified, mostly conducted in China (37%) and USA (18%). Median concordance with CONSORT-AI reporting was 90% (IQR 77-94%), although only 10 RCTs explicitly reported its use. Several items were consistently under-reported, including algorithm version, accessibility of the AI intervention or code, and references to a study protocol. Only 3 of 52 included journals explicitly endorsed or mandated CONSORT-AI. Despite a generally high concordance amongst recent AI RCTs, some AI-specific considerations remain systematically poorly reported. Further encouragement of CONSORT-AI adoption by journals and funders may enable more complete adoption of the full CONSORT-AI guidelines.
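As a side note on the summary statistic used above, a median with IQR such as the reported 90% (IQR 77-94%) is derived from the per-trial concordance percentages. A minimal sketch follows; the scores are invented, and note that quartile conventions vary, so the exact IQR bounds depend on the method chosen (here Python's "inclusive" method).

```python
import statistics

def median_iqr(values):
    """Return (median, Q1, Q3) of a list of numbers.

    Uses the 'inclusive' quartile method, which interpolates
    between observed data points.
    """
    q1, _q2, q3 = statistics.quantiles(values, n=4, method="inclusive")
    return statistics.median(values), q1, q3

# Invented per-trial concordance percentages for seven hypothetical RCTs.
concordance = [70, 77, 85, 90, 92, 94, 96]
med, q1, q3 = median_iqr(concordance)  # summary would read: median (IQR q1-q3)
```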
Affiliation(s)
- Carrie D Llewellyn: Department of Primary Care and Public Health, Brighton and Sussex Medical School, Brighton, UK
- Richard O de Visser: Department of Primary Care and Public Health, Brighton and Sussex Medical School, Brighton, UK
- Benjamin Ng: Birmingham and Midland Eye Centre, Sandwell and West Birmingham NHS Trust, Birmingham, UK; Christ Church, University of Oxford, Oxford, UK
- Victoria Ngai: University College London Medical School, London, UK
- Aditya U Kale: Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK
- Robert M Golub: Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
- Gary S Collins: Centre for Statistics in Medicine/UK EQUATOR Centre, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- David Moher: Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada
- Melissa D McCradden: Department of Bioethics, The Hospital for Sick Children, Toronto, Canada; Genetics & Genome Biology Research Program, Peter Gilgan Centre for Research & Learning, Toronto, Canada; Division of Clinical and Public Health, Dalla Lana School of Public Health, Toronto, Canada
- Lauren Oakden-Rayner: Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Samantha Cruz Rivera: Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK; Centre for Patient Reported Outcomes Research (CPROR), Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Melanie Calvert: National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK; Centre for Patient Reported Outcomes Research (CPROR), Institute of Applied Health Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK; NIHR Applied Research Collaboration (ARC) West Midlands, University of Birmingham, Birmingham, UK; NIHR Blood and Transplant Research Unit (BTRU) in Precision Transplant and Cellular Therapeutics, University of Birmingham, Birmingham, UK
- Christopher Yau: Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK; Health Data Research UK, London, UK
- An-Wen Chan: Department of Medicine, Women's College Hospital, University of Toronto, Toronto, Canada
- Pearse A Keane: NIHR Biomedical Research Centre at Moorfields, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Andrew L Beam: Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Alastair K Denniston: Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre, University of Birmingham, Birmingham, UK; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK; NIHR Biomedical Research Centre at Moorfields, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Xiaoxuan Liu: Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, UK
26
26
|
Hanneman K, Playford D, Dey D, van Assen M, Mastrodicasa D, Cook TS, Gichoya JW, Williamson EE, Rubin GD. Value Creation Through Artificial Intelligence and Cardiovascular Imaging: A Scientific Statement From the American Heart Association. Circulation 2024; 149:e296-e311. [PMID: 38193315] [DOI: 10.1161/cir.0000000000001202]
Abstract
Multiple applications for machine learning and artificial intelligence (AI) in cardiovascular imaging are being proposed and developed. However, the processes involved in implementing AI in cardiovascular imaging are highly diverse, varying by imaging modality, patient subtype, features to be extracted and analyzed, and clinical application. This article establishes a framework that defines value from an organizational perspective, followed by value chain analysis to identify the activities in which AI might produce the greatest incremental value creation. The various perspectives that should be considered are highlighted, including clinicians, imagers, hospitals, patients, and payers. Integrating the perspectives of all health care stakeholders is critical for creating value and ensuring the successful deployment of AI tools in a real-world setting. Different AI tools are summarized, along with the unique aspects of AI applications to various cardiac imaging modalities, including cardiac computed tomography, magnetic resonance imaging, and positron emission tomography. AI is applicable and has the potential to add value to cardiovascular imaging at every step along the patient journey, from selecting the more appropriate test to optimizing image acquisition and analysis, interpreting the results for classification and diagnosis, and predicting the risk for major adverse cardiac events.
27
Kim RY. Radiomics and artificial intelligence for risk stratification of pulmonary nodules: Ready for primetime? Cancer Biomark 2024:CBM230360. [PMID: 38427470] [PMCID: PMC11300708] [DOI: 10.3233/cbm-230360]
Abstract
Pulmonary nodules are ubiquitously found on computed tomography (CT) imaging either incidentally or via lung cancer screening and require careful diagnostic evaluation and management to both diagnose malignancy when present and avoid unnecessary biopsy of benign lesions. To engage in this complex decision-making, clinicians must first risk stratify pulmonary nodules to determine what the best course of action should be. Recent developments in imaging technology, computer processing power, and artificial intelligence algorithms have yielded radiomics-based computer-aided diagnosis tools that use CT imaging data including features invisible to the naked human eye to predict pulmonary nodule malignancy risk and are designed to be used as a supplement to routine clinical risk assessment. These tools vary widely in their algorithm construction, internal and external validation populations, intended-use populations, and commercial availability. While several clinical validation studies have been published, robust clinical utility and clinical effectiveness data are not yet currently available. However, there is reason for optimism as ongoing and future studies aim to target this knowledge gap, in the hopes of improving the diagnostic process for patients with pulmonary nodules.
28
Boverhof BJ, Redekop WK, Bos D, Starmans MPA, Birch J, Rockall A, Visser JJ. Radiology AI Deployment and Assessment Rubric (RADAR) to bring value-based AI into radiological practice. Insights Imaging 2024; 15:34. [PMID: 38315288] [PMCID: PMC10844175] [DOI: 10.1186/s13244-023-01599-z]
Abstract
OBJECTIVE To provide a comprehensive framework for value assessment of artificial intelligence (AI) in radiology. METHODS This paper presents the RADAR framework, which has been adapted from Fryback and Thornbury's imaging efficacy framework to facilitate the valuation of radiology AI from conception to local implementation. Local efficacy has been newly introduced to underscore the importance of appraising an AI technology within its local environment. Furthermore, the RADAR framework is illustrated through a myriad of study designs that help assess value. RESULTS RADAR presents a seven-level hierarchy, providing radiologists, researchers, and policymakers with a structured approach to the comprehensive assessment of value in radiology AI. RADAR is designed to be dynamic and meet the different valuation needs throughout the AI's lifecycle. Initial phases like technical and diagnostic efficacy (RADAR-1 and RADAR-2) are assessed pre-clinical deployment via in silico clinical trials and cross-sectional studies. Subsequent stages, spanning from diagnostic thinking to patient outcome efficacy (RADAR-3 to RADAR-5), require clinical integration and are explored via randomized controlled trials and cohort studies. Cost-effectiveness efficacy (RADAR-6) takes a societal perspective on financial feasibility, addressed via health-economic evaluations. The final level, RADAR-7, determines how prior valuations translate locally, evaluated through budget impact analysis, multi-criteria decision analyses, and prospective monitoring. CONCLUSION The RADAR framework offers a comprehensive framework for valuing radiology AI. Its layered, hierarchical structure, combined with a focus on local relevance, aligns RADAR seamlessly with the principles of value-based radiology. CRITICAL RELEVANCE STATEMENT The RADAR framework advances artificial intelligence in radiology by delineating a much-needed framework for comprehensive valuation. 
KEY POINTS • Radiology artificial intelligence lacks a comprehensive approach to value assessment. • The RADAR framework provides a dynamic, hierarchical method for thorough valuation of radiology AI. • RADAR advances clinical radiology by bridging the artificial intelligence implementation gap.
Affiliation(s)
- Bart-Jan Boverhof: Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- W Ken Redekop: Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Daniel Bos: Department of Epidemiology, Erasmus University Medical Centre, Rotterdam, The Netherlands; Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Martijn P A Starmans: Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Andrea Rockall: Department of Surgery & Cancer, Imperial College London, London, UK
- Jacob J Visser: Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
29
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Pinto Dos Santos D, Tang A, Wald C, Slavotinek J. Developing, purchasing, implementing and monitoring AI tools in radiology: Practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. J Med Imaging Radiat Oncol 2024; 68:7-26. [PMID: 38259140] [DOI: 10.1111/1754-9485.13612]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Affiliation(s)
- Bibb Allen: Department of Radiology, Grandview Medical Center, Birmingham, Alabama, USA; American College of Radiology Data Science Institute, Reston, Virginia, USA
- Jaron Chong: Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Elmar Kotter: Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler: Radiology Partners, El Segundo, California, USA; Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, California, USA
- John Mongan: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Lauren Oakden-Rayner: Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
- Daniel Pinto Dos Santos: Department of Radiology, University Hospital of Cologne, Cologne, Germany; Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang: Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montreal, Quebec, Canada
- Christoph Wald: Department of Radiology, Lahey Hospital & Medical Center, Burlington, Massachusetts, USA; Tufts University Medical School, Boston, Massachusetts, USA; Commission on Informatics and Member, Board of Chancellors, American College of Radiology, Reston, Virginia, USA
- John Slavotinek: South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, South Australia, Australia; College of Medicine and Public Health, Flinders University, Adelaide, South Australia, Australia
30
Allen MR, Webb S, Mandvi A, Frieden M, Tai-Seale M, Kallenberg G. Navigating the doctor-patient-AI relationship - a mixed-methods study of physician attitudes toward artificial intelligence in primary care. BMC Prim Care 2024; 25:42. [PMID: 38281026] [PMCID: PMC10821550] [DOI: 10.1186/s12875-024-02282-y]
Abstract
BACKGROUND Artificial intelligence (AI) is a rapidly advancing field that is beginning to enter the practice of medicine. Primary care is a cornerstone of medicine and deals with challenges such as physician shortage and burnout which impact patient care. AI and its application via digital health is increasingly presented as a possible solution. However, there is a scarcity of research focusing on primary care physician (PCP) attitudes toward AI. This study examines PCP views on AI in primary care. We explore its potential impact on topics pertinent to primary care such as the doctor-patient relationship and clinical workflow. By doing so, we aim to inform primary care stakeholders to encourage successful, equitable uptake of future AI tools. Our study is the first to our knowledge to explore PCP attitudes using specific primary care AI use cases rather than discussing AI in medicine in general terms. METHODS From June to August 2023, we conducted a survey among 47 primary care physicians affiliated with a large academic health system in Southern California. The survey quantified attitudes toward AI in general as well as concerning two specific AI use cases. Additionally, we conducted interviews with 15 survey respondents. RESULTS Our findings suggest that PCPs have largely positive views of AI. However, attitudes often hinged on the context of adoption. While some concerns reported by PCPs regarding AI in primary care focused on technology (accuracy, safety, bias), many focused on people-and-process factors (workflow, equity, reimbursement, doctor-patient relationship). CONCLUSION Our study offers nuanced insights into PCP attitudes towards AI in primary care and highlights the need for primary care stakeholder alignment on key issues raised by PCPs. AI initiatives that fail to address both the technological and people-and-process concerns raised by PCPs may struggle to make an impact.
Affiliation(s)
- Matthew R Allen: Department of Family Medicine, University of California San Diego, La Jolla, CA 92093, USA; Division of Biomedical Informatics, University of California San Diego, La Jolla, CA 92093, USA
- Sophie Webb: Department of Family Medicine, University of California San Diego, La Jolla, CA 92093, USA
- Ammar Mandvi: Department of Family Medicine, University of California San Diego, La Jolla, CA 92093, USA
- Marshall Frieden: Department of Family Medicine, University of California San Diego, La Jolla, CA 92093, USA
- Ming Tai-Seale: Department of Family Medicine, University of California San Diego, La Jolla, CA 92093, USA
- Gene Kallenberg: Department of Family Medicine, University of California San Diego, La Jolla, CA 92093, USA
31
Kim B, Romeijn S, van Buchem M, Mehrizi MHR, Grootjans W. A holistic approach to implementing artificial intelligence in radiology. Insights Imaging 2024; 15:22. [PMID: 38270790] [PMCID: PMC10811299] [DOI: 10.1186/s13244-023-01586-4]
Abstract
OBJECTIVE Despite the widespread recognition of the importance of artificial intelligence (AI) in healthcare, its implementation is often limited. This article aims to address this implementation gap by presenting insights from an in-depth case study of an organisation that approached AI implementation with a holistic approach. MATERIALS AND METHODS We conducted a longitudinal, qualitative case study of the implementation of AI in radiology at a large academic medical centre in the Netherlands over three years. Collected data consist of 43 days of work observations, 30 meeting observations, 18 interviews and 41 relevant documents. Abductive reasoning was used for systematic data analysis, which revealed three change initiative themes responding to specific AI implementation challenges. RESULTS This study identifies challenges of implementing AI in radiology at different levels and proposes a holistic approach to tackle those challenges. At the technology level, there is the issue of multiple narrow AI applications with no standard user interface; at the workflow level, AI results allow limited interaction with radiologists; at the people and organisational level, there are divergent expectations and limited experience with AI. The case of Southern illustrates that organisations can reap more benefits from AI implementation by investing in long-term initiatives that holistically align both social and technological aspects of clinical practice. CONCLUSION This study highlights the importance of a holistic approach to AI implementation that addresses challenges spanning the technology, workflow, and organisational levels. Aligning change initiatives across these levels has proven important for facilitating wide-scale implementation of AI in clinical practice. CRITICAL RELEVANCE STATEMENT Adoption of artificial intelligence is crucial for future-ready radiological care. This case study highlights the importance of a holistic approach that addresses technological, workflow, and organisational aspects, offering practical insights and solutions to facilitate successful AI adoption in clinical practice. KEY POINTS 1. Practical and actionable insights into successful AI implementation in radiology are lacking. 2. Aligning technology, workflow, and organisational aspects is crucial for a successful AI implementation. 3. A holistic approach aids organisations in creating sustainable value through AI implementation.
Affiliation(s)
- Bomi Kim: House of Innovation (Department of Entrepreneurship, Innovation and Technology), Stockholm School of Economics, Stockholm, Sweden
- Stephan Romeijn: Radiology, Leiden University Medical Center, Leiden, Netherlands
- Mark van Buchem: Radiology, Leiden University Medical Center, Leiden, Netherlands
- Willem Grootjans: Radiology, Leiden University Medical Center, Leiden, Netherlands
32
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, purchasing, implementing and monitoring AI tools in radiology: practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. Insights Imaging 2024; 15:16. [PMID: 38246898] [PMCID: PMC10800328] [DOI: 10.1186/s13244-023-01541-3]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. Key points: • The incorporation of artificial intelligence (AI) in radiological practice demands increased monitoring of its utility and safety. • Cooperation between developers, clinicians, and regulators will allow all involved to address ethical issues and monitor AI performance. • AI can fulfil its promise to advance patient well-being if all steps from development to integration in healthcare are rigorously evaluated.
Affiliation(s)
- Bibb Allen: Department of Radiology, Grandview Medical Center, Birmingham, AL, USA; American College of Radiology Data Science Institute, Reston, VA, USA
- Jaron Chong: Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter: Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler: Radiology Partners, El Segundo, CA, USA; Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
- John Mongan: Department of Radiology and Biomedical Imaging, University of California, San Francisco, USA
- Lauren Oakden-Rayner: Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto Dos Santos: Department of Radiology, University Hospital of Cologne, Cologne, Germany; Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang: Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald: Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA; Tufts University Medical School, Boston, MA, USA; Commission On Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia, USA
- John Slavotinek: South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia; College of Medicine and Public Health, Flinders University, Adelaide, Australia
33
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA. Radiol Artif Intell 2024; 6:e230513. [PMID: 38251899] [PMCID: PMC10831521] [DOI: 10.1148/ryai.230513]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. This article is simultaneously published in Insights into Imaging (DOI 10.1186/s13244-023-01541-3), Journal of Medical Imaging and Radiation Oncology (DOI 10.1111/1754-9485.13612), Canadian Association of Radiologists Journal (DOI 10.1177/08465371231222229), Journal of the American College of Radiology (DOI 10.1016/j.jacr.2023.12.005), and Radiology: Artificial Intelligence (DOI 10.1148/ryai.230513). Keywords: Artificial Intelligence, Radiology, Automation, Machine Learning. Published under a CC BY 4.0 license. ©The Author(s) 2024. Editor's Note: The RSNA Board of Directors has endorsed this article. It has not undergone review or editing by this journal.
Affiliation(s)
- Bibb Allen: Department of Radiology, Grandview Medical Center, Birmingham, AL, USA; American College of Radiology Data Science Institute, Reston, VA, USA
- Jaron Chong: Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter: Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler: Radiology Partners, El Segundo, CA, USA; Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
- John Mongan: Department of Radiology and Biomedical Imaging, University of California, San Francisco, USA
- Lauren Oakden-Rayner: Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto dos Santos: Department of Radiology, University Hospital of Cologne, Cologne, Germany; Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang: Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald: Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA; Tufts University Medical School, Boston, MA, USA; Commission On Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia, USA
- John Slavotinek: South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia; College of Medicine and Public Health, Flinders University, Adelaide, Australia
34
Hua D, Petrina N, Young N, Cho JG, Poon SK. Understanding the factors influencing acceptability of AI in medical imaging domains among healthcare professionals: A scoping review. Artif Intell Med 2024; 147:102698. [PMID: 38184343] [DOI: 10.1016/j.artmed.2023.102698]
Abstract
BACKGROUND Artificial intelligence (AI) technology has the potential to transform medical practice within the medical imaging industry and materially improve productivity and patient outcomes. However, low acceptability of AI as a digital healthcare intervention among medical professionals threatens to undermine user uptake levels, hinder meaningful and optimal value-added engagement, and ultimately prevent these promising benefits from being realised. Understanding the factors underpinning AI acceptability will be vital for medical institutions to pinpoint areas of deficiency and improvement within their AI implementation strategies. This scoping review aims to survey the literature to provide a comprehensive summary of the key factors influencing AI acceptability among healthcare professionals in medical imaging domains and the different approaches which have been taken to investigate them. METHODS A systematic literature search was performed across five academic databases including Medline, Cochrane Library, Web of Science, Compendex, and Scopus from January 2013 to September 2023. This was done in adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. Overall, 31 articles were deemed appropriate for inclusion in the scoping review. RESULTS The literature has converged towards three overarching categories of factors underpinning AI acceptability including: user factors involving trust, system understanding, AI literacy, and technology receptiveness; system usage factors entailing value proposition, self-efficacy, burden, and workflow integration; and socio-organisational-cultural factors encompassing social influence, organisational readiness, ethicality, and perceived threat to professional identity. Yet, numerous studies have overlooked a meaningful subset of these factors that are integral to the use of medical AI systems such as the impact on clinical workflow practices, trust based on perceived risk and safety, and compatibility with the norms of medical professions. This is attributable to reliance on theoretical frameworks or ad-hoc approaches which do not explicitly account for healthcare-specific factors, the novelties of AI as software as a medical device (SaMD), and the nuances of human-AI interaction from the perspective of medical professionals rather than lay consumer or business end users. CONCLUSION This is the first scoping review to survey the health informatics literature around the key factors influencing the acceptability of AI as a digital healthcare intervention in medical imaging contexts. The factors identified in this review suggest that existing theoretical frameworks used to study AI acceptability need to be modified to better capture the nuances of AI deployment in healthcare contexts where the user is a healthcare professional influenced by expert knowledge and disciplinary norms. Increasing AI acceptability among medical professionals will critically require designing human-centred AI systems which go beyond high algorithmic performance to consider accessibility to users with varying degrees of AI literacy, clinical workflow practices, the institutional and deployment context, and the cultural, ethical, and safety norms of healthcare professions. As investment into AI for healthcare increases, it would be valuable to conduct a systematic review and meta-analysis of the causal contribution of these factors to achieving high levels of AI acceptability among medical professionals.
Affiliation(s)
- David Hua: School of Computer Science, The University of Sydney, Australia; Sydney Law School, The University of Sydney, Australia
- Neysa Petrina: School of Computer Science, The University of Sydney, Australia
- Noel Young: Sydney Medical School, The University of Sydney, Australia; Lumus Imaging, Australia
- Jin-Gun Cho: Sydney Medical School, The University of Sydney, Australia; Western Sydney Local Health District, Australia; Lumus Imaging, Australia
- Simon K Poon: School of Computer Science, The University of Sydney, Australia; Western Sydney Local Health District, Australia
35
Bergquist M, Rolandsson B, Gryska E, Laesser M, Hoefling N, Heckemann R, Schneiderman JF, Björkman-Burtscher IM. Trust and stakeholder perspectives on the implementation of AI tools in clinical radiology. Eur Radiol 2024; 34:338-347. [PMID: 37505245] [PMCID: PMC10791850] [DOI: 10.1007/s00330-023-09967-5]
Abstract
OBJECTIVES To define requirements that condition trust in artificial intelligence (AI) as clinical decision support in radiology from the perspective of various stakeholders and to explore ways to fulfil these requirements. METHODS Semi-structured interviews were conducted with twenty-five respondents-nineteen directly involved in the development, implementation, or use of AI applications in radiology and six working with AI in other areas of healthcare. We designed the questions to explore three themes: development and use of AI, professional decision-making, and management and organizational procedures connected to AI. The transcribed interviews were analysed in an iterative coding process from open coding to theoretically informed thematic coding. RESULTS We identified four aspects of trust that relate to reliability, transparency, quality verification, and inter-organizational compatibility. These aspects fall under the categories of substantial and procedural requirements. CONCLUSIONS Development of appropriate levels of trust in AI in healthcare is complex and encompasses multiple dimensions of requirements. Various stakeholders will have to be involved in developing AI solutions for healthcare and radiology to fulfil these requirements. CLINICAL RELEVANCE STATEMENT For AI to achieve advances in radiology, it must be given the opportunity to support, rather than replace, human expertise. Support requires trust. Identification of aspects and conditions for trust allows developing AI implementation strategies that facilitate advancing the field. KEY POINTS • Dimensions of procedural and substantial demands that need to be fulfilled to foster appropriate levels of trust in AI in healthcare are conditioned on aspects related to reliability, transparency, quality verification, and inter-organizational compatibility. • Creating the conditions for trust to emerge requires the involvement of various stakeholders, who will have to compensate for the problem's inherent complexity by finding and promoting well-defined solutions.
Affiliation(s)
- Magnus Bergquist: School of Information Technology, Halmstad University, Halmstad, Sweden
- Bertil Rolandsson: Department of Sociology and Work Science, University of Gothenburg, Gothenburg, Sweden; Department of Sociology, Lund University, Lund, Sweden
- Emilia Gryska: Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Mats Laesser: Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
- Nickoleta Hoefling: Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
- Rolf Heckemann: Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Justin F Schneiderman: Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Isabella M Björkman-Burtscher: Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
36
Langius-Wiffen E, de Jong PA, Mohamed Hoesein FA, Dekker L, van den Hoven AF, Nijholt IM, Boomsma MF, Veldhuis WB. Added value of an artificial intelligence algorithm in reducing the number of missed incidental acute pulmonary embolism in routine portal venous phase chest CT. Eur Radiol 2024; 34:367-373. [PMID: 37532902] [DOI: 10.1007/s00330-023-10029-z]
Abstract
OBJECTIVES The purpose of this study was to evaluate the incremental value of artificial intelligence (AI) compared to the diagnostic accuracy of radiologists alone in detecting incidental acute pulmonary embolism (PE) on routine portal venous contrast-enhanced chest computed tomography (CT). METHODS CTs of 3089 consecutive patients referred to the radiology department for a routine contrast-enhanced chest CT between 27-5-2020 and 31-12-2020 were retrospectively analysed by a CE-certified and FDA-approved AI algorithm. The diagnostic performance of the AI was compared to the initial report. To determine the reference standard, discordant findings were independently evaluated by two readers. In case of disagreement, another experienced cardiothoracic radiologist with knowledge of the initial report and the AI output adjudicated. RESULTS The prevalence of acute incidental PE in the reference standard was 2.2% (67 of 3089 patients). In 25 cases, AI detected initially unreported PE, including three cases of central/lobar PE. Sensitivity of the AI algorithm was significantly higher than that of the initial report (95.5% vs. 62.7%, respectively; p < 0.001), whereas specificity was very high for both (99.6% vs. 99.9%, respectively; p = 0.012). The AI algorithm showed only slightly more false-positive findings (11 vs. 2), resulting in a significantly lower PPV (85.3% vs. 95.5%, p = 0.047). CONCLUSION The AI algorithm showed high diagnostic accuracy in diagnosing incidental PE, detecting an additional 25 cases of initially unreported PE, accounting for 37.3% of all positive cases. CLINICAL RELEVANCE STATEMENT Radiologist support from AI algorithms in daily practice can prevent missed incidental acute PE on routine chest CT, without a high burden of false-positive cases. KEY POINTS • Incidental pulmonary embolism is often missed by radiologists in non-diagnostic scans with suboptimal contrast opacification within the pulmonary trunk. • An artificial intelligence algorithm showed higher sensitivity in detecting incidental pulmonary embolism on routine portal venous chest CT compared to the initial report. • Implementation of artificial intelligence support in routine daily practice will reduce the number of missed incidental pulmonary embolisms.
Affiliation(s)
- Eline Langius-Wiffen: Department of Radiology, Isala Hospital, Dr. Van Heesweg 2, 8025 AB, Zwolle, The Netherlands
- Pim A de Jong: Department of Radiology, University Medical Centre Utrecht, Utrecht, The Netherlands
- Lisette Dekker: Department of Radiology, University Medical Centre Utrecht, Utrecht, The Netherlands
- Andor F van den Hoven: Department of Radiology, University Medical Centre Utrecht, Utrecht, The Netherlands; Department of Nuclear Medicine, St. Antonius Hospital, Nieuwegein, The Netherlands
- Ingrid M Nijholt: Department of Radiology, Isala Hospital, Dr. Van Heesweg 2, 8025 AB, Zwolle, The Netherlands
- Martijn F Boomsma: Department of Radiology, Isala Hospital, Dr. Van Heesweg 2, 8025 AB, Zwolle, The Netherlands; Division of Imaging and Oncology, University Medical Centre Utrecht, Utrecht, The Netherlands
- Wouter B Veldhuis: Department of Radiology, University Medical Centre Utrecht, Utrecht, The Netherlands
37
Zhong J, Xing Y, Lu J, Zhang G, Mao S, Chen H, Yin Q, Cen Q, Jiang R, Hu Y, Ding D, Ge X, Zhang H, Yao W. The endorsement of general and artificial intelligence reporting guidelines in radiological journals: a meta-research study. BMC Med Res Methodol 2023; 23:292. [PMID: 38093215] [PMCID: PMC10717715] [DOI: 10.1186/s12874-023-02117-x]
Abstract
BACKGROUND Complete reporting is essential for clinical research. However, the endorsement of reporting guidelines in radiological journals is still unclear. Further, as a field extensively utilizing artificial intelligence (AI), the adoption of both general and AI reporting guidelines would be necessary for enhancing quality and transparency of radiological research. This study aims to investigate the endorsement of general reporting guidelines and those for AI applications in medical imaging in radiological journals, and explore associated journal characteristic variables. METHODS This meta-research study screened journals from the Radiology, Nuclear Medicine & Medical Imaging category, Science Citation Index Expanded of the 2022 Journal Citation Reports, and excluded journals that did not publish original research, were not in English, or had no instructions for authors available. The endorsement of fifteen general reporting guidelines and ten AI reporting guidelines was rated using a five-level tool: "active strong", "active weak", "passive moderate", "passive weak", and "none". The association between endorsement and journal characteristic variables was evaluated by logistic regression analysis. RESULTS We included 117 journals. The top-five endorsed reporting guidelines were CONSORT (Consolidated Standards of Reporting Trials, 58.1%, 68/117), PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses, 54.7%, 64/117), STROBE (STrengthening the Reporting of Observational Studies in Epidemiology, 51.3%, 60/117), STARD (Standards for Reporting of Diagnostic Accuracy, 50.4%, 59/117), and ARRIVE (Animal Research Reporting of In Vivo Experiments, 35.9%, 42/117). The most implemented AI reporting guideline was CLAIM (Checklist for Artificial Intelligence in Medical Imaging, 1.7%, 2/117), while the other nine AI reporting guidelines were not mentioned. The Journal Impact Factor quartile and publisher were associated with endorsement of reporting guidelines in radiological journals. CONCLUSIONS The general reporting guideline endorsement was suboptimal in radiological journals. The implementation of reporting guidelines for AI applications in medical imaging was extremely low. Their adoption should be strengthened to facilitate quality and transparency of radiological study reporting.
Affiliation(s)
- Jingyu Zhong: Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China
- Yue Xing: Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China
- Junjie Lu: Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Guangcheng Zhang: Department of Orthopedics, Shanghai Sixth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200233, China
- Shiqi Mao: Department of Medical Oncology, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, 200433, China
- Haoda Chen: Department of General Surgery, Pancreatic Disease Center, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Qian Yin: Department of Pathology, Shanghai Sixth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200233, China
- Qingqing Cen: Department of Dermatology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- Run Jiang: Department of Pharmacovigilance, Shanghai Hansoh BioMedical Co., Ltd., Shanghai, 201203, China
- Yangfan Hu: Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China
- Defang Ding: Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China
- Xiang Ge: Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China
- Huan Zhang: Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Weiwu Yao: Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China
38
López-Úbeda P, Martín-Noguerol T, Luna A. Radiology, explicability and AI: closing the gap. Eur Radiol 2023; 33:9466-9468. [PMID: 37410108] [DOI: 10.1007/s00330-023-09902-8]
Affiliation(s)
- Antonio Luna: Radiology Department, MRI Unit, HT Medica, Jaén, Spain
39
Bajaj S, Khunte M, Moily NS, Payabvash S, Wintermark M, Gandhi D, Malhotra A. Value Proposition of FDA-Approved Artificial Intelligence Algorithms for Neuroimaging. J Am Coll Radiol 2023; 20:1241-1249. [PMID: 37574094] [DOI: 10.1016/j.jacr.2023.06.034]
Abstract
PURPOSE The number of FDA-cleared artificial intelligence (AI) algorithms for neuroimaging has grown in the past decade. The adoption of these algorithms into clinical practice depends largely on whether this technology provides value in the clinical setting. The objective of this study was to analyze trends in FDA-cleared AI algorithms for neuroimaging and understand their value proposition as advertised by the AI developers and vendors. METHODS A list of AI algorithms cleared by the FDA for neuroimaging between May 2008 and August 2022 was extracted from the ACR Data Science Institute AI Central database. Product information for each device was collected from the database. For each device, information on the advertised value as presented on the developer's website was collected. RESULTS A total of 59 AI neuroimaging algorithms were cleared by the FDA between May 2008 and August 2022. Most of these algorithms (24 of 59) were compatible with noncontrast CT, 21 with MRI, 9 with CT perfusion, 8 with CT angiography, 3 with MR perfusion, and 2 with PET. Six algorithms were compatible with multiple imaging techniques. Of the 59 algorithms, websites were located that discussed the product value for 55 algorithms. The most widely advertised value proposition was improved quality of care (38 of 55 [69.1%]). A total of 24 algorithms (43.6%) proposed saving user time, 9 (15.7%) advertised decreased costs, and 6 (10.9%) described increased revenue. Product websites for 26 algorithms (43.6%) showed user testimonials advertising the value of the technology. CONCLUSIONS The results of this study indicate a wide range of value propositions advertised by developers and vendors of AI algorithms for neuroimaging. Most vendors advertised that their products would improve patient care. Further research is necessary to determine whether the value claimed by developers is actually demonstrated in clinical practice.
Affiliation(s)
- Suryansh Bajaj: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut
- Mihir Khunte: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut
- Seyedmehdi Payabvash: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut
- Max Wintermark: Chair, Department of Neuroradiology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Dheeraj Gandhi: Director, Interventional Neuroradiology, University of Maryland School of Medicine, Baltimore, Maryland
- Ajay Malhotra: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut

40
Vo V, Chen G, Aquino YSJ, Carter SM, Do QN, Woode ME. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc Sci Med 2023; 338:116357. [PMID: 37949020 DOI: 10.1016/j.socscimed.2023.116357]
Abstract
INTRODUCTION Despite the proliferation of Artificial Intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically review the literature on attitudes towards the use of AI in healthcare from the perspectives of patients, the general public and health professionals to understand these issues from multiple viewpoints. METHODOLOGY A search for original research articles using qualitative, quantitative, and mixed methods published between 1 January 2001 and 24 August 2021 was conducted in six bibliographic databases. Data were extracted and classified into themes representing views on: (i) knowledge and familiarity of AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) the human-AI relationship. RESULTS The final search identified 7,490 different records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public and health professionals generally had a positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and greater burnout from acquiring AI-related skills. While patients had mixed opinions on whether healthcare workers would suffer job loss due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinical involvement in the development of AI was emphasised.
To help successfully implement AI in health care, most participants envisioned that an investment in training and education campaigns was necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients exhibited a willingness to share anonymised data for AI development, there remained concerns about sharing data with insurance or technology companies. One key domain under this theme was the question of who should be held accountable in the case of adverse events arising from using AI. CONCLUSIONS While overall positivity persists in attitudes and preferences toward AI use in healthcare, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues to look at the translation of legislation and guidelines into practice to ensure fairness, accountability, transparency, and ethics in AI.
Affiliation(s)
- Vinh Vo: Centre for Health Economics, Monash University, Australia
- Gang Chen: Centre for Health Economics, Monash University, Australia
- Yves Saint James Aquino: Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Stacy M Carter: Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Quynh Nga Do: Department of Economics, Monash University, Australia
- Maame Esi Woode: Centre for Health Economics, Monash University, Australia; Monash Data Futures Research Institute, Australia

41
Li LT, Haley LC, Boyd AK, Bernstam EV. Technical/Algorithm, Stakeholder, and Society (TASS) barriers to the application of artificial intelligence in medicine: A systematic review. J Biomed Inform 2023; 147:104531. [PMID: 37884177 DOI: 10.1016/j.jbi.2023.104531]
Abstract
INTRODUCTION The use of artificial intelligence (AI), particularly machine learning and predictive analytics, has shown great promise in health care. Despite its strong potential, there has been limited use in health care settings. In this systematic review, we aim to determine the main barriers to successful implementation of AI in healthcare and discuss potential ways to overcome these challenges. METHODS We conducted a literature search in PubMed (1/1/2001-1/1/2023). The search was restricted to publications in the English language, and human study subjects. We excluded articles that did not discuss AI, machine learning, predictive analytics, and barriers to the use of these techniques in health care. Using grounded theory methodology, we abstracted concepts to identify major barriers to AI use in medicine. RESULTS We identified a total of 2,382 articles. After reviewing the 306 included papers, we developed 19 major themes, which we categorized into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). These themes included: Lack of Explainability, Need for Validation Protocols, Need for Standards for Interoperability, Need for Reporting Guidelines, Need for Standardization of Performance Metrics, Lack of Plan for Updating Algorithm, Job Loss, Skills Loss, Workflow Challenges, Loss of Patient Autonomy and Consent, Disturbing the Patient-Clinician Relationship, Lack of Trust in AI, Logistical Challenges, Lack of strategic plan, Lack of Cost-effectiveness Analysis and Proof of Efficacy, Privacy, Liability, Bias and Social Justice, and Education. CONCLUSION We identified 19 major barriers to the use of AI in healthcare and categorized them into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). Future studies should expand on barriers in pediatric care and focus on developing clearly defined protocols to overcome these barriers.
Affiliation(s)
- Linda T Li: Department of Surgery, Division of Pediatric Surgery, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY 10029, United States; McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States
- Lauren C Haley: McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
- Alexandra K Boyd: McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
- Elmer V Bernstam: McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States; McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States

42
Bernstein MH, Atalay MK, Dibble EH, Maxwell AWP, Karam AR, Agarwal S, Ward RC, Healey TT, Baird GL. Can incorrect artificial intelligence (AI) results impact radiologists, and if so, what can we do about it? A multi-reader pilot study of lung cancer detection with chest radiography. Eur Radiol 2023; 33:8263-8269. [PMID: 37266657 PMCID: PMC10235827 DOI: 10.1007/s00330-023-09747-1]
Abstract
OBJECTIVE To examine whether incorrect AI results impact radiologist performance, and if so, whether human factors can be optimized to reduce error. METHODS In a multi-reader design, 6 radiologists interpreted the same 90 chest radiographs (follow-up CT needed: yes/no) on four occasions (09/20-01/22). No AI result was provided in session 1. Sham AI results were provided in sessions 2-4, and AI results for 12 cases were manipulated to be incorrect (8 false positives (FP), 4 false negatives (FN)) (0.87 ROC-AUC). In the Delete AI (No Box) condition, radiologists were told AI results would not be saved for the evaluation. In Keep AI (No Box) and Keep AI (Box), radiologists were told results would be saved. In Keep AI (Box), the ostensible AI program visually outlined the region of suspicion. AI results were constant across conditions. RESULTS Relative to the No AI condition (FN = 2.7%, FP = 51.4%), FN and FP rates were higher in the Keep AI (No Box) (FN = 33.0%, FP = 86.0%), Delete AI (No Box) (FN = 26.7%, FP = 80.5%), and Keep AI (Box) (FN = 20.7%, FP = 80.5%) conditions (all ps < 0.05). FNs were higher in the Keep AI (No Box) condition (33.0%) than in the Keep AI (Box) condition (20.7%) (p = 0.04). FPs were higher in the Keep AI (No Box) condition (86.0%) than in the Delete AI (No Box) condition (80.5%) (p = 0.03). CONCLUSION Incorrect AI results cause radiologists to make incorrect follow-up decisions on cases they would have judged correctly without AI. This effect is mitigated when radiologists believe AI results will be deleted from the patient's file or when a box is provided around the region of interest. CLINICAL RELEVANCE STATEMENT When AI is wrong, radiologists make more errors than they would have without AI. Based on human factors psychology, our manuscript provides evidence for two AI implementation strategies that reduce the deleterious effects of incorrect AI. KEY POINTS • When AI provided incorrect results, false negative and false positive rates among the radiologists increased.
• False positives decreased when AI results were deleted, versus kept, in the patient's record. • False negatives and false positives decreased when AI visually outlined the region of suspicion.
Affiliation(s)
- Michael H Bernstein: Department of Diagnostic Imaging, Warren Alpert Medical School of Brown University, Providence, RI, USA; Rhode Island Hospital, Providence, RI, USA; Brown Radiology Human Factors Laboratory, Providence, RI, USA
- Michael K Atalay: Department of Diagnostic Imaging, Warren Alpert Medical School of Brown University, Providence, RI, USA; Brown Radiology Human Factors Laboratory, Providence, RI, USA
- Elizabeth H Dibble: Department of Diagnostic Imaging, Warren Alpert Medical School of Brown University, Providence, RI, USA
- Aaron W P Maxwell: Department of Diagnostic Imaging, Warren Alpert Medical School of Brown University, Providence, RI, USA; Brown Radiology Human Factors Laboratory, Providence, RI, USA
- Adib R Karam: Department of Diagnostic Imaging, Warren Alpert Medical School of Brown University, Providence, RI, USA
- Saurabh Agarwal: Department of Diagnostic Imaging, Warren Alpert Medical School of Brown University, Providence, RI, USA
- Robert C Ward: Department of Diagnostic Imaging, Warren Alpert Medical School of Brown University, Providence, RI, USA
- Terrance T Healey: Department of Diagnostic Imaging, Warren Alpert Medical School of Brown University, Providence, RI, USA
- Grayson L Baird: Department of Diagnostic Imaging, Warren Alpert Medical School of Brown University, Providence, RI, USA; Rhode Island Hospital, Providence, RI, USA; Brown Radiology Human Factors Laboratory, Providence, RI, USA

43
Chen Y, Wu Z, Wang P, Xie L, Yan M, Jiang M, Yang Z, Zheng J, Zhang J, Zhu J. Radiology Residents' Perceptions of Artificial Intelligence: Nationwide Cross-Sectional Survey Study. J Med Internet Res 2023; 25:e48249. [PMID: 37856181 PMCID: PMC10623237 DOI: 10.2196/48249]
Abstract
BACKGROUND Artificial intelligence (AI) is transforming various fields, with health care, especially diagnostic specialties such as radiology, being a key but controversial battleground. However, there is limited research systematically examining the response of "human intelligence" to AI. OBJECTIVE This study aims to comprehend radiologists' perceptions regarding AI, including their views on its potential to replace them, its usefulness, and their willingness to accept it. We examine the influence of various factors, encompassing demographic characteristics, working status, psychosocial aspects, personal experience, and contextual factors. METHODS Between December 1, 2020, and April 30, 2021, a cross-sectional survey was completed by 3666 radiology residents in China. We used multivariable logistic regression models to examine factors and associations, reporting odds ratios (ORs) and 95% CIs. RESULTS In summary, radiology residents generally hold a positive attitude toward AI, with 29.90% (1096/3666) agreeing that AI may reduce the demand for radiologists, 72.80% (2669/3666) believing AI improves disease diagnosis, and 78.18% (2866/3666) feeling that radiologists should embrace AI. Several associated factors, including age, gender, education, region, eye strain, working hours, time spent on medical images, resilience, burnout, AI experience, and perceptions of residency support and stress, significantly influence AI attitudes. For instance, burnout symptoms were associated with greater concerns about AI replacement (OR 1.89; P<.001), less favorable views on AI usefulness (OR 0.77; P=.005), and reduced willingness to use AI (OR 0.71; P<.001). Moreover, after adjusting for all other factors, perceived AI replacement (OR 0.81; P<.001) and AI usefulness (OR 5.97; P<.001) were shown to significantly impact the intention to use AI. CONCLUSIONS This study profiles radiology residents who are accepting of AI. 
Our comprehensive findings provide insights for a multidimensional approach to help physicians adapt to AI. Targeted policies, such as digital health care initiatives and medical education, can be developed accordingly.
Affiliation(s)
- Yanhua Chen: Vanke School of Public Health, Tsinghua University, Beijing, China; School of Medicine, Tsinghua University, Beijing, China
- Ziye Wu: Vanke School of Public Health, Tsinghua University, Beijing, China
- Peicheng Wang: Vanke School of Public Health, Tsinghua University, Beijing, China; School of Medicine, Tsinghua University, Beijing, China
- Linbo Xie: Vanke School of Public Health, Tsinghua University, Beijing, China; School of Medicine, Tsinghua University, Beijing, China
- Mengsha Yan: Vanke School of Public Health, Tsinghua University, Beijing, China
- Maoqing Jiang: Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Zhenghan Yang: Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Jianjun Zheng: Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jingfeng Zhang: Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jiming Zhu: Vanke School of Public Health, Tsinghua University, Beijing, China; Institute for Healthy China, Tsinghua University, Beijing, China

44
Ahmed MI, Spooner B, Isherwood J, Lane M, Orrock E, Dennison A. A Systematic Review of the Barriers to the Implementation of Artificial Intelligence in Healthcare. Cureus 2023; 15:e46454. [PMID: 37927664 PMCID: PMC10623210 DOI: 10.7759/cureus.46454]
Abstract
Artificial intelligence (AI) is expected to improve healthcare outcomes by facilitating early diagnosis, reducing the medical administrative burden, aiding drug development, personalising medical and oncological management, monitoring healthcare parameters on an individual basis, and allowing clinicians to spend more time with their patients. In the post-pandemic world, where there is a drive to deliver healthcare efficiently and to manage the long waiting times patients face in accessing care, AI has an important role in supporting clinicians and healthcare systems to streamline care pathways and provide timely, high-quality care for patients. Although AI technologies have been used in healthcare for some decades, and despite AI's theoretical potential, uptake has been uneven and slower than anticipated, and a number of barriers, both overt and covert, have limited its incorporation. This literature review highlighted barriers in six key areas: ethical, technological, liability and regulatory, workforce, social, and patient safety barriers. Defining and understanding the barriers preventing the acceptance and implementation of AI in healthcare will enable clinical staff and healthcare leaders to overcome the identified hurdles and incorporate AI technologies for the benefit of patients and clinical staff.
Affiliation(s)
- Molla Imaduddin Ahmed: Paediatric Respiratory Medicine, University Hospitals of Leicester NHS Trust, Leicester, GBR
- Brendan Spooner: Intensive Care and Anaesthesia, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, GBR
- John Isherwood: Hepatobiliary and Pancreatic Surgery, University Hospitals of Leicester NHS Trust, Leicester, GBR
- Mark Lane: Ophthalmology, Birmingham and Midland Eye Centre, Birmingham, GBR
- Emma Orrock: Head of Clinical Senates, East and West Midlands Clinical Senate, Leicester, GBR
- Ashley Dennison: Hepatobiliary and Pancreatic Surgery, University Hospitals of Leicester NHS Trust, Leicester, GBR

45
Jetha A, Bakhtari H, Rosella LC, Gignac MAM, Biswas A, Shahidi FV, Smith BT, Smith MJ, Mustard C, Khan N, Arrandale VH, Loewen PJ, Zuberi D, Dennerlein JT, Bonaccio S, Wu N, Irvin E, Smith PM. Artificial intelligence and the work-health interface: A research agenda for a technologically transforming world of work. Am J Ind Med 2023; 66:815-830. [PMID: 37525007 DOI: 10.1002/ajim.23517]
Abstract
The labor market is undergoing a rapid artificial intelligence (AI) revolution. There is currently limited empirical scholarship that focuses on how AI adoption affects employment opportunities and work environments in ways that shape worker health, safety, well-being and equity. In this article, we present an agenda to guide research examining the implications of AI on the intersection between work and health. To build the agenda, a full day meeting was organized and attended by 50 participants including researchers from diverse disciplines and applied stakeholders. Facilitated meeting discussions aimed to set research priorities related to workplace AI applications and its impact on the health of workers, including critical research questions, methodological approaches, data needs, and resource requirements. Discussions also aimed to identify groups of workers and working contexts that may benefit from AI adoption as well as those that may be disadvantaged by AI. Discussions were synthesized into four research agenda areas: (1) examining the impact of stronger AI on human workers; (2) advancing responsible and healthy AI; (3) informing AI policy for worker health, safety, well-being, and equitable employment; and (4) understanding and addressing worker and employer knowledge needs regarding AI applications. The agenda provides a roadmap for researchers to build a critical evidence base on the impact of AI on workers and workplaces, and will ensure that worker health, safety, well-being, and equity are at the forefront of workplace AI system design and adoption.
Affiliation(s)
- Arif Jetha: Institute for Work & Health, Toronto, Ontario, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Hela Bakhtari: Institute for Work & Health, Toronto, Ontario, Canada
- Laura C Rosella: Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada; Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada; Temerty Centre for Artificial Intelligence Research and Education in Medicine, University of Toronto, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada; Institute for Clinical Evaluative Sciences, Toronto, Ontario, Canada; Institute for Better Health, Trillium Health Partners, Mississauga, Ontario, Canada
- Monique A M Gignac: Institute for Work & Health, Toronto, Ontario, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Aviroop Biswas: Institute for Work & Health, Toronto, Ontario, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Faraz V Shahidi: Institute for Work & Health, Toronto, Ontario, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Brendan T Smith: Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada; Health Promotion, Chronic Disease, and Injury Prevention, Public Health Ontario, Toronto, Ontario, Canada
- Maxwell J Smith: School of Health Studies, Faculty of Health Sciences, Western University, London, Ontario, Canada
- Cameron Mustard: Institute for Work & Health, Toronto, Ontario, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Naimul Khan: Department of Electrical, Computer, and Biomedical Engineering, Toronto Metropolitan University, Toronto, Ontario, Canada
- Victoria H Arrandale: Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada; Occupational Cancer Research Centre, Toronto, Ontario, Canada
- Peter J Loewen: Munk School of Global Affairs and Public Policy, University of Toronto, Ontario, Canada; Schwartz Reisman Institute for Technology and Society, University of Toronto, Ontario, Canada
- Daniyal Zuberi: Factor-Inwentash Faculty of Social Work, University of Toronto, Ontario, Canada
- Jack T Dennerlein: Department of Physical Therapy, Movement, and Rehabilitation Sciences, Bouve College of Health Sciences, Northeastern University, Boston, Massachusetts, USA; Center for Work, Health, and Wellbeing, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, USA
- Silvia Bonaccio: Institute for Work & Health, Toronto, Ontario, Canada; Telfer School of Management, University of Ottawa, Ottawa, Ontario, Canada
- Nicole Wu: Department of Political Science, University of Toronto, Toronto, Ontario, Canada
- Emma Irvin: Institute for Work & Health, Toronto, Ontario, Canada
- Peter M Smith: Institute for Work & Health, Toronto, Ontario, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada

46
Rainey C, Villikudathil AT, McConnell J, Hughes C, Bond R, McFadden S. An experimental machine learning study investigating the decision-making process of students and qualified radiographers when interpreting radiographic images. PLOS Digit Health 2023; 2:e0000229. [PMID: 37878569 PMCID: PMC10599497 DOI: 10.1371/journal.pdig.0000229]
Abstract
AI is becoming more prevalent in healthcare and is predicted to be further integrated into workflows to ease the pressure on an already stretched service. The National Health Service in the UK has prioritised AI and digital health as part of its Long Term Plan. Few studies have examined human interaction with such systems in healthcare, despite reports of biases accompanying the use of AI in other technologically advanced fields, such as finance and aviation. Understanding is needed of how certain user characteristics may affect how radiographers engage with AI systems in the clinical setting, so that problems can be mitigated before they arise. The aim of this study is to determine correlations between skills, confidence in AI, and perceived knowledge among student and qualified radiographers in the UK healthcare system. A machine learning based AI model was built to predict whether the interpreter was a student (n = 67) or a qualified radiographer (n = 39), using important variables identified by the Boruta feature selection technique. A survey, which required participants to interpret a series of plain radiographic examinations with and without AI assistance, was created on the Qualtrics survey platform and promoted via social media (Twitter/LinkedIn), thereby adopting convenience and snowball sampling. The survey was open to all UK radiographers, including students and retired radiographers. Pearson's correlation analysis revealed that males who were proficient in their profession were more likely than females to trust AI. Trust in AI was negatively correlated with age and with level of experience. The best machine learning model predicted whether the interpreter was a qualified radiographer with an area under the curve of 0.93 and a prediction accuracy of 93%. Further testing in prospective validation cohorts using a larger sample size is required to determine the clinical utility of the proposed machine learning model.
Affiliation(s)
- Clare Rainey: Faculty of Life and Health Sciences, School of Health Sciences, Ulster University, York Street, Belfast, Northern Ireland, United Kingdom
- Angelina T. Villikudathil: Faculty of Life and Health Sciences, School of Health Sciences, Ulster University, York Street, Belfast, Northern Ireland, United Kingdom
- Ciara Hughes: Faculty of Life and Health Sciences, School of Health Sciences, Ulster University, York Street, Belfast, Northern Ireland, United Kingdom
- Raymond Bond: Faculty of Computing, School of Computing, Engineering and the Built Environment, Ulster University, York Street, Belfast, Northern Ireland, United Kingdom
- Sonyia McFadden: Faculty of Life and Health Sciences, School of Health Sciences, Ulster University, York Street, Belfast, Northern Ireland, United Kingdom

47
Lenharo M. An AI revolution is brewing in medicine. What will it look like? Nature 2023; 622:686-688. [PMID: 37875622 DOI: 10.1038/d41586-023-03302-0]
48
Hamedani Z, Moradi M, Kalroozi F, Manafi Anari A, Jalalifar E, Ansari A, Aski BH, Nezamzadeh M, Karim B. Evaluation of acceptance, attitude, and knowledge towards artificial intelligence and its application from the point of view of physicians and nurses: A provincial survey study in Iran: A cross-sectional descriptive-analytical study. Health Sci Rep 2023; 6:e1543. [PMID: 37674620 PMCID: PMC10477406 DOI: 10.1002/hsr2.1543]
Abstract
Background and Aims The prospect of using artificial intelligence (AI) in healthcare is bright and promising, and its use can significantly reduce costs and decrease the possibility of error and negligence among healthcare workers. This study aims to investigate the level of knowledge, attitude, and acceptance of AI among Iranian physicians and nurses. Methods This cross-sectional descriptive-analytical study was conducted on 400 physicians and nurses in eight public university hospitals located in Tehran. Convenience sampling was used, and data were collected with researcher-made questionnaires. Statistical analysis was performed in SPSS 21; means and standard deviations were calculated, and chi-square and Fisher's exact tests were used. Results In this study, participants' level of knowledge was average (14.66 ± 4.53), their attitude toward AI was relatively favorable (47.81 ± 6.74), and their acceptance of AI was average (103.19 ± 13.70). Moreover, from the participants' perspective, AI in medicine is most widely used in increasing the accuracy of diagnostic tests (86.5%), identifying drug interactions (82.75%), and helping to analyze medical tests and imaging (80%). There was a statistically significant relationship between acceptance of AI and participants' level of education (p = 0.028), participation in an AI training course (p = 0.022), and the hospital department where they worked (p < 0.001). Conclusion In this study, both participants' knowledge of AI and their acceptance of it were at an average level, and their attitude toward AI was relatively favorable, which contrasts with the very rapid and inevitable expansion of AI. Although our participants were aware of the growing use of AI in medicine, they maintained a cautious attitude toward it.
Collapse
Affiliation(s)
- Zeinab Hamedani, Department of Midwifery, College of Nursing and Midwifery, Karaj Islamic Azad University, Karaj, Iran
- Mohsen Moradi, Department of Psychiatric Nursing, School of Nursing & Midwifery, Shahrekord University of Medical Sciences, Shahrekord, Iran
- Fatemeh Kalroozi, Department of Pediatric Nursing, College of Nursing, Aja University of Medical Sciences, Tehran, Iran
- Ali Manafi Anari, Department of Pediatrics, School of Medicine, Ali Asghar Children's Hospital, Iran University of Medical Science, Tehran, Iran
- Erfan Jalalifar, Student Research Committee, Tabriz University of Medical Sciences, Tabriz, Iran
- Arina Ansari, Student Research Committee, North Khorasan University of Medical Sciences, Bojnurd, Iran
- Behzad H. Aski, Department of Pediatrics, School of Medicine, Ali Asghar Children's Hospital, Iran University of Medical Science, Tehran, Iran
- Maryam Nezamzadeh, Department of Critical Care Nursing, Faculty of Nursing, Aja University of Medical Sciences, Tehran, Iran
- Bardia Karim, Student Research Committee, Babol University of Medical Sciences, Babol, Mazandaran, Iran
49
Eltawil FA, Atalla M, Boulos E, Amirabadi A, Tyrrell PN. Analyzing Barriers and Enablers for the Acceptance of Artificial Intelligence Innovations into Radiology Practice: A Scoping Review. Tomography 2023; 9:1443-1455. [PMID: 37624108] [PMCID: PMC10459931] [DOI: 10.3390/tomography9040115]
Abstract
OBJECTIVES This scoping review was conducted to determine the barriers and enablers associated with the acceptance of artificial intelligence/machine learning (AI/ML)-enabled innovations into radiology practice from a physician's perspective. METHODS A systematic search was performed using Ovid Medline and Embase. Refined queries were generated from keywords including computer-aided diagnosis, artificial intelligence, and barriers and enablers. Three reviewers assessed the articles, with a fourth reviewer resolving disagreements. The risk of bias was mitigated by including both quantitative and qualitative studies. RESULTS An electronic search from January 2000 to 2023 identified 513 studies. Twelve articles fulfilled the inclusion criteria: qualitative studies (n = 4), survey studies (n = 7), and randomized controlled trials (n = 1). Among the most common barriers to AI implementation into radiology practice were radiologists' lack of acceptance and trust in AI innovations; a lack of awareness, knowledge, and familiarity with the technology; and a perceived threat to the professional autonomy of radiologists. The most important identified enablers of AI implementation were high expectations of AI's potential added value; the potential to decrease diagnostic errors; the potential to increase efficiency in reaching a diagnosis; and the potential to improve the quality of patient care. CONCLUSIONS This scoping review found that few studies have been designed specifically to identify barriers and enablers to the acceptance of AI in radiology practice. The majority of studies have assessed the perception of AI replacing radiologists, rather than other barriers or enablers in the adoption of AI. To comprehensively evaluate the potential advantages and disadvantages of integrating AI innovations into radiology practice, gathering more robust research evidence on stakeholder perspectives and attitudes is essential.
Affiliation(s)
- Fatma A. Eltawil, Department of Medical Imaging, University of Toronto, Toronto, ON M5S 1A1, Canada
- Michael Atalla, Department of Medical Imaging, University of Toronto, Toronto, ON M5S 1A1, Canada
- Emily Boulos, Department of Medical Imaging, University of Toronto, Toronto, ON M5S 1A1, Canada
- Afsaneh Amirabadi, Diagnostic Imaging Department, The Hospital for Sick Children, Toronto, ON M5G 1E8, Canada
- Pascal N. Tyrrell, Department of Medical Imaging, University of Toronto, Toronto, ON M5S 1A1, Canada; Department of Statistical Sciences, University of Toronto, Toronto, ON M5G 1Z5, Canada; Institute of Medical Science, University of Toronto, Toronto, ON M5S 1A8, Canada
50
Abell B, Naicker S, Rodwell D, Donovan T, Tariq A, Baysari M, Blythe R, Parsons R, McPhail SM. Identifying barriers and facilitators to successful implementation of computerized clinical decision support systems in hospitals: a NASSS framework-informed scoping review. Implement Sci 2023; 18:32. [PMID: 37495997] [PMCID: PMC10373265] [DOI: 10.1186/s13012-023-01287-y]
Abstract
BACKGROUND Successful implementation and utilization of Computerized Clinical Decision Support Systems (CDSS) in hospitals is complex and challenging. Implementation science, and in particular the Nonadoption, Abandonment, Scale-up, Spread and Sustainability (NASSS) framework, may offer a systematic approach for identifying and addressing these challenges. This review aimed to identify, categorize, and describe barriers and facilitators to CDSS implementation in hospital settings and map them to the NASSS framework. Exploring the applicability of the NASSS framework to CDSS implementation was a secondary aim. METHODS Electronic database searches were conducted (21 July 2020; updated 5 April 2022) in Ovid MEDLINE, Embase, Scopus, PsycInfo, and CINAHL. Original research studies reporting on measured or perceived barriers and/or facilitators to implementation and adoption of CDSS in hospital settings, or attitudes of healthcare professionals towards CDSS, were included. Articles with a primary focus on CDSS development were excluded. No language or date restrictions were applied. We used qualitative content analysis to identify determinants and organize them into higher-order themes, which were then reflexively mapped to the NASSS framework. RESULTS Forty-four publications were included. These comprised a range of study designs, geographic locations, participants, technology types, CDSS functions, and clinical contexts of implementation. A total of 227 individual barriers and 130 individual facilitators were identified across the included studies. The most commonly reported influences on implementation were fit of CDSS with workflows (19 studies), the usefulness of the CDSS output in practice (17 studies), CDSS technical dependencies and design (16 studies), trust of users in the CDSS input data and evidence base (15 studies), and the contextual fit of the CDSS with the user's role or clinical setting (14 studies). Most determinants could be appropriately categorized into domains of the NASSS framework, with barriers and facilitators in the "Technology," "Organization," and "Adopters" domains most frequently reported. No determinants were assigned to the "Embedding and Adaptation Over Time" domain. CONCLUSIONS This review identified the most common determinants that could be targeted for modification to either remove barriers or facilitate the adoption and use of CDSS within hospitals. Greater adoption of implementation theory should be encouraged to support CDSS implementation.
Affiliation(s)
- Bridget Abell, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Sundresan Naicker, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- David Rodwell, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Thomasina Donovan, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Amina Tariq, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Melissa Baysari, Biomedical Informatics and Digital Health, School of Medical Sciences, Faculty of Medicine and Health, The University of Sydney, Camperdown, Australia
- Robin Blythe, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Rex Parsons, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Steven M McPhail, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia