1. Kenny R, Fischhoff B, Davis A, Canfield C. Improving Social Bot Detection Through Aid and Training. Hum Factors 2024; 66:2323-2344. PMID: 37963198; PMCID: PMC11382440; DOI: 10.1177/00187208231210145.
Abstract
OBJECTIVE We test the effects of three aids on individuals' ability to detect social bots among Twitter personas: a bot indicator score, a training video, and a warning. BACKGROUND Detecting social bots can prevent online deception. We use a simulated social media task to evaluate three aids. METHOD Lay participants judged whether each of 60 Twitter personas was a human or social bot in a simulated online environment; agreement among three machine learning algorithms was used to estimate the probability of each persona being a bot. Experiment 1 compared a control group and two intervention groups, one provided a bot indicator score for each tweet, the other a warning about social bots. Experiment 2 compared a control group and two intervention groups, one receiving the bot indicator scores and the other a training video focused on heuristics for identifying social bots. RESULTS The bot indicator score intervention improved predictive performance and reduced overconfidence in both experiments. The training video was also effective, although somewhat less so. The warning had no effect. Participants rarely reported willingness to share content for a persona that they labeled as a bot, even when they agreed with it. CONCLUSIONS Informative interventions improved social bot detection; warning alone did not. APPLICATION We offer an experimental testbed and methodology that can be used to evaluate and refine interventions designed to reduce vulnerability to social bots. We show the value of two interventions that could be applied in many settings.
Affiliation(s)
- Ryan Kenny, United States Army, Fayetteville, NC, USA
- Alex Davis, Carnegie Mellon University, Pittsburgh, PA, USA
- Casey Canfield, Missouri University of Science and Technology, Rolla, MO, USA
2. Shawli L, Alsobhi M, Faisal Chevidikunnan M, Rosewilliam S, Basuodan R, Khan F. Physical therapists' perceptions and attitudes towards artificial intelligence in healthcare and rehabilitation: A qualitative study. Musculoskelet Sci Pract 2024; 73:103152. PMID: 39067366; DOI: 10.1016/j.msksp.2024.103152.
Abstract
BACKGROUND Artificial intelligence (AI) is being introduced to rehabilitation practice and can optimize patient outcomes through its ability to design personalized care strategies and interventions. OBJECTIVES To understand the attitudes and perceptions of physical therapy professionals on the use of AI in rehabilitation with regard to treatment planning, diagnosis, outcome prediction, and advantages and disadvantages. DESIGN AND METHODS This paper followed an exploratory, qualitative research design. Semi-structured, one-to-one interviews were conducted with participants of different experience levels and specialties in physical therapy. Results were evaluated using thematic analysis. RESULTS Four themes were identified: (i) perceptions of AI and its applications in healthcare services, (ii) impact on the workforce, (iii) considerations around implementing AI within rehabilitation, and (iv) AI and the fast-approaching future. Participants shared views on the potential impact of AI on rehabilitation practices, such as aiding the decision-making process and saving time and effort for both therapists and patients. Participants stressed potential pitfalls that still need to be considered, such as patient data privacy, potential loss of the patient-healthcare practitioner relationship, and ethical concerns regarding overreliance on these applications and how that might hinder effective patient care. CONCLUSION The findings add to the literature on physical therapists' understanding of the use of AI in patient care. Several concerns were raised about the adoption of AI, including patient privacy and ethical concerns. Based on the study findings, the researchers emphasize the importance of establishing guidelines when incorporating AI in rehabilitation to improve therapists' knowledge and skills.
Affiliation(s)
- Lama Shawli, Department of Occupational Therapy, College of Applied Medical Sciences, King Saud Bin Abdulaziz University for Health Sciences, Jeddah, Saudi Arabia
- Mashael Alsobhi, Department of Physical Therapy, Faculty of Medical Rehabilitation Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
- Mohamed Faisal Chevidikunnan, Department of Physical Therapy, Faculty of Medical Rehabilitation Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
- Sheeba Rosewilliam, School of Sports, Exercise and Rehabilitation Sciences, University of Birmingham, Birmingham, United Kingdom
- Reem Basuodan, Department of Rehabilitation Sciences, College of Health and Rehabilitation Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Fayaz Khan, Department of Physical Therapy, Faculty of Medical Rehabilitation Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
3. Papachristou P, Söderholm M, Pallon J, Taloyan M, Polesie S, Paoli J, Anderson CD, Falk M. Evaluation of an artificial intelligence-based decision support for the detection of cutaneous melanoma in primary care: a prospective real-life clinical trial. Br J Dermatol 2024; 191:125-133. PMID: 38234043; DOI: 10.1093/bjd/ljae021.
Abstract
BACKGROUND Use of artificial intelligence (AI), or machine learning, to assess dermoscopic images of skin lesions to detect melanoma has, in several retrospective studies, shown high levels of diagnostic accuracy on par with - or even outperforming - experienced dermatologists. However, the enthusiasm around these algorithms has not yet been matched by prospective clinical trials performed in authentic clinical settings. In several European countries, including Sweden, the initial clinical assessment of suspected skin cancer is principally conducted in the primary healthcare setting by primary care physicians, with or without access to teledermoscopic support from dermatology clinics. OBJECTIVES To determine the diagnostic performance of an AI-based clinical decision support tool for cutaneous melanoma detection, operated by a smartphone application (app), when used prospectively by primary care physicians to assess skin lesions of concern due to some degree of melanoma suspicion. METHODS This prospective multicentre clinical trial was conducted at 36 primary care centres in Sweden. Physicians used the smartphone app on skin lesions of concern by photographing them dermoscopically, which resulted in a dichotomous decision support text regarding evidence for melanoma. Regardless of the app outcome, all lesions underwent standard diagnostic procedures (surgical excision or referral to a dermatologist). After investigations were complete, lesion diagnoses were collected from the patients' medical records and compared with the app's outcome and other lesion data. RESULTS In total, 253 lesions of concern in 228 patients were included, of which 21 proved to be melanomas, with 11 thin invasive melanomas and 10 melanomas in situ. 
The app's accuracy in identifying melanomas was reflected in an area under the receiver operating characteristic (AUROC) curve of 0.960 [95% confidence interval (CI) 0.928-0.980], corresponding to a maximum sensitivity and specificity of 95.2% and 84.5%, respectively. For invasive melanomas alone, the AUROC was 0.988 (95% CI 0.965-0.997), corresponding to a maximum sensitivity and specificity of 100% and 92.6%, respectively. CONCLUSIONS The clinical decision support tool evaluated in this investigation showed high diagnostic accuracy when used prospectively in primary care patients, which could add significant clinical value for primary care physicians assessing skin lesions for melanoma.
Affiliation(s)
- Panagiotis Papachristou, Division of Family Medicine and Primary Care, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden; Atrium Healthcare Centre, Region Stockholm, Sweden
- My Söderholm, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden; Ekholmen Primary Healthcare Centre, Region Östergötland, Linköping, Sweden
- Jon Pallon, Department of Clinical Sciences in Malmö, Family Medicine, Lund University, Malmö, Sweden; Department of Research and Development, Region Kronoberg, Växjö, Sweden
- Marina Taloyan, Division of Family Medicine and Primary Care, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden; Atrium Healthcare Centre, Region Stockholm, Sweden
- Sam Polesie, Department of Dermatology and Venereology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden; Department of Dermatology and Venereology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- John Paoli, Department of Dermatology and Venereology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden; Department of Dermatology and Venereology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Chris D Anderson, Department of Biomedical and Clinical Sciences, Division of Dermatology and Venereology, Linköping University, Linköping, Sweden
- Magnus Falk, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden; Kärna Primary Healthcare Centre, Region Östergötland, Linköping, Sweden
4. Hennrich J, Ritz E, Hofmann P, Urbach N. Capturing artificial intelligence applications' value proposition in healthcare - a qualitative research study. BMC Health Serv Res 2024; 24:420. PMID: 38570809; PMCID: PMC10993548; DOI: 10.1186/s12913-024-10894-4.
Abstract
Artificial intelligence (AI) applications pave the way for innovations in the healthcare (HC) industry. However, their adoption in HC organizations is still nascent as organizations often face a fragmented and incomplete picture of how they can capture the value of AI applications on a managerial level. To overcome adoption hurdles, HC organizations would benefit from understanding how they can capture AI applications' potential. We conduct a comprehensive systematic literature review and 11 semi-structured expert interviews to identify, systematize, and describe 15 business objectives that translate into six value propositions of AI applications in HC. Our results demonstrate that AI applications can have several business objectives converging into risk-reduced patient care, advanced patient care, self-management, process acceleration, resource optimization, and knowledge discovery. We contribute to the literature by extending research on value creation mechanisms of AI to the HC context and guiding HC organizations in evaluating their AI applications or those of the competition on a managerial level, to assess AI investment decisions, and to align their AI application portfolio towards an overarching strategy.
Affiliation(s)
- Jasmin Hennrich, FIM Research Institute for Information Management, University of Bayreuth, Branch Business and Information Systems Engineering of the Fraunhofer FIT, Wittelsbacherring 10, 95444 Bayreuth, Germany
- Eva Ritz, University St. Gallen, Dufourstrasse 50, 9000 St. Gallen, Switzerland
- Peter Hofmann, FIM Research Institute for Information Management, University of Bayreuth, Branch Business and Information Systems Engineering of the Fraunhofer FIT, Wittelsbacherring 10, 95444 Bayreuth, Germany; appliedAI Initiative GmbH, August-Everding-Straße 25, 81671 Munich, Germany
- Nils Urbach, FIM Research Institute for Information Management, University of Bayreuth, Branch Business and Information Systems Engineering of the Fraunhofer FIT, Wittelsbacherring 10, 95444 Bayreuth, Germany; Faculty Business and Law, Frankfurt University of Applied Sciences, Nibelungenplatz 1, 60318 Frankfurt am Main, Germany
| |
5. Stewart J, Freeman S, Eroglu E, Dumitrascu N, Lu J, Goudie A, Sprivulis P, Akhlaghi H, Tran V, Sanfilippo F, Celenza A, Than M, Fatovich D, Walker K, Dwivedi G. Attitudes towards artificial intelligence in emergency medicine. Emerg Med Australas 2024; 36:252-265. PMID: 38044755; DOI: 10.1111/1742-6723.14345.
Abstract
OBJECTIVE To assess Australian and New Zealand emergency clinicians' attitudes towards the use of artificial intelligence (AI) in emergency medicine. METHODS We undertook a qualitative interview-based study based on grounded theory. Participants were recruited through ED internal mailing lists, the Australasian College for Emergency Medicine Bulletin, and the research teams' personal networks. Interviews were transcribed, coded and themes presented. RESULTS Twenty-five interviews were conducted between July 2021 and May 2022. Thematic saturation was achieved after 22 interviews. Most participants were from either Western Australia (52%) or Victoria (16%) and were consultants (96%). More participants reported feeling optimistic (10/25) than neutral (6/25), pessimistic (2/25) or mixed (7/25) towards the use of AI in the ED. A minority expressed scepticism regarding the feasibility or value of implementing AI into the ED. Multiple potential risks and ethical issues were discussed by participants including skill loss from overreliance on AI, algorithmic bias, patient privacy and concerns over liability. Participants also discussed perceived inadequacies in existing information technology systems. Participants felt that AI technologies would be used as decision support tools and not replace the roles of emergency clinicians. Participants were not concerned about the impact of AI on their job security. Most (17/25) participants thought that AI would impact emergency medicine within the next 10 years. CONCLUSIONS Emergency clinicians interviewed were generally optimistic about the use of AI in emergency medicine, so long as it is used as a decision support tool and they maintain the ability to override its recommendations.
Affiliation(s)
- Jonathon Stewart, School of Medicine, The University of Western Australia, Perth, Western Australia, Australia; Department of Advanced Clinical and Translational Cardiovascular Imaging, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia
- Samuel Freeman, SensiLab, Monash University, Melbourne, Victoria, Australia; Department of Emergency Medicine, St Vincent's Hospital Melbourne, Melbourne, Victoria, Australia
- Ege Eroglu, School of Medicine, The University of Notre Dame Australia, Fremantle, Western Australia, Australia
- Nicole Dumitrascu, School of Medicine, The University of Notre Dame Australia, Fremantle, Western Australia, Australia
- Juan Lu, Department of Advanced Clinical and Translational Cardiovascular Imaging, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia; Department of Computer Science and Software Engineering, The University of Western Australia, Perth, Western Australia, Australia
- Adrian Goudie, Department of Emergency Medicine, Fiona Stanley Hospital, Perth, Western Australia, Australia
- Peter Sprivulis, Strategy and Governance Division, Western Australia Department of Health, Perth, Western Australia, Australia
- Hamed Akhlaghi, Department of Emergency Medicine, St Vincent's Hospital Melbourne, Melbourne, Victoria, Australia
- Viet Tran, School of Medicine, University of Tasmania, Hobart, Tasmania, Australia; Department of Emergency Medicine, Royal Hobart Hospital, Hobart, Tasmania, Australia
- Frank Sanfilippo, School of Population and Global Health, The University of Western Australia, Perth, Western Australia, Australia
- Antonio Celenza, School of Medicine, The University of Western Australia, Perth, Western Australia, Australia; Department of Emergency Medicine, Sir Charles Gairdner Hospital, Perth, Western Australia, Australia
- Martin Than, Department of Emergency Medicine, Christchurch Hospital, Christchurch, New Zealand
- Daniel Fatovich, Emergency Medicine, Royal Perth Hospital, The University of Western Australia, Perth, Western Australia, Australia; Centre for Clinical Research in Emergency Medicine, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia
- Katie Walker, School of Clinical Sciences at Monash Health, Monash University, Melbourne, Victoria, Australia
- Girish Dwivedi, School of Medicine, The University of Western Australia, Perth, Western Australia, Australia; Department of Advanced Clinical and Translational Cardiovascular Imaging, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia; Department of Cardiology, Fiona Stanley Hospital, Perth, Western Australia, Australia
6. Evans RP, Bryant LD, Russell G, Absolom K. Trust and acceptability of data-driven clinical recommendations in everyday practice: A scoping review. Int J Med Inform 2024; 183:105342. PMID: 38266426; DOI: 10.1016/j.ijmedinf.2024.105342.
Abstract
BACKGROUND Increasing attention is being given to the analysis of large health datasets to derive new clinical decision support systems (CDSS). However, few data-driven CDSS are being adopted into clinical practice. Trust in these tools is believed to be fundamental for acceptance and uptake but to date little attention has been given to defining or evaluating trust in clinical settings. OBJECTIVES A scoping review was conducted to explore how and where acceptability and trustworthiness of data-driven CDSS have been assessed from the health professional's perspective. METHODS Medline, Embase, PsycInfo, Web of Science, Scopus, ACM Digital, IEEE Xplore and Google Scholar were searched in March 2022 using terms expanded from: "data-driven" AND "clinical decision support" AND "acceptability". Included studies focused on healthcare practitioner-facing data-driven CDSS, relating directly to clinical care. They included trust or a proxy as an outcome, or in the discussion. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) was followed in the reporting of this review. RESULTS 3291 papers were screened, with 85 primary research studies eligible for inclusion. Studies covered a diverse range of clinical specialisms and intended contexts, but hypothetical systems (24) outnumbered those in clinical use (18). Twenty-five studies measured trust via a wide variety of quantitative, qualitative and mixed methods. A further 24 discussed themes of trust without it being explicitly evaluated, and from these, themes of transparency, explainability, and supporting evidence were identified as factors influencing healthcare practitioner trust in data-driven CDSS. CONCLUSION There is a growing body of research on data-driven CDSS, but few studies have explored stakeholder perceptions in depth, with limited focused research on trustworthiness. Further research on healthcare practitioner acceptance, including requirements for transparency and explainability, should inform clinical implementation.
Affiliation(s)
- Ruth P Evans, University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK
- Gregor Russell, Bradford District Care Trust, Bradford, New Mill, Victoria Rd, BD18 3LD, UK
- Kate Absolom, University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK
7. Helenason J, Ekström C, Falk M, Papachristou P. Exploring the feasibility of an artificial intelligence based clinical decision support system for cutaneous melanoma detection in primary care - a mixed method study. Scand J Prim Health Care 2024; 42:51-60. PMID: 37982736; PMCID: PMC10851794; DOI: 10.1080/02813432.2023.2283190.
Abstract
Objective: Skin examination to detect cutaneous melanomas is commonly performed in primary care. In recent years, clinical decision support systems (CDSS) based on artificial intelligence (AI) have been introduced within several diagnostic fields. Setting: This study employs a variety of qualitative and quantitative methodologies to investigate the feasibility of an AI-based CDSS to detect cutaneous melanoma in primary care. Subjects and Design: Fifteen primary care physicians (PCPs) underwent near-live simulations using the CDSS on a simulated patient, and the subsequent individual semi-structured interviews were analysed with a hybrid thematic analysis approach. Additionally, twenty-five PCPs performed a reader study (diagnostic assessment on the basis of image interpretation) of 18 dermoscopic images, both with and without help from AI, investigating the value of adding AI support to a PCP's decision. Perceived instrument usability was rated on the System Usability Scale (SUS). Results: From the interviews, the importance of trust in the CDSS emerged as a central concern. Scientific evidence supporting sufficient diagnostic accuracy of the CDSS was expressed as an important factor that could increase trust. Access to AI decision support when evaluating dermoscopic images proved valuable, as it formally increased the physicians' diagnostic accuracy. A mean SUS score of 84.8, corresponding to 'good' usability, was measured. Conclusion: AI-based CDSS might play an important future role in cutaneous melanoma diagnostics, provided sufficient evidence of diagnostic accuracy and usability supporting its trustworthiness among the users.
Affiliation(s)
- Magnus Falk, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Panagiotis Papachristou, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
8. Giddings R, Joseph A, Callender T, Janes SM, van der Schaar M, Sheringham J, Navani N. Factors influencing clinician and patient interaction with machine learning-based risk prediction models: a systematic review. Lancet Digit Health 2024; 6:e131-e144. PMID: 38278615; DOI: 10.1016/s2589-7500(23)00241-8.
Abstract
Machine learning (ML)-based risk prediction models hold the potential to support the health-care setting in several ways; however, use of such models is scarce. We aimed to review health-care professional (HCP) and patient perceptions of ML risk prediction models in published literature, to inform future risk prediction model development. Following database and citation searches, we identified 41 articles suitable for inclusion. Article quality varied, with qualitative studies performing strongest. Overall, perceptions of ML risk prediction models were positive. HCPs and patients considered that models have the potential to add benefit in the health-care setting. However, reservations remain; for example, concerns regarding data quality for model development and fears of unintended consequences following ML model use. We identified that public views regarding these models might be more negative than those of HCPs and that concerns (eg, extra demands on workload) were not always borne out in practice. Conclusions are tempered by the low number of patient and public studies, the absence of participant ethnic diversity, and variation in article quality. We identified gaps in knowledge (particularly views from under-represented groups) and optimum methods for model explanation and alerts, which require future research.
Affiliation(s)
- Rebecca Giddings, Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Anabel Joseph, Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Thomas Callender, Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Sam M Janes, Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
- Mihaela van der Schaar, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK; The Alan Turing Institute, London, UK
- Jessica Sheringham, Department of Applied Health Research, University College London, London, UK
- Neal Navani, Lungs for Living Research Centre, UCL Respiratory, University College London, London, UK
9. Gunathilaka NJ, Gooden TE, Cooper J, Flanagan S, Marshall T, Haroon S, D'Elia A, Crowe F, Jackson T, Nirantharakumar K, Greenfield S. Perceptions on artificial intelligence-based decision-making for coexisting multiple long-term health conditions: protocol for a qualitative study with patients and healthcare professionals. BMJ Open 2024; 14:e077156. PMID: 38307535; PMCID: PMC10836375; DOI: 10.1136/bmjopen-2023-077156.
Abstract
INTRODUCTION The coexistence of multiple health conditions is common among older people, a population that is increasing globally. The potential for polypharmacy, adverse events, drug interactions and development of additional health conditions complicates prescribing decisions for these patients. Artificial intelligence (AI)-generated decision-making tools may help guide clinical decisions in the context of multiple health conditions, by determining which of the multiple medication options is best. This study aims to explore the perceptions of healthcare professionals (HCPs) and patients on the use of AI in the management of multiple health conditions. METHODS AND ANALYSIS A qualitative study will be conducted using semistructured interviews. Adults (≥18 years) with multiple health conditions living in the West Midlands of England and HCPs with experience in caring for patients with multiple health conditions will be eligible and purposively sampled. Patients will be identified from Clinical Practice Research Datalink (CPRD) Aurum; CPRD will contact general practitioners, who will, in turn, send a letter to patients inviting them to take part. Eligible HCPs will be recruited through British HCP bodies and known contacts. Up to 30 patients and 30 HCPs will be recruited, until data saturation is achieved. Interviews will be in person or virtual, audio recorded and transcribed verbatim. The topic guide is designed to explore participants' attitudes towards AI-informed clinical decision-making to augment clinician-directed decision-making, the perceived advantages and disadvantages of both methods and attitudes towards risk management. Case vignettes comprising a common decision pathway for patients with multiple health conditions will be presented during each interview to invite participants' opinions on how their experiences compare. Data will be analysed thematically using the Framework Method. ETHICS AND DISSEMINATION This study has been approved by the National Health Service Research Ethics Committee (Reference: 22/SC/0210). Written informed consent or verbal consent will be obtained prior to each interview. The findings from this study will be disseminated through peer-reviewed publications, conferences and lay summaries.
Affiliation(s)
- Tiffany E Gooden, Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Jennifer Cooper, Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Sarah Flanagan, Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Tom Marshall, Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Shamil Haroon, Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Alexander D'Elia, Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Francesca Crowe, Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Thomas Jackson, Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
- Sheila Greenfield, Institute of Applied Health Research, University of Birmingham, Birmingham, West Midlands, UK
10. Allen MR, Webb S, Mandvi A, Frieden M, Tai-Seale M, Kallenberg G. Navigating the doctor-patient-AI relationship - a mixed-methods study of physician attitudes toward artificial intelligence in primary care. BMC Prim Care 2024; 25:42. PMID: 38281026; PMCID: PMC10821550; DOI: 10.1186/s12875-024-02282-y.
Abstract
BACKGROUND Artificial intelligence (AI) is a rapidly advancing field that is beginning to enter the practice of medicine. Primary care is a cornerstone of medicine and deals with challenges such as physician shortages and burnout, which impact patient care. AI and its application via digital health is increasingly presented as a possible solution. However, there is a scarcity of research focusing on primary care physician (PCP) attitudes toward AI. This study examines PCP views on AI in primary care. We explore its potential impact on topics pertinent to primary care such as the doctor-patient relationship and clinical workflow. By doing so, we aim to inform primary care stakeholders to encourage successful, equitable uptake of future AI tools. Our study is the first, to our knowledge, to explore PCP attitudes using specific primary care AI use cases rather than discussing AI in medicine in general terms. METHODS From June to August 2023, we conducted a survey among 47 primary care physicians affiliated with a large academic health system in Southern California. The survey quantified attitudes toward AI in general as well as concerning two specific AI use cases. Additionally, we conducted interviews with 15 survey respondents. RESULTS Our findings suggest that PCPs have largely positive views of AI. However, attitudes often hinged on the context of adoption. While some concerns reported by PCPs regarding AI in primary care focused on technology (accuracy, safety, bias), many focused on people-and-process factors (workflow, equity, reimbursement, doctor-patient relationship). CONCLUSION Our study offers nuanced insights into PCP attitudes towards AI in primary care and highlights the need for primary care stakeholder alignment on key issues raised by PCPs. AI initiatives that fail to address both the technological and people-and-process concerns raised by PCPs may struggle to make an impact.
Affiliation(s)
- Matthew R Allen
- Department of Family Medicine, University of California San Diego, La Jolla, CA, 92093, USA
- Division of Biomedical Informatics, University of California San Diego, La Jolla, CA, 92093, USA
- Sophie Webb
- Department of Family Medicine, University of California San Diego, La Jolla, CA, 92093, USA
- Ammar Mandvi
- Department of Family Medicine, University of California San Diego, La Jolla, CA, 92093, USA
- Marshall Frieden
- Department of Family Medicine, University of California San Diego, La Jolla, CA, 92093, USA
- Ming Tai-Seale
- Department of Family Medicine, University of California San Diego, La Jolla, CA, 92093, USA
- Gene Kallenberg
- Department of Family Medicine, University of California San Diego, La Jolla, CA, 92093, USA

11
Fazakarley CA, Breen M, Thompson B, Leeson P, Williamson V. Beliefs, experiences and concerns of using artificial intelligence in healthcare: A qualitative synthesis. Digit Health 2024; 10:20552076241230075. [PMID: 38347935 PMCID: PMC10860471 DOI: 10.1177/20552076241230075]
Abstract
Objective Artificial intelligence (AI) is a developing field in the context of healthcare. As this technology continues to be implemented in patient care, there is a growing need to understand the thoughts and experiences of stakeholders in this area to ensure that future AI development and implementation is successful. The aim of this study was to conduct a literature search of qualitative studies exploring the opinions of stakeholders such as clinicians, patients, and technology experts, in order to establish the most common themes and ideas presented in this research. Methods A literature search was conducted of existing qualitative research on stakeholder beliefs about the use of AI in healthcare. Twenty-one papers were selected and analysed, resulting in the development of four key themes relating to patient care, patient-doctor relationships, lack of education and resources, and the need for regulations. Results Overall, patients and healthcare workers are open to the use of AI in care and appear positive about its potential benefits. However, concerns were raised about the lack of empathy in interactions with AI tools, and about potential risks arising from the data collection needed for AI use and development. Stakeholders in the healthcare, technology, and business sectors all stressed that there is a lack of appropriate education, funding, and guidelines surrounding AI, and that these concerns must be addressed to ensure future implementation is safe and suitable for patient care. Conclusion Ultimately, the results of this study highlight the need for communication between stakeholders in order to address these concerns, mitigate potential risks, and maximise benefits for patients and clinicians alike. The results also identify a need for further qualitative research in this area to better understand stakeholder experiences as AI use continues to develop.
Affiliation(s)
- Paul Leeson
- RDM Division of Cardiovascular Medicine, University of Oxford, John Radcliffe Hospital, Oxford, UK
- Victoria Williamson
- King's Centre for Military Health Research, King's College London, London, UK

12
Diaz-Asper C, Chandler C, Elvevåg B. Cognitive Screening for Mild Cognitive Impairment: Clinician Perspectives on Current Practices and Future Directions. J Alzheimers Dis 2024; 99:869-876. [PMID: 38728193 DOI: 10.3233/jad-240293]
Abstract
This study surveyed 51 specialist clinicians for their views on existing cognitive screening tests for mild cognitive impairment and their opinions about a hypothetical remote screener driven by artificial intelligence (AI). Responses revealed significant concerns regarding the sensitivity, specificity, and time taken to administer current tests, along with a general willingness to consider adopting telephone-based screening driven by AI. Findings highlight the need to design screeners that address the challenges of recognizing the earliest stages of cognitive decline and that prioritize not only accuracy but also stakeholder input.
Affiliation(s)
- Catherine Diaz-Asper
- Department of Psychology & Center for Optimal Aging, Marymount University, Arlington, VA, USA
- Chelsea Chandler
- Institute of Cognitive Science, University of Colorado, Boulder, CO, USA
- Brita Elvevåg
- Department of Clinical Medicine, University of Tromsø - The Arctic University of Norway, Tromsø, Norway

13
Townsend BA, Plant KL, Hodge VJ, Ashaolu O, Calinescu R. Medical practitioner perspectives on AI in emergency triage. Front Digit Health 2023; 5:1297073. [PMID: 38125759 PMCID: PMC10731272 DOI: 10.3389/fdgth.2023.1297073]
Abstract
Introduction A proposed Diagnostic AI System for Robot-Assisted Triage ("DAISY") is under development to support Emergency Department ("ED") triage, following increasing reports of overcrowding and staff shortages in ED care within the National Health Service, England ("NHS") and globally. DAISY aims to reduce ED patient wait times and medical practitioner overload. The objective of this study was to explore NHS health practitioners' perspectives and attitudes towards the future use of AI-supported technologies in ED triage. Methods Between July and August 2022, a qualitative-exploratory research study was conducted to capture the perceptions and attitudes of nine NHS healthcare practitioners and better understand the challenges and benefits of a DAISY deployment. The study was based on a thematic analysis of semi-structured interviews. Audio-recordings were transcribed verbatim, and notes were incorporated into the data documents. The transcripts were coded line by line, and data were organised into themes and sub-themes. Both inductive and deductive approaches to thematic analysis were used. Results Based on a qualitative analysis of the coded interviews, responses were categorised into broad main themes: trust; current practice; social, legal, ethical, and cultural concerns; and empathetic practice. Sub-themes were identified for each main theme. Further quantitative analyses explored the vocabulary and sentiments of the participants when talking generally about NHS ED practices compared to discussing DAISY. Limitations include a small sample size and the requirement that research participants imagine a prototype AI-supported system still under development. The expectation is that such a system would work alongside the practitioner.
Findings may be generalisable to other AI-supported healthcare systems and to other domains. Discussion This study highlights the benefits of and challenges for an AI-supported triage healthcare solution. Most NHS ED practitioners interviewed were positive about such adoption. Benefits cited were a reduction in patient wait times in the ED, assistance in streamlining the triage process, support in calling for appropriate diagnostics and further patient examination, and identification of those who are very unwell and require more immediate and urgent attention. Words used to describe the system included "good idea," "help," "helpful," "easier," "value," and "accurate." Our study demonstrates that trust in the system is a significant driver of use and a potential barrier to adoption. Participants emphasised social, legal, ethical, and cultural considerations and barriers to DAISY adoption, as well as the importance of empathy and non-verbal cues in patient interactions. Findings demonstrate how DAISY might support and augment human medical performance in ED care, and provide an understanding of attitudinal barriers and considerations for the development and implementation of future AI-supported triage systems.
Affiliation(s)
- Katherine L. Plant
- Faculty of Engineering & Physical Sciences, University of Southampton, Southampton, Hampshire, United Kingdom
- Victoria J. Hodge
- Department of Computer Science, University of York, York, United Kingdom
- Radu Calinescu
- Department of Computer Science, University of York, York, United Kingdom

14
Hummelsberger P, Koch TK, Rauh S, Dorn J, Lermer E, Raue M, Hudecek MFC, Schicho A, Colak E, Ghassemi M, Gaube S. Insights on the Current State and Future Outlook of AI in Health Care: Expert Interview Study. JMIR AI 2023; 2:e47353. [PMID: 38875571 PMCID: PMC11041415 DOI: 10.2196/47353]
Abstract
BACKGROUND Artificial intelligence (AI) is often promoted as a potential solution for many challenges health care systems face worldwide. However, its implementation in clinical practice lags behind its technological development. OBJECTIVE This study aims to gain insights into the current state and prospects of AI technology from the stakeholders most directly involved in its adoption in the health care sector, whose perspectives have received limited attention in research to date. METHODS For this purpose, the perspectives of AI researchers and health care IT professionals in North America and Western Europe were collected and compared for profession-specific and regional differences. In this preregistered, mixed methods, cross-sectional study, 23 experts were interviewed using a semistructured guide. Data from the interviews were analyzed using deductive and inductive qualitative methods for thematic analysis, along with topic modeling to identify latent topics. RESULTS Through our thematic analysis, four major categories emerged: (1) the current state of AI systems in health care, (2) the criteria and requirements for implementing AI systems in health care, (3) the challenges in implementing AI systems in health care, and (4) the prospects of the technology. Experts discussed the capabilities and limitations of current AI systems in health care, in addition to their prevalence and regional differences. Several criteria and requirements deemed necessary for the successful implementation of AI systems were identified, including the technology's performance and security, smooth system integration and human-AI interaction, costs, stakeholder involvement, and employee training. However, regulatory, logistical, and technical issues were identified as the most critical barriers to an effective technology implementation process. Looking ahead, our experts predicted both threats and opportunities related to AI technology in the health care sector.
CONCLUSIONS Our work provides new insights into the current state, criteria, challenges, and outlook for implementing AI technology in health care from the perspective of AI researchers and IT professionals in North America and Western Europe. For the full potential of AI-enabled technologies to be exploited and for them to contribute to solving current health care challenges, critical implementation criteria must be met, and all groups involved in the process must work together.
Affiliation(s)
- Pia Hummelsberger
- LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany
- Timo K Koch
- LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany
- Department of Psychology, LMU Munich, Munich, Germany
- Sabrina Rauh
- LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany
- Julia Dorn
- LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany
- Eva Lermer
- LMU Center for Leadership and People Management, Department of Psychology, LMU Munich, Munich, Germany
- Department of Business Psychology, Technical University of Applied Sciences Augsburg, Augsburg, Germany
- Martina Raue
- MIT AgeLab, Massachusetts Institute of Technology, Cambridge, MA, United States
- Matthias F C Hudecek
- Department of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Andreas Schicho
- Department of Radiology, University Hospital Regensburg, Regensburg, Germany
- Errol Colak
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto, Toronto, ON, Canada
- Department of Medical Imaging, St. Michael's Hospital, Unity Health Toronto, Toronto, ON, Canada
- Department of Medical Imaging, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Marzyeh Ghassemi
- Electrical Engineering and Computer Science, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, United States
- Vector Institute, Toronto, ON, Canada
- Susanne Gaube
- UCL Global Business School for Health, University College London, London, United Kingdom

15
Chen Y, Wu Z, Wang P, Xie L, Yan M, Jiang M, Yang Z, Zheng J, Zhang J, Zhu J. Radiology Residents' Perceptions of Artificial Intelligence: Nationwide Cross-Sectional Survey Study. J Med Internet Res 2023; 25:e48249. [PMID: 37856181 PMCID: PMC10623237 DOI: 10.2196/48249]
Abstract
BACKGROUND Artificial intelligence (AI) is transforming various fields, with health care, especially diagnostic specialties such as radiology, being a key but controversial battleground. However, there is limited research systematically examining the response of "human intelligence" to AI. OBJECTIVE This study aims to comprehend radiologists' perceptions regarding AI, including their views on its potential to replace them, its usefulness, and their willingness to accept it. We examine the influence of various factors, encompassing demographic characteristics, working status, psychosocial aspects, personal experience, and contextual factors. METHODS Between December 1, 2020, and April 30, 2021, a cross-sectional survey was completed by 3666 radiology residents in China. We used multivariable logistic regression models to examine factors and associations, reporting odds ratios (ORs) and 95% CIs. RESULTS In summary, radiology residents generally hold a positive attitude toward AI, with 29.90% (1096/3666) agreeing that AI may reduce the demand for radiologists, 72.80% (2669/3666) believing AI improves disease diagnosis, and 78.18% (2866/3666) feeling that radiologists should embrace AI. Several associated factors, including age, gender, education, region, eye strain, working hours, time spent on medical images, resilience, burnout, AI experience, and perceptions of residency support and stress, significantly influence AI attitudes. For instance, burnout symptoms were associated with greater concerns about AI replacement (OR 1.89; P<.001), less favorable views on AI usefulness (OR 0.77; P=.005), and reduced willingness to use AI (OR 0.71; P<.001). Moreover, after adjusting for all other factors, perceived AI replacement (OR 0.81; P<.001) and AI usefulness (OR 5.97; P<.001) were shown to significantly impact the intention to use AI. CONCLUSIONS This study profiles radiology residents who are accepting of AI. 
Our comprehensive findings provide insights for a multidimensional approach to help physicians adapt to AI. Targeted policies, such as digital health care initiatives and medical education, can be developed accordingly.
Affiliation(s)
- Yanhua Chen
- Vanke School of Public Health, Tsinghua University, Beijing, China
- School of Medicine, Tsinghua University, Beijing, China
- Ziye Wu
- Vanke School of Public Health, Tsinghua University, Beijing, China
- Peicheng Wang
- Vanke School of Public Health, Tsinghua University, Beijing, China
- School of Medicine, Tsinghua University, Beijing, China
- Linbo Xie
- Vanke School of Public Health, Tsinghua University, Beijing, China
- School of Medicine, Tsinghua University, Beijing, China
- Mengsha Yan
- Vanke School of Public Health, Tsinghua University, Beijing, China
- Maoqing Jiang
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Zhenghan Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Jianjun Zheng
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jingfeng Zhang
- Department of Radiology, Ningbo No. 2 Hospital, Ningbo, China
- Jiming Zhu
- Vanke School of Public Health, Tsinghua University, Beijing, China
- Institute for Healthy China, Tsinghua University, Beijing, China

16
Diel S, Doctor E, Reith R, Buck C, Eymann T. Examining supporting and constraining factors of physicians' acceptance of telemedical online consultations: a survey study. BMC Health Serv Res 2023; 23:1128. [PMID: 37858170 PMCID: PMC10588103 DOI: 10.1186/s12913-023-10032-6]
Abstract
As healthcare demands exceed outpatient physicians' capacities, telemedicine holds far-reaching potential for both physicians and patients. It is crucial to holistically analyze physicians' acceptance of telemedical applications, such as online consultations. This study seeks to identify supporting and constraining factors that influence outpatient physicians' acceptance of telemedicine. We developed a model based on the unified theory of acceptance and use of technology (UTAUT). To empirically examine our research model, we conducted a survey among German physicians (n = 127) in 2018-2019. We used the partial least squares (PLS) modeling approach to test our model, including a mediation analysis. The results indicate that performance expectancy (β = .397, P < .001), effort expectancy (β = .134, P = .03), and social influence (β = .337, P < .001) strongly impact the intention to conduct online consultations and explain 55% of its variance. Structural conditions regarding data security are a key antecedent, associated with performance expectancy (β = .193, P < .001) and effort expectancy (β = .295, P < .001). Regarding potential barriers to usage intentions, we find that IT anxiety predicts performance expectancy (β = -.342, P < .001) and effort expectancy (β = -.364, P < .001), while performance expectancy fully mediates (βdirect = .022, P = .71; βindirect = -.138, P < .001) the relationship between IT anxiety and the intention to use telemedical applications. This research provides explanations for physicians' behavioral intention to use online consultations, underlining UTAUT's applicability in healthcare contexts. To boost acceptance, social influences such as personal connections and networking are vital, as colleagues can serve as multipliers to reach convergence on online consultations among peers. To overcome physicians' IT anxiety, training, demonstrations, knowledge sharing, and management incentives are recommended.
Furthermore, regulations and standards to build trust in the compliance of online consultations with data protection guidelines need reinforcement from policymakers and hospital management alike.
Affiliation(s)
- Sören Diel
- Branch Business & Information Systems Engineering of the Fraunhofer FIT and FIM Research Center for Information Management, University of Bayreuth, Wittelsbacherring 10, 95444, Bayreuth, Germany
- Eileen Doctor
- Branch Business & Information Systems Engineering of the Fraunhofer FIT and FIM Research Center for Information Management, University of Bayreuth, Wittelsbacherring 10, 95444, Bayreuth, Germany
- Riccardo Reith
- Chair of General Business Management, University of Bayreuth, Universitätsstraße 30, 95447, Bayreuth, Germany
- Christoph Buck
- Faculty of Informatics, Augsburg University of Applied Sciences and Branch Business & Information Systems Engineering of the Fraunhofer FIT, Alter Postweg 101, 86159, Augsburg, Germany
- QUT Business School, Centre for Future Enterprise, Queensland University of Technology, 2 George St, Brisbane, QLD-4000, Australia
- Torsten Eymann
- Branch Business & Information Systems Engineering of the Fraunhofer FIT and FIM Research Center for Information Management, University of Bayreuth, Wittelsbacherring 10, 95444, Bayreuth, Germany

17
Hamedani Z, Moradi M, Kalroozi F, Manafi Anari A, Jalalifar E, Ansari A, Aski BH, Nezamzadeh M, Karim B. Evaluation of acceptance, attitude, and knowledge towards artificial intelligence and its application from the point of view of physicians and nurses: A provincial survey study in Iran: A cross-sectional descriptive-analytical study. Health Sci Rep 2023; 6:e1543. [PMID: 37674620 PMCID: PMC10477406 DOI: 10.1002/hsr2.1543]
Abstract
Background and Aims The prospect of using artificial intelligence (AI) in healthcare is bright and promising, and its use can significantly reduce costs and decrease the possibility of error and negligence among healthcare workers. This study aims to investigate the level of knowledge, attitude, and acceptance of AI among Iranian physicians and nurses. Methods This cross-sectional descriptive-analytical study was conducted on 400 physicians and nurses in eight public university hospitals in Tehran. Convenience sampling was used, with data collected via researcher-made questionnaires. Statistical analysis was performed in SPSS 21 using means and standard deviations, chi-square tests, and Fisher's exact tests. Results In this study, the level of knowledge among the research subjects was average (14.66 ± 4.53), their attitude toward AI was relatively favorable (47.81 ± 6.74), and their acceptance of AI was average (103.19 ± 13.70). Moreover, from the participants' perspective, AI in medicine is most widely used in increasing the accuracy of diagnostic tests (86.5%), identifying drug interactions (82.75%), and helping to analyze medical tests and imaging (80%). There was a statistically significant relationship between acceptance of AI and the participants' level of education (p = 0.028), participation in an AI training course (p = 0.022), and the hospital department where they worked (p < 0.001). Conclusion In this study, both the knowledge and the acceptance of the participants towards AI were at an average level, and their attitude towards AI was relatively favorable, which contrasts with the very rapid and inevitable expansion of AI. Although our participants were aware of the growing use of AI in medicine, they remained cautious toward it.
Affiliation(s)
- Zeinab Hamedani
- Department of Midwifery, College of Nursing and Midwifery, Karaj Islamic Azad University, Karaj, Iran
- Mohsen Moradi
- Department of Psychiatric Nursing, School of Nursing & Midwifery, Shahrekord University of Medical Sciences, Shahrekord, Iran
- Fatemeh Kalroozi
- Department of Pediatric Nursing, College of Nursing, Aja University of Medical Sciences, Tehran, Iran
- Ali Manafi Anari
- Department of Pediatrics, School of Medicine, Ali Asghar Children's Hospital, Iran University of Medical Science, Tehran, Iran
- Erfan Jalalifar
- Student Research Committee, Tabriz University of Medical Sciences, Tabriz, Iran
- Arina Ansari
- Student Research Committee, North Khorasan University of Medical Sciences, Bojnurd, Iran
- Behzad H. Aski
- Department of Pediatrics, School of Medicine, Ali Asghar Children's Hospital, Iran University of Medical Science, Tehran, Iran
- Maryam Nezamzadeh
- Department of Critical Care Nursing, Faculty of Nursing, Aja University of Medical Sciences, Tehran, Iran
- Bardia Karim
- Student Research Committee, Babol University of Medical Sciences, Babol, Mazandaran, Iran

18
Hogg HDJ, Al-Zubaidy M, Keane PA, Hughes G, Beyer FR, Maniatopoulos G. Evaluating the translation of implementation science to clinical artificial intelligence: a bibliometric study of qualitative research. Frontiers in Health Services 2023; 3:1161822. [PMID: 37492632 PMCID: PMC10364639 DOI: 10.3389/frhs.2023.1161822]
Abstract
Introduction Whilst a theoretical basis for implementation research is seen as advantageous, there is little clarity over if and how the application of theories, models or frameworks (TMF) impact implementation outcomes. Clinical artificial intelligence (AI) continues to receive multi-stakeholder interest and investment, yet a significant implementation gap remains. This bibliometric study aims to measure and characterize TMF application in qualitative clinical AI research to identify opportunities to improve research practice and its impact on clinical AI implementation. Methods Qualitative research of stakeholder perspectives on clinical AI published between January 2014 and October 2022 was systematically identified. Eligible studies were characterized by their publication type, clinical and geographical context, type of clinical AI studied, data collection method, participants and application of any TMF. Each TMF applied by eligible studies, its justification and mode of application was characterized. Results Of 202 eligible studies, 70 (34.7%) applied a TMF. There was an 8-fold increase in the number of publications between 2014 and 2022 but no significant increase in the proportion applying TMFs. Of the 50 TMFs applied, 40 (80%) were only applied once, with the Technology Acceptance Model applied most frequently (n = 9). Seven TMFs were novel contributions embedded within an eligible study. A minority of studies justified TMF application (n = 51, 58.6%) and it was uncommon to discuss an alternative TMF or the limitations of the one selected (n = 11, 12.6%). The most common way in which a TMF was applied in eligible studies was data analysis (n = 44, 50.6%). Implementation guidelines or tools were explicitly referenced by 2 reports (1.0%). Conclusion TMFs have not been commonly applied in qualitative research of clinical AI.
When TMFs have been applied, there has been (i) little consensus on TMF selection, (ii) limited description of the selection rationale, and (iii) a lack of clarity over how TMFs inform research. We consider this to represent an opportunity to improve implementation science's translation to clinical AI research, and of clinical AI into practice, by promoting the rigor and frequency of TMF application. We recommend that the finite resources of the implementation science community be diverted toward increasing accessibility and engagement with theory-informed practices. The considered application of theories, models and frameworks (TMF) is thought to contribute to the impact of implementation science on the translation of innovations into real-world care. The frequency and nature of TMF use are yet to be described within digital health innovations, including the prominent field of clinical AI. A well-known implementation gap, coined the "AI chasm", continues to limit the impact of clinical AI on real-world care. From this bibliometric study of the frequency and quality of TMF use within qualitative clinical AI research, we found that TMFs are usually not applied, their selection varies widely between studies, and there is often no convincing rationale for their selection. Promoting the rigor and frequency of TMF use appears to present an opportunity to improve the translation of clinical AI into practice.
Affiliation(s)
- H. D. J. Hogg
- Faculty of Medical Sciences, Newcastle University, Newcastle Upon Tyne, United Kingdom
- The Royal Victoria Infirmary, Newcastle Upon Tyne Hospitals NHS Foundation Trust, Newcastle Upon Tyne, United Kingdom
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- M. Al-Zubaidy
- The Royal Victoria Infirmary, Newcastle Upon Tyne Hospitals NHS Foundation Trust, Newcastle Upon Tyne, United Kingdom
- P. A. Keane
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Institute of Ophthalmology, University College London, London, United Kingdom
- G. Hughes
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, United Kingdom
- University of Leicester School of Business, University of Leicester, Leicester, United Kingdom
- F. R. Beyer
- Evidence Synthesis Group, Population Health Sciences Institute, Newcastle University, Newcastle Upon Tyne, United Kingdom
- G. Maniatopoulos
- Faculty of Medical Sciences, Newcastle University, Newcastle Upon Tyne, United Kingdom
- University of Leicester School of Business, University of Leicester, Leicester, United Kingdom

19
Thirunavukarasu AJ, Hassan R, Mahmood S, Sanghera R, Barzangi K, El Mukashfi M, Shah S. Trialling a Large Language Model (ChatGPT) in General Practice With the Applied Knowledge Test: Observational Study Demonstrating Opportunities and Limitations in Primary Care. JMIR Medical Education 2023; 9:e46599. [PMID: 37083633 PMCID: PMC10163403 DOI: 10.2196/46599]
Abstract
BACKGROUND Large language models exhibiting human-level performance in specialized tasks are emerging; examples include Generative Pretrained Transformer 3.5, which underlies the processing of ChatGPT. Rigorous trials are required to understand the capabilities of emerging technology, so that innovation can be directed to benefit patients and practitioners. OBJECTIVE Here, we evaluated the strengths and weaknesses of ChatGPT in primary care using the Membership of the Royal College of General Practitioners Applied Knowledge Test (AKT) as a medium. METHODS AKT questions were sourced from a web-based question bank and 2 AKT practice papers. In total, 674 unique AKT questions were inputted to ChatGPT, with the model's answers recorded and compared to the correct answers provided by the Royal College of General Practitioners. Each question was inputted twice in separate ChatGPT sessions, with answers on repeated trials compared to gauge consistency. Subject difficulty was gauged by referring to examiners' reports from 2018 to 2022. Novel explanations from ChatGPT, defined as information provided that was not inputted within the question or multiple answer choices, were recorded. Performance was analyzed with respect to subject, difficulty, question source, and novel model outputs to explore ChatGPT's strengths and weaknesses. RESULTS Average overall performance of ChatGPT was 60.17%, which is below the mean passing mark of the last 2 years (70.42%). Accuracy differed between sources (P=.04 and .06). ChatGPT's performance varied with subject category (P=.02 and .02), but variation did not correlate with difficulty (Spearman ρ=-0.241 and -0.238; P=.19 and .20). The proclivity of ChatGPT to provide novel explanations did not affect accuracy (P>.99 and .23). CONCLUSIONS Large language models are approaching human expert-level performance, although further development is required to match the performance of qualified primary care physicians in the AKT.
Validated high-performance models may serve as assistants or autonomous clinical tools to ameliorate the general practice workforce crisis.
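The difficulty analysis above rests on a Spearman rank correlation between per-subject accuracy and examiner-reported difficulty. As a rough illustration of that method (not the study's actual code, and with made-up accuracy and difficulty values), a tie-aware Spearman ρ can be computed in pure Python:

```python
def _ranks(values):
    """Assign 1-based average ranks, sharing a rank across ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of tied positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-subject data: accuracy (%) vs. examiner-reported difficulty.
accuracy = [72, 65, 58, 61, 70, 55]
difficulty = [2.1, 2.8, 3.5, 2.6, 2.4, 3.9]
rho = spearman_rho(accuracy, difficulty)
```

A weak, negative ρ of the kind the authors report (-0.241 and -0.238, both non-significant) would indicate that question difficulty did not explain the model's errors.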
Affiliation(s)
- Refaat Hassan
- University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom
- Shathar Mahmood
- University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom
- Rohan Sanghera
- University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom
- Kara Barzangi
- University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom
- Sachin Shah
- Attenborough Surgery, Bushey Medical Centre, Bushey, United Kingdom
20
Frisinger A, Papachristou P. The voice of healthcare: introducing digital decision support systems into clinical practice - a qualitative study. BMC PRIMARY CARE 2023; 24:67. [PMID: 36907875 PMCID: PMC10008705 DOI: 10.1186/s12875-023-02024-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Accepted: 03/01/2023] [Indexed: 03/14/2023]
Abstract
BACKGROUND There is a need to accelerate digital transformation in healthcare to meet increasing needs and demands. The accuracy of medical digital diagnosis tools is improving. The introduction of new technology in healthcare can, however, be challenging, and it is unclear how it should be done to achieve the desired results. The aim of this study was to explore perceptions and experiences of introducing new information technology (IT) in a primary healthcare organisation, exemplified with a Clinical Decision Support System (CDSS) for malignant melanoma. METHODS A qualitative interview-based study was performed in Region Stockholm, Sweden, with fifteen medical doctors representing three different organisational levels: primary care physician, primary healthcare centre manager, and regional manager/chief medical officer. In addition, one software provider was included. Interview data were analysed according to content analysis. RESULTS One central theme, "Introduction of digital CDSS in primary healthcare requires a multidimensional perspective and handling", emerged from the analysis, along with seven main categories and thirty-three subcategories. Digital transformation proved key for current healthcare providers to stay relevant and competitive. However, healthcare represents a closed community: highly capable but short of time, and fostered to be sceptical of the new. Change therefore needs to bring true value and be championed by people with a medical background who can motivate the powerful frontline. CONCLUSIONS This qualitative study revealed structured information about what goes wrong and right, and what needs to be considered, when driving digital change in primary care organisations. The task is complex, and listening to the voice of healthcare is valuable for understanding the conditions that must be fulfilled when adopting new technology into a healthcare organisation. By considering the findings of this study, upcoming digital transformations can improve their success rate. The findings may also be used to develop a holistic approach or framework model, adapted to primary healthcare, that can support and accelerate the needed digitalisation of healthcare.
Affiliation(s)
- Ann Frisinger
- Study Programme in Medicine, Karolinska Institutet, Stockholm, Sweden.
- Panagiotis Papachristou
- Division of Family Medicine and Primary Care, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, SE-141 83, Stockholm, Sweden.
21
Stanley AL, Edwards TC, Jaere MD, Lex JR, Jones GG. An automated, web-based triage tool may optimise referral pathways in elective orthopaedic surgery: A proof-of-concept study. Digit Health 2023; 9:20552076231152177. [PMID: 36762026 PMCID: PMC9903022 DOI: 10.1177/20552076231152177] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Accepted: 01/03/2023] [Indexed: 01/28/2023] Open
Abstract
Introduction Knee pain is caused by various pathologies, making evaluation in primary care challenging. Consequently, there is an over-reliance on imaging such as radiographs and MRI. Electronic triage (e-triage) tools represent an innovative solution to this problem. The aims of this study were to establish the magnitude of unnecessary knee imaging prior to orthopaedic surgeon referral, and to ascertain whether an e-triage tool outperforms existing clinical pathways in recommending correct imaging. Methods Patients ≥18 years presenting with knee pain treated with arthroscopy or arthroplasty at a single academic hospital between 2015 and 2020 were retrospectively identified. The timing and appropriateness of imaging were assessed according to national guidelines and classified as 'necessary', 'unnecessary' or 'required MRI'. Based on an eDelphi consensus study, a symptom-based e-triage tool was developed and piloted to preliminarily diagnose five common knee pathologies and suggest appropriate imaging. Results 1462 patients were identified. 17.2% (n = 132) of arthroplasty patients received an 'unnecessary' MRI, and 27.6% (n = 192) of arthroscopy patients did not have a 'necessary' MRI, requiring follow-up. Forty-one patients trialled the e-triage pilot (mean age: 58.4 years, 58.5% female). Preliminary diagnoses were available for 33 patients. The e-triage tool correctly identified three of the four knee pathologies (one pathology did not present). 79.2% (n = 19) of participants would use the tool again. Conclusion A substantial number of knee pain patients receive incorrect imaging, incurring delays and unnecessary costs. A symptom-based e-triage tool was developed, with promising performance and user feedback. With refinement using larger datasets, this tool has the potential to improve wait times and referral quality, and to reduce costs.
Affiliation(s)
- Thomas C. Edwards
- Faculty of Medicine, Imperial College London, London, UK; MSk Lab, Imperial College London, London, UK
- Martin D. Jaere
- Faculty of Medicine, Imperial College London, London, UK; MSk Lab, Imperial College London, London, UK
- Johnathan R. Lex
- Division of Orthopaedic Surgery, Department of Surgery, University of Toronto, Toronto, Canada
- Gareth G. Jones
- Faculty of Medicine, Imperial College London, London, UK; MSk Lab, Imperial College London, London, UK
- Correspondence: Gareth G. Jones, MSk Lab, Sir Michael Uren Hub, Imperial College London, White City Campus, 86 Wood Lane, London W12 0BZ, UK.
22
Čartolovni A, Malešević A, Poslon L. Critical analysis of the AI impact on the patient-physician relationship: A multi-stakeholder qualitative study. Digit Health 2023; 9:20552076231220833. [PMID: 38130798 PMCID: PMC10734361 DOI: 10.1177/20552076231220833] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2023] [Accepted: 11/29/2023] [Indexed: 12/23/2023] Open
Abstract
Objective This qualitative study presents the aspirations, expectations and critical analysis of the potential for artificial intelligence (AI) to transform the patient-physician relationship, according to multi-stakeholder insight. Methods The study was conducted from June to December 2021, using an anticipatory ethics approach and the sociology of expectations as theoretical frameworks. It focused on three groups of stakeholders directly involved in the adoption of AI in medicine (n = 38): physicians (n = 12), patients (n = 15) and healthcare managers (n = 11). Results Patients made up 40% of the interview sample (15/38), physicians 31% (12/38) and healthcare managers 29% (11/38). The findings highlight the following: (1) the impact of AI on fundamental aspects of the patient-physician relationship and the underlying importance of a synergistic relationship between the physician and AI; (2) the potential for AI to alleviate workload and reduce administrative burden by saving time and putting the patient at the centre of the caring process; and (3) the potential risk to the holistic approach posed by neglecting humanness in healthcare. Conclusions This multi-stakeholder qualitative study, which focused on the micro-level of healthcare decision-making, sheds new light on the impact of AI on healthcare and the potential transformation of the patient-physician relationship. The results highlight the need for a critically aware approach to implementing AI in healthcare, applying critical thinking and reasoning. It is important not to rely solely upon the recommendations of AI while neglecting clinical reasoning and physicians' knowledge of best clinical practices. Instead, it is vital that the core values of the existing patient-physician relationship, such as trust and honesty conveyed through open and sincere communication, are preserved.
Affiliation(s)
- Anto Čartolovni
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
- School of Medicine, Catholic University of Croatia, Zagreb, Croatia
- Anamaria Malešević
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
- Luka Poslon
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
23
Wewetzer L, Held LA, Goetz K, Steinhäuser J. Determinants of the implementation of artificial intelligence-based screening for diabetic retinopathy-a cross-sectional study with general practitioners in Germany. Digit Health 2023; 9:20552076231176644. [PMID: 37274367 PMCID: PMC10233602 DOI: 10.1177/20552076231176644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Accepted: 05/02/2023] [Indexed: 06/06/2023] Open
Abstract
Objective Diabetic retinopathy (DR) may lead to irreversible damage to the eye and cause blindness if diagnosed only in its advanced stages. Artificial intelligence (AI) may support screening and contribute to a timely diagnosis. The aim of this study was to evaluate factors that might influence the success of implementing AI-supported devices for DR screening in general practice. Methods A questionnaire with modules on attitudes toward digital solutions, technical factors, perceived patient perspectives, and sociodemographic data was constructed, and 2100 general practitioners (GPs) in Germany were invited to participate via a personal letter. Results Two hundred and nine physicians participated in the survey (10% response rate; mean age = 54 years; 46% women). Acquisition costs (mean = 1.37), remuneration (mean = 1.46), and running costs (mean = 1.40) were considered particularly relevant in the context of AI-based screening tools. GPs indicated that a mean of €27.00 (SD = 19) would be an appropriate reimbursement for an AI-based DR screening in their practice. Less relevant factors were the availability of a smartphone used in the practice (mean = 2.53) and the time until the examination result was available (mean = 2.29). Important technical factors were the practicability of the device (mean = 1.27), unproblematic installation of any necessary software (mean = 1.34), and integrability into the practice information system (mean = 1.44). Considering patient welfare, physicians rated the accuracy of the examination, the omission of pupil dilation, and the duration of the examination as the most important factors. Participants ranked broadening the scope of care, strengthening the primary care (PC) range of services, and signalling a modern medical practice as the factors most likely to make an AI-based screening tool attractive for their practice. Conclusions These findings serve as a basis for successful implementation of AI-assisted screening devices in PC and might facilitate early screening for ophthalmological diseases in general practice. The most relevant barriers to be overcome for successful implementation of such tools include clarification of costs and reimbursement policies.
Affiliation(s)
- Larisa Wewetzer
- Institute for Family Medicine, University Medical Center Schleswig-Holstein, Lübeck Campus, Lübeck, Germany
- Linda A. Held
- Institute for Family Medicine, University Medical Center Schleswig-Holstein, Lübeck Campus, Lübeck, Germany
- Katja Goetz
- Institute for Family Medicine, University Medical Center Schleswig-Holstein, Lübeck Campus, Lübeck, Germany
- Jost Steinhäuser
- Institute for Family Medicine, University Medical Center Schleswig-Holstein, Lübeck Campus, Lübeck, Germany
24
D'Hondt E, Ashby TJ, Chakroun I, Koninckx T, Wuyts R. Identifying and evaluating barriers for the implementation of machine learning in the intensive care unit. COMMUNICATIONS MEDICINE 2022; 2:162. [PMID: 36543940 PMCID: PMC9768782 DOI: 10.1038/s43856-022-00225-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2021] [Accepted: 11/29/2022] [Indexed: 12/24/2022] Open
Abstract
BACKGROUND Despite their apparent promise and the availability of numerous examples in the literature, machine learning models are rarely used in practice in intensive care units (ICUs). This mismatch suggests that there are poorly understood barriers preventing uptake, which we aim to identify. METHODS We begin with a qualitative study comprising 29 interviews with 40 ICU, hospital and MedTech company staff members. As a follow-up, we attempt to quantify some of the technical issues raised. For the experiments, we selected two models based on criteria such as medical relevance. Using these models, we measure the loss of performance in predictive models due to drift over time, changes in available patient features, scarcity of data, and deployment of a model in a context different from the one it was built in. RESULTS The qualitative study confirms our assumptions about the potential of AI-driven analytics for patient care, and shows the prevalence and type of technical blocking factors responsible for its slow uptake. The experiments confirm that each of these issues can cause substantial loss of predictive model performance, depending on the model and the issue. CONCLUSIONS Based on the qualitative study and quantitative experiments, we conclude that more research on practical solutions to enable AI-driven innovation in intensive care units is needed. Furthermore, the generally poor situation with respect to public, usable implementations of predictive models appears to limit both the scientific repeatability of the underlying research and its transfer into practice.
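The drift experiment described in this abstract can be illustrated with a toy sketch (not the authors' models or data): fit a simple one-feature threshold classifier on an early time window, then re-evaluate it on later windows in which the label-generating boundary has shifted, and observe the accuracy decay. The feature grid and drift schedule below are invented for demonstration.

```python
def fit_threshold(xs, ys):
    """Fit a one-feature classifier: the midpoint between the class means."""
    pos = [x for x, y in zip(xs, ys) if y]
    neg = [x for x, y in zip(xs, ys) if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, xs, ys):
    """Fraction of points whose predicted label (x > threshold) matches y."""
    preds = [x > threshold for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

# Synthetic "patient feature" grid; the true decision boundary drifts by
# +2 units per time window, mimicking concept drift in ICU data over time.
xs = list(range(21))
windows = [[x > 10 + 2 * t for x in xs] for t in range(4)]  # labels per window

model_threshold = fit_threshold(xs, windows[0])  # trained on window 0 only
acc_over_time = [accuracy(model_threshold, xs, ys) for ys in windows]
# Accuracy degrades on later windows as the boundary drifts away from the
# one the model was fitted on.
```

The same train-early/evaluate-late protocol applies to any real predictive model; the toy classifier merely makes the performance loss visible without clinical data.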
Affiliation(s)
- Roel Wuyts
- Exascience Life Lab, imec, Leuven, Belgium.
25
Rashid A. Yonder: Primary aldosteronism, artificial intelligence, irritable bowel syndrome, and financial toxicity. Br J Gen Pract 2022; 72:534. [PMID: 36302672 PMCID: PMC9591093 DOI: 10.3399/bjgp22x721085] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
26
Chen M, Zhang B, Cai Z, Seery S, Gonzalez MJ, Ali NM, Ren R, Qiao Y, Xue P, Jiang Y. Acceptance of clinical artificial intelligence among physicians and medical students: A systematic review with cross-sectional survey. Front Med (Lausanne) 2022; 9:990604. [PMID: 36117979 PMCID: PMC9472134 DOI: 10.3389/fmed.2022.990604] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2022] [Accepted: 08/01/2022] [Indexed: 11/13/2022] Open
Abstract
Background Artificial intelligence (AI) needs to be accepted and understood by physicians and medical students, but few studies have systematically assessed their attitudes. We investigated clinical AI acceptance among physicians and medical students around the world to provide implementation guidance. Materials and methods We conducted a two-stage study, beginning with a foundational systematic review of physician and medical student acceptance of clinical AI. This enabled us to design a suitable web-based questionnaire, which was then distributed among practitioners and trainees around the world. Results Sixty studies were included in the systematic review, and 758 respondents from 39 countries completed the online questionnaire. Five (62.50%) of eight studies reported 65% or higher awareness of the application of clinical AI. However, only 10–30% of respondents had actually used AI, and 26 (74.28%) of 35 studies suggested a lack of AI knowledge. Our questionnaire uncovered a 38% awareness rate and a 20% utility rate for clinical AI, and 53% of respondents lacked basic knowledge of clinical AI. Forty-five studies addressed attitudes toward clinical AI; in 38 (84.44%) of them, over 60% of respondents were positive about AI, although they were also concerned about the potential for unpredictable, incorrect results. Seventy-seven percent were optimistic about the prospects of clinical AI. Support for the statement that AI could replace physicians ranged from 6% to 78% across the 40 studies that mentioned this topic. Five studies recommended that efforts be made to increase collaboration. In our questionnaire, 68% disagreed that AI would become a surrogate physician but believed it should assist in clinical decision-making. Participants with different backgrounds, levels of experience, and countries of origin held similar but subtly different attitudes. Conclusion Most physicians and medical students appear aware of the increasing application of clinical AI but lack practical experience and related knowledge. Overall, participants hold positive but reserved attitudes about AI. In spite of mixed opinions about clinical AI becoming a surrogate physician, there was a consensus that collaboration between the two should be strengthened. Further education should be conducted to alleviate the anxieties associated with change and the adoption of new technologies.
Affiliation(s)
- Mingyang Chen
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Bo Zhang
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ziting Cai
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Samuel Seery
- Faculty of Health and Medicine, Division of Health Research, Lancaster University, Lancaster, United Kingdom
- Nasra M. Ali
- The First Affiliated Hospital, Dalian Medical University, Dalian, China
- Ran Ren
- Global Health Research Center, Dalian Medical University, Dalian, China
- Youlin Qiao
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- *Correspondence: Youlin Qiao
- Peng Xue
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yu Jiang
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China