1. Le Lagadec D, Kornhaber R, Cleary M. Navigating the impact of artificial intelligence on our healthcare workforce. J Clin Nurs 2024;33:2369-2370. PMID: 38661118. DOI: 10.1111/jocn.17191.
Affiliation(s)
- Danielle Le Lagadec
- School of Nursing, Midwifery and Social Sciences, CQUniversity, Bundaberg, Queensland, Australia
- Rachel Kornhaber
- School of Nursing, Paramedicine and Healthcare Sciences, Charles Sturt University, Bathurst, New South Wales, Australia
- Michelle Cleary
- School of Nursing, Midwifery and Social Sciences, CQUniversity, Sydney, New South Wales, Australia
2. Nilsen P, Sundemo D, Heintz F, Neher M, Nygren J, Svedberg P, Petersson L. Towards evidence-based practice 2.0: leveraging artificial intelligence in healthcare. Front Health Serv 2024;4:1368030. PMID: 38919828. PMCID: PMC11196845. DOI: 10.3389/frhs.2024.1368030.
Abstract
Background Evidence-based practice (EBP) involves making clinical decisions based on three sources of information: evidence, clinical experience and patient preferences. Despite the popularization of EBP, research has shown that there are many barriers to achieving the goals of the EBP model. The use of artificial intelligence (AI) in healthcare has been proposed as a means to improve clinical decision-making. The aim of this paper was to pinpoint key challenges pertaining to the three pillars of EBP and to investigate the potential of AI in surmounting these challenges and contributing to a more evidence-based healthcare practice. To achieve this, we conducted a selective review of the literature on EBP and the integration of AI in healthcare.
Challenges with the three components of EBP Clinical decision-making in line with the EBP model presents several challenges. The availability and existence of robust evidence sometimes pose limitations due to slow generation and dissemination processes, as well as the scarcity of high-quality evidence. Direct application of evidence is not always viable because studies often involve patient groups distinct from those encountered in routine healthcare. Clinicians need to rely on their clinical experience to interpret the relevance of evidence and contextualize it within the unique needs of their patients. Moreover, clinical decision-making may be influenced by cognitive and implicit biases. Achieving patient involvement and shared decision-making between clinicians and patients remains challenging in routine healthcare practice due to factors such as low levels of health literacy among patients and their reluctance to participate actively, barriers rooted in clinicians' attitudes, scepticism towards patient knowledge, ineffective communication strategies, busy healthcare environments and limited resources.
AI assistance for the three components of EBP AI presents a promising solution to several challenges inherent in the research process, from conducting studies, generating evidence and synthesizing findings, to disseminating crucial information to clinicians and implementing these findings in routine practice. AI systems have a distinct advantage over human clinicians in processing specific types of data and information, and the use of AI has shown great promise in areas such as image analysis. AI also presents promising avenues to enhance patient engagement by saving time for clinicians, and it has the potential to increase patient autonomy, although there is a lack of research on this issue.
Conclusion This review underscores AI's potential to augment evidence-based healthcare practices, potentially marking the emergence of EBP 2.0. However, there are also uncertainties regarding how AI will contribute to more evidence-based healthcare. Hence, empirical research is essential to validate and substantiate various aspects of AI use in healthcare.
Affiliation(s)
- Per Nilsen
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- David Sundemo
- School of Public Health and Community Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Lerum Närhälsan Primary Healthcare Center, Lerum, Sweden
- Fredrik Heintz
- Department of Computer and Information Science, Linköping University, Linköping, Sweden
- Margit Neher
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Jens Nygren
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Petra Svedberg
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Lena Petersson
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
3. McKee M, van Schalkwyk MC, Greenley R. Meeting the challenges of the 21st century: the fundamental importance of trust for transformation. Isr J Health Policy Res 2024;13:21. PMID: 38650050. PMCID: PMC11036603. DOI: 10.1186/s13584-024-00611-1. Open access.
Abstract
BACKGROUND This paper is one of a collection on challenges facing health systems in the future. One obvious challenge is how to transform to meet changing health needs and take advantage of emerging treatment opportunities. However, we argue that effective transformations are only possible if there is trust in the health system. MAIN BODY We focus on three of the many relationships that require trust in health systems: trust by patients and the public, by health workers, and by politicians. Unfortunately, we are seeing a concerning loss of trust in these relationships and, for too long, the importance of trust to health policymaking and health system functioning has been overlooked and undervalued. We contend that trust must be given the attention, time, and resources it warrants as an indispensable element of any health system. In this paper, we review why trust is so important in health systems, how trust has been thought about by scholars from different disciplines, what we know about its place in health systems, and how we can give it greater prominence in research and policy. CONCLUSION Trust is essential if health systems are to meet the challenges of the 21st century, but it is too often overlooked or, in some cases, undermined.
Affiliation(s)
- Martin McKee
- Department of Health Services Research & Policy, London School of Hygiene & Tropical Medicine, 15-17 Tavistock Place, London, WC1H 9SH, UK
- May CI van Schalkwyk
- Department of Health Services Research & Policy, London School of Hygiene & Tropical Medicine, 15-17 Tavistock Place, London, WC1H 9SH, UK
- Rachel Greenley
- Department of Health Services Research & Policy, London School of Hygiene & Tropical Medicine, 15-17 Tavistock Place, London, WC1H 9SH, UK
4. Shahab O, El Kurdi B, Shaukat A, Nadkarni G, Soroush A. Large language models: a primer and gastroenterology applications. Therap Adv Gastroenterol 2024;17:17562848241227031. PMID: 38390029. PMCID: PMC10883116. DOI: 10.1177/17562848241227031. Open access.
Abstract
Over the past year, the emergence of state-of-the-art large language models (LLMs) in tools like ChatGPT has ushered in a rapid acceleration in artificial intelligence (AI) innovation. These powerful AI models can generate tailored and high-quality text responses to instructions and questions without the need for labor-intensive task-specific training data or complex software engineering. As the technology continues to mature, LLMs hold immense potential for transforming clinical workflows, enhancing patient outcomes, improving medical education, and optimizing medical research. In this review, we provide a practical discussion of LLMs, tailored to gastroenterologists. We highlight the technical foundations of LLMs, emphasizing their key strengths and limitations as well as how to interact with them safely and effectively. We discuss some potential LLM use cases for clinical gastroenterology practice, education, and research. Finally, we review critical barriers to implementation and ongoing work to address these issues. This review aims to equip gastroenterologists with a foundational understanding of LLMs to facilitate a more active clinician role in the development and implementation of this rapidly emerging technology.
Affiliation(s)
- Omer Shahab
- Division of Gastroenterology, Department of Medicine, VHC Health, Arlington, VA, USA
- Bara El Kurdi
- Division of Gastroenterology and Hepatology, Department of Medicine, Virginia Tech Carilion School of Medicine, Roanoke, VA, USA
- Aasma Shaukat
- Division of Gastroenterology, Department of Medicine, NYU Grossman School of Medicine, New York, NY, USA
- VA New York Harbor Healthcare System, New York, NY, USA
- Girish Nadkarni
- Division of Data-Driven and Digital Medicine, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ali Soroush
- Division of Data-Driven and Digital Medicine, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Place, New York, NY 10029-6574, USA
- The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Henry D. Janowitz Division of Gastroenterology, Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
5. Rony MKK, Parvin MR, Wahiduzzaman M, Debnath M, Bala SD, Kayesh I. "I Wonder if my Years of Training and Expertise Will be Devalued by Machines": Concerns About the Replacement of Medical Professionals by Artificial Intelligence. SAGE Open Nurs 2024;10:23779608241245220. PMID: 38596508. PMCID: PMC11003342. DOI: 10.1177/23779608241245220. Open access.
Abstract
Background The rapid integration of artificial intelligence (AI) into healthcare has raised concerns among healthcare professionals about the potential displacement of human medical professionals by AI technologies. However, healthcare workers' apprehensions and perspectives regarding their potential substitution by AI remain largely unknown. Objective This qualitative research aimed to investigate healthcare workers' concerns about artificial intelligence replacing medical professionals. Methods A descriptive and exploratory research design was employed, drawing upon the Technology Acceptance Model (TAM), Technology Threat Avoidance Theory, and Sociotechnical Systems Theory as theoretical frameworks. Participants were purposively sampled from various healthcare settings, representing a diverse range of roles and backgrounds. Data were collected through individual interviews and focus group discussions, followed by thematic analysis. Results The analysis revealed seven key themes reflecting healthcare workers' concerns: job security and economic concerns; trust and acceptance of AI; ethical and moral dilemmas; quality of patient care; workforce role redefinition and training; patient-provider relationships; and healthcare policy and regulation. Conclusions This research underscores the multifaceted concerns of healthcare workers regarding the increasing role of AI in healthcare. Addressing job security, fostering trust, resolving ethical dilemmas, and redefining workforce roles are crucial factors in the successful integration of AI into healthcare. Healthcare policy and regulation must be developed to guide this transformation while maintaining the quality of patient care and preserving patient-provider relationships. The study findings offer insights for policymakers and healthcare institutions to navigate the evolving landscape of AI in healthcare while addressing the concerns of healthcare professionals.
Affiliation(s)
- Moustaq Karim Khan Rony
- Master of Public Health, Bangladesh Open University, Gazipur, Bangladesh
- Institute of Social Welfare and Research, University of Dhaka, Dhaka, Bangladesh
- Mst. Rina Parvin
- Armed Forces Nursing Service (AFNS Officer), Major, Bangladesh Army, Combined Military Hospital, Dhaka, Bangladesh
- Md. Wahiduzzaman
- School of Medical Sciences, Shahjalal University of Science and Technology, Sylhet, Bangladesh
- Mitun Debnath
- Master of Public Health, National Institute of Preventive and Social Medicine, Dhaka, Bangladesh
- Shuvashish Das Bala
- College of Nursing, International University of Business Agriculture and Technology, Dhaka, Bangladesh
- Ibne Kayesh
- Institute of Social Welfare and Research, University of Dhaka, Dhaka, Bangladesh
- Faculty of Graduate Studies, University of Kelaniya, Colombo, Sri Lanka
6. Fazakarley CA, Breen M, Leeson P, Thompson B, Williamson V. Experiences of using artificial intelligence in healthcare: a qualitative study of UK clinician and key stakeholder perspectives. BMJ Open 2023;13:e076950. PMID: 38081671. PMCID: PMC10729128. DOI: 10.1136/bmjopen-2023-076950. Open access.
Abstract
OBJECTIVES Artificial intelligence (AI) is a rapidly developing field in healthcare, with tools being developed across various specialties to support healthcare professionals and reduce workloads. It is important to understand the experiences of professionals working in healthcare to ensure that future AI tools are acceptable and effectively implemented. The aim of this study was to gain an in-depth understanding of the experiences and perceptions of UK healthcare workers and other key stakeholders about the use of AI in the National Health Service (NHS). DESIGN A qualitative study using semistructured interviews conducted remotely via MS Teams. Thematic analysis was carried out. SETTING NHS and UK higher education institutes. PARTICIPANTS Thirteen participants were recruited, including clinical and non-clinical participants working for the NHS and researchers working to develop AI tools for healthcare settings. RESULTS Four core themes were identified: positive perceptions of AI; potential barriers to using AI in healthcare; concerns regarding AI use; and steps needed to ensure the acceptability of future AI tools. Overall, we found that those working in healthcare were generally open to the use of AI and expected it to have many benefits for patients and to facilitate access to care. However, concerns were raised regarding the security of patient data, the potential for misdiagnosis, and the possibility that AI could increase the burden on already strained healthcare staff. CONCLUSION This study found that healthcare staff are willing to engage with AI research and incorporate AI tools into care pathways. Going forward, the NHS and AI developers will need to collaborate closely to ensure that future tools are suitable for their intended use and do not negatively impact workloads or patient trust. Future AI studies should continue to incorporate the views of key stakeholders to improve tool acceptability. TRIAL REGISTRATION NUMBER NCT05028179; ISRCTN15113915; IRAS ref: 293515.
Affiliation(s)
- Maria Breen
- School of Psychology & Clinical Language Sciences, University of Reading, Reading, UK
- Breen Clinical Research, London, UK
- Paul Leeson
- Division of Cardiovascular Medicine, University of Oxford, Oxford, UK
- Victoria Williamson
- King's College London, London, UK
- Experimental Psychology, University of Oxford, Oxford, UK
7. Vo V, Chen G, Aquino YSJ, Carter SM, Do QN, Woode ME. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc Sci Med 2023;338:116357. PMID: 37949020. DOI: 10.1016/j.socscimed.2023.116357.
Abstract
INTRODUCTION Despite the proliferation of artificial intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically review the literature on attitudes towards the use of AI in healthcare from the perspectives of patients, the general public and health professionals to understand these issues from multiple viewpoints. METHODOLOGY A search for original research articles using qualitative, quantitative, and mixed methods published between 1 January 2001 and 24 August 2021 was conducted on six bibliographic databases. Data were extracted and classified into themes representing views on: (i) knowledge and familiarity of AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) the human-AI relationship. RESULTS The search identified 7,490 records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public and health professionals generally had a positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and greater burnout from the need to acquire AI-related skills. While patients had mixed opinions on whether healthcare workers would suffer job losses due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinical involvement in the development of AI was emphasised.
To help implement AI in healthcare successfully, most participants envisioned that an investment in training and education campaigns was necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients exhibited a willingness to share anonymised data for AI development, there remained concerns about sharing data with insurance or technology companies. One key question under this theme was who should be held accountable in the case of adverse events arising from the use of AI. CONCLUSIONS While attitudes and preferences toward AI use in healthcare remain largely positive, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues and look at the translation of legislation and guidelines into practice to ensure fairness, accountability, transparency, and ethics in AI.
Affiliation(s)
- Vinh Vo
- Centre for Health Economics, Monash University, Australia
- Gang Chen
- Centre for Health Economics, Monash University, Australia
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Australia
- Quynh Nga Do
- Department of Economics, Monash University, Australia
- Maame Esi Woode
- Centre for Health Economics, Monash University, Australia
- Monash Data Futures Research Institute, Australia
8. Schulz PJ, Lwin MO, Kee KM, Goh WWB, Lam TYT, Sung JJY. Modeling the influence of attitudes, trust, and beliefs on endoscopists' acceptance of artificial intelligence applications in medical practice. Front Public Health 2023;11:1301563. PMID: 38089040. PMCID: PMC10715310. DOI: 10.3389/fpubh.2023.1301563. Open access.
Abstract
Introduction The potential for deployment of artificial intelligence (AI) technologies in various fields of medicine is vast, yet acceptance of AI amongst clinicians has been patchy. This research therefore examines the role of antecedents, namely trust, attitude, and beliefs, in driving AI acceptance in clinical practice. Methods We used online surveys to gather data from clinicians in the field of gastroenterology. Results A total of 164 participants responded to the survey. Participants had a mean age of 44.49 (SD = 9.65). Most participants were male (n = 116, 70.30%) and specialized in gastroenterology (n = 153, 92.73%). Based on the results collected, we proposed and tested a model of AI acceptance in medical practice. Our findings showed that while the proposed drivers had a positive impact on the acceptance of AI tools, not all effects were direct. Trust and belief were found to fully mediate the effects of attitude on AI acceptance by clinicians. Discussion The role of trust and beliefs as primary mediators of the acceptance of AI in medical practice suggests that these should be areas of focus in AI education, engagement and training. This has implications for how AI systems can gain greater clinician acceptance, engendering greater trust and adoption amongst public health systems and professional networks, which in turn would shape how populations interface with AI. Implications for policy and practice, as well as future research in this nascent field, are discussed.
Affiliation(s)
- Peter J. Schulz
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
- May O. Lwin
- Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
- Kalya M. Kee
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore, Singapore
- Wilson W. B. Goh
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- School of Biological Sciences, Nanyang Technological University, Singapore, Singapore
- Center for Biomedical Informatics, Nanyang Technological University, Singapore, Singapore
- Thomas Y. T. Lam
- Faculty of Medicine, Institute of Digestive Diseases, The Chinese University of Hong Kong, Hong Kong, China
- Joseph J. Y. Sung
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
9. Li LT, Haley LC, Boyd AK, Bernstam EV. Technical/Algorithm, Stakeholder, and Society (TASS) barriers to the application of artificial intelligence in medicine: A systematic review. J Biomed Inform 2023;147:104531. PMID: 37884177. DOI: 10.1016/j.jbi.2023.104531.
Abstract
INTRODUCTION The use of artificial intelligence (AI), particularly machine learning and predictive analytics, has shown great promise in healthcare. Despite its strong potential, it has seen limited use in healthcare settings. In this systematic review, we aim to determine the main barriers to successful implementation of AI in healthcare and discuss potential ways to overcome these challenges. METHODS We conducted a literature search in PubMed (1/1/2001-1/1/2023). The search was restricted to publications in the English language and human study subjects. We excluded articles that did not discuss AI, machine learning, predictive analytics, or barriers to the use of these techniques in healthcare. Using grounded theory methodology, we abstracted concepts to identify major barriers to AI use in medicine. RESULTS We identified a total of 2,382 articles. After reviewing the 306 included papers, we developed 19 major themes, which we categorized into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). These themes were: lack of explainability; need for validation protocols; need for standards for interoperability; need for reporting guidelines; need for standardization of performance metrics; lack of a plan for updating the algorithm; job loss; skills loss; workflow challenges; loss of patient autonomy and consent; disturbance of the patient-clinician relationship; lack of trust in AI; logistical challenges; lack of a strategic plan; lack of cost-effectiveness analysis and proof of efficacy; privacy; liability; bias and social justice; and education. CONCLUSION We identified 19 major barriers to the use of AI in healthcare and categorized them into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). Future studies should expand on barriers in pediatric care and focus on developing clearly defined protocols to overcome these barriers.
Affiliation(s)
- Linda T Li
- Department of Surgery, Division of Pediatric Surgery, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY 10029, United States; McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States
- Lauren C Haley
- McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
- Alexandra K Boyd
- McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
- Elmer V Bernstam
- McWilliams School of Biomedical Informatics at UT Health Houston, 7000 Fannin St, Suite 600, Houston, TX 77030, United States; McGovern Medical School at the University of Texas Health Science Center at Houston, 6431 Fannin St, Houston, TX 77030, United States
10. Aquino YSJ, Carter SM, Houssami N, Braunack-Mayer A, Win KT, Degeling C, Wang L, Rogers WA. Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives. J Med Ethics 2023:jme-2022-108850. PMID: 36823101. DOI: 10.1136/jme-2022-108850.
Abstract
BACKGROUND There is growing concern about artificial intelligence (AI) applications in healthcare that can disadvantage already under-represented and marginalised groups (eg, based on gender or race). OBJECTIVES Our objectives are to canvass the range of strategies stakeholders endorse in attempting to mitigate algorithmic bias, and to consider the ethical question of responsibility for algorithmic bias. METHODOLOGY The study involves in-depth, semistructured interviews with healthcare workers, screening programme managers, consumer health representatives, regulators, data scientists and developers. RESULTS Findings reveal considerably divergent views on three key issues. First, views on whether bias is a problem in healthcare AI varied, with most participants agreeing that bias is a problem (which we call the bias-critical view), a small number believing the opposite (the bias-denial view), and some arguing that the benefits of AI outweigh any harms or wrongs arising from the bias problem (the bias-apologist view). Second, there was disagreement about the strategies needed to mitigate bias, and about who is responsible for such strategies. Finally, there were divergent views on whether to include or exclude sociocultural identifiers (eg, race, ethnicity or gender-diverse identities) in the development of AI as a way to mitigate bias. CONCLUSION/SIGNIFICANCE Based on the views of participants, we set out responses that stakeholders might pursue, including greater interdisciplinary collaboration, tailored stakeholder engagement activities, empirical studies to understand algorithmic bias, and strategies to modify dominant approaches in AI development such as the use of participatory methods, and increased diversity and inclusion in research teams and research participant recruitment and selection.
Affiliation(s)
- Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
- Nehmat Houssami
- School of Public Health, The University of Sydney, Sydney, New South Wales, Australia
- The Daffodil Centre, Sydney, New South Wales, Australia
- Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
- Khin Than Win
- Centre for Persuasive Technology and Society, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, New South Wales, Australia
- Chris Degeling
- Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
- Lei Wang
- Centre for Artificial Intelligence, School of Computing and Information Technology, University of Wollongong, Wollongong, New South Wales, Australia
- Wendy A Rogers
- Department of Philosophy and School of Medicine, Macquarie University, Sydney, New South Wales, Australia
11. Carter SM, Carolan L, Saint James Aquino Y, Frazer H, Rogers WA, Hall J, Degeling C, Braunack-Mayer A, Houssami N. Australian women's judgements about using artificial intelligence to read mammograms in breast cancer screening. Digit Health 2023;9:20552076231191057. PMID: 37559826. PMCID: PMC10408316. DOI: 10.1177/20552076231191057. Open access.
Abstract
Objective Mammographic screening for breast cancer is an early use case for artificial intelligence (AI) in healthcare. This is an active area of research, mostly focused on the development and evaluation of individual algorithms. A growing normative literature argues that AI systems should reflect human values, but it is unclear what this requires in specific AI implementation scenarios. Our objective was to understand women's values regarding the use of AI to read mammograms in breast cancer screening. Methods We ran eight online discussion groups with a total of 50 women, focused on their expectations and normative judgements regarding the use of AI in breast screening. Results Although women were positive about the potential of breast screening AI, they argued strongly that humans must remain as central actors in breast screening systems and consistently expressed high expectations of the performance of breast screening AI. Women expected clear lines of responsibility for decision-making, to be able to contest decisions, and for AI to perform equally well for all programme participants. Women often imagined both that AI might replace radiographers and that AI implementation might allow more women to be screened: screening programmes will need to communicate carefully about these issues. Conclusions To meet women's expectations, screening programmes should delay implementation until there is strong evidence that the use of AI systems improves screening performance, should ensure that human expertise and responsibility remain central in screening programmes, and should avoid using AI in ways that exacerbate inequities.
Affiliation(s)
- Stacy M Carter
- Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
| | - Lucy Carolan
- Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
| | - Yves Saint James Aquino
- Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
| | - Helen Frazer
- St Vincent's Hospital BreastScreen, BreastScreen Victoria, Fitzroy, Victoria, Australia
| | - Wendy A Rogers
- Philosophy Department and School of Medicine, Macquarie University, Sydney, NSW, Australia
| | - Julie Hall
- Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
| | - Chris Degeling
- Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
| | - Annette Braunack-Mayer
- Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health & Society, University of Wollongong, Wollongong, NSW, Australia
| | - Nehmat Houssami
- Daffodil Centre, University of Sydney, Joint Venture with Cancer Council NSW, Sydney, NSW, Australia
- Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
12
Čartolovni A, Malešević A, Poslon L. Critical analysis of the AI impact on the patient-physician relationship: A multi-stakeholder qualitative study. Digit Health 2023; 9:20552076231220833. [PMID: 38130798 PMCID: PMC10734361 DOI: 10.1177/20552076231220833] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2023] [Accepted: 11/29/2023] [Indexed: 12/23/2023] Open
Abstract
Objective This qualitative study presents the aspirations, expectations and critical analysis of the potential for artificial intelligence (AI) to transform the patient-physician relationship, drawing on multi-stakeholder insight. Methods The study was conducted from June to December 2021, using an anticipatory ethics approach and the sociology of expectations as theoretical frameworks. It focused on three groups of stakeholders directly involved in the adoption of AI in medicine (n = 38): physicians (n = 12), patients (n = 15) and healthcare managers (n = 11). Results Patients made up 40% of the interviewed sample (15/38), physicians 31% (12/38) and healthcare managers 29% (11/38). The findings highlight: (1) the impact of AI on fundamental aspects of the patient-physician relationship and the underlying importance of a synergistic relationship between the physician and AI; (2) the potential for AI to alleviate workload and reduce administrative burden, saving time and putting the patient at the centre of the caring process; and (3) the potential risk to the holistic approach of neglecting humanness in healthcare. Conclusions This multi-stakeholder qualitative study, focused on the micro-level of healthcare decision-making, sheds new light on the impact of AI on healthcare and the potential transformation of the patient-physician relationship. The results highlight the need for a critically aware approach to implementing AI in healthcare, grounded in critical thinking and reasoning: clinicians should not rely solely on AI recommendations while neglecting clinical reasoning and their knowledge of best clinical practices. Instead, it is vital that the core values of the existing patient-physician relationship, such as trust and honesty conveyed through open and sincere communication, are preserved.
Affiliation(s)
- Anto Čartolovni
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
- School of Medicine, Catholic University of Croatia, Zagreb, Croatia
| | - Anamaria Malešević
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
| | - Luka Poslon
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia