1. Hussain SA, Bresnahan M, Zhuang J. The bias algorithm: how AI in healthcare exacerbates ethnic and racial disparities - a scoping review. Ethn Health 2024:1-18. PMID: 39488857. DOI: 10.1080/13557858.2024.2422848.
Abstract
This scoping review examined racial and ethnic bias in artificial intelligence health algorithms (AIHA), the role of stakeholders in oversight, and the consequences of AIHA for health equity. Following the PRISMA-ScR guidelines, databases were searched for publications between 2020 and 2024 using the terms "racial and ethnic bias in health algorithms", resulting in a final sample of 23 sources. Suggestions for mitigating algorithmic bias were compiled and evaluated, roles played by stakeholders were identified, and governance and stewardship plans for AIHA were examined. While AIHA represent a significant breakthrough in predictive analytics and treatment optimization, regularly outperforming humans in diagnostic precision and accuracy, they also present serious challenges to patient privacy, data security, institutional transparency, and health equity. Evidence from extant sources, including those in this review, showed that AIHA carry the potential to perpetuate health inequities. While the current study considered AIHA in the US, the use of AIHA carries implications for global health equity.
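The subgroup-audit logic behind many of the bias findings this review compiles can be made concrete in a few lines of code. The sketch below is purely illustrative (the data, group labels, and false-negative-rate criterion are assumptions for demonstration, not taken from the review): it compares a model's false-negative rate across two patient groups, a common operational check for the kind of disparity described above.

import pandas as pd

# Hypothetical audit table: one row per patient, with the model's
# prediction, the observed outcome, and a self-reported group label.
df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1, 0, 0, 0, 0, 1, 0, 1],  # 1 = flagged as high risk
    "actual":    [1, 1, 0, 1, 1, 1, 0, 1],  # 1 = truly high risk
})

def false_negative_rate(sub: pd.DataFrame) -> float:
    # Share of truly high-risk patients the model failed to flag.
    positives = sub[sub["actual"] == 1]
    return float((positives["predicted"] == 0).mean())

# A large gap between groups means the model under-identifies need in one
# group -- one operational definition of algorithmic bias.
rates = df.groupby("group")[["predicted", "actual"]].apply(false_negative_rate)
print(rates)                                  # per-group false-negative rates
print("FNR gap:", rates.max() - rates.min())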
Affiliations
- Mary Bresnahan: Department of Communication, Michigan State University, East Lansing, MI, USA
- Jie Zhuang: Department of Communication, Texas Christian University, Fort Worth, TX, USA
2. Mooghali M, Stroud AM, Yoo DW, Barry BA, Grimshaw AA, Ross JS, Zhu X, Miller JE. Trustworthy and ethical AI-enabled cardiovascular care: a rapid review. BMC Med Inform Decis Mak 2024;24:247. PMID: 39232725. PMCID: PMC11373417. DOI: 10.1186/s12911-024-02653-6.
Abstract
BACKGROUND Artificial intelligence (AI) is increasingly used for prevention, diagnosis, monitoring, and treatment of cardiovascular diseases. Despite the potential for AI to improve care, ethical concerns and mistrust in AI-enabled healthcare exist among the public and medical community. Given the rapid and transformative recent growth of AI in cardiovascular care, we conducted a literature review to identify key ethical and trust barriers and facilitators from patients' and healthcare providers' perspectives, with the goal of informing practice guidelines and regulatory policies that facilitate ethical and trustworthy use of AI in medicine. METHODS In this rapid literature review, we searched six bibliographic databases to identify publications discussing transparency, trust, or ethical concerns (outcomes of interest) associated with AI-based medical devices (interventions of interest) in the context of cardiovascular care from patients', caregivers', or healthcare providers' perspectives. The search was completed on May 24, 2022, and was not limited by date or study design. RESULTS After reviewing 7,925 papers from six databases and 3,603 papers identified through citation chasing, 145 articles were included. Key ethical concerns included privacy, security, or confidentiality issues (n = 59, 40.7%); risk of healthcare inequity or disparity (n = 36, 24.8%); risk of patient harm (n = 24, 16.6%); accountability and responsibility concerns (n = 19, 13.1%); problematic informed consent and potential loss of patient autonomy (n = 17, 11.7%); and issues related to data ownership (n = 11, 7.6%). Major trust barriers included data privacy and security concerns, potential risk of patient harm, perceived lack of transparency about AI-enabled medical devices, concerns about AI replacing human aspects of care, concerns about prioritizing profits over patients' interests, and lack of robust evidence related to the accuracy and limitations of AI-based medical devices. Ethical and trust facilitators included ensuring data privacy and data validation, conducting clinical trials in diverse cohorts, providing appropriate training and resources to patients and healthcare providers and improving their engagement in different phases of AI implementation, and establishing further regulatory oversight. CONCLUSION This review revealed key ethical concerns, as well as barriers and facilitators of trust, in AI-enabled medical devices from patients' and healthcare providers' perspectives. Successful integration of AI into cardiovascular care necessitates implementation of mitigation strategies. These strategies should focus on enhanced regulatory oversight of the use of patient data and on promoting transparency around the use of AI in patient care.
Affiliations
- Maryam Mooghali: Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA; Yale Center for Outcomes Research and Evaluation (CORE), 195 Church Street, New Haven, CT 06510, USA
- Austin M Stroud: Biomedical Ethics Research Program, Mayo Clinic, Rochester, MN, USA
- Dong Whi Yoo: School of Information, Kent State University, Kent, OH, USA
- Barbara A Barry: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA; Division of Health Care Delivery Research, Mayo Clinic, Rochester, MN, USA
- Alyssa A Grimshaw: Harvey Cushing/John Hay Whitney Medical Library, Yale University, New Haven, CT, USA
- Joseph S Ross: Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA; Department of Health Policy and Management, Yale School of Public Health, New Haven, CT, USA
- Xuan Zhu: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA
- Jennifer E Miller: Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
3. Fröling E, Rajaeean N, Hinrichsmeyer KS, Domrös-Zoungrana D, Urban JN, Lenz C. Artificial Intelligence in Medical Affairs: A New Paradigm with Novel Opportunities. Pharmaceut Med 2024;38:331-342. PMID: 39259426. PMCID: PMC11473552. DOI: 10.1007/s40290-024-00536-9.
Abstract
The advent of artificial intelligence (AI) is revolutionizing ways of working in many areas of business and life science. In Medical Affairs (MA) departments of the pharmaceutical industry, AI holds great potential for positively influencing the medical mission of identifying and addressing unmet medical needs and care gaps, and for fostering solutions that improve equitable and unbiased access of patients to treatments worldwide. Given the essential position of MA in corporate interactions with various healthcare stakeholders, AI offers broad possibilities to support strategic decision-making and to pioneer novel approaches in medical stakeholder interactions. By analyzing data derived from the healthcare environment and by streamlining operations in medical content generation, AI advances data-based prioritization and strategy execution. In this review, we discuss promising AI-based solutions in MA that support the effective use of heterogeneous information from observations of the healthcare environment, the enhancement of medical education, and the analysis of real-world data. For successful implementation of such solutions, specific considerations, some unique to healthcare, must be addressed, for example transparency, data privacy, healthcare regulations, and, in predictive applications, explainability.
Affiliations
- Emma Fröling: Pfizer Pharma GmbH, Friedrichstraße 110, 10117 Berlin, Germany
- Neda Rajaeean: Pfizer Pharma GmbH, Friedrichstraße 110, 10117 Berlin, Germany
- Christian Lenz: Pfizer Pharma GmbH, Friedrichstraße 110, 10117 Berlin, Germany
4. Bouhouita-Guermech S, Haidar H. Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of "Responsibility" in Artificial Intelligence within the Healthcare Context. Asian Bioeth Rev 2024;16:315-344. PMID: 39022380. PMCID: PMC11250714. DOI: 10.1007/s41649-024-00292-7.
Abstract
The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges have prompted numerous studies proposing frameworks and guidelines to tackle them, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, defines, and discusses the concept of responsibility. We conducted a scoping review of literature related to AI responsibility in healthcare, searching databases and reference lists for publications from January 2017 to January 2022 using terms related to "responsibility" and "AI in healthcare" and their derivatives. Following screening, 136 articles were included. Data were grouped into four thematic categories: (1) the variety of terminology used to describe and address responsibility; (2) principles and concepts associated with responsibility; (3) stakeholders' responsibilities in AI clinical development, use, and deployment; and (4) recommendations for addressing responsibility concerns. The results show the lack of a clear definition of AI responsibility in healthcare and highlight the importance of ensuring responsible development and implementation of AI in healthcare. Further research is necessary to clarify this notion and to contribute to developing frameworks regarding the types of responsibility (ethical/moral/professional, legal, and causal) held by the various stakeholders involved in the AI lifecycle.
Affiliations
- Hazar Haidar: Ethics Programs, Department of Letters and Humanities, University of Quebec at Rimouski, Rimouski, Québec, Canada
5. Nong P, Adler-Milstein J, Platt J. How patients distinguish between clinical and administrative predictive models in health care. Am J Manag Care 2024;30:31-37. PMID: 38271580. PMCID: PMC10962331. DOI: 10.37765/ajmc.2024.89484.
Abstract
OBJECTIVES To understand patient perceptions of specific applications of predictive models in health care. STUDY DESIGN Original, cross-sectional national survey. METHODS We conducted a national online survey of US adults with the National Opinion Research Center from November to December 2021. Measures of internal consistency were used to identify how patients differentiate between clinical and administrative predictive models. Multivariable logistic regressions were used to identify relationships between comfort with various types of predictive models and patient demographics, perceptions of privacy protections, and experiences in the health care system. RESULTS A total of 1541 respondents completed the survey. After excluding observations with missing data for the variables of interest, the final analytic sample was 1488. We found that patients differentiate between clinical and administrative predictive models. Comfort with prediction of bill payment and missed appointments was especially low (21.6% and 36.6%, respectively). Comfort was higher with clinical predictive models, such as predicting stroke in an emergency (55.8%). Experiences of discrimination were significant negative predictors of comfort with administrative predictive models. Health system transparency around privacy policies was a significant positive predictor of comfort with both clinical and administrative predictive models. CONCLUSIONS Patients are more comfortable with clinical applications of predictive models than administrative ones. Privacy protections and transparency about how health care systems protect patient data may facilitate patient comfort with these technologies. However, larger inequities and negative experiences in health care remain important for how patients perceive administrative applications of prediction.
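The study's analytic approach, multivariable logistic regression on survey responses, can be sketched in a few lines. Everything below is a hypothetical stand-in (the variable names, coding, and data are illustrative assumptions, not the study's actual survey items or results):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: comfortable = 1 if the respondent is
# comfortable with an administrative predictive model, else 0.
survey = pd.DataFrame({
    "comfortable":         [1, 0, 0, 1, 0, 1, 1, 0, 1, 0],
    "experienced_discrim": [0, 1, 1, 0, 1, 0, 1, 1, 0, 0],  # 1 = yes
    "transparency_rating": [4, 2, 3, 5, 2, 4, 3, 2, 4, 3],  # 1-5 scale
})

# Multivariable logistic regression: each coefficient is a log-odds
# effect adjusted for the other covariates in the model.
model = smf.logit(
    "comfortable ~ experienced_discrim + transparency_rating", data=survey
).fit(disp=False)
print(np.exp(model.params))  # odds ratios: <1 lowers comfort, >1 raises it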
Affiliations
- Paige Nong: Division of Health Policy and Management, University of Minnesota School of Public Health, 516 Delaware St SE, Minneapolis, MN 55455, USA
6. Laux J, Wachter S, Mittelstadt B. Trustworthy artificial intelligence and the European Union AI Act: On the conflation of trustworthiness and acceptability of risk. Regul Gov 2024;18:3-32. PMID: 38435808. PMCID: PMC10903109. DOI: 10.1111/rego.12512.
Abstract
In its AI Act, the European Union chose to understand the trustworthiness of AI in terms of the acceptability of its risks. Based on a narrative systematic literature review on institutional trust and AI in the public sector, this article argues that the EU adopted a simplistic conceptualization of trust and is overselling its regulatory ambition. The paper begins by reconstructing the conflation of "trustworthiness" with "acceptability" in the AI Act. It continues by developing a prescriptive set of variables for reviewing trust research in the context of AI. The paper then uses those variables for a narrative review of prior research on trust and trustworthiness in AI in the public sector. Finally, it relates the findings of the review to the EU's AI policy. The EU's prospects of successfully engineering citizens' trust are uncertain, and there remains a threat of misalignment between levels of actual trust and the trustworthiness of applied AI.
Affiliations
- Johann Laux: Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford OX1 3JS, UK
- Sandra Wachter: Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford OX1 3JS, UK
- Brent Mittelstadt: Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford OX1 3JS, UK
7. Chin MH, Afsar-Manesh N, Bierman AS, Chang C, Colón-Rodríguez CJ, Dullabh P, Duran DG, Fair M, Hernandez-Boussard T, Hightower M, Jain A, Jordan WB, Konya S, Moore RH, Moore TT, Rodriguez R, Shaheen G, Snyder LP, Srinivasan M, Umscheid CA, Ohno-Machado L. Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care. JAMA Netw Open 2023;6:e2345050. PMID: 38100101. PMCID: PMC11181958. DOI: 10.1001/jamanetworkopen.2023.45050.
Abstract
Importance Health care algorithms are used for diagnosis, treatment, prognosis, risk stratification, and allocation of resources. Bias in the development and use of algorithms can lead to worse outcomes for racial and ethnic minoritized groups and other historically marginalized populations such as individuals with lower income. Objective To provide a conceptual framework and guiding principles for mitigating and preventing bias in health care algorithms to promote health and health care equity. Evidence Review The Agency for Healthcare Research and Quality and the National Institute for Minority Health and Health Disparities convened a diverse panel of experts to review evidence, hear from stakeholders, and receive community feedback. Findings The panel developed a conceptual framework to apply guiding principles across an algorithm's life cycle, centering health and health care equity for patients and communities as the goal, within the wider context of structural racism and discrimination. Multiple stakeholders can mitigate and prevent bias at each phase of the algorithm life cycle, including problem formulation (phase 1); data selection, assessment, and management (phase 2); algorithm development, training, and validation (phase 3); deployment and integration of algorithms in intended settings (phase 4); and algorithm monitoring, maintenance, updating, or deimplementation (phase 5). Five principles should guide these efforts: (1) promote health and health care equity during all phases of the health care algorithm life cycle; (2) ensure health care algorithms and their use are transparent and explainable; (3) authentically engage patients and communities during all phases of the health care algorithm life cycle and earn trustworthiness; (4) explicitly identify health care algorithmic fairness issues and trade-offs; and (5) establish accountability for equity and fairness in outcomes from health care algorithms. Conclusions and Relevance Multiple stakeholders must partner to create systems, processes, regulations, incentives, standards, and policies to mitigate and prevent algorithmic bias. Reforms should implement guiding principles that support promotion of health and health care equity in all phases of the algorithm life cycle as well as transparency and explainability, authentic community engagement and ethical partnerships, explicit identification of fairness issues and trade-offs, and accountability for equity and fairness.
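The five-phase life cycle described above maps naturally onto a programmatic checklist that a governance team could adapt. In the sketch below, the phase names follow the abstract, while the individual check items are illustrative assumptions rather than the panel's recommendations:

from dataclasses import dataclass, field

@dataclass
class Phase:
    # One phase of the health care algorithm life cycle, with equity checks.
    number: int
    name: str
    checks: list = field(default_factory=list)  # illustrative check items
    completed: bool = False

life_cycle = [
    Phase(1, "Problem formulation",
          ["Equity goal stated", "Affected communities engaged"]),
    Phase(2, "Data selection, assessment, and management",
          ["Data representativeness assessed"]),
    Phase(3, "Algorithm development, training, and validation",
          ["Performance reported by subgroup"]),
    Phase(4, "Deployment and integration in intended settings",
          ["Transparent documentation available to users"]),
    Phase(5, "Monitoring, maintenance, updating, or deimplementation",
          ["Ongoing fairness audit scheduled"]),
]

for phase in life_cycle:
    status = "done" if phase.completed else "pending"
    print(f"Phase {phase.number} [{status}]: {phase.name} -> {phase.checks}")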
Affiliations
- Christine Chang: Agency for Healthcare Research and Quality, Rockville, Maryland
- Malika Fair: Association of American Medical Colleges, Washington, DC
- Anjali Jain: Agency for Healthcare Research and Quality, Rockville, Maryland
- Stephen Konya: Office of the National Coordinator for Health Information Technology, Washington, DC
- Roslyn Holliday Moore: US Department of Health and Human Services Office of Minority Health, Rockville, Maryland
8. Rojas JC, Teran M, Umscheid CA. Clinician Trust in Artificial Intelligence: What is Known and How Trust Can Be Facilitated. Crit Care Clin 2023;39:769-782. PMID: 37704339. DOI: 10.1016/j.ccc.2023.02.004.
Abstract
Predictive analytics based on artificial intelligence (AI) offer clinicians the opportunity to leverage the big data available in electronic health records (EHRs) to improve clinical decision-making and, thus, patient outcomes. Despite this, many barriers exist to facilitating trust between clinicians and AI-based tools, limiting their current impact. Potential solutions are available at both the local and national levels. It will take a broad and diverse coalition of stakeholders, from health care systems, EHR vendors, and clinical educators to regulators, researchers, and the patient community, to help facilitate this trust so that the promise of AI in health care can be realized.
Affiliations
- Juan C Rojas: Department of Internal Medicine, Rush University, 1725 West Harrison Street, Suite 010, Chicago, IL 60612, USA
- Mario Teran: Agency for Healthcare Research and Quality, 5600 Fishers Lane, Mail Stop 06E53A, Rockville, MD 20857, USA
- Craig A Umscheid: Agency for Healthcare Research and Quality, 5600 Fishers Lane, Mail Stop 06E53A, Rockville, MD 20857, USA
9. Steerling E, Siira E, Nilsen P, Svedberg P, Nygren J. Implementing AI in healthcare - the relevance of trust: a scoping review. Front Health Serv 2023;3:1211150. PMID: 37693234. PMCID: PMC10484529. DOI: 10.3389/frhs.2023.1211150.
Abstract
Background Despite its rapid development, the translation of AI and its potential benefits into practice in healthcare services has been slow. Trust in AI in relation to implementation processes is an important aspect. Without a clear understanding of trust in this context, effective implementation strategies cannot be developed, nor will AI advance despite the significant investments and possibilities. Objective This study aimed to explore the scientific literature regarding how trust in AI in relation to implementation in healthcare is conceptualized and what influences such trust. Methods This scoping review included five scientific databases, which were searched to identify publications related to the study aims. Articles were included if they were peer-reviewed, published in English, and published after 2012. Two independent reviewers conducted abstract and full-text review and carried out a thematic analysis with an inductive approach to address the study aims. The review was reported in accordance with the PRISMA-ScR guidelines. Results A total of eight studies were included in the final review. We found that trust was conceptualized in different ways. Most empirical studies took an individual perspective in which trust was directed toward the technology's capability. Two studies focused on trust as relational between people in the context of the AI application rather than as trust in the technology itself. Trust was also understood through its determinants and as having a mediating role, positioned between characteristics and AI use. The thematic analysis yielded three themes that influence trust in AI in relation to implementation in healthcare: individual characteristics, AI characteristics, and contextual characteristics. Conclusions Findings showed that the conceptualization of trust in AI differed between the studies, as did the determinants they accounted for as influencing trust. Few studies looked beyond individual characteristics and AI characteristics. Future empirical research addressing trust in AI in relation to implementation in healthcare should take a more holistic view of the concept to be able to manage the many challenges, uncertainties, and perceived risks.
Affiliations
- Emilie Steerling: School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Elin Siira: School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Per Nilsen: School of Health and Welfare, Halmstad University, Halmstad, Sweden; Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Petra Svedberg: School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Jens Nygren: School of Health and Welfare, Halmstad University, Halmstad, Sweden
10. Brereton TA, Malik MM, Lifson M, Greenwood JD, Peterson KJ, Overgaard SM. The Role of Artificial Intelligence Model Documentation in Translational Science: Scoping Review. Interact J Med Res 2023;12:e45903. PMID: 37450330. PMCID: PMC10382950. DOI: 10.2196/45903.
Abstract
BACKGROUND Despite the touted potential of artificial intelligence (AI) and machine learning (ML) to revolutionize health care, clinical decision support tools, herein referred to as medical modeling software (MMS), have yet to realize the anticipated benefits. Among the proposed obstacles are the acknowledged gaps in AI translation, which stem partly from the fragmentation of the processes and resources needed to support transparent MMS documentation. The resulting absence of transparent reporting hinders the provision of evidence to support the implementation of MMS in clinical practice and thereby serves as a substantial barrier to the successful translation of software from research settings to clinical practice. OBJECTIVE This study aimed to scope the current landscape of AI- and ML-based MMS documentation practices and elucidate the function of documentation in facilitating the translation of ethical and explainable MMS into clinical workflows. METHODS A scoping review was conducted in accordance with PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. PubMed was searched using Medical Subject Headings key concepts of AI, ML, ethical considerations, and explainability to identify publications detailing AI- and ML-based MMS documentation, supplemented by snowball sampling of selected reference lists. To capture implicit documentation practices not explicitly labeled as such, we used documentation as an inclusion criterion rather than as a key search concept. A 2-stage screening process (title and abstract screening and full-text review) was conducted by 1 author. A data extraction template was used to record publication-related information; barriers to developing ethical and explainable MMS; available standards, regulations, frameworks, or governance strategies related to documentation; and recommendations for documentation for papers that met the inclusion criteria. RESULTS Of the 115 papers retrieved, 21 (18.3%) met the inclusion criteria. Ethics and explainability were investigated in the context of AI- and ML-based MMS documentation and translation, and data detailing the current state, its challenges, and recommendations for future studies were synthesized. Notable themes defining the current state and its challenges included bias, accountability, governance, and explainability. Recommendations identified in the literature to address present barriers call for proactive evaluation of MMS, multidisciplinary collaboration, adherence to investigation and validation protocols, transparency and traceability requirements, and guiding standards and frameworks that enhance documentation efforts and support the translation of AI- and ML-based MMS. CONCLUSIONS Resolving barriers to translation, including those identified in this scoping review related to bias, accountability, governance, and explainability, is critical for MMS to deliver on expectations. Our findings suggest that transparent strategic documentation, aligning translational science and regulatory science, will support the translation of MMS by coordinating communication and reporting, reducing translational barriers, and thereby furthering the adoption of MMS.
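The transparent, traceable documentation this review calls for is often operationalized as a structured record (in the style of a "model card") stored alongside the model artifact. The sketch below is a minimal illustration; its fields and values are assumptions for demonstration, not a schema prescribed by the review:

import json
from dataclasses import dataclass, asdict

@dataclass
class ModelDocumentation:
    # Minimal machine-readable record for an AI/ML-based MMS.
    name: str
    intended_use: str
    training_data: str
    excluded_populations: str   # supports bias assessment
    validation_summary: str     # supports accountability
    explainability_method: str  # supports explainability
    version: str
    responsible_contact: str    # supports governance

doc = ModelDocumentation(
    name="sepsis-risk-model",   # hypothetical example model
    intended_use="Flag adult inpatients at elevated sepsis risk",
    training_data="2015-2020 EHR data from a single academic center",
    excluded_populations="Pediatric and obstetric patients",
    validation_summary="AUROC 0.81 on a held-out 2021 cohort",
    explainability_method="Per-prediction feature attributions",
    version="1.2.0",
    responsible_contact="clinical-ai-governance@example.org",
)

# Persist alongside the model so the documentation travels with it.
print(json.dumps(asdict(doc), indent=2))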
Affiliations
- Tracey A Brereton: Center for Digital Health, Mayo Clinic, Rochester, MN, United States
- Momin M Malik: Center for Digital Health, Mayo Clinic, Rochester, MN, United States
- Mark Lifson: Center for Digital Health, Mayo Clinic, Rochester, MN, United States
- Jason D Greenwood: Department of Family Medicine, Mayo Clinic, Rochester, MN, United States
- Kevin J Peterson: Center for Digital Health, Mayo Clinic, Rochester, MN, United States
11. Robinson R, Liday C, Lee S, Williams IC, Wright M, An S, Nguyen E. Artificial Intelligence in Health Care - Understanding Patient Information Needs and Designing Comprehensible Transparency: Qualitative Study. JMIR AI 2023;2:e46487. PMID: 38333424. PMCID: PMC10851077. DOI: 10.2196/46487.
Abstract
Background Artificial intelligence (AI) is a branch of computer science that uses advanced computational methods, such as machine learning (ML), to calculate and predict health outcomes and address patient and provider health needs. While these technologies show great promise for improving healthcare, especially in diabetes management, there are usability and safety concerns for both patients and providers about the use of AI/ML in healthcare management. Objectives To support and ensure safe use of AI/ML technologies in healthcare, the team worked to better understand: 1) patient information and training needs; 2) the factors that influence patients' perceived value of and trust in AI/ML healthcare applications; and 3) how best to support safe and appropriate use of AI/ML-enabled devices and applications among people living with diabetes. Methods To understand general patient perspectives and information needs related to the use of AI/ML in healthcare, we conducted a series of focus groups (n=9) and interviews (n=3) with patients (n=40), along with interviews with providers (n=6), in Alaska, Idaho, and Virginia. Grounded theory guided data gathering, synthesis, and analysis. Thematic content and constant comparison analysis were used to identify relevant themes and subthemes. Inductive approaches were used to link data to key concepts, including preferred patient-provider interactions and patient perceptions of trust, accuracy, value, assurances, and information transparency. Results Key summary themes and recommendations focused on: 1) patient preferences for AI/ML-enabled device and/or application information; 2) patient and provider AI/ML-related device and/or application training needs; 3) factors contributing to patient and provider trust in AI/ML-enabled devices and/or applications; and 4) AI/ML-related device and/or application functionality and safety considerations. Participants (patients and providers) made a number of recommendations to improve device functionality and to guide information and labeling mandates (e.g., links to online video resources and access to 24/7 live in-person or virtual emergency support). Other patient recommendations included: 1) access to practice devices; 2) connection to local supports and reputable community resources; and 3) simplified displays and alert limits. Conclusion Recommendations from both patients and providers could be used by federal oversight agencies to improve the use of AI/ML monitoring technology in diabetes, improving device safety and efficacy.
Affiliations
- Renee Robinson: College of Pharmacy, Idaho State University, Anchorage, AK, US
- Cara Liday: College of Pharmacy, Idaho State University, Pocatello, ID, US
- Sarah Lee: College of Pharmacy, Idaho State University, Meridian, ID, US
- Ishan C Williams: School of Nursing, University of Virginia, Charlottesville, VA, US
- Melanie Wright: College of Pharmacy, Idaho State University, Meridian, ID, US
- Sungjoon An: College of Pharmacy, Idaho State University, Meridian, ID, US
- Elaine Nguyen: College of Pharmacy, Idaho State University, Meridian, ID, US
12. Trust and ethics in AI. AI & Society 2022. DOI: 10.1007/s00146-022-01473-4.
13. van de Sande D, Van Genderen ME, Smit JM, Huiskens J, Visser JJ, Veen RER, van Unen E, Hilgers O, Gommers D, van Bommel J. Developing, implementing and governing artificial intelligence in medicine: a step-by-step approach to prevent an artificial intelligence winter. BMJ Health Care Inform 2022;29:e100495. PMID: 35185012. PMCID: PMC8860016. DOI: 10.1136/bmjhci-2021-100495.
Abstract
Objective Although the role of artificial intelligence (AI) in medicine is increasingly studied, most patients do not yet benefit because the majority of AI models remain in the testing and prototyping environment. The development and implementation trajectory of clinical AI models is complex, and a structured overview is missing. We therefore propose a step-by-step overview to enhance clinicians' understanding and to promote the quality of medical AI research. Methods We summarised key elements (such as current guidelines, challenges, regulatory documents and good practices) that are needed to develop and safely implement AI in medicine. Conclusion This overview complements other frameworks in that it is accessible to stakeholders without prior AI knowledge; it provides a step-by-step approach incorporating all the key elements and current guidelines essential for implementation and can thereby help move AI from bytes to bedside.
Affiliations
- Davy van de Sande: Department of Adult Intensive Care, Erasmus Medical Center, Rotterdam, The Netherlands
- Michel E Van Genderen: Department of Adult Intensive Care, Erasmus Medical Center, Rotterdam, The Netherlands
- Jim M Smit: Department of Adult Intensive Care, Erasmus Medical Center, Rotterdam, The Netherlands; Pattern Recognition and Bioinformatics group, EEMCS, Delft University of Technology, Delft, The Netherlands
- Jacob J Visser: Department of Radiology and Nuclear Medicine, Erasmus Medical Center, Rotterdam, The Netherlands; Chief Medical Information Officer, Department of Information Technology, Erasmus Medical Center, Rotterdam, The Netherlands
- Robert E R Veen: Department of Information Technology, theme Research Suite, Erasmus Medical Center, Rotterdam, The Netherlands
- Oliver Hilgers: Active Medical Devices/Medical Device Software, CE Plus GmbH, Badenweiler, Germany
- Diederik Gommers: Department of Adult Intensive Care, Erasmus Medical Center, Rotterdam, The Netherlands
- Jasper van Bommel: Department of Adult Intensive Care, Erasmus Medical Center, Rotterdam, The Netherlands
14. Yousefi Nooraie R, Lyons PG, Baumann AA, Saboury B. Equitable Implementation of Artificial Intelligence in Medical Imaging: What Can Be Learned from Implementation Science? PET Clin 2021;16:643-653. PMID: 34537134. DOI: 10.1016/j.cpet.2021.07.002.
Abstract
Artificial intelligence (AI) has been rapidly adopted in various health care domains, and molecular imaging has accordingly seen growing academic and commercial interest in AI. Unprepared and inequitable implementation and scale-up of AI in health care may pose challenges. As a complex intervention, AI implementation may face various barriers at the individual, interindividual, organizational, health system, and community levels. To address these barriers, recommendations have been developed to consider health equity as a critical lens to sensitize implementation, to engage stakeholders in implementation and evaluation, to recognize and incorporate the iterative nature of implementation, and to integrate equity and implementation in early-stage AI research.
Affiliations
- Reza Yousefi Nooraie: Department of Public Health Sciences, University of Rochester School of Medicine and Dentistry, 265 Crittenden Blvd, Rochester, NY 14642, USA
- Patrick G Lyons: Department of Medicine, Division of Pulmonary and Critical Care Medicine, Washington University School of Medicine in St Louis, 660 South Euclid Avenue, MSC 8052-43-14, St. Louis, MO 63110-1010, USA; Healthcare Innovation Lab, BJC HealthCare, St Louis, MO, USA
- Ana A Baumann: Brown School of Social Work, Washington University in St. Louis, 600 S. Taylor Ave, MSC 8100-0094-02, St. Louis, MO 63110, USA
- Babak Saboury: Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Building 10, Room 1C455, Baltimore, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA