1. Liebrenz M, Bhugra D, Alibudbud R, Ventriglio A, Smith A. AI in health care and the fragile pursuit of equity and social justice. Lancet 2024; 404:843. [PMID: 39216963 DOI: 10.1016/s0140-6736(24)01604-0]
Affiliation(s)
- Michael Liebrenz: Department of Forensic Psychiatry, University of Bern, Bern 3012, Switzerland
- Dinesh Bhugra: Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Rowalt Alibudbud: Department of Sociology and Behavioral Sciences, De La Salle University, Manila, Philippines
- Antonio Ventriglio: Department of Clinical and Experimental Medicine, University of Foggia, Foggia, Italy
- Alexander Smith: Department of Forensic Psychiatry, University of Bern, Bern 3012, Switzerland
2. Lacsa JEM. Is internet access really the key to achieving AI-driven health equity in the Philippines, or should we focus on direct healthcare investments instead? J Public Health (Oxf) 2024:fdae123. [PMID: 38964782 DOI: 10.1093/pubmed/fdae123]
Affiliation(s)
- Jose Eric M Lacsa: Department of Sociology and Behavioral Sciences, De La Salle University, 1004 Taft Avenue, Manila, Philippines
3. Nadarzynski T, Knights N, Husbands D, Graham CA, Llewellyn CD, Buchanan T, Montgomery I, Ridge D. Achieving health equity through conversational AI: a roadmap for design and implementation of inclusive chatbots in healthcare. PLOS Digit Health 2024; 3:e0000492. [PMID: 38696359 PMCID: PMC11065243 DOI: 10.1371/journal.pdig.0000492]
Abstract
BACKGROUND: The rapid evolution of conversational and generative artificial intelligence (AI) has led to the increased deployment of AI tools in healthcare settings. While these conversational AI tools promise efficiency and expanded access to healthcare services, ethical, practical, and inclusivity concerns are growing. This study aimed to identify activities that reduce bias in conversational AI and make its design and implementation more equitable.
METHODS: A qualitative research approach was employed to develop an analytical framework based on content analysis of 17 guidelines on AI use in clinical settings. A stakeholder consultation was subsequently conducted with 33 ethnically diverse community members, AI designers, industry experts, and relevant health professionals to further develop a roadmap for equitable design and implementation of conversational AI in healthcare. Framework analysis was conducted on the interview data.
RESULTS: A 10-stage roadmap was developed outlining activities relevant to the equitable design and implementation of conversational AI: 1) conception and planning, 2) diversity and collaboration, 3) preliminary research, 4) co-production, 5) safety measures, 6) preliminary testing, 7) healthcare integration, 8) service evaluation and auditing, 9) maintenance, and 10) termination.
DISCUSSION: We make specific recommendations to increase the equity of conversational AI as part of healthcare services. These emphasise the importance of a collaborative approach and the involvement of patient groups in navigating the rapid evolution of conversational AI technologies. Further research must assess the impact of the recommended activities on chatbots' fairness and their ability to reduce health inequalities.
Affiliation(s)
- Tom Nadarzynski: School of Social Sciences, University of Westminster, London, United Kingdom
- Nicky Knights: School of Social Sciences, University of Westminster, London, United Kingdom
- Deborah Husbands: School of Social Sciences, University of Westminster, London, United Kingdom
- Cynthia A. Graham: Kinsey Institute and Department of Gender Studies, Indiana University, Bloomington, United States of America
- Carrie D. Llewellyn: Brighton and Sussex Medical School, University of Sussex, Brighton, United Kingdom
- Tom Buchanan: School of Social Sciences, University of Westminster, London, United Kingdom
- Damien Ridge: School of Social Sciences, University of Westminster, London, United Kingdom
4. Jennings AM, Cox DJ. Starting the conversation around the ethical use of artificial intelligence in applied behavior analysis. Behav Anal Pract 2024; 17:107-122. [PMID: 38405299 PMCID: PMC10891004 DOI: 10.1007/s40617-023-00868-z]
Abstract
Artificial intelligence (AI) is increasingly a part of our everyday lives. Though much AI work in healthcare has been conducted outside of applied behavior analysis (ABA), researchers within ABA have begun to demonstrate many ways that AI might improve the delivery of ABA services. Yet although AI offers many exciting advances, and is already making its way into ABA, conversation around the ethical considerations involved in developing, building, and deploying AI technologies has so far been absent from the behavior analytic literature. Moreover, the extent to which behavior analytic practitioners are familiar (and comfortable) with the use of AI in ABA is unknown. The purpose of this article is twofold. First, to describe how existing ethical publications (e.g., the BACB Code of Ethics) do and do not speak to the unique ethical concerns of deploying AI in everyday ABA service delivery settings. Second, to raise questions for consideration that might inform future ethical guidelines for developing and using AI in ABA service delivery. In total, we hope this article sparks proactive dialog around the ethical use of AI in ABA before the field is forced into a reactionary conversation.
Affiliation(s)
- Adrienne M. Jennings: Department of Behavioral Science, Daemen University, 4380 Main Street, Amherst, NY 14226, USA
- David J. Cox: Institute for Applied Behavioral Science, Endicott College, Beverly, MA, USA; RethinkFirst, 49 W 27th St, 8th Floor, New York, NY 10001, USA
5. Hendricks-Sturrup R, Simmons M, Anders S, Aneni K, Wright Clayton E, Coco J, Collins B, Heitman E, Hussain S, Joshi K, Lemieux J, Lovett Novak L, Rubin DJ, Shanker A, Washington T, Waters G, Webb Harris J, Yin R, Wagner T, Yin Z, Malin B. Developing ethics and equity principles, terms, and engagement tools to advance health equity and researcher diversity in AI and machine learning: modified Delphi approach. JMIR AI 2023; 2:e52888. [PMID: 38875540 PMCID: PMC11041493 DOI: 10.2196/52888]
Abstract
BACKGROUND: The design and development of artificial intelligence (AI) and machine learning (ML) technology continue at a rapid pace, despite major limitations in the field's current capacity, as a practice and discipline, to address sociohumanitarian issues and complexities. From these limitations emerges an imperative to strengthen AI and ML literacy in underserved communities and to build a more diverse AI and ML design and development workforce engaged in health research.
OBJECTIVE: AI and ML have the potential to account for and assess a variety of factors that contribute to health and disease and to improve prevention, diagnosis, and therapy. Here, we describe recent activities within the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) Ethics and Equity Workgroup (EEWG) that led to deliverables designed to put ethics and fairness at the forefront of AI and ML applications and thereby build equity in biomedical research, education, and health care.
METHODS: The AIM-AHEAD EEWG was created in 2021 with 3 cochairs and 51 members in year 1, and 2 cochairs and ~40 members in year 2. Members in both years included AIM-AHEAD principal investigators, coinvestigators, leadership fellows, and research fellows. The EEWG used a modified Delphi approach, with polling, ranking, and other exercises, to facilitate discussion of the tangible steps, key terms, and definitions needed to ensure that ethics and fairness are at the forefront of AI and ML applications in biomedical research, education, and health care.
RESULTS: The EEWG developed a set of ethics and equity principles, a glossary, and an interview guide. The ethics and equity principles comprise 5 core principles, each with subparts, which articulate best practices for working with stakeholders from historically and presently underrepresented communities. The glossary contains 12 terms and definitions, with particular emphasis on the optimal development, refinement, and implementation of AI and ML in health equity research. To accompany the glossary, the EEWG developed a concept relationship diagram that describes the logical flow of, and relationships between, the definitional concepts. Lastly, the interview guide provides questions that can be used or adapted to garner stakeholder and community perspectives on the principles and glossary.
CONCLUSIONS: Ongoing engagement around our principles and glossary is needed to identify and anticipate potential limitations in their use in AI and ML research settings, especially for institutions with limited resources. This requires time, careful consideration, and honest discussion of what makes an engagement incentive meaningful enough to support and sustain full engagement. By slowing down to meet historically and presently underresourced institutions and communities where they are, and where they are able to engage and compete, there is greater potential to achieve the needed diversity, ethics, and equity in AI and ML implementation in health research.
Affiliation(s)
- Malaika Simmons: National Alliance Against Disparities in Patient Health, Woodbridge, VA, United States
- Shilo Anders: Vanderbilt University Medical Center, Nashville, TN, United States
- Joseph Coco: Vanderbilt University Medical Center, Nashville, TN, United States
- Benjamin Collins: Vanderbilt University Medical Center, Nashville, TN, United States
- Elizabeth Heitman: University of Texas Southwestern Medical Center, Dallas, TX, United States
- Karuna Joshi: University of Maryland, Baltimore County, Baltimore, MD, United States
- Anil Shanker: Meharry Medical College, Nashville, TN, United States
- Talitha Washington: AUC Data Science Initiative, Clark Atlanta University, Atlanta, GA, United States
- Gabriella Waters: Center for Equitable AI & Machine Learning Systems, Morgan State University, Baltimore, MD, United States
- Rui Yin: University of Florida, Gainesville, FL, United States
- Teresa Wagner: SaferCare Texas, University of North Texas Health Science Center, Fort Worth, TX, United States
- Zhijun Yin: Vanderbilt University Medical Center, Nashville, TN, United States
- Bradley Malin: Vanderbilt University Medical Center, Nashville, TN, United States