1
Fagan P. Clicks and tricks: The dark art of online persuasion. Curr Opin Psychol 2024;58:101844. PMID: 39029271. DOI: 10.1016/j.copsyc.2024.101844.
Abstract
Internet users are inundated with attempts to persuade, including digital nudges like defaults, friction, and reinforcement. When these nudges fail to be transparent, optional, and beneficial, they can become 'dark patterns', categorised here under the acronym FORCES (Frame, Obstruct, Ruse, Compel, Entangle, Seduce). Elsewhere, psychological principles like negativity bias, the curiosity gap, and fluency are exploited to make social content viral, while more covert tactics including astroturfing, meta-nudging, and inoculation are used to manufacture consensus. The power of these techniques is set to increase in line with technological advances such as predictive algorithms, generative AI, and virtual reality. Digital nudges can be used for altruistic purposes including protection against manipulation, but behavioural interventions have mixed effects at best.
Affiliation(s)
- Patrick Fagan
- University of the Arts London, United Kingdom
2
Banerjee S, Dunn P, Conard S, Ali A. Mental Health Applications of Generative AI and Large Language Modeling in the United States. Int J Environ Res Public Health 2024;21:910. PMID: 39063487. PMCID: PMC11276907. DOI: 10.3390/ijerph21070910.
Abstract
(1) Background: Artificial intelligence (AI) has flourished in recent years. More specifically, generative AI has had broad applications in many disciplines. While mental illness is on the rise, AI has proven valuable in aiding the diagnosis and treatment of mental disorders. However, there is little to no research on precisely how much public interest there is in AI technology. (2) Methods: We performed a Google Trends search for "AI and mental health" and compared relative search volume (RSV) indices for "AI", "AI and depression", and "AI and anxiety". This time series study employed Box-Jenkins time series modeling to forecast long-term interest through the end of 2024. (3) Results: Within the United States, interest in AI steadily increased throughout 2023, with some anomalies due to media reporting. Using predictive models, we found that this trend is predicted to increase by 114% through the end of 2024, with public interest in AI applications on the rise. (4) Conclusions: We found that awareness of AI increased drastically throughout 2023, especially in mental health. This demonstrates growing public awareness of mental health and AI, making advocacy and education about AI technology of paramount importance.
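The Box-Jenkins forecast described in the Methods can be sketched in a few lines. This is a minimal illustration, not the study's actual model: the monthly relative-search-volume (RSV) series below is invented, and the model is a simple ARIMA(1,1,0)-style fit, i.e. an AR(1) estimated by least squares on the month-over-month changes.

```python
# Minimal Box-Jenkins-style sketch: fit an AR(1) model to the
# month-over-month changes of a relative-search-volume (RSV) series
# (an ARIMA(1,1,0) in Box-Jenkins terms) and forecast levels forward.
# The series below is synthetic; the study's real Google Trends data
# are not reproduced here.

def fit_ar1(xs):
    """Least-squares estimates of the mean and AR(1) coefficient."""
    mu = sum(xs) / len(xs)
    d = [x - mu for x in xs]
    num = sum(d[t] * d[t - 1] for t in range(1, len(d)))
    den = sum(d[t - 1] ** 2 for t in range(1, len(d)))
    return mu, num / den

def forecast_levels(series, steps):
    """Forecast future levels by extrapolating AR(1)-modelled changes."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    mu, phi = fit_ar1(diffs)
    level, change, out = series[-1], diffs[-1], []
    for _ in range(steps):
        change = mu + phi * (change - mu)  # next expected monthly change
        level += change
        out.append(round(level, 2))
    return out

# Hypothetical monthly RSV index for "AI and mental health" over 2023:
rsv_2023 = [21, 24, 28, 31, 35, 38, 44, 47, 53, 58, 63, 69]
print(forecast_levels(rsv_2023, 6))  # six-month-ahead forecast
```

Because the fitted monthly changes stay positive, the forecast continues the upward trend, mirroring the study's projected rise in interest through 2024.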
Affiliation(s)
- Sri Banerjee
- School of Health Sciences and Public Policy, Walden University, Minneapolis, MN 55401, USA
- Pat Dunn
- Center for Health Technology & Innovation, American Heart Association, Dallas, TX 75231, USA
- Asif Ali
- McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
3
Chaudhry BM, Debi HR. User perceptions and experiences of an AI-driven conversational agent for mental health support. Mhealth 2024;10:22. PMID: 39114462. PMCID: PMC11304096. DOI: 10.21037/mhealth-23-55.
Abstract
Background: The increasing prevalence of artificial intelligence (AI)-driven mental health conversational agents necessitates a comprehensive understanding of user engagement and user perceptions of this technology. This study aims to fill the existing knowledge gap by focusing on Wysa, a commercially available mobile conversational agent designed to provide personalized mental health support. Methods: A total of 159 user reviews posted between January 2020 and March 2024 on the Wysa app's Google Play page were collected. Thematic analysis was then used to perform open and inductive coding of the collected data. Results: Seven major themes emerged from the user reviews: "a trusting environment promotes wellbeing", "ubiquitous access offers real-time support", "AI limitations detract from the user experience", "perceived effectiveness of Wysa", "desire for cohesive and predictable interactions", "humanness in AI is welcomed", and "the need for improvements in the user interface". These themes highlight both the benefits and limitations of AI-driven mental health conversational agents. Conclusions: Users find Wysa effective at fostering a strong connection, encouraging them to engage with the app and take positive steps towards emotional resilience and self-improvement. However, its AI needs several improvements to enhance the user experience. The findings contribute to the design and implementation of more effective, ethical, and user-aligned AI-driven mental health support systems.
Affiliation(s)
- Beenish Moalla Chaudhry
- School of Computing and Informatics, Ray P. Authement College of Sciences, University of Louisiana at Lafayette, Lafayette, LA, USA
- Happy Rani Debi
- School of Computing and Informatics, Ray P. Authement College of Sciences, University of Louisiana at Lafayette, Lafayette, LA, USA
4
Khawaja Z, Bélisle-Pipon JC. Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots. Front Digit Health 2023;5:1278186. PMID: 38026836. PMCID: PMC10663264. DOI: 10.3389/fdgth.2023.1278186.
Abstract
Artificial intelligence (AI)-powered chatbots have the potential to substantially increase access to affordable and effective mental health services by supplementing the work of clinicians. Their 24/7 availability and accessibility through a mobile phone allow individuals to obtain help whenever and wherever needed, overcoming financial and logistical barriers. Although psychological AI chatbots can significantly improve the provision of mental health care services, they do not come without ethical and technical challenges. Some major concerns include providing inadequate or harmful support, exploiting vulnerable populations, and potentially producing discriminatory advice due to algorithmic bias. Moreover, users do not always fully understand the nature of their relationship with chatbots. There can be significant misunderstandings about the exact purpose of the chatbot, particularly in terms of care expectations, its ability to adapt to the particularities of users, and its responsiveness to the needs and resources/treatments that can be offered. Hence, it is imperative that users are aware of the limited therapeutic relationship they can enjoy when interacting with mental health chatbots. Ignorance or misunderstanding of such limitations, or of the role of psychological AI chatbots, may lead to a therapeutic misconception (TM), where the user underestimates the restrictions of such technologies and overestimates their ability to provide actual therapeutic support and guidance. TM raises major ethical concerns that can worsen one's mental health, contributing to the global mental health crisis. This paper explores the various ways in which TM can occur, particularly through inaccurate marketing of these chatbots, the formation of a digital therapeutic alliance with them, harmful advice received due to bias in the design and algorithm, and the chatbots' inability to foster autonomy in patients.
5
Sarkar S, Gaur M, Chen LK, Garg M, Srivastava B. A review of the explainability and safety of conversational agents for mental health to identify avenues for improvement. Front Artif Intell 2023;6:1229805. PMID: 37899961. PMCID: PMC10601652. DOI: 10.3389/frai.2023.1229805.
Abstract
Virtual Mental Health Assistants (VMHAs) continuously evolve to support the overloaded global healthcare system, which receives approximately 60 million primary care visits and 6 million emergency room visits annually. These systems, developed by clinical psychologists, psychiatrists, and AI researchers, are designed to aid in Cognitive Behavioral Therapy (CBT). The main focus of VMHAs is to provide relevant information to mental health professionals (MHPs) and engage in meaningful conversations to support individuals with mental health conditions. However, certain gaps prevent VMHAs from fully delivering on their promise during active communications. One such gap is their inability to explain their decisions to patients and MHPs, making conversations less trustworthy. Additionally, VMHAs can be vulnerable to providing unsafe responses to patient queries, further undermining their reliability. In this review, we assess the current state of VMHAs with respect to user-level explainability and safety, a set of properties desired for the broader adoption of VMHAs. This includes an examination of ChatGPT, a conversational agent built on the AI-driven models GPT-3.5 and GPT-4, which has been proposed for use in providing mental health services. By harnessing the collaborative and impactful contributions of AI, natural language processing, and the MHP community, the review identifies opportunities for technological progress in VMHAs to ensure their capabilities include explainable and safe behaviors. It also emphasizes the importance of measures to guarantee that these advancements align with the promise of fostering trustworthy conversations.
Affiliation(s)
- Surjodeep Sarkar
- Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, United States
- Manas Gaur
- Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, United States
- Lujie Karen Chen
- Department of Information Systems, University of Maryland, Baltimore County, Baltimore, MD, United States
- Muskan Garg
- Department of AI & Informatics, Mayo Clinic, Rochester, MN, United States
- Biplav Srivastava
- AI Institute, University of South Carolina, Columbia, SC, United States
6
Sahoo JP, Narayan BN, Santi NS. The future of psychiatry with artificial intelligence: can the man-machine duo redefine the tenets? Consortium Psychiatricum 2023;4:72-76. PMID: 38249529. PMCID: PMC10795941. DOI: 10.17816/cp13626.
Abstract
Among the largest contributors to morbidity and mortality, psychiatric disorders are anticipated to triple in prevalence over the coming decade or so. Major obstacles to psychiatric care include stigma, funding constraints, and a dearth of resources and psychiatrists. Our discussion focuses on how machine learning and artificial intelligence could influence the way patients experience care. To better grasp the issues regarding trust, privacy, and autonomy, their societal and ethical ramifications need to be probed. There is always the possibility that the artificial mind could malfunction or exhibit behavioral abnormalities. An in-depth philosophical understanding of these possibilities in both human and artificial intelligence could offer correlational insights into the robotic management of mental disorders in the future. This article examines the role of artificial intelligence, the challenges associated with it, and its prospects in the management of mental illnesses such as depression, anxiety, and schizophrenia.
Affiliation(s)
- N Simple Santi
- Veer Surendra Sai Institute of Medical Science and Research
7
Hadfi R, Okuhara S, Haqbeen J, Sahab S, Ohnuma S, Ito T. Conversational agents enhance women's contribution in online debates. Sci Rep 2023;13:14534. PMID: 37666917. PMCID: PMC10477209. DOI: 10.1038/s41598-023-41703-3.
Abstract
The advent of Artificial Intelligence (AI) is fostering the development of innovative methods of communication and collaboration. Integrating AI into Information and Communication Technologies (ICTs) is ushering in an era of social progress that has the potential to empower marginalized groups. This transformation paves the way to digital inclusion that could qualitatively empower the online presence of women, particularly in conservative and male-dominated regions. To explore this possibility, we investigated the effect of integrating conversational agents into online debates encompassing 240 Afghans discussing the fall of Kabul in August 2021. We found that the agent led to quantitative differences in how both genders contributed to the debate by raising issues, presenting ideas, and articulating arguments. We also found increased ideation and reduced inhibition for both genders, particularly females, when interacting exclusively with other females or the agent. The enabling character of the conversational agent reveals an apparatus that could empower women and increase their agency on online platforms.
Affiliation(s)
- Rafik Hadfi
- Department of Social Informatics, Kyoto University, Kyoto, Japan.
- Shun Okuhara
- Graduate School of Engineering, Mie University, Tsu, Mie, Japan
- Jawad Haqbeen
- Department of Social Informatics, Kyoto University, Kyoto, Japan
- Sofia Sahab
- Department of Social Informatics, Kyoto University, Kyoto, Japan
- Susumu Ohnuma
- Department of Behavioral Science, Hokkaido University, Sapporo, Japan
- Takayuki Ito
- Department of Social Informatics, Kyoto University, Kyoto, Japan
8
Grodniewicz JP, Hohol M. Waiting for a digital therapist: three challenges on the path to psychotherapy delivered by artificial intelligence. Front Psychiatry 2023;14:1190084. PMID: 37324824. PMCID: PMC10267322. DOI: 10.3389/fpsyt.2023.1190084.
Abstract
Growing demand for broadly accessible mental health care, together with the rapid development of new technologies, triggers discussions about the feasibility of psychotherapeutic interventions based on interactions with Conversational Artificial Intelligence (CAI). Many authors argue that while currently available CAI can be a useful supplement to human-delivered psychotherapy, it is not yet capable of delivering fully fledged psychotherapy on its own. The goal of this paper is to investigate the most important obstacles on the way to developing CAI systems capable of delivering psychotherapy in the future. To this end, we formulate and discuss three challenges central to this quest. Firstly, we might not be able to develop effective AI-based psychotherapy unless we deepen our understanding of what makes human-delivered psychotherapy effective. Secondly, assuming that psychotherapy requires building a therapeutic relationship, it is not clear whether it can be delivered by non-human agents. Thirdly, conducting psychotherapy might be a problem too complicated for narrow AI, i.e., AI proficient only in relatively simple and well-delineated tasks. If this is the case, we should not expect CAI to be capable of delivering fully fledged psychotherapy until so-called "general" or "human-like" AI is developed. While we believe that all these challenges can ultimately be overcome, we think that being mindful of them is crucial to ensure well-balanced and steady progress on the path to AI-based psychotherapy.
9
Sulis E, Mariani S, Montagna S. A survey on agents applications in healthcare: Opportunities, challenges and trends. Comput Methods Programs Biomed 2023;236:107525. PMID: 37084529. DOI: 10.1016/j.cmpb.2023.107525.
Abstract
BACKGROUND AND OBJECTIVE: The agent abstraction is a powerful one, developed decades ago to represent crucial aspects of artificial intelligence research. Its meaning has transformed over the years, and there are now different nuances across research communities. At its core, an agent is an autonomous computational entity capable of sensing, acting, and capturing interactions with other agents and its environment. This review examines how agent-based techniques have been implemented and evaluated in a specific and very important domain: healthcare research. METHODS: We survey key areas of agent-based research in healthcare, e.g., individual and collective behaviours, communicable and non-communicable diseases, and social epidemiology. We conduct a systematic search and critical review of relevant recent works, introduced by an exploratory network analysis. RESULTS: The network analysis identifies five main research clusters, the most active authors, and four main research topics. CONCLUSIONS: Our findings support a discussion of future directions for increasing the value of agent-based approaches in healthcare.
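The exploratory network analysis described above can be sketched as follows. This is an illustration only: the co-authorship edges are invented, and real bibliometric data would replace them. Connected components of the co-authorship graph stand in for research clusters, and degree counts surface the most active authors.

```python
# Sketch of an exploratory co-authorship network analysis: find
# clusters (connected components) and rank the most active authors
# by degree. The edge list is illustrative, not the survey's data.
from collections import defaultdict

def components(edges):
    """Return connected components of an undirected graph via DFS."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, clusters = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

def most_active(edges):
    """Authors ranked by number of co-authorship links."""
    deg = defaultdict(int)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return sorted(deg, key=deg.get, reverse=True)

# Hypothetical co-authorship pairs:
edges = [("A", "B"), ("B", "C"), ("D", "E"), ("A", "C"), ("F", "G")]
print(len(components(edges)))  # number of research clusters
print(most_active(edges)[:3])  # most active authors
```

On this toy edge list the analysis finds three clusters; on a real corpus the same two passes would yield the survey-style cluster and author rankings.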
Affiliation(s)
- Emilio Sulis
- Computer Science Department, University of Torino, Via Pessinetto 12, Turin, 10149, Italy.
- Stefano Mariani
- Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, Viale A. Allegri 9, Reggio Emilia, 42121, Italy
- Sara Montagna
- Department of Pure and Applied Sciences, University of Urbino, Piazza della Repubblica, 13, Urbino, 61029, Italy
10
Sasseville M, Barony Sanchez RH, Yameogo AR, Bergeron-Drolet LA, Bergeron F, Gagnon MP. Interactive conversational agents for health promotion, prevention, and care: A mixed methods systematic scoping review protocol. JMIR Res Protoc 2022;11:e40265. PMID: 36222804. PMCID: PMC9597423. DOI: 10.2196/40265.
Abstract
Background: Interactive conversational agents, also known as "chatbots," are computer programs that use natural language processing to engage in conversations with humans to provide or collect information. Although the literature on the development and use of chatbots for health interventions is growing, important knowledge gaps remain, such as identifying design aspects relevant to health care and functions that offer transparency in decision-making automation. Objective: This paper presents the protocol for a scoping review that aims to identify and categorize the interactive conversational agents currently used in health care. Methods: A mixed methods systematic scoping review will be conducted according to the Arksey and O'Malley framework and the guidance of Peters et al. for systematic scoping reviews. A specific search strategy will be formulated for 5 of the most relevant databases to identify studies published in the last 20 years. Two reviewers will independently apply the inclusion criteria using the full texts and extract data. We will use structured narrative summaries of main themes to present a portrait of the current scope of available interactive conversational agents targeting health promotion, prevention, and care. We will also summarize the differences and similarities between these conversational agents. Results: The search strategy and screening steps were completed in March 2022. Data extraction and analysis started in May 2022, and the results are expected to be published in October 2022. Conclusions: This fundamental knowledge will be useful for the development of interactive conversational agents adapted to specific groups in vulnerable situations in health care and community settings. International Registered Report Identifier (IRRID): DERR1-10.2196/40265
Affiliation(s)
- Maxime Sasseville
- Faculté des Sciences Infirmières, Université Laval, Québec, QC, Canada
- Achille R Yameogo
- Faculté des Sciences Infirmières, Université Laval, Québec, QC, Canada
- Frédéric Bergeron
- Bibliothèque - Direction des Services-Conseils, Université Laval, Québec, QC, Canada
11
Requirements and Solution Approaches to Personality-Adaptive Conversational Agents in Mental Health Care. Sustainability 2022. DOI: 10.3390/su14073832.
Abstract
Artificial intelligence (AI) technologies enable Conversational Agents (CAs) to perform highly complex tasks in a human-like manner and may help people cope with anxiety, improving their mental health and well-being. To support patients' mental well-being in an authentic way, CAs need to be imbued with human-like behavior, such as personality. In this paper, we cover an innovative form of CA: so-called Personality-Adaptive Conversational Agents (PACAs), which automatically infer users' personality traits and adapt to them accordingly. We empirically investigate their benefits and caveats in mental health care. The results of our study show that PACAs can be beneficial for mental health support, but they also raise concerns about trust and privacy. We present a set of relevant requirements for designing PACAs and provide solution approaches that can be followed when designing and implementing PACAs for mental health care.
12
Ahmad R, Siemon D, Gnewuch U, Robra-Bissantz S. Designing Personality-Adaptive Conversational Agents for Mental Health Care. Inf Syst Front 2022;24:923-943. PMID: 35250365. PMCID: PMC8889396. DOI: 10.1007/s10796-022-10254-9.
Abstract
Millions of people experience mental health issues each year, increasing the need for health-related services. One emerging technology with the potential to help address the resulting shortage in health care providers and other barriers to treatment access is conversational agents (CAs). CAs are software-based systems designed to interact with humans through natural language. However, CAs do not yet live up to their full potential because they cannot capture dynamic human behavior well enough to provide responses tailored to users' personalities. To address this problem, we conducted a design science research (DSR) project to design personality-adaptive conversational agents (PACAs). Following an iterative and multi-step approach, we derive and formulate six design principles for PACAs in the domain of mental health care. The results of our evaluation with psychologists and psychiatrists suggest that PACAs can be a promising source of mental health support. With our design principles, we contribute to the body of design knowledge for CAs and provide guidance for practitioners who intend to design PACAs. Instantiating the principles may improve interaction with users who seek support for mental health issues.
Collapse
Affiliation(s)
- Rangina Ahmad
- Chair of Information Management, Institute of Business Information Systems, Technische Universität Braunschweig, Mühlenpfordtstraße 23, 38106 Braunschweig, Germany
- Dominik Siemon
- Department of Software Engineering, School of Engineering Science, LUT University, Mukkulankatu 19, 15210 Lahti, Finland
- Ulrich Gnewuch
- Institute of Information Systems and Marketing, Karlsruhe Institute of Technology (KIT), Kaiserstraße 89-93, 76133 Karlsruhe, Germany
- Susanne Robra-Bissantz
- Chair of Information Management, Institute of Business Information Systems, Technische Universität Braunschweig, Mühlenpfordtstraße 23, 38106 Braunschweig, Germany