1. MacNeill AL, Doucet S, Luke A. Effectiveness of a Mental Health Chatbot for People With Chronic Diseases: Randomized Controlled Trial. JMIR Form Res 2024;8:e50025. [PMID: 38814681; PMCID: PMC11176869; DOI: 10.2196/50025]
Abstract
BACKGROUND People with chronic diseases tend to experience more mental health issues than their peers without these health conditions. Mental health chatbots offer a potential source of mental health support for people with chronic diseases. OBJECTIVE The aim of this study was to determine whether a mental health chatbot can improve mental health in people with chronic diseases. We focused on 2 chronic diseases in particular: arthritis and diabetes. METHODS Individuals with arthritis or diabetes were recruited using various web-based methods. Participants were randomly assigned to 1 of 2 groups. Those in the treatment group used a mental health chatbot app (Wysa [Wysa Inc]) over a period of 4 weeks. Those in the control group received no intervention. Participants completed measures of depression (Patient Health Questionnaire-9), anxiety (Generalized Anxiety Disorder Scale-7), and stress (Perceived Stress Scale-10) at baseline, with follow-up testing 2 and 4 weeks later. Participants in the treatment group completed feedback questions on their experiences with the app at the final assessment point. RESULTS A total of 68 participants (n=47, 69% women; mean age 42.87, SD 11.27 years) were included in the analysis. Participants were divided evenly between the treatment and control groups. Those in the treatment group reported decreases in depression (P<.001) and anxiety (P<.001) severity over the study period. No such changes were found among participants in the control group. No changes in stress were reported by participants in either group. Participants with arthritis reported higher levels of depression (P=.004) and anxiety (P=.004) severity than participants with diabetes over the course of the study, as well as higher levels of stress (P=.01); otherwise, patterns of results were similar across these health conditions. In response to the feedback questions, participants in the treatment group said that they liked many of the functions and features of the app, the general design of the app, and the user experience. They also disliked some aspects of the app, with most of these reports focusing on the chatbot's conversational abilities. CONCLUSIONS The results of this study suggest that mental health chatbots can be an effective source of mental health support for people with chronic diseases such as arthritis and diabetes. Although cost-effective and accessible, these programs have limitations and may not be well suited for all individuals. TRIAL REGISTRATION ClinicalTrials.gov NCT04620668; https://www.clinicaltrials.gov/study/NCT04620668.
Affiliation(s)
- A Luke MacNeill
- Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada
- Department of Nursing and Health Sciences, University of New Brunswick, Saint John, NB, Canada
- Shelley Doucet
- Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada
- Department of Nursing and Health Sciences, University of New Brunswick, Saint John, NB, Canada
- Alison Luke
- Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada
- Department of Nursing and Health Sciences, University of New Brunswick, Saint John, NB, Canada
2. MacNeill AL, MacNeill L, Yi S, Goudreau A, Luke A, Doucet S. Depiction of conversational agents as health professionals: a scoping review. JBI Evid Synth 2024;22:831-855. [PMID: 38482610; DOI: 10.11124/jbies-23-00029]
Abstract
OBJECTIVE The purpose of this scoping review was to examine the depiction of conversational agents as health professionals. We identified the professional characteristics that are used with these depictions and determined the prevalence of these characteristics among conversational agents that are used for health care. INTRODUCTION The depiction of conversational agents as health professionals has implications for both the users and the developers of these programs. For this reason, it is important to know more about these depictions and how they are implemented in practical settings. INCLUSION CRITERIA This review included scholarly literature on conversational agents that are used for health care. It focused on conversational agents designed for patients and health seekers, not health professionals or trainees. Conversational agents that address physical and/or mental health care were considered, as were programs that promote healthy behaviors. METHODS This review was conducted in accordance with JBI methodology for scoping reviews. The databases searched included MEDLINE (PubMed), Embase, CINAHL with Full Text (EBSCOhost), Scopus, Web of Science, ACM Guide to Computing Literature (Association for Computing Machinery Digital Library), and IEEE Xplore (IEEE). The main database search was conducted in June 2021, and an updated search was conducted in January 2022. Extracted data included characteristics of the report, basic characteristics of the conversational agent, and professional characteristics of the conversational agent. Extracted data were summarized using descriptive statistics. Results are presented in a narrative summary and accompanying tables. RESULTS A total of 38 health-related conversational agents were identified across 41 reports. Six of these conversational agents (15.8%) had professional characteristics. Four conversational agents (10.5%) had a professional appearance in which they displayed the clothing and accessories of health professionals and appeared in professional settings. One conversational agent (2.6%) had a professional title (Dr), and 4 conversational agents (10.5%) were described as having professional roles. Professional characteristics were more common among embodied vs disembodied conversational agents. CONCLUSIONS The results of this review show that the depiction of conversational agents as health professionals is not particularly common, although it does occur. More discussion is needed on the potential ethical and legal issues surrounding the depiction of conversational agents as health professionals. Future research should examine the impact of these depictions, as well as people's attitudes toward them, to better inform recommendations for practice.
Affiliation(s)
- A Luke MacNeill
- Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada
- Department of Nursing and Health Sciences, University of New Brunswick, Saint John, NB, Canada
- Lillian MacNeill
- Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada
- Department of Nursing and Health Sciences, University of New Brunswick, Saint John, NB, Canada
- Sungmin Yi
- Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada
- College of Pharmacy, Dalhousie University, Halifax, NS, Canada
- Alex Goudreau
- University of New Brunswick Libraries, Saint John, NB, Canada
- The University of New Brunswick (UNB) Saint John Collaboration for Evidence-Informed Healthcare: A JBI Centre of Excellence, Saint John, NB, Canada
- Alison Luke
- Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada
- Department of Nursing and Health Sciences, University of New Brunswick, Saint John, NB, Canada
- The University of New Brunswick (UNB) Saint John Collaboration for Evidence-Informed Healthcare: A JBI Centre of Excellence, Saint John, NB, Canada
- Shelley Doucet
- Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada
- Department of Nursing and Health Sciences, University of New Brunswick, Saint John, NB, Canada
- The University of New Brunswick (UNB) Saint John Collaboration for Evidence-Informed Healthcare: A JBI Centre of Excellence, Saint John, NB, Canada
3. Thakkar A, Gupta A, De Sousa A. Artificial intelligence in positive mental health: a narrative review. Front Digit Health 2024;6:1280235. [PMID: 38562663; PMCID: PMC10982476; DOI: 10.3389/fdgth.2024.1280235]
Abstract
The paper reviews the entire spectrum of Artificial Intelligence (AI) in mental health and its positive role in mental health. AI holds many promises for mental health care, and this paper looks at multiple facets of the same. The paper first defines AI and its scope in the area of mental health. It then looks at various facets of AI, such as machine learning, supervised machine learning, and unsupervised machine learning. The role of AI in various psychiatric disorders such as neurodegenerative disorders, intellectual disability, and seizures is discussed, along with the role of AI in awareness, diagnosis, and intervention in mental health disorders. The role of AI in positive emotional regulation and its impact in schizophrenia, autism spectrum disorders, and mood disorders is also highlighted. The article also discusses the limitations of AI-based approaches and the need for AI-based approaches in mental health to be culturally aware, with structured flexible algorithms and an awareness of the biases that can arise in AI. The ethical issues that may arise with the use of AI in mental health are also examined.
4. Hwang G, Lee DY, Seol S, Jung J, Choi Y, Her ES, An MH, Park RW. Assessing the potential of ChatGPT for psychodynamic formulations in psychiatry: An exploratory study. Psychiatry Res 2024;331:115655. [PMID: 38056130; DOI: 10.1016/j.psychres.2023.115655]
Abstract
Although there have been several attempts to apply ChatGPT (Generative Pre-Trained Transformer) to medicine, little is known about its therapeutic applications in psychiatry. In this exploratory study, we aimed to evaluate the characteristics and appropriateness of the psychodynamic formulations created by ChatGPT. Along with a case selected from the psychoanalytic literature, input prompts were designed to include different levels of background knowledge. These included naïve prompts, keywords created by ChatGPT, keywords created by psychiatrists, and psychodynamic concepts from the literature. The psychodynamic formulations generated from the different prompts were evaluated by five psychiatrists from different institutions. We next conducted further tests in which instructions on the use of different psychodynamic models were added to the input prompts. The models used were ego psychology, self-psychology, and object relations. The results from the naïve prompts and psychodynamic concepts were rated as appropriate by most raters, with the psychodynamic concept prompt output rated highest. Interrater agreement was statistically significant. The results from the tests using instructions in different psychoanalytic theories were also rated as appropriate by most raters. They included key elements of the psychodynamic formulation and suggested interpretations similar to the literature. These findings suggest the potential of ChatGPT for use in psychiatry.
Affiliation(s)
- Gyubeom Hwang
- Department of Biomedical Informatics, Ajou University School of Medicine, Suwon, Republic of Korea; Department of Medical Sciences, Graduate School of Ajou University, Suwon, Republic of Korea
- Dong Yun Lee
- Department of Biomedical Informatics, Ajou University School of Medicine, Suwon, Republic of Korea; Department of Medical Sciences, Graduate School of Ajou University, Suwon, Republic of Korea
- Soobeen Seol
- Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Republic of Korea
- Jaeoh Jung
- Department of Child and Adolescent Psychiatry, Seoul Metropolitan Eunpyeong Hospital, Seoul, Republic of Korea
- Yeonkyu Choi
- Armed Forces Yangju Hospital, Yang-ju, Republic of Korea
- Eun Sil Her
- Ajou Big Tree Psychiatric Clinic, Suwon, Republic of Korea
- Min Ho An
- Department of Biomedical Informatics, Ajou University School of Medicine, Suwon, Republic of Korea; Department of Medical Sciences, Graduate School of Ajou University, Suwon, Republic of Korea
- Rae Woong Park
- Department of Biomedical Informatics, Ajou University School of Medicine, Suwon, Republic of Korea; Department of Medical Sciences, Graduate School of Ajou University, Suwon, Republic of Korea; Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Republic of Korea
5. Tate S, Fouladvand S, Chen JH, Chen CYA. The ChatGPT therapist will see you now: Navigating generative artificial intelligence's potential in addiction medicine research and patient care. Addiction 2023;118:2249-2251. [PMID: 37735091; DOI: 10.1111/add.16341]
Affiliation(s)
- Steven Tate
- Department of Psychiatry and Behavioural Sciences, Stanford University School of Medicine, Palo Alto, California, USA
- Sajjad Fouladvand
- Department of Medicine, Stanford University School of Medicine, Palo Alto, California, USA
- Jonathan H Chen
- Department of Medicine, Stanford University School of Medicine, Palo Alto, California, USA
- Chwen-Yuen Angie Chen
- Department of Medicine, Stanford University School of Medicine, Palo Alto, California, USA
6. Yang HS, Wang F, Greenblatt MB, Huang SX, Zhang Y. AI Chatbots in Clinical Laboratory Medicine: Foundations and Trends. Clin Chem 2023;69:1238-1246. [PMID: 37664912; DOI: 10.1093/clinchem/hvad106]
Abstract
BACKGROUND Artificial intelligence (AI) conversational agents, or chatbots, are computer programs designed to simulate human conversations using natural language processing. They offer diverse functions and applications across an expanding range of healthcare domains. However, their roles in laboratory medicine remain unclear, as their accuracy, repeatability, and ability to interpret complex laboratory data have yet to be rigorously evaluated. CONTENT This review provides an overview of the history of chatbots, two major chatbot development approaches, and their respective advantages and limitations. We discuss the capabilities and potential applications of chatbots in healthcare, focusing on the laboratory medicine field. Recent evaluations of chatbot performance are presented, with a special emphasis on large language models such as the Chat Generative Pre-trained Transformer in response to laboratory medicine questions across different categories, such as medical knowledge, laboratory operations, regulations, and interpretation of laboratory results as related to clinical context. We analyze the causes of chatbots' limitations and suggest research directions for developing more accurate, reliable, and manageable chatbots for applications in laboratory medicine. SUMMARY Chatbots, which are rapidly evolving AI applications, hold tremendous potential to improve medical education, provide timely responses to clinical inquiries concerning laboratory tests, assist in interpreting laboratory results, and facilitate communication among patients, physicians, and laboratorians. Nevertheless, users should be vigilant of existing chatbots' limitations, such as misinformation, inconsistencies, and lack of human-like reasoning abilities. To be effectively used in laboratory medicine, chatbots must undergo extensive training on rigorously validated medical knowledge and be thoroughly evaluated against standard clinical practice.
Affiliation(s)
- He S Yang
- Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York, NY, United States
- Fei Wang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, NY, United States
- Matthew B Greenblatt
- Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York, NY, United States
- Research Division, Hospital for Special Surgery, New York, NY, United States
- Sharon X Huang
- College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA, United States
- Yi Zhang
- Department of Computer Science and Engineering, University of California, Santa Cruz, Santa Cruz, CA, United States
7. Malgaroli M, Hull TD, Zech JM, Althoff T. Natural language processing for mental health interventions: a systematic review and research framework. Transl Psychiatry 2023;13:309. [PMID: 37798296; PMCID: PMC10556019; DOI: 10.1038/s41398-023-02592-2]
Abstract
Neuropsychiatric disorders pose a high societal cost, but their treatment is hindered by lack of objective outcomes and fidelity metrics. AI technologies and specifically Natural Language Processing (NLP) have emerged as tools to study mental health interventions (MHI) at the level of their constituent conversations. However, NLP's potential to address clinical and research challenges remains unclear. We therefore conducted a pre-registered systematic review of NLP-MHI studies using PRISMA guidelines (osf.io/s52jh) to evaluate their models, clinical applications, and to identify biases and gaps. Candidate studies (n = 19,756), including peer-reviewed AI conference manuscripts, were collected up to January 2023 through PubMed, PsycINFO, Scopus, Google Scholar, and ArXiv. A total of 102 articles were included to investigate their computational characteristics (NLP algorithms, audio features, machine learning pipelines, outcome metrics), clinical characteristics (clinical ground truths, study samples, clinical focus), and limitations. Results indicate a rapid growth of NLP MHI studies since 2019, characterized by increased sample sizes and use of large language models. Digital health platforms were the largest providers of MHI data. Ground truth for supervised learning models was based on clinician ratings (n = 31), patient self-report (n = 29) and annotations by raters (n = 26). Text-based features contributed more to model accuracy than audio markers. Patients' clinical presentation (n = 34), response to intervention (n = 11), intervention monitoring (n = 20), providers' characteristics (n = 12), relational dynamics (n = 14), and data preparation (n = 4) were commonly investigated clinical categories. Limitations of reviewed studies included lack of linguistic diversity, limited reproducibility, and population bias. A research framework is developed and validated (NLPxMHI) to assist computational and clinical researchers in addressing the remaining gaps in applying NLP to MHI, with the goal of improving clinical utility, data access, and fairness.
Affiliation(s)
- Matteo Malgaroli
- Department of Psychiatry, New York University, Grossman School of Medicine, New York, NY, 10016, USA
- James M Zech
- Talkspace, New York, NY, 10025, USA
- Department of Psychology, Florida State University, Tallahassee, FL, 32306, USA
- Tim Althoff
- Department of Computer Science, University of Washington, Seattle, WA, 98195, USA
8. Abbate-Daga G, Taverna A, Martini M. The oracle of Delphi 2.0: considering artificial intelligence as a challenging tool for the treatment of eating disorders. Eat Weight Disord 2023;28:50. [PMID: 37337063; DOI: 10.1007/s40519-023-01579-8]
Abstract
In this editorial, we discuss how the diffusion of Artificial Intelligence (AI)-based tools, such as the recently available conversational AIs, could impact and transform eating disorder (ED) care. We try to envision the possible use of AI by individuals affected by EDs and by clinicians in terms of prevention, support for treatment, and the development of new and truly personalized treatment strategies. We then focus on how the introduction of AI into psychotherapy could either represent an element of disruption for the therapeutic relationship or be positively and creatively integrated into session and inter-sessional dynamics. As technological advancements open scenarios where anyone could have access to a personal and all-knowing "oracle", the ability to formulate questions, individuals' experiences, and the scientific rigor with which clinicians study them must remain at the center of our work. Ethical and legal issues about the use of AI are also considered.
Affiliation(s)
- Matteo Martini
- Department of Neuroscience, University of Turin, Via Cherasco 15, Turin, Italy
9. Yuan F, Zhou W, Dodge HH, Zhao X. Short: Causal structural learning of conversational engagement for socially isolated older adults. Smart Health 2023;28:100384. [PMID: 37065441; PMCID: PMC10101035; DOI: 10.1016/j.smhl.2023.100384]
Abstract
Social isolation has become a growing public health concern among older adults, including those with mild cognitive impairment. Coping strategies must be developed to increase social contact for socially isolated older adults. In this paper, we explored the conversational strategies used between trained conversation moderators and socially isolated adults during a conversational engagement clinical trial (ClinicalTrials.gov: NCT02871921). We carried out structural learning and causality analysis to investigate the conversation strategies used by the trained moderators to engage socially isolated adults in conversation and the causal effects of these strategies on engagement. Causal relations and effects were inferred between participants' emotions, the dialogue strategies used by moderators, and participants' subsequent emotions. The results found in this paper may be used to support the development of cost-efficient, trustworthy AI- and/or robot-based platforms that promote conversational engagement for older adults and address the challenges of social interaction.
Affiliation(s)
- Fengpei Yuan
- Department of Mechanical, Aerospace and Biomedical Engineering, The University of Tennessee Knoxville, 1512 Middle Drive, Knoxville, TN, 37996, USA
- Wenjun Zhou
- Department of Business Analytics and Statistics, The University of Tennessee Knoxville, 916 Volunteer Blvd., Knoxville, TN, 37996, USA
- Hiroko H. Dodge
- Harvard Medical School, 25 Shattuck Street, Boston, MA, 02115, USA
- Xiaopeng Zhao
- Department of Mechanical, Aerospace and Biomedical Engineering, The University of Tennessee Knoxville, 1512 Middle Drive, Knoxville, TN, 37996, USA
10. Grodniewicz JP, Hohol M. Waiting for a digital therapist: three challenges on the path to psychotherapy delivered by artificial intelligence. Front Psychiatry 2023;14:1190084. [PMID: 37324824; PMCID: PMC10267322; DOI: 10.3389/fpsyt.2023.1190084]
Abstract
Growing demand for broadly accessible mental health care, together with the rapid development of new technologies, has triggered discussions about the feasibility of psychotherapeutic interventions based on interactions with Conversational Artificial Intelligence (CAI). Many authors argue that while currently available CAI can be a useful supplement to human-delivered psychotherapy, it is not yet capable of delivering fully fledged psychotherapy on its own. The goal of this paper is to investigate the most important obstacles on the path to developing CAI systems capable of delivering psychotherapy in the future. To this end, we formulate and discuss three challenges central to this quest. First, we might not be able to develop effective AI-based psychotherapy unless we deepen our understanding of what makes human-delivered psychotherapy effective. Second, assuming that psychotherapy requires building a therapeutic relationship, it is not clear whether it can be delivered by non-human agents. Third, conducting psychotherapy might be a problem too complicated for narrow AI, i.e., AI proficient in dealing with only relatively simple and well-delineated tasks. If this is the case, we should not expect CAI to be capable of delivering fully fledged psychotherapy until so-called "general" or "human-like" AI is developed. While we believe that all these challenges can ultimately be overcome, we think that being mindful of them is crucial to ensure well-balanced and steady progress on the path to AI-based psychotherapy.
11. Kannampallil T, Ajilore OA, Lv N, Smyth JM, Wittels NE, Ronneberg CR, Kumar V, Xiao L, Dosala S, Barve A, Zhang A, Tan KC, Cao KK, Patel CR, Gerber BS, Johnson JA, Kringle EA, Ma J. Effects of a virtual voice-based coach delivering problem-solving treatment on emotional distress and brain function: a pilot RCT in depression and anxiety. Transl Psychiatry 2023;13:166. [PMID: 37173334; PMCID: PMC10175049; DOI: 10.1038/s41398-023-02462-x]
Abstract
Consumer-based voice assistants have the ability to deliver evidence-based treatment, but their therapeutic potential is largely unknown. In a pilot trial of a virtual voice-based coach, Lumen, delivering problem-solving treatment, adults with mild-to-moderate depression and/or anxiety were randomized to the Lumen intervention (n = 42) or waitlist control (n = 21). The main outcomes included changes in neural measures of emotional reactivity and cognitive control, and Hospital Anxiety and Depression Scale [HADS] symptom scores over 16 weeks. Participants were 37.8 years (SD = 12.4), 68% women, 25% Black, 24% Latino, and 11% Asian. Activation of the right dlPFC (neural region of interest in cognitive control) decreased in the intervention group but increased in the control group, with an effect size meeting the prespecified threshold for a meaningful effect (Cohen's d = 0.3). Between-group differences in the change in activation of the left dlPFC and bilateral amygdala were observed, but were of smaller magnitude (d = 0.2). Change in right dlPFC activation was also meaningfully associated (r ≥ 0.4) with changes in self-reported problem-solving ability and avoidance in the intervention. Lumen intervention also led to decreased HADS depression, anxiety, and overall psychological distress scores, with medium effect sizes (Cohen's d = 0.49, 0.51, and 0.55, respectively), compared with the waitlist control group. This pilot trial showed promising effects of a novel digital mental health intervention on cognitive control using neuroimaging and depression and anxiety symptoms, providing foundational evidence for a future confirmatory study.
Affiliation(s)
- Thomas Kannampallil
- Department of Anesthesiology, Washington University School of Medicine, St Louis, MO, USA
- Institute for Informatics, Washington University School of Medicine, St Louis, MO, USA
- Olusola A Ajilore
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
- Nan Lv
- Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Joshua M Smyth
- Department of Biobehavioral Health, The Pennsylvania State University, University Park, PA, USA
- Nancy E Wittels
- Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Corina R Ronneberg
- Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Vikas Kumar
- Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Lan Xiao
- Department of Epidemiology and Population Health, Stanford University, Stanford, USA
- Susanth Dosala
- Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Amruta Barve
- Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Aifeng Zhang
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
- Kevin C Tan
- Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Kevin K Cao
- Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Charmi R Patel
- Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Ben S Gerber
- Department of Population & Quantitative Health Sciences, University of Massachusetts Medical School, Worcester, MA, USA
- Jillian A Johnson
- Department of Biobehavioral Health, The Pennsylvania State University, University Park, PA, USA
- Emily A Kringle
- Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Jun Ma
- Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
12. Holohan M, Buyx A, Fiske A. Staying Curious With Conversational AI in Psychotherapy. Am J Bioeth 2023;23:14-16. [PMID: 37130403; DOI: 10.1080/15265161.2023.2191059]
13. Sedlakova J, Trachsel M. Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent? Am J Bioeth 2023;23:4-13. [PMID: 35362368; DOI: 10.1080/15265161.2022.2048739]
Abstract
Conversational artificial intelligence (CAI) presents many opportunities in the psychotherapeutic landscape, such as therapeutic support for people with mental health problems who lack access to care. However, the adoption of CAI also poses many risks that need in-depth ethical scrutiny. The objective of this paper is to complement current research on the ethics of AI for mental health by proposing a holistic, ethical, and epistemic analysis of CAI adoption. First, we focus on the question of whether CAI is a tool or an agent. This question serves as a framework for the subsequent ethical analysis of CAI, focusing on topics of (self-)knowledge, (self-)understanding, and relationships. Second, we propose further conceptual and ethical analysis regarding human-AI interaction and argue that CAI cannot be considered an equal partner in a conversation, as is the case with a human therapist. Instead, CAI's role in a conversation should be restricted to specific functions.
Affiliation(s)
- Manuel Trachsel
- University of Zurich
- University Hospital Basel
- University Psychiatric Clinics Basel
14
Sharma A, Lin IW, Miner AS, Atkins DC, Althoff T. Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support. Nature Machine Intelligence 2023. [DOI: 10.1038/s42256-022-00593-2]
15
A Multi-Industry Analysis of the Future Use of AI Chatbots. Human Behavior and Emerging Technologies 2022. [DOI: 10.1155/2022/2552099]
Abstract
Artificial intelligence (AI) chatbots are set to be the defining technology of the next decade due to their ability to increase human capability at a low cost. However, more research is required to assess individuals’ behavioural intentions to use this technology when it becomes publicly available. This study applied an extended Technology Acceptance Model (TAM), with additional predictors of trust and privacy concerns, to assess individuals’ behavioural intentions to use AI chatbots across three industries: mental health care, online shopping, and online banking. These services were selected due to the current popularity of regular chatbots in these fields. Participants (202 females) aged between 17 and 85 years completed a 71-item online, cross-sectional survey. As hypothesised, perceived usefulness and trust were significant positive predictors of behavioural intentions across all three behaviours. However, the influence of the perceived ease of use and privacy concerns on behavioural intentions differed across the three behaviours. These findings highlight that the combination of predictors within the extended TAM has different influences on behavioural intentions to use AI chatbots for mental health care, online shopping, and online banking. This research contributes to the literature by demonstrating that the influence of the variables in one field cannot be generalised across all uses of AI chatbots.
16
Cioffi V, Mosca LL, Moretto E, Ragozzino O, Stanzione R, Bottone M, Maldonato NM, Muzii B, Sperandeo R. Computational Methods in Psychotherapy: A Scoping Review. International Journal of Environmental Research and Public Health 2022; 19:12358. [PMID: 36231657] [PMCID: PMC9565968] [DOI: 10.3390/ijerph191912358]
Abstract
BACKGROUND The study of complex systems, such as the psychotherapeutic encounter, transcends the mechanistic and reductionist methods used to describe linear processes and needs suitable approaches for describing probabilistic and scarcely predictable phenomena. OBJECTIVE The present study undertakes a scoping review of research on computational methods in psychotherapy to gather new developments in this field and to better understand the phenomena occurring in psychotherapeutic interactions, as well as in human interaction more generally. DESIGN Online databases were used to identify papers published between 2011 and 2022, from which we selected 18 publications, according to criteria established in advance and described in the text. A flow chart and a summary table of the articles consulted have been created. RESULTS The largest share of publications (44.4%) reported combined computational and experimental approaches, so we grouped the studies according to the types of computational methods used. All but one of the studies collected measured data. All the studies confirmed the usefulness of predictive and learning models in the study of complex variables such as those belonging to psychological, psychopathological, and psychotherapeutic processes. CONCLUSIONS Research on computational methods will benefit from a careful selection of reference methods and standards. This review therefore attempts to systematise the empirical literature on the applications of computational methods in psychotherapy research, offering clinicians an overview of the usefulness of these methods and their possible fields of application, highlighting their clinical implications, and identifying potential opportunities for further research.
Affiliation(s)
- Valeria Cioffi
- SiPGI–Postgraduate School of Integrated Gestalt Psychotherapy, 80058 Torre Annunziata, Italy
- Lucia Luciana Mosca
- SiPGI–Postgraduate School of Integrated Gestalt Psychotherapy, 80058 Torre Annunziata, Italy
- Enrico Moretto
- SiPGI–Postgraduate School of Integrated Gestalt Psychotherapy, 80058 Torre Annunziata, Italy
- Ottavio Ragozzino
- SiPGI–Postgraduate School of Integrated Gestalt Psychotherapy, 80058 Torre Annunziata, Italy
- Roberta Stanzione
- SiPGI–Postgraduate School of Integrated Gestalt Psychotherapy, 80058 Torre Annunziata, Italy
- Mario Bottone
- Department of Neurosciences and Reproductive and Odontostomatological Sciences, University of Naples Federico II, 80131 Naples, Italy
- Nelson Mauro Maldonato
- Department of Neurosciences and Reproductive and Odontostomatological Sciences, University of Naples Federico II, 80131 Naples, Italy
- Benedetta Muzii
- Department of Humanistic Studies, University of Naples Federico II, 80131 Naples, Italy
- Raffaele Sperandeo
- SiPGI–Postgraduate School of Integrated Gestalt Psychotherapy, 80058 Torre Annunziata, Italy

17
Using Artificial Intelligence to Enhance Ongoing Psychological Interventions for Emotional Problems in Real- or Close to Real-Time: A Systematic Review. International Journal of Environmental Research and Public Health 2022; 19:7737. [PMID: 35805395] [PMCID: PMC9266240] [DOI: 10.3390/ijerph19137737]
Abstract
Emotional disorders are the most common mental disorders globally. Psychological treatments have been found to be useful for a significant number of cases, but up to 40% of patients do not respond to psychotherapy as expected. Artificial intelligence (AI) methods might enhance psychotherapy by providing therapists and patients with real- or close to real-time recommendations according to the patient’s response to treatment. The goal of this investigation is to systematically review the evidence on the use of AI-based methods to enhance outcomes in psychological interventions in real time or close to real time. The search included studies indexed in the electronic databases Scopus, PubMed, Web of Science, and Cochrane Library. The terms used for the electronic search included variations of the words “psychotherapy”, “artificial intelligence”, and “emotional disorders”. From the 85 full texts assessed, only 10 studies met our eligibility criteria. In these, the most frequently used AI technique was conversational AI agents, which are chatbots based on software that can be accessed online with a computer or a smartphone. Overall, the reviewed investigations indicated significant positive consequences of using AI to enhance psychotherapy and reduce clinical symptomatology. Additionally, most studies reported high satisfaction, engagement, and retention rates when implementing AI to enhance psychotherapy in real or close to real time. Despite the potential of AI to make interventions more flexible and tailored to patients’ needs, more methodologically robust studies are needed.
18
A Systematic Review on Healthcare Artificial Intelligent Conversational Agents for Chronic Conditions. Sensors 2022; 22:2625. [PMID: 35408238] [PMCID: PMC9003264] [DOI: 10.3390/s22072625]
Abstract
This paper reviews different types of conversational agents used in health care for chronic conditions, examining their underlying communication technology, evaluation measures, and AI methods. A systematic search was performed in February 2021 on PubMed Medline, EMBASE, PsycINFO, CINAHL, Web of Science, and ACM Digital Library. Studies were included if they focused on consumers, caregivers, or healthcare professionals in the prevention, treatment, or rehabilitation of chronic diseases, involved conversational agents, and tested the system with human users. The search retrieved 1087 articles. Twenty-six studies met the inclusion criteria. Of the 26 conversational agents (CAs), 16 were chatbots, 7 were embodied conversational agents (ECAs), 1 was a conversational agent in a robot, and 1 was a relational agent; 1 agent was not specified. Based on this review, the overall acceptance of CAs by users for the self-management of their chronic conditions is promising. Users’ feedback shows helpfulness, satisfaction, and ease of use in more than half of the included studies. Although many users in the studies appear to feel more comfortable with CAs, there is still a lack of reliable and comparable evidence to determine the efficacy of AI-enabled CAs for chronic health conditions due to the insufficient reporting of technical implementation details.
19
Is Artificial Intelligence Better than Manpower? The Effects of Different Types of Online Customer Services on Customer Purchase Intentions. Sustainability 2022. [DOI: 10.3390/su14073974]
Abstract
Artificial intelligence has been widely applied to e-commerce and the online business service field. However, few studies have focused on the differences in the effects of different types of customer service on customer purchase intentions. Based on service encounter theory and superposition theory, we designed two shopping experiments to capture customers’ thoughts and feelings, in order to explore the differences in the effects of three types of online customer service (AI customer service, manual customer service, and human–machine collaboration customer service) on customer purchase intention, and to analyse the superposition effect of human–machine collaboration customer service. The results show that consumers’ perceived service quality positively influences customer purchase intention and mediates the effect of different types of online customer service on customer purchase intention; that product type moderates the relationship between online customer service and customer purchase intention; and that human–machine collaboration customer service has a superposition effect. This study helps deepen the understanding of AI developers and e-commerce platforms regarding the application of AI in online business services, and offers suggestions for formulating better business service strategies.
20
Lattie EG, Stiles-Shields C, Graham AK. An overview of and recommendations for more accessible digital mental health services. Nature Reviews Psychology 2022; 1:87-100. [PMID: 38515434] [PMCID: PMC10956902] [DOI: 10.1038/s44159-021-00003-1]
Abstract
Mental health concerns are common, and various evidence-based interventions for mental health conditions have been developed. However, many people have difficulty accessing appropriate mental health care and this has been exacerbated by the COVID-19 pandemic. Digital mental health services, such as those delivered by mobile phone or web-based platforms, offer the possibility of expanding the reach and accessibility of mental health care. To achieve this goal, digital mental health interventions and plans for their implementation must be designed with the end users in mind. In this Review, we describe the evidence base for digital mental health interventions across various diagnoses and treatment targets. Then, we explain the different formats for digital mental health intervention delivery, and offer considerations for their use across key age groups. We discuss the role that the COVID-19 pandemic has played in emphasizing the value of these interventions, and offer considerations for ensuring equity in access to digital mental health interventions among diverse populations. As healthcare providers continue to embrace the role that technology can play in broadening access to care, the design and implementation of digital mental healthcare solutions must be carefully considered to maximize their effectiveness and accessibility.
Affiliation(s)
- Emily G. Lattie
- Department of Medical Social Sciences, Northwestern University, Chicago, IL, USA
- Colleen Stiles-Shields
- Department of Psychiatry and Behavioral Sciences, Rush University Medical Center, Chicago, IL, USA
- Andrea K. Graham
- Department of Medical Social Sciences, Northwestern University, Chicago, IL, USA

21
Boulos LJ, Mendes A, Delmas A, Chraibi Kaadoud I. An Iterative and Collaborative End-to-End Methodology Applied to Digital Mental Health. Front Psychiatry 2021; 12:574440. [PMID: 34630171] [PMCID: PMC8495427] [DOI: 10.3389/fpsyt.2021.574440]
Abstract
Artificial intelligence (AI) algorithms, together with advances in data storage, have recently made it possible to better characterize, predict, prevent, and treat a range of psychiatric illnesses. Amid the rapidly growing number of biological devices and the exponential accumulation of data in the mental health sector, the upcoming years face a need to homogenize research and development processes in academia as well as in the private sector and to centralize data into federating platforms. This has become even more important in light of the current global pandemic. Here, we propose an end-to-end methodology that optimizes and homogenizes digital research processes. Each step of the process is elaborated from project conception to knowledge extraction, with a focus on data analysis. The methodology is based on iterative processes, thus allowing adaptation to the rate at which digital technologies evolve. The methodology also advocates for interdisciplinary (from mathematics to psychology) and intersectoral (from academia to industry) collaborations to bridge the gap between fundamental and applied research. We also pinpoint the ethical challenges and the technical and human biases (from data recorded to the end user) associated with digital mental health. In conclusion, our work provides guidelines for upcoming digital mental health studies, which will accompany the translation of fundamental mental health research to digital technologies.
22
Marcoux A, Tessier MH, Grondin F, Reduron L, Jackson PL. Perspectives fondamentale, clinique et sociétale de l’utilisation des personnages virtuels en santé mentale [Fundamental, clinical, and societal perspectives on the use of virtual characters in mental health]. Santé mentale au Québec 2021. [DOI: 10.7202/1081509ar]
Abstract
With the appeal generated by advances in computing and artificial intelligence, virtual characters (i.e., digitally represented characters of human or non-human appearance) are anticipated as future providers of mental health care. To date, however, the main use of such characters remains marginal, limited to a complementary aid to clinicians' practice. Concerns about safety and effectiveness, as well as a lack of knowledge and skills, may explain this discrepancy between what some imagine to be the future (even futuristic) use of virtual characters and their current use. An overview of recent evidence would help reduce this divergence and clarify the issues associated with their wider use in mental health.
Objective This article aims to inform all stakeholders involved, including clinicians, about the potential of virtual characters in mental health, and to raise their awareness of the issues associated with their use.
Method A narrative review of the literature was conducted to synthesize information obtained from fundamental and clinical research, and to discuss societal considerations.
Results Several characteristics of virtual characters identified in fundamental research have the potential to influence interactions between a patient and a clinician. They can be grouped into two broad categories: characteristics related to perception (e.g., realism) and those related to the spontaneous attribution of a social category to the virtual character by an observer (e.g., gender). According to clinical research, several interventions or assessments using virtual characters have shown varying degrees of effectiveness in mental health, and certain elements of the therapeutic relationship (e.g., alliance and empathy) can also be present in a relationship with a virtual character. Multiple socioeconomic and ethical issues must also be discussed with a view to responsible and ethical development and wider use. Although the accessibility and availability of virtual characters are an undeniable advantage for the delivery of mental health services, certain inequities remain. The accumulation of biometric data (e.g., heart rate) also has the potential to enrich clinicians' work, but also to lead to the development of autonomous virtual characters driven by artificial intelligence, which could result in certain excesses (e.g., clinical decision errors). Some recommendations aimed at avoiding these adverse effects are presented.
Conclusion The use of virtual characters will become increasingly widespread in mental health because of their promising advantages. It is therefore desirable that all stakeholders involved inform themselves about their use in this context, become aware of the specific issues, participate actively in discussions about their development, and adopt uniform recommendations for safe and ethical use in mental health.
Affiliation(s)
- Audrey Marcoux
- École de Psychologie, Université Laval
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (Cirris)
- Centre de recherche CERVO
- Marie-Hélène Tessier
- École de Psychologie, Université Laval
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (Cirris)
- Centre de recherche CERVO
- Frédéric Grondin
- École de Psychologie, Université Laval
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (Cirris)
- Centre de recherche CERVO
- Philip L. Jackson
- École de Psychologie, Université Laval
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (Cirris)
- Centre de recherche CERVO

23
Bickman L. Improving Mental Health Services: A 50-Year Journey from Randomized Experiments to Artificial Intelligence and Precision Mental Health. Administration and Policy in Mental Health and Mental Health Services Research 2021; 47:795-843. [PMID: 32715427] [PMCID: PMC7382706] [DOI: 10.1007/s10488-020-01065-8]
Abstract
This conceptual paper describes the current state of mental health services, identifies critical problems, and suggests how to solve them. I focus on the potential contributions of artificial intelligence and precision mental health to improving mental health services. Toward that end, I draw upon my own research, which has changed over the last half century, to highlight the need to transform the way we conduct mental health services research. I identify exemplars from the emerging literature on artificial intelligence and precision approaches to treatment in which there is an attempt to personalize or fit the treatment to the client in order to produce more effective interventions.
Affiliation(s)
- Leonard Bickman
- Center for Children and Families, Psychology, Academic Health Center 1, Florida International University, 11200 Southwest 8th Street, Room 140, Miami, FL, 33199, USA

24
MacNeill AL, MacNeill L, Doucet S, Luke A. The professional representation of conversational agents for health care: a scoping review protocol. JBI Evid Synth 2021; 20:666-673. [PMID: 34374689] [DOI: 10.11124/jbies-20-00589]
Abstract
OBJECTIVE The purpose of this scoping review is to examine the professional representation of conversational agents that are used for health care. Professional characteristics associated with these agents will be identified, and the prevalence of these characteristics will be determined. INTRODUCTION Conversational agents that are used for health care lack the qualifications and capabilities of real health professionals, but this fact may not be clear to some patients and health seekers. This problem may be exacerbated when conversational agents are described as health professionals or are given professional titles or appearances. To date, the professional representation of conversational agents that are used for health care has received little attention in the literature. INCLUSION CRITERIA This review will include scholarly publications on conversational agents that are used for health care, particularly descriptive/developmental case studies and intervention/evaluation studies. This review will consider conversational agents designed for patients and health seekers, but not health professionals or trainees. Agents addressing physical and/or mental health will be considered. METHODS This review will be conducted in accordance with JBI methodology for scoping reviews. The databases to be searched will include MEDLINE (PubMed), Embase (Elsevier), CINAHL with Full Text (EBSCO), Scopus (Elsevier), Web of Science (Clarivate), ACM Guide to Computing Literature (ACM Digital Library), and IEEE Xplore (IEEE). The extracted data will include study characteristics, basic characteristics of the conversational agent, and characteristics relating to the professional representation of the conversational agent. The extracted data will be presented in tabular form and summarized using frequency analysis. These results will be accompanied by a narrative summary.
Affiliation(s)
- A Luke MacNeill
- Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada
- Department of Nursing and Health Sciences, University of New Brunswick, Saint John, NB, Canada
- The University of New Brunswick (UNB) Saint John Collaboration for Evidence-Informed Healthcare: A JBI Centre of Excellence, Saint John, NB, Canada

25
Zipfel S, Junne F, Giel KE. Measuring Success in Psychotherapy Trials: The Challenge of Choosing the Adequate Control Condition. Psychotherapy and Psychosomatics 2021; 89:195-199. [PMID: 32375149] [DOI: 10.1159/000507454]
Affiliation(s)
- Stephan Zipfel
- Department of Psychosomatic Medicine and Psychotherapy, University of Tuebingen, Tuebingen, Germany
- Florian Junne
- Department of Psychosomatic Medicine and Psychotherapy, University of Tuebingen, Tuebingen, Germany
- Katrin E Giel
- Department of Psychosomatic Medicine and Psychotherapy, University of Tuebingen, Tuebingen, Germany

26
Ermolina A, Tiberius V. Voice-Controlled Intelligent Personal Assistants in Health Care: International Delphi Study. J Med Internet Res 2021; 23:e25312. [PMID: 33835032] [PMCID: PMC8065565] [DOI: 10.2196/25312]
Abstract
Background Voice-controlled intelligent personal assistants (VIPAs), such as Amazon Echo and Google Home, involve artificial intelligence–powered algorithms designed to simulate humans. Their hands-free interface and growing capabilities have a wide range of applications in health care, covering off-clinic education, health monitoring, and communication. However, conflicting factors, such as patient safety and privacy concerns, make it difficult to foresee the further development of VIPAs in health care. Objective This study aimed to develop a plausible scenario for the further development of VIPAs in health care to support decision making regarding the procurement of VIPAs in health care organizations. Methods We conducted a two-stage Delphi study with an internationally recruited panel consisting of voice assistant experts, medical professionals, and representatives of academia, governmental health authorities, and nonprofit health associations having expertise with voice technology. Twenty projections were formulated and evaluated by the panelists. Descriptive statistics were used to derive the desired scenario. Results The panelists expect VIPAs to be able to provide solid medical advice based on patients’ personal health information and to have human-like conversations. However, in the short term, voice assistants might neither provide frustration-free user experience nor outperform or replace humans in health care. With a high level of consensus, the experts agreed with the potential of VIPAs to support elderly people and be widely used as anamnesis, informational, self-therapy, and communication tools by patients and health care professionals. Although users’ and governments’ privacy concerns are not expected to decrease in the near future, the panelists believe that strict regulations capable of preventing VIPAs from providing medical help services will not be imposed. 
Conclusions According to the surveyed experts, VIPAs will show notable technological development and gain more user trust in the near future, resulting in widespread application in health care. However, voice assistants are expected to solely support health care professionals in their daily operations and will not be able to outperform or replace medical staff.
Affiliation(s)
- Alena Ermolina
- Faculty of Economics and Social Sciences, University of Potsdam, Potsdam, Germany
- Victor Tiberius
- Faculty of Economics and Social Sciences, University of Potsdam, Potsdam, Germany

27
Vaidyam AN, Linggonegoro D, Torous J. Changes to the Psychiatric Chatbot Landscape: A Systematic Review of Conversational Agents in Serious Mental Illness. Canadian Journal of Psychiatry 2021; 66:339-348. [PMID: 33063526] [PMCID: PMC8172347] [DOI: 10.1177/0706743720966429]
Abstract
OBJECTIVE The need for digital tools in mental health is clear, given insufficient access to mental health services. Conversational agents, also known as chatbots or voice assistants, are digital tools capable of holding natural language conversations. Since our last review in 2018, many new conversational agents and studies have emerged, and we aimed to reassess the conversational agent landscape in this updated systematic review. METHODS A systematic literature search was conducted in January 2020 using the PubMed, Embase, PsycINFO, and Cochrane databases. Studies included were those that involved a conversational agent assessing serious mental illness: major depressive disorder, schizophrenia spectrum disorders, bipolar disorder, or anxiety disorder. RESULTS Of the 247 references identified from the selected databases, 7 studies met the inclusion criteria. Overall, experiences with conversational agents were generally positive in regard to diagnostic quality, therapeutic efficacy, and acceptability. There continues to be, however, a lack of standard measures that allow easy comparison of studies in this space. Several populations lacked representation, such as the pediatric population and those with schizophrenia or bipolar disorder. While comparing 2018 to 2020 research offers useful insight into changes and growth, the high degree of heterogeneity between all studies in this space makes direct comparison challenging. CONCLUSIONS This review revealed few but generally positive outcomes regarding conversational agents' diagnostic quality, therapeutic efficacy, and acceptability, which may augment mental health care. Despite this increase in research activity, there continues to be a lack of standard measures for evaluating conversational agents, as well as several neglected populations. We recommend that the standardization of conversational agent studies include patient adherence and engagement, therapeutic efficacy, and clinician perspectives.
Affiliation(s)
- Aditya Nrusimha Vaidyam, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Danny Linggonegoro, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- John Torous, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
28
Bérubé C, Schachner T, Keller R, Fleisch E, V Wangenheim F, Barata F, Kowatsch T. Voice-Based Conversational Agents for the Prevention and Management of Chronic and Mental Health Conditions: Systematic Literature Review. J Med Internet Res 2021; 23:e25933. [PMID: 33658174 PMCID: PMC8042539 DOI: 10.2196/25933] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 02/10/2021] [Accepted: 03/03/2021] [Indexed: 01/04/2023] Open
Abstract
Background Chronic and mental health conditions are increasingly prevalent worldwide. As devices in our everyday lives offer more and more voice-based self-service, voice-based conversational agents (VCAs) have the potential to support the prevention and management of these conditions in a scalable manner. However, evidence on VCAs dedicated to the prevention and management of chronic and mental health conditions is unclear. Objective This study provides a better understanding of the current methods used in the evaluation of health interventions for the prevention and management of chronic and mental health conditions delivered through VCAs. Methods We conducted a systematic literature review using PubMed MEDLINE, Embase, PsycINFO, Scopus, and Web of Science databases. We included primary research involving the prevention or management of chronic or mental health conditions through a VCA and reporting an empirical evaluation of the system either in terms of system accuracy, technology acceptance, or both. A total of 2 independent reviewers conducted the screening and data extraction, and agreement between them was measured using Cohen kappa. A narrative approach was used to synthesize the selected records. Results Of 7170 prescreened papers, 12 met the inclusion criteria. All studies were nonexperimental. The VCAs provided behavioral support (n=5), health monitoring services (n=3), or both (n=4). The interventions were delivered via smartphones (n=5), tablets (n=2), or smart speakers (n=3). In 2 cases, no device was specified. A total of 3 VCAs targeted cancer, whereas 2 VCAs targeted diabetes and heart failure. The other VCAs targeted hearing impairment, asthma, Parkinson disease, dementia, autism, intellectual disability, and depression. The majority of the studies (n=7) assessed technology acceptance, but only a few studies (n=3) used validated instruments.
Half of the studies (n=6) reported performance measures either on speech recognition or on the ability of VCAs to respond to health-related queries. Only a minority of the studies (n=2) reported behavioral measures or a measure of attitudes toward intervention-targeted health behavior. Moreover, only a minority of studies (n=4) reported controlling for participants’ previous experience with technology. Finally, risk of bias varied markedly. Conclusions The heterogeneity in the methods, the limited number of studies identified, and the high risk of bias show that research on VCAs for chronic and mental health conditions is still in its infancy. Although the results of system accuracy and technology acceptance are encouraging, there is still a need to establish more conclusive evidence on the efficacy of VCAs for the prevention and management of chronic and mental health conditions, both in absolute terms and in comparison with standard health care.
Affiliation(s)
- Caterina Bérubé, Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Theresa Schachner, Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Roman Keller, Future Health Technologies Programme, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore-ETH Centre, Singapore, Singapore
- Elgar Fleisch, Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland; Future Health Technologies Programme, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore-ETH Centre, Singapore, Singapore; Center for Digital Health Interventions, Institute of Technology Management, University of St. Gallen, St. Gallen, Switzerland
- Florian V Wangenheim, Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland; Future Health Technologies Programme, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore-ETH Centre, Singapore, Singapore
- Filipe Barata, Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Tobias Kowatsch, Center for Digital Health Interventions, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland; Future Health Technologies Programme, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore-ETH Centre, Singapore, Singapore; Center for Digital Health Interventions, Institute of Technology Management, University of St. Gallen, St. Gallen, Switzerland; Saw Swee Hock School of Public Health, National University of Singapore, Singapore, Singapore
29
Prochaska JJ, Vogel EA, Chieng A, Kendra M, Baiocchi M, Pajarito S, Robinson A. A Therapeutic Relational Agent for Reducing Problematic Substance Use (Woebot): Development and Usability Study. J Med Internet Res 2021; 23:e24850. [PMID: 33755028 PMCID: PMC8074987 DOI: 10.2196/24850] [Citation(s) in RCA: 56] [Impact Index Per Article: 18.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Revised: 01/19/2021] [Accepted: 01/31/2021] [Indexed: 01/02/2023] Open
Abstract
Background Misuse of substances is common, can be serious and costly to society, and often goes untreated due to barriers to accessing care. Woebot is a mental health digital solution informed by cognitive behavioral therapy and built upon an artificial intelligence–driven platform to deliver tailored content to users. In a previous 2-week randomized controlled trial, Woebot alleviated depressive symptoms. Objective This study aims to adapt Woebot for the treatment of substance use disorders (W-SUDs) and examine its feasibility, acceptability, and preliminary efficacy. Methods American adults (aged 18-65 years) who screened positive for substance misuse without major health contraindications were recruited from online sources and flyers and enrolled between March 27 and May 6, 2020. In a single-group pre-post design, all participants received W-SUDs for 8 weeks. W-SUDs provided mood, craving, and pain tracking and modules (psychoeducational lessons and psychotherapeutic tools) using elements of dialectical behavior therapy and motivational interviewing. Paired samples t tests and McNemar nonparametric tests were used to examine within-subject changes from pre- to posttreatment on measures of substance use, confidence, cravings, mood, and pain. Results The sample (N=101) had a mean age of 36.8 years (SD 10.0), and 75.2% (76/101) of the participants were female, 78.2% (79/101) were non-Hispanic White, and 72.3% (73/101) were employed. Participants’ W-SUDs use averaged 15.7 (SD 14.2) days, 12.1 (SD 8.3) modules, and 600.7 (SD 556.5) sent messages. About 94% (562/598) of all completed psychoeducational lessons were rated positively. From treatment start to end, in-app craving ratings were reduced by half (87/101, 86.1% reporting cravings in the app; odds ratio 0.48, 95% CI 0.32-0.73). Posttreatment assessment completion was 50.5% (51/101), with better retention among those who initially screened higher on substance misuse.
From pre- to posttreatment, confidence to resist urges to use substances significantly increased (mean score change +16.9, SD 21.4; P<.001), whereas past month substance use occasions (mean change −9.3, SD 14.1; P<.001) and scores on the Alcohol Use Disorders Identification Test-Concise (mean change −1.3, SD 2.6; P<.001), 10-item Drug Abuse Screening Test (mean change −1.2, SD 2.0; P<.001), Patient Health Questionnaire-8 (mean change −2.1, SD 5.2; P=.005), Generalized Anxiety Disorder-7 (mean change −2.3, SD 4.7; P=.001), and cravings scale (68.6% vs 47.1% moderate to extreme; P=.01) significantly decreased. Most participants would recommend W-SUDs to a friend (39/51, 76%) and reported receiving the service they desired (41/51, 80%). Fewer felt W-SUDs met most or all of their needs (22/51, 43%). Conclusions W-SUDs was feasible to deliver, engaging, and acceptable and was associated with significant improvements in substance use, confidence, cravings, depression, and anxiety. Study attrition was high. Future research will evaluate W-SUDs in a randomized controlled trial with a more diverse sample and with stronger study retention strategies. Trial Registration ClinicalTrials.gov NCT04096001; http://clinicaltrials.gov/ct2/show/NCT04096001.
Affiliation(s)
- Judith J Prochaska, Stanford Prevention Research Center, School of Medicine, Stanford University, Stanford, CA, United States
- Erin A Vogel, Stanford Prevention Research Center, School of Medicine, Stanford University, Stanford, CA, United States
- Amy Chieng, Stanford Prevention Research Center, School of Medicine, Stanford University, Stanford, CA, United States
- Matthew Kendra, Department of Psychiatry & Behavioral Sciences, School of Medicine, Stanford University, Stanford, CA, United States
- Michael Baiocchi, Department of Epidemiology & Population Health, School of Medicine, Stanford University, Stanford, CA, United States
30
Aafjes-van Doorn K, Kamsteeg C, Bate J, Aafjes M. A scoping review of machine learning in psychotherapy research. Psychother Res 2020; 31:92-116. [PMID: 32862761 DOI: 10.1080/10503307.2020.1808729] [Citation(s) in RCA: 62] [Impact Index Per Article: 15.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/23/2022] Open
Abstract
Machine learning (ML) offers robust statistical and probabilistic techniques that can help to make sense of large amounts of data. This scoping review paper aims to broadly explore the nature of research activity using ML in the context of psychological talk therapies, highlighting the scope of current methods, considerations for clinical practice, and directions for future research. Using a systematic search methodology, fifty-one studies were identified. A narrative synthesis indicates two types of studies: those that developed and tested an ML model (k=44) and those that reported on the feasibility of a particular treatment tool that uses an ML algorithm (k=7). Most model development studies used supervised learning techniques to classify or predict labeled treatment process or outcome data, whereas others used unsupervised techniques to identify clusters in unlabeled patient or treatment data. Overall, the current applications of ML in psychotherapy research demonstrated a range of possible benefits for indications of treatment process, adherence, therapist skills, and treatment response prediction, as well as ways to accelerate research through automated behavioral or linguistic process coding. Given the novelty and potential of this research field, these proof-of-concept studies are encouraging; however, they do not yet necessarily translate to improved clinical practice.
Affiliation(s)
- Jordan Bate, Ferkauf Graduate School of Psychology, Yeshiva University, Bronx, NY, USA
31
Miner AS, Haque A, Fries JA, Fleming SL, Wilfley DE, Terence Wilson G, Milstein A, Jurafsky D, Arnow BA, Stewart Agras W, Fei-Fei L, Shah NH. Assessing the accuracy of automatic speech recognition for psychotherapy. NPJ Digit Med 2020; 3:82. [PMID: 32550644 PMCID: PMC7270106 DOI: 10.1038/s41746-020-0285-8] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2019] [Accepted: 04/30/2020] [Indexed: 01/17/2023] Open
Abstract
Accurate transcription of audio recordings in psychotherapy would improve therapy effectiveness, clinician training, and safety monitoring. Although automatic speech recognition software is commercially available, its accuracy in mental health settings has not been well described. It is unclear which metrics and thresholds are appropriate for different clinical use cases, which may range from population descriptions to individual safety monitoring. Here we show that automatic speech recognition is feasible in psychotherapy, but further improvements in accuracy are needed before widespread use. Our HIPAA-compliant automatic speech recognition system demonstrated a transcription word error rate of 25%. For depression-related utterances, sensitivity was 80% and positive predictive value was 83%. For clinician-identified harm-related sentences, the word error rate was 34%. These results suggest that automatic speech recognition may support understanding of language patterns and subgroup variation in existing treatments but may not be ready for individual-level safety surveillance.
Affiliation(s)
- Adam S. Miner, Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA; Department of Health Research and Policy, Stanford University, Stanford, CA, USA; Center for Biomedical Informatics Research, Stanford University, Stanford, CA, USA
- Albert Haque, Department of Computer Science, Stanford University, Stanford, CA, USA
- Jason A. Fries, Center for Biomedical Informatics Research, Stanford University, Stanford, CA, USA
- Scott L. Fleming, Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Denise E. Wilfley, Departments of Psychiatry, Medicine, Pediatrics, and Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA
- G. Terence Wilson, Graduate School of Applied and Professional Psychology, Rutgers, the State University of New Jersey, New Brunswick, NJ, USA
- Arnold Milstein, Clinical Excellence Research Center, Stanford University, Stanford, CA, USA
- Dan Jurafsky, Department of Computer Science, Stanford University, Stanford, CA, USA; Department of Linguistics, Stanford University, Stanford, CA, USA
- Bruce A. Arnow, Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- W. Stewart Agras, Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Li Fei-Fei, Department of Computer Science, Stanford University, Stanford, CA, USA
- Nigam H. Shah, Center for Biomedical Informatics Research, Stanford University, Stanford, CA, USA