1
Omisore OM, Odenigbo I, Orji J, Beltran AIH, Meier S, Baghaei N, Orji R. Extended Reality for Mental Health Evaluation: Scoping Review. JMIR Serious Games 2024; 12:e38413. [PMID: 39047289] [PMCID: PMC11306946] [DOI: 10.2196/38413]
Abstract
BACKGROUND Mental health disorders are the leading cause of health-related problems worldwide. It is projected that mental health disorders will be the leading cause of morbidity among adults as the incidence rates of anxiety and depression grow worldwide. Recently, "extended reality" (XR), a general term covering virtual reality (VR), augmented reality (AR), and mixed reality (MR), is paving the way for the delivery of mental health care. OBJECTIVE We aimed to investigate the adoption and implementation of XR technology used in interventions for mental disorders and to provide statistical analyses of the design, usage, and effectiveness of XR technology for mental health interventions with a worldwide demographic focus. METHODS In this paper, we conducted a scoping review of the development and application of XR in the area of mental disorders. We performed a database search to identify relevant studies indexed in Google Scholar, PubMed, and the ACM Digital Library. A search period between August 2016 and December 2023 was defined to select papers related to the usage of VR, AR, and MR in a mental health context. The database search was performed with predefined queries, and a total of 831 papers were identified. Ten papers were identified through professional recommendation. Inclusion and exclusion criteria were designed and applied to ensure that only relevant studies were included in the literature review. RESULTS We identified a total of 85 studies from 27 countries worldwide that used different types of VR, AR, and MR techniques for managing 14 types of mental disorders. By performing data analysis, we found that most of the studies focused on high-income countries, such as the United States (n=14, 16.47%) and Germany (n=12, 14.12%). None of the studies were for African countries. The majority of papers reported that XR techniques lead to a significant reduction in symptoms of anxiety or depression. 
The majority of studies were published in 2021 (n=26, 30.59%), which may indicate that mental disorder interventions received greater attention when COVID-19 emerged. Most studies (n=65, 76.47%) focused on a population in the age range of 18-65 years, while few studies (n=2, 3.35%) focused on teenagers (ie, subjects in the age range of 10-19 years). In addition, more studies were conducted experimentally (n=67, 78.82%) rather than by using analytical and modeling approaches (n=8, 9.41%), reflecting the rapid development of XR technology for mental health care. Furthermore, these studies showed that XR technology can evaluate mental disorders as effectively as, or better than, conventional approaches. CONCLUSIONS In this scoping review, we studied the adoption and implementation of XR technology for mental disorder care. Our review shows that XR treatment yields high patient satisfaction, and follow-up assessments show significant improvement with large effect sizes. Moreover, the studies adopted unique designs that were set up to record and analyze the symptoms reported by their participants. This review may aid future research and development of various XR mechanisms for differentiated mental disorder procedures.
Affiliation(s)
- Olatunji Mumini Omisore
- Research Centre for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ifeanyi Odenigbo
- Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
- Joseph Orji
- Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
- Sandra Meier
- Department of Psychiatry, Dalhousie University, Halifax, NS, Canada
- Nilufar Baghaei
- School of Electrical Engineering and Computer Science, University of Queensland, St Lucia, Australia
- Rita Orji
- Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
2
Sulis E, Mariani S, Montagna S. A survey on agents applications in healthcare: Opportunities, challenges and trends. Computer Methods and Programs in Biomedicine 2023; 236:107525. [PMID: 37084529] [DOI: 10.1016/j.cmpb.2023.107525]
Abstract
BACKGROUND AND OBJECTIVE The agent abstraction is a powerful one, developed decades ago to represent crucial aspects of artificial intelligence research. Its meaning has shifted over the years, and different nuances now exist across research communities. At its core, an agent is an autonomous computational entity capable of sensing, acting, and interacting with other agents and its environment. This review examines how agent-based techniques have been implemented and evaluated in a specific and very important domain, i.e. healthcare research. METHODS We survey key areas of agent-based research in healthcare, e.g. individual and collective behaviours, communicable and non-communicable diseases, and social epidemiology. We propose a systematic search and critical review of relevant recent works, introduced by an exploratory network analysis. RESULTS The network analysis identified 5 main research clusters, the most active authors, and 4 main research topics. CONCLUSIONS Our findings support discussion of future directions for increasing the value of agent-based approaches in healthcare.
Affiliation(s)
- Emilio Sulis
- Computer Science Department, University of Torino, Via Pessinetto 12, Turin, 10149, Italy
- Stefano Mariani
- Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, Viale A. Allegri 9, Reggio Emilia, 42121, Italy
- Sara Montagna
- Department of Pure and Applied Sciences, University of Urbino, Piazza della Repubblica, 13, Urbino, 61029, Italy
3
Esposito A, Amorese T, Cuciniello M, Esposito AM, Cordasco G. Do you like me? Behavioral and physical features for socially and emotionally engaging interactive systems. Frontiers in Computer Science 2023. [DOI: 10.3389/fcomp.2023.1138501]
Abstract
With the aim of giving an overview of the most recent discoveries in the field of socially engaging interactive systems, the present paper discusses features affecting users' acceptance of virtual agents, robots, and chatbots. In addition, the questionnaires exploited in several investigations to assess the acceptance of virtual agents, robots, and chatbots (voice only) are discussed and reported in the Supplementary material to make them available to the scientific community. These questionnaires were developed by the authors as a scientific contribution to the H2020 projects EMPATHIC (http://www.empathic-project.eu/) and Menhir (https://menhir-project.eu/) and the Italian-funded projects SIROBOTICS (https://www.exprivia.it/it-tile-6009-si-robotics/) and ANDROIDS (https://www.psicologia.unicampania.it/android-project), to guide the design and implementation of the promised assistive interactive dialog systems. They quantitatively evaluate virtual agent acceptance (VAAQ), robot acceptance (RAQ), and synthetic virtual agent voice acceptance (VAVAQ).
4
Kaywan P, Ahmed K, Ibaida A, Miao Y, Gu B. Early detection of depression using a conversational AI bot: A non-clinical trial. PLoS One 2023; 18:e0279743. [PMID: 36735701] [PMCID: PMC9897524] [DOI: 10.1371/journal.pone.0279743]
Abstract
BACKGROUND Artificial intelligence (AI) has gained momentum in behavioural health interventions in recent years. However, a limited number of studies use or apply such methodologies in the early detection of depression. A large population needing psychological intervention is left unidentified due to barriers such as cost, location, stigma, and a global shortage of health workers. Therefore, it is essential to develop a mass screening integrative approach that can identify people with depression at an early stage to avoid a potential crisis. OBJECTIVES This study aims to understand the feasibility and efficacy of using AI-enabled chatbots in the early detection of depression. METHODS We use Dialogflow as a conversation interface to build a Depression Analysis (DEPRA) chatbot. A structured and authoritative early detection depression interview guide, which contains 27 questions combining the structured interview guide for the Hamilton Depression Scale (SIGH-D) and the Inventory of Depressive Symptomatology (IDS-C), underpins the design of the conversation flow. To attain better accuracy and a wide variety of responses, we trained Dialogflow with utterances collected from a focus group of 10 people. The focus group members included academics and HDR candidates who are conscious, vigilant, and have a clear understanding of the questions. In addition, DEPRA is integrated with a social media platform to provide practical access to all the participants. For the non-clinical trial, we recruited 50 participants aged between 18 and 80 from across Australia. To evaluate the practicability and performance of DEPRA, we also asked participants to submit a user satisfaction survey at the end of the conversation. RESULTS A sample of 50 participants, with an average age of 34.7 years, completed the non-clinical trial. More than half of the participants (54%) were male, and the major ethnicities were Asian (63%), Middle Eastern (25%), and other (12%).
The first group comprised professional academic staff and HDR candidates; the second and third groups comprised relatives, friends, and volunteers recruited via social media promotions. DEPRA uses two scientific scoring systems, QIDS-SR and IDS-SR, to verify the results of early depression detection. Both scoring systems returned similar outcomes, with slight variations across depression levels. According to IDS-SR, 30% of participants were healthy, 14% mild, 22% moderate, 14% severe, and 20% very severe. QIDS-SR suggests 32% were healthy, 18% mild, 10% moderate, 18% severe, and 22% very severe. Furthermore, the overall satisfaction rate of using DEPRA was 79%, indicating a high rate of user satisfaction and engagement. CONCLUSION DEPRA shows promise as a feasible option for developing a mass screening integrated approach for the early detection of depression. Although the chatbot is not intended to replace the functionality of mental health professionals, it shows promise as a means of assisting with automation and concealed communication with verified scoring systems.
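The severity bands reported above come from mapping a participant's IDS-SR total score to a category. A minimal sketch of that lookup, using the commonly cited IDS-SR cutoffs (the exact thresholds used by DEPRA are an assumption here and should be checked against the paper):

```python
def ids_sr_severity(total):
    """Map an IDS-SR total score (0-84) to a severity band.

    Cutoffs follow commonly published IDS-SR conventions; they are an
    assumption, not a value taken from the DEPRA study itself.
    """
    bands = [
        (13, "healthy"),
        (25, "mild"),
        (38, "moderate"),
        (48, "severe"),
        (84, "very severe"),
    ]
    for upper, label in bands:
        if total <= upper:
            return label
    raise ValueError("IDS-SR total must be between 0 and 84")
```

A chatbot backend would call this once per completed interview, e.g. `ids_sr_severity(30)` to band a score of 30 as moderate.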
Affiliation(s)
- Payam Kaywan
- Intelligent Technology Innovation Lab, Victoria University, Melbourne, Victoria, Australia
- Khandakar Ahmed
- Intelligent Technology Innovation Lab, Victoria University, Melbourne, Victoria, Australia
- Ayman Ibaida
- Intelligent Technology Innovation Lab, Victoria University, Melbourne, Victoria, Australia
- Yuan Miao
- Intelligent Technology Innovation Lab, Victoria University, Melbourne, Victoria, Australia
- Bruce Gu
- Intelligent Technology Innovation Lab, Victoria University, Melbourne, Victoria, Australia
5
Eysenbach G, May R. Developing a Technical-Oriented Taxonomy to Define Archetypes of Conversational Agents in Health Care: Literature Review and Cluster Analysis. J Med Internet Res 2023; 25:e41583. [PMID: 36716093] [PMCID: PMC9926340] [DOI: 10.2196/41583]
Abstract
BACKGROUND The evolution of artificial intelligence and natural language processing generates new opportunities for conversational agents (CAs) that communicate and interact with individuals. In the health domain, CAs became popular as they allow for simulating the real-life experience in a health care setting, which is the conversation with a physician. However, it is still unclear which technical archetypes of health CAs can be distinguished. Such technical archetypes are required, among other things, for harmonizing evaluation metrics or describing the landscape of health CAs. OBJECTIVE The objective of this work was to develop a technical-oriented taxonomy for health CAs and characterize archetypes of health CAs based on their technical characteristics. METHODS We developed a taxonomy of technical characteristics for health CAs based on scientific literature and empirical data and by applying a taxonomy development framework. To demonstrate the applicability of the taxonomy, we analyzed the landscape of health CAs of recent years based on a literature review. To form technical design archetypes of health CAs, we applied a k-means clustering method. RESULTS Our taxonomy comprises 18 unique dimensions corresponding to 4 perspectives of technical characteristics (setting, data processing, interaction, and agent appearance). Each dimension consists of 2 to 5 characteristics. The taxonomy was validated based on 173 unique health CAs identified out of 1671 initially retrieved publications. The 173 CAs were clustered into 4 distinctive archetypes: a text-based ad hoc supporter; a multilingual, hybrid ad hoc supporter; a hybrid, single-language temporary advisor; and, finally, an embodied temporary advisor, rule-based with hybrid input and output options. CONCLUSIONS From the cluster analysis, we learned that the time dimension is important from a technical perspective for distinguishing health CA archetypes. Moreover, we identified additional distinctive, dominant characteristics that are relevant when evaluating health-related CAs (eg, input and output options or the complexity of the CA personality). Our archetypes reflect the current landscape of health CAs, which is characterized by rule-based, simple systems in terms of CA personality and interaction. With increasing research interest in this field, we expect more complex systems to arise. The archetype-building process should be repeated after some time to check whether new design archetypes emerge.
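The archetype-forming step described in this abstract (encoding each CA's categorical taxonomy characteristics and clustering them with k-means) can be sketched as follows. The dimension names and records are invented for illustration, and a minimal hand-rolled k-means stands in for whatever implementation the authors actually used:

```python
def one_hot(record, dims):
    """Encode a CA's taxonomy characteristics as a binary vector,
    one slot per (dimension, characteristic) pair."""
    vec = []
    for dim, choices in dims:
        for c in choices:
            vec.append(1.0 if record.get(dim) == c else 0.0)
    return vec


def kmeans(points, k, iters=20):
    """Minimal k-means returning one cluster label per point.

    Deterministic init (first k distinct points) keeps the sketch
    reproducible; a real analysis would use multiple random restarts.
    """
    centroids = []
    for p in points:
        if p not in centroids:
            centroids.append(p)
        if len(centroids) == k:
            break
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared distance
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])),
            )
        # update step: centroid becomes the mean of its members
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels


# Hypothetical taxonomy dimensions (not the paper's actual 18 dimensions):
dims = [("interface", ["text", "voice"]), ("timing", ["ad hoc", "temporary"])]
records = (
    [{"interface": "text", "timing": "ad hoc"}] * 3
    + [{"interface": "voice", "timing": "temporary"}] * 3
)
points = [one_hot(r, dims) for r in records]
labels = kmeans(points, 2)
# text-based ad hoc records land in one cluster, voice-based temporary
# records in the other, mirroring how archetypes separate.
```

On real data, the cluster centroids themselves (which characteristics dominate each cluster) are what get read off as the archetypes.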
Affiliation(s)
- Richard May
- Harz University of Applied Sciences, Wernigerode, Germany
6
Minder B, Wolf P, Baldauf M, Verma S. Voice assistants in private households: a conceptual framework for future research in an interdisciplinary field. Humanities and Social Sciences Communications 2023; 10:173. [PMID: 37096242] [PMCID: PMC10113989] [DOI: 10.1057/s41599-023-01615-z]
Abstract
The present study identifies, organizes, and structures the available scientific knowledge on the recent use and prospects of Voice Assistants (VA) in private households. The systematic review of 207 articles from the Computer, Social, and Business and Management research domains combines bibliometric analysis with qualitative content analysis. The study contributes to earlier research by consolidating the as yet dispersed insights from scholarly research and by conceptualizing linkages between research domains around common themes. We find that, despite advances in the technological development of VA, research largely lacks cross-fertilization between findings from the Social and Business and Management Sciences. This is needed for developing and monetizing meaningful VA use cases and solutions that match the needs of private households. A few articles show that future research is well advised to make interdisciplinary efforts to create a common understanding from complementary findings, e.g., what necessary social, legal, functional, and technological extensions could integrate social, behavioral, and business aspects with technological development. We identify future VA-based business opportunities and propose integrated future research avenues for aligning the different disciplines' scholarly efforts.
Affiliation(s)
- Bettina Minder
- Lucerne School of Information Technology and Computer Sciences, Lucerne University of Applied Sciences and Arts, Lucerne, Switzerland
- Patricia Wolf
- Department of Business & Management, University of Southern Denmark, Odense, Denmark
- Department of Management, Lucerne University of Applied Sciences and Arts, Lucerne, Switzerland
- Matthias Baldauf
- Institute for Information and Process Management, Eastern Switzerland University of Applied Sciences, St.Gallen, Switzerland
- Surabhi Verma
- Department of Business & Management, University of Southern Denmark, Odense, Denmark
- Department of Economics and Business Economics, Aarhus University, Aarhus, Denmark
7
Wiebe A, Kannen K, Selaskowski B, Mehren A, Thöne AK, Pramme L, Blumenthal N, Li M, Asché L, Jonas S, Bey K, Schulze M, Steffens M, Pensel MC, Guth M, Rohlfsen F, Ekhlas M, Lügering H, Fileccia H, Pakos J, Lux S, Philipsen A, Braun N. Virtual reality in the diagnostic and therapy for mental disorders: A systematic review. Clin Psychol Rev 2022; 98:102213. [PMID: 36356351] [DOI: 10.1016/j.cpr.2022.102213]
Abstract
BACKGROUND Virtual reality (VR) technologies are playing an increasingly important role in the diagnostics and treatment of mental disorders. OBJECTIVE To systematically review the current evidence regarding the use of VR in the diagnostics and treatment of mental disorders. DATA SOURCE Systematic literature searches via PubMed (last literature update: 9th of May 2022) were conducted for the following areas of psychopathology: Specific phobias, panic disorder and agoraphobia, social anxiety disorder, generalized anxiety disorder, posttraumatic stress disorder (PTSD), obsessive-compulsive disorder, eating disorders, dementia disorders, attention-deficit/hyperactivity disorder, depression, autism spectrum disorder, schizophrenia spectrum disorders, and addiction disorders. ELIGIBILITY CRITERIA To be eligible, studies had to be published in English, to be peer-reviewed, to report original research data, to be VR-related, and to deal with one of the above-mentioned areas of psychopathology. STUDY EVALUATION For each study included, various study characteristics (including interventions and conditions, comparators, major outcomes and study designs) were retrieved and a risk of bias score was calculated based on predefined study quality criteria. RESULTS Across all areas of psychopathology, k = 9315 studies were inspected, of which k = 721 studies met the eligibility criteria. From these studies, 43.97% were considered assessment-related, 55.48% therapy-related, and 0.55% were mixed. The highest research activity was found for VR exposure therapy in anxiety disorders, PTSD and addiction disorders, where the most convincing evidence was found, as well as for cognitive trainings in dementia and social skill trainings in autism spectrum disorder. CONCLUSION While VR exposure therapy will likely find its way successively into regular patient care, there are also many other promising approaches, but most are not yet mature enough for clinical application. 
REVIEW REGISTRATION PROSPERO register CRD42020188436. FUNDING The review was funded by budgets from the University of Bonn. No third party funding was involved.
Affiliation(s)
- Annika Wiebe
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Kyra Kannen
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Benjamin Selaskowski
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Aylin Mehren
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Ann-Kathrin Thöne
- School of Child and Adolescent Cognitive Behavior Therapy (AKiP), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Lisa Pramme
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Nike Blumenthal
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Mengtong Li
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Laura Asché
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Stephan Jonas
- Institute for Digital Medicine, University Hospital Bonn, Bonn, Germany
- Katharina Bey
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Marcel Schulze
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Maria Steffens
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Max Christian Pensel
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Matthias Guth
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Felicia Rohlfsen
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Mogda Ekhlas
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Helena Lügering
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Helena Fileccia
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Julian Pakos
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Silke Lux
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Alexandra Philipsen
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Niclas Braun
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
8
Chaudhry BM, Islam A. A Mobile Application-Based Relational Agent as a Health Professional for COVID-19 Patients: Design, Approach, and Implications. International Journal of Environmental Research and Public Health 2022; 19:13794. [PMID: 36360674] [PMCID: PMC9656668] [DOI: 10.3390/ijerph192113794]
Abstract
Relational Agents' (RAs) ability to maintain socio-emotional relationships with users can be an asset to COVID-19 patients. The goal of this research was to identify principles for designing an RA that can act as a health professional for a COVID-19 patient. We first identified tasks that such an RA could perform by interviewing 33 individuals who had recovered from COVID-19. The transcribed interviews were analyzed using qualitative thematic analysis. Based on the findings, four sets of hypothetical conversations were handcrafted to illustrate how the proposed RA would execute the identified tasks. These conversations were then evaluated by 43 healthcare professionals in a qualitative study. Thematic analysis was again used to identify characteristics that would be suitable for the proposed RA. The results suggest that the RA must: model clinical protocols; incorporate evidence-based interventions; inform, educate, and remind patients; build trusting relationships; and support patients' socio-emotional needs. The findings have implications for designing RAs for other healthcare contexts beyond the pandemic.
9
Islam A, Chaudhry BM. Design Validation of a Relational Agent by COVID-19 Patients (Preprint). JMIR Hum Factors 2022. [DOI: 10.2196/42740]
10
Islam A, Chaudhry BM. A Relational Agent for the COVID-19 Patients: Design, Approach, and Implications. JMIR Hum Factors 2022. [PMID: 36098997] [DOI: 10.2196/37734]
Abstract
BACKGROUND Relational agents (RAs) have shown effectiveness in various health interventions with and without doctors and hospital facilities. We suggest that in situations such as the COVID-19 pandemic, when healthcare professionals (HCPs) and facilities are unable to cope with increased demands, RAs can play a major role in ameliorating the situation. OBJECTIVE The goal of this research was to seek design validation of a prototypical RA that addresses the healthcare needs of COVID-19 patients. METHODS RAs can deliver health interventions during a pandemic such as COVID-19, but they have not been well explored in this domain. To address this gap, a prototypical RA was iteratively designed and developed in collaboration with infected patients (n=21) and two groups of HCPs (n=19 and n=16, respectively) to aid COVID-19 patients at various stages by performing four main tasks: testing guidance, support during self-isolation, handling of emergency situations, and promotion of post-recovery mental well-being. RESULTS A survey of 98 individuals was used to evaluate the usability of the prototype with the System Usability Scale (SUS), and it received an average score of 58.82. Moreover, participants rated the perceived usefulness and acceptability of the system on Likert scales: 89.65% perceived it to be helpful, and 68.97% accepted it as a viable alternative to HCPs. CONCLUSIONS The prototypical RA received favorable feedback from the participants, and they were inclined to accept it as an alternative to HCPs in non-life-threatening scenarios despite the usability rating falling below the acceptable threshold. Based on participants' feedback, we recommend further development of the RA with improved automation and emotional support, and the ability to provide information, tracking, and specific recommendations.
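The SUS average of 58.82 reported above sits below the commonly cited acceptability benchmark of 68. The score is derived from ten 1-5 Likert items with the standard SUS formula (odd items contribute the response minus 1, even items 5 minus the response, and the sum is scaled by 2.5); a minimal sketch:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    1-5 Likert responses using the standard SUS formula."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for i, r in enumerate(responses, start=1):
        # odd items are positively worded, even items negatively worded
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

A study's reported SUS figure is then simply the mean of `sus_score` over all participants' response sets.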
Affiliation(s)
- Ashraful Islam
- University of Louisiana at Lafayette, 104 East University Avenue, Lafayette, US
11
Koulouri T, Macredie RD, Olakitan D. Chatbots to Support Young Adults’ Mental Health: an Exploratory Study of Acceptability. ACM Transactions on Interactive Intelligent Systems 2022. [DOI: 10.1145/3485874]
Abstract
Despite the prevalence of mental health conditions, stigma, lack of awareness and limited resources impede access to care, creating a need to improve mental health support. The recent surge in scientific and commercial interest in conversational agents and their potential to improve diagnosis and treatment seems a potentially fruitful area in this respect, particularly for young adults who widely use such systems in other contexts. Yet, there is little research that considers the acceptability of conversational agents in mental health. This study, therefore, presents three research activities that explore whether conversational agents and, in particular, chatbots can be an acceptable solution in mental healthcare for young adults. First, a survey of young adults (in a university setting) provides an understanding of the landscape of mental health in this age group and of their views around mental health technology, including chatbots. Second, a literature review synthesises current evidence relating to the acceptability of mental health conversational agents and points to future research priorities. Third, interviews with counsellors who work with young adults, supported by a chatbot prototype and user-centred design techniques, reveal the perceived benefits and potential roles of mental health chatbots from the perspective of mental health professionals, while suggesting preconditions for the acceptability of the technology. Taken together, these research activities: provide evidence that chatbots are an acceptable solution to offering mental health support for young adults; identify specific challenges relating to both the technology and environment; and argue for the application of user-centred approaches during development of mental health chatbots and more systematic and rigorous evaluations of the resulting solutions.
12
A Systematic Review on Healthcare Artificial Intelligent Conversational Agents for Chronic Conditions. Sensors 2022; 22:2625. [PMID: 35408238] [PMCID: PMC9003264] [DOI: 10.3390/s22072625]
Abstract
This paper reviews different types of conversational agents used in health care for chronic conditions, examining their underlying communication technology, evaluation measures, and AI methods. A systematic search was performed in February 2021 on PubMed Medline, EMBASE, PsycINFO, CINAHL, Web of Science, and ACM Digital Library. Studies were included if they focused on consumers, caregivers, or healthcare professionals in the prevention, treatment, or rehabilitation of chronic diseases, involved conversational agents, and tested the system with human users. The search retrieved 1087 articles. Twenty-six studies met the inclusion criteria. Out of 26 conversational agents (CAs), 16 were chatbots, seven were embodied conversational agents (ECA), one was a conversational agent in a robot, and another was a relational agent. One agent was not specified. Based on this review, the overall acceptance of CAs by users for the self-management of their chronic conditions is promising. Users’ feedback shows helpfulness, satisfaction, and ease of use in more than half of included studies. Although many users in the studies appear to feel more comfortable with CAs, there is still a lack of reliable and comparable evidence to determine the efficacy of AI-enabled CAs for chronic health conditions due to the insufficient reporting of technical implementation details.
13
Goonesekera Y, Donkin L. A Cognitive Behavior Therapy Chatbot (Otis) for Health Anxiety Management: A Mixed-Methods Pilot Study (Preprint). JMIR Form Res 2022; 6:e37877. [PMID: 36150049] [PMCID: PMC9586257] [DOI: 10.2196/37877]
Abstract
Background An increase in health anxiety was observed during the COVID-19 pandemic. However, due to physical distancing restrictions and a strained mental health system, people were unable to access support to manage health anxiety. Chatbots are emerging as an interactive means to deliver psychological interventions in a scalable manner and provide an opportunity for novel therapy delivery to large groups of people, including those who might struggle to access traditional therapies. Objective The aim of this mixed methods pilot study was to investigate the feasibility, acceptability, engagement, and effectiveness of a cognitive behavioral therapy (CBT)–based chatbot (Otis) as an early health anxiety management intervention for adults in New Zealand during the COVID-19 pandemic. Methods Users were asked to complete a 14-day program run by Otis, a primarily decision tree–based chatbot on Facebook Messenger. Health anxiety, general anxiety, intolerance of uncertainty, personal well-being, and quality of life were measured preintervention, postintervention, and at a 12-week follow-up. Paired samples t tests and 1-way ANOVAs were conducted to investigate the associated changes in the outcomes over time. Semistructured interviews and written responses in the self-report questionnaires and Facebook Messenger were thematically analyzed. Results The trial was completed by 29 participants who provided outcome measures at both postintervention and follow-up. Although the average decrease in health anxiety did not reach significance at postintervention (P=.55) or follow-up (P=.08), qualitative analysis demonstrated that participants perceived themselves as benefiting from the intervention. Significant improvement in general anxiety, personal well-being, and quality of life was associated with the use of Otis at postintervention and follow-up. Anthropomorphism, Otis’ appearance, and delivery of content facilitated the use of Otis. In contrast, technical difficulties and high performance and effort expectancy were barriers to acceptance of and engagement with Otis. Conclusions Otis may be a feasible, acceptable, and engaging means of delivering CBT to improve anxiety management, quality of life, and personal well-being, but might not significantly reduce health anxiety.
Affiliation(s)
- Yenushka Goonesekera
- Department of Psychological Medicine, The University of Auckland, Auckland, New Zealand
- Liesje Donkin
- Department of Psychological Medicine, The University of Auckland, Auckland, New Zealand
- Department of Psychology and Neuroscience, Auckland University of Technology, Auckland, New Zealand
|
14
|
Zhou S, Zhao J, Zhang L. Application of Artificial Intelligence on Psychological Interventions and Diagnosis: An Overview. Front Psychiatry 2022; 13:811665. [PMID: 35370846 PMCID: PMC8968136 DOI: 10.3389/fpsyt.2022.811665] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/11/2021] [Accepted: 02/21/2022] [Indexed: 12/12/2022] Open
Abstract
BACKGROUND Innovative technologies such as machine learning, big data, and artificial intelligence (AI) are being adopted for personalized medicine, and psychological interventions and diagnosis are facing huge paradigm shifts. In this literature review, we aim to highlight potential applications of AI in psychological interventions and diagnosis. METHODS This literature review covers studies that discuss how innovative technologies such as deep learning (DL) and AI are affecting psychological assessment and psychotherapy. We performed a search on PubMed and Web of Science using the terms "psychological interventions," "diagnosis on mental health disorders," "artificial intelligence," and "deep learning." Only studies based on patients' datasets were considered. RESULTS Nine studies met the inclusion criteria. Beneficial effects on clinical symptoms or prediction were shown in these studies, but future work is needed to determine the long-term effects. LIMITATIONS The major limitations of the reviewed studies are their small sample sizes and the lack of long-term, controlled follow-up studies for specific symptoms. CONCLUSIONS AI applications such as DL showed promising results in clinical practice, which could have a profound impact on personalized medicine for mental health conditions. Future studies can improve further by increasing sample sizes and focusing on ethical approvals and adherence for online therapy.
Affiliation(s)
- Sijia Zhou
- Department of Psychiatry, Guangzhou First People's Hospital, The Second Affiliated Hospital of South China University of Technology, Guangzhou, China
- Jingping Zhao
- Mental Health Institute of the Second Xiangya Hospital, Central South University, Changsha, China; Chinese National Clinical Research Center on Mental Disorders, Changsha, China; Department of Psychiatry, Chinese National Technology Institute on Mental Disorders, Changsha, China; Hunan Key Laboratory of Psychiatry and Mental Health, Changsha, China
- Lulu Zhang
- Department of Psychiatry, Guangzhou First People's Hospital, The Second Affiliated Hospital of South China University of Technology, Guangzhou, China
|
15
|
Pawassar CM, Tiberius V. Virtual Reality in Health Care: Bibliometric Analysis. JMIR Serious Games 2021; 9:e32721. [PMID: 34855606 PMCID: PMC8686483 DOI: 10.2196/32721] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2021] [Revised: 09/20/2021] [Accepted: 09/24/2021] [Indexed: 12/21/2022] Open
Abstract
BACKGROUND Research into the application of virtual reality technology in the health care sector has rapidly increased, resulting in a large body of research that is difficult to keep up with. OBJECTIVE We will provide an overview of the annual publication numbers in this field and the most productive and influential countries, journals, and authors, as well as the most used, most co-occurring, and most recent keywords. METHODS Based on a data set of 356 publications and 20,363 citations derived from Web of Science, we conducted a bibliometric analysis using BibExcel, HistCite, and VOSviewer. RESULTS The strongest growth in publications occurred in 2020, accounting for 29.49% of all publications so far. The most productive countries are the United States, the United Kingdom, and Spain; the most influential countries are the United States, Canada, and the United Kingdom. The most productive journals are the Journal of Medical Internet Research (JMIR), JMIR Serious Games, and the Games for Health Journal; the most influential journals are Patient Education and Counselling, Medical Education, and Quality of Life Research. The most productive authors are Riva, del Piccolo, and Schwebel; the most influential authors are Finset, del Piccolo, and Eide. The most frequently occurring keywords other than "virtual" and "reality" are "training," "trial," and "patients." The most relevant research themes are communication, education, and novel treatments; the most recent research trends are fitness and exergames. CONCLUSIONS The analysis shows that the field has left its infant state and its specialization is advancing, with a clear focus on patient usability.
Affiliation(s)
- Victor Tiberius
- Faculty of Economics and Social Sciences, University of Potsdam, Potsdam, Germany
|
16
|
May R, Denecke K. Security, privacy, and healthcare-related conversational agents: a scoping review. Inform Health Soc Care 2021; 47:194-210. [PMID: 34617857 DOI: 10.1080/17538157.2021.1983578] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Health chatbots interview patients and collect health data, a process that makes demands on data security and data privacy. Our aim was to identify how and to what extent security and privacy are considered in current health chatbots. We conducted a scoping review by searching three bibliographic databases (PubMed, ACM Digital Library, IEEE Xplore) for papers reporting on chatbots in healthcare. We extracted which data is stored by health chatbots, how, and where, and identified which external services have access to the data. Out of 1026 retrieved papers, we included 70 studies in the qualitative synthesis. Most papers report on chatbots that collect and process personal health data, usually in the context of mental health coaching applications. The majority did not provide any information regarding security or privacy aspects. We determined limitations in the literature and identified concrete challenges, including data access and usage of (third-party) services, data storage, data security methods, use case peculiarities and data privacy, as well as legal requirements. Data privacy and security in health chatbots are still underresearched, and related information is underrepresented in the scientific literature. By addressing these five key challenges in future work, the transfer of theoretical solutions into practice can be facilitated.
Affiliation(s)
- Richard May
- Faculty of Automation and Computer Science, Harz University of Applied Sciences, Wernigerode, Germany
- Kerstin Denecke
- Institute for Medical Informatics, Bern University of Applied Sciences, Biel/Bienne, Switzerland
|
17
|
Experimental Disproof of a Manga Character Construction Model. Symmetry (Basel) 2021. [DOI: 10.3390/sym13050838] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
In prior works, the impressions of the individual elements of virtual agents/manga characters and the overall impression of those agents/characters were considered completely symmetric. In this work, we conducted a preliminary experiment toward developing a system that creates designs of virtual agents from a text. In this experiment, the participants read the text and chose the image of an agent and the social group that resembled their mental image. We introduced a lattice derived by the rough set induction method to propose a model for analyzing the mental image. In this model, we constructed the lattice from two interpretations to evaluate the complexity of the mental image generation process. As a result, the lattices derived from social groups and appearance were non-Boolean; however, those derived from the two kinds of design features were Boolean. This shows that mental images of appearance and social group cannot be combined arbitrarily, and thus that the relation between the individual elements of a virtual agent/manga character and the overall agent/character is not symmetric.
|
18
|
Vaidyam AN, Linggonegoro D, Torous J. Changes to the Psychiatric Chatbot Landscape: A Systematic Review of Conversational Agents in Serious Mental Illness: Changements du paysage psychiatrique des chatbots: une revue systématique des agents conversationnels dans la maladie mentale sérieuse. CANADIAN JOURNAL OF PSYCHIATRY. REVUE CANADIENNE DE PSYCHIATRIE 2021; 66:339-348. [PMID: 33063526 PMCID: PMC8172347 DOI: 10.1177/0706743720966429] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
Abstract
OBJECTIVE The need for digital tools in mental health is clear, given insufficient access to mental health services. Conversational agents, also known as chatbots or voice assistants, are digital tools capable of holding natural language conversations. Since our last review in 2018, many new conversational agents and studies have emerged, and we aimed to reassess the conversational agent landscape in this updated systematic review. METHODS A systematic literature search was conducted in January 2020 using the PubMed, Embase, PsycINFO, and Cochrane databases. Studies included were those that involved a conversational agent assessing serious mental illness: major depressive disorder, schizophrenia spectrum disorders, bipolar disorder, or anxiety disorder. RESULTS Of the 247 references identified from the selected databases, 7 studies met the inclusion criteria. Overall, experiences with conversational agents were generally positive with regard to diagnostic quality, therapeutic efficacy, and acceptability. There continues to be, however, a lack of standard measures that allow easy comparison of studies in this space. Several populations lacked representation, such as the pediatric population and those with schizophrenia or bipolar disorder. While comparing 2018 to 2020 research offers useful insight into changes and growth, the high degree of heterogeneity between all studies in this space makes direct comparison challenging. CONCLUSIONS This review revealed few but generally positive outcomes regarding conversational agents' diagnostic quality, therapeutic efficacy, and acceptability, which may augment mental health care. Despite this increase in research activity, there continues to be a lack of standard measures for evaluating conversational agents, as well as several neglected populations. We recommend that standardization of conversational agent studies include patient adherence and engagement, therapeutic efficacy, and clinician perspectives.
Affiliation(s)
- Aditya Nrusimha Vaidyam
- Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Danny Linggonegoro
- Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- John Torous
- Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
|
19
|
Prochaska JJ, Vogel EA, Chieng A, Kendra M, Baiocchi M, Pajarito S, Robinson A. A Therapeutic Relational Agent for Reducing Problematic Substance Use (Woebot): Development and Usability Study. J Med Internet Res 2021; 23:e24850. [PMID: 33755028 PMCID: PMC8074987 DOI: 10.2196/24850] [Citation(s) in RCA: 63] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Revised: 01/19/2021] [Accepted: 01/31/2021] [Indexed: 01/02/2023] Open
Abstract
Background Misuse of substances is common, can be serious and costly to society, and often goes untreated due to barriers to accessing care. Woebot is a mental health digital solution informed by cognitive behavioral therapy and built upon an artificial intelligence–driven platform to deliver tailored content to users. In a previous 2-week randomized controlled trial, Woebot alleviated depressive symptoms. Objective This study aims to adapt Woebot for the treatment of substance use disorders (W-SUDs) and examine its feasibility, acceptability, and preliminary efficacy. Methods American adults (aged 18-65 years) who screened positive for substance misuse without major health contraindications were recruited from online sources and flyers and enrolled between March 27 and May 6, 2020. In a single-group pre/post design, all participants received W-SUDs for 8 weeks. W-SUDs provided mood, craving, and pain tracking and modules (psychoeducational lessons and psychotherapeutic tools) using elements of dialectical behavior therapy and motivational interviewing. Paired samples t tests and McNemar nonparametric tests were used to examine within-subject changes from pre- to posttreatment on measures of substance use, confidence, cravings, mood, and pain. Results The sample (N=101) had a mean age of 36.8 years (SD 10.0), and 75.2% (76/101) of the participants were female, 78.2% (79/101) were non-Hispanic White, and 72.3% (73/101) were employed. Participants’ W-SUDs use averaged 15.7 (SD 14.2) days, 12.1 (SD 8.3) modules, and 600.7 (SD 556.5) sent messages. About 94% (562/598) of all completed psychoeducational lessons were rated positively. From treatment start to end, in-app craving ratings were reduced by half (87/101, 86.1% reporting cravings in the app; odds ratio 0.48, 95% CI 0.32-0.73). Posttreatment assessment completion was 50.5% (51/101), with better retention among those who initially screened higher on substance misuse. 
From pre- to posttreatment, confidence to resist urges to use substances significantly increased (mean score change +16.9, SD 21.4; P<.001), whereas past month substance use occasions (mean change −9.3, SD 14.1; P<.001) and scores on the Alcohol Use Disorders Identification Test-Concise (mean change −1.3, SD 2.6; P<.001), 10-item Drug Abuse Screening Test (mean change −1.2, SD 2.0; P<.001), 8-item Patient Health Questionnaire (mean change −2.1, SD 5.2; P=.005), Generalized Anxiety Disorder-7 (mean change −2.3, SD 4.7; P=.001), and cravings scale (68.6% vs 47.1% moderate to extreme; P=.01) significantly decreased. Most participants would recommend W-SUDs to a friend (39/51, 76%) and reported receiving the service they desired (41/51, 80%). Fewer felt W-SUDs met most or all of their needs (22/51, 43%). Conclusions W-SUDs was feasible to deliver, engaging, and acceptable and was associated with significant improvements in substance use, confidence, cravings, depression, and anxiety. Study attrition was high. Future research will evaluate W-SUDs in a randomized controlled trial with a more diverse sample and with the use of greater study retention strategies. Trial Registration ClinicalTrials.gov NCT04096001; http://clinicaltrials.gov/ct2/show/NCT04096001.
Affiliation(s)
- Judith J Prochaska
- Stanford Prevention Research Center, School of Medicine, Stanford University, Stanford, CA, United States
- Erin A Vogel
- Stanford Prevention Research Center, School of Medicine, Stanford University, Stanford, CA, United States
- Amy Chieng
- Stanford Prevention Research Center, School of Medicine, Stanford University, Stanford, CA, United States
- Matthew Kendra
- Department of Psychiatry & Behavioral Sciences, School of Medicine, Stanford University, Stanford, CA, United States
- Michael Baiocchi
- Department of Epidemiology & Population Health, School of Medicine, Stanford University, Stanford, CA, United States
|
20
|
Mariamo A, Temcheff CE, Léger PM, Senecal S, Lau MA. Emotional Reactions and Likelihood of Response to Questions Designed for a Mental Health Chatbot Among Adolescents: Experimental Study. JMIR Hum Factors 2021; 8:e24343. [PMID: 33734089 PMCID: PMC8080266 DOI: 10.2196/24343] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2020] [Revised: 12/27/2020] [Accepted: 01/17/2021] [Indexed: 01/22/2023] Open
Abstract
BACKGROUND Psychological distress increases across adolescence and has been associated with several important health outcomes with consequences that can extend into adulthood. One type of technological innovation that may serve as a unique intervention for youth experiencing psychological distress is the conversational agent, otherwise known as a chatbot. Further research is needed on the factors that may make mental health chatbots intended for adolescents more appealing and increase the likelihood that adolescents will use them. OBJECTIVE The aim of this study was to assess adolescents' emotional reactions and likelihood of responding to questions that could be posed by a mental health chatbot. Understanding adolescent preferences and the factors that could increase adolescents' likelihood of responding to chatbot questions could assist in the future design of mental health chatbots intended for youth. METHODS We recruited 19 adolescents aged 14 to 17 years to participate in a study with a 2×2×3 within-subjects factorial design. Each participant was sequentially presented with 96 chatbot questions for a duration of 8 seconds per question. Following each presentation, participants were asked to indicate how likely they were to respond to the question, as well as their perceived affective reaction to it. Demographic data were collected, and an informal debriefing was conducted with each participant. RESULTS Participants were an average of 15.3 years old (SD 1.00) and mostly female (11/19, 58%). Logistic regressions showed that the presence of GIFs predicted perceived emotional valence (β=-.40, P<.001), such that questions without GIFs were associated with a negative perceived emotional valence. Question type predicted emotional valence, such that yes/no questions (β=-.23, P=.03) and open-ended questions (β=-.26, P=.01) were associated with a negative perceived emotional valence compared to multiple response choice questions. Question type also predicted the likelihood of response, such that yes/no questions were associated with a lower likelihood of response compared to multiple response choice questions (β=-.24, P=.03) and a higher likelihood of response compared to open-ended questions (β=.54, P<.001). CONCLUSIONS The findings of this study add to the rapidly growing field of teen-computer interaction and contribute to our understanding of adolescent user experience in interactions with a mental health chatbot. The insights gained from this study may assist developers and designers of mental health chatbots.
Affiliation(s)
- Audrey Mariamo
- Department of Educational and Counselling Psychology, McGill University, Montreal, QC, Canada
- Marianne Alexandra Lau
- Department of Educational and Counselling Psychology, McGill University, Montreal, QC, Canada
|
21
|
Abd-Alrazaq AA, Alajlani M, Ali N, Denecke K, Bewick BM, Househ M. Perceptions and Opinions of Patients About Mental Health Chatbots: Scoping Review. J Med Internet Res 2021; 23:e17828. [PMID: 33439133 PMCID: PMC7840290 DOI: 10.2196/17828] [Citation(s) in RCA: 63] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2020] [Revised: 06/01/2020] [Accepted: 06/21/2020] [Indexed: 12/27/2022] Open
Abstract
BACKGROUND Chatbots have been used in the last decade to improve access to mental health care services. Perceptions and opinions of patients influence the adoption of chatbots for health care. Many studies have been conducted to assess the perceptions and opinions of patients about mental health chatbots. To the best of our knowledge, there has been no review of the evidence surrounding perceptions and opinions of patients about mental health chatbots. OBJECTIVE This study aims to conduct a scoping review of the perceptions and opinions of patients about chatbots for mental health. METHODS The scoping review was carried out in line with the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) extension for scoping reviews guidelines. Studies were identified by searching 8 electronic databases (eg, MEDLINE and Embase) in addition to conducting backward and forward reference list checking of the included studies and relevant reviews. In total, 2 reviewers independently selected studies and extracted data from the included studies. Data were synthesized using thematic analysis. RESULTS Of 1072 citations retrieved, 37 unique studies were included in the review. The thematic analysis generated 10 themes from the findings of the studies: usefulness, ease of use, responsiveness, understandability, acceptability, attractiveness, trustworthiness, enjoyability, content, and comparisons. CONCLUSIONS The results demonstrated overall positive perceptions and opinions of patients about chatbots for mental health. Important issues to be addressed in the future concern the linguistic capabilities of the chatbots: they have to deal adequately with unexpected user input, provide high-quality responses, and show high variability in responses. To be useful for clinical practice, chatbot content has to be harmonized with individual treatment recommendations; that is, a personalization of chatbot conversations is required.
Affiliation(s)
- Alaa A Abd-Alrazaq
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Mohannad Alajlani
- Institute of Digital Healthcare, University of Warwick, Warwick, United Kingdom
- Nashva Ali
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Kerstin Denecke
- Institute for Medical Informatics, Bern University of Applied Sciences, Bern, Switzerland
- Bridgette M Bewick
- Leeds Institute of Health Sciences, School of Medicine, University of Leeds, Leeds, United Kingdom
- Mowafa Househ
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
|
22
|
Text Messaging-Based Medical Diagnosis Using Natural Language Processing and Fuzzy Logic. JOURNAL OF HEALTHCARE ENGINEERING 2020. [DOI: 10.1155/2020/8839524] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
The use of natural language processing (NLP) methods and their application to developing conversational systems for health diagnosis increases patients’ access to medical knowledge. In this study, a chatbot service was developed for the Covenant University Doctor (CUDoctor) telehealth system based on fuzzy logic rules and fuzzy inference. The service focuses on assessing the symptoms of tropical diseases in Nigeria. The Telegram Bot Application Programming Interface (API) was used to create the interconnection between the chatbot and the system, while the Twilio API was used for interconnectivity between the system and a short messaging service (SMS) subscriber. The service uses a knowledge base consisting of known facts on diseases and symptoms acquired from medical ontologies. A fuzzy support vector machine (SVM) is used to predict the disease based on the symptoms entered. Users’ inputs are recognized by NLP and forwarded to CUDoctor for decision support. Finally, a notification message displaying the end of the diagnosis process is sent to the user. The result is a medical diagnosis system that provides a personalized diagnosis based on users’ self-reported symptoms. The usability of the developed system was evaluated using the System Usability Scale (SUS), yielding a mean SUS score of 80.4, which indicates an overall positive evaluation.
|
23
|
Gainer D, Alam S, Alam H, Redding H. A FLASH OF HOPE: Eye Movement Desensitization and Reprocessing (EMDR) Therapy. INNOVATIONS IN CLINICAL NEUROSCIENCE 2020; 17:12-20. [PMID: 33520399 PMCID: PMC7839656] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
EDITORS' NOTE The patient cases presented in Psychotherapy Rounds are composite cases written to illustrate certain diagnostic characteristics and to instruct on treatment techniques. The composite cases are not real patients in treatment; any resemblance to a real patient is purely coincidental. ABSTRACT Eye movement desensitization and reprocessing (EMDR) is a specific treatment modality that utilizes bilateral stimulation to help individuals who have experienced trauma. This stimulation can occur in a variety of forms, including left-right eye movements, tapping on the knees, headphones, or handheld buzzers known as tappers. This type of psychotherapy allows individuals to redefine their self-assessment of, and responses to, a given traumatic event in eight defined steps. While EMDR is a relatively new type of psychotherapy, the existing literature has demonstrated positive results when treating patients with post-traumatic stress disorder (PTSD), utilizing eye movements to detract from negative conceptualizations in response to a specific trigger while reaffirming positive self-assessments. Research indicates that EMDR could be a promising treatment for mental health issues other than PTSD, including bipolar disorder, substance use disorders, and depressive disorders. In this article, the eight fundamental processes of EMDR are illustrated through a composite case vignette and examined alongside relevant research regarding its efficacy in treating PTSD.
Affiliation(s)
- Danielle Gainer
- Dr. Gainer, Sarah Alam, and Ms. Redding are with Wright State University Boonshoft School of Medicine in Fairborn, Ohio. Harris Alam is with University of Central Florida in Orlando, Florida
- Sarah Alam
- Dr. Gainer, Sarah Alam, and Ms. Redding are with Wright State University Boonshoft School of Medicine in Fairborn, Ohio. Harris Alam is with University of Central Florida in Orlando, Florida
- Harris Alam
- Dr. Gainer, Sarah Alam, and Ms. Redding are with Wright State University Boonshoft School of Medicine in Fairborn, Ohio. Harris Alam is with University of Central Florida in Orlando, Florida
- Hannah Redding
- Dr. Gainer, Sarah Alam, and Ms. Redding are with Wright State University Boonshoft School of Medicine in Fairborn, Ohio. Harris Alam is with University of Central Florida in Orlando, Florida
|
24
|
Abd-Alrazaq A, Safi Z, Alajlani M, Warren J, Househ M, Denecke K. Technical Metrics Used to Evaluate Health Care Chatbots: Scoping Review. J Med Internet Res 2020; 22:e18301. [PMID: 32442157 PMCID: PMC7305563 DOI: 10.2196/18301] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 04/13/2020] [Accepted: 04/15/2020] [Indexed: 01/19/2023] Open
Abstract
BACKGROUND Dialog agents (chatbots) have a long history of application in health care, where they have been used for tasks such as supporting patient self-management and providing counseling. Their use is expected to grow with increasing demands on health systems and improving artificial intelligence (AI) capability. Approaches to the evaluation of health care chatbots, however, appear to be diverse and haphazard, resulting in a potential barrier to the advancement of the field. OBJECTIVE This study aims to identify the technical (nonclinical) metrics used by previous studies to evaluate health care chatbots. METHODS Studies were identified by searching 7 bibliographic databases (eg, MEDLINE and PsycINFO) in addition to conducting backward and forward reference list checking of the included studies and relevant reviews. The studies were independently selected by two reviewers who then extracted data from the included studies. Extracted data were synthesized narratively by grouping the identified metrics into categories based on the aspect of chatbots that the metrics evaluated. RESULTS Of the 1498 citations retrieved, 65 studies were included in this review. Chatbots were evaluated using 27 technical metrics, which were related to chatbots as a whole (eg, usability, classifier performance, speed), response generation (eg, comprehensibility, realism, repetitiveness), response understanding (eg, chatbot understanding as assessed by users, word error rate, concept error rate), and esthetics (eg, appearance of the virtual agent, background color, and content). CONCLUSIONS The technical metrics of health chatbot studies were diverse, with survey designs and global usability metrics dominating. The lack of standardization and paucity of objective measures make it difficult to compare the performance of health chatbots and could inhibit advancement of the field. We suggest that researchers more frequently include metrics computed from conversation logs. In addition, we recommend the development of a framework of technical metrics with recommendations for specific circumstances for their inclusion in chatbot studies.
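For context on one of the response-understanding metrics named in this abstract: word error rate is conventionally computed as the word-level Levenshtein (edit) distance between the system's transcript and a reference transcript, divided by the number of reference words. A minimal illustrative sketch, not drawn from any of the cited studies:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, recognizing "turn lights off" against the reference "turn the lights off" is one deletion over four reference words, a word error rate of 0.25. Concept error rate is computed analogously over extracted semantic concepts rather than surface words.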
Affiliation(s)
- Alaa Abd-Alrazaq
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Zeineb Safi
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mohannad Alajlani
- Institute of Digital Healthcare, University of Warwick, Coventry, United Kingdom
- Jim Warren
- School of Computer Science, University of Auckland, Auckland, New Zealand
- Mowafa Househ
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Kerstin Denecke
- Institute for Medical Informatics, Bern University of Applied Sciences, Bern, Switzerland
25
Abd-Alrazaq A, Safi Z, Alajlani M, Warren J, Househ M, Denecke K. Technical Metrics Used to Evaluate Health Care Chatbots: Scoping Review (Preprint). [DOI: 10.2196/preprints.18301]
26
Abd-Alrazaq AA, Alajlani M, Ali N, Denecke K, Bewick BM, Househ M. Perceptions and Opinions of Patients About Mental Health Chatbots: Scoping Review (Preprint). [DOI: 10.2196/preprints.17828]
Abstract
BACKGROUND
Chatbots have been used in the last decade to improve access to mental health care services. Perceptions and opinions of patients influence the adoption of chatbots for health care. Many studies have been conducted to assess the perceptions and opinions of patients about mental health chatbots. To the best of our knowledge, there has been no review of the evidence surrounding perceptions and opinions of patients about mental health chatbots.
OBJECTIVE
This study aims to conduct a scoping review of the perceptions and opinions of patients about chatbots for mental health.
METHODS
The scoping review was carried out in line with the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) extension for scoping reviews guidelines. Studies were identified by searching 8 electronic databases (eg, MEDLINE and Embase) in addition to conducting backward and forward reference list checking of the included studies and relevant reviews. In total, 2 reviewers independently selected studies and extracted data from the included studies. Data were synthesized using thematic analysis.
RESULTS
Of 1072 citations retrieved, 37 unique studies were included in the review. The thematic analysis generated 10 themes from the findings of the studies: usefulness, ease of use, responsiveness, understandability, acceptability, attractiveness, trustworthiness, enjoyability, content, and comparisons.
CONCLUSIONS
The results demonstrated overall positive perceptions and opinions of patients about chatbots for mental health. Important issues to be addressed in the future are the linguistic capabilities of the chatbots: they have to deal adequately with unexpected user input, provide high-quality responses, and show high variability in responses. To be useful for clinical practice, we have to find ways to harmonize chatbot content with individual treatment recommendations; that is, personalization of chatbot conversations is required.
27
Amith M, Roberts K, Tao C. Conceiving an application ontology to model patient human papillomavirus vaccine counseling for dialogue management. BMC Bioinformatics 2019; 20:706. [PMID: 31865902] [PMCID: PMC6927108] [DOI: 10.1186/s12859-019-3193-7]
Abstract
Background In the United States and parts of the world, the human papillomavirus vaccine uptake is below the prescribed coverage rate for the population. Some research has noted that dialogue that communicates the risks and benefits, as well as patient concerns, can improve uptake levels. In this paper, we introduce an application ontology for health information dialogue called Patient Health Information Dialogue Ontology for patient-level human papillomavirus vaccine counseling and potentially for any health-related counseling. Results The ontology’s class-level hierarchy is segmented into 4 basic levels - Discussion, Goal, Utterance, and Speech Task. The ontology also defines core low-level utterance interaction for communicating human papillomavirus health information. We discuss the design of the ontology and the execution of the utterance interaction. Conclusion With an ontology that represents patient-centric dialogue to communicate health information, we have an application-driven model that formalizes the structure for the communication of health information, and a reusable scaffold that can be integrated for software agents. Our next step will be to develop the software engine that will utilize the ontology and automate the dialogue interaction of a software agent.
Affiliation(s)
- Muhammad Amith
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, 7000 Fannin Road, Suite 600, Houston, TX, 77030, USA
- Kirk Roberts
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, 7000 Fannin Road, Suite 600, Houston, TX, 77030, USA
- Cui Tao
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, 7000 Fannin Road, Suite 600, Houston, TX, 77030, USA
28
Reconstructing Personal Stories in Virtual Reality as a Mechanism to Recover the Self. Int J Environ Res Public Health 2019; 17:ijerph17010026. [PMID: 31861440] [PMCID: PMC6981862] [DOI: 10.3390/ijerph17010026]
Abstract
Advances in virtual reality present opportunities to relive experiences in an immersive medium that can change the way we perceive our life stories, potentially shaping our realities for the better. This paper studies the role of virtual reality as a tool for the creation of stories with the concept of the self as a narrator and the life of the self as a storyline. The basis of the study is the philosophical notion of the self-narrative as an explanatory story of the events in one’s life that constitutes the notion of one’s self. This application is suitable for cases when individuals need to recreate their self, such as during recovery after traumatic events. The analysis of the effects of virtual reality shows that it enables a person to engage in a process of deeper self-observation to understand and explain adverse events and to give meaning to these events to form a new story, which can complement the therapeutic outcomes of exposure treatments. This study proposes concrete examples of immersive scenarios used to reconstruct personal stories. Several possible levels of experience are proposed to suggest that recovery can be achieved through the gradual retelling of the self-narrative, addressing all of the underlying narratives. Considering the ethical challenges that might arise, this paper explores the ways in which immersion in virtual reality can benefit a person’s view toward life as a story and his or her self as its author, comparing this idea with previous research on the application of virtual reality for trauma treatment. The analysis also emphasizes the perception of narrative authorship in virtual reality as an essential method for recovering the self-narrative and improving a patient’s mental health during self-actualization.
29
Tuerk PW, Schaeffer CM, McGuire JF, Adams Larsen M, Capobianco N, Piacentini J. Adapting Evidence-Based Treatments for Digital Technologies: a Critical Review of Functions, Tools, and the Use of Branded Solutions. Curr Psychiatry Rep 2019; 21:106. [PMID: 31584124] [DOI: 10.1007/s11920-019-1092-2]
Abstract
PURPOSE OF REVIEW We provide a critical review of digital technologies in evidence-based treatments (EBTs) for mental health with a focus on the functions technologies are intended to serve. The review highlights issues related to clarity of purpose, usability, and assumptions related to EBT technology integration, branding, and packaging. RECENT FINDINGS Developers continue to use technology in creative ways, often combining multiple functions to convey existing EBTs or to create new technology-enabled EBTs. Developers have a strong preference for creating and investigating whole-source, branded solutions related to specific EBTs, in comparison to developing or investigating technology tools related to specific components of behavior change, or developing specific clinical protocols that can be delivered via existing technologies. Default assumptions that new applications are required for each individual EBT, that EBTs are best served by the use of only one technology solution rather than multiple tools, and that an EBT-specific technology product should include or convey all portions of an EBT slow scientific progress and increase risk of usability issues that negatively impact uptake. We contend that a purposeful, functions-based approach should guide the selection, development, and application of technology in support of EBT delivery.
Affiliation(s)
- Peter W Tuerk
- Sheila C. Johnson Center for Clinical Services, University of Virginia, Charlottesville, VA, USA
- Department of Human Services, University of Virginia, 417 Emmet St. South, Charlottesville, VA, 22904, USA
- Cindy M Schaeffer
- Division of Child and Adolescent Psychiatry, University of Maryland-Baltimore, Baltimore, MD, USA
- Joseph F McGuire
- Division of Child and Adolescent Psychiatry, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- UCLA Semel Institute for Neuroscience and Human Behavior, Los Angeles, CA, USA
- Nicole Capobianco
- Department of Human Services, University of Virginia, Charlottesville, VA, USA
- John Piacentini
- UCLA Semel Institute for Neuroscience and Human Behavior, Los Angeles, CA, USA
30
Fonseka TM, Bhat V, Kennedy SH. The utility of artificial intelligence in suicide risk prediction and the management of suicidal behaviors. Aust N Z J Psychiatry 2019; 53:954-964. [PMID: 31347389] [DOI: 10.1177/0004867419864428]
Abstract
OBJECTIVE Suicide is a growing public health concern, accounting for approximately 800,000 deaths per year worldwide. The current process of evaluating suicide risk is highly subjective, which can limit the efficacy and accuracy of prediction efforts. Consequently, suicide detection strategies are shifting toward artificial intelligence platforms that can identify patterns within 'big data' to generate risk algorithms that can determine the effects of risk (and protective) factors on suicide outcomes, predict suicide outbreaks and identify at-risk individuals or populations. In this review, we summarize the role of artificial intelligence in optimizing suicide risk prediction and behavior management. METHODS This paper provides a general review of the literature. A literature search was conducted in OVID Medline, EMBASE and PsycINFO databases with coverage from January 1990 to June 2019. Results were restricted to peer-reviewed, English-language articles. Conference and dissertation proceedings, case reports, protocol papers and opinion pieces were excluded. Reference lists were also examined for additional articles of relevance. RESULTS At the individual level, prediction analytics help to identify individuals in crisis to intervene with emotional support, crisis and psychoeducational resources, and alerts for emergency assistance. At the population level, algorithms can identify at-risk groups or suicide hotspots, which help inform resource mobilization, policy reform and advocacy efforts. Artificial intelligence has also been used to support the clinical management of suicide across diagnostics and evaluation, medication management and behavioral therapy delivery. There could be several advantages to incorporating artificial intelligence into suicide care, which include a time- and resource-effective alternative to clinician-based strategies, adaptability to various settings and demographics, and suitability for use in remote locations with limited access to mental healthcare supports. CONCLUSION Based on the observed benefits to date, artificial intelligence has a demonstrated utility within suicide prediction and clinical management efforts and will continue to advance mental healthcare forward.
Affiliation(s)
- Trehani M Fonseka
- Centre for Mental Health and Krembil Research Centre, University Health Network, Toronto, ON, Canada; Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, ON, Canada; School of Social Work, King's University College, Western University, London, ON, Canada
- Venkat Bhat
- Centre for Mental Health and Krembil Research Centre, University Health Network, Toronto, ON, Canada; Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, ON, Canada; Department of Psychiatry, University of Toronto, Toronto, ON, Canada
- Sidney H Kennedy
- Centre for Mental Health and Krembil Research Centre, University Health Network, Toronto, ON, Canada; Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, ON, Canada; Department of Psychiatry, University of Toronto, Toronto, ON, Canada; Keenan Research Centre for Biomedical Science, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, ON, Canada
31
Vaidyam AN, Wisniewski H, Halamka JD, Kashavan MS, Torous JB. Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape. Can J Psychiatry 2019; 64:456-464. [PMID: 30897957] [PMCID: PMC6610568] [DOI: 10.1177/0706743719828977]
Abstract
OBJECTIVE The aim of this review was to explore the current evidence for conversational agents or chatbots in the field of psychiatry and their role in screening, diagnosis, and treatment of mental illnesses. METHODS A systematic literature search in June 2018 was conducted in PubMed, EmBase, PsycINFO, Cochrane, Web of Science, and IEEE Xplore. Studies were included that involved a chatbot in a mental health setting focusing on populations with or at high risk of developing depression, anxiety, schizophrenia, bipolar, and substance abuse disorders. RESULTS From the selected databases, 1466 records were retrieved and 8 studies met the inclusion criteria. Two additional studies were included from reference list screening for a total of 10 included studies. Overall, potential for conversational agents in psychiatric use was reported to be high across all studies. In particular, conversational agents showed potential for benefit in psychoeducation and self-adherence. In addition, satisfaction rating of chatbots was high across all studies, suggesting that they would be an effective and enjoyable tool in psychiatric treatment. CONCLUSION Preliminary evidence for psychiatric use of chatbots is favourable. However, given the heterogeneity of the reviewed studies, further research with standardized outcomes reporting is required to more thoroughly examine the effectiveness of conversational agents. Regardless, early evidence shows that with the proper approach and research, the mental health field could use conversational agents in psychiatric treatment.
Affiliation(s)
- Hannah Wisniewski
- Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- John David Halamka
- Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Matcheri S Kashavan
- Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- John Blake Torous
- Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
32
Tielman ML, Neerincx MA, Brinkman WP. Design and Evaluation of Personalized Motivational Messages by a Virtual Agent that Assists in Post-Traumatic Stress Disorder Therapy. J Med Internet Res 2019; 21:e9240. [PMID: 30916660] [PMCID: PMC6456821] [DOI: 10.2196/jmir.9240]
Abstract
Background Systems incorporating virtual agents can play a major role in electronic-mental (e-mental) health care, as barriers to care still prevent some patients from receiving the help they need. To properly assist the users of these systems, a virtual agent needs to promote motivation. This can be done by offering motivational messages. Objective The objective of this study was two-fold. The first was to build a motivational message system for a virtual agent assisting in post-traumatic stress disorder (PTSD) therapy based on domain knowledge from experts. The second was to test the hypotheses that (1) computer-generated motivating messages influence users’ motivation to continue with therapy, trust in a good therapy outcome, and the feeling of being heard by the agent and (2) personalized messages outperform generic messages on these factors. Methods A system capable of generating motivational messages was built by analyzing expert (N=13) knowledge on what types of motivational statements to use in what situation. To test the 2 hypotheses, a Web-based study was performed (N=207). Participants were asked to imagine they were in a certain situation, specified by the progression of their symptoms and initial trust in a good therapy outcome. After this, they received a message from a virtual agent containing either personalized motivation as generated by the system, general motivation, or no motivational content. They were asked how this message changed their motivation to continue and trust in a good outcome as well as how much they felt they were being heard by the agent. Results Overall, findings confirmed the first hypothesis, as well as the second hypothesis for the measure feeling of being heard by the agent. Personalization of the messages was also shown to be important in those situations where the symptoms were getting worse. In these situations, personalized messages outperformed general messages both in terms of motivation to continue and trust in a good therapy outcome. Conclusions Expert input can successfully be used to develop a personalized motivational message system. Messages generated by such a system seem to improve people’s motivation and trust in PTSD therapy as well as the user’s feeling of being heard by a virtual agent. Given the importance of motivation, trust, and therapeutic alliance for successful therapy, we anticipate that the proposed system can improve adherence in e-mental therapy for PTSD and that it can provide a blueprint for the development of an adaptive system for persuasive messages based on expert input.
Affiliation(s)
- Mark A Neerincx
- Delft University of Technology, Delft, Netherlands; Netherlands Organisation for Applied Scientific Research (TNO), Soesterberg, Netherlands
33
Tielman ML, Neerincx MA, Pagliari C, Rizzo A, Brinkman WP. Considering patient safety in autonomous e-mental health systems - detecting risk situations and referring patients back to human care. BMC Med Inform Decis Mak 2019; 19:47. [PMID: 30885190] [PMCID: PMC6421702] [DOI: 10.1186/s12911-019-0796-x]
Abstract
BACKGROUND Digital health interventions can fill gaps in mental healthcare provision. However, autonomous e-mental health (AEMH) systems also present challenges for effective risk management. To balance autonomy and safety, AEMH systems need to detect risk situations and act on these appropriately. One option is sending automatic alerts to carers, but such 'auto-referral' could lead to missed cases or false alerts. Requiring users to actively self-refer offers an alternative, but this can also be risky as it relies on their motivation to do so. This study set out with two objectives. Firstly, to develop guidelines for risk detection and auto-referral systems. Secondly, to understand how persuasive techniques, mediated by a virtual agent, can facilitate self-referral. METHODS In a formative phase, interviews with experts, alongside a literature review, were used to develop a risk detection protocol. Two referral protocols were developed - one involving auto-referral, the other motivating users to self-refer. The latter was tested via crowd-sourcing (n = 160). Participants were asked to imagine they had sleeping problems with differing severity and user stance on seeking help. They then chatted with a virtual agent, who either directly facilitated referral, tried to persuade the user, or accepted that they did not want help. After the conversation, participants rated their intention to self-refer, to chat with the agent again, and their feeling of being heard by the agent. RESULTS Whether the virtual agent facilitated, persuaded or accepted influenced all of these measures. Users who were initially negative or doubtful about self-referral could be persuaded. For users who were initially positive about seeking human care, this persuasion did not affect their intentions, indicating that simply facilitating referral without persuasion was sufficient.
CONCLUSION This paper presents a protocol that elucidates the steps and decisions involved in risk detection, something that is relevant for all types of AEMH systems. In the case of self-referral, our study shows that a virtual agent can increase users' intention to self-refer. Moreover, the strategy of the agent influenced the intentions of the user afterwards. This highlights the importance of a personalised approach to promote the user's access to appropriate care.
Affiliation(s)
- Myrthe L. Tielman
- Department of Interactive Intelligence, Delft University of Technology, van Mourik Broekmanweg 6, 2628 XE Delft, The Netherlands
- Mark A. Neerincx
- Department of Interactive Intelligence, Delft University of Technology, van Mourik Broekmanweg 6, 2628 XE Delft, The Netherlands
- TNO Perceptual and Cognitive Systems, Soesterberg, The Netherlands
- Albert Rizzo
- USC Institute of Creative Technologies, Playa Vista, California, USA
- Willem-Paul Brinkman
- Department of Interactive Intelligence, Delft University of Technology, van Mourik Broekmanweg 6, 2628 XE Delft, The Netherlands
34
van Bennekom MJ, de Koning PP. Reducing the stigma on posttraumatic stress disorder in militaries through virtual reality. Mhealth 2018; 4:5. [PMID: 29683127] [PMCID: PMC5897704] [DOI: 10.21037/mhealth.2018.03.01]
Affiliation(s)
- Martine J van Bennekom
- Department of Psychiatry, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
- Pelle P de Koning
- Department of Psychiatry, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands
35
Benyoucef Y, Lesport P, Chassagneux A. The Emergent Role of Virtual Reality in the Treatment of Neuropsychiatric Disease. Front Neurosci 2017; 11:491. [PMID: 28928630] [PMCID: PMC5591848] [DOI: 10.3389/fnins.2017.00491]