1
Bouhouita-Guermech S, Haidar H. Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of "Responsibility" in Artificial Intelligence within the Healthcare Context. Asian Bioeth Rev 2024; 16:315-344. [PMID: 39022380] [PMCID: PMC11250714] [DOI: 10.1007/s41649-024-00292-7]
Abstract
The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges have prompted numerous studies proposing frameworks and guidelines to tackle them, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, defines, and discusses the concept of responsibility. We conducted a scoping review of literature related to AI responsibility in healthcare, searching databases and reference lists for literature published between January 2017 and January 2022 using terms related to "responsibility" and "AI in healthcare" and their derivatives. Following screening, 136 articles were included. Data were grouped into four thematic categories: (1) the variety of terminology used to describe and address responsibility; (2) principles and concepts associated with responsibility; (3) stakeholders' responsibilities in AI clinical development, use, and deployment; and (4) recommendations for addressing responsibility concerns. The results show the lack of a clear definition of AI responsibility in healthcare and highlight the importance of ensuring the responsible development and implementation of AI in healthcare. Further research is necessary to clarify this notion and to contribute to developing frameworks regarding the types of responsibility (ethical/moral/professional, legal, and causal) of the various stakeholders involved in the AI lifecycle.
Affiliation(s)
- Hazar Haidar
- Ethics Programs, Department of Letters and Humanities, University of Quebec at Rimouski, Rimouski, Québec, Canada
2
Taher R, Bhanushali P, Allan S, Alvarez-Jimenez M, Bolton H, Dennison L, Wallace BE, Hadjistavropoulos HD, Hall CL, Hardy A, Henry AL, Lane S, Maguire T, Moreton A, Moukhtarian TR, Vallejos EP, Shergill S, Stahl D, Thew GR, Timulak L, van den Berg D, Viganò N, Stock BW, Young KS, Yiend J. Bridging the gap from medical to psychological safety assessment: consensus study in a digital mental health context. BJPsych Open 2024; 10:e126. [PMID: 38828683] [DOI: 10.1192/bjo.2024.713]
Abstract
BACKGROUND Digital mental health interventions (DMHIs) that meet the definition of a medical device are regulated by the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK. The MHRA uses procedures that were originally developed for pharmaceuticals to assess the safety of DMHIs. There is recognition that this may not be ideal, as evidenced by an ongoing consultation for reform led by the MHRA and the National Institute for Health and Care Excellence. AIMS The aim of this study was to generate expert consensus on how the medical regulatory method used for assessing safety could best be adapted for DMHIs. METHOD An online Delphi study containing three rounds was conducted with an international panel of 20 experts with experience and knowledge of the field of UK digital mental health. RESULTS Sixty-four items were generated, of which 41 achieved consensus (64%). Consensus emerged around ten recommendations, falling into five main themes: enhancing the quality of adverse events data in DMHIs; redefining serious adverse events for DMHIs; reassessing short-term symptom deterioration in psychological interventions as a therapeutic risk; maximising the benefit of the Yellow Card Scheme; and developing a harmonised approach for assessing the safety of psychological interventions in general. CONCLUSION The implementation of the recommendations provided by this consensus could improve the assessment of the safety of DMHIs, making it more effective in detecting and mitigating risk.
Affiliation(s)
- Rayan Taher
- Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, UK
- Palak Bhanushali
- Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, UK
- Stephanie Allan
- Institute of Health and Wellbeing, University of Glasgow, UK
- Mario Alvarez-Jimenez
- Centre for Youth Mental Health, University of Melbourne, Australia
- Orygen, Parkville, Australia
- Charlotte L Hall
- NIHR MindTech-MedTech Co-operative, NIHR Nottingham Biomedical Research Centre, School of Medicine, Institute of Mental Health, University of Nottingham, UK
- Amy Hardy
- Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, UK
- Sam Lane
- SilverCloud by Amwell, Boston, USA
- Tess Maguire
- School of Psychology, University of Southampton, UK
- Talar R Moukhtarian
- Mental Health and Wellbeing Unit, Warwick Medical School, University of Warwick, UK
- Elvira Perez Vallejos
- NIHR MindTech-MedTech Co-operative, NIHR Nottingham Biomedical Research Centre, School of Medicine, Institute of Mental Health, University of Nottingham, UK
- Sukhi Shergill
- Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, UK
- Kent and Medway Medical School, Canterbury, UK
- Daniel Stahl
- Department of Biostatistics and Health Informatics, Institute of Psychiatry, Psychology & Neuroscience, King's College London, UK
- Graham R Thew
- Department of Experimental Psychology, University of Oxford, UK
- Oxford Health NHS Foundation Trust, Oxford, UK
- David van den Berg
- Department of Clinical Psychology, VU University and Amsterdam Public Health Research, Amsterdam, Netherlands
- Ben Wensley Stock
- University of Oxford Medical Sciences Division, University of Oxford, UK
- Katherine S Young
- SilverCloud by Amwell, Boston, USA
- Social Genetic and Developmental Psychiatry Centre, Institute of Psychiatry, Psychology & Neuroscience, King's College London, UK
- Jenny Yiend
- Department of Psychosis Studies, Institute of Psychiatry, Psychology & Neuroscience, King's College London, UK
3
Li L, Peng W, Rheu MMJ. Factors Predicting Intentions of Adoption and Continued Use of Artificial Intelligence Chatbots for Mental Health: Examining the Role of UTAUT Model, Stigma, Privacy Concerns, and Artificial Intelligence Hesitancy. Telemed J E Health 2024; 30:722-730. [PMID: 37756224] [DOI: 10.1089/tmj.2023.0313]
Abstract
Background: Artificial intelligence-based chatbots (AI chatbots) can potentially improve mental health care, yet factors predicting their adoption and continued use are unclear. Methods: We conducted an online survey with a sample of U.S. adults with symptoms of depression and anxiety (N = 393) in 2021 before the release of ChatGPT. We explored factors predicting the adoption and continued use of AI chatbots, including factors of the unified theory of acceptance and use of technology model, stigma, privacy concerns, and AI hesitancy. Results: Results from the regression indicated that for nonusers, performance expectancy, price value, descriptive norm, and psychological distress are positively related to the intention of adopting AI chatbots, while AI hesitancy and effort expectancy are negatively associated with adopting AI chatbots. For those with experience in using AI chatbots for mental health, performance expectancy, price value, descriptive norm, and injunctive norm are positively related to the intention of continuing to use AI chatbots. Conclusions: Understanding the adoption and continued use of AI chatbots among adults with symptoms of depression and anxiety is essential given that there is a widening gap in the supply and demand of care. AI chatbots provide new opportunities for quality care by supporting accessible, affordable, efficient, and personalized care. This study provides insights for developing and deploying AI chatbots such as ChatGPT in the context of mental health care. Findings could be used to design innovative interventions that encourage the adoption and continued use of AI chatbots among people with symptoms of depression and anxiety and who have difficulty accessing care.
Affiliation(s)
- Lin Li
- Department of Informatics, University of California Irvine, Irvine, California, USA
- Wei Peng
- Department of Media and Information, Michigan State University, East Lansing, Michigan, USA
- Minjin M J Rheu
- School of Communication, Loyola University Chicago, Chicago, Illinois, USA
4
Hurley ME, Sonig A, Herrington J, Storch EA, Lázaro-Muñoz G, Blumenthal-Barby J, Kostick-Quenet K. Ethical considerations for integrating multimodal computer perception and neurotechnology. Front Hum Neurosci 2024; 18:1332451. [PMID: 38435745] [PMCID: PMC10904467] [DOI: 10.3389/fnhum.2024.1332451]
Abstract
Background Artificial intelligence (AI)-based computer perception technologies (e.g., digital phenotyping and affective computing) promise to transform clinical approaches to personalized care in psychiatry and beyond by offering more objective measures of emotional states and behavior, enabling precision treatment, diagnosis, and symptom monitoring. At the same time, the passive and continuous nature by which they often collect data from patients in non-clinical settings raises ethical issues related to privacy and self-determination. Little is known about how such concerns may be exacerbated by the integration of neural data, as parallel advances in computer perception, AI, and neurotechnology enable new insights into subjective states. Here, we present findings from a multi-site NCATS-funded study of ethical considerations for translating computer perception into clinical care and contextualize them within the neuroethics and neurorights literatures. Methods We conducted qualitative interviews with patients (n = 20), caregivers (n = 20), clinicians (n = 12), developers (n = 12), and clinician-developers (n = 2) regarding their perspectives on using computer perception in clinical care. Transcripts were analyzed in MAXQDA using thematic content analysis. Results Stakeholder groups voiced concerns related to (1) the perceived invasiveness of passive and continuous data collection in private settings; (2) data protection and security, and the potential for negative downstream impacts on patients from unintended disclosure; and (3) ethical issues related to patients' limited versus hyper-awareness of passive and continuous data collection and monitoring. Clinicians and developers highlighted that these concerns may be exacerbated by the integration of neural data with other computer perception data.
Discussion Our findings suggest that the integration of neurotechnologies with existing computer perception technologies raises novel concerns around dignity-related and other harms (e.g., stigma, discrimination) that stem from data security threats and the growing potential for reidentification of sensitive data. Further, our findings suggest that patients' awareness and preoccupation with feeling monitored via computer sensors ranges from hypo- to hyper-awareness, with either extreme accompanied by ethical concerns (consent vs. anxiety and preoccupation). These results highlight the need for systematic research into how best to implement these technologies into clinical care in ways that reduce disruption, maximize patient benefits, and mitigate long-term risks associated with the passive collection of sensitive emotional, behavioral and neural data.
Affiliation(s)
- Meghan E. Hurley
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Anika Sonig
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- John Herrington
- Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children's Hospital of Philadelphia, Philadelphia, PA, United States
- Eric A. Storch
- Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, United States
- Gabriel Lázaro-Muñoz
- Center for Bioethics, Harvard Medical School, Boston, MA, United States
- Department of Psychiatry and Behavioral Sciences, Massachusetts General Hospital, Boston, MA, United States
- Kristin Kostick-Quenet
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
5
Nagappan A, Kalokairinou L, Wexler A. Ethical issues in direct-to-consumer healthcare: A scoping review. PLOS Digit Health 2024; 3:e0000452. [PMID: 38349902] [PMCID: PMC10863864] [DOI: 10.1371/journal.pdig.0000452]
Abstract
An increasing number of health products and services are being offered on a direct-to-consumer (DTC) basis. To date, however, scholarship on DTC healthcare products and services has largely proceeded in a domain-specific fashion, with discussions of relevant ethical challenges occurring within specific medical specialties. The present study therefore aimed to provide a scoping review of ethical issues raised in the academic literature across types of DTC healthcare products and services. A systematic search for relevant publications between 2011-2021 was conducted on PubMed and Google Scholar using iteratively developed search terms. The final sample included 86 publications that discussed ethical issues related to DTC healthcare products and services. All publications were coded for ethical issues mentioned, primary DTC product or service discussed, type of study, year of publication, and geographical context. We found that the types of DTC healthcare products and services mentioned in our sample spanned six categories: neurotechnology (34%), testing (20%), in-person services (17%), digital health tools (14%), telemedicine (13%), and physical interventions (2%). Ethical arguments in favor of DTC healthcare included improved access (e.g., financial, geographical; 31%), increased autonomy (29%), and enhanced convenience (16%). Commonly raised ethical concerns included insufficient regulation (72%), questionable efficacy and quality (70%), safety and physical harms (66%), misleading advertising claims (56%), and privacy (34%). Other frequently occurring ethical concerns pertained to financial costs, targeting vulnerable groups, informed consent, and potential burdens on healthcare providers, the healthcare system, and society. Our findings offer insights into the cross-cutting ethical issues associated with DTC healthcare and underscore the need for increased interdisciplinary communication to address the challenges they raise.
Affiliation(s)
- Ashwini Nagappan
- Department of Health Policy and Management, University of California, Los Angeles, Los Angeles, California, United States of America
- Department of Medical Ethics and Health Policy, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Louiza Kalokairinou
- Department of Medical Ethics and Health Policy, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, Texas, United States of America
- Anna Wexler
- Department of Medical Ethics and Health Policy, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
6
Park HJ. Patient perspectives on informed consent for medical AI: A web-based experiment. Digit Health 2024; 10:20552076241247938. [PMID: 38698829] [PMCID: PMC11064747] [DOI: 10.1177/20552076241247938]
Abstract
Objective Despite the increasing use of AI applications as clinical decision support tools in healthcare, patients are often unaware of their use in the physician's decision-making process. This study aims to determine whether doctors should disclose the use of AI tools in diagnosis and what kind of information should be provided. Methods A survey experiment with 1000 respondents in South Korea was conducted to estimate the importance patients attach to information about the use of an AI tool in diagnosis when deciding whether to receive the treatment. Results The study found that the use of an AI tool increases the perceived importance of information related to its use, compared with when a physician consults a human radiologist. Participants perceived information about the AI tool, when AI was used, as more important than or similar in importance to the information about short-term effects that is routinely disclosed when AI is not used. Further analysis revealed that gender, age, and income have a statistically significant effect on the perceived importance of every piece of AI-related information. Conclusions This study supports the disclosure of AI use in diagnosis during the informed consent process. However, the disclosure should be tailored to the individual patient's needs, as patient preferences for information regarding AI use vary across gender, age, and income levels. It is recommended that ethical guidelines be developed for informed consent when using AI in diagnosis that go beyond mere legal requirements.
Affiliation(s)
- Hai Jin Park
- Center for AI and Law, Hanyang University Law School, Seoul, South Korea
7
Wutz M, Hermes M, Winter V, Köberlein-Neu J. Factors Influencing the Acceptability, Acceptance, and Adoption of Conversational Agents in Health Care: Integrative Review. J Med Internet Res 2023; 25:e46548. [PMID: 37751279] [PMCID: PMC10565637] [DOI: 10.2196/46548]
Abstract
BACKGROUND Conversational agents (CAs), also known as chatbots, are digital dialog systems that enable people to have a text-based, speech-based, or nonverbal conversation with a computer or another machine based on natural language via an interface. The use of CAs offers new opportunities and various benefits for health care. However, they are not yet ubiquitous in daily practice, even though research on their implementation in health care has grown tremendously in recent years. OBJECTIVE This review aims to present a synthesis of the factors that facilitate or hinder the implementation of CAs from the perspectives of patients and health care professionals. Specifically, it focuses on the early implementation outcomes of acceptability, acceptance, and adoption as cornerstones of later implementation success. METHODS We performed an integrative review. To identify relevant literature, a broad literature search was conducted in June 2021 with no date limits, using all fields in PubMed, Cochrane Library, Web of Science, LIVIVO, and PsycINFO. To keep the review current, another search was conducted in March 2022. To identify as many eligible primary sources as possible, we used a snowballing approach, searching reference lists and conducting a hand search. Factors influencing the acceptability, acceptance, and adoption of CAs in health care were coded through parallel deductive and inductive approaches, informed by current technology acceptance and adoption models. Finally, the factors were synthesized in a thematic map. RESULTS Overall, 76 studies were included in this review. We identified influencing factors related to 4 core Unified Theory of Acceptance and Use of Technology (UTAUT) and Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) factors (performance expectancy, effort expectancy, facilitating conditions, and hedonic motivation), with most studies underlining the relevance of performance and effort expectancy.
To meet the particularities of the health care context, we redefined the UTAUT2 factors social influence, habit, and price value. We identified 6 other influencing factors: perceived risk, trust, anthropomorphism, health issue, working alliance, and user characteristics. Overall, we identified 10 factors influencing acceptability, acceptance, and adoption among health care professionals (performance expectancy, effort expectancy, facilitating conditions, social influence, price value, perceived risk, trust, anthropomorphism, working alliance, and user characteristics) and 13 factors influencing acceptability, acceptance, and adoption among patients (additionally hedonic motivation, habit, and health issue). CONCLUSIONS This review shows manifold factors influencing the acceptability, acceptance, and adoption of CAs in health care. Knowledge of these factors is fundamental for implementation planning. Therefore, the findings of this review can serve as a basis for future studies to develop appropriate implementation strategies. Furthermore, this review provides an empirical test of current technology acceptance and adoption models and identifies areas where additional research is necessary. TRIAL REGISTRATION PROSPERO CRD42022343690; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=343690.
Affiliation(s)
- Maximilian Wutz
- Center for Health Economics and Health Services Research, Schumpeter School of Business and Economics, University of Wuppertal, Wuppertal, Germany
- Marius Hermes
- Center for Health Economics and Health Services Research, Schumpeter School of Business and Economics, University of Wuppertal, Wuppertal, Germany
- Vera Winter
- Center for Health Economics and Health Services Research, Schumpeter School of Business and Economics, University of Wuppertal, Wuppertal, Germany
- Juliane Köberlein-Neu
- Center for Health Economics and Health Services Research, Schumpeter School of Business and Economics, University of Wuppertal, Wuppertal, Germany
8
Jo E, Kouaho WJ, Schueller SM, Epstein DA. Exploring User Perspectives of and Ethical Experiences With Teletherapy Apps: Qualitative Analysis of User Reviews. JMIR Ment Health 2023; 10:e49684. [PMID: 37738085] [PMCID: PMC10559192] [DOI: 10.2196/49684]
Abstract
BACKGROUND Teletherapy apps have emerged as a promising alternative to traditional in-person therapy, especially after the COVID-19 pandemic, as they help overcome a range of geographical and emotional barriers to accessing care. However, the rapid proliferation of teletherapy apps has occurred in an environment in which development has outpaced the various regulatory and ethical considerations of this space. Thus, researchers have raised concerns about the ethical implications and potential risks of teletherapy apps given the lack of regulation and oversight. Teletherapy apps have distinct aims to more directly replicate practices of traditional care, as opposed to mental health apps, which primarily provide supplemental support, suggesting a need to examine the ethical considerations of teletherapy apps from the lens of existing ethical guidelines for providing therapy. OBJECTIVE In this study, we examined user reviews of commercial teletherapy apps to understand user perceptions of whether and how ethical principles are followed and incorporated. METHODS We identified 8 mobile apps that (1) provided teletherapy on 2 dominant mobile app stores (Google Play and Apple App Store) and (2) had received >5000 app reviews on both app stores. We wrote Python scripts (Python Software Foundation) to scrape user reviews from the 8 apps, collecting 3268 user reviews combined across 2 app stores. We used thematic analysis to qualitatively analyze user reviews, developing a codebook drawing from the ethical codes of conduct for psychologists, psychiatrists, and social workers. RESULTS The qualitative analysis of user reviews revealed the ethical concerns and opportunities of teletherapy app users. Users frequently perceived unprofessionalism in their teletherapists, mentioning that their therapists did not listen to them, were distracted during therapy sessions, and did not keep their appointments. 
Users also noted technical glitches and therapist unavailability on teletherapy apps that might affect their ability to provide continuity of care. Users held varied opinions on the affordability of those apps, with some perceiving them as affordable and others not. Users further brought up that the subscription model resulted in unfair pricing and expressed concerns about the lack of cost transparency. Users perceived that these apps could help promote access to care by overcoming geographical and social constraints. CONCLUSIONS Our study suggests that users perceive commercial teletherapy apps as adhering to many ethical principles pertaining to therapy but falling short in key areas regarding professionalism, continuity of care, cost fairness, and cost transparency. Our findings suggest that, to provide high-quality care, teletherapy apps should prioritize fair compensation for therapists, develop more flexible and transparent payment models, and invest in measures to ensure app stability and therapist availability. Future work is needed to develop standards for teletherapy and improve the quality and accessibility of those services.
Affiliation(s)
- Eunkyung Jo
- Department of Informatics, University of California, Irvine, CA, United States
- Stephen M Schueller
- Department of Informatics, University of California, Irvine, CA, United States
- Department of Psychological Science, University of California, Irvine, CA, United States
- Daniel A Epstein
- Department of Informatics, University of California, Irvine, CA, United States
9
Hunt X, Jivan DC, Naslund JA, Breet E, Bantjes J. South African university students' experiences of online group cognitive behavioural therapy: Implications for delivering digital mental health interventions to young people. Glob Ment Health (Camb) 2023; 10:e45. [PMID: 37854416] [PMCID: PMC10579664] [DOI: 10.1017/gmh.2023.39]
Abstract
Mental disorders are common among university students. In the face of a large treatment gap, resource constraints, and low uptake of traditional in-person psychotherapy services by students, there has been interest in the role that digital mental health solutions could play in meeting students' mental health needs. This study is a cross-sectional, qualitative inquiry into university students' experiences of an online group cognitive behavioural therapy (GCBT) intervention. A total of 125 respondents who had participated in an online GCBT intervention completed a qualitative questionnaire, and 12 participated in in-depth interviews. The findings provide insights into how the context in which the intervention took place, students' needs for and expectations of the intervention, and the online format affected their engagement and their perceptions of its utility. The findings also suggest that, while online GCBT can capitalise on some of the strengths of both digital and in-person approaches to mental health programming, it also suffers from some of the weaknesses of each.
Affiliation(s)
- Xanthe Hunt
- Institute for Life Course Health Research, Department of Global Health, Faculty of Medicine and Health Sciences, Stellenbosch University, Cape Town, South Africa
- Dionne C. Jivan
- Department of Psychology, Faculty of Arts and Social Sciences, Stellenbosch University, Stellenbosch, South Africa
- John A. Naslund
- Department of Global Health and Social Medicine, Harvard Medical School, Boston, MA, USA
- Elsie Breet
- Institute for Life Course Health Research, Department of Global Health, Faculty of Medicine and Health Sciences, Stellenbosch University, Cape Town, South Africa
- Jason Bantjes
- Institute for Life Course Health Research, Department of Global Health, Faculty of Medicine and Health Sciences, Stellenbosch University, Cape Town, South Africa
- Alcohol, Tobacco and Other Drugs Research Unit, South African Medical Research Council, Cape Town, South Africa
10
Kassinopoulos O, Vasiliou V, Karekla M. Overcoming challenges in adherence and engagement digital interventions: The development of the ALGEApp for chronic pain management. Internet Interv 2023; 32:100611. [PMID: 36910302] [PMCID: PMC9999164] [DOI: 10.1016/j.invent.2023.100611]
Abstract
Despite the growing body of evidence for the effectiveness of clinic-based interventions in increasing daily functioning in individuals with chronic pain, many sufferers still remain untreated or inadequately treated. Digital psychological interventions have been proposed as a means to overcome many of the barriers to face-to-face treatment (e.g., access, mobility, transportation problems), with the aim of improving health care for persons with chronic conditions in the convenience of their own space and time (home care). The main challenge of digital interventions, however, is low user engagement and adherence. Focusing on user engagement during the design phase of digital intervention development can increase adherence, effectiveness, and acceptability. The purpose of this paper is to illustrate how we leveraged a recently proposed four-dimensional framework, together with evidence-based best practices and recommendations, to develop a new digital intervention for chronic pain management, called the ALGEApp. A detailed presentation is given of how the ALGEApp was designed and developed to adopt these recommendations and of how this can aid engagement with digital interventions.
Affiliation(s)
- Vasilis Vasiliou
- NHS South Wales Clinical Psychology (PsyD) Programme, School of Psychology, University of Cardiff & Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences (NDORMS), University of Oxford, United Kingdom of Great Britain and Northern Ireland
11
Deng D, Rogers T, Naslund JA. The Role of Moderators in Facilitating and Encouraging Peer-to-Peer Support in an Online Mental Health Community: A Qualitative Exploratory Study. J Technol Behav Sci 2023; 8:128-139. [PMID: 36810998 PMCID: PMC9933803 DOI: 10.1007/s41347-023-00302-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Revised: 12/04/2022] [Accepted: 01/17/2023] [Indexed: 02/18/2023]
Abstract
Online peer support platforms have gained popularity as a potential way for people struggling with mental health problems to share information and provide support to each other. While these platforms can offer an open space to discuss emotionally difficult issues, unsafe or unmoderated communities can allow potential harm to users by spreading triggering content, misinformation or hostile interactions. The purpose of this study was to explore the role of moderators in these online communities, and how moderators can facilitate peer-to-peer support, while minimizing harms to users and amplifying potential benefits. Moderators of the Togetherall peer support platform were recruited to participate in qualitative interviews. The moderators, referred to as 'Wall Guides', were asked about their day-to-day responsibilities, positive and negative experiences they have witnessed on the platform and the strategies they employ when encountering problems such as lack of engagement or posting of inappropriate content. The data were then analyzed qualitatively using thematic content analysis, and consensus codes were derived and reviewed to reach final results and representative themes. In total, 20 moderators participated in this study, and described their experiences and efforts to follow a consistent and shared protocol for responding to common scenarios in the online community. Many reported the deep connections formed by the online community, the helpful and thoughtful responses that members give each other and the satisfaction of seeing progress in members' recovery. They also reported occasional aggressive, sensitive or inconsiderate comments and posts on the platform. They respond by removing or revising the hurtful post or reaching out to the affected member to maintain the 'house rules'. Lastly, many discussed strategies they employ to promote engagement from members within the community and ensure each member is supported through their use of the platform. This study sheds light on the critical role of moderators of online peer support communities, and their ability to contribute to the potential benefits of digital peer support while minimizing risks to users. The findings reported here accentuate the importance of having well-trained moderators on online peer support platforms and can guide future efforts to effectively train and supervise prospective peer support moderators. Moderators can become an active 'shaping force' and bring a cohesive culture of expressed empathy, sensitivity and care. The delivery of a healthy and safe community contrasts starkly with non-moderated online forums, which can become unhealthy and unsafe as a result.
Affiliation(s)
- Davy Deng
- Harvard Chan School of Public Health, Boston, MA, USA
- John A. Naslund
- Department of Global Health and Social Medicine, Harvard Medical School, Boston, MA, USA
12
Sharma A, Lin IW, Miner AS, Atkins DC, Althoff T. Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support. Nat Mach Intell 2023. [DOI: 10.1038/s42256-022-00593-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
13
Gruebner O, van Haasteren A, Hug A, Elayan S, Sykora M, Albanese E, Stettner GM, Waldboth V, Messmer-Khosla S, Enzmann C, Baumann D, von Wyl V, Fadda M, Wolf M, von Rhein M. Mental health challenges and digital platform opportunities in patients and families affected by pediatric neuromuscular diseases - experiences from Switzerland. Digit Health 2023; 9:20552076231213700. [PMID: 38025108 PMCID: PMC10656806 DOI: 10.1177/20552076231213700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Accepted: 10/26/2023] [Indexed: 12/01/2023] Open
Abstract
Receiving the diagnosis of a severe disease may present a traumatic event for patients and their families. To cope with the related challenges, digital interventions can be combined with traditional psychological support to help meet respective needs. We aimed to 1) discuss the most common consequences and challenges for resilience in neuromuscular disease patients and family members and 2) elicit practical needs, concerns, and opportunities for digital platform use. We draw from the findings of a transdisciplinary workshop and conference with participants ranging from clinical practitioners to patient representatives. Reported consequences of the severe diseases were related to psychosocial challenges, living in the nexus between physical development and disease progression, social exclusion, care-related challenges, structural and financial challenges, and non-inclusive urban design. Practical needs and concerns regarding digital platform use included social and professional support through these platforms, credibility and trust in online information, and concerns about privacy and informed consent. Furthermore, the need for safe, reliable, and expert-guided information on digital platforms and psychosocial and relationship-based digital interventions was expressed. There is a need to focus on a family-centered approach in digital health and social care, and a further need to research the suitability of digital platforms to promote resilience in the affected population. Our results can also inform city councils regarding investments in inclusive urban design allowing disability-affected groups to enjoy a better quality of life.
Affiliation(s)
- Oliver Gruebner
- Department of Geography, University of Zurich, Zurich, Switzerland
- Department of Epidemiology, Epidemiology, Biostatistics, and Prevention Institute, University of Zurich, Zurich, Switzerland
- Afua van Haasteren
- Institute of Public Health, Università della Svizzera italiana, Lugano, Switzerland
- Anna Hug
- Department of Geography, University of Zurich, Zurich, Switzerland
- Suzanne Elayan
- Centre for Information Management, School of Business and Economics, Loughborough University, Loughborough, UK
- Martin Sykora
- Centre for Information Management, School of Business and Economics, Loughborough University, Loughborough, UK
- Emiliano Albanese
- Institute of Public Health, Università della Svizzera italiana, Lugano, Switzerland
- Georg M. Stettner
- Neuromuscular Center Zurich and Department of Pediatric Neurology, University of Zurich, University Children’s Hospital Zurich, Zurich, Switzerland
- Veronika Waldboth
- Institute of Nursing, School of Health Sciences, Zurich University of Applied Sciences, Winterthur, Switzerland
- Cornelia Enzmann
- Department of Neuropediatrics, Neuromuscular Center, University Children's Hospital Basel, Basel, Switzerland
- Dominique Baumann
- Swiss Registry for Neuromuscular Disorders (Swiss-Reg-NMD), Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland
- Viktor von Wyl
- Department of Epidemiology, Epidemiology, Biostatistics, and Prevention Institute, University of Zurich, Zurich, Switzerland
- Marta Fadda
- Institute of Public Health, Università della Svizzera italiana, Lugano, Switzerland
- Markus Wolf
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Michael von Rhein
- Child Development Center, University Children’s Hospital Zurich, University of Zurich, Zurich, Switzerland
14
Seow LSE, Chang S, Sambasivam R, Subramaniam M, Lu SH, Assudani H, Tan CYG, Vaingankar JA. Psychotherapists’ perspective of the use of eHealth services to enhance positive mental health promotion. Digit Health 2023. [DOI: 10.1177/20552076221147411] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023] Open
Abstract
Objective Keyes’s two-continua model of mental health proposes that mental illness and positive mental health are two separate, correlated, unipolar dimensions. eHealth services have been used to deliver mental health care, though the focus has remained largely on symptom reduction and management of negative aspects of mental health. The potential of eHealth services to promote positive mental well-being, however, has not been tapped sufficiently. The present study aims to explore psychotherapists’ perspective on the feasibility of eHealth services to enhance positive mental health promotion. Methods Seven focus group discussions were conducted among professionals (n = 38) who delivered psychotherapy to examine positive mental health in their practice. Responses related to the use of e-psychotherapy to promote mental well-being were extracted for use in a secondary analysis of data in this study. Thematic analysis of the data via an inductive approach was conducted to allow the emergence of common themes. Results Three main themes related to psychotherapists’ perspective on the feasibility of eHealth intervention in enhancing positive mental health were identified: (1) use of eHealth to educate and improve positive mental health; (2) concerns about incorporating psychotherapy into online services; (3) other factors that affect the uptake or effectiveness of eHealth intervention for positive mental health. Conclusions The study generally found support among psychotherapists for the feasibility of eHealth intervention in promoting positive mental health among clients. Potential difficulties in implementation and practicality concerns were discussed.
Affiliation(s)
- Sherilyn Chang
- Research Division, Institute of Mental Health, Singapore
- Hanita Assudani
- Department of Psychology, Institute of Mental Health, Singapore
15
Coghlan S, Leins K, Sheldrick S, Cheong M, Gooding P, D'Alfonso S. To chat or bot to chat: Ethical issues with using chatbots in mental health. Digit Health 2023; 9:20552076231183542. [PMID: 37377565 PMCID: PMC10291862 DOI: 10.1177/20552076231183542] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Accepted: 06/05/2023] [Indexed: 06/29/2023] Open
Abstract
This paper presents a critical review of key ethical issues raised by the emergence of mental health chatbots. Chatbots use varying degrees of artificial intelligence and are increasingly deployed in many different domains, including mental health. The technology may sometimes be beneficial, such as when it promotes access to mental health information and services. Yet, chatbots raise a variety of ethical concerns that are often magnified in people experiencing mental ill-health. These ethical challenges need to be appreciated and addressed throughout the technology pipeline. After identifying and examining four important ethical issues by means of a recognised ethical framework comprising five key principles, the paper offers recommendations to guide chatbot designers, purveyors, researchers and mental health practitioners in the ethical creation and deployment of chatbots for mental health.
Affiliation(s)
- Simon Coghlan
- School of Computing and Information Systems, The University of Melbourne
- Kobi Leins
- School of Computing and Information Systems, The University of Melbourne
- Department of War Studies, King's College London
- Susie Sheldrick
- School of Computing and Information Systems, The University of Melbourne
- Marc Cheong
- School of Computing and Information Systems, The University of Melbourne
- Simon D'Alfonso
- School of Computing and Information Systems, The University of Melbourne
16
Khati A, Wickersham JA, Rosen AO, Luces JRB, Copenhaver N, Jeri-Wahrhaftig A, Ab Halim MA, Azwa I, Gautam K, Ooi KH, Shrestha R. Ethical Issues in the Use of Smartphone Apps for HIV Prevention in Malaysia: Focus Group Study With Men Who Have Sex With Men. JMIR Form Res 2022; 6:e42939. [PMID: 36563046 PMCID: PMC9823573 DOI: 10.2196/42939] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2022] [Revised: 11/19/2022] [Accepted: 11/28/2022] [Indexed: 12/24/2022] Open
Abstract
BACKGROUND The use of smartphone apps can improve the HIV prevention cascade for key populations such as men who have sex with men (MSM). In Malaysia, where stigma and discrimination toward MSM are high, mobile health app-based strategies have the potential to open new frontiers for HIV prevention. However, little guidance is available to inform researchers about the ethical concerns that are unique to the development and implementation of app-based HIV prevention programs. OBJECTIVE This study aimed to fill this gap by characterizing the attitudes and concerns of Malaysian MSM regarding HIV prevention mobile apps, particularly regarding the ethical aspects surrounding their use. METHODS We conducted web-based focus group discussions with 23 MSM between August and September 2021. Using in-depth semistructured interviews, participants were asked about the risks and ethical issues they perceived to be associated with using mobile apps for HIV prevention. Each session was digitally recorded and transcribed. Transcripts were inductively coded using the Dedoose software (SocioCultural Research Consultants) and analyzed to identify and interpret emerging themes. RESULTS Although participants were highly willing to use app-based strategies for HIV prevention, they raised several ethical concerns related to their use. Prominent concerns raised by participants included privacy and confidentiality concerns, including fear of third-party access to personal health information (eg, friends or family and government agencies), issues around personal health data storage and management, equity and equitable access, informed consent, and regulation. CONCLUSIONS The study's findings highlight the role of ethical concerns related to the use of app-based HIV prevention programs. Given the ever-growing nature of such technological platforms that are intermixed with a complex ethical-legal landscape, mobile health platforms must be safe and secure to minimize unintended harm, safeguard user privacy and confidentiality, and obtain public trust and uptake.
Affiliation(s)
- Antoine Khati
- Department of Allied Health Sciences, University of Connecticut, Storrs, CT, United States
- Aviana O Rosen
- Department of Allied Health Sciences, University of Connecticut, Storrs, CT, United States
- Nicholas Copenhaver
- Department of Allied Health Sciences, University of Connecticut, Storrs, CT, United States
- Alma Jeri-Wahrhaftig
- Department of Allied Health Sciences, University of Connecticut, Storrs, CT, United States
- Mohd Akbar Ab Halim
- Centre of Excellence for Research in AIDS (CERiA), University of Malaya, Kuala Lumpur, Malaysia
- Iskandar Azwa
- Centre of Excellence for Research in AIDS (CERiA), University of Malaya, Kuala Lumpur, Malaysia
- Kamal Gautam
- Department of Allied Health Sciences, University of Connecticut, Storrs, CT, United States
- Kai Hong Ooi
- Centre of Excellence for Research in AIDS (CERiA), University of Malaya, Kuala Lumpur, Malaysia
- Roman Shrestha
- Department of Allied Health Sciences, University of Connecticut, Storrs, CT, United States
- AIDS Program, Yale School of Medicine, New Haven, CT, United States
17
Kerr JI, Naegelin M, Benk M, V Wangenheim F, Meins E, Viganò E, Ferrario A. Investigating Employees’ Concerns and Wishes for Digital Stress Management Interventions with Value Sensitive Design: Mixed Methods Study. J Med Internet Res 2022; 25:e44131. [PMID: 37052996 PMCID: PMC10141316 DOI: 10.2196/44131] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Revised: 02/21/2023] [Accepted: 03/12/2023] [Indexed: 03/14/2023] Open
Abstract
BACKGROUND Work stress places a heavy economic and disease burden on society. Recent technological advances include digital health interventions for helping employees prevent and manage their stress at work effectively. Although such digital solutions come with an array of ethical risks, especially if they involve biomedical big data, the incorporation of employees' values in their design and deployment has been widely overlooked. OBJECTIVE To bridge this gap, we used the value sensitive design (VSD) framework to identify relevant values concerning a digital stress management intervention (dSMI) at the workplace, assess how users comprehend these values, and derive specific requirements for an ethics-informed design of dSMIs. VSD is a theoretically grounded framework that front-loads ethics by accounting for values throughout the design process of a technology. METHODS We conducted a literature search to identify relevant values of dSMIs at the workplace. To understand how potential users comprehend these values and derive design requirements, we conducted a web-based study that contained closed and open questions with employees of a Swiss company, allowing both quantitative and qualitative analyses. RESULTS The values health and well-being, privacy, autonomy, accountability, and identity were identified through our literature search. Statistical analysis of 170 responses from the web-based study revealed that the intention to use and perceived usefulness of a dSMI were moderate to high. Employees' moderate to high health and well-being concerns included worries that a dSMI would not be effective or would even amplify their stress levels. Privacy concerns were also rated on the higher end of the score range, whereas concerns regarding autonomy, accountability, and identity were rated lower. Moreover, a personalized dSMI with a monitoring system involving a machine learning-based analysis of data led to significantly higher privacy (P=.009) and accountability concerns (P=.04) than a dSMI without a monitoring system. In addition, integrability, user-friendliness, and digital independence emerged as novel values from the qualitative analysis of 85 text responses. CONCLUSIONS Although most surveyed employees were willing to use a dSMI at the workplace, there were considerable health and well-being concerns with regard to effectiveness and problem perpetuation. For a minority of employees who value digital independence, a nondigital offer might be more suitable. In terms of the type of dSMI, privacy and accountability concerns must be particularly well addressed if a machine learning-based monitoring component is included. To help mitigate these concerns, we propose specific requirements to support the VSD of a dSMI at the workplace. The results of this work and our research protocol will inform future research on VSD-based interventions and further advance the integration of ethics in digital health.
Affiliation(s)
- Jasmine I Kerr
- Mobiliar Lab for Analytics at ETH Zurich, Department of Management, Technology, and Economics, ETH Zurich, Zürich, Switzerland
- Chair of Technology Marketing, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Mara Naegelin
- Mobiliar Lab for Analytics at ETH Zurich, Department of Management, Technology, and Economics, ETH Zurich, Zürich, Switzerland
- Chair of Technology Marketing, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Michaela Benk
- Mobiliar Lab for Analytics at ETH Zurich, Department of Management, Technology, and Economics, ETH Zurich, Zürich, Switzerland
- Chair of Technology Marketing, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Florian V Wangenheim
- Chair of Technology Marketing, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Erika Meins
- Mobiliar Lab for Analytics at ETH Zurich, Department of Management, Technology, and Economics, ETH Zurich, Zürich, Switzerland
- Eleonora Viganò
- Institute of Biomedical Ethics and History of Medicine, University of Zurich, Zurich, Switzerland
- Andrea Ferrario
- Mobiliar Lab for Analytics at ETH Zurich, Department of Management, Technology, and Economics, ETH Zurich, Zürich, Switzerland
- Chair of Technology Marketing, Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
18
Alzahrani A, Gay V, Alturki R. Exploring Saudi Individuals' Perspectives and Needs to Design a Hypertension Management Mobile Technology Solution: Qualitative Study. Int J Environ Res Public Health 2022; 19:12956. [PMID: 36232254 PMCID: PMC9566460 DOI: 10.3390/ijerph191912956] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Revised: 10/03/2022] [Accepted: 10/06/2022] [Indexed: 06/16/2023]
Abstract
Hypertension is a chronic condition caused by a poor lifestyle that affects patients' lives. Adherence to self-management programs increases hypertension self-monitoring, and allows greater prevention and disease management. Patient compliance with hypertension self-management is low in general; therefore, mobile health applications (mHealth apps) are becoming a daily necessity and provide opportunities to improve the prevention and treatment of chronic diseases, including hypertension. This research aims to explore Saudi individuals' perspectives and needs regarding designing a hypertension management mobile app to be used by hypertension patients to better manage their illnesses. Semi-structured interviews were conducted with 21 Saudi participants to explore their perspectives and views about the needs and requirements in designing a hypertension mobile technology solution, as well as usability and culture in the Saudi context. The study used NVivo to analyze the data, and the findings were organized into four main themes: the app's perceived health benefits, features and usability, suggestions for the app's content, and security and privacy. The results showed that there are many suggestions for improvements in mobile health apps that developers should take into consideration when designing apps. Mobile health apps should include physical activity tracking, related diet information, and reminders, which are promising features that could increase adherence to healthy lifestyles and consequently improve the self-management of hypertension patients. Mobile health apps provide opportunities to improve hypertension patients' self-management and self-monitoring. However, this study asserts that mobile health apps should not share users' data, and that adequate privacy disclosures should be implemented.
Affiliation(s)
- Adel Alzahrani
- School of Electrical and Data Engineering, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney 2007, Australia
- Valerie Gay
- School of Electrical and Data Engineering, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney 2007, Australia
- Ryan Alturki
- Department of Information Science, College of Computer and Information Systems, Umm Al-Qura University, Mecca 24382, Saudi Arabia
19
The performance of artificial intelligence-driven technologies in diagnosing mental disorders: an umbrella review. NPJ Digit Med 2022; 5:87. [PMID: 35798934 PMCID: PMC9262920 DOI: 10.1038/s41746-022-00631-8] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Accepted: 06/08/2022] [Indexed: 11/08/2022] Open
Abstract
Artificial intelligence (AI) has been successfully exploited in diagnosing many mental disorders. Numerous systematic reviews summarize the evidence on the accuracy of AI models in diagnosing different mental disorders. This umbrella review aims to synthesize the results of previous systematic reviews on the performance of AI models in diagnosing mental disorders. To identify relevant systematic reviews, we searched 11 electronic databases, checked the reference lists of the included reviews, and checked the reviews that cited the included reviews. Two reviewers independently selected the relevant reviews, extracted the data from them, and appraised their quality. We synthesized the extracted data using a narrative approach. We included 15 systematic reviews of the 852 citations identified. The included reviews assessed the performance of AI models in diagnosing Alzheimer's disease (n = 7), mild cognitive impairment (n = 6), schizophrenia (n = 3), bipolar disorder (n = 2), autism spectrum disorder (n = 1), obsessive-compulsive disorder (n = 1), post-traumatic stress disorder (n = 1), and psychotic disorders (n = 1). The performance of the AI models in diagnosing these mental disorders ranged between 21% and 100%. AI technologies offer great promise in diagnosing mental health disorders. The reported performance metrics paint a vivid picture of a bright future for AI in this field. Healthcare professionals in the field should cautiously and consciously begin to explore the opportunities of AI-based tools in their daily routine. It would also be encouraging to see a greater number of meta-analyses and further systematic reviews on the performance of AI models in diagnosing other common mental disorders such as depression and anxiety.
20
Venegas MD, Brooks JM, Myers AL, Storm M, Fortuna KL. Peer Support Specialists and Service Users' Perspectives on Privacy, Confidentiality, and Security of Digital Mental Health. IEEE Pervasive Comput 2022; 21:41-50. [PMID: 35814864 PMCID: PMC9267391 DOI: 10.1109/mprv.2022.3141986] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
As the digitalization of mental health systems progresses, the ethical and social debate on the use of these mental health technologies has seldom been explored among end-users. This article explores how service users (e.g., patients and users of mental health services) and peer support specialists understand and perceive issues of privacy, confidentiality, and security in digital mental health interventions. Semi-structured qualitative interviews were conducted among service users (n = 17) and peer support specialists (n = 15) from a convenience sample at an urban community mental health center in the United States. We identified technology ownership and use, and a lack of technology literacy, including limited understanding of privacy, confidentiality, and security, as the main barriers to engagement among service users. Peer support specialists demonstrated a high level of technology engagement, literacy with digital mental health tools, and a more comprehensive awareness of digital mental health ethics. We recommend peer support specialists as a potential resource to facilitate the ethical engagement of digital mental health interventions for service users. Finally, engaging potential end-users in the development cycle of digital mental health support platforms and increased privacy regulations may lead the field to a better understanding of effective uses of technology for people with mental health conditions. This study contributes to the ongoing debate on digital mental health ethics, data justice, and digital mental health by providing first-hand experience of digital ethics from end-users' perspectives.
Affiliation(s)
- Maria D Venegas
- Department of Veterans Affairs GRECC, Bedford, VA, 01730, USA
21
Vilaza GN, Bækgaard P. Teaching User Experience Design Ethics to Engineering Students: Lessons Learned. Front Comput Sci 2022. [DOI: 10.3389/fcomp.2022.793879] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Contemporary dilemmas about the role and impact of digital technologies in society have motivated the inclusion of topics of computing ethics in university programmes. Many past works have investigated how different pedagogical approaches and tools can support learning and teaching such a subject. This brief research report contributes to these efforts by describing a pilot study examining how engineering students learn from and apply ethical principles when making design decisions for an introductory User Experience (UX) design project. After a short lecture, students were asked to design and evaluate the ethical implications of digital health intervention prototypes. This approach was evaluated through the thematic analysis of semi-structured interviews conducted with 12 students, focused on the benefits and limitations of teaching ethics this way. Findings indicate that it can be very challenging to convey the importance of ethics to unaware and uninterested students, an observation that calls for a much stronger emphasis on moral philosophy education throughout engineering degrees. This paper finishes with a reflection on the hardships and possible ways forward for teaching and putting UX design ethics into practice. The lessons learned and described in this report aim to contribute to future pedagogical efforts to enable ethical thinking in computing education.
22
Rubeis G. iHealth: The ethics of artificial intelligence and big data in mental healthcare. Internet Interv 2022; 28:100518. [PMID: 35257003 PMCID: PMC8897624 DOI: 10.1016/j.invent.2022.100518] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Revised: 01/11/2022] [Accepted: 02/24/2022] [Indexed: 01/13/2023] Open
Abstract
The concept of intelligent health (iHealth) in mental healthcare integrates artificial intelligence (AI) and Big Data analytics. This article is an attempt to outline ethical aspects linked to iHealth by focussing on three crucial elements that have been defined in the literature: self-monitoring, ecological momentary assessment (EMA), and data mining. The material for the analysis was obtained by a database search. Studies and reviews providing outcome data for each of the three elements were analyzed. An ethical framing of the results was conducted that shows the opportunities and challenges of iHealth. The synergy between self-monitoring, EMA, and data mining might enable the prevention of mental illness, the prediction of its onset, the personalization of treatment, and the participation of patients in the treatment process. Challenges arise when it comes to user autonomy, privacy and data security, and potential bias.
|
23
|
Williams JE, Pykett J. Mental health monitoring apps for depression and anxiety in children and young people: A scoping review and critical ecological analysis. Soc Sci Med 2022; 297:114802. [DOI: 10.1016/j.socscimed.2022.114802] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 01/07/2022] [Accepted: 02/07/2022] [Indexed: 01/20/2023]
|
24
|
Alhasani M, Mulchandani D, Oyebode O, Baghaei N, Orji R. A Systematic and Comparative Review of Behavior Change Strategies in Stress Management Apps: Opportunities for Improvement. Front Public Health 2022; 10:777567. [PMID: 35284368 PMCID: PMC8907579 DOI: 10.3389/fpubh.2022.777567] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2021] [Accepted: 01/03/2022] [Indexed: 12/03/2022] Open
Abstract
Stress is one of the significant triggers of several physiological and psychological illnesses. Mobile health apps have been used to deliver various stress management interventions and coping strategies over the years. However, little work exists on persuasive strategies employed in stress management apps to promote behavior change. To address this gap, we review 150 stress management apps on both Google Play and Apple's App Store in three stages. First, we deconstruct and compare the persuasive/behavior change strategies operationalized in the apps using the Persuasive Systems Design (PSD) framework and Cialdini's Principles of Persuasion. Our results show that the most frequently employed strategies are personalization, followed by self-monitoring and trustworthiness, while social support strategies such as competition, cooperation, and social comparison are the least employed. Second, we compare our findings within the stress management domain with those from other mental health domains to uncover further insights. Finally, we reflect on our findings and offer eight design recommendations to improve the effectiveness of stress management apps and foster future research.
Affiliation(s)
- Mona Alhasani
- Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
- *Correspondence: Mona Alhasani
- Oladapo Oyebode
- Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
- Nilufar Baghaei
- Games and Extended Reality Lab, Massey University, Auckland, New Zealand
- Rita Orji
- Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
|
25
|
Eis S, Solà-Morales O, Duarte-Díaz A, Vidal-Alaball J, Perestelo-Pérez L, Robles N, Carrion C. Mobile Applications in Mood Disorders and Mental Health: Systematic Search in Apple App Store and Google Play Store and Review of the Literature. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:ijerph19042186. [PMID: 35206373 PMCID: PMC8871536 DOI: 10.3390/ijerph19042186] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 02/03/2022] [Accepted: 02/10/2022] [Indexed: 02/04/2023]
Abstract
OBJECTIVES The main objective of this work was to explore and characterize the current landscape of mobile applications available to treat mood disorders such as depression, bipolar disorder, and dysthymia. METHODS We developed a tool that makes both the Apple App Store and the Google Play Store searchable using keywords and that facilitates the extraction of basic app information from the search results. All app results were filtered using various inclusion and exclusion criteria. We characterized all resultant applications according to their technical details. Furthermore, we searched for scientific publications on each app's website and PubMed, to understand whether any of the apps were supported by any type of scientific evidence on their acceptability, validation, use, effectiveness, etc. RESULTS Thirty apps were identified that fit the inclusion and exclusion criteria. The literature search yielded 27 publications related to the apps; however, these did not exclusively concern mood disorders. Six were randomized studies; the rest included protocol, pilot, feasibility, case, and qualitative studies, among others. The majority of studies were conducted on relatively small scales, and 9 of the 27 studies did not explicitly study the effects of mobile application use on mental wellbeing. CONCLUSION While there exists a wealth of mobile applications aimed at the treatment of mental health disorders, including mood disorders, this study showed that only a handful of these are backed by robust scientific evidence. This result uncovers a need for further clinically oriented and systematic validation and testing of such apps.
Affiliation(s)
- Sophie Eis
- Fundació HiTT (Health Innovation Technology Transfer), 08015 Barcelona, Spain
- Oriol Solà-Morales
- Fundació HiTT (Health Innovation Technology Transfer), 08015 Barcelona, Spain
- Andrea Duarte-Díaz
- Canary Islands Health Research Institute Foundation (FIISC), 38109 Tenerife, Spain
- Josep Vidal-Alaball
- Health Promotion in Rural Areas Research Group, Gerència Territorial de la Catalunya Central, Institut Català de la Salut, 08272 Barcelona, Spain
- Unitat de Suport a la Recerca de la Catalunya Central, Fundació Institut Universitari per a la Recerca a l’Atenció Primària de Salut Jordi Gol i Gurina, 08007 Barcelona, Spain
- Faculty of Medicine, University of Vic-Central University of Catalonia (UVIC-UCC), 08500 Vic, Spain
- Noemí Robles
- eHealth Lab Research Group, School of Health Sciences and eHealth Centre, Universitat Oberta de Catalunya (UOC), 08035 Barcelona, Spain
- Carme Carrion
- eHealth Lab Research Group, School of Health Sciences and eHealth Centre, Universitat Oberta de Catalunya (UOC), 08035 Barcelona, Spain
|
26
|
SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Soc Sci Med 2022; 296:114782. [DOI: 10.1016/j.socscimed.2022.114782] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 02/02/2022] [Accepted: 02/03/2022] [Indexed: 12/12/2022]
|
27
|
Van Meter A, Agrawal N. LovesCompany: evaluating the safety and feasibility of a mental health-focused online community for adolescents. J Child Adolesc Ment Health 2022; 34:83-100. [PMID: 38504652 DOI: 10.2989/17280583.2023.2283030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/21/2024]
Abstract
Background: Adolescents are at risk for mental health (MH) disorders but are unlikely to seek services and may be reluctant to talk about their MH. An anonymous, online MH-focused community could help reduce suffering. However, online forums can also promote negative behaviours such as cyberbullying. This study aimed to evaluate the safety and feasibility of an online community - LovesCompany - to improve MH outcomes for adolescents. Methods: American adolescents (14-17 years) were recruited through social media. Eligible participants were randomised to LovesCompany or a placebo MH resource site. Outcomes were assessed every other week for six months, and at twelve months. Multilevel models assessed group differences in depression, anxiety, and suicidal ideation. A subgroup of participants completed qualitative interviews. Results: Participants (N = 202) were mostly female (70%), White non-Hispanic (69%), and cisgender (80%). There were no instances of inappropriate behaviour such as bullying or posting explicit content. Symptoms for both groups improved over time. Participants appreciated hearing others' experiences and valued the opportunity to offer support. Conclusion: Although adolescents are often resistant to MH treatment, they appear to be interested in anonymous, online, MH-focused conversation, and can benefit from giving and seeking support. Finding a balance between an appealing user experience, ethical considerations, and resource needs is challenging.
Affiliation(s)
- Anna Van Meter
- Department of Child and Adolescent Psychiatry, Grossman School of Medicine, New York University Langone Health, New York, USA
- Feinstein Institutes for Medical Research, Institute for Behavioral Science, Manhasset, USA
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Uniondale, USA
- Ferkauf Graduate School of Psychology, Yeshiva University, New York, USA
- Neha Agrawal
- Ferkauf Graduate School of Psychology, Yeshiva University, New York, USA
- Community West Treatment Center, Los Angeles, USA
|
28
|
Möllmann NR, Mirbabaie M, Stieglitz S. Is it alright to use artificial intelligence in digital health? A systematic literature review on ethical considerations. Health Informatics J 2021; 27:14604582211052391. [PMID: 34935557 DOI: 10.1177/14604582211052391] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
The application of artificial intelligence (AI) not only yields advantages for healthcare but also raises several ethical questions. Extant research on ethical considerations of AI in digital health is quite sparse, and a holistic overview is lacking. A systematic literature review searching across 853 peer-reviewed journals and conferences yielded 50 relevant articles, categorized under five major ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. The ethical landscape of AI in digital health is portrayed, including a snapshot to guide future development. The status quo highlights potential areas with little empirical but much-needed research. Less-explored areas with open ethical questions are identified to guide scholars' efforts, with an overview of the ethical principles addressed and the intensity with which they have been studied, including correlations. For practitioners, the review clarifies the novel questions AI raises, supporting properly regulated implementations, and underscores that society is moving from supportive technologies toward autonomous decision-making systems.
Affiliation(s)
- Nicholas RJ Möllmann
- Research Group Digital Communication and Transformation, University of Duisburg-Essen, Duisburg, Germany
- Milad Mirbabaie
- Faculty of Business Administration and Economics, Paderborn University, Paderborn, Germany
- Stefan Stieglitz
- Research Group Digital Communication and Transformation, University of Duisburg-Essen, Duisburg, Germany
|
29
|
Davies B. 'Personal Health Surveillance': The Use of mHealth in Healthcare Responsibilisation. Public Health Ethics 2021; 14:268-280. [PMID: 34899983 PMCID: PMC8661076 DOI: 10.1093/phe/phab013] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023] Open
Abstract
There is an ongoing increase in the use of mobile health (mHealth) technologies that patients can use to monitor health-related outcomes and behaviours. While the dominant narrative around mHealth focuses on patient empowerment, there is potential for mHealth to fit into a growing push for patients to take personal responsibility for their health. I call the first of these uses 'medical monitoring', and the second 'personal health surveillance'. After outlining two problems which the use of mHealth might seem to enable us to overcome (fairness of burdens and reliance on self-reporting), I note that these problems would only really be solved by unacceptably comprehensive forms of personal health surveillance that apply to all of us at all times. A more plausible model is to use personal health surveillance as a last resort for patients who would otherwise independently qualify for responsibility-based penalties. However, I note that there are still a number of ethical and practical problems that such a policy would need to overcome. The prospects of mHealth enabling a fair, genuinely cost-saving policy of patient responsibility are slim.
Affiliation(s)
- Ben Davies
- Uehiro Centre for Practical Ethics, University of Oxford
|
30
|
Wies B, Landers C, Ienca M. Digital Mental Health for Young People: A Scoping Review of Ethical Promises and Challenges. Front Digit Health 2021; 3:697072. [PMID: 34713173 PMCID: PMC8521997 DOI: 10.3389/fdgth.2021.697072] [Citation(s) in RCA: 42] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Accepted: 07/06/2021] [Indexed: 11/13/2022] Open
Abstract
Mental health disorders are complex disorders of the nervous system characterized by a behavioral or mental pattern that causes significant distress or impairment of personal functioning. Mental illness is of particular concern for younger people. The WHO estimates that around 20% of the world's children and adolescents have a mental health condition, a rate almost double that of the general population. One approach toward mitigating the medical and socio-economic effects of mental health disorders is leveraging the power of digital health technology to deploy assistive, preventative, and therapeutic solutions for people in need. We define “digital mental health” as any application of digital health technology for mental health assessment, support, prevention, and treatment. However, there is only limited evidence that digital mental health tools can be successfully implemented in clinical settings. Authors have pointed to a lack of technical and medical standards for digital mental health apps, personalized neurotechnology, and assistive cognitive technology as a possible cause of suboptimal adoption and implementation in the clinical setting. Further, ethical concerns have been raised related to insufficient effectiveness, lack of adequate clinical validation and user-centered design, as well as data privacy vulnerabilities of current digital mental health products. The aim of this paper is to report on a scoping review we conducted to capture and synthesize the growing literature on the promises and ethical challenges of digital mental health for young people aged 0–25. This review seeks to survey the scope and focus of the relevant literature, identify major benefits and opportunities of ethical significance (e.g., reducing suffering and improving well-being), and provide a comprehensive mapping of the emerging ethical challenges.
Our findings provide a comprehensive synthesis of the current literature and offer a detailed informative basis for any stakeholder involved in the development, deployment, and management of ethically-aligned digital mental health solutions for young people.
Affiliation(s)
- Blanche Wies
- Department of Health Sciences and Technology, ETH Zurich (Swiss Federal Institute of Technology), Zurich, Switzerland
- Constantin Landers
- Department of Health Sciences and Technology, ETH Zurich (Swiss Federal Institute of Technology), Zurich, Switzerland
- Marcello Ienca
- Department of Health Sciences and Technology, ETH Zurich (Swiss Federal Institute of Technology), Zurich, Switzerland
|
31
|
Vilaza GN, McCashin D. Is the Automation of Digital Mental Health Ethical? Applying an Ethical Framework to Chatbots for Cognitive Behaviour Therapy. Front Digit Health 2021; 3:689736. [PMID: 34713163 PMCID: PMC8521996 DOI: 10.3389/fdgth.2021.689736] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Accepted: 07/16/2021] [Indexed: 11/13/2022] Open
Abstract
The COVID-19 pandemic has intensified the need for mental health support across the whole spectrum of the population. Where global demand outweighs the supply of mental health services, established interventions such as cognitive behavioural therapy (CBT) have been adapted from traditional face-to-face interaction to technology-assisted formats. One such notable development is the emergence of Artificially Intelligent (AI) conversational agents for psychotherapy. Pre-pandemic, these adaptations had demonstrated some positive results; but they also generated debate due to a number of ethical and societal challenges. This article commences with a critical overview of both positive and negative aspects concerning the role of AI-CBT in its present form. Thereafter, an ethical framework is applied with reference to the themes of (1) beneficence, (2) non-maleficence, (3) autonomy, (4) justice, and (5) explicability. These themes are then discussed in terms of practical recommendations for future developments. Although automated versions of therapeutic support may be of appeal during times of global crises, ethical thinking should be at the core of AI-CBT design, in addition to guiding research, policy, and real-world implementation as the world considers post-COVID-19 society.
|
32
|
Ong T, Wilczewski H, Paige SR, Soni H, Welch BM, Bunnell BE. Extended Reality for Enhanced Telehealth During and Beyond COVID-19: Viewpoint. JMIR Serious Games 2021; 9:e26520. [PMID: 34227992 PMCID: PMC8315161 DOI: 10.2196/26520] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Revised: 05/07/2021] [Accepted: 06/15/2021] [Indexed: 12/15/2022] Open
Abstract
The COVID-19 pandemic caused widespread challenges and revealed vulnerabilities across global health care systems. In response, many health care providers turned to telehealth solutions, which have been widely embraced and are likely to become standard for modern care. Immersive extended reality (XR) technologies have the potential to enhance telehealth with greater acceptability, engagement, and presence. However, numerous technical, logistic, and clinical barriers remain to the incorporation of XR technology into telehealth practice. COVID-19 may accelerate the union of XR and telehealth as researchers explore novel solutions to close social distances. In this viewpoint, we highlight research demonstrations of XR telehealth during the COVID-19 pandemic and discuss future directions to make XR the next evolution of remote health care.
Affiliation(s)
- Triton Ong
- Doxy.me, LLC, Rochester, NY, United States
- Hiral Soni
- Doxy.me, LLC, Rochester, NY, United States
- Brandon M Welch
- Doxy.me, LLC, Rochester, NY, United States
- Biomedical Informatics Center, Medical University of South Carolina, Charleston, SC, United States
- Brian E Bunnell
- Doxy.me, LLC, Rochester, NY, United States
- Department of Psychiatry, University of South Florida, Tampa, FL, United States
|
33
|
Ursin F, Timmermann C, Orzechowski M, Steger F. Diagnosing Diabetic Retinopathy With Artificial Intelligence: What Information Should Be Included to Ensure Ethical Informed Consent? Front Med (Lausanne) 2021; 8:695217. [PMID: 34368192 PMCID: PMC8333706 DOI: 10.3389/fmed.2021.695217] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Accepted: 06/22/2021] [Indexed: 11/13/2022] Open
Abstract
Purpose: The method of diagnosing diabetic retinopathy (DR) through artificial intelligence (AI)-based systems has been commercially available since 2018. This introduces new ethical challenges with regard to obtaining informed consent from patients. The purpose of this work is to develop a checklist of items to be disclosed when diagnosing DR with AI systems in a primary care setting. Methods: Two systematic literature searches were conducted in PubMed and Web of Science databases: a narrow search focusing on DR and a broad search on general issues of AI-based diagnosis. An ethics content analysis was conducted inductively to extract two features of included publications: (1) novel information content for AI-aided diagnosis and (2) the ethical justification for its disclosure. Results: The narrow search yielded n = 537 records of which n = 4 met the inclusion criteria. The information process was scarcely addressed for the primary care setting. The broad search yielded n = 60 records of which n = 11 were included. In total, eight novel elements were identified to be included in the information process for ethical reasons, all of which stem from the technical specifics of medical AI. Conclusions: Implications for the general practitioner are two-fold: First, doctors need to be better informed about the ethical implications of novel technologies and must understand them to properly inform patients. Second, patients' overconfidence or fears can be countered by communicating the risks, limitations, and potential benefits of diagnostic AI systems. If patients accept and are aware of the limitations of AI-aided diagnosis, they increase their chances of being diagnosed and treated in time.
|
34
|
Mobile apps for travel medicine and ethical considerations: A systematic review. Travel Med Infect Dis 2021; 43:102143. [PMID: 34256131 DOI: 10.1016/j.tmaid.2021.102143] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Revised: 06/30/2021] [Accepted: 07/02/2021] [Indexed: 11/20/2022]
Abstract
BACKGROUND The advent of mobile applications for health and medicine will revolutionize travel medicine. Despite their many benefits, such as access to real-time data, mobile apps for travel medicine are accompanied by many ethical issues, including questions about security and privacy. METHODS A systematic literature review was conducted following PRISMA guidelines. Database screening yielded 1795 results and seven papers satisfied the criteria for inclusion. Through a mix of inductive and deductive data extraction, this systematic review examined both the benefits and challenges, as well as ethical considerations, of mobile apps for travel medicine. RESULTS Ethical considerations were discussed with varying depth across the included articles, with privacy and data protection mentioned most frequently, highlighting concerns over sensitive information and a lack of guidelines in the digital sphere. Additionally, technical concerns about data quality and bias were predominant issues for researchers and developers alike. Some ethical issues were not discussed at all, including equity and user involvement. CONCLUSION This paper highlights the scarcity of discussion around ethical issues. Both researchers and developers need to better integrate ethical reflection at each step of the development and use of health apps. More effective oversight mechanisms and clearer ethical guidance are needed to guide the stakeholders in this endeavour.
|
35
|
Laacke S, Mueller R, Schomerus G, Salloch S. Artificial Intelligence, Social Media and Depression. A New Concept of Health-Related Digital Autonomy. THE AMERICAN JOURNAL OF BIOETHICS : AJOB 2021; 21:4-20. [PMID: 33393864 DOI: 10.1080/15265161.2020.1863515] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The development of artificial intelligence (AI) in medicine raises fundamental ethical issues. As one example, AI systems in the field of mental health successfully detect signs of mental disorders, such as depression, by using data from social media. These AI depression detectors (AIDDs) identify users who are at risk of depression prior to any contact with the healthcare system. The article focuses on the ethical implications of AIDDs regarding affected users' health-related autonomy. Firstly, it presents the (ethical) discussion of AI in medicine and, specifically, in mental health. Secondly, two models of AIDDs using social media data and different usage scenarios are introduced. Thirdly, the concept of patient autonomy, according to Beauchamp and Childress, is critically discussed. Since this concept does not encompass the specific challenges linked with the digital context of AIDDs in social media sufficiently, the current analysis suggests, finally, an extended concept of health-related digital autonomy.
|
36
|
Zidaru T, Morrow EM, Stockley R. Ensuring patient and public involvement in the transition to AI-assisted mental health care: A systematic scoping review and agenda for design justice. Health Expect 2021; 24:1072-1124. [PMID: 34118185 PMCID: PMC8369091 DOI: 10.1111/hex.13299] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2020] [Revised: 04/07/2021] [Accepted: 05/26/2021] [Indexed: 12/16/2022] Open
Abstract
Background Machine‐learning algorithms and big data analytics, popularly known as ‘artificial intelligence’ (AI), are being developed and taken up globally. Patient and public involvement (PPI) in the transition to AI‐assisted health care is essential for design justice based on diverse patient needs. Objective To inform the future development of PPI in AI‐assisted health care by exploring public engagement in the conceptualization, design, development, testing, implementation, use and evaluation of AI technologies for mental health. Methods Systematic scoping review drawing on design justice principles, and (i) structured searches of Web of Science (all databases) and Ovid (MEDLINE, PsycINFO, Global Health and Embase); (ii) handsearching (reference and citation tracking); (iii) grey literature; and (iv) inductive thematic analysis, tested at a workshop with health researchers. Results The review identified 144 articles that met inclusion criteria. Three main themes reflect the challenges and opportunities associated with PPI in AI‐assisted mental health care: (a) applications of AI technologies in mental health care; (b) ethics of public engagement in AI‐assisted care; and (c) public engagement in the planning, development, implementation, evaluation and diffusion of AI technologies. Conclusion The new data‐rich health landscape creates multiple ethical issues and opportunities for the development of PPI in relation to AI technologies. Further research is needed to understand effective modes of public engagement in the context of AI technologies, to examine pressing ethical and safety issues and to develop new methods of PPI at every stage, from concept design to the final review of technology in practice. Principles of design justice can guide this agenda.
Affiliation(s)
- Teodor Zidaru
- Department of Anthropology, London School of Economics and Political Science (LSE), London, UK
- Rich Stockley
- Surrey Heartlands Health and Care Partnership, Guildford and Waverley CCG, Guildford, UK
- Insight and Feedback Team, Nursing Directorate, NHS England and NHS Improvement, London, UK
- Surrey County Council, Kingston upon Thames, UK
|
37
|
Shen N, Kassam I, Zhao H, Chen S, Wang W, Wickham S, Strudwick G, Carter-Langford A. Foundations for Meaningful Consent in Canada’s Digital Health Ecosystem: Findings from a Pan-Canadian Survey (Preprint). JMIR Med Inform 2021; 10:e30986. [PMID: 35357318 PMCID: PMC9015739 DOI: 10.2196/30986] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 12/17/2021] [Accepted: 01/31/2022] [Indexed: 01/25/2023] Open
Affiliation(s)
- Nelson Shen
- Centre for Complex Interventions, Centre for Addiction and Mental Health, Toronto, ON, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
- Iman Kassam
- Centre for Complex Interventions, Centre for Addiction and Mental Health, Toronto, ON, Canada
- Haoyu Zhao
- Centre for Complex Interventions, Centre for Addiction and Mental Health, Toronto, ON, Canada
- Sheng Chen
- Centre for Complex Interventions, Centre for Addiction and Mental Health, Toronto, ON, Canada
- Wei Wang
- Centre for Complex Interventions, Centre for Addiction and Mental Health, Toronto, ON, Canada
- College of Public Health, University of South Florida, Tampa, FL, United States
- Gillian Strudwick
- Centre for Complex Interventions, Centre for Addiction and Mental Health, Toronto, ON, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada
|
38
|
Skorburg JA, Yam J. Is There an App for That?: Ethical Issues in the Digital Mental Health Response to COVID-19. AJOB Neurosci 2021; 13:177-190. [PMID: 33989127 DOI: 10.1080/21507740.2021.1918284] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Well before COVID-19, there was growing excitement about the potential of various digital technologies such as tele-health, smartphone apps, or AI chatbots to revolutionize mental healthcare. As the SARS-CoV-2 virus spread across the globe, clinicians warned of the mental illness epidemic within the coronavirus pandemic. Now, funding for digital mental health technologies is surging and many researchers are calling for widespread adoption to address the mental health sequelae of COVID-19. Reckoning with the ethical implications of these technologies is urgent because decisions made today will shape the future of mental health research and care for the foreseeable future. We contend that the most pressing ethical issues concern (1) the extent to which these technologies demonstrably improve mental health outcomes and (2) the likelihood that wide-scale adoption will exacerbate the existing health inequalities laid bare by the pandemic. We argue that the evidence for efficacy is weak and that the likelihood of increasing inequalities is high. First, we review recent trends in digital mental health. Next, we turn to the clinical literature to show that many technologies proposed as a response to COVID-19 are unlikely to improve outcomes. Then, we argue that even evidence-based technologies run the risk of increasing health disparities. We conclude by suggesting that policymakers should not allocate limited resources to the development of many digital mental health tools and should focus instead on evidence-based solutions to address mental health inequalities.
|
39
|
Blease C, Kharko A, Annoni M, Gaab J, Locher C. Machine Learning in Clinical Psychology and Psychotherapy Education: A Mixed Methods Pilot Survey of Postgraduate Students at a Swiss University. Front Public Health 2021; 9:623088. [PMID: 33898374 PMCID: PMC8064116 DOI: 10.3389/fpubh.2021.623088] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2020] [Accepted: 03/05/2021] [Indexed: 11/13/2022] Open
Abstract
Background: There is increasing use of psychotherapy apps in mental health care. Objective: This mixed methods pilot study aimed to explore postgraduate clinical psychology students' familiarity and formal exposure to topics related to artificial intelligence and machine learning (AI/ML) during their studies. Methods: In April-June 2020, we conducted a mixed-methods online survey using a convenience sample of 120 clinical psychology students enrolled in a two-year Master's program at a Swiss university. Results: In total 37 students responded (response rate: 37/120, 31%). Among respondents, 73% (n = 27) intended to enter a mental health profession, and 97% reported that they had heard of the term "machine learning." Students estimated 0.52% of their program would be spent on AI/ML education. Around half (46%) reported that they intended to learn about AI/ML as it pertained to mental health care. On a 5-point Likert scale, students "moderately agreed" (median = 4) that AI/ML should be part of clinical psychology/psychotherapy education. Qualitative analysis of students' comments resulted in four major themes on the impact of AI/ML on mental healthcare: (1) Changes in the quality and understanding of psychotherapy care; (2) Impact on patient-therapist interactions; (3) Impact on the psychotherapy profession; (4) Data management and ethical issues. Conclusions: This pilot study found that postgraduate clinical psychology students held a wide range of opinions but had limited formal education on how AI/ML-enabled tools might impact psychotherapy. The survey raises questions about how curricula could be enhanced to educate clinical psychology/psychotherapy trainees about the scope of AI/ML in mental healthcare.
Affiliation(s)
- Charlotte Blease
- General Medicine and Primary Care, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, United States
| | - Anna Kharko
- Faculty of Health, University of Plymouth, Plymouth, United Kingdom
| | - Marco Annoni
- Interdepartmental Center for Research Ethics and Integrity CNR, Rome, Italy; Fondazione Umberto Veronesi, Milan, Italy
| | - Jens Gaab
- Department of Clinical Psychology and Psychotherapy, University of Basel, Basel, Switzerland
| | - Cosima Locher
- Faculty of Health, University of Plymouth, Plymouth, United Kingdom; Department of Clinical Psychology and Psychotherapy, University of Basel, Basel, Switzerland; Department of Consultation-Liaison Psychiatry and Psychosomatic Medicine, University Hospital Zurich, Zurich, Switzerland
| |
|
40
|
Chivilgina O, Elger BS, Jotterand F. Digital Technologies for Schizophrenia Management: A Descriptive Review. SCIENCE AND ENGINEERING ETHICS 2021; 27:25. [PMID: 33835287 PMCID: PMC8035115 DOI: 10.1007/s11948-021-00302-z] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 03/23/2021] [Indexed: 05/05/2023]
Abstract
While the implementation of digital technology in psychiatry appears promising, there is an urgent need to address the implications of the absence of ethical design in the early development of such technologies. Some authors have noted the gap between technology development and ethical analysis and have called for an upstream examination of the ethical issues raised by digital technologies. In this paper, we address this suggestion, particularly in relation to digital healthcare technologies for patients with schizophrenia spectrum disorders. The introduction of digital technologies in psychiatry offers a broad spectrum of diagnostic and treatment options tailored to patients' health needs and care goals. These technologies include wearable devices, smartphone applications for high-immersive virtual realities, smart homes, telepsychiatry and messaging systems for patients in rural areas. The availability of these technologies could increase access to mental health services and improve the diagnosis of mental disorders. In this descriptive review, we systematize ethical concerns about digital technologies for mental health with a particular focus on individuals suffering from schizophrenia. There are many unsolved dilemmas and conflicts of interest in the implementation of these technologies, such as (1) the lack of evidence on efficacy and impact on self-perception; (2) the lack of clear standards for the safety of their daily implementation; (3) unclear roles of technology and a shift in the responsibilities of all parties; (4) no guarantee of data confidentiality; and (5) the lack of a user-centered design that meets the particular needs of patients with schizophrenia. mHealth can improve care in psychiatry and make mental healthcare services more efficient and personalized while destigmatizing mental health disorders.
To ensure that these technologies will benefit people with mental health disorders, we need to heighten sensitivity to ethical issues among mental healthcare specialists, health policy makers, software developers, patients themselves and their proxies. Additionally, we need to develop frameworks for furthering sustainable development in the digital technologies industry and for the responsible usage of such technologies for patients with schizophrenia in the clinical setting. We suggest that digital technology in psychiatry, particularly for schizophrenia and other serious mental health disorders, should be integrated into treatment with professional supervision rather than as a self-treatment tool.
Affiliation(s)
- Olga Chivilgina
- Institute of Biomedical Ethics, University of Basel, Basel, Switzerland.
| | - Bernice S Elger
- Institute of Biomedical Ethics, University of Basel, Basel, Switzerland
- Unit of Health Law & Humanitarian Medicine At the Institute for Legal Medicine, University of Geneva, Geneva, Switzerland
| | - Fabrice Jotterand
- Institute of Biomedical Ethics, University of Basel, Basel, Switzerland
- Center for Bioethics and Medical Humanities, Institute for Health and Equity, Medical College of Wisconsin, Milwaukee, USA
| |
|
41
|
Abd-alrazaq A, Schneider J, Alhuwail D, Toro CT, Ahmed A, Alajlani M, Househ M. The performance of artificial intelligence-driven technologies in diagnosing mental disorders: An umbrella review (Preprint). [DOI: 10.2196/preprints.29235] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
Abstract
BACKGROUND
Diagnosing mental disorders is usually not an easy task and requires a large amount of time and effort given the complex nature of mental disorders. Artificial intelligence (AI) has been successfully exploited in diagnosing many mental disorders. Numerous systematic reviews summarize the evidence on the accuracy of AI models in diagnosing different mental disorders.
OBJECTIVE
This umbrella review aims to synthesize results of previous systematic reviews on the performance of AI models in diagnosing mental disorders.
METHODS
To identify relevant systematic reviews, we searched 11 electronic databases, checked the reference list of the included reviews, and checked the reviews that cited the included reviews. Two reviewers independently selected the relevant reviews, extracted the data from them, and appraised their quality. We synthesized the extracted data using the narrative approach. Specifically, results of the included reviews were grouped based on the target mental disorders that the AI classifiers distinguish.
RESULTS
We included 15 systematic reviews from the 852 citations identified by searching all databases. The included reviews assessed the performance of AI models in diagnosing Alzheimer’s disease (n=7), mild cognitive impairment (n=6), schizophrenia (n=3), bipolar disease (n=2), autism spectrum disorder (n=1), obsessive-compulsive disorder (n=1), post-traumatic stress disorder (n=1), and psychotic disorders (n=1). The performance of the AI models in diagnosing these mental disorders ranged between 21% and 100%.
CONCLUSIONS
AI technologies offer great promise in diagnosing mental health disorders, and the reported performance metrics suggest a promising role for AI in this field. To expedite progress towards incorporating these technologies into routine practice, we recommend that healthcare professionals in the field cautiously and consciously begin to explore the opportunities of AI-based tools in their daily routine. It would also be encouraging to see more meta-analyses and further systematic reviews on the performance of AI models in diagnosing other common mental disorders such as depression and anxiety.
CLINICALTRIAL
CRD42021231558
|
42
|
Kuhn E, Fiske A, Henningsen P, Buyx A. [Psychotherapy with an Autonomous Artificial Intelligence - Ethical Benefits and Challenges]. PSYCHIATRISCHE PRAXIS 2021; 48:S26-S30. [PMID: 33652484 DOI: 10.1055/a-1369-2938] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
OBJECTIVE This paper provides an overview of a range of ethical aspects involved in the use of autonomous, virtual or embodied artificial intelligence (AI) in the care of people with mental health issues. METHODOLOGY The overview is based on a thematic literature review. It is guided by the principles of biomedical ethics together with the concept of epistemic (in)justice. RESULTS In addition to a risk-benefit analysis, (digital) health literacy, equity of access, issues of under- or misuse of care, and an adaptation of informed consent need to be considered. CONCLUSION The ethical assessment of autonomous AI in psychotherapy remains open; too many clinical, ethical, legal, and practical questions remain to be addressed. Quality criteria for AI application as well as guidelines for its clinical use need to be developed before wider clinical implementation.
Affiliation(s)
- Eva Kuhn
- Sektion Global Health, Institut für Hygiene und Öffentliche Gesundheit, Universitätsklinikum Bonn
| | - Amelia Fiske
- Institut für Geschichte und Ethik der Medizin, Technische Universität München
| | - Peter Henningsen
- Klinik für Psychosomatische Medizin und Psychotherapie, Klinikum rechts der Isar der Technischen Universität München
| | - Alena Buyx
- Institut für Geschichte und Ethik der Medizin, Technische Universität München
| |
|
43
|
Cho PJ, Singh K, Dunn J. Roles of artificial intelligence in wellness, healthy living, and healthy status sensing. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00009-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
44
|
Denecke K, Abd-Alrazaq A, Househ M. Artificial Intelligence for Chatbots in Mental Health: Opportunities and Challenges. MULTIPLE PERSPECTIVES ON ARTIFICIAL INTELLIGENCE IN HEALTHCARE 2021:115-128. [DOI: 10.1007/978-3-030-67303-1_10] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
|
45
|
Martinez-Martin N, Dasgupta I, Carter A, Chandler JA, Kellmeyer P, Kreitmair K, Weiss A, Cabrera LY. Ethics of Digital Mental Health During COVID-19: Crisis and Opportunities. JMIR Ment Health 2020; 7:e23776. [PMID: 33156811 PMCID: PMC7758081 DOI: 10.2196/23776] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/22/2020] [Revised: 10/11/2020] [Accepted: 10/31/2020] [Indexed: 01/15/2023] Open
Abstract
Social distancing measures due to the COVID-19 pandemic have accelerated the adoption and implementation of digital mental health tools. Psychiatry and therapy sessions are being conducted via videoconferencing platforms, and the use of digital mental health tools for monitoring and treatment has grown. This rapid shift to telehealth during the pandemic has given added urgency to the ethical challenges presented by digital mental health tools. Regulatory standards have been relaxed to allow this shift to socially distanced mental health care. It is imperative to ensure that the implementation of digital mental health tools, especially in the context of this crisis, is guided by ethical principles and abides by professional codes of conduct. This paper examines key areas for an ethical path forward in this digital mental health revolution: privacy and data protection, safety and accountability, and access and fairness.
Affiliation(s)
- Nicole Martinez-Martin
- Department of Pediatrics, Center for Biomedical Ethics, School of Medicine, Stanford University, Stanford, CA, United States
| | - Ishan Dasgupta
- Department of Philosophy, University of Washington, Seattle, WA, United States
| | - Adrian Carter
- School of Psychological Sciences and the Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
| | - Jennifer A Chandler
- Faculty of Law, Centre for Health Law, Policy & Ethics, University of Ottawa, Ottawa, ON, Canada
| | - Philipp Kellmeyer
- Neuroethics and AI Ethics Lab Department of Neurosurgery, University Medical Center Freiburg, Freiburg, Germany
| | - Karola Kreitmair
- Department of Medical History and Bioethics, School of Medicine and Public Health, University of Wisconsin, Madison, WI, United States
| | - Anthony Weiss
- Department of Psychiatry and Center for Bioethics, Harvard Medical School, Boston, MA, United States
| | - Laura Y Cabrera
- Center for Ethics & Humanities in the Life Sciences, Department of Translational Neuroscience, Michigan State University, East Lansing, MI, United States
| |
|
46
|
Lustgarten SD, Garrison YL, Sinnard MT, Flynn AW. Digital privacy in mental healthcare: current issues and recommendations for technology use. Curr Opin Psychol 2020; 36:25-31. [PMID: 32361651 PMCID: PMC7195295 DOI: 10.1016/j.copsyc.2020.03.012] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2020] [Accepted: 03/24/2020] [Indexed: 11/26/2022]
Abstract
Mental healthcare providers increasingly use technology for psychotherapy services. This progress enables professionals to communicate, store information, and rely on digital software and hardware. Emails, text messaging, telepsychology/telemental health therapy, electronic medical records, cloud-based storage, apps/applications, and assessments are now available within the provision of services. Of those mentioned, some are directly utilized for psychotherapy while others indirectly aid providers. Whereas professionals previously wrote notes locally, technology has empowered providers to work more efficiently with third-party services and solutions. However, the implementation of these advancements in mental healthcare involves consequences to digital privacy and might increase clients' risk of unintended breaches of confidentiality. This manuscript reviews common technologies, considers the vulnerabilities therein, and proposes suggestions to strengthen privacy.
Affiliation(s)
- Samuel D Lustgarten
- Department of Counseling Psychology, University of Wisconsin-Madison, United States.
| | - Yunkyoung L Garrison
- Department of Psychological and Quantitative Foundations, University of Iowa, Iowa City, United States; Colorado State University Health Network, Fort Collins, United States
| | - Morgan T Sinnard
- Department of Counseling Psychology, University of Wisconsin-Madison, United States
| | - Anthony Wp Flynn
- Department of Counseling Psychology, University of Wisconsin-Madison, United States
| |
|
47
|
Palmer KM, Burrows V. Ethical and Safety Concerns Regarding the Use of Mental Health-Related Apps in Counseling: Considerations for Counselors. JOURNAL OF TECHNOLOGY IN BEHAVIORAL SCIENCE 2020; 6:137-150. [PMID: 32904690 PMCID: PMC7457894 DOI: 10.1007/s41347-020-00160-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 08/04/2020] [Accepted: 08/13/2020] [Indexed: 12/23/2022]
Abstract
Mental health-related smartphone apps (MHapps) have the potential to greatly enhance and enrich the counseling relationship, and dramatically improve the lives of clients. However, a large portion of MHapps have not been empirically researched and found to be effective. An average of 2 million apps are available in the Apple and Android stores, and users average more than 80 apps on their phones. Many of the apps lack disclaimers about the collection of user information, and there is no governing body to oversee and regulate app development and availability. This is particularly problematic with mental health-related smartphone apps, because many developers are not affiliated with mental health professionals, and many apps do not provide emergency information should a mental health emergency occur while using the app. Moreover, users are left to haphazardly make decisions about health-related apps usage without assistance. Counselors who supplement counseling with mental health-related smartphone apps could unknowingly violate their Code of Ethics by integrating apps that may jeopardize their clients' safety. The authors review literature related to mental health-related app efficacy, safety, and ethics and provide a compilation of items to consider that can be used before supplementing counseling with mental health-related apps.
Affiliation(s)
- Kathleen M. Palmer
- University of Detroit Mercy, 4001 W. McNichols Road, Detroit, MI 48221-3038 USA
| | - Vanessa Burrows
- University of Detroit Mercy, 4001 W. McNichols Road, Detroit, MI 48221-3038 USA
| |
|
48
|
Hochheiser H, Valdez RS. Human-Computer Interaction, Ethics, and Biomedical Informatics. Yearb Med Inform 2020; 29:93-98. [PMID: 32823302 PMCID: PMC7442500 DOI: 10.1055/s-0040-1701990] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023] Open
Abstract
Objectives: To provide an overview of recent work at the intersection of Biomedical Informatics, Human-Computer Interaction, and Ethics.
Methods: Search terms for Human-Computer Interaction, Biomedical Informatics, and Ethics were used to identify relevant papers published between 2017 and 2019. Relevant papers were identified through multiple methods, including database searches, manual reviews of citations, recent publications, and special collections, as well as through peer recommendations. Identified articles were reviewed and organized into broad themes.
Results: We identified relevant papers at the intersection of Biomedical Informatics, Human-Computer Interaction, and Ethics in over a dozen journals. The content of these papers was organized into three broad themes: ethical issues associated with systems in use, systems design, and responsible conduct of research.
Conclusions: The results of this overview demonstrate an active interest in exploring the ethical implications of Human-Computer Interaction concerns in Biomedical Informatics. Papers emphasizing ethical concerns associated with patient-facing tools, mobile devices, social media, privacy, inclusivity, and e-consent reflect the growing prominence of these topics in biomedical informatics research. New questions in these areas will likely continue to arise with the growth of precision medicine and citizen science.
Affiliation(s)
- Harry Hochheiser
- Department of Biomedical Informatics, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania USA
| | - Rupa S Valdez
- Public Health Sciences & Engineering Systems and Environment, University of Virginia, Charlottesville, Virginia USA
| |
|
49
|
Abstract
OBJECTIVES To survey international regulatory frameworks that serve to protect privacy of personal data as a human right as well as to review the literature regarding privacy protections and data ownership in mobile health (mHealth) technologies between January 1, 2016 and June 1, 2019 in order to identify common themes. METHODS We performed a review of relevant literature available in English published between January 1, 2016 and June 1, 2019 from databases including PubMed, Google Scholar, and Web of Science, as well as relevant legislative background material. Articles out of scope (as detailed below) were eliminated. We categorized the remaining pool of articles and discrete themes were identified, specifically: concerns around data transmission and storage, including data ownership and the ability to re-identify previously de-identified data; issues with user consent (including the availability of appropriate privacy policies) and access control; and the changing culture and variable global attitudes toward privacy of health data. RESULTS Recent literature demonstrates that the security of mHealth data storage and transmission remains of wide concern, and aggregated data that were previously considered "de-identified" have now been demonstrated to be re-identifiable. Consumer-informed consent may be lacking with regard to mHealth applications due to the absence of a privacy policy and/or to text that is too complex and lengthy for most users to comprehend. The literature surveyed emphasizes improved access control strategies. This survey also illustrates a wide variety of global user perceptions regarding health data privacy. CONCLUSION The international regulatory framework that serves to protect privacy of personal data as a human right is diverse. Given the challenges legislators face to keep up with rapidly advancing technology, we introduce the concept of a "healthcare fiduciary" to serve the best interest of data subjects in the current environment.
Affiliation(s)
- Hannah K. Galvin
- Cambridge Health Alliance, Cambridge, MA, USA
- Tufts University School of Medicine, Boston, MA, USA
| | - Paul R. DeMuro
- Chief Legal Officer Health and Wellness, Royal Palm Companies, Miami, Florida
| |
|
50
|
Bubolz S, Mayer G, Gronewold N, Hilbel T, Schultz JH. Adherence to Established Treatment Guidelines Among Unguided Digital Interventions for Depression: Quality Evaluation of 28 Web-Based Programs and Mobile Apps. J Med Internet Res 2020; 22:e16136. [PMID: 32673221 PMCID: PMC7385636 DOI: 10.2196/16136] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2019] [Revised: 04/01/2020] [Accepted: 04/19/2020] [Indexed: 01/04/2023] Open
Abstract
Background: Web-based interventions for depression have been widely tested for usability and functioning. However, the few studies that have addressed the therapeutic quality of these interventions have mainly focused on general aspects without consideration of specific quality factors related to particular treatment components. Clinicians and scientists are calling for standardized assessment criteria for web-based interventions to enable effective and trustworthy patient care. Therefore, an extensive evaluation of web-based interventions at the level of individual treatment components based on therapeutic guidelines and manuals is needed. Objective: The objective of this study was to evaluate the quality of unguided web-based interventions for depression at the level of individual treatment components based on their adherence to current gold-standard treatment guidelines and manuals. Methods: A comprehensive online search of popular app stores and search engines in January 2018 revealed 11 desktop programs and 17 smartphone apps that met the inclusion criteria. Programs and apps were included if they were available for German users, interactive, unguided, and targeted toward depression. All programs and apps were tested by three independent researchers following a standardized procedure with a predefined symptom trajectory. During the testing, all web-based interventions were rated with a standardized list of criteria based on treatment guidelines and manuals for depression. Results: Overall interrater reliability for all raters was substantial, with an intraclass correlation coefficient of 0.73 and a Gwet AC1 value of 0.80. The main features of web-based interventions included mood tracking (24/28, 86%), psychoeducation (21/28, 75%), cognitive restructuring (21/28, 75%), crisis management (20/28, 71%), behavioral activation (19/28, 68%), and relaxation training (18/28, 64%). Overall, therapeutic meaningfulness was rated higher for desktop programs (mean 4.13, SD 1.17) than for smartphone apps (mean 2.92, SD 1.46). Conclusions: Although many exercises from manuals are included in web-based interventions, the necessary therapeutic depth of the interventions is often not reached, and risk management is frequently lacking. There is a need for further research targeting general principles for the development and evaluation of therapeutically sound web-based interventions for depression.
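To illustrate the chance-corrected agreement statistic this review reports (Gwet's AC1), a minimal two-rater sketch follows. This is not the authors' code: the function name is illustrative, the study used three raters (this sketch shows the simpler pairwise case), and the example ratings are invented. It assumes at least two rating categories.

```python
def gwet_ac1(r1, r2):
    """Gwet's AC1 chance-corrected agreement for two raters over categorical items.

    AC1 = (pa - pe) / (1 - pe), where pa is the observed agreement and
    pe = sum_k pi_k * (1 - pi_k) / (q - 1), with pi_k the mean proportion of
    all ratings falling in category k and q the number of categories observed.
    """
    assert len(r1) == len(r2) and r1, "need paired, non-empty rating lists"
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    q = len(cats)
    # Observed agreement: share of items both raters coded identically.
    pa = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement under Gwet's model, pooling both raters' ratings.
    pi = {k: (r1.count(k) + r2.count(k)) / (2 * n) for k in cats}
    pe = sum(p * (1 - p) for p in pi.values()) / (q - 1)
    return (pa - pe) / (1 - pe)

# Invented binary ratings (1 = criterion present, 0 = absent) for 4 items.
print(gwet_ac1([1, 1, 0, 1], [1, 0, 0, 1]))
print(gwet_ac1([0, 1, 1, 0], [0, 1, 1, 0]))  # perfect agreement -> 1.0
```

Unlike Cohen's kappa, AC1 stays stable when one category dominates the ratings, which is why it is often reported alongside the ICC in rating studies like this one.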
Affiliation(s)
- Stefan Bubolz
- Department of General Internal Medicine and Psychosomatics, Heidelberg University Hospital, Heidelberg, Germany
| | - Gwendolyn Mayer
- Department of General Internal Medicine and Psychosomatics, Heidelberg University Hospital, Heidelberg, Germany
| | - Nadine Gronewold
- Department of General Internal Medicine and Psychosomatics, Heidelberg University Hospital, Heidelberg, Germany
| | - Thomas Hilbel
- Westphalian University of Applied Sciences, Gelsenkirchen, Germany
| | - Jobst-Hendrik Schultz
- Department of General Internal Medicine and Psychosomatics, Heidelberg University Hospital, Heidelberg, Germany
| |
|