1. Perivolaris A, Adams-McGavin C, Madan Y, Kishibe T, Antoniou T, Mamdani M, Jung JJ. Quality of interaction between clinicians and artificial intelligence systems. A systematic review. Future Healthc J 2024;11:100172. PMID: 39281326; PMCID: PMC11399614; DOI: 10.1016/j.fhj.2024.100172.
Abstract
Introduction: Artificial intelligence (AI) has the potential to improve healthcare quality when thoughtfully integrated into clinical practice. Current evaluations of AI solutions tend to focus solely on model performance, leaving a critical knowledge gap in the assessment of AI-clinician interactions. We systematically reviewed the existing literature to identify interaction traits that can be used to assess the quality of AI-clinician interactions. Methods: We performed a systematic review of studies published through June 2022 that reported elements of interactions affecting the relationship between clinicians and AI-enabled clinical decision support systems. Due to study heterogeneity, we conducted a narrative synthesis of the interaction traits identified in this review. Two study authors independently categorised the AI-clinician interaction traits based on their shared constructs, then met to finalise the categories. Results: From 34 included studies, we identified 210 interaction traits. The most common were usefulness, ease of use, trust, satisfaction, willingness to use, and usability. After removing duplicate or redundant traits, 90 unique interaction traits remained. These were classified into seven categories: usability and user experience, system performance, clinician trust and acceptance, impact on patient care, communication, ethical and professional concerns, and clinician engagement and workflow. Discussion: We identified seven categories of interaction traits between clinicians and AI systems. The proposed categories may serve as the foundation for a framework assessing the quality of AI-clinician interactions.
Affiliation(s)
- Argyrios Perivolaris
- Institute of Medical Sciences, University of Toronto, Canada
- St. Michael's Hospital, Unity Health Toronto, Canada
- Chris Adams-McGavin
- Department of Surgery, Temerty Faculty of Medicine, University of Toronto, Canada
- Yasmine Madan
- Department of Health Sciences, McMaster University, Canada
- Tony Antoniou
- Department of Family and Community Medicine, St. Michael's Hospital, Canada
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Canada
- Department of Family and Community Medicine, University of Toronto, Canada
- Muhammad Mamdani
- St. Michael's Hospital, Unity Health Toronto, Canada
- Leslie Dan Faculty of Pharmacy, Temerty Faculty of Medicine, University of Toronto, Canada
- Dalla Lana School of Public Health, University of Toronto, Canada
- James J Jung
- Institute of Medical Sciences, University of Toronto, Canada
- St. Michael's Hospital, Unity Health Toronto, Canada
- Department of Surgery, Temerty Faculty of Medicine, University of Toronto, Canada
2. Alhuwaydi AM. Exploring the Role of Artificial Intelligence in Mental Healthcare: Current Trends and Future Directions - A Narrative Review for a Comprehensive Insight. Risk Manag Healthc Policy 2024;17:1339-1348. PMID: 38799612; PMCID: PMC11127648; DOI: 10.2147/rmhp.s461562.
Abstract
Mental health is an essential component of the health and well-being of individuals and communities, and it is critical to the social and socio-economic development of any country. Mental healthcare is in an era of transformation, with emerging technologies such as artificial intelligence (AI) reshaping the screening, diagnosis, and treatment of psychiatric illness. This narrative review discusses the current landscape and the role of AI in mental healthcare, including screening, diagnosis, and treatment, and highlights key challenges, limitations, and prospects of AI-supported mental healthcare based on the existing literature. The literature search was conducted in PubMed, the Saudi Digital Library (SDL), Google Scholar, Web of Science, and IEEE Xplore, and included only English-language articles published in the last five years. Keywords used in combination with the Boolean operators "AND" and "OR" were: "artificial intelligence", "machine learning", "deep learning", "early diagnosis", "treatment", "interventions", "ethical consideration", and "mental healthcare". Our review revealed that, equipped with predictive analytics capabilities, AI can improve treatment planning by predicting an individual's response to various interventions. Predictive analytics, which uses historical data to formulate preventative interventions, aligns with the move toward individualized and preventive mental healthcare. In the screening and diagnostic domains, AI subfields such as machine learning and deep learning have been shown to analyze varied mental health data sets and detect patterns associated with mental health problems.
However, few studies have evaluated collaboration between healthcare professionals and AI in delivering mental healthcare, even though these sensitive problems require empathy, human connection, and holistic, personalized, multidisciplinary approaches, making it imperative to explore this aspect. Ethical issues, cybersecurity, a lack of diversity in data analytics, cultural sensitivity, and language barriers remain concerns for implementing this approach in mental healthcare. Future comparative trials with larger sample sizes and data sets are therefore warranted to evaluate different AI models used in mental healthcare across regions and to fill the existing knowledge gaps.
Affiliation(s)
- Ahmed M Alhuwaydi
- Department of Internal Medicine, Division of Psychiatry, College of Medicine, Jouf University, Sakaka, Saudi Arabia
3. Kuo PB, Tanana MJ, Goldberg SB, Caperton DD, Narayanan S, Atkins DC, Imel ZE. Machine-Learning-Based Prediction of Client Distress From Session Recordings. Clin Psychol Sci 2024;12:435-446. PMID: 39104662; PMCID: PMC11299859; DOI: 10.1177/21677026231172694.
Abstract
Natural language processing (NLP) is a subfield of machine learning that may facilitate large-scale evaluation of therapist-client interactions and provide feedback to therapists on client outcomes. However, few studies applying NLP models to client outcome prediction have (a) used transcripts of therapist-client interactions as direct predictors of client symptom improvement, (b) accounted for contextual linguistic complexities, and (c) followed best practices for training and test splits in model development. Using 2,630 session recordings from 795 clients and 56 therapists, we developed NLP models that directly predicted client symptoms for a given session from recordings of the previous session (Spearman's rho = 0.32, p < .001). Our results highlight the potential for NLP models to be implemented in outcome-monitoring systems to improve quality of care. We discuss implications for future research and applications.
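The evaluation pattern described in this abstract can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in (a TF-IDF bag-of-words with ridge regression and invented transcripts, not the authors' model or data); it shows only the general shape of predicting a next-session symptom score from the previous session's transcript and scoring rank-order agreement with Spearman's rho on a held-out split.

```python
# Sketch: predict next-session symptom scores from previous-session
# transcript text, then evaluate with Spearman's rho on held-out sessions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from scipy.stats import spearmanr

# Invented toy data: transcript of session t, symptom score at session t+1.
transcripts = [
    "i felt hopeless most of the week and could not sleep",
    "work was stressful but i used the breathing exercise",
    "things are looking up and i met friends twice",
    "i keep worrying about everything and it never stops",
    "a calm week overall and my mood was steady",
    "i cried a lot and skipped two days of work",
    "the new routine helps and i feel more in control",
    "panic came back on the train and it was frightening",
]
next_session_scores = [18, 9, 4, 16, 5, 19, 6, 15]  # e.g. PHQ-9-like totals

# Classical train/test split: the test sessions are never seen in training.
X_train, X_test, y_train, y_test = train_test_split(
    transcripts, next_session_scores, test_size=0.5, random_state=0
)

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
model = Ridge(alpha=1.0)
model.fit(vectorizer.fit_transform(X_train), y_train)
predictions = model.predict(vectorizer.transform(X_test))

# Rank-order agreement between predicted and observed symptom scores.
rho, p = spearmanr(y_test, predictions)
print(f"Spearman's rho on held-out sessions: {rho:.2f}")
```

With a corpus this small the correlation is meaningless; the point is only the train/test discipline and the evaluation metric the study reports.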
4. Kilbourne A, Chinman M, Rogal S, Almirall D. Adaptive Designs in Implementation Science and Practice: Their Promise and the Need for Greater Understanding and Improved Communication. Annu Rev Public Health 2024;45:69-88. PMID: 37931183; PMCID: PMC11070446; DOI: 10.1146/annurev-publhealth-060222-014438.
Abstract
The promise of adaptation and adaptive designs in implementation science has been hindered by a lack of clarity and precision in defining what it means to adapt, especially regarding the distinction between adaptive study designs and adaptive implementation strategies. To ensure a common language for science and practice, the authors reviewed the implementation science literature and found that the term adaptive was used to describe interventions, implementation strategies, and trial designs. To provide clarity and offer recommendations for reporting and strengthening study design, we propose a taxonomy that describes fixed versus adaptive implementation strategies and implementation trial designs. To improve impact, (a) future implementation studies should prespecify implementation strategy core functions that can, in turn, be taught to and replicated by health system/community partners; (b) funders should support exploratory studies that refine and specify implementation strategies; and (c) investigators should systematically address design requirements and ethical considerations (e.g., randomization, blinding/masking) with health system/community partners.
Affiliation(s)
- Amy Kilbourne
- Quality Enhancement Research Initiative, U.S. Department of Veterans Affairs, Washington, District of Columbia, USA
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan, USA
- Matthew Chinman
- RAND Corporation, Pittsburgh, Pennsylvania, USA
- Center for Health Equity Research and Promotion, VA Pittsburgh Healthcare System, Pittsburgh, Pennsylvania, USA
- Mental Illness Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, Pittsburgh, Pennsylvania, USA
- Shari Rogal
- Center for Health Equity Research and Promotion, VA Pittsburgh Healthcare System, Pittsburgh, Pennsylvania, USA
- Departments of Medicine and Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Daniel Almirall
- Institute for Social Research and Department of Statistics, University of Michigan, Ann Arbor, Michigan, USA
5. Stade EC, Stirman SW, Ungar LH, Boland CL, Schwartz HA, Yaden DB, Sedoc J, DeRubeis RJ, Willer R, Eichstaedt JC. Large language models could change the future of behavioral healthcare: a proposal for responsible development and evaluation. NPJ Mental Health Research 2024;3:12. PMID: 38609507; PMCID: PMC10987499; DOI: 10.1038/s44184-024-00056-z.
Abstract
Large language models (LLMs) such as OpenAI's GPT-4 (which powers ChatGPT) and Google's Gemini, built on artificial intelligence, hold immense potential to support, augment, or even eventually automate psychotherapy. Enthusiasm about such applications is mounting in the field as well as in industry. These developments promise to address the insufficient capacity of the mental healthcare system and to scale individual access to personalized treatments. However, clinical psychology is an uncommonly high-stakes application domain for AI systems, because responsible and evidence-based therapy requires nuanced expertise. This paper provides a roadmap for the ambitious yet responsible application of clinical LLMs in psychotherapy. First, a technical overview of clinical LLMs is presented. Second, the stages of integrating LLMs into psychotherapy are discussed, highlighting parallels to the development of autonomous vehicle technology. Third, potential applications of LLMs in clinical care, training, and research are discussed, noting areas of risk given the complex nature of psychotherapy. Fourth, recommendations for the responsible development and evaluation of clinical LLMs are provided, including centering clinical science, fostering robust interdisciplinary collaboration, and attending to issues such as assessment, risk detection, transparency, and bias. Lastly, a vision is outlined for how LLMs might enable a new generation of studies of evidence-based interventions at scale, and how these studies may challenge assumptions about psychotherapy.
Affiliation(s)
- Elizabeth C Stade
- Dissemination and Training Division, National Center for PTSD, VA Palo Alto Health Care System, Palo Alto, CA, USA
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Institute for Human-Centered Artificial Intelligence & Department of Psychology, Stanford University, Stanford, CA, USA
- Shannon Wiltsey Stirman
- Dissemination and Training Division, National Center for PTSD, VA Palo Alto Health Care System, Palo Alto, CA, USA
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Lyle H Ungar
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
- Cody L Boland
- Dissemination and Training Division, National Center for PTSD, VA Palo Alto Health Care System, Palo Alto, CA, USA
- H Andrew Schwartz
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- David B Yaden
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- João Sedoc
- Department of Technology, Operations, and Statistics, New York University, New York, NY, USA
- Robert J DeRubeis
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Robb Willer
- Department of Sociology, Stanford University, Stanford, CA, USA
- Johannes C Eichstaedt
- Institute for Human-Centered Artificial Intelligence & Department of Psychology, Stanford University, Stanford, CA, USA
6. Leung YW, Ng S, Duan L, Lam C, Chan K, Gancarz M, Rennie H, Trachtenberg L, Chan KP, Adikari A, Fang L, Gratzer D, Hirst G, Wong J, Esplen MJ. Therapist Feedback and Implications on Adoption of an Artificial Intelligence-Based Co-Facilitator for Online Cancer Support Groups: Mixed Methods Single-Arm Usability Study. JMIR Cancer 2023;9:e40113. PMID: 37294610; PMCID: PMC10334721; DOI: 10.2196/40113.
Abstract
BACKGROUND: The COVID-19 pandemic and its social distancing requirements created increased demand for virtual support programs. Advances in artificial intelligence (AI) may offer novel solutions to management challenges such as the lack of emotional connection within virtual group interventions. Using typed text from online support groups, AI can help identify potential mental health concerns, alert group facilitators, and automatically recommend tailored resources while monitoring patient outcomes. OBJECTIVE: The aim of this mixed methods, single-arm study was to evaluate the feasibility, acceptability, validity, and reliability of an AI-based co-facilitator (AICF) among CancerChatCanada therapists and participants, with AICF monitoring participants' distress through real-time analysis of texts posted during support group sessions. Specifically, AICF (1) generated participant profiles with discussion topic summaries and emotion trajectories for each session, (2) identified participants at risk of increased emotional distress and alerted the therapist for follow-up, and (3) automatically suggested tailored recommendations based on participant needs. Participants were patients with various types of cancer, and the therapists were clinically trained social workers. METHODS: We report a mixed methods evaluation of AICF, comprising therapists' opinions as well as quantitative measures. AICF's ability to detect distress was evaluated against patients' real-time emoji check-ins, the Linguistic Inquiry and Word Count software, and the Impact of Event Scale-Revised. RESULTS: Although the quantitative results showed only partial validity of AICF's distress detection, the qualitative results showed that AICF detected real-time issues amenable to treatment, allowing therapists to be more proactive in supporting every group member individually.
However, therapists expressed concern about the ethical liability of AICF's distress detection function. CONCLUSIONS: Future work will investigate wearable sensors and facial cues via videoconferencing to overcome the barriers of text-based online support groups. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/21453.
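AICF's actual models are not public, so the alerting pattern this abstract describes (score each posted message for distress, flag participants who cross a threshold for therapist follow-up) can only be sketched. The toy version below uses an invented word list and threshold as stand-ins for LIWC-style emotion categories; it is illustrative only, not the study's system.

```python
# Toy lexicon-based distress monitor for group-chat messages.
DISTRESS_WORDS = {"hopeless", "scared", "alone", "pain", "worse", "afraid", "crying"}
ALERT_THRESHOLD = 0.2  # fraction of distress words per message (assumed value)

def distress_score(message: str) -> float:
    """Fraction of tokens in the message that appear in the distress lexicon."""
    tokens = [t.strip(".,!?").lower() for t in message.split()]
    if not tokens:
        return 0.0
    return sum(t in DISTRESS_WORDS for t in tokens) / len(tokens)

def flag_for_followup(session_log: list) -> list:
    """Return participants with any message at or above the alert threshold."""
    flagged = []
    for participant, message in session_log:
        if distress_score(message) >= ALERT_THRESHOLD and participant not in flagged:
            flagged.append(participant)
    return flagged

log = [
    ("P1", "I feel hopeless and alone tonight."),
    ("P2", "The new medication schedule is going fine."),
    ("P3", "Honestly the pain is worse and I am scared."),
]
print(flag_for_followup(log))  # → ['P1', 'P3']
```

A production system would replace the lexicon with validated emotion models and route alerts to the facilitator's dashboard rather than printing them; the threshold here is arbitrary.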
Affiliation(s)
- Yvonne W Leung
- de Souza Institute, University Health Network, Toronto, ON, Canada
- Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- College of Professional Studies, Northeastern University, Toronto, ON, Canada
- Steve Ng
- de Souza Institute, University Health Network, Toronto, ON, Canada
- Lauren Duan
- de Souza Institute, University Health Network, Toronto, ON, Canada
- Claire Lam
- de Souza Institute, University Health Network, Toronto, ON, Canada
- Kenith Chan
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Mathew Gancarz
- de Souza Institute, University Health Network, Toronto, ON, Canada
- Heather Rennie
- de Souza Institute, University Health Network, Toronto, ON, Canada
- BC Cancer Agency, Vancouver, BC, Canada
- Lianne Trachtenberg
- de Souza Institute, University Health Network, Toronto, ON, Canada
- Centre for Psychology and Emotional Health, Toronto, ON, Canada
- Kai P Chan
- de Souza Institute, University Health Network, Toronto, ON, Canada
- Achini Adikari
- Centre for Data Analytics and Cognition, La Trobe University, Melbourne, Australia
- Lin Fang
- Factor-Inwentash Faculty of Social Work, University of Toronto, Toronto, ON, Canada
- David Gratzer
- Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Centre for Addiction and Mental Health, Toronto, ON, Canada
- Graeme Hirst
- Department of Computer Science, University of Toronto, Toronto, ON, Canada
- Jiahui Wong
- de Souza Institute, University Health Network, Toronto, ON, Canada
- Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Mary Jane Esplen
- Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
7. Choy-Brown M, Williams NJ, Ramirez N, Esp S. Psychometric evaluation of a pragmatic measure of clinical supervision as an implementation strategy. Implement Sci Commun 2023;4:39. PMID: 37024945; PMCID: PMC10080877; DOI: 10.1186/s43058-023-00419-1.
Abstract
BACKGROUND: Valid and reliable measurement of implementation strategies is essential to advancing implementation science; however, this area lags behind the measurement of implementation outcomes and determinants. Clinical supervision is a promising and highly feasible implementation strategy in behavioral healthcare for which pragmatic measures are lacking. This research aimed to develop and psychometrically evaluate a pragmatic measure of clinical supervision, conceptualized in terms of two broadly applicable, discrete supervision techniques shown to improve providers' implementation of evidence-based psychosocial interventions: (1) audit and feedback and (2) active learning. METHODS: Items were generated based on a systematic review of the literature and administered to a sample of 154 outpatient mental health clinicians serving youth and 181 community-based mental health providers serving adults. Scores were evaluated for evidence of reliability, structural validity, construct-related validity, and measurement invariance across the two samples. RESULTS: In sample 1, confirmatory factor analysis (CFA) supported the hypothesized two-factor structure of scores on the Evidence-Based Clinical Supervision Strategies (EBCSS) scale (χ² = 5.89, df = 4, p = .208; RMSEA = .055; CFI = .988; SRMR = .033). In sample 2, CFA replicated the EBCSS factor structure and provided discriminant validity evidence relative to an established supervisory alliance measure (χ² = 36.12, df = 30, p = .204; RMSEA = .034; CFI = .990; SRMR = .031). Construct-related validity evidence was provided by theoretically concordant associations between EBCSS subscale scores and agency climate for evidence-based practice implementation in sample 1 (d = .47 and .55), as well as measures of the supervision process in sample 2. Multiple-group CFA supported configural, metric, and partial scalar invariance of EBCSS scores across the two samples.
CONCLUSIONS: Scores on the EBCSS provide a valid basis for inferences regarding the extent to which behavioral health providers experience audit and feedback and active learning as part of their clinical supervision in both clinic- and community-based behavioral health settings. TRIAL REGISTRATION: ClinicalTrials.gov NCT04096274. Registered on 19 September 2019.
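As a small, hedged illustration of the simplest psychometric check this abstract reports (score reliability), the sketch below computes Cronbach's alpha for a subscale from an item-response matrix. The ratings are synthetic and invented for the example, not EBCSS data, and the full CFA and invariance analyses in the paper would require a structural equation modeling package.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic ratings: 6 providers answering 4 audit-and-feedback items (1-5 scale),
# generated around a shared latent level so the items inter-correlate.
rng = np.random.default_rng(0)
true_level = rng.integers(1, 6, size=(6, 1))
ratings = np.clip(true_level + rng.integers(-1, 2, size=(6, 4)), 1, 5)

alpha = cronbach_alpha(ratings.astype(float))
print(f"Cronbach's alpha: {alpha:.2f}")
```

Because the synthetic items share a latent level, alpha comes out high; with real subscale data one would report this alongside the factor-analytic evidence, not in place of it.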
Affiliation(s)
- Mimi Choy-Brown
- University of Minnesota, Twin Cities, 1404 Gortner Avenue, St. Paul, MN 55108 USA
- Nathaniel J. Williams
- Boise State University, 1910 University Drive, Education Suite 717, Boise, ID 83725-1940 USA
- Nallely Ramirez
- Boise State University, 1910 University Drive, Education Suite 717, Boise, ID 83725-1940 USA
- Susan Esp
- Boise State University, 1910 University Drive, Education Suite 717, Boise, ID 83725-1940 USA
8. Creed TA, Salama L, Slevin R, Tanana M, Imel Z, Narayanan S, Atkins DC. Enhancing the quality of cognitive behavioral therapy in community mental health through artificial intelligence generated fidelity feedback (Project AFFECT): a study protocol. BMC Health Serv Res 2022;22:1177. PMID: 36127689; PMCID: PMC9487132; DOI: 10.1186/s12913-022-08519-9.
Abstract
BACKGROUND: Each year, millions of Americans receive evidence-based psychotherapies (EBPs) like cognitive behavioral therapy (CBT) for the treatment of mental and behavioral health problems. Yet there is currently no scalable method for evaluating the quality of psychotherapy services, leaving EBP quality and effectiveness largely unmeasured and unknown. Project AFFECT will develop and evaluate an AI-based software system to automatically estimate CBT fidelity from a recording of a CBT session. Project AFFECT is an NIMH-funded research partnership between the Penn Collaborative for CBT and Implementation Science and Lyssn.io, Inc. ("Lyssn"), a start-up developing AI-based technologies that are objective, scalable, and cost-efficient, to support training, supervision, and quality assurance of EBPs. Lyssn provides HIPAA-compliant, cloud-based software for secure recording, sharing, and review of therapy sessions, which includes AI-generated metrics for CBT. The proposed tool will build on and be integrated into this core platform. METHODS: Phase I will work from an existing software prototype to develop a LyssnCBT user interface geared to the needs of community mental health (CMH) agencies. Core activities include a user-centered design focus group and interviews with CMH therapists, supervisors, and administrators to inform the design and development of LyssnCBT, which will be evaluated for usability and implementation readiness in a final stage of Phase I. Phase II will conduct a stepped-wedge, hybrid implementation-effectiveness randomized trial (N = 1,875 clients) to evaluate the effectiveness of LyssnCBT in improving therapist CBT skills and client outcomes and reducing client drop-out. Analyses will also examine the hypothesized mechanism of action underlying LyssnCBT.
DISCUSSION: Successful execution will provide automated, scalable CBT fidelity feedback for the first time, supporting high-quality training, supervision, and quality assurance, and providing a core technology foundation that could support quality delivery of a range of EBPs in the future. TRIAL REGISTRATION: ClinicalTrials.gov NCT05340738; approved 4/21/2022.
Affiliation(s)
- Torrey A Creed
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA
- Lyssn.io, Inc, Seattle, USA
- Leah Salama
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA
- Zac Imel
- Lyssn.io, Inc, Seattle, USA
- Department of Educational Psychology, University of Utah, Salt Lake City, USA
- Shrikanth Narayanan
- Lyssn.io, Inc, Seattle, USA
- Viterbi School of Engineering, University of Southern California, Los Angeles, USA
- David C Atkins
- Lyssn.io, Inc, Seattle, USA
- Department of Psychiatry and Behavioral Sciences, University of Washington School of Medicine, Seattle, USA
9. Becker-Haimes EM, Klein CC, Frank HE, Oquendo MA, Jager-Hyman S, Brown GK, Brady M, Barnett ML. Clinician Maladaptive Anxious Avoidance in the Context of Implementation of Evidence-Based Interventions: A Commentary. Frontiers in Health Services 2022;2:833214. PMID: 36382152; PMCID: PMC9648711; DOI: 10.3389/frhs.2022.833214.
Abstract
This paper posits that clinicians' own anxious reactions to delivering specific evidence-based interventions (EBIs) should be better accounted for within implementation science frameworks. A key next step for implementation science is to delineate the causal processes most likely to influence successful EBI implementation. This is critical for developing tailored implementation strategies that specifically target the mechanisms by which implementation succeeds or fails. First, we review the literature on specific EBIs that may act as negatively valenced stimuli for clinicians, prompting a process of maladaptive anxious avoidance that can undermine EBI delivery. We then argue that certain EBIs can cause emotional distress or discomfort in a clinician, related to either (1) the clinician's fear of the real or predicted short-term distress the EBI can cause patients, or (2) fear that the clinician will inadvertently harm the patient and/or face liability. This distress can perpetuate a cycle of maladaptive anxious avoidance by the clinician, contributing to absent or suboptimal EBI implementation. We illustrate how this cycle can influence implementation with several examples of leading EBIs from the psychosocial literature. To conclude, we discuss how decades of treatment literature on mitigating maladaptive anxious avoidance can inform the design of more tailored and effective implementation strategies for negatively valenced EBIs.
Affiliation(s)
- Emily M. Becker-Haimes
- Department of Psychiatry, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Hall Mercer Community Mental Health, University of Pennsylvania Health System, Philadelphia, PA, United States
- Corinna C. Klein
- Department of Counseling, Clinical, and School Psychology, University of California, Santa Barbara, Santa Barbara, CA, United States
- Hannah E. Frank
- Department of Psychiatry and Human Behavior, The Warren Alpert Medical School of Brown University, Providence, RI, United States
- Bradley Hospital, Lifespan Health System, Riverside, RI, United States
- Maria A. Oquendo
- Department of Psychiatry, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Shari Jager-Hyman
- Department of Psychiatry, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Gregory K. Brown
- Department of Psychiatry, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Megan Brady
- Department of Psychiatry, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Miya L. Barnett
- Department of Counseling, Clinical, and School Psychology, University of California, Santa Barbara, Santa Barbara, CA, United States