1
Berkel C, Knox DC, Flemotomos N, Martinez VR, Atkins DC, Narayanan SS, Rodriguez LA, Gallo CG, Smith JD. A machine learning approach to improve implementation monitoring of family-based preventive interventions in primary care. Implement Res Pract 2023;4:26334895231187906. PMID: 37790171; PMCID: PMC10375039; DOI: 10.1177/26334895231187906.
Abstract
Background: Evidence-based parenting programs effectively prevent the onset and escalation of child and adolescent behavioral health problems. When programs have been taken to scale, declines in the quality of implementation diminish intervention effects. Gold-standard methods of implementation monitoring are cost-prohibitive and impractical in resource-scarce delivery systems. Technological developments using computational linguistics and machine learning offer an opportunity to assess fidelity in a low-burden, timely, and comprehensive manner.
Methods: In this study, we test two natural language processing (NLP) methods [Term Frequency-Inverse Document Frequency (TF-IDF) and Bidirectional Encoder Representations from Transformers (BERT)] to assess the delivery of the Family Check-Up 4 Health (FCU4Health) program in a type 2 hybrid effectiveness-implementation trial conducted in primary care settings that serve primarily Latino families. We trained and evaluated models using 116 English and 81 Spanish-language transcripts from the 113 families who initiated FCU4Health services. We evaluated the concurrent validity of the TF-IDF and BERT models against observer ratings of program sessions made with the COACH measure of competent adherence. Following the Implementation Cascade model, we assessed predictive validity using multiple indicators of parent engagement, which have been demonstrated to predict improvements in parenting and child outcomes.
Results: Both TF-IDF and BERT ratings were significantly associated with observer ratings and engagement outcomes. Measured by mean squared error, predictions of observer ratings improved over baseline from a range of 0.83-1.02 to 0.62-0.76, an average improvement of 24%. Similarly, predictions of parent engagement indicators improved over baseline from a range of 0.81-27.3 to 0.62-19.50, an average improvement of approximately 18%.
Conclusions: These results demonstrate the potential for NLP methods to assess implementation in evidence-based parenting programs delivered at scale. Future directions are presented.
Trial registration: ClinicalTrials.gov NCT03013309.
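The TF-IDF side of the comparison above can be sketched with standard tooling. The following is a minimal illustration using scikit-learn ridge regression over TF-IDF features; the transcripts and COACH-style ratings are invented placeholders, not the study's data or model.

```python
# Minimal sketch: predicting observer-rated fidelity from session
# transcripts with TF-IDF features and ridge regression.
# Transcripts and ratings below are toy placeholders, not study data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

transcripts = [
    "let's talk about how bedtime routines went this week",
    "you did a great job praising her when she followed directions",
    "tell me more about what happens before the tantrums start",
    "we reviewed the homework and set a goal for next session",
]
observer_ratings = [3.0, 4.0, 3.5, 2.5]  # invented COACH-style scores

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(transcripts, observer_ratings)

# Score a new (toy) session transcript.
pred = model.predict(["we practiced praise and set a bedtime goal together"])[0]
print(round(pred, 2))
```

In practice such a model would be trained per language (the study used both English and Spanish transcripts) and validated against held-out observer ratings rather than scored on its training set.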
Affiliation(s)
- Cady Berkel
- College of Health Solutions, Arizona State University, Phoenix, AZ, USA
- Ming Hsieh Department of Electrical Engineering, USC Viterbi School of Engineering, REACH Institute, Arizona State University, Tempe, AZ, USA
- Dillon C. Knox
- Signal Analysis and Interpretation Laboratory, University of Southern California, Los Angeles, CA, USA
- Nikolaos Flemotomos
- Signal Analysis and Interpretation Laboratory, University of Southern California, Los Angeles, CA, USA
- Victor R. Martinez
- Signal Analysis and Interpretation Laboratory, University of Southern California, Los Angeles, CA, USA
- David C. Atkins
- Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, USA
- Shrikanth S. Narayanan
- Signal Analysis and Interpretation Laboratory, University of Southern California, Los Angeles, CA, USA
- Lizeth Alonso Rodriguez
- Ming Hsieh Department of Electrical Engineering, USC Viterbi School of Engineering, REACH Institute, Arizona State University, Tempe, AZ, USA
- Carlos G. Gallo
- Department of Psychiatry and Behavioral Sciences, Northwestern University, Chicago, IL, USA
- Justin D. Smith
- Department of Population Health Sciences, Spencer Fox Eccles School of Medicine, University of Utah, Salt Lake City, UT, USA
2
Flemotomos N, Martinez VR, Chen Z, Singla K, Ardulov V, Peri R, Caperton DD, Gibson J, Tanana MJ, Georgiou P, Van Epps J, Lord SP, Hirsch T, Imel ZE, Atkins DC, Narayanan S. Automated evaluation of psychotherapy skills using speech and language technologies. Behav Res Methods 2022;54:690-711. PMID: 34346043; PMCID: PMC8810915; DOI: 10.3758/s13428-021-01623-4.
Abstract
With the growing prevalence of psychological interventions, it is vital to have measures which rate the effectiveness of psychological care to assist in training, supervision, and quality assurance of services. Traditionally, quality assessment is addressed by human raters who evaluate recorded sessions along specific dimensions, often codified through constructs relevant to the approach and domain. This is, however, a cost-prohibitive and time-consuming method that leads to poor feasibility and limited use in real-world settings. To facilitate this process, we have developed an automated competency rating tool able to process the raw recorded audio of a session, analyzing who spoke when, what they said, and how the health professional used language to provide therapy. Focusing on a use case of a specific type of psychotherapy called "motivational interviewing", our system gives comprehensive feedback to the therapist, including information about the dynamics of the session (e.g., therapist's vs. client's talking time), low-level psychological language descriptors (e.g., type of questions asked), as well as other high-level behavioral constructs (e.g., the extent to which the therapist understands the clients' perspective). We describe our platform and its performance using a dataset of more than 5000 recordings drawn from its deployment in a real-world clinical setting used to assist training of new therapists. Widespread use of automated psychotherapy rating tools may augment experts' capabilities by providing an avenue for more effective training and skill improvement, eventually leading to more positive clinical outcomes.
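The session-dynamics feedback described above (e.g., therapist's vs. client's talking time) reduces to simple aggregation once the audio has been diarized. A small sketch, assuming a simplified `(speaker, start, end)` segment format rather than the system's actual output:

```python
# Sketch: computing per-speaker talking-time shares from diarized
# speech segments, one of the session dynamics the feedback tool
# reports. The segment format is a simplified assumption.
def talk_time_ratio(segments):
    """segments: list of (speaker, start_sec, end_sec) tuples."""
    totals = {}
    for speaker, start, end in segments:
        totals[speaker] = totals.get(speaker, 0.0) + (end - start)
    session_total = sum(totals.values())
    return {spk: t / session_total for spk, t in totals.items()}

segments = [
    ("therapist", 0.0, 12.5),
    ("client", 12.5, 48.0),
    ("therapist", 48.0, 55.0),
    ("client", 55.0, 80.0),
]
shares = talk_time_ratio(segments)
print(shares)  # client share ≈ 0.76 in this toy example
```

A high client share is generally desirable in motivational interviewing, which is one reason this particular statistic is worth surfacing to trainees.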
Affiliation(s)
- Nikolaos Flemotomos
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Victor R Martinez
- Department of Computer Science, University of Southern California, Los Angeles, CA 90089, USA
- Zhuohao Chen
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Karan Singla
- Department of Computer Science, University of Southern California, Los Angeles, CA 90089, USA
- Victor Ardulov
- Department of Computer Science, University of Southern California, Los Angeles, CA 90089, USA
- Raghuveer Peri
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Derek D Caperton
- Department of Educational Psychology, University of Utah, Salt Lake City, UT, USA
- James Gibson
- Behavioral Signal Technologies Inc., Los Angeles, CA, USA
- Michael J Tanana
- College of Social Work, University of Utah, Salt Lake City, UT, USA
- Panayiotis Georgiou
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Jake Van Epps
- University Counseling Center, University of Utah, Salt Lake City, UT, USA
- Sarah P Lord
- Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, USA
- Tad Hirsch
- Department of Art + Design, Northeastern University, Boston, MA, USA
- Zac E Imel
- Department of Educational Psychology, University of Utah, Salt Lake City, UT, USA
- David C Atkins
- Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, USA
- Shrikanth Narayanan
- Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Department of Computer Science, University of Southern California, Los Angeles, CA 90089, USA
- Behavioral Signal Technologies Inc., Los Angeles, CA, USA
3
Flemotomos N, Martinez VR, Chen Z, Creed TA, Atkins DC, Narayanan S. Automated quality assessment of cognitive behavioral therapy sessions through highly contextualized language representations. PLoS One 2021;16:e0258639. PMID: 34679105; PMCID: PMC8535177; DOI: 10.1371/journal.pone.0258639.
Abstract
During a psychotherapy session, the counselor typically adopts techniques which are codified along specific dimensions (e.g., 'displays warmth and confidence', or 'attempts to set up collaboration') to facilitate the evaluation of the session. Those constructs, traditionally scored by trained human raters, reflect the complex nature of psychotherapy and highly depend on the context of the interaction. Recent advances in deep contextualized language models offer an avenue for accurate in-domain linguistic representations which can lead to robust recognition and scoring of such psychotherapy-relevant behavioral constructs, and support quality assurance and supervision. In this work, we propose a BERT-based model for automatic behavioral scoring of a specific type of psychotherapy, called Cognitive Behavioral Therapy (CBT), where prior work is limited to frequency-based language features and/or short text excerpts which do not capture the unique elements involved in a spontaneous long conversational interaction. The model focuses on the classification of therapy sessions with respect to the overall score achieved on the widely-used Cognitive Therapy Rating Scale (CTRS), but is trained in a multi-task manner in order to achieve higher interpretability. BERT-based representations are further augmented with available therapy metadata, providing relevant non-linguistic context and leading to consistent performance improvements. We train and evaluate our models on a set of 1,118 real-world therapy sessions, recorded and automatically transcribed. Our best model achieves an F1 score equal to 72.61% on the binary classification task of low vs. high total CTRS.
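The metadata-augmentation step described above can be sketched as late fusion: concatenating a session-level text representation with non-linguistic features before classification. In this minimal sketch the "embeddings" are random placeholders standing in for BERT outputs, and the labels are a toy target, not CTRS data.

```python
# Sketch: late fusion of a session-level text embedding with
# non-linguistic metadata features for binary classification
# (low vs. high total score). All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_sessions, embed_dim = 40, 16

text_embeddings = rng.normal(size=(n_sessions, embed_dim))  # stand-in for BERT
metadata = rng.normal(size=(n_sessions, 3))                 # e.g., session index, duration
labels = (text_embeddings[:, 0] + metadata[:, 0] > 0).astype(int)  # toy target

# Late fusion: concatenate text and metadata features, then classify.
features = np.hstack([text_embeddings, metadata])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.score(features, labels))
```

The design choice worth noting is that the metadata enters after the language model, so it can shift the decision without being forced through the text encoder.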
Affiliation(s)
- Nikolaos Flemotomos
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA
- Victor R. Martinez
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA
- Zhuohao Chen
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA
- Torrey A. Creed
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA, USA
- David C. Atkins
- Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, USA
- Shrikanth Narayanan
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA
4
Ardulov V, Martinez VR, Somandepalli K, Zheng S, Salzman E, Lord C, Bishop S, Narayanan S. Robust diagnostic classification via Q-learning. Sci Rep 2021;11:11730. PMID: 34083579; PMCID: PMC8175431; DOI: 10.1038/s41598-021-90000-4.
Abstract
Machine learning (ML) models have demonstrated the power of utilizing clinical instruments to provide tools that help domain experts gain additional insight into complex clinical diagnoses. In this context, two additional properties are desirable: interpretability, the ability to audit and understand the decision function, and robustness, the ability to assign the correct label in spite of missing or noisy inputs. This work formulates diagnostic classification as a decision-making process and utilizes Q-learning to build classifiers that meet these criteria. As an exemplary task, we simulate the process of differentiating Autism Spectrum Disorder from Attention-Deficit/Hyperactivity Disorder in verbal school-aged children. This application highlights how reinforcement learning frameworks can be used to train more robust classifiers by jointly learning to maximize diagnostic accuracy while minimizing the amount of information required.
Affiliation(s)
| | | | | | - Shuting Zheng
- University of California San Francisco, San Francisco, USA
| | - Emma Salzman
- University of California San Francisco, San Francisco, USA
| | | | - Somer Bishop
- University of California San Francisco, San Francisco, USA
| | | |
5
Goldberg SB, Flemotomos N, Martinez VR, Tanana MJ, Kuo PB, Pace BT, Villatte JL, Georgiou PG, Van Epps J, Imel ZE, Narayanan SS, Atkins DC. Machine learning and natural language processing in psychotherapy research: Alliance as example use case. J Couns Psychol 2020;67:438-448. PMID: 32614225; PMCID: PMC7393999; DOI: 10.1037/cou0000382.
Abstract
Artificial intelligence generally, and machine learning specifically, have become deeply woven into modern life and technology. Machine learning is dramatically changing scientific research and industry and may also hold promise for addressing limitations encountered in mental health care and psychotherapy. The current paper introduces machine learning and natural language processing as related methodologies that may prove valuable for automating the assessment of meaningful aspects of treatment. Prediction of therapeutic alliance from session recordings is used as a case in point. Recordings from 1,235 sessions of 386 clients seen by 40 therapists at a university counseling center were processed using automatic speech recognition software. Machine learning algorithms learned associations between client ratings of therapeutic alliance and session linguistic content alone. Using a portion of the data to train the model, the algorithms modestly predicted alliance ratings from session content in an independent test set (Spearman's ρ = .15, p < .001). These results highlight the potential to harness natural language processing and machine learning to predict a key psychotherapy process variable that is relatively distal from linguistic content. Six practical suggestions for conducting psychotherapy research using machine learning are presented, along with several directions for future research. Questions of dissemination and implementation may be particularly important to explore as machine learning improves in its ability to automate assessment of psychotherapy process and outcome.
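The evaluation metric reported above, Spearman's rank correlation between predicted and client-rated alliance, can be computed directly with SciPy. The ratings below are toy values, not the study's data.

```python
# Sketch: rank correlation between observed and model-predicted
# alliance ratings on a held-out set. Toy values only.
from scipy.stats import spearmanr

observed = [4.2, 3.8, 5.0, 2.9, 4.6, 3.1, 4.9, 3.5]
predicted = [4.0, 3.9, 4.7, 3.2, 4.1, 3.0, 4.8, 3.9]

rho, p_value = spearmanr(observed, predicted)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```

Rank correlation is a sensible choice here because alliance ratings are ordinal and often skewed, so monotonic agreement matters more than linear fit.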
6
Arevian AC, Bone D, Malandrakis N, Martinez VR, Wells KB, Miklowitz DJ, Narayanan S. Clinical state tracking in serious mental illness through computational analysis of speech. PLoS One 2020;15:e0225695. PMID: 31940347; PMCID: PMC6961853; DOI: 10.1371/journal.pone.0225695.
Abstract
Individuals with serious mental illness experience changes in their clinical states over time that are difficult to assess and that result in increased disease burden and care utilization. It is not known whether features derived from speech can serve as a transdiagnostic marker of these clinical states. This study evaluates the feasibility of collecting speech samples from people with serious mental illness and explores the potential utility for tracking changes in clinical state over time. Patients (n = 47) were recruited from a community-based mental health clinic with diagnoses of bipolar disorder, major depressive disorder, schizophrenia, or schizoaffective disorder. Patients used an interactive voice response system for at least 4 months to provide speech samples. Clinic providers (n = 13) reviewed responses and provided global assessment ratings. We computed features of speech and used machine learning to create models of outcome measures trained using either population data or an individual's own data over time. The system was feasible to use, recording 1,101 phone calls and 117 hours of speech. Most (92%) of the patients agreed that it was easy to use. The individually trained models demonstrated the highest correlation with provider ratings (rho = 0.78, p < 0.001). Population-level models demonstrated statistically significant correlations with provider global assessment ratings (rho = 0.44, p < 0.001), future provider ratings (rho = 0.33, p < 0.05), the BASIS-24 summary score, depression subscore, and self-harm subscore (rho = 0.25, 0.25, and 0.28, respectively; p < 0.05), and the SF-12 mental health subscore (rho = 0.25, p < 0.05), but not with other BASIS-24 or SF-12 subscores. This study brings together longitudinal collection of objective behavioral markers with a transdiagnostic, personalized approach for tracking mental health clinical state in a community-based clinical setting.
Affiliation(s)
- Armen C. Arevian
- Jane and Terry Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles, Los Angeles, CA, USA
- Daniel Bone
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA
- Nikolaos Malandrakis
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA
- Victor R. Martinez
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA
- Kenneth B. Wells
- Jane and Terry Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles, Los Angeles, CA, USA
- RAND Corporation, Santa Monica, CA, USA
- David J. Miklowitz
- Jane and Terry Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles, Los Angeles, CA, USA
- Shrikanth Narayanan
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA
7
Martinez VR, Flemotomos N, Ardulov V, Somandepalli K, Goldberg SB, Imel ZE, Atkins DC, Narayanan S. Identifying therapist and client personae for therapeutic alliance estimation. Interspeech 2019;2019:1901-1905. PMID: 36703954; PMCID: PMC9875729; DOI: 10.21437/interspeech.2019-2829.
Abstract
Psychotherapy, from a narrative perspective, is the process in which a client relates an ongoing life story to a therapist. In each session, a client recounts events from their life, some of which stand out as more significant than others. These significant stories can ultimately shape one's identity. In this work we study these narratives in the context of therapeutic alliance, a self-reported measure of the perceived bond shared between client and therapist. We propose that alliance can be predicted from the interactions between certain types of clients and certain types of therapists. To validate this method, we obtained 1,235 transcribed sessions with client-reported alliance and trained an unsupervised approach to discover groups of therapists and clients based on common types of narrative characters, or personae. We measure the strength of the relation between personae and alliance in two experiments. Our results show that (1) alliance can be explained by the interactions between the discovered character types, and (2) models trained on therapist and client personae achieve significant performance gains compared to competitive supervised baselines. Finally, exploratory analysis reveals important character traits that lead to an improved perception of alliance.
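The unsupervised discovery of persona groups can be loosely illustrated with a clustering sketch. The trait features below are invented placeholders, and k-means is used only as a generic stand-in for the paper's actual persona-induction method.

```python
# Sketch: grouping speakers into persona-like clusters from (made-up)
# character-trait feature vectors, in the spirit of unsupervised
# persona discovery. Synthetic data, illustrative method choice.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Toy trait vectors (e.g., warmth, directiveness, verbosity) for two
# loosely separated groups of speakers.
group_a = rng.normal(loc=[0.8, 0.2, 0.5], scale=0.1, size=(10, 3))
group_b = rng.normal(loc=[0.2, 0.8, 0.5], scale=0.1, size=(10, 3))
features = np.vstack([group_a, group_b])

personae = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(personae)
```

Downstream, cluster assignments like these could serve as categorical inputs to an alliance-prediction model, which is the interaction-between-types idea the abstract describes.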
Affiliation(s)
- Victor R. Martinez
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA
- Nikolaos Flemotomos
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA
- Victor Ardulov
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA
- Krishna Somandepalli
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA
- Simon B. Goldberg
- Department of Counseling Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Zac E. Imel
- Department of Educational Psychology, University of Utah, Salt Lake City, UT, USA
- David C. Atkins
- Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, USA
- Shrikanth Narayanan
- Signal Analysis and Interpretation Lab, University of Southern California, Los Angeles, CA, USA