1
Shaller D, Nembhard I, Matta S, Grob R, Lee Y, Warne E, Evans R, Dicello D, Colon M, Polanco A, Schlesinger M. Assessing an innovative method to promote learning from patient narratives: Findings from a field experiment in ambulatory care. Health Serv Res 2024; 59:e14245. PMID: 37845082; PMCID: PMC10915476; DOI: 10.1111/1475-6773.14245.
Abstract
OBJECTIVE To assess whether an online interactive report designed to facilitate interpretation of patients' narrative feedback produces change in ambulatory staff learning, behavior at the individual staff and practice level, and patient experience survey scores. DATA SOURCES AND SETTING We studied 22 ambulatory practice sites within an academic medical center using three primary data sources: 333 staff surveys; 20 in-depth interviews with practice leaders and staff; and 9551 modified CG-CAHPS patient experience surveys augmented by open-ended narrative elicitation questions. STUDY DESIGN We conducted a cluster quasi-experimental study, comparing 12 intervention and 10 control sites. At control sites, narratives were delivered free-form to site administrators via email; at intervention sites, narratives were delivered online with interactive tools for interpretation, accompanied by user training. We assessed control-versus-intervention site differences in learning, behavior, and patient experience scores. DATA COLLECTION Staff surveys and interviews were completed at intervention and control sites, 9 months after intervention launch. Patient surveys were collected beginning 4 months pre-launch through 9 months post-launch. We used control-versus-intervention and difference-in-difference analyses for survey data and thematic analysis for interview data. PRINCIPAL FINDINGS Interviews suggested that the interface facilitated narrative interpretation and use for improvement. Staff survey analyses indicated enhanced learning from narratives at intervention sites (29% over control sites' mean of 3.19 out of 5 across eight domains, p < 0.001) and greater behavior change at staff and practice levels (31% and 21% over control sites' means of 3.35 and 3.39, p < 0.001, respectively). 
Patient experience scores for interactions with office staff and wait time information increased significantly at intervention sites, compared to control sites (3.7% and 8.2%, respectively); however, provider listening scores declined 3.3%. CONCLUSIONS Patient narratives presented through structured feedback reporting methods can catalyze positive changes in staff learning, promote behavior change, and increase patient experience scores in domains of non-clinical interaction.
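The control-versus-intervention comparison above is a classic difference-in-differences design: the intervention effect is the coefficient on the treatment-by-period interaction. A minimal Python sketch on wholly simulated data (the site counts, scores, and effect size below are hypothetical, not from the study):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate patient-experience scores at intervention vs. control sites,
# before and after launch. All numbers here are illustrative only.
rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "intervention": rng.integers(0, 2, n),   # 1 = site received the interactive report
    "post": rng.integers(0, 2, n),           # 1 = survey collected after launch
})
df["score"] = (
    3.5
    + 0.05 * df["intervention"]                # baseline site difference
    + 0.02 * df["post"]                        # secular trend shared by all sites
    + 0.15 * df["intervention"] * df["post"]   # the simulated treatment effect
    + rng.normal(0, 0.3, n)
)

# The interaction coefficient estimates the difference-in-differences effect
res = smf.ols("score ~ intervention * post", data=df).fit()
print(res.params["intervention:post"])
```

With enough observations, the interaction estimate recovers the simulated +0.15 effect while the main effects absorb baseline differences and shared trends.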
Affiliation(s)
- Ingrid Nembhard
- Health Care Management Department, The Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Sasmira Matta
- Health Care Management Department, The Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Rachel Grob
- Center for Patient Partnerships, Department of Family Medicine and Community Health, University of Wisconsin, Madison, Wisconsin, USA
- Yuna Lee
- Department of Health Policy and Management, Mailman School of Public Health, Columbia University, New York, New York, USA
- Emily Warne
- Center for Patient Partnerships, Department of Family Medicine and Community Health, University of Wisconsin, Madison, Wisconsin, USA
- Maria Colon
- New York-Presbyterian Hospital, New York, New York, USA
- Mark Schlesinger
- Department of Health Policy and Management, School of Public Health, Yale University, New Haven, Connecticut, USA
2
Abrams MP, Merchant RM, Meisel ZF, Pelullo AP, Chandra Guntuku S, Agarwal AK. Association Between Online Reviews of Substance Use Disorder Treatment Facilities and Drug-Induced Mortality Rates: Cross-Sectional Analysis. JMIR AI 2023; 2:e46317. PMID: 38875553; PMCID: PMC11041514; DOI: 10.2196/46317.
Abstract
BACKGROUND Drug-induced mortality across the United States has continued to rise. To date, there are limited measures to evaluate patient preferences and priorities regarding substance use disorder (SUD) treatment, and many patients do not have access to evidence-based treatment options. Patients and their families seeking SUD treatment may begin their search for an SUD treatment facility online, where they can find information about individual facilities, as well as a summary of patient-generated web-based reviews via popular platforms such as Google or Yelp. Web-based reviews of health care facilities may reflect information about factors associated with positive or negative patient satisfaction. The association between patient satisfaction with SUD treatment and drug-induced mortality is not well understood. OBJECTIVE The objective of this study was to examine the association between online review content of SUD treatment facilities and drug-induced state mortality. METHODS A cross-sectional analysis of online reviews and ratings of Substance Abuse and Mental Health Services Administration (SAMHSA)-designated SUD treatment facilities listed between September 2005 and October 2021 was conducted. The primary outcomes were (1) mean online rating of SUD treatment facilities from 1 star (worst) to 5 stars (best) and (2) average drug-induced mortality rates from the Centers for Disease Control and Prevention (CDC) WONDER Database (2006-2019). Clusters of words with differential frequencies within reviews were identified. A 3-level linear model was used to estimate the association between online review ratings and drug-induced mortality. RESULTS A total of 589 SAMHSA-designated facilities (n=9597 reviews) were included in this study. Drug-induced mortality was compared with the average. 
Approximately half (24/47, 51%) of states had below average ("low") mortality rates (mean 13.40, SD 2.45 deaths per 100,000 people), and half (23/47, 49%) had above average ("high") drug-induced mortality rates (mean 21.92, SD 3.69 deaths per 100,000 people). The top 5 themes associated with low drug-induced mortality included detoxification and addiction rehabilitation services (r=0.26), gratitude for recovery (r=-0.25), thankful for treatment (r=-0.32), caring staff and amazing experience (r=-0.23), and individualized recovery programs (r=-0.20). The top 5 themes associated with high mortality were care from doctors or providers (r=0.24), rude and insensitive care (r=0.23), medication and prescriptions (r=0.22), front desk and reception experience (r=0.22), and dissatisfaction with communication (r=0.21). In the multilevel linear model, a state with a 10 deaths per 100,000 people increase in mortality was associated with a 0.30 lower average Yelp rating (P=.005). CONCLUSIONS Lower online ratings of SUD treatment facilities were associated with higher drug-induced mortality at the state level. Elements of patient experience may be associated with state-level mortality. Identified themes from online, organically derived patient content can inform efforts to improve high-quality and patient-centered SUD care.
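The multilevel model described above nests reviews within facilities within states. A simplified two-level analogue (random facility intercepts, state mortality as the predictor) can be sketched in Python; the data, group sizes, and slope below are simulated stand-ins, not the study's values:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate review ratings nested within facilities, where each facility's
# state has some drug-induced mortality rate. Illustrative data only; the
# paper fit a 3-level model (reviews, facilities, states).
rng = np.random.default_rng(7)
n_fac, per_fac = 60, 15
fac = np.repeat(np.arange(n_fac), per_fac)
mortality = np.repeat(rng.uniform(10, 25, n_fac), per_fac)  # deaths per 100,000
fac_eff = np.repeat(rng.normal(0, 0.2, n_fac), per_fac)     # facility random effect
rating = 4.5 - 0.03 * mortality + fac_eff + rng.normal(0, 0.4, n_fac * per_fac)
df = pd.DataFrame({"facility": fac, "mortality": mortality,
                   "rating": np.clip(rating, 1, 5)})

# Random-intercept model: does higher state mortality predict lower ratings?
m = smf.mixedlm("rating ~ mortality", df, groups=df["facility"]).fit()
print(m.params["mortality"])
```

The fixed-effect slope on `mortality` corresponds to the study's reported direction: higher state mortality, lower average rating.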
Affiliation(s)
- Matthew P Abrams
- Center for Digital Health, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Center for Emergency Care Policy and Research, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Department of Psychiatry, University of California San Diego, San Diego, CA, United States
- Raina M Merchant
- Center for Digital Health, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Center for Emergency Care Policy and Research, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, United States
- Zachary F Meisel
- Center for Emergency Care Policy and Research, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, United States
- Penn Injury Science Center, University of Pennsylvania, Philadelphia, PA, United States
- Arthur P Pelullo
- Center for Digital Health, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Sharath Chandra Guntuku
- Center for Digital Health, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, United States
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, United States
- Anish K Agarwal
- Center for Digital Health, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Center for Emergency Care Policy and Research, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, United States
3
Mark E, Oswald M, Kundar P, Gulati M. Patient-Centered Insights and Biases Regarding Cardiologists Via Online Review Platform Analysis. J Am Heart Assoc 2023; 12:e027405. PMID: 36718881; PMCID: PMC9973653; DOI: 10.1161/jaha.122.027405.
Abstract
Background Online cardiologist reviews, such as those on the Yelp website, are a frequently used method for patients to find a cardiologist. It remains unknown how bias may influence such reviews. Our objectives for this study were to (1) determine which cardiologist- or practice-related factors influence the overall rating of cardiologists and patient satisfaction and (2) discover any associations between sex and race with the overall rating of cardiologists or with cardiologist- or practice-related factors. Methods and Results Cardiologist Yelp reviews from practices in the United States from 2007 to 2020 were analyzed. A total of 563 reviews were coded for positive and negative themes. Binary logistic regression was used to determine whether certain factors increased the likelihood of high ratings. Chi-squared tests were used to determine associations between sex and race with certain factors and overall cardiologist ratings. Cardiologists were more likely to receive higher ratings when reviewers noted the characteristics of competency/knowledge base and thoroughness, positive interactions with staff, and when the cardiologist's name was mentioned in the review. Negative interactions with staff were associated with lower ratings. Female cardiologists received lower ratings and more negative mentions of cardiologist-patient communication than expected. White and Black cardiologists received lower ratings than expected compared with other racial groups. Conclusions Patient-perceived cardiologist competency, thoroughness, and positive staff interactions were associated with positive reviews in online assessments. Sex and racial differences were also found. Further research must be done to confirm these findings and to understand the association of online reviews with clinical care and patient outcomes.
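The two analyses named above (binary logistic regression for the likelihood of a high rating, and chi-squared tests of independence) can be sketched as follows. All variables, codings, and effect sizes are hypothetical simulated data, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# Simulated review codings: does mentioning competency (vs. a negative staff
# interaction) change the odds of a 5-star rating? Illustrative only.
rng = np.random.default_rng(3)
n = 563
df = pd.DataFrame({
    "competency": rng.integers(0, 2, n),       # review mentions competency
    "staff_negative": rng.integers(0, 2, n),   # negative staff interaction
})
logit_p = 1 / (1 + np.exp(-(-0.5 + 1.2 * df.competency - 1.0 * df.staff_negative)))
df["high_rating"] = (rng.uniform(size=n) < logit_p).astype(int)

# Binary logistic regression: coefficients are log-odds of a high rating
res = smf.logit("high_rating ~ competency + staff_negative", df).fit(disp=0)
print(res.params["competency"], res.params["staff_negative"])

# Chi-squared test of independence (e.g., physician sex vs. rating level)
sex = rng.integers(0, 2, n)
chi2, p, dof, _ = chi2_contingency(pd.crosstab(sex, df["high_rating"]))
print(round(p, 3))
```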
Affiliation(s)
- Erica Mark
- School of Medicine, University of Virginia, Charlottesville, VA
- Martha Gulati
- Barbra Streisand Women's Heart Center, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA
4
Wang W, Luo J, Dugas M, Gao GG, Agarwal R, Werner RM. Recency of Online Physician Ratings. JAMA Intern Med 2022; 182:881-883. PMID: 35759272; PMCID: PMC9237799; DOI: 10.1001/jamainternmed.2022.2273.
Abstract
This cross-sectional study examines the association between older and more recent online physician ratings and the implications for optimizing the trade-off between reliability and incentives.
Affiliation(s)
- Weiguang Wang
- Simon Business School, University of Rochester, Rochester, New York
- Junjie Luo
- Center for Health Information and Decision Systems, University of Maryland, College Park
- Michelle Dugas
- Center for Health Information and Decision Systems, University of Maryland, College Park; now with The World Bank, Washington, DC
- Guodong Gordon Gao
- Center for Health Information and Decision Systems, University of Maryland, College Park
- Ritu Agarwal
- Center for Health Information and Decision Systems, University of Maryland, College Park
- Rachel M Werner
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia; Perelman School of Medicine, University of Pennsylvania, Philadelphia; Crescenz VA Medical Center, Philadelphia, Pennsylvania
5
Abstract
The use of artificial intelligence in healthcare has led to debates about the role of human clinicians in the increasingly technological contexts of medicine. Some researchers have argued that AI will augment the capacities of physicians and increase their availability to provide empathy and other uniquely human forms of care to their patients. The human vulnerabilities experienced in the healthcare context raise the stakes of new technologies such as AI, and the human dimensions of AI in healthcare have particular significance for research in the humanities. This article explains four key areas of concern relating to AI and the role that medical/health humanities research can play in addressing them: definition and regulation of "medical" versus "health" data and apps; social determinants of health; narrative medicine; and technological mediation of care. Issues include data privacy and trust, flawed datasets and algorithmic bias, racial discrimination, and the rhetoric of humanism and disability. Through a discussion of potential humanities contributions to these emerging intersections with AI, this article will suggest future scholarly directions for the field.
Affiliation(s)
- Kirsten Ostherr
- Medical Humanities Program and Department of English, Rice University, 6100 Main St., MS-30, Houston, TX, 77005, USA.
6
Bello C, Filipovic MG, Andereggen L, Heidegger T, Urman RD, Luedi MM. Building a well-balanced culture in the perioperative setting. Best Pract Res Clin Anaesthesiol 2022; 36:247-256. DOI: 10.1016/j.bpa.2022.05.003.
7
Seltzer EK, Guntuku SC, Lanza AL, Tufts C, Srinivas SK, Klinger EV, Asch DA, Fausti N, Ungar LH, Merchant RM. Patient Experience and Satisfaction in Online Reviews of Obstetric Care: Observational Study. JMIR Form Res 2022; 6:e28379. PMID: 35357310; PMCID: PMC9015735; DOI: 10.2196/28379.
Abstract
Background The quality of care in labor and delivery is traditionally measured through the Hospital Consumer Assessment of Healthcare Providers and Systems but less is known about the experiences of care reported by patients and caregivers on online sites that are more easily accessed by the public. Objective The aim of this study was to generate insight into the labor and delivery experience using hospital reviews on Yelp. Methods We identified all Yelp reviews of US hospitals posted online from May 2005 to March 2017. We used a machine learning tool, latent Dirichlet allocation, to identify 100 topics or themes within these reviews and used Pearson r to identify statistically significant correlations between topics and high (5-star) and low (1-star) ratings. Results A total of 1569 hospitals listed in the American Hospital Association directory had at least one Yelp posting, contributing a total of 41,095 Yelp reviews. Among those hospitals, 919 (59%) had at least one Yelp rating for labor and delivery services (median of 9 reviews), contributing a total of 6523 labor and delivery reviews. Reviews concentrated among 5-star (n=2643, 41%) and 1-star reviews (n=1934, 30%). Themes strongly associated with favorable ratings included the following: top-notch care (r=0.45, P<.001), describing staff as comforting (r=0.52, P<.001), the delivery experience (r=0.46, P<.001), modern and clean facilities (r=0.44, P<.001), and hospital food (r=0.38, P<.001). Themes strongly correlated with 1-star labor and delivery reviews included complaints to management (r=0.30, P<.001), a lack of agency among patients (r=0.47, P<.001), and issues with discharging from the hospital (r=0.32, P<.001). Conclusions Online review content about labor and delivery can provide meaningful information about patient satisfaction and experiences. Narratives from these reviews that are not otherwise captured in traditional surveys can direct efforts to improve the experience of obstetrical care.
Affiliation(s)
- Emily K Seltzer
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, United States
- Penn Medicine Center for Health Care Innovation, University of Pennsylvania, Philadelphia, PA, United States
- Sharath Chandra Guntuku
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, United States
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, United States
- Amy L Lanza
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, United States
- Christopher Tufts
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, United States
- Sindhu K Srinivas
- Department of Obstetrics and Gynecology, University of Pennsylvania, Philadelphia, PA, United States
- Elissa V Klinger
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, United States
- Penn Medicine Center for Health Care Innovation, University of Pennsylvania, Philadelphia, PA, United States
- David A Asch
- Penn Medicine Center for Health Care Innovation, University of Pennsylvania, Philadelphia, PA, United States
- Center for Health Equity Research and Promotion, Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, United States
- Nick Fausti
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, United States
- Lyle H Ungar
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, United States
- Raina M Merchant
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, United States
- Penn Medicine Center for Health Care Innovation, University of Pennsylvania, Philadelphia, PA, United States
8
Stokes DC, Pelullo AP, Mitra N, Meisel ZF, South EC, Asch DA, Merchant RM. Association Between Crowdsourced Health Care Facility Ratings and Mortality in US Counties. JAMA Netw Open 2021; 4:e2127799. PMID: 34665240; PMCID: PMC8527362; DOI: 10.1001/jamanetworkopen.2021.27799.
Abstract
IMPORTANCE Mortality across US counties varies considerably, from 252 to 1847 deaths per 100 000 people in 2018. Although patient satisfaction with health care is associated with patient- and facility-level health outcomes, the association between health care satisfaction and community-level health outcomes is not known. OBJECTIVE To examine the association between online ratings of health care facilities and mortality across US counties and to identify language specific to 1-star (lowest rating) and 5-star (highest rating) reviews in counties with high vs low mortality. DESIGN, SETTING, AND PARTICIPANTS This retrospective population-based cross-sectional study examined reviews and ratings of 95 120 essential health care facilities across 1301 US counties. Counties that had at least 1 essential health care facility with reviews available on Yelp, an online review platform, were included. Essential health care was defined according to the 10 essential health benefits covered by Affordable Care Act insurance plans. MAIN OUTCOMES AND MEASURES The mean rating of essential health care facilities was calculated by county from January 1, 2015, to December 31, 2019. Ratings were on a scale of 1 to 5 stars, with 1 being the worst rating and 5 the best. County-level composite measures of health behaviors, clinical care, social and economic factors, and physical environment were obtained from the University of Wisconsin School of Medicine and Public Health County Health Rankings database. The 2018 age-adjusted mortality by county was obtained from the Centers for Disease Control and Prevention Wide-ranging Online Data for Epidemiological Research database. Multiple linear regression analysis was used to estimate the association between mean facility rating and mortality, adjusting for county health ranking variables. Words with frequencies of use that were significantly different across 1-star and 5-star reviews in counties with high vs low mortality were identified. 
RESULTS The 95 120 facilities meeting inclusion criteria were distributed across 1301 of 3142 US counties (41.4%). At the county level, a 1-point increase in mean rating was associated with a mean (SE) age-adjusted decrease of 18.05 (3.68) deaths per 100 000 people (P < .001). Words specific to 1-star reviews in high-mortality counties included told, rude, and wait, and words specific to 5-star reviews in low-mortality counties included Dr, pain, and professional. CONCLUSIONS AND RELEVANCE This study found that, at the county level, higher online ratings of essential health care facilities were associated with lower mortality. Equivalent online ratings did not necessarily reflect equivalent experiences of care across counties with different mortality levels, as evidenced by variations in the frequency of use of key words in reviews. These findings suggest that online ratings and reviews may provide insight into unequal experiences of essential health care.
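The word-comparison step above — identifying words used with significantly different frequencies across review groups — reduces to comparing per-group word frequencies. A minimal sketch on invented review snippets (the study additionally stratified by county mortality and applied significance tests):

```python
from collections import Counter

# Toy 1-star and 5-star review snippets, echoing the kinds of words reported
one_star = ["told to wait, staff was rude", "rude receptionist, long wait",
            "they told me to wait outside"]
five_star = ["professional Dr helped my pain", "pain gone, very professional",
             "the Dr was professional and kind"]

def word_freqs(docs):
    # Relative frequency of each (lightly normalized) word across a group
    words = [w.strip(",.").lower() for d in docs for w in d.split()]
    total = len(words)
    return {w: c / total for w, c in Counter(words).items()}

f1, f5 = word_freqs(one_star), word_freqs(five_star)
vocab = set(f1) | set(f5)
# Sort by frequency difference: most 1-star-typical words first,
# most 5-star-typical words last
diff = sorted(vocab, key=lambda w: f5.get(w, 0) - f1.get(w, 0))
print(diff[0], diff[-1])
```

On real data this frequency comparison would be paired with a significance test (e.g., a chi-squared or permutation test per word) before reporting words as "specific" to a group.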
Affiliation(s)
- Daniel C. Stokes
- Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles
- Center for Digital Health, Penn Medicine, University of Pennsylvania, Philadelphia
- Arthur P. Pelullo
- Center for Digital Health, Penn Medicine, University of Pennsylvania, Philadelphia
| | - Nandita Mitra
- Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia
- Zachary F. Meisel
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia
- Department of Emergency Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia
- Eugenia C. South
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia
- Department of Emergency Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia
- Urban Health Lab, Perelman School of Medicine, University of Pennsylvania, Philadelphia
- David A. Asch
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia
- Division of General Internal Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia
- Raina M. Merchant
- Center for Digital Health, Penn Medicine, University of Pennsylvania, Philadelphia
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia
- Department of Emergency Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia
9
Stokes DC, Kishton R, McCalpin HJ, Pelullo AP, Meisel ZF, Beidas RS, Merchant RM. Online Reviews of Mental Health Treatment Facilities: Narrative Themes Associated With Positive and Negative Ratings. Psychiatr Serv 2021; 72:776-783. PMID: 34015944; PMCID: PMC9116241; DOI: 10.1176/appi.ps.202000267.
Abstract
OBJECTIVE Previous studies indicate that patients' satisfaction with mental health care is correlated with both treatment outcomes and quality of life. The aims of this study were to describe online reviews of mental health treatment facilities, including key themes in review content, and to evaluate the correlation between narrative review themes, facility characteristics, and review ratings. METHODS United States National Mental Health Services Survey (N-MHSS) facilities were linked to corresponding Yelp pages, created between March 2007 and September 2019. Correlations between review ratings and both machine learning-generated latent Dirichlet allocation topics and N-MHSS-reported facility characteristics were measured by using Spearman's rank-order correlation coefficient. Significance was defined by a Bonferroni-adjusted p<0.001. RESULTS Of 10,191 unique mental health treatment facilities, 1,383 (13.6%) had relevant Yelp pages with 8,133 corresponding reviews. The number of newly reviewed facilities and the number of new reviews increased throughout the study period. Narrative topics positively correlated with review ratings included caring staff (Spearman's ρ=0.39) and nonpharmacologic treatment (ρ=0.16). Topics negatively correlated with review ratings included rude staff (ρ=-0.14) and safety and abuse (ρ=-0.14). Of 126 N-MHSS survey items, 11 were positively correlated with review rating, including "outpatient mental health facility" (ρ=0.13), and 33 were negatively correlated with review rating, including accepting Medicare (ρ=-0.21). CONCLUSIONS Narrative topics provide information beyond what is currently collected through the N-MHSS. Topics associated with positive and negative reviews, such as staff attitude toward patients, can guide improvement in patients' satisfaction and engagement with mental health care.
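The correlation procedure above — Spearman's rank-order correlation across many facility characteristics, with a Bonferroni-adjusted significance threshold — can be sketched as follows. The facility counts, items, and effect below are simulated, not the N-MHSS data:

```python
import numpy as np
from scipy.stats import spearmanr

# Simulate facility mean ratings and a battery of binary facility
# characteristics, one of which genuinely tracks ratings
rng = np.random.default_rng(1)
n_fac, n_items = 300, 20
ratings = rng.uniform(1, 5, n_fac)
items = rng.integers(0, 2, size=(n_items, n_fac)).astype(float)
items[0] = (ratings + rng.normal(0, 1.0, n_fac)) > 3.0   # correlated item

# Bonferroni adjustment: divide alpha by the number of tests performed
alpha = 0.05 / n_items
significant = []
for i in range(n_items):
    rho, p = spearmanr(ratings, items[i])
    if p < alpha:
        significant.append((i, round(rho, 2)))
print(significant)
```

Only items surviving the adjusted threshold would be reported, which is how the study guards against false positives when screening 126 survey items.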
Affiliation(s)
- Daniel C Stokes, Rachel Kishton, Haley J McCalpin, Arthur P Pelullo, Zachary F Meisel, Rinad S Beidas, Raina M Merchant
- Penn Medicine Center for Digital Health (Stokes, McCalpin, Pelullo, Merchant), Center for Emergency Care Policy and Research, Department of Emergency Medicine (Stokes, Meisel, Merchant), Department of Psychiatry (Beidas), and Penn Medical Ethics and Health Policy (Beidas), Perelman School of Medicine, University of Pennsylvania, Philadelphia; National Clinician Scholars Program (Kishton), Leonard Davis Institute of Health Economics (Meisel, Merchant), and Penn Implementation Science Center at the Leonard Davis Institute (Beidas), University of Pennsylvania, Philadelphia
10
Agarwal AK, Wong V, Pelullo AM, Guntuku S, Polsky D, Asch DA, Muruako J, Merchant RM. Online Reviews of Specialized Drug Treatment Facilities-Identifying Potential Drivers of High and Low Patient Satisfaction. J Gen Intern Med 2020; 35:1647-1653. PMID: 31755009; PMCID: PMC7280415; DOI: 10.1007/s11606-019-05548-9.
Abstract
BACKGROUND Despite the importance of high-quality and patient-centered substance use disorder treatment, there are no standardized ratings of specialized drug treatment facilities and their services. Online platforms offer insights into potential drivers of high and low patient experience. OBJECTIVE We sought to analyze publicly available online review content of specialized drug treatment facilities and identify themes within high and low ratings. DESIGN This was a retrospective analysis of online ratings and reviews of specialized drug treatment facilities in Pennsylvania listed within the 2016 National Directory of Drug and Alcohol Abuse Treatment Facilities. Latent Dirichlet Allocation, a machine learning approach to narrative text, was used to identify themes within reviews. Differential Language Analysis was then used to measure correlations between themes and star ratings. SETTING Online reviews of Pennsylvania's specialized drug treatment facilities posted to Google and Yelp (July 2010-August 2018). RESULTS A total of 7823 online ratings were posted over 8 years. The distribution was bimodal (43% 5-star and 34% 1-star). The average weighted rating of a facility was 3.3 stars. Online themes correlated with 5-star ratings were the following: focus on recovery (r = 0.53), helpfulness of staff (r = 0.43), compassionate care (r = 0.37), experienced a life-changing moment (r = 0.32), and staff professionalism (r = 0.29). Themes correlated with a 1-star rating were waiting time (r = 0.41), poor accommodations (r = 0.26), poor phone communication (r = 0.24), medications given (r = 0.24), and appointment availability (r = 0.23). Themes derived from review content were similar to 9 of the 14 facility-level services highlighted by the Substance Abuse and Mental Health Services Administration's National Survey of Substance Abuse Treatment Services. CONCLUSIONS Individuals are sharing their ratings and reviews of specialized drug treatment facilities on online platforms.
Organically derived reviews of the patient experience, captured by online platforms, reveal potential drivers of high and low ratings. These represent additional areas of focus which can inform patient-centered quality metrics for specialized drug treatment facilities.
Affiliation(s)
- Anish K Agarwal
- Department of Emergency Medicine at the Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, USA.
- Penn Medicine Center for Healthcare Innovation, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA.
- Vivien Wong
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, USA
- Penn Medicine Center for Healthcare Innovation, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Arthur M Pelullo
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, USA
- Penn Medicine Center for Healthcare Innovation, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sharath Guntuku
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, USA
- Penn Medicine Center for Healthcare Innovation, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Daniel Polsky
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
- David A Asch
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, USA
- Penn Medicine Center for Healthcare Innovation, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
- Division of General Internal Medicine at the Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Center for Health Equity Research and Promotion, Philadelphia VA Medical Center, Philadelphia, PA, USA
- Jonathan Muruako
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, USA
- Penn Medicine Center for Healthcare Innovation, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Raina M Merchant
- Department of Emergency Medicine at the Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Penn Medicine Center for Digital Health, University of Pennsylvania, Philadelphia, PA, USA
- Penn Medicine Center for Healthcare Innovation, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA, USA
11
Ryskina KL, Andy AU, Manges KA, Foley KA, Werner RM, Merchant RM. Association of Online Consumer Reviews of Skilled Nursing Facilities With Patient Rehospitalization Rates. JAMA Netw Open 2020; 3:e204682. [PMID: 32407501] [PMCID: PMC7225899] [DOI: 10.1001/jamanetworkopen.2020.4682]
Abstract
Importance There are areas of skilled nursing facility (SNF) experience of importance to the public that are not currently included in public reporting initiatives on SNF quality. Whether patients, hospitals, and payers can leverage the information available from unsolicited online reviews to reduce avoidable rehospitalizations from SNFs is unknown. Objectives To assess the association between rehospitalization rates and online ratings of SNFs; to compare the association of rehospitalization with ratings from a review website vs Medicare Nursing Home Compare (NHC) ratings; and to identify specific topics consistently reported in reviews of SNFs with the highest vs lowest rehospitalization rates using natural language processing. Design, Setting, and Participants A retrospective cross-sectional study of 1536 SNFs with online reviews on Yelp (a website that allows consumers to rate and review businesses and services, scored on a 1- to 5-star rating scale, with 1 star indicating the lowest rating and 5 stars indicating the highest rating) posted between January 1, 2014, and December 31, 2018. The combined data set included 1536 SNFs with 8548 online reviews, NHC ratings, and readmission rates. Main Outcomes and Measures A mean rating from the review website was calculated through the end of each year. Risk-standardized rehospitalization rates were obtained from NHC. Linear regression was used to measure the association between the rehospitalization rate of a SNF and the online ratings. Natural language processing was used to identify topics associated with reviews of SNFs in the top and bottom quintiles of rehospitalization rates. Results The 1536 SNFs in the sample had a median of 6 reviews (interquartile range, 3-13 reviews), with a mean (SD) review website rating of 2.7 (1.1). 
The SNFs with the highest rating on both the review website and NHC had 2.0% lower rehospitalization rates compared with the SNFs with the lowest rating on both websites (21.3%; 95% CI, 20.7%-21.8%; vs 23.3%; 95% CI, 22.7%-24.0%; P = .04). Compared with the NHC ratings alone, review website ratings were associated with an additional 0.4% of the variation in rehospitalization rates across SNFs (adjusted R2 = 0.009 vs adjusted R2 = 0.013; P = .003). Thematic analysis of qualitative comments on the review website for SNFs with high vs low rehospitalization rates identified several areas of importance to the reviewers, such as the quality of physical infrastructure and equipment, staff attitudes, and communication with caregivers. Conclusions and Relevance Skilled nursing facilities with the best rating on both a review website and NHC had slightly lower rehospitalization rates than SNFs with the best rating on NHC alone. However, there was marked variation in the volume of reviews, and many SNF characteristics were underrepresented. Further refinement of the review process is warranted.
Affiliation(s)
- Kira L. Ryskina
- Perelman School of Medicine, Division of General Internal Medicine, Department of Medicine, University of Pennsylvania, Philadelphia
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia
- Anietie U. Andy
- Center for Digital Health, University of Pennsylvania Health System, Philadelphia
- Kirstin A. Manges
- Perelman School of Medicine, Division of General Internal Medicine, Department of Medicine, University of Pennsylvania, Philadelphia
- University of Pennsylvania, Philadelphia
- Rachel M. Werner
- Perelman School of Medicine, Division of General Internal Medicine, Department of Medicine, University of Pennsylvania, Philadelphia
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia
- Corporal Michael J. Crescenz VA Medical Center, Philadelphia, Pennsylvania
- Raina M. Merchant
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia
- Center for Digital Health, University of Pennsylvania Health System, Philadelphia
- Perelman School of Medicine, Department of Emergency Medicine, Department of Medicine, University of Pennsylvania, Philadelphia
12
Brereton EJ, Matlock DD, Fitzgerald M, Venechuk G, Knoepke C, Allen LA, Tate CE. Content Analysis of Negative Online Reviews of Hospice Agencies in the United States. JAMA Netw Open 2020; 3:e1921130. [PMID: 32049299] [PMCID: PMC8409083] [DOI: 10.1001/jamanetworkopen.2019.21130]
Abstract
IMPORTANCE As online reviews of health care become increasingly integral to patient decision-making, understanding their content can help health care practices identify and address patient concerns. OBJECTIVE To identify the most frequently cited complaints in negative (ie, 1-star) online reviews of hospice agencies across the United States. DESIGN, SETTING, AND PARTICIPANTS This qualitative study conducted a thematic analysis of online reviews of US hospice agencies posted between August 2011 and July 2019. The sample was selected from a Hospice Analytics database. For each state, 1 for-profit (n = 50) and 1 nonprofit (n = 50) hospice agency were randomly selected from the category of extra-large hospice agencies (ie, serving >200 patients/d) in the database. Data analysis was conducted from January 2019 to April 2019. MAIN OUTCOMES AND MEASURES Reviews were analyzed to identify the most prevalent concerns expressed by reviewers. RESULTS Of 100 hospice agencies in the study sample, 67 (67.0%) had 1-star reviews; 33 (49.3%) were for-profit facilities and 34 (50.7%) were nonprofit facilities. Of 137 unique reviews, 68 (49.6%) were for for-profit facilities and 69 (50.4%) were for nonprofit facilities. A total of 5 themes emerged during the coding and analytic process, as follows: discordant expectations, suboptimal communication, quality of care, misperceptions about the role of hospice, and the meaning of a good death. The first 3 themes were categorized as actionable criticisms, which are variables hospice organizations could change. The remaining 2 themes were categorized as unactionable criticisms, which are factors that would require larger systematic changes to address. For both for-profit and nonprofit hospice agencies, quality of care was the most frequently commented-on theme (117 of 212 comments [55.2%]). 
For-profit hospice agencies received more communication-related comments overall (34 of 130 [26.2%] vs 9 of 82 [11.0%]), while nonprofit hospice agencies received more comments about the role of hospice (23 of 33 [69.7%] vs 19 of 31 [61.3%]) and the quality of death (16 [48.5%] vs 12 [38.7%]). CONCLUSIONS AND RELEVANCE Regarding actionable criticisms, hospice agencies could examine their current practices, given that reviewers described these issues as negatively affecting the already difficult experience of losing a loved one. The findings indicated that patients and their families, friends, and caregivers require in-depth instruction and guidance on what they can expect from hospice staff, hospice services, and the dying process. Several criticisms identified in this study may be mitigated through operationalized, explicit conversations about these topics during hospice enrollment.
Affiliation(s)
- Elinor J Brereton
- Adult and Child Consortium for Outcomes Research and Delivery Science, Anschutz Medical Campus, University of Colorado School of Medicine, Aurora
- Daniel D Matlock
- Adult and Child Consortium for Outcomes Research and Delivery Science, Anschutz Medical Campus, University of Colorado School of Medicine, Aurora
- Division of Geriatrics, University of Colorado School of Medicine, Aurora
- VA Eastern Colorado Geriatric Research Education and Clinical Center, Denver
- Monica Fitzgerald
- Adult and Child Consortium for Outcomes Research and Delivery Science, Anschutz Medical Campus, University of Colorado School of Medicine, Aurora
- Grace Venechuk
- Adult and Child Consortium for Outcomes Research and Delivery Science, Anschutz Medical Campus, University of Colorado School of Medicine, Aurora
- Chris Knoepke
- Adult and Child Consortium for Outcomes Research and Delivery Science, Anschutz Medical Campus, University of Colorado School of Medicine, Aurora
- Division of Cardiology, University of Colorado School of Medicine, Aurora
- Larry A Allen
- Adult and Child Consortium for Outcomes Research and Delivery Science, Anschutz Medical Campus, University of Colorado School of Medicine, Aurora
- Division of Cardiology, University of Colorado School of Medicine, Aurora
- Channing E Tate
- Adult and Child Consortium for Outcomes Research and Delivery Science, Anschutz Medical Campus, University of Colorado School of Medicine, Aurora
13
Affiliation(s)
- Amar Shireesh Kanekar
- School of Counseling, Human Performance and Rehabilitation, University of Arkansas at Little Rock, Little Rock, Arkansas, USA
- Avinash Thombre
- Department of Applied Communication, University of Arkansas at Little Rock, Little Rock, Arkansas, USA
14
15
Agarwal AK, Mahoney K, Lanza AL, Klinger EV, Asch DA, Fausti N, Tufts C, Ungar L, Merchant RM. Online Ratings of the Patient Experience: Emergency Departments Versus Urgent Care Centers. Ann Emerg Med 2019; 73:631-638. [DOI: 10.1016/j.annemergmed.2018.09.029]
16
Grob R, Schlesinger M, Barre LR, Bardach N, Lagu T, Shaller D, Parker AM, Martino SC, Finucane ML, Cerully JL, Palimaru A. What Words Convey: The Potential for Patient Narratives to Inform Quality Improvement. Milbank Q 2019; 97:176-227. [PMID: 30883954] [DOI: 10.1111/1468-0009.12374]
Abstract
Policy Points Narratives about patients' experiences with outpatient care are essential for quality improvement because they convey ample actionable information that both elaborates on existing domains within patient experience surveys and describes multiple additional domains that are important to patients. The content of narrative feedback from patients can potentially be translated to improved quality in multiple ways: clinicians can learn from their own patients, groups of clinicians can learn from the experience of their peers' patients, and health system administrators can identify and respond to patterns in patients' accounts that reflect systemic challenges to quality. Consistent investment by payers and providers is required to ensure that patient narratives are rigorously collected, analyzed fully, and effectively used for quality improvement. CONTEXT For the past 25 years, health care providers and health system administrators have sought to improve care by surveying patients about their experiences. More recently, policymakers have acted to promote this learning by deploying financial incentives tied to survey scores. This article explores the potential of systematically elicited narratives about experiences with outpatient care to enrich quality improvement. METHODS Narratives were collected from 348 patients recruited from a nationally representative Internet panel. Drawing from the literature on health services innovation, we developed a two-part coding schema that categorized narrative content in terms of (a) the aspects of care being described, and (b) the actionability of this information for clinicians, quality improvement staff, and health system administrators. Narratives were coded using this schema, with high levels of reliability among the coders. 
FINDINGS The scope of outpatient narratives divides evenly among aspects of care currently measured by patient experience surveys (35% of content), aspects related to measured domains but not captured by existing survey questions (31%), and aspects of care that are omitted from surveys entirely (34%). Overall, the narrative data focused heavily on relational aspects of care (43%), elaborating on this aspect of experience well beyond what is captured with communication-related questions on existing surveys. Three-quarters of elicited narratives had some actionable content, and almost a third contained three or more separate actionable elements. CONCLUSIONS In a health policy environment that incentivizes attention to patient experience, rigorously elicited narratives hold substantial promise for improving quality in general and patients' experiences with care in particular. They do so in two ways: by making concrete what went wrong or right in domains covered by existing surveys, and by expanding our view of what aspects of care matter to patients as articulated in their own words and thus how care can be made more patient-centered. Most narratives convey experiences that are potentially actionable by those committed to improving health care quality in outpatient settings.
Affiliation(s)
- Rachel Grob
- University of Wisconsin-Madison Law School and University of Wisconsin-Madison School of Medicine and Public Health
- Tara Lagu
- University of Massachusetts Medical School-Baystate
17
Hendrikx RJP, Spreeuwenberg MD, Drewes HW, Struijs JN, Ruwaard D, Baan CA. Harvesting the wisdom of the crowd: using online ratings to explore care experiences in regions. BMC Health Serv Res 2018; 18:801. [PMID: 30342518] [PMCID: PMC6195971] [DOI: 10.1186/s12913-018-3566-z]
Abstract
Background Regional population health management (PHM) initiatives need an understanding of regional patient experiences to improve their services. Websites that gather patient ratings have become common and could be a helpful tool in this effort. Therefore, this study explores whether unsolicited online ratings can provide insight into (differences in) patients' experiences at a (regional) population level. Methods Unsolicited online ratings from the Dutch website Zorgkaart Nederland (2008–2017) were used. Patients rated their care providers on six dimensions from 1 to 10, and these ratings were geographically aggregated based on nine PHM regions. Distributions were explored between regions. Multilevel analyses per provider category, which produced Intraclass Correlation Coefficients (ICCs), were performed to determine clustering of ratings of providers located within regions. If ratings were clustered, differences found between regions could be attributed to regional characteristics (e.g. demographics or regional policy). Results In the nine regions, 70,889 ratings covering 4100 care providers were available. Overall, average regional scores (range = 8.3–8.6) showed significant albeit small differences. Multilevel analyses indicated little clustering of unsolicited provider ratings within regions, as the regional-level ICCs were low (ICC per pioneer site < 0.01). At the provider level, all ICCs were above 0.11, which showed that ratings were clustered. Conclusions Unsolicited online provider-based ratings are able to discern (small) differences between regions, similar to solicited data. However, these differences could not be attributed to the regional level, making unsolicited ratings not useful for overall regional policy evaluations. At the provider level, ratings can be used by regions to identify under-performing providers within their regions.
Affiliation(s)
- Roy J P Hendrikx
- Tranzo Scientific Center for Care and Welfare, Research Centre for Technology in Care, Tilburg University, PO Box 90153, 5000 LE Tilburg, The Netherlands; Department for Quality of Care and Health Economics, Center for Nutrition, Prevention and Health Services, National Institute for Public Health and the Environment, PO Box 1, 3720 BA Bilthoven, The Netherlands
- Marieke D Spreeuwenberg
- Zuyd University of Applied Sciences, PO Box 550, 6400 AN Heerlen, The Netherlands; Department of Health Services Research, Care and Public Health Research Institute (CAPHRI), Faculty of Health, Medicine and Life Sciences, Maastricht University, PO Box 616, 6200 MD Maastricht, The Netherlands
- Hanneke W Drewes
- Department for Quality of Care and Health Economics, Center for Nutrition, Prevention and Health Services, National Institute for Public Health and the Environment, PO Box 1, 3720, BA, Bilthoven, The Netherlands
- Jeroen N Struijs
- Department for Quality of Care and Health Economics, Center for Nutrition, Prevention and Health Services, National Institute for Public Health and the Environment, PO Box 1, 3720 BA Bilthoven, The Netherlands; Department of Public Health and Primary Care, LUMC Campus, Schouwburgstraat 2, 2522 VA The Hague, The Netherlands
- Dirk Ruwaard
- Department of Health Services Research, Care and Public Health Research Institute (CAPHRI), Faculty of Health, Medicine and Life Sciences, Maastricht University, PO Box 616, 6200 MD Maastricht, The Netherlands
- Caroline A Baan
- Tranzo Scientific Center for Care and Welfare, Research Centre for Technology in Care, Tilburg University, PO Box 90153, 5000 LE Tilburg, The Netherlands; Department for Quality of Care and Health Economics, Center for Nutrition, Prevention and Health Services, National Institute for Public Health and the Environment, PO Box 1, 3720 BA Bilthoven, The Netherlands
18
Dulmage BO, Akintilo L, Welty LJ, Davis MM, Colavincenzo M, Xu S. A Qualitative, Cross-Sectional Study of Positive and Negative Comments of Residency Programs Across 9 Medical and Surgical Specialties. Am J Med 2018; 131:1130-1134.e6. [PMID: 29908767] [DOI: 10.1016/j.amjmed.2018.05.019]
Abstract
IMPORTANCE Residency applicants often use social media to discuss the positive and negative features of prospective training programs. An examination of the content discussed by applicants could provide guidance for how a medical education faculty can better engage with prospective trainees and adapt to meet the educational expectations of a new generation of digital-native physicians. OBJECTIVE The objective was to identify unstructured social media data submitted by residency applicants and categorize positive and negative statements to determine key themes. DESIGN The study design was qualitative analysis of a retrospective cohort. SETTING Publicly available datasets were used. PARTICIPANTS The participants were anonymized medical trainees applying to residency training positions in 9 specialties (dermatology, general surgery, internal medicine, obstetrics/gynecology, plastic surgery, otolaryngology, physical medicine and rehabilitation, pediatrics, and radiology) from 2007 to 2017. MAIN OUTCOMES AND MEASURES After we developed a standardized coding scheme that broke comments down into major features, themes, and subthemes, all unstructured comments were coded by two independent researchers. Positive and negative comments were coded separately. Frequency counts and percentages were recorded for each identified feature, theme, and subtheme. The percentage of positive and negative comments by specialty was also calculated. RESULTS Of the 6314 comments identified, 4541 were positive and 1773 were negative. Institution was the most commonly cited major feature in both the positive (n = 767 [17%]) and negative (n = 827 [47%]) comments. Geography was the most cited theme, and City, Cost of Living, and Commute were commonly cited subthemes. Training was the next most cited major feature in both positive (n = 1005 [22%]) and negative (n = 291 [16%]) comments, with Clinical Training being more commonly cited compared to Research Opportunities.
Overall, 72% of all comments were positive; however, the percentage of positive comments varied significantly across the 9 specialties. Pediatrics (65%), dermatology (66%), and internal medicine (68%) applicants were more likely than the global average to express negative comments, whereas physical medicine and rehabilitation (85%), radiology (82%), otolaryngology (81%), and plastic surgery (80%) applicants were more likely to express positive comments. CONCLUSIONS AND RELEVANCE This qualitative analysis of positive and negative themes posted by applicants in recent matching years is the first of its kind and provides detailed new insights into the motivations and desires of trainees.
Affiliation(s)
- Brittany O Dulmage
- Department of Dermatology, Northwestern University Feinberg School of Medicine, Chicago, Ill; Medical Education Clinical Scholars Program, Northwestern University Feinberg School of Medicine, Chicago, Ill
- Lisa Akintilo
- Northwestern University Feinberg School of Medicine, Chicago, Ill
- Leah J Welty
- Division of Biostatistics, Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, Ill
- Matthew M Davis
- Northwestern University Feinberg School of Medicine, Chicago, Ill; Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Ill
- Maria Colavincenzo
- Department of Dermatology, Northwestern University Feinberg School of Medicine, Chicago, Ill
- Shuai Xu
- Department of Dermatology, Northwestern University Feinberg School of Medicine, Chicago, Ill; Center for Bio-Integrated Electronics, Northwestern University, Evanston, Ill.
19
Rothenfluh F, Schulz PJ. Content, Quality, and Assessment Tools of Physician-Rating Websites in 12 Countries: Quantitative Analysis. J Med Internet Res 2018; 20:e212. [PMID: 29903704] [PMCID: PMC6024097] [DOI: 10.2196/jmir.9105]
Abstract
BACKGROUND Websites on which users can rate their physician are becoming increasingly popular, but little is known about the quality of these websites, the information they contain, and the tools they offer users to assess physicians. This study assesses these aspects on physician-rating websites in German- and English-speaking countries. OBJECTIVE The objective of this study was to collect information on websites with a physician rating or review tool in 12 countries in terms of metadata, website quality (transparency, privacy and freedom of speech of physicians and patients, check mechanisms for appropriateness and accuracy of reviews, and ease of page navigation), professional information about the physician, rating scales and tools, as well as traffic rank. METHODS A systematic Web search based on a set of predefined keywords was conducted on Google, Bing, and Yahoo in August 2016. A final sample of 143 physician-rating websites was analyzed and coded for metadata, quality, information content, and physician-rating tools. RESULTS The majority of websites were registered in the United States (40/143) or Germany (25/143). The vast majority were commercially owned (120/143, 83.9%), and 69.9% (100/143) displayed some form of physician advertisement. Overall, both information content (mean 9.95/25) and quality (mean 18.67/47) were low. Websites registered in the United Kingdom obtained the highest quality scores (mean 26.50/47), followed by Australian websites (mean 21.50/47). In terms of rating tools, physician-rating websites most frequently asked users to score overall performance, punctuality, or wait time in practice. CONCLUSIONS This study shows that physician-rating websites should improve and communicate their quality standards, especially in terms of physician and user protection, as well as transparency. In addition, given that quality standards on physician-rating websites are low overall, transparent guidelines need to be developed. Furthermore, attention should be paid to the financial goals pursued by the majority of physician-rating websites, especially those that are commercially owned.
Affiliation(s)
- Fabia Rothenfluh
- Institute of Communication and Health, Università della Svizzera italiana, Lugano, Switzerland
- Peter J Schulz
- Institute of Communication and Health, Università della Svizzera italiana, Lugano, Switzerland
20
Graves RL, Goldshear J, Perrone J, Ungar L, Klinger E, Meisel ZF, Merchant RM. Patient narratives in Yelp reviews offer insight into opioid experiences and the challenges of pain management. Pain Manag 2018; 8:95-104. [DOI: 10.2217/pmt-2017-0050]
Abstract
Aim: To characterize Yelp reviews about pain management and opioids. Methods: We manually coded and applied natural language processing to 836 Yelp reviews of US hospitals mentioning an opioid medication. Results: Yelp reviews by patients and caregivers describing experiences with pain management and opioids had lower ratings compared with other reviews. Negative descriptions of pain management and opioid-related experiences were more commonly described than positive experiences, and the number of themes they reflected was more diverse. Conclusion: Yelp reviews offer insights into pain management and opioid use that are not assessed by traditional surveys. As a free, highly utilized source of unstructured narratives, Yelp may allow ongoing assessment of policies related to pain management and opioid use.
Affiliation(s)
- Rachel L Graves
- Penn Medicine Center for Digital Health, University of Pennsylvania, 3400 Civic Center Blvd, Philadelphia, PA 19104, USA
- Department of Emergency Medicine, Perelman School of Medicine, University of Pennsylvania, Blockley Hall, 423 Guardian Drive Room 407, Philadelphia, PA 19104, USA
- Jesse Goldshear
- Penn Medicine Center for Digital Health, University of Pennsylvania, 3400 Civic Center Blvd, Philadelphia, PA 19104, USA
- Jeanmarie Perrone
- Department of Emergency Medicine, Perelman School of Medicine, University of Pennsylvania, Blockley Hall, 423 Guardian Drive Room 407, Philadelphia, PA 19104, USA
- Lyle Ungar
- Penn Medicine Center for Digital Health, University of Pennsylvania, 3400 Civic Center Blvd, Philadelphia, PA 19104, USA
- Computer & Information Science, University of Pennsylvania, Levine Hall, 3330 Walnut Street, Philadelphia, PA 19104, USA
- Elissa Klinger
- Penn Medicine Center for Digital Health, University of Pennsylvania, 3400 Civic Center Blvd, Philadelphia, PA 19104, USA
- Zachary F Meisel
- Department of Emergency Medicine, Perelman School of Medicine, University of Pennsylvania, Blockley Hall, 423 Guardian Drive Room 407, Philadelphia, PA 19104, USA
- Raina M Merchant
- Penn Medicine Center for Digital Health, University of Pennsylvania, 3400 Civic Center Blvd, Philadelphia, PA 19104, USA
- Department of Emergency Medicine, Perelman School of Medicine, University of Pennsylvania, Blockley Hall, 423 Guardian Drive Room 407, Philadelphia, PA 19104, USA