1. Bondok M, Nguyen AXL, Tanya SM, Youn GM, Lando L, Wu AY. Gender and personalized profile information influence online ratings of Canadian academic ophthalmologists. Can J Ophthalmol 2024:S0008-4182(24)00287-4. [PMID: 39374903 DOI: 10.1016/j.jcjo.2024.09.002]
Abstract
OBJECTIVE To determine the characteristics associated with higher online ratings of academic ophthalmologists in Canada. DESIGN Retrospective cross-sectional study. METHODS All ophthalmologists affiliated with Canadian ophthalmology departments were queried in March 2023 using WebMDs. Online ratings and physician profile details were extracted and descriptively analyzed using nonparametric tests with significance at p < 0.05. Subgroup analysis was conducted using ≥ 4-star rated profiles. RESULTS Eight hundred and ninety-nine department faculty from 15 institutions were considered, and 660 ophthalmologists with active, rated profiles were included. A total of 27,823 online ratings with a median of 4.14 stars (out of 5) were observed. Most profiles were of men (74.1%). Women received lower overall ratings than men (median = 4.08 vs. 4.20; p = 0.021) and a lower number of reviews (median = 23 vs. 34; p < 0.001). Most profiles included office addresses (87.9%), private practice affiliation (79.8%), and contact information (51.1%). There were positive correlations between higher ratings and profiles that included biographies (rho = 0.13; p = 0.001), languages spoken (rho = 0.15; p < 0.001), educational background (rho = 0.13; p < 0.001), areas of expertise (rho = 0.10; p = 0.010), and awards (rho = 0.12; p = 0.002), and among physicians indicating they accept new patients (rho = 0.15; p < 0.001) and accommodate virtual visits (rho = 0.09; p = 0.020). CONCLUSIONS Canadian ophthalmologists who included certain personal information on their online profiles tended to have higher ratings, although the associations were weak, possibly due to wider public outreach. Women had fewer and lower overall ratings compared with men. Further research on the influence of online ratings on physician selection and physician career satisfaction is needed.
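The analyses described above (a nonparametric comparison of ratings by gender plus Spearman correlations between profile features and ratings) can be reproduced on any similar dataset. A minimal sketch in Python is shown below; the DataFrame, its column names, and the values are illustrative assumptions, not the study's actual variables or data.

```python
# Sketch of the nonparametric analyses described above (not the authors' code).
# Assumes a pandas DataFrame `profiles` with hypothetical columns:
# 'rating' (overall star rating), 'gender' ('M'/'F'), 'has_biography' (0/1).
import pandas as pd
from scipy.stats import mannwhitneyu, spearmanr

profiles = pd.DataFrame({
    "rating":        [4.2, 3.9, 4.6, 4.0, 4.8, 3.5],
    "gender":        ["M", "F", "M", "F", "M", "F"],
    "has_biography": [1, 0, 1, 0, 1, 0],
})

# Mann-Whitney U test: do ratings differ between men and women?
men = profiles.loc[profiles.gender == "M", "rating"]
women = profiles.loc[profiles.gender == "F", "rating"]
u_stat, p_gender = mannwhitneyu(men, women, alternative="two-sided")

# Spearman correlation: is including a biography associated with higher ratings?
rho, p_bio = spearmanr(profiles["has_biography"], profiles["rating"])

print(f"Gender comparison: U={u_stat:.1f}, p={p_gender:.3f}")
print(f"Biography vs rating: rho={rho:.2f}, p={p_bio:.3f}")
```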
Affiliation(s)
- Mostafa Bondok, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Anne Xuan-Lan Nguyen, Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, ON, Canada
- Stuti M Tanya, Department of Ophthalmology and Visual Sciences, McGill University Faculty of Medicine, Montréal, QC, Canada
- Gun Min Youn, Department of Ophthalmology, Stanford University School of Medicine, Stanford, CA, United States
- Leonardo Lando, Ocular Oncology Service, Barretos Cancer Hospital, Barretos, SP, Brazil
- Albert Y Wu, Department of Ophthalmology, Stanford University School of Medicine, Stanford, CA, United States
2. O'Malley GR, Sarwar SA, Weisman HE, Wan E, Prem Kumar R, Patel NV. Assessing Diversity, Equity, and Inclusion in Patient-Facing Websites in Neurosurgical Departments in the United States. World Neurosurg 2024;186:e366-e373. [PMID: 38556163 DOI: 10.1016/j.wneu.2024.03.144]
Abstract
BACKGROUND Patient-facing websites serve as essential platforms for disseminating information, engaging with patients, and increasing access to neurosurgical resources and services. Diversity, Equity, and Inclusion are at the forefront of issues facing the field of neurosurgery, especially concerning race and gender disparities among providers in the field. METHODS Data were collected on the race and gender of patients and providers displayed on each neurosurgery department's patient-facing website, in addition to accommodations for disabilities, decreased ability to pay, and language. RESULTS Patients who were White were depicted more commonly than patients of color (69% vs. 31%, P < 0.00001). White patients were also over-represented compared with the average demographics of the communities the hospitals served (P = 0.03846). Neurosurgical providers who were White outnumbered those of color (70% vs. 30%, P < 0.00001). The racial depiction of providers was comparable with the racial disparities currently observed in neurosurgery (P = 0.59612). Female neurosurgery providers were seen less often than male providers on patient-facing websites (P < 0.00001) but were seen more commonly on patient-facing websites than the percentage of practicing neurosurgeons they currently comprise (28% vs. 8%, P < 0.00001). CONCLUSIONS The results of this study suggest that patient-facing websites of neurosurgical departments are an area for improvement with regard to Diversity, Equity, and Inclusion in the field of neurosurgery. Disparities were noted in the racial depiction of patients, further calling attention to racial and gender disparities in the field of neurosurgery.
Affiliation(s)
- Geoffrey R O'Malley, Department of Neurosurgery, Hackensack Meridian School of Medicine, Nutley, New Jersey, USA
- Syed A Sarwar, Department of Neurosurgery, HMH-Jersey Shore University Medical Center, Neptune, New Jersey, USA
- Hannah E Weisman, Department of Neurosurgery, Hackensack Meridian School of Medicine, Nutley, New Jersey, USA
- Erica Wan, Department of Neurosurgery, Hackensack Meridian School of Medicine, Nutley, New Jersey, USA
- Rohit Prem Kumar, Department of Neurosurgery, Hackensack Meridian School of Medicine, Nutley, New Jersey, USA
- Nitesh V Patel, Department of Neurosurgery, Hackensack Meridian School of Medicine, Nutley, New Jersey, USA; Department of Neurosurgery, HMH-Jersey Shore University Medical Center, Neptune, New Jersey, USA
3. Janssen A, Donnelly C, Shaw T. A Taxonomy for Health Information Systems. J Med Internet Res 2024;26:e47682. [PMID: 38820575 PMCID: PMC11179026 DOI: 10.2196/47682]
Abstract
The health sector is highly digitized, which is enabling the collection of vast quantities of electronic data about health and well-being. These data are collected by a diverse array of information and communication technologies, including systems used by health care organizations, consumer and community sources such as information collected on the web, and passively collected data from technologies such as wearables and devices. Understanding the breadth of information technologies that collect these data, and how the data can be actioned, is a challenge for the significant portion of the digital health workforce who interact with health data as part of their duties but are not informatics experts. This viewpoint aims to present a taxonomy categorizing common information and communication technologies that collect electronic data. An initial classification of key information systems collecting electronic health data was undertaken via a rapid review of the literature. Subsequently, a purposeful search of the scholarly and gray literature was undertaken to extract key information about the systems within each category, in order to generate definitions of the systems and describe their strengths and limitations.
Affiliation(s)
- Anna Janssen, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Candice Donnelly, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Tim Shaw, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
4. Anastasio AT, Baumann AN, Curtis DP, Rogers H, Hogge C, Ryan SF, Walley KC, Adams SB. An examination of negative one-star patient reviews for foot and ankle orthopedic surgery: A retrospective analysis. Foot Ankle Surg 2024;30:252-257. [PMID: 38195290 DOI: 10.1016/j.fas.2023.12.007]
Abstract
BACKGROUND Despite the questionable validity of online physician review websites (PRWs), negative reviews can adversely affect a provider's practice. Several investigations have explored the effect of extremely negative "one-star" reviews across subspecialties such as adult reconstruction, sports medicine, and orthopaedic traumatology; however, to date, no study has explored one-star reviews in foot and ankle surgery. The goal of this study was to characterize factors that contribute to extremely negative, one-star reviews of foot and ankle surgeons on Vitals.com. METHODS We performed a retrospective analysis of negative one-star reviews, with corresponding patient complaints, for foot and ankle surgeons (both orthopaedic surgeons and podiatrists) in the United States. Physicians included were selected within a 10-mile radius of the ten largest cities in the United States. Data were stratified by patient type (e.g., those receiving surgery and those not undergoing surgical intervention) and binned according to type of patient complaint, as previously described. RESULTS Of the 2645 foot and ankle surgeons identified in our initial query, 13.8% had one-star reviews eligible for analysis. Complaints related to bedside manner and patient experience accounted for 41.5% of the one-star reviews of foot and ankle surgeons for nonsurgical complaints. Surgical complications and other outcomes-related factors comprised roughly 50% of the complaints related to surgical patients. CONCLUSION Complaints related to bedside manner and patient experience accounted for 41.5% of one-star reviews from nonsurgical patients, while surgical complications and other outcomes-related factors comprised roughly half of the complaints related to surgery. These data inform practicing foot and ankle surgeons about the influences behind patients leaving extremely negative reviews on PRWs. LEVEL OF CLINICAL EVIDENCE IV.
Affiliation(s)
- Anthony N Baumann, College of Medicine, Northeast Ohio Medical University, Rootstown, OH, USA
- Deven P Curtis, College of Medicine, Northeast Ohio Medical University, Rootstown, OH, USA
- Hudson Rogers, College of Medicine, Northeast Ohio Medical University, Rootstown, OH, USA
- Caleb Hogge, School of Osteopathic Medicine, Lake Erie College of Medicine, Erie, PA, USA
- Savannah F Ryan, Department of Orthopaedics, University of Michigan | Michigan Medicine, Ann Arbor, MI, USA
- Kempland C Walley, Department of Orthopaedics, University of Michigan | Michigan Medicine, Ann Arbor, MI, USA
- Samuel B Adams, Department of Orthopaedics, Duke University, Durham, NC, USA
5. Kim JK, Tawk K, Kim JM, Shahbaz H, Lipton JA, Haidar YM, Tjoa T, Abouzari M. Online ratings and narrative comments of American Head and Neck Society surgeons. Head Neck 2024;46:2508-2516. [PMID: 38488221 PMCID: PMC11401960 DOI: 10.1002/hed.27743]
Abstract
BACKGROUND We analyzed online rating scores and comments of head and neck surgeons to understand factors that contribute to higher ratings. METHODS Numerical ratings and comments for American Head and Neck Society physicians were extracted from Healthgrades, Vitals, RateMDs, and Yelp, with narrative comments categorized based on content. Physician practice location, education, and residency training were also compiled. RESULTS Patient ratings were significantly higher with supportive staff and an affable physician demeanor but dropped significantly with longer wait times and difficulties scheduling appointments or follow-ups. Physician education and postgraduate training did not significantly affect ratings. CONCLUSION Online ratings and comments correlated with modifiable factors in clinical practice and may be informative in understanding patient needs.
Affiliation(s)
- Joshua K Kim, Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA; School of Medicine, Duke University, Durham, North Carolina, USA
- Karen Tawk, Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA
- Jonathan M Kim, Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA
- Hady Shahbaz, Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA
- Joshua A Lipton, Department of Computer Science, University of California Irvine, Irvine, California, USA
- Yarah M Haidar, Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA
- Tjoson Tjoa, Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA
- Mehdi Abouzari, Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA
6. Akbarpour M, Tawk K, Frank M, Gomez AS, Mostaghni N, Abouzari M. Assessment of laryngologists' ratings on physician review websites. World J Otorhinolaryngol Head Neck Surg 2024;10:1-6. [PMID: 38560034 PMCID: PMC10979035 DOI: 10.1002/wjo2.95]
Abstract
Objective To assess and characterize online ratings and comments on laryngologists and determine factors that correlate with higher ratings. Methods All American Laryngological Association (ALA) members were queried across several online platforms. Ratings were normalized for comparison on a five-point Likert scale and categorized based on context and for positive/negative aspects. Results Of the 331 ALA members, 256 (77%) were rated on at least one online platform. Across all platforms, the average overall rating was 4.39 ± 0.61 (range: 1.00-5.00). Specific positive ratings, including "bedside manners," "diagnostic accuracy," "adequate time spent with patient," "appropriate follow-up," and "physician timeliness," had significant positive correlations with overall ratings by Pearson's correlation (P < 0.001). Long wait times had a significant negative correlation with overall ratings (P < 0.001). Conclusion Online ratings and comments for laryngologists are significantly influenced by patient perceptions of bedside manner, physician competence, and time spent with the patient.
Affiliation(s)
- Meleeka Akbarpour, Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, USA
- Karen Tawk, Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, USA
- Madelyn Frank, Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, USA
- Alizah S. Gomez, Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, USA
- Navid Mostaghni, Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, USA
- Mehdi Abouzari, Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, USA
7. Park SH, Cheng CP, Buehler NJ, Sanford T, Torrey W. A sentiment analysis on online psychiatrist reviews to identify clinical attributes of psychiatrists that shape the therapeutic alliance. Front Psychiatry 2023;14:1174154. [PMID: 37398580 PMCID: PMC10313228 DOI: 10.3389/fpsyt.2023.1174154]
Abstract
Background While online reviews from physician rating websites are increasingly utilized by healthcare providers to better understand patient needs, it remains difficult to objectively identify areas for improvement in providing psychiatric care. Objectives To quantitatively characterize the sentiment of online written reviews of psychiatrists to determine clinical attributes that can be strengthened to improve psychiatrists' therapeutic alliance with their patients. Materials and methods Sentiment scores of 6,400 written reviews of 400 US-based psychiatrists on a US-based online physician rating website were obtained through a natural-language-processing-based sentiment analysis. Relationships among sentiment scores, average star ratings, and demographics were examined. Linguistic analyses determined words and bigrams that were highly associated with reviews with the most positive and negative sentiment. Findings Sentiment scores were significantly correlated with average star ratings of the psychiatrists (R = 0.737, p < 0.001). Psychiatrists who were younger (< 56 years old) and/or practiced in the Northeast had significantly higher average star ratings than those older and/or practicing in the Southwest. Frequency analysis showed that positive reviews most frequently contained "time" (N = 1,138) and "caring" (N = 784) while negative reviews most frequently contained "medication" (N = 495) and "time" (N = 379). Logistic regression analysis revealed that reviews were more likely to be considered positive when they included "great listener" (OR = 16.89) and "comfortable" (OR = 10.72) and more likely to be negative when they included "meds" (OR = 0.55) and "side effect" (OR = 0.59). Conclusion Psychiatrists who are younger and located in the Northeast receive more positive reviews; there may be potential for demographic bias among patient reviewers. Patients positively rate psychiatrists who make them feel heard and comfortable but negatively rate encounters centered around medications and their side effects. Our study lends quantitative evidence to support the importance of thorough and empathetic communication of psychiatrists in building a strong therapeutic alliance.
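As a rough illustration of the pipeline summarized above (scoring free-text reviews for sentiment and correlating the scores with star ratings), the sketch below uses NLTK's VADER analyzer. The review texts and ratings are made-up placeholders, and VADER stands in for whatever natural-language-processing model the cited study actually used.

```python
# Hedged sketch of sentiment scoring + correlation (not the study's actual pipeline).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from scipy.stats import pearsonr

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

# Placeholder reviews paired with star ratings (1-5).
reviews = [
    ("Great listener, I felt heard and comfortable.", 5),
    ("Appointment felt rushed, we only talked about meds.", 2),
    ("Caring and thorough, took time to explain everything.", 5),
    ("Side effects were dismissed and follow-up was poor.", 1),
]

# VADER's compound score ranges from -1 (most negative) to +1 (most positive).
sentiments = [analyzer.polarity_scores(text)["compound"] for text, _ in reviews]
stars = [rating for _, rating in reviews]

r, p = pearsonr(sentiments, stars)
print(f"Pearson r between sentiment score and star rating: {r:.2f} (p={p:.3f})")
```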
Affiliation(s)
- Soo Hwan Park, Geisel School of Medicine at Dartmouth, Hanover, NH, United States; Department of Psychiatry, Dartmouth Health, Lebanon, NH, United States
- Timothy Sanford, Northwestern University Feinberg School of Medicine, Chicago, IL, United States
- William Torrey, Geisel School of Medicine at Dartmouth, Hanover, NH, United States; Department of Psychiatry, Dartmouth Health, Lebanon, NH, United States
8. Guetz B, Bidmon S. The Credibility of Physician Rating Websites: A Systematic Literature Review. Health Policy 2023;132:104821. [PMID: 37084700 DOI: 10.1016/j.healthpol.2023.104821]
Abstract
OBJECTIVES Increasingly, the credibility of online reviews is drawing critical attention due to the lack of control mechanisms, the constant debate about fake reviews and, last but not least, current developments in the field of artificial intelligence. For this reason, the aim of this study was to examine the extent to which assessments recorded on physician rating websites (PRWs) are credible, based on a comparison with other evaluation criteria. METHODS Following the PRISMA guidelines, a comprehensive literature search was conducted across different scientific databases. Data were synthesized by comparing individual statistical outcomes, objectives, and conclusions. RESULTS The chosen search strategy led to a database of 36,755 studies, of which 28 were ultimately included in the systematic review. The literature review yielded mixed results regarding the credibility of PRWs. Seven publications supported the credibility of PRWs, six found no correlation between PRWs and alternative datasets, and 15 reported mixed results. CONCLUSIONS This study has shown that ratings on PRWs seem to be credible when relying primarily on patients' perceptions. However, these portals seem inadequate to represent alternative comparative values such as the medical quality of physicians. For health policy makers, our results show that decisions based on patients' perceptions may be well supported by data from PRWs. For all other decisions, however, PRWs do not seem to contain sufficiently useful data.
Affiliation(s)
- Bernhard Guetz, Department of Marketing and International Management, Alpen-Adria-Universitaet Klagenfurt, Universitaetsstrasse 65-67, Klagenfurt am Woerthersee, 9020, Austria
- Sonja Bidmon, Department of Marketing and International Management, Alpen-Adria-Universitaet Klagenfurt, Universitaetsstrasse 65-67, Klagenfurt am Woerthersee, 9020, Austria
9. Characterizing Single-star Negative Online Reviews of Orthopaedic Trauma Association Members. J Am Acad Orthop Surg 2023;31:397-404. [PMID: 36727955 DOI: 10.5435/jaaos-d-22-00631]
Abstract
INTRODUCTION The purpose of this study was to characterize factors that contribute to 1-star negative reviews regarding orthopaedic trauma surgeons. METHODS A search was done for Orthopaedic Trauma Association members on Yelp.com, Healthgrade.com, and Vitals.com in New York, Boston, San Francisco, Los Angeles, Dallas, Phoenix, Seattle, Baltimore, Denver, Houston, Philadelphia, and Washington, DC. All single-star reviews (out of a possible 5 stars) were included in this study. Reviews were categorized as either clinical or nonclinical and then further subcategorized. Categorical variables were analyzed using a chi-square test. The rate ratio (the ratio of the rate for nonsurgical divided by surgical reviews) was determined for each category. RESULTS Two hundred eighty-eight single-star reviews were included in the study, comprising 655 total complaints. Of all complaints, 274 (41.8%) were clinically related and 381 (58.2%) were nonclinical. Of the 288 single-star reviews, 96 (33.3%) were from surgically treated patients and 192 (66.7%) were from nonsurgical patients. Most complaints were in reference to nonclinical aspects of care such as physician bedside manner (173 reviews, 60%), not enough time spent with provider (58 reviews, 20%), and wait time (42 complaints, 15%). The most common clinical complaints were for complication (61 reviews, 21%), disagree with decision/plan (49 reviews, 17%), and uncontrolled pain (45 reviews, 16%). Surgical patients had a significantly higher rate of clinical complaints than nonsurgical patients (1.57 vs. 0.64 clinical complaints per review, P < 0.001). Nonsurgical patients had a significantly higher rate of nonclinical complaints than surgical patients (1.43 vs. 1.10 nonclinical complaints per review, P < 0.001). DISCUSSION Most 1-star reviews referenced a nonclinical aspect of care with a physician's bedside manner being the most common complaint. Surgical patients were markedly more likely to reference a clinical aspect of care, such as complications or misdiagnosis compared with nonsurgical patients, who more commonly referenced nonclinical aspects of care.
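The rate-ratio comparison described in the methods (complaints per review for nonsurgical vs. surgical reviewers, compared with a chi-square test) is straightforward to compute. A minimal, illustrative sketch follows; the clinical/nonclinical splits are approximate counts back-calculated from the per-review rates reported above, not figures taken directly from the study.

```python
# Sketch of the rate-ratio / chi-square comparison (illustrative, not the authors' code).
from scipy.stats import chi2_contingency

surgical_reviews, nonsurgical_reviews = 96, 192
# Approximate complaint counts back-calculated from the reported rates.
surgical = {"clinical": 151, "nonclinical": 106}
nonsurgical = {"clinical": 123, "nonclinical": 275}

# Clinical complaints per review in each group, and their ratio.
rate_surg = surgical["clinical"] / surgical_reviews
rate_nonsurg = nonsurgical["clinical"] / nonsurgical_reviews
rate_ratio = rate_nonsurg / rate_surg  # nonsurgical divided by surgical

# Chi-square test on the 2x2 table of complaint types by patient group.
table = [
    [surgical["clinical"], surgical["nonclinical"]],
    [nonsurgical["clinical"], nonsurgical["nonclinical"]],
]
chi2, p, dof, expected = chi2_contingency(table)

print(f"Clinical complaints per review: surgical={rate_surg:.2f}, nonsurgical={rate_nonsurg:.2f}")
print(f"Rate ratio (nonsurgical/surgical) = {rate_ratio:.2f}, chi-square p = {p:.4f}")
```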
10. Spiro GM, Sommerfeld CS, Fung K, Quimby AE, Pulkki KH, Fortin M, Nguyen LH. Poor Online Patient Ratings of Otolaryngologists in the United States: What are Patients Saying? Ear Nose Throat J 2023:1455613221150146. [PMID: 36602263 DOI: 10.1177/01455613221150146]
Abstract
OBJECTIVES Online patient forums have become a platform for patient education and advocacy in many areas of medicine. The anonymity provided by such forums may encourage honest, candid responses. Using patient online reviews, this study sought to explore themes that arose from negatively perceived care interactions with American otolaryngologists, using the Accreditation Council for Graduate Medical Education (ACGME) competency framework. STUDY DESIGN Qualitative thematic analysis. METHODS Through an iterative multistep process, a qualitative thematic analysis was conducted on negative reviews (defined as ratings of two or less out of five) of all American otolaryngologists found on a popular online physician-rating website (RateMDs.com). RESULTS A systematic search through the RateMDs website revealed 2950 separate negative review comments. Of these negative reviews, 350 were randomly selected for thematic analysis. The predominant themes that emerged aligned closely with the ACGME competencies, particularly professionalism and interpersonal and communication skills. CONCLUSIONS The negative reviews of American otolaryngologists revealed a number of areas where quality of care could be improved. Patients value evidence-based medicine delivered by compassionate and respectful physicians. Isolating and aligning predominant themes within the ACGME framework proved a productive method to collect and organize pertinent patient feedback and to integrate teaching into postgraduate training and continuing professional development, in order to avoid such negatively perceived interactions in the future.
Affiliation(s)
- Grace M Spiro, Department of Otolaryngology - Head & Neck Surgery, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada
- Connor S Sommerfeld, Division of Otolaryngology-Head & Neck Surgery, University of Alberta, Edmonton, Alberta, Canada
- Kevin Fung, Department of Otolaryngology - Head & Neck Surgery, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada
- Alexandra E Quimby, Department of Otolaryngology - Head & Neck Surgery, University of Ottawa, Ottawa, Ontario, Canada
- Kristina H Pulkki, Department of Otolaryngology - Head & Neck Surgery, University of Ottawa, Ottawa, Ontario, Canada
- Mélyssa Fortin, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
- Lily Hp Nguyen, Faculty of Medicine, McGill University, Montreal, Quebec, Canada; Department of Otolaryngology - Head & Neck Surgery, McGill University, Montreal, Quebec, Canada; Institute for Health Sciences Education, McGill University, Montreal, Quebec, Canada
11. Romere CM, Shah RF. Discordance in online commercial ratings of orthopaedic surgeons: a retrospective review of online rating scores. Curr Orthop Pract 2022. [DOI: 10.1097/bco.0000000000001190]
12. Online Patient Reviews of Breast Reconstruction: RealSelf Analysis. Plast Reconstr Surg Glob Open 2022;10:e4476. [PMID: 36438458 PMCID: PMC9682616 DOI: 10.1097/gox.0000000000004476]
Abstract
RealSelf is an online community that hosts an expansive number of online reviews for cosmetic and reconstructive plastic surgery procedures. The purpose of this study was to analyze patient satisfaction with breast reconstruction procedures from RealSelf and to determine factors contributing to a positive or negative patient experience. METHODS The breast reconstruction category from RealSelf.com was analyzed using a web crawler-based application built with Python and Selenium. Reviews were collected from May 2009 to November 2021. Information including RealSelf's inherent "worth it" ranking, review text, the number of submitted photographs, and the number of readers who found the review helpful was captured. The content of each review was then independently reviewed by the authors and categorized by the key factors that determined positive or negative reviews. RESULTS A total of 3451 breast reconstruction reviews were collected. After the authors analyzed each review, 3225 (94.33%) were identified as positive. The most common factors associated with positive reviews were physician demeanor (n = 2600, 31.7%), aesthetic outcome (n = 1955, 23.8%), and staff (n = 1543, 18.8%), while negative reviews were most commonly associated with unfavorable aesthetic outcome (n = 94, 28.9%), physician demeanor (n = 82, 25.2%), and postoperative complications (n = 75, 23.1%). CONCLUSIONS Although there are surveys that analyze patient satisfaction with breast reconstruction, no prior study has analyzed a large online review database. The predominant factors in both positive and negative reviews were physician demeanor and aesthetic outcome.
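The crawling step described in the methods (a Python/Selenium application paging through review listings) can be sketched roughly as below. The URL, CSS selectors, and field names are hypothetical placeholders rather than RealSelf's real page structure, and any actual crawl should respect the site's terms of service and robots.txt.

```python
# Hedged sketch of a Selenium-based review crawler (selectors and URL are hypothetical).
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # run without opening a browser window
driver = webdriver.Chrome(options=options)

reviews = []
try:
    # Placeholder listing URL; real category pages and pagination differ.
    driver.get("https://www.example.com/breast-reconstruction/reviews")
    for card in driver.find_elements(By.CSS_SELECTOR, ".review-card"):
        reviews.append({
            "worth_it": card.find_element(By.CSS_SELECTOR, ".worth-it").text,
            "text": card.find_element(By.CSS_SELECTOR, ".review-body").text,
            "helpful_count": card.find_element(By.CSS_SELECTOR, ".helpful").text,
        })
finally:
    driver.quit()

print(f"Collected {len(reviews)} reviews")
```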
13. Morena N, Zelt N, Nguyen D, Dionne E, Rentschler CA, Greyson D, Meguerditchian AN. Use of Online Patient Reviews to Assess Medical Oncologist Competency: Mixed-Method Sequential Explanatory Study. JMIR Form Res 2023;7:e39857. [PMID: 37140959 DOI: 10.2196/39857]
Abstract
BACKGROUND Patients increasingly use web-based evaluation tools to assess their physicians, health care teams, and overall medical experience. OBJECTIVE This study aimed to evaluate the extent to which the standardized physician competencies of the CanMEDS Framework are present in web-based patient reviews (WPRs) and to identify patients' perception of important physician qualities in the context of quality cancer care. METHODS The WPRs of all university-affiliated medical oncologists in midsized cities with medical schools in the province of Ontario (Canada) were collected. Two reviewers (1 communication studies researcher and 1 health care professional) independently assessed the WPRs according to the CanMEDS Framework and identified common themes. Comment scores were then evaluated to identify κ agreement rates between the reviewers, and a descriptive quantitative analysis of the cohort was completed. Following the quantitative analysis, an inductive thematic analysis was performed. RESULTS This study identified 49 actively practicing university-affiliated medical oncologists in midsized urban areas in Ontario. A total of 473 WPRs reviewing these 49 physicians were identified. Among the CanMEDS competencies, those defining the roles of medical experts, communicators, and professionals were the most prevalent (303/473, 64%; 182/473, 38%; and 129/473, 27%, respectively). Common themes in WPRs include medical skill and knowledge, interpersonal skills, and answering questions (from the patient to the physician). Detailed WPRs tend to include the following elements: experience and connection; discussion and evaluation of the physician's knowledge, professionalism, interpersonal skills, and punctuality; in positive reviews, the expression of feelings of gratitude and a recommendation; and in negative reviews, discouragement from seeking the physician's care. Patients' perception of medical skills is less specific than their perception of interpersonal qualities, although medical skills are the most commented-on element of care in WPRs. Patients' perception of interpersonal skills (listening, compassion, and overall caring demeanor) and other experiential phenomena, such as feeling rushed during appointments, is often specific and detailed. Details about a physician's interpersonal skills or "bedside manner" are readily perceived, valued, and shareable in a WPR context. A small number of WPRs reflected a distinction between the value of medical skills and that of interpersonal skills; the authors of these WPRs claimed that, for them, a physician's medical skills and competence are more important than their interpersonal skills. CONCLUSIONS CanMEDS roles and competencies that are explicitly patient facing (i.e., those directly experienced by patients in their interactions with physicians and through the care that physicians provide) are the most likely to be present and reported on in WPRs. The findings demonstrate the opportunity to learn from WPRs, not simply to discern physicians' popularity but to grasp what patients may expect from their physicians. In this context, WPRs can represent a method for the measurement and assessment of patient-facing physician competency.
Affiliation(s)
- Nina Morena, Art History and Communication Studies, McGill University, Montreal, QC, Canada
- Nicholas Zelt, Faculty of Medicine, McGill University, Montreal, QC, Canada
- Diana Nguyen, McGill University Health Centre Research Institute, Montreal, QC, Canada; St Mary's Research Centre, Montreal, QC, Canada
- Carrie A Rentschler, Art History and Communication Studies, McGill University, Montreal, QC, Canada
- Devon Greyson, School of Population and Public Health, University of British Columbia, Vancouver, BC, Canada
- Ari N Meguerditchian, McGill University Health Centre Research Institute, Montreal, QC, Canada; St Mary's Research Centre, Montreal, QC, Canada; Department of Surgery, McGill University, Montreal, QC, Canada
14. Richman EH, Ogbaudu E, Pollock JR, Brinkman JC, Moore ML, Arthur JR, Karlen JW. Characterizing Negative Online Reviews of Pediatric Orthopaedic Surgeons. J Pediatr Orthop 2022;42:e533-e537. [PMID: 35200216 DOI: 10.1097/bpo.0000000000002121]
Abstract
BACKGROUND The growing focus on subjective patient experiences has created an increase in the popularity of physician rating websites. The purpose of this study was to characterize extremely negative reviews of pediatric orthopaedic surgeons. METHODS Pediatric orthopaedic surgeons were randomly selected using the Pediatric Orthopaedic Society of North America comprehensive list of surgeons. A search was then performed on Healthgrades.com, Vitals.com, and Yelp.com for 1-star reviews. Reviews were classified into clinical and nonclinical categories, and statistical analyses were performed on the frequency of reviews and complaints for each category. RESULTS Of the 279 one-star reviews categorized, 248 (88.9%) included nonclinical complaints and 182 (65.2%) included clinical complaints. Nonsurgical patients were associated with 255 reviews, and the remaining 24 were related to surgical patients. Of the 430 comments within reviews, 248 referenced nonclinical aspects of care and 182 referenced clinical care. Clinical factors most frequently noted included clinical disagreement (37%), unclear treatment plan (25%), complication (17%), misdiagnosis (15%), uncontrolled pain (13%), and delay in care (8%). The most frequently addressed nonclinical factors included physician bedside manner (68%), time spent with the provider (21%), wait time (18%), unprofessional staff (17%), scheduling issues (9%), cost (8%), and billing (8%). Compared with surgical reviews, nonsurgical reviews were more likely to contain nonclinical complaints (rate ratio: 1.5; P<0.05) and less likely to contain clinical complaints (rate ratio: 0.7; P<0.05). The most common complaint from surgical patients was complications (91.7%). CONCLUSIONS To our knowledge, this is the first study to examine the factors associated with negative reviews of pediatric orthopaedic surgeons. The majority of reviews of pediatric orthopaedic surgeons were left by nonsurgical patients and were related to nonclinical aspects of care. We also found that surgeon-dependent factors such as poor physician bedside manner, an unclear treatment plan, or parents' disagreement with the treatment plan were the most common reasons for negative reviews. LEVEL OF EVIDENCE Level IV.
Affiliation(s)
- Evan H Richman, Creighton University School of Medicine-Phoenix Regional Campus
- Judson W Karlen, Department of Orthopaedic Surgery, Phoenix Children's Hospital, Phoenix
15. What Affects an Orthopaedic Surgeon's Online Rating? A Large-Scale, Retrospective Analysis. J Am Acad Orthop Surg Glob Res Rev 2022;6:01979360-202203000-00013. [PMID: 35290257 PMCID: PMC8926034 DOI: 10.5435/jaaosglobal-d-22-00027]
Abstract
Introduction: In the past decade, online physician review websites have become an important source of information for patients, with the largest and most popular being Healthgrades.com. Our study aims to investigate demographic and volume-based trends in online reviews of every Healthgrades-listed orthopaedic surgeon through a nationwide, retrospective analysis. Methods: All available demographic and rating information for orthopaedic surgeons (n = 28,713; Healthgrades.com) was analyzed using one-way analysis of variance, the Tukey studentized range (honestly significant difference) test, linear regression, and the Pearson correlation coefficient. Results: The mean rating for all surgeons was 3.99 (SD 0.92), and the mean number of ratings was 13.43 (SD 20.4). Men had a greater mean rating (4.02) than women (3.91) (P < 0.0001), and DO surgeons had a greater mean rating (4.11) than MD surgeons (3.90) (P < 0.0001). Rating and age were significantly negatively correlated (P < 0.0001), whereas average online rating and number of reviews were significantly positively correlated (P < 0.0001). Discussion: Our analysis suggests that greater online ratings are associated with male sex and the DO degree. In addition, our study found that the number of ratings was positively correlated with greater mean online ratings, whereas older age was negatively correlated with greater mean online ratings.
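A minimal, illustrative sketch of the statistical workflow named in the methods (one-way ANOVA with a Tukey HSD post-hoc test, plus a Pearson correlation) is shown below; the group labels and ratings are made-up values, not the Healthgrades data analyzed in the study.

```python
# Illustrative sketch of the analyses listed above (not the authors' code or data).
import numpy as np
from scipy.stats import f_oneway, pearsonr
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up mean ratings for three hypothetical surgeon groups.
group_a = np.array([4.1, 3.8, 4.5, 4.0, 3.9])
group_b = np.array([4.4, 4.2, 4.6, 4.3, 4.5])
group_c = np.array([3.7, 3.9, 3.6, 4.0, 3.8])

# One-way ANOVA across the groups.
f_stat, p_anova = f_oneway(group_a, group_b, group_c)

# Tukey HSD post-hoc comparison on the pooled data.
ratings = np.concatenate([group_a, group_b, group_c])
labels = ["A"] * len(group_a) + ["B"] * len(group_b) + ["C"] * len(group_c)
tukey = pairwise_tukeyhsd(ratings, labels)

# Pearson correlation between number of reviews and mean rating (made-up counts).
n_reviews = np.array([5, 40, 12, 60, 25, 8, 33, 70, 15, 50, 3, 20, 45, 10, 28])
r, p_corr = pearsonr(n_reviews, ratings)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}")
print(tukey.summary())
print(f"Reviews vs. rating: r={r:.2f}, p={p_corr:.3f}")
```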
16. Assessing the reliability of automatic sentiment analysis tools on rating the sentiment of reviews of NHS dental practices in England. PLoS One 2021;16:e0259797. [PMID: 34910757 PMCID: PMC8673612 DOI: 10.1371/journal.pone.0259797]
Abstract
Background Online reviews may act as a rich source of data to assess the quality of dental practices. Assessing the content and sentiment of reviews on a large scale is time consuming and expensive. Automating the process of assigning sentiment to big data samples of reviews may allow reviews to be used as Patient Reported Experience Measures for primary care dentistry. Aim To assess the reliability of three different online sentiment analysis tools (Amazon Comprehend DetectSentiment API (ACDAPI), Google and Monkeylearn) at rating the sentiment of reviews of dental practices working on National Health Service contracts in the United Kingdom. Methods A Python 3 script was used to mine 15,800 reviews from 4803 unique dental practices on the NHS.uk websites between April 2018 and March 2019. A random sample of 270 reviews was rated by the three sentiment analysis tools. These reviews were also rated by three blinded independent human reviewers, and a pooled sentiment score was assigned. Kappa statistics and polychoric evaluation were used to assess the level of agreement. Disagreements between the automated and human reviewers were qualitatively assessed. Results There was good agreement between the sentiment assigned to reviews by the human reviewers and ACDAPI (k = 0.660). Google (k = 0.706) and Monkeylearn (k = 0.728) showed slightly better agreement at the expense of usability on a massive dataset. There were 33 disagreements in rating between ACDAPI and the human reviewers, of which n = 16 were due to syntax errors, n = 10 were due to misappropriation of the strength of conflicting emotions, and n = 7 were due to a lack of overtly emotive language in the text. Conclusions There is good agreement between the sentiment of an online review assigned by a group of humans and by cloud-based sentiment analysis. This may allow the use of automated sentiment analysis for quality assessment of dental service provision in the NHS.
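A hedged sketch of the two pieces described above: calling a cloud sentiment API (here AWS Comprehend, one of the tools named in the abstract) on review text, then measuring agreement with pooled human labels via Cohen's kappa. The review texts, human labels, and AWS region are placeholders, credentials are assumed to be configured, and this is not the study's actual script.

```python
# Illustrative sketch: cloud sentiment scoring + kappa agreement (not the study's code).
import boto3
from sklearn.metrics import cohen_kappa_score

comprehend = boto3.client("comprehend", region_name="us-east-1")  # region is an assumption

reviews = [
    "Friendly staff and a painless check-up, highly recommend.",
    "Waited an hour past my appointment and felt rushed.",
    "Average visit, nothing special either way.",
]
human_labels = ["POSITIVE", "NEGATIVE", "NEUTRAL"]  # pooled human ratings (placeholders)

machine_labels = []
for text in reviews:
    response = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    machine_labels.append(response["Sentiment"])  # POSITIVE / NEGATIVE / NEUTRAL / MIXED

kappa = cohen_kappa_score(human_labels, machine_labels)
print(f"Cohen's kappa between human and automated sentiment: {kappa:.2f}")
```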
17. Building a Multidisciplinary Academic Surgical Gender-affirmation Program: Lessons Learned. Plast Reconstr Surg Glob Open 2021;9:e3478. [PMID: 33968551 PMCID: PMC8099415 DOI: 10.1097/gox.0000000000003478]
Abstract
Background: Every day, more patients present to hospitals and clinics seeking gender-affirmation care to ameliorate the symptoms of gender dysphoria. To provide a multidisciplinary approach, it is important to offer an integrated clinical program that provides mental health assessment, endocrine therapy, physical therapy, research, and the full spectrum of surgical services devoted to transgender patients. This article describes our experience building a specialized, multidisciplinary, academic, state-of-the-art gender-affirmation program. Methods: Herein, we describe the main and critical components of building a multidisciplinary academic gender-affirmation program. We share the lessons learned from this experience and describe how to overcome some of the obstacles in the process. Results: Building a multidisciplinary academic gender-affirmation program requires an invested team, as each and every member is essential for feedback, referrals, and improving the patient experience. Institutional support is essential and by far the most important component for overcoming obstacles during the process. Having all team members working under the same institution provides the critical components needed to improve outcomes and patient satisfaction. In addition, the collection of prospective data with a well-structured research team provides the information needed to improve clinical services and standardize clinical protocols, while leaving space for innovation. Conclusions: This article describes the steps and experience needed to build a multidisciplinary, holistic, academic gender-affirmation program. We provide the lessons learned during the process to help guide those who intend to start an academic gender-affirmation program.
18. Damanpour S, Nazarian R, Deutsch A, Hosgood HD, Kim J, McLellan BN. Social media activity is associated with higher physician ratings by patients. J Am Acad Dermatol 2020;84:1455-1458. [PMID: 32622896 DOI: 10.1016/j.jaad.2020.06.1015]
Affiliation(s)
- Shadi Damanpour, Department of Medicine, Division of Dermatology, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, New York
- Roya Nazarian, Icahn School of Medicine at Mount Sinai, New York, New York
- Alana Deutsch, Department of Medicine, Division of Dermatology, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, New York
- H Dean Hosgood, Department of Epidemiology and Population Health, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, New York
- Jaehwan Kim, Department of Medicine, Division of Dermatology, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, New York
- Beth N McLellan, Department of Medicine, Division of Dermatology, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, New York
19. Runge NE, Jay JH, Vergara FH, Oni JK. An Analysis of Online Ratings of Hip and Knee Surgeons. J Arthroplasty 2020;35:1432-1436. [PMID: 31973969 DOI: 10.1016/j.arth.2019.12.004]
Abstract
BACKGROUND Public domain physician review websites (PRWs) and personal websites are extremely popular resources that patients use to evaluate physicians before receiving care. Few studies have examined how orthopedic surgeons are rated on PRWs and personal websites. This study examines the online ratings of hip and knee replacement subspecialists. METHODS American Association of Hip and Knee Surgeons (AAHKS) fellows' ratings were examined from October 1, 2018 to December 31, 2018, on Healthgrades.com, Vitals.com, RateMDs.com, Google.com, and personal websites. Number of responses and average ratings (0.0-5.0) were recorded, along with provider gender, years in practice (0-10, 11-20, and 21+), practice type (academic, private), geographic region (NE, SE, MW, SW, W), degree (MD, DO), and fellowship training (yes, no). Kruskal-Wallis testing was performed to determine factors affecting positive surgeon ratings. RESULTS 98.3% (483) of 490 AAHKS surgeons were rated at least once. No significant differences in average ratings were identified between websites. Surgeons in practice 1-10 years had significantly higher ratings than those in practice 11-20 and 21+ years (P < .01). Fellowship-trained surgeons in practice 1-10 years also showed significantly higher ratings. No differences in average ratings were found by gender, practice type, or geographic region. CONCLUSIONS AAHKS surgeons have high average ratings and are rated online frequently. Surgeons in practice 1-10 years had statistically higher overall average ratings. Adult reconstruction fellowship training was also associated with higher average ratings for surgeons in practice 1-10 years. Public domain PRWs and personal websites showed no difference in average ratings.
Affiliation(s)
- Jordan H Jay, Alabama College of Osteopathic Medicine, Dothan, AL
- Franz H Vergara, Orthopaedic Surgical Department, WellStar Health System, Atlanta, GA
- Julius K Oni, Department of Orthopedic Surgery, The Johns Hopkins University, Baltimore, MD
20. Melone G, Brodell J, Hernandez C, Menga E, Balkissoon R, Liu X, Zhang J, Mesfin A. Online ratings of spinal deformity surgeons: analysis of 634 surgeons. Spine Deform 2020;8:17-24. [PMID: 31925764 DOI: 10.1007/s43390-019-00012-4]
Abstract
STUDY DESIGN Observational study. OBJECTIVE To evaluate the online ratings of spine deformity surgeons and the variables that may affect those ratings. Physician review websites (PRWs) continue to be an emerging trend in the US across all specialties. Previous literature with smaller sample sizes revealed that most spine surgeons are rated on at least one PRW. To date, the online ratings of spinal deformity surgeons have not been evaluated. MATERIALS AND METHODS A review of the 2017 Scoliosis Research Society (SRS) Fellowship directory for active fellows and candidate members yielded 634 active members. Online ratings from five PRWs were recorded and scaled from 0 to 100. Using SPSS, one-way analysis of variance was used to compare differences between multiple groups, and a t test was used to compare differences between two groups. Significance was set at p < 0.05. RESULTS Most surgeons (98.7%) were rated on at least one PRW. Surgeons in academic or hospital practice had higher ratings than those in private practice (83.4 vs. 78.8 and 83.7 vs. 78.8, p < 0.001). Surgeons with 0-5 years of experience had higher ratings than more experienced surgeons (p < 0.001); however, surgeons in practice for 0-5 years also had fewer reviews than their more experienced colleagues (p < 0.05). We found no differences in ratings based on sex, specialty, or region. The largest differences in ratings between highly and poorly rated spine surgeons were in areas pertaining to the doctor-patient relationship (answering questions, time spent with the patient). CONCLUSION The majority (98.7%) of SRS surgeons are rated on at least one PRW. SRS surgeons in practice for 0-5 years have higher ratings than more experienced surgeons but were rated by fewer patients than their more experienced counterparts. Higher ratings were associated with variables pertaining to the patient-doctor relationship. LEVEL OF EVIDENCE IV.
Affiliation(s)
- James Brodell, University of Texas Medical Branch, Galveston, TX, USA
- Cesar Hernandez, Department of Orthopaedics and Rehabilitation, University of Rochester School of Medicine and Dentistry, 601 Elmwood Ave, Box 665, Rochester, NY, 14642, USA
- Emmanuel Menga, Department of Orthopaedics and Rehabilitation, University of Rochester School of Medicine and Dentistry, 601 Elmwood Ave, Box 665, Rochester, NY, 14642, USA
- Rishi Balkissoon, Department of Orthopaedics and Rehabilitation, University of Rochester School of Medicine and Dentistry, 601 Elmwood Ave, Box 665, Rochester, NY, 14642, USA
- Ximing Liu, Department of Orthopedics, Wuhan General Hospital of the Chinese People's Liberation Army, Wuhan, 430070, Hubei, China
- Jun Zhang, Department of Orthopedics, Zhejiang Provincial People's Hospital, Hangzhou, 310014, Zhejiang, China
- Addisu Mesfin, Department of Orthopaedics and Rehabilitation, University of Rochester School of Medicine and Dentistry, 601 Elmwood Ave, Box 665, Rochester, NY, 14642, USA
21.
Abstract
Background: Social media is an effective tool to enhance reputation and brand recognition and is used by more than 40% of patients when selecting a physician. This study aimed to evaluate the use of social media in hand surgeons' practices and to assess the impact that a surgeon's social media presence has on physician-rating website (PRW) scores. Methods: Randomly selected hand surgeons from across the United States were identified. Sequential searches were performed using the physician's name plus the respective social media platform (Facebook, LinkedIn, YouTube, Twitter, Instagram, personal website, group website). A comprehensive social media utilization index (SMI) was created for each surgeon. Using descriptive statistics, we assessed the effect of social media on PRW scores. Results: A total of 116 board-certified hand surgeons were included in our study. Within this sample, 10.3% used Facebook, 1.7% used Twitter, 25.8% used YouTube, 22.4% used LinkedIn, 27.5% used a personal website, 36.2% used a group website, and none used Instagram. The average SMI was 1.53 ± 1.42 (range 0-6). Physicians with a personal website received higher Healthgrades scores than those without one (P < .05). Analysis of the SMI demonstrated that hand surgeons with an index below 3 received lower Healthgrades scores than those with an SMI above 3 (P < .001). Conclusion: Hand surgeons underutilize social media platforms in their practice. A personal website is the single most important social media platform for improving Healthgrades scores among hand surgeons.
22. Rotman LE, Alford EN, Shank CD, Dalgo C, Stetler WR. Is There an Association Between Physician Review Websites and Press Ganey Survey Results in a Neurosurgical Outpatient Clinic? World Neurosurg 2019;132:e891-e899. [PMID: 31382063 DOI: 10.1016/j.wneu.2019.07.193]
Abstract
OBJECTIVE Recent studies suggest a poor association between physician review websites and the validated metrics used by the Centers for Medicare and Medicaid Services. The purpose of this study was to evaluate the association between online and outpatient Press Ganey (PG) measures of patient satisfaction in a neurosurgical department. METHODS We obtained PG survey results from one large academic institution's outpatient neurosurgery clinic. Popular physician review websites were searched for each of the faculty captured in the PG data. Average physician rating and percent Top Box scores were calculated for each physician. PG data were separated into new and established clinic visits for subset analysis. Spearman's rank correlation coefficients were calculated to determine associations. RESULTS Twelve neurosurgeons were included. Established patients demonstrated greater PG scores as compared with new patients, with an average physician rating increase of 0.55 and an average Top Box increase of 12.5%. Online physician ratings were found to demonstrate strong agreement with PG scores for the entire PG population, new patient subset, and established patient subset (ρ = 0.77-0.79, P < 0.05). Online Top Box scores demonstrated moderate agreement with overall PG Top Box scores (ρ = 0.59, P = 0.042), moderate agreement with the new patient population Top Box scores (ρ = 0.56, P = 0.059), and weak agreement with established patient population Top Box scores (ρ = 0.38, P = 0.217). CONCLUSIONS Our findings demonstrated a strong agreement between PG ratings and online physician ratings and a poorer correlation when comparing PG Top Box scores with online physician Top Box scores, particularly in the established patient population.
Affiliation(s)
- Lauren E Rotman, Department of Neurosurgery, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Elizabeth N Alford, Department of Neurosurgery, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Christopher D Shank, Department of Neurosurgery, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Caitlin Dalgo, Department of Neurosurgery, University of Alabama at Birmingham, Birmingham, Alabama, USA
- William R Stetler, Department of Neurosurgery, University of Alabama at Birmingham, Birmingham, Alabama, USA
23. Murphy GP, Radadia KD, Breyer BN. Online physician reviews: is there a place for them? Risk Manag Healthc Policy 2019;12:85-89. [PMID: 31191060 PMCID: PMC6526774 DOI: 10.2147/rmhp.s170381]
Abstract
Web-based physician ratings are increasingly popular but imperfect proxies for clinical competence. Yet they provide valuable information to patients and providers when taken in proper context. Providers should embrace these reviews and use them to enact positive change that improves the quality of their patients' experience. Patients should recognize the limitations of online ratings, particularly when the number of reviews is small, and be discerning about the reasons behind a review.
Collapse
Affiliation(s)
- Gregory P Murphy
- Division of Urologic Surgery, Washington University, St. Louis, MO, USA
| | - Kushan D Radadia
- Division of Urology, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
| | - Benjamin N Breyer
- Department of Urology, University of California San Francisco, San Francisco, CA, USA
| |
Collapse
|
24
|
Haglin JM, Eltorai AEM, Kalagara S, Kingrey B, Durand WM, Aidlen JP, Daniels AH. Patient-Rated Trust of Spine Surgeons: Influencing Factors. Global Spine J 2018; 8:728-732. [PMID: 30443484 PMCID: PMC6232710 DOI: 10.1177/2192568218767385] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
STUDY DESIGN Descriptive analysis using publicly available data. OBJECTIVES The purpose of this study was 2-fold: to assess patient-rated trustworthiness of spine surgeons as a whole and to assess whether academic proclivity, region of practice, or physician sex affects ratings of patient-perceived trust. METHODS Orthopedic spine surgeons were randomly selected from the North American Spine Society directory. Surgeon profiles on 3 online physician rating websites (HealthGrades, Vitals, and RateMDs) were analyzed for patient-reported trustworthiness. Whether or not the surgeon had published a PubMed-indexed paper in 2016 was assessed with respect to trustworthiness scores, as was the total number of publications. Individuals with >300 publications were excluded because of the likelihood of repeated names. RESULTS Neither recent publication nor total number of publications had any relationship with online patient ratings of trustworthiness across the surgeons in this study. Region of practice likewise had no influence on mean trust ratings, although varying levels of correlation were observed across regions. Furthermore, there was no difference in trust scores between male and female surgeons. CONCLUSION Academic proclivity, measured by indexed publications, does not correlate with patient-perceived physician trustworthiness among spine surgeons as reported on physician review websites. Region of practice within the United States also has no influence on these trust scores, and there is no difference in trust score between female and male spine surgeons. This study also highlights the increasing utility of physician rating websites in spine surgery for evaluating and monitoring patient perception.
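A correlation of this kind (publication counts against mean trust ratings) is easy to sketch; the snippet below uses invented numbers, and the plain Pearson correlation is an assumption since the abstract does not name the statistic used.

```python
# Hypothetical sketch: does publication count track online trust ratings?
# A null result like the one reported above would give r close to zero.
from scipy.stats import pearsonr

publications = [3, 0, 12, 45, 7, 120, 0, 22, 5, 60, 15, 2]
trust_rating = [4.4, 4.6, 4.2, 4.5, 4.7, 4.3, 4.5, 4.6, 4.4, 4.2, 4.5, 4.6]

r, p = pearsonr(publications, trust_rating)
print(f"r = {r:.3f}, p = {p:.3f}")
```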
Collapse
Affiliation(s)
| | | | | | - Brandon Kingrey
- Warren Alpert Medical School of Brown University, Providence, RI, USA
| | - Wessley M. Durand
- Warren Alpert Medical School of Brown University, Providence, RI, USA
| | | | - Alan H. Daniels
- Warren Alpert Medical School of Brown University, Providence, RI, USA
| |
Collapse
|
25
|
Burn MB, Lintner DM, Cosculluela PE, Varner KE, Liberman SR, McCulloch PC, Harris JD. Physician Rating Scales Do Not Accurately Rate Physicians. Orthopedics 2018; 41:e445-e456. [PMID: 29658974 DOI: 10.3928/01477447-20180409-06] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/23/2017] [Accepted: 07/31/2017] [Indexed: 02/03/2023]
Abstract
The purpose of this study was to determine the proportion of questions used by online physician rating scales to directly rate physicians themselves. A systematic review was performed of online, patient-reported physician rating scales. Fourteen websites were identified containing patient-reported physician rating scales, with the most common questions pertaining to office staff courtesy, wait time, overall rating (entered, not calculated), trust/confidence in physician, and time spent with patient. Overall, 28% directly rated the physician, 48% rated both the physician and the office, and 24% rated the office alone. There is great variation in the questions used, and most fail to directly rate physicians themselves. [Orthopedics. 2018; 41(4):e445-e456.].
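The classification above (questions that rate the physician, the office, or both) amounts to a simple tally; a small sketch with a hypothetical hand-coded question list follows.

```python
# Hypothetical sketch: tally what share of rating-scale questions target the
# physician, the office, or both. The question list is illustrative only.
from collections import Counter

questions = [
    ("office staff courtesy", "office"),
    ("wait time", "office"),
    ("trust/confidence in physician", "physician"),
    ("overall rating", "both"),
    ("time spent with patient", "both"),
]

counts = Counter(category for _, category in questions)
total = len(questions)
for category, n in counts.items():
    print(f"{category}: {n}/{total} ({n / total:.0%})")
```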
Collapse
|
26
|
Abstract
STUDY DESIGN Observational study. OBJECTIVE To evaluate the online ratings of spine surgeons and the variables that may affect those ratings. SUMMARY OF BACKGROUND DATA Physician review Web sites (PRWs) are rapidly growing for-profit businesses. Most orthopedic surgeons, like other surgical specialists, are rated on at least one PRW. To date, the online ratings of spine surgeons have not been evaluated. METHODS In April 2016, Cervical Spine Research Society surgeons were searched on five physician rating Web sites: "healthgrade.com," "vitals.com," "ratemd.com," "webmd.com," and "yelp.com." Numeric ratings from the PRWs were standardized on a scale of 0 to 100, with a higher score indicating more positive ratings. Sex, practice sector (academic or private), specialty (orthopedics or neurosurgery), geographic location, and years of practice were also collected. RESULTS A total of 209 spine surgeons were included, of whom 208 (99.5%) were rated at least once on one of the five PRWs. The average number of ratings per surgeon was 2.96, and the average rating was 80 (range 40-100). There were 4 female (1.9%) and 204 male (98.1%) surgeons; 121 (58.2%) were in academic practice and 87 (41.8%) in private practice; 175 (84.1%) were orthopedic surgeons and 33 (15.9%) were neurosurgeons. Most surgeons were Caucasian (163; 78.4%) and worked in the South or Northeast (135; 64.9%). Surgeons in academic practice had significantly higher ratings (81.6 vs. 77.65; P = 0.026). Number of years in practice was significantly associated with ratings (P = 0.0003), with those in practice for 21 or more years having significantly lower ratings. CONCLUSION In this first study evaluating the online ratings of spine surgeons, we found that 99.5% of spine surgeons had at least one rating on a PRW. The average score of 80 indicated mostly positive ratings. Being in practice for 20 years or less and being in academic practice were significantly associated with higher ratings. LEVEL OF EVIDENCE 4.
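Standardizing ratings collected on different sites' scales onto a common 0-100 scale, as described in the methods, is a simple linear rescaling; the sketch below is illustrative and not the study's code.

```python
# Minimal sketch: map ratings from heterogeneous scales onto 0-100,
# with higher values indicating more positive ratings.
def standardize(rating: float, scale_min: float, scale_max: float) -> float:
    """Linearly rescale a rating onto 0-100."""
    return 100 * (rating - scale_min) / (scale_max - scale_min)

print(standardize(4.0, 1, 5))     # 4 of 5 stars   -> 75.0
print(standardize(3.2, 0, 4))     # 3.2 on a 0-4   -> 80.0
print(standardize(80, 0, 100))    # already 0-100  -> 80.0
```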
Collapse
|
27
|
Calixto NE, Chiao W, Durr ML, Jiang N. Factors Impacting Online Ratings for Otolaryngologists. Ann Otol Rhinol Laryngol 2018; 127:521-526. [PMID: 29882425 DOI: 10.1177/0003489418778062] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
OBJECTIVE To identify factors associated with online patient ratings and comments for a nationwide sample of otolaryngologists. METHODS Ratings, demographic information, and written comments were obtained for a random sample of otolaryngologists from HealthGrades.com and Vitals.com. An Online Presence Score (OPS) was based on 10 criteria, including a professional website and social media profiles. Regression analyses identified factors associated with higher ratings. We evaluated correlations between OPS and other attributes and star rating, and used chi-square tests to evaluate content differences between positive and negative comments. RESULTS On linear regression, higher OPS was associated with higher ratings on HealthGrades and Vitals; higher ratings were also associated with younger age on Vitals and less experience on HealthGrades. However, detailed correlation analyses showed only weak correlation between OPS and rating; age and graduation year also showed low correlation with ratings. Negative comments were more likely to focus on surgeon-independent factors or poor bedside manner. CONCLUSION Although younger otolaryngologists with a greater online presence tend to have higher ratings, the weak correlations suggest that age and online presence have only a small impact on the content found on ratings websites. While most written comments are positive, deficiencies in bedside manner or other physician-independent factors tend to elicit negative comments.
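A regression of star rating on an online-presence score and physician age, of the kind described above, could look like the following sketch; the data are simulated and the coefficients are arbitrary, so this shows only the shape of the analysis, not the study's results.

```python
# Illustrative sketch (simulated data): regress star rating on an
# online-presence score (OPS) and age, then inspect coefficients and p-values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
ops = rng.integers(0, 11, size=60)                      # OPS on a 0-10 scale
age = rng.integers(32, 70, size=60)
rating = 3.5 + 0.05 * ops - 0.01 * (age - 50) + rng.normal(0, 0.4, size=60)

X = sm.add_constant(np.column_stack([ops, age]))
fit = sm.OLS(rating, X).fit()
print(fit.params)    # intercept, OPS coefficient, age coefficient
print(fit.pvalues)
```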
Collapse
Affiliation(s)
- Nathaniel E Calixto
- University of California, Irvine School of Medicine, Irvine, California, USA
| | - Whitney Chiao
- University of California, San Francisco School of Medicine, San Francisco, California, USA
| | - Megan L Durr
- Head and Neck Surgery Department, Kaiser Permanente Oakland Medical Center, Oakland, California, USA
| | - Nancy Jiang
- Head and Neck Surgery Department, Kaiser Permanente Oakland Medical Center, Oakland, California, USA
| |
Collapse
|
28
|
Jack RA, Burn MB, McCulloch PC, Liberman SR, Varner KE, Harris JD. Does experience matter? A meta-analysis of physician rating websites of Orthopaedic Surgeons. Musculoskelet Surg 2018; 102:63-71. [PMID: 28853024 DOI: 10.1007/s12306-017-0500-1] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2017] [Accepted: 08/23/2017] [Indexed: 06/07/2023]
Abstract
PURPOSE To perform a systematic review evaluating online ratings of Orthopaedic Surgeons to determine: (1) the number of reviews per surgeon by website, (2) whether the number of reviews and rate of review acquisition correlated with years in practice, and (3) whether the use of ratings websites varied based on the surgeons' geographic region of practice. METHODS The USA was divided into nine geographic regions, and the most populous city in each region was selected. HealthGrades and the American Board of Orthopaedic Surgery (ABOS) database were used to identify and screen (respectively) all Orthopaedic Surgeons within each of these nine cities. These surgeons were divided into three "age" groups by years since board certification (0-10, 10-20, and 20-30 years, assigned as Groups 1, 2, and 3, respectively). An equal number of surgeons were randomly selected from each region for final analysis. The online profiles for each surgeon were reviewed on four online physician rating websites (PRWs: HealthGrades, Vitals, RateMDs, Yelp) for the number of available patient reviews. Descriptive statistics, analysis of variance (ANOVA), and Pearson correlations were used. RESULTS Using HealthGrades, 2802 "Orthopaedic Surgeons" were identified in the nine cities; however, 1271 (45%) of these were not found in the ABOS board certification database. After randomization, a total of 351 surgeons were included in the final analysis. For these 351 surgeons, the mean number of reviews per surgeon across all four websites was 9.0 ± 14.8 (range 0-184). The mean number of reviews did not differ between the three age groups (p > 0.05): 8.7 ± 14.4, 10.3 ± 18.3, and 8.0 ± 10.8 for Groups 1, 2, and 3, respectively. However, the rate at which reviews were obtained (i.e., reviews per surgeon per year) was significantly higher (p < 0.001) in Group 1 (2.6 ± 7.7 reviews per year) compared to Group 2 (1.4 ± 2.4) and Group 3 (1.1 ± 1.4). There was no correlation between the number of reviews and years in practice (R < 0.001), and there was a poor correlation between the number of reviews and regional population (R = 0.199). CONCLUSIONS The number of reviews per surgeon did not differ significantly between the three defined age groups based on years in practice. However, surgeons with less than 10 years in practice were accumulating reviews at a significantly higher rate. Notably, nearly half of the "Orthopaedic Surgeons" listed were not ABOS-certified Orthopaedic Surgeons.
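The group comparison described above (number of reviews across three experience groups) is a one-way ANOVA; a minimal sketch with invented counts follows, including the review-acquisition rate used in the abstract.

```python
# Minimal sketch (invented data): one-way ANOVA on reviews per surgeon across
# three groups defined by years since board certification, plus the
# review-acquisition rate (reviews per year in practice).
from scipy.stats import f_oneway

group1 = [4, 12, 0, 25, 7, 3, 9]    # 0-10 years since certification
group2 = [6, 18, 2, 30, 5, 11, 8]   # 10-20 years
group3 = [3, 15, 1, 22, 6, 9, 7]    # 20-30 years

f_stat, p = f_oneway(group1, group2, group3)
print(f"F = {f_stat:.2f}, p = {p:.3f}")

years_in_practice = [2, 5, 8, 3, 9, 4, 6]
rates_group1 = [n / y for n, y in zip(group1, years_in_practice)]
print(rates_group1)   # reviews per year for each Group 1 surgeon
```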
Collapse
Affiliation(s)
- R A Jack
- Department of Orthopedics and Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Floor 25, Houston, TX, 77030, USA
| | - M B Burn
- Department of Orthopedics and Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Floor 25, Houston, TX, 77030, USA
| | - P C McCulloch
- Department of Orthopedics and Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Floor 25, Houston, TX, 77030, USA
| | - S R Liberman
- Department of Orthopedics and Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Floor 25, Houston, TX, 77030, USA
| | - K E Varner
- Department of Orthopedics and Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Floor 25, Houston, TX, 77030, USA
| | - J D Harris
- Department of Orthopedics and Sports Medicine, Houston Methodist Hospital, 6445 Main Street, Outpatient Center, Floor 25, Houston, TX, 77030, USA.
| |
Collapse
|
29
|
Liu JJ, Matelski JJ, Bell CM. Scope, Breadth, and Differences in Online Physician Ratings Related to Geography, Specialty, and Year: Observational Retrospective Study. J Med Internet Res 2018. [PMID: 29514775 PMCID: PMC5863010 DOI: 10.2196/jmir.7475] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
Background Physician ratings websites have emerged as a novel forum for consumers to comment on their health care experiences. Little is known about such ratings in Canada. Objective We investigated the scope of, and trends in, online physician ratings in Canada by specialty, geographic region, and time, using a national data source from the country's leading physician-rating website. Methods This observational retrospective study used online ratings data from Canadian physicians (January 2005-September 2013; N=640,603). For specialty, province, and year of rating, we assessed whether physicians were likely to be rated favorably by using the proportion of ratings greater than the overall median rating. Results In total, 57,412 unique physicians had 640,603 individual ratings. Overall, ratings were positive (mean 3.9, SD 1.3). On average, each physician had 11.2 (SD 10.1) ratings. By comparing specialties with Canadian Institute for Health Information physician population numbers over our study period, we inferred that certain specialties (obstetrics and gynecology, family practice, surgery, and dermatology) were more commonly rated, whereas others (pathology, radiology, genetics, and anesthesia) were less represented. Ratings varied by specialty: cardiac surgery, nephrology, genetics, and radiology were more likely to be rated in the top 50th percentile, whereas addiction medicine, dermatology, neurology, and psychiatry were more often rated in the lower 50th percentile. Regarding geographic practice location, ratings were more likely to be favorable for physicians practicing in eastern provinces than in western and central Canada. Regarding year, the absolute number of ratings peaked in 2007 before stabilizing and decreasing by 2013; ratings were most likely to be positive in 2007 and again in 2013. Conclusions Physician-rating websites are a relatively novel and valuable source of provider-level information on patient satisfaction and the patient experience. It is important to understand the breadth and scope of such ratings, particularly regarding specialty, geographic practice location, and changes over time.
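The percentile comparison used above (the share of a specialty's ratings that exceed the overall median) can be sketched in a few lines; the data below are invented.

```python
# Minimal sketch (invented data): for each specialty, the share of its
# ratings that exceed the overall median rating.
from collections import defaultdict
from statistics import median

ratings = [
    ("cardiac surgery", 4.8), ("cardiac surgery", 4.5),
    ("dermatology", 3.2), ("dermatology", 4.1),
    ("psychiatry", 3.0), ("psychiatry", 4.6),
]

overall_median = median(r for _, r in ratings)
by_specialty = defaultdict(list)
for specialty, r in ratings:
    by_specialty[specialty].append(r)

for specialty, values in by_specialty.items():
    share = sum(r > overall_median for r in values) / len(values)
    print(f"{specialty}: {share:.0%} of ratings above the overall median")
```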
Collapse
Affiliation(s)
- Jessica Janine Liu
- Department of Medicine, University of Toronto, University Health Network, Toronto, ON, Canada
| | | | - Chaim M Bell
- Sinai Health System, Department of Medicine, University of Toronto, Toronto, ON, Canada
| |
Collapse
|
30
|
Online physician review websites poorly correlate to a validated metric of patient satisfaction. J Surg Res 2018; 227:1-6. [PMID: 29804840 DOI: 10.1016/j.jss.2018.01.037] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2017] [Revised: 01/15/2018] [Accepted: 01/25/2018] [Indexed: 11/22/2022]
Abstract
BACKGROUND Physician review websites such as Vitals and Healthgrades are an increasingly popular tool for patients choosing providers. We hypothesized that the scores on these websites poorly represent true patient satisfaction when compared to a validated survey instrument. METHODS Answers from Vitals and Healthgrades online surveys were compared to the Press Ganey Medical Practice Survey (PGMPS) for 200 faculty members at a university hospital for FY15. Weighted Pearson's correlation was used to compare Healthgrades and Vitals scores to the PGMPS. RESULTS Although statistically significant, both Vitals and Healthgrades had very low correlations with the PGMPS, with weighted coefficients of 0.18 (95% confidence interval: 0.02-0.34, P = 0.025) and 0.27 (95% confidence interval: 0.12-0.42, P < 0.001), respectively. CONCLUSIONS Online physician rating websites such as Vitals and Healthgrades correlate poorly with the PGMPS, a validated measure of patient satisfaction. Patients should be aware of these limitations and, consequently, should have access to the most accurate measures of patient satisfaction.
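A weighted Pearson correlation, with weights such as each physician's review volume, is straightforward to compute by hand; the sketch below gives the generic formula with invented numbers and is not the paper's code.

```python
# Generic weighted Pearson correlation (invented data); weights here are the
# number of online reviews per physician.
import numpy as np

def weighted_pearson(x, y, w):
    x, y, w = map(np.asarray, (x, y, w))
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    return cov / np.sqrt(np.sum(w * (x - mx) ** 2) * np.sum(w * (y - my) ** 2))

press_ganey = [4.8, 4.1, 4.6, 3.9, 4.4]
online      = [4.9, 3.5, 4.7, 4.2, 4.0]
n_reviews   = [40, 3, 25, 7, 12]

print(weighted_pearson(press_ganey, online, n_reviews))
```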
Collapse
|
31
|
Huis In 't Veld EA, Canales FL, Furnas HJ. The Impact of a Plastic Surgeon's Gender on Patient Choice. Aesthet Surg J 2017; 37:466-471. [PMID: 27913412 PMCID: PMC5434485 DOI: 10.1093/asj/sjw180] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Background In the patient-driven market of aesthetic surgery, an understanding of the factors that patients consider in their choice of surgeon can inform the individual plastic surgeon's marketing strategy. Previous studies have investigated patient gender preferences for physicians in other specialties, but none has investigated whether patients consider gender when choosing a plastic surgeon. Objectives The purpose of this study was to determine the impact of a plastic surgeon's gender on patient choice. Methods A prospective study was conducted in a single private practice of two plastic surgeons, one male and one female, closely matched in training, experience, and reputation. Two hundred consecutive patients calling for a consultation were asked whether they preferred a male or female doctor; their preference, age, and area(s) of interest were recorded. Results All patients were women. Nearly half (46%) had no gender preference, 26% requested a female surgeon, and 1% requested a male surgeon. The preference for a female surgeon was significant (binomial test: P < 0.001). The remaining 27% requested a specific doctor, with slightly more requesting the male surgeon by name (53.7%) than the female surgeon by name (46.3%), a difference that was not statistically significant (P = 0.683). Conclusions Most female patients interested in aesthetic surgery have no gender preference. Of those who do, nearly all request a female plastic surgeon. More important than a plastic surgeon's gender, however, is the surgeon's reputation.
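The significance test reported for the gender preference is a binomial test against a 50/50 split among patients who expressed a preference; a short sketch using the abstract's approximate counts (26% vs. 1% of 200 callers) follows. It assumes scipy 1.7 or later for binomtest.

```python
# Sketch of the binomial test described above: among callers who expressed a
# gender preference, is the female/male split different from 50/50?
# Counts are rounded from the abstract's percentages of 200 callers.
from scipy.stats import binomtest  # requires scipy >= 1.7

prefer_female = 52   # ~26% of 200
prefer_male = 2      # ~1% of 200

result = binomtest(prefer_female, prefer_female + prefer_male, p=0.5)
print(result.pvalue)  # expect p well below 0.001
```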
Collapse
Affiliation(s)
| | | | - Heather J Furnas
- Adjunct Assistant Professor, Division of Plastic Surgery, Department of Surgery, Stanford University, Stanford, CA, USA
| |
Collapse
|
32
|
Hawkins CM, DeLaO AJ, Hung C. Social Media and the Patient Experience. J Am Coll Radiol 2016; 13:1615-1621. [DOI: 10.1016/j.jacr.2016.09.006] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2016] [Revised: 09/13/2016] [Accepted: 09/15/2016] [Indexed: 11/25/2022]
|