1. Artz N, Dalton J, Ramanathan R, Lin RT, Sadhwani S, Como CJ, Oyekan A, Tang YM, Li V, Nwankwo J, Lee JY, Shaw JD. Characterizing Negative Online Reviews of Spine Surgeons. Spine (Phila Pa 1976) 2024; 49:E154-E163. PMID: 38351707; DOI: 10.1097/brs.0000000000004962.
Abstract
DESIGN Retrospective review. OBJECTIVE Characterize negative reviews of spine surgeons in the United States. SUMMARY Physician rating websites significantly influence patients' selection of doctors. Negative experiences are shaped by a range of clinical and nonclinical factors, as well as by geography and practice structure. The purpose of this study was to evaluate and categorize negative reviews of spine surgeons in the United States, with a focus on surgical versus nonsurgical reviewers. METHODS Spine surgeons were selected from available online professional society membership directories. A search for reviews was performed on Healthgrades.com, Vitals.com, and RateMDs.com for the past 10 years. Free-response reviews were coded by complaint, and qualitative analysis was performed. χ2 and Fisher exact tests were used to compare categorical variables, and multiple comparisons were adjusted with the Benjamini-Hochberg correction. A binary logistic regression model was fit for the top three most-mentioned nonclinical and clinical complaint labels. A P-value <0.05 was considered statistically significant. RESULTS A total of 16,695 online reviews were evaluated, including 1690 one-star reviews (10.1%). Among one-star reviews, 64.7% were written by nonsurgical patients and 35.3% by surgical patients. Nonclinical and clinical comments constituted 54.9% and 45.1% of reviews, respectively. Surgeons in the South had more "bedside manner" comments (43.3%, P<0.0001), while Northeast surgeons had more "poor surgical outcome" remarks than all other geographic regions (14.4%, P<0.001). Practicing in the South and Northeast were independent predictors of complaints about "bedside manner" and "poor surgical outcome," respectively. CONCLUSION Most one-star reviews of spine surgeons were attributed to nonsurgical patients, who tended to be unsatisfied with nonclinical factors, especially "bedside manner." However, there was substantial geographic variation. These results suggest that spine surgeons could benefit from focusing on nonclinical factors (bedside manner), especially among nonoperative patients, and that regional nuances should be considered in delivering spine care. LEVEL OF EVIDENCE Level 5.
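The methods above adjust multiple comparisons with the Benjamini-Hochberg correction. As a hedged sketch of that step-up procedure (the p-values below are illustrative, not the study's data), in Python:

```python
# Hedged sketch of the Benjamini-Hochberg step-up procedure referenced in the
# methods above. The p-values are illustrative, not the study's data.
def benjamini_hochberg(p_values):
    """Return BH-adjusted p-values (q-values) in the original input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotone q-values.
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        q = min(prev, p_values[i] * m / (rank + 1))
        adjusted[i] = q
        prev = q
    return adjusted

print([round(q, 4) for q in benjamini_hochberg([0.01, 0.04, 0.03, 0.20])])
# -> [0.04, 0.0533, 0.0533, 0.2]
```

Each raw p-value is scaled by m/rank and then capped by the next-larger adjusted value, which is why a significance threshold of 0.05 applied to the adjusted values controls the false discovery rate rather than the family-wise error rate.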
Affiliation(s)
- Nicolas Artz: Department of Orthopaedic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA; Pittsburgh Orthopaedic Spine Research (POSR) Group, Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, PA
- Jonathan Dalton: Department of Orthopaedic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA; Pittsburgh Orthopaedic Spine Research (POSR) Group, Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, PA; Orland Bethel Family Musculoskeletal Research Center (BMRC), Pittsburgh, PA
- Rahul Ramanathan: Department of Orthopaedic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA; Pittsburgh Orthopaedic Spine Research (POSR) Group, Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, PA; Orland Bethel Family Musculoskeletal Research Center (BMRC), Pittsburgh, PA
- Ryan T Lin: Department of Orthopaedic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA; Pittsburgh Orthopaedic Spine Research (POSR) Group, Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, PA
- Shaan Sadhwani: Department of Orthopaedic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA; Pittsburgh Orthopaedic Spine Research (POSR) Group, Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, PA
- Christopher J Como: Department of Orthopaedic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA; Pittsburgh Orthopaedic Spine Research (POSR) Group, Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, PA; Orland Bethel Family Musculoskeletal Research Center (BMRC), Pittsburgh, PA
- Anthony Oyekan: Department of Orthopaedic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA; Pittsburgh Orthopaedic Spine Research (POSR) Group, Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, PA; Orland Bethel Family Musculoskeletal Research Center (BMRC), Pittsburgh, PA
- Yunting Melissa Tang: Department of Orthopaedic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA; Pittsburgh Orthopaedic Spine Research (POSR) Group, Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, PA; Orland Bethel Family Musculoskeletal Research Center (BMRC), Pittsburgh, PA
- Vivian Li: Department of Orthopaedic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA; Pittsburgh Orthopaedic Spine Research (POSR) Group, Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, PA
- Josephine Nwankwo: Department of Orthopaedic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA; Pittsburgh Orthopaedic Spine Research (POSR) Group, Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, PA
- Joon Y Lee: Department of Orthopaedic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA; Pittsburgh Orthopaedic Spine Research (POSR) Group, Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, PA; Orland Bethel Family Musculoskeletal Research Center (BMRC), Pittsburgh, PA
- Jeremy D Shaw: Department of Orthopaedic Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA; Pittsburgh Orthopaedic Spine Research (POSR) Group, Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, PA; Orland Bethel Family Musculoskeletal Research Center (BMRC), Pittsburgh, PA
2. Kim JK, Tawk K, Kim JM, Shahbaz H, Lipton JA, Haidar YM, Tjoa T, Abouzari M. Online ratings and narrative comments of American Head and Neck Society surgeons. Head Neck 2024; 46:2508-2516. PMID: 38488221; PMCID: PMC11401960; DOI: 10.1002/hed.27743.
Abstract
BACKGROUND We analyzed online rating scores and comments of head and neck surgeons to understand the factors that contribute to higher ratings. METHODS Numerical ratings and comments for American Head and Neck Society physicians were extracted from Healthgrades, Vitals, RateMDs, and Yelp, with narrative comments categorized by content. Physician practice location, education, and residency training were also compiled. RESULTS Patient ratings were significantly higher with supportive staff and affable physician demeanor but dropped significantly with longer wait times and difficulties scheduling appointments or follow-ups. Physician education and postgraduate training did not significantly affect ratings. CONCLUSION Online ratings and comments correlated with modifiable factors in clinical practice and may be informative in understanding patient needs.
Affiliation(s)
- Joshua K Kim: Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA; School of Medicine, Duke University, Durham, North Carolina, USA
- Karen Tawk: Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA
- Jonathan M Kim: Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA
- Hady Shahbaz: Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA
- Joshua A Lipton: Department of Computer Science, University of California Irvine, Irvine, California, USA
- Yarah M Haidar: Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA
- Tjoson Tjoa: Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA
- Mehdi Abouzari: Department of Otolaryngology - Head and Neck Surgery, University of California Irvine, Irvine, California, USA
3. Guetz B, Bidmon S. The Credibility of Physician Rating Websites: A Systematic Literature Review. Health Policy 2023; 132:104821. PMID: 37084700; DOI: 10.1016/j.healthpol.2023.104821.
Abstract
OBJECTIVES Increasingly, the credibility of online reviews is drawing critical attention due to the lack of control mechanisms, the constant debate about fake reviews, and, last but not least, current developments in the field of artificial intelligence. For this reason, the aim of this study was to examine the extent to which assessments recorded on physician rating websites (PRWs) are credible, based on a comparison to other evaluation criteria. METHODS Following the PRISMA guidelines, a comprehensive literature search was conducted across different scientific databases. Data were synthesized by comparing individual statistical outcomes, objectives, and conclusions. RESULTS The chosen search strategy led to a database of 36,755 studies, of which 28 were ultimately included in the systematic review. The literature review yielded mixed results regarding the credibility of PRWs. While seven publications supported the credibility of PRWs, six publications found no correlation between PRWs and alternative datasets. Fifteen studies reported mixed results. CONCLUSIONS This study has shown that ratings on PRWs seem to be credible when relying primarily on patients' perception. However, these portals seem inadequate for representing alternative comparative measures such as the medical quality of physicians. For health policy makers, our results show that decisions based on patients' perceptions may be well supported by data from PRWs. For all other decisions, however, PRWs do not seem to contain sufficiently useful data.
Affiliation(s)
- Bernhard Guetz: Department of Marketing and International Management, Alpen-Adria-Universitaet Klagenfurt, Universitaetsstrasse 65-67, Klagenfurt am Woerthersee, 9020, Austria
- Sonja Bidmon: Department of Marketing and International Management, Alpen-Adria-Universitaet Klagenfurt, Universitaetsstrasse 65-67, Klagenfurt am Woerthersee, 9020, Austria
4. Analysis of Negative Reviews of Orthopedic Oncology Surgeons: An Investigation of Reviews from Healthgrades, Vitals, and Google. Sarcoma 2022; 2022:4351427. DOI: 10.1155/2022/4351427.
Abstract
Background. Physician review websites (PRWs) are increasing in usage and popularity. Our purpose is to characterize one-star reviews of orthopedic oncology surgeons to understand factors in healthcare that contribute to patient satisfaction. Methods. Orthopedic oncology surgeons were randomly selected from the Musculoskeletal Tumor Society. A search for one-star reviews was performed on Google Reviews, Healthgrades, and Vitals.com. Reviews were classified as clinical or nonclinical. Statistical analyses were performed on the frequency of reviews and complaints for each category. Results. Of the 7,733 reviews discovered, 908 (11.7%) were identified as one-star reviews. Of 907 usable complaints, 362 (40.8%) were clinical and 545 (59.2%) were nonclinical. The most common nonclinical complaints included bedside manner (65%) and limited time with providers (19%). The most common clinical complaints included complications (26%) and disagreements with the treatment plan (26%). There were 120 surgical and 221 nonsurgical reviews. Surgical patients had a higher rate of clinical complaints. Nonsurgical patients had a higher rate of total complaints. Conclusion. To the best of our knowledge, this is the first study examining PRWs regarding orthopedic oncology surgeons. Most one-star reviews were due to nonclinical complaints from nonsurgical patients. The most common factors were bedside manner, limited time with the provider, phone communication issues, and rude/unprofessional conduct.
5. Analysis of physician characteristics and factors influencing the online recommendation of pediatric orthopaedic surgeons: a cross-sectional study. Current Orthopaedic Practice 2022. DOI: 10.1097/bco.0000000000001170.
6. Brinkman JC, Pollock JR, Arthur JR, Smith J, Lin K, Chang MS. Characterizing negative reviews of orthopedic spine surgeons and practices. North American Spine Society Journal 2022; 11:100126. PMID: 35755018; PMCID: PMC9213250; DOI: 10.1016/j.xnsj.2022.100126.
Abstract
BACKGROUND Recent evidence suggests that patients prefer subjective and crowd-sourced information over data-driven or quality-based outcomes when choosing a surgeon. Online physician rating and review websites continue to increase in popularity, and over half of patients use them to research physicians. Specifically, Yelp.com is the online resource most frequently utilized by patients. Data regarding the characteristics of negative reviews of spine surgeons and practices are lacking. METHODS Orthopedic spine surgeons and practices in 8 major US metropolitan regions were surveyed for one-star reviews on Yelp.com. The factors noted in the reviews were recorded and classified according to their clinical or nonclinical nature. Reviews were also subclassified into nonsurgical or surgical episodes of care. RESULTS A total of 6,286 Yelp reviews were discovered, 671 (10.6%) of which were rated one-star. The majority of negative reviews (76.4%) were from patients who did not report surgery by the surgeon or practice. Of all comments, 491 (77.6%) related to nonclinical complaints. The most common factors noted in negative reviews were related to bedside manner, rude or unprofessional staff, and wait time. CONCLUSION Choosing a surgeon is a complex process for patients. The large majority of negative reviews were related to nonclinical issues such as poor bedside manner or rude staff, and most of these were written by patients who did not undergo a surgical procedure. This may explain the large discrepancy that has been observed between quality metrics and online crowd-sourced reviews. Paying attention to these nonclinical factors may represent the most feasible and valuable targets to improve a surgeon's practice and attract future patients.
Affiliation(s)
- Joseph C. Brinkman: Department of Orthopedic Surgery, Mayo Clinic Arizona, Phoenix, Arizona, United States of America
- Jordan R. Pollock: Mayo Clinic Alix School of Medicine, Scottsdale, Arizona, United States of America
- Jaymeson R. Arthur: Department of Orthopedic Surgery, Mayo Clinic Arizona, Phoenix, Arizona, United States of America
- Jacob Smith: Mayo Clinic Alix School of Medicine, Scottsdale, Arizona, United States of America
- Keldon Lin: Mayo Clinic Alix School of Medicine, Scottsdale, Arizona, United States of America
- Michael S. Chang: Department of Orthopedic Surgery, Mayo Clinic Arizona, Phoenix, Arizona, United States of America; Sonoran Spine Institute, Scottsdale, Arizona, United States of America
7. Murasko MJ, Ivanov DV, Roe AK, Kamal RN, Amanatullah DF. Patient Satisfaction Scores Are a Poor Metric of Orthopedic Care. Orthopedics 2022; 45:e127-e133. PMID: 35201936; DOI: 10.3928/01477447-20220217-06.
Abstract
Patient satisfaction scores are a popular metric used to evaluate orthopedic care, but there is little consistency in how satisfaction is described in the orthopedic literature. Online physician reviews are a growing trend that directly and indirectly affects a surgeon's reputation. There is little correlation between higher satisfaction and improved surgical outcomes, so rating surgical care by satisfaction may be misguided and possibly dangerous. Patient satisfaction is an important part of the patient-centered care model, so rating systems should directly reflect quality. More research is needed to determine the relationship between patient satisfaction and the delivery of quality care.
8. Saifee DH, Hudnall M, Raja U. Physician Gender, Patient Risk, and Web-Based Reviews: Longitudinal Study of the Relationship Between Physicians' Gender and Their Web-Based Reviews. J Med Internet Res 2022; 24:e31659. PMID: 35394435; PMCID: PMC9034420; DOI: 10.2196/31659.
Abstract
Background Web-based reviews of physicians have become exceedingly popular among health care consumers since the early 2010s. A factor that can potentially influence these reviews is the gender of the physician, because the physician's gender has been found to influence patient-physician communication. Our study is among the first to conduct a rigorous longitudinal analysis of the effects of physicians' gender on their reviews, after accounting for several important clinical factors, including patient risk, physician specialty, and temporal factors, using time fixed effects. In addition, this study is among the first to study possible gender bias in web-based reviews using statewide data from Alabama, a predominantly rural state with high Medicaid and Medicare use. Objective This study conducts a longitudinal empirical investigation of the relationship between physicians' gender and their web-based reviews using data across the state of Alabama, after accounting for patient risk and temporal effects. Methods We created a unique data set by combining data from web-based physician reviews from the popular physician review website, RateMDs, and clinical data from the Center for Medicare and Medicaid Services for the state of Alabama. We used longitudinal econometric specifications to conduct an econometric analysis, while controlling for several important clinical and review characteristics across four rating dimensions (helpfulness, knowledge, staff, and punctuality). The overall rating and these four rating dimensions from RateMDs were used as the dependent variables, and physician gender was the key explanatory variable in our panel regression models. Results The panel used to conduct the main econometric analysis included 1093 physicians. After controlling for several clinical and review factors, the physician random effects specifications showed that male physicians receive better web-based ratings than female physicians. Coefficients and corresponding SEs and P values of the binary variable GenderFemale (1 for female physicians and 0 otherwise) with different rating variables as outcomes were as follows: OverallRating (coefficient –0.194, SE 0.060; P=.001), HelpfulnessRating (coefficient –0.221, SE 0.069; P=.001), KnowledgeRating (coefficient –0.230, SE 0.065; P<.001), StaffRating (coefficient –0.123, SE 0.062; P=.049), and PunctualityRating (coefficient –0.200, SE 0.067; P=.003). The negative coefficients indicate a bias toward male physicians over female physicians for the aforementioned rating variables. Conclusions This study found that female physicians receive lower web-based ratings than male physicians, even after accounting for several clinical characteristics associated with the physicians and temporal effects. Although the coefficients of GenderFemale were relatively small in magnitude, they were statistically significant. This study supports findings on gender bias in the existing health care literature. We contribute to that literature by conducting a longitudinal econometric analysis using data across the state of Alabama, incorporating important clinical and review controls associated with the physicians.
Affiliation(s)
- Danish Hasnain Saifee: Department of Information Systems, Statistics, and Management Science, The University of Alabama, Tuscaloosa, AL, United States
- Matthew Hudnall: Department of Information Systems, Statistics, and Management Science, The University of Alabama, Tuscaloosa, AL, United States
- Uzma Raja: Department of Systems and Technology, Auburn University, Auburn, AL, United States
9. The Online Physician Image. Surg Clin North Am 2022; 102:233-239. DOI: 10.1016/j.suc.2021.12.003.
10. Pollock JR, Arthur JR, Smith JF, Mujahed T, Brinkman JC, Moore ML, Chhabra A. The Majority of Complaints About Orthopedic Sports Surgeons on Yelp Are Nonclinical. Arthrosc Sports Med Rehabil 2021; 3:e1465-e1472. PMID: 34746847; PMCID: PMC8551418; DOI: 10.1016/j.asmr.2021.07.008.
Abstract
PURPOSE To examine and characterize extremely negative Yelp reviews of orthopedic sports surgeons in the United States. METHODS A search for reviews was performed using the keywords "Orthopedic Sports Medicine" on Yelp.com for 8 major metropolitan areas. Single-star reviews were isolated for analysis, and individual complaints were then categorized as clinical or nonclinical. The reviews were classified as surgical or nonsurgical. RESULTS A total of 11,033 reviews were surveyed. Of these, 1,045 (9.5%) were identified as 1-star, and 289 were ultimately included in the study. These reviews encompassed 566 total complaints, 133 (23%) of which were clinical and 433 (77%) of which were nonclinical in nature. The most common clinical complaints concerned complications (32 complaints; 6%), misdiagnosis (29 complaints; 5%), and uncontrolled pain (21 complaints; 4%). The most common nonclinical complaints concerned physicians' bedside manner (120 complaints; 21%), unprofessional staff (98 complaints; 17%), and finances (78 complaints; 14%). Patients who had undergone surgery wrote 47 reviews that resulted in 114 complaints (20.5% of total complaints), whereas nonsurgical patients were responsible for 242 reviews and a total of 452 complaints (81.3% of total complaints). The difference in the number of complaints by patients after surgery and patients without surgery was statistically significant (P < 0.05) for all categories except uncontrolled pain, delay in care, bedside manner of midlevel staff, and facilities. CONCLUSION Our study of extremely negative Yelp reviews found that 77% of negative complaints were nonclinical in nature. The most common clinical complaints were complications, misdiagnoses, and uncontrolled pain. Only 16% of 1-star reviews were from surgical patients. CLINICAL RELEVANCE Patients use online review platforms when choosing surgeons. A comprehensive understanding of factors affecting patient satisfaction and dissatisfaction is needed. The results of our study could be used to guide future quality-improvement measures and to assist surgeons in maintaining favorable online reputations.
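Several of the studies in this list compare complaint frequencies between surgical and nonsurgical reviewers with χ2 tests on 2x2 tables. As a hedged sketch of that statistic (the counts below are hypothetical, not the published data), in Python:

```python
# Hedged sketch: chi-squared statistic for a 2x2 contingency table, the kind
# of comparison (surgical vs. nonsurgical x clinical vs. nonclinical) reported
# in several of the studies above. No Yates continuity correction is applied.
def chi_square_2x2(a, b, c, d):
    """Chi-squared statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    margin_product = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / margin_product

# Hypothetical counts: rows = surgical/nonsurgical, cols = clinical/nonclinical.
print(round(chi_square_2x2(10, 20, 30, 40), 4))  # -> 0.7937
```

With 1 degree of freedom, the statistic is compared against the χ2 critical value of 3.84 for P < 0.05; when expected cell counts are small, the Fisher exact test mentioned in entry 1 is the usual substitute.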
Affiliation(s)
- Jacob F. Smith: Mayo Clinic Alix School of Medicine, Scottsdale, Arizona
- Tala Mujahed: Mayo Clinic Alix School of Medicine, Scottsdale, Arizona
- M. Lane Moore: Mayo Clinic Alix School of Medicine, Scottsdale, Arizona
- Anikar Chhabra: Department of Orthopedics, Mayo Clinic, Phoenix, Arizona, U.S.A
11. Gates MC, McLachlan I, Butler S, Weston JF. Building Veterinarians beyond Veterinary School: Challenges and Opportunities for Continuing Professional Development in Veterinary Medicine. Journal of Veterinary Medical Education 2021; 48:383-400. PMID: 34161200; DOI: 10.3138/jvme.2019-0148.
Abstract
Continuing professional development (CPD) is an important mechanism for veterinarians to acquire, maintain, and enhance their capability to perform competently in their chosen practice area over their career. Although most licensing bodies require veterinarians to complete a minimum number of CPD hours each registration cycle, there are known issues with verifying that these activities are having the desired effects of ensuring professional competence and improving outcomes for patients, owners, and veterinarians. In this review, we summarize the literature across different health care professions to highlight three key challenges for veterinary CPD programs. These are (a) defining what it means to be professionally competent across different career stages from graduation to retirement, (b) delivering CPD activities that are effective in promoting evidence-based medicine and behavioral change in practice, and (c) developing reliable and sustainable systems to formally assess the continued professional competence of veterinarians. A common theme across all challenges was the importance of interacting with colleagues as an opportunity for veterinarians to receive external feedback on their professional strengths and weaknesses and to develop stronger support networks for managing common stressors in clinical practice. There was also a recognized need to develop more transparent outlines of the available and acceptable options for managing different animal health concerns as a first step toward identifying new opportunities for the veterinary profession to elevate the level of care provided. Based on these findings, we propose a new framework for defining, delivering, and evaluating CPD that promotes stronger collaboration between veterinarians to improve professional and personal well-being.
12. Emmert M, McLennan S. One Decade of Online Patient Feedback: Longitudinal Analysis of Data From a German Physician Rating Website. J Med Internet Res 2021; 23:e24229. PMID: 34309579; PMCID: PMC8367114; DOI: 10.2196/24229.
Abstract
Background Feedback from patients is an essential element of a patient-oriented health care system. Physician rating websites (PRWs) are a key way patients can provide feedback online. This study analyzes an entire decade of online ratings for all medical specialties on a German PRW. Objective The aim of this study was to examine how ratings posted on a German PRW have developed over the past decade. In particular, it aimed to explore (1) the distribution of ratings according to time-related aspects (year, month, day of the week, and hour of the day) between 2010 and 2019, (2) the number of physicians with ratings, (3) the average number of ratings per physician, (4) the average rating, (5) whether differences exist between medical specialties, and (6) the characteristics of the patients rating physicians. Methods All scaled-survey online ratings that were posted on the German PRW jameda between 2010 and 2019 were obtained. Results In total, 1,906,146 ratings were posted on jameda between 2010 and 2019 for 127,921 physicians. The number of rated physicians increased steadily from 19,305 in 2010 to 82,511 in 2018. The average number of ratings per rated physician increased from 1.65 (SD 1.56) in 2010 to 3.19 (SD 4.69) in 2019. Overall, 75.2% (1,432,624/1,906,146) of all ratings were in the best rating category of "very good," and 5.7% (107,912/1,906,146) of the ratings were in the lowest category of "insufficient." However, the mean of all ratings was 1.76 (SD 1.53) on the German school grade 6-point rating scale (1 being the best), with a relatively constant distribution over time. General practitioners, internists, and gynecologists received the highest number of ratings (343,242, 266,899, and 232,914, respectively). Male patients, older patients, and those covered by private health insurance gave significantly (P<.001) more favorable evaluations than their counterparts. Physicians with a lower number of ratings tended to receive ratings across the rating scale, while physicians with a higher number of ratings tended to have better ratings. Physicians with between 21 and 50 online ratings received the lowest ratings (mean 1.95, SD 0.84), while physicians with >100 ratings received the best ratings (mean 1.34, SD 0.47). Conclusions This study is one of the most comprehensive analyses of PRW ratings to date. More than half of all German physicians have been rated on jameda each year since 2016, and the overall average number of ratings per rated physician nearly doubled over the decade. Nevertheless, we could also observe a decline in the number of ratings over the last 2 years. Future studies should investigate the most recent development in the number of ratings on other German and international PRWs, as well as reasons for the heterogeneity in online ratings by medical specialty.
Affiliation(s)
- Martin Emmert: Institute for Healthcare Management & Health Sciences, University of Bayreuth, Bayreuth, Germany
- Stuart McLennan: Institute of History and Ethics in Medicine, Technical University of Munich, Munich, Germany; Institute for Biomedical Ethics, University of Basel, Basel, Switzerland
13. Goyal D, Guttag J, Syed Z, Mehta R, Elahi Z, Saeed M. Comparing Precision Machine Learning With Consumer, Quality, and Volume Metrics for Ranking Orthopedic Surgery Hospitals: Retrospective Study. J Med Internet Res 2020; 22:e22765. PMID: 33258459; PMCID: PMC7738251; DOI: 10.2196/22765.
Abstract
Background Patients’ choices of providers when undergoing elective surgeries significantly impact both perioperative outcomes and costs. A variety of approaches are available to patients for evaluating different hospital choices. Objective This paper aims to compare differences in outcomes and costs between hospitals ranked using popular internet-based consumer ratings, quality stars, reputation rankings, average volumes, average outcomes, and precision machine learning–based rankings for hospital settings performing hip replacements in a large metropolitan area. Methods Retrospective data from 4192 hip replacement surgeries among Medicare beneficiaries in 2018 in the Chicago metropolitan area were analyzed for variations in outcomes (90-day postprocedure hospitalizations and emergency department visits) and costs (90-day total cost of care) between hospitals ranked through multiple approaches: internet-based consumer ratings, quality stars, reputation rankings, average yearly surgical volume, average outcome rates, and machine learning–based rankings. The average rates of outcomes and costs were compared between the patients who underwent surgery at a hospital under each ranking approach in unadjusted and propensity-based adjusted comparisons. Results Only a minority of patients (1159/4192, 27.6% to 2078/4192, 49.6%) were found to be matched to higher-ranked hospitals for each of the different approaches. Of the approaches considered, hip replacements at hospitals that were more highly ranked by consumer ratings, quality stars, and machine learning were all consistently associated with improvements in outcomes and costs in both adjusted and unadjusted analyses. The improvement was greatest across all metrics and analyses for machine learning–based rankings. Conclusions There may be a substantive opportunity to increase the number of patients matched to appropriate hospitals across a broad variety of ranking approaches.
Elective hip replacement surgeries performed at hospitals where patients were matched based on patient-specific machine learning were associated with better outcomes and lower total costs of care.
14. Mulgund P, Sharman R, Anand P, Shekhar S, Karadi P. Data Quality Issues With Physician-Rating Websites: Systematic Review. J Med Internet Res 2020; 22:e15916. [PMID: 32986000] [PMCID: PMC7551103] [DOI: 10.2196/15916]
Abstract
BACKGROUND In recent years, online physician-rating websites have become prominent and exert considerable influence on patients' decisions. However, the quality of these decisions depends on the quality of the data that these systems collect. Thus, there is a need to examine the various data quality issues with physician-rating websites. OBJECTIVE This study's objective was to identify and categorize the data quality issues afflicting physician-rating websites by reviewing the literature on online patient-reported physician ratings and reviews. METHODS We performed a systematic literature search in ACM Digital Library, EBSCO, Springer, PubMed, and Google Scholar. The search was limited to quantitative, qualitative, and mixed-method papers published in the English language from 2001 to 2020. RESULTS A total of 423 articles were screened. From these, 49 papers describing 18 unique data quality issues afflicting physician-rating websites were included. Using a data quality framework, we classified these issues into the following four categories: intrinsic, contextual, representational, and accessible. Among the papers, 53% (26/49) reported intrinsic data quality errors, 61% (30/49) highlighted contextual data quality issues, 8% (4/49) discussed representational data quality issues, and 27% (13/49) emphasized accessibility data quality issues. More than half the papers discussed multiple categories of data quality issues. CONCLUSIONS The results from this review demonstrate the presence of a range of data quality issues. While intrinsic and contextual factors have been well researched, accessibility and representational issues warrant more attention from researchers as well as practitioners. In particular, representational factors, such as the impact of inline advertisements and the positioning of positive reviews on the first few pages, are usually deliberate and result from the business model of physician-rating websites.
The impact of these factors on data quality has not been addressed adequately and requires further investigation.
15. McLennan S. Rejected Online Feedback From a Swiss Physician Rating Website Between 2008 and 2017: Analysis of 2352 Ratings. J Med Internet Res 2020; 22:e18374. [PMID: 32687479] [PMCID: PMC7432139] [DOI: 10.2196/18374]
Abstract
Background Previous research internationally has only analyzed publicly available feedback on physician rating websites (PRWs). However, it appears that many PRWs are not publishing all the feedback they receive. Analysis of this rejected feedback could provide a better understanding of the types of feedback that are currently not published and whether this is appropriate. Objective The aim of this study was to examine (1) the amount of patient feedback rejected by the Swiss PRW Medicosearch, (2) the evaluation tendencies of the rejected patient feedback, and (3) the types of issues raised in the rejected narrative comments. Methods The Swiss PRW Medicosearch provided all the feedback that had been rejected between September 16, 2008, and September 22, 2017. The feedback was analyzed and classified according to a theoretical categorization framework of physician-, staff-, and practice-related issues. Results Between September 16, 2008, and September 22, 2017, Medicosearch rejected a total of 2352 patient feedback items. The majority of the rejected feedback (1754/2352, 74.6%) had narrative comments in the German language. However, 11.9% (279/2352) of the rejected feedback only provided a quantitative rating with no narrative comment. Overall, 25% (588/2352) of the rejected feedback was positive, 18.7% (440/2352) was neutral, and 56% (1316/2352) was negative. The average rating of the rejected feedback was 2.8 (SD 1.4). In total, 44 subcategories addressing the physician (n=20), staff (n=9), and practice (n=15) were identified. In total, 3804 distinct issues were identified within the 44 subcategories of the categorization framework; 75% (2854/3804) of the issues were related to the physician, 6.4% (242/3804) were related to the staff, and 18.6% (708/3804) were related to the practice.
Frequently mentioned issues identified from the rejected feedback included (1) satisfaction with treatment (533/1903, 28%); (2) the overall assessment of the physician (392/1903, 20.6%); (3) recommending the physician (345/1903, 18.1%); (4) the physician’s communication (261/1903, 13.7%); (5) the physician’s caring attitude (220/1903, 11.6%); and (6) the physician’s friendliness (203/1903, 10.6%). Conclusions It is unclear why the majority of the feedback was rejected. This is problematic and raises concerns that online patient feedback is being inappropriately manipulated. If online patient feedback is going to be collected, there need to be clear policies and practices about how it is handled. It cannot be left to the whims of PRWs, which may have financial incentives to suppress negative feedback, to decide which feedback is or is not published online. Further research is needed to examine how many PRWs are using criteria for determining which feedback is published, what those criteria are, and what measures PRWs are using to address the manipulation of online patient feedback.
16. Cheshire WP. Autonomic medical practice viewed through the lens of physician-rating websites. Clin Auton Res 2020; 30:335-341. [DOI: 10.1007/s10286-020-00665-6]
17. Liu JJ, Goldberg HR, Lentz EJ, Matelski JJ, Alam A, Bell CM. Association Between Web-Based Physician Ratings and Physician Disciplinary Convictions: Retrospective Observational Study. J Med Internet Res 2020; 22:e16708. [PMID: 32406851] [PMCID: PMC7256745] [DOI: 10.2196/16708]
Abstract
BACKGROUND Physician rating websites are commonly used by the public, yet the relationship between web-based physician ratings and health care quality is not well understood. OBJECTIVE The objective of our study was to use physician disciplinary convictions as an extreme marker for poor physician quality and to investigate whether disciplined physicians have lower ratings than nondisciplined matched controls. METHODS This was a retrospective national observational study of all disciplined physicians in Canada (751 physicians, 2000 to 2013). We searched ratings (2005-2015) from the country's leading online physician rating website for this group and for 751 controls matched according to gender, specialty, practice years, and location. We compared overall ratings (out of a score of 5) as well as mean ratings by the type of misconduct. We also compared ratings for each type of misconduct and punishment. RESULTS Overall, 62.7% (471/751) of convicted and disciplined physicians (cases) and 64.6% (485/751) of nondisciplined physicians (controls) had web-based ratings. Of 312 matched case-control pairs, disciplined physicians were rated lower than controls overall (3.62 vs 4.00; P<.001). Disciplined physicians had lower ratings for all types of misconduct and punishment, except for physicians disciplined for sexual offenses (n=90 pairs; 3.83 vs 3.86; P=.81). Sexual misconduct was the only category in which mean ratings for physicians were higher than those for other disciplined physicians (3.63 vs 3.35; P=.003). CONCLUSIONS Physicians convicted for disciplinary misconduct generally had lower web-based ratings. Physicians convicted of sexual misconduct did not have lower ratings and were rated higher than other disciplined physicians. These findings may have future implications for the identification of physicians providing poor-quality care.
18. McLennan S. Quantitative Ratings and Narrative Comments on Swiss Physician Rating Websites: Frequency Analysis. J Med Internet Res 2019; 21:e13816. [PMID: 31350838] [PMCID: PMC6688440] [DOI: 10.2196/13816]
Abstract
Background Physician rating websites (PRWs) have been developed as part of a wider move toward transparency around health care quality, and these allow patients to anonymously rate, comment, and discuss physicians’ quality on the Web. The first Swiss PRWs were established in 2008, at the same time as many international PRWs. However, there has been limited research conducted on PRWs in Switzerland to date. International research has indicated that a key shortcoming of PRWs is that they have an insufficient number of ratings. Objective The aim of this study was to examine the frequency of quantitative ratings and narrative comments on the Swiss PRWs. Methods In November 2017, a random stratified sample of 966 physicians was generated from the regions of Zürich and Geneva. Every selected physician was searched for on 4 rating websites (OkDoc, DocApp, Medicosearch, and Google) between November 2017 and July 2018. It was recorded whether the physician could be identified, what the physician’s quantitative rating was, and whether the physician had received narrative comments. In addition, Alexa Internet was used to examine the number of visitors to the PRWs, compared with other websites. Results Overall, the proportion of physicians who could be identified on the PRWs ranged from 42.4% (410/966) on OkDoc to 87.3% (843/966) on DocApp. Of the identifiable physicians, only a few had been rated quantitatively (4.5% [38/843] on DocApp to 49.8% [273/548] on Google) or had received narrative comments (4.5% [38/843] on DocApp to 31.2% [171/548] on Google) at least once. Rated physicians also had, on average, a low number of quantitative ratings (1.47 ratings on OkDoc to 3.74 ratings on Google) and narrative comments (1.23 comments on OkDoc to 3.03 comments on Google). All 3 websites allowing ratings used the same rating scale (1-5 stars) and had a very positive average rating: DocApp (4.71), Medicosearch (4.69), and Google (4.41).
There were significant differences among the PRWs (with the majority of ratings being posted on Google in the past 2 years) and regions (with physicians in Zürich more likely to have been rated and to have more ratings on average). Only Google (position 1) and Medicosearch (position 8358) are placed among the top 10,000 visited websites in Switzerland. Conclusions This appears to be the first time Google has been included in a study examining physician ratings internationally, and it is notable that Google has had substantially more ratings than the 3 dedicated PRWs in Switzerland over the past 2 and a half years. Overall, this study indicates that Swiss PRWs are not yet a reliable source of unbiased information regarding patient experiences and satisfaction with Swiss physicians; many selected physicians could not be identified, only a few physicians had been rated, and the ratings posted were overwhelmingly positive.
19. Murphy GP, Radadia KD, Breyer BN. Online physician reviews: is there a place for them? Risk Manag Healthc Policy 2019; 12:85-89. [PMID: 31191060] [PMCID: PMC6526774] [DOI: 10.2147/rmhp.s170381]
Abstract
Web-based physician ratings are increasingly popular but imperfect proxies for clinical competence. Yet they provide valuable information to patients and providers when taken in proper context. Providers need to embrace the reviews and use them to enact positive change in order to improve the quality of our patients’ experience. Patients need to recognize the limitations of online ratings, particularly with smaller sample sizes, and be discerning about the reasons behind a review.
20. Hong YA, Liang C, Radcliff TA, Wigfall LT, Street RL. What Do Patients Say About Doctors Online? A Systematic Review of Studies on Patient Online Reviews. J Med Internet Res 2019; 21:e12521. [PMID: 30958276] [PMCID: PMC6475821] [DOI: 10.2196/12521]
Abstract
Background The number of patient online reviews (PORs) has grown significantly, and PORs have played an increasingly important role in patients’ choice of health care providers. Objective The objective of our study was to systematically review studies on PORs, summarize the major findings and study characteristics, identify literature gaps, and make recommendations for future research. Methods A major database search was completed in January 2019. Studies were included if they (1) focused on PORs of physicians and hospitals, (2) reported qualitative or quantitative results from analysis of PORs, and (3) were peer-reviewed empirical studies. Study characteristics and major findings were synthesized using predesigned tables. Results A total of 63 studies (69 articles) that met the above criteria were included in the review. Most studies (n=48) were conducted in the United States, including Puerto Rico, and the remainder were from Europe, Australia, and China. Earlier studies (published before 2010) used content analysis with small sample sizes; more recent studies retrieved and analyzed larger datasets using machine learning technologies. The number of PORs ranged from fewer than 200 to over 700,000. About 90% of the studies focused on clinicians, typically specialists such as surgeons; 27% covered health care organizations, typically hospitals; and some studied both. A majority of PORs were positive, and patients’ comments on their providers were favorable. Although most studies were descriptive, some compared PORs with traditional surveys of patient experience and found a high degree of correlation, while others compared PORs with clinical outcomes and found a low level of correlation. Conclusions PORs contain valuable information that can generate insights into quality of care and the patient-provider relationship, but they have not been systematically used for studies of health care quality.
With the advancement of machine learning and data analysis tools, we anticipate more research on PORs based on testable hypotheses and rigorous analytic methods. Trial Registration International Prospective Register of Systematic Reviews (PROSPERO) CRD42018085057; https://www.crd.york.ac.uk/PROSPERO/display_record.php?RecordID=85057 (Archived by WebCite at http://www.webcitation.org/76ddvTZ1C)
21. Murphy GP, Awad MA, Tresh A, Gaither TW, Osterberg EC, Baradaran N, Breyer BN. Association of Patient Volume With Online Ratings of California Urologists. JAMA Surg 2018; 153:685-686. [PMID: 29562062] [DOI: 10.1001/jamasurg.2018.0149]
22. McGrath RJ, Priestley JL, Zhou Y, Culligan PJ. The Validity of Online Patient Ratings of Physicians: Analysis of Physician Peer Reviews and Patient Ratings. Interact J Med Res 2018; 7:e8. [PMID: 29631992] [PMCID: PMC5913572] [DOI: 10.2196/ijmr.9350]
Abstract
Background Information from ratings sites is increasingly informing patient decisions related to health care and the selection of physicians. Objective The current study sought to determine the validity of online patient ratings of physicians through comparison with physician peer review. Methods We extracted 223,715 reviews of 41,104 physicians from 10 of the largest cities in the United States, including 1142 physicians listed as “America’s Top Doctors” through physician peer review. Differences in mean online patient ratings were tested for physicians who were listed and those who were not. Results Overall, no differences were found in online patient ratings based upon physician peer review status. However, statistically significant differences were found for four specialties (family medicine, allergists, internal medicine, and pediatrics), with online patient ratings significantly higher for physicians listed as a peer-reviewed “Top Doctor” than for those who were not. Conclusions The results of this large-scale study indicate that while online patient ratings are consistent with physician peer review for four nonsurgical, primarily in-office specializations, patient ratings were not consistent with physician peer review for specializations like anesthesiology. This result indicates that the validity of patient ratings varies by medical specialization.
23. Liu JJ, Matelski JJ, Bell CM. Scope, Breadth, and Differences in Online Physician Ratings Related to Geography, Specialty, and Year: Observational Retrospective Study. J Med Internet Res 2018. [PMID: 29514775] [PMCID: PMC5863010] [DOI: 10.2196/jmir.7475]
Abstract
Background Physician rating websites have emerged as a novel forum for consumers to comment on their health care experiences. Little is known about such ratings in Canada. Objective We investigated the scope and trends by specialty, geographic region, and time for online physician ratings in Canada using a national data source from the country’s leading physician-rating website. Methods This observational retrospective study used online ratings data from Canadian physicians (January 2005-September 2013; N=640,603). For specialty, province, and year of rating, we assessed whether physicians were likely to be rated favorably by using the proportion of ratings greater than the overall median rating. Results In total, 57,412 unique physicians had 640,603 individual ratings. Overall, ratings were positive (mean 3.9, SD 1.3). On average, each physician had 11.2 (SD 10.1) ratings. By comparing specialties with Canadian Institute for Health Information physician population numbers over our study period, we inferred that certain specialties (obstetrics and gynecology, family practice, surgery, and dermatology) were more commonly rated, whereas others (pathology, radiology, genetics, and anesthesia) were less represented. Ratings varied by specialty; cardiac surgery, nephrology, genetics, and radiology were more likely to be rated in the top 50th percentile, whereas addiction medicine, dermatology, neurology, and psychiatry were more often rated in the lower 50th percentile of ratings. Regarding geographic practice location, ratings were more likely to be favorable for physicians practicing in eastern provinces compared with western and central Canada. Regarding year, the absolute number of ratings peaked in 2007 before stabilizing and decreasing by 2013. Moreover, ratings were most likely to be positive in 2007 and again in 2013.
Conclusions Physician-rating websites are a relatively novel source of provider-level patient satisfaction data and a valuable window into the patient experience. It is important to understand the breadth and scope of such ratings, particularly regarding specialty, geographic practice location, and changes over time.