1.
Goldstein M, Donos N, Teughels W, Gkranias N, Temmerman A, Derks J, Kuru BE, Carra MC, Castro AB, Dereka X, Dekeyser C, Herrera D, Vandamme K, Calciolari E. Structure, governance and delivery of specialist training programs in periodontology and implant dentistry. J Clin Periodontol 2024. PMID: 39072845. DOI: 10.1111/jcpe.14033.
Abstract
AIM To update the competences and learning outcomes and their evaluation, educational methods and education quality assurance for the training of contemporary specialists in periodontology, including the impact of the 2018 Classification of Periodontal and Peri-implant Diseases and Conditions (2018 Classification hereafter) and the European Federation of Periodontology (EFP) Clinical Practice Guidelines (CPGs). METHODS Evidence was gathered through scientific databases and by searching for European policies on higher education. In addition, two surveys were designed and sent to program directors and graduates. RESULTS Program directors reported that curricula were periodically adapted to incorporate advances in diagnosis, classification, treatment guidelines and clinical techniques, including the 2018 Classification and the EFP CPGs. Graduates evaluated their overall training positively, although satisfaction was limited for training in mucogingival and surgical procedures related to dental implants. Traditional educational methods, such as didactic lectures, are still commonly employed, but they are now often associated with more interactive methods such as case-based seminars and problem-based and simulation-based learning. The evaluation of competences/learning outcomes should employ multiple methods of assessment. CONCLUSION An update of competences and learning outcomes of specialist training in periodontology is proposed, including knowledge and practical application of the 2018 Classification and CPGs. Harmonizing specialist training in periodontology is a critical issue at the European level.
Affiliation(s)
- Moshe Goldstein
- Faculty of Dental Medicine, Hadassah Medical Center and Hebrew University, Jerusalem, Israel
- Postgraduate Education Committee, European Federation of Periodontology (EFP)
- Nikolaos Donos
- Centre for Oral Clinical Research, Institute of Dentistry, Faculty of Medicine and Dentistry, Queen Mary University of London, London, UK
- Chair, Education Committee, European Federation of Periodontology (EFP)
- Wim Teughels
- Department of Oral Health Sciences, Periodontology and Oral Microbiology, KU Leuven and Dentistry, University Hospitals Leuven, Leuven, Belgium
- Nikolaos Gkranias
- Centre for Oral Clinical Research, Institute of Dentistry, Faculty of Medicine and Dentistry, Queen Mary University of London, London, UK
- Andy Temmerman
- Department of Oral Health Sciences, Periodontology and Oral Microbiology, KU Leuven and Dentistry, University Hospitals Leuven, Leuven, Belgium
- Jan Derks
- Department of Periodontology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Bahar Eren Kuru
- Department of Periodontology and Postgraduate Program in Periodontology, Faculty of Dentistry, Yeditepe University, Istanbul, Turkey
- Maria Clotilde Carra
- Department of Periodontology, U.F.R. of Odontology, Université Paris Cité, Paris, France
- Unit of Periodontal and Oral Surgery, Service of Odontology, Rothschild Hospital (AP-HP), Paris, France
- INSERM - Sorbonne Paris Cité Epidemiology and Statistics Research Centre, Paris, France
- Ana Belen Castro
- Department of Oral Health Sciences, Periodontology and Oral Microbiology, KU Leuven and Dentistry, University Hospitals Leuven, Leuven, Belgium
- Xanthippi Dereka
- Department of Periodontology, School of Dentistry, National and Kapodistrian University of Athens, Athens, Greece
- Christel Dekeyser
- Department of Oral Health Sciences, Periodontology and Oral Microbiology, KU Leuven and Dentistry, University Hospitals Leuven, Leuven, Belgium
- David Herrera
- ETEP (Etiology and Therapy of Periodontal and Peri-implant Diseases) Research Group, University Complutense of Madrid, Madrid, Spain
- Katleen Vandamme
- Department of Oral Health Sciences, Periodontology and Oral Microbiology, KU Leuven and Dentistry, University Hospitals Leuven, Leuven, Belgium
- Elena Calciolari
- Centre for Oral Clinical Research, Institute of Dentistry, Faculty of Medicine and Dentistry, Queen Mary University of London, London, UK
- Dental School, Department of Medicine and Surgery, University of Parma, Parma, Italy
2.
Forth FA, Hammerle F, König J, Urschitz MS, Neuweiler P, Mildenberger E, Kidszun A. Optimistic vs Pessimistic Message Framing in Communicating Prognosis to Parents of Very Preterm Infants: The COPE Randomized Clinical Trial. JAMA Netw Open 2024; 7:e240105. PMID: 38393728. PMCID: PMC10891472. DOI: 10.1001/jamanetworkopen.2024.0105.
Abstract
Importance In the neonatal intensive care unit, there is a lack of understanding about how best to communicate the prognosis of a serious complication to parents. Objective To examine parental preferences and the effects of optimistic vs pessimistic message framing when providing prognostic information about a serious complication. Design, Setting, and Participants This crossover randomized clinical trial was conducted at a single German university medical center between June and October 2021. Eligible participants were parents of surviving preterm infants with a birth weight under 1500 g. Data were analyzed between October 2021 and August 2022. Interventions Alternating exposure to 2 scripted video vignettes showing a standardized conversation between a neonatologist and parents, portrayed by professional actors, about the prognosis of a hypothetical very preterm infant with severe intraventricular hemorrhage. The video vignettes differed in the framing of identical numerical outcome estimates as either probability of survival and probability of nonimpairment (optimistic framing) or a risk of death and impaired survival (pessimistic framing). Main Outcomes and Measures The primary outcome was preference odds (ratio of preference for optimistic vs pessimistic framing). Secondary outcomes included state anxiety, perceptions of communication, and recall of numerical estimates. Results Of 220 enrolled parents (142 [64.5%] mothers; mean [SD] age: mothers, 39.1 [5.6] years; fathers, 42.7 [6.9] years), 196 (89.1%) preferred optimistic and 24 (10.9%) preferred pessimistic framing (preference odds, 11.0; 95% CI, 6.28-19.10; P < .001). Preference for optimistic framing was more pronounced when presented second than when presented first (preference odds, 5.41; 95% CI, 1.77-16.48; P = .003).
State anxiety scores were similar in both groups at baseline (mean difference, -0.34; 95% CI, -1.18 to 0.49; P = .42) and increased equally after the first video (mean difference, -0.55; 95% CI, -1.79 to 0.69; P = .39). After the second video, state anxiety scores decreased when optimistic framing followed pessimistic framing but remained unchanged when pessimistic framing followed optimistic framing (mean difference, 2.15; 95% CI, 0.91 to 3.39; P < .001). With optimistic framing, participants recalled numerical estimates more accurately for survival (odds ratio, 4.00; 95% CI, 1.64-9.79; P = .002) but not for impairment (odds ratio, 1.50; 95% CI, 0.85-2.63; P = .16). Conclusions and Relevance When given prognostic information about a serious complication, parents of very preterm infants may prefer optimistic framing. Optimistic framing may lead to more realistic expectations for survival, but not for impairment. Trial Registration German Clinical Trials Register (DRKS): DRKS00024466.
Affiliation(s)
- Fiona A. Forth
- Division of Neonatology, Center for Pediatric and Adolescent Medicine, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany
- Florian Hammerle
- Department of Pediatric and Adolescent Psychiatry and Psychotherapy, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany
- Jochem König
- Division of Pediatric Epidemiology, Institute for Medical Biostatistics, Epidemiology and Informatics, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany
- Michael S. Urschitz
- Division of Pediatric Epidemiology, Institute for Medical Biostatistics, Epidemiology and Informatics, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany
- Philipp Neuweiler
- Journalistisches Seminar, Johannes Gutenberg-University Mainz, Mainz, Germany
- Eva Mildenberger
- Division of Neonatology, Center for Pediatric and Adolescent Medicine, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany
- André Kidszun
- Division of Neonatology, Center for Pediatric and Adolescent Medicine, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany
- Division of Neonatology, Department of Pediatrics, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
3.
Naraynassamy C. The Central Role of Ethics in Medical Affairs Practice. Pharmaceut Med 2023. PMID: 37227690. DOI: 10.1007/s40290-023-00477-9.
Abstract
The author argues that, notwithstanding available guidelines and established practices, the elaboration of a formal ethics framework specific to medical affairs could improve good practice internationally. He further argues that deeper and better insights into the theory behind the practice of medical affairs are an essential precondition for elaborating any such framework.
Affiliation(s)
- Carl Naraynassamy
- Centre for Pharmaceutical Medicine Research, King's College London, London, SE1 9NH, UK
4.
Khan YS, Khoodoruth MAS, Ghaffar A, Al Khal A, Alabdullah M. The Impact of Multisource Feedback on Continuing Medical Education, Clinical Performance and Patient Experience: Innovation in a Child and Adolescent Mental Health Service. J CME 2023; 12:2202834. PMID: 37123200. PMCID: PMC10142306. DOI: 10.1080/28338073.2023.2202834.
Abstract
This paper reiterates the important role of multisource feedback (MSF) in continuing medical education/continuing professional development (CME/CPD) and its impact on doctors' performance and patient experience globally. It summarises a unique initiative of robust utilisation of internationally recognised multisource feedback tools in an outpatient child and adolescent mental health service (CAMHS) in Qatar. The process involved the effective adoption and administration of the General Medical Council's (GMC) self-assessment questionnaire (SQ), patient questionnaire (PQ), and colleague questionnaire (CQ), followed by the successful incorporation of these tools in CME/CPD. The original version of the PQ and the instructions-to-the-patient document were translated into Arabic through the blind back-translation technique. This initiative of introducing gold-standard MSF tools and processes into clinical practice, among other quality-improvement projects, has contributed to the improvement of service standards and doctors' clinical practice. Patient satisfaction was measured through the annual patient experience analysis using the Experience of Service Questionnaire (ESQ), whereas changes in doctors' performance were evaluated by comparing annual appraisal scores before and after implementation of this initiative. We have demonstrated that when MSF is obtained impartially and transparently using recognised and valid tools, it can improve patient experience and enhance doctors' performance.
Affiliation(s)
- Yasser Saeed Khan
- Mental Health Service, Hamad Medical Corporation, Doha, Qatar
- Correspondence: Yasser Saeed Khan, Mental Health Service, Hamad Medical Corporation, P.O. Box 3050, Doha, Qatar
- Mohamed Adil Shah Khoodoruth
- Mental Health Service, Hamad Medical Corporation, Doha, Qatar
- Division of Genomics and Precision Medicine, College of Health and Life Sciences, Hamad Bin Khalifa University, Doha, Qatar
- Adeel Ghaffar
- Graduate Medical Education, Hamad Medical Corporation, Doha, Qatar
- Majid Alabdullah
- Mental Health Service, Hamad Medical Corporation, Doha, Qatar
- College of Medicine, Qatar University, Doha, Qatar
5.
Forth FA, Hammerle F, König J, Urschitz MS, Neuweiler P, Mildenberger E, Kidszun A. The COPE-Trial-Communicating prognosis to parents in the neonatal ICU: Optimistic vs. PEssimistic: study protocol for a randomized controlled crossover trial using two different scripted video vignettes to explore communication preferences of parents of preterm infants. Trials 2021; 22:884. PMID: 34872601. PMCID: PMC8647439. DOI: 10.1186/s13063-021-05796-3.
Abstract
BACKGROUND One of the numerous challenges preterm birth poses for parents and physicians is prognostic disclosure. Prognoses are based on scientific evidence and medical experience. They are subject to individual assessment and will generally remain uncertain with regard to the individual. This can result in differences in prognostic framing and thus affect the recipients' perception. In neonatology, data on the effects of prognostic framing are scarce. In particular, it is unclear whether parents prefer a more optimistic or a more pessimistic prognostic framing. OBJECTIVE To explore parents' preferences concerning prognostic framing and its effects on parent-reported outcomes and experiences. To identify predictors (demographic, psychological) of parents' communication preferences. DESIGN, SETTING, PARTICIPANTS Unblinded, randomized controlled crossover trial (RCT) at the Division of Neonatology of the University Medical Center Mainz, Germany, including German-speaking parents or guardians of infants born preterm between 2010 and 2019 with a birth weight < 1500 g. Inclusion of up to 204 families is planned, with possible revision according to a blinded sample size reassessment. INTERVENTION Embedded in an online survey and in pre-specified order, participants will watch two video vignettes depicting a more optimistic vs. a more pessimistic framing in prognostic disclosure to parents of a preterm infant. Apart from prognostic framing, all other aspects of physician-parent communication are standardized in both videos. MAIN OUTCOMES AND MEASURES At baseline and after each video, participants complete a two-part online questionnaire (baseline and post-intervention). Primary outcome is the preference for either a more optimistic or a more pessimistic prognostic framing. 
Secondary outcomes include changes in state-anxiety (STAI-SKD), satisfaction with prognostic framing, evaluation of prognosis, future optimism and hope, preparedness for shared decision-making (each assessed using customized questions), and general impression (customized question), professionalism (adapted from GMC Patient Questionnaire) and compassion (Physician Compassion Questionnaire) of the consulting physician. DISCUSSION This RCT will explore parents' preferences concerning prognostic framing and its effects on physician-parent communication. Results may contribute to a better understanding of parental needs in prognostic disclosure and will be instrumental for a broad audience of clinicians, scientists, and ethicists. TRIAL REGISTRATION German Clinical Trials Register DRKS00024466. Registered on April 16, 2021.
Affiliation(s)
- Fiona A Forth
- Division of Neonatology, Center for Pediatric and Adolescent Medicine, University Medical Center of the Johannes Gutenberg-University Mainz, Langenbeckstrasse 1, 55131, Mainz, Germany
- DFG-Research Training Group "Life Sciences - Life Writing", Institute for the History, Philosophy and Ethics of Medicine, University Medical Center of the Johannes Gutenberg-University Mainz, Am Pulverturm 13, 55131, Mainz, Germany
- Florian Hammerle
- Department of Pediatric and Adolescent Psychiatry and Psychotherapy, University Medical Center of the Johannes Gutenberg-University Mainz, Langenbeckstrasse 1, 55131, Mainz, Germany
- Jochem König
- Division of Pediatric Epidemiology, Institute of Medical Biostatistics, Epidemiology and Informatics (IMBEI), University Medical Center of the Johannes Gutenberg-University Mainz, Obere Zahlbacher Strasse 69, 55131, Mainz, Germany
- Michael S Urschitz
- Division of Pediatric Epidemiology, Institute of Medical Biostatistics, Epidemiology and Informatics (IMBEI), University Medical Center of the Johannes Gutenberg-University Mainz, Obere Zahlbacher Strasse 69, 55131, Mainz, Germany
- Philipp Neuweiler
- Journalistisches Seminar, Johannes Gutenberg-University Mainz, Alte Universitätsstrasse 17, 55116, Mainz, Germany
- Eva Mildenberger
- Division of Neonatology, Center for Pediatric and Adolescent Medicine, University Medical Center of the Johannes Gutenberg-University Mainz, Langenbeckstrasse 1, 55131, Mainz, Germany
- DFG-Research Training Group "Life Sciences - Life Writing", Institute for the History, Philosophy and Ethics of Medicine, University Medical Center of the Johannes Gutenberg-University Mainz, Am Pulverturm 13, 55131, Mainz, Germany
- André Kidszun
- Division of Neonatology, Center for Pediatric and Adolescent Medicine, University Medical Center of the Johannes Gutenberg-University Mainz, Langenbeckstrasse 1, 55131, Mainz, Germany
- Division of Neonatology, Department of Pediatrics, Inselspital, Bern University Hospital, University of Bern, Freiburgstraße, CH-3010, Bern, Switzerland
6.
Sureda E, Chacón-Moscoso S, Sanduvete-Chaves S, Sesé A. A Training Intervention through a 360° Multisource Feedback Model. Int J Environ Res Public Health 2021; 18:9137. PMID: 34501727. PMCID: PMC8431571. DOI: 10.3390/ijerph18179137.
Abstract
Physicians and other health sciences professionals need continuous training, not only in technical aspects of their activity but also in nontechnical, transversal competencies with a cost-efficient impact on the proper functioning of healthcare. The objective of this paper is to analyze the behavioral change among health professionals at a large public hospital following a training intervention on a set of core nontechnical competencies: Teamwork, Adaptability-Flexibility, Commitment-Engagement, Results Orientation, and Leadership Skills for Supervisors. The 360° Multisource Feedback (MSF) model was applied using three sources of information: supervisors, co-workers, and the workers themselves (self-assessment). A quasi-experimental pretest–post-test single-group design with two points in time was utilized. The training intervention improved the scores of only one of the trained competencies—the “Results Orientation” competency—although the scores were slightly inflated. Moreover, significant discrepancies were detected between the three sources, with supervisors awarding the highest scores. The magnitude of behavioral change was related to certain sociodemographic and organizational variables. The study was not immune to the ceiling effect, despite control measures aimed at avoiding it. The empirical evidence suggests that the 360° MSF model must be maintained over time to enhance and reinforce an evaluation culture for better patient care.
Affiliation(s)
- Elena Sureda
- Department of Psychology, University of Balearic Islands, 07122 Palma, Spain
- Salvador Chacón-Moscoso
- Experimental Psychology Department, Universidad de Sevilla, 41018 Sevilla, Spain
- Department of Psychology, Universidad Autónoma de Chile, Santiago 7500138, Chile
- Correspondence: (S.C.-M.); (A.S.)
- Albert Sesé
- Department of Psychology, University of Balearic Islands, 07122 Palma, Spain
- Balearic Islands Health Research Institute (IdISBa), 07120 Palma, Spain
- Correspondence: (S.C.-M.); (A.S.)
7.
Haider A, Azhar A, Tanco KC, Epner M, Naqvi SMAA, Abdelghani E, Reddy A, Dev R, Wu J, Bruera E. Oncology patients' perception of physicians who use an integrated electronic health record (EHR) during clinic visits: PRIME-EHR double-blind, randomized controlled trial. Cancer 2021; 127:3967-3974. PMID: 34264520. DOI: 10.1002/cncr.33778.
Abstract
BACKGROUND Patients with cancer prefer and positively perceive physicians who communicate face-to-face without the use of a computer. However, the use of electronic health records (EHRs) in the examination room remains a practical necessity. On the basis of existing literature, the authors developed and tested an integration model, PRIME-EHR, that focuses on the best-practice guidelines. To their knowledge, no randomized controlled trials (RCTs) have been conducted to test the effectiveness of such models. METHODS In this double-blind, crossover RCT, 120 eligible patients with cancer were enrolled between April 1, 2019 and February 15, 2020 at The University of Texas MD Anderson Cancer Center. The objectives were to compare patients' perceptions of physicians' skills and their overall preference after they watched 2 standardized, scripted video vignettes of physicians: 1 portraying the use of a standard EHR and the other portraying the use of a PRIME-EHR. Actors and patients were blinded to the purpose of the study. Investigators were blinded to the sequence of videos watched by the patients. Validated questionnaires to rate physicians' compassion (0 = best, 50 = worst), communication skills (14 = poor, 70 = excellent), and professionalism (4 = poor, 20 = very good) were used. RESULTS PRIME-EHR, compared with the standard EHR, resulted in better scores for physician compassion (median score, 5 [interquartile range, 0-10] vs 12 [interquartile range, 4-25]; P = .0009), communication skills (median score, 69 [interquartile range, 63-70] vs 61 [interquartile range, 50-69]; P = .0026), and professionalism (median score, 20 [interquartile range, 18-20] vs 18 [interquartile range, 14-20]; P = .0058). The majority of patients preferred physicians who used PRIME-EHR (n = 70 [77%] vs n = 21 [23%]; P < .0001). CONCLUSIONS The PRIME-EHR approach significantly improved patients' perceptions of and preference for the physicians. 
This integrated model of health care delivery has the potential to improve communication and compassion in cancer care.
Affiliation(s)
- Ali Haider
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Ahsan Azhar
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Kimberson C Tanco
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Margeaux Epner
- The University of Texas Health Science Center, McGovern Medical School, Houston, Texas
- Syed Mussadiq Ali Akber Naqvi
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Eman Abdelghani
- Department of Lymphoma/Leukemia, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Akhila Reddy
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Rony Dev
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Jimin Wu
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Eduardo Bruera
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas
8.
Cohen A, Kind T, DeWolfe C. A Qualitative Exploration of the Intern Experience in Assessing Medical Student Performance. Acad Pediatr 2021; 21:728-734. PMID: 33127592. DOI: 10.1016/j.acap.2020.10.014.
Abstract
BACKGROUND Interns play a key role in medical student education, often observing behaviors that others do not. Their role in assessment, however, is less clear. Despite accreditation standards pertaining to residents' assessment skills, they receive little guidance or formal training in it. In order to better prepare residents for their role in medical student assessment, we need to understand their current experience. OBJECTIVE We aimed to describe the first-year resident experience assessing students' performance and providing input to faculty for student clinical performance assessments and grades in the inpatient setting. METHODS Pediatric interns at Children's National Hospital (CN) from February 2018 to February 2019 were invited to participate in semistructured interviews about their experience assessing students. Constant comparative methodology was used to develop themes. Ten interviews were conducted, at which point thematic saturation was reached. RESULTS We identified 4 major themes: 1) Interns feel as though they assess students in meaningful, unique ways. 2) Interns encounter multiple barriers and facilitators to assessing students. 3) Interns voice varying levels of comfort and motivation assessing different areas of student work. 4) Interns see their role in assessment limited to formative rather than summative assessment. CONCLUSIONS These findings depict the intern experience with assessment of medical students at a large pediatric residency program and can help inform ways to develop and utilize the assessment skills of interns.
Affiliation(s)
- Adam Cohen
- Baylor College of Medicine, Texas Children's Hospital (A Cohen), Houston, Tex.
- Terry Kind
- George Washington University, Children's National Hospital (T Kind and C DeWolfe), Washington, DC
- Craig DeWolfe
- George Washington University, Children's National Hospital (T Kind and C DeWolfe), Washington, DC
9.
Carenzo L, Cena T, Carfagna F, Rondi V, Ingrassia PL, Cecconi M, Violato C, Della Corte F, Vaschetto R. Assessing anaesthesiology and intensive care specialty physicians: An Italian language multisource feedback system. PLoS One 2021; 16:e0250404. PMID: 33891626. PMCID: PMC8064525. DOI: 10.1371/journal.pone.0250404.
Abstract
BACKGROUND Physician professionalism, including that of anaesthesiologists and intensive care doctors, should be continuously assessed during training and subsequent clinical practice. Multi-source feedback (MSF) is an assessment system in which healthcare professionals are assessed on several constructs (e.g., communication, professionalism) by multiple people (medical colleagues, coworkers, patients, self) in their sphere of influence. MSF has gained widespread acceptance for both formative and summative assessment of professionalism and for reflecting on how to improve clinical practice. METHODS Instrument development and psychometric analysis (feasibility, reliability, and construct validity via exploratory factor analysis) of MSF questionnaires in postgraduate specialty training in anaesthesiology and intensive care in Italy. Participants were 64 residents of the Anesthesiology Residency Program at the Università del Piemonte Orientale (Italy). The main outcome was the development and psychometric testing of four questionnaires: self, medical colleague, coworker, and patient assessment. RESULTS Overall 605 medical colleague questionnaires (mean 9.3 ±1.9) and 543 coworker surveys (mean 8.4 ±1.4) were collected, providing high mean ratings for all items (> 4.0/5.0). The self-assessment item mean score ranged from 3.1 to 4.3. Patient questionnaires (n = 308) were returned from 31 residents (40%; mean 9.9 ± 6.2). Three items had high percentages of "unable to assess" (> 15%) in coworker questionnaires. Factor analyses resulted in a two-factor solution: clinical management with leadership and accountability, accounting for at least 75% of the total variance for the medical colleague and coworker surveys, with high internal consistency reliability (Cronbach's α > 0.9). Patient questionnaires had a low return rate, so only a limited exploratory analysis was performed.
CONCLUSIONS We provide a feasible and reliable Italian language MSF instrument with evidence of construct validity for the self, coworker and medical colleague questionnaires. Patient feedback was difficult to collect in our setting.
Affiliation(s)
- Luca Carenzo
- Department of Anesthesia and Intensive Care Medicine, Humanitas Clinical and Research Center—IRCCS, Rozzano (MI), Italy
- Tiziana Cena
- Department of Anaesthesia and Intensive Care Medicine, Azienda Ospedaliero-Universitaria “Maggiore della Carità”, Novara, Italy
- Fabio Carfagna
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele–Milan, Italy
- Valentina Rondi
- Dipartimento di Medicina Traslazionale, Università del Piemonte Orientale, Novara, Italy
- Pier Luigi Ingrassia
- Centro di Simulazione, Centro Professionale Sociosanitario, Lugano, Switzerland
- Centro Interdipartimentale di Didattica Innovativa e di Simulazione in Medicina e Professioni Sanitarie, SIMNOVA, Università del Piemonte Orientale, Novara, Italy
- Maurizio Cecconi
- Department of Anesthesia and Intensive Care Medicine, Humanitas Clinical and Research Center—IRCCS, Rozzano (MI), Italy
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele–Milan, Italy
- Claudio Violato
- Departments of Medicine and Medical Education, University of Minnesota Medical School, Minneapolis, MN, United States of America
- Francesco Della Corte
- Department of Anaesthesia and Intensive Care Medicine, Azienda Ospedaliero-Universitaria “Maggiore della Carità”, Novara, Italy
- Dipartimento di Medicina Traslazionale, Università del Piemonte Orientale, Novara, Italy
- Rosanna Vaschetto
- Department of Anaesthesia and Intensive Care Medicine, Azienda Ospedaliero-Universitaria “Maggiore della Carità”, Novara, Italy
- Dipartimento di Medicina Traslazionale, Università del Piemonte Orientale, Novara, Italy
10. van der Meulen MW, Arah OA, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Lombarts KMJMH. When Feedback Backfires: Influences of Negative Discrepancies Between Physicians' Self and Assessors' Scores on Their Subsequent Multisource Feedback Ratings. J Contin Educ Health Prof 2021;41:94-103. [PMID: 34009839; DOI: 10.1097/ceh.0000000000000347]
Abstract
INTRODUCTION With multisource feedback (MSF), physicians might overrate their own performance relative to the scores received from assessors. However, there is limited insight into how perceived divergent feedback affects physicians' subsequent performance scores. METHODS During 2012 to 2018, 103 physicians were evaluated twice, by 684 peers, 242 residents, 999 coworkers, and themselves, in three MSF performance domains. Mixed-effects models quantified associations between the outcome variable "score change" between the first and second MSF evaluations and the explanatory variable "negative discrepancy score" (the number of items on which physicians rated themselves higher than their assessors did) at the first MSF evaluation. We also analyzed whether these associations differed across assessor groups and across physicians' years of experience. RESULTS Forty-nine percent of physicians improved their total MSF score, as assessed by others, at the second evaluation. The number of negative discrepancies was negatively associated with score changes in the domains "organization and (self)management" (b = -0.02; 95% confidence interval [CI], -0.03 to -0.02; SE = 0.004) and "patient-centeredness" (b = -0.03; 95% CI, -0.03 to -0.02; SE = 0.004). For "professional attitude," negative associations between score changes and negative discrepancies existed only for physicians with more than 6 years of experience (b[6-10 years of experience] = -0.03; 95% CI, -0.05 to -0.003; SE = 0.01; b[16-20 years of experience] = -0.03; 95% CI, -0.06 to -0.004; SE = 0.01). DISCUSSION The extent of performance improvement was smaller for physicians confronted with negative discrepancies. Performance scores actually declined when physicians overrated themselves on more than half of the feedback items. Professional-attitude score changes of more experienced physicians confronted with negative discrepancies were affected more adversely.
These physicians might have discounted feedback because of greater confidence in their own performance. Future work should investigate how MSF could improve physicians' performance while taking physicians' confidence into account.
Affiliation(s)
- Mirja W van der Meulen
- Dr. van der Meulen is PhD candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands, and Professional Performance and Compassionate Care Research Group, Department of Medical Psychology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, the Netherlands. Dr. Arah is professor, Department of Epidemiology, University of California, Los Angeles (UCLA), Los Angeles, the United States of America. Dr. Heeneman is professor, Department of Pathology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Cardiovascular Research Institute Maastricht, Maastricht University, Maastricht, the Netherlands. Dr. oude Egbrink is professor, Department of Physiology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands. Dr. van der Vleuten is professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands. Dr. Lombarts is professor, Professional Performance and Compassionate Care Research Group, Department of Medical Psychology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, the Netherlands
11. Benavent M, Sastre J, Escobar IG, Segura A, Capdevila J, Carmona A, Sevilla I, Alonso T, Crespo G, García L, Canal N, de la Cruz G, Gallego J. Physician-perceived utility of the EORTC QLQ-GINET21 questionnaire in the treatment of patients with gastrointestinal neuroendocrine tumours: a multicentre, cross-sectional survey (QUALINETS). Health Qual Life Outcomes 2021;19:38. [PMID: 33516211; PMCID: PMC7847563; DOI: 10.1186/s12955-021-01688-x]
Abstract
Background and objective Patient-reported outcome measures can provide clinicians with valuable information to improve doctor-patient communication and inform clinical decision-making. The aim of this study was to evaluate the physician-perceived utility of the QLQ-GINET21 in routine clinical practice in patients with gastrointestinal neuroendocrine tumours (GI-NETs). Secondary aims were to explore the patient-, clinician- and/or centre-related variables potentially associated with perceived clinical utility. Methods Non-interventional, cross-sectional, multicentre study conducted at 34 hospitals in Spain and Portugal (NCT02853422). Patients diagnosed with GI-NETs completed two health-related quality of life (HRQoL) questionnaires (QLQ-C30, QLQ-GINET21) during a single routine visit. Physicians completed a 14-item ad hoc survey to rate the clinical utility of the QLQ-GINET21 on three dimensions: (1) therapeutic and clinical decision-making, (2) doctor-patient communication, and (3) questionnaire characteristics. Results A total of 199 patients at 34 centres were enrolled by 36 participating clinicians. The highest-rated dimension of the QLQ-GINET21 was questionnaire characteristics (86.9% of responses indicating “high utility”), followed by doctor-patient communication (74.4%) and therapeutic and clinical decision-making (65.8%). One physician-related variable (GI-NET patient volume > 30 patients/year) was associated with high clinical utility, and two variables (older age and less experience treating GI-NETs) with low clinical utility. Conclusions Clinician-perceived clinical utility of the QLQ-GINET21 is high. Clinicians valued the instrument's capacity to provide a better understanding of patient perspectives and to identify the factors with the largest influence on patient HRQoL.
Affiliation(s)
- Marta Benavent
- Virgen Del Rocío University Hospital, Institute of Biomedicine of Sevilla (IBIS), Av. Manuel Siurot, S/n, 41013, Sevilla, Spain
- Javier Sastre
- San Carlos Clinic Hospital, San Carlos Hospital Research Institute (IdISSC), Madrid, Spain
- Angel Segura
- Politécnico La Fe University Hospital, Valencia, Spain
- Jaume Capdevila
- Teknon Oncologic Institut (IOT), Teknon Medical Center, Vall Hebron University Hospital, Vall Hebron Institute of Oncology (VHIO), Barcelona, Spain
- Isabel Sevilla
- Clinical and Translational Research in Cancer, Biomedical Research Institute of Malaga (IBIMA), Regional University Hospital and Virgen de la Victoria University Hospital of Málaga, Málaga, Spain
- Neus Canal
- IQVIA Information S.A., Barcelona, Spain
- Javier Gallego
- Hospital General Universitario de Elche, Elche, Alicante, Spain
12. Prakash J, Chatterjee K, Srivastava K, Chauhan VS, Sharma R. Workplace based assessment: A review of available tools and their relevance. Ind Psychiatry J 2020;29:200-204. [PMID: 34158702; PMCID: PMC8188940; DOI: 10.4103/ipj.ipj_225_20]
Abstract
Workplace-based assessment (WPBA) appears to be a promising approach for more comprehensive assessment of learners. Relevant literature was collated and analyzed for its relevance, salience, and merit. Many WPBA tools are in use across educational institutions; they span multiple domains and cover the entire duration of workplace learning. WPBA supports holistic assessment with structured measures, real-time feedback, and continuous professional development. To date it is used mainly for formative assessment and has limited utility in summative assessment. WPBA tools hold promise for bringing novelty, objectivity, and a holistic approach to assessment.
Affiliation(s)
- Jyoti Prakash
- Department of Psychiatry, Armed Forces Medical College, Pune, Maharashtra, India
- K Chatterjee
- Department of Psychiatry, Armed Forces Medical College, Pune, Maharashtra, India
- K Srivastava
- Department of Psychiatry, Armed Forces Medical College, Pune, Maharashtra, India
- V S Chauhan
- Department of Psychiatry, Armed Forces Medical College, Pune, Maharashtra, India
- R Sharma
- Department of Psychiatry, Armed Forces Medical College, Pune, Maharashtra, India
13. Azhar A, Tanco K, Haider A, Park M, Liu D, Williams JL, Bruera E. Challenging the Status Quo of Physician Attire in the Palliative Care Setting. Oncologist 2020;25:627-637. [PMID: 32073181; DOI: 10.1634/theoncologist.2019-0568]
Abstract
BACKGROUND, AIM, AND HYPOTHESIS This randomized controlled trial aimed to compare the impact of a physician's attire on the perceptions of patients with cancer of compassion, professionalism, and physician preference. Our hypothesis was that patients would perceive the physician with formal attire as more compassionate than the physician wearing casual attire. MATERIALS AND METHODS One hundred five adult follow-up patients with advanced cancer were randomized to watch two standardized, 3-minute video vignettes with the same script, depicting a routine physician-patient clinic encounter. The videos showed a physician in formal attire with tie and buttoned-up white coat and one in casual attire without a tie or white coat. Actors, patients, and investigators were blinded to the purpose of the study and to the videos watched. After each video, patients completed validated questionnaires rating their perception of physician compassion, professionalism, and their overall preference for the physician. RESULTS There were no significant differences between formal and casual attire for compassion (median [interquartile range], 25 [10-31] vs. 20 [8-27]; p = .31) or professionalism (17 [13-21] vs. 18 [14-22]; p = .42). Thirty percent of patients preferred formal attire, 31% preferred casual attire, and 38% had no preference. Subgroup analysis showed no statistically significant differences across age, sex, marital status, and education level. CONCLUSION Doctors' attire did not affect the perceptions of patients with cancer of the physician's level of compassion and professionalism, nor did it influence the patients' preference for their doctor or their trust and confidence in the doctor's ability to provide care. More studies are needed in this area of communication skills. Clinical trial identification number: NCT03168763. IMPLICATIONS FOR PRACTICE The significance of physician attire as a means of nonverbal communication has not been well characterized.
It is an important element to consider, as patient preferences vary geographically, are influenced by cultural beliefs, and may vary across care settings. Previous studies consisted of nonblinded surveys and found increasing confidence in physicians wearing a professional white coat; to the authors' knowledge, no randomized controlled trials had confirmed those survey findings. In this randomized, blinded clinical trial, the researchers found that the physician's attire did not affect patients' perception of the physician's level of compassion and professionalism. Attire also did not influence the patients' preference for their doctor or their trust and confidence in the doctor's ability to provide care.
Affiliation(s)
- Ahsan Azhar
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kimberson Tanco
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Ali Haider
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Minjeong Park
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Diane Liu
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Janet L Williams
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Eduardo Bruera
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
14. Hou J, He Y, Zhao X, Thai J, Fan M, Feng Y, Huang L. The effects of job satisfaction and psychological resilience on job performance among residents of the standardized residency training: a nationwide study in China. Psychol Health Med 2020;25:1106-1118. [PMID: 31992067; DOI: 10.1080/13548506.2019.1709652]
Abstract
High resident job performance is essential for effective medical professionalism. To date, few studies have investigated potential predictors of job performance among residents in standardized residency training (SRT) programs in China. Therefore, a nationwide survey of Chinese residents in SRT programs was conducted to evaluate the impact of job satisfaction and psychological resilience on job performance. A total of 1146 residents from 9 hospitals were recruited. Demographic and work-related information, job satisfaction, psychological resilience and job performance were collected through questionnaires. Hierarchical regression analyses indicated that "work pressure", "doctor-patient conflict", "intrinsic job satisfaction" and "psychological resilience" were significant predictors of job performance for residents in SRT programs, together explaining 61.3% of the variance, while the three dimensions of psychological resilience (tenacity, strength and optimism) accounted for 27.2% of the variance. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve showed that resilience had higher predictive accuracy than the other three subscales. This study indicated that intrinsic job satisfaction and psychological resilience had a significant influence on job performance. Strategies to improve residents' intrinsic job satisfaction and psychological resilience may be efficacious ways to enhance their job performance.
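The AUC comparison reported above relies on the fact that, for a binary outcome, the area under the ROC curve equals the Mann-Whitney rank statistic: the probability that a randomly chosen positive case receives a higher predictor score than a randomly chosen negative case. A minimal illustrative sketch (not the study's code; variable names are hypothetical):

```python
import numpy as np

def auc_mann_whitney(y_true, scores) -> float:
    """ROC AUC via average ranks (Mann-Whitney U), handling tied scores."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()
    sorted_scores = scores[order]
    ranks_sorted = np.arange(1, len(scores) + 1, dtype=float)
    # average the ranks of tied scores
    i = 0
    while i < len(scores):
        j = i
        while j + 1 < len(scores) and sorted_scores[j + 1] == sorted_scores[i]:
            j += 1
        ranks_sorted[i:j + 1] = ranks_sorted[i:j + 1].mean()
        i = j + 1
    ranks = np.empty(len(scores))
    ranks[order] = ranks_sorted
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    # U statistic of the positives, normalized to [0, 1]
    return (ranks[y_true].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Perfect separation gives AUC = 1.0, chance-level prediction gives about 0.5, which is how one subscale's "predictive accuracy" can be ranked against another's.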
Affiliation(s)
- Jiaojiao Hou
- Medical Education Division & Department of Psychiatry, Tongji Hospital, Tongji University School of Medicine, Shanghai, China
- Shanghai East Hospital affiliated to Tongji University School of Medicine, Shanghai, China
- Yifei He
- Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany
- Department of Translation Studies, Linguistics and Cultural Studies, Johannes Gutenberg University Mainz, Mainz, Germany
- Xudong Zhao
- Shanghai East Hospital affiliated to Tongji University School of Medicine, Shanghai, China
- Pudong New Area Mental Health Center, Shanghai, China
- Jessica Thai
- College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Mingxiang Fan
- Tongji University School of Medicine, Shanghai, China
- Lei Huang
- Medical Education Division & Department of Psychiatry, Tongji Hospital, Tongji University School of Medicine, Shanghai, China
15. Baines R, Zahra D, Bryce M, de Bere SR, Roberts M, Archer J. Is Collecting Patient Feedback "a Futile Exercise" in the Context of Recertification? Acad Psychiatry 2019;43:570-576. [PMID: 31309453; DOI: 10.1007/s40596-019-01088-w]
Abstract
OBJECTIVE Patient feedback is considered integral to maintaining excellence, patient safety, and professional development. However, the collection of and reflection on patient feedback may pose unique challenges for psychiatrists. This research explores the value, relevance, and acceptability of patient feedback in the context of recertification. METHODS The authors conducted statistical and inductive thematic analyses of psychiatrist responses (n = 1761) to a national census survey of all doctors (n = 26,171) licensed to practice in the UK. Activity theory was used to develop a theoretical understanding of the issues identified. RESULTS Psychiatrists rate patient feedback as more useful than do some other specialties. However, despite asking a comparable number of patients, psychiatrists receive a significantly lower response rate than most other specialties. Inductive thematic analysis identified six key themes: (1) job role, setting, and environment; (2) reporting issues; (3) administrative barriers; (4) limitations of existing patient feedback tools; (5) attitudes towards patient feedback; and (6) suggested solutions. CONCLUSIONS The value, relevance, and acceptability of patient feedback are undermined by systemic tensions between division of labor, community understanding, tool complexity, and restrictive rule application. This is not to suggest that patient feedback is "a futile exercise." Rather, existing feedback processes should be refined; in particular, the value and acceptability of patient feedback tools should be explored from both patient and professional perspectives. If the issues identified remain unresolved, patient feedback risks becoming a "futile exercise" that is denied the opportunity to enhance patient safety, quality of care, and professional development.
16. Sehlbach C, Govaerts MJB, Mitchell S, Teunissen TGJ, Smeenk FWJM, Driessen EW, Rohde GGU. Perceptions of people with respiratory problems on physician performance evaluation: a qualitative study. Health Expect 2019;23:247-255. [PMID: 31747110; PMCID: PMC6978864; DOI: 10.1111/hex.12999]
Abstract
BACKGROUND Despite increasing calls for patient and public involvement in health-care quality improvement, the question of how patient evaluations can contribute to physician learning and performance assessment has received scant attention. OBJECTIVE To explore patients' perspectives on their role in evaluating physician performance and in supporting physicians' learning and decision making on professional competence. DESIGN A qualitative study based on semi-structured interviews. SETTING AND PARTICIPANTS The study took place in a secondary care setting in the Netherlands. The authors selected 25 patients from two Dutch hospitals and through the Dutch Lung Foundation, using purposive sampling. METHODS Data were analysed according to the principles of template analysis, using an a priori coding framework developed from the literature on patient empowerment, feedback and performance assessment. RESULTS The analysis identified three predominant patient perspectives: the proactive perspective, the restrained perspective and the outsider perspective. These perspectives differed in terms of perceived power dynamics within the doctor-patient relationship and patients' perceived ability and willingness to provide feedback and evaluate their physician's performance. Patients' perspectives thus affected the role patients envisaged for themselves in evaluating physician performance. DISCUSSION AND CONCLUSION Although not all patients are equally suitable or willing to be involved, patients can play a role in evaluating physician performance and continuing training through formative approaches. To involve patients successfully, it is imperative to distinguish between different patient perspectives and to empower patients by ensuring a safe environment for feedback.
Affiliation(s)
- Carolin Sehlbach
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Marjan J B Govaerts
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Truus G J Teunissen
- Patient Contributor and Researcher, Department of Medical Humanities, Amsterdam Public Health research institute (APH), Amsterdam UMC Free University Medical Centre, Amsterdam, The Netherlands
- Frank W J M Smeenk
- School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Respiratory Medicine, Catharina Hospital, Eindhoven, The Netherlands
- Erik W Driessen
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Gernot G U Rohde
- Department of Respiratory Medicine, University Hospital, Goethe University, Frankfurt, Germany
17. Haider A, Tanco K, Epner M, Azhar A, Williams J, Liu DD, Bruera E. Physicians' Compassion, Communication Skills, and Professionalism With and Without Physicians' Use of an Examination Room Computer: A Randomized Clinical Trial. JAMA Oncol 2019;4:879-881. [PMID: 29710136; DOI: 10.1001/jamaoncol.2018.0343]
Affiliation(s)
- Ali Haider
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston
- Kimberson Tanco
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston
- Margeaux Epner
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston
- Ahsan Azhar
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston
- Janet Williams
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston
- Diane D Liu
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston
- Eduardo Bruera
- Department of Palliative Care, Rehabilitation, and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston
18. van der Meulen MW, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Lombarts KMJMH. Exploring Validity Evidence Associated With Questionnaire-Based Tools for Assessing the Professional Performance of Physicians: A Systematic Review. Acad Med 2019;94:1384-1397. [PMID: 31460937; DOI: 10.1097/acm.0000000000002767]
Abstract
PURPOSE To collect and examine, using an argument-based validity approach, validity evidence for questionnaire-based tools used to assess physicians' clinical, teaching, and research performance. METHOD In October 2016, the authors conducted a systematic search of the literature for articles, published from inception to October 2016, about questionnaire-based tools for assessing physicians' professional performance. They included studies reporting validity evidence for tools used to assess physicians' clinical, teaching, and research performance. Using Kane's validity framework, they extracted data on the four inferences in the validity argument: scoring, generalization, extrapolation, and implications. RESULTS They included 46 articles on 15 tools assessing clinical performance and 72 articles on 38 tools assessing teaching performance; they found no studies on research performance tools. Only 12 of the tools (23%) gathered evidence on all four components of Kane's validity argument. Validity evidence focused mostly on the generalization and extrapolation inferences. Scoring evidence showed mixed results, and evidence on implications was generally missing. CONCLUSIONS Based on the argument-based approach to validity, not all questionnaire-based tools support their intended use. Evidence concerning the implications of questionnaire-based tools is mostly lacking, weakening the argument for using these tools for formative and, especially, summative assessments of physicians' clinical and teaching performance. More research on implications is needed to strengthen the argument and to support decisions based on these tools, particularly high-stakes, summative decisions. To meaningfully assess academic physicians in their tripartite role as doctor, teacher, and researcher, additional assessment tools are needed.
Affiliation(s)
- Mirja W van der Meulen
- M.W. van der Meulen is PhD candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-3636-5469. A. Smirnova is PhD graduate and researcher, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-4491-3007. S. Heeneman is professor, Department of Pathology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-6103-8075. M.G.A. oude Egbrink is professor, Department of Physiology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-5530-6598. C.P.M. van der Vleuten is professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119. K.M.J.M.H. Lombarts is professor, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0001-6167-0620
19. Narayanan A, Farmer EA, Greco MJ. Multisource feedback as part of the Medical Board of Australia's Professional Performance Framework: outcomes from a preliminary study. BMC Med Educ 2018;18:323. [PMID: 30594157; PMCID: PMC6310994; DOI: 10.1186/s12909-018-1432-7]
Abstract
BACKGROUND The recent introduction of the Professional Performance Framework by the Medical Board of Australia is intended to strengthen continuing professional development for the 100,000 or so medical practitioners in Australia. An important option within the Framework is the use of multisource feedback from patients, colleagues and self-evaluations to allow doctors to reflect on their performance and identify methods for self-improvement. The aim of this study is to explore the relationships between patient feedback, colleague feedback, and self-evaluation using the same questionnaires as used by patients and colleagues. METHODS Feedback data for around 2000 doctors belonging to four different groups were collected through non-probability sampling from nearly 100,000 patients and 24,000 colleagues. Reliability analysis was performed using single measures intraclass coefficients, Cronbach' alpha and signal-to-noise ratios. Analysis of variance was used to identify significant differences in scores between items and sub-populations of doctors; principal component analysis involving Kaiser-Meyer-Olkin (KMO) sampling adequacy and Bartlett's test for sphericity was used to reveal components of doctor performance; and correlation analysis was used for identifying convergence between sets of scores from different sources. RESULTS Patients rated doctors highest on respect shown and lowest on reassurance provided. Colleagues rated doctors highest on trustworthiness and lowest on ability to say 'no'. With regard to self-evaluation, doctors gave themselves lower scores on the patient questionnaire and the colleague questionnaire (10 and 12%, respectively) than they received from their patients and colleagues. There were weak but positive correlations between self-scores and scores received indicating some convergence of agreement, with doctors feeling more comfortable with self-evaluation from the perspective of patients than from colleagues. 
CONCLUSIONS Supplementing patient and colleague feedback with self-evaluation may help doctors confirm for themselves areas for enhanced CPD through convergence. If self-evaluation is used, the colleague questionnaire may be sufficient, since aspects of clinical competence, management, communication and leadership as well as patient care can be addressed through colleague items. Mentoring of doctors in CPD should aim to make doctors feel more comfortable about being rated by colleagues to enhance convergence between self-scores and evaluations from the perspective of colleagues.
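As an illustrative aside (not code or data from the study above), the internal-consistency statistic named in its methods, Cronbach's alpha, can be computed directly from a respondents-by-items score matrix. The function name and the synthetic ratings below are invented for the example:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of items
    item_vars = ratings.var(axis=0, ddof=1)       # sample variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic example: 5 respondents rating 4 questionnaire items on a 1-5 scale
scores = [[4, 5, 4, 5],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 3, 2, 3],
          [4, 4, 5, 4]]
print(round(cronbach_alpha(scores), 3))  # → 0.914
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a questionnaire scale.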
Affiliation(s)
- Ajit Narayanan
- Computer and Mathematical Sciences, School of Engineering, Auckland University of Technology, 2-14 Wakefield Street, Auckland, 1010 New Zealand
- Michael J. Greco
- School of Medicine, Gold Coast Campus, Griffith University, Southport, Australia

20
Lalani M, Baines R, Bryce M, Marshall M, Mead S, Barasi S, Archer J, Regan de Bere S. Patient and public involvement in medical performance processes: A systematic review. Health Expect 2018; 22:149-161. [PMID: 30548359 PMCID: PMC6433319 DOI: 10.1111/hex.12852] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2018] [Revised: 10/15/2018] [Accepted: 11/07/2018] [Indexed: 12/26/2022] Open
Abstract
Background Patient and public involvement (PPI) continues to develop as a central policy agenda in health care. The patient voice is seen as relevant and informative, and it can drive service improvement. However, critical exploration of PPI's role within monitoring and informing medical performance processes remains limited. Objective To explore and evaluate the contribution of PPI in medical performance processes to understand its extent, purpose and process. Search strategy The electronic databases PubMed, PsycINFO and Google Scholar were systematically searched for studies published between 2004 and 2018. Inclusion criteria Studies involving doctors and patients and all forms of patient input (eg, patient feedback) associated with medical performance were included. Data extraction and synthesis Using an inductive approach to analysis and synthesis, a coding framework was developed which was structured around three key themes: issues that shape PPI in medical performance processes; mechanisms for PPI; and the potential impacts of PPI on medical performance processes. Main results From 4772 studies, 48 articles (from 10 countries) met the inclusion criteria. Findings suggest that the extent of PPI in medical performance processes globally is highly variable and that it is achieved primarily through patient feedback or complaints. The emerging evidence suggests that PPI can encourage improvements in the quality of patient care, enable professional development and promote professionalism. Discussion and conclusions Developing more innovative methods of PPI beyond patient feedback and complaints may help transform PPI into a collaborative partnership, facilitating the development of proactive relationships between the medical profession, patients and the public.
Affiliation(s)
- Mirza Lalani
- Department of Primary Care and Population Health, University College London, London, UK
- Rebecca Baines
- Collaboration for the Advancement of Medical Education Research and Assessment, Faculty of Medicine and Dentistry, University of Plymouth, Plymouth, UK
- Marie Bryce
- Collaboration for the Advancement of Medical Education Research and Assessment, Faculty of Medicine and Dentistry, University of Plymouth, Plymouth, UK
- Martin Marshall
- Department of Primary Care and Population Health, University College London, London, UK
- Sol Mead
- General Medical Council, Registration and Revalidation Directorate, London, UK; NHS England London and Southeast Regions, Regional Medical Directorate, London, UK
- Stephen Barasi
- General Medical Council, Registration and Revalidation Directorate (Wales), Wales, UK
- Julian Archer
- Collaboration for the Advancement of Medical Education Research and Assessment, Faculty of Medicine and Dentistry, University of Plymouth, Plymouth, UK
- Samantha Regan de Bere
- Collaboration for the Advancement of Medical Education Research and Assessment, Faculty of Medicine and Dentistry, University of Plymouth, Plymouth, UK

21
Baines R, Donovan J, Regan de Bere S, Archer J, Jones R. Patient and public involvement in the design, administration and evaluation of patient feedback tools, an example in psychiatry: a systematic review and critical interpretative synthesis. J Health Serv Res Policy 2018; 24:130-142. [PMID: 30477354 DOI: 10.1177/1355819618811866] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
BACKGROUND Patient feedback is considered integral to healthcare design, delivery and reform. However, while there is a strong policy commitment to evidencing patient and public involvement (PPI) in the design of patient feedback tools, it remains unclear whether this happens in practice. METHODS A systematic review using thematic analysis and critical interpretative synthesis of peer-reviewed and grey literature published between 2007 and 2017 exploring the presence of PPI in the design, administration and evaluation of patient feedback tools for practising psychiatrists. The research process was carried out in collaboration with a volunteer mental health patient research partner. RESULTS Fourteen articles (10 peer-reviewed, four grey literature) discussing the development of nine patient feedback tools were included. Six of the nine tools reviewed were designed from a professional perspective only. Tool content and its categorization primarily remained at the professional's discretion. Patient participation rates, presence of missing data and psychometric validation were used to determine validity and patient acceptability. In most instances, patients remained passive recipients with limited opportunity to actively influence change at any stage. No article reviewed reported PPI in all aspects of tool design, administration or evaluation. CONCLUSIONS The majority of patient feedback tools are designed, administered and evaluated from the professional perspective only. Existing tools appear to assume that: professional and patient agendas are synonymous; psychometric validation is indicative of patient acceptability; and psychiatric patients do not have the capacity or desire to be involved. Future patient feedback tools should be co-produced from the outset to ensure they are valued by all those involved. A reconsideration of the purpose of patient feedback, and what constitutes valid patient feedback, is also required.
Affiliation(s)
- Rebecca Baines
- Research Assistant, Collaboration for the Advancement of Medical Education Research & Assessment, Faculty of Medicine and Dentistry, University of Plymouth, UK
- John Donovan
- Volunteer Mental Health Patient Research Partner, UK
- Sam Regan de Bere
- Lecturer in Medical Humanities, Collaboration for the Advancement of Medical Education Research & Assessment, Faculty of Medicine and Dentistry, University of Plymouth, UK
- Julian Archer
- Collaboration for the Advancement of Medical Education Research & Assessment, Faculty of Medicine and Dentistry, University of Plymouth, UK
- Ray Jones
- Professor of Health Informatics, School of Nursing and Midwifery, University of Plymouth, UK

22
Al-Jabr H, Twigg MJ, Scott S, Desborough JA. Patient feedback questionnaires to enhance consultation skills of healthcare professionals: A systematic review. PATIENT EDUCATION AND COUNSELING 2018; 101:1538-1548. [PMID: 29598964 DOI: 10.1016/j.pec.2018.03.016] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/11/2017] [Revised: 02/26/2018] [Accepted: 03/15/2018] [Indexed: 06/08/2023]
Abstract
OBJECTIVE To identify patient feedback questionnaires that assess the development of consultation skills (CSs) of practitioners. METHODS We conducted a systematic search using seven databases from inception to January 2017 to identify self-completed patient feedback questionnaires assessing and enhancing the development of CSs of individual practitioners. Results were checked for eligibility by three authors, and disagreements were resolved by discussion. Reference lists of relevant studies and Open Grey were searched for additional studies. RESULTS Of 16,312 studies retrieved, sixteen were included, describing twelve patient feedback questionnaires that were mostly designed for physicians in primary care settings. Most questionnaires had limited data regarding their psychometric properties, except for the Doctor Interpersonal Skills Questionnaire (DISQ). Most studies conducted follow-up, capturing positive views of practitioners regarding the process (n = 14). Feedback was repeated in only three studies, which demonstrated different levels of improvement in practitioners' performance. CONCLUSION The questionnaires identified were mainly focused on physicians; to support the wider use of patient feedback, questionnaires need to be validated with other groups of practitioners. PRACTICE IMPLICATIONS Several patient feedback questionnaires are available and show potential for supporting practitioners' development. Valid questionnaires should be used with appropriate practitioner groups to build more evidence on the impact they may have on actual consultations.
Affiliation(s)
- Hiyam Al-Jabr
- School of Pharmacy, University of East Anglia, Norwich, UK
- Sion Scott
- School of Pharmacy, University of East Anglia, Norwich, UK

23
Burt J, Abel G, Elliott MN, Elmore N, Newbould J, Davey A, Llanwarne N, Maramba I, Paddison C, Campbell J, Roland M. The Evaluation of Physicians' Communication Skills From Multiple Perspectives. Ann Fam Med 2018; 16:330-337. [PMID: 29987081 PMCID: PMC6037531 DOI: 10.1370/afm.2241] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/01/2017] [Revised: 01/30/2018] [Accepted: 02/27/2018] [Indexed: 11/09/2022] Open
Abstract
PURPOSE To examine how family physicians', patients', and trained clinical raters' assessments of physician-patient communication compare by analysis of individual appointments. METHODS Analysis of survey data from patients attending face-to-face appointments with 45 family physicians at 13 practices in England. Immediately post-appointment, patients and physicians independently completed a questionnaire including 7 items assessing communication quality. A sample of videotaped appointments was assessed by trained clinical raters, using the same 7 communication items. Patient, physician, and rater communication scores were compared using correlation coefficients. RESULTS Included were 503 physician-patient pairs; of those, 55 appointments were also evaluated by trained clinical raters. Physicians scored themselves, on average, lower than patients (mean physician score 74.5; mean patient score 94.4); 63.4% (319) of patient-reported scores were the maximum of 100. The mean of rater scores from 55 appointments was 57.3. There was a near-zero correlation coefficient between physician-reported and patient-reported communication scores (0.009, P = .854), and between physician-reported and trained rater-reported communication scores (-0.006, P = .69). There was a moderate and statistically significant association, however, between patient and trained-rater scores (0.35, P = .042). CONCLUSIONS The lack of correlation between physician scores and those of others indicates that physicians' perceptions of good communication during their appointments may differ from those of external peer raters and patients. Physicians may not be aware of how patients experience their communication practices; peer assessment of communication skills is an important approach in identifying areas for improvement.
Affiliation(s)
- Jenni Burt
- The Healthcare Improvement Studies Institute (THIS Institute), University of Cambridge, Cambridge Biomedical Campus, Cambridge, United Kingdom
- Gary Abel
- University of Exeter Medical School, St Luke's Campus, Exeter, United Kingdom
- Natasha Elmore
- The Healthcare Improvement Studies Institute (THIS Institute), University of Cambridge, Cambridge Biomedical Campus, Cambridge, United Kingdom
- Antoinette Davey
- University of Exeter Medical School, St Luke's Campus, Exeter, United Kingdom
- Nadia Llanwarne
- Cambridge Centre for Health Services Research, University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom
- Inocencio Maramba
- University of Exeter Medical School, St Luke's Campus, Exeter, United Kingdom
- Charlotte Paddison
- Cambridge Centre for Health Services Research, University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom
- John Campbell
- University of Exeter Medical School, St Luke's Campus, Exeter, United Kingdom
- Martin Roland
- Cambridge Centre for Health Services Research, University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom

24
Stevens S, Read J, Baines R, Chatterjee A, Archer J. Validation of Multisource Feedback in Assessing Medical Performance: A Systematic Review. THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS 2018; 38:262-268. [PMID: 30157152 DOI: 10.1097/ceh.0000000000000219] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
INTRODUCTION Over the past 10 years, a number of systematic reviews have evaluated the validity of multisource feedback (MSF) to assess and quality-assure medical practice. The purpose of this study is to synthesize the results from existing reviews to provide a holistic overview of the validity evidence. METHODS This review identified eight systematic reviews evaluating the validity of MSF published between January 2006 and October 2016. Using a standardized data extraction form, two independent reviewers extracted study characteristics. A framework of validation developed by the American Psychological Association was used to appraise the validity evidence within each systematic review. RESULTS In terms of validity evidence, each of the eight reviews demonstrated evidence across at least one domain of the American Psychological Association's validity framework. Evidence of assessment validity within the domains of "internal structure" and "relationship to other variables" has been well established. However, the domains of content validity (ie, ensuring that MSF tools measure what they are intended to measure); consequential validity (ie, evidence of the intended or unintended consequences MSF assessments may have on participants or wider society), and response process validity (ie, the process of standardization and quality control in the delivery and completion of assessments) remain limited. DISCUSSION Evidence for the validity of MSF has, across a number of domains, been well established. However, the size and quality of the existing evidence remains variable. To determine the extent to which MSF is considered a valid instrument to assess medical performance, future research is required to determine the following: (1) how best to design and deliver MSF assessments that address the identified limitations of existing tools and (2) how to ensure that involvement within MSF supports positive changes in practice. 
Such research is integral if MSF is to continue to inform medical performance and subsequent improvements in the quality and safety of patient care.
Affiliation(s)
- Sebastian Stevens
- Collaboration for the Advancement of Medical Education Research & Assessment (CAMERA), Plymouth University Peninsula Schools of Medicine and Dentistry (PU PSMD), University of Plymouth, Plymouth, PL, UK

25
Tanco K, Azhar A, Rhondali W, Rodriguez-Nunez A, Liu D, Wu J, Baile W, Bruera E. The Effect of Message Content and Clinical Outcome on Patients' Perception of Physician Compassion: A Randomized Controlled Trial. Oncologist 2017; 23:375-382. [PMID: 29118266 DOI: 10.1634/theoncologist.2017-0326] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2017] [Accepted: 09/22/2017] [Indexed: 01/29/2023] Open
Abstract
BACKGROUND In a previous randomized crossover study, patients perceived a physician delivering a more optimistic message (MO) as more compassionate and professional. However, the impact of the clinical outcome of the patient on patient's perception of physician's level of compassion and professionalism has not been previously studied. Our aim was to determine if the reported clinical outcome modified the patient's perception of physician compassion, professionalism, impression, and preference for physician. MATERIALS AND METHODS One hundred twenty-eight advanced cancer patients in an outpatient Supportive Care Center were randomized to complete validated questionnaires about patients' perception of physician's level of compassion, professionalism, impression, and preference of physician for themselves and their family after watching scripted videos depicting a physician delivering an MO versus a less optimistic (LO) message followed by a clinical vignette depicting a worse outcome. RESULTS Median age was 61 years and 55% were female. There was no difference in compassion score after the vignette in the MO and LO groups. However, there were significantly worse overall impression and professionalism scores in both the MO and LO groups after the vignette. In the MO group, preference for the physician for themselves and their family significantly decreased after the vignette. CONCLUSION Seeing a worse clinical outcome did not change the patients' appraisal of an inappropriately optimistic physician. However, it reduced the overall impression of both physicians that conveyed an MO or an LO message and it also resulted in less likelihood of choosing the MO physician for themselves and their family. IMPLICATIONS FOR PRACTICE The study found that a patient's perception of a physician's compassion did not change after reading a vignette describing a negative clinical outcome, regardless of whether the physician had given a more or a less optimistic message to the patient. 
However, the results suggested that patients perceived worse professionalism and overall physician impression scores for both more and less optimistic physicians and lower likelihood to choose the more optimistic physician for themselves and their family.
Affiliation(s)
- Kimberson Tanco
- Departments of Palliative, Rehabilitation and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Ahsan Azhar
- Departments of Palliative, Rehabilitation and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Wadih Rhondali
- Consultations Souffrance au Travail et Psychopathologie du Travail, Marseille, France
- Alfredo Rodriguez-Nunez
- Programa Medicina Paliativa y Cuidados Continuos, Facultad de Medicina, Pontificia Universidad Catolica de Chile, Santiago, Chile
- Diane Liu
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Jimin Wu
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Walter Baile
- Departments of Psychiatry and Behavioral Science, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Eduardo Bruera
- Departments of Palliative, Rehabilitation and Integrative Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA

26
Scheepers RA, Lases LSS, Arah OA, Heineman MJ, Lombarts KMJMH. Job Resources, Physician Work Engagement, and Patient Care Experience in an Academic Medical Setting. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2017; 92:1472-1479. [PMID: 28471782 DOI: 10.1097/acm.0000000000001719] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
PURPOSE Physician work engagement is associated with better work performance and fewer medical errors; however, whether work-engaged physicians perform better from the patient perspective is unknown. Although the availability of job resources (autonomy, colleague support, participation in decision making, opportunities for learning) bolsters work engagement, this relationship is understudied among physicians. This study investigated associations of physician work engagement with patient care experience and job resources in an academic setting. METHOD The authors collected patient care experience evaluations, using nine validated items from the Dutch Consumer Quality index in two academic hospitals (April 2014 to April 2015). Physicians reported job resources and work engagement using, respectively, the validated Questionnaire on Experience and Evaluation of Work and the Utrecht Work Engagement Scale. The authors conducted multivariate adjusted mixed linear model and linear regression analyses. RESULTS Of the 9,802 eligible patients and 238 eligible physicians, respectively, 4,573 (47%) and 185 (78%) participated. Physician work engagement was not associated with patient care experience (B = 0.01; 95% confidence interval [CI] = -0.02 to 0.03; P = .669). However, learning opportunities (B = 0.28; 95% CI = 0.05 to 0.52; P = .019) and autonomy (B = 0.31; 95% CI = 0.10 to 0.51; P = .004) were positively associated with work engagement. CONCLUSIONS Higher physician work engagement did not translate into better patient care experience. Patient experience may benefit from physicians who deliver stable quality under varying levels of work engagement. From the physicians' perspective, autonomy and learning opportunities could safeguard their work engagement.
Affiliation(s)
- Renée A Scheepers
- R.A. Scheepers is postdoctoral researcher, Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands. L.S.S. Lases is PhD candidate, Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands. O.A. Arah is professor, Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands, professor, Department of Epidemiology, Fielding School of Public Health, University of California, Los Angeles (UCLA), Los Angeles, California, and professor, UCLA Center for Health Policy Research, Los Angeles, California. M.J. Heineman is professor, Professional Performance Research Group, Center for Evidence-Based Education, and member, Board of Directors, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands. K.M.J.M.H. Lombarts is professor, Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands

27
Nimmo S. Revalidation, appraisal and multisource feedback for occupational physicians. Occup Med (Lond) 2017; 67:413-415. [DOI: 10.1093/occmed/kqx062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
28
Li H, Ding N, Zhang Y, Liu Y, Wen D. Assessing medical professionalism: A systematic review of instruments and their measurement properties. PLoS One 2017; 12:e0177321. [PMID: 28498838 PMCID: PMC5428933 DOI: 10.1371/journal.pone.0177321] [Citation(s) in RCA: 47] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2016] [Accepted: 04/25/2017] [Indexed: 11/18/2022] Open
Abstract
BACKGROUND Over the last three decades, various instruments were developed and employed to assess medical professionalism, but their measurement properties have yet to be fully evaluated. This study aimed to systematically evaluate these instruments' measurement properties and the methodological quality of their related studies within a universally acceptable standardized framework and then provide corresponding recommendations. METHODS A systematic search of the electronic databases PubMed, Web of Science, and PsycINFO was conducted to collect studies published from 1990-2015. After screening titles, abstracts, and full texts for eligibility, the articles included in this study were classified according to their respective instrument's usage. A two-phase assessment was conducted: 1) methodological quality was assessed by following the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist; and 2) the quality of measurement properties was assessed according to Terwee's criteria. Results were integrated using best-evidence synthesis to look for recommendable instruments. RESULTS After screening 2,959 records, 74 instruments from 80 existing studies were included. The overall methodological quality of these studies was unsatisfactory, with reasons including but not limited to unknown missing data, inadequate sample sizes, and vague hypotheses. Content validity, cross-cultural validity, and criterion validity were either unreported or rated negatively in most studies. Based on best-evidence synthesis, three instruments were recommended: Hisar's instrument for nursing students, Nurse Practitioners' Roles and Competencies Scale, and Perceived Faculty Competency Inventory. CONCLUSION Although instruments measuring medical professionalism are diverse, only a limited number of studies were methodologically sound.
Future studies should give priority to systematically improving the performance of existing instruments and to longitudinal studies.
Affiliation(s)
- Honghe Li
- Research Center of Medical Education, China Medical University, Shenyang, Liaoning, China
- Ning Ding
- Research Center of Medical Education, China Medical University, Shenyang, Liaoning, China
- Yuanyuan Zhang
- School of Public Health, Dalian Medical University, Dalian, Liaoning, China
- Yang Liu
- School of Public Health, China Medical University, Shenyang, Liaoning, China
- Deliang Wen
- Research Center of Medical Education, China Medical University, Shenyang, Liaoning, China

29
Burt J, Campbell J, Abel G, Aboulghate A, Ahmed F, Asprey A, Barry H, Beckwith J, Benson J, Boiko O, Bower P, Calitri R, Carter M, Davey A, Elliott MN, Elmore N, Farrington C, Haque HW, Henley W, Lattimer V, Llanwarne N, Lloyd C, Lyratzopoulos G, Maramba I, Mounce L, Newbould J, Paddison C, Parker R, Richards S, Roberts M, Setodji C, Silverman J, Warren F, Wilson E, Wright C, Roland M. Improving patient experience in primary care: a multimethod programme of research on the measurement and improvement of patient experience. PROGRAMME GRANTS FOR APPLIED RESEARCH 2017. [DOI: 10.3310/pgfar05090] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
Abstract
Background There has been an increased focus on improving quality of care within the NHS in the last 15 years; as part of this, there has been an emphasis on the importance of patient feedback within policy, through National Service Frameworks and the Quality and Outcomes Framework. The development and administration of large-scale national patient surveys to gather representative data on patient experience, such as the national GP Patient Survey in primary care, has been one such initiative. However, it remains unclear how the survey is used by patients and what impact the data may have on practice. Objectives Our research aimed to gain insight into how different patients use surveys to record experiences of general practice; how primary care staff respond to feedback; and how to engage primary care staff in responding to feedback. Methods We used methods including quantitative survey analyses, focus groups, interviews, an exploratory trial and an experimental vignette study. Results (1) Understanding patient experience data. Patients readily criticised their care when reviewing consultations on video, although they were reluctant to be critical when completing questionnaires. When trained raters judged communication during a consultation to be poor, a substantial proportion of patients rated the doctor as ‘good’ or ‘very good’. Absolute scores on questionnaire surveys should be treated with caution; they may present an overoptimistic view of general practitioner (GP) care. However, relative rankings to identify GPs who are better or poorer at communicating may be acceptable, as long as statistically reliable figures are obtained. Most patients have a particular GP whom they prefer to see; however, up to 40% of people who have such a preference are unable regularly to see the doctor of their choice. Users of out-of-hours care reported worse experiences when the service was run by a commercial provider than when it was run by a not-for-profit or NHS provider.
(2) Understanding patient experience in minority ethnic groups. Asian respondents to the GP Patient Survey tend to be registered with practices with generally low scores, explaining about half of the difference between the poorer reported experiences of South Asian patients and those of white British patients. We found no evidence that South Asian patients used response scales differently. When viewing the same consultation in an experimental vignette study, South Asian respondents gave higher scores than white British respondents. This suggests that the low scores given by South Asian respondents in patient experience surveys reflect care that is genuinely worse than that experienced by their white British counterparts. We also found that service users of mixed or Asian ethnicity reported lower scores than white respondents when rating out-of-hours services. (3) Using patient experience data. We found that measuring GP–patient communication at practice level masks variation between how good individual doctors are within a practice. In general practices and in out-of-hours centres, staff were sceptical about the value of patient surveys and their ability to support service reconfiguration and quality improvement. In both settings, surveys were deemed necessary but not sufficient. Staff expressed a preference for free-text comments, as these provided more tangible, actionable data. An exploratory trial of real-time feedback (RTF) found that only 2.5% of consulting patients left feedback using touch screens in the waiting room, although more did so when reminded by staff. The representativeness of responding patients remains to be evaluated. Staff were broadly positive about using RTF, and practices valued the ability to include their own questions.
Staff benefited from having a facilitated session and protected time to discuss patient feedback. CONCLUSIONS Our findings demonstrate the importance of patient experience feedback as a means of informing NHS care, and confirm that surveys are a valuable resource for monitoring national trends in quality of care. However, surveys may be insufficient in themselves to fully capture patient feedback, and in practice GPs rarely used the results of surveys for quality improvement. The impact of patient surveys appears to be limited, and effort should be invested in making the results of surveys more meaningful to practice staff. There were several limitations to this programme of research. Practice recruitment for our in-hours studies took place in two broad geographical areas, which may not be fully representative of practices nationally. Our focus was on patient experience in primary care; secondary care settings may face different challenges in implementing quality improvement initiatives driven by patient feedback. Recommendations for future research include consideration of alternative feedback methods to better support patients in identifying poor care; investigation of the factors driving poorer experiences of communication in South Asian patient groups; further investigation of how best to deliver patient feedback to clinicians so as to engage them and foster quality improvement; and further research to support the development and implementation of interventions aiming to improve care when deficiencies in patient experience are identified. FUNDING The National Institute for Health Research Programme Grants for Applied Research programme.
Affiliation(s)
- Jenni Burt
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- Gary Abel
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- University of Exeter Medical School, Exeter, UK
- Ahmed Aboulghate
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- Faraz Ahmed
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- Julia Beckwith
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- John Benson
- Primary Care Unit, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- Olga Boiko
- University of Exeter Medical School, Exeter, UK
- Pete Bower
- National Institute for Health Research (NIHR) School for Primary Care Research, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK
- Mary Carter
- University of Exeter Medical School, Exeter, UK
- Natasha Elmore
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- Conor Farrington
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- Hena Wali Haque
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- Val Lattimer
- School of Health Sciences, University of East Anglia, Norwich, UK
- Nadia Llanwarne
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- Cathy Lloyd
- Faculty of Health & Social Care, The Open University, Milton Keynes, UK
- Georgios Lyratzopoulos
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- Luke Mounce
- University of Exeter Medical School, Exeter, UK
- Jenny Newbould
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- Charlotte Paddison
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- Richard Parker
- Primary Care Unit, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- Ed Wilson
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
- Martin Roland
- Cambridge Centre for Health Services Research, Institute of Public Health, University of Cambridge School of Clinical Medicine, Cambridge, UK
30
Gibbons C, Richards S, Valderas JM, Campbell J. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy. J Med Internet Res 2017; 19:e65. [PMID: 28298265 PMCID: PMC5371715 DOI: 10.2196/jmir.6533] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2016] [Revised: 09/30/2016] [Accepted: 11/29/2016] [Indexed: 12/18/2022] Open
Abstract
Background Machine learning techniques may be an effective and efficient way to classify open-text reports on doctors' activity for the purposes of quality assurance, safety, and continuing professional development. Objective The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors' professional performance in the United Kingdom. Methods We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors, collected from a survey of clinicians' colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Results Individual algorithm performance was high (F score range .68 to .83). Interrater agreement between the algorithms and the human coder was highest for the "popular" (recall=.97), "innovator" (recall=.98), and "respected" (recall=.87) codes and was lower for the "interpersonal" (recall=.80) and "professional" (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as "respected," "professional," and "interpersonal" related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Scores did not vary between doctors who were rated as popular or innovative and those who were not rated at all (P>.05). Conclusions Machine learning algorithms can classify open-text feedback of doctor performance into multiple themes derived by human raters with high accuracy. Colleague open-text comments that signal respect, professionalism, and interpersonal skill may be key indicators of doctors' performance.
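As a hedged illustration of the general approach this abstract describes (not the authors' actual pipeline), several supervised classifiers can be trained on human-coded comments and combined into a majority-vote ensemble, with cross-validated F scores as the performance measure. Everything below (comments, theme labels, model choices, fold count) is invented for the sketch; the GMC-CQ corpus and the study's eight algorithms are not reproduced.

```python
# Sketch: supervised classification of open-text feedback into themes,
# with an ensemble of differently biased classifiers (hypothetical data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Invented stand-in comments and theme codes (repeated so each
# cross-validation fold contains every class).
comments = [
    "always respectful towards patients and staff",
    "introduced a new triage process for the ward",
    "well liked by the whole team",
    "communicates clearly with relatives",
    "pioneered an audit tool for handovers",
    "treats colleagues with courtesy",
] * 5
labels = ["respect", "innovation", "popularity",
          "interpersonal", "innovation", "respect"] * 5

# TF-IDF features feeding a hard-voting (majority) ensemble.
ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier([
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", MultinomialNB()),
        ("svm", LinearSVC()),
    ], voting="hard"),
)

# Macro-averaged F1 over 5 folds (the paper reports 10-fold results).
scores = cross_val_score(ensemble, comments, labels,
                         cv=5, scoring="f1_macro")
print(scores.mean())
```

Hard voting is used because LinearSVC does not expose class probabilities; with probabilistic members only, soft voting would be the usual alternative.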
Affiliation(s)
- Chris Gibbons
- Centre for Health Services Research, University of Cambridge, Cambridge, United Kingdom; The Psychometrics Centre, University of Cambridge, Cambridge, United Kingdom
- Suzanne Richards
- Leeds Institute for Health Sciences, University of Leeds, Leeds, United Kingdom
- John Campbell
- Primary Care Research Group, University of Exeter, Exeter, United Kingdom
31
van der Meulen MW, Boerebach BCM, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Arah OA, Lombarts KMJMH. Validation of the INCEPT: A Multisource Feedback Tool for Capturing Different Perspectives on Physicians' Professional Performance. THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS 2017; 37:9-18. [PMID: 28212117 DOI: 10.1097/ceh.0000000000000143] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
INTRODUCTION Multisource feedback (MSF) instruments are used to provide data on physicians' performance from multiple perspectives, and must do so feasibly, reliably and validly. The "INviting Co-workers to Evaluate Physicians Tool" (INCEPT) is a multisource feedback instrument used to evaluate physicians' professional performance as perceived by peers, residents, and coworkers. In this study, we report on the validity, reliability, and feasibility of the INCEPT. METHODS The performance of 218 physicians was assessed by 597 peers, 344 residents, and 822 coworkers. The psychometric qualities and feasibility of the INCEPT were investigated using explorative and confirmatory factor analyses, multilevel regression analyses between narrative and numerical feedback, item-total correlations, interscale correlations, Cronbach's α and generalizability analyses. RESULTS For all respondent groups, three factors were identified, although they were constituted slightly differently for each group: "professional attitude," "patient-centeredness," and "organization and (self-)management." Internal consistency was high for all constructs (Cronbach's α ≥ 0.84 and item-total correlations ≥ 0.52). Confirmatory factor analyses indicated acceptable to good fit. Further validity evidence was given by the associations between narrative and numerical feedback. For reliable total INCEPT scores, three peer, two resident and three coworker evaluations were needed; for subscale scores, evaluations of three peers, three residents and three to four coworkers were sufficient. DISCUSSION The INCEPT instrument provides physicians with performance feedback in a valid and reliable way. The number of evaluations needed to establish reliable scores is achievable in a regular clinical department. When interpreting feedback, physicians should consider that respondent groups' perceptions differ, as indicated by the different item clustering per performance factor.
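For reference, the internal-consistency statistic this abstract reports, Cronbach's α, is computed from a respondents-by-items rating matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with invented 5-point ratings (not the INCEPT data):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of ratings.

    alpha = k/(k-1) * (1 - sum(item variances) / var(total score)),
    using sample variances (ddof=1).
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of row totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 6 respondents on 4 questionnaire items.
ratings = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(ratings), 2))
```

Because the invented items are strongly correlated, the resulting α is high, which mirrors why thresholds such as α ≥ 0.84 are read as good internal consistency.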
Affiliation(s)
- Mirja W van der Meulen
- Ms. van der Meulen: PhD Candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands, and Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands. Dr. Boerebach: Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands. Dr. Smirnova: PhD Candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands, and Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands. Dr. Heeneman: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands. Dr. oude Egbrink: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands. Dr. van der Vleuten: Professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, the Netherlands. Dr. Arah: Professor, Department of Epidemiology, Fielding School of Public Health, University of California, Los Angeles (UCLA), Los Angeles, CA, and UCLA Center for Health Policy Research, Los Angeles, CA. Dr. Lombarts: Professor, Professional Performance Research Group, Center for Evidence-Based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
32
Heneghan M, Chaplin R. Colleague and patient appraisal of consultant psychiatrists and the effects of patient detention on appraisal scores. BJPsych Bull 2016; 40:181-4. [PMID: 27512584 PMCID: PMC4967774 DOI: 10.1192/pb.bp.115.051334] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
Aims and method This paper reviews colleague and patient feedback from the 10-year period of operation of the Royal College of Psychiatrists' 360-degree appraisal system, specifically to: (1) examine the overall distribution of ratings; (2) examine the effect of working primarily with detained patients on patient feedback, represented by forensic psychiatrists; and (3) look for a relationship between colleague and patient ratings. Results Data were analysed for 977 participating psychiatrists. Both colleagues and patients rated psychiatrists with high overall scores. Less than 1% were identified as low scorers, and there was no relationship between those identified by colleagues and those identified by patients. Colleague and patient feedback scores varied little between subspecialties, including forensic consultants. Clinical implications Psychiatrists in all subspecialties obtained high scores from colleagues and staff. Working with detained patients appeared to have little effect on patient ratings.
Affiliation(s)
- Miranda Heneghan
- Royal College of Psychiatrists' Centre for Quality Improvement, London, UK
- Robert Chaplin
- Royal College of Psychiatrists' Centre for Quality Improvement, London, UK
33
Abstract
The number of data-based research articles focusing on patient sociodemographic profiling and experience with healthcare practices is still relatively small. One of the reasons for this relative lack of research is that categorizing patients into different demographic groups can lead to significant reductions in sample numbers for homogeneous subgroups. The aim of this article is to identify problems and issues when dealing with big data that contains information at two levels: patient experience of their general practice, and scores received by practices. The Practice Accreditation and Improvement Survey (PAIS) consisting of 27 five-point Likert items and 11 sociodemographic questions is a Royal Australian College of General Practitioners (RACGP)-endorsed instrument for seeking patient views as part of the accreditation of Australian general practices. The data were collected during the 3-year period May 2011-July 2014, during which time PAIS was completed for 3734 individual general practices throughout Australia involving 312,334 anonymous patients. This represents over 60% of practices in Australia, and ∼75% of practices that undergo voluntary accreditation. The sampling method for each general practice was convenience sampling. The results of our analysis show how sociodemographic profiles of Australian patients can affect their ratings of practices and also how the location of the practice (State/Territory, remote access area) can affect patient experience. These preliminary findings can act as an initial set of results against which future studies in patient experience trends can be developed and measured in Australia. Also, the methods used in this article provide a methodological framework for future patient experience researchers to use when dealing with data that contain information at two levels, such as the patient and practice. 
Finally, the outcomes demonstrate that different subgroups can experience healthcare provision differently, especially indigenous patients and young patients. The implications of these findings for healthcare policy and priorities will need to be further investigated.
Affiliation(s)
- Ajit Narayanan
- School of Computer and Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand
- Michael Greco
- School of Medicine, Griffith University, Brisbane, Australia
34
Yousuf Guraya S. Workplace-based Assessment; Applications and Educational Impact. Malays J Med Sci 2015; 22:5-10. [PMID: 28223879 PMCID: PMC5295751] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2014] [Accepted: 11/06/2015] [Indexed: 06/06/2023] Open
Abstract
Workplace based assessment (WPBA) refers to a group of assessment modalities that evaluate trainees' performance in clinical settings. The hallmark of WPBA is observation of the trainee's performance in the real workplace environment, along with relevant feedback, thus fostering reflective practice. WPBA consists of observation of clinical performance (mini-clinical evaluation exercise, direct observation of procedural skills), discussion of clinical cases (case based discussion), and feedback from peers, coworkers, and patients (multisource feedback). This literature review was conducted on the databases of MEDLINE, EMBASE, CINAHL, and The Cochrane Library. Data were retrieved by connecting Medical Subject Headings (MeSH) keywords: 'workplace based assessment' AND 'mini-clinical evaluation exercise' AND 'direct observation of procedural skills' AND 'case based discussion' AND 'multi-source feedback'. Additional studies were searched in the reference lists of all included articles. As WPBA gains popularity, there is a growing need for continuing faculty development and greater evidence regarding the validity and reliability of these instruments, which would allow academia to embed this strategy in existing curricula. As of today, there are conflicting reports about the educational impact of WPBA in terms of its validity and reliability. This review draws on the spectrum of WPBA tools, their designs and applications, and the evidence on the educational impact of this emerging assessment strategy in medical education. Finally, the study reports some educational impact of WPBA on learning and emphasises the need for more empirical research before endorsing its application worldwide.
Affiliation(s)
- Salman Yousuf Guraya
- Correspondence: Professor Salman Yousuf Guraya, FRCS, MMedEd (Dundee), College of Medicine, Taibah University, Tareeq Jamiat P.O Box 30054, Almadinah Almunawwarah, Kingdom of Saudi Arabia, Tel: +966 14 846 0008, Fax: +966 14 847 1403,
35
Young ME, Cruess SR, Cruess RL, Steinert Y. The Professionalism Assessment of Clinical Teachers (PACT): the reliability and validity of a novel tool to evaluate professional and clinical teaching behaviors. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2014; 19:99-113. [PMID: 23754583 DOI: 10.1007/s10459-013-9466-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/27/2012] [Accepted: 05/28/2013] [Indexed: 06/02/2023]
Abstract
Physicians function as clinicians, teachers, and role models within the clinical environment. Negative learning environments have been shown to be due to many factors, including the presence of unprofessional behaviors among clinical teachers. Reliable and valid assessments of clinical teacher performance, including professional behaviors, may provide a foundation for evidence-based feedback to clinical teachers, enable targeted remediation or recognition, and help to improve the learning environment. However, few tools exist for the evaluation of clinical teachers that focus on both professional and clinical teaching behaviors. The Professionalism Assessment of Clinical Teachers (PACT) was developed and implemented at one Canadian institution and was assessed for evidence of reliability and validity. Following each clerkship rotation, students in the 2009-2010 third-year undergraduate clerkship cohort (n = 178) anonymously evaluated a minimum of two clinical teachers using the PACT. In total, 4715 forms on 567 faculty members were completed. Reliability, validity, and free-text comments (present in 45% of the forms) were examined. An average of 8.6 PACT forms were completed per faculty member (range 1-60), with a reliability of 0.31 for 2.9 forms (harmonic mean); 12 forms were necessary for a reliability of 0.65. Global evaluations of teachers aligned with ratings of free-text comments (r = 0.77, p < 0.001). Comment length was negatively related to overall rating (r = -0.19, p < 0.001). Mean performance was negatively related to variability of performance (r = -0.72, p < 0.001), although this may reflect a ceiling effect. Most faculty members were rated highly; however, 'provided constructive feedback' was the least well-rated item. Respectful interaction with students appeared to be the most influential item in the global rating of faculty performance. The PACT is a moderately reliable tool for the assessment of professional behaviors of clinical teachers, with evidence supporting its validity.
Affiliation(s)
- Meredith E Young
- Department of Medicine, Centre for Medical Education, Faculty of Medicine, McGill University, 1110 Pine Ave West, Montreal, QC, H3A 1A3, Canada,
36
Donnon T, Al Ansari A, Al Alawi S, Violato C. The reliability, validity, and feasibility of multisource feedback physician assessment: a systematic review. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2014; 89:511-6. [PMID: 24448051 DOI: 10.1097/acm.0000000000000147] [Citation(s) in RCA: 122] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
PURPOSE The use of multisource feedback (MSF) or 360-degree evaluation has become a recognized method of assessing physician performance in practice. The purpose of the present systematic review was to investigate the reliability, generalizability, validity, and feasibility of MSF for the assessment of physicians. METHOD The authors searched the EMBASE, PsycINFO, MEDLINE, PubMed, and CINAHL databases for peer-reviewed, English-language articles published from 1975 to January 2013. Studies were included if they met the following inclusion criteria: used one or more MSF instruments to assess physician performance in practice; reported psychometric evidence for the instrument(s) in the form of reliability, generalizability coefficients, and construct or criterion-related validity; and provided information regarding the administration or feasibility of the process of collecting the feedback data. RESULTS Of the 96 full-text articles assessed for eligibility, 43 articles were included. The use of MSF has been shown to be an effective method for providing feedback to physicians from a multitude of specialties about their clinical and nonclinical (i.e., professionalism, communication, interpersonal relationship, management) performance. In general, assessment of physician performance was based on the completion of the MSF instruments by 8 medical colleagues, 8 coworkers, and 25 patients to achieve adequate reliability and generalizability coefficients of α ≥ 0.90 and Ep ≥ 0.80, respectively. CONCLUSIONS The use of MSF employing medical colleagues, coworkers, and patients as a method to assess physicians in practice has been shown to have high reliability, validity, and feasibility.
Affiliation(s)
- Tyrone Donnon
- Dr. Donnon is associate professor, Medical Education and Research Unit, Department of Community Health Sciences, Faculty of Medicine, University of Calgary, Calgary, Alberta, Canada. Dr. Al Ansari is director of training and development, Department of Medical Education, Faculty of Medicine, Bahrain Defense Force Hospital, Riffa, Bahrain. Dr. Al Alawi is a faculty member, Department of Family Medicine, Faculty of Medicine, Bahrain Defense Force Hospital, Riffa, Bahrain. Dr. Violato is professor, Medical Education and Research Unit, Department of Community Health Sciences, Faculty of Medicine, University of Calgary, Calgary, Alberta, Canada
37
Challenges to the credibility of patient feedback in primary healthcare settings: a qualitative study. Br J Gen Pract 2013; 63:e200-8. [PMID: 23561787 DOI: 10.3399/bjgp13x664252] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022] Open
Abstract
BACKGROUND The UK government has encouraged NHS services to obtain patient feedback to support the further development of patient-centred care. In 2009, the English GP Patient Survey included a sample of 5.5 million, but little is known about its potential utility in informing developments aimed at improving the quality of patients' experiences of primary care. AIM To investigate primary care providers' response to feedback on patient experience from a national survey. DESIGN AND SETTING Qualitative interview study in 10 general practices from four primary care trusts in England. METHOD Semi-structured interviews were conducted with GPs, practice nurses, and practice managers (n = 37). Transcripts were analysed thematically. RESULTS Although some participants reported making changes to their practice in response to the survey data, many expressed doubts about the credibility of the results. Key issues included: concerns about practical aspects of the survey, such as the response rate and representativeness of the sample; the view that it gave insufficient detail to facilitate change and failed to address some salient issues; and unease about the political influences underpinning its introduction and use. CONCLUSION Although, in general, primary care professionals have positive attitudes towards patient feedback, this study suggests a mismatch between the conventional demonstration of the objectivity of a questionnaire survey and the attitudes and experiences of those receiving the data. This is likely to prevent doctors from engaging constructively with the survey. These concerns may well militate against the potential of the survey to act as a simple means of capturing, and effectively using, feedback from patients.
38
Ingram JR, Anderson EJ, Pugsley L. Difficulty giving feedback on underperformance undermines the educational value of multi-source feedback. MEDICAL TEACHER 2013; 35:838-46. [PMID: 23808684 DOI: 10.3109/0142159x.2013.804910] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
BACKGROUND Multi-source feedback (MSF) was intended to provide both a summative and a formative assessment of doctors' attitudes and behaviours. AIMS To explore the influence of feedback quality and trainees' acceptance of the assessment on formative educational gains from MSF. METHODS Semi-structured interviews were conducted with a convenience sample of eight dermatology trainees, from an insider researcher position, following two pilot interviews. Interviews were manually transcribed and coded to permit template analysis, a subtype of thematic analysis. RESULTS The interview data indicated that MSF provides relatively little formative educational gain, largely because of a paucity of constructive feedback on sub-optimal performance. This was due to difficulties encountered by raters in giving developmental feedback (in particular, potential loss of anonymity) and by trainees selecting raters expected to give favourable comments. Dual use of MSF as a summative assessment in annual appraisals also inhibited educational gains by promoting a 'tick box' mentality in which trainees' need to pass their assessment superseded their desire for self-improvement. CONCLUSIONS A relative lack of developmental feedback limits the formative educational gains from MSF and could provide false reassurance that might reinforce negative behaviours.
39
Narayanan A, Greco M, Powell H, Coleman L. The Reliability of Big "Patient Satisfaction" Data. BIG DATA 2013; 1:141-51. [PMID: 27442196 DOI: 10.1089/big.2013.0021] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
Big data in healthcare can bring significant clinical and cost benefits. Of equal but often overlooked importance is the role of patient satisfaction data in improving the quality of healthcare service and treatment, where satisfaction is measured through feedback by patients on their meetings with medical specialists and experts. One of the major problems in analyzing patient feedback data is the nonstandard research designs often used for gathering such data: the designs can be uncrossed, unbalanced, and fully nested. Traditional measures of data reliability are more difficult to calculate for such data. Also, patient data can contain significant proportions of missing values that further complicate the calculation of reliability. This paper describes a reliability approach that is robust in the face of nonstandard research designs and missing values for use with large-scale patient survey data. The dataset contains nearly 85,000 patient responses to over 2,000 healthcare practitioners in five different subtypes over a 15-year period in the United Kingdom. Reliability measures are calculated to provide benchmarks involving minimum numbers of patients and practitioners for deeper drill-down analysis. The paper concludes with a demonstration of how regression models generated from big patient feedback data can be assessed in terms of reliability at the total data level as well as drill-down levels.
Affiliation(s)
- Ajit Narayanan
- School of Computing and Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand
- Michael Greco
- Client Focused Evaluation Programme (CFEP) Australia, Everton Park, Queensland, Australia
- Helen Powell
- CFEP UK, Matford Business Park, Exeter, United Kingdom
- Louise Coleman
- CFEP UK, Matford Business Park, Exeter, United Kingdom
40
Schafheutle EI, Hassell K, Noyce PR. Ensuring continuing fitness to practice in the pharmacy workforce: Understanding the challenges of revalidation. Res Social Adm Pharm 2013; 9:199-214. [DOI: 10.1016/j.sapharm.2012.08.007] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2012] [Revised: 08/10/2012] [Accepted: 08/10/2012] [Indexed: 01/06/2023]
41
42
Archer J, de Bere SR. The United Kingdom's experience with and future plans for revalidation. THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS 2013; 33 Suppl 1:S48-S53. [PMID: 24347152 DOI: 10.1002/chp.21206] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Assuring fitness to practice for doctors internationally is increasingly complex. In the United Kingdom, the General Medical Council (GMC) has recently launched revalidation, which has been designed to bring all doctors into a governed environment. Since December 2012, all doctors who wish to practice are required to submit and reflect on supporting documentation against a framework of best practice, Good Medical Practice. These documents are brought together in an annual appraisal. Evidence of practice includes clinical governance activities such as significant events, complaints and audits, continuing professional development and feedback from colleagues and patients. Revalidation has been designed to support professionalism and to identify doctors in difficulty early, so as to support their remediation and assure patient safety. The appraiser decides annually if the doctor has met the standard, which is shared with the most senior doctor in the area, the responsible officer (RO). The RO's role is to make a recommendation for revalidation every 5 years for each doctor to the GMC. Revalidation is unique in that it is national, compulsory, involves all doctors regardless of position or training, and is linked to the potentially performance-moderating process of appraisal. However, it has a long and troubled history that is shaped by high-profile medical scandals and delays from the profession, the GMC, and the government. Revalidation has been complicated further by rhetoric around patient care and driving up standards but at the same time identifying poor performance. The GMC have responded by commissioning a national evaluation which is currently under development.
Affiliation(s)
- Julian Archer
- NIHR Career Development Fellow, Clinical Senior Lecturer and Director of the Collaboration for the Advancement of Medical Education Research & Assessment (CAMERA), Plymouth University Peninsula Schools of Medicine & Dentistry.
|
43
|
Lockyer J. Multisource feedback: can it meet criteria for good assessment? J Contin Educ Health Prof 2013; 33:89-98. [PMID: 23775909 DOI: 10.1002/chp.21171] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
INTRODUCTION High-quality instruments are required to assess and provide feedback to practicing physicians. Multisource feedback (MSF) uses questionnaires from colleagues, coworkers, and patients to provide data. It enables feedback in areas of increasing interest to the medical profession: communication, collaboration, professionalism, and interpersonal skills. The purpose of the study was to apply the 7 assessment criteria as a framework to examine the quality of MSF instruments used to assess practicing physicians. METHODS The criteria for assessment (validity, reproducibility, equivalence, feasibility, educational effect, catalytic effect, and acceptability) were examined for 3 sets of instruments, drawing on published data. RESULTS Three MSF instruments with a sufficient body of research for inclusion-the Canadian Physician Achievement Review instruments and the United Kingdom's GMC and CFEP360 instruments-were examined. There was evidence that MSF has been assessed against all criteria except educational effects, although variably for some of the instruments. The greatest emphasis was on validity, reproducibility, and feasibility for all of the instruments. Assessments of the catalytic effect were not available for 1 of the 2 UK instruments and minimally examined for the other. Data about acceptability are implicit in the UK instruments from their endorsement by the Royal College of General Practice and explicitly examined in the Canadian instruments. DISCUSSION The 7 criteria provided a useful framework to assess the quality of MSF instruments and enable an approach to analyzing gaps in instrument assessment. These criteria are likely to be helpful in assessing other instruments used in medical education.
Affiliation(s)
- Jocelyn Lockyer
- Department of Community Health Sciences, Faculty of Medicine, University of Calgary, Canada T2N 4Z6.
|
44
|
Wright C, Richards SH, Hill JJ, Roberts MJ, Norman GR, Greco M, Taylor MRS, Campbell JL. Multisource feedback in evaluating the performance of doctors: the example of the UK General Medical Council patient and colleague questionnaires. Acad Med 2012; 87:1668-78. [PMID: 23095930 DOI: 10.1097/acm.0b013e3182724cc0] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
PURPOSE Internationally, there is increasing interest in monitoring and evaluating doctors' professional practice. Multisource feedback (MSF) offers one way of collecting information about doctors' performance. The authors investigated the psychometric properties of two questionnaires developed for this purpose and explored the biases that may exist within data collected via such instruments. METHOD A cross-sectional study was conducted in 11 UK health care organizations during 2008-2011. Patients (n = 30,333) and colleagues (n = 17,012) rated the professional performance of 1,065 practicing doctors, using the General Medical Council Patient Questionnaire (PQ) and Colleague Questionnaire (CQ). The psychometric properties of the questionnaires were assessed, and regression modeling was used to explore factors that influenced patient and colleague responses on the core questionnaire items. RESULTS Although the questionnaires demonstrated satisfactory internal consistency, test-retest reliability, and convergent validity, patient and colleague ratings were highly skewed toward favorable impressions of doctor performance. At least 34 PQs and 15 CQs were required to achieve acceptable reliability (G > 0.70). Item ratings were influenced by characteristics of the patient and colleague respondents and the context in which their feedback was provided. CONCLUSIONS The PQ and CQ are acceptable for the provision of formative feedback on a doctor's professional practice within an appraisal process. However, biases identified in the questionnaire data suggest that caution is required when interpreting and acting on this type of information. MSF derived from these questionnaires should not be used in isolation to inform decisions about a doctor's fitness to practice medicine.
Affiliation(s)
- Christine Wright
- Primary Care Research Group, University of Exeter Medical School, Exeter, United Kingdom
|
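Rater counts like the figure in the entry above (at least 34 patient questionnaires and 15 colleague questionnaires for G > 0.70) are typically projected from single-rater reliability with the Spearman-Brown prophecy formula. A hedged sketch — the function names are illustrative, not from the paper:

```python
import math

def reliability_of_mean(r1, n):
    """Spearman-Brown: reliability of the mean of n independent ratings,
    given the reliability r1 of a single rating."""
    return n * r1 / (1 + (n - 1) * r1)

def raters_needed(r1, target=0.70):
    """Smallest n whose mean rating reaches the target reliability
    (the prophecy formula solved for n, rounded up)."""
    return math.ceil(target * (1 - r1) / (r1 * (1 - target)))
```

For example, a single-rater reliability of 0.5 requires 3 raters to exceed 0.70; the much larger patient-questionnaire counts reported above imply a far lower per-patient reliability, consistent with the strongly skewed ratings the study describes.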
45
|
Multisource feedback questionnaires in appraisal and for revalidation: a qualitative study in UK general practice. Br J Gen Pract 2012; 62:e314-21. [PMID: 22546590 DOI: 10.3399/bjgp12x641429] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022] Open
Abstract
BACKGROUND UK revalidation plans for doctors include obtaining multisource feedback from patient and colleague questionnaires as part of the supporting information for appraisal and revalidation. AIM To investigate GPs' and appraisers' views of using multisource feedback data in appraisal, and of the emerging links between multisource feedback, appraisal, and revalidation. DESIGN AND SETTING A qualitative study in UK general practice. METHOD In total, 12 GPs who had recently completed the General Medical Council multisource feedback questionnaires and 12 appraisers undertook a semi-structured, telephone interview. A thematic analysis was performed. RESULTS Participants supported multisource feedback for formative development, although most expressed concerns about some elements of its methodology (for example, 'self' selection of colleagues, or whether patients and colleagues can provide objective feedback). Some participants reported difficulties in understanding benchmark data and some were upset by their scores. Most accepted the links between appraisal and revalidation, and that multisource feedback could make a positive contribution. However, tensions between the formative processes of appraisal and the summative function of revalidation were identified. CONCLUSION Participants valued multisource feedback as part of formative assessment and saw a role for it in appraisal. However, concerns about some elements of multisource feedback methodology may undermine its credibility as a tool for identifying poor performance. Proposals linking multisource feedback, appraisal, and revalidation may limit the use of multisource feedback and appraisal for learning and development by some doctors. Careful consideration is required with respect to promoting the accuracy and credibility of such feedback processes so that their use for learning and development, and for revalidation, is maximised.
|
46
|
Abstract
BACKGROUND The clinical collaborations among hospitalist physicians create opportunities for peer evaluation. We conducted this study to generate validity evidence for a scale that allows for peer assessment of professional performance. METHODS All of the hospitalist physicians working for >1 year at our hospital were asked to assess each of their physician colleagues along eight domains and name three colleagues whom they would choose to care for a loved one needing hospitalization. A mean composite clinical performance score was generated for each provider. Statistical analyses using the Pearson coefficient were performed. RESULTS The 22 hospitalist physician participants were confident in their ability to assess their peers' clinical skills. There were strong correlations between the domains of clinical excellence (r > 0.5, P < 0.05). Being selected as a doctor whom colleagues would choose to take care of their loved ones was highly correlated with high scores in the domains of humanism, diagnostic acumen, signouts/handoffs, and passion for clinical medicine, and higher composite clinical performance scores (all r > 0.5, P < 0.05). High scores on the Press Ganey questions correlated with peer assessment of humanism (r = .78, P = 0.06). CONCLUSIONS The correlation among scale items, the composite clinical performance score, and the variable "a doctor whom you would choose to care for a loved one" provides validity evidence to our assessment scale. Such measurements may allow hospitalist groups to identify top performers who could be recognized, rewarded, and held up as role models and weaker performers who may need focused training or remediation.
|
47
|
|
48
|
Murphy DJ, Guthrie B, Sullivan FM, Mercer SW, Russell A, Bruce DA. Insightful practice: a reliable measure for medical revalidation. BMJ Qual Saf 2012; 21:649-56. [PMID: 22653078 PMCID: PMC3404544 DOI: 10.1136/bmjqs-2011-000429] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
BACKGROUND Medical revalidation decisions need to be reliable if they are to reassure on the quality and safety of professional practice. This study tested an innovative method in which general practitioners (GPs) were assessed on their reflection and response to a set of externally specified feedback. SETTING AND PARTICIPANTS 60 GPs and 12 GP appraisers in the Tayside region of Scotland, UK. METHODS A feedback dataset was specified as (1) GP-specific data collected by GPs themselves (patient and colleague opinion; open book self-evaluated knowledge test; complaints) and (2) Externally collected practice-level data provided to GPs (clinical quality and prescribing safety). GPs' perceptions of whether the feedback covered UK General Medical Council specified attributes of a 'good doctor' were examined using a mapping exercise. GPs' professionalism was examined in terms of appraiser assessment of GPs' level of insightful practice, defined as: engagement with, insight into and appropriate action on feedback data. The reliability of assessment of insightful practice and subsequent recommendations on GPs' revalidation by face-to-face and anonymous assessors were investigated using Generalisability G-theory. MAIN OUTCOME MEASURES Coverage of General Medical Council attributes by specified feedback and reliability of assessor recommendations on doctors' suitability for revalidation. RESULTS Face-to-face assessment proved unreliable. Anonymous global assessment by three appraisers of insightful practice was highly reliable (G=0.85), as were revalidation decisions using four anonymous assessors (G=0.83). CONCLUSIONS Unlike face-to-face appraisal, anonymous assessment of insightful practice offers a valid and reliable method to decide GP revalidation. Further validity studies are needed.
Affiliation(s)
- Douglas J Murphy
- Quality, Safety and Informatics Research Group, University of Dundee, Dundee, UK.
|
49
|
Overeem K, Wollersheim HC, Arah OA, Cruijsberg JK, Grol RPTM, Lombarts KMJMH. Evaluation of physicians' professional performance: an iterative development and validation study of multisource feedback instruments. BMC Health Serv Res 2012; 12:80. [PMID: 22448816 PMCID: PMC3349515 DOI: 10.1186/1472-6963-12-80] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2011] [Accepted: 03/26/2012] [Indexed: 11/12/2022] Open
Abstract
Background There is a global need to assess physicians' professional performance in actual clinical practice. Valid and reliable instruments are necessary to support these efforts. This study focuses on the reliability and validity, the influences of some sociodemographic biasing factors, associations between self and other evaluations, and the number of evaluations needed for reliable assessment of a physician based on the three instruments used for the multisource assessment of physicians' professional performance in the Netherlands. Methods This observational validation study of three instruments underlying multisource feedback (MSF) was set in 26 non-academic hospitals in the Netherlands. In total, 146 hospital-based physicians took part in the study. Each physician's professional performance was assessed by peers (physician colleagues), co-workers (including nurses, secretary assistants and other healthcare professionals) and patients. Physicians also completed a self-evaluation. Ratings of 864 peers, 894 co-workers and 1960 patients on MSF were available. We used principal components analysis and methods of classical test theory to evaluate the factor structure, reliability and validity of instruments. We used Pearson's correlation coefficient and linear mixed models to address other objectives. Results The peer, co-worker and patient instruments respectively had six factors, three factors and one factor with high internal consistencies (Cronbach's alpha 0.95 - 0.96). It appeared that only 2 percent of variance in the mean ratings could be attributed to biasing factors. Self-ratings were not correlated with peer, co-worker or patient ratings. However, ratings of peers, co-workers and patients were correlated. Five peer evaluations, five co-worker evaluations and 11 patient evaluations are required to achieve reliable results (reliability coefficient ≥ 0.70). 
Conclusions The study demonstrated that the three MSF instruments produced reliable and valid data for evaluating physicians' professional performance in the Netherlands. Scores from peers, co-workers and patients were not correlated with self-evaluations. Future research should examine improvement of performance when using MSF.
Affiliation(s)
- Karlijn Overeem
- IQ healthcare, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands.
|
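The internal consistencies reported in the entry above (Cronbach's alpha 0.95-0.96) follow from the standard alpha formula over questionnaire items. A minimal implementation, assuming complete and aligned responses — the study's treatment of factor structure and missingness is considerably richer than this:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale.
    item_scores: list of items, each a list of respondents' scores,
    aligned so that index i is the same respondent in every item."""
    k = len(item_scores)                 # number of items
    n = len(item_scores[0])              # number of respondents

    def var(xs):                         # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_var = sum(var(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_var / var(totals))
```

Note that this is raw alpha: even perfectly correlated items with unequal variances yield a value slightly below 1, which is one reason multiple assessment methods are recommended alongside internal consistency.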
50
|
Patient feedback in revalidation: an exploratory study using the consultation satisfaction questionnaire. Br J Gen Pract 2012; 61:e638-44. [PMID: 22152843 DOI: 10.3399/bjgp11x601343] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022] Open
Abstract
BACKGROUND Revalidation is the UK process for the review of doctors to ensure they are fit to practise. Revalidation will include patient feedback. AIM To investigate the role of patient feedback on GPs' consultations in revalidation. DESIGN AND SETTING Cross-sectional survey of patients consulting 171 GPs. METHOD A total of 6433 patients aged 16 years or over completed the consultation satisfaction questionnaire (CSQ). Generalisability analysis was undertaken, scale scores calculated, and outliers identified using two and three standard deviations from the mean as control limits. Comments made by patients were categorised into positive, neutral, or negative. RESULTS After averaging each scale for each doctor, mean scores (standard deviation), out of a possible score of 100, were: general satisfaction 78.1 (7.2); professional care 82.1 (6.1); relationship 71.2 (7.1); perceived time 65.7 (7.6). A D-study (which enables estimation of the reliability from 0-1 of the CSQ scores for different numbers of responders for each doctor) indicated that ratings by 19 patients would achieve a generalisability coefficient of 0.80 for the combined score. Fifteen GPs had one or more scale scores below two standard deviations of the mean. Comments were more often negative for GPs with scores below two standard deviations of the mean. CONCLUSION Most patients of most GPs are satisfied with their experience of consultations, and ways to make patient feedback formative for these doctors are required. For a few GPs, most patients report some dissatisfaction. Patient feedback may identify doctors who need educational support and possibly remediation, but agreed questionnaire score thresholds are required, and agreement is needed on the weight to be attached to patient experience in comparison with other aspects of performance.
|
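A D-study like the one in the entry above (19 CSQ responders for a generalisability coefficient of 0.80) projects reliability from estimated variance components. The sketch below assumes a one-facet design with patients nested in doctors; the variance components in the usage note are invented placeholders, not the study's estimates:

```python
def d_study_g(var_doctor, var_residual, n_patients):
    """Projected generalisability coefficient for the mean of
    n_patients ratings in a one-facet nested design: averaging over
    more patients shrinks the residual variance by a factor of n."""
    return var_doctor / (var_doctor + var_residual / n_patients)

def patients_for_target(var_doctor, var_residual, target=0.80):
    """Smallest number of responders whose mean rating reaches the
    target generalisability coefficient."""
    n = 1
    while d_study_g(var_doctor, var_residual, n) < target:
        n += 1
    return n
```

For instance, with a doctor variance of 1 and residual variance of 4, a single rating has G = 0.2, and 16 responders are needed to reach G = 0.80; real CSQ components would of course give different counts.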