1
Trisukhon K, Thammasitboon S, Vaewpanich J, Petrescu M, Punyoo J, Jongaramraung J, Pakakasama S, Balmer DF. Workplace affordances and learning engagement in a Thai paediatric intensive care unit. Clinical Teacher 2024:e13821. [PMID: 39435900] [DOI: 10.1111/tct.13821] [Received: 10/15/2023] [Accepted: 09/28/2024] [Indexed: 10/23/2024]
Abstract
BACKGROUND Workplace learning in critical care settings is complex and challenging. Research has explored learner-, teacher-, and context-related factors that influence medical residents' engagement in critical care workplaces in Western but not in non-Western cultures. This limits our understanding of workplace learning globally and how we can better support resident learning in diverse cultures. OBJECTIVE To explore how paediatric residents engage in workplace learning in a Thai Paediatric Intensive Care Unit (PICU) and how this culturally situated workplace shapes their learning. METHODS In this qualitative study, we recruited paediatric residents (n = 16) from a tertiary care hospital in Thailand for semi-structured interviews. We used reflexive thematic analysis to describe, analyse and interpret residents' experiences of workplace learning, and to capitalise on our own experience as an analytic resource. RESULTS We constructed three themes to represent participants' narratives: PICU cases and context as dynamic affordances; impact of psychological safety; and the role of attending physicians. While Thai PICU cases and context could afford participation and thus learning, Thailand's collectivist culture, which prioritises group needs over individual needs, contributed to a sense of psychological safety within culturally endorsed professional and social hierarchies and set the stage for workplace learning. Despite their higher status in these hierarchies, attending physicians facilitated resident learning by fostering open dialogue, joint problem-solving and a low-stress atmosphere. CONCLUSIONS Workplace learning in a Thai PICU, while challenging, is uniquely facilitated by Thailand's collectivist culture, which fosters psychological safety; attending physicians' invitation to engage in, and learn from, the workplace optimises learning.
Affiliation(s)
- Kanaporn Trisukhon
- Division of Pediatric Critical Care, Department of Pediatrics, Faculty of Medicine, Ramathibodi Hospital, Mahidol University, Thailand
- Satid Thammasitboon
- Division of Pediatric Critical Care, Baylor College of Medicine, Houston, Texas, USA
- Jarin Vaewpanich
- Division of Pediatric Critical Care, Department of Pediatrics, Faculty of Medicine, Ramathibodi Hospital, Mahidol University, Thailand
- Matei Petrescu
- Pediatric Critical Care, Christus Children's Hospital, Baylor College of Medicine, San Antonio, Texas, USA
- Jiraporn Punyoo
- Division of Pediatric Nursing, Ramathibodi School of Nursing, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Thailand
- Jongjai Jongaramraung
- Division of Pediatric Nursing, Ramathibodi School of Nursing, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Thailand
- Samart Pakakasama
- Division of Hematology-Oncology, Department of Pediatrics, Faculty of Medicine, Ramathibodi Hospital, Mahidol University, Thailand
- Dorene F Balmer
- Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
2
Soghikian S, Chipman M, Holmes J, Calhoun AW, Mallory LA. Assessing Team Performance in a Longitudinal Neonatal Resuscitation Simulation Training Program: Comparing Validity Evidence to Select the Best Tool. Cureus 2024; 16:e68810. [PMID: 39371693] [PMCID: PMC11456317] [DOI: 10.7759/cureus.68810] [Accepted: 09/03/2024] [Indexed: 10/08/2024]
Abstract
Introduction Neonatal resuscitation is a high-acuity, low-occurrence event that requires ongoing practice by interprofessional teams to maintain proficiency. Simulation provides an ideal platform for team training and evaluation of team performance. Our simulation center supports a longitudinal in situ simulation training program for delivery room teams. In addition to adherence to the Neonatal Resuscitation Program (NRP) standards, team performance assessment is an essential component of program evaluation and participant feedback. Multiple published teamwork assessment tools exist. Our objective was to select the tool with the best validity evidence for our program's needs. Methods We used Messick's framework to assess the validity evidence for potential teamwork assessment tools. Four possible tools were identified from the literature: the Mayo High Performance Teamwork Scale (Mayo), Team Performance Observation Tool (TPOT), Clinical Teamwork Scale (CTS), and Team Emergency Assessment Measure (TEAM). Relevant context included team versus individual focus, external evaluator versus self-evaluation, and ease of use (which included efficiency, clarity of interpretation, and overall assessment). Three simulation experts identified consensus anchors for each tool and independently reviewed and scored 10 pre-recorded neonatal resuscitation simulations. Raters assigned each tool a rating according to efficiency, ease of interpretation, and completeness of teamwork assessment. Interrater reliability (IRR) was calculated using intraclass correlation for each tool across the three raters. Average team performance scores for each tool were correlated with neonatal resuscitation adherence scores for each video using Spearman's rank coefficient. Results There was a range of IRR between the tools, with Mayo having the best (single-rater 0.55 and multi-rater 0.78). Each of the three raters rated Mayo highest for efficiency (mean 4.66 ± 0.577) and ease of use (4 ± 1). However, TPOT and CTS scored highest (mean 4.66 ± 0.577) for overall completeness of teamwork assessment. There was no significant correlation with NRP adherence scores for any teamwork tool. Conclusion Of the four tools assessed, Mayo demonstrated moderate IRR and scored highest for ease of use and efficiency, though not completeness of assessment. The remaining three tools had poor IRR, which is not an uncommon problem with teamwork assessment tools. Our process emphasizes the fact that assessment tool validity is contextual. Factors such as a relatively narrow (and high) performance distribution and clinical context may have contributed to reliability challenges for tools that offered a more complete teamwork assessment.
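The two analyses this abstract names, intraclass correlation across the three raters and Spearman's rank correlation of mean team scores with adherence scores, can be sketched in a few lines. This is a minimal illustration on made-up ratings (10 videos, 3 raters), not the study's data; the ICC formulas follow the standard Shrout-Fleiss two-way random-effects model with absolute agreement, and `spearmanr` is SciPy's rank correlation.

```python
import numpy as np
from scipy.stats import spearmanr

def icc_2way(scores):
    """ICC(2,1) and ICC(2,k): two-way random effects, absolute agreement.

    scores: (n_subjects, k_raters) array of ratings."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    ssr = k * ((row_means - grand) ** 2).sum()   # between-subjects (videos)
    ssc = n * ((col_means - grand) ** 2).sum()   # between-raters
    sst = ((x - grand) ** 2).sum()
    sse = sst - ssr - ssc                        # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    average = (msr - mse) / (msr + (msc - mse) / n)
    return single, average

# Hypothetical data: 10 recorded simulations, each scored by 3 raters.
rng = np.random.default_rng(0)
true_quality = rng.normal(20, 3, size=10)
ratings = true_quality[:, None] + rng.normal(0, 1.5, size=(10, 3))

icc_single, icc_multi = icc_2way(ratings)

# Correlate mean teamwork score per video with hypothetical NRP adherence.
adherence = true_quality + rng.normal(0, 2, size=10)
rho, p = spearmanr(ratings.mean(axis=1), adherence)
```

As the abstract notes, a narrow and high performance distribution shrinks the between-subjects variance (`msr` above), which drags the ICC down even when raters agree closely in absolute terms.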
Affiliation(s)
- Sierra Soghikian
- Maine Track Program, Tufts University School of Medicine, Boston, USA
- Micheline Chipman
- Medical Education and Simulation, Hannaford Center for Safety, Innovation and Simulation, MaineHealth Brighton Campus, Portland, USA
- Jeffrey Holmes
- Emergency Medicine, MaineHealth Maine Medical Center, Portland, USA
- Aaron W Calhoun
- Pediatrics and Critical Care Medicine, University of Louisville, Louisville, USA
- Leah A Mallory
- Medical Education and Simulation, Hannaford Center for Safety Innovation and Simulation, MaineHealth Brighton Campus, Portland, USA
- Pediatric Hospital Medicine, MaineHealth Barbara Bush Children's Hospital, Portland, USA
3
Dufayet L, Piot MA, Geoffroy PA, Oulès B, Petitjean-Brichant C, Peiffer-Smadja N, Bouzid D, Tran Dinh A, Mirault T, Faye A, Lemogne C, Ruszniewski P, Peyre H, Vodovar D. CARECOS study: Medical students' empathy as assessed with the CARE measure by examiners versus standardized patients during a formative Objective and Structured Clinical Examination (OSCE) station. Medical Teacher 2024; 46:1187-1195. [PMID: 38285021] [DOI: 10.1080/0142159x.2024.2306840] [Received: 02/10/2023] [Accepted: 01/15/2024] [Indexed: 01/30/2024]
Abstract
PURPOSE To assess the Consultation And Relational Empathy (CARE) measure as a tool for examiners to assess medical students' empathy during Objective and Structured Clinical Examinations (OSCEs), as the best tool for assessing empathy during OSCEs remains unknown. METHODS We first assessed the psychometric properties of the CARE measure, completed simultaneously by examiners and standardized patients (SPs: either teachers, SPteacher, or civil society members, SPcivil society), for each student at the end of an OSCE station. We then assessed the qualitative/quantitative agreement between examiners and SPs. RESULTS We included 129 students, distributed in eight groups, four groups for each SP type. The CARE measure showed satisfactory psychometric properties in the context of the study but moderate, and even poor, inter-rater reliability for some items. Considering paired observations, examiners scored lower than SPs (p < 0.001) regardless of the SP type. However, the difference in score was greater when the SP was an SPteacher rather than an SPcivil society (p < 0.01). CONCLUSION Despite acceptable psychometric properties, inter-rater reliability of the CARE measure between examiners and SPs was unsatisfactory. The choice of examiner as well as the type of SP seems critical to ensure a fair measure of empathy during OSCEs.
Affiliation(s)
- Laurene Dufayet
- UFR de médecine, Université Paris Cité, Paris, France
- Unité Médico-judiciaire, Hôtel-Dieu, AP-HP, Paris, France
- Centre Antipoison de Paris, Hôpital Fernand-Widal, AP-HP, Paris, France
- INSERM, UMRS-1144, Faculté de pharmacie, Paris, France
- Marie-Aude Piot
- UFR de médecine, Université Paris Cité, Paris, France
- Département de psychiatrie de l'enfant et de l'adolescent, Hôpital Necker, AP-HP, Paris, France
- INSERM, UMR 1018, Université Paris-Saclay, Villejuif cedex, France
- Pierre-Alexis Geoffroy
- UFR de médecine, Université Paris Cité, Paris, France
- Département de psychiatrie et d'addictologie, Hôpital Bichat-Claude Bernard, AP-HP, Paris, France
- Psychiatrie & Neurosciences, Hôpital Saint-Anne, GHU Paris, Paris, France
- Université de Paris, NeuroDiderot, Inserm, FHU I2-D2, Paris, France
- Bénédicte Oulès
- UFR de médecine, Université Paris Cité, Paris, France
- Service de dermatologie, Hôpital Saint-Louis, AP-HP, Paris, France
- Clara Petitjean-Brichant
- Département de psychiatrie et d'addictologie, Hôpital Bichat-Claude Bernard, AP-HP, Paris, France
- Nathan Peiffer-Smadja
- UFR de médecine, Université Paris Cité, Paris, France
- Service de maladies infectieuses et tropicales, Hôpital Bichat-Claude Bernard, AP-HP, Paris, France
- Université Paris Cité, INSERM UMR1137, IAME, Paris, France
- Donia Bouzid
- UFR de médecine, Université Paris Cité, Paris, France
- Université Paris Cité, INSERM UMR1137, IAME, Paris, France
- Service d'accueil des urgences, Hôpital Bichat-Claude Bernard, AP-HP, Paris, France
- Alexy Tran Dinh
- UFR de médecine, Université Paris Cité, Paris, France
- Département d'anesthésie-réanimation, Hôpital Bichat-Claude Bernard, AP-HP, Paris, France
- Tristan Mirault
- UFR de médecine, Université Paris Cité, Paris, France
- Service de médecine vasculaire, Hôpital Européen Georges Pompidou, Paris, France
- Albert Faye
- UFR de médecine, Université Paris Cité, Paris, France
- Service de Pédiatrie générale, Maladies infectieuses et Médecine interne, Hôpital Robert Debré, AP-HP, Paris, France
- Cédric Lemogne
- UFR de médecine, Université Paris Cité, Paris, France
- Service de Psychiatrie de l'adulte, AP-HP, Hôpital Hôtel-Dieu, Paris, France
- Center for Research in Epidemiology and StatisticS (CRESS), Université Paris Cité and Université Sorbonne Paris Nord, INSERM, INRAE, Paris, France
- Philippe Ruszniewski
- UFR de médecine, Université Paris Cité, Paris, France
- Service de gastro-entérologie et pancréatologie, Hôpital Beaujon AP-HP, Clichy, France
- Hugo Peyre
- UFR de médecine, Université Paris Cité, Paris, France
- Service de psychiatrie de l'enfant et de l'adolescent, Hôpital Robert Debré, APHP, Paris, France
- INSERM UMR 1141, Université Paris Cité, Paris, France
- Dominique Vodovar
- UFR de médecine, Université Paris Cité, Paris, France
- Centre Antipoison de Paris, Hôpital Fernand-Widal, AP-HP, Paris, France
- INSERM, UMRS-1144, Faculté de pharmacie, Paris, France
4
Blanchette P, Poitras ME, Lefebvre AA, St-Onge C. Making judgments based on reported observations of trainee performance: a scoping review in Health Professions Education. Canadian Medical Education Journal 2024; 15:63-75. [PMID: 39310309] [PMCID: PMC11415737] [DOI: 10.36834/cmej.75522] [Indexed: 09/25/2024]
Abstract
Background Educators now use reported observations when assessing trainees' performance. Unfortunately, they have little information about how to design and implement assessments based on reported observations. Objective The purpose of this scoping review was to map the literature on the use of reported observations in judging health professions education (HPE) trainees' performances. Methods Arksey and O'Malley's (2005) method was used with four databases (ERIC, CINAHL, MEDLINE, PsycINFO). Eligibility criteria for articles were: (1) documents in English or French, including primary data, from initial or professional training; (2) training in an HPE program; (3) workplace-based assessment; and (4) assessment based on reported observations. The inclusion/exclusion and data extraction steps were performed with an agreement rate > 90%. We developed a data extraction grid to chart the data. Descriptive analyses were used to summarize quantitative data, and the authors conducted thematic analysis for qualitative data. Results Based on 36 papers and 13 consultations, the team identified six steps characterizing trainee performance assessment based on reported observations in HPE: (1) making first contact, (2) observing and documenting the trainee performance, (3) collecting and completing assessment data, (4) aggregating assessment data, (5) inferring the level of competence, and (6) documenting and communicating the decision to the stakeholders. Discussion Mapping the design and implementation of assessment based on reported observations is a first step towards quality implementation, guiding the educators and administrators responsible for graduating competent professionals. Future research might focus on understanding the context beyond assessor cognition to ensure the quality of meta-assessors' decisions.
5
Westein MPD, Koster AS, Daelmans HEM, Bouvy ML, Kusurkar RA. How progress evaluations are used in postgraduate education with longitudinal supervisor-trainee relationships: a mixed method study. Advances in Health Sciences Education: Theory and Practice 2023; 28:205-222. [PMID: 36094680] [PMCID: PMC9992254] [DOI: 10.1007/s10459-022-10153-3] [Received: 04/30/2021] [Accepted: 08/07/2022] [Indexed: 06/15/2023]
Abstract
The combination of measuring performance and giving feedback creates tension between the formative and summative purposes of progress evaluations and can be challenging for supervisors. There are conflicting perspectives and evidence on the effects supervisor-trainee relationships have on assessing performance. The aim of this study was to learn how progress evaluations are used in postgraduate education with longitudinal supervisor-trainee relationships. Progress evaluations in a two-year community-pharmacy specialization program were studied with a mixed-method approach. An adapted version of the Canadian Medical Education Directives for Specialists (CanMEDS) framework was used. Validity of the performance evaluation scores of 342 trainees was analyzed using repeated measures ANOVA. Semi-structured interviews were held with fifteen supervisors to investigate their response processes, the utility of the progress evaluations, and the influence of supervisor-trainee relationships. Time and CanMEDS roles affected the three-monthly progress evaluation scores. Interviews revealed that supervisors varied in their response processes. They were more committed to stimulating development than to scoring actual performance. Progress evaluations were utilized to discuss and give feedback on trainee development and to add structure to the learning process. A positive supervisor-trainee relationship was seen as the foundation for feedback, and supervisors preferred the roles of educator, mentor, and coach over the role of assessor. We found that progress evaluations are a good method for directing feedback in longitudinal supervisor-trainee relationships. The reliability of scoring performance was low. We recommend that progress evaluations be independent of formal assessments in order to minimize supervisors' role conflicts.
Affiliation(s)
- Marnix P D Westein
- Department of Pharmaceutical Sciences, Utrecht University, Universiteitsweg 99, 3584, CG, Utrecht, The Netherlands.
- Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, The Netherlands.
- The Royal Dutch Pharmacists Association (KNMP), The Hague, The Netherlands.
- A S Koster
- Department of Pharmaceutical Sciences, Utrecht University, Universiteitsweg 99, 3584 CG, Utrecht, The Netherlands
- H E M Daelmans
- Programme Director Master of Medicine, Faculty of Medicine Vrije Universiteit, Amsterdam, The Netherlands
- M L Bouvy
- Department of Pharmaceutical Sciences, Utrecht University, Universiteitsweg 99, 3584 CG, Utrecht, The Netherlands
- R A Kusurkar
- Research in Education, Faculty of Medicine Vrije Universiteit, Amsterdam, The Netherlands
6
Malau-Aduli BS, Hays RB, D'Souza K, Jones K, Saad S, Celenza A, Turner R, Smith J, Ward H, Schlipalius M, Murphy R, Garg N. “Could You Work in My Team?”: Exploring How Professional Clinical Role Expectations Influence Decision-Making of Assessors During Exit-Level Medical School OSCEs. Front Med (Lausanne) 2022; 9:844899. [PMID: 35602481] [PMCID: PMC9120654] [DOI: 10.3389/fmed.2022.844899] [Received: 12/29/2021] [Accepted: 04/08/2022] [Indexed: 11/13/2022]
Abstract
Decision-making in clinical assessment, such as exit-level medical school Objective Structured Clinical Examinations (OSCEs), is complex. This study utilized an empirical phenomenological qualitative approach with thematic analysis to explore OSCE assessors' perceptions of the concept of a “prototypical intern” expressed during focus group discussions. Topics discussed included the concept of a prototypical intern, qualities to be assessed, and approaches to clinical assessment decision-making. The thematic analysis was then applied to a theoretical framework (Cultural Historical Activity Theory, CHAT) that explored the complexity of making assessment decisions amidst potentially contradictory pressures from academic and clinical perspectives. Ten Australasian medical schools were involved, with 15 experienced and five less experienced assessors participating. Thematic analysis of the data revealed four major themes in relation to how the prototypical intern concept influences clinical assessors' judgements: (a) suitability of the marking rubric, based on assessor characteristics and expectations; (b) competence as a final year student vs. performance as a prototypical intern; (c) safety, trustworthiness and reliability as constructs requiring assessment; and (d) contradictions in the decision-making process due to assessor differences. These themes mapped well onto the interaction between the two activity systems proposed in the CHAT model: academic and clinical. More clinically engaged and more experienced assessors tend to fall back on a heuristic mental construct of a “prototypical intern” to calibrate judgements, particularly in difficult situations. Further research is needed to explore whether consensus on desirable intern qualities, and their inclusion in OSCE marksheets, decreases the cognitive load and increases the validity of assessor decision-making.
Affiliation(s)
- Bunmi S. Malau-Aduli
- College of Medicine and Dentistry, James Cook University, Townsville, QLD, Australia
- Correspondence: Bunmi S. Malau-Aduli
- Richard B. Hays
- College of Medicine and Dentistry, James Cook University, Townsville, QLD, Australia
- Karen D'Souza
- School of Medicine, Deakin University, Geelong, VIC, Australia
- Karina Jones
- College of Medicine and Dentistry, James Cook University, Townsville, QLD, Australia
- Shannon Saad
- School of Medicine, Notre Dame University, Chippendale, NSW, Australia
- Antonio Celenza
- School of Medicine, University of Western Australia, Perth, WA, Australia
- Richard Turner
- School of Medicine, University of Tasmania, Hobart, TAS, Australia
- Jane Smith
- Medical Program, Bond University, Gold Coast, QLD, Australia
- Helena Ward
- Adelaide Medical School, University of Adelaide, Adelaide, SA, Australia
- Michelle Schlipalius
- School of Medicine and Health Sciences, Monash University, Melbourne, VIC, Australia
- Rinki Murphy
- Medical Program, University of Auckland, Auckland, New Zealand
- Nidhi Garg
- School of Medicine, University of Sydney, Sydney, NSW, Australia
7
Swanberg M, Woodson-Smith S, Pangaro L, Torre D, Maggio L. Factors and Interactions Influencing Direct Observation: A Literature Review Guided by Activity Theory. Teaching and Learning in Medicine 2022; 34:155-166. [PMID: 34238091] [DOI: 10.1080/10401334.2021.1931871] [Received: 10/29/2020] [Revised: 04/19/2021] [Accepted: 05/11/2021] [Indexed: 06/13/2023]
Abstract
Phenomenon: Ensuring that future physicians are competent to practice medicine is necessary for high-quality patient care and safety. The shift toward competency-based education has placed renewed emphasis on direct observation via workplace-based assessments in authentic patient care contexts. Despite this interest and multiple studies focused on improving direct observation, challenges regarding the objectivity of this assessment approach remain underexplored and unresolved. Approach: We conducted a literature review of direct observation in authentic patient contexts by systematically searching the databases PubMed, Embase, Web of Science, and ERIC. Included studies comprised original research conducted in the patient care context with authentic patients, either as a live encounter or a video recording of an actual encounter, which focused on factors affecting the direct observation of undergraduate medical education (UME) or graduate medical education (GME) trainees. Because the patient care context adds factors that contribute to the cognitive load of the learner and of the clinician-observer, we focused our question on such contexts, which are most useful in judgments about advancement to the next level of training or practice. We excluded articles or published abstracts not conducted in the patient care context (e.g., OSCEs) or those involving simulation, allied health professionals, or non-UME/GME trainees. We also excluded studies focused on end-of-rotation evaluations and in-training evaluation reports. We extracted key data from the studies and used Activity Theory as a lens to identify factors affecting these observations and the interactions between them. Activity Theory provides a framework to understand and analyze complex human activities, the systems in which people work, and the interactions or tensions between multiple associated factors. Findings: Nineteen articles were included in the analysis; 13 involved GME learners and six involved UME learners. Of the 19, six studies were set in the operating room and four in the emergency department. Using Activity Theory, we discovered that while numerous studies focus on rater and tool influences, very few study the impact of social elements: the rules that govern how the activity happens, the environment and members of the community involved in the activity, and how completion of the activity is divided among members of the community. Insights: Viewing direct observation via workplace-based assessment through the lens of Activity Theory may enable educators to implement curricular changes that improve direct observation for assessment. Activity Theory may also allow researchers to design studies focused on the identified underexplored interactions and influences in relation to direct observation.
Affiliation(s)
- Margaret Swanberg
- Department of Neurology, Uniformed Services University, Bethesda, Maryland, USA
- Sarah Woodson-Smith
- Department of Neurology, Naval Medical Center Portsmouth, Portsmouth, Virginia, USA
- Louis Pangaro
- Department of Medicine, Uniformed Services University, Bethesda, Maryland, USA
- Dario Torre
- Department of Medicine, Uniformed Services University, Bethesda, Maryland, USA
- Center for Health Professions Education, Uniformed Services University, Bethesda, Maryland, USA
- Lauren Maggio
- Department of Medicine, Uniformed Services University, Bethesda, Maryland, USA
- Center for Health Professions Education, Uniformed Services University, Bethesda, Maryland, USA
8
Hyde S, Fessey C, Boursicot K, MacKenzie R, McGrath D. OSCE rater cognition - an international multi-centre qualitative study. BMC Medical Education 2022; 22:6. [PMID: 34980099] [PMCID: PMC8721185] [DOI: 10.1186/s12909-021-03077-w] [Received: 12/11/2020] [Accepted: 12/06/2021] [Indexed: 05/09/2023]
Abstract
INTRODUCTION This study aimed to explore the decision-making processes of raters during objective structured clinical examinations (OSCEs), in particular the tacit assumptions and beliefs of raters as well as rater idiosyncrasies. METHODS Think-aloud protocol interviews were used to gather data on the thoughts of examiners during their decision-making while watching trigger OSCE videos and rating candidates. A purposeful recruiting strategy was taken, with a view to interviewing both examiners with many years of experience (greater than six years) and those with less experience examining at final medical examination level. RESULTS Thirty-one interviews were conducted in three centres in three different countries. Three themes were identified during data analysis, entitled 'OSCEs are inauthentic', 'looking for glimpses of truth' and 'evolution with experience'. CONCLUSION Raters perceive that the shortcomings of OSCEs can have unwanted effects on student behaviour. Some examiners, more likely those in the more experienced group, may deviate from an organisation's directions because of perceived shortcomings of the assessment. No method of assessment is without flaw, and it is important to be aware of the effects of assessment methods' limitations on student performance and examiner perception. Further study of assessor and student perceptions of OSCE performance would be helpful.
Affiliation(s)
- Sarah Hyde
- School of Medicine at the University of Limerick, Health Research Institute, Limerick, Ireland.
- Deirdre McGrath
- School of Medicine at the University of Limerick, Health Research Institute, Limerick, Ireland
9
Yeates P, Moult A, Cope N, McCray G, Xilas E, Lovelock T, Vaughan N, Daw D, Fuller R, McKinley RK(B). Measuring the Effect of Examiner Variability in a Multiple-Circuit Objective Structured Clinical Examination (OSCE). Academic Medicine: Journal of the Association of American Medical Colleges 2021; 96:1189-1196. [PMID: 33656012] [PMCID: PMC8300845] [DOI: 10.1097/acm.0000000000004028] [Indexed: 06/12/2023]
Abstract
PURPOSE Ensuring that examiners in different parallel circuits of objective structured clinical examinations (OSCEs) judge to the same standard is critical to the chain of validity. Recent work suggests the examiner-cohort (i.e., the particular group of examiners) could significantly alter outcomes for some candidates. Despite this, examiner-cohort effects are rarely examined, since fully nested data (i.e., no crossover between the students judged by different examiner groups) limit comparisons. In this study, the authors aim to replicate and further develop a novel method called Video-based Examiner Score Comparison and Adjustment (VESCA), so it can be used to enhance quality assurance of distributed or national OSCEs. METHOD In 2019, 6 volunteer students were filmed on 12 stations in a summative OSCE. In addition to examining live student performances, examiners from 8 separate examiner-cohorts scored the pool of video performances. Examiners scored videos specific to their station. Video scores linked otherwise fully nested data, enabling comparisons by Many Facet Rasch Modeling. The authors compared and adjusted for examiner-cohort effects. They also compared examiners' scores when videos were embedded (interspersed between live students during the OSCE) or judged later via the Internet. RESULTS Having accounted for differences in students' ability, examiner-cohort scores for students of the same ability ranged from 18.57 of 27 (68.8%) to 20.49 (75.9%), Cohen's d = 1.3. Score adjustment changed the pass/fail classification for up to 16% of students, depending on the modeled cut score. Internet and embedded video scoring showed no difference in mean scores or variability. Examiners' accuracy did not deteriorate over the 3-week Internet scoring period. CONCLUSIONS Examiner-cohorts produced a replicable, significant influence on OSCE scores that was unaccounted for by typical assessment psychometrics. VESCA offers a promising means to enhance validity and fairness in distributed OSCEs or national exams. Internet-based scoring may enhance VESCA's feasibility.
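The linking logic behind VESCA can be illustrated with a deliberately simplified sketch: every examiner-cohort scores the same shared videos, the cohort's mean offset on those videos estimates its leniency or severity, and live-candidate scores are adjusted by that offset. All numbers below are hypothetical, and this is plain mean-equating, not the Many Facet Rasch Modeling the study actually used.

```python
import numpy as np

rng = np.random.default_rng(1)

n_cohorts, n_videos = 8, 6
true_video_quality = rng.normal(19.5, 1.0, size=n_videos)   # on a /27 scale
cohort_severity = rng.normal(0.0, 0.6, size=n_cohorts)      # leniency/harshness

# Every cohort scores the same shared videos: this is the linking data.
video_scores = (true_video_quality[None, :] + cohort_severity[:, None]
                + rng.normal(0, 0.3, size=(n_cohorts, n_videos)))

# Estimated cohort effect: cohort mean on shared videos minus the grand mean.
cohort_effect = video_scores.mean(axis=1) - video_scores.mean()

# Each cohort also scores its own (non-overlapping) live candidates;
# adjust those scores by removing the cohort's estimated effect.
live_scores = (rng.normal(19.0, 2.0, size=(n_cohorts, 40))
               + cohort_severity[:, None])
adjusted = live_scores - cohort_effect[:, None]
```

In the real method the adjustment sits inside a Rasch model that simultaneously accounts for student ability and station difficulty; the sketch only shows why shared video scores make otherwise fully nested cohorts comparable at all.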
Affiliation(s)
- Peter Yeates
- P. Yeates is a senior lecturer in medical education research, School of Medicine, Keele University, Keele, Staffordshire, and a consultant in acute and respiratory medicine, Fairfield General Hospital, Pennine Acute Hospitals NHS Trust, Bury, Lancashire, United Kingdom; ORCID: https://orcid.org/0000-0001-6316-4051
- Alice Moult
- A. Moult is a research assistant in medical education, School of Medicine, Keele University, Keele, Staffordshire, United Kingdom; ORCID: https://orcid.org/0000-0002-9424-5660
- Natalie Cope
- N. Cope is a lecturer in clinical education (psychometrics), School of Medicine, Keele University, Keele, Staffordshire, United Kingdom
- Gareth McCray
- G. McCray is a researcher, School of Primary, Community and Social Care, Keele University, Keele, Staffordshire, United Kingdom
- Eleftheria Xilas
- E. Xilas is a foundation year 1 doctor and recent graduate, School of Medicine, Keele University, Keele, Staffordshire, United Kingdom
- Tom Lovelock
- T. Lovelock is an information technology services manager, Faculty of Medicine & Health Sciences, Keele University, Keele, Staffordshire, United Kingdom
- Nicholas Vaughan
- N. Vaughan is a senior application developer, directorate of digital strategy and information technology services, Keele University, Keele, Staffordshire, United Kingdom
- Dan Daw
- D. Daw is an information technology systems development engineer, School of Medicine, Keele University, Keele, Staffordshire, United Kingdom
- Richard Fuller
- R. Fuller is deputy dean, School of Medicine, University of Liverpool, Liverpool, United Kingdom; ORCID: https://orcid.org/0000-0001-7965-4864
- Robert K. (Bob) McKinley
- R.K. McKinley is an emeritus professor of education in general practice, School of Medicine, Keele University, Keele, Staffordshire, United Kingdom; ORCID: https://orcid.org/0000-0002-3684-3435
10
Bowman A, Harreveld RB, Lawson C. Factors influencing the rating of sonographer students' clinical performance. Radiography (Lond) 2021; 28:8-16. [PMID: 34332858 DOI: 10.1016/j.radi.2021.07.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Revised: 07/07/2021] [Accepted: 07/09/2021] [Indexed: 11/18/2022]
Abstract
INTRODUCTION Little is known about the factors influencing clinical supervisor-assessors' ratings of sonographer students' performance. This study identifies these influential factors and relates them to professional competency standards, with the aim of raising awareness and improving assessment practice. METHODS This study used archived written comments from 94 clinical assessors describing 174 sonographer students' performance one month into their initial clinical practice (2015-2016). Qualitative mixed-method analysis revealed factors influencing assessor ratings of student performance and provided an estimate of the valency, association, and frequency of these factors. RESULTS Assessors provided written comments for 93% (n = 162/174) of students. Comments totaled 7190 words (mean of 44 words/student). One-third of comment paragraphs were wholly positive and two-thirds were equivocal; none were wholly negative. Thematic analysis revealed eleven factors, and eight sub-factors, influencing assessor impressions of five dimensions of performance. Of the factors mentioned, 84.6% (n = 853/1008) related to professional competencies; the remaining 15.4% (n = 155/1008) were unrelated to competencies, instead reflecting humanistic factors such as student motivation, disposition, approach to learning, prospects and impact on supervisor and staff. Factors were prioritised and combined independently, although some were associated. CONCLUSION Clinical assessors formed impressions based on student performance, humanistic behaviours and personal qualities not necessarily outlined in educational outcomes or professional competency standards. Their presence, and their interrelations, impact success in clinical practice through their contribution to, and indication of, competence.
IMPLICATIONS FOR PRACTICE Sonographer student curricula and assessor training should raise awareness of the factors influencing performance ratings and judgement of clinical competence, particularly the importance of humanistic factors. Inclusion of narrative comments, multiple assessors, and broad performance dimensions would enhance clinical assessment of sonographer student performance.
Affiliation(s)
- A Bowman
- School of Graduate Research, Central Queensland University, Cairns, Australia.
- R B Harreveld
- School of Education and the Arts, Central Queensland University, Rockhampton, Australia.
- C Lawson
- School of Education and the Arts, Central Queensland University, Rockhampton, Australia.
11
Gottlieb M, Jordan J, Siegelman JN, Cooney R, Stehman C, Chan TM. Direct Observation Tools in Emergency Medicine: A Systematic Review of the Literature. AEM EDUCATION AND TRAINING 2021; 5:e10519. [PMID: 34041428 PMCID: PMC8138102 DOI: 10.1002/aet2.10519] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Revised: 07/31/2020] [Accepted: 08/09/2020] [Indexed: 05/07/2023]
Abstract
OBJECTIVES Direct observation is important for assessing the competency of medical learners. Multiple tools have been described in other fields, although the degree of emergency medicine-specific literature is unclear. This review sought to summarize the current literature on direct observation tools in the emergency department (ED) setting. METHODS We searched PubMed, Scopus, CINAHL, the Cochrane Central Register of Clinical Trials, the Cochrane Database of Systematic Reviews, ERIC, PsycINFO, and Google Scholar from 2012 to 2020 for publications on direct observation tools in the ED setting. Data were dual extracted into a predefined worksheet, and quality analysis was performed using the Medical Education Research Study Quality Instrument. RESULTS We identified 38 publications, comprising 2,977 learners. Fifteen different tools were described. The most commonly assessed tools included the Milestones (nine studies), Observed Structured Clinical Exercises (seven studies), the McMaster Modular Assessment Program (six studies), Queen's Simulation Assessment Test (five studies), and the mini-Clinical Evaluation Exercise (four studies). Most of the studies were performed in a single institution, and there were limited validity or reliability assessments reported. CONCLUSIONS The number of publications on direct observation tools for the ED setting has markedly increased. However, there remains a need for stronger internal and external validity data.
Affiliation(s)
- Michael Gottlieb
- Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Jaime Jordan
- Department of Emergency Medicine, Ronald Reagan UCLA Medical Center, Los Angeles, CA, USA
- Robert Cooney
- Department of Emergency Medicine, Geisinger Medical Center, Danville, PA, USA
- Teresa M. Chan
- Department of Medicine, Division of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
12
Interassessor agreement of portfolio-based competency assessment for orthotists/prosthetists in Australia: a mixed method study. Prosthet Orthot Int 2021; 45:276-288. [PMID: 34061054 DOI: 10.1097/pxr.0000000000000022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Accepted: 04/23/2021] [Indexed: 02/03/2023]
Abstract
BACKGROUND Internationally qualified orthotists/prosthetists who want to practice in Australia must pass a portfolio-based competency assessment. Testing the agreement between independent assessors is important to engender confidence in the assessment, and continually improve the processes. OBJECTIVES To quantify interassessor agreement for all 68 performance indicators in the Australian Orthotic Prosthetic Association's Entry Level Competency Standards and where there was significant disagreement between assessors, to explore the reasons why. STUDY DESIGN Mixed methods: explanatory sequential. METHOD Fifteen portfolios were assigned to independent assessors. Assessors determined whether the evidence presented met the requirements of each performance indicator. Interassessor agreement was calculated using Gwet's Agreement Coefficient 1 (AC1), and these data informed semistructured interviews to explore the reasons for disagreement. RESULTS Most performance indicators (87%) had moderate to substantial agreement (AC1 > 0.71), which could be attributed to a variety of factors including the use of a simple assessment rubric with supporting guidelines and assessor training to establish shared expectations. The remaining performance indicators (13%) had fair to slight agreement (AC1 ≤ 0.7). Interviews with assessors suggested that disagreement could be attributed to the complexity of some performance indicators, unconscious bias, and the appropriateness of the evidence presented. CONCLUSIONS Although most performance indicators in Australian Orthotic Prosthetic Association's Entry Level Competency Standard were associated with moderate to substantial interassessor agreement, there are opportunities to improve agreement by simplifying the wording of some performance indicators and revising guidelines to help applicants curate the most appropriate evidence for each performance indicator.
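For two assessors making binary met/not-met judgements, Gwet's AC1 has a simple closed form: observed agreement corrected by a chance-agreement term based on the mean prevalence of "met" ratings. A self-contained sketch with invented ratings (not the study's data):

```python
def gwet_ac1(rater_a, rater_b):
    """Gwet's first-order agreement coefficient (AC1) for two raters
    and two categories, coded 1 = requirement met, 0 = not met."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    pa = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the mean prevalence of 'met' across both raters
    pi = (sum(rater_a) + sum(rater_b)) / (2 * n)
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)

# Hypothetical judgements for 10 performance indicators
assessor_1 = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]
assessor_2 = [1, 1, 1, 0, 1, 1, 0, 0, 1, 1]
ac1 = gwet_ac1(assessor_1, assessor_2)
```

Unlike Cohen's kappa, AC1 stays stable when one category strongly dominates, which is why it suits pass-heavy portfolio judgements like these.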
13
Sadka N, Lee V, Ryan A. Purpose, Pleasure, Pace and Contrasting Perspectives: Teaching and Learning in the Emergency Department. AEM EDUCATION AND TRAINING 2021; 5:e10468. [PMID: 33796807 PMCID: PMC7995923 DOI: 10.1002/aet2.10468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 05/02/2020] [Accepted: 05/04/2020] [Indexed: 06/12/2023]
Abstract
OBJECTIVES Teaching and learning in the clinical setting are vital for the training and development of emergency physicians. Increasing service provision and time pressures in the emergency department (ED) have led to junior trainees' perceptions of a lack of teaching and support during clinical shifts. We sought to explore the perceptions of learners and supervisors in our ED regarding teaching within this diverse and challenging context. METHODS Nine ED physicians and eight ED trainees were interviewed to explore perceptions of teaching in the ED. Clinical teaching was defined as "on-the-floor" teaching during work shifts. We used a validated clinical teaching assessment instrument to help pilot and develop some of our interview questions, and data were analyzed using qualitative thematic analysis. RESULTS We identified three major themes: 1) the strong sense of purpose and pleasure gained through teaching and learning interactions, despite both groups being unsure of each other's engagement and enthusiasm; 2) contrasting perspectives of teaching, with registrars holding a traditional knowledge-transmission view, yet a shared perception of ED consultants as the teachers; and 3) the effect of patient acuity and volume, which facilitated learning up to a critical point of busyness, beyond which service provision pressures and staffing limitations were perceived to negatively impact learning. CONCLUSIONS The ED is a complex and fluid working and learning environment. We need to develop a shared understanding of teaching and learning opportunities in the ED that helps all stakeholders move beyond learning as knowledge acquisition and see the potential for learning from teachers of a multitude of professional backgrounds.
Affiliation(s)
- Nancy Sadka
- Emergency Medicine Training, Austin Health, Heidelberg, Victoria, Australia
- Victor Lee
- Emergency Medicine Training, Austin Health, Heidelberg, Victoria, Australia
- Anna Ryan
- Melbourne Medical School, University of Melbourne, Melbourne, Victoria, Australia
14
Malau-Aduli BS, Hays RB, D'Souza K, Smith AM, Jones K, Turner R, Shires L, Smith J, Saad S, Richmond C, Celenza A, Sen Gupta T. Examiners' decision-making processes in observation-based clinical examinations. MEDICAL EDUCATION 2021; 55:344-353. [PMID: 32810334 DOI: 10.1111/medu.14357] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Revised: 08/08/2020] [Accepted: 08/14/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND Objective structured clinical examinations (OSCEs) are commonly used to assess the clinical skills of health professional students. Examiner judgement is one acknowledged source of variation in candidate marks. This paper reports an exploration of examiner decision making to better characterise the cognitive processes and workload associated with making judgements of clinical performance in exit-level OSCEs. METHODS Fifty-five examiners for exit-level OSCEs at five Australian medical schools completed a NASA Task Load Index (TLX) measure of cognitive load and participated in focus group interviews immediately after the OSCE session. Discussions focused on how decisions were made for borderline and clear pass candidates. Interviews were transcribed, coded and thematically analysed, and NASA TLX results were quantitatively analysed. RESULTS Examiners self-reported higher cognitive workload when assessing a borderline candidate than when assessing a clear pass candidate, with mental demand showing the largest difference. Further analysis revealed five major themes considered by examiners when marking candidate performance in an OSCE: (a) use of marking criteria as a source of reassurance; (b) difficulty adhering to the marking sheet under certain conditions; (c) demeanour of candidates; (d) patient safety; and (e) calibration using a mental construct of the 'mythical [prototypical] intern'. CONCLUSIONS Judging candidate performance is a complex, cognitively difficult task, particularly when performance is of borderline or lower standard. At programme exit level, examiners intuitively want to rate candidates against a construct of a prototypical graduate when marking criteria appear not to describe both what a passing candidate should demonstrate and how they should complete clinical tasks.
This construct should be shared, agreed upon and aligned with marking criteria to best guide examiner training and calibration. Achieving this integration may improve the accuracy and consistency of examiner judgements and reduce cognitive workload.
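The NASA TLX used above rates workload on six subscales (mental, physical and temporal demand, performance, effort, frustration). In the common "raw TLX" variant, overall workload is simply the unweighted mean of the six ratings; whether this study used the raw or the pairwise-weighted variant is not stated here, so the sketch below assumes raw TLX with invented ratings:

```python
# The six NASA TLX subscales, each rated on a 0-100 scale
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings):
    """Raw (unweighted) TLX workload: the mean of the six subscale ratings."""
    assert set(ratings) == set(SUBSCALES)
    return sum(ratings.values()) / len(SUBSCALES)

# Illustrative ratings for one examiner judging two candidates
borderline = {"mental": 80, "physical": 15, "temporal": 60,
              "performance": 55, "effort": 70, "frustration": 50}
clear_pass = {"mental": 45, "physical": 10, "temporal": 40,
              "performance": 25, "effort": 40, "frustration": 20}
workload_gap = raw_tlx(borderline) - raw_tlx(clear_pass)
```

In this invented example the mental-demand subscale drives most of the gap, mirroring the pattern the study reports for borderline candidates.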
Affiliation(s)
- Bunmi S Malau-Aduli
- College of Medicine and Dentistry, James Cook University, Townsville, QLD, Australia
- Richard B Hays
- College of Medicine and Dentistry, James Cook University, Townsville, QLD, Australia
- Karen D'Souza
- School of Medicine, Deakin University, Geelong, VIC, Australia
- Amy M Smith
- College of Medicine and Dentistry, James Cook University, Townsville, QLD, Australia
- Karina Jones
- College of Medicine and Dentistry, James Cook University, Townsville, QLD, Australia
- Richard Turner
- School of Medicine, University of Tasmania, Hobart, TAS, Australia
- Lizzi Shires
- School of Medicine, University of Tasmania, Hobart, TAS, Australia
- Jane Smith
- Medical Program, Bond University, Gold Coast, QLD, Australia
- Shannon Saad
- School of Medicine, Notre Dame University, Sydney, NSW, Australia
- Antonio Celenza
- School of Medicine, University of Western Australia, Perth, WA, Australia
- Tarun Sen Gupta
- College of Medicine and Dentistry, James Cook University, Townsville, QLD, Australia
15
Hyde C, Yardley S, Lefroy J, Gay S, McKinley RK. Clinical assessors' working conceptualisations of undergraduate consultation skills: a framework analysis of how assessors make expert judgements in practice. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2020; 25:845-875. [PMID: 31997115 PMCID: PMC7471149 DOI: 10.1007/s10459-020-09960-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/19/2019] [Accepted: 01/18/2020] [Indexed: 06/10/2023]
Abstract
Undergraduate clinical assessors make expert, multifaceted judgements of consultation skills in concert with medical school OSCE grading rubrics. Assessors are not cognitive machines: their judgements are made in the light of prior experience and social interactions with students. It is important to understand assessors' working conceptualisations of consultation skills and whether they could be used to develop tools for undergraduate assessment. This study aimed to identify the working conceptualisations that assessors use while assessing undergraduate medical students' consultation skills, and to develop assessment tools based on assessors' working conceptualisations and natural language. In semi-structured interviews, 12 experienced assessors from a UK medical school populated a blank assessment scale with personally meaningful descriptors while describing how they made judgements of students' consultation skills (at exit standard). A two-step iterative thematic framework analysis was performed, drawing on constructionism and interactionism. Five domains were found within working conceptualisations of consultation skills: Application of knowledge; Manner with patients; Getting it done; Safety; and Overall impression. Three mechanisms of judgement about student behaviour were identified: observations, inferences and feelings. Assessment tools drawing on participants' conceptualisations and natural language were generated, including 'grade descriptors' for common conceptualisations in each domain by mechanism of judgement, matched to grading rubrics of Fail, Borderline, Pass and Very good. Utilising working conceptualisations to develop assessment tools is feasible and potentially useful. Further work is needed to test the impact on assessment quality.
Affiliation(s)
- Catherine Hyde
- School of Medicine, Keele University, Keele, Staffordshire, ST5 5BG, UK
- Sarah Yardley
- School of Medicine, Keele University, Keele, Staffordshire, ST5 5BG, UK
- Palliative Care Service, Central and North West London NHS Foundation Trust, St Pancras Hospital, 5th Floor South Wing, 4 St. Pancras Way, London, NW1 0PE, UK
- Janet Lefroy
- School of Medicine, Keele University, Keele, Staffordshire, ST5 5BG, UK
- Simon Gay
- University of Leicester School of Medicine, Leicester, UK
- Robert K McKinley
- School of Medicine, Keele University, Keele, Staffordshire, ST5 5BG, UK
17
Liu YP, Jensen D, Chan CY, Wei CJ, Chang Y, Wu CH, Chiu CH. Development of a nursing-specific Mini-CEX and evaluation of the core competencies of new nurses in postgraduate year training programs in Taiwan. BMC MEDICAL EDUCATION 2019; 19:270. [PMID: 31319845 PMCID: PMC6639917 DOI: 10.1186/s12909-019-1705-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/17/2018] [Accepted: 07/11/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND Modern nursing requires a broad set of academic and practical skills, and an effective nurse must integrate these skills in a wide range of healthcare contexts. Cultivation of core competencies has recently become a key issue globally in the development of nursing education. To assess the performance of new nurses, this study developed a nursing-specific Mini-Clinical Evaluation Exercise (Mini-CEX) to evaluate the effect of postgraduate year (PGY) nurse training programs in Taiwan. METHODS A nursing-specific Mini-CEX was developed based on the required core competencies of nurses. Reliability and validity were confirmed in evaluator workshops carried out prior to the administration of the pilot test and final test. Thirty-two PGY trainees were recruited, with a supervisor-to-trainee ratio of 1:1.94. Data were collected from February to June 2012 and analyzed using the Kruskal-Wallis test. RESULTS The 32 PGY trainees scored highest in the "nursing professionalism" dimension and lowest in the "physical examination" dimension. The overall competency score was satisfactory. Trainee nurses with 19-24 months of experience scored higher in overall performance than the other two experience groups. CONCLUSION The results of this research indicate the feasibility of using our Mini-CEX tool to evaluate the competencies of PGY trainees.
Affiliation(s)
- Yueh-Ping Liu
- Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Dana Jensen
- School of Health Care Administration, Taipei Medical University, 250 Wu-hsing St., Taipei, Taiwan
- Cho-yu Chan
- Center for Teaching Excellence, Changhua Christian Hospital, Changhua City, Taiwan
- Chung-jen Wei
- Department of Public Health, Fu Jen Catholic University, New Taipei City, Taiwan
- Yuanmay Chang
- Institute of Long Term Care, MacKay Medical College, New Taipei City, Taiwan
- Chih-Hsiung Wu
- College of Medicine, Taipei Medical University, Taipei, Taiwan
- Chiung-hsuan Chiu
- School of Health Care Administration, Taipei Medical University, 250 Wu-hsing St., Taipei, Taiwan
18
Wei CJ, Lu TH, Chien SC, Huang WT, Liu YP, Chan CY, Chiu CH. The development and use of a pharmacist-specific Mini-CEX for postgraduate year trainees in Taiwan. BMC MEDICAL EDUCATION 2019; 19:165. [PMID: 31118004 PMCID: PMC6530012 DOI: 10.1186/s12909-019-1602-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/11/2019] [Accepted: 05/13/2019] [Indexed: 06/03/2023]
Abstract
BACKGROUND Clinical pharmacists must have a complex combination of academic knowledge and practical experience that integrates all aspects of practice. Taiwan's Ministry of Health and Welfare launched the Postgraduate Year (PGY) training program in 2007 to raise the standard of pharmaceutical care. This study aims to develop a pharmacist-specific Chinese-language Mini-Clinical Evaluation Exercise (Mini-CEX) to evaluate the professional development of postgraduate year trainees. METHOD The specialized Mini-CEX was developed based on the core competencies of pharmacists, published literature, and expert opinion. A pilot test and evaluator workshop were held prior to the administration of the main test. Fifty-three trainees were recruited. The main study was conducted at two regional teaching hospitals and a medical center teaching hospital in Taiwan between February and June 2012. Inter-rater reliability was assessed with the kappa statistic, descriptive statistics were reported, and the Kruskal-Wallis test was used to compare the PGY trainees' Mini-CEX scores across groups. RESULTS Trainees who had recently completed PGY programs (C-PGY) and 2nd-year PGY trainees (PGY2) earned excellent scores, while 1st-year PGY trainees (PGY1) earned satisfactory scores in overall performance. C-PGY and PGY2 trainees also performed significantly better than PGY1 trainees in the organization and efficiency domain and the communication skills domain. CONCLUSION This study demonstrates the feasibility of using the newly developed pharmacist-specific Chinese-language Mini-CEX instrument to evaluate the core competencies of PGY trainees in clinical settings.
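The kappa statistic used above for inter-rater reliability corrects raw agreement for the agreement expected by chance alone. A self-contained sketch of Cohen's kappa for two raters, using invented ratings on a hypothetical 3-point scale rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement,
    where chance is estimated from each rater's marginal category frequencies."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from the two raters' marginal distributions
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    pe = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical Mini-CEX global ratings from two evaluators
eval_1 = ["satisfactory", "excellent", "excellent", "marginal", "satisfactory", "excellent"]
eval_2 = ["satisfactory", "excellent", "satisfactory", "marginal", "satisfactory", "excellent"]
kappa = cohens_kappa(eval_1, eval_2)
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance; values above roughly 0.6 are conventionally read as substantial.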
Affiliation(s)
- Chung-Jen Wei
- Department of Public Health, Fu Jen Catholic University, New Taipei City, Taiwan
- Tzu-Hsuan Lu
- Medical Quality Department, Taipei Medical University-Shuang Ho Hospital, New Taipei City, Taiwan
- Shu-Chen Chien
- School of Pharmacy, College of Pharmacy, Taipei Medical University, Taipei, Taiwan
- Department of Pharmacy, Taipei Medical University Hospital, Taipei, Taiwan
- Wan-Tsui Huang
- Department of Pharmacy, Cathay General Hospital, Taipei, Taiwan; School of Pharmacy, Taipei Medical University, Taipei, Taiwan
- Yueh-Ping Liu
- Department of Emergency Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Cho-Yu Chan
- Changhua Christian Hospital, Changhua, Taiwan
- Chiung-Hsuan Chiu
- Department of Pharmacy, Cathay General Hospital, Taipei, Taiwan; School of Pharmacy, Taipei Medical University, Taipei, Taiwan