1. Bowman A, Reid D, Bobby Harreveld R, Lawson C. Evaluation of post-simulation sonographer students' professional behaviour in the workplace. Radiography (Lond) 2022;28:889-896. PMID: 35780628. DOI: 10.1016/j.radi.2022.06.010.
Abstract
INTRODUCTION In Australia, sonographers' professional identity is traditionally 'caught' from clinical role models. A four-year undergraduate-postgraduate course introduced professional identity education, with simulated practice, to prepare novice sonographer students prior to clinical practice. Preclinical students learnt sonographer professional behaviour and humanistic attributes during simulation designed with volunteer peers as standardised patients, educator role-models, immediate feedback, self-reflection, and longitudinal multi-observer assessment. This paper reports on the transfer of the learnt professional behaviour and humanistic attributes to clinical practice. METHODS Professional behaviour evaluations completed by 94 clinical assessors described 174 students' professional behaviour and attributes one month into their initial clinical practice (2015-2016). Student performance on each behaviour, and on each behavioural category, was quantitatively analysed by modelling binomial proportions with logistic regression. RESULTS Students demonstrated substantial learning transfer to clinical practice, achieving an overall mean rating of 'consistent' sonographer professional behaviour and humanistic attributes (mean score ≥3/4) one month into clinical practice. Professional behaviours varied in transferability, with 'response to patient's questions' showing the least efficacy (P < 0.05). Increased deliberate practice with educator role-models improved transfer efficacy significantly (P < 0.001). CONCLUSION Preclinical application of theory to simulated practice, using standardised patients, educator role-models, immediate feedback, and multi-observer assessment, facilitated substantial transfer of sonographer professional behaviour and attributes to clinical practice. The efficacy of transfer varied but improved with increased deliberate practice and feedback. IMPLICATIONS FOR PRACTICE Incorporating preclinical professional behaviour education with simulated practice into the core curriculum of sonographer courses is recommended for the formation of sonographer professional identity, improved clinical outcomes and increased patient safety during the early stages of ultrasound education.
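For readers who want to see what the analysis described in the METHODS might look like in practice, the following is a minimal, illustrative sketch of modelling binomial proportions with logistic regression. The behaviour names and counts are invented for demonstration and are not the study's data; the sketch assumes the pandas and statsmodels libraries.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical counts: for each professional behaviour, how many students were
# rated "consistent" (successes) vs "not consistent" (failures) by assessors.
df = pd.DataFrame({
    "behaviour": ["communication", "empathy", "response_to_questions", "teamwork"],
    "consistent": [150, 142, 118, 160],
    "not_consistent": [24, 32, 56, 14],
})

# Binomial-proportion logistic regression: endog is a (successes, failures)
# matrix, exog encodes the behaviour category via dummy variables.
endog = df[["consistent", "not_consistent"]].to_numpy()
exog = sm.add_constant(pd.get_dummies(df["behaviour"], drop_first=True, dtype=float))

result = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(result.summary())  # Wald tests flag behaviours whose transfer differs
```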
Affiliation(s)
- A Bowman, School of Graduate Research, Central Queensland University, Cairns, Australia
- D Reid, Department of Agriculture and Fisheries, Queensland Government, Rockhampton, Australia
- R Bobby Harreveld, School of Education and the Arts, Central Queensland University, Rockhampton, Australia
- C Lawson, School of Education and the Arts, Central Queensland University, Rockhampton, Australia
2. Williams KM. The Occupational Performance Assessment–Response Distortion (OPerA-RD) Scale. Journal of Personnel Psychology 2022. DOI: 10.1027/1866-5888/a000301.
Abstract
The ubiquity and consequences of job performance evaluations necessitate accurate responding. This paper describes two studies designed to develop (Study 1) and provide initial validation (Study 2) for a new measure specifically designed to assist in this context: the Occupational Performance Assessment–Response Distortion (OPerA-RD) scale. This 20-item scale is contextualized to the workplace and was developed by identifying items that could detect over- and under-reporting of job performance by self- or other-report in four independent faking samples. Initial validation of the OPerA-RD was supported by expected differences between within-group faking and control conditions in subsequent samples, specifically over- and under-reporting of job performance by self- or other-reports. Implications for research and applied settings are discussed.
Affiliation(s)
- Kevin M. Williams, Center for Education and Career Development, Educational Testing Service, Princeton, NJ, USA
3. Soukoulis V, Martindale J, Bray MJ, Bradley E, Gusic ME. The use of EPA assessments in decision-making: Do supervision ratings correlate with other measures of clinical performance? Medical Teacher 2021;43:1323-1329. PMID: 34242113. DOI: 10.1080/0142159x.2021.1947480.
Abstract
BACKGROUND Entrustable professional activities (EPAs) have been introduced as a framework for teaching and assessment in competency-based educational programs. With growing use has come a call to examine the validity of EPA assessments. We sought to explore the correlation of EPA assessments with other clinical performance measures to support the use of supervision ratings in decisions about medical students' curricular progression. METHODS Spearman rank coefficients were used to determine the correlation of supervision ratings from EPA assessments with scores on clerkship evaluations and with performance on an end-of-clerkship-year Objective Structured Clinical Examination (CPX). RESULTS Both the overall clinical evaluation items score (rho 0.40; n = 166) and the CPX patient encounter domain score (rho 0.31; n = 149) showed significant correlation with students' overall mean EPA supervision rating during the clerkship year. There was significant correlation between the mean supervision rating for EPA assessments of history, exam, note, and oral presentation skills and the scores for these skills on clerkship evaluations; less so on the CPX. CONCLUSIONS Correlation of EPA supervision ratings with commonly used clinical performance measures offers support for their use in undergraduate medical education. Data supporting the validity of EPA assessments promote stakeholders' acceptance of their use in summative decisions about students' readiness for increased patient care responsibility.
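A hedged illustration of the correlation analysis described above: computing a Spearman rank coefficient between two performance measures. The data below are simulated stand-ins, not the study's scores.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Simulated paired measures for 166 students: mean EPA supervision rating and
# an overall clerkship evaluation score (stand-ins, not the study data).
epa_rating = rng.uniform(1, 4, size=166)
clerkship_eval = 0.5 * epa_rating + rng.normal(0, 0.8, size=166)

rho, p_value = spearmanr(epa_rating, clerkship_eval)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```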
Affiliation(s)
- Victor Soukoulis, Division of Cardiovascular Medicine, University of Virginia School of Medicine, Charlottesville, VA, USA
- James Martindale, Center for Medical Education Research and Scholarly Innovation, University of Virginia School of Medicine, Charlottesville, VA, USA
- Megan J Bray, Center for Medical Education Research and Scholarly Innovation and Department of Obstetrics and Gynecology, University of Virginia School of Medicine, Charlottesville, VA, USA
- Elizabeth Bradley, Center for Medical Education Research and Scholarly Innovation, University of Virginia School of Medicine, Charlottesville, VA, USA
- Maryellen E Gusic, Center for Medical Education Research and Scholarly Innovation and Department of Pediatrics, University of Virginia School of Medicine, Charlottesville, VA, USA
4. Bowman A, Harreveld RB, Lawson C. Factors influencing the rating of sonographer students' clinical performance. Radiography (Lond) 2021;28:8-16. PMID: 34332858. DOI: 10.1016/j.radi.2021.07.009.
Abstract
INTRODUCTION Little is known about the factors influencing clinical supervisor-assessors' ratings of sonographer students' performance. This study identifies these influential factors and relates them to professional competency standards, with the aim of raising awareness and improving assessment practice. METHODS This study used archived written comments from 94 clinical assessors describing 174 sonographer students' performance one month into their initial clinical practice (2015-2016). Qualitative mixed method analysis revealed factors influencing assessor ratings of student performance and provided an estimate of the valency, association, and frequency of these factors. RESULTS Assessors provided written comments for 93% (n = 162/174) of students. Comments totaled 7190 words (mean of 44 words/student). One-third of comment paragraphs were wholly positive, two-thirds were equivocal, and none were wholly negative. Thematic analysis revealed eleven factors, and eight sub-factors, influencing assessor impressions of five dimensions of performance. Of the factors mentioned, 84.6% (n = 853/1008) related to professional competencies; the remaining 15.4% (n = 155/1008) were unrelated to competencies, instead reflecting humanistic factors such as student motivation, disposition, approach to learning, prospects and impact on supervisor and staff. Factors were prioritised and combined independently, although some were associated. CONCLUSION Clinical assessors formed impressions based on student performance, humanistic behaviours and personal qualities not necessarily outlined in educational outcomes or professional competency standards. Their presence and interrelations impact success in clinical practice through their contribution to, and indication of, competence. IMPLICATIONS FOR PRACTICE Sonographer student curricula and assessor training should raise awareness of the factors influencing performance ratings and judgements of clinical competence, particularly the importance of humanistic factors. Inclusion of narrative comments, multiple assessors, and broad performance dimensions would enhance clinical assessment of sonographer student performance.
Affiliation(s)
- A Bowman, School of Graduate Research, Central Queensland University, Cairns, Australia
- R B Harreveld, School of Education and the Arts, Central Queensland University, Rockhampton, Australia
- C Lawson, School of Education and the Arts, Central Queensland University, Rockhampton, Australia
5. Gingerich A, Sebok-Syer SS, Larstone R, Watling CJ, Lingard L. Seeing but not believing: Insights into the intractability of failure to fail. Medical Education 2020;54:1148-1158. PMID: 32562288. DOI: 10.1111/medu.14271.
Abstract
CONTEXT Inadequate documentation of observed trainee incompetence persists despite research-informed solutions targeting this failure-to-fail phenomenon. Documentation could be impeded if assessment language is misaligned with how supervisors conceptualise incompetence. Because frameworks tend to itemise competence and to be vague about incompetence, assessment design may be improved by better understanding and describing how supervisors experience being confronted with a potentially incompetent trainee. METHODS Following constructivist grounded theory methodology, analysis using a constant comparison approach was iterative and informed data collection. We interviewed 22 physicians about their experiences supervising trainees who demonstrate incompetence; we quickly found that they bristled at the term 'incompetence,' so we began to use 'underperformance' in its place. RESULTS Physicians began with a belief and an expectation: all trainees should be capable of learning and progressing by applying what they learn to subsequent clinical experiences. Underperformance was therefore unexpected and evoked disbelief in supervisors, who sought alternative explanations for the surprising evidence. Supervisors conceptualised underperformance as either an inability to engage with learning, due to illness, a life event or a learning disorder, so that progression was stalled, or an unwillingness to engage with learning, due to lack of interest, insight or humility. CONCLUSION Physicians conceptualise underperformance as problematic progression due to insufficient engagement with learning that is unresponsive to intensified supervision. Although failure to fail tends to be framed as a reluctance to document underperformance, the prior phase of disbelief prevents confident documentation of performance and delays identification of underperformance. The findings offer further insight and possible new solutions to address under-documentation of underperformance.
Affiliation(s)
- Andrea Gingerich, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada
- Stefanie S Sebok-Syer, Emergency Medicine, Stanford Medicine, Stanford University, Stanford, California, USA
- Roseann Larstone, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada
- Christopher J Watling, Department of Clinical Neurological Sciences, Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, London, Ontario, Canada
- Lorelei Lingard, Department of Medicine, Centre for Education Research and Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
6. Bowman A, Reid D, Bobby Harreveld R, Lawson C. Evaluation of students' clinical performance post-simulation training. Radiography (Lond) 2020;27:404-413. PMID: 33876732. DOI: 10.1016/j.radi.2020.10.002.
Abstract
INTRODUCTION Traditionally in Australia, sonographer skills are learnt on patients in clinical practice. A four-year undergraduate-postgraduate course introduced ultrasound simulation to prepare novice sonographer students for interaction with patients. Second-year students learnt psychomotor and patient-sonographer communication skills during simulation using commercial ultrasound machines and volunteer year-group peers as standardised patients. This paper reports on the transfer of the ultrasound skills learnt in simulation to clinical practice. METHODS Clinical performance evaluations were completed by 94 supervisors involved in the initial clinical practice of 174 post-simulation second-year students over a two-year period (2015-2016). Student performance of each component skill, and skill category, was analysed by modelling binomial proportions with logistic regression. RESULTS Students demonstrated substantial transfer of learnt ultrasound skills, achieving a mean of advanced-beginner competence (mean score ≥3/5) in complex psychomotor and patient-sonographer communication skills, as measured one month into clinical practice. Knowledge and skill components, or sub-tasks, varied significantly (P < 0.001) in transferability. Scanning tasks in general, and particularly the skill of 'extending the examination', transferred with significantly (P < 0.001) less efficacy than pre-exam, instrumentation, post-exam, and additional tasks. Skill transfer improved significantly (P < 0.001) following increased deliberate practice with tutor feedback. CONCLUSION Preclinical simulation, using standardised patients, clearly stated objectives to manage cognitive load and immediate tutor feedback, facilitated substantial transfer of ultrasound skills to clinical practice. The efficacy of skill transfer varied but improved with increased deliberate practice and feedback quality. IMPLICATIONS FOR PRACTICE The incorporation of preclinical simulation into the core curriculum of sonographer courses is recommended to improve student performance, reduce the burden on clinical staff and increase patient safety during the early stages of ultrasound education.
Affiliation(s)
- A Bowman, School of Graduate Research, Central Queensland University, Cairns, Australia
- D Reid, Department of Agriculture and Fisheries, Queensland Government, Rockhampton, Australia
- R Bobby Harreveld, School of Education and the Arts, Central Queensland University, Rockhampton, Australia
- C Lawson, School of Education and the Arts, Central Queensland University, Rockhampton, Australia
7. Faherty A, Counihan T, Kropmans T, Finn Y. Inter-rater reliability in clinical assessments: do examiner pairings influence candidate ratings? BMC Medical Education 2020;20:147. PMID: 32393228. PMCID: PMC7212618. DOI: 10.1186/s12909-020-02009-4.
Abstract
BACKGROUND The reliability of clinical assessments is known to vary considerably, with inter-rater reliability a key contributor. Many of the mechanisms that contribute to inter-rater reliability, however, remain largely unexplained and unclear. While research in other fields suggests the personality of raters can impact ratings, studies looking at personality factors in clinical assessments are few. Many schools use the approach of pairing examiners in clinical assessments and asking them to come to an agreed score. Little is known, however, about what occurs when these paired examiners interact to generate a score. Could personality factors have an impact? METHODS A fully-crossed design was employed, with each participant examiner observing and scoring. A quasi-experimental research design used candidates' observed scores in a mock clinical assessment as the dependent variable. The independent variables were examiner numbers, demographics and personality, with data collected by questionnaire. A purposeful sample of doctors who examine in the Final Medical examination at our institution was recruited. RESULTS Variability between scores given by examiner pairs (N = 6) was less than the variability with individual examiners (N = 12). 75% of examiners (N = 9) scored below average for neuroticism and 75% also scored high or very high for extroversion. Two-thirds scored high or very high for conscientiousness. The higher an examiner's personality score for extroversion, the lower the amount of change in his/her score when paired up with a co-examiner, possibly reflecting a more dominant role in the process of reaching a consensus score. CONCLUSIONS The reliability of clinical assessments using paired examiners is comparable to assessments with single examiners. Personality factors, such as extroversion, may influence the magnitude of change in score an individual examiner agrees to when paired up with another examiner. Further studies on personality factors and examiner behaviour are needed to test associations and determine if personality testing has a role in reducing examiner variability.
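As an illustration of how inter-rater agreement of the kind discussed above is often quantified, the sketch below computes intraclass correlation coefficients with the pingouin package on simulated examiner ratings; the design and numbers are hypothetical and do not reproduce the study's analysis.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)

# Simulated long-format ratings: 12 examiners each score the same 20 candidates.
candidates = np.repeat(np.arange(20), 12)
examiners = np.tile(np.arange(12), 20)
true_ability = rng.normal(60, 6, size=20)
scores = true_ability[candidates] + rng.normal(0, 8, size=candidates.size)
df = pd.DataFrame({"candidate": candidates, "examiner": examiners, "score": scores})

# Intraclass correlation coefficients quantify agreement across examiners.
icc = pg.intraclass_corr(data=df, targets="candidate", raters="examiner", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```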
Affiliation(s)
- Tim Counihan, National University of Ireland Galway, Galway, Ireland
- Yvonne Finn, National University of Ireland Galway, Galway, Ireland
8. Walsh E, Foley T, Sinnott C, Boyle S, Smithson WH. Developing and piloting a resource for training assessors in use of the Mini-CEX (mini clinical evaluation exercise). Education for Primary Care 2017;28:243-245. PMID: 28110625. DOI: 10.1080/14739879.2017.1280694.
Affiliation(s)
- Elaine Walsh, Department of General Practice, University College Cork, Cork, Ireland
- Tony Foley, Department of General Practice, University College Cork, Cork, Ireland
- Carol Sinnott, Department of General Practice, University College Cork, Cork, Ireland
- Siobhan Boyle, Department of General Practice, University College Cork, Cork, Ireland
- W Henry Smithson, Department of General Practice, University College Cork, Cork, Ireland
9. Boscardin CK, Wijnen-Meijer M, Cate OT. Taking Rater Exposure to Trainees Into Account When Explaining Rater Variability. J Grad Med Educ 2016;8:726-730. PMID: 28018538. PMCID: PMC5180528. DOI: 10.4300/jgme-d-16-00122.1.
Abstract
BACKGROUND Rater-based judgments are widely used in graduate medical education to provide more meaningful assessments, despite concerns about rater reliability. OBJECTIVE We introduced a statistical modeling technique that corresponds to the new rater reliability framework, and present a case example to provide an illustration of the utility of this new approach to assessing rater reliability. METHODS We used mixed-effects models to simultaneously incorporate random effects for raters and systematic effects of rater role as fixed effects. Study data are clinical performance ratings collected from medical school graduates who were evaluated for their readiness for supervised clinical practice in authentic simulation settings at 2 medical schools in the Netherlands and Germany. RESULTS The medical schools recruited a maximum of 30 graduates out of 60 (50%) and 180 (17%) eligible candidates, respectively. Clinician raters (n = 25) for the study were selected based on their level of expertise and experience. Graduates were assessed on 7 facets of competence (FOCs) that are considered important in supervisors' entrustment decisions across the 5 cases used. Rater role was significantly associated with 2 FOCs: (1) teamwork and collegiality, and (2) verbal communication with colleagues/supervisors. For another 2 FOCs, rater variability was only partially explained by the role of the rater (a proxy for the amount of direct interaction with the trainee). CONCLUSIONS Consideration of raters as meaningfully idiosyncratic provides a new framework to explore their influence on assessment scores, which goes beyond considering them as random sources of variability.
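The mixed-effects approach described above, random effects for raters plus a fixed effect of rater role, can be sketched roughly as follows. This is a simplified, single-grouping illustration on simulated data, not the authors' model or dataset; a fuller analysis would also treat graduates as a crossed random effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated ratings: 25 raters, each with a role, score a subset of 30 graduates.
rows = []
for rater in range(25):
    role = "supervisor" if rater % 2 == 0 else "observer"
    rater_shift = rng.normal(0, 0.4)  # idiosyncratic stringency or leniency
    for graduate in rng.choice(30, size=10, replace=False):
        score = 3.5 + (0.3 if role == "supervisor" else 0.0) + rater_shift + rng.normal(0, 0.5)
        rows.append({"rater": rater, "role": role, "graduate": graduate, "score": score})
df = pd.DataFrame(rows)

# Fixed effect of rater role, random intercept per rater.
result = smf.mixedlm("score ~ role", data=df, groups=df["rater"]).fit()
print(result.summary())
```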
Affiliation(s)
- Christy K. Boscardin, Office of Medical Education, Department of Medicine, UCSF School of Medicine, San Francisco, CA, USA
10. Lawson L, Jung J, Franzen D, Hiller K. Clinical Assessment of Medical Students in Emergency Medicine Clerkships: A Survey of Current Practice. J Emerg Med 2016;51:705-711. PMID: 27614539. DOI: 10.1016/j.jemermed.2016.06.045.
Abstract
BACKGROUND Assessment practices in emergency medicine (EM) clerkships have not been previously described. Clinical assessment frequently relies on global ratings of clinical performance, or "shift cards," although these tools have not been standardized or studied. OBJECTIVE We sought to characterize assessment practices in EM clerkships, with particular attention to shift cards. METHODS A survey regarding assessment practices was administered to a national sample of EM clerkship directors (CDs). Descriptive statistics were compiled and regression analyses were performed. RESULTS One hundred seventy-two CDs were contacted, and 100 (58%) agreed to participate. The most heavily weighted assessment methods in final grades were shift cards (66%) and written examinations (21-26%), but there was considerable variability in grading algorithms. EM faculty (100%) and senior residents (69%) were most commonly responsible for assessment, and assessors were often preassigned (71%). Forty-four percent of CDs reported immediate completion of shift cards, 27% within 1 to 2 days, and 20% within a week. Only 40% reported return rates >75%. Thirty percent of CDs do not permit students to review individual evaluations, and 54% of the remainder deidentify evaluations before student review. Eighty-six percent had never performed psychometric analysis on their assessment tools. Sixty-five percent of CDs were satisfied with their shift cards, but 90% supported the development of a national tool. CONCLUSION There is substantial variability in assessment practices between EM clerkships, raising concern regarding the comparability of grades between institutions. CDs rely on shift cards in grading despite the lack of evidence of validity and inconsistent process variables. Standardization of assessment practices may improve the assessment of EM students.
Affiliation(s)
- Luan Lawson, Department of Emergency Medicine, East Carolina University Brody School of Medicine, Greenville, North Carolina
- Julianna Jung, Department of Emergency Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Douglas Franzen, Division of Emergency Medicine, University of Washington School of Medicine, Seattle, Washington
- Katherine Hiller, Department of Emergency Medicine, University of Arizona College of Medicine, Tucson, Arizona
11. Vaughan B, Moore K. The mini Clinical Evaluation Exercise (mini-CEX) in a pre-registration osteopathy program: Exploring aspects of its validity. Int J Osteopath Med 2016. DOI: 10.1016/j.ijosm.2015.07.002.
12. Moore K, Vaughan B. Assessment of Australian osteopathic learners' clinical competence during workplace learning. Int J Osteopath Med 2016. DOI: 10.1016/j.ijosm.2015.06.004.
13. Foley T, Walsh E, Sweeney C, James M, Maher B, O'Flynn S. Training the assessors: A Mini-CEX workshop for GPs who assess undergraduate medical students. Education for Primary Care 2016;26:446-7. PMID: 26808954. DOI: 10.1080/14739879.2015.1101860.
Affiliation(s)
- Tony Foley, General Practitioner & Lecturer, Department of General Practice, University College Cork
- Elaine Walsh, General Practitioner & Lecturer, Department of General Practice, University College Cork
- Catherine Sweeney, Lecturer and Lead for Faculty Development, Medical Education Unit, University College Cork
- Mark James, Lecturer in Ophthalmology, Medical Education Unit, University College Cork
- Bridget Maher, Senior Lecturer, Medical Education Unit, University College Cork
- Siún O'Flynn, Head of Medical Education Unit, University College Cork
14. Bok HGJ, Jaarsma DADC, Spruijt A, Van Beukelen P, Van Der Vleuten CPM, Teunissen PW. Feedback-giving behaviour in performance evaluations during clinical clerkships. Medical Teacher 2016;38:88-95. PMID: 25776225. DOI: 10.3109/0142159x.2015.1017448.
Abstract
CONTEXT Narrative feedback documented in performance evaluations by the teacher, i.e. the clinical supervisor, is generally accepted to be essential for workplace learning. Many studies have examined factors influencing the usage of mini-clinical evaluation exercise (mini-CEX) instruments and the provision of feedback, but little is known about how these factors influence teachers' feedback-giving behaviour. In this study, we investigated teachers' use of mini-CEX in performance evaluations to provide narrative feedback in undergraduate clinical training. METHODS We designed an exploratory qualitative study using an interpretive approach. Focusing on the usage of mini-CEX instruments in clinical training, we conducted semi-structured interviews to explore teachers' perceptions. Between February and June 2013, we conducted interviews with 14 clinicians who participated as teachers during undergraduate clinical clerkships. Informed by concepts from the literature, we coded interview transcripts and iteratively reduced and displayed data using template analysis. RESULTS We identified three main themes of interrelated factors that influenced teachers' practice with regard to mini-CEX instruments: teacher-related factors, teacher-student interaction-related factors, and teacher-context interaction-related factors. Four issues (direct observation, relationship between teacher and student, verbal versus written feedback, formative versus summative purposes) that are pertinent to workplace-based performance evaluations were presented to clarify how different factors interact with each other and influence teachers' feedback-giving behaviour. Embedding performance observation in clinical practice and establishing trustworthy teacher-student relationships in more longitudinal clinical clerkships were considered important in creating a learning environment that supports and facilitates the feedback exchange. CONCLUSION Teachers' feedback-giving behaviour within the clinical context results from the interaction between personal, interpersonal and contextual factors. Increasing insight into how teachers use mini-CEX instruments in daily practice may offer strategies for creating a professional learning culture in which feedback giving and seeking would be enhanced.
Affiliation(s)
- Pim W Teunissen, Maastricht University, The Netherlands; VU University Medical Centre, The Netherlands
15. Kreiter CD, Wilson AB, Humbert AJ, Wade PA. Examining rater and occasion influences in observational assessments obtained from within the clinical environment. Medical Education Online 2016;21:29279. PMID: 26925540. PMCID: PMC4770864. DOI: 10.3402/meo.v21.29279.
Abstract
BACKGROUND When ratings of student performance within the clerkship consist of a variable number of ratings per clinical teacher (rater), an important measurement question arises regarding how to combine such ratings to accurately summarize performance. As previous G studies have not estimated the independent influence of occasion and rater facets in observational ratings within the clinic, this study was designed to provide estimates of these two sources of error. METHOD During 2 years of an emergency medicine clerkship at a large midwestern university, 592 students were evaluated an average of 15.9 times. Ratings were performed at the end of clinical shifts, and students often received multiple ratings from the same rater. A completely nested G study model (occasion: rater: person) was used to analyze sampled rating data. RESULTS The variance component (VC) related to occasion was small relative to the VC associated with rater. The D study clearly demonstrates that having a preceptor rate a student on multiple occasions does not substantially enhance the reliability of a clerkship performance summary score. CONCLUSIONS Although further research is needed, it is clear that case-specific factors do not explain the low correlation between ratings and that having one or two raters repeatedly rate a student on different occasions/cases is unlikely to yield a reliable mean score. This research suggests that it may be more efficient to have a preceptor rate a student just once. However, when multiple ratings from a single preceptor are available for a student, it is recommended that a mean of the preceptor's ratings be used to calculate the student's overall mean performance score.
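A small worked example of the D-study logic referred to above: given variance components from a nested (occasion:rater:person) G study, the generalizability coefficient of a student's mean score can be projected for different numbers of raters and occasions. The variance components below are invented for illustration and are not the study's estimates.

```python
# Illustrative D-study arithmetic for a fully nested (occasion:rater:person) design.
# The variance components below are made up; in practice they come from a G study.
var_person = 0.30     # true differences between students
var_rater = 0.45      # rater effects nested in person
var_occasion = 0.10   # occasion effects nested in rater

def reliability(n_raters: int, n_occasions_per_rater: int) -> float:
    """Generalizability coefficient of a student's mean score for a given design."""
    error = var_rater / n_raters + var_occasion / (n_raters * n_occasions_per_rater)
    return var_person / (var_person + error)

# Adding raters improves reliability far more than adding occasions per rater.
print(f"8 raters x 1 occasion : R = {reliability(8, 1):.2f}")
print(f"4 raters x 2 occasions: R = {reliability(4, 2):.2f}")
print(f"1 rater  x 8 occasions: R = {reliability(1, 8):.2f}")
```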
Affiliation(s)
- Clarence D Kreiter, Department of Family Medicine and Office of Consultation and Research in Medical Education, University of Iowa College of Medicine, Iowa City, IA, USA
- Adam B Wilson, Department of Surgery, Indiana University School of Medicine, Indianapolis, IN, USA
- Aloysius J Humbert, Office of Undergraduate Medical Education, Indiana University School of Medicine, Indianapolis, IN, USA
- Patricia A Wade, Office for Mentoring and Student Development, Medical Student Affairs, Indiana University School of Medicine, Indianapolis, IN, USA
16. McGill DA, van der Vleuten CPM, Clarke MJ. Construct validation of judgement-based assessments of medical trainees' competency in the workplace using a "Kanesian" approach to validation. BMC Medical Education 2015;15:237. PMID: 26715145. PMCID: PMC4696206. DOI: 10.1186/s12909-015-0520-1.
Abstract
BACKGROUND Evaluations of clinical assessments that use judgement-based methods have frequently shown them to have sub-optimal reliability and internal validity evidence for their interpretation and intended use. The aim of this study was to enhance that validity evidence by an evaluation of the internal validity and reliability of competency constructs from supervisors' end-of-term summative assessments of prevocational medical trainees. METHODS The populations were medical trainees preparing for full registration as a medical practitioner (n = 74) and supervisors who undertook ≥2 end-of-term summative assessments (n = 349), from a single institution. Confirmatory factor analysis was used to evaluate the internal construct validity of the assessment. The hypothesised competency construct model to be tested, identified by exploratory factor analysis, had a theoretical basis established in the workplace-psychology literature. Comparisons were made with competing models of potential competency constructs, including the competency construct model of the original assessment. The optimal model for the competency constructs was identified using model fit and measurement invariance analysis. Construct homogeneity was assessed by Cronbach's α. Reliability measures were the variance components of individual competency items and of the identified competency constructs, and the number of assessments needed to achieve adequate reliability (R > 0.80). RESULTS The hypothesised competency constructs of "general professional job performance", "clinical skills" and "professional abilities" provide a good model fit to the data, and a better fit than all alternative models. Model fit indices were χ2/df = 2.8; RMSEA = 0.073 (CI 0.057-0.088); CFI = 0.93; TLI = 0.95; SRMR = 0.039; WRMR = 0.93; AIC = 3879; and BIC = 4018. The optimal model had adequate measurement invariance, with nested analysis of important population subgroups supporting the presence of full metric invariance. Reliability estimates for the competency construct "general professional job performance" indicated a resource-efficient and reliable assessment for such a construct (6 assessments for R > 0.80). Item homogeneity was good (Cronbach's alpha = 0.899). Other competency constructs are resource intensive, requiring ≥11 assessments for a reliable assessment score. CONCLUSION Internal validity and reliability of clinical competence assessments using judgement-based methods are acceptable when the actual competency constructs used by assessors are adequately identified. Validation for the interpretation and use of supervisors' assessments in local training schemes is feasible using standard methods for gathering validity evidence.
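Item homogeneity of a competency construct, reported above as Cronbach's alpha, can be computed directly from an item-score matrix. The sketch below uses simulated ratings and is illustrative only, not the study's data or code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(3)
# Simulated supervisor ratings on five items assumed to form one construct.
latent = rng.normal(0, 1, size=(349, 1))
ratings = 3 + latent + rng.normal(0, 0.7, size=(349, 5))
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```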
Affiliation(s)
- D A McGill, Department of Cardiology, The Canberra Hospital, Garran, ACT 2605, Australia
- C P M van der Vleuten, Department of Educational Research and Development, Maastricht University, Maastricht, The Netherlands
- M J Clarke, Clinical Trial Service Unit, University of Oxford, Oxford, UK
17. Lindeman BM, Sacks BC, Lipsett PA. Graduating Students' and Surgery Program Directors' Views of the Association of American Medical Colleges Core Entrustable Professional Activities for Entering Residency: Where are the Gaps? Journal of Surgical Education 2015;72:e184-92. PMID: 26276302. DOI: 10.1016/j.jsurg.2015.07.005.
Abstract
OBJECTIVE Residency program directors have increasingly expressed concern about the preparedness of some medical school graduates for residency training. The Association of American Medical Colleges recently defined 13 core entrustable professional activities (EPAs) for entering residency that residents should be able to perform without direct supervision on the first day of training. It is not known how students' perception of their competency with these activities compares with that of surgery program directors. DESIGN Cross-sectional survey. SETTING All surgery training programs in the United States. PARTICIPANTS All program directors (PDs) in the Association of Program Directors in Surgery (APDS) database (n = 222) were invited to participate in an electronic survey, and 119 complete responses were received (53.6%). Among the respondents, 83% were men and 35.2% represented community hospital programs. PDs' responses were compared with graduating students' ratings of their confidence in performing each EPA, drawn from the Association of American Medical Colleges Graduation Questionnaire (95% response). RESULTS For every EPA, PDs rated their confidence in residents' performance without direct supervision significantly lower than graduating students rated their own. Although PDs' ratings continued to be lower than students' ratings, PDs from academic programs (those associated with a medical school) gave higher ratings than those from community programs. PDs generally ranked all 13 EPAs as important to being a trustworthy physician. PDs from programs without preliminary residents gave higher ratings for confidence with EPA performance than PDs with preliminary residents. Among PDs with preliminary residents, equal numbers agreed and disagreed that there are no identifiable differences between categorical and preliminary residents (42.7% and 41.8%, respectively). CONCLUSIONS A large gap exists between graduating medical students' and surgery program directors' confidence in performance of the 13 core EPAs for entering residency without direct supervision. Both groups identified several key areas for improvement that may be addressed by medical school curricular interventions or by expanding surgical boot camps, in the hope of improving resident performance and patient safety.
Affiliation(s)
- Brenessa M Lindeman, Department of Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Bethany C Sacks, Department of Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Pamela A Lipsett, Department of Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland
18. Bord S, Retezar R, McCann P, Jung J. Development of an Objective Structured Clinical Examination for Assessment of Clinical Skills in an Emergency Medicine Clerkship. West J Emerg Med 2015;16:866-70. PMID: 26594280. PMCID: PMC4651584. DOI: 10.5811/westjem.2015.9.27307.
Affiliation(s)
- Sharon Bord, Johns Hopkins University School of Medicine, Department of Emergency Medicine, Baltimore, Maryland
- Rodica Retezar, Johns Hopkins University School of Medicine, Department of Emergency Medicine, Baltimore, Maryland
- Pamela McCann, Johns Hopkins University School of Medicine, Department of Emergency Medicine, Baltimore, Maryland
- Julianna Jung, Johns Hopkins University School of Medicine, Department of Emergency Medicine, Baltimore, Maryland
19. Wouda JC, van de Wiel HBM. Supervisors' and residents' patient-education competency in challenging outpatient consultations. Patient Education and Counseling 2015;98:1084-1091. PMID: 26074498. DOI: 10.1016/j.pec.2015.05.010.
Abstract
OBJECTIVES We compared supervisors' and residents' patient-education competency in challenging consultations in order to establish whether supervisors demonstrate sufficient patient-education competency to act credibly as role models and coaches for residents. METHODS All consultations conducted by each participating physician at one, two or three of their outpatient clinics were videoed. Each participant selected two challenging consultations from each clinic for assessment. We assessed patient-education competency using the CELI instrument, calculated net consultation length for all videoed consultations, and measured patient opinion about the patient education received using a questionnaire. RESULTS Forty-four residents and fourteen supervisors participated in the study. They selected 230 consultations for assessment. On average, supervisors and residents demonstrated similar patient-education competency. Net consultation length was longer for supervisors. Patient opinion did not differ between supervisors and residents. CONCLUSIONS Supervising consultants generally do not possess sufficient patient-education competency to fulfill their teaching roles in workplace-based learning aimed at improving residents' patient-education competency. PRACTICE IMPLICATIONS Not only residents but also supervising consultants should improve their patient-education competency. Workplace-based learning consisting of self-assessment of, and feedback on, videoed consultations could be useful in attaining this goal.
Affiliation(s)
- Jan C Wouda, University of Groningen, University Medical Center Groningen, The Netherlands
20. Scarff CE, Bearman M, Corderoy RM. Supervisor perspectives on the summative in-training assessment. Australas J Dermatol 2015;57:128-34. PMID: 26172219. DOI: 10.1111/ajd.12376.
Abstract
BACKGROUND Assessment is a fundamental component of medical education and exists in many formats. In-training assessments are one such example; they serve to provide feedback to learners about their performance during a period of clinical attachment. However, in addition to trainee knowledge and performance, many factors influence the assessment given to a trainee. METHOD This study used an anonymous survey to investigate supervisors' perceptions of the influences on their assessments of Australian dermatology trainees, focusing on the summative in-training assessment (SITA) format. RESULTS A response rate of 41% was achieved. The importance of reporting underperformance and providing feedback to trainees was agreed on, but current limitations in the ability of the tool to do this were noted. Implications for practice are discussed, including the education and support of supervisors, consideration of logistical issues, the process of SITA completion and supervisor appointment. Further research into the impact of supervisors' concerns about potential challenges to a judgement, and their hesitation about making negative comments about a trainee, is required. Examination of the trainee perspective is also required. CONCLUSION Quality feedback is essential for learners to guide and improve their performance. Supervisors face many potential influences on their assessments, and if these are too great, they may jeopardise the quality of the assessment given. Attention to the highlighted areas may serve to improve the process, allowing trainees to develop into the best clinicians they can be.
Affiliation(s)
- Catherine E Scarff, Health Professions Education and Educational Research (HealthPEER), Monash University, Melbourne, Victoria
- Margaret Bearman, Health Professions Education and Educational Research (HealthPEER), Monash University, Melbourne, Victoria
- Robert M Corderoy, Educational Development, Planning and Innovation, Australasian College of Dermatologists, Sydney, New South Wales, Australia
21. Read EK, Bell C, Rhind S, Hecker KG. The use of global rating scales for OSCEs in veterinary medicine. PLoS One 2015;10:e0121000. PMID: 25822258. PMCID: PMC4379077. DOI: 10.1371/journal.pone.0121000.
Abstract
OSCEs (Objective Structured Clinical Examinations) are widely used in the health professions to assess clinical skills competence. Raters use standardized binary checklists (CL) or multi-dimensional global rating scales (GRS) to score candidates performing specific tasks. This study assessed the reliability of CL and GRS scores in the assessment of veterinary students, and is the first study to demonstrate the reliability of GRS within veterinary medical education. Twelve raters from two different schools (6 from the University of Calgary [UCVM] and 6 from the Royal (Dick) School of Veterinary Studies [R(D)SVS]) were asked to score 12 students (6 from each school). All raters assessed all students (video recordings) during 4 OSCE stations (bovine haltering, gowning and gloving, equine bandaging and skin suturing). Raters scored students using a CL, followed by the GRS. Novice raters (6 R(D)SVS) were assessed independently of expert raters (6 UCVM). Generalizability theory (G theory), analysis of variance (ANOVA) and t-tests were used to determine the reliability of rater scores, assess any between-school differences (by student, by rater), and determine whether there were differences between CL and GRS scores. There was no significant difference in rater performance with use of the CL or the GRS. Scores from the CL were significantly higher than scores from the GRS. The reliability of the checklist scores was 0.42 for novice and 0.76 for expert raters. The reliability of the global rating scale scores was 0.70 for novice and 0.86 for expert raters. A decision study (D-study) showed that, once raters are trained using the CL, the GRS can be used to reliably score clinical skills in veterinary medicine with both novice and experienced raters.
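The comparison of checklist and global-rating-scale scores mentioned above (t-tests on the same performances scored both ways) can be illustrated with a paired t-test on simulated scores; the numbers are made up for demonstration and are not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)

# Simulated percentage scores for the same 48 rater-student encounters, scored
# once with the binary checklist (CL) and once with the global rating scale (GRS).
grs = rng.normal(68, 10, size=48)
cl = grs + rng.normal(6, 5, size=48)  # checklist scores assumed to run higher

t, p = ttest_rel(cl, grs)
print(f"paired t = {t:.2f}, p = {p:.3g}, mean difference = {np.mean(cl - grs):.1f} points")
```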
Affiliation(s)
- Emma K. Read, Department of Veterinary Clinical and Diagnostic Sciences, University of Calgary Faculty of Veterinary Medicine, Calgary, Alberta, Canada
- Catriona Bell, Royal (Dick) School of Veterinary Studies, University of Edinburgh, Roslin, Midlothian, Scotland
- Susan Rhind, Royal (Dick) School of Veterinary Studies, University of Edinburgh, Roslin, Midlothian, Scotland
- Kent G. Hecker, Department of Veterinary Clinical and Diagnostic Sciences, University of Calgary Faculty of Veterinary Medicine, Calgary, Alberta, Canada
22. Essers G, Dielissen P, van Weel C, van der Vleuten C, van Dulmen S, Kramer A. How do trained raters take context factors into account when assessing GP trainee communication performance? An exploratory, qualitative study. Advances in Health Sciences Education: Theory and Practice 2015;20:131-147. PMID: 24858236. DOI: 10.1007/s10459-014-9511-y.
Abstract
Communication assessment in real-life consultations is a complex task. Generic assessment instruments help but may also have disadvantages. The generic nature of the skills being assessed does not provide indications for context-specific behaviour required in practice situations; context influences are mostly taken into account implicitly. Our research questions are: 1. What factors do trained raters observe when rating workplace communication? 2. How do they take context factors into account when rating communication performance with a generic rating instrument? Nineteen general practitioners (GPs), trained in communication assessment with a generic rating instrument (the MAAS-Global), participated in a think-aloud protocol reflecting concurrent thought processes while assessing videotaped real-life consultations. They were subsequently interviewed to answer questions explicitly asking them to comment on the influence of predefined contextual factors on the assessment process. Results from both data sources were analysed. We used a grounded theory approach to untangle the influence of context factors on GP communication and on communication assessment. Both from the think-aloud procedure and from the interviews we identified various context factors influencing communication, which were categorised into doctor-related (17), patient-related (13), consultation-related (18), and education-related factors (18). Participants had different views and practices on how to incorporate context factors into the GP(-trainee) communication assessment. Raters acknowledge that context factors may affect communication in GP consultations, but struggle with how to take contextual influences into account when assessing communication performance in an educational context. To assess practice situations, raters need extra guidance on how to handle specific contextual factors.
Affiliation(s)
- Geurt Essers, Department of Public Health and Primary Care, Leiden University Medical Centre, Leiden, The Netherlands
23. Gingerich A, van der Vleuten CPM, Eva KW, Regehr G. More consensus than idiosyncrasy: Categorizing social judgments to examine variability in Mini-CEX ratings. Academic Medicine 2014;89:1510-9. PMID: 25250753. DOI: 10.1097/acm.0000000000000486.
Abstract
PURPOSE Social judgment research suggests that rater unreliability in performance assessments arises from raters' differing inferences about the performer and the underlying reasons for the performance observed. These varying social judgments are not entirely idiosyncratic but, rather, tend to partition into a finite number of distinct subgroups, suggesting some "signal" in the "noise" of interrater variability. The authors investigated the proportion of variance in Mini-CEX ratings attributable to such partitions of raters' social judgments about residents. METHOD In 2012 and 2013, physicians reviewed video-recorded patient encounters for seven residents, completed a Mini-CEX, and described their social judgments of the residents. Additional participants sorted these descriptions, which were analyzed using latent partition analysis (LPA). The best-fitting set of partitions for each resident served as an independent variable in a one-way ANOVA to determine the proportion of variance explained in Mini-CEX ratings. RESULTS Forty-eight physicians rated at least one resident (34 assessed all seven). The seven sets of social judgments were sorted by 14 participants. Across residents, 2 to 5 partitions (mode: 4) provided a good LPA fit, suggesting that subgroups of raters were making similar social judgments, while different causal explanations for each resident's performance existed across subgroups. The partitions accounted for 9% to 57% of the variance in Mini-CEX ratings across residents (mean = 32%). CONCLUSIONS These findings suggest that multiple "signals" do exist within the "noise" of interrater variability in performance-based assessment. It may be valuable to understand and exploit these multiple signals rather than try to eliminate them.
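The final analytic step described above, a one-way ANOVA estimating the proportion of rating variance explained by rater subgroups, can be sketched as follows on simulated ratings; the subgroup sizes and effects are hypothetical, not the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(5)

# Simulated Mini-CEX ratings for one resident from 48 raters, each rater assigned
# to one of four subgroups holding a similar social judgment.
subgroup = rng.integers(0, 4, size=48)
subgroup_shift = np.array([-0.8, -0.2, 0.3, 0.9])
rating = 5.5 + subgroup_shift[subgroup] + rng.normal(0, 0.7, size=48)
df = pd.DataFrame({"subgroup": subgroup.astype(str), "rating": rating})

# One-way ANOVA; eta-squared is the proportion of rating variance explained
# by the partition of raters into judgment subgroups.
fit = smf.ols("rating ~ C(subgroup)", data=df).fit()
table = anova_lm(fit)
eta_sq = table.loc["C(subgroup)", "sum_sq"] / table["sum_sq"].sum()
print(table)
print(f"eta-squared = {eta_sq:.2f}")
```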
Affiliation(s)
- Andrea Gingerich
- Ms. Gingerich is a PhD candidate, School of Health Professions Education, Maastricht University, Maastricht, Netherlands, and research associate, Northern Medical Program, University of Northern British Columbia, Prince George, British Columbia, Canada. Dr. van der Vleuten is scientific director, School of Health Professions Education, Maastricht University, Maastricht, Netherlands. Dr. Eva is acting director and senior scientist, Centre for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada. Dr. Regehr is associate director of research and senior scientist, Centre for Health Education Scholarship, University of British Columbia, Vancouver, British Columbia, Canada
24. Wouda JC, van de Wiel HBM. The effects of self-assessment and supervisor feedback on residents' patient-education competency using videoed outpatient consultations. Patient Education and Counseling 2014;97:59-66. PMID: 24993839. DOI: 10.1016/j.pec.2014.05.023.
Abstract
OBJECTIVES To determine the effects of residents' communication self-assessment and supervisor feedback on residents' communication-competency awareness, on their patient-education competency, and on their patients' opinion. METHODS The program consisted of the implementation of a communication self-assessment and feedback process using videoed outpatient consultations (video-CAF). Residents wrote down communication learning objectives during the instruction and after each video-CAF session. Residents' patient-education competency was assessed by trained raters, using the CELI instrument. Participating patients completed a questionnaire about the contact with their physician. RESULTS Forty-four residents and 21 supervisors participated in 87 video-CAF sessions. After their first video-CAF session, residents wrote down more learning objectives addressing their control and rapport skills and their listening skills. Video-CAF participation improved residents' patient-education competency, but only in their control and rapport skills. Video-CAF participation had no effect on patients' opinion. CONCLUSIONS Video-CAF appears to be a feasible procedure and might be effective in improving residents' patient-education competency in clinical practice. PRACTICE IMPLICATIONS Video-CAF could fill the existing deficiency of communication training in residency programs.
Affiliation(s)
- Jan C Wouda, University of Groningen, University Medical Center Groningen, The Netherlands
25
Weston PSJ, Smith CA. The use of mini-CEX in UK foundation training six years following its introduction: lessons still to be learned and the benefit of formal teaching regarding its utility. MEDICAL TEACHER 2014; 36:155-63. [PMID: 24099402 DOI: 10.3109/0142159x.2013.836267] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
BACKGROUND The mini-clinical evaluation exercise (mini-CEX) is a widely used tool with a strong theoretical basis. It was introduced to UK foundation training in 2005. AIMS To assess current experiences, opinions and attitudes towards mini-CEX amongst foundation doctors, and to explore what factors underpin these. METHODS Data were collected from foundation trainees via an online questionnaire. RESULTS Ninety-eight per cent of respondents had used mini-CEX during FY1; however, only 32% had ever received formal teaching regarding its use. Regarding understanding of the purpose of mini-CEX, only 30% of trainees commented on there being a formative aspect or a requirement for feedback. The majority of trainees did not feel that mini-CEX was a useful part of their training. The main themes were assessors' poor attitudes and understanding, and difficulty finding sufficient time. However, those who had received formal teaching as students regarding the use of mini-CEX were significantly more likely as postgraduates to find it beneficial (p = 0.031). CONCLUSIONS A more concerted effort to educate trainees and assessors regarding the correct use of mini-CEX will enhance its educational value. Increased education during undergraduate training regarding the use of formative assessment may lead to more effective utilisation in the postgraduate setting.
26
Evaluation of a Task-Specific Checklist and Global Rating Scale for Ultrasound-Guided Regional Anesthesia. Reg Anesth Pain Med 2014; 39:399-408. [DOI: 10.1097/aap.0000000000000126] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
27
Wouda JC, van de Wiel HBM. Inconsistency of residents' communication performance in challenging consultations. PATIENT EDUCATION AND COUNSELING 2013; 93:579-585. [PMID: 24080028 DOI: 10.1016/j.pec.2013.09.001] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/26/2013] [Revised: 08/13/2013] [Accepted: 09/03/2013] [Indexed: 06/02/2023]
Abstract
OBJECTIVE Communication performance inconsistency between consultations is usually regarded as a measurement error that jeopardizes the reliability of assessments. However, inconsistency is an important phenomenon, since it indicates that physicians' communication may be below standard in some consultations. METHODS Fifty residents performed two challenging consultations. Residents' communication competency was assessed with the CELI instrument. Residents' background in communication skills training (CST) was also established. We used multilevel analysis to explore communication performance inconsistency between the two consultations. We also established the relationships between inconsistency and average performance quality, the type of consultation, and CST background. RESULTS Inconsistency accounted for 45.5% of the variance in residents' communication performance. Inconsistency depended on the type of consultation. The effect of CST background on performance quality was case-specific. Inconsistency and average performance quality were related for consultation combinations that were dissimilar in goals, structure, and required skills. CST background had no effect on inconsistency. CONCLUSION Physicians' communication performance should not only be of high quality but also consistent, regardless of the type and complexity of the consultation. PRACTICE IMPLICATIONS In order to improve performance quality and reduce performance inconsistency, communication education should offer ample opportunities to practice a wide variety of challenging consultations.
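The variance split reported above can be illustrated with a simple one-way random-effects decomposition (method of moments): between-resident variance versus within-resident, between-consultation variance ("inconsistency"). The study itself used multilevel modelling; the numpy-only sketch below, with invented scores for five hypothetical residents, only shows how such a share of variance is obtained.

```python
import numpy as np

# Invented communication scores: one row per resident, one column per
# consultation (two challenging consultations each, as in the study design).
scores = np.array([
    [3.1, 4.0],
    [2.5, 3.8],
    [4.2, 3.0],
    [3.6, 3.9],
    [2.8, 2.2],
])
n_res, n_cons = scores.shape

grand = scores.mean()
ms_between = n_cons * np.sum((scores.mean(axis=1) - grand) ** 2) / (n_res - 1)
ms_within = np.sum((scores - scores.mean(axis=1, keepdims=True)) ** 2) / (n_res * (n_cons - 1))

var_resident = max((ms_between - ms_within) / n_cons, 0.0)  # between-resident variance
var_inconsistency = ms_within                               # between-consultation variance
share = var_inconsistency / (var_resident + var_inconsistency)
print(f"inconsistency share of variance: {share:.1%}")      # 92.7% for these made-up scores
```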
Affiliation(s)
- Jan C Wouda
- University of Groningen, University Medical Center Groningen, The Netherlands.
28
Moonen-van Loon JMW, Overeem K, Donkers HHLM, van der Vleuten CPM, Driessen EW. Composite reliability of a workplace-based assessment toolbox for postgraduate medical education. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2013; 18:1087-102. [PMID: 23494202 DOI: 10.1007/s10459-013-9450-z] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/23/2012] [Accepted: 02/21/2013] [Indexed: 05/16/2023]
Abstract
In recent years, postgraduate assessment programmes around the world have embraced workplace-based assessment (WBA) and its related tools. Despite their widespread use, results of studies on the validity and reliability of these tools have been variable. Although in many countries decisions about residents' continuation of training and certification as a specialist are based on the composite results of different WBAs collected in a portfolio, to our knowledge, the reliability of such a WBA toolbox has never been investigated. Using generalisability theory, we analysed the separate and composite reliability of three WBA tools [mini-Clinical Evaluation Exercise (mini-CEX), direct observation of procedural skills (DOPS), and multisource feedback (MSF)] included in a resident portfolio. G-studies and D-studies of 12,779 WBAs from a total of 953 residents showed that, when each tool was used on its own, a reliability coefficient of 0.80 required eight mini-CEXs, nine DOPS, and nine MSF rounds, whilst the same reliability was reached with seven mini-CEXs, eight DOPS, and one MSF when the tools were combined in a portfolio. At the end of the first year of residency, a portfolio with five mini-CEXs, six DOPS, and one MSF afforded reliable judgement. The results support the conclusion that several WBA tools combined in a portfolio can be a feasible and reliable method for high-stakes judgements.
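A rough sense of the D-study logic above can be given with a deliberately simplified model: a single person (true-score) variance component shared by all tools, independent errors, and an equally weighted mean of tool means. The variance components in the sketch below are invented placeholders, and the paper's own analysis used multivariate generalisability theory rather than this shortcut.

```python
def tool_reliability(var_person, var_error, n_obs):
    """Generalisability coefficient for the mean of n_obs observations on one tool."""
    return var_person / (var_person + var_error / n_obs)

def n_for_target(var_person, var_error, target=0.80):
    """Smallest number of observations reaching the target reliability (a D-study)."""
    n = 1
    while tool_reliability(var_person, var_error, n) < target:
        n += 1
    return n

def composite_reliability(var_person, tools):
    """Reliability of the equally weighted mean of tool means, assuming every tool
    shares the same person variance and errors are independent across tools.
    tools: list of (var_error, n_obs) pairs, one per tool."""
    t = len(tools)
    composite_error = sum(ve / n for ve, n in tools) / t ** 2
    return var_person / (var_person + composite_error)

var_person = 0.30                           # hypothetical person variance component
tools = [(0.70, 7), (0.90, 8), (0.15, 1)]   # invented (error variance, n) per tool
print(n_for_target(var_person, 0.70))                       # 10 encounters if used alone
print(round(composite_reliability(var_person, tools), 2))   # 0.88 for the combined portfolio
```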
Affiliation(s)
- J M W Moonen-van Loon
- Department of Educational Research and Development, Faculty of Health, Medicine, and Life Sciences, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands.
29
Jenkins L, Mash B, Derese A. Reliability testing of a portfolio assessment tool for postgraduate family medicine training in South Africa. Afr J Prim Health Care Fam Med 2013. [PMCID: PMC4502840 DOI: 10.4102/phcfm.v5i1.577] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
Background Competency-based education and the validity and reliability of workplace-based assessment of postgraduate trainees have received increasing attention worldwide. Family medicine was recognised as a speciality in South Africa six years ago and a satisfactory portfolio of learning is a prerequisite to sit the national exit exam. A massive scaling up of the number of family physicians is needed in order to meet the health needs of the country. Aim The aim of this study was to develop a reliable, robust and feasible portfolio assessment tool (PAT) for South Africa. Methods Six raters each rated nine portfolios from the Stellenbosch University programme, using the PAT, to test for inter-rater reliability. This rating was repeated three months later to determine test–retest reliability. Following initial analysis and feedback the PAT was modified and the inter-rater reliability again assessed on nine new portfolios. An acceptable intra-class correlation was considered to be > 0.80. Results The total score was found to be reliable, with a coefficient of 0.92. For test–retest reliability, the difference in mean total score was 1.7%, which was not statistically significant. Amongst the subsections, only assessment of the educational meetings and the logbook showed reliability coefficients > 0.80. Conclusion This was the first attempt to develop a reliable, robust and feasible national portfolio assessment tool to assess postgraduate family medicine training in the South African context. The tool was reliable for the total score, but the low reliability of several sections in the PAT helped us to develop 12 recommendations regarding the use of the portfolio, the design of the PAT and the training of raters.
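The inter-rater coefficient described above is an intra-class correlation for scores averaged across raters; a common choice for a fully crossed raters-by-portfolios design is ICC(2,k) in the Shrout and Fleiss notation (an assumption here, since the abstract does not name the exact ICC form). The sketch below computes it from a portfolios-by-raters score matrix built from simulated, invented data.

```python
import numpy as np

def icc2k(x):
    """ICC(2,k), two-way random effects, average of k raters, from a
    subjects-by-raters score matrix."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1, keepdims=True)
    col_means = x.mean(axis=0, keepdims=True)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)    # between portfolios
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)    # between raters
    mse = np.sum((x - row_means - col_means + grand) ** 2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

# Simulated placeholder data: 9 portfolios scored by 6 raters, with portfolio
# quality, rater leniency, and residual noise as invented components.
rng = np.random.default_rng(0)
portfolio_quality = rng.normal(60, 10, size=(9, 1))
rater_leniency = rng.normal(0, 3, size=(1, 6))
scores = portfolio_quality + rater_leniency + rng.normal(0, 5, size=(9, 6))
print(round(icc2k(scores), 2))   # coefficient for the simulated matrix
```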
Affiliation(s)
- Louis Jenkins
- Division of Family Medicine and Primary Care, Faculty of Health Sciences, University of Stellenbosch, South Africa
- Western Cape Department of Health, Eden district, George Hospital, South Africa
- Bob Mash
- Division of Family Medicine and Primary Care, Faculty of Health Sciences, University of Stellenbosch, South Africa
- Anselme Derese
- Centre for Education Development, Faculty of Medicine and Health Sciences, Ghent University, Belgium
30
McGill DA, van der Vleuten CPM, Clarke MJ. A critical evaluation of the validity and the reliability of global competency constructs for supervisor assessment of junior medical trainees. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2013; 18:701-725. [PMID: 23053869 DOI: 10.1007/s10459-012-9410-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2012] [Accepted: 09/19/2012] [Indexed: 05/28/2023]
Abstract
Supervisor assessments are critical for both formative and summative assessment in the workplace. Supervisor ratings remain an important source of such assessment in many educational jurisdictions even though there is ambiguity about their validity and reliability. The aims of this evaluation were to explore: (1) the construct validity of ward-based supervisor competency assessments; (2) the reliability with which supervisors observe any overarching domain constructs (factors) identified; (3) the stability of those factors across subgroups of contexts, supervisors and trainees; and (4) how these observations sit relative to the established literature. The assessments evaluated were all those used to judge intern (trainee) suitability for unconditional registration as a medical practitioner in the Australian Capital Territory, Australia, in 2007-2008. Constructs were initially identified by traditional exploratory factor analysis (EFA) using principal component analysis with varimax rotation. Factor stability was explored by repeating the EFA within subgroups defined by context (such as hospital type) and by type of supervisor and trainee. The unit of analysis was the individual assessment; all available assessments were included, without aggregation of scores, to obtain the factors. Reliability of the identified constructs was examined by variance components analysis of the summed trainee scores for each factor, together with the number of assessments needed to provide an acceptably reliable assessment on that construct; here the unit of analysis was the score for each factor on every assessment. For the 374 assessments from 74 trainees and 73 supervisors, the EFA yielded 3 factors identified from the scree plot, together accounting for only 68% of the variance: factor 1 had features of a "general professional job performance" competency (eigenvalue 7.630; variance 54.5%); factor 2, "clinical skills" (eigenvalue 1.036; variance 7.4%); and factor 3, "professional and personal" competency (eigenvalue 0.867; variance 6.2%). The percentage of trainee score variance in the summed competency item scores for factors 1, 2 and 3 was 40.4%, 27.4% and 22.9%, respectively. The number of assessments needed to reach a reliability coefficient of 0.80 was 6, 11 and 13, respectively. The factor structure remained stable for the subgroups of female trainees, Australian graduate trainees, the central hospital, surgeons, staff specialists, and visiting medical officers, and when each year was analysed separately. For physicians as supervisors, male trainees, and male supervisors, the items grouped differently within 3 factors, with the competency items collapsing into the predefined "face value" constructs of competence. These observations add new insights to the established literature. In this setting, most supervisors appear to be assessing a dominant construct domain resembling a general professional job performance competency. This global construct consists of individual competency items that supervisors spontaneously align, and it can be assessed with acceptable reliability. However, the instability of the factor structure across different populations of supervisors and trainees means that subpopulations of trainees may be assessed differently, and that some subpopulations of supervisors assess the same trainees with different constructs than other supervisors do. The lack of criterion standardisation in supervisors' competency assessments calls into question the validity of this assessment method as currently used.
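The extraction step described above (principal component loadings from the item correlation matrix, followed by an orthogonal varimax rotation) can be sketched in a few lines of numpy. The data, item count, and factor count below are placeholders rather than the study's assessment scores.

```python
import numpy as np

def pca_loadings(x, n_factors):
    """Unrotated principal-component loadings of an assessments-by-items matrix."""
    corr = np.corrcoef(x, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]        # largest components first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors]), eigvals

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of an items-by-factors loadings matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):            # stop when the criterion no longer improves
            break
        d = d_new
    return loadings @ rotation

rng = np.random.default_rng(1)
ratings = rng.normal(size=(374, 14))         # placeholder: 374 assessments, 14 items
loadings, eigvals = pca_loadings(ratings, n_factors=3)
print(np.round(eigvals[:3], 2))              # leading eigenvalues, scree-style check
print(np.round(varimax(loadings), 2))        # rotated loadings, items x 3 factors
```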
Affiliation(s)
- D A McGill
- Department of Cardiology, The Canberra Hospital, Garran, ACT, 2605, Australia.
31
Dawson SD, Miller T, Goddard SF, Miller LM. Impact of outcome-based assessment on student learning and faculty instructional practices. JOURNAL OF VETERINARY MEDICAL EDUCATION 2013; 40:128-138. [PMID: 23709109 DOI: 10.3138/jvme.1112-100r] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Increased accountability has been a catalyst for the reformation of curriculum and assessment practices in postsecondary schools throughout North America, including veterinary schools. There is a call for a shift in assessment practices in clinical rotations, from a focus on content to a focus on assessing student performance. Learning is subsequently articulated in terms of observable outcomes and indicators that describe what the learner can do after engaging in a learning experience. The purpose of this study was to examine the ways in which a competency-based program in an early phase of implementation impacted student learning and faculty instructional practices. Findings revealed that negative student perceptions of the assessment instrument's reliability had a detrimental effect on the face validity of the instrument and, subsequently, on students' engagement with competency-based assessment and promotion of student-centered learning. While the examination of faculty practices echoed findings from other studies that cited the need for faculty development to improve rater reliability and for a better data management system, our study found that faculty members' instructional practices improved through the alignment of instruction and curriculum. This snapshot of the early stages of implementing a competency-based program has been instrumental in refining and advancing the program.
Affiliation(s)
- Susan D Dawson
- Department of Biomedical Sciences, Atlantic Veterinary College, University of Prince Edward Island, Charlottetown, PE, Canada.
32
Norman G. Now you see it, now you don't? ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2011; 16:287-289. [PMID: 21728020 DOI: 10.1007/s10459-011-9310-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/15/2011] [Accepted: 06/15/2011] [Indexed: 05/31/2023]