1
Li M, Kurahashi AM, Kawaguchi S, Siemens I, Sirianni G, Myers J. When words are your scalpel, what and how information is exchanged may be differently salient to assessors. Medical Education 2024;58:1324-1332. [PMID: 38850193] [DOI: 10.1111/medu.15458]
Abstract
PURPOSE Variable assessments of learner performances can occur when different assessors determine different elements to be differently important or salient. How assessors determine the importance of performance elements has historically been thought to occur idiosyncratically and thus be amenable to assessor training interventions. More recently, a main source of variation found among assessors was the differential emphasis of two underlying factors: medical expertise and interpersonal skills. This gave legitimacy to the theory that different interpretations of the same performance may represent multiple truths. A faculty development activity introducing assessors to entrustable professional activities, in which they estimated a learner's level of readiness for entrustment, provided an opportunity to qualitatively explore assessor variation in the context of an interaction and in a setting in which interpersonal skills are highly valued. METHODS Using a constructivist grounded theory approach, we explored variation in assessment processes among a group of palliative medicine assessors who completed a simulated direct observation and assessment of the same learner interaction. RESULTS Despite identifying similar learner strengths and areas for improvement, the estimated level of readiness for entrustment varied substantially among assessors. Those who estimated the learner as not yet ready for entrustment seemed to prioritise what information was exchanged and viewed missed information as performance gaps. Those who estimated the learner as ready for entrustment seemed to prioritise how information was exchanged and viewed the same missed information as personal style differences or appropriate clinical judgement. When presented with a summary, assessors expressed surprise and concern about the variation. CONCLUSION A main source of variation among our assessors was the differential salience of performance elements that align with medical expertise and interpersonal skills. These data support the theory that when assessing an interaction, differential salience for these two factors may be an important and perhaps inevitable source of assessor variation.
Affiliation(s)
- Melissa Li
- Division of Palliative Care, Department of Family and Community Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Sarah Kawaguchi
- Division of Palliative Care, Department of Family and Community Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Isaac Siemens
- Division of Palliative Care, Department of Family and Community Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Giovanna Sirianni
- Division of Palliative Care, Department of Family and Community Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Jeff Myers
- Division of Palliative Care, Department of Family and Community Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
2
Costich M, Friedman S, Robinson V, Catallozzi M. Implementation and faculty perception of outpatient medical student workplace-based assessments. Clinical Teacher 2024;21:e13751. [PMID: 38433555] [DOI: 10.1111/tct.13751]
Abstract
BACKGROUND There is growing interest in use of entrustable professional activity (EPA)-grounded workplace-based assessments (WBAs) to assess medical students through direct observation in the clinical setting. However, there has been very little reflection on how these tools are received by the faculty using them to deliver feedback. Faculty acceptance of WBAs is fundamentally important to sustained utilisation in the clinical setting, and understanding faculty perceptions of the WBA as an adjunct for giving targeted feedback is necessary to guide future faculty development in this area. APPROACH Use of a formative EPA-grounded WBA was implemented in the ambulatory setting during the paediatrics clerkship following performance-driven training and frame-of-reference training with faculty. Surveys and semi-structured interviews with faculty members explored how faculty perceived the tool and its impact on feedback delivery. EVALUATION Faculty reported providing more specific, task-oriented feedback following implementation of the WBA, as well as greater timeliness of feedback and greater satisfaction with opportunities to provide feedback, although these latter two findings did not reach significance. Themes from the interviews reflected the benefits of WBAs, persistent barriers to the provision of feedback and suggestions for improvement of the WBA. IMPLICATIONS EPA-grounded WBAs are feasible to implement in the outpatient primary care setting and improve feedback delivery around core EPAs. The WBAs positively impacted the way faculty conceptualise feedback and provide learners with more actionable, behaviour-based feedback. Findings will inform modifications to the WBA and future faculty development and training to allow for sustainable WBA utilisation in the core clerkship.
Affiliation(s)
- Marguerite Costich
- Division of Child and Adolescent Health, Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Suzanne Friedman
- Division of Child and Adolescent Health, Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Victoria Robinson
- Division of Child and Adolescent Health, Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Marina Catallozzi
- Division of Child and Adolescent Health, Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Department of Pediatrics, Columbia University Vagelos College of Physicians and Surgeons and NewYork-Presbyterian, New York, New York, USA
- Department of Population and Family Health, Mailman School of Public Health, Columbia University Irving Medical Center, New York, New York, USA
3
Sharp S, Snowden A, Stables I, Paterson R. Ensuring robust OSCE assessments: A reflective account from a Scottish school of nursing. Nurse Education in Practice 2024;78:104021. [PMID: 38917560] [DOI: 10.1016/j.nepr.2024.104021]
Abstract
AIM This paper reflects on the experience of one Scottish university in conducting a face-to-face Objective Structured Clinical Examination (OSCE) for large cohorts of student nurses. It outlines the challenges experienced and learning gained. Borton's model of reflection frames this work due to its simplicity, ease of application and cyclical nature. BACKGROUND The theoretical framework for the OSCE is critical thinking, enabling students to apply those skills authentically. OSCEs are designed to transfer classroom knowledge to clinical practice and offer an authentic work-based assessment. DESIGN Validity and robustness are key considerations in any assessment, and in OSCEs the number of stations that students encounter is important and debated. We initially used a case-study-based OSCE approach over four stations and, following reflection, changed to one long station with four phases. RESULTS In OSCEs, interrater reliability is a necessity, and students expect equity of approach. We identified that despite clear marking criteria, marks were polarised, with students achieving high or low marks with little middle ground. Review of examination papers highlighted that although students' overall performance was good, some had failed in at least one station, suggesting a four-station approach may skew results. On reflection we hypothesised that using a one-station, case-study-based, phased approach enabled the examiner to build up a more holistic picture of student knowledge and skills. It also provided the student opportunity to develop a rapport with the examiner and standardised patient, thereby putting them more at ease. We argue that this approach is holistic, authentic and student centred. CONCLUSIONS Our experience highlights that a single-station, four-phase OSCE is preferable, enabling students to integrate all aspects of the assessment and providing a holistic view of clinical skills and knowledge.
Affiliation(s)
- Sandra Sharp
- Edinburgh Napier University, School of Health and Social Care, 11 Sighthill Court, Edinburgh EH11 4BN, UK
- Austyn Snowden
- Edinburgh Napier University, School of Health and Social Care, 11 Sighthill Court, Edinburgh EH11 4BN, UK
- Ian Stables
- Edinburgh Napier University, School of Health and Social Care, 11 Sighthill Court, Edinburgh EH11 4BN, UK
- Ruth Paterson
- Edinburgh Napier University, School of Health and Social Care, 11 Sighthill Court, Edinburgh EH11 4BN, UK
4
Ryan MS, Gielissen KA, Shin D, Perera RA, Gusic M, Ferenchick G, Ownby A, Cutrer WB, Obeso V, Santen SA. How well do workplace-based assessments support summative entrustment decisions? A multi-institutional generalisability study. Medical Education 2024;58:825-837. [PMID: 38167833] [DOI: 10.1111/medu.15291]
Abstract
BACKGROUND Assessment of the Core Entrustable Professional Activities for Entering Residency requires direct observation through workplace-based assessments (WBAs). Single-institution studies have demonstrated mixed findings regarding the reliability of WBAs developed to measure student progression towards entrustment. Factors such as faculty development, rater engagement and scale selection have been suggested to improve reliability. The purpose of this investigation was to conduct a multi-institutional generalisability study to determine the influence of specific factors on the reliability of WBAs. METHODS The authors analysed WBA data obtained for clerkship-level students across seven institutions from 2018 to 2020. Institutions implemented a variety of strategies including selection of designated assessors (DAs), altered scales and different EPAs. Data were aggregated by these factors. Generalisability theory was then used to examine the internal structure validity evidence of the data. An unbalanced cross-classified random-effects model was used to decompose variance components. A phi coefficient of >0.7 was used as the threshold for acceptable reliability. RESULTS Data from 53 565 WBAs were analysed, and a total of 77 generalisability studies were performed. Most data came from EPAs 1 (n = 17 118, 32%), 2 (n = 10 237, 19.1%) and 6 (n = 6000, 18.5%). Low variance attributed to the learner (<10%) was found for most (59/77, 76%) analyses, resulting in a relatively large number of observations required for reasonable reliability (range = 3 to >560, median = 60). Factors such as DA, scale or EPA were not consistently associated with improved reliability. CONCLUSION The results from this study describe relatively low reliability in the WBAs obtained across seven sites. Generalisability for these instruments may be less dependent on factors such as faculty development, rater engagement or scale selection. When used for formative feedback, data from these instruments may be useful. However, such instruments do not consistently provide reasonable reliability to justify their use in high-stakes summative entrustment decisions.
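To make the reliability arithmetic above concrete, here is a minimal decision-study sketch in Python. It is illustrative only: the 8%/92% variance split is an assumed round number consistent with the pattern the abstract reports, not the study's data, and the simple learner-by-observation design is a simplification of the authors' cross-classified model.

```python
import math

# Minimal D-study sketch for a simple learner x observation design.
# Phi = Vl / (Vl + Ve / n), where Vl is learner variance, Ve is all
# other variance and n is the number of observations per learner.
# The 8% / 92% split is an assumed illustration, not study data.

def phi(n: int, v_learner: float, v_error: float) -> float:
    """Absolute-decision (phi) coefficient for n observations."""
    return v_learner / (v_learner + v_error / n)

def n_for_phi(target: float, v_learner: float, v_error: float) -> int:
    """Smallest n reaching phi >= target (phi formula solved for n)."""
    return math.ceil((v_error / v_learner) * target / (1.0 - target))

v_learner, v_error = 0.08, 0.92          # learner explains 8% of variance
n = n_for_phi(0.70, v_learner, v_error)
print(n, round(phi(n, v_learner, v_error), 3))   # 27 observations, ~0.701
# At 3.5% learner variance the same target needs about 65 observations,
# which is why a small learner component undermines summative use.
```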
Affiliation(s)
- Michael S Ryan
- Department of Pediatrics, University of Virginia, Charlottesville, Virginia, USA
- School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Katherine A Gielissen
- Departments of Medicine and Pediatrics, Emory University School of Medicine, Atlanta, Georgia, USA
- Dongho Shin
- Department of Biostatistics, Virginia Commonwealth University School of Medicine, Richmond, Virginia, USA
- Robert A Perera
- Department of Biostatistics, Virginia Commonwealth University School of Medicine, Richmond, Virginia, USA
- Maryellen Gusic
- Departments of Pediatrics, Biomedical Education and Data Science, Lewis Katz School of Medicine, Philadelphia, Pennsylvania, USA
- Gary Ferenchick
- Department of Medicine, College of Human Medicine, Michigan State University, East Lansing, Michigan, USA
- Allison Ownby
- McGovern Medical School at UTHealth Houston, Houston, Texas, USA
- William B Cutrer
- Department of Pediatrics, Vanderbilt University School of Medicine, Nashville, Tennessee, USA
- Vivian Obeso
- Department of Medical Education, University of Miami Miller School of Medicine, Miami, Florida, USA
- Sally A Santen
- Virginia Commonwealth University School of Medicine, Richmond, Virginia, USA
- Emergency Medicine and Medical Education at University of Cincinnati College of Medicine, Cincinnati, Ohio, USA
5
Jarrett JB, Elmes AT, Keller E, Stowe CD, Daugherty KK. Evaluating the Strengths and Barriers of Competency-Based Education in the Health Professions. American Journal of Pharmaceutical Education 2024;88:100709. [PMID: 38729616] [DOI: 10.1016/j.ajpe.2024.100709]
Abstract
OBJECTIVE This study aimed to define competency-based education (CBE) for pharmacy education and describe how the strengths and barriers of CBE can support or hinder implementation. FINDINGS Sixty-five studies from a variety of health professions were included to define competency-based pharmacy education (CBPE) and to identify barriers and benefits from the learner, faculty, institution, and society perspectives. From the 7 identified thematic categories, a CBPE definition was developed: "Competency-based pharmacy education is an outcomes-based curricular model of an organized framework of competencies (knowledge, skills, attitudes) for pharmacists to meet health care and societal needs. This learner-centered curricular model aligns authentic teaching and learning strategies and assessment (emphasizing workplace assessment and quality feedback) while deemphasizing time." SUMMARY This article provides a definition of CBE for its application within pharmacy education. The strengths and barriers for CBE were elucidated from other health professions' education literature. The identified implementation strengths and barriers aid in discussions on what will support or hinder the implementation of CBE in pharmacy education.
Affiliation(s)
- Jennie B Jarrett
- University of Illinois Chicago College of Pharmacy, Department of Pharmacy Practice, Chicago, IL, USA
- Abigail T Elmes
- University of Illinois Chicago College of Pharmacy, Department of Pharmacy Practice, Chicago, IL, USA
- Eden Keller
- University of Illinois Chicago College of Pharmacy, Department of Pharmacy Practice, Chicago, IL, USA
- Cindy D Stowe
- University of Arkansas for Medical Sciences College of Pharmacy, Little Rock, AR, USA
6
Tavares W, Kinnear B, Schumacher DJ, Forte M. "Rater training" re-imagined for work-based assessment in medical education. Advances in Health Sciences Education 2023;28:1697-1709. [PMID: 37140661] [DOI: 10.1007/s10459-023-10237-8]
Abstract
In this perspective, the authors critically examine "rater training" as it has been conceptualized and used in medical education. By "rater training," they mean the educational events intended to improve rater performance and contributions during assessment events. Historically, rater training programs have focused on modifying faculty behaviours to achieve psychometric ideals (e.g., reliability, inter-rater reliability, accuracy). The authors argue these ideals may now be poorly aligned with contemporary research informing work-based assessment, introducing a compatibility threat, with no clear direction on how to proceed. To address this issue, the authors provide a brief historical review of "rater training" and provide an analysis of the literature examining the effectiveness of rater training programs. They focus mainly on what has served to define effectiveness or improvements. They then draw on philosophical and conceptual shifts in assessment to demonstrate why the function, effectiveness aims, and structure of rater training requires reimagining. These include shifting competencies for assessors, viewing assessment as a complex cognitive task enacted in a social context, evolving views on biases, and reprioritizing which validity evidence should be most sought in medical education. The authors aim to advance the discussion on rater training by challenging implicit incompatibility issues and stimulating ways to overcome them. They propose that "rater training" (a moniker they suggest be reserved for strong psychometric aims) be augmented with "assessor readiness" programs that link to contemporary assessment science and enact the principle of compatibility between that science and ways of engaging with advances in real-world faculty-learner contexts.
Affiliation(s)
- Walter Tavares
- Department of Health and Society, Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Benjamin Kinnear
- Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Daniel J Schumacher
- Department of Pediatrics, Cincinnati Children's Hospital Medical Center, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Milena Forte
- Department of Family and Community Medicine, Temerty Faculty of Medicine, Mount Sinai Hospital, University of Toronto, Toronto, ON, Canada
7
Tahim A, Gill D, Bezemer J. Workplace-based assessments-Articulating the playbook. Medical Education 2023;57:939-948. [PMID: 36924016] [DOI: 10.1111/medu.15083]
Abstract
INTRODUCTION A workplace-based assessment (WBA) is a learning recording device that is widely used in medical education globally. Although entrenched in medical curricula, and despite a substantial body of literature exploring them, it is not yet fully understood how WBAs play out in practice. Adopting a constructivist standpoint, we examine these assessments, in the workplace, using principles based upon naturalist inquiry, drawing from a theoretical framework based on Goffman's dramaturgical analogy for the presentation of self, and using qualitative research methods to articulate what is happening as learners complete them. METHODS Learners were voluntarily recruited to participate in the study from a single teaching hospital. Data were generated, in situ, through observations with field notes and audiovisual recording of WBAs, along with accompanying interviews with learners. RESULTS Data from six learners were analysed to reveal a set of general principles: the WBA playbook. These four principles were tacit, unwritten and unofficial, and learners applied them to complete their WBA proformas: (1) maintain the impression of progression, (2) manage the authenticity of the individual proforma, (3) avoid losing face with the assessor and (4) complete the proforma in an effort-efficient way. By adhering to these principles, learners expressed their understanding of their social position in their world at the time the documents were created. DISCUSSION This paper recognises the value of the WBA as a lived experience, and of the WBA document as a social space, where learners engage in a social performance before the readers of the proforma. Such an interpretation better represents what happens as learners undergo and record WBAs in the real world, recognising WBAs as learner-centred, learner-driven, meaning-making phenomena. In this way, as a record of interpretation and meanings, the subjective nature of the WBA process is a strength to be harnessed, rather than a weakness to be glossed over.
Affiliation(s)
- Arpan Tahim
- Department of Culture, Communication and Media, UCL Institute of Education, London, UK
- Deborah Gill
- Faculty of Medicine, University of Southampton, Southampton, UK
- Jeff Bezemer
- Department of Culture, Communication and Media, UCL Institute of Education, London, UK
8
Renting N, Jaarsma D, Borleffs JC, Slaets JPJ, Cohen-Schotanus J, Gans ROB. Effectiveness of a supervisor training on quality of feedback to internal medicine residents: a controlled longitudinal multicentre study. BMJ Open 2023;13:e076946. [PMID: 37770280] [PMCID: PMC10546104] [DOI: 10.1136/bmjopen-2023-076946]
Abstract
OBJECTIVES High-quality feedback on different dimensions of competence is important for resident learning. Supervisors may need additional training and information to fulfil this demanding task. This study aimed to evaluate whether a short and simple training improves the quality of feedback residents receive from their clinical supervisors in daily practice. DESIGN Longitudinal quasi-experimental controlled study with a pretest/post-test design. We collected multiple premeasurements and postmeasurements for each supervisor over 2 years. A repeated measures ANOVA was performed on the data. SETTING Internal medicine departments of seven Dutch teaching hospitals. PARTICIPANTS Internal medicine supervisors (n=181) and residents (n=192). INTERVENTION Half of the supervisors attended a short 2.5-hour training session during which they could practise giving feedback in a simulated setting using video fragments. Highly experienced internal medicine educators guided the group discussions about the feedback. The other half of the supervisors formed the control group and received no feedback training. OUTCOME MEASURES Residents rated the quality of supervisors' oral feedback with a previously validated questionnaire. Furthermore, the completeness of the supervisors' written feedback on evaluation forms was analysed. RESULTS The data showed a significant increase in the quality of feedback after the training, F(1, 87) = 6.76, p = 0.04. This effect remained significant up to 6 months after the training session. CONCLUSIONS A short training session in which supervisors practise giving feedback in a simulated setting increases the quality of their feedback. This is a promising outcome since it is a feasible approach to faculty development.
Affiliation(s)
- Nienke Renting
- Faculty of Behavioral & Social Sciences, GION, University of Groningen, Groningen, The Netherlands
- Debbie Jaarsma
- Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Jan CC Borleffs
- Center for Education Development and Research in Health Professions, University Medical Center Groningen, Groningen, The Netherlands
- Joris P J Slaets
- Geriatric Medicine, Leyden Academy on Vitality and Ageing, Leiden, The Netherlands
- Janke Cohen-Schotanus
- Center for Education Development and Research in Health Professions, University Medical Center Groningen, Groningen, The Netherlands
- Rob O B Gans
- Internal Medicine, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
9
Holmboe ES, Osman NY, Murphy CM, Kogan JR. The Urgency of Now: Rethinking and Improving Assessment Practices in Medical Education Programs. Academic Medicine 2023;98:S37-S49. [PMID: 37071705] [DOI: 10.1097/acm.0000000000005251]
Abstract
Assessment is essential to professional development. Assessment provides the information needed to give feedback, support coaching and the creation of individualized learning plans, inform progress decisions, determine appropriate supervision levels, and, most importantly, help ensure patients and families receive high-quality, safe care in the training environment. While the introduction of competency-based medical education has catalyzed advances in assessment, much work remains to be done. First, becoming a physician (or other health professional) is primarily a developmental process, and assessment programs must be designed using a developmental and growth mindset. Second, medical education programs must have integrated programs of assessment that address the interconnected domains of implicit, explicit and structural bias. Third, improving programs of assessment will require a systems-thinking approach. In this paper, the authors first address these overarching issues as key principles that must be embraced so that training programs may optimize assessment to ensure all learners achieve desired medical education outcomes. The authors then explore specific needs in assessment and provide suggestions to improve assessment practices. This paper is by no means inclusive of all medical education assessment challenges or possible solutions. However, there is a wealth of current assessment research and practice that medical education programs can use to improve educational outcomes and help reduce the harmful effects of bias. The authors' goal is to help improve and guide innovation in assessment by catalyzing further conversations.
Affiliation(s)
- Eric S Holmboe
- E.S. Holmboe is chief, Research, Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
- Nora Y Osman
- N.Y. Osman is associate professor of medicine, Harvard Medical School, and director of undergraduate medical education, Brigham and Women's Hospital Department of Medicine, Boston, Massachusetts; ORCID: https://orcid.org/0000-0003-3542-1262
- Christina M Murphy
- C.M. Murphy is a fourth-year medical student and president, Medical Student Government at Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0003-3966-5264
- Jennifer R Kogan
- J.R. Kogan is associate dean, Student Success and Professional Development, and professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-8426-9506
10
Chin M, Pack R, Cristancho S. "A whole other competence story": exploring faculty perspectives on the process of workplace-based assessment of entrustable professional activities. Advances in Health Sciences Education 2023;28:369-385. [PMID: 35997910] [DOI: 10.1007/s10459-022-10156-0]
Abstract
The centrality of entrustable professional activities (EPAs) in competency-based medical education (CBME) is predicated on the assumption that low-stakes, high-frequency workplace-based assessments used in a programmatic approach will result in accurate and defensible judgments of competence. While there have been conversations in the literature regarding the potential of this approach, only recently has the conversation begun to explore the actual experiences of clinical faculty in this process. The purpose of this qualitative study was to explore the process of EPA assessment for faculty in everyday practice. We conducted 18 semi-structured interviews with Anesthesia faculty at a Canadian academic center. Participants were asked to describe how they engage in EPA assessment in daily practice and the factors they considered. Interviews were audio-recorded, transcribed, and analysed using the constant comparative method of grounded theory. Participants in this study perceived two sources of tension in the EPA assessment process that influenced their scoring on official forms: the potential constraints of the assessment forms and the potential consequences of their assessment outcome. This was particularly salient in circumstances of uncertainty regarding the learner's level of competence. Ultimately, EPA assessment in CBME may be experienced as higher-stakes by faculty than officially recognized due to these tensions, suggesting a layer of discomfort and burden in the process that may potentially interfere with the goal of assessment for learning. Acknowledging and understanding the nature of this burden and identifying strategies to mitigate it are critical to achieving the assessment goals of CBME.
Affiliation(s)
- Melissa Chin
- Department of Anesthesia and Perioperative Medicine, London Health Sciences Centre, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
- Rachael Pack
- Center for Education Research and Innovation, University of Western Ontario, London, ON, Canada
- Sayra Cristancho
- Center for Education Research and Innovation, University of Western Ontario, London, ON, Canada
11
Alpine L, Barrett E, Broderick J, Mockler D, O'Connor A. Education programmes on performance-based assessment for allied health and nursing clinical educators: A scoping review protocol. HRB Open Research 2023. [DOI: 10.12688/hrbopenres.13669.1]
Abstract
Background: Performance-based assessment (PBA) is a complex process undertaken in the workplace by healthcare practitioners known as clinical educators, who assist universities in determining health professional students’ readiness for independent practice. Preparing healthcare professionals for PBA is considered essential to ensuring the quality of the assessment process in the clinical learning environment. A preliminary search of the literature indicated a paucity of research guiding the development of education programmes that support clinical educators to understand and implement PBA. Objective: The aim of this scoping review is to investigate and describe education programmes delivered to allied health and nursing clinical educators to develop PBA knowledge and skills. Methods: This review will follow the Joanna Briggs Institute (JBI) methodology for conducting scoping reviews. Electronic databases relevant to this research topic will be searched, including EMBASE, ERIC, MEDLINE (Ovid), Web of Science and CINAHL, along with other targeted databases for grey literature. Studies that include PBA as the main focus or a component of education programmes, of any format, delivered to clinical educators in allied health and nursing will be included. Studies may report the design and/or implementation and/or evaluation of PBA education programmes. Relevant English-language publications will be sought from January 2000 to October 2022. Two reviewers will screen all titles and abstracts against the inclusion/exclusion criteria, and publications deemed relevant will be eligible for full-text screening, confirming appropriateness for inclusion in the scoping review. Data will be charted to create a table of the results, supported by a narrative summary of findings in line with the review objectives.
12
Kogan JR, Dine CJ, Conforti LN, Holmboe ES. Can Rater Training Improve the Quality and Accuracy of Workplace-Based Assessment Narrative Comments and Entrustment Ratings? A Randomized Controlled Trial. Academic Medicine 2023;98:237-247. [PMID: 35857396] [DOI: 10.1097/acm.0000000000004819]
Abstract
PURPOSE Prior research evaluating workplace-based assessment (WBA) rater training effectiveness has not measured improvement in narrative comment quality and accuracy, nor accuracy of prospective entrustment-supervision ratings. The purpose of this study was to determine whether rater training, using performance dimension and frame of reference training, could improve WBA narrative comment quality and accuracy. A secondary aim was to assess impact on entrustment rating accuracy. METHOD This single-blind, multi-institution, randomized controlled trial of a multifaceted, longitudinal rater training intervention consisted of in-person training followed by asynchronous online spaced learning. In 2018, investigators randomized 94 internal medicine and family medicine physicians involved with resident education. Participants assessed 10 scripted standardized resident-patient videos at baseline and follow-up. Differences in holistic assessment of narrative comment accuracy and specificity, accuracy of individual scenario observations, and entrustment rating accuracy were evaluated with t tests. Linear regression assessed impact of participant demographics and baseline performance. RESULTS Seventy-seven participants completed the study. At follow-up, the intervention group (n = 41), compared with the control group (n = 36), had higher scores for narrative holistic specificity (2.76 vs 2.31, P < .001, Cohen V = .25), accuracy (2.37 vs 2.06, P < .001, Cohen V = .20) and mean quantity of accurate (6.14 vs 4.33, P < .001), inaccurate (3.53 vs 2.41, P < .001), and overall observations (2.61 vs 1.92, P = .002, Cohen V = .47). In aggregate, the intervention group had more accurate entrustment ratings (58.1% vs 49.7%, P = .006, Phi = .30). Baseline performance was significantly associated with performance on final assessments. CONCLUSIONS Quality and specificity of narrative comments improved with rater training; the effect was mitigated by inappropriate stringency. Training improved accuracy of prospective entrustment-supervision ratings, but the effect was more limited. Participants with lower baseline rating skill may benefit most from training.
Affiliation(s)
- Jennifer R Kogan
- J.R. Kogan is associate dean, Student Success and Professional Development, and professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-8426-9506
- C Jessica Dine
- C.J. Dine is associate dean, Evaluation and Assessment, and associate professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-5894-0861
- Lisa N Conforti
- L.N. Conforti is research associate for Milestones Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-7317-6221
- Eric S Holmboe
- E.S. Holmboe is chief, Research, Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
13
Holmboe ES, Kogan JR. Will Any Road Get You There? Examining Warranted and Unwarranted Variation in Medical Education. Academic Medicine 2022;97:1128-1136. [PMID: 35294414] [PMCID: PMC9311475] [DOI: 10.1097/acm.0000000000004667]
Abstract
Undergraduate and graduate medical education have long embraced uniqueness and variability in curricular and assessment approaches. Some of this variability is justified (warranted or necessary variation), but a substantial portion represents unwarranted variation. A primary tenet of outcomes-based medical education is ensuring that all learners acquire essential competencies to be publicly accountable to meet societal needs. Unwarranted variation in curricular and assessment practices contributes to suboptimal and variable educational outcomes and, by extension, risks graduates delivering suboptimal health care quality. Medical education can use lessons from the decades of study on unwarranted variation in health care as part of efforts to continuously improve the quality of training programs. To accomplish this, medical educators will first need to recognize the difference between warranted and unwarranted variation in both clinical care and educational practices. Addressing unwarranted variation will require cooperation and collaboration between multiple levels of the health care and educational systems using a quality improvement mindset. These efforts at improvement should acknowledge that some aspects of variability are not scientifically informed and do not support desired outcomes or societal needs. This perspective examines the correlates of unwarranted variation of clinical care in medical education and the need to address the interdependency of unwarranted variation occurring between clinical and educational practices. The authors explore the challenges of variation across multiple levels: community, institution, program, and individual faculty members. The article concludes with recommendations to improve medical education by embracing the principles of continuous quality improvement to reduce the harmful effect of unwarranted variation.
Affiliation(s)
- Eric S. Holmboe
- E.S. Holmboe is chief, Research, Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois; ORCID: https://orcid.org/0000-0003-0108-6021
- Jennifer R. Kogan
- J.R. Kogan is associate dean, Student Success and Professional Development, and professor of medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; ORCID: https://orcid.org/0000-0001-8426-9506
14
Costich M, Bisono G, Meyers N, Lane M, Meyer D, Friedman S. A Pediatric Resident Curriculum for the Use of Health Literacy Communication Tools. Health Literacy Research and Practice 2022;6:e121-e127. [PMID: 35680125] [PMCID: PMC9179039] [DOI: 10.3928/24748307-20220517-01]
Abstract
BACKGROUND Despite evidence that use of evidence-based communication tools (EBCT) with a universal precautions approach improves health outcomes, medical trainees report inadequate skills training. OBJECTIVE We developed, implemented, and evaluated a novel, interactive curriculum featuring a 30-minute, single-session didactic with video content, facilitated case-based discussions and preceptor modeling to improve use of EBCT among pediatric residents. A direct observation (DO) skills checklist was developed for preceptors to evaluate resident use of EBCT. METHODS Shortly after implementation of the curriculum, residents completed a survey assessing self-reported frequency of EBCT use both pre- and post-intervention. DOs were conducted 2 to 3 weeks after the didactic was completed, and scores were compared between residents who participated in the curriculum and those who did not. A longitudinal 6-month follow-up survey was also distributed to assess changes over time. KEY RESULTS Forty-seven of 78 residents (60%) completed the survey and 45 of 60 (75%) eligible residents participated in the DO. There was a significant change in self-reported use of all but one EBCT after participation in the curriculum. Residents reported sustained increased frequency of use of all communication tools except for Teach Back, Show Back, and explanation of return precautions in the 6 months following the curriculum. Notably, there was no significant difference in DO scores between residents who participated in the didactic session and those who did not. CONCLUSIONS This novel interactive curriculum addresses ACGME (Accreditation Council for Graduate Medical Education) core competencies and fills a needed gap in resident curricula for health literacy-related skills training. Findings suggest a small, positive effect on frequency of self-reported use of health literacy EBCT. However, our findings demonstrate a lack of parallel improvement in resident performance during DO. Future curricula may require certain modifications, as well as reinforcement at regular intervals. [HLRP: Health Literacy Research and Practice. 2022;6(2):e121-e127.] Plain Language Summary: Use of evidence-based communication tools, such as presenting information in small chunks and avoiding complex medical terms, is limited among pediatric trainees. This study describes a new and interactive health literacy curriculum, with emphasis on preceptor modeling and DO, to improve use of evidence-based communication tools among residents. After participation in the curriculum, residents reported greater use of evidence-based communication tools. However, results from DO of residents did not demonstrate similar improvements.
Affiliation(s)
- Suzanne Friedman
- Address correspondence to Suzanne Friedman, MD, Department of Pediatrics, Columbia University Irving Medical Center, 622 W. 168th Street, VC417, New York, NY 10032.
15
Ryan MS, Richards A, Perera R, Park YS, Stringer JK, Waterhouse E, Dubinsky B, Khamishon R, Santen SA. Generalizability of the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE) Scale to Assess Medical Student Performance on Core EPAs in the Workplace: Findings From One Institution. Academic Medicine 2021;96:1197-1204. [PMID: 33464735] [DOI: 10.1097/acm.0000000000003921]
Abstract
PURPOSE Assessment of the Core Entrustable Professional Activities for Entering Residency (Core EPAs) requires direct observation of learners in the workplace to support entrustment decisions. The purpose of this study was to examine the internal structure validity evidence of the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE) scale when used to assess medical student performance in the Core EPAs across clinical clerkships. METHOD During the 2018-2019 academic year, the Virginia Commonwealth University School of Medicine implemented a mobile-friendly, student-initiated workplace-based assessment (WBA) system to provide formative feedback for the Core EPAs across all clinical clerkships. Students were required to request a specified number of Core EPA assessments in each clerkship. A modified O-SCORE scale (1 = "I had to do" to 4 = "I needed to be in room just in case") was used to rate learner performance. Generalizability theory was applied to assess the generalizability (or reliability) of the assessments. Decision studies were then conducted to determine the number of assessments needed to achieve reasonable reliability. RESULTS A total of 10,680 WBAs were completed on 220 medical students. The majority of ratings were completed on EPA 1 (history and physical) (n = 3,129; 29%) and EPA 6 (oral presentation) (n = 2,830; 26%). Mean scores were similar (3.5-3.6 out of 4) across EPAs. Variance due to the student ranged from 3.5% to 8%, with the majority of the variation due to the rater (29.6%-50.3%) and other unexplained factors. Between 25 and 63 assessments were required to achieve reasonable reliability (Phi > 0.70). CONCLUSIONS The O-SCORE demonstrated modest reliability when used across clerkships. These findings highlight specific challenges for implementing WBAs for the Core EPAs, including the process for requesting WBAs, rater training, and application of the O-SCORE scale in medical student assessment.
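To illustrate where variance figures like these come from, the toy generalisability study below simulates a fully crossed student-by-rater design and recovers the variance components from ANOVA mean squares. The data and effect sizes are assumptions chosen to mimic the reported pattern (a small student component, a large rater component), not the study's dataset.

```python
import numpy as np

# Toy G-study sketch for a fully crossed student x rater design
# (simulated data; all effect sizes are assumptions, not study data).
rng = np.random.default_rng(0)
n_s, n_r = 220, 30                        # students, raters (assumed)
student = rng.normal(0, 0.25, (n_s, 1))   # small student (learner) effect
rater = rng.normal(0, 0.60, (1, n_r))     # large rater stringency effect
scores = 3.5 + student + rater + rng.normal(0, 0.70, (n_s, n_r))

# Mean squares for the two-way random-effects model without replication.
ms_s = n_r * scores.mean(axis=1).var(ddof=1)    # students
ms_r = n_s * scores.mean(axis=0).var(ddof=1)    # raters
resid = (scores - scores.mean(axis=1, keepdims=True)
         - scores.mean(axis=0, keepdims=True) + scores.mean())
ms_e = (resid ** 2).sum() / ((n_s - 1) * (n_r - 1))  # residual

# Expected-mean-square equations solved for the variance components.
var_s = max((ms_s - ms_e) / n_r, 0.0)
var_r = max((ms_r - ms_e) / n_s, 0.0)
total = var_s + var_r + ms_e
print(f"student {var_s/total:.0%}, rater {var_r/total:.0%}, "
      f"residual {ms_e/total:.0%}")   # roughly 7% / 39% / 54%
```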
Affiliation(s)
- Michael S Ryan
- M.S. Ryan is associate professor and assistant dean for clinical medical education, Department of Pediatrics, Virginia Commonwealth University, Richmond, Virginia; ORCID: https://orcid.org/0000-0003-3266-9289
- Alicia Richards
- A. Richards is a graduate student, Department of Biostatistics, Virginia Commonwealth University, Richmond, Virginia
- Robert Perera
- R. Perera is associate professor, Department of Biostatistics, Virginia Commonwealth University, Richmond, Virginia
- Yoon Soo Park
- Y.S. Park is associate professor and associate head, Department of Medical Education, University of Illinois College of Medicine, Chicago, Illinois
- J K Stringer
- J.K. Stringer is assessment manager, Office of Integrated Medical Education, Rush Medical College, Chicago, Illinois
- Elizabeth Waterhouse
- E. Waterhouse is professor, Department of Neurology, Virginia Commonwealth University, Richmond, Virginia
- Brieanne Dubinsky
- B. Dubinsky is business analyst, Office of Academic Information Systems, Virginia Commonwealth University, Richmond, Virginia
- Rebecca Khamishon
- R. Khamishon is a third-year medical student, Virginia Commonwealth University, Richmond, Virginia
- Sally A Santen
- S.A. Santen is professor and senior associate dean of assessment, evaluation, and scholarship, Department of Emergency Medicine, Virginia Commonwealth University, Richmond, Virginia; ORCID: https://orcid.org/0000-0002-8327-8002
16
Bray MJ, Bradley EB, Martindale JR, Gusic ME. Implementing Systematic Faculty Development to Support an EPA-Based Program of Assessment: Strategies, Outcomes, and Lessons Learned. Teaching and Learning in Medicine 2021;33:434-444. [PMID: 33331171] [DOI: 10.1080/10401334.2020.1857256]
Abstract
Problem: Development of a novel, competency-based program of assessment requires creation of a plan to measure the processes that enable successful implementation. The principles of implementation science outline the importance of considering key drivers that support and sustain transformative change within an educational program. The introduction of Entrustable Professional Activities (EPAs) as a framework for assessment has underscored the need to create a structured plan to prepare assessors to engage in a new paradigm of assessment. Although approaches to rater training for workplace-based assessments have been described, specific strategies to prepare assessors to apply standards related to the level of supervision a student needs have not been documented. Intervention: We describe our systematic approach to prepare assessors, faculty and postgraduate trainees, to complete EPA assessments for medical students during the clerkship phase of our curriculum. This institution-wide program is designed to build assessors' skills in direct observation of learners during authentic patient encounters. Assessors apply new knowledge and practice skills in using established performance expectations to determine the level of supervision a learner needs to perform clinical tasks. Assessors also learn to provide feedback and narrative comments to coach students and promote their ongoing clinical development. Data visualizations for assessors facilitate reinforcement of the tenets learned during training. Collaborative learning and peer feedback during faculty development sessions promote the formation of a community of practice among assessors. Context: Faculty development for assessors was implemented in advance of the implementation of the EPA program. Assessors in the program include residents/fellows who work closely with students, faculty with discipline-specific expertise, and a group of experienced clinicians selected to serve as experts in competency-based EPA assessments, the Master Assessors. Training focused on creating a shared understanding about the application of criteria used to evaluate student performance. EPA assessments, based on the AAMC's Core Entrustable Professional Activities for Entering Residency, were completed in nine core clerkships. EPA assessments included a supervision rating based on a modified scale for use in undergraduate medical education. Impact: Data from EPA assessments completed during the first year of the program were analyzed to evaluate the effectiveness of the faculty development activities implemented to prepare assessors to consistently apply standards for assessment. A systematic approach to training, and attention to the critical drivers that enabled institution-wide implementation, led to consistency in the supervision ratings for students' first EPA assessments completed by any type of assessor, in ratings by assessors within a specific clinical context, and in ratings assigned by a group of specific assessors across clinical settings. Lessons learned: A systematic approach to faculty development, with a willingness to be flexible and to reach potential participants using existing infrastructure, can facilitate assessors' engagement in a new culture of assessment. Interaction among participants during training sessions not only promotes learning but also contributes to community building. A leadership group responsible for overseeing faculty development can ensure that the needs of stakeholders are addressed and that a change in assessment culture is sustained.
Affiliation(s)
- Megan J Bray
- Department of Obstetrics and Gynecology, Center for Medical Education Research and Scholarly Innovation, Office of Medical Education, University of Virginia School of Medicine, Charlottesville, Virginia, USA
- Elizabeth B Bradley
- Center for Medical Education Research and Scholarly Innovation, Office of Medical Education, University of Virginia School of Medicine, Charlottesville, Virginia, USA
- James R Martindale
- Center for Medical Education Research and Scholarly Innovation, Office of Medical Education, University of Virginia School of Medicine, Charlottesville, Virginia, USA
- Maryellen E Gusic
- Center for Medical Education Research and Scholarly Innovation, Office of Medical Education, Department of Pediatrics, University of Virginia School of Medicine, Charlottesville, Virginia, USA
17
Gottlieb M, Jordan J, Siegelman JN, Cooney R, Stehman C, Chan TM. Direct Observation Tools in Emergency Medicine: A Systematic Review of the Literature. AEM Education and Training 2021;5:e10519. [PMID: 34041428] [PMCID: PMC8138102] [DOI: 10.1002/aet2.10519]
Abstract
OBJECTIVES Direct observation is important for assessing the competency of medical learners. Multiple tools have been described in other fields, although the extent of emergency medicine-specific literature is unclear. This review sought to summarize the current literature on direct observation tools in the emergency department (ED) setting. METHODS We searched PubMed, Scopus, CINAHL, the Cochrane Central Register of Clinical Trials, the Cochrane Database of Systematic Reviews, ERIC, PsycINFO, and Google Scholar from 2012 to 2020 for publications on direct observation tools in the ED setting. Data were dual-extracted into a predefined worksheet, and quality analysis was performed using the Medical Education Research Study Quality Instrument. RESULTS We identified 38 publications, comprising 2,977 learners. Fifteen different tools were described. The most commonly assessed tools included the Milestones (nine studies), Observed Structured Clinical Exercises (seven studies), the McMaster Modular Assessment Program (six studies), Queen's Simulation Assessment Test (five studies), and the mini-Clinical Evaluation Exercise (four studies). Most of the studies were performed in a single institution, and there were limited validity or reliability assessments reported. CONCLUSIONS The number of publications on direct observation tools for the ED setting has markedly increased. However, there remains a need for stronger internal and external validity data.
Affiliation(s)
- Michael Gottlieb
- Department of Emergency Medicine, Rush University Medical Center, Chicago, IL, USA
- Jaime Jordan
- Department of Emergency Medicine, Ronald Reagan UCLA Medical Center, Los Angeles, CA, USA
- Robert Cooney
- Department of Emergency Medicine, Geisinger Medical Center, Danville, PA, USA
- Teresa M. Chan
- Department of Medicine, Division of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
18
Touchie C, Kinnear B, Schumacher D, Caretta-Weyer H, Hamstra SJ, Hart D, Gruppen L, Ross S, Warm E, Ten Cate O. On the validity of summative entrustment decisions. Medical Teacher 2021;43:780-787. [PMID: 34020576] [DOI: 10.1080/0142159x.2021.1925642]
Abstract
Health care revolves around trust. Patients are often in a position that gives them no other choice than to trust the people taking care of them. Educational programs thus have the responsibility to develop physicians who can be trusted to deliver safe and effective care, ultimately making a final decision to entrust trainees to graduate to unsupervised practice. Such entrustment decisions deserve to be scrutinized for their validity. This end-of-training entrustment decision is arguably the most important one, although earlier entrustment decisions, for smaller units of professional practice, should also be scrutinized for their validity. Validity of entrustment decisions implies a defensible argument that can be analyzed in components that together support the decision. According to Kane, building a validity argument is a process designed to support inferences of scoring, generalization across observations, extrapolation to new instances, and implications of the decision. A lack of validity can be caused by inadequate evidence in terms of, according to Messick, content, response process, internal structure (coherence) and relationship to other variables, and in misinterpreted consequences. These two leading frameworks (Kane and Messick) in educational and psychological testing can be well applied to summative entrustment decision-making. The authors elaborate the types of questions that need to be answered to arrive at defensible, well-argued summative decisions regarding performance to provide a grounding for high-quality safe patient care.
Affiliation(s)
- Claire Touchie
- Medical Council of Canada, Ottawa, Canada
- The University of Ottawa, Ottawa, Canada
- Benjamin Kinnear
- Internal Medicine and Pediatrics, University of Cincinnati College of Medicine/Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Daniel Schumacher
- Pediatrics, Cincinnati Children's Hospital Medical Center/University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Holly Caretta-Weyer
- Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, USA
- Stanley J Hamstra
- University of Toronto, Toronto, Ontario, Canada
- Accreditation Council for Graduate Medical Education, Chicago, IL, USA
- Danielle Hart
- Emergency Medicine, Hennepin Healthcare and the University of Minnesota, Minneapolis, MN, USA
- Larry Gruppen
- Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI, USA
- Shelley Ross
- Department of Family Medicine, University of Alberta, Edmonton, AB, Canada
- Eric Warm
- University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Olle Ten Cate
- Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands
19
Young JQ, Holmboe ES, Frank JR. Competency-Based Assessment in Psychiatric Education: A Systems Approach. Psychiatr Clin North Am 2021; 44:217-235. [PMID: 34049645 DOI: 10.1016/j.psc.2020.12.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Medical education programs are failing to meet the health needs of patients and communities. Misalignments exist on multiple levels, including content (what trainees learn), pedagogy (how trainees learn), and culture (why trainees learn). To address these challenges effectively, competency-based assessment (CBA) for psychiatric medical education must simultaneously produce both life-long learners who can self-regulate their own growth and trustworthy processes that determine and accelerate readiness for independent practice. The key to effectively doing so is situating assessment within a carefully designed system of several critical, interacting components: workplace-based assessment, ongoing faculty development, learning analytics, longitudinal coaching, and fit-for-purpose clinical competency committees.
Affiliation(s)
- John Q Young
- Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell and the Zucker Hillside Hospital at Northwell Health, Glen Oaks, NY, USA
- Eric S Holmboe
- Accreditation Council for Graduate Medical Education, 401 North Michigan Avenue, Chicago, IL 60611, USA
- Jason R Frank
- Royal College of Physicians and Surgeons of Canada, 774 Echo Drive, Ottawa, Ontario K1S 5N8, Canada; Department of Emergency Medicine, University of Ottawa, Ottawa, Ontario, Canada
20
Johansen RF, Nielsen RB, Malling BV, Storm H. Can case-based discussions in a group setting be used to assess residents' clinical skills? INTERNATIONAL JOURNAL OF MEDICAL EDUCATION 2021; 12:64-73. [PMID: 33840646 PMCID: PMC8411343 DOI: 10.5116/ijme.606a.eb39] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/26/2020] [Accepted: 04/05/2021] [Indexed: 06/12/2023]
Abstract
OBJECTIVES The purpose of this study was to explore residents' and assessors' perception of a new group assessment concept. METHODS This qualitative study consists of observations of four group assessment sessions, followed by semi-structured interviews with six residents and four assessors (specialists in internal medicine), who all volunteered to be interviewed. All residents at a medical department (eleven to fifteen each time) and four assessors participated in four group assessments, where the residents' clinical skills were assessed through case-based discussions. An external consultant (an anthropologist) performed the observations and the interviews. Notes from the observations and the interviews were analyzed using an inductive approach. RESULTS Eight of the ten interviewed participants preferred group assessment to individual assessment. Results from the interviews suggested that the group assessments were more consistent and that the level of discussion was perceived to be higher in the group discussions compared to the one-to-one discussions. All residents indicated that they had acquired new knowledge during their assessment and reported having learned from listening to the assessment of their peers. Assessors similarly reported gaining new knowledge. CONCLUSIONS The residents and assessors expressed very favourable attitudes toward the new group assessment concept. The assessment process was perceived to be higher in quality and more consistent, contributing to learning for all participating doctors in the department. Group assessment is feasible and acceptable, and provides a promising tool for assessment of clinical skills in the future.
Affiliation(s)
- Bente V. Malling
- Department of Clinical Medicine, Health, Aarhus University, Denmark
- Hanne Storm
- Diagnostic Center, Regional Hospital Silkeborg, Regional Hospital Central Jutland, Denmark
21
Hung EK, Jibson M, Sadhu J, Stewart C, Walker A, Wichser L, Young JQ. Wrestling with Implementation: a Step-By-Step Guide to Implementing Entrustable Professional Activities (EPAs) in Psychiatry Residency Programs. ACADEMIC PSYCHIATRY : THE JOURNAL OF THE AMERICAN ASSOCIATION OF DIRECTORS OF PSYCHIATRIC RESIDENCY TRAINING AND THE ASSOCIATION FOR ACADEMIC PSYCHIATRY 2021; 45:210-216. [PMID: 33078330 DOI: 10.1007/s40596-020-01341-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2020] [Accepted: 10/05/2020] [Indexed: 06/11/2023]
Affiliation(s)
- Erick K Hung
- University of California, San Francisco, School of Medicine, San Francisco, CA, USA
- Michael Jibson
- University of Michigan School of Medicine, Ann Arbor, MI, USA
- Julie Sadhu
- Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Colin Stewart
- Georgetown University Medical Center and School of Medicine, Washington, DC, USA
- Ashley Walker
- University of Oklahoma School of Medicine, Oklahoma City, OK, USA
- Lora Wichser
- University of Minnesota Medical School, Minneapolis, MN, USA
- John Q Young
- Donald and Barbara Zucker School of Medicine and the Zucker Hillside Hospital, Hempstead, NY, USA
22
Implementation of a Workplace-Based Assessment System to Measure Performance of the Core Entrustable Professional Activities in the Pediatric Clerkship. Acad Pediatr 2021; 21:564-568. [PMID: 33035730 DOI: 10.1016/j.acap.2020.09.016] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/30/2019] [Revised: 08/27/2020] [Accepted: 09/30/2020] [Indexed: 01/24/2023]
Abstract
OBJECTIVE The Core Entrustable Professional Activities for Entering Residency (Core EPAs) were developed to address the gap between medical school and residency. There is a lack of instruments to measure performance of the Core EPAs in clerkships. We describe the operationalization and outcomes of a workplace-based assessment (WBA) system to measure performance of the Core EPAs in the pediatrics clerkship. METHODS A mobile-friendly WBA was developed at the authors' institution. The WBA incorporated a modified version of the Ottawa Clinic Assessment Tool (OCAT), an instrument that rates performance on a scale of 1 to 4 (from 1, "I had to do it," to 4, "I had to be there just in case"). During 2018 to 2019, all students were required to request feedback for 6 of the 13 Core EPAs using the WBA in the pediatrics clerkship. Descriptive and inferential statistics were calculated to assess mean OCAT scores, variance in performance, and correlations between scores, clerkship timing, and grades. RESULTS A total of 1,655 WBAs were completed for 218 students. The overall mean OCAT score was 3.47 out of 4. Scores across Core EPAs were greater in later rotations (r = 0.157, P < .001). One-way analysis of variance revealed significant variance in score by student, assessor, and timing of clerkship block. Final grades were correlated with OCAT scores (Spearman's ρ = 0.25, P < .001). CONCLUSIONS The results of this study demonstrate initial outcomes for a WBA system to assess performance of the Core EPAs in pediatrics using the OCAT scale. Future studies will assess the system across clerkships.
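To make the reported inferential statistics concrete, here is a minimal Python sketch of the same three analyses: correlation of OCAT scores with rotation timing, one-way ANOVA by assessor, and Spearman correlation with grades. The data frame, column names, and values are invented for illustration; only the choice of tests is taken from the abstract.

```python
import pandas as pd
from scipy import stats

# Hypothetical WBA records: one row per completed assessment
wba = pd.DataFrame({
    "student":  ["s1", "s1", "s2", "s2", "s3", "s3"],
    "assessor": ["a1", "a2", "a1", "a3", "a2", "a3"],
    "block":    [1, 2, 3, 4, 5, 6],                # rotation block in the year
    "ocat":     [3.0, 3.5, 3.25, 3.75, 3.5, 4.0],  # modified OCAT, 1-4 scale
    "grade":    [80, 80, 85, 85, 90, 90],          # final clerkship grade
})

print("mean OCAT score:", wba["ocat"].mean())

# Are scores greater in later rotations? (abstract: r = 0.157, P < .001)
r, p = stats.pearsonr(wba["block"], wba["ocat"])

# Variance in score by assessor via one-way ANOVA
groups = [g["ocat"].to_numpy() for _, g in wba.groupby("assessor")]
f_stat, p_anova = stats.f_oneway(*groups)

# Correlation of scores with final grades (abstract: Spearman's rho = 0.25)
rho, p_rho = stats.spearmanr(wba["ocat"], wba["grade"])
print(r, f_stat, rho)
```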
23
Scheurer JM, Davey C, Pereira AG, Olson APJ. Building a Shared Mental Model of Competence Across the Continuum: Trainee Perceptions of Subinternships for Residency Preparation. JOURNAL OF MEDICAL EDUCATION AND CURRICULAR DEVELOPMENT 2021; 8:23821205211063350. [PMID: 34988291 PMCID: PMC8721691 DOI: 10.1177/23821205211063350] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 11/09/2021] [Indexed: 06/01/2023]
Abstract
INTRODUCTION Toward a vision of competency-based medical education (CBME) spanning the undergraduate to graduate medical education (GME) continuum, the University of Minnesota Medical School (UMMS) developed the Subinternship in Critical Care (SICC), offered across specialties and sites. Explicit course objectives and assessments focus on internship preparedness, emphasizing direct observation of handovers (Core Entrustable Professional Activity, "EPA," 8) and cross-cover duties (EPA 10). METHODS To evaluate students' perceptions of the SICC's and other clerkships' effectiveness toward internship preparedness, all 2016 and 2017 UMMS graduates in GME training (n = 440) were surveyed regarding skill development and assessment among Core EPAs 1, 4, 6, 8, 9, and 10. Analysis included descriptive statistics plus chi-squared and kappa agreement tests. RESULTS Respondents (n = 147; response rate 33%) rated the SICC as a rotation during which they gained the most competence, both for EPAs that are more explicit in the rotation objectives (#4, 57% rated important; #8, 75%; #10, 70%) and for those that are less explicit (#6, 53%; #9, 69%). Assessments of EPAs 8 (80% rated important) and 10 (76%) were frequently perceived as important toward residency preparedness. Agreement between the importance of EPA development and assessment was moderate (kappa = 0.40-0.59 for all surveyed EPAs). CONCLUSIONS Graduates' perceptions support the SICC's educational utility and assessments. Based on this and other insight from the SICC, the authors propose implications toward collectively envisioning the continuum of physician competency.
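The agreement analysis named in the abstract can be sketched briefly. Below is a hedged Python illustration of Cohen's kappa between graduates' binary ratings of whether an EPA's development and its assessment were "important", plus a chi-squared test on the paired responses. The response vectors, and therefore the resulting kappa, are invented; only the methods are taken from the text.

```python
import numpy as np
from scipy import stats

developed = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])  # development rated important
assessed  = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 1])  # assessment rated important

# Cohen's kappa: observed agreement corrected for chance agreement
p_o = (developed == assessed).mean()
p_dev, p_ass = developed.mean(), assessed.mean()
p_e = p_dev * p_ass + (1 - p_dev) * (1 - p_ass)
kappa = (p_o - p_e) / (1 - p_e)  # abstract reports 0.40-0.59 ("moderate")

# Chi-squared test on the 2x2 table of paired responses
table = np.array([[np.sum((developed == i) & (assessed == j)) for j in (0, 1)]
                  for i in (0, 1)])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"kappa={kappa:.2f}, chi2={chi2:.2f}, p={p:.2f}")
```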
Affiliation(s)
- Johannah M. Scheurer
- Department of Pediatrics, University of Minnesota Medical School, Minneapolis, MN, USA
- Cynthia Davey
- Clinical and Translational Science Institute, University of Minnesota, Minneapolis, MN, USA
- Anne G. Pereira
- University of Minnesota Medical School, Minneapolis, MN, USA
- Andrew P. J. Olson
- Department of Pediatrics, University of Minnesota Medical School, Minneapolis, MN, USA
- Department of Medicine, University of Minnesota Medical School, Minneapolis, MN, USA
24
Prentice S, Benson J, Kirkpatrick E, Schuwirth L. Workplace-based assessments in postgraduate medical education: A hermeneutic review. MEDICAL EDUCATION 2020; 54:981-992. [PMID: 32403200 DOI: 10.1111/medu.14221] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Revised: 03/30/2020] [Accepted: 05/06/2020] [Indexed: 06/11/2023]
Abstract
OBJECTIVES Since their introduction, workplace-based assessments (WBAs) have proliferated throughout postgraduate medical education. Previous reviews have identified mixed findings regarding WBAs' effectiveness, but have not considered the importance of user-tool-context interactions. The present review was conducted to address this gap by generating a thematic overview of factors important to the acceptability, effectiveness and utility of WBAs in postgraduate medical education. METHOD This review utilised a hermeneutic cycle for analysis of the literature. Four databases were searched to identify articles pertaining to WBAs in postgraduate medical education from the United Kingdom, Canada, Australia, New Zealand, the Netherlands and Scandinavian countries. Over the course of three rounds, 30 published articles were thematically analysed in an iterative fashion to deeply engage with the literature in order to answer three scoping questions concerning acceptability, effectiveness and assessment training. As each round was coded, themes were refined and questions added until saturation was reached. RESULTS Stakeholders value WBAs for permitting assessment of trainees' performance in an authentic context. Negative perceptions of WBAs stem from misuse due to low assessment literacy, disagreement with definitions and frameworks, and inadequate summative use of WBAs. Effectiveness is influenced by user (eg, engagement and assessment literacy) and tool attributes (eg, definitions and scales), but most fundamentally by user-tool-context interactions, particularly trainee-assessor relationships. Assessors' assessment literacy must be combined with cultural and administrative factors in organisations and the broader medical discipline. CONCLUSIONS The pivotal determinants of WBAs' effectiveness and utility are the user-tool-context interactions. From the identified themes, we present 12 lessons learned regarding users, tools and contexts to maximise WBA utility, including the separation of formative and summative WBA assessors, use of maximally useful scales, and instituting measures to reduce competitive demands.
Affiliation(s)
- Shaun Prentice
- GPEx Ltd., Adelaide, South Australia, Australia
- School of Psychology, University of Adelaide, Adelaide, South Australia, Australia
- Jill Benson
- GPEx Ltd., Adelaide, South Australia, Australia
- Health in Human Diversity Unit, School of Medicine, University of Adelaide, Adelaide, South Australia, Australia
- Prideaux Centre, Flinders University, Adelaide, South Australia, Australia
- Emily Kirkpatrick
- GPEx Ltd., Adelaide, South Australia, Australia
- School of Medicine, University of Adelaide, Adelaide, South Australia, Australia
- Lambert Schuwirth
- Prideaux Centre, Flinders University, Adelaide, South Australia, Australia
- Maastricht University, Maastricht, the Netherlands
- Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
25
Young JQ, Sugarman R, Schwartz J, O'Sullivan PS. Overcoming the Challenges of Direct Observation and Feedback Programs: A Qualitative Exploration of Resident and Faculty Experiences. TEACHING AND LEARNING IN MEDICINE 2020; 32:541-551. [PMID: 32529844 DOI: 10.1080/10401334.2020.1767107] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
Problem: Prior studies have reported significant negative attitudes amongst both faculty and residents toward direct observation and feedback. Numerous contributing factors have been identified, including insufficient time for direct observation and feedback, poorly understood purpose, inadequate training, disbelief in the formative intent, inauthentic resident-patient clinical interactions, undermining of resident autonomy, lack of trust between the faculty-resident dyad, and low-quality feedback information that lacks credibility. Strategies are urgently needed to overcome these challenges and more effectively engage faculty and residents in direct observation and feedback. Otherwise, the primary goals of supporting both formative and summative assessment will not be realized and the viability of competency-based medical education will be threatened. Intervention: Toward this end, recent studies have recommended numerous strategies to overcome these barriers: protected time for direct observation and feedback; ongoing faculty and resident training on goals and bidirectional, co-constructed feedback; repeated direct observations and feedback within a longitudinal resident-supervisor relationship; utilization of assessment tools with evidence for validity; and monitoring for engagement. Given the complexity of the problem, it is likely that bundling multiple strategies together will be necessary to overcome the challenges. The Direct Observation Structured Feedback Program (DOSFP) incorporated many of the recommended features, including protected time for direct observation and feedback within longitudinal faculty-resident relationships. Using a qualitative thematic approach, the authors conducted semi-structured interviews during February and March 2019 with 10 supervisors and 10 residents. Participants were asked to reflect on their experiences. Interview guide questions explored key themes from the literature on direct observation and feedback. Transcripts were anonymized. Two authors independently and iteratively coded the transcripts. Coding was theory-driven and differences were discussed until consensus was reached. The authors then explored the relationships between the codes and used a semantic approach to construct themes. Context: The DOSFP was implemented in a psychiatry continuity clinic for second- and third-year residents. Impact: Faculty and residents were aligned around the goals. They both perceived the DOSFP as focused on growth rather than judgment even though residents understood that the feedback had both formative and summative purposes. The DOSFP facilitated educational alliances characterized by trust and respect. With repeated practice within a longitudinal relationship, trainees dropped the performance orientation and described their interactions with patients as authentic. Residents generally perceived the feedback as credible, described feedback quality as high, and valued the two-way conversation. However, when receiving feedback with which they did not agree, residents demurred or, at most, would ask a clarifying question, but then internally discounted the feedback. Lessons Learned: Direct observation and structured feedback programs that bundle recent recommendations may overcome many of the challenges identified by previous research. Yet, residents discounted disagreeable feedback, illustrating a significant limitation and the need for other strategies that help residents reconcile conflict between external data and one's self-appraisal.
Affiliation(s)
- John Q Young
- Department of Psychiatry, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York, USA
- Rebekah Sugarman
- Department of Psychiatry, The Zucker Hillside Hospital at Northwell Health, Glen Oaks, New York, USA
- Jessica Schwartz
- Department of Psychiatry, The Zucker Hillside Hospital at Northwell Health, Glen Oaks, New York, USA
- Patricia S O'Sullivan
- Office of Medical Education, University of California San Francisco, San Francisco, California, USA
26
Alpine LM, O'Connor A, McGuinness M, Barrett EM. Performance-based assessment during clinical placement: Cross-sectional investigation of a training workshop for practice educators. Nurs Health Sci 2020; 23:113-122. [PMID: 32803810 DOI: 10.1111/nhs.12768] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 07/28/2020] [Accepted: 08/13/2020] [Indexed: 11/29/2022]
Abstract
Performance-based assessment evaluates a health professional student's performance as they integrate their knowledge and skills into clinical practice. Performance-based assessment grades, however, are reported to be highly variable due to the complexity of decision-making in the clinical environment. The aim of this study was to evaluate the impact of a training workshop based on frame-of-reference principles on grading of student performance by physiotherapy practice educators. This was a prospective cross-sectional study which used a single group pre-test, post-test design. Fifty-three practice educators rated two video vignettes depicting a poor and very good student performance, using a subsection of a physiotherapy performance-based assessment tool before and after training. Overall, results showed that participants amended their scores on approximately half of all scoring occasions following training, with the majority decreasing the scores awarded. This impacted positively on scoring for the poor performance video, bringing scores more in line with the true score. This study provides evidence of the benefit of a training workshop to influence decision-making in performance-based assessment as part of a wider education program for practice educators.
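The pre-test/post-test logic of this evaluation is easy to illustrate. The hypothetical Python sketch below compares educators' scores for the "poor performance" video before and after frame-of-reference training: how many amended their score, whether the amendments lowered it, and whether scores moved toward the criterion ("true") score. All numbers are invented.

```python
import numpy as np

true_score = 2.0                              # criterion score, "poor" video
pre  = np.array([3.5, 3.0, 2.5, 4.0, 3.0])    # five educators, pre-training
post = np.array([2.5, 3.0, 2.5, 3.0, 3.0])    # same educators, post-training

changed = pre != post
print("amended their score:", changed.mean())              # ~half in the study
print("amendments that lowered the score:",
      (post[changed] < pre[changed]).mean())                # reported majority
print("mean |error| pre :", np.abs(pre - true_score).mean())
print("mean |error| post:", np.abs(post - true_score).mean())  # closer to true
```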
Affiliation(s)
- Lucy M Alpine
- Discipline of Physiotherapy, School of Medicine, Trinity College Dublin, The University of Dublin, Dublin, Ireland
- Anne O'Connor
- School of Allied Health, Health Sciences Building, University of Limerick, Limerick, Ireland
- Emer M Barrett
- Discipline of Physiotherapy, School of Medicine, Trinity College Dublin, The University of Dublin, Dublin, Ireland
27
Sirianni G, Glover Takahashi S, Myers J. Taking stock of what is known about faculty development in competency-based medical education: A scoping review paper. MEDICAL TEACHER 2020; 42:909-915. [PMID: 32450047 DOI: 10.1080/0142159x.2020.1763285] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Purpose: The primary objective was to inventory what is currently known about faculty development (FD) for competency-based medical education (CBME) and identify gaps in the literature. Methods: A scoping review methodology was employed. Inclusion criteria for article selection were established, with two reviewers completing a full-text analysis. Quality checks were included, along with iterative consultation on data collection and consensus decision making via a grounded theory approach. Results: The review identified 19 articles published between 2009 and 2018. Most articles (N = 15) offered suggestions as to what should happen with FD in CBME, but few (N = 4) adopted an experimental design. Six main themes were identified, with three main features of FD noted across themes: (1) the importance of direct and timely feedback to faculty members on their teaching and assessment skills; (2) the role of establishing shared mental models for CBME curricula; (3) that FD be thought of longitudinally, not as a one-time bolus. Conclusion: This work illustrates that there is limited high-quality research in FD for CBME. Future FD activities should consider employing a longitudinal and multi-modal program format that includes feedback for the faculty participants on their teaching and assessment skills, including the development of faculty coaching skills.
Affiliation(s)
- Giovanna Sirianni
- Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Family and Community Medicine, University of Toronto, Toronto, Canada
- Susan Glover Takahashi
- Department of Family and Community Medicine, University of Toronto, Toronto, Canada
- Postgraduate Medical Education, University of Toronto, Toronto, Canada
- Centre for Faculty Development, University of Toronto, Toronto, Canada
- Jeff Myers
- Department of Family and Community Medicine, University of Toronto, Toronto, Canada
- Sinai Health System, Toronto, Canada
28
de Jonge LPJWM, Mesters I, Govaerts MJB, Timmerman AA, Muris JWM, Kramer AWM, van der Vleuten CPM. Supervisors' intention to observe clinical task performance: an exploratory study using the theory of planned behaviour during postgraduate medical training. BMC MEDICAL EDUCATION 2020; 20:134. [PMID: 32354331 PMCID: PMC7193388 DOI: 10.1186/s12909-020-02047-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/20/2020] [Accepted: 04/21/2020] [Indexed: 06/01/2023]
Abstract
BACKGROUND Direct observation of clinical task performance plays a pivotal role in competency-based medical education. Although formal guidelines require supervisors to engage in direct observations, research demonstrates that trainees are infrequently observed. Supervisors may not only experience practical and socio-cultural barriers to direct observations in healthcare settings; they may also question their usefulness or have low perceived self-efficacy in performing direct observations. A better understanding of how these multiple factors interact to influence supervisors' intention to perform direct observations may help us to more effectively implement the aforementioned guidelines and increase the frequency of direct observations. METHODS We conducted an exploratory quantitative study, using the Theory of Planned Behaviour (TPB) as our theoretical framework. In applying the TPB, we transfer a psychological theory to medical education to gain insight into the influence of cognitive and emotional processes on intentions to use direct observations in workplace-based learning and assessment. We developed an instrument to investigate supervisors' intention to perform direct observations. The relationships between the TPB measures of our questionnaire were explored by computing bivariate correlations using Pearson's r. Hierarchical regression analysis was performed to assess the impact of the respective TPB measures as predictors of the intention to perform direct observations. RESULTS In our study, 82 GP supervisors completed our TPB questionnaire. We found that supervisors had a positive attitude towards direct observations. Our TPB model explained 45% of the variance in supervisors' intentions to perform them. Normative beliefs and past behaviour were significant determinants of this intention. CONCLUSION Our study suggests that supervisors use their past experiences to form intentions to perform direct observations in a careful, thoughtful manner and, in doing so, also take into consideration the preferences of the learner and other stakeholders potentially engaged in direct observations. These findings have potential implications for research into work-based assessments and the development of training interventions to foster a shared mental model on the use of direct observations.
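The hierarchical regression described in the methods can be sketched in a few lines of Python with statsmodels: TPB measures are entered in a first step, past behaviour in a second, and the change in R-squared shows its added predictive value. The variable names, simulated data, and effect sizes below are assumptions for illustration, not the study's dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 82  # the study's 82 GP supervisors
attitude  = rng.normal(4.0, 0.5, n)
norms     = rng.normal(3.5, 0.6, n)   # normative beliefs
control   = rng.normal(3.8, 0.5, n)   # perceived behavioural control
past      = rng.normal(3.0, 0.8, n)   # past direct-observation behaviour
intention = 0.3 * norms + 0.4 * past + rng.normal(0, 0.5, n)

# Step 1: TPB measures only; step 2: add past behaviour
step1 = sm.OLS(intention,
               sm.add_constant(np.column_stack([attitude, norms, control]))).fit()
step2 = sm.OLS(intention,
               sm.add_constant(np.column_stack([attitude, norms, control, past]))).fit()
print(f"R2 step 1: {step1.rsquared:.2f}, step 2: {step2.rsquared:.2f}, "
      f"delta R2 from past behaviour: {step2.rsquared - step1.rsquared:.2f}")
```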
Affiliation(s)
- Laury P J W M de Jonge
- Department of General Practice, Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands
- Ilse Mesters
- Department of Epidemiology, Maastricht University, Maastricht, The Netherlands
- Marjan J B Govaerts
- Department of Educational Research and Development, Maastricht University, Maastricht, The Netherlands
- Angelique A Timmerman
- Department of General Practice, Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands
- Jean W M Muris
- Department of General Practice, Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands
- Anneke W M Kramer
- Department of Family Medicine, Leiden University, Leiden, The Netherlands
- Cees P M van der Vleuten
- Department of Educational Research and Development, Maastricht University, Maastricht, The Netherlands
29
30
Lewis LD, Steinert Y. How Culture Is Understood in Faculty Development in the Health Professions: A Scoping Review. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2020; 95:310-319. [PMID: 31599755 DOI: 10.1097/acm.0000000000003024] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
PURPOSE To examine the ways in which culture is conceptualized in faculty development (FD) in the health professions. METHOD The authors searched PubMed, Web of Science, ERIC, and CINAHL, as well as the reference lists of identified publications, for articles on culture and FD published between 2006 and 2018. Based on inclusion criteria developed iteratively, they screened all articles. A total of 955 articles were identified, 100 were included in the full-text screen, and 70 met the inclusion criteria. Descriptive and thematic analyses of data extracted from the included articles were conducted. RESULTS The articles emanated from 20 countries; primarily focused on teaching and learning, cultural competence, and career development; and frequently included multidisciplinary groups of health professionals. Only 1 article evaluated the cultural relevance of an FD program. The thematic analysis yielded 3 main themes: culture was frequently mentioned but not explicated; culture centered on issues of diversity, aiming to promote institutional change; and cultural consideration was not routinely described in international FD. CONCLUSIONS Culture was frequently mentioned but rarely defined in the FD literature. In programs focused on cultural competence and career development, addressing culture was understood as a way of accounting for racial and socioeconomic disparities. In international FD programs, accommodations for cultural differences were infrequently described, despite authors acknowledging the importance of national norms, values, beliefs, and practices. In a time of increasing international collaboration, an awareness of, and sensitivity to, cultural contexts is needed.
Affiliation(s)
- Lerona Dana Lewis
- L.D. Lewis was postdoctoral fellow, Centre for Medical Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada, at the time this work was completed. Y. Steinert is professor of family medicine and health sciences education, director of the Institute of Health Sciences Education, and the Richard and Sylvia Cruess Chair in Medical Education, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
31
Tekian A, Park YS, Tilton S, Prunty PF, Abasolo E, Zar F, Cook DA. Competencies and Feedback on Internal Medicine Residents' End-of-Rotation Assessments Over Time: Qualitative and Quantitative Analyses. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2019; 94:1961-1969. [PMID: 31169541 PMCID: PMC6882536 DOI: 10.1097/acm.0000000000002821] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
PURPOSE To examine how qualitative narrative comments and quantitative ratings from end-of-rotation assessments change for a cohort of residents from entry to graduation, and explore associations between comments and ratings. METHOD The authors obtained end-of-rotation quantitative ratings and narrative comments for 1 cohort of internal medicine residents at the University of Illinois at Chicago College of Medicine from July 2013-June 2016. They inductively identified themes in comments, coded orientation (praising/critical) and relevance (specificity and actionability) of feedback, examined associations between codes and ratings, and evaluated changes in themes and ratings across years. RESULTS Data comprised 1,869 assessments (828 comments) on 33 residents. Five themes aligned with ACGME competencies (interpersonal and communication skills, professionalism, medical knowledge, patient care, and systems-based practice), and 3 did not (personal attributes, summative judgment, and comparison to training level). Work ethic was the most frequent subtheme. Comments emphasized medical knowledge more in year 1 and focused more on autonomy, leadership, and teaching in later years. Most comments (714/828 [86%]) contained high praise, and 412/828 (50%) were very relevant. Average ratings correlated positively with orientation (β = 0.46, P < .001) and negatively with relevance (β = -0.09, P = .01). Ratings increased significantly with each training year (year 1, mean [standard deviation]: 5.31 [0.59]; year 2: 5.58 [0.47]; year 3: 5.86 [0.43]; P < .001). CONCLUSIONS Narrative comments address resident attributes beyond the ACGME competencies and change as residents progress. Lower quantitative ratings are associated with more specific and actionable feedback.
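The comment-rating association reported above (positive for praising orientation, negative for relevance) can be sketched as a simple regression. In the hedged Python illustration below, the coding scheme, simulated data, and coefficients are invented; only the direction of the reported effects and the marginal frequencies are taken from the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 828  # number of narrative comments in the study
orientation = rng.binomial(1, 0.86, n)   # 1 = praising (86% contained high praise)
relevance   = rng.binomial(1, 0.50, n)   # 1 = very relevant (50% in the study)
rating = 5.5 + 0.46 * orientation - 0.09 * relevance + rng.normal(0, 0.4, n)

# Regress the quantitative rating on the two comment codes
model = sm.OLS(rating,
               sm.add_constant(np.column_stack([orientation, relevance]))).fit()
print(model.params)  # expect + for praising orientation, - for relevance
```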
Affiliation(s)
- Ara Tekian
- A. Tekian is professor and associate dean for international affairs, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-9252-1588
- Yoon Soo Park
- Y.S. Park is associate professor, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, Illinois; ORCID: http://orcid.org/0000-0001-8583-4335
- Sarette Tilton
- S. Tilton is a PharmD candidate, University of Illinois at Chicago College of Pharmacy, Chicago, Illinois
- Patrick F. Prunty
- P.F. Prunty is a PharmD candidate, University of Illinois at Chicago College of Pharmacy, Chicago, Illinois
- Eric Abasolo
- E. Abasolo is a PharmD candidate, University of Illinois at Chicago College of Pharmacy, Chicago, Illinois
- Fred Zar
- F. Zar is professor and program director, Department of Medicine, University of Illinois at Chicago College of Medicine, Chicago, Illinois
- David A. Cook
- D.A. Cook is professor of medicine and medical education and associate director, Office of Applied Scholarship and Education Science, and consultant, Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota; ORCID: https://orcid.org/0000-0003-2383-4633
32
Hodwitz K, Kuper A, Brydges R. Realizing One's Own Subjectivity: Assessors' Perceptions of the Influence of Training on Their Conduct of Workplace-Based Assessments. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2019; 94:1970-1979. [PMID: 31397710 DOI: 10.1097/acm.0000000000002943] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
PURPOSE Assessor training is essential for defensible assessments of physician performance, yet research on the effectiveness of training programs for promoting assessor consistency has produced mixed results. This study explored assessors' perceptions of the influence of training and assessment tools on their conduct of workplace-based assessments of physicians. METHOD In 2017, the authors used a constructivist grounded theory approach to interview 13 physician assessors about their perceptions of the effects of training and tool development on their conduct of assessments. RESULTS Participants reported that training led them to realize that there is a potential for variability in assessors' judgments, prompting them to change their scoring and feedback behaviors to enhance consistency. However, many participants noted they had not substantially changed their numerical scoring. Nonetheless, most thought training would lead to increased standardization and consistency among assessors, highlighting a "standardization paradox" in which participants perceived a programmatic shift toward standardization but minimal changes in their own ratings. An "engagement effect" was also found in which participants involved in both tool development and training cited more substantial learnings than participants involved only in training. CONCLUSIONS Findings suggest that training may help assessors recognize their own subjectivity when judging performance, which may prompt behaviors that support rigorous and consistent scoring but may not lead to perceptible changes in assessors' numeric ratings. Results also suggest that participating in tool development may help assessors align their judgments with the scoring criteria. Overall, results support the continued study of assessor training programs as a means of enhancing assessor consistency.
Affiliation(s)
- Kathryn Hodwitz
- K. Hodwitz is research associate, College of Physicians and Surgeons of Ontario, Toronto, Ontario, Canada. A. Kuper is associate professor and faculty co-lead, Person-Centred Care Education, Department of Medicine, scientist and associate director, Wilson Centre for Research in Education, University Health Network, University of Toronto, and staff physician, Division of General Internal Medicine, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada. R. Brydges is research director and scientist and holds the professorship in Technology Enabled Education at the Allan Waters Family Simulation Centre, St. Michael's Hospital, and is associate professor, Department of Medicine and Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, Canada
33
Young JQ. Advancing Our Understanding of Narrative Comments Generated by Direct Observation Tools: Lessons From the Psychopharmacotherapy-Structured Clinical Observation. J Grad Med Educ 2019; 11:570-579. [PMID: 31636828 PMCID: PMC6795331 DOI: 10.4300/jgme-d-19-00207.1] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/23/2019] [Revised: 07/07/2019] [Accepted: 08/05/2019] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND While prior research has focused on the validity of quantitative ratings generated by direct observation tools, much less is known about the written comments. OBJECTIVE This study examines the quality of written comments and their relationship with checklist scores generated by a direct observation tool, the Psychopharmacotherapy-Structured Clinical Observation (P-SCO). METHODS From 2008 to 2012, faculty in a postgraduate year 3 psychiatry outpatient clinic completed 601 P-SCOs. Twenty-five percent were randomly selected from each year; the sample included 8 faculty and 57 residents. To assess quality, comments were coded for valence (reinforcing or corrective), behavioral specificity, and content. To assess the relationship between comments and scores, the authors calculated the correlation between comment and checklist score valence and examined the degree to which comments and checklist scores addressed the same content. RESULTS Ninety-one percent of the comments were behaviorally specific. Sixty percent were reinforcing, and 40% were corrective. Eight themes were identified, including 2 constructs not adequately represented by the checklist. Comment and checklist score valence was moderately correlated (Spearman's rho = 0.57, P < .001). Sixty-seven percent of high and low checklist scores were associated with a comment of the same valence and content. Only 50% of overall comments were associated with a checklist score of the same valence and content. CONCLUSIONS A direct observation tool such as the P-SCO can generate high-quality written comments. Narrative comments both explain checklist scores and convey unique content. Thematic coding of comments can improve the content validity of a checklist.
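The valence analysis reported above can be made concrete with a small Python sketch: each written comment is coded reinforcing (1) or corrective (0), each checklist score is coded high (1) or low (0), and the two codings are correlated. The codes below are invented; only the coding scheme and the choice of Spearman's correlation come from the abstract (which reports rho = 0.57).

```python
import numpy as np
from scipy import stats

comment_valence = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0])  # reinforcing=1
score_valence   = np.array([1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0])  # high score=1

rho, p = stats.spearmanr(comment_valence, score_valence)

# Share of comments paired with a checklist score of the same valence
agreement = (comment_valence == score_valence).mean()
print(f"rho={rho:.2f}, same-valence agreement={agreement:.0%}")
```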
34
Crossley JGM, Groves J, Croke D, Brennan PA. Examiner training: A study of examiners making sense of norm-referenced feedback. MEDICAL TEACHER 2019; 41:787-794. [PMID: 30912989 DOI: 10.1080/0142159x.2019.1579902] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Purpose: Examiner training has an inconsistent impact on subsequent performance. To understand this variation, we explored how examiners think about changing the way they assess. Method: We provided comparative data to 17 experienced examiners about their assessments, captured their sense-making processes using a modified think-aloud protocol, and identified patterns by inductive thematic analysis. Results: We observed five sense-making processes: (1) testing personal relevance, (2) interpretation, (3) attribution, (4) considering the need for change, and (5) considering the nature of change. Three observed meta-themes describe the manner of examiners' thinking: guarded curiosity, where examiners expressed curiosity over how their judgments compared with others' but also guardedness about the relevance of the comparisons; dysfunctional assimilation, where examiners' interpretation and attribution exhibited cognitive anchoring, personalization, and affective bias; and moderated conservatism, where examiners expressed openness to change but also loyalty to their judgment-framing values and aphorisms. Conclusions: Our examiners engaged in complex processes as they considered changing their assessments. The 'stabilising' mechanisms some used resembled learners assimilating educational feedback. If these are typical examiner responses, they may well explain the variable impact of examiner training, and they have significant implications for the pursuit of meaningful and defensible judgment-based assessment.
Affiliation(s)
- James G M Crossley
- Department of Medical Education, The Medical School, University of Sheffield, Sheffield, United Kingdom of Great Britain and Northern Ireland
- Jeremy Groves
- Department of Surgery and Critical Care, Chesterfield Royal Hospital NHS Foundation Trust, Chesterfield, United Kingdom of Great Britain and Northern Ireland
- David Croke
- Quality Enhancement Office, Royal College of Surgeons of Ireland, Dublin, Ireland
- Peter A Brennan
- Department of Surgery, Queen Alexandra Hospital, Portsmouth, United Kingdom of Great Britain and Northern Ireland
35
Hatala R, Ginsburg S, Hauer KE, Gingerich A. Entrustment Ratings in Internal Medicine Training: Capturing Meaningful Supervision Decisions or Just Another Rating? J Gen Intern Med 2019; 34:740-743. [PMID: 30993616 PMCID: PMC6502893 DOI: 10.1007/s11606-019-04878-y] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
The implementation of Entrustable Professional Activities has led to the simultaneous development of assessment based on a supervisor's entrustment of a learner to perform these activities without supervision. While entrustment may be intuitive when we consider the direct observation of a procedural task, the current implementation of rating scales for internal medicine's non-procedural tasks, based on entrustability, may not translate into meaningful learner assessment. In these Perspectives, we outline a number of potential concerns with ad hoc entrustability assessments in internal medicine post-graduate training: differences in the scope of procedural vs. non-procedural tasks, acknowledgement of the type of clinical oversight common within internal medicine, and the limitations of entrustment language. We point towards potential directions for inquiry that would require us to clarify the purpose of the entrustability assessment, reconsider each of the fundamental concepts of entrustment in internal medicine supervision and explore the use of descriptive rather than numeric assessment approaches.
Affiliation(s)
- Rose Hatala
- Department of Medicine, University of British Columbia, Vancouver, Canada
- St. Paul's Hospital, Suite 5907 Burrard Bldg, 1081 Burrard St., Vancouver, BC, V6Z 1Y6, Canada
- Shiphra Ginsburg
- Department of Medicine, Faculty of Medicine, University of Toronto, Toronto, Canada
- Karen E Hauer
- Department of Medicine, University of California at San Francisco, San Francisco, CA, USA
- Andrea Gingerich
- Northern Medical Program, University of Northern British Columbia, Prince George, Canada
36
Moroz A, King A, Kim B, Fusco H, Carmody K. Constructing a Shared Mental Model for Feedback Conversations: Faculty Workshop Using Video Vignettes Developed by Residents. MEDEDPORTAL : THE JOURNAL OF TEACHING AND LEARNING RESOURCES 2019; 15:10821. [PMID: 31139740 PMCID: PMC6519682 DOI: 10.15766/mep_2374-8265.10821] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/23/2019] [Accepted: 11/12/2018] [Indexed: 06/09/2023]
Abstract
INTRODUCTION Providing feedback is a fundamental principle in medical education; however, as educators, our community lacks the necessary skills to give meaningful, impactful feedback to those under our supervision. By improving our feedback-giving skills, we provide concrete ways for trainees to optimize their performance, ultimately leading to better patient care. METHODS In this faculty development workshop, faculty groups used six feedback video vignettes scripted, enacted, and produced by residents to arrive at a shared mental model of feedback. During workshop development, we used qualitative analysis for faculty narratives combined with the findings from a focused literature review to define dimensions of feedback. RESULTS Twenty-three faculty (physical medicine and rehabilitation and neurology) participated in seven small-group workshops. Analysis of group discussion notes yielded 343 codes that were collapsed into 25 coding categories. After incorporating the results of a focused literature review, we identified 48 items grouped into 10 dimensions of feedback. Online session evaluation indicated that faculty members liked the workshop's format and thought they were better at providing feedback to residents as a result of the workshop. DISCUSSION Small faculty groups were able to develop a shared mental model of dimensions of feedback that was also grounded in medical education literature. The theme of specificity of feedback was prominent and echoed recent medical education research findings. Defining performance expectations for feedback providers in the form of a practical and psychometrically sound rubric can enhance reliable scoring of feedback performance assessments and should be the next step in our work.
Affiliation(s)
- Alex Moroz
- Associate Professor, Department of Rehabilitation Medicine, New York University School of Medicine
- Anna King
- Chief Resident, Department of Rehabilitation Medicine, New York University School of Medicine
- Baruch Kim
- Chief Resident, Department of Rehabilitation Medicine, New York University School of Medicine
- Heidi Fusco
- Clinical Assistant Professor, Department of Rehabilitation Medicine, New York University School of Medicine
- Kristin Carmody
- Associate Professor, Department of Emergency Medicine, New York University School of Medicine
37
Prins SH, Brøndt SG, Malling B. Implementation of workplace-based assessment in general practice. EDUCATION FOR PRIMARY CARE 2019; 30:133-144. [PMID: 31018801 DOI: 10.1080/14739879.2019.1588788] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
Background: Workplace-based assessment (WPBA) is widely accepted, but few studies have investigated implementation issues during general practice (GP) placements. This study explored possible barriers and identified key elements for successful implementation of a WPBA-programme in Danish GP specialist training. Methods: Supervisors had attended a one-day course in WPBA and trainees had received a short introduction. Questionnaires on experiences with implementation of WPBA were distributed to 106 GP supervisors and 110 trainees after the rotation was finished. Results: The response rate was 61/96 (64%) for trainees and 67/94 (71%) for supervisors. Supervisors were generally more positive towards WPBA and saw fewer barriers than trainees. Lack of planning was most often reported as an impediment to WPBA. Supervisors did not identify trainees' uneasiness of being observed as a problem as often as trainees. A total of 34% of trainees reported uneasiness as an obstacle to WPBA. Conclusions: It seems that the education of supervisors positively influenced supervisors' perception and use of WPBA. Adequate planning of WPBA may be just as big a problem as assigning the time. Further investigations on the impact of education on trainees' perception of WPBA are needed.
Affiliation(s)
- Søren Hast Prins
- Centre for Health Sciences Education, Health, Aarhus University, Aarhus, Denmark
- Bente Malling
- Centre for Health Sciences Education, Health, Aarhus University, Aarhus, Denmark
38
Lee V, Brain K, Martin J. From opening the 'black box' to looking behind the curtain: cognition and context in assessor-based judgements. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2019; 24:85-102. [PMID: 30302670 DOI: 10.1007/s10459-018-9851-0] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/25/2018] [Accepted: 09/06/2018] [Indexed: 06/08/2023]
Abstract
The increasing use of direct observation tools to assess routine performance has resulted in the growing reliance on assessor-based judgements in the workplace. However, we have a limited understanding of how assessors make judgements and formulate ratings in real world contexts. The current research on assessor cognition has largely focused on the cognitive domain but the contextual factors are equally important, and both are closely interconnected. This study aimed to explore the perceived cognitive and contextual factors influencing Mini-CEX assessor judgements in the Emergency Department setting. We used a conceptual framework of assessor-based judgement to develop a sequential mixed methods study. We analysed and integrated survey and focus group results to illustrate self-reported cognitive and contextual factors influencing assessor judgements. We used situated cognition theory as a sensitizing lens to explore the interactions between people and their environment. The major factors highlighted through our mixed methods study were: clarity of the assessment, reliance on and variable approach to overall impression (gestalt), role tension especially when giving constructive feedback, prior knowledge of the trainee and case complexity. We identified prevailing tensions between participants (assessors and trainees), interactions (assessment and feedback) and setting. The two practical implications of our research are the need to broaden assessor training to incorporate both cognitive and contextual domains, and the need to develop a more holistic understanding of assessor-based judgements in real world contexts to better inform future research and development in workplace-based assessments.
Affiliation(s)
- Victor Lee
- Department of Emergency Medicine, Austin Health, P.O. Box 5555, Heidelberg, VIC, 3084, Australia
- Jenepher Martin
- Eastern Health Clinical School, Monash University and Deakin University, Box Hill, VIC, Australia
39
Thoma B, Sebok-Syer SS, Colmers-Gray I, Sherbino J, Ankel F, Trueger NS, Grock A, Siemens M, Paddock M, Purdy E, Kenneth Milne W, Chan TM. Quality Evaluation Scores are no more Reliable than Gestalt in Evaluating the Quality of Emergency Medicine Blogs: A METRIQ Study. TEACHING AND LEARNING IN MEDICINE 2018; 30:294-302. [PMID: 29381099 DOI: 10.1080/10401334.2017.1414609] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Construct: We investigated the quality of emergency medicine (EM) blogs as educational resources. PURPOSE Online medical education resources such as blogs are increasingly used by EM trainees and clinicians. However, quality evaluations of these resources using gestalt are unreliable. We investigated the reliability of two previously derived quality evaluation instruments for blogs. APPROACH Sixty English-language EM websites that published clinically oriented blog posts between January 1 and February 24, 2016, were identified. A random number generator selected 10 websites, and the 2 most recent clinically oriented blog posts from each site were evaluated using gestalt, the Academic Life in Emergency Medicine (ALiEM) Approved Instructional Resources (AIR) score, and the Medical Education Translational Resources: Impact and Quality (METRIQ-8) score, by a sample of medical students, EM residents, and EM attendings. Each rater evaluated all 20 blog posts with gestalt and 15 of the 20 blog posts with the ALiEM AIR and METRIQ-8 scores. Pearson's correlations were calculated between the average scores for each metric. Single-measure intraclass correlation coefficients (ICCs) evaluated the reliability of each instrument. RESULTS Our study included 121 medical students, 88 EM residents, and 100 EM attendings who completed ratings. The average gestalt rating of each blog post correlated strongly with the average scores for ALiEM AIR (r = .94) and METRIQ-8 (r = .91). Single-measure ICCs were fair for gestalt (0.37, IQR 0.25-0.56), ALiEM AIR (0.41, IQR 0.29-0.60) and METRIQ-8 (0.40, IQR 0.28-0.59). CONCLUSION The average scores of each blog post correlated strongly with gestalt ratings. However, neither ALiEM AIR nor METRIQ-8 showed higher reliability than gestalt. Improved reliability may be possible through rater training and instrument refinement.
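For readers unfamiliar with the single-measure ICC reported here, the following Python sketch computes a one-way random-effects ICC(1,1) from a (blog posts x raters) score matrix. The rating matrix is invented; the formula is the standard one-way ANOVA-based estimator, not the study's actual computation.

```python
import numpy as np

ratings = np.array([  # rows = blog posts, cols = raters (hypothetical scores)
    [6, 5, 7],
    [4, 5, 4],
    [7, 6, 6],
    [3, 4, 2],
    [5, 5, 6],
], dtype=float)

n, k = ratings.shape
grand = ratings.mean()
row_means = ratings.mean(axis=1)

# One-way ANOVA mean squares: between targets and within targets
ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))

icc_1_1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"single-measure ICC(1,1): {icc_1_1:.2f}")  # abstract reports ~0.37-0.41
```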
Affiliation(s)
- Brent Thoma
- Department of Emergency Medicine, College of Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Stefanie S Sebok-Syer
- Center for Education Research & Innovation, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Isabelle Colmers-Gray
- Department of Emergency Medicine, University of Alberta, Edmonton, Alberta, Canada
- Jonathan Sherbino
- Department of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
- Felix Ankel
- Department of Health Professions Education at HealthPartners Institute, Bloomington, Minnesota, USA
- N Seth Trueger
- Department of Emergency Medicine, Northwestern University, Chicago, Illinois, USA
- Andrew Grock
- Department of Emergency Medicine, University of California Los Angeles, Los Angeles, California, USA
- Marshall Siemens
- College of Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Michael Paddock
- Department of Emergency Medicine, University of Minnesota, Minneapolis, Minnesota, USA
- Eve Purdy
- Department of Emergency Medicine, Queen's University, Kingston, Ontario, Canada
- Teresa M Chan
- Department of Emergency Medicine, McMaster University, Hamilton, Ontario, Canada
40
Young JQ, Hasser C, Hung EK, Kusz M, O'Sullivan PS, Stewart C, Weiss A, Williams N. Developing End-of-Training Entrustable Professional Activities for Psychiatry: Results and Methodological Lessons. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2018; 93:1048-1054. [PMID: 29166349 DOI: 10.1097/acm.0000000000002058] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
PURPOSE To develop entrustable professional activities (EPAs) for psychiatry and to demonstrate an innovative, validity-enhancing methodology that may be relevant to other specialties. METHOD A national task force employed a three-stage process from May 2014 to February 2017 to develop EPAs for psychiatry. In stage 1, the task force used an iterative consensus-driven process to construct proposed EPAs. Each included a title, full description, and relevant competencies. In stage 2, the task force interviewed four nonpsychiatric experts in EPAs and further revised the EPAs. In stage 3, the task force performed a Delphi study of national experts in psychiatric education and assessment. All survey participants completed a brief training program on EPAs. Quantitative and qualitative analysis led to further modifications. Essentialness was measured on a five-point scale. EPAs were included if the content validity index was at least 0.8 and the lower end of the asymmetric confidence interval was not lower than 4.0. RESULTS Stages 1 and 2 yielded 24 and 14 EPAs, respectively. In stage 3, 31 of the 39 invited experts participated in both rounds of the Delphi study. Round 1 reduced the proposed EPAs to 13. Ten EPAs met the inclusion criteria in Round 2. CONCLUSIONS The final EPAs provide a strong foundation for competency-based assessment in psychiatry. Methodological features such as critique by nonpsychiatry experts, a national Delphi study with frame-of-reference training, and stringent inclusion criteria strengthen the content validity of the findings and may serve as a model for future efforts in other specialties.
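The Delphi inclusion rule described above lends itself to a short worked example. The Python sketch below applies the stated criteria to one EPA: retain it if its content validity index (the share of experts rating it 4 or 5 on the 5-point essentialness scale) is at least 0.8 and the lower end of an asymmetric confidence interval for the mean rating is not below 4.0. The 31 expert ratings are invented, and the bootstrap percentile interval is an assumed way of producing an asymmetric CI, not necessarily the study's method.

```python
import numpy as np

rng = np.random.default_rng(2)
ratings = np.array([5, 4, 5, 5, 4, 4, 5, 5, 4, 5, 4, 5, 5, 4, 4, 5,
                    5, 4, 5, 4, 5, 5, 4, 5, 4, 4, 5, 5, 4, 5, 4])  # n = 31 experts

cvi = (ratings >= 4).mean()  # content validity index

# Asymmetric 95% CI for the mean rating via bootstrap percentiles
boot_means = np.array([rng.choice(ratings, ratings.size, replace=True).mean()
                       for _ in range(10_000)])
lower = np.percentile(boot_means, 2.5)

include = cvi >= 0.8 and lower >= 4.0
print(f"CVI={cvi:.2f}, CI lower bound={lower:.2f}, include={include}")
```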
Affiliation(s)
- John Q Young
- J.Q. Young is professor, Department of Psychiatry, Zucker School of Medicine at Hofstra/Northwell, New York, New York. C. Hasser is assistant professor, Department of Psychiatry, UCSF School of Medicine, San Francisco, California. E.K. Hung is associate professor, Department of Psychiatry, UCSF School of Medicine, San Francisco, California. M. Kusz is research assistant, Department of Psychiatry, Hofstra Northwell School of Medicine, New York, New York. P.S. O'Sullivan is professor, Department of Medicine and Surgery, UCSF School of Medicine, San Francisco, California. C. Stewart is assistant professor, Department of Psychiatry, Georgetown School of Medicine, Washington, DC. A. Weiss is associate professor, Department of Psychiatry and Behavioral Sciences, Albert Einstein School of Medicine, New York, New York. N. Williams is professor, Department of Psychiatry, University of Iowa Carver College of Medicine, Iowa City, Iowa
41
Eva KW. Cognitive Influences on Complex Performance Assessment: Lessons from the Interplay between Medicine and Psychology. JOURNAL OF APPLIED RESEARCH IN MEMORY AND COGNITION 2018. [DOI: 10.1016/j.jarmac.2018.03.008] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
42
Mahmood O, Dagnæs J, Bube S, Rohrsted M, Konge L. Nonspecialist Raters Can Provide Reliable Assessments of Procedural Skills. JOURNAL OF SURGICAL EDUCATION 2018; 75:370-376. [PMID: 28716383 DOI: 10.1016/j.jsurg.2017.07.003] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/23/2017] [Revised: 06/26/2017] [Accepted: 07/01/2017] [Indexed: 06/07/2023]
Abstract
BACKGROUND Competency-based learning has become a crucial component of medical education, but challenges remain. The common perception is that specialist assessment is needed to evaluate procedural skills, which is difficult owing to the limited availability of faculty time. The aim of this study was to explore the validity of assessments of video-recorded procedures performed by nonspecialist raters. METHODS This study was a blinded observational trial. Twenty-three novices (senior medical students) and 9 experienced doctors were video recorded while each performed 2 flexible cystoscopies on patients. The recordings were anonymized, placed in random order and then rated by 2 experienced cystoscopists (specialist raters) and 2 medical students (nonspecialist raters). Flexible cystoscopy was chosen because it is a simple procedural skill that is crucial to master in a urology residency program. RESULTS The internal consistency of assessments was high: Cronbach's α = 0.93 and 0.95 for nonspecialist and specialist raters, respectively (p < 0.001 for both). The interrater reliability was significant (p < 0.001), with a Pearson's correlation of 0.77 for the nonspecialists and 0.75 for the specialists. Test-retest reliability showed the largest difference between the 2 groups: 0.59 for the nonspecialist raters and 0.38 for the specialist raters (p < 0.001). CONCLUSION Our study suggests that nonspecialist raters can provide reliable and valid assessments of video-recorded cystoscopies. This could make mastery learning and competency-based education more feasible.
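For readers unfamiliar with the reported statistics, the sketch below computes Cronbach's α and an interrater Pearson correlation on simulated data. The checklist structure (five items per recording) and all scores are invented; the abstract does not describe the instrument in enough detail to reproduce the study's analysis.
```python
import numpy as np

def cronbach_alpha(scores):
    """scores: recordings x items matrix for one rater; classic alpha formula."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(42)
skill = rng.normal(size=64)  # 32 participants x 2 cystoscopies each

# Each rater scores every recording on 5 hypothetical checklist items
rater_a = skill[:, None] + rng.normal(0, 0.5, size=(64, 5))
rater_b = skill[:, None] + rng.normal(0, 0.5, size=(64, 5))

print(f"Cronbach's alpha, rater A: {cronbach_alpha(rater_a):.2f}")

# Interrater reliability: Pearson correlation of the two raters' total scores
r = np.corrcoef(rater_a.sum(axis=1), rater_b.sum(axis=1))[0, 1]
print(f"Interrater Pearson r: {r:.2f}")
```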
Affiliation(s)
- Oria Mahmood: Copenhagen Academy for Medical Education and Simulation, Copenhagen, Capital Region, Denmark; University of Copenhagen, Copenhagen, Denmark
- Julia Dagnæs: Copenhagen Academy for Medical Education and Simulation, Copenhagen, Capital Region, Denmark; University of Copenhagen, Copenhagen, Denmark; Department of Urology, Rigshospitalet, Copenhagen, Denmark
- Sarah Bube: University of Copenhagen, Copenhagen, Denmark; Department of Urology, University Hospital Zealand, Roskilde, Denmark
- Lars Konge: Copenhagen Academy for Medical Education and Simulation, Copenhagen, Capital Region, Denmark; University of Copenhagen, Copenhagen, Denmark
43
de Jonge LPJWM, Timmerman AA, Govaerts MJB, Muris JWM, Muijtjens AMM, Kramer AWM, van der Vleuten CPM. Stakeholder perspectives on workplace-based performance assessment: towards a better understanding of assessor behaviour. ADVANCES IN HEALTH SCIENCES EDUCATION: THEORY AND PRACTICE 2017; 22:1213-1243. [PMID: 28155004 PMCID: PMC5663793 DOI: 10.1007/s10459-017-9760-7] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/07/2016] [Accepted: 01/24/2017] [Indexed: 05/13/2023]
Abstract
Workplace-Based Assessment (WBA) plays a pivotal role in present-day competency-based medical curricula. Validity in WBA mainly depends on how stakeholders (e.g. clinical supervisors and learners) use the assessments, rather than on the intrinsic qualities of instruments and methods. Current research on assessment in clinical contexts seems to imply that variable behaviours during performance assessment of both assessors and learners may well reflect their respective beliefs and perspectives towards WBA. We therefore performed a Q methodological study to explore perspectives underlying stakeholders' behaviours in WBA in a postgraduate medical training program. Five different perspectives on performance assessment were extracted: Agency, Mutuality, Objectivity, Adaptivity and Accountability. These perspectives reflect both differences and similarities in stakeholder perceptions and preferences regarding the utility of WBA. In comparing and contrasting the various perspectives, we identified two key areas of disagreement, specifically 'the locus of regulation of learning' (i.e., self-regulated versus externally regulated learning) and 'the extent to which assessment should be standardised' (i.e., tailored versus standardised assessment). Differing perspectives may variously affect stakeholders' acceptance, use and, consequently, the effectiveness of assessment programmes. Continuous interaction between all stakeholders is essential to monitor, adapt and improve assessment practices and to stimulate the development of a shared mental model. Better understanding of underlying stakeholder perspectives could be an important step in bridging the gap between psychometric and socio-constructivist approaches in WBA.
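Q methodology, mentioned above, factor-analyses correlations between persons rather than between items. Below is a minimal sketch of that core step, assuming a hypothetical statements-by-participants matrix of Q-sorts; the authors' actual extraction and rotation choices are not given in the abstract, so this is illustrative only.
```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 40 statements Q-sorted by 12 stakeholders, each sort
# a ranking from -4 (least agree) to +4 (most agree).
n_statements, n_people = 40, 12
sorts = rng.integers(-4, 5, size=(n_statements, n_people)).astype(float)

# 1. Correlate persons, not items: each column is one participant's Q-sort.
person_corr = np.corrcoef(sorts, rowvar=False)   # shape (12, 12)

# 2. Extract factors from the person correlation matrix (principal components).
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1]                # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 3. Keep factors with eigenvalue > 1 (Kaiser criterion, a common default).
k = int(np.sum(eigvals > 1.0))
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])

# Participants loading strongly on the same factor share a perspective
# (e.g. 'Agency' or 'Objectivity' in the paper's terms).
print(f"{k} factors retained; loadings shape {loadings.shape}")
```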
Affiliation(s)
- Laury P J W M de Jonge: Department of Family Medicine, FHML, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Angelique A Timmerman: Department of Family Medicine, FHML, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Marjan J B Govaerts: Department of Educational Research and Development, FHML, Maastricht University, Maastricht, The Netherlands
- Jean W M Muris: Department of Family Medicine, FHML, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Arno M M Muijtjens: Department of Educational Research and Development, FHML, Maastricht University, Maastricht, The Netherlands
- Anneke W M Kramer: Department of Family Medicine, Leiden University, Leiden, The Netherlands
- Cees P M van der Vleuten: Department of Educational Research and Development, FHML, Maastricht University, Maastricht, The Netherlands
44
Wilbur K. Does faculty development influence the quality of in-training evaluation reports in pharmacy? BMC MEDICAL EDUCATION 2017; 17:222. [PMID: 29157239 PMCID: PMC5697106 DOI: 10.1186/s12909-017-1054-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/12/2017] [Accepted: 11/02/2017] [Indexed: 06/02/2023]
Abstract
BACKGROUND In-training evaluation reports (ITERs) of student workplace-based learning are completed by clinical supervisors across various health disciplines. However, outside of medicine, the quality of submitted workplace-based assessments is largely uninvestigated. This study assessed the quality of ITERs in pharmacy and whether clinical supervisors could be trained to complete higher quality reports. METHODS A random sample of ITERs submitted in a pharmacy program during 2013-2014 was evaluated. These ITERs served as a historical control (control group 1) for comparison with ITERs submitted in 2015-2016 by clinical supervisors who participated in an interactive faculty development workshop (intervention group) and those who did not (control group 2). Two trained independent raters scored the ITERs using a previously validated nine-item scale assessing report quality, the Completed Clinical Evaluation Report Rating (CCERR). The scoring scale for each item is anchored at 1 ("not at all") and 5 ("exemplary"), with 3 categorized as "acceptable". RESULTS The mean CCERR score for reports completed after the workshop (22.9 ± 3.39) was not significantly different from that of prospective control group 2 (22.7 ± 3.63, p = 0.84) and was worse than that of historical control group 1 (37.9 ± 8.21, p = 0.001). Mean item scores were below acceptable thresholds for 5 of the 9 domains in control group 1, including documented evidence of specific examples to clearly explain weaknesses and concrete recommendations for student improvement. Mean item scores were below acceptable thresholds for 6 and 7 of the 9 domains in control group 2 and the intervention group, respectively. CONCLUSIONS This study is the first to use CCERR to evaluate ITER quality outside of medicine. Findings demonstrate low baseline CCERR scores in a pharmacy program that were not demonstrably changed by a faculty development workshop; strategies are identified to augment future rater training.
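The group comparisons reported above are simple two-sample tests on mean CCERR scores. Here is a sketch using scipy's summary-statistics t-test with the means and standard deviations quoted in the abstract; the group sizes are not reported, so the sample sizes below are placeholder assumptions.
```python
from scipy import stats

# Intervention group vs. historical control group 1 (mean ± SD from the
# abstract). Group sizes are NOT given; n = 20 per group is assumed here.
res = stats.ttest_ind_from_stats(
    mean1=22.9, std1=3.39, nobs1=20,   # post-workshop reports (assumed n)
    mean2=37.9, std2=8.21, nobs2=20,   # historical control 1 (assumed n)
    equal_var=False,                   # Welch's t-test for unequal variances
)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```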
Affiliation(s)
- Kerry Wilbur: College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar
45
Holmboe ES. Work-based Assessment and Co-production in Postgraduate Medical Training. GMS JOURNAL FOR MEDICAL EDUCATION 2017; 34:Doc58. [PMID: 29226226 PMCID: PMC5704603 DOI: 10.3205/zma001135] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 11/09/2016] [Revised: 03/15/2017] [Accepted: 05/09/2017] [Indexed: 05/24/2023]
Abstract
Assessment has always been an essential component of postgraduate medical education and for many years focused predominantly on various types of examinations. While examinations of medical knowledge and, more recently, of clinical skills with standardized patients can assess learner capability in controlled settings and provide a level of assurance for the public, persistent and growing concerns regarding quality of care and patient safety worldwide have raised the importance of and need for better work-based assessments. Work-based assessments, when done effectively, can more authentically capture the abilities of learners to actually provide safe, effective, patient-centered care. Furthermore, we have entered the era of interprofessional care, in which effective teamwork among multiple health care professionals is paramount. Work-based assessment methods are now essential in an interprofessional healthcare world. To better prepare learners for these newer competencies and the ever-growing complexity of healthcare, many postgraduate medical education systems across the globe have turned to outcomes-based models of education, codified through competency frameworks. This commentary provides a brief overview of key methods of work-based assessment, such as direct observation, multisource feedback, patient experience surveys and performance measures, that are needed in a competency-based world that places a premium on educational and clinical outcomes. However, the full potential of work-based assessments will only be realized if postgraduate learners play an active role in their own assessment program. This will require a substantial culture change, and culture change only occurs through actions and changed behaviors. Co-production offers a practical and philosophical approach to engaging postgraduate learners to be active, intrinsically motivated agents for their own professional development, help change the learning culture and contribute to improving programmatic assessment in postgraduate training.
Affiliation(s)
- Eric S. Holmboe: Accreditation Council for Graduate Medical Education, Chicago, USA
46
Kogan JR, Hatala R, Hauer KE, Holmboe E. Guidelines: The do's, don'ts and don't knows of direct observation of clinical skills in medical education. PERSPECTIVES ON MEDICAL EDUCATION 2017; 6:286-305. [PMID: 28956293 PMCID: PMC5630537 DOI: 10.1007/s40037-017-0376-7] [Citation(s) in RCA: 86] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
INTRODUCTION Direct observation of clinical skills is a key assessment strategy in competency-based medical education. The guidelines presented in this paper synthesize the literature on direct observation of clinical skills. The goal is to provide a practical list of Do's, Don'ts and Don't Knows about direct observation for supervisors who teach learners in the clinical setting and for educational leaders who are responsible for clinical training programs. METHODS We built consensus through an iterative approach in which each author, based on their medical education and research knowledge and expertise, independently developed a list of Do's, Don'ts, and Don't Knows about direct observation of clinical skills. Lists were compiled, discussed and revised. We then sought and compiled evidence to support each guideline and determine the strength of each guideline. RESULTS A final set of 33 Do's, Don'ts and Don't Knows is presented along with a summary of evidence for each guideline. Guidelines focus on two groups: individual supervisors and the educational leaders responsible for clinical training programs. Guidelines address recommendations for how to focus direct observation, select an assessment tool, promote high quality assessments, conduct rater training, and create a learning culture conducive to direct observation. CONCLUSIONS High frequency, high quality direct observation of clinical skills can be challenging. These guidelines offer important evidence-based Do's and Don'ts that can help improve the frequency and quality of direct observation. Improving direct observation requires focus not just on individual supervisors and their learners, but also on the organizations and cultures in which they work and train. Additional research to address the Don't Knows can help educators realize the full potential of direct observation in competency-based education.
Affiliation(s)
- Jennifer R Kogan: Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Rose Hatala: University of British Columbia, Vancouver, British Columbia, Canada
- Karen E Hauer: University of California San Francisco, San Francisco, CA, USA
- Eric Holmboe: Accreditation Council for Graduate Medical Education, Chicago, IL, USA
47
Renting N, Raat ANJ, Dornan T, Wenger-Trayner E, van der Wal MA, Borleffs JCC, Gans ROB, Jaarsma ADC. Integrated and implicit: how residents learn CanMEDS roles by participating in practice. MEDICAL EDUCATION 2017; 51:942-952. [PMID: 28485074 DOI: 10.1111/medu.13335] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/18/2016] [Revised: 09/14/2016] [Accepted: 03/06/2017] [Indexed: 06/07/2023]
Abstract
CONTEXT Learning outcomes for residency training are defined in competency frameworks such as the CanMEDS framework, which ultimately aim to better prepare residents for their future tasks. Although residents' training relies heavily on learning through participation in the workplace under the supervision of a specialist, it remains unclear how the CanMEDS framework informs practice-based learning and daily interactions between residents and supervisors. OBJECTIVES This study aimed to explore how the CanMEDS framework informs residents' practice-based training and interactions with supervisors. METHODS Constructivist grounded theory guided iterative data collection and analyses. Data were collected by direct observations of residents and supervisors, combined with formal and field interviews. We progressively arrived at an explanatory theory by coding and interpreting the data, building provisional theories, and engaging in continuous conversations. Data analysis drew on sensitising insights from communities of practice theory, which provided this study with a social learning perspective. RESULTS CanMEDS roles occurred in an integrated fashion and usually remained implicit during interactions. The language of CanMEDS was not adopted in clinical practice, which seemed to impede explicit learning interactions. The CanMEDS framework seemed to be only one of many factors influencing practice-based training: patient records and other documents were highly influential in daily activities and did not always correspond with CanMEDS roles. Additionally, the position of residents seemed too peripheral to allow them to learn certain aspects of the Health Advocate and Leader roles. CONCLUSIONS The CanMEDS framework did little to guide supervisors' and residents' practice or interactions. It was not explicitly used as a common language in which to talk about resident performance and roles. Therefore, the extent to which CanMEDS actually helps improve residents' learning trajectories and conversations between residents and supervisors about residents' progress remains questionable. This study highlights the fact that the reification of competency frameworks into the complexity of practice-based learning is not a straightforward exercise.
Affiliation(s)
- Nienke Renting: Centre for Education Development and Research in Health Professions (CEDAR), University Medical Centre Groningen, University of Groningen, Groningen, the Netherlands
- A N Janet Raat: Research Centre for Talent Development in Higher Education and Society, Hanze University of Applied Sciences, Groningen, the Netherlands
- Tim Dornan: Centre for Medical Education, Queen's University Belfast, Belfast, UK
- Martha A van der Wal: Centre for Education Development and Research in Health Professions (CEDAR), University Medical Centre Groningen, University of Groningen, Groningen, the Netherlands
- Jan C C Borleffs: Centre for Education Development and Research in Health Professions (CEDAR), University Medical Centre Groningen, University of Groningen, Groningen, the Netherlands
- Rijk O B Gans: Department of Internal Medicine, University Medical Centre Groningen, University of Groningen, Groningen, the Netherlands
- A Debbie C Jaarsma: Centre for Education Development and Research in Health Professions (CEDAR), University Medical Centre Groningen, University of Groningen, Groningen, the Netherlands
48
Favreau MA, Tewksbury L, Lupi C, Cutrer WB, Jokela JA, Yarris LM. Constructing a Shared Mental Model for Faculty Development for the Core Entrustable Professional Activities for Entering Residency. ACADEMIC MEDICINE: JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2017; 92:759-764. [PMID: 28557935 DOI: 10.1097/acm.0000000000001511] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
In 2014, the Association of American Medical Colleges identified 13 Core Entrustable Professional Activities for Entering Residency (Core EPAs), which are activities that entering residents might be expected to perform without direct supervision. This work included the creation of an interinstitutional concept group focused on faculty development efforts, as the processes and tools for teaching and assessing entrustability in undergraduate medical education (UME) are still evolving. In this article, the authors describe a conceptual framework for entrustment that they developed to better prepare all educators involved in entrustment decision making in UME. This framework applies to faculty with limited or longitudinal contact with medical students and to those who contribute to entrustment development or render summative entrustment decisions. The authors describe a shared mental model for entrustment that they developed, based on a critical synthesis of the EPA literature, to serve as a guide for UME faculty development efforts. This model includes four dimensions for Core EPA faculty development: (1) observation skills in authentic settings (workplace-based assessments), (2) coaching and feedback skills, (3) self-assessment and reflection skills, and (4) peer guidance skills developed through a community of practice. These dimensions form a conceptual foundation for meaningful faculty participation in entrustment decision making. The authors also differentiate between the UME learning environment and the graduate medical education learning environment to highlight distinct challenges and opportunities for faculty development in UME settings. They conclude with recommendations and research questions for future Core EPA faculty development efforts.
Affiliation(s)
- Michele A Favreau
- M.A. Favreau is associate professor of pediatrics, and adjunct associate professor, Division of Management, Oregon Health and Science University School of Medicine, Portland, Oregon. She was also associate dean for professional development and lifelong learning, Oregon Health and Science University School of Medicine, Portland, Oregon, at the time this work was done. L. Tewksbury is associate dean for student affairs and associate professor of pediatrics, New York University School of Medicine, New York, New York. C. Lupi is assistant dean for learning and teaching and professor of obstetrics and gynecology, Florida International University Herbert Wertheim College of Medicine, Miami, Florida. W.B. Cutrer is assistant professor of pediatrics, Vanderbilt University School of Medicine, Nashville, Tennessee. J.A. Jokela is professor and head, Department of Medicine, University of Illinois College of Medicine at Urbana-Champaign, Urbana, Illinois. L.M. Yarris is associate professor of emergency medicine and program director for emergency medicine, Oregon Health and Science University School of Medicine, Portland, Oregon
49
Shea JA, Norcini JJ. All the [training] world's a stage…. MEDICAL EDUCATION 2017; 51:458-460. [PMID: 28394067 DOI: 10.1111/medu.13269] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
50
Kogan JR, Conforti LN, Yamazaki K, Iobst W, Holmboe ES. Commitment to Change and Challenges to Implementing Changes After Workplace-Based Assessment Rater Training. ACADEMIC MEDICINE: JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2017; 92:394-402. [PMID: 27465231 DOI: 10.1097/acm.0000000000001319] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
PURPOSE Faculty development for clinical faculty who assess trainees is necessary to improve assessment quality and important for competency-based education. Little is known about what faculty plan to do differently after training. This study explored the changes faculty intended to make after workplace-based assessment rater training, their ability to implement change, predictors of change, and barriers encountered. METHOD In 2012, 45 outpatient internal medicine faculty preceptors (who supervised residents) from 26 institutions participated in rater training. They completed a commitment to change form listing up to five commitments and ranked (on a 1-5 scale) their motivation for and anticipated difficulty implementing each change. Three months later, participants were interviewed about their ability to implement change and barriers encountered. The authors used logistic regression to examine predictors of change. RESULTS Of 191 total commitments, the most common commitments focused on what faculty would change about their own teaching (57%) and increasing direct observation (31%). Of the 183 commitments for which follow-up data were available, 39% were fully implemented, 40% were partially implemented, and 20% were not implemented. Lack of time/competing priorities was the most commonly cited barrier. Higher initial motivation (odds ratio [OR] 2.02; 95% confidence interval [CI] 1.14, 3.57) predicted change. As anticipated difficulty increased, implementation became less likely (OR 0.67; 95% CI 0.49, 0.93). CONCLUSIONS While higher baseline motivation predicted change, multiple system-level barriers undermined ability to implement change. Rater-training faculty development programs should address how faculty motivation and organizational barriers interact and influence ability to change.
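The reported odds ratios come from a logistic regression of implementation on motivation and anticipated difficulty. Below is a minimal sketch on simulated data (the study's dataset is not public), showing how exponentiated coefficients yield odds ratios like those quoted.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 183  # commitments with follow-up data, per the abstract

motivation = rng.integers(1, 6, n).astype(float)   # 1-5 ratings
difficulty = rng.integers(1, 6, n).astype(float)

# Simulate implementation odds rising with motivation, falling with difficulty
logit = -1.0 + 0.7 * motivation - 0.4 * difficulty
implemented = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([motivation, difficulty]))
fit = sm.Logit(implemented.astype(float), X).fit(disp=0)

# Exponentiated coefficients are odds ratios, the quantity the abstract
# reports (e.g. OR 2.02 for motivation, OR 0.67 for difficulty).
odds_ratios = np.exp(fit.params[1:])
print(dict(zip(["motivation", "difficulty"], odds_ratios.round(2))))
```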
Affiliation(s)
- Jennifer R Kogan
- J.R. Kogan is professor of medicine and assistant dean of faculty development, Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania. L.N. Conforti is research associate for milestones evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois. When this study was conducted, she was research associate for academic programs, American Board of Internal Medicine, Philadelphia, Pennsylvania. K. Yamazaki is outcome assessment project associate, Accreditation Council for Graduate Medical Education, Chicago, Illinois. W. Iobst is vice president for academic and clinical affairs and vice dean, Commonwealth Medical College, Scranton, Pennsylvania. When this study was conducted, he was vice president of academic affairs, American Board of Internal Medicine, Philadelphia, Pennsylvania. E.S. Holmboe is senior vice president for milestones development and evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois. When this study was conducted, he was chief medical officer and senior vice president, American Board of Internal Medicine, Philadelphia, Pennsylvania