126. Chan T, Bakewell F, Orlich D, Sherbino J. Conflict prevention, conflict mitigation, and manifestations of conflict during emergency department consultations. Acad Emerg Med 2014; 21:308-13. [PMID: 24628756] [DOI: 10.1111/acem.12325]
Abstract
OBJECTIVES The objective was to determine the causes of and mitigating factors for conflict between emergency physicians and other colleagues during consultations. METHODS From March to September 2010, a total of 61 physicians (31 residents and 30 attendings from emergency medicine [EM], internal medicine, and general surgery) were interviewed about how junior learners should be taught about emergency department (ED) consultations. During these interviews, they were asked if and how conflict manifests during the ED consultation process. Two investigators reviewed the transcripts independently to generate themes related to conflict until saturation was reached. Disagreements were resolved by consensus. The trustworthiness of the analysis was ensured by generating an audit trail, which was subsequently audited by an investigator not involved with the initial analysis. This analysis was compared to previously proposed models of trust and conflict from the sociology and business literature. RESULTS All participants recalled some manifestation of conflict. There were 12 negative conflict-producing themes and 10 protective conflict-mitigating themes. When comparing these themes to a previously developed model of the domains of trust, each theme mapped to domains of the model. CONCLUSIONS Conflict affects the ED consultation process. Areas that lead to conflict are identified that map to previous models of trust and conflict. This work extends the current understanding about intradisciplinary conflict in the clinical realm. These new findings may improve the understanding of the nature of conflicts that occur and form the foundation for interventions that may decrease conflict during ED consultations.
127. Norman G, Sherbino J, Dore K, Wood T, Young M, Gaissmaier W, Kreuger S, Monteiro S. The etiology of diagnostic errors: a controlled trial of system 1 versus system 2 reasoning. Acad Med 2014; 89:277-84. [PMID: 24362377] [DOI: 10.1097/acm.0000000000000105]
Abstract
PURPOSE Diagnostic errors are thought to arise from cognitive biases associated with System 1 reasoning, which is rapid and unconscious. The primary hypothesis of this study was that the instruction to be slow and thorough will have no advantage in diagnostic accuracy over the instruction to proceed rapidly. METHOD Participants were second-year residents who volunteered after they had taken the Medical Council of Canada (MCC) Qualifying Examination Part II. Participants were tested at three Canadian medical schools (McMaster, Ottawa, and McGill) in 2010 (n = 96) and 2011 (n = 108). The intervention consisted of 20 computer-based internal medicine cases, with instructions either (1) to be as quick as possible but not make mistakes (the Speed cohort, 2010), or (2) to be careful, thorough, and reflective (the Reflect cohort, 2011). The authors examined accuracy scores on the 20 cases, time taken to diagnose cases, and MCC examination performance. RESULTS Overall accuracy was 44.5% in the Speed condition and 45.0% in the Reflect condition; the difference was not significant. The Speed cohort took an average of 69 seconds per case versus 89 seconds for the Reflect cohort (P < .001). In both cohorts, cases diagnosed incorrectly took an average of 17 seconds longer than cases diagnosed correctly. Diagnostic accuracy was moderately correlated with performance on both written and problem-solving components of the MCC licensure examination and inversely correlated with time. CONCLUSIONS The study demonstrates that simply encouraging physicians to slow down and increase attention to analytical thinking is insufficient to increase diagnostic accuracy.
128. Sherbino J, Kulasegaram K, Worster A, Norman GR. The reliability of encounter cards to assess the CanMEDS roles. Adv Health Sci Educ Theory Pract 2013; 18:987-96. [PMID: 23307097] [DOI: 10.1007/s10459-012-9440-6]
Abstract
The purpose of this study was to determine the reliability of a computer-based encounter card (EC) to assess medical students during an emergency medicine rotation. From April 2011 to March 2012, multiple physicians assessed an entire medical school class during their emergency medicine rotation using the CanMEDS framework. At the end of an emergency department shift, an EC was scored (1-10) for each student on Medical Expert, 2 additional Roles, and an overall score. Analysis of 1,819 ECs (155 of 186 students) revealed the following: Collaborator, Manager, Health Advocate, and Scholar were assessed on less than 25% of ECs. On average, each student was assessed 11 times, with an inter-rater reliability of 0.6. The largest source of variance was rater bias. A D-study showed that a minimum of 17 ECs was required for a reliability of 0.7. There were moderate to strong correlations between all Roles and the overall score, and the factor analysis revealed all items loading on a single factor, accounting for 87% of the variance. The global assessment of the CanMEDS Roles using ECs has significant variance in estimates of performance, derived from differences between raters. Some Roles are seldom selected for assessment, suggesting that raters have difficulty identifying related performance. Finally, the correlation and factor analyses demonstrate that raters are unable to discriminate among Roles and are basing judgments on an overall impression.
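The D-study projection in this abstract (inter-rater reliability 0.6 from 11 encounter cards, 17 cards needed for 0.7) can be reproduced with the classic Spearman-Brown prophecy formula, a simplified single-facet stand-in for a full generalizability-theory decision study. The function name and code are ours, a minimal sketch rather than the authors' analysis:

```python
def spearman_brown(rho_k: float, k: float, target: float) -> float:
    """Project how many observations are needed to reach a target
    reliability, given reliability rho_k observed over k observations."""
    # Step 1: recover the single-observation reliability rho_1
    # from rho_k = k*rho_1 / (1 + (k - 1)*rho_1).
    rho_1 = rho_k / (k - (k - 1) * rho_k)
    # Step 2: invert the prophecy formula for the target reliability.
    return target * (1 - rho_1) / (rho_1 * (1 - target))

# 11 encounter cards per student gave an inter-rater reliability of 0.6.
n = spearman_brown(0.6, 11, 0.7)
print(round(n))  # -> 17, matching the reported D-study
```

Under these single-facet assumptions the projection lands on the same 17 cards the paper reports.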
129
|
Chan T, Sabir K, Sanhan S, Sherbino J. Understanding the impact of residents' interpersonal relationships during emergency department referrals and consultations. J Grad Med Educ 2013; 5:576-81. [PMID: 24455004 PMCID: PMC3886454 DOI: 10.4300/jgme-d-12-00211.1] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/27/2012] [Revised: 01/02/2013] [Accepted: 02/24/2013] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Communicating with colleagues is a key physician competency. Yet few studies have sought to uncover the complex nature of relationships between referring and consulting physicians, which may be affected by the inherent relationships between the participants. OBJECTIVE Our study examines themes identified from discussions about communications and the role of relationships during the referral-consultation process. METHODS From March to September 2010, 30 residents (10 emergency medicine, 10 general surgery, 10 internal medicine) were interviewed using a semistructured focus group protocol. Two investigators independently reviewed the transcripts using inductive methods and grounded theory to generate themes (using codes for ease of analysis) until saturation was reached. Disagreements were resolved by consensus, yielding an inventory of themes and subthemes. Measures for ensuring trustworthiness of the analysis included generating an audit trail and external auditing of the material by investigators not involved with the initial analysis. RESULTS Two main relationship-related themes affected the referral-consultation process: familiarity and trust. Various subthemes were further delineated and studied in the context of pertinent literature. CONCLUSIONS Relationships between physicians have a powerful influence on the emergency department referral-consultation dynamic. The emergency department referral-consultation may be significantly altered by the familiarity and perceived trustworthiness of the referring and consulting physicians. Our proposed framework may further inform and improve instructional methods for teaching interpersonal communication. Most importantly, it may help junior learners understand inherent difficulties they may encounter during the referral process between emergency and consulting physicians.
130. Norman G, Monteiro S, Sherbino J. Is clinical cognition binary or continuous? Acad Med 2013; 88:1058-60. [PMID: 23899852] [DOI: 10.1097/acm.0b013e31829a3c32]
Abstract
A dominant theory of clinical reasoning is the so-called "dual processing theory," in which the diagnostic process may proceed through a rapid, unconscious, intuitive process (System 1) or a slow, conceptual, analytical process (System 2). Diagnostic errors are thought to arise primarily from cognitive biases originating in System 1. In this issue, Custers points out that this model is unnecessarily restrictive and that it is more likely that diagnostic tasks may proceed through a variety of mental strategies ranging from "analytical" to "intuitive." The authors of this commentary agree that the notion that System 1 and System 2 processes are somehow in competition and will necessarily lead to different conclusions is unnecessarily restrictive. On the other hand, they argue that there is substantial evidence in support of a dual processing model, and that most objections to dual processing theory can be easily accommodated by simply presuming that both processes operate in concert and that solving any task may rely to varying degrees on both processes.
131. Sherbino J, Chan T, Schiff K. The reverse classroom: lectures on your own and homework with faculty. Can J Emerg Med 2013; 15:178-180. [PMID: 23663466]
Abstract
With the arrival of a technologically proficient generation of learners (often described with the moniker "digital natives") into Canadian medical schools and residency programs, there is an increasing trend toward harnessing technology to enhance education and increase teaching efficiency. We present an instructional method that allows medical educators to "reverse" the traditional classroom paradigm. Imagine that prior to an academic half-day session, learners watch an e-lecture on their own time; then during class, they do "homework" with tailored consultations from a content expert. The reverse classroom uses simple, readily accessible technology to allow faculty members to engage learners in high-order learning such as information analysis and synthesis. With this instructional method, the inefficient, repetitious delivery of recurring core lectures is no longer required. The reverse classroom is an effective instructional method. Using this technique, learners engage in high-order learning and interaction with teachers, and teachers are able to optimally share their expertise.
132. Sherbino J, Norman GR. In reply to Petrie and Campbell. Acad Med 2013; 88:557-558. [PMID: 23611966] [DOI: 10.1097/acm.0b013e31828ffb05]
133. Sherbino J, Norman GR. In the real world, faster diagnoses are not necessarily more accurate. Acad Med 2013; 88:298. [PMID: 23442426] [DOI: 10.1097/acm.0b013e3182816880]
134. Ilgen JS, Sherbino J, Cook DA. Technology-enhanced simulation in emergency medicine: a systematic review and meta-analysis. Acad Emerg Med 2013; 20:117-27. [PMID: 23406070] [DOI: 10.1111/acem.12076]
Abstract
OBJECTIVES Technology-enhanced simulation is used frequently in emergency medicine (EM) training programs. Evidence for its effectiveness, however, remains unclear. The objective of this study was to evaluate the effectiveness of technology-enhanced simulation for training in EM and identify instructional design features associated with improved outcomes by conducting a systematic review. METHODS The authors systematically searched MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Original research articles in any language were selected if they compared simulation to no intervention or another educational activity for the purposes of training EM health professionals (including student and practicing physicians, midlevel providers, nurses, and prehospital providers). Reviewers evaluated study quality and abstracted information on learners, instructional design (curricular integration, feedback, repetitive practice, mastery learning), and outcomes. RESULTS From a collection of 10,903 articles, 85 eligible studies enrolling 6,099 EM learners were identified. Of these, 56 studies compared simulation to no intervention, 12 compared simulation with another form of instruction, and 19 compared two forms of simulation. Effect sizes were pooled using a random-effects model. Heterogeneity among these studies was large (I² ≥ 50%). Among studies comparing simulation to no intervention, pooled effect sizes were large (range = 1.13 to 1.48) for knowledge, time, and skills and small to moderate for behaviors with patients (0.62) and patient effects (0.43; all p < 0.02 except patient effects, p = 0.12). Among comparisons between simulation and other forms of instruction, the pooled effect sizes were small (≤ 0.33) for knowledge, time, and process skills (all p > 0.1). Qualitative comparisons of different simulation curricula are limited, although feedback, mastery learning, and higher fidelity were associated with improved learning outcomes. CONCLUSIONS Technology-enhanced simulation for EM learners is associated with moderate or large favorable effects in comparison with no intervention and generally small and nonsignificant benefits in comparison with other instruction. Future research should investigate the features that lead to effective simulation-based instructional design.
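The random-effects pooling this abstract describes can be sketched in a few lines. This is a generic DerSimonian-Laird implementation, not the authors' analysis code, and the effect sizes and variances in the usage line are hypothetical:

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling with I^2 heterogeneity."""
    w = [1.0 / v for v in variances]                       # inverse-variance weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0          # I^2 as a fraction
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, i2

# Hypothetical standardized mean differences and their variances:
pooled, se, i2 = pool_random_effects([1.2, 0.9, 1.6], [0.04, 0.06, 0.05])
```

When between-study heterogeneity (tau²) is zero the result collapses to the fixed-effect inverse-variance mean, which is why the pooled estimate always lies within the range of the study effects.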
135. Sherbino J, Norman GR, Gaissmaier W. Clinical decision making: the need for meaningful research. Acad Med 2013; 88:150-151. [PMID: 23361019] [DOI: 10.1097/acm.0b013e31827b2941]
136. Ilgen JS, Humbert AJ, Kuhn G, Hansen ML, Norman GR, Eva KW, Charlin B, Sherbino J. Assessing diagnostic reasoning: a consensus statement summarizing theory, practice, and future needs. Acad Emerg Med 2012; 19:1454-61. [PMID: 23279251] [DOI: 10.1111/acem.12034]
Abstract
Assessment of an emergency physician (EP)'s diagnostic reasoning skills is essential for effective training and patient safety. This article summarizes the findings of the diagnostic reasoning assessment track of the 2012 Academic Emergency Medicine consensus conference "Education Research in Emergency Medicine: Opportunities, Challenges, and Strategies for Success." Existing theories of diagnostic reasoning, as they relate to emergency medicine (EM), are outlined. Existing strategies for the assessment of diagnostic reasoning are described. Based on a review of the literature, expert thematic analysis, and iterative consensus agreement during the conference, this article summarizes current assessment gaps and prioritizes future research questions concerning the assessment of diagnostic reasoning in EM.
137. Sherbino J. Brief educational reports: a new manuscript category. Can J Emerg Med 2012; 14:325. [PMID: 23131475]
138. Sherbino J, Dore KL, Wood TJ, Young ME, Gaissmaier W, Kreuger S, Norman GR. The relationship between response time and diagnostic accuracy. Acad Med 2012; 87:785-91. [PMID: 22534592] [DOI: 10.1097/acm.0b013e318253acbd]
Abstract
PURPOSE Psychologists theorize that cognitive reasoning involves two distinct processes: System 1, which is rapid, unconscious, and contextual, and System 2, which is slow, logical, and rational. According to the literature, diagnostic errors arise primarily from System 1 reasoning, and therefore they are associated with rapid diagnosis. This study tested whether accuracy is associated with shorter or longer times to diagnosis. METHOD Immediately after the 2010 administration of the Medical Council of Canada Qualifying Examination (MCCQE) Part II at three test centers, the authors recruited participants, who read and diagnosed a series of 25 written cases of varying difficulty. The authors computed accuracy and response time (RT) for each case. RESULTS Seventy-five Canadian medical graduates (of 95 potential participants) participated. The overall correlation between RT and accuracy was -0.54; accuracy, then, was strongly associated with more rapid RT. This negative relationship with RT held for 23 of 25 cases individually and overall when the authors controlled for participants' knowledge, as judged by their MCCQE Part I and II scores. For 19 of 25 cases, accuracy on each case was positively related to experience with that specific diagnosis. A participant's performance on the test overall was significantly correlated with his or her performance on both the MCCQE Part I and II. CONCLUSIONS These results are inconsistent with clinical reasoning models that presume that System 1 reasoning is necessarily more error prone than System 2. These results suggest instead that rapid diagnosis is accurate and relates to other measures of competence.
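The RT-accuracy relationship reported above (overall r = -0.54) is a plain Pearson correlation. A minimal sketch with hypothetical data; the real study used 25 cases and 75 participants, and these numbers are only illustrative:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant averages: faster responses (lower RT in
# seconds) paired with higher proportion correct, giving a negative r
# in the spirit of the reported -0.54.
rt = [55, 60, 70, 80, 95]
acc = [0.70, 0.65, 0.55, 0.50, 0.40]
r = pearson_r(rt, acc)
```

A negative r here means shorter response times go with higher accuracy, which is the direction of the study's finding.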
139. Harris KA, Allen T, Bullock G, Dath D, Sherbino J, Frank JR. Royal College white papers: assessment of training. The authors reply. Can J Surg 2012; 55:E5. [DOI: 10.1503/cjs.006512]
140. Sherbino J, Norman G. Black balls and diagnostic reasoning. Acad Med 2012; 87:2; author reply 2. [PMID: 22201625] [DOI: 10.1097/acm.0b013e31823a91e2]
141. Sherbino J, Frank JR, Flynn L, Snell L. "Intrinsic Roles" rather than "armour": renaming the "non-medical expert roles" of the CanMEDS framework to match their intent. Adv Health Sci Educ Theory Pract 2011; 16:695-7. [PMID: 21850502] [DOI: 10.1007/s10459-011-9318-z]
142. Kessler C, Woods R, Chan TM, Sherbino J. A call to action in consultation training. Can J Emerg Med 2011; 13:361. [DOI: 10.2310/8000.2011.110596]
143. Worster A, Kulasegaram K, Carpenter CR, Vallera T, Upadhye S, Sherbino J, Haynes RB. Consensus conference follow-up: inter-rater reliability assessment of the Best Evidence in Emergency Medicine (BEEM) rater scale, a medical literature rating tool for emergency physicians. Acad Emerg Med 2011; 18:1193-200. [PMID: 22092904] [DOI: 10.1111/j.1553-2712.2011.01214.x]
Abstract
BACKGROUND Studies published in general and specialty medical journals have the potential to improve emergency medicine (EM) practice, but there can be delayed awareness of this evidence because emergency physicians (EPs) are unlikely to read most of these journals. Also, not all published studies are intended for or ready for clinical practice application. The authors developed "Best Evidence in Emergency Medicine" (BEEM) to ameliorate these problems by searching for, identifying, appraising, and translating potentially practice-changing studies for EPs. An initial step in the BEEM process is the BEEM rater scale, a novel tool for EPs to collectively evaluate the relative clinical relevance of EM-related studies found in more than 120 journals. The BEEM rater process was designed to serve as a clinical relevance filter to identify those studies with the greatest potential to affect EM practice. Therefore, only those studies identified by BEEM raters as having the highest clinical relevance are selected for the subsequent critical appraisal process and, if found methodologically sound, are promoted as the best evidence in EM. OBJECTIVES The primary objective was to measure inter-rater reliability (IRR) of the BEEM rater scale. Secondary objectives were to determine the minimum number of EP raters needed for the BEEM rater scale to achieve acceptable reliability and to compare performance of the scale against a previously published evidence rating system, the McMaster Online Rating of Evidence (MORE), in an EP population. METHODS The authors electronically distributed the title, conclusion, and a PubMed link for 23 recently published studies related to EM to a volunteer group of 134 EPs. The volunteers answered two demographic questions and rated the articles using one of two randomly assigned seven-point Likert scales, the BEEM rater scale (n = 68) or the MORE scale (n = 66), over two separate administrations. The IRR of each scale was measured using generalizability theory. RESULTS The IRR of the BEEM rater scale ranged from 0.90 (95% confidence interval [CI] = 0.86 to 0.93) to 0.92 (95% CI = 0.89 to 0.94) across administrations. Decision studies showed a minimum of 12 raters is required for acceptable reliability of the BEEM rater scale. The IRR of the MORE scale was 0.82 to 0.84. CONCLUSIONS The BEEM rater scale is a highly reliable, single-question tool for a small number of EPs to collectively rate the relative clinical relevance within the specialty of EM of recently published studies from a variety of medical journals. It compares favorably with the MORE system because it achieves a high IRR despite simply requiring raters to read each article's title and conclusion.
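The decision study mentioned above projects how reliability grows as raters are added. A minimal sketch of that projection follows; the variance components are hypothetical, chosen only so the illustration also lands on a 12-rater panel, and are not the study's actual estimates:

```python
def g_coefficient(var_object: float, var_residual: float, n_raters: int) -> float:
    """Generalizability coefficient for the mean of n_raters ratings:
    object-of-measurement variance over itself plus residual error,
    with the residual shrinking as 1/n_raters."""
    return var_object / (var_object + var_residual / n_raters)

def raters_needed(var_object: float, var_residual: float, target: float) -> int:
    """Smallest panel size whose projected reliability reaches target."""
    n = 1
    while g_coefficient(var_object, var_residual, n) < target:
        n += 1
    return n

# Hypothetical components: article (object) variance 1.0, residual 3.0.
print(raters_needed(1.0, 3.0, 0.80))  # -> 12
```

The same loop works for any target; doubling the residual variance roughly doubles the panel size needed for a fixed target.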
144. Sherbino J, Yip S, Dore KL, Siu E, Norman GR. The effectiveness of cognitive forcing strategies to decrease diagnostic error: an exploratory study. Teach Learn Med 2011; 23:78-84. [PMID: 21240788] [DOI: 10.1080/10401334.2011.536897]
Abstract
BACKGROUND Cognitive forcing strategies, a form of metacognition, have been advocated as a strategy to prevent diagnostic error. Increasingly, curricula are being implemented in medical training to address this error. Yet there is no experimental evidence that these curricula are effective. DESCRIPTION This was an exploratory, prospective study using consecutive enrollment of 56 senior medical students during their emergency medicine rotation. Students received interactive, standardized cognitive forcing strategy training. EVALUATION Using a cross-over design to assess transfer between similar (to instructional cases) and novel diagnostic cases, students were evaluated on 6 test cases. Forty-seven students were immediately tested and 9 were tested 2 weeks later. Data were analyzed using descriptive statistics and a McNemar chi-square test. CONCLUSIONS This is the first study to explore the impact of cognitive forcing strategy training on diagnostic error. Our preliminary findings suggest that application and retention are poor. Further large studies are required to determine if transfer across diagnostic formats occurs.
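The paired analysis named in this abstract is McNemar's chi-square, which compares the two discordant cells of a paired binary outcome (e.g. a case diagnosed correctly in one format but not the other). A generic sketch with hypothetical counts, not the study's data:

```python
def mcnemar_chi2(b: int, c: int, correction: bool = True) -> float:
    """McNemar's chi-square for paired binary outcomes, where b and c
    count the two kinds of discordant pairs. The Yates continuity
    correction is applied by default."""
    num = (abs(b - c) - 1) ** 2 if correction else (b - c) ** 2
    return num / (b + c)

# Hypothetical discordant counts across similar vs. novel test cases:
stat = mcnemar_chi2(15, 5)  # compare against chi-square with 1 df
```

Only the discordant pairs carry information here; concordant pairs (correct in both formats, or wrong in both) drop out of the statistic entirely.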
145. Sherbino J, Snell L, Dath D, Dojeiji S, Abbott C, Frank JR. A national clinician-educator program: a model of an effective community of practice. Med Educ Online 2010; 15:5356. [PMID: 21151594] [PMCID: PMC3000230] [DOI: 10.3402/meo.v15i0.5356]
Abstract
BACKGROUND The increasing complexity of medical training often requires faculty members with educational expertise to address issues of curriculum design, instructional methods, assessment, program evaluation, faculty development, and educational scholarship, among others. DISCUSSION In 2007, The Royal College of Physicians & Surgeons of Canada responded to this need by establishing the first national clinician-educator program. We define a clinician-educator and describe the development of the program. Adopting a construct from the business community, we use a community of practice framework to describe the benefits (with examples) of this program and challenges in developing it. The benefits of the clinician-educator program include: improved educational problem solving, recognition of educational needs and development of new projects, enhanced personal educational expertise, maintenance of professional satisfaction and retention of group members, a positive influence within the Royal College, and a positive influence within other Canadian academic institutions. SUMMARY Our described experience of a social reorganization - a community of practice - suggests that the organizational and educational benefits of a national clinician-educator program are not theoretical, but real.
146. Sherbino J. Do antiviral medications improve recovery in patients with Bell's palsy? Ann Emerg Med 2010; 55:475-6. [DOI: 10.1016/j.annemergmed.2009.12.009]
147. Campbell C, Silver I, Sherbino J, Cate OT, Holmboe ES. Competency-based continuing professional development. Med Teach 2010; 32:657-62. [PMID: 20662577] [DOI: 10.3109/0142159x.2010.500708]
Abstract
Competence is traditionally viewed as the attainment of a static set of attributes rather than a dynamic process in which physicians continuously use their practice experiences to "progress in competence" toward the attainment of expertise. A competency-based continuing professional development (CPD) model is premised on a set of learning competencies that include the ability to (a) use practice information to identify learning priorities and to develop and monitor CPD plans; (b) access information sources for innovations in development and new evidence that may potentially be integrated into practice; (c) establish a personal knowledge management system to store and retrieve evidence and to select and manage learning projects; (d) construct questions, search for evidence, and record and track conclusions for practice; and (e) use tools and processes to measure competence and performance and develop action plans to enhance practice. Competency-based CPD emphasizes self-directed learning processes and promotes the role of assessment as a professional expectation and obligation. Various approaches to defining general competencies for practice require the creation of specific performance metrics to be meaningful and relevant to the lifelong learning strategies of physicians. This paper describes the assumptions, advantages, and challenges of establishing a CPD system focused on competencies that improve physician performance and the quality and safety of patient care. Implications for competency-based CPD are discussed from an individual and organizational perspective, and a model to bridge the transition from residency to practice is explored.
148. Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Med Teach 2010; 32:676-82. [PMID: 20662580] [DOI: 10.3109/0142159x.2010.500704]
Abstract
Competency-based medical education (CBME), by definition, necessitates a robust and multifaceted assessment system. Assessment and the judgments or evaluations that arise from it are important at the level of the trainee, the program, and the public. When designing an assessment system for CBME, medical education leaders must attend to the context of the multiple settings where clinical training occurs. CBME further requires assessment processes that are more continuous and frequent, criterion-based, developmental, work-based where possible, use assessment methods and tools that meet minimum requirements for quality, use both quantitative and qualitative measures and methods, and involve the wisdom of group process in making judgments about trainee progress. Like all changes in medical education, CBME is a work in progress. Given the importance of assessment and evaluation for CBME, the medical education community will need more collaborative research to address several major challenges in assessment, including "best practices" in the context of systems and institutional culture and how to best to train faculty to be better evaluators. Finally, we must remember that expertise, not competence, is the ultimate goal. CBME does not end with graduation from a training program, but should represent a career that includes ongoing assessment.
149. Frank JR, Snell LS, Cate OT, Holmboe ES, Carraccio C, Swing SR, Harris P, Glasgow NJ, Campbell C, Dath D, Harden RM, Iobst W, Long DM, Mungroo R, Richardson DL, Sherbino J, Silver I, Taber S, Talbot M, Harris KA. Competency-based medical education: theory to practice. Med Teach 2010; 32:638-45. [PMID: 20662574] [DOI: 10.3109/0142159x.2010.501190]
Abstract
Although competency-based medical education (CBME) has attracted renewed interest in recent years among educators and policy-makers in the health care professions, there is little agreement on many aspects of this paradigm. We convened a unique partnership - the International CBME Collaborators - to examine conceptual issues and current debates in CBME. We engaged in a multi-stage group process and held a consensus conference with the aim of reviewing the scholarly literature of competency-based medical education, identifying controversies in need of clarification, proposing definitions and concepts that could be useful to educators across many jurisdictions, and exploring future directions for this approach to preparing health professionals. In this paper, we describe the evolution of CBME from the outcomes movement in the 20th century to a renewed approach that, focused on accountability and curricular outcomes and organized around competencies, promotes greater learner-centredness and de-emphasizes time-based curricular design. In this paradigm, competence and related terms are redefined to emphasize their multi-dimensional, dynamic, developmental, and contextual nature. CBME therefore has significant implications for the planning of medical curricula and will have an important impact in reshaping the enterprise of medical education. We elaborate on this emerging CBME approach and its related concepts, and invite medical educators everywhere to enter into further dialogue about the promise and the potential perils of competency-based medical curricula for the 21st century.
150. Iobst WF, Sherbino J, Cate OT, Richardson DL, Dath D, Swing SR, Harris P, Mungroo R, Holmboe ES, Frank JR. Competency-based medical education in postgraduate medical education. Med Teach 2010; 32:651-6. [PMID: 20662576] [DOI: 10.3109/0142159x.2010.500709]
Abstract
With the introduction of Tomorrow's Doctors in 1993, medical education began the transition from a time- and process-based system to a competency-based training framework. Implementing competency-based training in postgraduate medical education poses many challenges but ultimately requires a demonstration that the learner is truly competent to progress in training or to the next phase of a professional career. Making this transition requires change at virtually all levels of postgraduate training. Key components of this change include the development of valid and reliable assessment tools such as work-based assessment using direct observation, frequent formative feedback, and learner self-directed assessment; active involvement of the learner in the educational process; and intensive faculty development that addresses curricular design and the assessment of competency.