1
Abbasi M, Shirazi M, Torkmandi H, Homayoon S, Abdi M. Impact of teaching, learning, and assessment of medical law on cognitive, affective and psychomotor skills of medical students: a systematic review. BMC Medical Education 2023; 23:703. PMID: 37752500; PMCID: PMC10523676; DOI: 10.1186/s12909-023-04695-2
Abstract
BACKGROUND It is necessary to improve medical students' legal cognitive, affective, and psychomotor skills to prevent legal problems in the medical profession. Choosing appropriate teaching and assessment methods is crucial in this regard. This study aimed to investigate the impact of teaching, learning, and assessment of medical law on the cognitive, affective, and psychomotor skills of medical students. METHODS A systematic review was conducted in the PubMed, Embase, and Web of Science databases and the Google Scholar search engine, following MECIR, PRISMA, and AMEE Guide No. 94, covering 1980 to 30 December 2022. Nineteen articles met the inclusion criteria. The MERSQI checklist was used to assess the quality of the articles, and URSEC (theoretical underpinning of the development, resources required, setting, educational methods employed, and content) was used to assess the risk of educational bias. RESULTS Internship courses called Medical Education Humanities and Society (MESH), clinical scenario design, seminars and small group discussions, web-based interactive training, legal training courses, PBL, and mind maps have been used to improve the medico-legal knowledge of medical students. A MESH clerkship, simulation of a legal event, a medico-legal advocacy program based on interdisciplinary education, group discussion, integration, and court-based learning were used to improve student attitudes. Multidisciplinary training, small group discussions after a seminar, mock trial competitions, and interdisciplinary education were used to teach psychomotor skills. All studies except one on knowledge reported positive effects of legal education on students' knowledge, attitudes, and legal performance. Written assessments were used for the cognitive and affective domains, while performance was assessed by OSCE, simulated court, and evaluation of patient referrals. CONCLUSION Few studies have examined the cognitive, affective, and legal psychomotor skills of medical students. The literature has not yet fully explored the higher levels of the affective and psychomotor domains, which points to a gap in this area. Given that medico-legal problems can be prevented through proper education and assessment, it is recommended that this area be considered a research priority and that effective educational policies be adopted.
Affiliation(s)
- Mahmoud Abbasi
- Medical Ethics and Law Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mandana Shirazi
- Department of Medical Education, Faculty of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Hojjat Torkmandi
- Medical Ethics and Law Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Department of Nursing, Zanjan University of Medical Sciences, Zanjan, Iran
- Mohammad Abdi
- Medical Ethics and Law Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Department of Medical Education, Faculty of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Department of Nursing, Zanjan University of Medical Sciences, Zanjan, Iran
2
Long S, Rodriguez C, St-Onge C, Tellier PP, Torabi N, Young M. Factors affecting perceived credibility of assessment in medical education: a scoping review. Advances in Health Sciences Education: Theory and Practice 2022; 27:229-262. PMID: 34570298; DOI: 10.1007/s10459-021-10071-w
Abstract
Assessment is more educationally effective when learners engage with assessment processes and perceive the feedback received as credible. With the goal of optimizing the educational value of assessment in medical education, we mapped the primary literature to identify factors that may affect a learner's perceptions of the credibility of assessment and assessment-generated feedback (i.e., scores or narrative comments). For this scoping review, search strategies were developed and executed in five databases. Eligible articles were primary research studies that focused on medical learners (i.e., medical students to post-graduate fellows), discussed assessment of individual learners, and reported on perceived credibility in the context of assessment or assessment-generated feedback. We identified 4705 articles published between 2000 and November 16, 2020. Abstracts were screened by two reviewers; disagreements were adjudicated by a third reviewer. Full-text review resulted in 80 articles included in this synthesis. We identified three sets of intertwined factors that affect learners' perceived credibility of assessment and assessment-generated feedback: (i) elements of an assessment process, (ii) learners' level of training, and (iii) context of medical education. Medical learners make judgments regarding the credibility of assessments and assessment-generated feedback, and these judgments are influenced by a variety of individual, process, and contextual factors. Judgments of credibility appear to influence what information will or will not be used to improve later performance. For assessment to be educationally valuable, the design and use of assessment-generated feedback should consider how learners interpret, use, or discount it.
Affiliation(s)
- Stephanie Long
- Department of Family Medicine, McGill University, Montreal, QC, Canada
- Charo Rodriguez
- Department of Family Medicine, McGill University, Montreal, QC, Canada
- Christina St-Onge
- Faculty of Medicine and Health Sciences, University of Sherbrooke, Sherbrooke, QC, Canada
- Nazi Torabi
- Science Collections, University of Toronto Libraries, Toronto, ON, Canada
- Meredith Young
- Institute of Health Sciences Education, McGill University, 1110 Pine Ave West, Montreal, QC, H3A 1A3, Canada
- Department of Medicine, McGill University, Montreal, QC, Canada
3
Lai H, Ameli N, Patterson S, Senior A, Lunardon D. Development of an electronic learning progression dashboard to monitor student clinical experiences. Journal of Dental Education 2022; 86:759-765. PMID: 34989405; DOI: 10.1002/jdd.12871
Abstract
INTRODUCTION Clinical experience tracking mechanisms for students at dental schools provide patient assignment, student experience, and learning progression feedback. The purpose of this study was to evaluate dental students' clinical experiences following the implementation of a learning progression dashboard (LPD). METHODS After an electronic LPD was developed and deployed using PHP, a secondary data analysis of dental students' clinical experiences from 2017-2019 was conducted. Student experience was compared between the year before continuous use of the LPD and the first year of its use. LPD data contained the required clinical procedures dentistry students must perform across all disciplines and the number of planned, in-progress, and completed tasks each student had accomplished. Using these two time points, the students' experiences were compared. Univariate statistics and independent t-tests were conducted in R to detect differences in the number and categories of codes. RESULTS The number and category of codes showed significant differences between the academic years 2017-2018 and 2018-2019 for both third- and fourth-year dental students after one and two terms. Overall, students recorded 26% more treatment codes and experienced 26% more code categories than in the previous year. CONCLUSION Applying information management methods such as dashboards can better inform educators about student clinical experiences and improve clinical learning outcomes for students.
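The abstract above reports independent t-tests run in R on per-student code counts. As a rough illustration only, a Welch's independent-samples t statistic can be sketched in plain Python; the per-student counts below are invented, not the study's data:

```python
from math import sqrt

def welch_t(a, b):
    """Welch's independent-samples t statistic and degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb                        # squared standard error
    t = (ma - mb) / sqrt(se2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical treatment-code counts per student, before vs. after the dashboard
before = [40, 42, 38, 45, 41, 39]
after_ = [50, 55, 48, 52, 51, 49]
t, df = welch_t(after_, before)
```

A positive t here would indicate higher code counts in the dashboard year, mirroring the direction of the reported result.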
Affiliation(s)
- Hollis Lai
- School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Nazila Ameli
- School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Steven Patterson
- School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Anthea Senior
- School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Doris Lunardon
- School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
4
Heggarty P, Teague PA, Alele F, Adu M, Malau-Aduli BS. Role of formative assessment in predicting academic success among GP registrars: a retrospective longitudinal study. BMJ Open 2020; 10:e040290. PMID: 33234642; PMCID: PMC7689087; DOI: 10.1136/bmjopen-2020-040290
Abstract
OBJECTIVES The James Cook University General Practice Training (JCU GPT) programme's internal formative exams were compared with the Royal Australian College of General Practitioners (RACGP) pre-entry exams to determine their ability to predict final performance in the RACGP fellowship exams. DESIGN A retrospective longitudinal study. SETTING General Practice (GP) trainees enrolled between 2016 and 2019 at a Registered Training Organisation in regional Queensland, Australia. PARTICIPANTS 376 GP trainees enrolled in the training programme. EXPOSURE MEASURES The pre-entry exams were the Multiple-Mini Interview (MMI), Situational Judgement Test (SJT) and Candidate Assessment and Applied Knowledge Test. The internal formative exams comprised multiple choice questions (MCQ1 and MCQ2), short answer questions, clinical skills and clinical reasoning. PRIMARY OUTCOME MEASURES The college exams were the Applied Knowledge Test (AKT), Key Feature Problems (KFP) and Objective Structured Clinical Examination (OSCE). RESULTS Correlations (r), coefficients of determination (R2) and odds ratios (OR) were used to estimate the strength of relationships and the precision of predictive accuracy. The SJT and MMI were moderately (r=0.13 to 0.31), and MCQ1 and MCQ2 highly (r=0.37 to 0.53), correlated with all college exams (p<0.05 to p<0.01), with R2 ranging from 0.070 to 0.376. MCQ1 was predictive of failure in all college exams (AKT: OR=2.32; KFP: OR=3.99; OSCE: OR=3.46), while MCQ2 predicted failure in the AKT (OR=2.83) and KFP (OR=3.15). CONCLUSION We conclude that the internal MCQ formative exams predict performance in the RACGP fellowship exams. We propose that our formative assessment tools could be used as academic markers for early identification of potentially struggling trainees.
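The abstract above summarizes its findings with correlations (r), coefficients of determination (R2), and odds ratios (OR). These quantities can be sketched in plain Python; the trainee scores and 2x2 counts below are invented for illustration and are not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation; for a simple regression, R2 is just r squared."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def odds_ratio(fail_low, pass_low, fail_high, pass_high):
    """Odds of failing the college exam for low vs. high formative scorers."""
    return (fail_low / pass_low) / (fail_high / pass_high)

# Hypothetical formative (MCQ-style) and college (AKT-style) scores, 8 trainees
mcq1 = [55, 60, 62, 70, 71, 78, 80, 85]
akt = [50, 58, 55, 68, 72, 74, 79, 88]
r = pearson_r(mcq1, akt)
r2 = r * r  # proportion of variance in AKT explained by MCQ1
```

With counts such as `odds_ratio(10, 20, 5, 40)`, an OR of 4.0 would mean low formative scorers have four times the odds of failing.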
Affiliation(s)
- Paula Heggarty
- College of Medicine and Dentistry, James Cook University, Townsville, Queensland, Australia
- Peta-Ann Teague
- College of Medicine and Dentistry, James Cook University, Townsville, Queensland, Australia
- Faith Alele
- College of Public Health, Medical and Veterinary Sciences, James Cook University, Townsville, Queensland, Australia
- Mary Adu
- College of Medicine and Dentistry, James Cook University, Townsville, Queensland, Australia
- Bunmi S Malau-Aduli
- College of Medicine and Dentistry, James Cook University, Townsville, Queensland, Australia
5
Scarff CE, Bearman M, Chiavaroli N, Trumble S. Trainees' perspectives of assessment messages: a narrative systematic review. Medical Education 2019; 53:221-233. PMID: 30672012; DOI: 10.1111/medu.13775
Abstract
OBJECTIVES This study was designed as a narrative systematic literature review of medical specialist trainees' perspectives of the assessment messages they receive in the context of clinical performance assessments. The aim of the study was to determine whether trainees value the information they receive through the formats designed to promote their development and, if not, the reasons for this. METHODS The authors searched the ERIC, EMBASE, Ovid MEDLINE and PsycINFO databases for articles published up to 16 June 2018 that present original data on trainees' perspectives of the assessment messages they receive in the context of work-based assessments (WBAs) and in-training assessments (ITAs) used within their training programmes. All authors screened 938 abstracts, after which 139 full-text articles were assessed. Descriptions of quantitative data and thematic analysis of qualitative data were used to present the opinions of trainees. RESULTS Thirty-three articles met the inclusion criteria. Twenty-six articles (79%) described trainees' perspectives in the context of WBA and the remaining articles referred to ITA formats. Wide-ranging opinions were reported. The analysis categorised these into three themes: trainees value developmental assessment messages; trainees become disengaged when assessment messages are not developmental; and trainees' views depend on the environment, the assessor and themselves. Some trainees reported that the assessment messages were valuable and provided input on their performance to guide their development, but many disagreed. In particular, a trainee's own level of engagement with the assessments influenced his or her perspectives on the messages received. CONCLUSIONS Trainees do not universally perceive that clinical performance assessments provide the valuable developmental input they were designed to deliver. Factors related to the environment, the assessor and the trainees themselves influence their perspectives.
Affiliation(s)
- Catherine E Scarff
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Victoria, Australia
- Margaret Bearman
- Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University, Geelong, Victoria, Australia
- Neville Chiavaroli
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Victoria, Australia
- Steve Trumble
- Department of Medical Education, Melbourne Medical School, University of Melbourne, Melbourne, Victoria, Australia
6
Weller JM, Henning M. Impact of assessments on learning and quality of life during anaesthesia training in Australia and New Zealand. Anaesthesia and Intensive Care 2011; 39:35-39. DOI: 10.1177/0310057X1103900105
Affiliation(s)
- J. M. Weller
- Centre for Medical and Health Sciences Education, University of Auckland, Auckland, New Zealand
- Head of Centre for Medical and Health Sciences Education; Specialist Anaesthetist, Faculty of Medical and Health Sciences, University of Auckland and Auckland City Hospital; Chair of Assessments Committee, Australian and New Zealand College of Anaesthetists
- M. Henning
- Centre for Medical and Health Sciences Education, University of Auckland, Auckland, New Zealand
7
Duitsman ME, Fluit CRMG, van der Goot WE, ten Kate-Booij M, de Graaf J, Jaarsma DADC. Judging residents' performance: a qualitative study using grounded theory. BMC Medical Education 2019; 19:13. PMID: 30621674; PMCID: PMC6325830; DOI: 10.1186/s12909-018-1446-1
Abstract
BACKGROUND Although program directors judge residents' performance for summative decisions, little is known about how they do this. This study examined what information program directors use, how they value this information in making a judgment of residents' performance, and what residents think of this process. METHODS Sixteen semi-structured interviews were held with residents and program directors from different hospitals in the Netherlands in 2015-2016. Participants were recruited from internal medicine, surgery and radiology. Transcripts were analysed using grounded theory methodology. Concepts and themes were identified by iterative constant comparison. RESULTS When approaching semi-annual meetings with residents, program directors report gathering information primarily from assessment tools, from faculty members, and from their own experience with residents. They put more value on faculty members' comments during meetings and in the corridors than on the feedback provided in the assessment tools, and they are influenced by their own beliefs about learning and education in valuing feedback. Residents are aware that faculty members discuss their performance in meetings, but they believe the assessment tools provide the most important proof of their clinical competency. CONCLUSIONS Residents regard feedback in the assessment tools as the most important evidence of their performance, whereas program directors scarcely use this feedback to form a judgment about residents' performance, relying heavily on faculty remarks in meetings instead. Therefore, residents' performance may be better judged in group meetings organised to enhance optimal information sharing and decision making about residents' performance.
Affiliation(s)
- Marrigje E. Duitsman
- Department of Internal Medicine and Health Academy, Radboud Health Academy, Radboud University Medical Centre, Gerard van Swietenlaan 4, Postbus 9101, 6500 HB Nijmegen, the Netherlands
- Cornelia R. M. G. Fluit
- Health Academy, Department of Research in Learning and Education, Radboud University Medical Centre, Nijmegen, the Netherlands
- Wieke E. van der Goot
- Martini Hospital, Groningen, the Netherlands
- Centre for Education Development and Research in Health Professions, University Medical Centre Groningen, Groningen, the Netherlands
- Marianne ten Kate-Booij
- Department of Obstetrics and Gynaecology, Erasmus University Medical Centre, Rotterdam, the Netherlands
- Jacqueline de Graaf
- Department of Internal Medicine, Radboudumc Nijmegen, Nijmegen, the Netherlands
- Debbie A. D. C. Jaarsma
- Centre for Education Development and Research in Health Professions, University Medical Centre Groningen, Groningen, the Netherlands
8
Bisgaard CH, Rubak SLM, Rodt SA, Petersen JAK, Musaeus P. The effects of graduate competency-based education and mastery learning on patient care and return on investment: a narrative review of basic anesthetic procedures. BMC Medical Education 2018; 18:154. PMID: 29954376; PMCID: PMC6025802; DOI: 10.1186/s12909-018-1262-7
Abstract
BACKGROUND Despite the widespread implementation of competency-based education, evidence of ensuing enhanced patient care and cost-benefit remains scarce. This narrative review uses the Kirkpatrick/Phillips model to investigate the patient-related and organizational effects of graduate competency-based medical education for five basic anesthetic procedures. METHODS The MEDLINE, ERIC, CINAHL, and Embase databases were searched for papers reporting results at Kirkpatrick/Phillips levels 3-5 from graduate competency-based education for five basic anesthetic procedures. A gray literature search was conducted by reference search in Google Scholar. RESULTS In all, 38 studies were included, predominantly concerning central venous catheterization. Three studies reported significant cost-effectiveness through reduced infection rates for central venous catheterization. Furthermore, procedural competency, retention of skills, and patient care as evaluated by fewer complications improved in 20 of the reported studies. CONCLUSION Evidence suggests that competency-based education with procedural central venous catheterization courses has positive effects on patient care and is cost-effective. However, more rigorously controlled and reproducible studies are needed. Specifically, future studies could focus on organizational effects and the possibility of transferability to other medical specialties and the broader healthcare system.
Affiliation(s)
- Claus Hedebo Bisgaard
- Centre for Health Sciences Education, Faculty of Health, Aarhus University, Palle Juul Jensens Boulevard 82, Building B, DK-8200 Aarhus N, Denmark
- Sune Leisgaard Mørck Rubak
- Department of Paediatrics and Adolescent Medicine, Aarhus University Hospital, Palle Juul Jensens Boulevard 99, DK-8200 Aarhus N, Denmark
- Svein Aage Rodt
- Department of Anaesthesiology and Intensive Care, South Section, Aarhus University Hospital, Tage-Hansens Gade 2, 8000 Aarhus C, Denmark
- Jens Aage Kølsen Petersen
- Department of Anesthesiology and Intensive Care, North Section, Aarhus University Hospital, Nørrebrogade 44, 8000 Aarhus C, Denmark
- Peter Musaeus
- Centre for Health Sciences Education, Faculty of Health, Aarhus University, Palle Juul Jensens Boulevard 82, Building B, DK-8200 Aarhus N, Denmark
9
Spreckelsen C, Juenger J. Repeated testing improves achievement in a blended learning approach for risk competence training of medical students: results of a randomized controlled trial. BMC Medical Education 2017; 17:177. PMID: 28950855; PMCID: PMC5615441; DOI: 10.1186/s12909-017-1016-y
Abstract
BACKGROUND Adequate estimation and communication of risks is a critical competence of physicians. Due to an evident lack of these competences, effective training addressing risk competence during medical education is needed. Test-enhanced learning has been shown to produce marked effects on achievement. This study aimed to investigate the effect of repeated tests implemented on top of a blended learning program for risk competence. METHODS We introduced a blended-learning curriculum for risk estimation and risk communication based on a set of operationalized learning objectives, which was integrated into a mandatory course "Evidence-based Medicine" for third-year students. A randomized controlled trial addressed the effect of repeated testing on achievement as measured by the students' pre- and post-training scores (nine multiple-choice items). Basic numeracy and statistical literacy were assessed at baseline. Analysis relied on descriptive statistics (histograms, box plots, scatter plots, and a summary of descriptive measures), bootstrapped confidence intervals, analysis of covariance (ANCOVA), and effect sizes (Cohen's d, r) based on adjusted means and standard deviations. RESULTS All of the 114 students enrolled in the course consented to take part in the study and were assigned to either the intervention or control group (both: n = 57) by balanced randomization. Five participants dropped out due to non-compliance (control: 4, intervention: 1). Both groups profited considerably from the program in general (Cohen's d for overall pre vs. post scores: 2.61). Repeated testing yielded an additional positive effect: while the covariate (baseline score) exhibited no relation to the post-intervention score, F(1, 106) = 2.88, p > .05, there was a significant effect of the intervention (repeated-tests scenario) on learning achievement, F(1, 106) = 12.72, p < .05, d = .94, r = .42 (95% CI: [.26, .57]). However, in the subgroup of participants with a high initial numeracy score, no similar effect could be observed. CONCLUSION Dedicated training can improve relevant components of medical students' risk competence. The already promising overall effect of the blended learning approach can be improved significantly by implementing a test-enhanced learning design, namely repeated testing. As students with a high initial numeracy score did not profit equally from repeated testing, a target-group-specific opt-out may be offered.
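The effect sizes in the trial above are reported as Cohen's d. A minimal pure-Python sketch of the pooled-standard-deviation form of Cohen's d follows; the post-test scores are invented for illustration, not the trial's data:

```python
from math import sqrt

def cohens_d(a, b):
    """Cohen's d: standardized mean difference with a pooled SD."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical post-test scores on a nine-item test (0-9 scale)
repeated_testing = [7, 8, 6, 9, 7, 8]
control = [5, 6, 5, 7, 6, 5]
d = cohens_d(repeated_testing, control)
```

By the usual rule of thumb, d values around 0.2, 0.5 and 0.8 are read as small, medium and large effects.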
Affiliation(s)
- C. Spreckelsen
- Department of Medical Informatics, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- J. Juenger
- Department of Psychosomatic and General Internal Medicine, University of Heidelberg, Im Neuenheimer Feld 410, 69120 Heidelberg, Germany
- Institute for Medical and Pharmaceutical Tests, Große Langgasse 8, 55116 Mainz, Germany
10
Scarff CE, Bearman M, Corderoy RM. Supervisor perspectives on the summative in-training assessment. Australasian Journal of Dermatology 2015; 57:128-134. PMID: 26172219; DOI: 10.1111/ajd.12376
Abstract
BACKGROUND Assessment is a fundamental component of medical education and exists in many formats. In-training assessments are one such example; they serve to provide feedback to learners about their performance during a period of clinical attachment. However, in addition to trainee knowledge and performance, many factors influence the assessment given to a trainee. METHOD This study used an anonymous survey to investigate supervisors' perceptions of the influences on their assessments of Australian dermatology trainees, focusing on the summative in-training assessment (SITA) format. RESULTS A response rate of 41% was achieved. The importance of reporting underperformance and providing feedback to trainees was agreed on, but current limitations in the ability of the tool to do this were noted. Implications for practice are discussed, including the education and support of supervisors, consideration of logistical issues, the process of SITA completion and supervisor appointment. Further research into the impact of supervisors' concerns about potential challenges to a judgement and their hesitation to make negative comments about a trainee is required. Examination of the trainee perspective is also required. CONCLUSION Quality feedback is essential for learners to guide and improve their performance. Supervisors face many potential influences on their assessments and, if these are too great, they may jeopardise the quality of the assessment given. Attention to the highlighted areas may serve to improve the process, allowing trainees to develop into the best clinicians they can be.
Affiliation(s)
- Catherine E Scarff
- Health Professions Education and Educational Research (HealthPEER), Monash University, Melbourne, Victoria
- Margaret Bearman
- Health Professions Education and Educational Research (HealthPEER), Monash University, Melbourne, Victoria
- Robert M Corderoy
- Educational Development, Planning and Innovation, Australasian College of Dermatologists, Sydney, New South Wales, Australia
11
Paravicini I, Peterson CK. Introduction, development, and evaluation of the mini-clinical evaluation exercise in postgraduate education of chiropractors. The Journal of Chiropractic Education 2015; 29:22-28. PMID: 25408995; PMCID: PMC4360767; DOI: 10.7899/jce-14-14
Abstract
OBJECTIVE To determine if the mini-clinical evaluation exercise (mini-CEX) format is reliable, applicable and useful as formative feedback for evaluating clinical competency in a postgraduate chiropractic program. METHODS Twelve mini-CEX clinical encounters were evaluated by 2 assessors per clinical encounter (7 assessors per session) in 23 chiropractic residents over a 12-month period. Two different rating scales (9 point and 5 point) were used, and the 2 assessors completed the forms independently. The individual competencies assessed were history taking, physical examination, organization/efficiency, clinical judgment, professionalism/communication, counseling, and overall clinical performance. Interassessor reliability was calculated using κ and intraclass correlation coefficient statistics. Cronbach α assessed the internal consistency of the mini-CEX. The Spearman correlation coefficient evaluated the correlation between the various competencies, and the Mann-Whitney U test evaluated differences between the assessors' median numerical scores. RESULTS The κ value was 0.31 (fair) for the 9-point rating scale and 0.42 (moderate) for the 5-point scale, with statistically significant intraclass correlation values (p < .05) for 4 of the 6 competencies. High correlation coefficients (p = .0001) were found when comparing the various competencies at each clinical encounter. There were no significant differences between the 2 assessors per clinical encounter in the scores awarded to the residents. CONCLUSIONS The mini-CEX is a reliable and useful tool for providing valuable formative feedback to postgraduate chiropractic residents. The 5-point grading scale was more user-friendly, with better reliability.
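Interassessor reliability in the abstract above is reported as κ. A short sketch of unweighted Cohen's kappa for two raters, using invented 5-point ratings rather than the study's data:

```python
def cohens_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters' categorical ratings."""
    n = len(r1)
    cats = set(r1) | set(r2)
    # Observed agreement: proportion of encounters rated identically
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement: product of each rater's marginal category proportions
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical 5-point ratings from two assessors of the same ten encounters
assessor1 = [3, 4, 2, 5, 3, 4, 3, 2, 4, 5]
assessor2 = [3, 4, 3, 5, 3, 3, 3, 2, 4, 4]
kappa = cohens_kappa(assessor1, assessor2)
```

On the conventional Landis-Koch scale, κ of 0.21-0.40 is "fair" and 0.41-0.60 "moderate", matching the labels used in the abstract.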
12
Cobb KA, Brown G, Hammond R, Mossop LH. Students' perceptions of the Script Concordance Test and its impact on their learning behavior: a mixed methods study. Journal of Veterinary Medical Education 2015; 42:45-52. PMID: 25526762; DOI: 10.3138/jvme.0514-057r1
Abstract
The Script Concordance Test (SCT) is increasingly used in postgraduate and undergraduate education as a method of summative clinical assessment. It has been shown to have high validity and reliability but there is little evidence of its use in veterinary education as assessment for learning. This study investigates some students' perceptions of the SCT and its effects on their approaches to learning. Final-year undergraduates of the School of Veterinary Medicine and Science (SVMS) at the University of Nottingham participated in a mixed-methods study after completing three formative SCT assessments. A qualitative, thematic analysis was produced from transcripts of three focus group discussions. The quantitative study was a survey based on the analyses of the qualitative study. Out of 50 students who registered for the study, 18 participated in the focus groups and 28 completed the survey. Clinical experience was regarded as the most useful source of information for answering the SCT. The students also indicated that recall of facts was perceived as useful for multiple-choice questions but least useful for the SCT. Themes identified in the qualitative study related to reliability, acceptability, educational impact, and validity of the SCT. The evidence from this study shows that the SCT has high face validity among veterinary students. They reported that it encouraged them to reflect upon their clinical experience, to participate in discussions of case material, and to adopt a deeper approach to clinical learning. These findings strongly suggest that the SCT is potentially a valuable method for assessing clinical reasoning and enhancing student learning.
|
13
|
Fokkema JPI, Scheele F, Westerman M, van Exel J, Scherpbier AJJA, van der Vleuten CPM, Dörr PJ, Teunissen PW. Perceived effects of innovations in postgraduate medical education: a Q study focusing on workplace-based assessment. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2014; 89:1259-66. [PMID: 24988425 DOI: 10.1097/acm.0000000000000394] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
PURPOSE Anticipating users' perceptions of the effects an innovation will have in daily practice prior to implementation may lead to a more optimal innovation process. In this study, the authors aimed to identify the kinds of perceptions that exist concerning the effects of workplace-based assessment (WBA), an innovation that is widely used in medical education, among its users. METHOD In 2012, the authors used Q methodology to ascertain the principal user perceptions of effects of WBA in practice. Participating obstetrics-gynecology residents and attending physicians (including residency program directors) at six hospitals in the Netherlands performed individual Q sorts to rank 36 statements concerning WBA and WBA tools according to their level of agreement. The authors conducted by-person factor analysis to uncover patterns in the rankings of the statements. They used the statistical results and participant comments about their sorts to interpret and describe distinct perceptions. RESULTS The analysis of 65 Q sorts (completed by 22 residents and 43 attendings) identified five distinct user perceptions regarding the effects of WBA in practice, which the authors labeled enthusiasm, compliance, effort, neutrality, and skepticism. These perceptions were characterized by differences in views on three main issues: the intended goals of the innovation, its applicability (ease of applying it to practice), and its actual impact. CONCLUSIONS User perceptions of the effects of innovations in medical education can be typified and should be anticipated. This study's insights into five principal user perceptions can support the design and implementation of innovations in medical education.
Affiliation(s)
- Joanne P I Fokkema
- Dr. Fokkema is a physician and PhD student, St. Lucas Andreas Hospital, Amsterdam, the Netherlands. Dr. Scheele is professor, VU University Medical Center, Amsterdam, the Netherlands, and a gynecologist and residency program director, St. Lucas Andreas Hospital, Amsterdam, the Netherlands. Dr. Westerman is a researcher, School of Medical Sciences, VU University Medical Center, Amsterdam, the Netherlands, and a resident in internal medicine, St. Lucas Andreas Hospital, Amsterdam, the Netherlands. Dr. van Exel is associate professor, Institute of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, the Netherlands. Dr. Scherpbier is professor and dean, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands. Dr. van der Vleuten is professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands. Dr. Dörr, deceased, was professor, Department of Education and Teaching, Leiden University Medical Center, Leiden, the Netherlands, and a gynecologist, Medical Centre Haaglanden, Den Haag, the Netherlands. Dr. Teunissen is a resident in obstetrics-gynecology, VU University Medical Center, Amsterdam, the Netherlands, and associate professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands
|
14
|
Ortwein H, Blaum WE, Spies CD. Anesthesiology residents' perspective about good teaching--a qualitative needs assessment. GERMAN MEDICAL SCIENCE : GMS E-JOURNAL 2014; 12:Doc05. [PMID: 24574941 PMCID: PMC3935158 DOI: 10.3205/000190] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/03/2013] [Revised: 01/10/2014] [Indexed: 11/30/2022]
Abstract
Background: Germany, like many other countries, will soon have a shortage of qualified doctors. One reason for the dissatisfaction amongst medical residents is the relatively unstructured nature of residency training programs, despite the increasing importance of outcome-based education. The aim of our study was to identify characteristics and requirements for good teaching during anesthesiology residency training from the residents' point of view. Methods: A consensus workshop with residents from all medical universities in Germany was held. Participants were allocated to one of three topics, chosen based on a 2009 nationwide evaluation of residency training. The three topics were (A) characteristics of helpful/good teachers, (B) characteristics of helpful/good conditions and (C) characteristics of helpful/good curricular structure. Each group followed a nominal group technique consensus process to define and rank characteristics of a good residency. Results: 31 (79.5%) resident representatives were present. The consented results emphasise the importance of structured curricula, including transparent goals and objectives, in-training formative assessments and quality assurance measures for the program. Residents further call for trained trainers with formal teaching qualifications and protected teaching time. Conclusions: Good residency training requires careful consideration of all stakeholders' needs. The results reflect and extend previous findings and are, at least to some degree, easily implemented. These findings are an important step towards establishing a broader consensus within the discipline.
Affiliation(s)
- Heiderose Ortwein
- Klinik für Anästhesiologie mit Schwerpunkt operative Intensivmedizin, Charité - Universitätsmedizin Berlin, Campus Virchow Klinikum und Campus Mitte, Berlin, Germany
- Wolf E Blaum
- Klinik für Anästhesiologie mit Schwerpunkt operative Intensivmedizin, Charité - Universitätsmedizin Berlin, Campus Virchow Klinikum und Campus Mitte, Berlin, Germany ; Lernzentrum, Abteilung für Curriculumsorganisation, Charité - Universitätsmedizin Berlin, Campus Mitte, Berlin, Germany
- Claudia D Spies
- Klinik für Anästhesiologie mit Schwerpunkt operative Intensivmedizin, Charité - Universitätsmedizin Berlin, Campus Virchow Klinikum und Campus Mitte, Berlin, Germany
|
15
|
Cobb KA, Brown G, Jaarsma DADC, Hammond RA. The educational impact of assessment: a comparison of DOPS and MCQs. MEDICAL TEACHER 2013; 35:e1598-607. [PMID: 23808609 PMCID: PMC3809925 DOI: 10.3109/0142159x.2013.803061] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
AIM To evaluate the impact of two different assessment formats on the approaches to learning of final year veterinary students. The relationship between approach to learning and examination performance was also investigated. METHOD An 18-item version of the Study Process Questionnaire (SPQ) was sent to 87 final year students. Each student responded to the questionnaire with regard to DOPS (Direct Observation of Procedural Skills) and a Multiple Choice Examination (MCQ). Semi-structured interviews were conducted with 16 of the respondents to gain a deeper insight into the students' perceptions of assessment. RESULTS Students adopted a deeper approach to learning for DOPS and a more surface approach for MCQs. There was a positive correlation between an achieving approach to learning and examination performance. Analysis of the qualitative data revealed that deep, surface and achieving approaches were reported by the students, and seven major influences on their approaches to learning were identified: motivation, purpose, consequence, acceptability, feedback, time pressure and individual differences among students. CONCLUSIONS The format of DOPS has a positive influence on approaches to learning. There is a conflict for students between preparing for final examinations and preparing for clinical practice.
Affiliation(s)
- Kate A. Cobb
- University of Nottingham, UK
- KATE A. COBB, B. Vet. Med. PGCE, MMedSci, MRCVS, is a Lecturer in Teaching Learning and Assessment at the School of Veterinary Medicine and Science, University of Nottingham
- Correspondence: Kate Cobb, School of Veterinary Medicine and Science, The University of Nottingham, Loughborough LE12 5RD, UK. +44(0)1159516477; +44(0)1159516440;
- George Brown
- University of Nottingham, UK
- GEORGE BROWN, BSc, DPhil, Hon D.Odont, is a retired professor from the Medical Education Unit, University of Nottingham
- Debbie A. D. C. Jaarsma
- University of Amsterdam, the Netherlands
- DEBBIE A. D. C. Jaarsma, DVM, PhD, is a Professor of Evidence-based Education at the Academic Medical Centre, University of Amsterdam
- Richard A. Hammond
- University of Nottingham, UK
- RICHARD A. HAMMOND, BSc(Hons), B. Vet. Med. PhD, Dipl.ECVAA, MRCVS, is an Associate Professor of Pharmacology and Anaesthesia at the School of Veterinary Medicine and Science, University of Nottingham
|
16
|
Fokkema J, Teunissen PW. Assessing the assessment of interventions: we're not there yet. MEDICAL EDUCATION 2013; 47:954-956. [PMID: 24016162 DOI: 10.1111/medu.12273] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
|
17
|
Weller JM, Henning M, Civil N, Lavery L, Boyd MJ, Jolly B. Approaches to learning for the ANZCA Final Examination and validation of the revised Study Process Questionnaire in specialist medical training. Anaesth Intensive Care 2013; 41:631-40. [PMID: 23977915 DOI: 10.1177/0310057x1304100509] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
When evaluating assessments, the impact on learning is often overlooked. Approaches to learning can be deep, surface or strategic. To provide insights into exam quality, we investigated the learning approaches taken by trainees preparing for the Australian and New Zealand College of Anaesthetists (ANZCA) Final Exam. The revised two-factor Study Process Questionnaire (R-SPQ-2F) was modified and validated for this context and was administered to ANZCA advanced trainees. Additional questions were asked about perceived value for anaesthetic practice, study time and approaches to learning for each exam component. Overall, 236 of 690 trainees responded (34%). Responses indicated both deep and surface approaches to learning, with a clear preponderance of deep approaches. The anaesthetic viva was valued most highly and the multiple choice question component the least. Despite this, respondents spent the most time studying for the multiple choice questions. The traditionally low pass rate for the short answer questions could not be explained by limited study time, perceived lack of value or study approaches. Written responses suggested that preparation for multiple choice questions was characterised by a surface approach, with rote memorisation of past questions. Minimal reference was made to the ANZCA syllabus as a guide for learning. These findings indicate that, although trainees found the exam generally relevant to practice and adopted predominantly deep learning approaches, there was considerable variation between the four components. These results provide data with which to review the existing ANZCA Final Exam and comparative data for future studies of the revisions to the ANZCA curriculum and exam process.
Affiliation(s)
- J M Weller
- Centre for Medical and Health Sciences Education, University of Auckland, Auckland, New Zealand.
|
18
|
Bok HGJ, Teunissen PW, Favier RP, Rietbroek NJ, Theyse LFH, Brommer H, Haarhuis JCM, van Beukelen P, van der Vleuten CPM, Jaarsma DADC. Programmatic assessment of competency-based workplace learning: when theory meets practice. BMC MEDICAL EDUCATION 2013; 13:123. [PMID: 24020944 PMCID: PMC3851012 DOI: 10.1186/1472-6920-13-123] [Citation(s) in RCA: 164] [Impact Index Per Article: 14.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2013] [Accepted: 09/06/2013] [Indexed: 05/07/2023]
Abstract
BACKGROUND In competency-based medical education emphasis has shifted towards outcomes, capabilities, and learner-centeredness. Together with a focus on sustained evidence of professional competence this calls for new methods of teaching and assessment. Recently, medical educators advocated the use of a holistic, programmatic approach towards assessment. Besides maximum facilitation of learning it should improve the validity and reliability of measurements and documentation of competence development. We explored how, in a competency-based curriculum, current theories on programmatic assessment interacted with educational practice. METHODS In a development study including evaluation, we investigated the implementation of a theory-based programme of assessment. Between April 2011 and May 2012 quantitative evaluation data were collected and used to guide group interviews that explored the experiences of students and clinical supervisors with the assessment programme. We coded the transcripts and emerging topics were organised into a list of lessons learned. RESULTS The programme mainly focuses on the integration of learning and assessment by motivating and supporting students to seek and accumulate feedback. The assessment instruments were aligned to cover predefined competencies to enable aggregation of information in a structured and meaningful way. Assessments that were designed as formative learning experiences were increasingly perceived as summative by students. Peer feedback was experienced as a valuable method for formative feedback. Social interaction and external guidance seemed to be of crucial importance to scaffold self-directed learning. Aggregating data from individual assessments into a holistic portfolio judgement required expertise and extensive training and supervision of judges. 
CONCLUSIONS A programme of assessment with low-stakes assessments providing simultaneously formative feedback and input for summative decisions proved not easy to implement. Careful preparation and guidance of the implementation process was crucial. Assessment for learning requires meaningful feedback with each assessment. Special attention should be paid to the quality of feedback at individual assessment moments. Comprehensive attention for faculty development and training for students is essential for the successful implementation of an assessment programme.
Affiliation(s)
- Harold GJ Bok
- Quality Improvement in Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Pim W Teunissen
- Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Department of Obstetrics and Gynaecology, VU University Medical Centre, Amsterdam, The Netherlands
- Robert P Favier
- Department of Clinical Sciences of Companion Animals, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Nancy J Rietbroek
- Department of Equine Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, the Netherlands
- Lars FH Theyse
- Department of Clinical Sciences of Companion Animals, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Harold Brommer
- Department of Equine Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, the Netherlands
- Jan CM Haarhuis
- Quality Improvement in Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Peter van Beukelen
- Quality Improvement in Veterinary Education, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
- Cees PM van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Debbie ADC Jaarsma
- Evidence-Based Education, Academic Medical Centre, University of Amsterdam, Amsterdam, The Netherlands
|
19
|
Fokkema JPI, Teunissen PW, Westerman M, van der Lee N, van der Vleuten CPM, Scherpbier AJJA, Dörr PJ, Scheele F. Exploration of perceived effects of innovations in postgraduate medical education. MEDICAL EDUCATION 2013; 47:271-81. [PMID: 23398013 DOI: 10.1111/medu.12081] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
CONTEXT Many studies have examined how educational innovations in postgraduate medical education (PGME) impact on teaching and learning, but little is known about effects in the clinical workplace outside the strictly education-related domain. Insights into the full scope of effects may facilitate the implementation and acceptance of innovations because expectations can be made more realistic, and difficulties and pitfalls anticipated. Using workplace-based assessment (WBA) as a reference case, this study aimed to determine which types of effect are perceived by users of innovations in PGME. METHODS Focusing on WBA as a recent instance of innovation in PGME, we conducted semi-structured interviews to explore perceptions of the effects of WBA in a purposive sample of Dutch trainees and (lead) consultants in surgical and non-surgical specialties. Interviews conducted in 2011 with 17 participants were analysed thematically using template analysis. To support the exploration of effects outside the domain of education, the study design was informed by theory on the diffusion of innovations. RESULTS Six domains of effects of WBA were identified: sentiments (affinity with the innovation and emotions); dealing with the innovation; specialty training; teaching and learning; workload and tasks, and patient care. Users' affinity with WBA partly determined its effects on teaching and learning. Organisational support and the match between the innovation and routine practice were considered important to minimise additional workload and ensure that WBA was used for relevant rather than easily assessable training activities. Dealing with WBA stimulated attention for specialty training and placed specialty training on the agenda of clinical departments. CONCLUSIONS These outcomes are in line with theoretical notions regarding innovations in general and may be helpful in the implementation of other innovations in PGME. 
Given the substantial effects of innovations outside the strictly education-related domain, individuals designing and implementing innovations should consider all potential effects, including those identified in this study.
Affiliation(s)
- Joanne P I Fokkema
- Department of Education, St Lucas Andreas Hospital, Amsterdam, the Netherlands.
|
20
|
Abbas F, Coburn M. Education and Training of an Academic Urologic Surgeon. Urolithiasis 2012. [DOI: 10.1007/978-1-4471-4387-1_101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
21
|
Ortwein H, Knigge M, Rehberg B, Hein OV, Spies C. Validation of core competencies during residency training in anaesthesiology. GERMAN MEDICAL SCIENCE : GMS E-JOURNAL 2011; 9:Doc23. [PMID: 21921997 PMCID: PMC3172723 DOI: 10.3205/000146] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/22/2011] [Revised: 07/22/2011] [Indexed: 11/30/2022]
Abstract
BACKGROUND AND GOAL Curriculum development for residency training is increasingly challenging in times of financial restrictions and time limitations. Several countries have adopted the CanMEDS framework for medical education as a model in their curricula of specialty training. The purpose of the present study was to validate the competency goals, as derived from CanMEDS, of the Department of Anaesthesiology and Intensive Care Medicine of the Berlin Charité University Medical Centre, by conducting a staff survey. These goals for the qualification of specialists stipulate demonstrable competencies in seven areas: expert medical action, efficient collaboration in a team, communication with patients and family, management and organisation, lifelong learning, professional behaviour, and advocacy of good health. We had previously developed a catalogue of curriculum items based on these seven core competencies. In order to evaluate the validity of this catalogue, we surveyed anaesthetists at our department with regard to their perception of the importance of each of these items. In addition to the descriptive acquisition of data, we assessed whether specialists and registrars differed in their evaluation of these objectives. METHODS The questionnaire, built around the seven adapted CanMEDS Roles, included items describing each of their underlying competencies. Each anaesthetist (registrar and specialist) working at our institution in May 2007 was asked to participate in the survey. Individual perception of relevance was rated for each item on a Likert-type scale ranging from 1 (highly relevant) to 5 (not at all relevant), from which mean ratings were calculated. For determination of reliability, we calculated Cronbach's alpha. To assess differences between subgroups, we performed analysis of variance. RESULTS All seven roles were rated as relevant. 
Three of the seven competency goals (expert medical action, efficient collaboration in a team, and communication with patients and family) achieved especially high ratings. Only a few items differed significantly in their average rating between specialists and registrars. CONCLUSIONS We succeeded in validating the relevance of the adapted seven CanMEDS competencies for residency training within our institution. So far, many countries have adopted the Canadian Model, which indicates the great practicability of this competency-based model in curriculum planning. Roles with higher acceptance should be prioritised in existing curricula. It would be desirable to develop and validate a competency-based curriculum for specialty training in anaesthesiology throughout Germany by conducting a national survey to include specialists as well as registrars in curriculum development.
Affiliation(s)
- Heiderose Ortwein
- Department of Anaesthesiology and Intensive Care Medicine, Charité-Universitätsmedizin Berlin, Campus Virchow Klinikum and Campus Mitte, Berlin, Germany.
|
22
|
Mortensen L, Malling B, Ringsted C, Rubak S. What is the impact of a national postgraduate medical specialist education reform on the daily clinical training 3.5 years after implementation? A questionnaire survey. BMC MEDICAL EDUCATION 2010; 10:46. [PMID: 20565832 PMCID: PMC2902490 DOI: 10.1186/1472-6920-10-46] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/06/2009] [Accepted: 06/18/2010] [Indexed: 05/15/2023]
Abstract
BACKGROUND Many countries have recently reformed their postgraduate medical education (PGME). New pedagogic initiatives and blueprints have been introduced to improve the quality and effectiveness of the education. Yet it is unknown whether these changes have improved daily clinical training. The purpose was to examine the impact of a national PGME reform on daily clinical training practice. METHODS The Danish reform included changes to the content and format of specialist education in line with outcome-based education using the CanMEDS framework. We performed a questionnaire survey among all hospital doctors in the North Denmark Region. The questionnaire included items on educational appraisal meetings, individual learning plans, incorporating training issues into work routines, supervision and feedback, and interpersonal acquaintance. Data were collected before the start of the reform and 3.5 years later. Mean score values were compared, and response variables were analysed by multiple regression to explore the relation between the ratings and seniority, type of hospital, type of specialty, and the effect of attendance at courses in learning and teaching among respondents. RESULTS Response rates were 2105/2817 (75%) and 1888/3284 (58%), respectively. We found limited impact on clinical training practice and the learning environment. Variances in ratings were hardly affected by type of hospital, whereas belonging to the laboratory specialties, compared to other specialties, was related to higher ratings concerning all aspects. CONCLUSIONS The impact of the national PGME reform on daily clinical training practice was limited after 3.5 years. Future initiatives must focus on changing the pedagogical competences of the doctors participating in daily clinical training and on implementation strategies for changing educational culture.
Affiliation(s)
- Lene Mortensen
- Regional Hospital Viborg, Heiberg Alle 4, DK-8800 Viborg, Denmark
- Bente Malling
- Aarhus University Hospital Skejby, Department of Human Resources, Brendstrupgaardsvej, DK-8200 Aarhus N, Denmark
- Charlotte Ringsted
- University of Copenhagen and Capital Region, Centre of Clinical Education, Rigshospitalet afsnit 5404, Teilumbygningen, Blegdamsvej 9, DK-2100 Copenhagen Ø, Denmark
- Sune Rubak
- Aarhus University Hospital Skejby, Department of Paediatrics, Brendstrupgaardsvej, DK-8200 Aarhus N, Denmark
|
23
|
Davis DJ, Ringsted C, Bonde M, Scherpbier A, van der Vleuten C. Using participatory design to develop structured training in child and adolescent psychiatry. Eur Child Adolesc Psychiatry 2009; 18:33-41. [PMID: 18545869 DOI: 10.1007/s00787-008-0700-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 04/08/2008] [Indexed: 10/22/2022]
Abstract
CONTEXT Learning during residency in child and adolescent psychiatry (CAP) is primarily work-based and has traditionally been opportunistic. There are increasing demands from both postgraduate trainees and medical organisations for structured programmes with defined learning outcomes. OBJECTIVES The aim of this study was to partner with postgraduate trainees and consultants in psychiatry to identify key learning issues that should be considered during CAP residency and to use these in designing a structured programme to meet the learning outcome requirements of a competency framework. METHODS Participatory design was used to structure a learning and assessment programme in CAP. First, during working seminars, consultants and postgraduate trainees were interviewed about the characteristics of the learning and working in CAP. These interviews were audio taped, transcribed and analyzed for recurrent themes to identify key issues. Descriptive results were fed back to the participants for validation. In a subsequent iterative process the researchers and practitioners partnered to construct a learning and assessment programme. RESULTS The tasks within CAP were poorly described by study participants. Several other types of professionals within the healthcare team perform many of the tasks a CAP postgraduate trainee has to learn. Participants had difficulties describing how learning takes place and what postgraduate trainees need to learn in CAP. The partnership between researchers and practitioners identified three key issues to consider in CAP residencies: (1) Preparation for tasks postgraduate trainees are expected to fulfil, (2) Ensuring acquisition of physician-specific knowledge and skills, and (3) Clarifying roles and professional identity within the team. A structured training programme incorporating the key learning issues identified was created. CONCLUSION Participatory design was very helpful to structure a contextually suitable training programme in CAP. 
The researchers speculate that this approach will result in easier implementation of the new training programme.
Affiliation(s)
- Deborah J Davis
- Centre for Clinical Education, Copenhagen University Hospital, Rigshospitalet, Copenhagen, Denmark.
|
24
|
McKay J, Shepherd A, Bowie P, Lough M. Acceptability and educational impact of a peer feedback model for significant event analysis. MEDICAL EDUCATION 2008; 42:1210-1217. [PMID: 19120952 DOI: 10.1111/j.1365-2923.2008.03235.x] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
CONTEXT A model of independent, external review of significant event analysis by trained peers was introduced by NHS Scotland in 1998 to support the learning needs of general practitioners (GPs). Engagement with this feedback model has increased over time, but participants' views and experiences are largely unknown and there is limited evidence of its educational impact. This is important if external feedback is to play a potential role in appraisal and future revalidation. OBJECTIVE The study aimed to explore aspects of the acceptability and educational impact of this external feedback model with participating GPs. METHODS Semi-structured interviews were carried out with nine GPs. Participants were sampled to reflect their level of learning need (low, moderate or high) to gain a range of views and experiences. Transcribed interviews were analysed for content. RESULTS This system of external peer feedback is generally acceptable to participants. It complemented and enhanced the appraisal process. External feedback had positive educational outcomes, particularly in imparting technical knowledge on how to analyse significant events. Training issues for peer reviewers were suggested that would further enhance the educational gain from participation. There was disagreement over whether this type of feedback could or should be used as supporting evidence of the quality of doctors' work to educational and regulatory authorities. CONCLUSIONS The findings add to the evidence for the acceptability and educational impact of external review by trained peers. Aligning such a model with the current national appraisal system may provide GPs with a more robust demonstration of participation in reflective learning.
Affiliation(s)
- John McKay
- Department of Postgraduate Medicine, Division of Community-based Sciences, University of Glasgow, Glasgow, UK.
|
25
|
Abstract
CONTEXT In education, tests are primarily used for assessment, thus permitting teachers to assess the efficacy of their curriculum and to assign grades. However, research in cognitive psychology has shown that tests can also directly affect learning by promoting better retention of information, a phenomenon known as the testing effect. COGNITIVE PSYCHOLOGY RESEARCH Cognitive psychology laboratory studies show that repeated testing of information produces superior retention relative to repeated study, especially when testing is spaced out over time. Tests that require effortful retrieval of information, such as short-answer tests, promote better retention than tests that require recognition, such as multiple-choice tests. The mnemonic benefits of testing are further enhanced by feedback, which helps students to correct errors and confirm correct answers. APPLICATION TO MEDICAL EDUCATION Medical educational research has focused extensively on assessment issues. Such assessment research permits the conclusion that clinical expertise is founded on a broad fund of knowledge and effective memory networks that allow easy access to that knowledge. Test-enhanced learning can potentially strengthen clinical knowledge that will lead to improved expertise. CONCLUSIONS Tests should be given often and spaced out in time to promote better retention of information. Questions that require effortful recall produce the greatest gains in memory. Feedback is crucial to learning from tests. Test-enhanced learning may be an effective tool for medical educators to use in promoting retention of clinical knowledge.
Affiliation(s)
- Douglas P Larsen
- Department of Neurology, Washington University in St Louis, St Louis, Missouri 63110, USA.
26
Chou S, Cole G, McLaughlin K, Lockyer J. CanMEDS evaluation in Canadian postgraduate training programmes: tools used and programme director satisfaction. MEDICAL EDUCATION 2008; 42:879-86. [PMID: 18715485 DOI: 10.1111/j.1365-2923.2008.03111.x] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
CONTEXT The Royal College of Physicians and Surgeons of Canada (RCPSC) CanMEDS framework is being incorporated into specialty education worldwide. However, the literature on how to evaluate trainees in the CanMEDS competencies remains sparse. OBJECTIVES The goals of this study were to examine the assessment tools used and programme directors' perceptions of how well they evaluate performance of the CanMEDS roles in Canadian postgraduate training programmes. METHODS We conducted a web-based survey of programme directors of RCPSC-accredited training programmes. The survey consisted of two questions. Question 1 was designed to establish which assessment tools were used to assess each of the CanMEDS roles. Question 2 was intended to assess programme directors' perceived satisfaction with CanMEDS evaluation in their programmes. RESULTS A total of 149 of the eligible 280 programme directors participated in the survey. Programme directors used a variety of assessment tools to evaluate trainees in CanMEDS competencies. Programmes used more tools to evaluate the Medical Expert (mean = 4.03, standard deviation [SD] = 1.59) and Communicator (mean = 2.36, SD = 1.02) roles. Programme directors used the fewest tools for the Collaborator (mean = 1.75, SD = 1.10) and Manager (mean = 1.75, SD = 1.18) roles. More than 92% of the programmes used in-training evaluation reports to evaluate all the CanMEDS roles. Programme directors were satisfied with their evaluation of the Medical Expert role, but less so with assessment of the other CanMEDS competencies. CONCLUSIONS This study demonstrates that Canadian postgraduate training programmes use a variety of assessment tools to evaluate the CanMEDS competencies. Programme directors are neutral or concerned about how the CanMEDS roles other than that of Medical Expert are evaluated in their programmes. Further efforts are required to establish best practice in CanMEDS evaluation.
Affiliation(s)
- Sophia Chou
- Department of Medicine, Faculty of Medicine, University of Calgary, Calgary, Alberta, Canada.
27
Bansal PK, Saoji VA, Gruppen LD. From a “Generalist” Medical Graduate to a “Specialty” Resident: Can an Entry-level Assessment Facilitate the Transition? Assessing the Preparedness Level of New Surgical Trainees. ANNALS OF THE ACADEMY OF MEDICINE, SINGAPORE 2007. [DOI: 10.47102/annals-acadmedsg.v36n9p719] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/21/2023]
Abstract
Introduction: Concerns have been raised in the literature about how well the undergraduate curriculum prepares medical students for residency. An assessment was designed and administered to entering postgraduate residents in surgery to test their preparedness vis-a-vis the competence level expected of them at the beginning of their training. This paper explores the role and place of such an assessment in the medical education continuum.
Materials and Methods: Faculty members from the Department of Surgery at Bharati Vidyapeeth University Medical College (BVUMC), Pune, India and experts from the Department of Medical Education, University of Michigan Medical School, Ann Arbor designed and administered an assessment based on the multiple-choice question examination (MCQE) and objective structured clinical examination (OSCE) in June 2005 to 24 examinees from 3 different training levels at BVUMC.
Results: All subsections of the MCQE showed significant correlation except the breast and endocrine section. The test showed an overall reliability of 0.8 (Cronbach’s alpha). The scores and level of difficulty of the OSCE were inversely related. There was a significant difference in performance between the 3 groups and these differences were more pronounced for more complex tasks, specifically the procedural skills station, where the intern performance was particularly poor. Clinical skills reliability was 0.85. The communication skills score correlated well with the clinical skills score and also showed good reliability. Four out of the 5 new residents had below-satisfactory levels of competence for this level.
Conclusion: This pilot study reveals definite educational gaps in both knowledge and skills among the residents studied. Such an intervention can be very informative, providing immense educational benefit to the learner, faculty and programme, and has an important place in the continuum of medical training.
Key words: Competence, Education, Postgraduate, Surgery
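The reliability figures reported in this abstract (Cronbach's alpha of 0.8 for the MCQE, 0.85 for clinical skills) are computed from an examinee-by-item score matrix. As a minimal illustrative sketch of how such a coefficient is obtained, the data below are hypothetical and not drawn from the study:

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item test.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)

def cronbach_alpha(scores):
    """scores: list of examinee rows, each a list of per-item scores."""
    k = len(scores[0])  # number of items
    def var(xs):  # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

if __name__ == "__main__":
    # 5 hypothetical examinees x 4 dichotomously scored items
    data = [
        [1, 1, 1, 1],
        [1, 1, 0, 1],
        [0, 1, 0, 0],
        [1, 0, 1, 1],
        [0, 0, 0, 0],
    ]
    print(round(cronbach_alpha(data), 3))  # prints 0.741
```

Values near or above 0.8, as in this study, are conventionally taken to indicate adequate internal consistency for a knowledge test.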
Affiliation(s)
- Payal K Bansal
- Bharati Vidyapeeth University Medical College, Dhankawadi, Pune, India
- Vivek A Saoji
- Bharati Vidyapeeth University Medical College, Dhankawadi, Pune, India
28
Ringsted C, Hansen TL, Davis D, Scherpbier A. Are some of the challenging aspects of the CanMEDS roles valid outside Canada? MEDICAL EDUCATION 2006; 40:807-15. [PMID: 16869928 DOI: 10.1111/j.1365-2929.2006.02525.x] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
CONTEXT Many countries have adopted the CanMEDS roles. However, there is limited information on how these apply in an international context and in different specialties. OBJECTIVES To survey trainee and specialist ratings of the importance of the CanMEDS roles and perceived ability to perform tasks within the roles. METHODS We surveyed 8749 doctors within a defined region (eastern Denmark) via a single-issue, mailed questionnaire. Each of the 7 roles was represented by 3 questionnaire items to be rated for perceived importance and confidence in ability to perform the role. RESULTS Responses were received from 3476 doctors (42.8%), including 190 interns, 201 doctors in the introductory year of specialist training, 529 residents and 2152 specialists. The overall mean rating of importance (on a scale of 1-5) of the aspects of competence described in the CanMEDS roles was 4.2 (standard deviation 0.6) and did not differ between trainee groups and specialists. Mean ratings of confidence were lower than ratings of importance and increased across the groups from interns to specialists. Differences between specialty groups were evident in both importance and confidence for many of the roles. For laboratory, technical and, to a lesser extent, cognitive specialties, the role of Health Advocate scored the lowest in importance. For general medicine specialties, the roles of Medical Expert, Collaborator, Manager and Scholar all scored lower for importance and confidence. CONCLUSIONS This study provides a sketch of the content and construct validity of the CanMEDS roles in a non-Canadian setting. More research is needed into how these aspects of competence can best be taught and applied across specialties in different jurisdictions.
Affiliation(s)
- Charlotte Ringsted
- Centre of Clinical Education, Copenhagen University Hospital, Rigshospitalet, Copenhagen, Denmark.
29
Ringsted C, Skaarup AM, Henriksen AH, Davis D. Person-task-context: a model for designing curriculum and in-training assessment in postgraduate education. MEDICAL TEACHER 2006; 28:70-6. [PMID: 16627328 DOI: 10.1080/01421590500237721] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Abstract
Structured curricula for senior house officers (SHOs) have often been lacking. The aim of this study was to trial a person-task-context model in designing a curriculum and in-training assessment (ITA) programme for SHOs in internal medicine. A working group designed the programme based on triangulation of information from interviews with trainees and programme directors, analysis of patient case mix and national quality assurance data. The interview data showed that the main current difference between trainee levels was in the expected degree of responsibility for patient management rather than in the actual tasks performed. Key learning needs were how to take a structured approach to the tasks and gain an overview of situations. SHOs expressed a need for explicit learning goals and standards of performance, and requested formal teaching in non-medical aspects of competence such as communication, interpersonal skills and professionalism. This article points out how consideration of the type of trainees involved, the tasks they must do and learn, and the context in which they work is important in designing postgraduate curricula. The person-task-context model can be used to tailor curricula and ITA that support learning and may be especially beneficial in promoting learning in non-dominant areas of a specialty.
Affiliation(s)
- C Ringsted
- Copenhagen Hospital Corporation Postgraduate Medical Institute, Denmark.
30
Ringsted C, Pallisgaard J, Østergaard D, Scherpbier A. The effect of in-training assessment on clinical confidence in postgraduate education. MEDICAL EDUCATION 2004; 38:1261-1269. [PMID: 15566537 DOI: 10.1111/j.1365-2929.2004.02018.x] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
INTRODUCTION The literature on how in-training assessment (ITA) works in practice and what educational outcomes can actually be achieved is limited. One of the aims of introducing ITA is to increase trainees' clinical confidence; this relies on the assumption that assessment drives learning through its content, format and programming. The aim of this study was to investigate the effect of introducing a structured ITA programme on junior doctors' clinical confidence. The programme was aimed at first year trainees in anaesthesiology. METHODS The study involved a nationwide survey of junior doctors' self-confidence in clinical performance before (in 2001) and 2 years after (in 2003) the introduction of an ITA programme. Respondents indicated confidence on a 155-item questionnaire related to performance of clinical skills and tasks reflecting broad aspects of competence. A total of 23 of these items related to the ITA programme. RESULTS The response rate was 377/531 (71%) in 2001 and 344/521 (66%) in 2003. There were no statistically significant differences in mean levels of confidence before and 2 years after the introduction of the ITA programme - neither in aspects that were related to the programme nor in those that were unrelated to the programme. DISCUSSION This study demonstrates that the introduction of a structured ITA programme did not have any significant effect on trainees' mean level of confidence on a broad range of aspects of clinical competence. The importance of timeliness and rigorousness in the application of ITA is discussed.
Affiliation(s)
- Charlotte Ringsted
- Copenhagen Hospital Corporation Postgraduate Medical Institute, Bispebjerg Hospital, Copenhagen, Denmark.