151
Scheele F, Novak Z, Vetter K, Caccia N, Goverde A. Obstetrics and gynaecology training in Europe needs a next step. Eur J Obstet Gynecol Reprod Biol 2014; 180:130-2. [DOI: 10.1016/j.ejogrb.2014.04.014]
152
van der Vleuten CPM. Medical education research: a vibrant community of research and education practice. Med Educ 2014; 48:761-7. [PMID: 25039732 DOI: 10.1111/medu.12508]
Abstract
OBJECTIVES Medical education research is thriving. In recent decades, numbers of journals and publications have increased enormously, as have the number and size of medical education meetings around the world. The aim of this paper is to shed some light on the origins of this success. My central argument is that dialogue between education practice (and its teachers) and education research (and its researchers) is indispensable. REFLECTIONS To illustrate how I have come to this perspective, I discuss two crucial developments of personal import to myself. The first is the development of assessment theory informed by both research findings and insights emerging from implementations conducted in collaboration with teachers and learners. The second is the establishment of a department of education that includes many members from the medical domain. CONCLUSIONS Medical education is thriving because it is shaped and nourished within a community of practice of collaborating teachers, practitioners and researchers. This obviates the threat of a fissure between education research and education practice. The values of this community of practice - inclusiveness, openness, supportiveness, nurture and mentorship - are key elements for its sustainability. In pacing the development of our research in a manner that maintains this synergy, we should be mindful of the zone of proximal development of our community of practice.
Affiliation(s)
- Cees P M van der Vleuten
- Department of Educational Development and Research, Maastricht University, Maastricht, the Netherlands
153
Sánchez del Hierro G, Remmen R, Verhoeven V, Van Royen P, Hendrickx K. Are recent graduates enough prepared to perform obstetric skills in their rural and compulsory year? A study from Ecuador. BMJ Open 2014; 4:e005759. [PMID: 25082424 PMCID: PMC4120372 DOI: 10.1136/bmjopen-2014-005759]
Abstract
OBJECTIVES The aim of this study was to assess the possible mismatch between the obstetrical skills training offered in Ecuadorian medical schools and the tasks required during compulsory rural service. SETTING Primary care, rural health centres in Southern Ecuador. PARTICIPANTS A total of 92 recently graduated medical doctors during their compulsory rural year. PRIMARY AND SECONDARY OUTCOME MEASURES A web-based survey covering 21 obstetrical skills was developed. The questionnaire was sent to all rural doctors working for the Ministry of Health in Loja province, Southern Ecuador (n=92). We measured two categories: 'importance of skills in rural practice', on a five-point Likert-type scale (1=strongly disagree; 5=strongly agree); and 'clerkship experience', on a nominal scale divided into five levels, from level 1 (not seen, not performed) to level 5 (performed 10 times or more). Spearman's rank correlation coefficient (r) was used to assess associations. RESULTS A negative correlation was found for the skills 'episiotomy and repair', 'umbilical vein catheterisation', 'speculum examination', 'evaluation of cervical dilation during active labour', 'neonatal resuscitation' and 'vacuum-assisted vaginal delivery'. For instance, 'episiotomy and repair' was rated important (strongly agree or agree) by 100% of respondents, yet only 38.9% of rural doctors had performed the task three times, and 8.3% only once, during the internship; a similar pattern was seen for the other skills. CONCLUSIONS In this study we have noted the gap between the medical needs of populations in rural areas and the clerkship training physicians receive before their rural service year. It is imperative to ensure that rural doctors are appropriately trained and skilled in the performance of routine obstetrical duties. This will help to decrease perinatal morbidity and mortality in rural Ecuador.
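The association analysis in this study uses Spearman's rank correlation. As a hedged illustration of that statistic, a minimal self-contained sketch on hypothetical Likert-style data (the real survey responses are not reproduced in the abstract) might look like this:

```python
# Minimal sketch of Spearman's rank correlation, the statistic named above.
# The example data are hypothetical, not the study's survey responses.

def average_ranks(values):
    """Rank values from 1..n, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank-transformed data."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical pattern echoing the result: high rated importance, low exposure
importance = [5, 5, 4, 5, 4, 5]   # Likert 1-5
experience = [2, 1, 3, 1, 3, 2]   # exposure level 1-5
print(round(spearman_rho(importance, experience), 2))  # → -0.87
```

A negative rho of this kind is what the abstract describes: the skills rated most important were the ones least practised.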
Affiliation(s)
- Galo Sánchez del Hierro
- Departamento de Ciencias de la Salud, Universidad Técnica Particular de Loja, Loja, Ecuador
- Department of Primary and Interdisciplinary Care, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Roy Remmen
- Department of Primary and Interdisciplinary Care, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Veronique Verhoeven
- Department of Primary and Interdisciplinary Care, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Paul Van Royen
- Department of Primary and Interdisciplinary Care, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Kristin Hendrickx
- Department of Primary and Interdisciplinary Care, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
154
Lafave MR, Butterwick DJ. A Generalizability Theory Study of Athletic Taping Using the Technical Skill Assessment Instrument. J Athl Train 2014; 49:368-72. [DOI: 10.4085/1062-6050-49.2.22]
Abstract
Context:
Athletic taping skills are highly valued clinical competencies in the athletic therapy and training profession. The Technical Skill Assessment Instrument (TSAI) has been content validated and tested for intrarater reliability.
Objective:
To test the reliability of the TSAI using a more robust measure of reliability, generalizability theory, and to hypothetically and mathematically project the optimal number of raters and scenarios to reliably measure athletic taping skills in the future.
Setting:
Mount Royal University.
Design:
Observational study.
Patients or Other Participants:
A total of 29 university students (8 men, 21 women; age = 20.79 ± 1.59 years) from the Athletic Therapy Program at Mount Royal University.
Intervention(s):
Participants were allowed 10 minutes per scenario to complete prophylactic taping for a standardized patient presenting with (1) a 4-week-old second-degree ankle sprain and (2) a thumb that had been hyperextended. Two raters judged student performance using the TSAI.
Main Outcome Measure(s):
Generalizability coefficients were calculated using variance scores for raters, participants, and scenarios. A decision study was calculated to project the optimal number of raters and scenarios to achieve acceptable levels of reliability. Generalizability coefficients were interpreted the same as other reliability coefficients, with 0 indicating no reliability and 1.0 indicating perfect reliability.
Results:
The result of our study design (2 raters, 1 standardized patient, 2 scenarios) was a generalizability coefficient of 0.67. Decision study projections indicated that 4 scenarios were necessary to reliably measure athletic taping skills.
Conclusions:
We found moderate reliability coefficients. Researchers should include more scenarios to reliably measure athletic taping skills. They should also focus on the development of evidence-based practice guidelines and standards of athletic taping and should test those standards using a psychometrically sound instrument, such as the TSAI.
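The generalizability coefficient and decision-study projection described above can be sketched for a fully crossed person x scenario x rater random design. The variance components below are hypothetical placeholders, not the study's estimates:

```python
# Hedged sketch of a relative G coefficient and a decision (D) study for a
# fully crossed person x scenario x rater design. Components are hypothetical.

def g_coefficient(var_p, var_ps, var_pr, var_resid, n_scenarios, n_raters):
    """Relative G coefficient: person variance over person-plus-error variance."""
    rel_error = (var_ps / n_scenarios
                 + var_pr / n_raters
                 + var_resid / (n_scenarios * n_raters))
    return var_p / (var_p + rel_error)

def d_study(var_p, var_ps, var_pr, var_resid, n_raters, target=0.80):
    """Smallest number of scenarios that reaches the target reliability."""
    for n_scenarios in range(1, 51):
        if g_coefficient(var_p, var_ps, var_pr, var_resid,
                         n_scenarios, n_raters) >= target:
            return n_scenarios
    return None

# With placeholder components (4, 2, 1, 2) and two raters, two scenarios give
# G = 4 / (4 + 1 + 0.5 + 0.5) ≈ 0.67, comparable to the coefficient reported.
print(round(g_coefficient(4, 2, 1, 2, 2, 2), 2))  # → 0.67
print(d_study(4, 2, 1, 2, 2, target=0.75))        # → 4
```

The published D study, using the real estimated components, is what yields the paper's conclusion that four scenarios are needed; the numbers here only illustrate the mechanics.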
Affiliation(s)
- Mark R. Lafave
- Department of Physical Education and Recreation Studies, Mount Royal University, Calgary, AB, Canada
155
van der Vleuten CPM, Driessen EW. What would happen to education if we take education evidence seriously? Perspect Med Educ 2014; 3:222-232. [PMID: 24925627 PMCID: PMC4078056 DOI: 10.1007/s40037-014-0129-9]
Abstract
Educational practice and educational research are not aligned with each other. Current educational practice relies heavily on information transmission or content delivery to learners. Yet evidence shows that delivery is only a minor part of learning. To illustrate the directions we might take to find better educational strategies, six areas of educational evidence are briefly reviewed. The flipped classroom idea is proposed to shift our expenditure and focus in education. All information delivery could be web distributed, thus creating more time for other, more expensive educational strategies to support the learner. In research, our focus should shift from comparing one curriculum with another to research that explains why things work in education and under which conditions. This may generate ideas for creative designers to develop new educational strategies. These best practices should be shared and further researched. At the same time, attention should be paid to implementation and to the realization that teachers learn in a way very similar to the people they teach. If we take the evidence seriously, our educational practice will look quite different from the way it does now.
Affiliation(s)
- C P M van der Vleuten
- Department of Educational Development and Research, Maastricht University, PO Box 616, 6200 MD, Maastricht, the Netherlands
- E W Driessen
- Department of Educational Development and Research, Maastricht University, PO Box 616, 6200 MD, Maastricht, the Netherlands
156
Ferguson J, Wakeling J, Bowie P. Factors influencing the effectiveness of multisource feedback in improving the professional practice of medical doctors: a systematic review. BMC Med Educ 2014; 14:76. [PMID: 24725268 PMCID: PMC4011765 DOI: 10.1186/1472-6920-14-76]
Abstract
BACKGROUND Multisource feedback (MSF) is currently being introduced in the UK as part of a cycle of performance review for doctors. However, although it is suggested that the provision of feedback can lead to a positive change in performance and learning for medical professionals, the evidence supporting these assumptions is unclear. The aim of this review, therefore, was to identify the key factors that influence the effectiveness of multisource feedback in improving the professional practice of medical doctors. METHOD Relevant electronic bibliographic databases were searched for studies that aimed to assess the impact of MSF on professional practice. Two reviewers independently selected and quality-assessed the studies and abstracted data regarding study design, setting, MSF instrument, behaviour changes identified and influencing factors, using a standard data extraction form. RESULTS A total of 16 studies met the inclusion and quality assessment criteria. While seven studies reported only a general change in professional practice, a further seven identified specific changes in behaviour. The professional behaviours most influenced by the feedback were communication, both with colleagues and with patients, and clinical competence/skills. The main factors found to influence the acceptance and use of MSF were the format of the feedback, specifically whether it was facilitated and whether narrative comments were included in the review, and whether the feedback came from sources the physician believed to be knowledgeable and credible. CONCLUSIONS While there is limited evidence suggesting that MSF can influence professional performance, the quality of this evidence is variable. Further research is necessary to establish how this type of feedback actually influences behaviours and which factors have the greatest influence.
Affiliation(s)
- Julie Ferguson
- NHS Education for Scotland, 3rd Floor, 2 Central Quay, 89 Hydepark Street, Glasgow G3 8BW, Scotland
- Queen Margaret University, Edinburgh EH21 6UU, Scotland
- Judy Wakeling
- NHS Education for Scotland, 3rd Floor, 2 Central Quay, 89 Hydepark Street, Glasgow G3 8BW, Scotland
- Paul Bowie
- NHS Education for Scotland, 3rd Floor, 2 Central Quay, 89 Hydepark Street, Glasgow G3 8BW, Scotland
157
van den Eertwegh V, van Dalen J, van Dulmen S, van der Vleuten C, Scherpbier A. Residents' perceived barriers to communication skills learning: comparing two medical working contexts in postgraduate training. Patient Educ Couns 2014; 95:91-7. [PMID: 24468200 DOI: 10.1016/j.pec.2014.01.002]
Abstract
OBJECTIVE Contextual factors are known to influence the acquisition and application of communication skills in clinical settings. Little is known about residents' perceptions of these factors. This article aims to explore residents' perceptions of contextual factors affecting the acquisition and application of communication skills in the medical workplace. METHOD We conducted an exploratory study comprising seven focus groups with residents in two different specialities: general practice (n=23) and surgery (n=18). RESULTS Residents perceive the use of summative assessment checklists that reduce communication skills to behavioural components as impeding the learning of those skills. The factors they perceive as most enhancing are encouragement to practise deliberately in an environment that recognises the value of communication skills, where support is institutionalised and appropriate feedback comes from role models. CONCLUSION To gradually realise a clinical working environment that incorporates these findings, we propose using transformative learning theory to guide further studies. PRACTICAL IMPLICATIONS Provided it is used continuously, an approach that combines self-directed learning with observation and discussion of resident-patient consultations seems an effective method for transformative learning of communication skills.
Affiliation(s)
- Jan van Dalen
- Skillslab, Maastricht University, Maastricht, The Netherlands
- Sandra van Dulmen
- NIVEL (Netherlands Institute for Health Services Research), Utrecht, The Netherlands; Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands; Buskerud University College, Drammen, Norway
- Cees van der Vleuten
- Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands; University of Copenhagen, Copenhagen, Denmark; King Saud University, Riyadh, Saudi Arabia; Radboud University, Nijmegen, The Netherlands
- Albert Scherpbier
- Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
158
Burt J, Abel G, Elmore N, Campbell J, Roland M, Benson J, Silverman J. Assessing communication quality of consultations in primary care: initial reliability of the Global Consultation Rating Scale, based on the Calgary-Cambridge Guide to the Medical Interview. BMJ Open 2014; 4:e004339. [PMID: 24604483 PMCID: PMC3948635 DOI: 10.1136/bmjopen-2013-004339]
Abstract
OBJECTIVES To investigate initial reliability of the Global Consultation Rating Scale (GCRS: an instrument to assess the effectiveness of communication across an entire doctor-patient consultation, based on the Calgary-Cambridge guide to the medical interview), in simulated patient consultations. DESIGN Multiple ratings of simulated general practitioner (GP)-patient consultations by trained GP evaluators. SETTING UK primary care. PARTICIPANTS 21 GPs and six trained GP evaluators. OUTCOME MEASURES GCRS score. METHODS 6 GP raters used GCRS to rate randomly assigned video recordings of GP consultations with simulated patients. Each of the 42 consultations was rated separately by four raters. We considered whether a fixed difference between scores had the same meaning at all levels of performance. We then examined the reliability of GCRS using mixed linear regression models. We augmented our regression model to also examine whether there were systematic biases between the scores given by different raters and to look for possible order effects. RESULTS Assessing the communication quality of individual consultations, GCRS achieved a reliability of 0.73 (95% CI 0.44 to 0.79) for two raters, 0.80 (0.54 to 0.85) for three and 0.85 (0.61 to 0.88) for four. We found an average difference of 1.65 (on a 0-10 scale) in the scores given by the least and most generous raters: adjusting for this evaluator bias increased reliability to 0.78 (0.53 to 0.83) for two raters; 0.85 (0.63 to 0.88) for three and 0.88 (0.69 to 0.91) for four. There were considerable order effects, with later consultations (after 15-20 ratings) receiving, on average, scores more than one point higher on a 0-10 scale. CONCLUSIONS GCRS shows good reliability with three raters assessing each consultation. We are currently developing the scale further by assessing a large sample of real-world consultations.
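The way the reported coefficients grow with the number of raters is roughly consistent with the classical Spearman-Brown prophecy formula. As a hedged sketch (the paper itself derives its figures from mixed linear regression models, not from this formula):

```python
# Hedged sketch: Spearman-Brown projection of reliability for k raters.
# The study above uses mixed models; this classical formula only
# approximates how reliability scales with rater count.

def spearman_brown(r_single, k):
    """Reliability of the mean of k parallel ratings."""
    return k * r_single / (1 + (k - 1) * r_single)

def single_rater_reliability(r_k, k):
    """Invert Spearman-Brown: single-rater reliability from a k-rater figure."""
    return r_k / (k - (k - 1) * r_k)

# Starting from the reported two-rater reliability of 0.73:
r1 = single_rater_reliability(0.73, 2)
print(round(spearman_brown(r1, 3), 2),
      round(spearman_brown(r1, 4), 2))  # → 0.8 0.84 (reported: 0.80 and 0.85)
```

That the projections land close to the reported three- and four-rater figures suggests the raters behaved approximately as parallel measures in this sample.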
Affiliation(s)
- Jenni Burt
- Cambridge Centre for Health Services Research, University of Cambridge, Cambridge, UK
- Gary Abel
- Cambridge Centre for Health Services Research, University of Cambridge, Cambridge, UK
- Natasha Elmore
- Cambridge Centre for Health Services Research, University of Cambridge, Cambridge, UK
- John Campbell
- University of Exeter Medical School, University of Exeter, Exeter, UK
- Martin Roland
- Cambridge Centre for Health Services Research, University of Cambridge, Cambridge, UK
- John Benson
- Primary Care Unit, University of Cambridge, Cambridge, UK
159
160
Sheepway L, Lincoln M, McAllister S. Impact of placement type on the development of clinical competency in speech-language pathology students. Int J Lang Commun Disord 2014; 49:189-203. [PMID: 24182204 DOI: 10.1111/1460-6984.12059]
Abstract
BACKGROUND Speech-language pathology students gain experience and clinical competency through clinical education placements. However, little empirical information currently exists about how competency develops. Existing research on the effectiveness of placement types and models in developing competency is largely descriptive and based on opinions and perceptions. The changing nature of speech-language pathology education, diverse student cohorts, and the crisis in finding sufficient clinical education placements make it necessary to establish the most effective and efficient methods for developing clinical competency in students. AIMS To gather empirical information regarding the development of competence in speech-language pathology students, and to determine whether growth of competency differs between groups of students completing placements that differ in caseload, intensity and setting. METHODS & PROCEDURES Participants were students in the third year of a four-year undergraduate speech-language pathology degree who completed three clinical placements across the year and were assessed with the COMPASS® competency assessment tool. Competency development for the whole group across the three placements is described, and growth of competency is compared between groups of students completing different placement types. Interval-level data generated from the students' COMPASS® results were subjected to parametric statistical analyses. OUTCOMES & RESULTS The whole group increased significantly in competency from placement to placement across different placement settings, intensities and client age groups. Groups completing child placements achieved significantly higher growth in competency than students completing adult placements. Growth of competency did not differ significantly between placement intensities or placement settings. CONCLUSIONS & IMPLICATIONS These results confirm that the competency of speech-language pathology students develops across three clinical placements over a one-year period regardless of placement type or context, indicating that there may be a transfer of learning between placement types. Further research investigating patterns of competency development in speech-language pathology students is warranted to ensure that assumptions used to design clinical learning opportunities are based on valid evidence.
Affiliation(s)
- Lyndal Sheepway
- Work Integrated Learning, Faculty of Health Sciences, University of Sydney, Lidcombe, NSW, Australia
161
Southgate L, van der Vleuten CPM. A conversation about the role of medical regulators. Med Educ 2014; 48:215-218. [PMID: 24528403 DOI: 10.1111/medu.12309]
162
Govaerts M, van der Vleuten CPM. Validity in work-based assessment: expanding our horizons. Med Educ 2013; 47:1164-74. [PMID: 24206150 DOI: 10.1111/medu.12289]
Abstract
CONTEXT Although work-based assessments (WBA) may come closest to assessing habitual performance, their use for summative purposes is not undisputed. Most criticism of WBA stems from approaches to validity consistent with the quantitative psychometric framework. However, there is increasing research evidence indicating that the assumptions underlying the predictive, deterministic framework of psychometrics may no longer hold. In this discussion paper we argue that the meaningfulness and appropriateness of current validity evidence can be called into question and that we need alternative strategies to assessment and validity inquiry that build on current theories of learning and performance in complex and dynamic workplace settings. METHODS Drawing from research in various professional fields, we outline key issues within the mechanisms of learning, competence and performance in the context of complex social environments and illustrate their relevance to WBA. In reviewing recent socio-cultural learning theory and research on performance and performance interpretations in work settings, we demonstrate that learning, competence (as inferred from performance) and performance interpretations are inherently contextualised and can only be understood 'in situ'. Assessment in the context of work settings may, therefore, be more usefully viewed as a socially situated interpretive act. DISCUSSION We propose constructivist-interpretivist approaches towards WBA in order to capture and understand contextualised learning and performance in work settings. The theoretical assumptions underlying interpretivist assessment approaches call for a validity theory that provides the framework and conceptual tools to guide the validation process in qualitative assessment inquiry. Basic principles of rigour specific to qualitative research have been established, and they can and should be used to determine validity in interpretivist assessment approaches. If used properly, these strategies generate the trustworthy evidence needed to develop the validity argument in WBA, allowing for in-depth and meaningful information about professional competence.
Affiliation(s)
- Marjan Govaerts
- Educational Development and Research, Maastricht University, Maastricht, the Netherlands
163
Moonen-van Loon JMW, Overeem K, Donkers HHLM, van der Vleuten CPM, Driessen EW. Composite reliability of a workplace-based assessment toolbox for postgraduate medical education. Adv Health Sci Educ Theory Pract 2013; 18:1087-102. [PMID: 23494202 DOI: 10.1007/s10459-013-9450-z]
Abstract
In recent years, postgraduate assessment programmes around the world have embraced workplace-based assessment (WBA) and its related tools. Despite their widespread use, results of studies on the validity and reliability of these tools have been variable. Although in many countries decisions about residents' continuation of training and certification as a specialist are based on the composite results of different WBAs collected in a portfolio, to our knowledge, the reliability of such a WBA toolbox has never been investigated. Using generalisability theory, we analysed the separate and composite reliability of three WBA tools [mini-Clinical Evaluation Exercise (mini-CEX), direct observation of procedural skills (DOPS), and multisource feedback (MSF)] included in a resident portfolio. G-studies and D-studies of 12,779 WBAs from a total of 953 residents showed that a reliability coefficient of 0.80 was obtained for eight mini-CEXs, nine DOPS, and nine MSF rounds, whilst the same reliability was found for seven mini-CEXs, eight DOPS, and one MSF when combined in a portfolio. At the end of the first year of residency a portfolio with five mini-CEXs, six DOPS, and one MSF afforded reliable judgement. The results support the conclusion that several WBA tools combined in a portfolio can be a feasible and reliable method for high-stakes judgements.
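One standard way to roll instrument-level reliabilities up into a portfolio-level figure is via the error variance of a weighted sum, assuming measurement errors are uncorrelated across instruments. This is a hedged, generic sketch; the paper's generalisability analysis of 12,779 WBAs is considerably more elaborate:

```python
# Hedged sketch of composite reliability for a weighted sum of instruments
# (e.g. mini-CEX, DOPS, MSF scores in a portfolio), assuming errors are
# uncorrelated across instruments. Generic formula, not the paper's model.

def composite_reliability(weights, sds, reliabilities, observed_corr):
    """1 - (composite error variance) / (composite observed variance)."""
    m = len(weights)
    comp_var = sum(weights[i] * weights[j] * sds[i] * sds[j] * observed_corr[i][j]
                   for i in range(m) for j in range(m))
    err_var = sum((weights[i] * sds[i]) ** 2 * (1 - r)
                  for i, r in enumerate(reliabilities))
    return 1 - err_var / comp_var

# Two equally weighted tools, each with reliability 0.8 and an observed
# correlation of 0.8 between them:
print(round(composite_reliability([1, 1], [1, 1], [0.8, 0.8],
                                  [[1, 0.8], [0.8, 1]]), 3))  # → 0.889
```

For a single instrument the formula reduces to that instrument's own reliability; the two-tool example reproduces the Spearman-Brown value 2 × 0.8 / 1.8 ≈ 0.89, which illustrates why a portfolio can reach a 0.80 threshold with fewer observations per tool than any tool needs alone.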
Affiliation(s)
- J M W Moonen-van Loon
- Department of Educational Research and Development, Faculty of Health, Medicine, and Life Sciences, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
164
van der Leeuw RM, Overeem K, Arah OA, Heineman MJ, Lombarts KMJMH. Frequency and determinants of residents' narrative feedback on the teaching performance of faculty: narratives in numbers. Acad Med 2013; 88:1324-31. [PMID: 23886996 DOI: 10.1097/acm.0b013e31829e3af4]
Abstract
PURPOSE Physicians involved in residency training often receive feedback from residents on their teaching. Research shows that learners value narrative feedback, but knowledge of the frequency and determinants of narrative feedback in teaching performance evaluation is lacking. This study aims to identify the frequency with which residents gave positive comments and suggestions for improvement to faculty, and the factors influencing that frequency. METHOD From September 2008 through May 2010, the authors collected data using a validated formative feedback system (System for Evaluation of Teaching Qualities). The authors used univariate and multivariable analyses to investigate the associations between participants' characteristics, including faculty members' teaching performance, and the frequency of the two types of narrative comments. RESULTS In total, 659 residents (79% of 839) completed 6,216 evaluations of 917 faculty (95% of 964), resulting in 11,574 positive comments and 4,870 suggestions for improvement. On average, faculty members received 13 positive comments and 5 suggestions for improvement. Multivariable analysis showed that higher teaching performance was associated with more positive comments (regression coefficient 0.538; 95% confidence interval: 0.464 to 0.613) and with fewer suggestions for improvement (-0.802; -0.911 to -0.692), both P < .0001. Nonacademic hospitals, participation in teacher training, and evaluation by female residents were statistically significant determinants of receiving more narrative feedback. CONCLUSIONS Residents provided narrative feedback that paralleled and elaborated on the quantitative evaluations they provided; faculty would therefore be wise to attend to narrative feedback. Analysis of the quality of narrative feedback is needed to understand its effectiveness.
Affiliation(s)
- Renée M van der Leeuw
- Center for Evidence-based Education, Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands.
165
Pelgrim EAM, Kramer AWM, Mokkink HGA, van der Vleuten CPM. Reflection as a component of formative assessment appears to be instrumental in promoting the use of feedback; an observational study. Med Teach 2013; 35:772-8. [PMID: 23808841 DOI: 10.3109/0142159x.2013.801939]
Abstract
BACKGROUND Although the literature suggests that reflection has a positive impact on learning, there is a paucity of evidence to support this notion. AIM We investigated feedback and reflection in relation to the likelihood that feedback will be used to inform action plans. We hypothesised that feedback and reflection form a cumulative sequence (i.e. trainees' reflections occur only when trainers have provided specific feedback) and we hypothesised a supplementary effect of reflection. METHOD We analysed copies of assessment forms containing trainees' reflections and trainers' feedback on observed clinical performance. We determined whether the response patterns revealed cumulative sequences in line with the Guttman scale. We further examined the relationship between reflection, feedback and the mean number of specific comments related to an action plan (ANOVA), and we calculated two effect sizes. RESULTS Both hypotheses were confirmed. The response pattern showed an almost perfect fit with the Guttman scale (0.99), and reflection appears to have a supplementary effect on the action-plan variable. CONCLUSIONS Reflection only occurs when a trainer has provided specific feedback, and trainees who reflect on their performance are more likely to make use of feedback. These results confirm findings and suggestions reported in the literature.
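The Guttman-scale fit reported above can be illustrated with a coefficient of reproducibility computed on binary response patterns. This is a hedged sketch on made-up patterns with two "items" (specific feedback given, reflection present), using the Goodenough-Edwards error-counting convention rather than necessarily the paper's exact procedure:

```python
# Hedged sketch: Guttman coefficient of reproducibility for binary patterns.
# Items are assumed ordered easiest-first; errors are counted against the
# ideal cumulative pattern with the same number of positive responses.

def reproducibility(patterns):
    """patterns: list of 0/1 tuples, items ordered easiest-first."""
    errors = 0
    for pattern in patterns:
        total = sum(pattern)
        ideal = tuple(1 if i < total else 0 for i in range(len(pattern)))
        errors += sum(a != b for a, b in zip(pattern, ideal))
    return 1 - errors / (len(patterns) * len(patterns[0]))

# Perfectly cumulative data: reflection (item 2) occurs only with feedback (item 1)
perfect = [(1, 1), (1, 0), (0, 0), (1, 1)]
# One deviant pattern: reflection recorded without specific feedback
mixed = perfect + [(0, 1)]
print(reproducibility(perfect), reproducibility(mixed))  # → 1.0 0.8
```

A coefficient near 1, like the 0.99 reported above, means almost every form fits the cumulative "feedback first, then reflection" ordering.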
Affiliation(s)
- E A M Pelgrim
- Radboud University Nijmegen Medical Centre, the Netherlands.
166
Govaerts MJB, Van de Wiel MWJ, Schuwirth LWT, Van der Vleuten CPM, Muijtjens AMM. Workplace-based assessment: raters' performance theories and constructs. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2013; 18:375-96. [PMID: 22592323 PMCID: PMC3728456 DOI: 10.1007/s10459-012-9376-x] [Citation(s) in RCA: 116] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/21/2011] [Accepted: 04/25/2012] [Indexed: 05/14/2023]
Abstract
Weaknesses in the nature of rater judgments are generally considered to compromise the utility of workplace-based assessment (WBA). In order to gain insight into the underpinnings of rater behaviours, we investigated how raters form impressions of and make judgments on trainee performance. Using theoretical frameworks of social cognition and person perception, we explored raters' implicit performance theories, use of task-specific performance schemas and the formation of person schemas during WBA. We used think-aloud procedures and verbal protocol analysis to investigate schema-based processing by experienced (N = 18) and inexperienced (N = 16) raters (supervisor-raters in general practice residency training). Qualitative data analysis was used to explore schema content and usage. We quantitatively assessed rater idiosyncrasy in the use of performance schemas and we investigated effects of rater expertise on the use of (task-specific) performance schemas. Raters used different schemas in judging trainee performance. We developed a normative performance theory comprising seventeen inter-related performance dimensions. Levels of rater idiosyncrasy were substantial and unrelated to rater expertise. Experienced raters made significantly more use of task-specific performance schemas compared to inexperienced raters, suggesting more differentiated performance schemas in experienced raters. Most raters started to develop person schemas the moment they began to observe trainee performance. The findings further our understanding of processes underpinning judgment and decision making in WBA. Raters make and justify judgments based on personal theories and performance constructs. Raters' information processing seems to be affected by differences in rater expertise. The results of this study can help to improve rater training, the design of assessment instruments and decision making in WBA.
Affiliation(s)
- M J B Govaerts
- Department of Educational Research and Development, FHML, Maastricht University, PO Box 616, 6200 MD Maastricht, The Netherlands.
167
Driessen E, Scheele F. What is wrong with assessment in postgraduate training? Lessons from clinical practice and educational research. MEDICAL TEACHER 2013; 35:569-74. [PMID: 23701250 DOI: 10.3109/0142159x.2013.798403] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Workplace-based assessment is more commonly given a lukewarm than a warm welcome by its prospective users. In this article, we summarise the workplace-based assessment literature as well as our own experiences with workplace-based assessment to derive lessons that can facilitate acceptance of workplace-based assessment in postgraduate specialty training. We propose to shift the emphasis in workplace-based assessment from assessment of trainee performance to the learning of trainees. Workplace-based assessment should focus on supporting supervisors in taking entrustment decisions by complementing their "gut feeling" with information from assessments and focus less on assessment and testability. One of the most stubborn problems with workplace-based assessment is the absence of observation of trainees and the lack of feedback based on observations. Non-standardised observations are used to organise feedback. To make these assessments meaningful for learning, it is essential that they are not perceived as summative by their users, that they provide narrative feedback for the learner and that there is a form of facilitation that helps to integrate the feedback in trainees' self-assessments.
168
Affiliation(s)
- Felix Ankel
- Department of Emergency Medicine; Regions Hospital; St. Paul MN
- Douglas Franzen
- Department of Emergency Medicine; Virginia Commonwealth University; Richmond VA
- Jason Frank
- Department of Emergency Medicine; University of Ottawa; Ottawa, Ontario, Canada
169
DeNunzio NJ, Joseph L, Handal R, Agarwal A, Ahuja D, Hirsch AE. Devising the optimal preclinical oncology curriculum for undergraduate medical students in the United States. JOURNAL OF CANCER EDUCATION : THE OFFICIAL JOURNAL OF THE AMERICAN ASSOCIATION FOR CANCER EDUCATION 2013; 28:228-36. [PMID: 23681770 DOI: 10.1007/s13187-012-0442-0] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
A third of women and a near majority of men in the United States will be diagnosed with cancer in their lifetimes. To prepare future physicians for this reality, we have developed a preclinical oncology curriculum that introduces second-year medical students to essential concepts and practices in oncology to improve their abilities to appropriately care for these patients. We surveyed the oncology and education literature and compiled subjects important to students' education including basic science and clinical aspects of oncology and addressing patients' psychosocial needs. Along with the proposed curriculum content, scheduling, independent learning exercises, and case studies, we discuss practical considerations for curriculum implementation based on experience at our institution. Given the changing oncology healthcare landscape, all (new) physicians must competently address their cancer patients' needs, regardless of chosen specialty. A thorough and logically organized cancer curriculum for preclinical medical students should help achieve these aims. This new model curriculum, with accompanying strategies to evaluate its efforts, is essential to update how medical students are educated about cancer.
Affiliation(s)
- Nicholas J DeNunzio
- Department of Radiation Oncology, Boston University School of Medicine, 830 Harrison Avenue, Moakley Building - LL, Boston, MA, 02118, USA
170
Sulaiman ND, Hamdy H. Assessment of clinical competencies using clinical images and videos "CIVA". BMC MEDICAL EDUCATION 2013; 13:78. [PMID: 23721093 PMCID: PMC3673902 DOI: 10.1186/1472-6920-13-78] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/12/2012] [Accepted: 05/07/2013] [Indexed: 06/02/2023]
Abstract
BACKGROUND This paper describes an approach to the assessment of clinical competencies that widens the number of problems and tasks evaluated by using videos and images. METHOD Clinical Image and Video Assessment (CIVA) was used to assess the clinical reasoning and decision making of final-year medical students. Forty to fifty clinical videos and images, each supported by a rich text vignette and reviewed by subject-matter experts, were selected based on examination blueprints. CIVA scores were correlated with OSCE, Direct Observation Clinical Encounter Exam (DOCEE) and written exam scores using two-sided Pearson correlation analysis, and their reliability was analysed using Cronbach's alpha coefficient. Furthermore, students evaluated the CIVA using a 5-point Likert scale. RESULTS CIVA and OSCE scores showed a high correlation (r = 0.83), in contrast with the correlations with the written examination (r = 0.36) and the DOCEE (r = 0.35). Cronbach's alpha for the OSCE and CIVA was 0.71 and 0.78 for the first batch, and 0.91 and 0.91 respectively for the second batch. Eighty-two percent of students were very satisfied or satisfied with the CIVA process, contents and quality. CONCLUSIONS A well-constructed CIVA-type assessment with rich authentic vignettes and good-quality videos and images could be used to assess the clinical reasoning and decision making of final-year medical students. CIVA is an assessment tool that correlates well with the OSCE, complements the written exam and the DOCEE, and is easier to conduct at a possibly reduced cost.
Affiliation(s)
- Nabil D Sulaiman
- Department of Family and Community Medicine and Behavioural Sciences, University of Sharjah, Sharjah, United Arab Emirates
- Hossam Hamdy
- College of Medicine, University of Sharjah, Sharjah, United Arab Emirates
171
van der Vleuten C, Verhoeven B. In-training assessment developments in postgraduate education in Europe. ANZ J Surg 2013; 83:454-9. [DOI: 10.1111/ans.12190] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/13/2013] [Indexed: 11/27/2022]
Affiliation(s)
- Bas Verhoeven
- Department of Surgery; Maastricht University Medical Centre; Maastricht; The Netherlands
172
Whitehead CR, Austin Z, Hodges BD. Continuing the competency debate: reflections on definitions and discourses. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2013; 18:123-7. [PMID: 23053866 DOI: 10.1007/s10459-012-9407-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/31/2012] [Accepted: 09/05/2012] [Indexed: 05/14/2023]
Affiliation(s)
- C R Whitehead
- Department of Family and Community Medicine, Faculty of Medicine, University of Toronto, 500 University Ave., 5th Floor, Toronto, ON M5G 1V7, Canada.
173
Alves de Lima A, Conde D, Costabel J, Corso J, Van der Vleuten C. A laboratory study on the reliability estimations of the mini-CEX. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2013; 18:5-13. [PMID: 22193944 PMCID: PMC3569586 DOI: 10.1007/s10459-011-9343-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/08/2011] [Accepted: 12/08/2011] [Indexed: 05/24/2023]
Abstract
Reliability estimations of workplace-based assessments with the mini-CEX are typically based on real-life data. Such estimations rest on the assumption of local independence: the object of measurement should not be influenced by the measurement itself, and samples should be completely independent. This is difficult to achieve. Furthermore, the variance caused by the case/patient or by the assessor is completely confounded: we have no idea how much each of these factors contributes to the noise in the measurement. The aim of this study was to use a controlled setup that overcomes these difficulties and to estimate the reproducibility of the mini-CEX. Three encounters were videotaped for each of 21 residents. The patients were the same for all residents. Each encounter was assessed by 3 assessors, who assessed all encounters for all residents. This delivered a fully crossed (all random) two-facet generalizability design. Universe score variance accounted for 28% of the total variance. The largest source of variance was the general error term (34%), followed by the main effect of assessors (18%). Generalizability coefficients indicated that a sample of approximately 9 encounters was needed assuming a single different assessor per encounter and different cases per encounter (the usual situation in real practice), 4 encounters when 2 raters were used, and 3 encounters when 3 raters were used. Unexplained general error and the leniency/stringency of assessors are the major causes of unreliability in the mini-CEX. Rater training might help to optimise reliability.
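The decision-study logic behind these sample-size estimates follows standard generalizability theory: the coefficient is universe-score (person) variance divided by itself plus error variance averaged over the number of sampled encounters. A minimal sketch; the variance shares below are simplified from the percentages in the abstract (all non-person variance lumped into one error term), so the resulting numbers are illustrative rather than the study's exact estimates:

```python
def g_coefficient(person_var, error_var, n_samples):
    """Generalizability coefficient Ep^2 = person_var / (person_var + error_var / n)."""
    return person_var / (person_var + error_var / n_samples)

# Simplified variance shares loosely based on the abstract: 28% universe
# (person) score variance; the remaining 72% lumped into a single error
# term that is averaged down as more encounters are sampled.
coefficients = {n: g_coefficient(0.28, 0.72, n) for n in (1, 3, 4, 9)}
```

The pattern matches the abstract's logic: each additional independently-sampled encounter divides the error term further, so reliability rises with the number of encounters and with the number of raters per encounter.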
Affiliation(s)
- Alberto Alves de Lima
- Instituto Cardiovascular de Buenos Aires, Blanco Encalada 1525, 1428 Ciudad de Buenos Aires, Buenos Aires, Argentina.
174
Ragan RE, Virtue DW, Chi SJ. An assessment program using standardized clients to determine student readiness for clinical practice. AMERICAN JOURNAL OF PHARMACEUTICAL EDUCATION 2013; 77:14. [PMID: 23459249 PMCID: PMC3578327 DOI: 10.5688/ajpe77114] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/20/2012] [Accepted: 08/26/2012] [Indexed: 05/16/2023]
Abstract
Objective. To develop, implement, and review a competence-assessment program to identify students at risk of underperforming at advanced pharmacy practice experience (APPE) sites and to facilitate remediation before they assume responsibility for patient care. Design. As part of the standardized client program, pharmacy students were examined in realistic live client-encounter simulations. Authentic scenarios were developed, and actors were recruited and trained to portray clients so students could be examined solving multiple pharmacy problems. Evaluations of students were conducted in the broad areas of knowledge and live performance. Assessment. Measurements included student-experience survey instruments used to evaluate case realism and challenge; videos used to determine the fidelity of standardized clients; and clerkship performance predictions used to identify students who required individual attention and improvement prior to clerkship courses. Conclusions. The assessment program showed promise as a means of discriminating between students who are prepared for APPEs and those at risk for underperforming.
Affiliation(s)
- Ronald E Ragan
- The University of Kansas School of Pharmacy, Lawrence, KS
175
Govaerts MJB, van de Wiel MWJ, van der Vleuten CPM. Quality of feedback following performance assessments: does assessor expertise matter? EUROPEAN JOURNAL OF TRAINING AND DEVELOPMENT 2013. [DOI: 10.1108/03090591311293310] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
176
Swanson DB, van der Vleuten CPM. Assessment of clinical skills with standardized patients: state of the art revisited. TEACHING AND LEARNING IN MEDICINE 2013; 25 Suppl 1:S17-25. [PMID: 24246102 DOI: 10.1080/10401334.2013.842916] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Affiliation(s)
- David B Swanson
- National Board of Medical Examiners, Philadelphia, Pennsylvania, USA
177
Kogan JR, Holmboe E. Realizing the promise and importance of performance-based assessment. TEACHING AND LEARNING IN MEDICINE 2013; 25 Suppl 1:S68-74. [PMID: 24246110 DOI: 10.1080/10401334.2013.842912] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Work-based assessment (WBA) is the assessment, across the educational continuum, of trainees' and physicians' day-to-day competencies and practices in authentic clinical environments. What distinguishes WBA from other assessment modalities is that it enables the evaluation of performance in context. In this perspective, we describe the growing importance, relevance, and evolution of WBA as it relates to competency-based medical education, supervision, and entrustment. Although a systematic review is beyond the purview of this perspective, we highlight specific methods and needed shifts to WBA that (a) consider patient outcomes, (b) use nonphysician assessors, and (c) assess the care provided to populations of patients. We briefly describe strategies for the effective implementation of WBA and identify outstanding research questions related to its use.
Affiliation(s)
- Jennifer R Kogan
- Division of General Internal Medicine, Raymond and Ruth Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, USA
178
Widyahening IS, van der Heijden GJMG, Moy FM, van der Graaf Y, Sastroasmoro S, Bulgiba A. From west to east; experience with adapting a curriculum in evidence-based medicine. PERSPECTIVES ON MEDICAL EDUCATION 2012; 1:249-261. [PMID: 23240103 PMCID: PMC3518799 DOI: 10.1007/s40037-012-0029-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
Clinical epidemiology (CE) and evidence-based medicine (EBM) have become an important part of medical school curricula. This report describes the implementation and some preliminary outcomes of an integrated CE and EBM module in the Faculty of Medicine Universitas Indonesia (UI), Jakarta and in the University of Malaya (UM) in Kuala Lumpur. A CE and EBM module, originally developed at the University Medical Center Utrecht (UMCU), was adapted for implementation in Jakarta and Kuala Lumpur. Before the start of the module, UI and UM staff completed a training-of-teachers (TOT) course. Student competencies were assessed through pre- and post-module multiple-choice knowledge tests, an oral and written structured evidence summary (evidence-based case report, EBCR) and a written exam. All students also filled in a module evaluation questionnaire. The TOT was well received by staff in Jakarta and Kuala Lumpur, and after adaptation the CE and EBM modules were integrated in both medical schools. The pre-test results of UI and UM were significantly lower than those of UMCU students (p < 0.001). The post-test results of UMCU students were comparable with those of UI (p = 0.48) but significantly different from those of UM (p < 0.001). Common problems for the modules in both UI and UM were limited access to literature and variability in the tutors' skills. Adoption and integration of an existing Western CE-EBM teaching module into Asian medical curricula is feasible, and the learning outcomes obtained are quite similar.
Affiliation(s)
- Indah S. Widyahening
- Department of Community Medicine, Faculty of Medicine, Universitas Indonesia, Jl. Pegangsaan Timur 16, Jakarta Pusat, 10430 Indonesia
- Department of Epidemiology, Division Julius Center for Health Sciences and Primary Care, University Medical Center, Utrecht, the Netherlands
- Geert J. M. G. van der Heijden
- Department of Epidemiology, Division Julius Center for Health Sciences and Primary Care, University Medical Center, Utrecht, the Netherlands
- Foong Ming Moy
- Department of Social and Preventive Medicine, Faculty of Medicine, Julius Centre University of Malaya, University of Malaya, Kuala Lumpur, Malaysia
- Yolanda van der Graaf
- Department of Epidemiology, Division Julius Center for Health Sciences and Primary Care, University Medical Center, Utrecht, the Netherlands
- Sudigdo Sastroasmoro
- Center for Clinical Epidemiology and Evidence-Based Medicine, Faculty of Medicine, Universitas Indonesia, Cipto Mangunkusumo Hospital, Jakarta, Indonesia
- Awang Bulgiba
- Department of Social and Preventive Medicine, Faculty of Medicine, Julius Centre University of Malaya, University of Malaya, Kuala Lumpur, Malaysia
179
Landau A, Reid W, Watson A, McKenzie C. Objective Structured Assessment of Technical Skill in assessing technical competence to carry out caesarean section with increasing seniority. Best Pract Res Clin Obstet Gynaecol 2012; 27:197-207. [PMID: 23062591 DOI: 10.1016/j.bpobgyn.2012.08.019] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2012] [Accepted: 08/29/2012] [Indexed: 11/29/2022]
Abstract
Since the incorporation of workplace-based assessment within the specialty training programme in obstetrics and gynaecology, the assessment of technical competence to carry out caesarean section has been undertaken with the Objective Structured Assessment of Technical Skill tool. This requirement has been formalised in the Matrix of Educational Progression, which requires the tool to assess trainees' technical competence in caesarean section procedures of varying levels of complexity throughout training. Trainee feedback suggests that the effectiveness of the tool diminishes as the seniority of the trainee increases, with technical competence assessed less effectively in more complex procedures. This seems to result from the generic design of the tool and insufficient training of assessors. Both issues are due to be addressed by dividing the Objective Structured Assessment of Technical Skill tool into explicitly formative and summative assessments of technical skill, following a General Medical Council-led consultation on the future of workplace-based assessment.
Affiliation(s)
- Alex Landau
- Royal College of Obstetricians and Gynaecologists, 27 Sussex Place, London NW1 4RG, London, UK.
180
Sweet LP, Glover P, McPhee T. The midwifery miniCEX--a valuable clinical assessment tool for midwifery education. Nurse Educ Pract 2012; 13:147-53. [PMID: 23044360 DOI: 10.1016/j.nepr.2012.08.015] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2011] [Revised: 06/05/2012] [Accepted: 08/29/2012] [Indexed: 11/28/2022]
Abstract
BACKGROUND Midwifery students, clinicians and educators in Australia identified the need for improved feedback, in particular formative assessment, for midwifery students whilst they are on clinical placement. The miniCEX, or mini-clinical evaluation exercise, is an approach to assessment that has proven valid and reliable in medical education. The aim of this research was to develop, implement and evaluate a miniCEX tool for midwifery education. METHODS Using an action research approach, this project engaged midwifery clinicians and midwifery students to adapt and implement the miniCEX in a postnatal ward environment. Focus groups were held to establish the clinical expectations of students across the domains of midwifery practice, to develop performance guidelines, and to evaluate their use in practice. FINDINGS Evaluation of the midwifery miniCEX, including its applicability from the perspective of staff and students, was positive. The miniCEX was found to be easy to use, time efficient and valuable for learning. DISCUSSION The miniCEX is an innovative approach to assessment and feedback in midwifery education; despite broad global use in medical education, there is currently no identified evidence of its use in midwifery education. CONCLUSION The implementation of the midwifery miniCEX offers broad benefit to midwifery students, clinicians and educators globally.
Affiliation(s)
- Linda P Sweet
- Flinders University Rural Clinical School, Adelaide, SA 5001, Australia.
181
Tromp F, Vernooij-Dassen M, Grol R, Kramer A, Bottema B. Assessment of CanMEDS roles in postgraduate training: the validation of the Compass. PATIENT EDUCATION AND COUNSELING 2012; 89:199-204. [PMID: 22796085 DOI: 10.1016/j.pec.2012.06.028] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/19/2011] [Revised: 06/14/2012] [Accepted: 06/25/2012] [Indexed: 05/25/2023]
Abstract
OBJECTIVE In medical education the focus has shifted from gaining knowledge to developing competencies. To effectively monitor performance in practice throughout the entire training, a new approach to assessment is needed. This study aimed to evaluate an instrument that monitors the development of competencies during postgraduate training in the setting of general practice training: the Competency Assessment List (Compass). METHODS The distribution of scores, reliability, validity, responsiveness and feasibility of the Compass were evaluated. RESULTS Scores on the Compass ranged from 1 to 9 on a 10-point scale, showing excellent internal consistency, ranging from .89 to .94. Most trainees showed improving ratings during training. Medium to large effect sizes (.31-1.41) were demonstrated when we compared mean scores of three consecutive periods. Content validity of the Compass was supported by the results of a qualitative study using the RAND modified Delphi method. The feasibility of the Compass was demonstrated. CONCLUSION The Compass is a competency-based instrument that reliably and validly shows trainees' progress towards the standard of performance. PRACTICE IMPLICATIONS The programmatic approach of the Compass could be applied in other specialties, provided that the instrument is tailored to the specific needs of that specialty.
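The effect sizes this abstract reports for consecutive training periods are typically computed as Cohen's d, the difference of group means over the pooled standard deviation. A minimal sketch; the ratings below are invented for illustration and do not reproduce the study's data:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: difference of means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * statistics.variance(group_a)
                  + (nb - 1) * statistics.variance(group_b)) / (na + nb - 2)
    return (statistics.mean(group_b) - statistics.mean(group_a)) / pooled_var ** 0.5

# Invented mean competency ratings for two consecutive training periods:
d = cohens_d([6.0, 6.5, 7.0, 7.5], [6.8, 7.2, 7.9, 8.1])
```

By the usual convention, d around 0.3 is a small-to-medium effect and d above 0.8 a large one, which is how the abstract's range of .31 to 1.41 is read.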
Affiliation(s)
- Fred Tromp
- Department of Primary and Community Care, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands.
182
Introduction of 360-degree assessment of residents' attitudes and behaviour. Zdr Varst 2012. [DOI: 10.2478/v10152-012-0026-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
INTRODUCTION Teaching and assessing professional attitudes and behaviour are as important as assessing theoretical knowledge and skills, yet they are rarely carried out. METHODS The resident distributes assessment forms, which the assessors complete, sign and send to the national coordinator; once ten assessments and a self-assessment have been received, the resident is invited to an appraisal interview. RESULTS Across 6 cycles of 360-degree assessment, 118 interviews were conducted. Most residents were rated well; in fewer than 10% of cases the rating was "I am concerned" or "I am seriously concerned". DISCUSSION Most residents were pleasantly surprised by the praise written by their colleagues. Eight residents were alerted to minor shortcomings in attitude and behaviour of which they had been unaware; by the next assessment these shortcomings had mostly been resolved. Residents rated "I am seriously concerned" were rare, but such ratings recurred for them; they were not self-critical and did not accept the feedback interview as well-intentioned. CONCLUSIONS Assessment of residents' attitudes and behaviour has been introduced into specialty training in obstetrics and gynaecology. Alongside the assessment, other tools fostering awareness of professionalism were introduced: faculty tutoring at the Faculty of Medicine in Ljubljana with self-reflection workshops, and train-the-trainer workshops. The emphasis on professionalism should be made more visible in the selection procedure for specialty training, in the introductory interview with the resident, in regular appraisal interviews with the main mentor, in greater communication between main and direct mentors, and in the assessment of mentors. Still awaiting introduction are a discussion of professionalism at admission to medical school and a professionalism course at medical schools and during specialty training.
184
Eva KW, Hodges BD. Scylla or Charybdis? Can we navigate between objectification and judgement in assessment? MEDICAL EDUCATION 2012; 46:914-9. [PMID: 22891912 DOI: 10.1111/j.1365-2923.2012.04310.x] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Affiliation(s)
- Kevin W Eva
- Centre for Health Education Scholarship, Department of Medicine, University of British Columbia, Vancouver, British Columbia, Canada.
185
Sagasser MH, Kramer AWM, van der Vleuten CPM. How do postgraduate GP trainees regulate their learning and what helps and hinders them? A qualitative study. BMC MEDICAL EDUCATION 2012; 12:67. [PMID: 22866981 PMCID: PMC3479408 DOI: 10.1186/1472-6920-12-67] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/19/2012] [Accepted: 06/26/2012] [Indexed: 05/11/2023]
Abstract
BACKGROUND Self-regulation is essential for professional development. It involves monitoring of performance, identifying domains for improvement, undertaking learning activities, applying newly learned knowledge and skills, and self-assessing performance. Since self-assessment alone is ineffective in identifying weaknesses, learners should seek external feedback too. Externally regulated educational interventions, such as reflection, learning portfolios, assessments and progress meetings, are increasingly used to scaffold self-regulation. The aim of this study was to explore how postgraduate trainees regulate their learning in the workplace, how external regulation promotes self-regulation, and which elements facilitate or impede self-regulation and learning. METHODS In a qualitative study with a phenomenological approach, we interviewed first- and third-year GP trainees from two universities in the Netherlands. Twenty-one verbatim transcripts were coded. Through iterative discussion the researchers agreed on the interpretation of the data, and saturation was reached. RESULTS Trainees used a short and a long self-regulation loop. The short loop took one week at most and was focused on problems that were easy to resolve and needed minor learning activities. The long loop was focused on complex or recurring problems needing multiple, planned longitudinal learning activities. External assessments and formal training affected the long but not the short loop. The supervisor had a facilitating role in both loops. Self-confidence was used to gauge competence. Elements influencing self-regulation were classified into three dimensions: personal (strong motivation to become a good doctor), interpersonal (stimulation from others) and contextual (organizational and educational features). CONCLUSIONS Trainees purposefully self-regulated their learning. Learning in the short loop may not be visible to others. Trainees should be encouraged to actively seek and use external feedback in both loops. An important question for further research is which educational interventions might be used to scaffold learning in the short loop. Investing in supervisor quality remains important, since supervisors are close to trainee learning in both loops.
Affiliation(s)
- Margaretha H Sagasser
- Department of Primary and Community Care, Radboud University Nijmegen Medical Centre, Radboud, The Netherlands
- Anneke WM Kramer
- Department of Primary and Community Care, Radboud University Nijmegen Medical Centre, Radboud, The Netherlands
- Cees PM van der Vleuten
- Department of Primary and Community Care, Radboud University Nijmegen Medical Centre, Radboud, The Netherlands
- Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands
- King Saud University, Riyadh, Saudi Arabia
- University of Copenhagen, Copenhagen, Denmark
186
Driessen EW, van Tartwijk J, Govaerts M, Teunissen P, van der Vleuten CPM. The use of programmatic assessment in the clinical workplace: a Maastricht case report. MEDICAL TEACHER 2012; 34:226-31. [PMID: 22364455 DOI: 10.3109/0142159x.2012.652242] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
Differences in workplace learning experiences pose challenges for assessment: the assessment programme should be aligned with the general competency framework of the curriculum while also fitting the varied learning contexts of the workplace. We used van der Vleuten's programmatic assessment model to develop a workplace-based assessment programme for final-year clerkships. We aimed to design a programme that stimulates learning, supports assessment decisions, and is feasible and non-bureaucratic. The first experiences with the programme show that students think the programme has high learning value and that the assessment is sufficiently robust. Many of the commonly reported weaknesses of work-based assessment (a poor fit with the educational context, too complex, too bureaucratic and too much work) were not mentioned by the students.
Affiliation(s)
- Erik W Driessen
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, The Netherlands.
|
187
|
van der Vleuten CPM, Schuwirth LWT, Driessen EW, Dijkstra J, Tigelaar D, Baartman LKJ, van Tartwijk J. A model for programmatic assessment fit for purpose. MEDICAL TEACHER 2012; 34:205-14. [PMID: 22364452 DOI: 10.3109/0142159x.2012.652239] [Citation(s) in RCA: 405] [Impact Index Per Article: 33.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
We propose a model for programmatic assessment in action, which simultaneously optimises assessment for learning and assessment for decision making about learner progress. This model is based on a set of assessment principles interpreted from empirical research. It specifies cycles of training, assessment and learner-support activities that are complemented by intermediate and final moments of evaluation on aggregated assessment data points. A key principle is that individual data points are maximised for learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data points. Expert judgement plays an important role in the programme. Fundamental is the notion of sampling and bias reduction to deal with the inevitable subjectivity of this type of judgement. Bias reduction is further sought in procedural assessment strategies derived from criteria for qualitative research. We discuss a number of challenges and opportunities around the proposed model. One of its prime virtues is that it enables assessment to move beyond the dominant psychometric discourse, with its focus on individual instruments, towards a systems approach to assessment design underpinned by empirically grounded theory.
Affiliation(s)
- C P M van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, The Netherlands.
|
188
|
Schuwirth LWT, van der Vleuten CPM. Programmatic assessment and Kane's validity perspective. MEDICAL EDUCATION 2012; 46:38-48. [PMID: 22150195 DOI: 10.1111/j.1365-2923.2011.04098.x] [Citation(s) in RCA: 95] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
CONTEXT Programmatic assessment is a notion that implies that the strength of the assessment process results from a careful combination of various assessment instruments. Accordingly, no single instrument is superior to another, but each has its own strengths, weaknesses and purpose in a programme. Yet, in terms of psychometric methods, a one-size-fits-all approach is often used. Kane's views on validity, as represented by a series of arguments, provide a useful framework from which to highlight the value of different widely used approaches to improve the quality and validity of assessment procedures. METHODS In this paper we discuss four inferences which form part of Kane's validity theory: from observations to scores; from scores to universe scores; from universe scores to target domain; and from target domain to construct. For each of these inferences, we provide examples and descriptions of approaches and arguments that may help to support the validity inference. CONCLUSIONS As well as standard psychometric methods, a programme of assessment makes use of various other arguments, such as: item review, quality control, structuring and examiner training; probabilistic methods, saturation approaches and judgement processes; and epidemiological methods, collation, triangulation and member-checking procedures. Each of these can be used in an assessment programme.
Affiliation(s)
- Lambert W T Schuwirth
- Flinders Innovation in Clinical Education, Flinders University, South Australia, Australia.
|
189
|
Review article: Teaching, learning, and the pursuit of excellence in anesthesia education. Can J Anaesth 2011; 59:171-81. [DOI: 10.1007/s12630-011-9636-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2011] [Accepted: 11/16/2011] [Indexed: 10/14/2022] Open
|
190
|
Whitehead CR, Austin Z, Hodges BD. Flower power: the armoured expert in the CanMEDS competency framework? ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2011; 16:681-94. [PMID: 21286808 DOI: 10.1007/s10459-011-9277-4] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/26/2010] [Accepted: 01/14/2011] [Indexed: 05/25/2023]
Abstract
Competency frameworks based on roles definitions are currently used extensively in health professions education internationally. One of the most successful and widely used models is the CanMEDS Roles Framework. The medical literature has raised questions about both the theoretical underpinnings and the practical application of outcomes-based frameworks; however, little empirical research has yet examined specific roles frameworks. This study examines the historical development of an important early roles framework, the Educating Future Physicians of Ontario (EFPO) roles, which were instrumental in the development of the CanMEDS roles. Prominent discourses related to roles development are examined using critical discourse analysis methodology. Exploration of the discourses that emerged in the development of this particular set of roles definitions highlights the contextual and negotiated nature of roles construction. The discourses of threat and protection prevalent in the EFPO roles development offer insight into the visual construction of a centre of medical expertise surrounded by supporting roles (such as collaborator and manager). Non-medical-expert roles may perhaps play the part of 'armour' for the authority of medical expertise under threat. This research suggests that it may not be accurate to consider roles as objective ideals. Effective training models may require explicit acknowledgement of the socially negotiated and contextual nature of roles definitions.
Affiliation(s)
- Cynthia R Whitehead
- Department of Family and Community Medicine, Faculty of Medicine, University of Toronto, ON, Canada.
|
191
|
Baartman LKJ, Prins FJ, Kirschner PA, van der Vleuten CPM. Self-evaluation of assessment programs: a cross-case analysis. EVALUATION AND PROGRAM PLANNING 2011; 34:206-216. [PMID: 21555044 DOI: 10.1016/j.evalprogplan.2011.03.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2010] [Revised: 01/21/2011] [Accepted: 03/01/2011] [Indexed: 05/30/2023]
Abstract
The goal of this article is to contribute to the validation of a self-evaluation method that schools can use to evaluate the quality of their Competence Assessment Program (CAP). The outcomes of the self-evaluations of two schools are systematically compared: a novice school with little experience in competence-based education and assessment, and an innovative school with extensive experience. The self-evaluation was based on 12 quality criteria for CAPs, including both validity and reliability, and criteria stressing the importance of the formative function of assessment, such as meaningfulness and educational consequences. In each school, teachers, management and the examination board participated. Results show that the two schools use different approaches to assure assessment quality. The innovative school seems to be more aware of its own strengths and weaknesses, to have a more positive attitude towards teachers, students, and educational innovations, and to explicitly involve stakeholders (i.e., teachers, students, and the work field) in its assessments. This school also had a more explicit vision of the goal of competence-based education and could design its assessments in accordance with these goals.
|
192
|
Sargeant J, Macleod T, Sinclair D, Power M. How do physicians assess their family physician colleagues' performance?: creating a rubric to inform assessment and feedback. THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS 2011; 31:87-94. [PMID: 21671274 DOI: 10.1002/chp.20111] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
INTRODUCTION The Colleges of Physicians and Surgeons of Alberta and Nova Scotia (CPSNS) use a standardized multisource feedback program, the Physician Achievement Review (PAR/NSPAR), to provide physicians with performance assessment data via questionnaires from medical colleagues, coworkers, and patients on 5 practice domains: consultation communication, patient interaction, professional self-management, clinical competence, and psychosocial management of patients. Physicians receive a confidential report; the intent is practice improvement. However, research indicates that feedback from medical colleagues appears to be less well understood than that from coworkers or patients, owing to a lack of specificity and concerns regarding feedback credibility. The purpose of this study was to determine how physicians make decisions about performance ratings for family physician (FP) colleagues in the 5 practice domains. METHODS This was an exploratory qualitative study using focus groups (one with 11 family physicians and one with 12 specialists) whose participants had served as NSPAR "medical colleague" reviewers. We analyzed focus group transcripts using content analysis. RESULTS Family and specialist physicians provided examples of behaviors indicative of both high- and low-scoring performance for items within the 5 practice domains. From these, an assessment rubric was created to inform both external reviewers and the physicians being reviewed of performance expectations. Reviewers reported using varied sources of information to make assessments, including shared patients, medical records, referral letters, feedback from others, and self-reference. DISCUSSION The CPSNS has used the assessment rubric to create an online resource to inform medical colleague assessment and enhance the usefulness of NSPAR scores. Further research will be required to determine its impact.
Affiliation(s)
- Joan Sargeant
- Research and Evaluation, Continuing Medical Education, Dalhousie University, Halifax.
|