1. Mokkink LB, Boers M, van der Vleuten CPM, Bouter LM, Alonso J, Patrick DL, de Vet HCW, Terwee CB. COSMIN Risk of Bias tool to assess the quality of studies on reliability or measurement error of outcome measurement instruments: a Delphi study. BMC Med Res Methodol 2020;20:293. [PMID: 33267819] [PMCID: PMC7712525] [DOI: 10.1186/s12874-020-01179-5]
Abstract
BACKGROUND Scores on an outcome measurement instrument depend on the type and settings of the instrument used, how instructions are given to patients, how professionals administer and score the instrument, and so on. The impact of all these sources of variation on scores can be assessed in studies on reliability and measurement error, if properly designed and analyzed. The aim of this study was to develop standards to assess the quality of studies on reliability and measurement error of clinician-reported outcome measurement instruments, performance-based outcome measurement instruments, and laboratory values. METHODS We conducted a 3-round Delphi study involving 52 panelists. RESULTS Consensus was reached on how a comprehensive research question can be deduced from the design of a reliability study to determine how the results of a study inform us about the quality of the outcome measurement instrument at issue. Consensus was reached on the components of outcome measurement instruments, i.e. the potential sources of variation. Next, we reached consensus on standards on design requirements (n = 5), standards on preferred statistical methods for reliability (n = 3) and measurement error (n = 2), and their ratings on a four-point scale. There was one term for a component and one rating of one standard on which no consensus was reached, which therefore required a decision by the steering committee. CONCLUSION We developed a tool that enables researchers with and without thorough knowledge of measurement properties to assess the quality of a study on reliability and measurement error of outcome measurement instruments.
Affiliation(s)
- L B Mokkink
- Department of Epidemiology and Data Science, Amsterdam Public Health research institute, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- M Boers
- Department of Epidemiology and Data Science, Amsterdam Public Health research institute, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Amsterdam Rheumatology and Immunology Center, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- C P M van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, School of Health Professions Education, Maastricht, The Netherlands
- L M Bouter
- Department of Epidemiology and Data Science, Amsterdam Public Health research institute, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; Department of Philosophy, Faculty of Humanities, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- J Alonso
- Health Services Research Unit, IMIM-Hospital del Mar Medical Research Institute; CIBER Epidemiología y Salud Pública (CIBERESP); Pompeu Fabra University (UPF), Barcelona, Spain
- D L Patrick
- Department of Health Services, University of Washington, Seattle, WA, USA
- H C W de Vet
- Department of Epidemiology and Data Science, Amsterdam Public Health research institute, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- C B Terwee
- Department of Epidemiology and Data Science, Amsterdam Public Health research institute, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
2. Berkhout JJ, Slootweg IA, Helmich E, Teunissen PW, van der Vleuten CPM, Jaarsma ADC. How characteristic routines of clinical departments influence students' self-regulated learning: A grounded theory study. Med Teach 2017;39:1174-1181. [PMID: 28784026] [DOI: 10.1080/0142159x.2017.1360472]
Abstract
BACKGROUND In clerkships, students are expected to self-regulate their learning. How clinical departments and their routine approach to clerkships influence students' self-regulated learning (SRL) is unknown. AIM This study explores how characteristic routines of clinical departments influence medical students' SRL. METHODS Six focus groups including 39 purposively sampled participants from one Dutch university were organized to study, from a constructivist paradigm and using grounded theory methodology, how characteristic routines of clinical departments influenced medical students' SRL. The focus groups were audio recorded, transcribed verbatim and analyzed iteratively using constant comparison and open, axial and interpretive coding. RESULTS Students described that clinical departments influenced their SRL through routines which affected the professional relationships they could engage in and their perception of a department's invested effort in them. Students' SRL in a clerkship can be supported by enabling them to engage others in their SRL and by having them feel that effort is invested in their learning. CONCLUSIONS Our study gives practical insight into how clinical departments influence students' SRL. Clinical departments can affect students' motivation to engage in SRL, the variety of SRL strategies that students can use, and how meaningful students perceive their SRL experiences to be.
Affiliation(s)
- J J Berkhout
- a Center for Evidence-Based Education, Academic Medical Center (AMC-UvA) , University of Amsterdam , Amsterdam , The Netherlands
| | - I A Slootweg
- a Center for Evidence-Based Education, Academic Medical Center (AMC-UvA) , University of Amsterdam , Amsterdam , The Netherlands
| | - E Helmich
- b Center for Research and Innovation in Medical Education , University Medical Center Groningen, University of Groningen , Groningen , The Netherlands
| | - P W Teunissen
- c Department of Obstetrics and Gynecology , VU University Medical Center, VU University Amsterdam , Amsterdam , The Netherlands
- d Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences , Maastricht University , Maastricht , The Netherlands
| | - C P M van der Vleuten
- d Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences , Maastricht University , Maastricht , The Netherlands
| | - A D C Jaarsma
- b Center for Research and Innovation in Medical Education , University Medical Center Groningen, University of Groningen , Groningen , The Netherlands
3. McGill DA, van der Vleuten CPM, Clarke MJ. Construct validation of judgement-based assessments of medical trainees' competency in the workplace using a "Kanesian" approach to validation. BMC Med Educ 2015;15:237. [PMID: 26715145] [PMCID: PMC4696206] [DOI: 10.1186/s12909-015-0520-1]
Abstract
BACKGROUND Evaluations of clinical assessments that use judgement-based methods have frequently shown them to have sub-optimal reliability and internal validity evidence for their interpretation and intended use. The aim of this study was to enhance that validity evidence by evaluating the internal validity and reliability of competency constructs from supervisors' end-of-term summative assessments of prevocational medical trainees. METHODS The populations were medical trainees preparing for full registration as a medical practitioner (n = 74) and supervisors who undertook ≥2 end-of-term summative assessments (n = 349) from a single institution. Confirmatory factor analysis was used to evaluate internal construct validity. The hypothesised competency construct model to be tested, identified by exploratory factor analysis, had a theoretical basis established in the workplace-psychology literature. Comparisons were made with competing models of potential competency constructs, including the competency construct model of the original assessment. The optimal model for the competency constructs was identified using model fit and measurement invariance analysis. Construct homogeneity was assessed by Cronbach's α. Reliability measures were the variance components of individual competency items and of the identified competency constructs, and the number of assessments needed to achieve adequate reliability (R > 0.80). RESULTS The hypothesised competency constructs of "general professional job performance", "clinical skills" and "professional abilities" provided a good model fit to the data, and a better fit than all alternative models (χ²/df = 2.8; RMSEA = 0.073, CI 0.057-0.088; CFI = 0.93; TLI = 0.95; SRMR = 0.039; WRMR = 0.93; AIC = 3879; BIC = 4018). The optimal model had adequate measurement invariance, with nested analysis of important population subgroups supporting full metric invariance.
Reliability estimates for the competency construct "general professional job performance" indicated a resource-efficient and reliable assessment for such a construct (6 assessments for R > 0.80). Item homogeneity was good (Cronbach's α = 0.899). The other competency constructs are resource intensive, requiring ≥11 assessments for a reliable assessment score. CONCLUSION Internal validity and reliability of clinical competence assessments using judgement-based methods are acceptable when the actual competency constructs used by assessors are adequately identified. Validation for interpretation and use of supervisors' assessments in local training schemes is feasible using standard methods for gathering validity evidence.
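The "number of assessments needed" figures in this abstract follow the standard variance-components logic: averaging n assessments shrinks the error variance by a factor of n (the Spearman-Brown idea). A minimal sketch of that arithmetic, with hypothetical variance shares as inputs and assuming assessments are independent and exchangeable (a simplification of the paper's actual model):

```python
import math

def reliability(trainee_var: float, error_var: float, n: int) -> float:
    """Reliability of the mean of n assessments: averaging divides
    the error variance by n (Spearman-Brown logic)."""
    return trainee_var / (trainee_var + error_var / n)

def n_needed(trainee_var: float, error_var: float, target: float = 0.80) -> int:
    """Smallest n with reliability >= target.
    Solving target = Vt / (Vt + Ve/n) gives n = target/(1-target) * Ve/Vt."""
    return math.ceil((target / (1 - target)) * (error_var / trainee_var))

# Hypothetical shares: 40% of single-score variance attributable to the trainee
print(n_needed(0.40, 0.60))                    # 6 assessments reach R >= 0.80
print(round(reliability(0.40, 0.60, 6), 2))    # 0.8
```

A construct with a smaller trainee-variance share (say 27%) needs about 11 assessments by the same formula, which is the pattern the abstract reports.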
Affiliation(s)
- D A McGill
- Department of Cardiology, The Canberra Hospital, Garran, ACT 2605, Australia
- C P M van der Vleuten
- Department of Educational Research and Development, Maastricht University, Maastricht, The Netherlands
- M J Clarke
- Clinical Trial Service Unit, University of Oxford, Oxford, UK
4. Embo M, Driessen E, Valcke M, van der Vleuten CPM. Integrating learning assessment and supervision in a competency framework for clinical workplace education. Nurse Educ Today 2015;35:341-346. [PMID: 25497139] [DOI: 10.1016/j.nedt.2014.11.022]
Abstract
Although competency-based education is well established in health care education, research shows that the competencies do not always match the reality of clinical workplaces. Therefore, there is a need to design feasible and evidence-based competency frameworks that fit the workplace reality. This theoretical paper outlines a competency-based framework, designed to facilitate learning, assessment and supervision in clinical workplace education. Integration is the cornerstone of this holistic competency framework.
Affiliation(s)
- M Embo
- Midwifery Department, University College Arteveldehogeschool Ghent, Voetweg 66, 9000 Ghent, Belgium
- E Driessen
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, P.O. Box 616, 6200 Maastricht, The Netherlands
- M Valcke
- Department of Educational Studies, Faculty of Psychology and Educational Sciences, Ghent University, H. Dunantlaan 2, 9000 Ghent, Belgium
- C P M van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, P.O. Box 616, 6200 Maastricht, The Netherlands
5.
Abstract
Educational practice and educational research are not aligned with each other. Current educational practice relies heavily on information transmission or content delivery to learners. Yet evidence shows that delivery is only a minor part of learning. To illustrate the directions we might take to find better educational strategies, six areas of educational evidence are briefly reviewed. The flipped classroom idea is proposed to shift our expenditure and focus in education. All information delivery could be web distributed, thus creating more time for other, more expensive educational strategies to support the learner. In research, our focus should shift from comparing one curriculum with another to research that explains why things work in education and under which conditions. This may generate ideas for creative designers to develop new educational strategies. These best practices should be shared and further researched. At the same time, attention should be paid to implementation and to the realization that teachers learn in a way very similar to the people they teach. If we take the evidence seriously, our educational practice will look quite different to the way it does now.
Affiliation(s)
- C P M van der Vleuten
- Department of Educational Development and Research, Maastricht University, PO Box 616, 6200 MD, Maastricht, the Netherlands
- E W Driessen
- Department of Educational Development and Research, Maastricht University, PO Box 616, 6200 MD, Maastricht, the Netherlands
6. Embo M, Driessen E, Valcke M, van der Vleuten CPM. A framework to facilitate self-directed learning, assessment and supervision in midwifery practice: a qualitative study of supervisors' perceptions. Nurse Educ Pract 2014;14:441-6. [PMID: 24780309] [DOI: 10.1016/j.nepr.2014.01.015]
Abstract
BACKGROUND Self-directed learning is an educational concept that has received increasing attention. The recent workplace literature, however, reports problems with the facilitation of self-directed learning in clinical practice. We developed the Midwifery Assessment and Feedback Instrument (MAFI) as a framework to facilitate self-directed learning. In the present study, we sought clinical supervisors' perceptions of the usefulness of the MAFI. METHODS Interviews with fifteen clinical supervisors were audio-taped, transcribed verbatim and analysed thematically using ATLAS.ti software for qualitative data analysis. RESULTS Four themes emerged from the analysis: (1) the competency-based educational structure promotes the setting of realistic learning outcomes and a focus on competency development; (2) instructing students to write reflections facilitates student-centred supervision; (3) creating a feedback culture is necessary to achieve continuity in supervision; and (4) integrating feedback and assessment might facilitate competency development, on the condition that evidence is discussed during assessment meetings. Supervisors stressed the need for direct observation and for instruction on how to facilitate a self-directed learning process. CONCLUSION The MAFI appears to be a useful framework to promote self-directed learning in clinical practice. The effect can be advanced by creating a feedback and assessment culture in which learners and supervisors share the responsibility for developing self-directed learning.
Affiliation(s)
- M Embo
- Midwifery Department, University College Arteveldehogeschool Ghent, Voetweg 66, 9000 Ghent, Belgium
- E Driessen
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, P.O. Box 616, 6200 Maastricht, The Netherlands
- M Valcke
- Department of Educational Studies, Faculty of Psychology and Educational Sciences, Ghent University, H. Dunantlaan 2, 9000 Ghent, Belgium
- C P M van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, P.O. Box 616, 6200 Maastricht, The Netherlands
7. Moonen-van Loon JMW, Overeem K, Donkers HHLM, van der Vleuten CPM, Driessen EW. Composite reliability of a workplace-based assessment toolbox for postgraduate medical education. Adv Health Sci Educ Theory Pract 2013;18:1087-102. [PMID: 23494202] [DOI: 10.1007/s10459-013-9450-z]
Abstract
In recent years, postgraduate assessment programmes around the world have embraced workplace-based assessment (WBA) and its related tools. Despite their widespread use, results of studies on the validity and reliability of these tools have been variable. Although in many countries decisions about residents' continuation of training and certification as a specialist are based on the composite results of different WBAs collected in a portfolio, to our knowledge, the reliability of such a WBA toolbox has never been investigated. Using generalisability theory, we analysed the separate and composite reliability of three WBA tools [mini-Clinical Evaluation Exercise (mini-CEX), direct observation of procedural skills (DOPS), and multisource feedback (MSF)] included in a resident portfolio. G-studies and D-studies of 12,779 WBAs from a total of 953 residents showed that a reliability coefficient of 0.80 was obtained for eight mini-CEXs, nine DOPS, and nine MSF rounds, whilst the same reliability was found for seven mini-CEXs, eight DOPS, and one MSF when combined in a portfolio. At the end of the first year of residency a portfolio with five mini-CEXs, six DOPS, and one MSF afforded reliable judgement. The results support the conclusion that several WBA tools combined in a portfolio can be a feasible and reliable method for high-stakes judgements.
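The portfolio idea can be sketched with a toy composite-reliability model: each tool contributes a mean score whose error variance shrinks with its number of observations, and the composite averages the tool means. This is an illustrative simplification (shared trainee variance, equal tool weights, independent errors), not the G- and D-study specification used in the paper, and all variance numbers are hypothetical:

```python
def composite_reliability(trainee_var, tool_errors, counts):
    """Reliability of a portfolio that averages k tool means.
    tool_errors[i] / counts[i]: error variance of a single assessment and
    number of assessments for tool i; the composite's error variance is
    (1/k^2) * sum(e_i / n_i) under independent errors."""
    k = len(tool_errors)
    err = sum(e / n for e, n in zip(tool_errors, counts)) / k**2
    return trainee_var / (trainee_var + err)

# One tool alone vs. three tools pooled (hypothetical variances)
print(round(composite_reliability(0.4, [0.6], [5]), 2))            # 0.77
print(round(composite_reliability(0.4, [0.6] * 3, [5, 6, 1]), 2))  # 0.81
```

The point the toy illustrates matches the abstract's conclusion: pooling several tools in a portfolio can cross the 0.80 threshold with fewer observations per tool than any single tool would need on its own.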
Affiliation(s)
- J M W Moonen-van Loon
- Department of Educational Research and Development, Faculty of Health, Medicine, and Life Sciences, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
8. McGill DA, van der Vleuten CPM, Clarke MJ. A critical evaluation of the validity and the reliability of global competency constructs for supervisor assessment of junior medical trainees. Adv Health Sci Educ Theory Pract 2013;18:701-725. [PMID: 23053869] [DOI: 10.1007/s10459-012-9410-z]
Abstract
Supervisor assessments are critical for both formative and summative assessment in the workplace. Supervisor ratings remain an important source of such assessment in many educational jurisdictions even though there is ambiguity about their validity and reliability. The aims of this evaluation are to explore: (1) the construct validity of ward-based supervisor competency assessments; (2) the reliability of supervisors for observing any overarching domain constructs identified (factors); (3) the stability of factors across subgroups of contexts, supervisors and trainees; and (4) the position of the observations compared to the established literature. The evaluated assessments were all those used to judge intern (trainee) suitability for unconditional registration as a medical practitioner in the Australian Capital Territory, Australia, in 2007-2008. Initial construct identification was by traditional exploratory factor analysis (EFA) using principal component analysis with Varimax rotation. Factor stability was explored by EFA of subgroups in different contexts, such as hospital type, and of different types of supervisors and trainees. The unit of analysis was each assessment, and all available assessments were included without aggregation of scores to obtain the factors. Reliability of the identified constructs was assessed by variance components analysis of the summed trainee scores for each factor, together with the number of assessments needed to provide an acceptably reliable assessment using the construct; the reliability unit of analysis was the score for each factor on every assessment.
For the 374 assessments from 74 trainees and 73 supervisors, the EFA resulted in 3 factors identified from the scree plot, accounting for only 68% of the variance: factor 1 had features of a "general professional job performance" competency (eigenvalue 7.630; variance 54.5%); factor 2, "clinical skills" (eigenvalue 1.036; variance 7.4%); and factor 3, "professional and personal" competency (eigenvalue 0.867; variance 6.2%). The trainee score variance for the summed competency item scores for factors 1, 2 and 3 was 40.4, 27.4 and 22.9% respectively. The number of assessments needed to give a reliability coefficient of 0.80 was 6, 11 and 13 respectively. The factor structure remained stable for subgroups of female trainees, Australian graduate trainees, the central hospital, surgeons, staff specialists, visiting medical officers, and the separation into single years. Physicians as supervisors, male trainees and male supervisors each showed a different grouping of items within 3 factors, all of which had competency items that collapsed into the predefined "face value" constructs of competence. These observations add new insights compared to the established literature. For this setting, most supervisors appear to be assessing a dominant construct domain similar to a general professional job performance competency. This global construct consists of individual competency items that supervisors spontaneously align, and it has acceptable assessment reliability. However, factor structure instability between different populations of supervisors and trainees means that subpopulations of trainees may be assessed differently, and that some subpopulations of supervisors are assessing the same trainees with different constructs than other supervisors. The lack of competency criterion standardisation of supervisors' assessments brings into question the validity of this assessment method as currently used.
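The eigenvalue and percent-variance figures reported above are the standard PCA quantities: each eigenvalue of the item correlation matrix, divided by the trace (the number of items), gives that component's share of total variance. A sketch on a synthetic one-general-factor correlation matrix, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
loadings = 0.7 + 0.1 * rng.random(14)   # 14 items, one dominant factor
corr = np.outer(loadings, loadings)     # correlations implied by the factor
np.fill_diagonal(corr, 1.0)             # unit variances on the diagonal

eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues, descending
pct = 100 * eigvals / eigvals.sum()        # % of total variance per component
print(round(float(eigvals[0]), 1), round(float(pct[0]), 1))
```

With one strong general factor, the first eigenvalue dominates and its variance share is large, which is the shape of the result in the abstract (eigenvalue 7.630, 54.5% for factor 1).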
Affiliation(s)
- D A McGill
- Department of Cardiology, The Canberra Hospital, Garran, ACT 2605, Australia
9. Pelgrim EAM, Kramer AWM, Mokkink HGA, van der Vleuten CPM. Reflection as a component of formative assessment appears to be instrumental in promoting the use of feedback; an observational study. Med Teach 2013;35:772-8. [PMID: 23808841] [DOI: 10.3109/0142159x.2013.801939]
Abstract
BACKGROUND Although the literature suggests that reflection has a positive impact on learning, there is a paucity of evidence to support this notion. AIM We investigated feedback and reflection in relation to the likelihood that feedback will be used to inform action plans. We hypothesised that feedback and reflection present a cumulative sequence (i.e. trainers only pay attention to trainees' reflections when they have provided specific feedback) and we hypothesised a supplementary effect of reflection. METHOD We analysed copies of assessment forms containing trainees' reflections and trainers' feedback on observed clinical performance. We determined whether the response patterns revealed cumulative sequences in line with the Guttman scale. We further examined the relationship between reflection, feedback and the mean number of specific comments related to an action plan (ANOVA), and we calculated two effect sizes. RESULTS Both hypotheses were confirmed. The response pattern showed an almost perfect fit with the Guttman scale (0.99), and reflection appears to have a supplementary effect on the action plan variable. CONCLUSIONS Reflection only occurs when a trainer has provided specific feedback; trainees who reflect on their performance are more likely to make use of feedback. These results confirm findings and suggestions reported in the literature.
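The cumulative pattern tested here is the classic Guttman structure: treating each form as two dichotomous items (specific feedback given; reflection present), only the patterns (0,0), (1,0) and (1,1) conform, and the coefficient of reproducibility is 1 minus the proportion of non-scale responses. A minimal sketch with hypothetical form counts (the 0.99 fit quoted above is the paper's figure, not this toy's):

```python
def reproducibility(patterns):
    """Guttman coefficient of reproducibility for a two-item scale:
    1 - (number of scale errors) / (total responses).
    A pattern (feedback, reflection) errs when reflection occurs
    without specific feedback, i.e. (0, 1)."""
    errors = sum(1 for f, r in patterns if r == 1 and f == 0)
    return 1 - errors / (2 * len(patterns))

# Hypothetical sample of 100 assessment forms, one violating pattern
forms = [(1, 1)] * 40 + [(1, 0)] * 30 + [(0, 0)] * 29 + [(0, 1)]
print(reproducibility(forms))   # 0.995
```

A coefficient near 1 indicates that the responses form an almost perfectly cumulative sequence; values above roughly 0.90 are conventionally taken as an acceptable Guttman scale.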
Affiliation(s)
- E A M Pelgrim
- Radboud University Nijmegen Medical Centre, the Netherlands.
10. van Tartwijk J, Driessen EW, van der Vleuten CPM, Wubbels T. [Educational science, 'the hardest science of all']. Ned Tijdschr Tandheelkd 2012;119:302-305. [PMID: 22812268] [DOI: 10.5177/ntvt.2012.06.12103]
Abstract
Educational research has shown not only that student characteristics are of major importance for study success, but also that education does make a difference. Essentially, teaching is about stimulating students to invest time in learning and to use that time as effectively as possible. Assessment, goal-oriented work, and feedback have a major effect. The teacher is the key figure. With the aim of better understanding teaching and learning, educational researchers increasingly use findings from other disciplines. A pitfall is to apply the findings of educational research without taking into consideration the context and the specific characteristics of students and teachers. Because of the large number of factors that influence the results of education, educational science is referred to as 'the hardest science of all'.
Affiliation(s)
- J van Tartwijk
- Faculteit Sociale Wetenschappen, Centrum voor Onderwijs en Leren, Universiteit Utrecht
11.
Affiliation(s)
- C P M van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands.
12. van der Vleuten CPM, Schuwirth LWT, Driessen EW, Dijkstra J, Tigelaar D, Baartman LKJ, van Tartwijk J. A model for programmatic assessment fit for purpose. Med Teach 2012;34:205-14. [PMID: 22364452] [DOI: 10.3109/0142159x.2012.652239]
Abstract
We propose a model for programmatic assessment in action, which simultaneously optimises assessment for learning and assessment for decision making about learner progress. This model is based on a set of assessment principles interpreted from empirical research. It specifies cycles of training, assessment and learner support activities that are complemented by intermediate and final moments of evaluation on aggregated assessment data points. A key principle is that individual data points are maximised for learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data points. Expert judgement plays an important role in the programme. Fundamental is the notion of sampling and bias reduction to deal with the inevitable subjectivity of this type of judgement. Bias reduction is further sought in procedural assessment strategies derived from criteria for qualitative research. We discuss a number of challenges and opportunities around the proposed model. One of its prime virtues is that it enables assessment to move beyond the dominant psychometric discourse, with its focus on individual instruments, towards a systems approach to assessment design underpinned by empirically grounded theory.
Affiliation(s)
- C P M van der Vleuten
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, The Netherlands.
13. van der Zwet J, Zwietering PJ, Teunissen PW, van der Vleuten CPM, Scherpbier AJJA. Workplace learning from a socio-cultural perspective: creating developmental space during the general practice clerkship. Adv Health Sci Educ Theory Pract 2011;16:359-73. [PMID: 21188514] [PMCID: PMC3139899] [DOI: 10.1007/s10459-010-9268-x]
Abstract
Workplace learning in undergraduate medical education has predominantly been studied from a cognitive perspective, despite its complex contextual characteristics, which influence medical students' learning experiences in such a way that explanation in terms of knowledge, skills, attitudes and single determinants of instructiveness is unlikely to suffice. There is also a paucity of research which, from a perspective other than the cognitive or descriptive one, investigates student learning in general practice settings, which are often characterised as powerful learning environments. In this study we took a socio-cultural perspective to clarify how students learn during a general practice clerkship and to construct a conceptual framework that captures this type of learning. Our analysis of group interviews with 44 fifth-year undergraduate medical students about their learning experiences in general practice showed that students needed developmental space to be able to learn and develop their professional identity. This space results from the intertwinement of workplace context, personal and professional interactions and emotions such as feeling respected and self-confident. These forces framed students' participation in patient consultations, conversations with supervisors about consultations and students' observation of supervisors, thereby determining the opportunities afforded to students to mind their learning. These findings resonate with other conceptual frameworks and learning theories. In order to refine our interpretation, we recommend that further research from a socio-cultural perspective should also explore other aspects of workplace learning in medical education.
Affiliation(s)
- J van der Zwet
- Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, Maastricht University, PO Box 616, 6200 MD, Maastricht, The Netherlands.
14. McGill DA, van der Vleuten CPM, Clarke MJ. Supervisor assessment of clinical and professional competence of medical trainees: a reliability study using workplace data and a focused analytical literature review. Adv Health Sci Educ Theory Pract 2011;16:405-425. [PMID: 21607744] [DOI: 10.1007/s10459-011-9296-1]
Abstract
Even though rater-based judgements of clinical competence are widely used, they are context-sensitive and vary between individuals and institutions. To deal adequately with rater-judgement unreliability, it is essential to evaluate the reliability of workplace rater-based assessments in the local context. Using such an approach, the primary aims of this study were to identify the trainee score variation around supervisor ratings, to identify the number of workplace assessments needed for certification of competence, and to position the findings within the known literature. This reliability study of workplace-based supervisors' assessments of trainees has a rater-nested-within-trainee design. The score variation attributable to the trainee for each competency item assessed (the variance component) was estimated by the minimum-norm quadratic unbiased estimator. Score variance was used to estimate the number of assessments needed for a reliability of 0.80. The trainee score variance for each of 14 competency items varied between 2.3% for emergency skills and 35.6% for communication skills, with an average across all competency items of 20.3%; the trainee variance for the "Overall rating" item was 28.8%. These variance components translated into 169, 7, 17 and 28 assessments needed for a reliability of 0.80, respectively. Most variation in assessment scores was due to measurement error, ranging from 97.7% for emergency skills to 63.4% for communication skills. Similar results have been demonstrated in previously published studies. In summary, supervisors' workplace-based assessments have poor overall reliability and are not suitable for use in certification processes in their current form. The marked variation in supervisors' reliability in assessing different competencies indicates that they may be able to assess some with acceptable reproducibility; in this case communication and possibly overall competence. However, any continued use of this format for assessing trainee competencies requires identifying what supervisors in different institutions can reliably assess, rather than continuing to impose false expectations on unreliable assessments.
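The sample-size figures in this abstract follow from a standard decision-study (generalizability-theory) projection: the reliability of the mean of n ratings rises as error variance is divided by n. A minimal sketch of that arithmetic, assuming all non-trainee variance is measurement error (the counts it produces can differ by one from the published figures depending on rounding conventions):

```python
import math

def reliability(p_true: float, n: int) -> float:
    """Reliability of the mean of n ratings, given the proportion of
    score variance attributable to the trainee (p_true); the rest is
    treated as measurement error."""
    p_err = 1.0 - p_true
    return p_true / (p_true + p_err / n)

def n_needed(p_true: float, target: float = 0.80) -> int:
    """Smallest number of ratings whose mean reaches the target
    reliability (decision-study projection)."""
    p_err = 1.0 - p_true
    return math.ceil(target / (1.0 - target) * p_err / p_true)

# Illustrative values from the abstract: trainee variance of 2.3%
# (emergency skills) versus 35.6% (communication skills).
print(n_needed(0.023))   # 170
print(n_needed(0.356))   # 8
```

The small trainee variance for emergency skills is what drives the impractically large sampling requirement reported in the study.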
Affiliation(s)
- D A McGill
- Department of Cardiology, The Canberra Hospital, Garran, ACT 2605, Australia.
15
Prescott-Clements LE, van der Vleuten CPM, Schuwirth L, Gibb E, Hurst Y, Rennie JS. Measuring the development of insight by dental health professionals in training using workplace-based assessment. Eur J Dent Educ 2011; 15:159-164. [PMID: 21762320] [DOI: 10.1111/j.1600-0579.2010.00650.x]
Abstract
INTRODUCTION For health professionals, the development of insight into their performance is vital for safe practice, professional development and self-regulation. This study investigates whether the development of dental trainees' insight, when provided with external feedback on performance, can be assessed using a single criterion on a simple global ratings form such as the Longitudinal Evaluation of Performance or Mini Clinical Evaluation Exercise. METHODS Postgraduate dental trainees (N = 139) were assessed using this tool on a weekly basis for 6 months. Regression analysis of the data was carried out using SPSS, and a short trainer questionnaire was implemented to investigate feasibility. RESULTS Ratings for insight were shown to increase with time in a similar manner to the growth observed in other essential skills. The gradient of the slope for growth of insight was slightly less than that of the other observed skills. Trainers were mostly positive about the new criterion assessing trainees' insight, although the importance of training for trainers in this process was highlighted. DISCUSSION Our data suggest that practitioners' insight into their performance can be developed with experience and regular feedback. However, this is most likely a complex skill dependent on a number of intrinsic and external factors. CONCLUSION The development of trainees' insight into their performance can be assessed using a single criterion on a simple global ratings form. The process involves no additional burden on evaluators in terms of their time or cost, and promotes best practice in the provision of feedback for trainees.
16
Pelgrim EAM, Kramer AWM, Mokkink HGA, van den Elsen L, Grol RPTM, van der Vleuten CPM. In-training assessment using direct observation of single-patient encounters: a literature review. Adv Health Sci Educ Theory Pract 2011; 16:131-142. [PMID: 20559868] [PMCID: PMC3074070] [DOI: 10.1007/s10459-010-9235-6]
Abstract
We reviewed the literature on instruments for work-based assessment in single clinical encounters, such as the mini-clinical evaluation exercise (mini-CEX), and examined differences between these instruments in characteristics and feasibility, reliability, validity and educational effect. A PubMed search of the literature published before 8 January 2009 yielded 39 articles dealing with 18 different assessment instruments. One researcher extracted data on the characteristics of the instruments and two researchers extracted data on feasibility, reliability, validity and educational effect. Instruments are predominantly formative. Feasibility is generally deemed good and assessor training occurs sparsely but is considered crucial for successful implementation. Acceptable reliability can be achieved with 10 encounters. The validity of many instruments is not investigated, but the validity of the mini-CEX and the 'clinical evaluation exercise' is supported by strong and significant correlations with other valid assessment instruments. The evidence from the few studies on educational effects is not very convincing. The reports on clinical assessment instruments for single work-based encounters are generally positive, but supporting evidence is sparse. Feasibility of instruments seems to be good and reliability requires a minimum of 10 encounters, but no clear conclusions emerge on other aspects. Studies on assessor and learner training and studies examining effects beyond 'happiness data' are badly needed.
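The "10 encounters" finding reflects the Spearman-Brown prophecy routinely used in this reliability literature. A brief sketch, assuming parallel encounters; the single-encounter reliability below is inferred from the 10-encounter figure, not reported in the review itself:

```python
def spearman_brown(r1: float, n: float) -> float:
    """Projected reliability of a composite of n parallel encounters,
    given single-encounter reliability r1."""
    return n * r1 / (1.0 + (n - 1.0) * r1)

def implied_single(r_n: float, n: float) -> float:
    """Invert Spearman-Brown: the single-encounter reliability implied
    by a composite reliability r_n over n encounters."""
    return r_n / (n - (n - 1.0) * r_n)

# If 10 observed encounters yield a reliability of 0.80, a single
# mini-CEX-style encounter has an implied reliability of about 0.29.
r1 = implied_single(0.80, 10)
print(round(r1, 3))                       # 0.286
print(round(spearman_brown(r1, 10), 3))   # 0.8
```

This is why single-encounter judgements are treated as unusable on their own while aggregation across encounters (and assessors) can reach defensible reliability.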
Affiliation(s)
- E A M Pelgrim
- Department of Primary Care and Community Care, Radboud University Nijmegen Medical Centre, Postbus 9101, Huispostnummer, 6500 HB, Nijmegen, The Netherlands.
17
Perron NJ, Sommer J, Hudelson P, Demaurex F, Luthy C, Louis-Simonet M, Nendaz M, De Grave W, Dolmans D, van der Vleuten CPM. Clinical supervisors' perceived needs for teaching communication skills in clinical practice. Med Teach 2009; 31:e316-e322. [PMID: 19811140] [DOI: 10.1080/01421590802650134]
Abstract
BACKGROUND Lack of faculty training is often cited as the main obstacle to post-graduate teaching in communication skills. AIMS To explore clinical supervisors' needs and perceptions regarding their role as communication skills trainers. METHODS Four focus group discussions were conducted with clinical supervisors from two in-patient and one out-patient medical services at the Geneva University Hospitals. Focus groups were audiotaped, transcribed verbatim and analyzed thematically using Maxqda software for qualitative data analysis. RESULTS Clinical supervisors said that they frequently addressed communication issues with residents but tended to intervene as rescuers, clinicians or coaches rather than as formal instructors. They felt their own training did not prepare them to teach communication skills. Other barriers to teaching communication skills included lack of time, competing demands, residents' lack of interest and experience, and the low institutional priority given to communication issues. Respondents expressed a desire for experiential and reflective training in a work-based setting and emphasised the need for a non-judgmental learning atmosphere. CONCLUSIONS Results suggest that organisational priorities, culture and climate strongly influence the degree to which clinical supervisors feel comfortable teaching communication skills to residents. Attention must be given to these contextual factors in the development of an effective communication skills teaching program for clinical supervisors.
18
Duvivier RJ, van Dalen J, van der Vleuten CPM, Scherpbier AJJA. Teacher perceptions of desired qualities, competencies and strategies for clinical skills teachers. Med Teach 2009; 31:634-641. [PMID: 19513926] [DOI: 10.1080/01421590802578228]
Abstract
INTRODUCTION Clinical skills centres (or skillslabs) prepare students for patient encounters, but evidence on teaching skills in these centres is lacking. Which teaching skills do teachers view as effective in supporting the acquisition of physical examination skills in undergraduate medical training? METHOD Structured interviews were conducted with 10 teachers (one-third of the staff of the Maastricht University Skillslab). Selection was based on even representation of age, years of teaching experience, gender and previous experience at Maastricht University. A topic grid was used to ensure comparability. Interviews (average 70 min, range 33-95 min) were recorded and the transcripts were analyzed independently by two researchers. RESULTS Teaching skills identified include the ability to adapt the content of the training, the level of depth and the teaching method to the needs of any particular group. Thorough comprehension of students' context (level of knowledge, prior experience and insight into the curriculum) is considered helpful. Explicitly inviting students to ask questions and providing relevant literature is seen to stimulate learning. Providing constructive feedback is essential, as is linking physical examination skills training to clinical situations. The ideal attitude includes appropriate dress and behaviour, as well as the use of humour. Affinity for teaching is regarded as the most important reason to work as a teacher. CONCLUSION Desired characteristics for undergraduate skills teachers resemble findings for other teaching roles. Affinity for teaching and flexibility in teaching methods are novel findings.
Affiliation(s)
- R J Duvivier
- Faculty of Health, Medicine and Life Sciences, Skillslab, Maastricht University, PO Box 616, 6200 MD Maastricht, The Netherlands.
19

20
Teunissen PW, Stapel DA, Scheele F, Scherpbier AJJA, Boor K, van Diemen-Steenvoorde JAAM, van der Vleuten CPM. The influence of context on residents' evaluations: effects of priming on clinical judgment and affect. Adv Health Sci Educ Theory Pract 2009; 14:23-41. [PMID: 17940843] [DOI: 10.1007/s10459-007-9082-2]
Abstract
Different lines of research have suggested that context is important in acting and learning in the clinical workplace. It is not clear how contextual information influences residents' constructions of the situations in which they participate. The category accessibility paradigm from social psychology appears to offer an interesting perspective for studying this topic. We explored the effect of activating medically irrelevant mental concepts in one context, so-called 'priming', on residents' interpretations as reflected in their judgments in another, work-related context. Obstetric-gynecologic residents participated in two unrelated-tasks experiments. In the first experiment residents were asked to indicate affect about a change in a routine procedure after performing an ostensibly unrelated 'priming' task which activated the concept of either ineffective coping or effective coping. The second experiment concerned residents' patient management decisions in a menorrhagia case after 'priming' with either action or holding off. Contextually activated mental concepts lead to divergent affective and cognitive evaluations in a subsequent medical context. Residents are not aware of this effect. The strength of the effect varies with residents' level of experience. Context influences residents' constructions of a work-related situation by activating mental concepts which in turn affect how residents experience situations. Level of experience appears to play a mediating role in this process.
Affiliation(s)
- P W Teunissen
- Vu University Medical Center, Amsterdam, The Netherlands.
21
Burch VC, Norman GR, Schmidt HG, van der Vleuten CPM. Are specialist certification examinations a reliable measure of physician competence? Adv Health Sci Educ Theory Pract 2008; 13:521-533. [PMID: 17476579] [DOI: 10.1007/s10459-007-9063-5]
Abstract
High stakes postgraduate specialist certification examinations have considerable implications for the future careers of examinees. Medical colleges and professional boards have a social and professional responsibility to ensure their fitness for purpose. To date there is a paucity of published data about the reliability of specialist certification examinations and objective methods for improvement. Such data are needed to improve current assessment practices and sustain the international credibility of specialist certification processes. To determine the component and composite reliability of the Fellowship examination of the College of Physicians of South Africa, and identify strategies for further improvement, generalizability and multivariate generalizability theory were used to estimate the reliability of examination subcomponents and the overall reliability of the composite examination. Decision studies were used to identify strategies for improving the composition of the examination. Reliability coefficients of the component subtests ranged from 0.58 to 0.64. The composite reliability of the examination was 0.72. This could be increased to 0.8 by weighting all test components equally or increasing the number of patient encounters in the clinical component of the examination. Correlations between examination components were high, suggesting that similar parameters of competence were being assessed. This composite certification examination, if equally weighted, achieved an overall reliability sufficient for high stakes examination purposes. Increasing the weighting of the clinical component decreased the reliability. This could be rectified by increasing the number of patient encounters in the examination. Practical ways of achieving this are suggested.
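Composite reliability of a weighted examination can be computed with Mosier's classical formula from the component weights, standard deviations, reliabilities and intercorrelations. A sketch with purely hypothetical values (assumptions for illustration, not this study's data) showing how equally weighting moderately reliable, correlated components can lift the composite toward 0.80:

```python
import numpy as np

def composite_reliability(w, sd, rel, corr):
    """Mosier's formula for the reliability of a weighted composite.
    w: component weights; sd: component SDs; rel: component
    reliabilities; corr: between-component correlation matrix.
    Off-diagonal covariance is treated as true-score covariance."""
    w, sd, rel = map(np.asarray, (w, sd, rel))
    cov = np.asarray(corr) * np.outer(sd, sd)          # observed covariances
    off = np.sum(np.outer(w, w) * cov) - np.sum(w**2 * sd**2)
    true_var = np.sum(w**2 * sd**2 * rel) + off
    total_var = np.sum(w**2 * sd**2) + off
    return true_var / total_var

# Hypothetical equally weighted three-component exam: each component
# reliability 0.60, pairwise correlations 0.50 (illustrative only).
corr = np.full((3, 3), 0.5) + 0.5 * np.eye(3)
rho = composite_reliability([1, 1, 1], [1, 1, 1], [0.6, 0.6, 0.6], corr)
print(round(rho, 2))  # 0.8
```

The composite (0.80 here) exceeds each component (0.60) because correlated components contribute shared true-score variance, which mirrors the abstract's finding that equal weighting raised the 0.72 composite to 0.80.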
Affiliation(s)
- V C Burch
- Department of Medicine, University of Cape Town, Groote Schuur Hospital, Observatory, Cape Town, South Africa.
22
Singaram VS, Dolmans DHJM, Lachman N, van der Vleuten CPM. Perceptions of problem-based learning (PBL) group effectiveness in a socially-culturally diverse medical student population. Educ Health (Abingdon) 2008; 21:116. [PMID: 19039743]
Abstract
INTRODUCTION A key aspect of the success of a PBL curriculum is the effective implementation of its small group tutorials. Diversity among students participating in tutorials may affect the effectiveness of the tutorials and may require different implementation strategies. AIMS To determine how students from diverse backgrounds perceive the effectiveness of the processes and content of the PBL tutorials. This study also aims to explore the relationship between students' perceptions of their PBL tutorials and their gender, age, language, prior educational training, and secondary schooling. MATERIALS/METHODS Data were survey results from 244 first-year student-respondents at the Nelson Mandela School of Medicine at the University of KwaZulu-Natal in South Africa. Exploratory factor analysis was conducted to verify scale constructs in the questionnaire. Relationships between independent and dependent variables were investigated in an analysis of variance. RESULTS The average scores for the items measured varied between 3.3 and 3.8 (scale value 1 indicated negative regard and 5 indicated positive regard). Among process measures, approximately two-thirds of students felt that learning in a group was neither frustrating nor stressful and that they enjoyed learning how to work with students from different social and cultural backgrounds. Among content measures, 80% of the students felt that they learned to work successfully with students from different social and cultural groups and 77% felt that they benefited from the input of other group members. Mean ratings on these measures did not vary with students' gender, age, first language, prior educational training, and the types of schools they had previously attended. DISCUSSION AND CONCLUSION Medical students of the University of KwaZulu-Natal, regardless of their backgrounds, generally have positive perceptions of small group learning. These findings support previous studies in highlighting the role that small group tutorials can play in overcoming cultural barriers and promoting unity and collaborative learning within diverse student groups.
Affiliation(s)
- V S Singaram
- Nelson R Mandela School of Medicine, School of Undergraduate Medical Education, University of KwaZulu Natal, Durban, South Africa.
23
Teunissen PW, Boor K, Scherpbier AJJA, van der Vleuten CPM, van Diemen-Steenvoorde JAAM, van Luijk SJ, Scheele F. Attending doctors' perspectives on how residents learn. Med Educ 2007; 41:1050-1058. [PMID: 17973765] [DOI: 10.1111/j.1365-2923.2007.02858.x]
Abstract
CONTEXT Graduate medical education is currently facing major educational reforms. There is a lack of empirical evidence in the literature about the learning processes of residents in the clinical workplace. This qualitative study uses a 'grounded theory' approach to continue the development of a theoretical framework of learning in the clinical workplace by adding the perspective of attending doctors. METHODS A total of 21 Dutch attending doctors involved in the training of residents in obstetrics and gynaecology participated in 1 of 3 focus group sessions. They discussed their perceptions of how residents learn and what factors influence residents' learning. A grounded theory approach was used to analyse the transcribed discussions. RESULTS Three related themes emerged. The first concerned the central role of participation in work-related activities: according to attending doctors, residents learn by tackling the everyday challenges of clinical work. The second involved the ways in which attending doctors influence what residents learn from work-related activities. The final theme focused on attending doctors' views of the essential characteristics of residents and their development during residency. CONCLUSIONS Attending doctors' perspectives complement current insights derived from similar research among residents and from related literature. As part of an ongoing effort to further develop understanding of how residents learn, this study adds several ways in which attending doctors strive to combine guidance in both patient care and resident training. Furthermore, attending doctors' perspectives draw attention to other aspects of learning in the clinical workplace, such as the role of confidence and the balance between supervision and independence.
Affiliation(s)
- P W Teunissen
- Institute for Medical Education, VU University Medical Center, Amsterdam, The Netherlands.
24
Teunissen PW, Scheele F, Scherpbier AJJA, van der Vleuten CPM, Boor K, van Luijk SJ, van Diemen-Steenvoorde JAAM. How residents learn: qualitative evidence for the pivotal role of clinical activities. Med Educ 2007; 41:763-770. [PMID: 17661884] [DOI: 10.1111/j.1365-2923.2007.02778.x]
Abstract
OBJECTIVES Medical councils worldwide have outlined new standards for postgraduate medical education. This means that residency programmes will have to integrate modern educational views into the clinical workplace. Postgraduate medical education is often characterised as a process of learning from experience. However, empirical evidence regarding the learning processes of residents in the clinical workplace is lacking. This qualitative study sought insight into the intricate process of how residents learn in the clinical workplace. METHODS We carried out a qualitative study using focus groups. A grounded theory approach was used to analyse the transcribed tape recordings. A total of 51 obstetrics and gynaecology residents from teaching hospitals and affiliated general hospitals participated in 7 focus group discussions. Participants discussed how they learn and what factors influence their learning. RESULTS An underlying theoretical framework emerged from the data, which clarified what happens when residents learn by doing in the clinical workplace. This framework shows that work-related activities are the starting point for learning. The subsequent processes of 'interpretation' and 'construction of meaning' lead to refinement and expansion of residents' knowledge and skills. Interaction plays an important role in the learning process. This is in line with both cognitivist and sociocultural views on learning. CONCLUSIONS The presented theoretical framework of residents' learning provides much needed empirical evidence for the actual learning processes of residents in the clinical workplace. The insights it offers can be used to exploit the full educational potential of the clinical workplace.
Affiliation(s)
- P W Teunissen
- Institute for Medical Education, Free University Medical Centre, Amsterdam, The Netherlands.
25
Boor K, Scheele F, van der Vleuten CPM, Scherpbier AJJA, Teunissen PW, Sijtsma K. Psychometric properties of an instrument to measure the clinical learning environment. Med Educ 2007; 41:92-99. [PMID: 17209897] [DOI: 10.1111/j.1365-2929.2006.02651.x]
Abstract
OBJECTIVES The clinical learning environment is an influential factor in work-based learning. Evaluation of this environment gives insight into the educational functioning of clinical departments. The Postgraduate Hospital Educational Environment Measure (PHEEM) is an evaluation tool consisting of a validated questionnaire with 3 subscales. In this paper we further investigate the psychometric properties of the PHEEM. We set out to validate the 3 subscales and test the reliability of the PHEEM for both clerks (clinical medical students) and registrars (specialists in training). METHODS Clerks and registrars from different hospitals and specialties filled out the PHEEM. To investigate the construct validity of the 3 subscales, we used an exploratory factor analysis followed by varimax rotation, and a cluster analysis known as Mokken scale analysis. We estimated the reliability of the questionnaire by means of variance components according to generalisability theory. RESULTS A total of 256 clerks and 339 registrars filled out the questionnaire. The exploratory factor analysis plus varimax rotation suggested a 1-dimensional scale. The Mokken scale analysis confirmed this result. The reliability analysis showed a reliable outcome for 1 department with 14 clerks or 11 registrars. For multiple departments 3 respondents combined with 10 departments provide a reliable outcome for both groups. DISCUSSION The PHEEM is a questionnaire measuring 1 dimension instead of the hypothesised 3 dimensions. The sample size required to achieve a reliable outcome is feasible. The instrument can be used to evaluate both single and multiple departments for both clerks and registrars.
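The study's dimensionality conclusion (one factor rather than the hypothesised three) is the kind of result an eigenvalue inspection of the inter-item correlation matrix reveals. A small simulation sketch with illustrative loadings and sample size (assumptions for demonstration, not the PHEEM data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate questionnaire responses driven by one underlying factor;
# the loading (0.7), item count and sample size are illustrative.
n_resp, n_items, loading = 300, 12, 0.7
factor = rng.standard_normal((n_resp, 1))
noise = rng.standard_normal((n_resp, n_items)) * np.sqrt(1 - loading**2)
scores = loading * factor + noise

# Eigenvalues of the inter-item correlation matrix: when the first
# eigenvalue dominates, exploratory factor analysis points to a
# 1-dimensional scale, as reported here for the PHEEM.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(round(eigvals[0], 1), round(eigvals[1], 1))
```

With a single strong factor the first eigenvalue sits far above the rest; three genuine subscales would instead show three well-separated eigenvalues above the noise floor.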
Affiliation(s)
- K Boor
- Department of Medical Education, Sint Lucas Andreas Hospital, Amsterdam, The Netherlands.
26
van der Vleuten CPM. [Qualitative ranking of medical curricula is useful]. Ned Tijdschr Geneeskd 2006; 150:2330. [PMID: 17089553]
Abstract
Benchmarking lists are indicators of the quality of medical curricula. The measurements on which a ranking is based must of course be valid, and this is generally the case. The irresponsible use of ranking lists is to be deplored, but that does not constitute a reason not to publish them. More competition in attracting aspirant medical students, for example on the basis of ranking lists, would be a good thing, as it can help make quality more visible. Quality rankings are therefore a useful aid for the aspirant student in choosing a school.
27
Niemantsverdriet S, van der Vleuten CPM, Majoor GD, Scherpbier AJJA. The learning processes of international students through the eyes of foreign supervisors. Med Teach 2006; 28:e104-e111. [PMID: 16807160] [DOI: 10.1080/01421590600726904]
Abstract
Semi-structured interviews were conducted with external supervisors of international electives undertaken by Dutch undergraduate students, in order to gain insight into student learning processes during these electives. The interviews served to triangulate information on these learning processes that was obtained from students' self-reports. The results of the case study reported in this paper were largely consistent with findings from prior studies of international electives in which learning processes and sociocultural differences were examined: experiential learning processes appeared to dominate and sociocultural differences occasionally seemed to blur productive learning, especially when the differences between the national cultures of host country and student home country were substantial. It is recommended that students' experiential learning from international electives should be supplemented with 'guided' and 'self-directed' learning with a focus on the sociocultural dimension.
Affiliation(s)
- S Niemantsverdriet
- Department of Educational Development and Research, University of Maastricht, Maastricht, The Netherlands.
28
Niemantsverdriet S, Majoor GD, van der Vleuten CPM, Scherpbier AJJA. Internationalization of medical education in the Netherlands: state of affairs. Med Teach 2006; 28:187-189. [PMID: 16707304] [DOI: 10.1080/01421590500271225]
Abstract
In the framework of the Bologna Process, internationalization co-ordinators of seven (out of eight) Dutch medical schools completed an electronic survey about internationalization-related aspects of the curriculum. Common features of internationalization in Dutch medical schools were: the numbers of outgoing students exceeded the numbers of incoming students, and most international programmes involved clinical training and research projects. We recommend that Dutch medical schools should pay more attention to 'Internationalization at Home' and focus on conditions that are conducive to participation by foreign students.
Affiliation(s)
- S Niemantsverdriet
- Department of Educational Development and Research, University Maastricht, Maastricht, Netherlands.
29
Daelmans HEM, Overmeer RM, van der Hem-Stokroos HH, Scherpbier AJJA, Stehouwer CDA, van der Vleuten CPM. In-training assessment: qualitative study of effects on supervision and feedback in an undergraduate clinical rotation. Med Educ 2006; 40:51-58. [PMID: 16441323] [DOI: 10.1111/j.1365-2929.2005.02358.x]
Abstract
BACKGROUND Supervision and feedback are essential factors that contribute to the learning environment in the context of workplace learning and their frequency and quality can be improved. Assessment is a powerful tool with which to influence students' learning and supervisors' teaching and thus the learning environment. OBJECTIVE To investigate an in-training assessment (ITA) programme in action and to explore its effects on supervision and feedback. DESIGN A qualitative study using individual, semistructured interviews. SUBJECTS AND SETTING Eight students and 17 assessors (9 members of staff and 8 residents) in the internal medicine undergraduate clerkship at Vrije Universiteit Medical Centre, Amsterdam, the Netherlands. RESULTS The ITA programme in action differed from the intended programme. Assessors provided hardly any follow-up on supervision and feedback given during assessments. Although students wanted more supervision and feedback, they rarely asked for it. Students and assessors failed to integrate the whole range of competencies included in the ITA programme into their respective learning and supervision and feedback. When giving feedback, assessors rarely gave borderline or fail judgements. DISCUSSION AND CONCLUSION If an ITA programme in action is to be congruent with the intended programme, the implementation of the programme must be monitored. It is also necessary to provide full information about the programme and to ensure this information is given repeatedly. Introducing an ITA programme that includes the assessment of several competencies does not automatically lead to more attention being paid to these competencies in terms of supervision and feedback. Measures that facilitate change in the learning environment seem to be a prerequisite for enabling the assessment programme to steer the learning environment.
Affiliation(s)
- H E M Daelmans
- Skills Training Department, Vrije Universiteit Medical Centre, 5th Floor, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands.
30
Schuwirth LWT, van der Vleuten CPM. [Assessment of medical competence in clinical education]. Ned Tijdschr Geneeskd 2005; 149:2752-5. PMID: 16375022.
Abstract
There has been considerable change in the field of assessment of medical competence. At the moment, competency-orientated assessment, 'mini-CEX' (brief clinical evaluation exercises) and portfolios are quite popular. These methods are based on research findings indicating that medical competence can better be described as a collection of the complex tasks (so-called competencies) that a doctor must be able to perform than as the sum of knowledge, skills, problem-solving ability and attitudes. The mini-CEX is a method for assessing medical competence reliably and validly in a practical setting. Using a portfolio, information on the student's competence can be collated and evaluated from various sources, including the mini-CEX. As such, a portfolio has much in common with a patient chart.
Affiliation(s)
- L W T Schuwirth
- Universiteit Maastricht, Capaciteitsgroep Onderwijsontwikkeling en Onderwijsresearch, Postbus 616, 6200 MD Maastricht
31
Verhoeven BH, Snellen-Balendong HAM, Hay IT, Boon JM, van der Linde MJ, Blitz-Lindeque JJ, Hoogenboom RJI, Verwijnen GM, Wijnen WHFW, Scherpbier AJJA, van der Vleuten CPM. The versatility of progress testing assessed in an international context: a start for benchmarking global standardization? Med Teach 2005; 27:514-20. PMID: 16199358. DOI: 10.1080/01421590500136238.
Abstract
Sharing and collaboration relating to progress testing already take place at a national level and allow for quality control and comparisons of the participating institutions. This study explores the possibilities of international sharing of the progress test after correction for cultural bias and translation problems. Three progress tests were reviewed and administered to 3043 Pretoria and 3001 Maastricht medical students. In total, 16% of the items were potentially biased and removed from the test items administered to the Pretoria students (9% due to translation problems; 7% due to cultural differences). Of the three clusters (basic, clinical and social sciences), the social sciences cluster contained the most bias (32%) and the basic sciences cluster the least (11%). The differences found between the student results of the two schools seem a reflection of the deliberate accentuations that both curricula pursue. The results suggest that the progress test methodology provides a versatile instrument that can be used to assess medical schools across the world. Sharing of test material is a viable strategy, and test outcomes can be used in international quality control.
32
van der Hem-Stokroos HH, van der Vleuten CPM, Daelmans HEM, Haarman HJTM, Scherpbier AJJA. Reliability of the clinical teaching effectiveness instrument. Med Educ 2005; 39:904-10. PMID: 16150030. DOI: 10.1111/j.1365-2929.2005.02245.x.
Abstract
INTRODUCTION The Clinical Teaching Effectiveness Instrument (CTEI) was developed to evaluate the quality of the clinical teaching of educators. Its authors reported evidence supporting content and criterion validity and found favourable reliability findings. We tested the validity and reliability of this instrument in a European context and investigated its reliability as an instrument to evaluate the quality of clinical teaching at group level rather than at the level of the individual teacher. METHODS Students participating in a surgical clerkship were asked to complete a questionnaire reflecting a student-teacher encounter with a staff member or a resident. We calculated variance components using the urGENOVA program. For individual score interpretation of the quality of clinical teaching the standard error of estimate was calculated. For group interpretation we calculated the root mean square error. RESULTS The results did not differ statistically between staff and residents. The average score was 3.42. The largest variance component was associated with rater variance. For individual score interpretation a reliability of > 0.80 was reached with 7 ratings or more. To reach reliable outcomes at group level, 15 educators or more were needed with a single rater per educator. DISCUSSION The required sample size for appraisal of individual teaching is easily achievable. Reliable findings can also be obtained at group level with a feasible sample size. The results provide additional evidence of the reliability of the CTEI in undergraduate medical education in a European setting. The results also showed that the instrument can be used to measure the quality of teaching at group level.
33
Kristina TN, Majoor GD, van der Vleuten CPM. Does CBE come close to what it should be? A case study from the developing world. Evaluating a programme in action against objectives on paper. Educ Health (Abingdon) 2005; 18:194-208. PMID: 16009614. DOI: 10.1080/13576280500148205.
Abstract
CONTEXT A growing number of health professions schools have implemented programmes for community-based education (CBE) for their students. There are indications, however, that particularly in developing countries, CBE programmes are not always optimally implemented or sustained. OBJECTIVE To test the suitability of an established method for curriculum evaluation, combined with a set of generic objectives for CBE programmes, for evaluation of CBE programmes. METHODS As a case study, Coles and Grant's model for curriculum evaluation was applied to the CBE programme of the Medical Faculty of Diponegoro University (MFDU) in Semarang, Indonesia. Document analysis yielded information on the programme on paper; participatory observation and staff interviews on the programme in action. In addition, MFDU's CBE programme was evaluated against a set of generic objectives for CBE programmes recently designed by us. RESULTS MFDU has created great opportunities for its CBE programme, in which, however, significant weaknesses were also revealed. (1) In the community, much time was spent on formal teaching; (2) Students' work in the community was not identified jointly with community members on the basis of the community's felt health needs; (3) There was rarely continuity, evaluation or follow-up of the students' work in the community; and (4) No systematic programme evaluations were carried out. DISCUSSION This evaluation study showed shortcomings in the implementation of MFDU's CBE programme. The major weaknesses identified point at an underutilization of the opportunities and a potential jeopardization of the facilities in the community. On the other hand, more time is needed in the CBE programme to establish the health needs to be addressed jointly with the community and to assess the impact of activities undertaken. A thorough review of the CBE programme, perhaps taking the outcomes of this study into account, could turn MFDU's CBE programme into a fine example for other medical schools in Indonesia and beyond. CONCLUSION Coles and Grant's method for curriculum evaluation proved suitable for evaluation of a CBE programme in a developing country. After additional comparison with a reference list of objectives for CBE programmes, reasoned suggestions for programme improvement can be made.
Affiliation(s)
- T N Kristina
- Faculty of Medicine of Diponegoro University, Semarang, Indonesia
34
Daelmans HEM, van der Hem-Stokroos HH, Hoogenboom RJI, Scherpbier AJJA, Stehouwer CDA, van der Vleuten CPM. Global clinical performance rating, reliability and validity in an undergraduate clerkship. Neth J Med 2005; 63:279-84. PMID: 16093582.
Abstract
BACKGROUND Global performance rating is frequently used in clinical training despite its known psychometric drawbacks. Inter-rater reliability is low in undergraduate training but better in residency training, possibly because residency offers more opportunities for supervision. The low or moderate predictive validity of global performance ratings in undergraduate and residency training may be due to low or unknown reliability of both global performance ratings and criterion measures. In an undergraduate clerkship, we investigated whether reliability improves when raters are more familiar with students' work and whether validity improves with increased reliability of the predictor and criterion instrument. METHODS Inter-rater reliability was determined in a clerkship with more student-rater contacts than usual. The in-training assessment programme of the clerkship that immediately followed was used as the criterion measure to determine predictive validity. RESULTS With four ratings, inter-rater reliability was 0.41 and predictive validity was 0.32. Reliability was lower and validity slightly higher than similar results published for residency training. CONCLUSION Even with increased student-rater interaction, the reliability and validity of global performance ratings were too low to warrant the use of global performance ratings as an individual assessment format. However, combined with other assessment measures, global performance ratings may lead to improved integral assessment.
Affiliation(s)
- H E M Daelmans
- Department of Skills Training, Vrije Universiteit Medical Centre, Amsterdam, Netherlands.
35
Daelmans HEM, Hoogenboom RJI, Scherpbier AJJA, Stehouwer CDA, van der Vleuten CPM. Effects of an in-training assessment programme on supervision of and feedback on competencies in an undergraduate Internal Medicine clerkship. Med Teach 2005; 27:158-63. PMID: 16019338. DOI: 10.1080/01421590400019534.
Abstract
Assessment drives the educational behaviour of students and supervisors. Therefore, an assessment programme targeted at specific competencies may be expected to motivate supervisors and students to pay more attention to those competencies. In-training assessment (ITA) is regarded as a feasible method for assessing a broad range of competencies. Before and after the implementation of an ITA programme in an undergraduate Internal Medicine clerkship we surveyed students on the frequency of unobserved and observed supervision, and the quality of feedback as inferred from the seniority of the person providing it. After the implementation of the ITA programme supervision increased, but the difference was not statistically significant. The quality of feedback showed no significant change either. Inter-student variation in supervision and feedback remained invariably high after the implementation of the ITA programme. Whether these results are attributable to the way the programme was implemented or to the way the results were assessed remains to be clarified.
Affiliation(s)
- H E M Daelmans
- Vrije Universiteit Medical Center, Skills Training Department, Amsterdam, The Netherlands.
36
Hobma SO, Ram PM, Muijtjens AMM, Grol RPTM, van der Vleuten CPM. Setting a standard for performance assessment of doctor-patient communication in general practice. Med Educ 2004; 38:1244-52. PMID: 15566535. DOI: 10.1111/j.1365-2929.2004.01918.x.
Abstract
CONTEXT Continuing professional development (CPD) of general practitioners. OBJECTIVE Criterion-referenced standards for assessing performance in the real practice of general practitioners (GPs) should be available to identify learning needs or poor performers for CPD. The applicability of common standard setting procedures in authentic assessment has not been investigated. METHODS To set a standard for assessment of GP-patient communication with video observation of daily practice, we investigated 2 well-known examples of 2 different standard setting approaches. An Angoff procedure was applied to 8 written cases. A borderline regression method was applied to videotaped consultations of 88 GPs. The procedures and outcomes were evaluated by the applicability of the procedure, the reliability of the standards and the credibility as perceived by the stakeholders, namely, the GPs. RESULTS Both methods are applicable and reliable; the obtained standards are credible according to the GPs. CONCLUSIONS Both modified methods can be used to set a standard for assessment in daily practice. The context in which the standard will be used - i.e. the specific purpose of the standard, when the standard must be available and whether specific feedback must be given - is important, because the methods differ in practical aspects.
Affiliation(s)
- S O Hobma
- Department of General Practice, Centre for Quality of Care Research, University of Maastricht, Maastricht, The Netherlands.
37
van der Vleuten CPM, Schuwirth LWT, Muijtjens AMM, Thoben AJNM, Cohen-Schotanus J, van Boven CPA. Cross institutional collaboration in assessment: a case on progress testing. Med Teach 2004; 26:719-25. PMID: 15763876. DOI: 10.1080/01421590400016464.
Abstract
The practice of assessment is governed by an interesting paradox. On the one hand, good assessment requires substantial resources which may exceed the capacity of a single institution, and we have reason to doubt the quality of our in-house examinations. On the other hand, our parsimony with regard to our resources makes us reluctant to pool efforts and share our test material. This paper reports on an initiative to share test material across different medical schools. Three medical schools in The Netherlands have successfully set up a partnership for a specific testing method: progress testing. At present, these three schools collaboratively produce high-quality test items. The jointly produced progress tests are administered concurrently by these three schools and one other school, which buys the test. The steps taken in establishing this partnership are described and results are presented to illustrate the unique sort of information that is obtained by cross-institutional assessment. In addition, plans to improve test content and procedure and to expand the partnership are outlined. Eventually, the collaboration may even extend to other test formats. This article is intended to give evidence of the feasibility and exciting potential of between-school collaboration in test development and test administration. Our experiences have demonstrated that such collaboration has excellent potential to combine economic benefit with educational advantages which exceed what is achievable by individual schools.
38
Daelmans HEM, van der Hem-Stokroos HH, Hoogenboom RJI, Scherpbier AJJA, Stehouwer CDA, van der Vleuten CPM. Feasibility and reliability of an in-training assessment programme in an undergraduate clerkship. Med Educ 2004; 38:1270-7. PMID: 15566538. DOI: 10.1111/j.1365-2929.2004.02019.x.
Abstract
INTRODUCTION Structured assessment, embedded in a training programme, with systematic observation, feedback and appropriate documentation may improve the reliability of clinical assessment. This type of assessment format is referred to as in-training assessment (ITA). The feasibility and reliability of an ITA programme in an internal medicine clerkship were evaluated. The programme comprised 4 ward-based test formats and 1 outpatient clinic-based test format. Of the 4 ward-based test formats, 3 were single-sample tests, consisting of 1 student-patient encounter, 1 critical appraisal session and 1 case presentation. The other ward-based test and the outpatient-based test were multiple sample tests, consisting of 12 ward-based case write-ups and 4 long cases in the outpatient clinic. In all the ITA programme consisted of 19 assessments. METHODS During 41 months, data were collected from 119 clerks. Feasibility was defined as over two thirds of the students obtaining 19 assessments. Reliability was estimated by performing generalisability analyses with 19 assessments as items and 5 test formats as items. RESULTS A total of 73 students (69%) completed 19 assessments. Reliability expressed by the generalisability coefficients was 0.81 for 19 assessments and 0.55 for 5 test formats. CONCLUSIONS The ITA programme proved to be feasible. Feasibility may be improved by scheduling protected time for assessment for both students and staff. Reliability may be improved by more frequent use of some of the test formats.
Affiliation(s)
- H E M Daelmans
- Skills Training Department, Vrije Universiteit Medical Centre, Amsterdam, The Netherlands.
39
van der Hem-Stokroos HH, Daelmans HEM, van der Vleuten CPM, Haarman HJTM, Scherpbier AJJA. The impact of multifaceted educational structuring on learning effectiveness in a surgical clerkship. Med Educ 2004; 38:879-86. PMID: 15271049. DOI: 10.1111/j.1365-2929.2004.01899.x.
Abstract
INTRODUCTION Various measures have been introduced to enhance learning experiences in clerkships, generally with limited success. This study evaluated the impact of a multifaceted approach on the effectiveness of learning in a surgical clerkship. In accordance with results obtained in continuing medical education, several interventions were introduced simultaneously. We compared students' evaluations of the traditional surgical clerkship with those of the restructured clerkship. METHODS Two consecutive cohorts of students were asked to complete a questionnaire about the quality and quantity of their learning experiences. Cohort 1 (n = 28) undertook the traditional clerkship and cohort 2 (n = 72) the restructured clerkship. A Mann-Whitney test was used to compare outcomes between the 2 cohorts. RESULTS There were few statistically significant differences between cohorts 1 and 2. Overall, quality indicators did not differ between the 2 cohorts. DISCUSSION A short-term multifaceted intervention led to a slight increase in the performance of clinical skills and a slight decrease in time spent on activities of limited educational value. The intervention may have been too brief to produce substantial effects. Future interventions should also target teachers, including trainees, in order to assess their opinions and address their educational needs.
Affiliation(s)
- H H van der Hem-Stokroos
- Department of Surgery, Vrije Universiteit Medical Centre, PO Box 7057, 1007 MB Amsterdam, The Netherlands.
40
Daelmans HEM, Hoogenboom RJI, Donker AJM, Scherpbier AJJA, Stehouwer CDA, van der Vleuten CPM. Effectiveness of clinical rotations as a learning environment for achieving competences. Med Teach 2004; 26:305-12. PMID: 15203842. DOI: 10.1080/01421590410001683195.
Abstract
Competences are becoming more and more prominent in undergraduate medical education. Workplace learning is regarded as crucial in competence learning. Assuming that effective learning depends on adequate supervision, feedback and assessment, the authors studied the occurrence of these three variables in relation to a set of clinical competences. They surveyed students at the end of their rotation in surgery, internal medicine or paediatrics asking them to indicate for each competence how often they had received observed and unobserved supervision, the seniority of the person who provided most of their feedback, and whether the competence was addressed in formal assessments. Supervision was found to be scarce and mostly unobserved. Senior staff did not provide much feedback, and assessment mostly targeted patient-related competences. For all variables, the variation between students exceeded that between disciplines. We conclude that conditions for adequate workplace learning are poorly met and that clerkship experiences show huge inter-student variation.
Affiliation(s)
- H E M Daelmans
- Vrije Universiteit Medical Centre, Skills Training Department, Amsterdam, The Netherlands.
41
Abstract
RATIONALE The availability of a framework for the definition of generic objectives for community-based education (CBE) programmes may assist in the rational design of objectives for specific CBE programmes. STRATEGY Factors impacting on community health from the perspective of a developing country were collected. Potential assistance from medical students to communities to improve their health status was determined. Competencies required in students to execute tasks in the community were defined and eventually educational objectives to develop these competencies in the students were established. METHODS Factors impacting on community health and activities of medical students in CBE programmes were identified by review of literature and Internet resources. Competencies desired for execution of tasks by students and educational objectives to develop these competencies were defined by us and checked against pertinent literature. A draft table representing the 4 elements of the framework was discussed by an international group of experts for external validation. MAIN OUTCOMES A total of 26 factors impacting on community health were identified and clustered in 5 domains. Twenty-one generic objectives for CBE programmes were defined to develop the required competencies in students. Analogues of each of these 21 objectives were found in at least 1 publication specifying objectives for specific CBE programmes but none of these publications stated any objective not covered by our list of generic objectives. CONCLUSION It proved possible to develop a framework to define generic objectives for CBE programmes. An example was elaborated from the perspective of a medical school in a developing country.
Affiliation(s)
- T N Kristina
- Faculty of Medicine, Diponegoro University, Semarang, Indonesia
42
Kramer AWM, Düsman H, Tan LHC, Jansen JJM, Grol RPTM, van der Vleuten CPM. Acquisition of communication skills in postgraduate training for general practice. Med Educ 2004; 38:158-67. PMID: 14871386. DOI: 10.1111/j.1365-2923.2004.01747.x.
Abstract
PURPOSE The evidence suggests that a longitudinal training of communication skills embedded in a rich clinical context is most effective. In this study we evaluated the acquisition of communication skills under such conditions. METHODS In a longitudinal design the communication skills of a randomly selected sample of 25 trainees of a three-year postgraduate training programme for general practice were assessed at the start and at the end of training. Eight videotaped real life consultations were rated per measurement and per trainee, using the MAAS-Global scoring list. The results were compared with each other and with those of a reference group of 94 experienced GPs. RESULTS The mean score of the MAAS-Global was slightly increased at the end of training (2.4) compared with the start (2.2). No significant difference was found between the final results of the trainees and the reference group. According to the criteria of the rating scale the performance of both trainees and GPs was unsatisfactory. CONCLUSION The results of this study indicate that communication skills do not improve in a three-year postgraduate training comprising both a rich clinical context and a longitudinal training of communication skills, and that an unsatisfactory level still exists at the end of training. Moreover, GPs do not acquire communication skills during independent practice as they perform comparably to the trainees. Further research into the measurement of communication skills, the teaching procedures, the role of the GP-trainer as a model and the influence of rotations through hospitals and the like, is required.
Affiliation(s)
- A W M Kramer
- Centre for Postgraduate Training in General Practice (VOHA), University Medical Centre, Nijmegen, The Netherlands.
43
Blok GA, Morton J, Morley M, Kerckhoffs CCJM, Kootstra G, van der Vleuten CPM. Requesting organ donation: the case of self-efficacy--effects of the European Donor Hospital Education Programme (EDHEP). Adv Health Sci Educ Theory Pract 2004; 9:261-82. PMID: 15583482. DOI: 10.1007/s10459-004-9404-6.
Abstract
One of the major reasons for the shortage of donor organs is the high number of refusals by relatives. Studies have shown that the quality of communication with bereaved relatives influences whether they object or agree to organ and/or tissue donation. Breaking news of brain stem death and approaching relatives for permission to donate organs while also appropriately managing relatives' emotional reactions are complex tasks, which require knowledge of the domains involved as well as adequate skills to communicate information and understanding. In this study the effect of the European Donor Hospital Education Programme (EDHEP) on the self-efficacy of Intensive Care staff is evaluated. Self-efficacy scores significantly improved after attending EDHEP, an effect that was maintained at six-month follow-up. EDHEP participants with high baseline scores on self-efficacy maintained the increase at follow-up. EDHEP participants with low baseline scores on self-efficacy showed the greatest increase at the post-test. Increases in self-efficacy were significantly related to decreases in the perceived difficulty of requesting. Experience had a significant effect on both self-efficacy beliefs and the perceived difficulty of requesting donation. As self-efficacy beliefs are perceived as better predictors of future behaviour than prior attainments, the results call for further research in this domain. The data indicate that training programmes should be tailored not only to the working circumstances of participants but should also take levels of experience and self-efficacy into account. Further study is necessary, and the best way to proceed is to relate the outcomes of this study to behavioural outcomes.
Affiliation(s)
- G A Blok
- Department of Educational Development & Research, University of Maastricht, P.O. Box 616, 6200 MD Maastricht, The Netherlands.
44
Ringsted C, Østergaard D, van der Vleuten CPM. Implementation of a formal in-training assessment programme in anaesthesiology and preliminary results of acceptability. Acta Anaesthesiol Scand 2003; 47:1196-203. PMID: 14616315. DOI: 10.1046/j.1399-6576.2003.00255.x.
Abstract
BACKGROUND A new reform on postgraduate education in Denmark requires a formal in-training assessment in all specialties. The aim of this study was to survey the implementation and acceptability of the first example of a nation-wide in-training assessment programme for first-year trainees in anaesthesiology, developed by a working group under the Danish Society of Anaesthesiology and Intensive Care Medicine. METHODS A questionnaire about the implementation of the programme in practice and the characteristics of the trainees was sent to the educational responsible consultant (ERC) in each of the 26 anaesthetic departments in the country with first-year trainees in anaesthesiology. Standard evaluations of the assessment programme were regularly collected from trainees. RESULTS Twenty-five (96%) departments returned the questionnaire. In total the departments reported on 100 trainees, 83 of whom had been enrolled in the programme. Thirteen departments reported in total on 27 trainees who had completed their first year of training, and these departments had applied a median of 21 (range 17-21) of the 21 tests included in the entire programme. Time constraints and resistance among senior clinicians were the most frequently cited barriers to implementation. Evaluations from trainees showed a generally positive attitude towards most of the programme. They especially praised the programme's effect on structuring training and its positive effect on learning. CONCLUSION The in-training assessment programme has been widely implemented across the country. The majority of the programme was acceptable to trainees and had a positive effect on structuring training and on fostering learning.
Affiliation(s)
- C Ringsted
- Copenhagen Hospital Corporation, Postgraduate Medical Institute, Bispebjerg Hospital, Copenhagen, Denmark.
45
Abstract
CONTEXT Simulation-based testing methods have been developed to meet the need for assessment procedures that are both authentic and well-structured. It is widely acknowledged that, although the authenticity of a procedure may be a contributing factor to its validity, authenticity alone never is a sufficient factor. AIM In this paper we describe the mainstream development of various simulation-based approaches, with their strengths and weaknesses. The purpose is not to provide a review based on an extensive meta-analysis but to present crucial factors in the development of these methods and their implications for current and future developments. METHOD The description of these simulation-based instruments uses a subdivision according to the layers of Miller's pyramid. Written and computer-based simulations are aimed at measuring the 'knows how' layer, observation-based techniques such as standardised patient-based examinations and objective structured clinical examinations target the 'shows how' layer and performance practice measures assess performance at the 'does' layer. CONCLUSION In all simulations, case specificity was found to pose the most prominent threat to reliability, while too much structure threatened to trivialise the assessment. The conclusion is that authentic and reliable assessment is predicated on a wise balance between efficiency and adequate content sampling.
Affiliation(s)
- L W T Schuwirth
- Department of Educational Development and Research, University of Maastricht, The Netherlands.
46
Ringsted C, Østergaard D, Ravn L, Pedersen JA, Berlac PA, van der Vleuten CPM. A feasibility study comparing checklists and global rating forms to assess resident performance in clinical skills. Med Teach 2003; 25:654-658. [PMID: 15369915 DOI: 10.1080/01421590310001605642] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.0]
Abstract
This study evaluated the feasibility of two different scoring forms for assessing the clinical performance of residents in anaesthesiology. One form had a checklist format with task-specific items; the other was a global rating form covering general dimensions of competence, including 'clinical skills', 'communication skills' and 'knowledge'. Thirty-two clinicians, representing 25 (83%) of the 30 training hospitals in the country, participated in the study. The clinicians were randomized into two groups, each of which used one of the scoring formats to assess a resident's performance in four simulated clinical scenarios on videotape. Clinicians' opinions about the appropriateness of the scoring forms were rated on a scale of 1-5. The checklist format was rated significantly higher than the global rating form (mean 4.6, 0.5 vs. mean 3.5, 1.4; p < 0.001). Inter-rater agreement on pass/fail decisions was poor irrespective of the scoring form used. This was explained by clinicians' leniency as assessors rather than by lack of vigilance in their observations or by disagreement on standards for good performance.
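Poor inter-rater agreement on pass/fail decisions, as reported above, is typically quantified with a chance-corrected statistic such as Cohen's kappa. A minimal sketch of that computation (the two raters' decisions below are hypothetical, not from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical decisions."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail decisions by two clinicians on eight performances
a = ["pass", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
b = ["pass", "fail", "fail", "pass", "pass", "pass", "pass", "pass"]
print(round(cohens_kappa(a, b), 2))
```

A kappa near zero despite high raw agreement is exactly the pattern produced by systematic leniency: both raters pass almost everyone, so agreement by chance is already high.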
Affiliation(s)
- C Ringsted
- Copenhagen Hospital Corporation, Postgraduate Medical Institute, Bispebjerg Hospital, Copenhagen, Denmark.
47
van der Hem-Stokroos HH, Daelmans HEM, van der Vleuten CPM, Haarman HJTM, Scherpbier AJJA. A qualitative study of constructive clinical learning experiences. Med Teach 2003; 25:120-126. [PMID: 12745517 DOI: 10.1080/0142159031000092481] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.1]
Abstract
Little is known about the effectiveness of clinical education, and adding more educational structure is considered potentially beneficial. The following structured components were added to a surgical clerkship: logbooks, an observed student-patient encounter, individual appraisals, feedback on patient notes, and (case) presentations by students. The authors organized two focus-group sessions, in which 19 students participated, to explore their perceptions of effective clinical learning experiences and the newly introduced structured components. Analysis of the transcripts showed that observation and constructive feedback are key features of clinical training. The structured activities were appreciated, and the results indicate the direction for further improvement. Learning experiences depended greatly on the educational qualities of individual clinicians. Students experienced being on call, assisting in theatre and time for self-study as instructive elements. Recommended clerkship components are: active involvement of students, direct observation, selection of teachers, a positive learning environment and time for self-study.
48
Verhoeven BH, Verwijnen GM, Muijtjens AMM, Scherpbier AJJA, van der Vleuten CPM. Panel expertise for an Angoff standard setting procedure in progress testing: item writers compared to recently graduated students. Med Educ 2002; 36:860-867. [PMID: 12354249 DOI: 10.1046/j.1365-2923.2002.01301.x] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.6]
Abstract
INTRODUCTION An earlier study showed that an Angoff procedure with ≥ 10 recently graduated students as judges can be used to estimate the passing score of a progress test. As the acceptability and feasibility of this approach are questionable, we conducted an Angoff procedure with test item writers as judges. This paper reports on the reliability and credibility of this procedure and compares the standards set by the two different panels. METHODS Fourteen item writers judged 146 test items; recently graduated students had assessed the same items in a previous study. Generalizability was investigated as a function of the number of items and judges. Credibility was judged by comparing the pass/fail rates associated with the Angoff standard, a relative standard and a fixed standard, and the Angoff standards obtained by item writers and graduates were compared. RESULTS The variance associated with consistent variability of judges across items was 1.5% for item writers and 0.4% for graduate students. An acceptable error score required 39 judges. Item-Angoff estimates of the two panels and item P-values correlated highly. Failure rates of 57%, 55% and 7% were associated with the item writers' standard, the fixed standard and the graduates' standard, respectively. CONCLUSION The graduates' and the item writers' standards differed substantially, as did the associated failure rates. A panel of 39 item writers is not feasible, and the item writers' passing score appears to be less credible. The credibility of the graduates' standard needs further evaluation, and the acceptability and feasibility of a panel consisting of both students and item writers may be worth investigating.
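In an Angoff procedure each judge estimates, for every item, the probability that a minimally competent candidate would answer it correctly; the panel's passing score is the mean of these estimates. A minimal sketch of that aggregation (the panel data are hypothetical, not from the study):

```python
def angoff_passing_score(estimates):
    """estimates: one list per judge, holding a probability per item.
    Returns the passing score as a fraction of the maximum test score."""
    n_judges = len(estimates)
    n_items = len(estimates[0])
    # Average over judges per item, then average over items.
    item_means = [sum(judge[i] for judge in estimates) / n_judges
                  for i in range(n_items)]
    return sum(item_means) / n_items

# Three hypothetical judges rating four items
panel = [
    [0.8, 0.6, 0.90, 0.50],
    [0.7, 0.5, 0.80, 0.60],
    [0.9, 0.7, 0.85, 0.55],
]
print(round(angoff_passing_score(panel), 3))
```

Comparing this aggregate across two panels (item writers vs. graduates), and correlating their per-item means with empirical item P-values, is the kind of analysis the abstract describes.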
Affiliation(s)
- B H Verhoeven
- Department of Surgery, University Hospital, Leeuwarderstraat 2A, NL-9718 HX Groningen, The Netherlands.
49
Kramer AWM, Jansen JJM, Zuithoff P, Düsman H, Tan LHC, Grol RPTM, van der Vleuten CPM. Predictive validity of a written knowledge test of skills for an OSCE in postgraduate training for general practice. Med Educ 2002; 36:812-819. [PMID: 12354243 DOI: 10.1046/j.1365-2923.2002.01297.x] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.1]
Abstract
PURPOSE To examine the validity of a written knowledge test of skills as a predictor of performance on an OSCE in postgraduate training for general practice. METHODS A randomly selected sample of 47 trainees in general practice took a knowledge test of skills, a general knowledge test and an OSCE. The OSCE comprised technical stations and stations involving complete patient encounters; each station was scored with both a checklist rating and a global rating. RESULTS The knowledge test of skills correlated better with the OSCE than the general knowledge test did. Technical stations correlated better with the knowledge test of skills than did stations involving complete patient encounters. For the technical stations the rating system had no influence on the correlation; for the stations involving complete patient encounters the checklist rating correlated better with the knowledge test of skills than the global rating did. CONCLUSION The results of this study support the predictive validity of the knowledge test of skills. In postgraduate training for general practice a written knowledge test of skills can be used to estimate the level of clinical skills, especially for group evaluation, for example in studies examining the efficacy of a training programme, or as a screening instrument for deciding which courses to offer. The estimate is more accurate when the content of the test matches the skills under study. However, written testing of skills cannot replace direct observation of the performance of skills.
Affiliation(s)
- A W M Kramer
- National Centre for Evaluation of Postgraduate Training in General Practice (SVUH), Mauritsstraat 92, 3583 HV Utrecht, The Netherlands.
50
Abstract
BACKGROUND Knowledge is an essential component of medical competence and a major objective of medical education. Thus, the degree of acquisition of knowledge by students is one of the measures of the effectiveness of a medical curriculum. We studied the growth in student knowledge over the course of Maastricht Medical School's 6-year problem-based curriculum. METHODS We analysed 60 491 progress test (PT) scores of 3226 undergraduate students at Maastricht Medical School. During the 6-year curriculum a student sits 24 PTs (i.e. four PTs in each year), intended to assess knowledge at graduation level. On each test occasion all students are given the same PT, which means that in year 1 a student is expected to score considerably lower than in year 6. The PT is therefore a longitudinal, objective assessment instrument. Mean scores for overall knowledge and for clinical, basic, and behavioural/social sciences knowledge were calculated and used to estimate growth curves. FINDINGS Overall medical knowledge and clinical sciences knowledge demonstrated a steady upward growth curve. However, the curves for behavioural/social sciences and basic sciences started to level off in years 4 and 5, respectively. The increase in knowledge was greatest for clinical sciences (43%), whereas it was 32% and 25% for basic and behavioural/social sciences, respectively. INTERPRETATION Maastricht Medical School claims to offer a problem-based, student-centred, horizontally and vertically integrated curriculum in the first 4 years, followed by clerkships in years 5 and 6. Students learn by analysing patient problems and exploring pathophysiological explanations. Originally, it was intended that students' knowledge of behavioural/social sciences would continue to increase during their clerkships. However, the results for years 5 and 6 show diminishing growth in basic and behavioural/social sciences knowledge compared to overall and clinical sciences knowledge, which appears to suggest there are discrepancies between the actual and the planned curricula. Further research is needed to explain this.
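Growth curves like those described above are estimated by regressing mean test scores on measurement occasion. A minimal sketch of an ordinary least-squares linear fit, assuming hypothetical mean scores (the figures below are illustrative, not from the study):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y ≈ a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical mean progress-test scores over eight test occasions
occasions = list(range(1, 9))
scores = [12.0, 17.5, 22.0, 28.5, 33.0, 38.5, 43.0, 49.0]
slope, intercept = linear_fit(occasions, scores)
print(round(slope, 2), round(intercept, 2))
```

Fitting separate curves per knowledge domain (clinical, basic, behavioural/social sciences) and comparing their slopes across years is what reveals the levelling-off the authors report; the study's actual curves need not be linear.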
Affiliation(s)
- B H Verhoeven
- Department of Surgery, University Hospital Groningen, Groningen, The Netherlands.