26. van der Vleuten CPM. [Qualitative ranking of medical curricula is useful]. Nederlands Tijdschrift voor Geneeskunde 2006;150:2330. PMID: 17089553.
Abstract
Ranking lists serve as indicators of the quality of medical curricula. The measurements on which such rankings are based must of course be valid, and this is generally the case. Irresponsible use of ranking lists is to be deplored, but that is no reason not to publish them. More competition in attracting prospective medical students, for example on the basis of ranking lists, would be a good thing, as it can help make quality more visible. Quality rankings are therefore a useful aid for prospective students in choosing a school.
27. Niemantsverdriet S, van der Vleuten CPM, Majoor GD, Scherpbier AJJA. The learning processes of international students through the eyes of foreign supervisors. Medical Teacher 2006;28:e104-11. PMID: 16807160. DOI: 10.1080/01421590600726904.
Abstract
Semi-structured interviews were conducted with external supervisors of international electives undertaken by Dutch undergraduate students, in order to gain insight into student learning processes during these electives. The interviews served to triangulate information on these learning processes that was obtained from students' self-reports. The results of the case study reported in this paper were largely consistent with findings from prior studies of international electives in which learning processes and sociocultural differences were examined: experiential learning processes appeared to dominate and sociocultural differences occasionally seemed to blur productive learning, especially when the differences between the national cultures of host country and student home country were substantial. It is recommended that students' experiential learning from international electives should be supplemented with 'guided' and 'self-directed' learning with a focus on the sociocultural dimension.
28. Niemantsverdriet S, Majoor GD, van der Vleuten CPM, Scherpbier AJJA. Internationalization of medical education in the Netherlands: state of affairs. Medical Teacher 2006;28:187-9. PMID: 16707304. DOI: 10.1080/01421590500271225.
Abstract
In the framework of the Bologna Process, internationalization co-ordinators of seven (out of eight) Dutch medical schools completed an electronic survey about internationalization-related aspects of the curriculum. Common features of internationalization in Dutch medical schools were: the numbers of outgoing students exceeded the numbers of incoming students, and most international programmes involved clinical training and research projects. We recommend that Dutch medical schools should pay more attention to 'Internationalization at Home' and focus on conditions that are conducive to participation by foreign students.
29. Daelmans HEM, Overmeer RM, van der Hem-Stokroos HH, Scherpbier AJJA, Stehouwer CDA, van der Vleuten CPM. In-training assessment: qualitative study of effects on supervision and feedback in an undergraduate clinical rotation. Medical Education 2006;40:51-8. PMID: 16441323. DOI: 10.1111/j.1365-2929.2005.02358.x.
Abstract
BACKGROUND Supervision and feedback are essential factors that contribute to the learning environment in the context of workplace learning and their frequency and quality can be improved. Assessment is a powerful tool with which to influence students' learning and supervisors' teaching and thus the learning environment. OBJECTIVE To investigate an in-training assessment (ITA) programme in action and to explore its effects on supervision and feedback. DESIGN A qualitative study using individual, semistructured interviews. SUBJECTS AND SETTING Eight students and 17 assessors (9 members of staff and 8 residents) in the internal medicine undergraduate clerkship at Vrije Universiteit Medical Centre, Amsterdam, the Netherlands. RESULTS The ITA programme in action differed from the intended programme. Assessors provided hardly any follow-up on supervision and feedback given during assessments. Although students wanted more supervision and feedback, they rarely asked for it. Students and assessors failed to integrate the whole range of competencies included in the ITA programme into their respective learning and supervision and feedback. When giving feedback, assessors rarely gave borderline or fail judgements. DISCUSSION AND CONCLUSION If an ITA programme in action is to be congruent with the intended programme, the implementation of the programme must be monitored. It is also necessary to provide full information about the programme and to ensure this information is given repeatedly. Introducing an ITA programme that includes the assessment of several competencies does not automatically lead to more attention being paid to these competencies in terms of supervision and feedback. Measures that facilitate change in the learning environment seem to be a prerequisite for enabling the assessment programme to steer the learning environment.
30. Schuwirth LWT, van der Vleuten CPM. [Assessment of medical competence in clinical education]. Nederlands Tijdschrift voor Geneeskunde 2005;149:2752-5. PMID: 16375022.
Abstract
There has been considerable change in the field of assessment of medical competence. Competency-orientated assessment, the 'mini-CEX' (brief clinical evaluation exercise) and portfolios are currently quite popular. These methods are based on research findings indicating that medical competence is better described as a collection of complex tasks (so-called competencies) that a doctor must be able to perform than as the sum of knowledge, skills, problem-solving ability and attitudes. The mini-CEX is a method for assessing medical competence reliably and validly in a practical setting. A portfolio can be used to collate and evaluate information on a student's competence from various sources, including mini-CEX results. As such, a portfolio has much in common with a patient chart.
31. Verhoeven BH, Snellen-Balendong HAM, Hay IT, Boon JM, van der Linde MJ, Blitz-Lindeque JJ, Hoogenboom RJI, Verwijnen GM, Wijnen WHFW, Scherpbier AJJA, van der Vleuten CPM. The versatility of progress testing assessed in an international context: a start for benchmarking global standardization? Medical Teacher 2005;27:514-20. PMID: 16199358. DOI: 10.1080/01421590500136238.
Abstract
Sharing and collaboration relating to progress testing already take place at a national level and allow for quality control and comparison of the participating institutions. This study explores the possibilities of international sharing of the progress test after correction for cultural bias and translation problems. Three progress tests were reviewed and administered to 3043 Pretoria and 3001 Maastricht medical students. In total, 16% of the items were potentially biased and were removed from the items administered to the Pretoria students (9% due to translation problems; 7% due to cultural differences). Of the three clusters (basic, clinical and social sciences), the social sciences contained the most bias (32%) and the basic sciences the least (11%). The differences found when comparing the results of students at the two schools seem to reflect the deliberate accentuations that the two curricula pursue. The results suggest that the progress test methodology provides a versatile instrument for assessing medical schools across the world. Sharing test material is a viable strategy, and the test outcomes are interesting and can be used in international quality control.
32. van der Hem-Stokroos HH, van der Vleuten CPM, Daelmans HEM, Haarman HJTM, Scherpbier AJJA. Reliability of the clinical teaching effectiveness instrument. Medical Education 2005;39:904-10. PMID: 16150030. DOI: 10.1111/j.1365-2929.2005.02245.x.
Abstract
INTRODUCTION The Clinical Teaching Effectiveness Instrument (CTEI) was developed to evaluate the quality of educators' clinical teaching. Its authors reported evidence supporting content and criterion validity and found favourable reliability. We tested the validity and reliability of this instrument in a European context and investigated its reliability as an instrument for evaluating the quality of clinical teaching at group level rather than at the level of the individual teacher. METHODS Students participating in a surgical clerkship were asked to fill in a questionnaire on a student-teacher encounter with a staff member or a resident. We calculated variance components using the urGENOVA program. For individual score interpretation of the quality of clinical teaching, the standard error of estimate was calculated. For group interpretation we calculated the root mean square error. RESULTS The results did not differ statistically between staff and residents. The average score was 3.42. The largest variance component was associated with rater variance. For individual score interpretation, a reliability of > 0.80 was reached with 7 or more ratings. To reach reliable outcomes at group level, 15 or more educators were needed with a single rater per educator. DISCUSSION The sample size required for appraisal of individual teaching is easily achievable. Reliable findings can also be obtained at group level with a feasible sample size. The results provide additional evidence of the reliability of the CTEI in undergraduate medical education in a European setting. They also show that the instrument can be used to measure the quality of teaching at group level.
33. Kristina TN, Majoor GD, van der Vleuten CPM. Does CBE come close to what it should be? A case study from the developing world. Evaluating a programme in action against objectives on paper. Education for Health (Abingdon, England) 2005;18:194-208. PMID: 16009614. DOI: 10.1080/13576280500148205.
Abstract
CONTEXT A growing number of health professions schools have implemented community-based education (CBE) programmes for their students. There are indications, however, that particularly in developing countries CBE programmes are not always optimally implemented or sustained. OBJECTIVE To test the suitability of an established method for curriculum evaluation, combined with a set of generic objectives for CBE programmes, for the evaluation of CBE programmes. METHODS As a case study, Coles and Grant's model for curriculum evaluation was applied to the CBE programme of the Medical Faculty of Diponegoro University (MFDU) in Semarang, Indonesia. Document analysis yielded information on the programme on paper; participatory observation and staff interviews yielded information on the programme in action. In addition, MFDU's CBE programme was evaluated against a set of generic objectives for CBE programmes that we recently designed. RESULTS MFDU has created great opportunities for its CBE programme, but significant weaknesses were also revealed: (1) much time in the community was spent on formal teaching; (2) students' work in the community was not identified jointly with community members on the basis of the community's felt health needs; (3) there was rarely continuity, evaluation or follow-up of the students' work in the community; and (4) no systematic programme evaluations were carried out. DISCUSSION This evaluation study revealed shortcomings in the implementation of MFDU's CBE programme. The major weaknesses identified point to underutilization of the opportunities, and potentially jeopardize the facilities, in the community. More time is also needed in the CBE programme to establish, jointly with the community, the health needs to be addressed and to assess the impact of the activities undertaken. A thorough review of the CBE programme, taking the outcomes of this study into account, could turn MFDU's CBE programme into a fine example for other medical schools in Indonesia and beyond. CONCLUSION Coles and Grant's method for curriculum evaluation proved suitable for evaluating a CBE programme in a developing country. Combined with comparison against a reference list of objectives for CBE programmes, it allows reasoned suggestions for programme improvement to be made.
34. Daelmans HEM, van der Hem-Stokroos HH, Hoogenboom RJI, Scherpbier AJJA, Stehouwer CDA, van der Vleuten CPM. Global clinical performance rating, reliability and validity in an undergraduate clerkship. Netherlands Journal of Medicine 2005;63:279-84. PMID: 16093582.
Abstract
BACKGROUND Global performance rating is frequently used in clinical training despite its known psychometric drawbacks. Inter-rater reliability is low in undergraduate training but better in residency training, possibly because residency offers more opportunities for supervision. The low to moderate predictive validity of global performance ratings in undergraduate and residency training may be due to the low or unknown reliability of both the global performance ratings and the criterion measures. In an undergraduate clerkship, we investigated whether reliability improves when raters are more familiar with students' work and whether validity improves with increased reliability of the predictor and criterion instruments. METHODS Inter-rater reliability was determined in a clerkship with more student-rater contacts than usual. The in-training assessment programme of the clerkship that immediately followed was used as the criterion measure to determine predictive validity. RESULTS With four ratings, inter-rater reliability was 0.41 and predictive validity was 0.32. Reliability was lower and validity slightly higher than similar results published for residency training. CONCLUSION Even with increased student-rater interaction, the reliability and validity of global performance ratings were too low to warrant their use as an individual assessment format. Combined with other assessment measures, however, global performance ratings may contribute to improved integral assessment.
35. Daelmans HEM, Hoogenboom RJI, Scherpbier AJJA, Stehouwer CDA, van der Vleuten CPM. Effects of an in-training assessment programme on supervision of and feedback on competencies in an undergraduate Internal Medicine clerkship. Medical Teacher 2005;27:158-63. PMID: 16019338. DOI: 10.1080/01421590400019534.
Abstract
Assessment drives the educational behaviour of students and supervisors. An assessment programme targeted at specific competencies may therefore be expected to motivate supervisors and students to pay more attention to those competencies. In-training assessment (ITA) is regarded as a feasible method for assessing a broad range of competencies. Before and after the implementation of an ITA programme in an undergraduate Internal Medicine clerkship, we surveyed students on the frequency of observed and unobserved supervision and on the quality of feedback as inferred from the seniority of the person providing it. After the implementation of the ITA programme supervision increased, but the difference was not statistically significant. The quality of feedback showed no significant change either. Inter-student variation in supervision and feedback remained high after the implementation of the ITA programme. Whether these results are attributable to the way the programme was implemented or to the way the results were assessed remains to be clarified.
36. Hobma SO, Ram PM, Muijtjens AMM, Grol RPTM, van der Vleuten CPM. Setting a standard for performance assessment of doctor-patient communication in general practice. Medical Education 2004;38:1244-52. PMID: 15566535. DOI: 10.1111/j.1365-2929.2004.01918.x.
Abstract
CONTEXT Continuing professional development (CPD) of general practitioners (GPs). OBJECTIVE Criterion-referenced standards for assessing GPs' performance in real practice should be available to identify learning needs or poor performers for CPD. The applicability of common standard-setting procedures to authentic assessment has not been investigated. METHODS To set a standard for the assessment of GP-patient communication based on video observation of daily practice, we investigated well-known examples of 2 different standard-setting approaches. An Angoff procedure was applied to 8 written cases. A borderline regression method was applied to videotaped consultations of 88 GPs. The procedures and outcomes were evaluated on the applicability of the procedure, the reliability of the standards and their credibility as perceived by the stakeholders, namely the GPs. RESULTS Both methods are applicable and reliable, and the resulting standards are credible according to the GPs. CONCLUSIONS Both modified methods can be used to set a standard for assessment in daily practice. The context in which the standard will be used (i.e. the specific purpose of the standard, when the standard must be available and whether specific feedback must be given) is important because the methods differ in practical aspects.
37. van der Vleuten CPM, Schuwirth LWT, Muijtjens AMM, Thoben AJNM, Cohen-Schotanus J, van Boven CPA. Cross institutional collaboration in assessment: a case on progress testing. Medical Teacher 2004;26:719-25. PMID: 15763876. DOI: 10.1080/01421590400016464.
Abstract
The practice of assessment is governed by an interesting paradox. On the one hand, good assessment requires substantial resources, which may exceed the capacity of a single institution, and we have reason to doubt the quality of our in-house examinations. On the other hand, our parsimony with regard to resources makes us reluctant to pool efforts and share test material. This paper reports on an initiative to share test material across different medical schools. Three medical schools in the Netherlands have successfully set up a partnership for a specific testing method: progress testing. At present, these three schools collaboratively produce high-quality test items. The jointly produced progress tests are administered concurrently by these three schools and one other school, which buys the test. The steps taken in establishing this partnership are described, and results are presented to illustrate the unique sort of information obtained by cross-institutional assessment. In addition, plans to improve test content and procedure and to expand the partnership are outlined. Eventually, the collaboration may even extend to other test formats. This article is intended to demonstrate the feasibility and exciting potential of between-school collaboration in test development and test administration. Our experiences have shown that such collaboration has excellent potential to combine economic benefit with educational advantages that exceed what is achievable by individual schools.
38. Daelmans HEM, van der Hem-Stokroos HH, Hoogenboom RJI, Scherpbier AJJA, Stehouwer CDA, van der Vleuten CPM. Feasibility and reliability of an in-training assessment programme in an undergraduate clerkship. Medical Education 2004;38:1270-7. PMID: 15566538. DOI: 10.1111/j.1365-2929.2004.02019.x.
Abstract
INTRODUCTION Structured assessment embedded in a training programme, with systematic observation, feedback and appropriate documentation, may improve the reliability of clinical assessment. This type of assessment format is referred to as in-training assessment (ITA). The feasibility and reliability of an ITA programme in an internal medicine clerkship were evaluated. The programme comprised 4 ward-based test formats and 1 outpatient clinic-based test format. Of the 4 ward-based test formats, 3 were single-sample tests: 1 student-patient encounter, 1 critical appraisal session and 1 case presentation. The remaining ward-based test and the outpatient-based test were multiple-sample tests, consisting of 12 ward-based case write-ups and 4 long cases in the outpatient clinic. In all, the ITA programme comprised 19 assessments. METHODS Over 41 months, data were collected from 119 clerks. Feasibility was defined as over two thirds of the students obtaining all 19 assessments. Reliability was estimated by performing generalisability analyses, once with the 19 assessments as items and once with the 5 test formats as items. RESULTS A total of 73 students (69%) completed all 19 assessments. Reliability, expressed by the generalisability coefficient, was 0.81 for the 19 assessments and 0.55 for the 5 test formats. CONCLUSIONS The ITA programme proved feasible. Feasibility may be improved by scheduling protected time for assessment for both students and staff. Reliability may be improved by more frequent use of some of the test formats.
39. van der Hem-Stokroos HH, Daelmans HEM, van der Vleuten CPM, Haarman HJTM, Scherpbier AJJA. The impact of multifaceted educational structuring on learning effectiveness in a surgical clerkship. Medical Education 2004;38:879-86. PMID: 15271049. DOI: 10.1111/j.1365-2929.2004.01899.x.
Abstract
INTRODUCTION Various measures have been introduced to enhance learning experiences in clerkships, generally with limited success. This study evaluated the impact of a multifaceted approach on the effectiveness of learning in a surgical clerkship. In accordance with results obtained in continuing medical education, several interventions were introduced simultaneously. We compared students' evaluations of the traditional surgical clerkship with those of the restructured clerkship. METHODS Two consecutive cohorts of students were asked to complete a questionnaire about the quality and quantity of their learning experiences. Cohort 1 (n = 28) undertook the traditional clerkship and cohort 2 (n = 72) the restructured clerkship. A Mann-Whitney test was used to compare outcomes between the 2 cohorts. RESULTS There were few statistically significant differences between cohorts 1 and 2. Overall, quality indicators did not differ between the 2 cohorts. DISCUSSION A short-term multifaceted intervention led to a slight increase in the performance of clinical skills and a slight decrease in time spent on activities of limited educational value. The intervention may have been too brief to produce substantial effects. Future interventions should also target teachers, including trainees, in order to assess their opinions and address their educational needs.
40. Daelmans HEM, Hoogenboom RJI, Donker AJM, Scherpbier AJJA, Stehouwer CDA, van der Vleuten CPM. Effectiveness of clinical rotations as a learning environment for achieving competences. Medical Teacher 2004;26:305-12. PMID: 15203842. DOI: 10.1080/01421590410001683195.
Abstract
Competences are becoming increasingly prominent in undergraduate medical education, and workplace learning is regarded as crucial to competence learning. Assuming that effective learning depends on adequate supervision, feedback and assessment, we studied the occurrence of these three variables in relation to a set of clinical competences. We surveyed students at the end of their rotation in surgery, internal medicine or paediatrics, asking them to indicate for each competence how often they had received observed and unobserved supervision, the seniority of the person who provided most of their feedback, and whether the competence was addressed in formal assessments. Supervision was found to be scarce and mostly unobserved. Senior staff did not provide much feedback, and assessment mostly targeted patient-related competences. For all variables, the variation between students exceeded that between disciplines. We conclude that the conditions for adequate workplace learning are poorly met and that clerkship experiences show huge inter-student variation.
41. Kristina TN, Majoor GD, van der Vleuten CPM. Defining generic objectives for community-based education in undergraduate medical programmes. Medical Education 2004;38:510-21. PMID: 15107085. DOI: 10.1046/j.1365-2929.2004.01819.x.
Abstract
RATIONALE The availability of a framework for the definition of generic objectives for community-based education (CBE) programmes may assist in the rational design of objectives for specific CBE programmes. STRATEGY Factors impacting on community health from the perspective of a developing country were collected. Potential assistance from medical students to communities to improve their health status was determined. Competencies required in students to execute tasks in the community were defined and eventually educational objectives to develop these competencies in the students were established. METHODS Factors impacting on community health and activities of medical students in CBE programmes were identified by review of literature and Internet resources. Competencies desired for execution of tasks by students and educational objectives to develop these competencies were defined by us and checked against pertinent literature. A draft table representing the 4 elements of the framework was discussed by an international group of experts for external validation. MAIN OUTCOMES A total of 26 factors impacting on community health were identified and clustered in 5 domains. Twenty-one generic objectives for CBE programmes were defined to develop the required competencies in students. Analogues of each of these 21 objectives were found in at least 1 publication specifying objectives for specific CBE programmes but none of these publications stated any objective not covered by our list of generic objectives. CONCLUSION It proved possible to develop a framework to define generic objectives for CBE programmes. An example was elaborated from the perspective of a medical school in a developing country.
42. Kramer AWM, Düsman H, Tan LHC, Jansen JJM, Grol RPTM, van der Vleuten CPM. Acquisition of communication skills in postgraduate training for general practice. Medical Education 2004;38:158-67. PMID: 14871386. DOI: 10.1111/j.1365-2923.2004.01747.x.
Abstract
PURPOSE The evidence suggests that longitudinal training in communication skills embedded in a rich clinical context is most effective. In this study we evaluated the acquisition of communication skills under such conditions. METHODS In a longitudinal design, the communication skills of a randomly selected sample of 25 trainees in a three-year postgraduate training programme for general practice were assessed at the start and at the end of training. Eight videotaped real-life consultations were rated per measurement and per trainee using the MAAS-Global scoring list. The results were compared with each other and with those of a reference group of 94 experienced GPs. RESULTS The mean MAAS-Global score was slightly higher at the end of training (2.4) than at the start (2.2). No significant difference was found between the trainees' final results and those of the reference group. According to the criteria of the rating scale, the performance of both trainees and GPs was unsatisfactory. CONCLUSION The results indicate that communication skills do not improve during a three-year postgraduate training programme combining a rich clinical context with longitudinal communication skills training, and that performance is still unsatisfactory at the end of training. Moreover, GPs do not appear to acquire communication skills during independent practice, as they performed comparably to the trainees. Further research is required into the measurement of communication skills, the teaching procedures, the role of the GP-trainer as a model, and the influence of hospital rotations and the like.
43. Blok GA, Morton J, Morley M, Kerckhoffs CCJM, Kootstra G, van der Vleuten CPM. Requesting organ donation: the case of self-efficacy. Effects of the European Donor Hospital Education Programme (EDHEP). Advances in Health Sciences Education: Theory and Practice 2004;9:261-82. PMID: 15583482. DOI: 10.1007/s10459-004-9404-6.
Abstract
One of the major reasons for the shortage of donor organs is the high number of refusals by relatives. Studies have shown that the quality of communication with bereaved relatives influences whether they object or agree to organ and/or tissue donation. Breaking the news of brain stem death and approaching relatives for permission to donate organs, while also appropriately managing relatives' emotional reactions, are complex tasks that require domain knowledge as well as adequate skills to communicate information and understanding. In this study the effect of the European Donor Hospital Education Programme (EDHEP) on the self-efficacy of intensive care staff was evaluated. Self-efficacy scores improved significantly after attending EDHEP, an effect that was maintained at six-month follow-up. Participants with high baseline self-efficacy scores maintained the increase at follow-up, whereas participants with low baseline scores showed the greatest increase at the post-test. Increases in self-efficacy were significantly related to decreases in the perceived difficulty of requesting donation. Experience had a significant effect on both self-efficacy beliefs and the perceived difficulty of requesting donation. As self-efficacy beliefs are considered better predictors of future behaviour than prior attainments, the results call for further research in this domain. The data indicate that training programmes should be tailored not only to participants' working circumstances but should also take levels of experience and self-efficacy into account. Further study is necessary, and the best way to proceed is to relate the outcomes of this study to behavioural outcomes.
Collapse
|
44
|
Ringsted C, Østergaard D, van der Vleuten CPM. Implementation of a formal in-training assessment programme in anaesthesiology and preliminary results of acceptability. Acta Anaesthesiol Scand 2003; 47:1196-203. [PMID: 14616315 DOI: 10.1046/j.1399-6576.2003.00255.x] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
BACKGROUND A new reform of postgraduate education in Denmark requires formal in-training assessment in all specialties. The aim of this study was to survey the implementation and acceptability of the first nation-wide in-training assessment programme for first-year trainees in anaesthesiology, developed by a working group under the Danish Society of Anaesthesiology and Intensive Care Medicine. METHODS A questionnaire about the implementation of the programme in practice and the characteristics of the trainees was sent to the educationally responsible consultant (ERC) in each of the 26 anaesthetic departments in the country with first-year trainees in anaesthesiology. Standard evaluations of the assessment programme were collected regularly from trainees. RESULTS Twenty-five (96%) departments returned the questionnaire. In total the departments reported on 100 trainees, 83 of whom had been enrolled in the programme. Thirteen departments reported on a total of 27 trainees who had completed their first year of training; these departments had applied a median of 21 (range 17-21) of the 21 tests included in the entire programme. Time constraints and resistance among senior clinicians were the most frequently cited barriers to implementation. Evaluations from trainees showed a generally positive attitude towards most of the programme; they especially praised its effect in structuring training and fostering learning. CONCLUSION The in-training assessment programme has been widely implemented across the country. The majority of the programme was acceptable to trainees and had a positive effect on structuring training and on fostering learning.
Collapse
|
45
|
Schuwirth LWT, van der Vleuten CPM. The use of clinical simulations in assessment. MEDICAL EDUCATION 2003; 37 Suppl 1:65-71. [PMID: 14641641 DOI: 10.1046/j.1365-2923.37.s1.8.x] [Citation(s) in RCA: 116] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Abstract
CONTEXT Simulation-based testing methods have been developed to meet the need for assessment procedures that are both authentic and well structured. It is widely acknowledged that, although the authenticity of a procedure may contribute to its validity, authenticity alone is never sufficient. AIM In this paper we describe the mainstream development of various simulation-based approaches, with their strengths and weaknesses. The purpose is not to provide a review based on an extensive meta-analysis, but to present crucial factors in the development of these methods and their implications for current and future developments. METHOD The description of these simulation-based instruments follows the layers of Miller's pyramid: written and computer-based simulations aim to measure the 'knows how' layer; observation-based techniques, such as standardised-patient-based examinations and objective structured clinical examinations, target the 'shows how' layer; and performance practice measures assess the 'does' layer. CONCLUSION In all simulations, case specificity was found to pose the most prominent threat to reliability, while too much structure threatened to trivialise the assessment. The conclusion is that authentic and reliable assessment depends on a wise balance between efficiency and adequate content sampling.
Collapse
|
46
|
Ringsted C, Østergaard D, Ravn L, Pedersen JA, Berlac PA, van der Vleuten CPM. A feasibility study comparing checklists and global rating forms to assess resident performance in clinical skills. MEDICAL TEACHER 2003; 25:654-658. [PMID: 15369915 DOI: 10.1080/01421590310001605642] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
This study evaluated the feasibility of two different scoring forms for assessing the clinical performance of residents in anaesthesiology. One form had a checklist format with task-specific items; the other was a global rating form with general dimensions of competence, including 'clinical skills', 'communication skills' and 'knowledge'. Thirty-two clinicians, representing 25 (83%) of the 30 training hospitals in the country, participated in the study. The clinicians were randomized into two groups, each of which used one of the scoring formats to assess a resident's performance in four simulated clinical scenarios on videotape. Clinicians' opinions about the appropriateness of the scoring forms were rated on a scale of 1-5. The checklist format was rated significantly higher than the global rating form (mean 4.6, SD 0.5 vs. mean 3.5, SD 1.4; p < 0.001). The inter-rater agreement on pass/fail decisions was poor irrespective of the scoring form used. This was explained by clinicians' leniency as assessors rather than by lack of vigilance in their observations or by disagreement on standards for good performance.
Collapse
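The poor inter-rater agreement on pass/fail decisions reported above is commonly quantified with a chance-corrected index such as Cohen's kappa. The abstract does not name the statistic used, so the following is only an illustrative sketch with invented pass/fail ratings, not the study's data or method:

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on binary (pass=1/fail=0) decisions."""
    n = len(rater_a)
    # Proportion of cases where the two raters gave the same decision.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal pass rate.
    p_a = sum(rater_a) / n
    p_b = sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Hypothetical pass(1)/fail(0) decisions by two clinicians for ten residents.
a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
b = [1, 0, 1, 1, 1, 1, 0, 1, 0, 1]
print(round(cohens_kappa(a, b), 2))  # well below the ~0.6 often taken as acceptable
```

A kappa near zero despite high raw agreement is exactly what leniency produces: when most assessors pass most candidates, chance agreement is high and the corrected index stays low.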
|
47
|
van der Hem-Stokroos HH, Daelmans HEM, van der Vleuten CPM, Haarman HJTM, Scherpbier AJJA. A qualitative study of constructive clinical learning experiences. MEDICAL TEACHER 2003; 25:120-126. [PMID: 12745517 DOI: 10.1080/0142159031000092481] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
Little is known about the effectiveness of clinical education, and a more structured educational approach is considered potentially beneficial. The following structured components were added to a surgical clerkship: logbooks, an observed student-patient encounter, individual appraisals, feedback on patient notes, and (case) presentations by students. The authors organized two focus-group sessions, in which 19 students participated, to explore students' perceptions of effective clinical learning experiences and of the newly introduced structured components. Analysis of the transcripts showed that observation and constructive feedback are key features of clinical training. The structured activities were appreciated, and the results indicate the direction to be taken for further improvement. Learning experiences depended largely on individual clinicians' educational qualities. Students experienced being on call, assisting in theatre and time for self-study as instructive elements. Recommended clerkship components are: active involvement of students, direct observation, selection of teachers, a positive learning environment and time for self-study.
Collapse
|
48
|
Verhoeven BH, Verwijnen GM, Muijtjens AMM, Scherpbier AJJA, van der Vleuten CPM. Panel expertise for an Angoff standard setting procedure in progress testing: item writers compared to recently graduated students. MEDICAL EDUCATION 2002; 36:860-867. [PMID: 12354249 DOI: 10.1046/j.1365-2923.2002.01301.x] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
INTRODUCTION An earlier study showed that an Angoff procedure with ≥ 10 recently graduated students as judges can be used to estimate the passing score of a progress test. As the acceptability and feasibility of this approach are questionable, we conducted an Angoff procedure with test item writers as judges. This paper reports on the reliability and credibility of this procedure and compares the standards set by the two panels. METHODS Fourteen item writers judged 146 test items that recently graduated students had assessed in a previous study. Generalizability was investigated as a function of the number of items and judges. Credibility was judged by comparing the pass/fail rates associated with the Angoff standard, a relative standard and a fixed standard. The Angoff standards obtained by item writers and graduates were compared. RESULTS The variance associated with consistent variability of judges across items was 1.5% for item writers and 0.4% for graduate students. An acceptable error of the cut score would require 39 judges. Item-level Angoff estimates of the two panels correlated highly with each other and with item P-values. Failure rates of 57%, 55% and 7% were associated with the item writers' standard, the fixed standard and the graduates' standard, respectively. CONCLUSION The graduates' and the item writers' standards differed substantially, as did the associated failure rates. A panel of 39 item writers is not feasible, and the item writers' passing score appears to be less credible. The credibility of the graduates' standard needs further evaluation. The acceptability and feasibility of a panel consisting of both students and item writers may be worth investigating.
Collapse
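The arithmetic behind an Angoff standard, as studied above, is simple: each judge estimates, per item, the probability that a borderline candidate would answer correctly; summing a judge's estimates gives that judge's cut score, and averaging across the panel gives the standard. A minimal sketch with hypothetical judges and items (not the study's 14 judges and 146 items):

```python
def angoff_cut_score(estimates):
    """Panel-level Angoff passing score.

    estimates: one row per judge; each row holds, per item, the judged
    probability that a borderline examinee answers that item correctly.
    Returns the mean over judges of each judge's summed item estimates.
    """
    judge_cuts = [sum(row) for row in estimates]
    return sum(judge_cuts) / len(judge_cuts)

# Three hypothetical judges rating four items.
panel = [
    [0.6, 0.8, 0.5, 0.7],  # judge 1's borderline-probability estimates
    [0.5, 0.9, 0.4, 0.6],  # judge 2
    [0.7, 0.7, 0.6, 0.8],  # judge 3
]
print(angoff_cut_score(panel))  # cut score out of 4 items
```

The generalizability question in the abstract is about how much `judge_cuts` varies across the panel: the more consistently judges disagree, the more judges are needed before the averaged standard stabilises.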
|
49
|
Kramer AWM, Jansen JJM, Zuithoff P, Düsman H, Tan LHC, Grol RPTM, van der Vleuten CPM. Predictive validity of a written knowledge test of skills for an OSCE in postgraduate training for general practice. MEDICAL EDUCATION 2002; 36:812-819. [PMID: 12354243 DOI: 10.1046/j.1365-2923.2002.01297.x] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
PURPOSE To examine the predictive validity of a written knowledge test of skills for performance on an OSCE in postgraduate training for general practice. METHODS A randomly selected sample of 47 trainees in general practice took a knowledge test of skills, a general knowledge test and an OSCE. The OSCE comprised technical stations and stations involving complete patient encounters; each station was scored with both a checklist and a global rating. RESULTS The knowledge test of skills correlated more strongly with the OSCE than did the general knowledge test. Technical stations correlated more strongly with the knowledge test of skills than did stations involving complete patient encounters. For the technical stations the rating system had no influence on the correlation; for the stations involving complete patient encounters, the checklist ratings correlated more strongly with the knowledge test of skills than the global ratings did. CONCLUSION The results of this study support the predictive validity of the knowledge test of skills. In postgraduate training for general practice, a written knowledge test of skills can be used to estimate the level of clinical skills, especially for group evaluation, such as in studies examining the efficacy of a training programme, or as a screening instrument for deciding which courses to offer. This estimation is more accurate when the content of the test matches the skills under study. However, written testing of skills cannot replace direct observation of the performance of skills.
Collapse
|
50
|
Verhoeven BH, Verwijnen GM, Scherpbier AJJA, van der Vleuten CPM. Growth of medical knowledge. MEDICAL EDUCATION 2002; 36:711-717. [PMID: 12191053 DOI: 10.1046/j.1365-2923.2002.01268.x] [Citation(s) in RCA: 36] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
BACKGROUND Knowledge is an essential component of medical competence and a major objective of medical education. The degree of knowledge acquisition by students is therefore one measure of the effectiveness of a medical curriculum. We studied the growth in student knowledge over the course of Maastricht Medical School's 6-year problem-based curriculum. METHODS We analysed 60 491 progress test (PT) scores of 3226 undergraduate students at Maastricht Medical School. During the 6-year curriculum a student sits 24 PTs (four in each year), each intended to assess knowledge at graduation level. On each test occasion all students take the same PT, which means that a student in year 1 is expected to score considerably lower than a student in year 6. The PT is therefore a longitudinal, objective assessment instrument. Mean scores for overall knowledge and for clinical, basic and behavioural/social sciences knowledge were calculated and used to estimate growth curves. FINDINGS Overall medical knowledge and clinical sciences knowledge showed a steady upward growth curve, whereas the curves for behavioural/social sciences and basic sciences started to level off in years 4 and 5, respectively. The increase in knowledge was greatest for clinical sciences (43%), compared with 32% for basic sciences and 25% for behavioural/social sciences. INTERPRETATION Maastricht Medical School claims to offer a problem-based, student-centred, horizontally and vertically integrated curriculum in the first 4 years, followed by clerkships in years 5 and 6. Students learn by analysing patient problems and exploring pathophysiological explanations. Originally, it was intended that students' knowledge of behavioural/social sciences would continue to increase during their clerkships. However, the results for years 5 and 6 show diminishing growth in basic and behavioural/social sciences knowledge compared with overall and clinical sciences knowledge, suggesting discrepancies between the actual and the planned curriculum. Further research is needed to explain this.
Collapse
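The growth-curve analysis above reduces, at its simplest, to averaging progress-test scores per curriculum year and comparing the gain per knowledge domain. A minimal sketch with invented scores (not the Maastricht data, whose curves were estimated from 60 491 individual test scores):

```python
from statistics import mean

def yearly_growth(scores_by_year):
    """Mean progress-test score per curriculum year, plus the total gain
    (in percentage points) from the first to the last year.

    scores_by_year: dict mapping curriculum year -> list of scores
    (percent of items answered correctly).
    """
    means = {year: mean(vals) for year, vals in scores_by_year.items()}
    years = sorted(means)
    gain = means[years[-1]] - means[years[0]]
    return means, gain

# Invented clinical-sciences scores for three cohort years.
scores = {1: [10, 12, 11], 2: [25, 27, 26], 3: [48, 50, 52]}
means, gain = yearly_growth(scores)
print(means, gain)
```

Levelling-off, as reported for basic and behavioural/social sciences, would show up here as successive yearly means that stop increasing even while the overall gain stays positive.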
|