1. Balslev T, Muijtjens A, de Grave W, Awneh H, van Merriënboer J. How isolation of key information and allowing clarifying questions may improve information quality and diagnostic accuracy at case handover in paediatrics. Adv Health Sci Educ Theory Pract 2021; 26:599-613. PMID: 33150554. DOI: 10.1007/s10459-020-10001-2.
Abstract
Handover between colleagues is a complex task, and handovers are often inadequate because they are not structured according to theoretically grounded guidelines. Based on cognitive load theory, we hypothesized that allowing a clarifying dialogue, and thereby optimizing germane cognitive load, enhances information quality and diagnostic accuracy at handover, but may prolong handover duration. We also expected that mentioning key information first, thus decreasing intrinsic cognitive load, improves information quality and diagnostic accuracy. We developed two representative paediatric cases for presentation in a factorial 2 × 2 design. Sixth-year medical students (N = 80) were randomly assigned to one of four groups that differed in how the case histories were delivered (chronological order versus key information first) and in the direction of information exchange (unidirectional versus a clarifying dialogue). The receivers of the handover were asked to write a report of the cases and suggest the best diagnosis. Dependent variables were the information quality of the written report (Information score), the quality of the diagnosis (Diagnostic accuracy score) and the time needed to deliver the written handover case report (Handover report duration). Seen through the lens of cognitive load theory, allowing a clarifying dialogue at handover, and thus optimizing germane cognitive load, significantly increased the Information score (p < 0.0005), the Diagnostic accuracy score (p < 0.05) and the Handover report duration (p < 0.001).
Affiliation(s)
- T Balslev
- Department of Paediatrics, Viborg Regional Hospital, Viborg, Denmark
- Centre for Health Sciences Education (CESU), Aarhus University, Aarhus, Denmark
- A Muijtjens
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- W de Grave
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- H Awneh
- Department of Paediatrics, Viborg Regional Hospital, Viborg, Denmark
- J van Merriënboer
- Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
2. Waterval DGJ, Frambach JM, Driessen EW, Muijtjens A, Scherpbier AJJA. Connected, attracted, and concerned: A Q study on medical crossborder curriculum partnerships. Med Teach 2018; 40:1293-1299. PMID: 29415599. DOI: 10.1080/0142159x.2018.1431618.
Abstract
INTRODUCTION A new form of internationalization has been trending upward in medical education: crossborder medical curriculum partnerships established to deliver the same, or an adapted, curriculum to groups of geographically separated students. This study investigates crossborder medical curriculum partnerships by exploring the experiences of teachers at the recipient institution, who have a key role in delivering the program. METHODS From four pioneering recipient medical schools, 24 teachers participated in a Q-sort study. Each participant rank-ordered 42 statements about teaching in a crossborder medical curriculum on a scale from -5 (strong disagreement) to +5 (strong agreement). The authors conducted a "by-person" factor analysis to uncover distinct patterns in the ranking of statements, using the statistical results and participants' comments about their Q sorts to interpret these patterns and translate them into distinct viewpoints. RESULTS Three viewpoints emerged, reflecting: (1) a feeling of connectedness with the partner institution, trust in the quality of the curriculum, and appreciation of interinstitutional relationships; (2) the partnership's attractiveness because of the career opportunities it offers; and (3) concerns over the quality of graduates, stemming from doubts about the appropriateness of the didactic model and insufficient attention to local healthcare needs, and over the practical feasibility of such partnerships. CONCLUSIONS The three viewpoints revealed a palette of views on how host teachers may experience their work. They show the heterogeneity of this group and seem to counterbalance reports that such teachers feel "deprived" of their role as teacher. Two viewpoints featured an appreciation of interinstitutional relationships and of the partnership, especially when teachers perceived a degree of autonomy. Partners can capitalize on all these viewpoints by deploying procedures and policies to raise the quality of education delivery.
Affiliation(s)
- Dominique G J Waterval
- School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Janneke M Frambach
- School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Erik W Driessen
- School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Arno Muijtjens
- School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
- Albert J J A Scherpbier
- School of Health Professions Education, Maastricht University, Maastricht, The Netherlands
3. Heeneman S, Schut S, Donkers J, van der Vleuten C, Muijtjens A. Embedding of the progress test in an assessment program designed according to the principles of programmatic assessment. Med Teach 2017; 39:44-52. PMID: 27646870. DOI: 10.1080/0142159x.2016.1230183.
Abstract
BACKGROUND Progress tests (PT) are used to assess students on topics from all medical disciplines, usually as one of the assessment methods for the cognitive domain. There is limited knowledge of how the positioning of the PT in a program of assessment (PoA) influences students' PT scores, their use of PT feedback and the perceived learning value. METHODS We compared PT total scores and use of a PT feedback (ProF) system in two medical courses, in which the PT is either used as a summative assessment or embedded in a comprehensive PoA and used formatively. In addition, an interview study explored students' perceptions of the use of PT feedback and its learning value. RESULTS PT total scores were higher, with considerable effect sizes, and students made more use of ProF when the PT was embedded in a comprehensive PoA. Analysis of feedback in the portfolio stimulated students to look for patterns in PT results, link the PT to other assessment results, follow up on learning objectives, and integrate the PT in their learning for the entire PoA. CONCLUSIONS Embedding the PT in an assessment program designed according to the principles of programmatic assessment positively affects PT total scores, use of PT feedback, and perceived learning value.
Affiliation(s)
- Sylvia Heeneman
- Department of Pathology, Maastricht University, Maastricht, The Netherlands
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Suzanne Schut
- Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Jeroen Donkers
- Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Cees van der Vleuten
- Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Arno Muijtjens
- Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
4. Kelly M, Bennett D, Muijtjens A, O'Flynn S, Dornan T. Can less be more? Comparison of an 8-item placement quality measure with the 50-item Dundee Ready Educational Environment Measure (DREEM). Adv Health Sci Educ Theory Pract 2015; 20:1027-32. PMID: 25575870. DOI: 10.1007/s10459-015-9582-4.
Abstract
Clinical clerks learn more than they are taught, and not all they learn can be measured. Curriculum leaders therefore evaluate clinical educational environments. The quantitative Dundee Ready Educational Environment Measure (DREEM) is a de facto standard for that purpose; its 50 items and 5 subscales were developed by consensus. Reasoning that an instrument would perform best if it were underpinned by a clearly conceptualized link between environment and learning as well as psychometric evidence, we developed the mixed-methods Manchester Clinical Placement Index (MCPI), eliminated redundant items, and published validity evidence for its 8-item, 2-subscale structure. Here, we set out to compare the MCPI with DREEM. 104 students on full-time clinical placements completed both measures three times during a single academic year. The smaller MCPI showed good agreement with DREEM and at least as good discrimination between placements. Total MCPI scores and the mean score of its 5-item learning environment subscale allowed ten raters to distinguish between the quality of educational environments; twenty raters were needed for the 3-item MCPI training subscale and for the DREEM scale and its subscales. The MCPI compares favourably with DREEM: one-sixth the number of items performs at least as well psychometrically, it provides formative free-text data, and it is founded on the widely shared assumption that communities of practice make good learning environments.
Affiliation(s)
- Martina Kelly
- Department of Family Medicine, University of Calgary, HSC G324B, 3330 Hospital Drive N.W., Calgary, AB, T2N 2N1, Canada
- Deirdre Bennett
- Medical Education Unit, University College Cork, Cork, Ireland
- Arno Muijtjens
- Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- Siun O'Flynn
- Medical Education Unit, University College Cork, Cork, Ireland
- Tim Dornan
- Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
- Queen's University Belfast, Belfast, Northern Ireland, UK
5. Balslev T, Rasmussen AB, Skajaa T, Nielsen JP, Muijtjens A, De Grave W, Van Merriënboer J. Combining bimodal presentation schemes and buzz groups improves clinical reasoning and learning at morning report. Med Teach 2015; 37:759-766. PMID: 25496711. DOI: 10.3109/0142159x.2014.986445.
Abstract
Morning reports offer opportunities for intensive work-based learning. In this controlled study, we measured learning processes and outcomes at the morning report of paediatric emergency room patients. Twelve specialists and 12 residents were randomised into four groups and discussed the same two paediatric cases. The groups differed in presentation modality (verbal only vs. verbal + text) and in the use of buzz groups (with vs. without). The verbal interactions were analysed for clinical reasoning processes. Perceptions of learning and judgment of learning were reported in a questionnaire, and diagnostic accuracy was assessed by a 20-item multiple-choice test. Combining bimodal presentation and buzz groups increased the odds of clinical reasoning occurring in the discussion of cases by a factor of 1.90 (p = 0.013), indicating superior reasoning for buzz groups working with bimodal materials. For specialists, a positive effect of bimodal presentation was found on perceptions of learning (p < 0.05); for residents, a positive effect of buzz groups was found on judgment of learning (p < 0.005). A positive effect of bimodal presentation on diagnostic accuracy was noted in the specialists (p < 0.05). Combined bimodal presentation and buzz group discussion of emergency cases improves clinicians' clinical reasoning and learning.
Affiliation(s)
- Thomas Balslev
- Viborg Regional Hospital, Denmark
- Aarhus University, Denmark
6. Naeem N, Muijtjens A. Validity and reliability of a bilingual English-Arabic version of the Schutte Self Report Emotional Intelligence Scale in an undergraduate Arab medical student sample. Med Teach 2015; 37 Suppl 1:S20-S26. PMID: 25803589. DOI: 10.3109/0142159x.2015.1006605.
Abstract
BACKGROUND The psychological construct of emotional intelligence (EI), its theoretical models, measurement instruments and applications have been the subject of several research studies in health professions education. AIM To investigate the factorial validity and reliability of a bilingual version of the Schutte Self Report Emotional Intelligence Scale (SSREIS) in an undergraduate Arab medical student population. METHODS The study was conducted during April-May 2012 using a cross-sectional survey design. A sample (n = 467) was obtained from undergraduate medical students of the male and female medical colleges of King Saud University, Riyadh, Saudi Arabia. Exploratory and confirmatory factor analyses were performed using SPSS 16.0 and AMOS 4.0 to determine the factor structure; reliability was determined using Cronbach's alpha. RESULTS The results supported a multidimensional, three-factor structure of the SSREIS, with factors Optimism, Awareness-of-Emotions and Use-of-Emotions. The reliability (Cronbach's alpha) for the three subscales was 0.76, 0.72 and 0.55, respectively. CONCLUSION Emotional intelligence is a multifactorial (three-factor) construct. The bilingual version of the SSREIS is a valid and reliable measure of trait emotional intelligence in an undergraduate Arab medical student population.
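For readers unfamiliar with the reliability statistic reported above, Cronbach's alpha can be computed directly from an item-score matrix. The sketch below uses the standard formula with invented Likert responses, not the study's data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondents' item scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores),
    where k is the number of items."""
    k = len(items[0])
    item_vars = [variance([row[j] for row in items]) for j in range(k)]
    total_var = variance([sum(row) for row in items])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point Likert responses for a 4-item subscale
# (invented data): strongly correlated items give a high alpha.
responses = [
    [4, 5, 4, 4], [2, 2, 3, 2], [5, 5, 4, 5], [3, 3, 3, 4],
    [1, 2, 1, 2], [4, 4, 5, 4], [2, 3, 2, 2], [5, 4, 5, 5],
]
alpha = cronbach_alpha(responses)
```

With weakly correlated items the same formula yields values near zero, which is why the 0.55 reported for the Use-of-Emotions subscale reads as marginal.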
7. Dornan T, Muijtjens A, Graham J, Scherpbier A, Boshuizen H. Manchester Clinical Placement Index (MCPI). Conditions for medical students' learning in hospital and community placements. Adv Health Sci Educ Theory Pract 2012; 17:703-16. PMID: 22234383. PMCID: PMC3490061. DOI: 10.1007/s10459-011-9344-x.
Abstract
The drive to quality-manage medical education has created a need for valid measurement instruments. Validity evidence includes the theoretical and contextual origin of items, choice of response processes, internal structure, and interrelationship of a measure's variables. This research set out to explore the validity and potential utility of an 11-item measurement instrument whose theoretical and empirical origins were in an Experience Based Learning model of how medical students learn in communities of practice (COPs), and whose contextual origins were in a community-oriented, horizontally integrated, undergraduate medical programme. The objectives were to examine the psychometric properties of the scale in both hospital and community COPs and to provide validity evidence to support using it to measure the quality of placements. The instrument was administered twice to students learning in both hospital and community placements and analysed using exploratory factor analysis and a generalizability analysis. 754 of a possible 902 questionnaires were returned (84% response rate), representing 168 placements. Eight items loaded onto two factors, which accounted for 78% of variance in the hospital data and 82% of variance in the community data. One factor was the placement learning environment, whose five constituent items were how learners were received at the start of the placement, people's supportiveness, and the quality of organisation, leadership, and facilities. The other factor represented the quality of training: instruction in skills, observing students performing skills, and providing students with feedback. Alpha coefficients ranged between 0.89 and 0.93, and there were no redundant or ambiguous items. Generalizability analysis showed that between 7 and 11 raters would be needed to achieve acceptable reliability. There is validity evidence to support using the simple 8-item, mixed-methods Manchester Clinical Placement Index to measure key conditions for undergraduate medical students' experience based learning: the quality of the learning environment and the training provided within it. Its conceptual orientation is towards communities of practice, a dominant contemporary theory in undergraduate medical education.
Affiliation(s)
- Tim Dornan
- Department of Educational Development and Research, Maastricht University, The Netherlands
8. Wrigley W, van der Vleuten CPM, Freeman A, Muijtjens A. A systemic framework for the progress test: strengths, constraints and issues: AMEE Guide No. 71. Med Teach 2012; 34:683-97. PMID: 22905655. DOI: 10.3109/0142159x.2012.704437.
Abstract
Progress testing has seen increasing use and significance in medical education, with many uses and several formats reflecting the variety of curricula and assessment purposes. These developments have occurred alongside a recognised sensitivity to the error variance inherent in multiple-choice tests, from which challenges to validity and reliability have arisen. This Guide presents a generic, systemic framework to help identify and explore improvements in the quality and defensibility of progress test data. The framework draws on the combined experience of the Dutch consortium, an individual medical school in the United Kingdom, and the bulk of the progress test literature to date. It embeds progress testing as a quality-controlled assessment tool for improving learning, teaching and the demonstration of educational standards. The paper describes strengths, highlights constraints and explores issues for improvement. These may assist in establishing new progress testing in medical education programmes, and can also guide the evaluation and improvement of existing programmes.
Affiliation(s)
- William Wrigley
- Department of Educational Development and Research, Maastricht University, The Netherlands
9. Derkx H, Rethans JJ, Muijtjens A, Maiburg B, Winkens R, van Rooij H, Knottnerus A. 'Quod scripsi, scripsi.' The quality of the report of telephone consultations at Dutch out-of-hours centres. Qual Saf Health Care 2010; 19:e1. PMID: 20584701. DOI: 10.1136/qshc.2008.027920.
Abstract
OBJECTIVE To assess the quality of the content of reports of telephone consultations at out-of-hours centres and to investigate to what extent the reports reflect the actual telephone consultation. DESIGN AND SETTING Cross-sectional qualitative study at 17 out-of-hours centres in The Netherlands. METHOD To assess the quality of report content, a focus group developed the RICE report rating instrument, covering the Reason for calling, Information gathered, Care advice given, and Evaluation of the care advice with the patient. Telephone incognito standardised patients presented seven different clinical problems three times to each of the 17 out-of-hours centres. All calls were recorded and transcribed, and each centre was asked for a copy of its report of the call. The authors assessed the quality of the content of the reports and compared it with the transcripts. RESULTS The out-of-hours centres returned a report for 78% of the 357 calls; for the remaining 22%, no report was written. Reports almost always contained information about the medical reason for calling, but little information about details of the clinical history. Patients' expectations, personal situation or perception of the care advice were seldom documented. In all but one out-of-hours centre, triagists recorded answers to obligatory questions that had not actually been asked, for between 1% and 54% of all questions entered. Triagists entered a subjective evaluation of a patient's condition in 12% of the reports. CONCLUSION Reports of telephone consultations at out-of-hours centres contained little information on patients' clinical and personal condition. This could endanger continuity of care and might have legal consequences for the triagist.
Affiliation(s)
- Hay Derkx
- Maastricht University, Maastricht, The Netherlands
10. Schoonheim-Klein M, Muijtjens A, Habets L, Manogue M, van der Vleuten C, van der Velden U. Who will pass the dental OSCE? Comparison of the Angoff and the borderline regression standard setting methods. Eur J Dent Educ 2009; 13:162-71. PMID: 19630935. DOI: 10.1111/j.1600-0579.2008.00568.x.
Abstract
AIM To elucidate which standard-setting method best prevents incompetent students from passing, and competent students from failing, a dental Objective Structured Clinical Examination (OSCE). MATERIAL AND METHODS An OSCE with 14 test stations was used to assess the performance of 119 third-year dental students in a training group practice. To establish the pass/fail standard per station, three standard-setting methods were applied: the Angoff I method, the modified Angoff II method with reality check, and the borderline regression (BR) method. For the final decision about passing or failing the complete OSCE, three models were compared: total compensatory (TC), partial compensatory (PC) within clusters of competence, and non-compensatory (NC). The reliability of the pass/fail standard of each method was indicated by the root mean square error (RMSE). As a criterion measure, a sample of the students (n = 89) was rated in the clinic by their instructors, and these students were accordingly divided into competent and incompetent groups. The students' clinical rating (considered for this study the 'true qualification') was compared with the pass/fail classification resulting from the OSCE. Undeservedly passing an incompetent student was considered more damaging than failing a competent student. RESULTS The BR method gave more acceptable results than the two Angoff methods. The BR method showed the highest pass rates: for the TC model, the Angoff I, Angoff II and BR methods showed pass rates of 86.6%, 86.6% and 97.5%, respectively; for the PC model, 30.3%, 34.5% and 61.3%; and for the NC model, 0.8%, 1.7% and 7.6%. The BR method also showed lower RMSEs (higher reliability): for the TC model, the RMSEs were 1.3%, 1.0% and 0.3% for the Angoff I, Angoff II and BR methods, respectively, and for the PC model, the RMSEs across the clusters of competence ranged 2.0-3.7% for Angoff I, 1.8-2.2% for Angoff II and 0.6-0.7% for the BR method. In terms of incorrect decisions, the BR method had a higher loss due to incorrect decisions for the TC model than for the PC model, in accordance with the results of other studies in medical education. CONCLUSIONS The BR method in a PC model provides defensible pass/fail standards and seems to be the optimal choice for OSCEs in health education.
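As a concrete illustration of the borderline regression method compared above (a minimal sketch with invented station data, not the authors' implementation): checklist scores are regressed on examiners' global ratings, and the pass mark is the predicted checklist score at the "borderline" rating.

```python
def borderline_regression_passmark(scores, ratings, borderline=2):
    """Pass mark for one OSCE station via borderline regression:
    fit checklist score = a * global rating + b by least squares,
    then read off the predicted score at the borderline rating."""
    n = len(scores)
    mean_x = sum(ratings) / n
    mean_y = sum(scores) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(ratings, scores))
    sxx = sum((x - mean_x) ** 2 for x in ratings)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope * borderline + intercept

# Invented data for one station: checklist scores (0-10) and examiners'
# global ratings (1 = fail, 2 = borderline, 3 = pass, 4 = good, 5 = excellent).
scores = [3.0, 4.5, 5.0, 6.5, 7.0, 8.5, 9.0]
ratings = [1, 2, 2, 3, 4, 5, 5]
passmark = borderline_regression_passmark(scores, ratings)
```

Because the cut score is derived from all examinees' data rather than from panel judgments, it shifts with station difficulty, which is one reason the method compared favourably on reliability here.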
Affiliation(s)
- M Schoonheim-Klein
- Department of Periodontology, Academic Centre for Dentistry Amsterdam, Amsterdam, The Netherlands
11. Derkx H, Maiburg B, Winkens R, Muijtjens A, van Rooij H, Knottnerus A. De kwaliteit van telefonische triage op huisartsenposten [The quality of telephone triage at out-of-hours GP centres]. 2009. DOI: 10.1007/bf03085669.
12. Schoonheim-Klein M, Muijtjens A, Habets L, Manogue M, Van der Vleuten C, Hoogstraten J, Van der Velden U. On the reliability of a dental OSCE, using SEM: effect of different days. Eur J Dent Educ 2008; 12:131-7. PMID: 18666893. DOI: 10.1111/j.1600-0579.2008.00507.x.
Abstract
AIM The first aim was to study the reliability of a dental objective structured clinical examination (OSCE) administered over multiple days; the second was to assess the number of test stations required for a sufficiently reliable decision under three score-interpretation perspectives. MATERIALS AND METHODS In four OSCE administrations, 463 students in 2005 and 2006 took the summative OSCE after a dental course in comprehensive dentistry. Each OSCE had 16-18 five-minute stations (scores 1-10) and was administered on four different days of one week. ANOVA was used to test for variation in examinee performance across days, and generalizability theory was used for the reliability analyses. Reliability was studied from three interpretation perspectives: relative (norm) decisions, absolute (domain) decisions and pass/fail (mastery) decisions. The standard error of measurement (SEM) was used as an indicator of the reproducibility of test scores, with a benchmark of SEM < 0.51, corresponding to a 95% confidence interval (CI) of < 1 on the original scoring scale of 1-10. RESULTS The mean weighted total OSCE score was 7.14 on the 10-point scale. With the pass/fail score set at 6.2 for the four OSCEs, 90% of the 463 students passed, and there was no significant increase in scores over the different days on which the OSCE was administered. Desired variance owing to students was 6.3%. Variance owing to the interaction between student and station plus residual error was 66.3%, more than twice the variance owing to station difficulty (27.4%). The SEM for norm decisions was 0.42 (CI ±0.83) and the SEM for domain decisions was 0.50 (CI ±0.98). To make reliable relative decisions (SEM < 0.51), a minimum of 12 stations is necessary; for reliable absolute and pass/fail decisions, a minimum of 17 stations is necessary in this dental OSCE. CONCLUSIONS When testing large numbers of students, it appeared reliable to administer the OSCE on different days. To make reliable decisions for this dental OSCE, a minimum of 17 stations is needed. Clearly, wide sampling of stations is at the heart of obtaining reliable scores in OSCEs, in dental education as elsewhere.
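The SEM benchmark above rests on simple normal-approximation arithmetic: the 95% CI around an observed score is roughly score ± 1.96 × SEM, so SEM < 0.51 keeps the interval within ±1 point on the 1-10 scale. A quick check (assuming z = 1.96; the authors may have used a slightly different multiplier):

```python
def ci_halfwidth(sem, z=1.96):
    """Half-width of the ~95% confidence interval around an observed
    score under the normal approximation: CI = score +/- z * SEM."""
    return z * sem

# The abstract's domain-decision figure: SEM = 0.50 gives a CI of +/-0.98,
# and the benchmark SEM < 0.51 keeps the CI half-width below 1 point.
print(ci_halfwidth(0.50))      # 0.98
print(ci_halfwidth(0.51) < 1)  # True
```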
13. Jensen ML, Hesselfeldt R, Rasmussen MB, Mogensen SS, Frost T, Jensen MK, Muijtjens A, Lippert F, Ringsted C. Newly graduated doctors' competence in managing cardiopulmonary arrests assessed using a standardized Advanced Life Support (ALS) assessment. Resuscitation 2008; 77:63-8. DOI: 10.1016/j.resuscitation.2007.10.022.
14. Van der Veken J, Valcke M, Muijtjens A, De Maeseneer J, Derese A. The potential of the inventory of learning styles to study students' learning patterns in three types of medical curricula. Med Teach 2008; 30:863-869. PMID: 18821163. DOI: 10.1080/01421590802141167.
Abstract
BACKGROUND The introduction of innovative curricular designs can be evaluated by scrutinizing the learning patterns students use. AIM To study the potential of Vermunt's Inventory of Learning Styles (ILS) for detecting differences in learning patterns between students in different medical curricula. METHODS Cross-sectional between-subjects comparison of ILS scores of third-year medical students in a conventional, an integrated contextual and a problem-based learning (PBL) curriculum, using one-way ANOVA with post hoc tests. RESULTS The response rate was 85%: 197 conventional, 130 integrated contextual and 301 PBL students. The results show a differential impact of the three curricula. Regarding processing strategies, students in the PBL curriculum showed less rote learning and rehearsing, greater variety in the sources of knowledge used, and less ability to express study content in a personal manner than students in the conventional curriculum, while students in the integrated contextual curriculum showed more structuring of subject matter by integrating different aspects into a whole. Regarding regulation strategies, students in the PBL curriculum showed significantly more self-regulation of learning content, and students in the integrated contextual curriculum showed lower levels of regulation. As to learning orientations, students in the PBL curriculum showed less ambivalence, and students in the conventional curriculum were less vocationally oriented. CONCLUSION The study provides empirical support for expected effects of traditional and innovative curricula which thus far were not well supported by empirical studies.
Affiliation(s)
- J Van der Veken
- Centre for Educational Development, Faculty of Medicine and Health Sciences, Ghent University, Belgium.
15
Hobma S, Ram P, Muijtjens A, van der Vleuten C, Grol R. Effective improvement of doctor-patient communication: a randomised controlled trial. Br J Gen Pract 2006; 56:580-6. [PMID: 16882375 PMCID: PMC1874521]
Abstract
BACKGROUND Doctor-patient communication is an essential component of general practice. Improvement of GPs' communication patterns is an important target of training programmes. Available studies have so far failed to provide conclusive evidence of the effectiveness of educational interventions to improve doctor-patient communication. AIM To examine the effectiveness of a learner-centred approach, focused on actual needs, in improving GPs' communication with patients. DESIGN OF STUDY Randomised controlled trial. SETTING One hundred volunteer GPs in the Netherlands. METHOD The intervention identified individual GPs' deficiencies in communication skills by observing authentic consultations in their own surgeries. This performance assessment was followed by structured activities in small group meetings aimed at remedying the identified shortcomings. Outcomes were measured using videotaped consultations in the GPs' own surgeries before and after the intervention. Communication skills were rated using the MAAS-Global, a validated checklist. RESULTS The scores in the intervention group improved significantly compared with those of the control group (95% confidence interval = 0.04 to 0.75). The effect size was moderate to large (d = 0.66). The level of participation contributed significantly to the effectiveness. The largest improvement was found in patient-centred communication skills. CONCLUSION The approach of structured individual improvement activities based on performance assessment is more effective in improving communication skills than current educational activities.
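The reported effect size (d = 0.66) is Cohen's d, the standardized mean difference. A minimal sketch of the computation with a pooled standard deviation; the score arrays are invented for illustration:

```python
import numpy as np

def cohens_d(treatment, control):
    """Cohen's d: mean difference divided by the pooled sample SD."""
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    n1, n2 = len(t), len(c)
    pooled_var = ((n1 - 1) * t.var(ddof=1)
                  + (n2 - 1) * c.var(ddof=1)) / (n1 + n2 - 2)
    return (t.mean() - c.mean()) / np.sqrt(pooled_var)

# Hypothetical pre-to-post MAAS-Global score changes per group
print(round(cohens_d([0.9, 1.1, 0.7, 1.3], [0.4, 0.6, 0.2, 0.8]), 2))
```

By Cohen's conventional benchmarks, d around 0.5 is a moderate effect and 0.8 a large one, which is why the paper describes 0.66 as moderate to large.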
Affiliation(s)
- Sjoerd Hobma
- Department of General practice, Centre for Quality of Care Research, University of Maastricht, Maastricht, The Netherlands.
16
van Diest R, van Dalen J, Bak M, Schruers K, van der Vleuten C, Muijtjens A, Scherpbier A. Growth of knowledge in psychiatry and behavioural sciences in a problem-based learning curriculum. Med Educ 2004; 38:1295-1301. [PMID: 15566541 DOI: 10.1111/j.1365-2929.2004.02022.x]
Abstract
PURPOSE To evaluate the effectiveness of undergraduate medical education in the domains of psychiatry and behavioural sciences, we examined the growth of knowledge in those disciplines in a 6-year, problem-based learning (PBL) curriculum. Psychiatry and behavioural sciences are taught in the 4 preclinical years and in the psychiatric clerkship. The integrative nature of this PBL curriculum led us to hypothesise that the knowledge growth curves for these disciplines are similar and show a steady upward trend throughout the curriculum. METHODS All items pertaining to psychiatry and behavioural sciences in the progress tests administered from September 1993 through May 2001 were identified. For those items, the percentage of correct scores in the 6 year groups was considered a multivariate observation reflecting knowledge growth across the 6-year programme. RESULTS Knowledge of psychiatry and behavioural sciences increased significantly, from 12% to 59% and from 28% to 60%, respectively, between Year 1 and the end of Year 6. Apparently, students know more about behavioural sciences than about psychiatry when they enter medical school, but this difference vanishes in the last 2 years of training. Moreover, the growth curves for psychiatry and behavioural sciences started to level off after Years 3 and 4, respectively, with no additional significant growth in any of the later years. CONCLUSIONS Psychiatry and behavioural sciences showed different patterns of knowledge growth, and both growth curves levelled off in Years 5 through 6. Because a student-centred, horizontally and vertically integrated PBL curriculum is aimed at effecting steady growth in knowledge in all disciplines, the slowdown in growth in the later years was among the reasons for initiating a major curricular innovation in 2001.
Affiliation(s)
- R van Diest
- Department of Psychiatry and Neuropsychology, Faculty of Medicine, Maastricht University, Maastricht, The Netherlands.
17
Kramer A, Muijtjens A, Jansen K, Düsman H, Tan L, van der Vleuten C. Comparison of a rational and an empirical standard setting procedure for an OSCE. Objective structured clinical examinations. Med Educ 2003; 37:132-139. [PMID: 12558884 DOI: 10.1046/j.1365-2923.2003.01429.x]
Abstract
PURPOSE Earlier studies of absolute standard setting procedures for objective structured clinical examinations (OSCEs) show inconsistent results. This study compared a rational and an empirical standard setting procedure. Reliability and credibility were examined first. The impact of a reality check was then established. METHODS The OSCE included 16 stations and was taken by trainees in their final year of postgraduate training in general practice and experienced general practitioners. A modified Angoff (independent judgements, no group discussion) with and without a reality check was used as a rational procedure. A method related to the borderline group procedure, the borderline regression (BR) method, was used as an empirical procedure. Reliability was assessed using generalisability theory. Credibility was assessed by comparing pass rates and by relating the passing scores to test difficulty. RESULTS The passing scores were 73.4% for the Angoff procedure without reality check (Angoff I), 66.0% for the Angoff procedure with reality check (Angoff II) and 57.6% for the BR method. The reliabilities (expressed as root mean square errors) were 2.1% for Angoffs I and II, and 0.6% for the BR method. The pass rates of the trainees and GPs were 19% and 9% for Angoff I, 66% and 46% for Angoff II, and 95% and 80% for the BR method, respectively. The correlation between test difficulty and passing score was 0.69 for Angoff I, 0.88 for Angoff II and 0.86 for the BR method. CONCLUSION The BR method provides a more credible and reliable standard for an OSCE than a modified Angoff procedure. A reality check improves the credibility of the Angoff procedure but does not improve its reliability.
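The borderline regression (BR) method the abstract favours can be sketched as follows: for each station, candidates' checklist scores are regressed on examiners' global ratings, and the passing score is the regression line's prediction at the "borderline" rating point. The rating scale and station data below are assumptions for illustration, not the study's:

```python
import numpy as np

def br_cutoff(checklist_scores, global_ratings, borderline=2):
    """Borderline regression passing score: fit checklist score on
    global rating (degree-1 least squares), then predict the score
    at the borderline rating point."""
    slope, intercept = np.polyfit(global_ratings, checklist_scores, 1)
    return slope * borderline + intercept

# Hypothetical station: ratings 1=fail, 2=borderline, 3=pass, 4=good
ratings = [1, 1, 2, 2, 3, 3, 4, 4]
scores = [38, 42, 52, 58, 68, 72, 82, 86]
print(f"passing score = {br_cutoff(scores, ratings):.1f}%")
```

Unlike a modified Angoff, which rests on judges' independent item-level estimates, the cutoff here is anchored in the observed performance of the cohort, which is one reason the BR standard tracks test difficulty.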
Affiliation(s)
- Anneke Kramer
- National Centre for Evaluation of Postgraduate Training in General Practice (SVUH), Utrecht, the Netherlands.
18
van Baak MA, Jennen W, Muijtjens A, Verstappen FT. Effects of acute and chronic metoprolol administration during submaximal and maximal exercise. Int J Sports Med 1985; 6:347-52. [PMID: 4077364 DOI: 10.1055/s-2008-1025869]
Abstract
The effects of different dosages of the beta-1-adrenoceptor blocker metoprolol and of acute and chronic administration of this beta-blocker during physical exercise were compared in healthy normotensive subjects. Placebo, 0.15 mg/kg, and 0.30 mg/kg metoprolol were administered intravenously 10 min before a progressive bicycle ergometer test up to exhaustion. Thereafter, subjects were treated for 4 weeks with placebo or slow-release metoprolol (1 × 200 mg/day). At the end of each 4th week of treatment, a maximal exercise test was performed. Heart rate, ventilation, oxygen consumption, and plasma concentrations of free fatty acids, glucose, and lactate were determined at rest and during exercise. After the low (0.15 mg/kg) i.v. dose, the heart rate during maximal exercise was reduced from 189 ± 2 to 155 ± 2 beats/min (P < 0.001). This reduction was significantly smaller than that after the high (0.30 mg/kg) i.v. dose (177 ± 3 to 137 ± 4 beats/min, P < 0.001) and during chronic treatment (176 ± 3 to 132 ± 2 beats/min, P < 0.001). The difference between the high i.v. dose and chronic treatment was not significant. After the low i.v. dose, the heart rate was the only variable affected. After the high i.v. dose, the heart rate, exercise time, maximal oxygen uptake, and plasma glucose and free fatty acid concentrations during maximal exercise were reduced, and the maximal lactate concentration tended to be lower. During submaximal exercise, no significant differences between placebo and beta-blocker administration were found, except for heart rate, which was reduced after beta-blockade. (ABSTRACT TRUNCATED AT 250 WORDS)
19
Hamilton CJ, Wetzels LC, Evers JL, Hoogland HJ, Muijtjens A, de Haan J. Follicle growth curves and hormonal patterns in patients with the luteinized unruptured follicle syndrome. Fertil Steril 1985; 43:541-8. [PMID: 3921410 DOI: 10.1016/s0015-0282(16)48494-3]
Abstract
A prospective longitudinal and standardized study is presented, dealing with ultrasonographic and hormonal characteristics of the luteinized unruptured follicle (LUF) syndrome. Among 600 cycles monitored in 270 infertility patients, 40 cycles in 27 patients showed no evidence of follicle rupture, in spite of signs of luteinization, as reflected by basal body temperature recordings and progesterone determinations. In this study, 20 LUF cycles in 20 infertile patients were compared with 45 ovulatory cycles in 45 control women. During the follicular phase, no substantial difference in follicle growth was found, but after the luteinizing hormone peak, LUF follicles, instead of rupturing, showed a typical accelerated growth pattern. Both mean luteinizing hormone peak levels and midluteal progesterone levels were significantly lower in LUF cycles than in the control cycles. However, the duration of the luteal phase was not affected. Both central and local factors can be held responsible for the lack of follicle rupture. Ultrasound offers new possibilities as a noninvasive method in diagnosing the LUF syndrome.