1
Van Meenen F, Coertjens L, Van Nes MC, Verschuren F. Peer overmarking and insufficient diagnosticity: the impact of the rating method for peer assessment. Advances in Health Sciences Education: Theory and Practice 2022; 27:1049-1066. PMID: 35871407. DOI: 10.1007/s10459-022-10130-w.
Abstract
The present study explores two rating methods for peer assessment (analytical rating using criteria and comparative judgement) in light of concurrent validity, reliability and insufficient diagnosticity (i.e. the degree to which substandard work is recognised by the peer raters). During a second-year undergraduate course, students wrote a one-page essay on an air pollutant. A first cohort (N = 260) relied on analytical rating using criteria to assess their peers' essays. A total of 1297 evaluations were made, and each essay received at least four peer ratings. Results indicate a small correlation between peer and teacher marks, and three essays of substandard quality were not recognised by the group of peer raters. A second cohort (N = 230) used comparative judgement. They completed 1289 comparisons, from which a rank order was calculated. Results suggest a large correlation between the university teacher marks and the peer scores and acceptable reliability of the rank order. In addition, the three essays of substandard quality were discerned as such by the group of peer raters. Although replication research is warranted, the results provide the first evidence that, when peer raters overmark and fail to identify substandard work using analytical rating with criteria, university teachers may consider changing the rating method of the peer assessment to comparative judgement.
Affiliation(s)
- Florence Van Meenen
- Psychological Sciences Research Institute, Université catholique de Louvain, 10, Place Cardinal Mercier, 1348, Louvain-la-Neuve, Belgium
- Liesje Coertjens
- Psychological Sciences Research Institute, Université catholique de Louvain, 10, Place Cardinal Mercier, 1348, Louvain-la-Neuve, Belgium
- Marie-Claire Van Nes
- Emergency Department, Cliniques Universitaires Saint-Luc, Institute of Experimental and Clinical Research (IREC), Université catholique de Louvain, Brussels, Belgium
- Franck Verschuren
- Institute of Experimental and Clinical Research, Acute Medicine Department, Université catholique de Louvain, Brussels, Belgium
2
Wang PZT, Xie WY, Nair S, Dave S, Shatzer J, Chahine S. A Comparison of Guided Video Reflection versus Self-Regulated Learning to Teach Knot Tying to Medical Students: A Pilot Randomized Controlled Trial. Journal of Surgical Education 2020; 77:805-816. PMID: 32151512. DOI: 10.1016/j.jsurg.2020.02.014.
Abstract
OBJECTIVES Self-regulated learning has been proposed as a resource-saving alternative for learning knot tying. However, it may be hindered by the Dunning-Kruger effect. A potential alternative is guided video reflection. The objective of this study was to compare the performance and self-assessment abilities of medical students learning knot tying using either a traditional self-regulated approach or guided video reflection. DESIGN This randomized, single-blinded, controlled trial used a pre-post-retention test design. All knot-tying performances were video recorded and assessed nonsequentially by blinded evaluators using a modified Objective Structured Assessment of Technical Skills tool. PARTICIPANTS This study recruited 31 first- and second-year medical students and 6 senior urology residents from Western University in Canada. RESULTS At baseline, the performances of the experts were significantly higher than those of the experimental groups (F(3,85) = 9.080, p < 0.001). After the intervention, there was a significant increase in performance for both experimental groups compared to the pretest period (p < 0.001). The scores of the two experimental groups were not significantly different (p = 0.338). The improved performances of both groups were sustained on retention testing (p < 0.001). The self-assessment abilities of both experimental groups were accurate at baseline. However, at the post-test period, accuracy was poor (interclass correlation 0.361) for the self-regulated group, while remaining moderately accurate (interclass correlation 0.685) for the reflection group. CONCLUSIONS Students using guided video reflection achieved competency and maintained their knot-tying skills to the same degree as those who used the self-regulated approach. These results may be due to the positive effects of reflection on self-assessment abilities and subsequent improvement in goal setting for further practice.
Affiliation(s)
- Peter Zhan Tao Wang
- Department of Surgery, Division of Urology, Western University, London, Ontario, Canada
- Wen Yan Xie
- Department of Surgery, Division of Urology, Western University, London, Ontario, Canada
- Shiva Nair
- Department of Surgery, Division of Urology, Western University, London, Ontario, Canada
- Sumit Dave
- Department of Surgery, Division of Urology, Western University, London, Ontario, Canada
- John Shatzer
- Johns Hopkins University, School of Education, Baltimore, Maryland
- Saad Chahine
- Faculty of Education, Queen's University, Kingston, Ontario, Canada
3
Pervaz Iqbal M, Velan GM, O’Sullivan AJ, Balasooriya C. The collaborative learning development exercise (CLeD-EX): an educational instrument to promote key collaborative learning behaviours in medical students. BMC Medical Education 2020; 20:62. PMID: 32122344. PMCID: PMC7052979. DOI: 10.1186/s12909-020-1977-0.
Abstract
BACKGROUND Modern clinical practice increasingly relies on collaborative and team-based approaches to care. Regulatory bodies in medical education emphasise the need to develop collaboration and teamwork competencies and highlight the need to do so from an early stage of medical training. In undergraduate medical education, the focus is usually on collaborative learning, associated with feedback and reflection on this learning. This article describes a novel educational instrument, the Collaborative Learning Development Exercise (CLeD-EX), which aims to foster the development of key collaborative learning competencies in medical students. In this article we report on the effectiveness, feasibility and educational impact of the CLeD-EX. METHODS In this study, the "educational design research" framework was used to develop, implement and evaluate the CLeD-EX. This involved adopting a systematic approach to designing a creative and innovative instrument that would help solve a real-world challenge in developing collaborative learning skills. The systematic approach involved a qualitative exploration of key collaborative learning behaviours that are influential in effective collaborative learning contexts. The identified competencies were employed in the design of the CLeD-EX, which includes features to facilitate structured feedback by tutors to students, complemented by self-evaluation and reflection. The CLeD-EX was field-tested with volunteer junior medical students, using a controlled pre-test post-test design. Analysis of the completed CLeD-EX forms, self-perception surveys (i.e. pre-test and post-test surveys) and reflective reports was used to explore the educational impact of the CLeD-EX, as well as its utility and practicality. RESULTS After using the CLeD-EX, students showed a significant improvement in critical thinking and group process as measured by a previously validated instrument. Both students and tutors recognised the CLeD-EX as an effective instrument, especially as a structured basis for giving and receiving feedback and for completing the feedback loop. The CLeD-EX was also found to be feasible, practical and focused, while promoting learning and effective interactions in small-group learning. CONCLUSION The findings of this study support the introduction of an effective and feasible educational instrument such as the CLeD-EX to facilitate the development of students' skills in collaborative learning.
Affiliation(s)
- Maha Pervaz Iqbal
- School of Public Health and Community Medicine, UNSW Medicine, UNSW Sydney, Sydney, Australia
- Gary M. Velan
- School of Medical Sciences, UNSW Medicine, UNSW Sydney, Sydney, Australia
- Chinthaka Balasooriya
- School of Public Health and Community Medicine, UNSW Medicine, UNSW Sydney, Sydney, Australia
4
Lerchenfeldt S, Mi M, Eng M. The utilization of peer feedback during collaborative learning in undergraduate medical education: a systematic review. BMC Medical Education 2019; 19:321. PMID: 31443705. PMCID: PMC6708197. DOI: 10.1186/s12909-019-1755-z.
Abstract
BACKGROUND Peer evaluation can provide valuable feedback to medical students and increase student confidence and quality of work. The objective of this systematic review was to examine the utilization, effectiveness, and quality of peer feedback during collaborative learning in medical education. METHODS The PRISMA statement for reporting systematic reviews and meta-analyses guided the review process. The level of evidence (Colthart) and types of outcomes (Kirkpatrick) were evaluated. Two main authors reviewed articles, with a third resolving conflicting results. RESULTS The final review included 31 studies. Problem-based learning and team-based learning were the most common collaborative learning settings. Eleven studies reported that students received instruction on how to provide appropriate peer feedback. No studies described whether the quality of feedback was evaluated by faculty. Seventeen studies evaluated the effect of peer feedback on professionalism; 12 of those studies evaluated its effectiveness for assessing professionalism and eight evaluated the use of peer feedback for professional behavior development. Ten studies examined the effect of peer feedback on student learning. Six studies examined the role of peer feedback in team dynamics. CONCLUSIONS This systematic review indicates that peer feedback in a collaborative learning environment may be a reliable assessment of professionalism and may aid in the development of professional behavior. The review suggests implications for further research on the impact of peer feedback, including the effectiveness of providing instruction on how to give appropriate peer feedback.
Affiliation(s)
- Sarah Lerchenfeldt
- Department of Foundational Medical Studies, Oakland University William Beaumont School of Medicine, O’Dowd Hall, Room 466, 586 Pioneer Drive, Rochester, MI 48309, USA
- Misa Mi
- Department of Foundational Medical Studies, Oakland University William Beaumont School of Medicine, Kresge Library, #130, 100 Library Drive, Rochester, MI 48309, USA
- Marty Eng
- Department of Pharmacy Practice, Cedarville University, HSC 235, 251 N Main St, Cedarville, OH 45314, USA
5
Sa B, Ezenwaka C, Singh K, Vuma S, Majumder MAA. Tutor assessment of PBL process: does tutor variability affect objectivity and reliability? BMC Medical Education 2019; 19:76. PMID: 30850024. PMCID: PMC6407196. DOI: 10.1186/s12909-019-1508-z.
Abstract
BACKGROUND Ensuring objectivity and maintaining reliability are necessary for any form of assessment to be considered valid. Evaluation of students in Problem-Based Learning (PBL) tutorials by tutors has drawn the attention of critics, who cite many challenges and limitations. The aim of this study was to determine the extent of tutor variability in assessing the PBL process in the Faculty of Medical Sciences, The University of the West Indies, St Augustine Campus, Trinidad and Tobago. METHOD All 181 students of year 3 MBBS were assigned randomly to 14 PBL groups. Out of 18 tutors, 12 had the opportunity to assess three groups, one assessed two groups, and 4 tutors assessed one group each; in the end, each group had been assessed three times by different tutors. The tutors used a PBL assessment rating scale of 12 different criteria on a six-point scale to assess each PBL group. To test the stated hypotheses, independent t-tests, one-way ANOVA followed by post-hoc Bonferroni tests, intraclass correlation, and Pearson product-moment correlations were performed. RESULT The analysis revealed significant differences between the highest- and lowest-rated groups (t-ratio = 12.64; p < 0.05) and between the most lenient and most stringent raters (t-ratio = 27.96; p < 0.05). ANOVA and post-hoc analysis for the highest- and lowest-rated groups revealed that lenient and stringent raters contributed significantly (p < 0.01) to diluting the scores in their respective categories. The intraclass correlations (ICC) among the ratings of different tutors showed low agreement across groups, except for three groups (Groups 6, 8 and 13) (r = 0.40). The correlation between tutors' PBL experience and their mean ratings was moderately positive but not statistically significant (r = 0.52; p > 0.05). CONCLUSION Leniency and stringency factors amongst raters affect objectivity and reliability to a great extent, as is evident from the present study. More rigorous training for tutors in the principles of assessment is therefore recommended. Moreover, putting that knowledge into practice to overcome the leniency and stringency factors is essential.
Affiliation(s)
- Bidyadhar Sa
- Centre for Medical Sciences Education, The Faculty of Medical Sciences, The University of the West Indies, St Augustine Campus, St Augustine, Trinidad and Tobago
- Chidum Ezenwaka
- Department of Para-clinical Sciences, Faculty of Medical Sciences, The University of the West Indies, St Augustine Campus, St Augustine, Trinidad and Tobago
- Keerti Singh
- Faculty of Medical Sciences, The University of the West Indies, Cave Hill Campus, Bridgetown, Barbados
- Sehlule Vuma
- Department of Para-clinical Sciences, Faculty of Medical Sciences, The University of the West Indies, St Augustine Campus, St Augustine, Trinidad and Tobago
- Md. Anwarul Azim Majumder
- Faculty of Medical Sciences, The University of the West Indies, Cave Hill Campus, Bridgetown, Barbados
6
Roberts C, Jorm C, Gentilcore S, Crossley J. Peer assessment of professional behaviours in problem-based learning groups. Medical Education 2017; 51:390-400. PMID: 28078685. DOI: 10.1111/medu.13151.
Abstract
CONTEXT Peer assessment of professional behaviour within problem-based learning (PBL) groups can support learning and provide opportunities to identify and remediate problem behaviours. OBJECTIVES We investigated whether a peer assessment of learning behaviours in PBL is sufficiently valid to support decision making about student professional behaviours. METHODS Data were available for two cohorts of students, in which each student was rated by all of their PBL group peers using a modified version of a previously validated scale. Following the provision of feedback to the students, their behaviours were again peer-assessed. A generalisability study was undertaken to calculate the students' professional behaviour scores, sources of error that impacted the reliability of the assessment, changes in student rating behaviour, and changes in mean scores after the delivery of feedback. RESULTS Peer assessment of professional learning behaviour was highly reliable for within-group comparisons (G = 0.81-0.87), but poor for across-group comparisons (G = 0.47-0.53). Feedback increased the range of ratings given by assessors and brought their mean ratings into closer alignment. More of the increased variance was attributable to assessee performance than to assessor stringency and hence there was a slight improvement in reliability, especially for comparisons across groups. Mean professional behaviour scores were unchanged. CONCLUSIONS Peer assessment of professional learning behaviours may be unreliable for decision making outside a PBL group. Faculty members should not draw conclusions from peer assessment about a student's behaviour compared with that of their peers in the cohort, and such a tool may not be appropriate for summative assessment. Health professional educators interested in assessing student professional behaviours in PBL groups might focus on opportunities for the provision of formative peer feedback and its impact on learning.
Affiliation(s)
- Chris Roberts
- Sydney Medical School - Northern, University of Sydney, Sydney, Australia
- Christine Jorm
- Office of Medical Education, Sydney Medical School, University of Sydney, Sydney, Australia
- Stacey Gentilcore
- Office of Medical Education, Sydney Medical School, University of Sydney, Sydney, Australia
- Jim Crossley
- The Medical School, University of Sheffield, Sheffield, UK
7
Khoiriyah U, Roberts C, Jorm C, Van der Vleuten CPM. Enhancing students' learning in problem based learning: validation of a self-assessment scale for active learning and critical thinking. BMC Medical Education 2015; 15:140. PMID: 26306762. PMCID: PMC4549835. DOI: 10.1186/s12909-015-0422-2.
Abstract
BACKGROUND Problem-based learning (PBL) is a powerful learning activity, but fidelity to intended models may slip and student engagement wane, negatively impacting learning processes and outcomes. One potential solution to this degradation is to encourage self-assessment in the PBL tutorial. Self-assessment is a central component of the self-regulation of student learning behaviours. There are few measures to investigate self-assessment relevant to PBL processes. We developed a Self-assessment Scale on Active Learning and Critical Thinking (SSACT) to address this gap. We wished to demonstrate evidence of its validity in the context of PBL by exploring its internal structure. METHODS We used a mixed-methods approach to scale development. We developed scale items from a qualitative investigation, a literature review, and consideration of existing tools used to study the PBL process. Expert review panels evaluated its content; a process of validation subsequently reduced the pool of items. We used structural equation modelling to undertake a confirmatory factor analysis (CFA) of the SSACT and computed coefficient alpha. RESULTS The 14-item SSACT consisted of two domains, "active learning" and "critical thinking". The factorial validity of the SSACT was evidenced by all items loading significantly on their expected factors, a good model fit for the data, and good stability across two independent samples. Each subscale had good internal reliability (>0.8), and the two subscales were strongly correlated with each other. CONCLUSIONS The SSACT has sufficient evidence of validity to support its use in the PBL process to encourage students to self-assess. Implementation of the SSACT may assist students to improve the quality of their learning in achieving PBL goals such as critical thinking and self-directed learning.
Affiliation(s)
- Umatul Khoiriyah
- Medical Education Unit (MEU), Fakultas Kedokteran UII, Jl. Kaliurang Km 14.5, Ngaglik, Sleman, Yogyakarta, 55584, Indonesia
- Chris Roberts
- Sydney Medical School - Northern, The University of Sydney, Hornsby Ku-ring-gai Hospital, Palmerston Road, Hornsby, NSW, 2077, Australia
- Christine Jorm
- Sydney Medical School, Edward Ford Building A27, The University of Sydney, Sydney, NSW, 2006, Australia
- C P M Van der Vleuten
- Maastricht University, Educational Development and Research, P.O. Box 616, 6200 MD Maastricht, The Netherlands
8
Mehrdad N, Bigdeli S, Ebrahimi H. A Comparative Study on Self, Peer and Teacher Evaluation to Evaluate Clinical Skills of Nursing Students. Procedia - Social and Behavioral Sciences 2012. DOI: 10.1016/j.sbspro.2012.06.911.
9
Hawkins SC, Osborne A, Schofield SJ, Pournaras DJ, Chester JF. Improving the accuracy of self-assessment of practical clinical skills using video feedback--the importance of including benchmarks. Medical Teacher 2012; 34:279-284. PMID: 22455696. DOI: 10.3109/0142159x.2012.658897.
Abstract
INTRODUCTION Isolated video recording has not been demonstrated to improve self-assessment accuracy. This study examines if the inclusion of a defined standard benchmark performance in association with video feedback of a student's own performance improves the accuracy of student self-assessment of clinical skills. METHODS Final year medical students were video recorded performing a standardised suturing task in a simulated environment. After the exercise, the students self-assessed their performance using global rating scales (GRSs). An identical self-assessment process was repeated following video review of their performance. Students were then shown a video-recorded 'benchmark performance', which was specifically developed for the study. This demonstrated the competency levels required to score full marks (30 points). A further self-assessment task was then completed. Students' scores were correlated against expert assessor scores. RESULTS A total of 31 final year medical students participated. Student self-assessment scores before video feedback demonstrated moderate positive correlation with expert assessor scores (r = 0.48, p < 0.01) with no change after video feedback (r = 0.49, p < 0.01). After video feedback with benchmark performance demonstration, self-assessment scores demonstrated a very strong positive correlation with expert scores (r = 0.83, p < 0.0001). CONCLUSIONS The demonstration of a video-recorded benchmark performance in combination with video feedback may significantly improve the accuracy of students' self-assessments.
10
Speyer R, Pilz W, Van Der Kruis J, Brunings JW. Reliability and validity of student peer assessment in medical education: a systematic review. Medical Teacher 2011; 33:e572-e585. PMID: 22022910. DOI: 10.3109/0142159x.2011.610835.
Abstract
BACKGROUND Peer assessment has been demonstrated to be an effective educational intervention for health science students. AIMS This study aims to give an overview of all instruments or questionnaires for peer assessment used in medical and allied health professional educational settings, and of their psychometric characteristics as described in the literature. METHODS A systematic literature search was carried out using the electronic databases PubMed, Embase, ERIC, PsycINFO and Web of Science, covering all available publication dates up to May 2010. RESULTS Out of 2899 hits, 28 studies were included, describing 22 different instruments for peer assessment in mainly medical educational settings. Although most studies considered professional behaviour as the main subject of assessment and usually described peer assessment as an assessment tool, great diversity was found in the educational settings and applications of peer assessment, in the dimensions or constructs, number of items and scoring systems per questionnaire, and in psychometric characteristics. CONCLUSIONS Although quite a few instruments for peer assessment have been identified, many questionnaires did not provide sufficient psychometric data. Still, the final choice of an instrument for educational purposes can only be justified by its sufficient reliability and validity, as well as by the discriminative and evaluative purposes of the assessment.
Affiliation(s)
- Renée Speyer
- Institute of Health Studies, HAN University of Applied Sciences, Nijmegen, The Netherlands
11
Dory V, Degryse J, Roex A, Vanpee D. Usable knowledge, hazardous ignorance - beyond the percentage correct score. Medical Teacher 2010; 32:375-380. PMID: 20423255. DOI: 10.3109/01421590903197027.
Abstract
BACKGROUND Little attention has been paid to the metacognitive ability of medical students. AIM We used confidence marking to explore certainty of knowledge and ignorance. METHODS One hundred and twenty-seven of 169 general practice trainees took part. Students sat a written multiple-choice question (MCQ) test. Each answer was followed by a degree-of-certainty judgement. Answers given with a high degree of certainty were used to compute overall usable knowledge, hazardous ignorance, and the proportions of knowledge that is usable and of ignorance that is hazardous. These variables were analysed according to MCQ score, year of training and gender. RESULTS At a group level, the mean amount of usable knowledge on the MCQ was 21.13%, the mean amount of hazardous ignorance on the MCQ was 5.21%, the mean proportion of knowledge that was usable was 36.57%, and the mean proportion of ignorance that was hazardous was 14.32%. There were no significant differences between the highest and lowest quartiles of MCQ score, nor according to year of training. Men had higher levels of hazardous ignorance. CONCLUSION A third of trainees' knowledge was partial. A sixth of their ignorance was hazardous. Confidence marking can aid formative assessment and could potentially be implemented in summative assessments.
Affiliation(s)
- Valerie Dory
- Centre Académique de Médecine Générale, Université catholique de Louvain, Brussels, Belgium
12
van Mook WNKA, van Luijk SJ, de Grave W, O'Sullivan H, Wass V, Schuwirth LW, van der Vleuten CPM. Teaching and learning professional behavior in practice. European Journal of Internal Medicine 2009; 20:e105-e111. PMID: 19712827. DOI: 10.1016/j.ejim.2009.01.003.
Abstract
This paper is the fourth article in a series on professionalism and provides an overview of current methods used for teaching and learning about professionalism. The questions of whether and how professionalism can be placed in formal medical school curricula are addressed, and informal learning related to professionalism is reviewed.
Affiliation(s)
- Walther N K A van Mook
- Department of Intensive Care and Internal Medicine, Maastricht University Medical Centre, Maastricht, The Netherlands
13
Machado JLM, Machado VMP, Grec W, Bollela VR, Vieira JE. Self- and peer assessment may not be an accurate measure of PBL tutorial process. BMC Medical Education 2008; 8:55. PMID: 19038048. PMCID: PMC2605444. DOI: 10.1186/1472-6920-8-55.
Abstract
BACKGROUND Universidade Cidade de São Paulo adopted a problem-based learning (PBL) strategy as the predominant method for teaching and learning medicine. Self-, peer- and tutor marks for the educational process are taken into account as part of the final grade, which also includes assessment of content. This study compared the different perspectives (and grades) of evaluators during tutorials with first-year medical students across seven semesters, from 2004 to 2007 (n = 349). METHODS The tutorial evaluation method comprised the students' self-assessment (SA) (10%), tutor assessment (TA) (80%) and peer assessment (PA) (10%), used to calculate a final educational-process grade for each tutorial. We compared these three grades from each tutorial over seven semesters using ANOVA and a post-hoc test. RESULTS A total of 349 students participated, 199 (57%) women and 150 (42%) men. The SA and PA scores were consistently greater than the TA scores. Moreover, the SA and PA groups did not show a statistical difference in any semester evaluated, while both differed from tutor assessment in all semesters (Kruskal-Wallis, Dunn's test). Spearman rank-order correlation was significant (p < 0.0001) and positive for SA and PA (r = 0.806); this was not observed when comparing TA with PA (r = 0.456) or TA with SA (r = 0.376). CONCLUSION Peer- and self-assessment marks might be reliable but not valid for the PBL tutorial process, especially if these assessments are used summatively, composing the final grade. This article suggests reconsidering the use of self-evaluation in summative assessment in PBL tutorials.
Affiliation(s)
- José Lúcio Martins Machado
- UNICID – Universidade Cidade de São Paulo Medical School, Rua Cesário Galeno 448/475, CEP 03071-000, São Paulo, Brazil
- Waldir Grec
- UNICID – Universidade Cidade de São Paulo Medical School, Rua Cesário Galeno 448/475, CEP 03071-000, São Paulo, Brazil
- Valdes Roberto Bollela
- UNICID – Universidade Cidade de São Paulo Medical School, Rua Cesário Galeno 448/475, CEP 03071-000, São Paulo, Brazil
- Joaquim Edson Vieira
- UNICID – Universidade Cidade de São Paulo Medical School, Rua Cesário Galeno 448/475, CEP 03071-000, São Paulo, Brazil
14
Abstract
PURPOSE Physical therapists are expected to engage in self-assessment in order to ensure competent practice and to identify appropriate professional development activities. SUMMARY OF KEY POINTS This paper reviews the current literature on the accuracy and role of self-assessment in physical therapy. Current literature indicating that self-assessment cannot be conducted with any degree of accuracy is discussed, and a proposed reformulation of the concept of self-assessment is presented. RECOMMENDATIONS Practical strategies are offered for clinicians to improve the potential for obtaining reliable and valid information about their own clinical performance to guide the selection of appropriate professional development activities and to promote the provision of competent patient care.
Affiliation(s)
- Patricia A Miller
- Patricia A. Miller, BSc(PT), MHSc, MSc, PhD: Associate Clinical Professor, School of Rehabilitation Science, McMaster University, Hamilton, Ontario; PhD candidate, Health Research Methodology Program, McMaster University, Hamilton, Ontario; Strategic Training Fellow in Rehabilitation Research, CIHR Quality of Life Strategic Training Program, McMaster University / University of British Columbia
15
O'Brien CE, Franks AM, Stowe CD. Multiple rubric-based assessments of student case presentations. AMERICAN JOURNAL OF PHARMACEUTICAL EDUCATION 2008; 72:58. [PMID: 18698367 PMCID: PMC2508736 DOI: 10.5688/aj720358] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/10/2007] [Accepted: 12/09/2007] [Indexed: 05/20/2023]
Abstract
OBJECTIVES To evaluate a rubric-based method of assessing pharmacy students' case presentations in the recitation component of a therapeutics course. METHODS A rubric was developed to assess knowledge, skills, and professional behavior. The rubric was used for instructor, student peer, and student self-assessment of case presentations. Rubric-based composite scores were compared to the previous dichotomous checklist-based scores. RESULTS Rubric-based instructor scores were significantly lower and had a broader score distribution than those resulting from the checklist method. Spring 2007 rubric-based composite scores from instructors and peers were significantly lower than those from the pilot study results, but self-assessment composite scores were not significantly different. CONCLUSIONS Successful development and implementation of a grading rubric facilitated evaluation of knowledge, skills, and professional behavior from the viewpoints of instructor, peer, and self in a didactic course.
Affiliation(s)
- Catherine E O'Brien
- College of Pharmacy, University of Arkansas for Medical Sciences, 4301 West Markham Street, Little Rock, AR 72205, USA
16
Bucknall V, Sobic EM, Wood HL, Howlett SC, Taylor R, Perkins GD. Peer assessment of resuscitation skills. Resuscitation 2008; 77:211-5. [PMID: 18243473 DOI: 10.1016/j.resuscitation.2007.12.003] [Citation(s) in RCA: 31] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2007] [Revised: 11/27/2007] [Accepted: 12/12/2007] [Indexed: 11/27/2022]
Abstract
INTRODUCTION Peer tuition has been identified as a useful tool for delivering undergraduate healthcare training in basic life support. The aim of this study was to test the expansion of the peer tuition model to include peer assessment of performance. The study also sought to establish the attitudes towards peer assessment among the course students and tutors. METHODS Students undergoing an end-of-course test in basic life support were simultaneously assessed by peer and faculty assessors, and the reliability of assessment results was measured. Students' and peer assessors' attitudes to peer assessment were also measured by questionnaire. RESULTS In all, 162 candidates were assessed by 9 sets of peer and faculty examiners. Inter-observer agreement was high (>95%) for all assessment domains apart from chest compressions (93%). Agreement on the final pass/fail decision was less consistent at 86%, because of the lower pass rate of 71% (115/162) afforded by peer assessors compared with 82% (132/162) by faculty assessors (p=0.0008). Peer assessor sensitivity and specificity were 85% and 90%, respectively, with a positive predictive value of 97% and a negative predictive value of 57%. CONCLUSION Senior healthcare students can make reliable assessments of their peers' performance during an end-of-course test in basic life support. Students preferred peer assessment, and the peer assessment process was acceptable to the majority of students and peer assessors.
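The accuracy figures above can be reproduced from a 2×2 pass/fail table consistent with the stated totals (162 candidates, 132 faculty passes, 115 peer passes); the individual cell counts below are a reconstruction for illustration, not values taken from the paper:

```python
# One 2x2 table consistent with the totals reported in the abstract;
# cell counts are reconstructed, not from the paper itself.
TP = 112  # peer pass, faculty pass
FP = 3    # peer pass, faculty fail
FN = 20   # peer fail, faculty pass
TN = 27   # peer fail, faculty fail

sensitivity = TP / (TP + FN)  # peer passes among faculty passes, ~0.85
specificity = TN / (TN + FP)  # peer fails among faculty fails, 0.90
ppv = TP / (TP + FP)          # ~0.97
npv = TN / (TN + FN)          # ~0.57
```

Note how the modest negative predictive value follows directly from the peers' stricter pass rate.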
17
Marks A, McIntosh J. Achieving meaningful learning in health information management students: the importance of professional experience. HEALTH INF MANAG J 2008; 35:14-22. [PMID: 18209219 DOI: 10.1177/183335830603500205] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Learning is a complex process, not merely a transfer of information from teacher to student. For learning to be meaningful, students need to adopt a deep approach, and in the case of vocational students, to be given the opportunity to learn experientially. Health information management is a practice profession for which students are educated through theory at university and professional experience in the workplace. This article discusses how, through the process of experiential learning, professional experience can promote reflective thinking and thus deep learning, that is, the ability to integrate theory and practice, as well as professional and personal development in health information management students.
Collapse
Affiliation(s)
- Anne Marks
- School of Health Information Management, Faculty of Health Sciences, The University of Sydney, PO Box 170, Lidcombe, NSW 1825, Australia.
18
Colthart I, Bagnall G, Evans A, Allbutt H, Haig A, Illing J, McKinstry B. The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice: BEME Guide no. 10. MEDICAL TEACHER 2008; 30:124-45. [PMID: 18464136 DOI: 10.1080/01421590701881699] [Citation(s) in RCA: 213] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
BACKGROUND Health professionals are increasingly expected to identify their own learning needs through a process of ongoing self-assessment. Self-assessment is integral to many appraisal systems and has been espoused as an important aspect of personal professional behaviour by several regulatory bodies and those developing learning outcomes for clinical students. In this review we considered the evidence base on self-assessment since Gordon's comprehensive review in 1991. The overall aim of the present review was to determine whether specific methods of self-assessment lead to change in learning behaviour or clinical practice. Specific objectives sought evidence for effectiveness of self-assessment interventions to: a. improve perception of learning needs; b. promote change in learning activity; c. improve clinical practice; d. improve patient outcomes. METHODS The methods for this review were developed and refined in a series of workshops with input from an expert BEME systematic reviewer, and followed BEME guidance. Databases searched included Medline, CINAHL, BNI, Embase, EBM Collection, Psychlit, HMIC, ERIC, BEI, TIMElit and RDRB. Papers addressing self-assessment in all professions in clinical practice were included, covering under- and post-graduate education, with outcomes classified using an extended version of Kirkpatrick's hierarchy. In addition we included outcome measures of accuracy of self-assessment and factors influencing it. 5,798 papers were retrieved, 194 abstracts were identified as potentially relevant and 103 papers coded independently by pairs using an electronic coding sheet adapted from the standard BEME form. This total included 12 papers identified by hand-searches, grey literature, cited references and updating. The identification of a further 12 papers during the writing-up process resulted in a total of 77 papers for final analysis. 
RESULTS Although a large number of papers resulted from our original search only a small proportion of these were of sufficient academic rigour to be included in our review. The majority of these focused on judging the accuracy of self-assessment against some external standard, which raises questions about assumed reliability and validity of this 'gold standard'. No papers were found which satisfied Kirkpatrick's hierarchy above level 2, or which looked at the association between self-assessment and resulting changes in either clinical practice or patient outcomes. Thus our review was largely unable to answer the specific research questions and provide a solid evidence base for effective self-assessment. Despite this, there was some evidence that the accuracy of self-assessment can be enhanced by feedback, particularly video and verbal, and by providing explicit assessment criteria and benchmarking guidance. There was also some evidence that the least competent are also the least able to self-assess accurately. Our review recommends that these areas merit future systematic research to further our understanding of self-assessment. CONCLUSION As in other BEME reviews, the methodological issues emerging from this review indicate a need for more rigorous study designs. In addition, it highlights the need to consider the potential for combining qualitative and quantitative data to further our understanding of how self-assessment can improve learning and professional clinical practice.
19
Ferguson KJ, Kreiter CD. Assessing the relationship between peer and facilitator evaluations in case-based learning. MEDICAL EDUCATION 2007; 41:906-8. [PMID: 17696984 DOI: 10.1111/j.1365-2923.2007.02824.x] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
OBJECTIVES Attempts to validate peer evaluation and to incorporate it into the curriculum have met with mixed results. The purpose of this study was to assess the use of peer evaluations in a Year 1 case-based learning course. METHODS As part of the formal grading process for the course, all faculty facilitators (n = 69 over 3 years) completed a 12-item evaluation form for each student at the conclusion of each case. As part of a course assignment, students (n = 415 over 3 years) completed brief evaluations of their peers based on 2 criteria: the overall quality of written reports, and participation in group discussion. In addition, students provided anonymous feedback in the written end-of-course evaluation about the peer evaluation process, and faculty were asked to comment during the wrap-up luncheon for small-group facilitators. RESULTS Response rates for the 3 Year 1 medical student classes ranged from 95% to 99%. The average number of peer evaluations completed for each student was 4.6. The G coefficients for the rater-nested-within-person generalisability study were 0.52 for written reports and 0.60 for group participation; both were based on an average of 4-5 ratings. Correlation coefficients between peer and faculty evaluations in each of the 3 consecutive years of the course ranged from 0.46 to 0.63; all were statistically significant at P < 0.001. A correction for attenuation suggests that the true score correlation between faculty and peer measures is near 1.0. DISCUSSION This study provides strong evidence that facilitator and peer ratings measure similar constructs and shows that, even among Year 1 medical students, peer evaluation can be conducted in a valid manner.
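The "correction for attenuation" mentioned above divides the observed correlation by the square root of the product of the two measures' reliabilities. A sketch, taking the reported peer G coefficient for group participation (0.60) together with an assumed facilitator-side reliability of 0.60 (the paper does not report that value):

```python
# Disattenuated (true-score) correlation between two measures:
# r_true = r_xy / sqrt(rel_x * rel_y).
def disattenuate(r_observed, rel_x, rel_y):
    return r_observed / (rel_x * rel_y) ** 0.5

# Observed peer-faculty r = 0.60, peer reliability 0.60 (reported G
# coefficient), facilitator reliability 0.60 (assumed) -> r_true = 1.0.
r_true = disattenuate(0.60, 0.60, 0.60)
```

Under these assumptions the true-score correlation reaches 1.0, which is how an observed correlation of around 0.6 can be consistent with the claim that the two ratings "measure similar constructs".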
Affiliation(s)
- Kristi J Ferguson
- Carver College of Medicine, University of Iowa, Iowa City, Iowa 52242, USA.
20
Papinczak T, Young L, Groves M, Haynes M. An analysis of peer, self, and tutor assessment in problem-based learning tutorials. MEDICAL TEACHER 2007; 29:e122-32. [PMID: 17885964 DOI: 10.1080/01421590701294323] [Citation(s) in RCA: 68] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
OBJECTIVE The purpose of this study was to explore self-, peer-, and tutor assessment of performance in tutorials among first-year medical students in a problem-based learning curriculum. METHODS One hundred and twenty-five students enrolled in the first year of the Bachelor of Medicine and Bachelor of Surgery Program at the University of Queensland were recruited to participate in a study of metacognition and peer- and self-assessment. Both quantitative and qualitative data were collected from the assessment of PBL performance within the tutorial setting, which included elements such as responsibility and respect, communication, and critical analysis through presentation of a case summary. Self-, peer-, and tutor assessment took place concurrently. RESULTS Scores obtained from tutor assessment correlated poorly with self-assessment ratings (r = 0.31-0.41), with students consistently under-marking their own performance to a substantial degree. Students with greater self-efficacy scored their PBL performance more highly. Peer assessment was a slightly more accurate measure, with peer-averaged scores correlating moderately with tutor ratings initially (r = 0.40) and improving over time (r = 0.60). Students consistently over-marked their peers, particularly those with sceptical attitudes to the peer-assessment process. Peer over-marking led to less divergence from the tutor scoring than under-marking of one's own work. CONCLUSION According to the results of this study, first-year medical students in a problem-based learning curriculum were better able to accurately judge the performance of their peers compared to their own performance. This study has shown that self-assessment of process is not an accurate measure, in line with the majority of research in this domain. Nevertheless, it has an important role to play in supporting the development of skills in reflection and self-awareness.
Affiliation(s)
- Tracey Papinczak
- School of Medicine, University of Queensland, Herston, Queensland, Australia.
21
Papinczak T, Young L, Groves M. Peer assessment in problem-based learning: a qualitative study. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2007; 12:169-86. [PMID: 17072771 DOI: 10.1007/s10459-005-5046-6] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/26/2005] [Accepted: 11/10/2005] [Indexed: 05/12/2023]
Abstract
Peer assessment provides a powerful avenue for students to receive feedback on their learning. Although student perceptions of peer assessment have been studied extensively in higher education, little qualitative research has been undertaken with medical students in problem-based learning (PBL) curricula. A qualitative study of students' attitudes to, and perceptions of, peer assessment was undertaken within the framework of a larger study of metacognition with first-year medical students at the University of Queensland. A highly structured format for provision of feedback was utilised in the study design. Many recommendations from the higher education literature on optimal implementation of peer-assessment procedures were put into practice. Results indicated the existence of six main themes: (1) increased responsibility for others, (2) improved learning, (3) lack of relevancy, (4) challenges, (5) discomfort, and (6) effects on the PBL process. Five of these themes have previously been described in the literature. However, the final theme represents a unique, although not unexpected, finding. Students expressed serious concerns about the negative impact of peer assessment on the cooperative, non-judgmental atmosphere of PBL tutorial groups. The practical implications of these findings are considered.
Affiliation(s)
- Tracey Papinczak
- Mayne Medical School, School of Medicine, University of Queensland, Herston, 4006, Brisbane, Queensland, Australia.
22
Eva KW, Solomon P, Neville AJ, Ladouceur M, Kaufman K, Walsh A, Norman GR. Using a sampling strategy to address psychometric challenges in tutorial-based assessments. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2007; 12:19-33. [PMID: 17077987 DOI: 10.1007/s10459-005-2327-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/05/2005] [Accepted: 08/22/2005] [Indexed: 05/12/2023]
Abstract
INTRODUCTION Tutorial-based assessment, despite providing a good match with the philosophy adopted by educational programmes that emphasize small group learning, remains one of the greatest challenges for educators working in this context. The current study was performed in an attempt to assess the psychometric characteristics of tutorial-based evaluation upon adopting a multiple sampling approach that requires minimal recording of observations. METHOD After reviewing the literature, a simple 3-item evaluation form was created. The items were "Professional Behaviour," "Contribution to Group Process," and "Contribution to Group Content." Explicit definition of these items was provided on an evaluation form. Twenty five tutors in five different programmes were asked to use the form to evaluate their students (N=169) after every tutorial over the course of an academic unit. Each item was rated using a 10-point scale. RESULTS Cronbach's alpha revealed an appropriate internal consistency in all five programmes. Test-retest reliability of any single rating was low, but the reliability of the average rating was at least 0.75 in all cases. The construct validity of the tool was supported by the observation of increasing ratings over the course of the academic unit and by the finding that more senior students received higher ratings than more junior students. CONCLUSION Consistent with the context specificity phenomenon, the adoption of a "minimal observations often" approach to tutorial-based assessment appears to maintain better psychometric characteristics than do attempts to assess tutorial performance using more comprehensive measurement tools.
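The finding that any single rating is unreliable while the average of many reaches 0.75 is the Spearman-Brown prophecy at work. A sketch with an assumed single-rating reliability of 0.2 (an illustrative figure, not one reported in the paper):

```python
# Spearman-Brown prophecy: reliability of the mean of k ratings,
# given the reliability r of a single rating.
def spearman_brown(r, k):
    return k * r / (1 + (k - 1) * r)

# Assuming a single-rating reliability of 0.2, averaging 12 ratings
# is enough to reach the 0.75 reported as the floor in this study.
avg_reliability = spearman_brown(0.2, 12)
```

This is the arithmetic behind the "minimal observations, often" approach: frequent cheap ratings compound into a dependable average.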
Affiliation(s)
- Kevin W Eva
- Department of Clinical Epidemiology and Biostatistics, Program for Educational Research and Development, MDCL 3510, McMaster University, L8N 3Z5, Hamilton, ON, Canada.
23
Ladouceur MG, Rideout EM, Black MEA, Crooks DL, O'Mara LM, Schmuck ML. Development of an instrument to assess individual student performance in small group tutorials. J Nurs Educ 2006; 43:447-55. [PMID: 17152304 DOI: 10.3928/01484834-20041001-01] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Recognizing the need for a valid and reliable method to assess individual tutorial performance in a problem-based learning curriculum, we developed a 31-item instrument from theoretical frameworks and items used elsewhere. A scale was developed for each of three broad learning domains: self-directed learning (SDL), critical thinking (CT), and group process (GP). The instrument demonstrated high internal consistency (SDL = .88, CT = .90, GP = .83) on a sample of 18 tutors and 167 students. Tutor-student interrater reliability coefficients were estimated to be low (SDL = .16, CT = .18, GP = .14) due to lack of variance on the response scale. The instrument showed high correlation (r = .82) with other forms of summative evaluation. In its current form, this standardized and validated instrument is unreliable in differentiating strong from weak tutorial performance but can have a steering effect on student tutorial behaviors. The process of instrument development has general application to other educational programs.
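The internal-consistency figures quoted above (SDL = .88, CT = .90, GP = .83) are Cronbach's alpha. A self-contained sketch of the computation (the score matrix is illustrative, not the study's data):

```python
# Cronbach's alpha (internal consistency) for a score matrix with one
# row per respondent and one column per scale item.
def cronbach_alpha(scores):
    k = len(scores[0])  # number of items

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([row[j] for row in scores]) for j in range(k)]
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Perfectly parallel items give alpha = 1; the lack of response-scale variance noted above is exactly what depresses the separate interrater coefficients while leaving alpha high.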
Affiliation(s)
- Michael G Ladouceur
- McMaster University, School of Nursing, HSC-2J26, 1200 Main Street West, Hamilton, Ontario, Canada L8N 3Z5.
24
Violato C, Lockyer J. Self and peer assessment of pediatricians, psychiatrists and medicine specialists: implications for self-directed learning. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2006; 11:235-44. [PMID: 16832707 DOI: 10.1007/s10459-005-5639-0] [Citation(s) in RCA: 57] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/03/2005] [Accepted: 12/02/2005] [Indexed: 05/10/2023]
Abstract
Self-regulation in medicine depends on accurate self-assessment. The purpose of the present study was to examine the discrepancy between self and peer assessments for a group of specialist physicians from internal medicine (IM), pediatrics, and psychiatry across four clinical domains (patient management, clinical assessment, professional development, and communication). Data from 304 psychiatrists, pediatricians and internal medicine specialists were used. Each physician had data from an identical self-assessment and 8 peer assessments (38 items covering the 4 clinical domains). A total of 2306 peer assessments were available. Physicians were classified into quartiles based on mean peer assessment data and compared with self-assessment data. The analyses showed that self and peer assessment profiles were consistent across specialties and domains. Physicians assessed in the lowest and highest quartiles (i.e., <25th and >75th) by colleagues tended to rate themselves 30-40 percentile ranks higher and lower than peers, respectively. This study suggests that practicing physicians are inaccurate in assessing their own performance. These data suggest that systems to provide practicing physicians with regular and routine feedback may be appropriate if we are to ensure physicians are able to accurately assess themselves in a profession in which self-regulation is predicated upon the assumption that physicians know their capabilities and limitations.
Affiliation(s)
- Claudio Violato
- Medical Education Research Unit, University of Calgary, 3330 Hospital Dr NW, T2N 4N1, Calgary, AB, Canada.
25
Lynn DJ, Holzer C, O'Neill P. Relationships between self-assessment skills, test performance, and demographic variables in psychiatry residents. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2006; 11:51-60. [PMID: 16583284 DOI: 10.1007/s10459-005-5473-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/01/2004] [Accepted: 11/24/2005] [Indexed: 05/08/2023]
Abstract
Some researchers have seen the capacity for self-assessment in trainees as a special skill, and some reports have concluded that this skill is positively and crucially correlated with academic competence. Thus, it is believed that those trainees who are most deficient in knowledge are least likely to be aware of their limitations. Other researchers have emphasized the impact of statistical regression and other technical considerations in the studies, which have led to these conclusions. Our study used a relative-ranking design to measure the accuracy of self-assessments of both strengths and weaknesses in psychiatry residents. We analyzed the relationships between indices of self-assessment accuracy and other resident characteristics, particularly current academic strength as measured by a standard test of psychiatric knowledge. A total of 56 residents in two general psychiatry programs evaluated their performance on the Psychiatry Resident in Training Examination by estimating the rank order of their scores in the 11 psychiatry subject areas. For each resident, actual examination results were then used to generate measures of the accuracy of the identification of strengths and weaknesses. Residents' identifications of their strengths and weaknesses were significantly more accurate than chance levels. Strengths and weaknesses were identified with roughly equal proficiency, and accuracy in these assessments was not correlated to any of the following variables: academic competence as measured by examination raw scores, postgraduate year, gender, international vs. American medical education, program membership, or age. Our results do not support the hypothesis that trainees who show the least academic mastery also make the most inaccurate self-assessments. In addition, we found no resident characteristics that accounted for variation in self-assessment accuracy.
Affiliation(s)
- David J Lynn
- Department of Psychiatry, Thomas Jefferson University, 1020 Sansom Street Suite 1652, Philadelphia, Pennsylvania 19107-5004, USA.
26
Eva KW, Regehr G. Self-assessment in the health professions: a reformulation and research agenda. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2005; 80:S46-54. [PMID: 16199457 DOI: 10.1097/00001888-200510001-00015] [Citation(s) in RCA: 559] [Impact Index Per Article: 29.4] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Affiliation(s)
- Kevin W Eva
- Department of Clinical Epidemiology and Biostatistics, Program for Educational Research and Development, MDCL 3522, McMaster University, Hamilton, ON, L8N 3Z5, Canada.
27
Rees C, Shepherd M. Students' and assessors' attitudes towards students' self-assessment of their personal and professional behaviours. MEDICAL EDUCATION 2005; 39:30-39. [PMID: 15612898 DOI: 10.1111/j.1365-2929.2004.02030.x] [Citation(s) in RCA: 30] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
INTRODUCTION Previous research has demonstrated self-assessment inaccuracy in medical students. This study aims to examine students' and assessors' attitudes towards students' self-assessment of personal and professional behaviours. METHODS Twenty-eight participants (16 Year 1 medical students and 12 personal and professional development assessors) participated in 4 semistructured focus group discussions in April and May 2003. All discussions were audio-taped and transcribed verbatim and the transcripts were theme analysed independently by 2 analysts. RESULTS Assessors and students perceived accurate self-assessment to be difficult for students and feedback was deemed to be crucial in helping students develop accurate self-assessment of their personal and professional behaviours. Assessors thought that some students had unrealistically high expectations of their own performance and this was thought to be due to various factors, such as previous academic success and gender. Assessors felt that students with high expectations of their own performance exhibited difficult behaviours if they failed to achieve their expectations. Students suggested that the school and the assessors had too high a level of expectation of their personal and professional behaviours, leading them to underestimate students' performance. DISCUSSION These difficulties surrounding self-assessment accuracy support the findings reported in previous literature and suggest that medical educators should encourage students to self-assess their own performance wherever possible. These results need to be triangulated with other sources of data such as expert panels or quantitative data.
Affiliation(s)
- Charlotte Rees
- Institute of Clinical Education, Peninsula Medical School, University of Exeter, UK.
28
Regehr G. Trends in medical education research. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2004; 79:939-947. [PMID: 15383349 DOI: 10.1097/00001888-200410000-00008] [Citation(s) in RCA: 67] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
The medical education community is reflecting increasingly on the role and nature of research in the field. Useful sources of data to include in these reflections are a description of the topics in which we are investing our energies, an analysis of the extent to which there is a sense of progress on these topics, and an examination of the mechanisms by which any progress has been achieved. This article presents the results of a thematic review of the medical education research literature in four key journals since the turn of the 21st century. It describes four examples of areas in which the community appears to be investing its energies: curriculum and teaching issues, skills and attitudes relevant to the structure of the profession, individual characteristics of medical students, and the evaluation of students and residents. A discussion of the recent publications in these domains highlights a distinction between thematic categories of research, in which many members of the community are working on the same topic, and programmatic lines of research, in which members of the community are working together toward the shared goal of consensual understanding. The author suggests that community-level, programmatic lines of research are necessary to build knowledge and understanding of a domain and that, in the absence of such communal effort, the value of research is limited to the uncoordinated accrual of information.
Affiliation(s)
- Glenn Regehr
- Wilson Center for Research in Education, 200 Elizabeth Street, Eaton South 1-565, Toronto, Ontario, Canada M5G2C4.
29