1. Does 'summative' count? The influence of the awarding of study credits on feedback use and test-taking motivation in medical progress testing. Advances in Health Sciences Education: Theory and Practice 2024. PMID: 38502460. DOI: 10.1007/s10459-024-10324-4.
Abstract
Despite the increasing implementation of formative assessment in medical education, its effect on learning behaviour remains questionable. This effect may depend on how differently students value formative and summative assessments. Informed by Expectancy Value Theory, we compared the test preparation, feedback use, and test-taking motivation of medical students who took either a purely formative progress test (formative PT-group) or a progress test that yielded study credits (summative PT-group). In a mixed-methods study design, we triangulated quantitative questionnaire data (n = 264), logging data from an online PT feedback system (n = 618), and qualitative interview data (n = 21) to compare feedback use and test-taking motivation between the formative PT-group (n = 316) and the summative PT-group (n = 302). Self-reported and actual feedback consultation were higher in the summative PT-group. Test preparation and active feedback use were relatively low and similar in both groups. Both the quantitative and qualitative results showed that the motivation to prepare and to consult feedback relates to how students value the assessment. In the interview data, a link could be made with goal orientation theory: performance-oriented students perceived the formative PT as unimportant because it carried no study credits, which led to low test-taking effort and little feedback consultation after the formative PT. In contrast, learning-oriented students valued the formative PT and used it for self-study or self-assessment to gain feedback. Our results indicate that most students are less motivated to put effort into the test and to use feedback when there are no direct consequences. A supportive assessment environment that fosters recognition of the value of formative testing is required to motivate students to use feedback for learning.
2. Improving assessment of procedural skills in health sciences education: a validation study of a rubrics system in neurophysiotherapy. BMC Psychol 2024; 12:147. PMID: 38486300. PMCID: PMC10941460. DOI: 10.1186/s40359-024-01643-7.
Abstract
BACKGROUND The development of procedural skills is essential in health sciences education. Rubrics can be useful for learning and assessing these skills. To this end, a set of rubrics was developed for neurophysiotherapy maneuvers for undergraduates. Although students found the rubrics valid and useful in previous courses, analysis of the practical exam results showed a need to revise them to improve their validity and reliability, especially when used for summative purposes. After reviewing the rubrics, this paper analyzes their validity and reliability for promoting the learning of neurophysiotherapy maneuvers and for assessing the acquisition of the procedural skills they involve. METHODS In this cross-sectional psychometric study, six experts and 142 undergraduate students of a neurophysiotherapy subject at a Spanish university participated. The rubrics' validity (content and structural) and reliability (inter-rater and internal consistency) were analyzed. The students' scores on the subject's practical exam derived from applying the rubrics, as well as the difficulty and discrimination indices of the rubrics' criteria, were also determined. RESULTS The rubrics' content validity was adequate (Content Validity Index > 0.90). The rubrics showed a unidimensional structure, acceptable internal consistency (α = 0.71) and acceptable inter-rater reliability (Fleiss' κ = 0.44, ICC = 0.94). The practical exam scores covered practically the entire range of possible theoretical scores, and all criteria showed medium-low to medium difficulty indices, except for the one related to the physical therapist's position. All criteria exhibited adequate discrimination indices (rpbis > 0.39), as did the rubric as a whole (Ferguson's δ = 0.86). Students highlighted the rubrics' usefulness for learning the maneuvers, as well as their validity and reliability for formative and summative assessment. CONCLUSIONS The revised rubrics constitute a valid and reliable instrument for evaluating the execution quality of neurophysiotherapy maneuvers from a summative evaluation viewpoint. This study facilitates the development of rubrics aimed at promoting different practical skills in health sciences education.
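As a rough illustration of two of the index families reported above, here is a minimal sketch, using hypothetical dichotomous rubric scores rather than the study's data, of how internal consistency (Cronbach's alpha) and per-criterion difficulty and discrimination (uncorrected point-biserial) indices can be computed:

```python
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def point_biserial(item, total):
    """Simple (uncorrected) item-total correlation; Pearson r with a 0/1 item."""
    return np.corrcoef(item, total)[0, 1]

# Hypothetical data: 142 students x 6 rubric criteria, each scored pass/fail (1/0)
rng = np.random.default_rng(0)
ability = rng.normal(size=142)
scores = (ability[:, None] + rng.normal(scale=1.0, size=(142, 6)) > 0).astype(float)

print(f"alpha = {cronbach_alpha(scores):.2f}")
totals = scores.sum(axis=1)
for i in range(scores.shape[1]):
    print(f"criterion {i}: difficulty = {scores[:, i].mean():.2f}, "
          f"r_pbis = {point_biserial(scores[:, i], totals):.2f}")
```

A corrected item-total correlation (excluding the item from the total) is often preferred in practice; the simple form is shown here only to make the index concrete.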
3. Summative Evaluation of Vaginal Surgery Skills: Setting a Pass-Fail Score. Int Urogynecol J 2024; 35:451-456. PMID: 38206339. DOI: 10.1007/s00192-023-05717-9.
Abstract
INTRODUCTION AND HYPOTHESIS We developed a summative assessment tool to evaluate competent performance on three procedure-specific low-fidelity simulation models for vaginal surgery. Our purpose was to determine a pass-fail score for each model. METHODS We enrolled participants (2011-2023, three Canadian academic centers) and grouped them according to operative competency in vaginal procedures. Novice operators were medical students recruited through targeted advertisement to clerkship-level medical students. Proficient operators consisted of gynecology residents from the intervention arm of a randomized controlled trial who had been trained to competence in the use of the models, as well as urogynecology fellows and attending gynecologic surgeons recruited through departmental rounds. All participants performed the three procedures on the models and were videotaped, and their performance was assessed by evaluators familiar with the procedure and the scoring system and blinded to operator identity. A total performance score (range 0-400) assessed timing and errors. Basic skill deductions were set a priori. We calculated sensitivity and specificity and obtained an optimal cutoff based on Youden's J statistic. RESULTS For anterior repair, we rated 46 novice and 16 proficient videos; the pass-fail score was 170/400. For posterior repair, we rated 54 novice and 14 proficient videos; the pass-fail score was 140/400. For vaginal hysterectomy, we rated 47 novice and 12 proficient videos; the pass-fail score was 180/400. Scores of proficient operators were significantly better than those of novice participants (p < 0.001 for all). CONCLUSIONS A pass-fail score can distinguish between novice and proficient operators and can be used for summative assessment of surgical skill.
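The cutoff-selection step generalizes directly: Youden's J = sensitivity + specificity - 1 is computed at every candidate cutoff, and the maximizing score is taken as the pass-fail threshold. A minimal sketch with hypothetical scores on the study's 0-400 scale (invented values, not the study's data):

```python
import numpy as np

def youden_cutoff(novice, proficient):
    """Pick the pass-fail score maximizing J = sensitivity + specificity - 1.

    'Positive' = proficient: sensitivity is the share of proficient operators
    at or above the cutoff; specificity is the share of novices below it.
    """
    best_j, best_cut = -1.0, None
    for cut in np.unique(np.concatenate([novice, proficient])):
        sens = np.mean(proficient >= cut)
        spec = np.mean(novice < cut)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Hypothetical performance scores (0-400 scale), group sizes as in anterior repair
rng = np.random.default_rng(1)
novice = np.clip(rng.normal(120, 40, size=46), 0, 400)
proficient = np.clip(rng.normal(250, 50, size=16), 0, 400)
cut, j = youden_cutoff(novice, proficient)
print(f"pass-fail score ~ {cut:.0f}/400 (J = {j:.2f})")
```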
4. Effective Feedback Strategy for Formative Assessment in an Integrated Medical Neuroscience Course. Medical Science Educator 2023; 33:747-753. PMID: 37501810. PMCID: PMC10368590. DOI: 10.1007/s40670-023-01801-3.
Abstract
Purpose Despite the various benefits of formative assessment in an integrated medical curriculum, effective strategies for providing feedback that let medical students realize those benefits are not fully understood. This study aims to determine the effect of different formative feedback strategies on students' outcomes in a medical neuroscience course. Method We compared medical students' performance on summative examinations across academic years in which formative feedback was provided by in-person discussion, by written rationales, or by a combination of the two. At the end of each course, we also surveyed medical students on whether written or in-person formative feedback is the better strategy. Results ANOVA found a significant difference in summative performance scores among students scoring ≥ 70% depending on whether formative feedback was provided by written rationale, in person, or by a combination of both (F(2,80) = 247.60, P < 0.001). Post hoc analysis revealed the highest performance when feedback was provided using the written rationale approach (P < 0.05), followed by in-person discussion (P < 0.05); the lowest performance was recorded when formative feedback combined a written rationale for the answers with in-person discussion of the questions (P < 0.05). Students' preference for receiving formative feedback was highest for written rationales (P < 0.05), followed by in-person discussion or a combination of in-person discussion and written rationales (P < 0.05). Conclusion Medical students preferred a written formative feedback approach, which was also associated with better performance on the summative examination. This study highlights the importance of developing effective formative feedback strategies so that students can fully benefit from formative assessment in an integrated medical school curriculum.
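For readers who want to replay this kind of three-group comparison, a minimal sketch of a one-way ANOVA with a pairwise post hoc follow-up, using made-up score distributions rather than the study's data (the abstract does not state which post hoc procedure was used; Bonferroni-corrected Welch t-tests are assumed here):

```python
import numpy as np
from scipy import stats

# Hypothetical summative scores (%) for students scoring >= 70,
# one group per feedback strategy (illustrative values only)
rng = np.random.default_rng(2)
written = rng.normal(88, 4, size=28)     # written rationale
in_person = rng.normal(84, 4, size=28)   # in-person discussion
combined = rng.normal(80, 4, size=27)    # rationale + in-person

f_stat, p_value = stats.f_oneway(written, in_person, combined)
df_within = len(written) + len(in_person) + len(combined) - 3
print(f"F(2, {df_within}) = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc: pairwise Welch t-tests with Bonferroni correction (3 comparisons)
pairs = {"written vs in-person": (written, in_person),
         "written vs combined": (written, combined),
         "in-person vs combined": (in_person, combined)}
for name, (a, b) in pairs.items():
    t, p = stats.ttest_ind(a, b, equal_var=False)
    print(f"{name}: t = {t:.2f}, corrected p = {min(p * 3, 1.0):.4f}")
```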
5. Effective formative assessment for pharmacy students in Thailand: lesson learns from a school of pharmacy in Thailand. BMC Medical Education 2023; 23:300. PMID: 37131144. PMCID: PMC10152769. DOI: 10.1186/s12909-023-04232-1.
Abstract
INTRODUCTION Formative assessment (FA) is an assessment concept of considerable interest in education, and the Doctor of Pharmacy program is one in which FA is commonly implemented. This study aimed to describe the correlation between FA scores and summative assessment (SA) scores and to suggest possible key success factors that affect the effectiveness of FA. METHODS This study employed a retrospective design using mixed methods for data collection. Data from semesters 1/2020 and 2/2020 of the Doctor of Pharmacy curriculum at a Thai pharmacy school were used. Three sets of data were gathered: course information (e.g., FA methods, FA scores, and SA scores) from 38 records, self-reports from 326 students and 27 teachers, and 5 focus group discussions. The quantitative data were analyzed using descriptive statistics and Pearson correlation, while the qualitative data were analyzed using a content analysis framework. RESULTS The analysis revealed five main FA methods: individual quizzes, individual reports, individual skill assessments, group presentations, and group reports. Of the 38 courses, 29 (76.32%) showed significant correlations between FA and SA scores at p < 0.05. The individual FA score was related to the correlation coefficient of the courses (p = 0.007), but the group FA score was not (p = 0.081). In addition, only the frequency of individual quizzes had a significant effect on the correlation coefficient. The key success factors affecting the effectiveness of FA fell into six themes: appropriate methods, effective reflection, frequency of assessment, appropriate scoring, an adequate support system, and teacher knowledge management. CONCLUSION Courses that used individual FA methods showed a significant correlation between FA and SA scores, while those that used group FA methods did not. The key success factors identified in this study were appropriate assessment methods, frequency of assessment, effective feedback, appropriate scoring, and a proper support system.
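The core quantitative step, correlating each course's FA scores with its SA scores, is a plain Pearson correlation. A minimal sketch with hypothetical per-student scores from a single course (variable names and values are illustrative, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical per-student scores for one course
rng = np.random.default_rng(3)
formative = rng.uniform(40, 100, size=60)                  # FA score (%)
summative = 0.6 * formative + rng.normal(0, 12, size=60)   # SA score (%)

r, p = stats.pearsonr(formative, summative)
print(f"r = {r:.2f}, p = {p:.4f}")  # p < 0.05 would count as a significant FA-SA correlation
```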
6. Assessment methods in laparoscopic colorectal surgery: a systematic review of available instruments. Int J Colorectal Dis 2023; 38:105. PMID: 37074421. PMCID: PMC10115727. DOI: 10.1007/s00384-023-04395-9.
Abstract
BACKGROUND Laparoscopic surgery has become the gold standard for many procedures, requiring new skills and training methods. The aim of this review is to appraise the literature on assessment methods for laparoscopic colorectal procedures and to evaluate these methods for implementation in surgical training. MATERIALS AND METHODS The PubMed, Embase and Cochrane Central Register of Controlled Trials databases were searched in October 2022 for studies reporting learning and assessment methods for laparoscopic colorectal surgery. Quality was scored using the Downs and Black checklist. Included articles were categorized into procedure-based and non-procedure-based assessment methods. A second distinction was made between suitability for formative and/or summative assessment. RESULTS Nineteen studies were included in this systematic review. These studies showed large heterogeneity despite categorization. The median quality score was 15 (range 0-26). Fourteen studies were categorized as procedure-based assessment methods (PBA) and five as non-procedure-based assessment methods. Three studies were applicable to summative assessment. CONCLUSIONS The results show considerable diversity in assessment methods, with varying quality and suitability. To prevent a sprawl of assessment methods, we argue for the selection and further development of available high-quality assessment methods. A procedure-based structure combined with an objective assessment scale and the possibility of summative assessment should be the cornerstones.
7. Exploring Academic Performance of Medical Students in an Integrated Hybrid Curriculum by Gender. Medical Science Educator 2023; 33:353-357. PMID: 37261018. PMCID: PMC10226948. DOI: 10.1007/s40670-023-01743-w.
Abstract
Gender gaps in academic performance have been reported at a variety of educational levels, including on several national standardized exams for medical education, with men scoring higher than women. These gaps potentially impact medical school acceptance and residency matching, and they may be influenced by curricular design. Performance data for our 4-year integrated hybrid curriculum, which features a large proportion of active learning, revealed a gender gap, with men performing better early in the curriculum and on the first national standardized exam. This performance gap almost entirely disappeared in years 2-4 of the curriculum and on the second national standardized exam.
8. Dialogic Problematization of Academic Integrity Education. Integr Psychol Behav Sci 2022. PMID: 36109432. DOI: 10.1007/s12124-022-09722-3.
Abstract
Many university educators have argued for a need for academic integrity education as an alternative to a focus on students' and scholars' compliance with academic rules and conventions (Brimble, 2016; Christensen Hughes & Bertram-Gallant, 2016; Hutton, 2006). I argue that the universal ethical-moral discourse of academic integrity disciplines subjects to comply with frequently alienating academic practices. This ethical discourse focuses on individual responsibility, in turn rendering invisible the authority of sometimes dysfunctional and oppressive instructional and summative assessment practices. Taking a Bakhtinian dialogic authorial perspective, the paper calls on students, scholars, instructors, and academic advisors to engage in critical ontological dialogue on diverse responses and motivations in regard to academic demands and deeds. Dialogue on situated instead of universal ethics in academic settings contextualizes and problematizes not just individual actions but also the ethics of the summative assessment regime, the instruction, the curriculum, authority dynamics, and the educational system as a whole. This discussion on academic integrity violations calls on educators to consider the ethical value of separating summative assessment from instruction.
9. Development of a simulation technical competence curriculum for medical simulation fellows. Advances in Simulation (London, England) 2022; 7:24. PMID: 35945638. PMCID: PMC9361680. DOI: 10.1186/s41077-022-00221-4.
Abstract
Background and needs Medical educators with simulation fellowship training have a unique skill set. Simulation fellowship graduates are able to handle basic and common troubleshooting issues with simulation software, hardware, and equipment setup. Outside of formal training programs, these simulation skills are inconsistently taught and organically learned. This is important to address because expectations of medical educators who complete simulation fellowships are high. To fill the gap, we offer one way of teaching and assessing simulation technical skills within a fellowship curriculum and reflect on lessons learned throughout the process. This report describes the instructional design, implementation, and program evaluation of an educational intervention: a simulation technology curriculum for simulation fellows. Curriculum design The current iteration of the simulation technical skill curriculum was introduced in 2018 and took approximately 8 months to develop under the guidance of expert simulation technology specialists, simulation fellowship-trained faculty, and simulation center administrators. Kern's six steps to curriculum development was used as the guiding conceptual framework. The curriculum was organized into four domains, which emerged from a qualitative needs assessment. Instructional sessions occurred on 5 days spanning a 2-week block, and the final session concluded with summative testing. Program evaluation Fellows took summative objective structured exams at three stations, with performance rated by instructors using station-specific checklists. Scores approached 100% accuracy/completion for all stations. Conclusions This evidence-based educational intervention, a simulation technical skill curriculum, was highly regarded by participants and effectively trained the simulation fellows. The curriculum serves as a template for other simulationists to implement formal training in simulation technical skills.
10. Assessing medical students' perception and educational experience during COVID-19 pandemic. Ir J Med Sci 2022. PMID: 35908145. PMCID: PMC9362516. DOI: 10.1007/s11845-022-03118-3.
Abstract
INTRODUCTION The COVID-19 pandemic has significantly impacted the traditional delivery of medical education. Medical education programmes have had to cope with limitations on face-to-face learning and accelerate the adoption of digital learning. In addition, the pandemic has potentially serious implications for the psychological well-being of medical students. We aimed to assess the changes in medical students' perceptions and experiences as a consequence of this pandemic. METHODS A cross-sectional survey of medical students at Trinity College Dublin (TCD) was performed between March and April 2022. The survey explored student satisfaction with the current education programme, teaching delivery, and the impact of COVID-19 on education and student well-being. RESULTS 175 medical students participated in the survey. Overall, the majority of students were happy or neutral about their medical education. 93 (53.1%) felt tutorials and problem-based learning (PBL) were the most effective teaching methods, while 85 participants (48.6%) favoured hybrid learning and 78 (44.6%) favoured laboratory and clinical placements. There was a mixed reaction to the pandemic-driven changes in the delivery of education: 67 participants (40.6%) were happy with the changes, 64 (38.8%) were neutral, and only 34 (20.6%) were unhappy. However, most participants felt the pandemic negatively impacted their mental health, with 96 (55.8%) reporting negative responses. 58% of participants (n = 102/175) reported using the student support services on the university campus, and 49% (n = 50) were satisfied with those services. CONCLUSION Digital content and delivery confer the benefits of greater flexibility in learning and the ability to learn at one's own pace and in a preferred environment, but they lack the advantages of bedside teaching and hands-on training. Our findings reinforce the potential advantages of online learning.
11. Assessment in mathematics: a study on teachers' practices in times of pandemic. ZDM: The International Journal on Mathematics Education 2022; 55:221-233. PMID: 35880203. PMCID: PMC9298165. DOI: 10.1007/s11858-022-01395-x.
Abstract
Lockdowns imposed by many countries on their populations at the beginning of the COVID-19 crisis forced teachers to adapt quickly and without adequate preparation to distance teaching. In this paper, we focus on one of the most formidable challenges that teachers faced during the lockdowns and even in the post-lockdown emergency period, namely, developing assessment that maintains the pedagogical continuity that educational institutions typically require. Based on the results of a previous study, focused on the analysis of answers to an open-ended questionnaire administered to a population of 700 teachers from France, Germany, Israel and Italy, a semi-structured interview series was designed and implemented by the authors of this paper with a small group of teachers. The transcripts of these interviews were analysed according to the interpretative phenomenological analysis methodology, with the aim of investigating teachers' own perspectives on the following: (a) the difficulties with which they had to contend, with respect to the question of assessment; (b) the techniques adopted to deal with these difficulties; and (c) the ways in which the lockdown experience could affect the future evolution of teachers' assessment practices. This analysis supported us in formulating hypotheses concerning the possible long-term effects of lockdown on modes of assessment in mathematics.
12. Feasibility of radiology online structured oral examination for undergraduate medical students. Insights Imaging 2022; 13:120. PMID: 35849259. PMCID: PMC9289656. DOI: 10.1186/s13244-022-01258-9.
Abstract
Background Online summative assessment emerged during the COVID-19 pandemic as an alternative to traditional examinations, bringing both opportunities and challenges. This study aims to evaluate the feasibility and effectiveness of the online structured oral examination (SOE) in radiology clerkships. It identifies measures taken to implement the online SOE successfully and to minimize the chances of cheating, and it discusses the challenges encountered and how they were addressed. Methods SOE percent scores of fourth-year medical students from two institutions were correlated with students' grade point averages (GPA). Scores were compared among institutions, students' genders, students' batches, examination versions, and examiners with different experience levels. Students' perceived satisfaction and concerns were captured using an anonymous self-administered questionnaire. Technical problems and the success rate of SOE implementation were recorded. Results were analyzed using descriptive and inferential statistics. Results A total of 79 students participated in the study, of whom 81.0% (n = 64) responded to the survey. SOE scores showed a weak positive correlation with students' GPAs (r = 0.22, p = .09). Scores showed no significant difference between the two institutions or between genders, nor between students examined by junior or senior examiners. All but one version of the examination showed no significant difference in students' scores, and no significant difference was observed between each two subsequent batches exposed to the same examination version. Conclusion The online summative SOE is a feasible alternative whenever a face-to-face SOE cannot be implemented, provided that appropriate measures are taken to ensure its successful execution.
13. Can a high-fidelity simulation tutorial improve written examination results? Review of a change in teaching practice. British Journal of Nursing 2022; 31:704-708. PMID: 35797086. DOI: 10.12968/bjon.2022.31.13.704.
Abstract
BACKGROUND Undergraduate nursing students prefer technology-based learning. Simulation has been used in nursing education to support skills acquisition and clinical exposure, raising the question of whether high-fidelity simulation (HFS) can be used to teach tutorial content that prepares students for a written examination. AIMS To design and pilot an HFS tutorial. METHOD 203 second-year undergraduate nursing students were timetabled to attend an HFS tutorial. Examination results at first attempt were compared with the previous cohort's results. RESULTS 81% of the students from the HFS tutorial cohort passed at the first attempt, compared with 85% from the previous cohort. CONCLUSION The HFS tutorial needs further development, incorporating simulation standards, before its ability to improve students' written examination results can be fully assessed. Students found the post-simulation discussion difficult and wanted guidance on how to participate. Involving the university's skills and simulation team in design and facilitation is recommended for future cohorts.
14.
Abstract
BACKGROUND Currently, there is significant variability in the development, implementation and overarching goals of video review for the assessment of surgical performance. METHODS This paper evaluates the current methods by which video review is used to evaluate surgical performance and identifies which processes are critical for successful, widespread implementation of video-based assessment. RESULTS Despite advances in video capture technology and growing interest in video-based assessment, there is a notable gap in the implementation and longitudinal use of formative and summative assessment using video. CONCLUSION Validity, scalability and discoverability are current but removable barriers to video-based assessment.
15. Post-exam feedback with question rationales improves re-test performance of medical students on a multiple-choice exam. Advances in Health Sciences Education: Theory and Practice 2018; 23:995-1003. PMID: 30043313. DOI: 10.1007/s10459-018-9844-z.
Abstract
This study compared the effects of two types of delayed feedback (correct response only, or correct response plus rationale) provided to students by a computer-based testing system following an exam. The preclinical medical curriculum at the University of Kansas Medical Center uses a two-exam system for summative assessments in which students test, revisit the material, and then re-test (same content, different questions), with the higher score used to determine the students' grades. Using a quasi-experimental design and data collected during the normal course of instruction, test and re-test scores from midterm multiple-choice examinations were compared between academic year (AY) 2015-2016, when delayed feedback consisted of the correct answer only, and AY 2016-2017, when delayed feedback consisted of the correct answer plus a rationale. The average increase in score on the re-test was 2.29 ± 6.83% (n = 192) with the correct answer only and 3.92 ± 7.12% (n = 197) with rationales (p < 0.05). The effect of the rationales did not differ among students of differing academic ability, based on entering composite MCAT scores or Year 1 GPA. Thus, delayed feedback with exam question rationales produced a greater increase in exam score between the test and re-test than feedback with the correct response only. This finding suggests that delayed elaborative feedback on a summative exam produces a small but significant improvement in learning in medical students.
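The reported group comparison can be replayed from the summary statistics alone. A minimal sketch assuming a two-sided, equal-variance t-test (the abstract does not state which test was used):

```python
from scipy import stats

# Summary statistics as reported in the abstract: mean gain, SD, n per cohort
t, p = stats.ttest_ind_from_stats(
    mean1=2.29, std1=6.83, nobs1=192,   # AY 2015-2016: correct response only
    mean2=3.92, std2=7.12, nobs2=197,   # AY 2016-2017: correct response + rationale
)
print(f"t = {t:.2f}, p = {p:.3f}")  # p ~ 0.02, consistent with the reported p < 0.05
```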
16. OSCE as a Summative Assessment Tool for Undergraduate Students of Surgery - Our Experience. Indian J Surg 2017; 79:534-538. PMID: 29217905. DOI: 10.1007/s12262-016-1521-y.
Abstract
The traditional examination has inherent deficiencies. The Objective Structured Clinical Examination (OSCE) is considered a method of assessment that may overcome many of them, and it is increasingly used worldwide in various medical specialities for formative and summative assessment. Although OSCE is used in various disciplines in our country as well, its use in general surgery is scarce. We report our experience of using OSCE to assess undergraduate students appearing for their pre-professional examination in general surgery. In our experience, both faculty and students considered OSCE a better assessment tool than the traditional method of examination, and it was acceptable to students and faculty alike. Conducting an OSCE for the assessment of general surgery students is feasible.
17. Changing the culture of assessment: the dominance of the summative assessment paradigm. BMC Medical Education 2017; 17:73. PMID: 28454581. PMCID: PMC5410042. DOI: 10.1186/s12909-017-0912-5.
Abstract
BACKGROUND Despite growing evidence of the benefits of including assessment for learning strategies within programmes of assessment, practical implementation of these approaches is often problematical. Organisational culture change is often hindered by personal and collective beliefs which encourage adherence to the existing organisational paradigm. We aimed to explore how these beliefs influenced proposals to redesign a summative assessment culture in order to improve students' use of assessment-related feedback. METHODS Using the principles of participatory design, a mixed group comprising medical students, clinical teachers and senior faculty members was challenged to develop radical solutions to improve the use of post-assessment feedback. Follow-up interviews were conducted with individual members of the group to explore their personal beliefs about the proposed redesign. Data were analysed using a socio-cultural lens. RESULTS Proposed changes were dominated by a shared belief in the primacy of the summative assessment paradigm, which prevented radical redesign solutions from being accepted by group members. Participants' prior assessment experiences strongly influenced proposals for change. As participants had largely only experienced a summative assessment culture, they found it difficult to conceptualise radical change in the assessment culture. Although all group members participated, students were less successful at persuading the group to adopt their ideas. Faculty members and clinical teachers often used indirect techniques to close down discussions. The strength of individual beliefs became more apparent in the follow-up interviews. CONCLUSIONS Naïve epistemologies and prior personal experiences were influential in the assessment redesign but were usually not expressed explicitly in a group setting, perhaps because of cultural conventions of politeness. In order to successfully implement a change in assessment culture, firmly-held intuitive beliefs about summative assessment will need to be clearly understood as a first step.
18. Do coursework summative assessments predict clinical performance? A systematic review. BMC Medical Education 2017; 17:40. PMID: 28209159. PMCID: PMC5314623. DOI: 10.1186/s12909-017-0878-3.
Abstract
BACKGROUND Two goals of summative assessment in health profession education programs are to ensure the robustness of high-stakes decisions such as progression and licensing, and to predict future performance. This systematic and critical review investigates the ability of specific modes of summative assessment to predict the clinical performance of health profession education students. METHODS The PubMed, CINAHL, SPORTDiscus, ERIC and EMBASE databases were searched using key terms, and the retrieved articles were subjected to dedicated inclusion criteria. Rigorous exclusion criteria were applied to ensure a consistent interpretation of 'summative assessment' and 'clinical performance'. Data were extracted using a pre-determined format, and papers were critically appraised by two independent reviewers using a modified Downs and Black checklist, with the level of agreement between reviewers determined through a kappa analysis. RESULTS Of the 4783 studies retrieved by the search strategy, 18 were included in the final review. Twelve were from medicine, and one each from physiotherapy, pharmacy, dietetics, speech pathology, dentistry and dental hygiene. Objective Structured Clinical Examinations featured in 15 papers, written assessments in four, and problem-based learning evaluations, case-based learning evaluations and student portfolios each featured in one paper. Sixteen different measures of clinical performance were used. Two papers were identified as 'poor' quality and the remainder categorised as 'fair', with an almost perfect (κ = 0.852) level of agreement between raters. Objective Structured Clinical Examination scores accounted for 1.4-39.7% of the variance in student performance; multiple choice/extended matching questions and short answer written examinations accounted for 3.2-29.2%; problem-based or case-based learning evaluations accounted for 4.4-16.6%; and student portfolios accounted for 12.1%. CONCLUSIONS Objective Structured Clinical Examinations and written examinations consisting of multiple choice/extended matching questions and short answer questions do have significant relationships with the clinical performance of health professional students. However, caution should be applied when using these assessments as predictive measures of clinical performance, given the small body of evidence and the large variation in the predictive strength of the relationships identified. Based on the current evidence, the Objective Structured Clinical Examination may be the most appropriate summative assessment for identifying students at risk of poor performance in a clinical workplace environment. Further research is needed to improve the strength of the predictive relationship.
19. Flipped clinical training: a structured training method for undergraduates in complete denture prosthesis. Korean Journal of Medical Education 2016; 28:333-342. PMID: 27907980. PMCID: PMC5138569. DOI: 10.3946/kjme.2016.39.
Abstract
PURPOSE To design and implement flipped clinical training for undergraduate dental students in removable complete denture treatment, and to estimate its effectiveness by comparing the assessment results of students trained by the flipped and traditional methods. METHODS Flipped training was designed by shifting the learning from the clinics to the learning center (phase I) while preserving the practice in the clinics (phase II). In phase I, a student-faculty interactive session was arranged to recap prior knowledge. This was followed by an audio-synchronized video demonstration of the procedure, viewable repeatedly, and a subsequent display of possible errors that may occur during treatment, with guidelines for overcoming them. In phase II, a live demonstration of the procedure was given, and students were asked to treat three patients under an instructor's supervision. The summative assessment applied the same checklist criteria and rubric scoring used for the traditional method. Assessment results of three batches of students trained by the flipped method (study group) and three traditionally trained previous batches (control group) were compared using a chi-square test. RESULTS Comparing the pooled counts of students who prepared acceptable dentures (scores 2 and 3) versus unacceptable dentures (score 1) across the three traditionally trained batches and the three flipped-trained batches revealed that the number of students who demonstrated competency by preparing acceptable dentures was higher with flipped training (χ2 = 30.996, p < 0.001). CONCLUSION The results indicate the superiority of flipped training in enhancing students' competency; it is therefore recommended for training in various clinical procedures.
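The reported comparison is a chi-square test of independence on a 2x2 contingency table (training method x denture acceptability). A minimal sketch with invented counts, since the abstract reports only the test statistic, not the table:

```python
from scipy import stats

# Hypothetical pooled counts: rows = training method,
# columns = acceptable (score 2 or 3) vs unacceptable (score 1) dentures
table = [[118, 32],   # flipped training
         [ 78, 70]]   # traditional training

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.2e}")
```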
20. Factors influencing students' receptivity to formative feedback emerging from different assessment cultures. Perspectives on Medical Education 2016; 5:276-284. PMID: 27650373. PMCID: PMC5035283. DOI: 10.1007/s40037-016-0297-x.
Abstract
INTRODUCTION Feedback after assessment is essential to support the development of optimal performance, but often fails to reach its potential. Although different assessment cultures have been proposed, the impact of these cultures on students' receptivity to feedback is unclear. This study aimed to explore factors which aid or hinder receptivity to feedback. METHODS Using a constructivist grounded theory approach, the authors conducted six focus groups in three medical schools, in three separate countries, with different institutional approaches to assessment, ranging from a traditional summative assessment structure to a fully implemented programmatic assessment system. The authors analyzed data iteratively, then identified and clarified key themes. RESULTS Helpful and counterproductive elements were identified within each school's assessment system. Four principal themes emerged. Receptivity to feedback was enhanced by assessment cultures which promoted students' agency, by the provision of authentic and relevant assessment, and by appropriate scaffolding to aid the interpretation of feedback. Provision of grades and comparative ranking provided a helpful external reference but appeared to hinder the promotion of excellence. CONCLUSIONS This study has identified important factors emerging from different assessment cultures which, if addressed by programme designers, could enhance the learning potential of feedback following assessments. Students should be enabled to have greater control over assessment and feedback processes, which should be as authentic as possible. Effective long-term mentoring facilitates this process. The trend of curriculum change towards constructivism should now be mirrored in the assessment processes in order to enhance receptivity to feedback.
21. Strategies for increasing the feasibility of performance assessments during competency-based education: Subjective and objective evaluations correlate in the operating room. Am J Surg 2016; 214:365-372. PMID: 27634423. DOI: 10.1016/j.amjsurg.2016.07.017.
Abstract
BACKGROUND Competency-based education necessitates assessments that determine whether trainees have acquired specific competencies. The evidence on the ability of internal raters (staff surgeons) to provide accurate assessments is mixed; however, this has not yet been directly explored in the operating room. This study's objective was to compare the ratings given by internal raters with those of an expert external rater (independent of the training process) in the operating room. METHODS Raters assessed general surgery residents' technical and nontechnical performance during laparoscopic cholecystectomy. RESULTS Fifteen cases were observed. There was a moderately positive correlation (rs = .618, P = .014) for technical performance and a strongly positive correlation (rs = .731, P = .002) for nontechnical performance. The internal raters were less stringent for technical (mean rank 3.33 vs 8.64, P = .007) and nontechnical (mean rank 3.83 vs 8.50, P = .01) performance. CONCLUSIONS This study provides evidence to help operationalize competency-based assessments.
22. Evaluation of marking of peer marking in oral presentation. Perspectives on Medical Education 2016; 5:103-107. PMID: 26951165. PMCID: PMC4839009. DOI: 10.1007/s40037-016-0254-8.
Abstract
BACKGROUND Peer marking is an important skill for students, helping them to understand the process of learning and assessment. The method is increasingly used in medical education, particularly in formative assessment, but it is not widely adopted in summative assessment because many teachers are concerned that students will mark their peers in a biased way. OBJECTIVE The aim of this study was to investigate whether scoring the quality of students' peer marking can improve the reliability of peer marking in summative assessment. METHODS In a retrospective analysis, the peer-marking results of a summative assessment of oral presentations in two cohorts of students were compared. One group of students was told that their peer marks would be assessed against a benchmark consisting of the average of the examiner marks, and that these scores, together with the peer and examiner marks, would form their final exam results. The other group was informed only that their final exam results would be determined from the examiner and peer marks. RESULTS Based on examiner marks, both groups of students performed similarly in their summative assessment; agreement between student markers was less consistent and more polarized than between examiners. Compared with the examiners, students who were told that their peer marking would be scored were more generous markers (their average peer mark was 2.4 percentage points higher than the average examiner mark), while students who were not scored on their marking were rather harsh markers (their average peer mark was 4.2 percentage points lower than the average examiner mark), with the scoring of top-performing students most affected. CONCLUSIONS Scoring peer marking had a small effect on students' marking conduct in the summative assessment of oral presentations but possibly indicated a more balanced marking performance.
23.
Abstract
Progress testing in the Netherlands has a long history. It was first introduced at one medical school, which had a problem-based learning (PBL) curriculum from the start; later, other schools with and without PBL curricula joined. At present, approximately 10,000 students sit a test every three months. The annual progress exam is not a single test: it consists of a series of four tests per year that are summative in the end. The current situation is discussed, with emphasis on the formative and summative aspects. The reader will gain insight into how progress testing can be used as feedback for students and schools.