1
Bernard J, Sonnadara R, Saraco AN, Mitchell JP, Bak AB, Bayer I, Wainman BC. Automated grading of anatomical objective structured practical examinations using decision trees: An artificial intelligence approach. Anatomical Sciences Education 2024; 17:967-978. [PMID: 37322819] [DOI: 10.1002/ase.2305]
Abstract
An Objective Structured Practical Examination (OSPE) is an effective and robust, but resource-intensive, means of evaluating anatomical knowledge. Since most OSPEs employ short answer or fill-in-the-blank style questions, the format requires many people familiar with the content to mark the examinations. However, the increasing prevalence of online delivery for anatomy and physiology courses could result in students losing the OSPE practice that they would receive in face-to-face learning sessions. The purpose of this study was to test the accuracy of Decision Trees (DTs) in marking OSPE questions as a first step to creating an intelligent, online OSPE tutoring system. The study used the results of the winter 2020 semester final OSPE from McMaster University's anatomy and physiology course in the Faculty of Health Sciences (HTHSCI 2FF3/2LL3/1D06) as the data set. Ninety percent of the data set was used in a 10-fold validation algorithm to train a DT for each of the 54 questions. Each DT comprised unique words that appeared in correct, student-written answers. The remaining 10% of the data set was marked by the generated DTs. When the answers marked by the DTs were compared to the answers marked by staff and faculty, the DTs achieved an average accuracy of 94.49% across all 54 questions. This suggests that machine learning algorithms such as DTs are a highly effective option for OSPE grading and are suitable for the development of an intelligent, online OSPE tutoring system.
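As a rough illustration of the approach described above (one classifier per question, with features drawn from the words in student-written answers), the following sketch builds a bag-of-words decision tree with scikit-learn. The answers, labels, and parameters are invented placeholders, not the study's data or actual implementation.

```python
# Minimal sketch of per-question decision-tree marking, assuming scikit-learn.
# All answers and labels below are invented placeholders, not study data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Student answers to one hypothetical OSPE question, with staff marks (1 = correct).
answers = ["deltoid", "the deltoid muscle", "deltoid m.",
           "trapezius", "biceps brachii", "pectoralis major"]
labels = [1, 1, 1, 0, 0, 0]

# Binary bag-of-words features over the words appearing in answers,
# fed to a decision tree that learns which words mark a correct answer.
grader = make_pipeline(CountVectorizer(binary=True),
                       DecisionTreeClassifier(random_state=0))

# The study used 10-fold cross-validation per question; this toy set
# only supports 3 folds, which suffices to show the mechanics.
print(cross_val_score(grader, answers, labels, cv=3).mean())

grader.fit(answers, labels)
print(grader.predict(["deltoid muscle"]))  # expected: [1]
```

In the setup the abstract describes, each of the 54 questions would get its own vectorizer and tree, trained on the staff-marked answers for that question.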
Affiliation(s)
- Jason Bernard
- Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Ranil Sonnadara
- Department of Surgery, McMaster University, Hamilton, Ontario, Canada
- Anthony N Saraco
- Education Program in Anatomy, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Josh P Mitchell
- Education Program in Anatomy, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Alex B Bak
- Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Ilana Bayer
- Education Program in Anatomy, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Department of Pathology and Molecular Medicine, McMaster University, Hamilton, Ontario, Canada
- Bruce C Wainman
- Education Program in Anatomy, Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Department of Pathology and Molecular Medicine, McMaster University, Hamilton, Ontario, Canada
2
Bond AP, Butts T, Tierney CM. Spot(ters) the difference: Bringing traditional anatomical examinations online. Clinical Anatomy 2024; 37:284-293. [PMID: 37409502] [DOI: 10.1002/ca.24092]
Abstract
The COVID-19 pandemic caused a shift in anatomy education, forcing institutions to find innovative ways to teach and assess online. This study details the development of an online spotter across multiple modules that allowed students to sit the examination at home while still maintaining the integrity of the assessment. The online spotter consisted of individual Zoom calls between students and examiners in which slides with images and questions were screen-shared. To examine the viability of this spotter in non-lockdown scenarios, several parameters were considered. Mean marks were compared to those of traditional versions, and Pearson's r correlation coefficients were calculated between online and traditional spotters and between online spotters and overall performance in anatomy modules. A survey was carried out to determine the students' view of the assessment. Pearson's r was between 0.33 and 0.49 when comparing online spotters to the traditional format, and between 0.65 and 0.75 (p < 0.01) when compared to a calculated anatomy score. The survey indicated overall student satisfaction: 82.5% reported that it was a fair way to test their knowledge, and 55% reported the same or lower levels of anxiety compared to traditional spotters. However, there was nothing to indicate that the students preferred this format over laboratory-based spotters. These results indicate that this new exam format would be useful for small cohorts undertaking online or hybrid courses, or in circumstances when running a full spotter is too costly, and that it represents a fair and robust way to assess practical anatomical knowledge online.
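For readers wanting to run the same comparison on their own cohort data, Pearson's r reduces to a one-line call. A minimal sketch with invented mark vectors (not the study's data), assuming SciPy is available:

```python
# Toy sketch: Pearson's r between online spotter marks and an anatomy score.
# Both mark vectors are invented for illustration only.
from scipy.stats import pearsonr

online_spotter = [62, 71, 55, 80, 68, 74, 59, 77]
anatomy_score = [58, 75, 50, 82, 65, 70, 61, 79]

r, p = pearsonr(online_spotter, anatomy_score)
print(f"r = {r:.2f}, p = {p:.3f}")
```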
Affiliation(s)
- Alistair P Bond
- Human Anatomy Resource Centre, Education Directorate, Faculty of Health and Life Science, University of Liverpool, Liverpool, UK
- Thomas Butts
- School of Medicine, University of Sunderland, Sunderland, UK
- Claire M Tierney
- Human Anatomy Resource Centre, Education Directorate, Faculty of Health and Life Science, University of Liverpool, Liverpool, UK
3
Bala L, Westacott RJ, Brown C, Sam AH. Twelve tips for introducing very short answer questions (VSAQs) into your medical curriculum. Medical Teacher 2023; 45:360-367. [PMID: 35833915] [DOI: 10.1080/0142159x.2022.2093706]
Abstract
Most undergraduate written examinations use multiple-choice questions, such as single best answer questions (SBAQs), to assess medical knowledge. In recent years, a strong evidence base has emerged for the use of very short answer questions (VSAQs). VSAQs have been shown to be an acceptable, reliable, discriminatory, and cost-effective assessment tool in both formative and summative undergraduate assessments. VSAQs address many of the concerns raised by educators using SBAQs, including inauthentic clinical scenarios, cueing and test-taking behaviours by students, and the limited feedback SBAQs provide for both students and teachers. Despite this, VSAQs have yet to be widely adopted in medical assessment, possibly owing to a lack of familiarity and experience with this assessment method. The following twelve tips have been constructed using our own practical experience of VSAQs, alongside supporting evidence from the literature, to help medical educators successfully plan, construct and implement VSAQs within medical curricula.
Affiliation(s)
- Laksha Bala
- Imperial College School of Medicine, Imperial College London, London, UK
- Celia Brown
- Warwick Medical School, University of Warwick, Coventry, UK
- Amir H Sam
- Imperial College School of Medicine, Imperial College London, London, UK
4
Renes J, van der Vleuten CPM, Collares CF. Utility of a multimodal computer-based assessment format for assessment with a higher degree of reliability and validity. Medical Teacher 2023; 45:433-441. [PMID: 36306368] [DOI: 10.1080/0142159x.2022.2137011]
Abstract
Multiple choice questions (MCQs) suffer from cueing, variable item quality, and a bias toward testing factual knowledge. This study presents a novel multimodal test containing alternative item types in a computer-based assessment (CBA) format, designated the Proxy-CBA. The Proxy-CBA was compared to a standard MCQ-CBA with regard to validity, reliability, standard error of measurement (SEM), and cognitive load, using a quasi-experimental crossover design. Biomedical students were randomized into two groups to sit a 65-item formative exam starting with the MCQ-CBA followed by the Proxy-CBA (group 1, n = 38), or the reverse (group 2, n = 35). Subsequently, a questionnaire on perceived cognitive load was administered and answered by 71 participants. Both CBA formats were analyzed according to the parameters of Classical Test Theory and the Rasch model. Compared to the MCQ-CBA, the Proxy-CBA had lower raw scores (p < 0.001, η2 = 0.276), higher reliability estimates (p < 0.001, η2 = 0.498), lower SEM estimates (p < 0.001, η2 = 0.807), and lower theta ability scores (p < 0.001, η2 = 0.288). The questionnaire revealed no significant differences between the two CBA tests regarding perceived cognitive load. Compared to the MCQ-CBA, the Proxy-CBA showed increased reliability and a higher degree of validity with similar cognitive load, suggesting its utility as an alternative assessment format.
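As context for the Rasch analysis mentioned above: the Rasch model expresses the probability that person n answers item i correctly in terms of a single ability parameter and an item difficulty. This is the standard form of the model, not a detail reported by the study:

\[
P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{e^{\theta_n - b_i}}{1 + e^{\theta_n - b_i}}
\]

where θ_n is the ability of person n and b_i the difficulty of item i. The theta ability scores compared between the two formats are estimates of θ_n under this model.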
Affiliation(s)
- Johan Renes
- Department of Human Biology, Maastricht University, Maastricht, The Netherlands
- Cees P M van der Vleuten
- Department of Educational Research and Development, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Carlos F Collares
- Department of Educational Research and Development, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- European Board of Medical Assessors, Edinburgh, UK
- Stichting Aphasia.help, Maastricht, The Netherlands
5
Keenan ID, Green E, Haagensen E, Hancock R, Scotcher KS, Swainson H, Swamy M, Walker S, Woodhouse L. Pandemic-era digital education: Insights from an undergraduate medical programme. Advances in Experimental Medicine and Biology 2023; 1397:1-19. [DOI: 10.1007/978-3-031-17135-2_1]
6
Douglas-Morris J, Ritchie H, Willis C, Reed D. Identification-based multiple-choice assessments in anatomy can be as reliable and challenging as their free-response equivalents. Anatomical Sciences Education 2021; 14:287-295. [PMID: 33683830] [DOI: 10.1002/ase.2068]
Abstract
Multiple-choice (MC) anatomy "spot-tests" (identification-based assessments on tagged cadaveric specimens) offer a practical alternative to traditional free-response (FR) spot-tests. Conversion of the two spot-tests in an upper limb musculoskeletal anatomy unit of study from FR to a novel MC format, in which one of five tagged structures on a specimen was the answer to each question, provided a unique opportunity to assess the comparative validity and reliability of FR- and MC-formatted spot-tests and the impact on student performance following the change to the MC format. Three successive year cohorts of health science students (n = 1,442) were each assessed by spot-tests formatted as FR (first cohort) or MC (following two cohorts). Comparative question difficulty was assessed independently by three examiners. The MC-formatted tests included more higher-order cognitive skill questions and covered more of the course objectives. Spot-test reliability was maintained, with Cronbach's alpha reliability coefficients ≥ 0.80 and 80% of the MC items of high quality (point-biserial correlation coefficients > 0.25). These results also demonstrated that guessing was not an issue. The mean final score for the MC-formatted cohorts increased by 4.9% but did not change for the final theory examination that was common to all three cohorts. Subgroup analysis revealed that the greatest change in spot-test marks was for the lower-performing students. In conclusion, our results indicate that spot-tests formatted as MC are suitable alternatives to FR tests. The increase in mean scores for the MC-formatted spot-tests was attributed to the lower demand of the MC format.
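The two item statistics quoted above are straightforward to compute from a 0/1 response matrix. A minimal sketch with simulated responses (not the study's data), assuming NumPy; in the study, acceptable items had point-biserial coefficients above 0.25 and the test as a whole had alpha ≥ 0.80:

```python
# Toy sketch of the item statistics above: Cronbach's alpha for the test and
# a point-biserial (item-rest) correlation per item. Data are simulated;
# random responses give alpha near 0, whereas the study's tests reached >= 0.80.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(100, 40))  # 100 students x 40 items

def cronbach_alpha(x):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = x.shape[1]
    return (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum()
                            / x.sum(axis=1).var(ddof=1))

def point_biserial(item, total):
    # Correlate a dichotomous item with the rest-of-test score (item removed).
    return np.corrcoef(item, total - item)[0, 1]

total = responses.sum(axis=1)
print(cronbach_alpha(responses))
print(point_biserial(responses[:, 0], total))
```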
Affiliation(s)
- Jan Douglas-Morris
- School of Medical Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia
- Helen Ritchie
- School of Medical Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia
- Catherine Willis
- School of Medical Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia
- Darren Reed
- School of Medical Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia
7
Ryan AT, Wilkinson TJ. Rethinking assessment design: Evidence-informed strategies to boost educational impact in the anatomical sciences. Anatomical Sciences Education 2021; 14:361-367. [PMID: 33752261] [DOI: 10.1002/ase.2075]
Abstract
University assessment is in the midst of transformation. Assessments are no longer designed solely to determine that students can remember and regurgitate lecture content, nor to rank students to aid some future selection process. Instead, assessments are expected to drive, support, and enhance learning and to contribute to student self-assessment and the development of skills and attributes for a lifetime of learning. While the traditional purposes of certifying achievement and determining readiness to progress remain important, these new expectations can create tensions in assessment design, selection, and deployment. With these tensions in mind, three contemporary approaches to assessment in medical education are described. These approaches include careful consideration of the educational impact of assessment before, during (test- or recall-enhanced learning), and after assessments; development of student (and staff) assessment literacy; and planning of cohesive systems of assessment (with a range of assessment tools) designed to assess the various competencies demanded of future graduates. These approaches purposefully straddle the cross purposes of assessment in modern health professions education. The implications of these models are explored within the context of medical education and then linked with contemporary work in the anatomical sciences to highlight current synergies and potential future innovations when using evidence-informed strategies to boost the educational impact of assessments.
Affiliation(s)
- Anna T Ryan
- Department of Medical Education, Melbourne Medical School, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, Victoria, Australia
- Tim J Wilkinson
- Education Unit, Otago Medical School, University of Otago, Christchurch, New Zealand
8
Balta JY, Supple B, O'Keeffe GW. The universal design for learning framework in anatomical sciences education. Anatomical Sciences Education 2021; 14:71-78. [PMID: 32539206] [DOI: 10.1002/ase.1992]
Abstract
Over the past decades, teaching and learning within the discipline of anatomy have undergone significant changes. Some of these changes are due to a reduction in the number of teaching hours, while others are related to advancements in technology. Faced with these many choices, it can be difficult for faculty to decide which new developments in anatomical education need to be, or indeed can be, integrated into their course to enhance student learning. This article presents the universal design for learning (UDL) framework, an informed, evidence-based, and robust approach to underpin new course design and pedagogical reform in anatomy education. Universal design for learning is not a theory but a framework grounded in cognitive neuroscience that focuses on engaging multiple brain networks. The guidelines for UDL are organized into three core principles: (1) provide multiple means of representation, (2) provide multiple means of action and expression, and (3) provide multiple means of engagement. The learning space within the anatomy laboratory provides an excellent opportunity in which to apply this framework. This article also describes current trends employed in the teaching of anatomy. The principles of UDL are then outlined, followed by a description of how UDL approaches have been applied in the design and delivery of anatomy practical teaching to first year medical students at University College Cork. Future work will consider and investigate how a course designed with the principles of UDL at its heart ultimately benefits student learning.
Affiliation(s)
- Joy Y Balta
- Department of Anatomy and Neuroscience and Cork Neuroscience Centre, Western Gateway Building, University College Cork, Cork, Ireland
- Division of Anatomy, Department of Biomedical Education and Anatomy, College of Medicine, The Ohio State University, Columbus, Ohio
- Briony Supple
- Centre for the Integration of Research, Teaching and Learning (CIRTL), University College Cork, Cork, Ireland
- Gerard W O'Keeffe
- Department of Anatomy and Neuroscience and Cork Neuroscience Centre, Western Gateway Building, University College Cork, Cork, Ireland
- Centre for the Integration of Research, Teaching and Learning (CIRTL), University College Cork, Cork, Ireland
9
Riggs CD, Kang S, Rennie O. Positive impact of multiple-choice question authoring and regular quiz participation on student learning. CBE Life Sciences Education 2020; 19:ar16. [PMID: 32357094] [PMCID: PMC8697657] [DOI: 10.1187/cbe.19-09-0189]
Abstract
We previously developed an online multiple-choice question authoring, learning, and self-assessment tool that we termed Quizzical. Here we report statistical analyses over two consecutive years of Quizzical use in a large sophomore-level introductory molecular biology course. Students were required to author two questions during the term and were also afforded opportunities to earn marks for quiz participation. We found that students whose final grade was "A," "B," or "C" exhibited similar patterns of Quizzical engagement. The degree to which students participated was positively associated with performance on formal exams, even if prior academic performance was considered as a covariable. During both terms investigated, students whose Quizzical engagement increased from one exam to the next earned statistically significant higher scores on the subsequent exam, and students who attempted Quizzical questions from earlier in the term scored higher, on average, on the cumulative portion of the final exam. We conclude that the structure and value of the assignment, and the utility of Quizzical as a discipline-independent active-learning and self-assessment tool, enabled students to better master course topics.
Affiliation(s)
- C. Daniel Riggs
- Department of Biological Sciences, University of Toronto, Scarborough, Toronto, Ontario M1C 1A4, Canada
- Sohee Kang
- Department of Computer and Mathematical Sciences, Centre for Teaching and Learning, University of Toronto, Scarborough, Toronto, Ontario M1C 1A4, Canada
- Olivia Rennie
- Department of Biological Sciences, University of Toronto, Scarborough, Toronto, Ontario M1C 1A4, Canada
10
D'Antoni AV, Mtui EP, Loukas M, Tubbs RS, Zipp GP, Dunlosky J. An evidence-based approach to learning clinical anatomy: A guide for medical students, educators, and administrators. Clinical Anatomy 2019; 32:156-163. [PMID: 30307063] [PMCID: PMC7379743] [DOI: 10.1002/ca.23298]
Abstract
The amount of information that medical students must learn is voluminous, and those who do not use evidence-based learning strategies may struggle. Research from cognitive and educational psychology provides a blueprint for how best to learn science subjects, including clinical anatomy. Students should aim for high-cognitive learning levels as defined in the SOLO taxonomy. Using a real-world example from a modern clinical anatomy textbook, we describe how to learn information using strategies that have been experimentally validated as effective. Students should avoid highlighting and rereading text because they do not result in robust learning as defined in the SOLO taxonomy. We recommend that students use (1) practice testing, (2) distributed practice, and (3) successive relearning. Practice testing refers to nonsummative assessments that contain questions used to facilitate retrieval (e.g., flashcards and practice questions). Practice questions can be fill-in, short-answer, and multiple-choice types, and students should receive explanatory feedback. Distributed practice, the technique of distributing learning of the same content within a single study session or across sessions, has been found to facilitate long-term retention. Finally, successive relearning combines both practice testing and distributed practice. For this strategy, students use practice questions to continue learning until they can answer all of the practice questions correctly. Students who continuously use practice testing, distributed practice, and successive relearning will become more efficient and effective learners. Our hope is that the real-world clinical anatomy example presented in this article makes it easier for students to implement these evidence-based strategies and ultimately improve their learning.
Affiliation(s)
- Anthony V. D'Antoni
- Division of Anatomy, Department of Radiology, Weill Cornell Medicine, New York, New York
- Estomih P. Mtui
- Division of Anatomy, Department of Radiology, Weill Cornell Medicine, New York, New York
- Marios Loukas
- Department of Anatomical Sciences, St. George's University School of Medicine, Grenada, West Indies
- Genevieve Pinto Zipp
- Department of Interprofessional Health Sciences and Health Administration, School of Health and Medical Sciences, Seton Hall University, South Orange, New Jersey
- John Dunlosky
- Department of Psychological Sciences, Kent State University, Kent, Ohio
11
Sam AH, Field SM, Collares CF, van der Vleuten CPM, Wass VJ, Melville C, Harris J, Meeran K. Very-short-answer questions: reliability, discrimination and acceptability. Medical Education 2018; 52:447-455. [PMID: 29388317] [DOI: 10.1111/medu.13504]
Abstract
CONTEXT: Single-best-answer questions (SBAQs) have been widely used to test knowledge because they are easy to mark and demonstrate high reliability. However, SBAQs have been criticised for being subject to cueing. OBJECTIVES: We used a novel assessment tool that facilitates efficient marking of open-ended very-short-answer questions (VSAQs). We compared VSAQs with SBAQs with regard to reliability, discrimination and student performance, and evaluated the acceptability of VSAQs. METHODS: Medical students were randomised to sit a 60-question assessment administered in either VSAQ and then SBAQ format (Group 1, n = 155) or the reverse (Group 2, n = 144). The VSAQs were delivered on a tablet; responses were computer-marked and subsequently reviewed by two examiners. The standard error of measurement (SEM) across the ability spectrum was estimated using item response theory. RESULTS: The review of machine-marked questions took an average of 1 minute, 36 seconds per question for all students. The VSAQs had high reliability (alpha: 0.91), a significantly lower SEM than the SBAQs (p < 0.001) and higher mean item-total point biserial correlations (p < 0.001). The VSAQ scores were significantly lower than the SBAQ scores (p < 0.001). The difference in scores between VSAQs and SBAQs was attenuated in Group 2. Although 80.4% of students found the VSAQs more difficult, 69.2% found them more authentic. CONCLUSIONS: The VSAQ format demonstrated high reliability and discrimination, and items were perceived as more authentic. The SBAQ format was associated with significant cueing. The present results suggest the VSAQ format has a higher degree of validity.
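A note on the IRT-based SEM estimation in METHODS: in standard item response theory, the conditional standard error of measurement at ability θ is the reciprocal square root of the test information function, which is the sum of the item information functions (the study does not report which IRT model was fitted):

\[
\mathrm{SEM}(\theta) = \frac{1}{\sqrt{I(\theta)}}, \qquad I(\theta) = \sum_{i=1}^{K} I_i(\theta),
\]

where I_i(θ) is the information contributed by item i. A lower SEM across the ability spectrum, as reported for the VSAQs, therefore corresponds to higher measurement precision at each ability level.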
Affiliation(s)
- Amir H Sam
- Medical Education Research Unit, Imperial College London, London, UK
- Division of Diabetes, Endocrinology and Metabolism, Imperial College London, London, UK
- Samantha M Field
- Medical Education Research Unit, Imperial College London, London, UK
- Carlos F Collares
- Department of Educational Research and Development, Maastricht University, Maastricht, the Netherlands
- Cees P M van der Vleuten
- Department of Educational Research and Development, Maastricht University, Maastricht, the Netherlands
- Val J Wass
- Faculty of Medicine and Health, Keele University, Keele, UK
- Joanne Harris
- Medical Education Research Unit, Imperial College London, London, UK
- Karim Meeran
- Medical Education Research Unit, Imperial College London, London, UK
- Division of Diabetes, Endocrinology and Metabolism, Imperial College London, London, UK
12
Choudhury B, Freemont A. Assessment of anatomical knowledge: Approaches taken by higher education institutions. Clinical Anatomy 2017; 30:290-299. [DOI: 10.1002/ca.22835]
Affiliation(s)
- Bipasha Choudhury
- Faculty of Biology, Medicine and Health, University of Manchester, Manchester M13 9PT, United Kingdom
- Anthony Freemont
- Faculty of Biology, Medicine and Health, University of Manchester, Manchester M13 9PT, United Kingdom
13
Valero G, Cárdenas P. Formative and summative assessment in veterinary pathology and other courses at a Mexican veterinary college. Journal of Veterinary Medical Education 2017; 44:331-337. [PMID: 27779920] [DOI: 10.3138/jvme.1015-169r]
Abstract
The Faculty of Veterinary Medicine and Animal Science of the National Autonomous University of Mexico (UNAM) uses the Moodle learning management system for formative and summative computer assessment. The authors of this article, the teacher primarily responsible for Moodle implementation and a researcher who recently adopted Moodle, describe and discuss students' and teachers' attitudes to summative and formative computer assessment in Moodle. Item analysis of quiz results helped us to identify and fix poorly performing questions, which greatly reduced student complaints and improved objective assessment. The use of certainty-based marking (CBM) in formative assessment in veterinary pathology was well received by the students and should be extended to more courses. The importance of having proficient computer support personnel should not be underestimated. A properly translated language pack is essential for using Moodle in a language other than English.
14
Choudhury B, Gouldsborough I, Shaw FL. The intelligent anatomy spotter: A new approach to incorporate higher levels of Bloom's taxonomy. Anatomical Sciences Education 2016; 9:440-445. [PMID: 26687931] [DOI: 10.1002/ase.1588]
Abstract
The spotter test is an assessment that has been used widely to test practical knowledge of anatomy. Traditional spotter formats often focus solely on knowledge recall, and they place an onerous marking burden on staff, with questionable consistency in the marking of free-text responses. Optometry students at the University of Manchester study the functional anatomy of the eye in the first semester of their first year. The assessment of this unit includes a spotter examination worth 45% of the total unit mark. Because of the factors listed above, a new spotter format was designed. Students had to answer three questions per specimen, where the answers to the questions were the labeled structures themselves (A, B, C, or D). They had to work out the answer to the question and then work out which of the labeled structures was the correct one, negating the "cueing effect" of standard multiple choice questions. Examination results were analyzed over a six-year period (control groups 2008/2009, 2009/2010, 2010/2011; treatment groups 2011/2012, 2012/2013, 2013/2014). There were no significant differences between marks obtained for the new spotter format and the traditional format. The new format tested comprehension rather than just knowledge recall, and it facilitated marking because subjectivity was eliminated and less time was spent determining whether an answer was correct.
Affiliation(s)
- Frances L Shaw
- Faculty of Life Sciences, University of Manchester, United Kingdom
15
McDonald AC, Chan SP, Schuijers JA. Practical session assessments in human anatomy: Weightings and performance. Anatomical Sciences Education 2016; 9:330-336. [PMID: 26580309] [DOI: 10.1002/ase.1586]
Abstract
Assessment weighting within a given module can be a motivating factor for students when deciding on their commitment level and the time given to studying a specific topic. In this study, an analysis of the assessment performances of second year anatomy students was performed over four years to determine (1) whether students performed better when a higher weighting was given to a set of practical session assessments and (2) whether an improved performance in the practical session assessments had a carry-over effect on other assessment tasks within that anatomy module and/or subsequent anatomy modules. Results showed that increasing the weighting of practical session assessments improved the average mark in that assessment and also improved the percentage of students passing it. Further, it significantly improved performance in the written end-semester examination within the same module and had a carry-over effect on the anatomy module taught in the next teaching period, as students performed better in subsequent practical session assessments as well as subsequent end-semester examinations. It was concluded that the weighting of assessments had a significant influence on students' performance in that assessment and in subsequent ones. It is postulated that practical session assessments designed to develop deep learning skills in anatomy improved student performance in that and subsequent anatomy modules when the weighting of these assessments was greater. These deep learning skills were also transferable to other methods of assessing anatomy.
Affiliation(s)
- Aaron C McDonald
- Department of Physiology, Anatomy and Microbiology, College of Science, Health and Engineering, La Trobe University, Bundoora, Victoria, Australia
- Siew-Pang Chan
- Department of Mathematics and Statistics, College of Science, Health and Engineering, La Trobe University, Bundoora, Victoria, Australia
- Johannes A Schuijers
- Department of Physiology, Anatomy and Microbiology, College of Science, Health and Engineering, La Trobe University, Bundoora, Victoria, Australia
16
Green RA, Cates T, White L, Farchione D. Do collaborative practical tests encourage student-centered active learning of gross anatomy? Anatomical Sciences Education 2016; 9:231-237. [PMID: 26415089] [DOI: 10.1002/ase.1564]
Abstract
Benefits of collaborative testing have been identified in many disciplines. This study sought to determine whether collaborative practical tests encouraged active learning of anatomy. A gross anatomy course included a collaborative component in four practical tests. Two hundred and seven students initially completed each test as individuals and then worked as a team to complete the same test again immediately afterwards. The relationship of mean individual, team, and difference (between team and individual) test scores to overall performance on the final examination (representing overall learning in the course) was examined using regression analysis. The overall mark in the course increased by 9%, with a decreased failure rate. There was a strong relationship between individual score and final examination mark (P < 0.001) but no relationship for team score (P = 0.095). A longitudinal analysis showed that the test difference scores increased after Test 1, which may be indicative of social loafing, and this was confirmed by a significant negative relationship between the difference score on Test 4 (indicating a weaker student) and final examination mark (P < 0.001). It appeared that for this cohort there was little peer-to-peer learning occurring during the collaborative testing, and that weaker students gained the benefit of team marks without significant active learning taking place. This negative outcome may be due to insufficient encouragement of the active learning strategies that were expected to occur during the collaborative testing process. An improved understanding of the efficacy of collaborative assessment could be achieved through the inclusion of questionnaire-based data to allow a better interpretation of learning outcomes.
Affiliation(s)
- Rodney A Green
- Department of Pharmacy and Applied Sciences, College of Science, Health and Engineering, La Trobe University, Bendigo, Victoria, Australia
- Tanya Cates
- Department of Physiology, Anatomy and Microbiology, College of Science, Health and Engineering, La Trobe University, Bundoora, Victoria, Australia
- Lloyd White
- Department of Physiology, Anatomy and Microbiology, College of Science, Health and Engineering, La Trobe University, Bundoora, Victoria, Australia
- Davide Farchione
- Department of Mathematics and Statistics, College of Science, Health and Engineering, La Trobe University, Bundoora, Victoria, Australia
17
Meyer AJ, Innes SI, Stomski NJ, Armson AJ. Student performance on practical gross anatomy examinations is not affected by assessment modality. Anatomical Sciences Education 2016; 9:111-120. [PMID: 25981194] [DOI: 10.1002/ase.1542]
Abstract
Anatomical education is becoming modernized, not only in its teaching and learning but also in its assessment formats. Traditional "steeplechase" examinations are being replaced with online gross anatomy examinations. The aims of this study were to: (1) determine whether online anatomy practical examinations are equivalent to traditional anatomy practical examinations; and (2) examine whether students' perceptions of the online or laboratory testing environments influenced their performance on the examinations. In phase one, 10 third-year students were interviewed to generate perception items, to which five anatomy lecturers assigned content validity. In phase two, students' gross anatomical knowledge was assessed by examinations in the two modes, and their perceptions were examined using the devised survey instrument. Forty-five second-year chiropractic students voluntarily participated in phase two. The two randomly allocated groups completed the examinations in a sequential cross-over manner. Student performance on the gross anatomy examination did not differ between the traditional "steeplechase" (mean ± standard deviation (SD): 69 ± 11%) and online (68 ± 15%) modes. The majority of students (87%) agreed that they felt comfortable using computers for gross anatomy examinations. However, fewer students found it easy to orientate images of cadaver specimens online. The majority of students (85%) agreed that they felt comfortable working with cadavers, but there was less agreement on the effect of moving around the laboratory during practical examinations. These data will allow anatomists to confidently implement online assessments without fear of jeopardizing academic rigor or student performance.
Affiliation(s)
- Amanda J Meyer
- Discipline of Chiropractic, School of Health Professions, Murdoch University, Murdoch, Perth, Western Australia, Australia
- Stanley I Innes
- Discipline of Chiropractic, School of Health Professions, Murdoch University, Murdoch, Perth, Western Australia, Australia
- Norman J Stomski
- Discipline of Chiropractic, School of Health Professions, Murdoch University, Murdoch, Perth, Western Australia, Australia
- Anthony J Armson
- Discipline of Chiropractic, School of Health Professions, Murdoch University, Murdoch, Perth, Western Australia, Australia
18
Smith CF, McManus B. The integrated anatomy practical paper: A robust assessment method for anatomy education today. Anatomical Sciences Education 2015; 8:63-73. [PMID: 24706567] [DOI: 10.1002/ase.1454]
Abstract
Assessing anatomy in a way that tests higher cognitive domains and clinical application is not always straightforward. The old "spotter" examination has been criticized for only testing low-level "identify" knowledge, whereas other assessment modalities, such as multiple choice questions, do not reflect the three-dimensional and applied nature of clinical anatomy. Medical curricula are frequently integrated, and subject-specific examinations do not reflect the case-based, spiral, integrative nature of these curricula. The integrated anatomy practical paper (IAPP) is a hybrid of the old "spotter" and an objective structured clinical examination; it demonstrates how higher levels of taxonomy can be assessed together with clinical features, and it integrates well with other disciplines. Importantly, the IAPP has been shown to be reliable and practical to administer. Data gathered from the five-year Bachelor of Medicine program over two academic years, covering four IAPP examinations (each 40 minutes long with K = 60 items) and based on 440 students, revealed consistently strong reliability coefficients (Cronbach's alpha) of up to 0.923. Applying Bloom's taxonomy to questions has shown a marked shift toward testing at higher levels of complexity: between 2009 and 2013, a 26% reduction in the number of low-level "remember knowledge" domain questions was noted, with increases of up to 15% in the "understanding" domain and 12% in the "applying" knowledge domain. Our findings highlight that it is possible to test, in a laboratory setting, anatomy knowledge and application that is integrated and fit for practice.
Affiliation(s)
- Claire F Smith
- Department of Anatomy, Brighton and Sussex Medical School, University of Sussex, Brighton, East Sussex, United Kingdom
- Academic Unit of Medical Education, Faculty of Medicine, University of Southampton, Southampton, United Kingdom
19
Zhang G, Fenderson BA, Schmidt RR, Veloski JJ. Equivalence of students' scores on timed and untimed anatomy practical examinations. Anatomical Sciences Education 2013; 6:281-285. [PMID: 23463722] [DOI: 10.1002/ase.1357]
Abstract
Untimed examinations are popular with students because there is a perception that first impressions may be incorrect, and that difficult questions require more time for reflection. In this report, we tested the hypothesis that timed anatomy practical examinations are inherently more difficult than untimed examinations. Students in the Doctor of Physical Therapy program at Thomas Jefferson University were assessed on their understanding of anatomic relationships using multiple-choice questions. For the class of 2012 (n = 46), students were allowed to circulate freely among 40 testing stations during the 40-minute testing session. For the class of 2013 (n = 46), students were required to move sequentially through the 40 testing stations (one minute per item). Students in both years were given three practical examinations covering the back/upper limb, lower limb, and trunk. An identical set of questions was used for both groups of students (untimed and timed examinations). Our results indicate that there is no significant difference between student performance on untimed and timed examinations (final percent scores of 87.3 and 88.9, respectively). This result also held true for students in the top and bottom 20th percentiles of the class. Moreover, time limits did not lead to errors on even the most difficult, higher-order questions (i.e., items with P-values < 0.70). Thus, limiting time at testing stations during an anatomy practical examination does not adversely affect student performance.
Affiliation(s)
- Guiyun Zhang
- Department of Pathology, Anatomy and Cell Biology, Jefferson Medical College, Thomas Jefferson University, Philadelphia, Pennsylvania