1
Riccò M, Ferraro P, Ranzieri S, Boldini G, Zanella I, Marchesi F. Legionnaires' Disease in Occupational Settings: A Cross-Sectional Study from Northeastern Italy (2019). Trop Med Infect Dis 2023; 8:364. [PMID: 37505660] [PMCID: PMC10384770] [DOI: 10.3390/tropicalmed8070364]
Abstract
In Italy, Legionnaires' Disease (LD) causes >1000 hospital admissions per year, with a lethality rate of 5 to 10%. Occupational exposures could reasonably explain a substantial share of total cases, but the role of Occupational Physicians (OPs) in the management and prevention of LD has been scarcely investigated. The present survey therefore evaluates the knowledge, attitudes and practices (KAP) regarding LD in a convenience sample of Italian OPs, focusing on their participation in preventive interventions. A total of 165 OPs were recruited through a training event (Parma, Northeastern Italy, 2019) and completed a specifically designed structured questionnaire. The association between reported participation in preventive interventions and individual factors was analyzed using a binary logistic regression model, calculating corresponding adjusted Odds Ratios (aOR). Overall, participants exhibited satisfactory knowledge of the clinical and diagnostic aspects of LD, while substantial uncertainties were associated with epidemiological factors (i.e., notification rate and lethality). Although the majority of participating OPs reportedly assisted at least one hospital (26.7%) and/or a nursing home (42.4%) and/or a wastewater treatment plant, only 41.8% reportedly contributed to the risk assessment for LD and 18.8% promoted specifically designed preventive measures. Working as an OP in nursing homes (aOR 8.732; 95% Confidence Interval [95%CI] 2.991 to 25.487) and wastewater treatment plants (aOR 8.710; 95%CI 2.844 to 26.668) was associated with participation in the risk assessment for LD, while the promotion of preventive practices was associated with working as an OP in hospitals (aOR 6.792; 95%CI 2.026 to 22.764) and wastewater treatment plants (aOR 4.464; 95%CI 1.363 to 14.619). In other words, effective participation of OPs in the implementation of preventive measures appears uncommon and limited to certain occupational settings. Collectively, these results highlight the importance of tailoring specifically designed information campaigns aimed at raising the involvement of OPs in the prevention of LD in occupational settings other than healthcare.
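For readers who want to reproduce this kind of analysis, the reported aORs with 95%CI follow from exponentiating the coefficients of a binary logistic regression. The sketch below is illustrative only (not the authors' code): variable names and data are hypothetical placeholders.

```python
# Illustrative sketch: binary logistic regression reported as adjusted odds
# ratios (aOR) with 95% CIs, as in the KAP analysis above. All variable
# names and data are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 165  # sample size matching the surveyed OPs
df = pd.DataFrame({
    "risk_assessment": rng.integers(0, 2, n),   # 1 = took part in LD risk assessment
    "nursing_home": rng.integers(0, 2, n),      # 1 = works as OP in a nursing home
    "wastewater_plant": rng.integers(0, 2, n),  # 1 = works as OP in a wastewater plant
    "hospital": rng.integers(0, 2, n),          # 1 = works as OP in a hospital
})

model = smf.logit("risk_assessment ~ nursing_home + wastewater_plant + hospital",
                  data=df).fit(disp=False)

# Exponentiating coefficients and CI bounds turns log-odds into aORs with 95% CIs.
aor = np.exp(model.params).rename("aOR")
ci = np.exp(model.conf_int()).rename(columns={0: "95%CI low", 1: "95%CI high"})
print(pd.concat([aor, ci], axis=1))
```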
Affiliation(s)
- Matteo Riccò
- Servizio di Prevenzione e Sicurezza Negli Ambienti di Lavoro (SPSAL), AUSL-IRCCS di Reggio Emilia, Via Amendola n.2, I-42122 Reggio Emilia, Italy
- Pietro Ferraro
- Occupational Medicine Unit, Direzione Sanità, Italian Railways' Infrastructure Division, RFI SpA, I-00161 Rome, Italy
- Silvia Ranzieri
- Department of Medicine and Surgery, University of Parma, Via Gramsci, 14, I-43126 Parma, Italy
- Giorgia Boldini
- Department of Medicine and Surgery, University of Parma, Via Gramsci, 14, I-43126 Parma, Italy
- Servizio di Igiene Pubblica, AUSL di Parma, Via Vasari n.13/a, I-43123 Parma, Italy
- Ilaria Zanella
- Department of Medicine and Surgery, University of Parma, Via Gramsci, 14, I-43126 Parma, Italy
- Federico Marchesi
- Department of Medicine and Surgery, University of Parma, Via Gramsci, 14, I-43126 Parma, Italy
2
Hamamoto Filho PT, Moriguti JC, Ribeiro ZMT, Diehl L, Lopes RD, Adler UC, Lima ARDA, Oliveira RCD, Andrade MCD, Bicudo AM. Time of clerkship rotations' interruption during COVID-19 and differences on Progress Test's scores. Rev Assoc Med Bras (1992) 2022; 68:1447-1451. [DOI: 10.1590/1806-9282.20220657]
3
Mo DY, Tang YM, Wu EY, Tang V. Theoretical model of investigating determinants for a successful Electronic Assessment System (EAS) in higher education. Educ Inf Technol 2022; 27:12543-12566. [PMID: 35676938] [PMCID: PMC9164563] [DOI: 10.1007/s10639-022-11098-1]
Abstract
Electronic assessment (e-assessment) is an essential part of higher education, used not only to manage the learning performance of large classes of students but particularly to assess students' learning outcomes. The e-assessment data generated can be used not only to identify students' study weaknesses and develop strategies for teaching and learning, but also to develop essential pedagogies for online teaching and learning. Despite the wider adoption of Information and Communication Technology (ICT) due to the COVID-19 pandemic, universities still encountered numerous problems during the transition to electronic teaching, as most educators struggled with the effective implementation of the Electronic Assessment System (EAS). The successful launch of an EAS relies heavily on students' intention to use the new and unfamiliar electronic system, which was largely unknown to the project managers of the EAS. It is therefore important to understand students' views and concerns about the EAS and the proactive measures universities can take to enhance students' acceptance and intention of usage. Although most studies investigate students' acceptance of online learning, there is still little research on the adoption of e-assessment. In this regard, we propose a theoretical model based on students' perceptions of the EAS. Based on the Technology Acceptance Model (TAM) and a major successor of TAM, an electronic assessment system acceptance model (EASA model) is developed with key measures including system adoption anxiety, e-assessment facilitation, and risk reduction, among others. The data were obtained through a survey of current students at a local university, and structural equation modeling (SEM) was applied to analyze the quantitative data. This study has a significant impact on improving educators' use of e-assessment in order to develop essential online teaching and learning pedagogy in the future.
Affiliation(s)
- Daniel Y. Mo
- Department of Supply Chain and Information Management, Hang Seng University of Hong Kong, Hang Shin Link, Siu Lek Yuen, Hong Kong
- Yuk Ming Tang
- Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
- Faculty of Business, City University of Macau, Macau, China
- Edmund Y. Wu
- Department of Supply Chain and Information Management, Hang Seng University of Hong Kong, Hang Shin Link, Siu Lek Yuen, Hong Kong
- Valerie Tang
- Department of Supply Chain and Information Management, Hang Seng University of Hong Kong, Hang Shin Link, Siu Lek Yuen, Hong Kong
4
Andreou V, Peters S, Eggermont J, Wens J, Schoenmakers B. Remote versus on-site proctored exam: comparing student results in a cross-sectional study. BMC Med Educ 2021; 21:624. [PMID: 34930231] [PMCID: PMC8686350] [DOI: 10.1186/s12909-021-03068-x]
Abstract
BACKGROUND The COVID-19 pandemic has profoundly affected assessment practices in medical education, necessitating distancing from the traditional classroom. However, safeguarding academic integrity is of particular importance for high-stakes medical exams. We utilised remote proctoring to safely and reliably administer a proficiency test for admission to the Advanced Master of General Practice (AMGP). We compared exam results of the remote proctored group to those of the on-site proctored group. METHODS A cross-sectional design was adopted with candidates applying for admission to the AMGP. We developed and applied proctoring software operating on three levels to register suspicious events: recording actions, analysing behaviour, and live supervision. We performed a Mann-Whitney U test to compare exam results of the remote proctored group with those of the on-site proctored group. To gain more insight into candidates' perceptions of proctoring, a post-test questionnaire was administered. An exploratory factor analysis was performed to explore the quantitative data, while qualitative data were thematically analysed. RESULTS In total, 472 (79%) candidates took the proficiency test using the proctoring software, while 121 (20%) took it on-site with live supervision. The results indicated that the proctoring type does not influence exam results. Of the 472 candidates, 304 filled in the post-test questionnaire. Two factors were extracted from the analysis, identified as candidates' appreciation of proctoring and emotional distress caused by proctoring. Four themes were identified in the thematic analysis, providing more insight into candidates' emotional well-being. CONCLUSIONS A comparison of exam results revealed that remote proctoring could be a viable solution for administering high-stakes medical exams. With regard to candidates' educational experience, remote proctoring was met with mixed feelings. Potential privacy issues and increased test anxiety should be taken into consideration when choosing a proctoring protocol. Future research should explore the generalizability of these results utilising other proctoring systems in medical education and in other educational settings.
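The central comparison here is a two-sample Mann-Whitney U test on exam scores. A minimal sketch, using synthetic score arrays rather than the study data:

```python
# Minimal sketch of the group comparison above: a two-sided Mann-Whitney U
# test on exam scores of the remote-proctored vs. the on-site group. The
# score arrays are synthetic stand-ins, not the study data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
remote_scores = rng.normal(70, 10, 472)  # n = 472 remote-proctored candidates
onsite_scores = rng.normal(70, 10, 121)  # n = 121 on-site candidates

u_stat, p_value = mannwhitneyu(remote_scores, onsite_scores, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")  # p > 0.05 -> no evidence of a mode effect
```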
Affiliation(s)
- Vasiliki Andreou
- Department of Public Health and Primary Care, KU Leuven, Academic Center for General Practice, Kapucijnenvoer 7 - Box 7001, 3000 Leuven, Belgium
- Sanne Peters
- Department of Public Health and Primary Care, KU Leuven, Academic Center for General Practice, Kapucijnenvoer 7 - Box 7001, 3000 Leuven, Belgium
- Evidence Based Practice, EBMPracticeNet, 3000 Leuven, Belgium
- School of Health Sciences, Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, 3800, Australia
- Jan Eggermont
- Department of Cellular and Molecular Medicine, KU Leuven, 3000 Leuven, Belgium
- Johan Wens
- Center for General Practice/Family Medicine, Department of Primary and Interdisciplinary Care, University of Antwerp, 2610 Wilrijk, Belgium
- Birgitte Schoenmakers
- Department of Public Health and Primary Care, KU Leuven, Academic Center for General Practice, Kapucijnenvoer 7 - Box 7001, 3000 Leuven, Belgium
5
Rasheed A. Investigating the relationship of computerized examination anxiety with other variables at the university level: A case of health college students in Saudi Arabia. J Educ Health Promot 2021; 10:371. [PMID: 34912907] [PMCID: PMC8641729] [DOI: 10.4103/jehp.jehp_220_21]
Abstract
BACKGROUND The level of anxiety plays a significant role in people's daily lives. This applies to students, who experience anxiety when taking examinations, referred to as examination anxiety. The majority of educational institutions have shifted from a traditional evaluation system to a computerized one. The present study aims to identify computerized examination anxiety (CEA) among college students in the Faculty of Health and to compare differences among them based on study system and gender. MATERIALS AND METHODS The research used a descriptive quantitative design. The research population consisted of 138 health college students. The CEA scale was used to identify the level of examination anxiety among students. Data were then subjected to descriptive statistics, independent-sample t-tests, and Chi-square tests to answer the research questions at the 0.05 significance level. RESULTS Based on the findings, the CEA experienced by the health students was at a moderate level. The findings also showed no significant differences in students' levels of anxiety based on gender or study system at the 0.05 level. CONCLUSION The study contributes to the literature by adding a study on CEA during COVID-19, and it enumerates implications and recommendations based on the findings.
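The two analyses named in the abstract map onto standard scipy calls; the sketch below is a hedged illustration with fabricated placeholder numbers, not the study data.

```python
# Hedged illustration of the two analyses above: an independent-samples
# t-test of CEA scores by gender, and a Chi-square test of anxiety level
# against study system. All numbers below are fabricated placeholders.
import numpy as np
from scipy.stats import ttest_ind, chi2_contingency

rng = np.random.default_rng(2)
cea_male = rng.normal(3.0, 0.6, 60)    # mean CEA scale scores, male students
cea_female = rng.normal(3.1, 0.6, 78)  # mean CEA scale scores, female students

t_stat, p_gender = ttest_ind(cea_male, cea_female)
print(f"gender: t = {t_stat:.2f}, p = {p_gender:.3f}")

# Contingency table: rows = study system, columns = anxiety level (low/moderate/high).
table = np.array([[12, 30, 8],
                  [15, 55, 18]])
chi2, p_system, dof, _ = chi2_contingency(table)
print(f"study system: chi2 = {chi2:.2f}, df = {dof}, p = {p_system:.3f}")
```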
Affiliation(s)
- Abeer Rasheed
- Self-Development Department, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
6
Schoenmakers B, Wens J. Efficiency, Usability, and Outcomes of Proctored Next-Level Exams for Proficiency Testing in Primary Care Education: Observational Study. JMIR Form Res 2021; 5:e23834. [PMID: 34398786] [PMCID: PMC8406127] [DOI: 10.2196/23834]
Abstract
Background The COVID-19 pandemic has affected education and assessment programs and has resulted in complex planning. Therefore, we organized the proficiency test for admission to the Family Medicine program as a proctored exam. To prevent fraud, we developed a web-based supervisor app for tracking and tracing candidates' behaviors. Objective We aimed to assess the efficiency and usability of the proctored exam procedure and to analyze the procedure's impact on exam scores. Methods The application operated on three levels to register events: the recording of actions, analysis of behavior, and live supervision. Each suspicious event was given a score. To assess efficiency, we logged the technical issues and the interventions. To test usability, we counted the number of suspicious candidates and behaviors. To analyze the impact of the supervisor app on students' exam outcomes, we compared the scores of the proctored group and those of the on-campus group. Candidates were free to register for off-campus or on-campus participation. Results Of the 593 candidates who subscribed to the exam, 472 (79.6%) used the supervisor app and 121 (20.4%) sat the exam on campus. The test results of both groups were comparable. We registered 15 technical issues that occurred off campus; 2 candidates experienced a negative impact on their exams due to technical issues. The application detected 22 candidates with a suspicion rating of >1; suspicion ratings mainly increased due to background noise. All events occurred without fraudulent intent. Conclusions This pilot observational study demonstrated that a supervisor app that records and registers behavior was able to detect suspicious events efficiently and without affecting exam outcomes; background noise was the most critical event, and no fraud was detected. In future research, a controlled study design should be used to compare the cost-benefit balance between the complex interventions of the supervisor app and candidates' awareness of being monitored via a safe browser plug-in for exams.
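The per-candidate "suspicion rating of >1" suggests a weighted tally of registered events. The sketch below is entirely hypothetical: the paper does not publish its event types, weights, or threshold, so all of those are invented for illustration.

```python
# Hypothetical sketch of the event-scoring idea above: each registered event
# carries a suspicion weight, weights are summed per candidate, and totals
# above 1 are flagged for human review. Event types and weights are invented;
# the paper does not publish its scoring table.
from collections import defaultdict

EVENT_WEIGHTS = {"background_noise": 0.5, "gaze_away": 0.5, "second_person": 2.0}

def flag_candidates(events: list[tuple[str, str]], threshold: float = 1.0) -> dict[str, float]:
    """events: (candidate_id, event_type) pairs; returns candidates whose
    summed suspicion rating exceeds the threshold."""
    totals: dict[str, float] = defaultdict(float)
    for candidate, event in events:
        totals[candidate] += EVENT_WEIGHTS.get(event, 0.0)
    return {c: s for c, s in totals.items() if s > threshold}

log = [("cand_01", "background_noise"), ("cand_01", "background_noise"),
       ("cand_01", "gaze_away"), ("cand_02", "background_noise")]
print(flag_candidates(log))  # {'cand_01': 1.5} -> flagged; cand_02 stays below
```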
Affiliation(s)
- Johan Wens
- Department of Primary and Interdisciplinary Care, University of Antwerp, Antwerpen, Belgium
7
Karibyan A, Sabnis G. Students' perceptions of computer-based testing using ExamSoft. Curr Pharm Teach Learn 2021; 13:935-944. [PMID: 34294257] [DOI: 10.1016/j.cptl.2021.06.018]
Abstract
INTRODUCTION In fall 2017, West Coast University School of Pharmacy implemented ExamSoft for testing. Three courses in each didactic year employed ExamSoft; prior to this, courses had Scantron-based exams. We surveyed the students to assess their perception of ExamSoft, hypothesizing that students' inherent bias towards technology affected their perception of it. METHODS To test this hypothesis, we conducted a survey of all students. The survey contained questions about comfort with technology and nine questions on students' perceptions of ExamSoft and its usefulness. RESULTS Survey responses were stratified according to respondents' preference towards technology and its use in exams. Respondents were stratified into three groups: tech-embracers, tech-skeptics, and neutral. Our results showed that respondents classified as tech-skeptics tended to have a more negative view of ExamSoft and its perceived impact on their grades than students stratified as tech-embracers or neutral. CONCLUSIONS Our study suggests that students' inherent bias towards technology plays an important role in their perception of computer-based testing. Assessing incoming students' comfort with technology and using orientation activities to acquaint students with new technology could help improve their acceptance of educational technology used for testing.
Affiliation(s)
- Anna Karibyan
- West Coast University, School of Pharmacy, 590 N. Vermont Ave, Los Angeles, CA 90004, United States
- Gauri Sabnis
- Department of Pharmaceutical Sciences, West Coast University, School of Pharmacy, 590 N. Vermont Ave, Room 332, Los Angeles, CA 90004, United States
8
Preferences and Scores of Different Types of Exams during COVID-19 Pandemic in Faculty of Veterinary Medicine in Spain: A Cross-Sectional Study of Paper and E-exams. Educ Sci 2021. [DOI: 10.3390/educsci11080386]
Abstract
The World Health Organization (WHO) officially declared the novel coronavirus (COVID-19) a pandemic on 11 March 2020, and educational institutions had to modify most of their activities (face-to-face activities were suspended). This situation forced academic institutions to modify the format of student evaluation. The use of proctoring systems quickly became widespread, although some controversies arose. The two main discussions regarding these systems concern the integrity of the assessment and the capacity of students to adapt to this new assessment method without changes in their scores. To elucidate these two controversies, we analyzed the preferences and the scores obtained from a trial of 660 scores from 332 third-year students of Veterinary Medicine. The experiment involved three modalities of exam: an online format from home using the Respondus Lockdown Browser system (Modality 1), an online format in person using the Respondus Lockdown Browser system with the supervision of a teacher (Modality 2), and a paper format in person with the supervision of a teacher (Modality 3). The results showed that the students preferred Modality 1 (online at home with the Respondus Lockdown Browser system). No statistical differences were found between the scores obtained by students in the three modalities analyzed. The proctoring system is thus a good method for administering exams in higher education institutions, and students' scores are similar to those obtained through traditional evaluation and control systems.
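A three-group score comparison like this one can be sketched as follows. The abstract does not name the exact statistical test, so the Kruskal-Wallis test used here is an assumption, and the score vectors are synthetic (220 x 3 = 660 scores, matching the trial's total).

```python
# Minimal sketch of the three-modality score comparison above. The choice of
# a Kruskal-Wallis test is an assumption (the abstract does not name the test);
# the score vectors are synthetic placeholders.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(5)
m1 = rng.normal(7.0, 1.2, 220)  # Modality 1: online at home, Respondus Lockdown Browser
m2 = rng.normal(7.0, 1.2, 220)  # Modality 2: online in person, teacher supervised
m3 = rng.normal(7.0, 1.2, 220)  # Modality 3: paper in person, teacher supervised

h_stat, p_value = kruskal(m1, m2, m3)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 -> no modality effect
```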
9
Jang S, Suh EE. Development, Application, and Effectiveness of a Smart Device-based Nursing Competency Evaluation Test: A Mixed-Method Study. Comput Inform Nurs 2021; 39:634-643. [PMID: 33935202] [DOI: 10.1097/cin.0000000000000742]
Abstract
We aimed to develop and evaluate the effectiveness of a smart device-based test to assess Korean undergraduate students' clinical nursing competency, named SBT-NURS. The 65-item SBT-NURS comprises questions that simulate clinical situations, are problem solving-oriented, use multimedia (i.e., videos/photos/animations), and cover the following topics: medical-surgical nursing, fundamentals of nursing, pediatric, maternity, management, and psychiatric nursing. We used a quantitative method to analyze the effects of the SBT-NURS (i.e., a single-group, post-experimental survey design) and a qualitative method to analyze students' experiences of using it (i.e., seven focus group interviews [FGIs]). Students' overall adult health nursing paper-based test scores (i.e., combining their scores in group activity, presentation, attendance, and attitude toward the midterm and final tests on adult health nursing) (r = 0.552, P < .001) and clinical practicum scores (r = 0.268, P = .040) in the last semester showed a statistically significant positive correlation with their SBT-NURS scores. Their paper-based testing practice average scores (i.e., a combination of paper-based test and clinical practicum scores) showed a similar significant correlation (r = 0.506, P < .001). Students deemed the SBT-NURS advantageous, satisfactory, convenient, and useful. The SBT-NURS may be an effective learning and evaluation method for nursing education that helps improve students' clinical competency and learning outcomes.
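The reported r and P values are plain Pearson correlations; a minimal sketch (both arrays are synthetic stand-ins chosen to induce a moderate positive correlation, and the sample size is invented):

```python
# Sketch of the correlation analysis reported above: Pearson's r between
# SBT-NURS scores and prior paper-based scores. Data and n are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
paper_based = rng.normal(75, 8, 59)                       # prior paper-based scores
sbt_nurs = 0.5 * paper_based + rng.normal(0, 5, 59) + 40  # smart device-based scores

r, p = pearsonr(paper_based, sbt_nurs)
print(f"r = {r:.3f}, p = {p:.4f}")  # the study reports r = 0.552, P < .001
```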
Affiliation(s)
- Soyoung Jang
- College of Nursing (Ms Jang and Dr Suh) and Research Institute of Nursing Science (Dr Suh), Seoul National University, Republic of Korea
10
Stauffer R, Pitlick J, Challen L. Impact of an electronic-based assessment on student pharmacist performance in a required therapeutics course. Curr Pharm Teach Learn 2020; 12:287-290. [PMID: 32273064] [DOI: 10.1016/j.cptl.2019.12.005]
Abstract
INTRODUCTION The use of technology in the classroom has continued to grow, and with the advancement of classroom management systems and online exam software, there are opportunities to administer exams electronically. This study assessed the impact of electronic-based assessments on examination scores in a required therapeutics course. METHODS This was a retrospective, single-center, observational study including second professional year pharmacy students enrolled in a required, one-semester therapeutics course. Four assessments were administered each semester. Lecture content and exam format, a mixture of multiple-choice questions and free-response written cases, did not differ significantly between years. Assessments administered during the first two years were printed on paper, while assessments administered during the third and fourth years of the study were all electronic, submitted through a classroom management system. Following institutional review board approval, the change in mean overall examination scores between paper and electronic-based assessments was analyzed. RESULTS Across the 948 students included in this study, there was no difference in overall mean scores between paper and electronic-based assessments (74.8% vs. 73.8%). In addition, there was no difference in mean scores between individual paper and electronic exams (Exams 1 through 4) or in overall multiple-choice or free-response scores between paper and electronic-based assessments. CONCLUSIONS Scores did not differ between paper and electronic-based assessments. From this study, the testing method does not appear to affect exam results.
Affiliation(s)
- Rebecca Stauffer
- St. Louis College of Pharmacy, 4588 Parkview Place, St. Louis, MO 63110, United States
- Jamie Pitlick
- St. Louis College of Pharmacy, 4588 Parkview Place, St. Louis, MO 63110, United States
- Laura Challen
- St. Louis College of Pharmacy, 4588 Parkview Place, St. Louis, MO 63110, United States
11
[Examinations while studying medicine - more than simply grades]. Wien Med Wochenschr 2018; 169:126-131. [PMID: 30084089] [DOI: 10.1007/s10354-018-0650-2]
Abstract
Assessment drives learning. Examinations need to be aligned primarily with the learning objectives, as well as the teaching and assessment methods, of the courses on offer. In doing so, various examination instruments are required to measure levels of competency that build on one another. An appropriate mix is essential to reflect the variety of learning outcomes of a chosen curriculum. Furthermore, examinations also possess the characteristics of evaluation: they reflect the knowledge and abilities of students and assess the teaching at a defined location. Digital examinations in the form of multiple-choice question (MCQ) testing enable a higher degree of automation and accelerate the creation and implementation of examinations and the evaluation of their results. Thus, they enjoy increasing popularity, provided that the technical requirements for large semester cohorts are met. Shifting examination processes to computers or tablets entails not only a wealth of new challenges but also opportunities.
12
Washburn S, Herman J, Stewart R. Evaluation of performance and perceptions of electronic vs. paper multiple-choice exams. Adv Physiol Educ 2017; 41:548-555. [PMID: 29066605] [DOI: 10.1152/advan.00138.2016]
Abstract
In the veterinary professional curriculum, methods of examination in many courses are transitioning from traditional paper-based exams to electronic-based exams. Therefore, a controlled trial to evaluate the impact of testing methodology on examination performance in a veterinary physiology course was designed and implemented. Formalized surveys and focus group discussions were also used to determine student attitudes toward the examination formats. In total, 134 first-year veterinary students and 11 PhD/MS students were administered 4 exams throughout 1 semester (2 on paper and 2 electronically) using a split-halves design. The paper (P) and electronic (E) exams contained 25 identical multiple-choice questions. Students were randomly assigned to two groups and were given exams in one of two sequences (E-P-E-P or P-E-P-E). Participants consented to and completed two anonymous surveys regarding their experience. Out of a maximum raw score of 25, the mean score for electronic examinations (20.8; 95% confidence interval, 20.3-21.2) was significantly (P = 0.01) greater than that for paper examinations (20.3; 95% confidence interval, 20.0-20.7). However, students expressed numerous concerns with the electronic examination format, and, at the completion of the study, 87% preferred to take their examinations on paper rather than in the electronic format. These data show that student attitudes concerning the examination format are not primarily determined by examination results, and that the additional anxiety related to the electronic format plays a large role.
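Because every student sat exams in both formats, a paired comparison of per-student means with a 95% CI of the difference is one natural way to picture this design. The sketch below uses simulated scores (the abstract does not name the exact test, so the paired t-test is an assumption):

```python
# Sketch of the mode comparison above: paired per-student means under both
# formats, with a 95% CI of the mean difference. Scores are simulated; the
# choice of a paired t-test is an assumption, not the authors' stated method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 145                                       # 134 DVM students + 11 PhD/MS students
paper = rng.normal(20.3, 2.0, n)              # mean of each student's two paper exams
electronic = paper + rng.normal(0.5, 1.5, n)  # electronic slightly higher on average

t_stat, p_value = stats.ttest_rel(electronic, paper)
diff = electronic - paper
ci = stats.t.interval(0.95, df=n - 1, loc=diff.mean(), scale=stats.sem(diff))
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, "
      f"95% CI of difference = ({ci[0]:.2f}, {ci[1]:.2f})")
```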
Affiliation(s)
- Shannon Washburn
- Department of Veterinary Physiology and Pharmacology and Michael E. DeBakey Institute, College of Veterinary Medicine and Biomedical Sciences, Texas A&M University, College Station, Texas
- James Herman
- Department of Veterinary Physiology and Pharmacology and Michael E. DeBakey Institute, College of Veterinary Medicine and Biomedical Sciences, Texas A&M University, College Station, Texas
- Randolph Stewart
- Department of Veterinary Physiology and Pharmacology and Michael E. DeBakey Institute, College of Veterinary Medicine and Biomedical Sciences, Texas A&M University, College Station, Texas
13
Computer-based versus paper-based testing: Investigating testing mode with cognitive load and scratch paper use. Comput Human Behav 2017. [DOI: 10.1016/j.chb.2017.07.044]
14
Cerutti B, Blondon K, Galetto A. Long-menu questions in computer-based assessments: a retrospective observational study. BMC Med Educ 2016; 16:55. [PMID: 26861755] [PMCID: PMC4748522] [DOI: 10.1186/s12909-016-0578-4]
Abstract
BACKGROUND Computer-based assessments of paediatrics in our institution use series of clinical cases in which information is progressively delivered to the students in sequential order. Three formats are mainly used: Type A (single answer), Pick N, and Long-menu. Long-menu questions rely on a long, hidden list of possible answers: based on the student's initial free-text response, the program narrows the list, allowing the student to select the answer. This study analyses the psychometric properties of Long-menu questions compared with the two other commonly used formats, Type A and Pick N. METHODS We reviewed the difficulty level and discrimination index of the items in the paediatric exams from 2009 to 2015, and compared the Long-menu questions with the Type A and Pick N questions using multiple-way analyses of variance. RESULTS Our dataset included 13 exam sessions with 855 students; 558 items were included in the analysis: 212 (38%) Long-menu, 201 (36%) Pick N, and 140 (25%) Type A items. There was a significant format effect associated with both the level of difficulty (p = .005) and the discrimination index (p < .001). Long-menu questions were easier than Type A questions (+5.2%; 95% CI 1.1-9.4%), and more discriminative than both Type A (+0.07; 95% CI 0.01-0.14) and Pick N (+0.10; 95% CI 0.05-0.16) questions. CONCLUSIONS Long-menu questions show good psychometric properties when compared with more common formats such as Type A or Pick N, though confirmatory studies are needed. They provide more variety, reduce the cueing effect, and thus may more closely reflect real-life practice than the other item formats inherited from paper-based examinations that are used in computer-based assessments.
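The two psychometric quantities analysed here have simple classical-test-theory definitions: difficulty as the proportion of correct answers, and discrimination as the correlation between an item and the rest-of-test score. The sketch below illustrates one common operationalization (the paper may use a different discrimination index); responses are simulated from a simple Rasch-style model, not the study data.

```python
# Illustrative item analysis: difficulty = proportion correct; discrimination
# = point-biserial correlation of each item with the rest-of-test score.
# Responses are simulated from a Rasch-style model; nothing here is study data.
import numpy as np

rng = np.random.default_rng(4)
ability = rng.normal(0, 1, 855)              # latent ability, one per student
item_b = rng.normal(0, 1, 40)                # latent difficulty, one per item
p_correct = 1 / (1 + np.exp(-(ability[:, None] - item_b[None, :])))
responses = (rng.random((855, 40)) < p_correct).astype(int)  # students x items

difficulty = responses.mean(axis=0)          # per-item proportion correct

def discrimination(resp: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score of the remaining items."""
    total = resp.sum(axis=1)
    return np.array([np.corrcoef(resp[:, j], total - resp[:, j])[0, 1]
                     for j in range(resp.shape[1])])

disc = discrimination(responses)
print(f"mean difficulty = {difficulty.mean():.2f}, "
      f"mean discrimination = {disc.mean():.2f}")
```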
Affiliation(s)
- Bernard Cerutti
- Unit of Research and Development in Medical Education, Faculty of Medicine, University of Geneva, 1 Rue Michel Servet, Geneva, 1211, Switzerland
- Katherine Blondon
- Division of General Internal Medicine, Geneva University Hospitals, Geneva, Switzerland
- Annick Galetto
- Division of Paediatric Emergency Medicine, Department of Child and Adolescent Medicine, Geneva University Hospitals, Geneva, Switzerland
15
Introducing Computer-Based Testing in High-Stakes Exams in Higher Education: Results of a Field Experiment. PLoS One 2015; 10:e0143616. [PMID: 26641632] [PMCID: PMC4671535] [DOI: 10.1371/journal.pone.0143616]
Abstract
The introduction of computer-based testing for high-stakes examining in higher education is developing rather slowly due to institutional barriers (the need for extra facilities, ensuring test security) and teacher and student acceptance. From the existing literature it is unclear whether computer-based exams yield results similar to paper-based exams and whether student acceptance can change as a result of administering computer-based exams. In this study, we compared results from a computer-based and a paper-based exam in a sample of psychology students and found no differences in total scores across the two modes. Furthermore, we investigated student acceptance of, and change in acceptance of, computer-based examining. After taking the computer-based exam, fifty percent of the students preferred paper-and-pencil exams over computer-based exams and about a quarter preferred computer-based exams. We conclude that computer-based exam total scores are similar to paper-based exam scores, but that for the acceptance of high-stakes computer-based exams it is important that students practice and become familiar with this new mode of test administration.
16
Karay Y, Schauber SK, Stosch C, Schüttpelz-Brauns K. Computer versus paper - does it make any difference in test performance? Teach Learn Med 2015; 27:57-62. [PMID: 25584472] [DOI: 10.1080/10401334.2014.979175]
Abstract
CONSTRUCT: In this study, we examine the differences in test performance between the paper-based and the computer-based version of the Berlin formative Progress Test. It is the first study in this context that allows controlling for students' prior performance. BACKGROUND Computer-based tests enable a more efficient examination procedure for test administration and review. Although university staff will benefit largely from computer-based tests, the question arises whether computer-based tests influence students' test performance. APPROACH A total of 266 German students from the 9th and 10th semester of medicine (comparable with the 4th-year North American medical school schedule) participated in the study (paper = 132, computer = 134). The allocation of the test format was conducted as a randomized matched-pair design in which students were first sorted according to their prior test results. The organizational procedure, the examination conditions, the room and seating arrangements, and the order of questions and answers were identical in both groups. RESULTS The sociodemographic variables and pretest scores of both groups were comparable. The test results from the paper and computer versions did not differ. Both groups remained within the allotted time, but students using the computer version (particularly the high performers) needed significantly less time to complete the test. In addition, we found significant differences in guessing behavior: low performers using the computer version guessed significantly more than low-performing students using the paper-pencil version. CONCLUSIONS Participants in computer-based tests are not at a disadvantage in terms of their test results, and the computer-based test required less processing time. The longer processing time for the paper-pencil version might be due to the time needed to write the answer down and to check that it was transferred correctly. It is still not known why students using the computer version (particularly low-performing students) guess at a higher rate. Further studies are necessary to understand this finding.
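The randomized matched-pair allocation can be pictured as: sort students by prior performance, pair neighbours in rank, and randomly assign one member of each pair to the computer format. The sketch below is an illustration of that idea with simulated prior scores, not the authors' allocation code.

```python
# Sketch of randomized matched-pair allocation: students are sorted by prior
# test performance, consecutive students form pairs, and one member of each
# pair is randomly assigned to the computer format. Prior scores are simulated.
import numpy as np

rng = np.random.default_rng(6)
prior_scores = rng.normal(60, 12, 266)       # prior progress-test results
order = np.argsort(prior_scores)             # sort students by prior performance

assignment = np.empty(266, dtype=object)
for a, b in order.reshape(-1, 2):            # neighbours in rank form a pair
    if rng.integers(0, 2):
        assignment[a], assignment[b] = "computer", "paper"
    else:
        assignment[a], assignment[b] = "paper", "computer"

print((assignment == "computer").sum(), "computer /",
      (assignment == "paper").sum(), "paper")  # 133 / 133 by construction
```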
Affiliation(s)
- Yassin Karay
- Dean's Office for Student Affairs, Medical Faculty of the University of Cologne, Köln, Germany
17
Hassanien MA, Al-Hayani A, Abu-Kamer R, Almazrooa A. A six step approach for developing computer based assessment in medical education. Med Teach 2013; 35 Suppl 1:S15-S19. [PMID: 23581891] [DOI: 10.3109/0142159x.2013.765542]
Abstract
Assessment, which entails the systematic evaluation of student learning, is an integral part of any educational process. Computer-based assessment (CBA) techniques provide a valuable resource to students seeking to evaluate their academic progress through instantaneous, personalized feedback. CBA reduces examination, grading and reviewing workloads and facilitates training. This paper describes a six-step approach for developing CBA in higher education and evaluates student perceptions of computer-based summative assessment at the College of Medicine, King Abdulaziz University. A set of questionnaires was distributed to 341 third-year medical students (161 female and 180 male) immediately after examinations in order to assess the adequacy of the system for the exam program. The respondents expressed high satisfaction with the first Saudi experience of CBA for final examinations. However, about 50% of them would have preferred a pilot CBA before its formal application; hence, many did not recommend its use for future examinations. Both male and female respondents reported that the range of advantages offered by CBA outweighed the disadvantages. Further studies are required to monitor the extended employment of CBA technology for larger classes and for a variety of subjects at universities.
Affiliation(s)
- Mohammed Ahmed Hassanien
- Medical Education Department, Faculty of Medicine, King Abdulaziz University, Jeddah 21589, P.O. Box 80205, Saudi Arabia