1. Moynahan KF. The Current Use of United States Medical Licensing Examination Step 1 Scores: Holistic Admissions and Student Well-Being Are in the Balance. Academic Medicine 2018;93:963-965. PMID: 29261538. DOI: 10.1097/acm.0000000000002101.
Abstract
United States Medical Licensing Examination Step 1 scores increasingly are being used by graduate medical education programs to "screen out" applicants to invite for an interview; medical students with scores below a certain (usually unpublished) point are not considered for an interview. Conversely, many students are extended an interview invitation based solely on their Step 1 scores, prior to the release of their Medical Student Performance Evaluation. In this Invited Commentary, the author discusses why this practice has gained popularity as well as the unintended effects it has had in multiple domains: holistic undergraduate medical education admissions practices, student well-being, and medical school curricula. Possible solutions are considered, including a pass/fail reporting structure, a national basic science curriculum, and better alignment between undergraduate and graduate medical education admissions processes and values. Through collaborative work between the Association of American Medical Colleges, the National Board of Medical Examiners, the Accreditation Council for Graduate Medical Education, and medical educators, an alternative, more holistic standardized metric by which to compare students' applications should be developed.
2. Winston KA, van der Vleuten CPM, Scherpbier AJJA. Prediction and prevention of failure: an early intervention to assist at-risk medical students. Medical Teacher 2014;36:25-31. PMID: 24083365. DOI: 10.3109/0142159x.2013.836270.
Abstract
BACKGROUND Consistent identification and prevention of failure for at-risk medical students is challenging, failing courses is costly to all stakeholders, and further research is needed into the duration, timing, and structure of interventions to help students in difficulty. AIMS To verify the value of a new exam two weeks into medical school as a predictor of failure, and to explore the requirements for a preventative intervention. METHODS Students who failed the two-week exam were invited to a series of large-group workshops and small-group follow-up meetings. Participants' subsequent exam performance was compared with that of non-participants. RESULTS About 71% of students who performed poorly on the new exam subsequently failed a course. Attendance at the workshops made no difference to short- or long-term pass rates. Attendance at more than three follow-up small-group sessions significantly improved pass rates two semesters later, an effect influenced by teacher experience. CONCLUSIONS Close similarity between the predictor task and the target task is important for accurate prediction of failure. Consideration should be given to dose effect and class size in preventing failure of at-risk students, and we recommend a systemic approach to intervention/remediation programmes, involving a whole semester of mandatory, weekly small-group meetings with experienced teachers.
3. Zhang J, Peterson RF, Ozolins IZ. Student approaches for learning in medicine: what does it tell us about the informal curriculum? BMC Medical Education 2011;11:87. PMID: 22013994. PMCID: PMC3209448. DOI: 10.1186/1472-6920-11-87.
Abstract
BACKGROUND It has long been acknowledged that medical students frequently focus their learning on what will enable them to pass examinations, and that they use a range of study approaches and resources in preparing for their examinations. A recent qualitative study identified that, in addition to the formal curriculum, students are using a range of resources and study strategies that could be attributed to the informal curriculum. What is not clearly established is the extent to which these informal learning resources and strategies are utilized by medical students. The aim of this study was to establish the extent to which students in a graduate-entry medical program use various learning approaches, apart from the resources offered as part of the formal curriculum, to assist their learning and preparation for examinations. METHODS A validated survey instrument was administered to 522 medical students. Factor analysis, internal consistency testing, descriptive analysis, and comparisons with demographic variables were completed. The factor analysis identified eight scales with acceptable levels of internal consistency (alpha coefficients between 0.72 and 0.96). RESULTS Nearly 80% of the students reported that they were overwhelmed by the amount of work perceived necessary to complete the formal curriculum, with 74.3% believing that informal learning approaches helped them pass the examinations and 61.3% believing that these approaches prepared them to be good doctors. Informal learning activities utilized by students included using past student notes (85.8%) and PBL tutor guides (62.7%), being part of self-organised study groups (62.6%), and attending peer-led tutorials (60.2%). Almost all students accessed the formal school resources for at least 10% of their study time. Students in the first year of the program were more likely to rely on the formal curriculum resources than those in Year 2 (p = 0.008).
CONCLUSIONS Curriculum planners should examine the level of use of informal learning activities in their schools, and investigate whether this use enhances student progress, results from perceived weaknesses in the delivery and effectiveness of formal resources, or serves to overcome anxiety about the volume of work expected by medical programs.
4. Weis R, Dean EL, Osborne KJ. Accommodation Decision Making for Postsecondary Students With Learning Disabilities: Individually Tailored or One Size Fits All? Journal of Learning Disabilities 2016;49:484-498. PMID: 25395372. DOI: 10.1177/0022219414559648.
Abstract
Clinicians uniformly recommend accommodations for college students with learning disabilities; however, we know very little about which accommodations they select and the validity of their recommendations. We examined the assessment documentation of a large sample of community college students receiving academic accommodations for learning disabilities to determine (a) which accommodations their clinicians recommended and (b) whether clinicians' recommendations were supported by objective data gathered during the assessment process. In addition to test and instructional accommodations, many clinicians recommended that students with learning disabilities should have different educational expectations, standards, and methods of evaluation (i.e., grading) than their nondisabled classmates. Many of their recommendations for accommodations were not supported by objective evidence from students' history, diagnosis, test data, and current functioning. Furthermore, clinicians often recommended accommodations that were not specific to the student's diagnosis or area of disability. Our findings highlight the need for individually selected accommodations matched to students' needs and academic contexts.
5. VanLangen KM, Meny LM, Bright DR, Seiferlein MR, Philip A, Sharma R, DiVall M. An initial environmental scan of APPE readiness assessment. Currents in Pharmacy Teaching & Learning 2020;12:771-775. PMID: 32540038. DOI: 10.1016/j.cptl.2020.02.015.
Abstract
INTRODUCTION Standards 2016 require schools/colleges of pharmacy (S/COP) to assess students' readiness to enter advanced pharmacy practice experiences (APPEs). However, literature describing how schools are meeting this standard is limited. The purpose of this study was to conduct an environmental scan to describe how S/COP assess student readiness to enter APPEs. METHODS A web-based survey was distributed to assessment leads at United States S/COP, regardless of accreditation status. Respondents answered questions related to their current approach to assessing student APPE readiness, the existence of intentional assessment plans, competencies used, assessment methods, benchmarks, and remediation strategies. Aggregate data were analyzed using descriptive statistics. RESULTS Fifty-two S/COP (36.1%) responded. The majority (90.1%) were fully accredited schools. Most respondents have an intentional APPE readiness plan (73.5%), although the duration since implementation varied. There was no consensus among schools on which competencies informed APPE readiness, with 67.3% listing Center for the Advancement of Pharmacy Education (CAPE) 2013 outcomes, 61.2% Guidance for Standards 2016 Appendix A, 53.1% pre-APPE domains (Standards 2007), and 30.6% Entrustable Professional Activities. Twenty-eight S/COP (57.1%) reported having individual student-level data to assess student APPE readiness. The most common methods for validating student APPE readiness were preceptor (48.9%) and student (44.9%) surveys. CONCLUSIONS This environmental scan begins to identify trends in how S/COP are approaching the assessment of student readiness to begin APPEs. Further research is needed to identify best practices and practical methods to ensure compliance with current accreditation standards.
6. Dell KA, Chudow MB. A web-based review game as a measure of overall course knowledge in pharmacotherapeutics. Currents in Pharmacy Teaching & Learning 2019;11:838-842. PMID: 31227200. DOI: 10.1016/j.cptl.2019.04.012.
Abstract
BACKGROUND AND PURPOSE Implementation of games is common in pharmacy education, but limited information exists about the relationship between students' game performance and course grades. Our study was designed to determine whether scores on a comprehensive web-based review game correlate with overall pharmacotherapeutics course and course series grades. EDUCATIONAL ACTIVITY AND SETTING Two cohorts of students in Pharmacotherapeutics IV, the last course in a four-course series, were administered a web-based quiz review game (Kahoot!) at the end of the course. Student performance on the game was compared to grades throughout the Pharmacotherapeutics course sequence to determine whether the game scores reflected the students' performance. All students enrolled in Pharmacotherapeutics IV in 2016 and 2017 were included in the analysis. Pearson correlation was performed between the scores from the review game and the grades in the pharmacotherapeutics course series. FINDINGS A total of 197 students, 111 in the 2016 cohort and 86 in the 2017 cohort, were included in the analysis. The correlation coefficient (r) between the review scores and the Pharmacotherapeutics IV course grade was 0.399 and 0.461 for the two cohorts, respectively (p < 0.001). Almost all component comparisons between the review scores and the other pharmacotherapeutics course grades were also significantly correlated (p < 0.01). SUMMARY Student performance on the review game significantly correlated with pharmacotherapeutics course grades. Review games are fun tools to review course content and can serve as an effective method to gauge student understanding, progression, and knowledge.
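The Pearson correlation used in this study can be sketched in a few lines; the game scores and course grades below are invented for illustration and are not data from the study.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Illustrative (hypothetical) Kahoot! scores and matching course grades:
game_scores = [5200, 6100, 4800, 7000, 6500]
course_grades = [82, 88, 75, 93, 90]
r = pearson_r(game_scores, course_grades)
```

A coefficient like the study's 0.399-0.461 would indicate a moderate positive association, well below the near-perfect correlation of the toy data above.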
7. Gibson CM, Kwon HI, Tatachar A. Impact of a low-cost simulated electronic medical record on perceptions of APPE readiness. Currents in Pharmacy Teaching & Learning 2019;11:736-741. PMID: 31227098. DOI: 10.1016/j.cptl.2019.03.013.
Abstract
BACKGROUND Meaningful use of electronic medical records (EMRs) is critical for providing high-quality, patient-centered care. However, many pharmacy students are not exposed to EMRs until the experiential components of the curriculum. EDUCATIONAL ACTIVITY AND SETTING We created a low-cost simulated EMR (SEMR) using Microsoft PowerPoint software (Microsoft, Redmond, WA, Version 16.16) to use in a case-based application course for second-year pharmacy students for two consecutive years. FINDINGS Pre- and post-assessment surveys of 162 students indicated that perceived confidence and efficiency navigating EMRs improved after the activity. Students agreed that the activity enhanced learning, improved understanding of how to extract meaningful data from EMRs, benefited their preparation for the fourth professional year, and demonstrated the role of informatics in patient care. SUMMARY Incorporation of a SEMR using Microsoft PowerPoint enhances student perceptions of proficiency in navigating the patient medical record. Adoption of similar activities into pharmacy curricula may be an attractive option when adequate financial resources for simulation are unavailable.
8. Afeli SA. Academic accommodation strategies for pharmacy students with learning disabilities: What else can be done? Currents in Pharmacy Teaching & Learning 2019;11:751-756. PMID: 31227188. DOI: 10.1016/j.cptl.2019.04.001.
Abstract
INTRODUCTION Colleges and schools of pharmacy in the US face the challenge of providing appropriate academic accommodations for students with learning disabilities (SLD), including those with dyslexia, dyscalculia, or dysgraphia. Although there are no population statistics on SLD in pharmacy schools, current data show that their number at higher education institutions across the US has increased. This commentary discusses how the rigorous pharmacy school curriculum affects SLD and proposes creative and innovative measures to overcome these obstacles. PERSPECTIVE The shift in public perception of learning disabilities in recent years, from a stigmatized state to a manageable condition, has greatly contributed to increasing the number of SLD enrolled in higher education institutions. Although most colleges and universities can now offer some type of academic accommodation to SLD, compared to a few years ago, other recommended academic accommodations remain challenging to implement, especially for SLD enrolled in pharmacy programs. IMPLICATIONS SLD enrolled in pharmacy programs could benefit from a multifaceted approach involving the school's office of disability services, medical practitioners, behavioral and language specialists, and curricular changes in order to be academically successful and practice ready upon graduation.
9. Sewitch MJ, Fallone CA, Ghali P, Lee GE. What Patients Want in a Smartphone App That Supports Colonoscopy Preparation: Qualitative Study to Inform a User-Centered Smartphone App. JMIR Mhealth Uhealth 2019;7:e12242. PMID: 31125310. PMCID: PMC6632098. DOI: 10.2196/12242.
Abstract
BACKGROUND The preparation for colonoscopy is elaborate and complex. In the context of colorectal cancer screening, up to 11% of patients do not keep their colonoscopy appointments and up to 33% of those attending their appointments have inadequately cleansed bowels that can delay cancer diagnosis and treatment. A smartphone app may be an acceptable and wide-reaching tool to improve patient adherence to colonoscopy. OBJECTIVE The aim of this qualitative study was to employ a user-centered approach to design the content and features of a smartphone app called colonAPPscopy to support individuals preparing for their colonoscopy appointments. METHODS We conducted 2 focus group discussions (FGDs) with gastroenterology patients treated at the McGill University Health Centre in Montreal, Canada. Patients were aged 50 to 75 years, were English- or French-speaking, and had undergone outpatient colonoscopy in the previous 3 months; they did not have inflammatory bowel disease or colorectal cancer. FGDs were 75 to 90 min, conducted by a trained facilitator, and audiotaped. Participants discussed the electronic health support tools they might use to help them prepare for the colonoscopy, the content needed for colonoscopy preparation, and the features that would make the smartphone app useful. Recordings of FGDs were transcribed and analyzed using thematic analysis to identify key user-defined content and features to inform the design of colonAPPscopy. RESULTS A total of 9 patients (7 male and 2 female) participated in one of 2 FGDs. Main content areas focused on bowel preparation instructions, medication restrictions, appointment logistics, communication, and postcolonoscopy expectations. Design features to make the app useful and engaging included minimization of data input, reminders and alerts for up to 7 days precolonoscopy, and visual aids. 
Participants wanted a smartphone app that comes from a trusted source, sends timely and tailored messages, provides reassurance, gives clear instructions, and is simple to use. CONCLUSIONS Participants identified the need for postcolonoscopy information as well as reminders and alerts in the week before colonoscopy: novel content and features that had not been included in previous smartphone-based strategies for colonoscopy preparation. The ability to tailor instructions made the smartphone app preferable to other modes of delivery. Study findings underscore the importance of including potential users in the development phase of building a smartphone app.
10. Zhang B, Kim S, Xiao Y, Damman C, Kelly K, Fitzpatrick D, Anderson C, Reynoldson M, Bray BS, Stewart AS, Daoud SS, Remsberg CM. A student initiative to implement peer-led study groups for a pharmacogenomics course: Evaluation of student performance and perceptions. Currents in Pharmacy Teaching & Learning 2020;12:549-557. PMID: 32336452. DOI: 10.1016/j.cptl.2020.01.013.
Abstract
INTRODUCTION To better elucidate the impact of cooperative learning outside the classroom, a student-initiated research project was conducted to explore the effects of participating in peer-led study groups (PLSGs) on student examination scores and perceptions. METHODS First-year pharmacy students were given the opportunity to participate in weekly PLSGs for a pharmacogenomics course during spring 2016 and spring 2017. Student exam performance was stratified by those who attended vs. those who did not. Optional pre- and post-course surveys examined student perceptions of PLSGs. RESULTS No significant differences were seen between the attendance groups in spring 2016. In spring 2017, student attendees were significantly more likely to pass two of their six exams (p = .04, p = .0029) and to have higher exam scores on one exam (p = .02) in comparison to non-attendees. Overall exam score averages were significantly different between attendees and non-attendees during spring 2017 (p = .03) but not during spring 2016 (p = .38). Perception surveys indicated students believed participation helped them to demonstrate competency and build confidence. Additionally, students reported they felt more comfortable clarifying questions during the study groups vs. during class time. CONCLUSIONS The impact of study group participation on student exam performance was minimal over the two years of data collection, but there were instances where exam scores were positively impacted. Students perceived value in study group participation even if it did not translate directly to improved exam performance on all exams.
11. Elder KG, Daugherty KK, Malcom DR. Predicting NAPLEX scores using student performance in the clinical skills laboratory setting. Currents in Pharmacy Teaching & Learning 2019;11:686-695. PMID: 31227091. DOI: 10.1016/j.cptl.2019.03.005.
Abstract
INTRODUCTION Many predictors of success on the North American Pharmacist Licensure Exam (NAPLEX) have been studied, but little information is available regarding the effect of assessments in the clinical skills setting on student success. One way to determine whether these assessments affect NAPLEX success is to review student performance in clinical skills laboratory courses (termed patient care lab ["PCL"] at our institution), which incorporate such assessments. METHODS Students (n = 88) enrolled in a four-quarter PCL sequence completed several individual assessments (both knowledge- and skills-based), including drug information (DI) quizzes/final exams, patient case presentations and associated critical thinking questions, and formal DI responses. Linear regression assessed the relationship between PCL assessments and NAPLEX scores. Statistical tests were performed using IBM SPSS version 22, with a p-value <0.05 considered significant. RESULTS While drug information responses were not predictive of total NAPLEX scores (p = 0.192), the patient case presentation and associated critical thinking questions explained 31.1% of the variability in total NAPLEX scores (p < 0.05). CONCLUSION Performance of one cohort of students on certain pharmacy clinical skills lab assessments was predictive of total scores on the NAPLEX. Students who struggle with assessments in the clinical skills lab setting may be targets for early intervention to help improve the likelihood of success. More research is needed to fully elucidate the relationship between assessments in the clinical skills lab setting and NAPLEX performance.
12. Schüttpelz-Brauns K, Kadmon M, Kiessling C, Karay Y, Gestmann M, Kämmer JE. Identifying low test-taking effort during low-stakes tests with the new Test-taking Effort Short Scale (TESS) - development and psychometrics. BMC Medical Education 2018;18:101. PMID: 29739405. PMCID: PMC5941641. DOI: 10.1186/s12909-018-1196-0.
Abstract
BACKGROUND Low-stakes tests are becoming increasingly important in international assessments of educational progress, and the validity of their results is essential, especially as these results are often used for benchmarking. Scores on such tests mirror not only students' ability but also their test-taking effort. One way to obtain more valid scores from participating samples is to identify test-takers with low test-taking effort and exclude them from further analyses. Self-assessment is a convenient and quick way of measuring test-taking effort. We present the newly developed Test-taking Effort Short Scale (TESS), which comprises three items measuring attainment value/intrinsic value, utility value, and perceived benefits, respectively. METHODS In a multicenter validation study with N = 1837 medical students sitting a low-stakes progress test, we analyzed item and test statistics, including construct and external validity. RESULTS TESS showed very good psychometric properties. We propose an approach using stanine norms to determine a cutoff value for identifying participants with low test-taking effort. CONCLUSION With just three items, TESS is shorter than most established self-assessment scales; it is thus well suited for administration after low-stakes progress testing. However, further studies are necessary to establish its suitability for routine use in assessment outside progress testing.
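The stanine-based cutoff idea can be illustrated with a short sketch. Stanines map standardized scores onto a nine-point scale with mean 5 and SD 2; the choice of stanines 1-2 as the "low effort" flag below is a hypothetical cutoff for illustration, not necessarily the one the authors derived.

```python
def stanine(z):
    """Map a z-standardized score onto the stanine scale
    (nine points, mean 5, SD 2, clamped to 1..9)."""
    return max(1, min(9, round(2 * z + 5)))

def low_effort(z, cutoff_stanine=2):
    """Flag a test-taker as low-effort if their standardized TESS score
    falls at or below the cutoff stanine (hypothetical cutoff)."""
    return stanine(z) <= cutoff_stanine

# A TESS z-score of -1.6 lands in stanine 2 and would be flagged;
# an average score (z = 0) lands in stanine 5 and would not.
```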
13. Mnatzaganian CL, Atayee RS, Namba JM, Brandl K, Lee KC. The effect of sleep quality, sleep components, and environmental sleep factors on core curriculum exam scores among pharmacy students. Currents in Pharmacy Teaching & Learning 2020;12:119-126. PMID: 32147152. DOI: 10.1016/j.cptl.2019.11.004.
Abstract
INTRODUCTION Sleep deprivation is associated with poor academic performance, although its impact on pharmacy students has been minimally reported. This study examined sleep quality in pharmacy students in the first (P1), second (P2), and third (P3) professional years during perceived low- and high-stress periods in a course. Individual sleep and environmental factors were also explored. METHODS This prospective cohort study used an 18-item survey adapted from the Pittsburgh Sleep Quality Index (PSQI) that included demographics, individual sleep components, and factors affecting sleep. Surveys were administered at the beginning of the quarter (low stress) and the week before final exams (high stress). Chi-square tests compared categorical variables; ANOVA/ANCOVA tests compared continuous variables. RESULTS During high stress, PSQI scores worsened in all classes, and the change was significant for the P3s. Average sleep duration was 6.64 (SD 1.18) and 6.8 (SD 1.18) hours per night for P1s and P3s, respectively, at the beginning of the quarter; both groups had a significant reduction in sleep duration at the end of the quarter. There were no significant correlations between PSQI and exam scores. Factors impacting sleep, such as exercise, use of technology at bedtime, and work hours outside of school, decreased during times of high stress for P1s, P2s, and P3s, respectively. CONCLUSIONS Students demonstrated worsening sleep quality during high-stress periods and less sleep than recommended. Academic performance was not adversely affected. Future research should use sleep logs and other performance measures to determine the impact of sleep quality on academic success and wellbeing.
14. Lubarsky S, Dory V, Meterissian S, Lambert C, Gagnon R. Examining the effects of gaming and guessing on script concordance test scores. Perspectives on Medical Education 2018;7:174-181. PMID: 29904900. PMCID: PMC6002294. DOI: 10.1007/s40037-018-0435-8.
Abstract
INTRODUCTION In a script concordance test (SCT), examinees are asked to judge the effect of a new piece of clinical information on a proposed hypothesis. Answers are collected using a Likert-type scale (ranging from -2 to +2, with '0' indicating no effect) and compared with those of a reference panel of 'experts'. It has been argued, however, that the SCT may be susceptible to the influences of gaming and guesswork. This study aims to address some of the mounting concern over the response process validity of SCT scores. METHOD Using published datasets from three independent SCTs, we investigated examinee response patterns and computed the score a hypothetical examinee would obtain on each of the tests if they 1) guessed random answers or 2) deliberately answered '0' on all test items. RESULTS A simulated random guessing strategy led to scores 2 SDs below the mean scores of actual respondents (Z-scores -3.6 to -2.1). A simulated 'all-0' strategy led to scores at least 1 SD above those obtained by random guessing (Z-scores -2.2 to -0.7). In one dataset, stepwise exclusion of items whose modal panel response was '0', until such items made up fewer than 10% of the total number of test items, brought the hypothetical 'all-0' scores down to 2 SDs below the mean scores of actual respondents. DISCUSSION Random guessing was not an advantageous response strategy. An 'all-0' response strategy, however, demonstrated evidence of artificial score inflation. Our findings pose a significant threat to the SCT's validity argument. 'Testwiseness' is a potential hazard in all testing formats, and appropriate countermeasures must be established. We propose an approach that might be used to mitigate a potentially real and troubling phenomenon in script concordance testing. The impact of this approach on the content validity of SCTs merits further discussion.
15. Volk AS, Rhudy AK, Marturano MN, Ott L, DuCoin C. Best Study Strategy for the NBME Clinical Science Surgery Exam. Journal of Surgical Education 2019;76:1539-1545. PMID: 31196769. DOI: 10.1016/j.jsurg.2019.05.012.
Abstract
INTRODUCTION Providing medical students with effective study strategies is paramount to fostering their success on the NBME Clinical Science Surgery exam. To date, there has been no comprehensive inquiry into study strategies for this exam. We aim to identify whether the following are associated with higher NBME raw exam scores: (1) the use of popular study materials, (2) the number of study materials used, and (3) the amount of time spent studying throughout the clerkship. METHODS Over one academic year, third-year medical students at our institution were administered a survey during their surgical clerkship inquiring about study materials used and hours spent studying per week. The data were matched to students' raw NBME scores and then depersonalized. A total of 82 of 193 (42%) students responded, with an overall average raw score of 76.74 on the NBME Clinical Science Surgery exam. Because the NBME national average is 70 with a standard deviation of 8, a z test was used to compare our sample means to the population mean. RESULTS When investigating resources used, the combination of an online question bank and a high-yield review book yielded a high z score (6.23), and using 4 resources yielded the highest z score (6.28). Regarding study hours, the highest z scores were seen when students studied for 6 to 10 hours per week during the first half of the clerkship and 11 to 15 hours per week during the second half (5.76 and 6.02, respectively). CONCLUSIONS Higher NBME Clinical Science Surgery exam scores were correlated with the use of multiple and varied types of resources and with increasing study time closer to the exam date. The resources and methods utilized by students achieving the highest exam scores can be recommended by surgical educators and employed by other medical students to foster academic success.
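The z test described above can be reproduced arithmetically from the abstract's numbers (national mean 70, SD 8); only the overall cohort (n = 82, mean 76.74) is computed here, since the subgroup means and sizes behind the reported subgroup z scores are not given.

```python
import math

def one_sample_z(sample_mean, n, pop_mean=70.0, pop_sd=8.0):
    """One-sample z statistic: distance of the sample mean from the
    NBME national mean, in standard-error units."""
    return (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))

# Overall cohort from the abstract: 82 respondents, mean raw score 76.74.
z_overall = one_sample_z(76.74, 82)
```

The abstract's subgroup z scores (e.g. 6.23 and 6.28) would come from the same formula applied to each subgroup's own mean and size.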
Collapse
|
|
6 |
4 |
16
|
Shah S, Peng I, Seifert CF. A model to predict NAPLEX outcomes and identify students needing additional preparation. CURRENTS IN PHARMACY TEACHING & LEARNING 2019; 11:810-817. [PMID: 31227196 DOI: 10.1016/j.cptl.2019.04.009] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/27/2018] [Revised: 02/10/2019] [Accepted: 04/14/2019] [Indexed: 06/09/2023]
Abstract
INTRODUCTION Studies have been conducted to identify factors that may predict North American Pharmacist Licensure Examination (NAPLEX) outcomes, but no single predictor or combination of predictors has been proposed that can be implemented reliably in academia. We aimed to develop a practical, measurable, and reliable NAPLEX outcomes predictive model. METHODS The study cohort consisted of students who graduated from 2012 to 2016, had taken the NAPLEX, and whose first-attempt examination scores were available to the school of pharmacy. Students were considered to have performed poorly on the NAPLEX if they received an overall score of 82 or less. Linear and logistic regression analyses were used to identify independent predictors. RESULTS Seventy of 433 (16.2%) students were identified as poor performers. Independent factors associated with a poor outcome on the NAPLEX were: age >28 years at graduation, Pharmacy College Admission Test scaled score <74, High Risk Drug Knowledge Assessment score <90, third-year Pharmacy Curriculum Outcome Assessment scaled score <349, and grades of <74 in more than three courses. These predictors were used to stratify students into four risk groups: Low, Intermediate-1, Intermediate-2, and High. Mean NAPLEX scores for these groups were 106.4, 97.4, 87.1, and 75.1, respectively. CONCLUSIONS The model can be used as a practical tool to identify students at risk for poor performance on the NAPLEX. Four of the five predictors could be generalizable to other schools of pharmacy.
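The five binary predictors lend themselves to a simple count-based stratification. A minimal sketch: the cutoffs come from the abstract, but the mapping from risk-factor count to the four named groups is a hypothetical illustration, since the paper's actual grouping rule is not given here:

```python
def count_risk_factors(age, pcat, hrdka, pcoa3, low_grades):
    """Count how many of the five published cutoffs a student meets."""
    flags = [
        age > 28,        # age at graduation
        pcat < 74,       # PCAT scaled score
        hrdka < 90,      # High Risk Drug Knowledge Assessment score
        pcoa3 < 349,     # third-year PCOA scaled score
        low_grades > 3,  # number of courses with a grade < 74
    ]
    return sum(flags)

def risk_group(n_factors):
    # Hypothetical mapping of factor count to the four named groups.
    groups = ["Low", "Intermediate-1", "Intermediate-2", "High", "High", "High"]
    return groups[n_factors]

# Hypothetical student meeting two cutoffs (age and PCAT).
print(risk_group(count_risk_factors(age=30, pcat=70, hrdka=95, pcoa3=360, low_grades=1)))
```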
Collapse
|
|
6 |
4 |
17
|
Dreher HM, Smith Glasgow ME, Schreiber J. The use of "high-stakes testing" in nursing education: Rhetoric or rigor? Nurs Forum 2019; 54:477-482. [PMID: 31273820 DOI: 10.1111/nuf.12363] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The expectations for clinically ready graduates have increased over the years, paralleling changes in the NCLEX exam, which now includes increasingly cognitively difficult items in preparation for professional nursing practice. Yet nursing program exit standardized exams have come under increased scrutiny in several public cases. Several articles are frequently cited in arguments against use of the HESI Exit Exam or other standardized nursing exit exams. We review two of these for logical coherence and adherence to standard psychometric practice, given their recurrent use in arguing against exit standardized testing in nursing programs. In light of patient safety implications and school accreditation consequences, it is all the more important that schools use exit standardized testing and assess the minimum competency of students based on an exit program standardized exam. Measurement standards are critical to these analyses. The use of a program exit standardized exam also signals to students that faculty are holding them accountable for their time in the program.
Collapse
|
|
6 |
3 |
18
|
Kirby RL. Survey of wheelchair skills tests: making the case for broadening the search and assessment criteria. Clin Rehabil 2011; 25:287. [PMID: 21321056 DOI: 10.1177/0269215510380833] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
Comment |
14 |
2 |
19
|
Talgar CP, Goodey NM. Views from academia and industry on skills needed for the modern research environment. BIOCHEMISTRY AND MOLECULAR BIOLOGY EDUCATION : A BIMONTHLY PUBLICATION OF THE INTERNATIONAL UNION OF BIOCHEMISTRY AND MOLECULAR BIOLOGY 2015; 43:324-332. [PMID: 26147139 DOI: 10.1002/bmb.20883] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/19/2014] [Accepted: 05/25/2015] [Indexed: 06/04/2023]
Abstract
Reports from employers of higher education graduates indicate the existence of a considerable gap between the skills required by employers and those possessed by recent graduates. As a first step toward closing this gap, this study aims to determine its origin. Interviews with nine research-active biochemistry professionals were used to identify the most important skills for biochemistry students to succeed in research positions postgraduation. The results of these interviews were used to develop a survey, which was then administered to a larger group of biochemistry faculty and industry professionals. The output of the survey was a list of 52 skills valued by biochemistry professionals and rated by perceived importance. Importantly, the survey results also afford a comparative look at the prioritization of skills by two key populations: the academic faculty training students and the industry professionals hiring them. While there are many areas of agreement between these two populations, the survey also reveals areas where priorities diverge. The discrepancies found here suggest that the skills gap that manifests at the point of employment may stem directly from differences in prioritization between the academic and industrial environments. This article aims to provide insight into the needs and requirements of the modern biochemical research environment, and invites debate concerning the preparation students receive in academia. Moreover, the results presented herein point to a need for further exploration of the possible misalignment of these two critical environments for young scientists.
Collapse
|
|
10 |
2 |
20
|
Mospan CM, Alexander KM. Teaching drug utilization review skills via a simulated clinical decision making exercise. CURRENTS IN PHARMACY TEACHING & LEARNING 2017; 9:282-287. [PMID: 29233414 DOI: 10.1016/j.cptl.2016.11.021] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/14/2016] [Revised: 08/27/2016] [Accepted: 11/25/2016] [Indexed: 06/07/2023]
Abstract
BACKGROUND Drug utilization review (DUR) is a central role of the pharmacist, especially within the community pharmacy setting. Previous literature has shown a risk of "alert fatigue", supporting the necessity for pharmacists to use a step-wise approach when evaluating drug therapy during the verification process. Many students are intimidated by this process and may lack verification practice or experience until their first day as a licensed pharmacist. EDUCATIONAL ACTIVITY AND SETTING An innovative skills-based laboratory exercise was developed for third-year pharmacy students to develop DUR skills. Through simulation of patient prescriptions, profiles, and drug information resources, students were tasked with completing a DUR for each patient case. Students were expected to evaluate the clinical significance of various drug-related problems, determine whether they would or would not dispense the prescription, and provide rationale for their decision. FINDINGS This learning activity was well received by the student population; however, students struggled with the volume of cases and with identifying a solution to the clinical scenario. On average, students required nine minutes per case, which is likely longer than community pharmacists can devote to a single DUR in practice. DISCUSSION In response to student challenges with the activity, the number of cases was condensed to highlight key concepts and cases that facilitated strong discussion. To improve students' approach to the DUR process, faculty developed a vodcast, watched prior to the activity, explaining a systematic approach to the DUR process as well as considerations a pharmacist should have. SUMMARY Development and integration of an active-learning, simulated dispensing activity allowed students to gain valuable experience completing the DUR process, a foundational community pharmacy practice skill; however, repeated experience should be provided to ensure competency.
Collapse
|
|
8 |
1 |
21
|
Monteiro S, Sibbald D, Coetzee K. i-Assess: Evaluating the impact of electronic data capture for OSCE. PERSPECTIVES ON MEDICAL EDUCATION 2018; 7:110-119. [PMID: 29488098 PMCID: PMC5889381 DOI: 10.1007/s40037-018-0410-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
INTRODUCTION Tablet-based assessments offer benefits over scannable-paper assessments; however, little is known about their impact on the variability of assessment scores. METHODS Two studies were conducted to evaluate changes in rating technology. Rating modality (paper vs tablets) was manipulated between candidates (Study 1) and within candidates (Study 2). Average scores were analyzed using repeated measures ANOVA, Cronbach's alpha, and generalizability theory. Post-hoc analyses included a Rasch analysis and McDonald's omega. RESULTS Study 1 revealed a main effect of modality (F (1,152) = 25.06, p < 0.01). Average tablet-based scores were higher (3.39/5, 95% CI = 3.28 to 3.51) compared with average scan-sheet scores (3.00/5, 95% CI = 2.90 to 3.11). Study 2 also revealed a main effect of modality (F (1, 88) = 15.64, p < 0.01); however, the difference was reduced to 2%, with higher scan-sheet scores (3.36, 95% CI = 3.30 to 3.42) compared with tablet scores (3.27, 95% CI = 3.21 to 3.33). Internal consistency (alpha and omega) remained high (>0.8) and inter-station reliability remained constant (0.3). Rasch analyses showed no relationship between station difficulty and rating modality. DISCUSSION Analyses of average scores may be misleading without an understanding of internal consistency and overall reliability of scores. Although updating to tablet-based forms did not result in systematic variations in scores, routine analyses ensured accurate interpretation of the variability of assessment scores. CONCLUSION This study demonstrates the importance of ongoing program evaluation and data analysis.
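The internal-consistency index reported here, Cronbach's alpha, can be computed directly from item-level scores. A minimal stdlib-only sketch (the data below are invented for illustration, not from the study):

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per item, all rated by the same candidates.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    item_vars = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - item_vars / variance(totals))

# Two perfectly parallel items give alpha = 1.0 (illustrative data only).
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # → 1.0
```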
Collapse
|
research-article |
7 |
1 |
22
|
Moynahan KF. The Current Use of United States Medical Licensing Examination Step 1 Scores: Holistic Admissions and Student Well-Being Are in the Balance. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2018; 93:963-965. [PMID: 29261538 DOI: 10.1097/acm.0000000000002101] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
United States Medical Licensing Examination Step 1 scores increasingly are being used by graduate medical education programs to "screen out" applicants to invite for an interview; medical students with scores below a certain (usually unpublished) point are not considered for an interview. Conversely, many students are extended an interview invitation based solely on their Step 1 scores, prior to the release of their Medical Student Performance Evaluation. In this Invited Commentary, the author discusses why this practice has gained popularity as well as the unintended effects it has had in multiple domains-holistic undergraduate medical education admissions practices, student well-being, and medical school curricula. Possible solutions are considered, including a pass/fail reporting structure, a national basic science curriculum, and better alignment between undergraduate and graduate medical education admissions processes and values. Through collaborative work between the Association of American Medical Colleges, the National Board of Medical Examiners, the Accreditation Council for Graduate Medical Education, and medical educators, an alternative, more holistic standardized metric by which to compare students' applications should be developed.
Collapse
|
|
7 |
1 |
23
|
Bullen K, Ainsley K, Davison K. Evaluating the numeracy skills of pharmacy students in multiple choice questions and free-text answer assessments and their perception of numeracy in clinical practice. CURRENTS IN PHARMACY TEACHING & LEARNING 2020; 12:648-655. [PMID: 32482266 DOI: 10.1016/j.cptl.2020.01.028] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/14/2019] [Revised: 11/18/2019] [Accepted: 01/30/2020] [Indexed: 06/11/2023]
Abstract
INTRODUCTION Numerical errors can cause unintentional harm to patients and are a burden to healthcare systems worldwide. There is a paucity of research regarding numerical competence in pharmacists and pharmacy students and how this should be assessed within undergraduate and licensure assessments. METHODS Two numeracy assessments were given to year three and four pharmacy students at the University of Sunderland, United Kingdom. One paper included ten multiple-choice questions (MCQs) and the second ten free-text answer questions. Participants were then given an evaluation questionnaire to explore their perceptions about the assessments and numeracy in clinical practice. RESULTS A response rate of 75% (n = 247) was achieved, with 60.9% of students passing the MCQ assessment and 27.9% passing the free-text answer assessment. There were statistically significant differences in pass rates depending on year of study, ethnicity, and previous mathematics qualifications. Participants were asked if numeracy was an important skill for pharmacists in practice; 57.9% thought it was essential and 36.4% quite important. However, only 1.4% felt sufficiently supported in the development of the required numeracy skills, and this has prompted a redesign of teaching and assessment at the university. CONCLUSIONS Educators need to ensure that the numeracy taught and assessed is reflective of and transferable to pharmacy practice, whilst ensuring students are supported effectively and engaged. This is likely to be achieved with integrated and clinically focused teaching approaches and appropriately constructed assessments throughout the pharmacy programme.
Collapse
|
|
5 |
1 |
24
|
Sera L, McPherson ML. Effect of a study skills course on student self-assessment of learning skills and strategies. CURRENTS IN PHARMACY TEACHING & LEARNING 2019; 11:664-668. [PMID: 31227088 DOI: 10.1016/j.cptl.2019.03.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/06/2018] [Revised: 01/23/2019] [Accepted: 03/11/2019] [Indexed: 06/09/2023]
Abstract
INTRODUCTION The purpose of this study was to determine whether a study skills course taken by first professional year pharmacy students improved their self-assessment of study skills and strategies. METHODS This study analyzed student responses to the Learning and Study Strategies Inventory (LASSI), an online assessment with questions in 10 subject areas: anxiety, motivation, concentration, test strategies, study aids, selecting main ideas, attitude, self-testing, information processing, and time management. Students in an elective study skills course in 2012-2017 completed the self-assessment prior to and at the end of the course. The Wilcoxon signed-rank test was performed to compare class scores pre- and post-course. RESULTS Over a five-year period, 312 students completed both the pre- and post-course LASSI assessment. Average percentile scores increased significantly from the beginning to the end of the course in all 10 areas. Notably, average pre-course scores in seven subject areas (attention, concentration, self-testing, selecting main ideas, study aids, time management, and test strategies) were all below the 50th percentile, indicating a need to improve those skills in order to achieve greater academic success. Average post-course scores in each area increased to between the 50th and 75th percentile. CONCLUSIONS This evaluation shows that a study skills course improves students' self-assessment of skills and attitudes associated with success in post-secondary education. Future studies will look at the effect of such a course on academic outcomes.
Collapse
|
|
6 |
|
25
|
Lenz H, Opitz A, Huber D, Jacobs F, Paik WG, Roche J, Fischer MR. Language Matters: Development of an Objective Structured Language Test for Foreign Physicians - Results of a Pilot Study in Germany. GMS JOURNAL FOR MEDICAL EDUCATION 2019; 36:Doc2. [PMID: 30828602 PMCID: PMC6390091 DOI: 10.3205/zma001210] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Figures] [Subscribe] [Scholar Register] [Received: 06/20/2018] [Revised: 12/05/2018] [Accepted: 12/19/2018] [Indexed: 06/09/2023]
Abstract
Objective: To develop a scientifically sound and standardized medical language examination for the State of Bavaria according to the requirements set forth by the 87th Conference of State Health Ministers. This Sprachtest für Ausländische Mediziner (SAM, Language Test for Foreign Physicians) is intended to become part of the licensing procedure for foreign physicians in Germany. Using situation-based testing stations, it will assess medical language competence and communication skills at proficiency level C1. Methods: Case scenarios for four mini-interviews of 10 minutes each were developed. For the written part of the exam, consisting of two separate testing stations with a combined duration of 40 minutes, one video of a physician taking a patient's history and one annotated set of laboratory results were developed. Based on the analysis of existing scientific literature as well as real-life examples, features and characteristics of professional medical language were identified. This served as the basis for the development of itemized rating scales for each of the testing stations. The exam was validated in three simulated trial runs. Each run was video-recorded and subsequently graded by a team of test-raters. Results: 19 participants took part in the three trial runs; a benchmark (gold standard) could be set for 18 of them. A ROC analysis yielded an AUC value of 0.83, confirming the predictive quality of the SAM test. The reliability of the SAM test could be calculated for only ten participants; the internal consistency, calculated using Cronbach's alpha, was 0.85. The pass/fail mark, calculated based on the Youden index, was >60%. Conclusion: The SAM test represents a statistically valid medical language examination with a high level of objectivity. As required, it tests language proficiency at level C1 and uses authentic communication scenarios within a standardized test setting. Additional studies with larger test samples will help to further validate this test and thus guarantee a higher degree of reliability.
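The Youden index used to set the pass/fail mark is sensitivity + specificity − 1, maximized over candidate cutoffs. A minimal sketch with invented scores and pass/fail labels (not the study's data):

```python
def best_cutoff(scores, passed):
    """Return (cutoff, J) maximizing Youden's J = sensitivity + specificity - 1,
    where a candidate is predicted to pass if score >= cutoff."""
    best = None
    for t in sorted(set(scores)):
        tp = sum(s >= t and p for s, p in zip(scores, passed))
        fn = sum(s < t and p for s, p in zip(scores, passed))
        tn = sum(s < t and not p for s, p in zip(scores, passed))
        fp = sum(s >= t and not p for s, p in zip(scores, passed))
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if best is None or j > best[1]:
            best = (t, j)
    return best

# Invented data: three passers scoring high, two failers scoring low;
# the optimal cutoff separates them perfectly (J = 1.0).
print(best_cutoff([90, 80, 70, 30, 20], [True, True, True, False, False]))
```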
Collapse
|
research-article |
6 |
|