1. Singh VK, Tiwari M, Singh S, Kumar S. Faculty Perception of Scenario-Based MCQs, SAQs, and MEQs in Medical Education at an Apex Institute. Medical Science Educator 2024;34:865-871. [PMID: 39099861; PMCID: PMC11296997; DOI: 10.1007/s40670-024-02052-6]
Abstract
Purpose This study explores the current knowledge and overall awareness of the faculty at an apex institute about the use and difficulties of scenario-based multiple-choice questions (SB-MCQs), short-answer questions (SB-SAQs), and modified essay questions (SB-MEQs) in the assessment of undergraduate and postgraduate students. Objectives To assess faculty perception of awareness and use of SB-MCQs, SB-SAQs, and SB-MEQs, and to understand the challenges encountered while designing scenario-based questions (SBQs) and the ways to overcome them. Study Procedure The data-collection tool was a Google Forms questionnaire with a total of 16 questions: 12 Likert-scale items and four open-ended questions. The quantitative data collected in response to the closed-ended questions were analyzed by descriptive statistics and percentage values; the open-ended questions underwent thematic analysis. Conclusion The study showed that the faculty are motivated and willing to switch from traditional questions to scenario-based questions, but constant training in the form of regular faculty development programs and workshops is required for effective implementation. At the administrative level, challenges such as insufficient faculty numbers and inadequate inter-departmental integration for designing scenarios must be addressed. Supplementary Information The online version contains supplementary material available at 10.1007/s40670-024-02052-6.
Affiliation(s)
- Veena K. Singh
- Department of Burns & Plastic Surgery, All India Institute of Medical Sciences, Patna, Bihar, India
- Meenakshi Tiwari
- Department of Biochemistry, All India Institute of Medical Sciences, Patna, Bihar, India
- Shruti Singh
- Department of Pharmacology, All India Institute of Medical Sciences, Patna, Bihar, India
- Santosh Kumar
- Department of Psychiatry, Nalanda Medical College Hospital, Patna, Bihar, India
2. Lee HY, Yune SJ, Lee SY, Im S, Kam BS. The impact of repeated item development training on the prediction of medical faculty members' item difficulty index. BMC Medical Education 2024;24:599. [PMID: 38816855; PMCID: PMC11140961; DOI: 10.1186/s12909-024-05577-x]
Abstract
BACKGROUND Item difficulty plays a crucial role in assessing students' understanding of the concept being tested, and the difficulty of each item needs to be carefully adjusted to ensure the evaluation achieves its objectives. Therefore, this study investigated whether repeated item development training for medical school faculty improves the accuracy of predicting item difficulty in multiple-choice questions. METHODS A faculty development program was implemented to enhance the prediction of each item's difficulty index, ensure the absence of item defects, and maintain the general principles of item development. The interrater reliability between the predicted, actual, and corrected item difficulty was assessed before and after the training, using either the kappa index or the correlation coefficient, depending on the characteristics of the data. A total of 62 faculty members participated in the training. Their predictions of item difficulty were compared with the analysis results of 260 items taken by 119 fourth-year medical students in 2016 and 316 items taken by 125 fourth-year medical students in 2018. RESULTS Before the training, significant agreement between the predicted and actual item difficulty indices was observed for only one medical subject, Cardiology (K = 0.106, P = 0.021). After the training, significant agreement was noted for four subjects: Internal Medicine (K = 0.092, P = 0.015), Cardiology (K = 0.318, P = 0.021), Neurology (K = 0.400, P = 0.043), and Preventive Medicine (r = 0.577, P = 0.039). Furthermore, significant agreement was observed between the predicted and actual difficulty indices across all subjects when analyzing the average difficulty of all items (r = 0.144, P = 0.043). Regarding the actual difficulty index by subject, Neurology exceeded the desired difficulty range of 0.45-0.75 in 2016; by 2018, however, all subjects fell within this range. CONCLUSION Repeated item development training that includes predicting each item's difficulty index can enhance faculty members' ability to predict and adjust item difficulty accurately. To ensure that examination difficulty aligns with its intended purpose, item development training can be beneficial, and further studies on faculty development are necessary to explore these benefits more comprehensively.
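For readers unfamiliar with the agreement statistic used in this abstract, the following is a minimal, hypothetical sketch (not the authors' code) of how predicted and actual difficulty indices can be binned into bands and compared with Cohen's kappa. Only the 0.45-0.75 "desired" range comes from the abstract; the three-band scheme, function names, and data are assumptions for illustration.

```python
# Hypothetical sketch: agreement between faculty-predicted and actual
# item difficulty, after binning both into bands, via Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

def band(difficulty):
    """Bin a difficulty index (proportion correct) into three bands.
    The 0.45-0.75 'desired' range follows the abstract; the banding
    itself is an assumption for illustration."""
    if difficulty < 0.45:
        return "hard"
    if difficulty <= 0.75:
        return "desired"
    return "easy"

predicted = [0.50, 0.80, 0.40, 0.60, 0.70, 0.30]  # faculty predictions (made up)
actual    = [0.55, 0.78, 0.35, 0.65, 0.50, 0.47]  # observed proportions (made up)

kappa = cohen_kappa_score([band(p) for p in predicted],
                          [band(a) for a in actual])
print(f"kappa = {kappa:.3f}")
```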
Affiliation(s)
- Hye Yoon Lee
- Division of Humanities and Social Medicine, Pusan National University School of Korean Medicine, Yangsan, Republic of Korea
- So Jung Yune
- Department of Medical Education, Pusan National University School of Medicine, Yangsan, Republic of Korea
- Sang Yeoup Lee
- Department of Medical Education, Pusan National University School of Medicine, Yangsan, Republic of Korea
- Family Medicine Clinic and Biomedical Research Institute, Pusan National University Yangsan Hospital, Yangsan, 50612, Republic of Korea
- Sunju Im
- Department of Medical Education, Pusan National University School of Medicine, Yangsan, Republic of Korea
- Bee Sung Kam
- Department of Medical Education, Pusan National University School of Medicine, Yangsan, Republic of Korea
3. Shammas M, Nagda S, Shah C, Baxi G, Gadde P, Sachdeva S, Gupta D, Wali O, Dhall RS, Gajdhar S. An assessment of preclinical removable prosthodontics based on multiple-choice questions: Stakeholders' perceptions. J Dent Educ 2024;88:533-543. [PMID: 38314889; DOI: 10.1002/jdd.13462]
Abstract
PURPOSE Item analysis of multiple-choice questions (MCQs) is an essential tool for identifying items that can be stored, revised, or discarded to build a quality MCQ bank. This study analyzed MCQs using item analysis to develop a pool of valid and reliable items and investigated stakeholders' perceptions of MCQs in a written summative assessment (WSA) based on this item analysis. METHODS In this descriptive study, 55 questions from each WSA in preclinical removable prosthodontics for fourth-year undergraduate dentistry students from 2016 to 2019 underwent item analysis. Items were categorized according to their difficulty index (DIF I) and discrimination index (DI). Students (2021-2022) were assessed using this question bank, and students' perceptions of the assessment and feedback from faculty members were collected using a questionnaire with a five-point Likert scale. RESULTS Of the 220 items, when both indices (DIF I and DI) were combined, 144 (65.5%) were retained in the question bank, 66 (30%) required revision before incorporation, and only 10 (4.5%) were discarded. The mean DIF I and DI values for the 220 MCQs were 69% (standard deviation [SD] = 19) and 0.22 (SD = 0.16), respectively. Mean questionnaire scores ranged from 3.50 to 4.04 for students and from 4 to 5 for faculty members, indicating that stakeholders tended to agree and strongly agree, respectively, with the proposed statements. CONCLUSION This study assisted the prosthodontics department in creating a set of prevalidated questions with known difficulty and discrimination capacity.
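As background to the two indices named above, here is an illustrative sketch of classical item analysis (not taken from the paper): the difficulty index is the proportion of examinees answering an item correctly, and the discrimination index contrasts correct-response rates in the top and bottom 27% of total scorers. The 27% convention, the function names, and the simulated data are assumptions.

```python
# Illustrative classical item analysis: difficulty index (DIF I) and
# discrimination index (DI) per item, using 27% upper/lower groups.
import numpy as np

def item_analysis(responses):
    """responses: 2-D 0/1 array, rows = students, columns = items."""
    responses = np.asarray(responses)
    totals = responses.sum(axis=1)                  # total score per student
    order = np.argsort(totals)                      # students sorted by score
    k = max(1, int(round(0.27 * len(totals))))      # size of upper/lower groups
    lower, upper = responses[order[:k]], responses[order[-k:]]
    dif = responses.mean(axis=0)                    # DIF I per item (0-1 scale)
    di = upper.mean(axis=0) - lower.mean(axis=0)    # DI per item
    return dif, di

# Hypothetical data: 100 students x 5 items, driven by a latent ability.
rng = np.random.default_rng(0)
ability = rng.normal(size=(100, 1))
scores = (ability + rng.normal(size=(100, 5)) > 0).astype(int)
dif, di = item_analysis(scores)
print("DIF I:", np.round(dif, 2), "DI:", np.round(di, 2))
```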
Affiliation(s)
- Mohammed Shammas
- Division of Prosthodontics, Department of Oral and Maxillofacial Rehabilitation, Ibn Sina National College for Medical Studies, Al Mahjar, Jeddah, Saudi Arabia
- Chinmay Shah
- Department of Physiology, Government Medical College, Bhavnagar, Gujarat, India
- Gaurang Baxi
- Dr. D. Y. Patil College of Physiotherapy, Dr. D. Y. Patil Vidyapeeth, Pune, Maharashtra, India
- Praveen Gadde
- Department of Public Health Dentistry, Vishnu Dental College, Bhimavaram, West Godavari (Dt), Andhra Pradesh, India
- Shabina Sachdeva
- Department of Prosthodontics, Faculty of Dentistry, Jamia Millia Islamia, New Delhi, India
- Deeksha Gupta
- Department of Prosthodontics, MP Dental College and Hospital, Vadodara, Gujarat, India
- Othman Wali
- Division of Periodontics, Department of Oral Basic and Clinical Sciences, Ibn Sina National College for Medical Studies, Al Mahjar, Jeddah, Saudi Arabia
- Rupinder Singh Dhall
- Department of Prosthodontics, Himachal Institute of Dental Sciences, Paonta Sahib, Himachal Pradesh, India
- Shaiq Gajdhar
- Division of Prosthodontics, Department of Oral and Maxillofacial Rehabilitation, Ibn Sina National College for Medical Studies, Al Mahjar, Jeddah, Saudi Arabia
4. Al Ameer AY. Assessment of the Quality of Multiple-Choice Questions in the Surgery Course for an Integrated Curriculum, University of Bisha College of Medicine, Saudi Arabia. Cureus 2023;15:e50441. [PMID: 38222171; PMCID: PMC10785735; DOI: 10.7759/cureus.50441]
Abstract
INTRODUCTION Multiple-choice questions (MCQs) have been recognized as reliable assessment tools, and incorporating clinical scenarios in MCQ stems has enhanced their effectiveness in evaluating knowledge and understanding. Item analysis is used to assess the reliability and consistency of MCQs, indicating their suitability as an assessment tool. This study also seeks to ensure the competence of graduates in serving the community and to establish an examination bank for the surgery course. OBJECTIVE This study aims to assess the quality and acceptability of MCQs in the surgery course at the University of Bisha College of Medicine (UBCOM). METHODS A psychometric study evaluated the quality of MCQs used in surgery examinations from 2019 to 2023 at UBCOM in Saudi Arabia. The MCQs/items were analyzed and categorized by their difficulty index (DIF), discrimination index (DI), and distractor efficiency (DE). Fifth-year MBBS students undergo a rotation in the department and are assessed at the end of 12 weeks; the assessment includes 60 MCQs/items and written items. Data were collected and analyzed using SPSS version 24. RESULTS A total of 189 students were examined across five test sessions, with 300 MCQ items. Student scores ranged from 28.33% to 90.0%, with an average score of 64.6%±4.35. The 300 MCQ items had a total of 900 distractors. The items' DIF was 75.3%, and 63.3% of the items showed good discrimination. No item had a negative point-biserial correlation. The mean number of functional distractors per test item was 2.19±1.007, with 34% of the items having three functional distractors. CONCLUSION The psychometric indices used to evaluate the MCQs in this study were encouraging, with acceptable DIF, distractor efficiencies, and item reliability. Robust faculty training and capacity building are recommended to enhance item-development skills.
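The distractor statistics reported above follow a standard convention, illustrated in the hedged sketch below (not the study's code): a distractor is counted as functional if at least 5% of examinees choose it, and distractor efficiency falls as non-functional distractors accumulate. The 5% cut-off, option labels, and response data are assumptions.

```python
# Hedged sketch of distractor analysis for one 4-option MCQ.
from collections import Counter

def distractor_stats(choices, key, options=("A", "B", "C", "D")):
    """choices: selected options, one per examinee; key: correct option."""
    n = len(choices)
    counts = Counter(choices)
    distractors = [o for o in options if o != key]
    # Functional distractor: chosen by >= 5% of examinees (common convention).
    functional = [d for d in distractors if counts[d] / n >= 0.05]
    nfd = len(distractors) - len(functional)        # non-functional distractors
    de = 100 * len(functional) / len(distractors)   # distractor efficiency, %
    return {"functional": functional, "NFDs": nfd, "DE_percent": de}

# Hypothetical responses to one item whose key is "A".
answers = ["A"] * 60 + ["B"] * 20 + ["C"] * 18 + ["D"] * 2
print(distractor_stats(answers, key="A"))
```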
Affiliation(s)
- Ahmed Y Al Ameer
- Department of Surgery, College of Medicine, University of Bisha, Bisha, Saudi Arabia
5. Dhanvijay AKD, Dhokane N, Balgote S, Kumari A, Juhi A, Mondal H, Gupta P. The Effect of a One-Day Workshop on the Quality of Framing Multiple Choice Questions in Physiology in a Medical College in India. Cureus 2023;15:e44049. [PMID: 37746478; PMCID: PMC10517710; DOI: 10.7759/cureus.44049]
Abstract
Background Multiple-choice questions (MCQs) are commonly used in medical exams to increase the objectivity of assessment. However, question quality must be optimal for proper assessment of students, and a faculty development program (FDP) may improve MCQ quality. The effect of a one-day workshop on framing MCQs as part of an FDP had not been explored in our institution. Aim This study aimed to evaluate the quality of MCQs in the subject of physiology before and after a one-day FDP workshop on framing MCQs. Methods This retrospective study was conducted in the Department of Physiology, All India Institute of Medical Sciences, Deoghar, Jharkhand, India. A one-day workshop on framing MCQs was conducted in March 2022 as part of an FDP. We took 100 MCQs, with student responses, from examinations conducted before the workshop and 100 MCQs, with student responses, from examinations conducted after the workshop; the same five faculty members framed the questions in both periods. Post-validation item analysis was performed, including the difficulty index (DIFI), discrimination index (DI), distractor effectiveness (DE), and the Kuder-Richardson Formula 20 (KR-20) for internal consistency. Results Pre-workshop and post-workshop MCQ quality remained comparable in terms of DIFI (χ²(3) = 2.42, P = 0.29), DI (χ²(3) = 2.44, P = 0.49), and DE (χ²(3) = 4.97, P = 0.17). The KR-20 was 0.65 pre-workshop and 0.87 post-workshop; both indicate acceptable internal consistency. Conclusion The one-day FDP workshop on framing MCQs did not significantly improve MCQ quality as measured by the three item-quality indices, but it did improve the internal consistency of the MCQs. Further educational programs and research are required to identify measures that can improve MCQ quality.
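For reference, the KR-20 internal-consistency statistic mentioned above can be computed as sketched below. This is a generic illustration under the assumption of dichotomously scored (0/1) items, not the authors' implementation; the simulated data are also an assumption.

```python
# Minimal KR-20 sketch:
# KR-20 = k/(k-1) * (1 - sum(p*q) / variance of total scores).
import numpy as np

def kr20(responses):
    """responses: 2-D 0/1 array, rows = students, columns = items."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    p = responses.mean(axis=0)                # proportion correct per item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var()   # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# Hypothetical data: 50 students x 20 items, driven by a latent ability.
rng = np.random.default_rng(1)
ability = rng.normal(size=(50, 1))
data = (ability + rng.normal(size=(50, 20)) > 0).astype(int)
print(f"KR-20 = {kr20(data):.2f}")
```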
Affiliation(s)
- Nitin Dhokane
- Physiology, Government Medical College, Sindhudurg, India
- Anita Kumari
- Physiology, All India Institute of Medical Sciences, Deoghar, India
- Ayesha Juhi
- Physiology, All India Institute of Medical Sciences, Deoghar, India
- Himel Mondal
- Physiology, All India Institute of Medical Sciences, Deoghar, India
- Pratima Gupta
- Microbiology, All India Institute of Medical Sciences, Deoghar, India
6. Adnan S, Sarfaraz S, Nisar MK, Jouhar R. Faculty perceptions on one-best MCQ development. Clinical Teacher 2023;20:e13529. [PMID: 36151738; DOI: 10.1111/tct.13529]
Abstract
OBJECTIVE The aim of this study was to determine the perceptions of faculty of undergraduate medical and dental programmes in various private and public sector institutes regarding their Readiness, Attitude and Institutional support for developing high-quality one-best MCQs. METHODS A validated questionnaire was designed to record demographic data and responses related to Readiness, Attitude and Institutional support, based on a 5-point Likert scale and multiple options. Scores for Likert-scale items were categorised (Readiness: poor 0-12, good 13-24; Attitude: negative 0-12, positive 13-24; Institutional support: no support 0-12, highly supportive 13-24). The individual and overall scores for Readiness, Attitude and Institutional support were compared across demographic characteristics using independent-samples and paired-samples t-tests as appropriate. Data were analysed using SPSS version 25.0, with a two-sided P-value of <0.05 considered significant. RESULTS With a response rate of 87.5%, the mean score for Institutional support (14.45 ± 4.73) was higher than those for Readiness (13.39 ± 4.51) and Attitude (12.54 ± 4.59). Responses to multiple-choice items revealed that faculty considered MCQ-writing workshops effective while facing the most difficulty in formulating scenarios and homogeneous options. Most faculty reported no commitment issues but desired protected on-the-job time for item development. No significant association was found between the scores and participants' age group, gender, qualification, institute type, department or designation. CONCLUSION Overall, the faculty were found to be motivated and committed to developing high-quality one-best MCQs. With continued institutional support, faculty can be expected to engage further in writing such items.
Affiliation(s)
- Samira Adnan
- Department of Operative Dentistry, Sindh Institute of Oral Health Science, Jinnah Sindh Medical University, Karachi, Pakistan
- Shaur Sarfaraz
- Institute of Medical Education, Jinnah Sindh Medical University, Karachi, Pakistan
- Muhammad Kashif Nisar
- Department of Biochemistry, Liaquat National Hospital and Medical College, Karachi, Pakistan
- Rizwan Jouhar
- Department of Restorative Dentistry and Endodontics, College of Dentistry, King Faisal University, Al-Ahsa, Saudi Arabia
7. Zahoor AW, Farooqui SI, Khan A, Kazmi SAM, Qamar N, Rizvi J. Evaluation of Cognitive Domain in Objective Exam of Physiotherapy Teaching Program by Using Bloom's Taxonomy. Journal of Health and Allied Sciences NU 2022. [DOI: 10.1055/s-0042-1755447]
Abstract
Objective Evaluation is a key factor in the development and growth of conceptual understanding in education, and improving students' cognitive levels depends heavily on the questions asked in examinations. The primary aim of this study was to analyze the cognitive level of physiotherapy exam papers using Bloom's taxonomy.
Material and Methods The study examined the 2019 mid-term examinations across all five years of the Doctor of Physical Therapy program at a private medical university. One thousand and eighty multiple-choice questions were evaluated against the revised Bloom's taxonomy of the cognitive domain.
Results Most questions asked of first- and second-year students targeted lower-order cognition, whereas the proportion of higher-order cognitive questions asked of third- to fifth-year students ranged from 27.5% to 38%.
Conclusion The analysis gauged the efficacy of the education being provided and helped identify subject content needing greater emphasis and clarification. Faculty should give more weight to higher-order cognitive questions to encourage critical thinking among students, and medical colleges should develop policies for constructing question papers according to the goals of each study year.
Affiliation(s)
- Al-Wardha Zahoor
- Ziauddin College of Rehabilitation Sciences, Ziauddin University, Karachi, Pakistan
- Amna Khan
- Ziauddin College of Rehabilitation Sciences, Ziauddin University, Karachi, Pakistan
- Naveed Qamar
- Physiotherapy Department, Aga Khan University Hospital, Karachi, Pakistan
- Jaza Rizvi
- Ziauddin College of Rehabilitation Sciences, Ziauddin University, Karachi, Pakistan
8. Belay LM, Sendekie TY, Eyowas FA. Quality of multiple-choice questions in medical internship qualification examination determined by item response theory at Debre Tabor University, Ethiopia. BMC Medical Education 2022;22:635. [PMID: 35989323; PMCID: PMC9394015; DOI: 10.1186/s12909-022-03687-y]
Abstract
BACKGROUND Assessment of cognitive competence is a major element of the internship qualification exam in undergraduate medical education in Ethiopia. Assessing the quality of exam items can help improve the validity of assessments and assure stakeholders of the accuracy of the go/no-go decision for internship. However, little is known about the quality of the exam items used to ascertain fitness to join the medical internship. Therefore, this study aimed to analyze the quality of multiple-choice questions (MCQs) of the qualification exam administered to final-year medical students at Debre Tabor University (DTU), Ethiopia. METHODS A psychometric study was conducted to assess the quality of 120 randomly selected MCQs and 407 distractors. Item characteristics were estimated using an item response theory (IRT) model. T-tests, one-way ANOVA, and chi-square tests were run to analyze univariate associations between factors, and Pearson's correlation test was done to determine the predictive validity of the qualification examination. RESULTS Overall, 16%, 51%, and 33% of the items had high, moderate, and low distractor efficiency, respectively. About two-thirds (65.8%) of the items had two or more functioning distractors, and 42.5% exhibited a desirable difficulty index. However, 77.8% of the items administered in the qualification examination had a negative or poor discrimination index. Four- and five-option items did not show significant differences in psychometric quality. The qualification exam positively predicted success in the national licensing examination (Pearson's correlation coefficient = 0.5). CONCLUSIONS The psychometric properties of the medical qualification exam were inadequate for making valid decisions. Five-option MCQs were not better than four-option MCQs in terms of psychometric quality. The qualification examination had positive predictive validity for future performance. High-stakes examination items must be properly created and reviewed before being administered.
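The abstract does not specify which IRT model was fitted; as a hedged illustration only, the sketch below shows the two-parameter logistic (2PL) item characteristic function often used in such analyses. The parameter values are hypothetical.

```python
# Hedged sketch of a 2PL item characteristic function: the probability
# that an examinee of ability theta answers an item correctly, given
# item discrimination a and difficulty b.
import math

def p_correct(theta, a, b):
    """P(correct | ability theta) under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: discrimination a = 1.2, difficulty b = 0.5.
for theta in (-2, -1, 0, 1, 2):
    print(f"theta = {theta:+d}  P(correct) = {p_correct(theta, 1.2, 0.5):.2f}")
```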
9. Pham H, Court-Kowalski S, Chan H, Devitt P. Writing Multiple Choice Questions: Has the Student Become the Master? Teaching and Learning in Medicine 2022:1-12. [PMID: 35491868; DOI: 10.1080/10401334.2022.2050240]
Abstract
CONSTRUCT We compared the quality of clinician-authored and student-authored multiple choice questions (MCQs) using a formative mock examination of clinical knowledge for medical students. BACKGROUND Multiple choice questions are a popular format in medical programs of assessment. A challenge for educators is creating high-quality items efficiently; for expediency's sake, a standard practice is for faculties to repeat examination items from year to year. This study compares the quality of student-authored and clinician-authored items to assess student-authored items as a potential source of new items for faculty item banks. APPROACH We invited Year IV and V medical students at the University of Adelaide to participate in a mock examination. Participants first completed an online instructional module on strategies for answering and writing MCQs, then each submitted one original MCQ for potential inclusion in the mock examination. Two 180-item mock examinations, one per year level, were constructed, each consisting of 90 student-authored and 90 clinician-authored items. Participants were blinded to the authorship of each item. Each item was analyzed for difficulty and discrimination, the number of item-writing flaws (IWFs) and non-functioning distractors (NFDs), and cognitive skill level (using a modified version of Bloom's taxonomy). FINDINGS Eighty-nine and 91 students completed the Year IV and V examinations, respectively. Student-authored items tended to be written at a lower cognitive skill and difficulty level than clinician-authored items, and they contained significantly higher rates of IWFs (2-3.5 times) and NFDs (1.18 times). However, they discriminated as well as or better than clinician-authored items. CONCLUSIONS Students can author MCQ items with discrimination comparable to clinician-authored items, despite being inferior on other parameters. Student-authored items may be considered a potential source of material for faculty item banks, although several barriers exist to their use in a summative setting. The overall quality of items remains suboptimal regardless of author, highlighting the need for ongoing faculty training in item writing.
Affiliation(s)
- Hannah Pham
- Adelaide Medical School, University of Adelaide, Adelaide, South Australia
- Stefan Court-Kowalski
- Adelaide Medical School, University of Adelaide, Adelaide, South Australia
- Royal Adelaide Hospital, Adelaide, South Australia
- Hong Chan
- SA Ambulance Service, Eastwood, South Australia
- Peter Devitt
- Adelaide Medical School, University of Adelaide, Adelaide, South Australia
10. Nguyentan DC, Gruenberg K, Shin J. Should multiple-choice questions get the SAQ? Development of a short-answer question writing rubric. Currents in Pharmacy Teaching & Learning 2022;14:591-596. [PMID: 35715099; DOI: 10.1016/j.cptl.2022.04.004]
Abstract
INTRODUCTION Short-answer questions (SAQs) are often used to assess pharmacy student competency. However, the literature lacks guidance on SAQ development strategies, resulting in varying practices among SAQ writers. Understanding student and faculty perceptions of what constitutes a high-quality SAQ can identify best practices for SAQ development. METHODS We surveyed second-year pharmacy students at the University of California San Francisco (UCSF) to assess their perceptions of SAQs. Likert-type data were descriptively analyzed, and open-ended responses were analyzed using thematic analysis; we used these results to draft an initial SAQ checklist. We then conducted focus groups with UCSF pharmacy faculty to explore their experiences writing SAQs. Transcripts were analyzed using the survey codebook and de novo codes to generate themes, and the focus group findings were used to finalize the checklist. RESULTS Seventy-five students (82%) completed the survey. Students identified "structure" (organizing into sections/lists) and "content" (clearly delineating the student's task) as two ways to improve SAQ quality. Eight faculty participated in focus groups of two to three participants each. Faculty expanded on these themes and identified a new one, "process," which included peer review of SAQs as well as the iterative process of writing the SAQ, model answer, and grading rubric. CONCLUSIONS Content, structure, and process were the three areas identified for improving SAQ quality at our institution. A checklist outlining best practices in these areas may be best implemented and adopted within the SAQ peer-review process.
Affiliation(s)
- Ducanhhoa-Crystal Nguyentan
- Pharmacy Practice Resident, Department of Pharmacy, School of Pharmacy, University of Washington, 1959 NE Pacific Street, Seattle, WA 98195-7630, United States
- Katherine Gruenberg
- Department of Clinical Pharmacy, School of Pharmacy, University of California, San Francisco, 521 Parnassus Avenue, Floor 3, San Francisco, CA 94143-0622, United States
- Jaekyu Shin
- Department of Clinical Pharmacy, School of Pharmacy, University of California, San Francisco, 521 Parnassus Avenue, Floor 3, San Francisco, CA 94143-0622, United States
11. Owolabi LF, Adamu B, Taura MG, Isa AI, Jibo AM, Abdul-Razek R, Alharthi MM, Alghamdi M. Impact of a longitudinal faculty development program on the quality of multiple-choice question item writing in medical education. Ann Afr Med 2021;20:46-51. [PMID: 33727512; PMCID: PMC8102895; DOI: 10.4103/aam.aam_14_20]
Abstract
Background: Like many other academic programs, medical education is incomplete without a robust assessment plan. Objective: The study aimed to evaluate the impact of a longitudinal faculty development program (FDP) on examination item quality (EIQ) for a cohort of medical college faculty members. Methods: Item analysis (IA) of multiple-choice questions (MCQs) from a cohort of medical tutors over a 3-year period (2017 [S1], 2018 [S2], and 2019 [S3]), before and following a once-per-week FDP, was conducted. The questions came from three randomly selected courses: man and his environment (MEV) from phase 1, central nervous system (CNS) from phase 2, and internal medicine (MED) from phase 3. Data assessed were 480 MCQs from the final exams in these courses. The parameters considered in IA were the difficulty index, index of discrimination, nonfunctional distractors (NFDs), and distractor efficiency for each question item, and Cronbach's alpha (CA) for each test as a whole. Comparisons over the 3 years were made using Fisher's exact test and repeated-measures ANOVA with the Bonferroni post hoc test. Results: Overall, 272 of the 480 MCQs had no NFD (52 [19.52%], 104 [38.24%], and 116 [42.65%] in 2017, 2018, and 2019, respectively), with a significant difference between S3, S2, and S1 (P < 0.0001). The mean CA values for the exams in S1, S2, and S3 were 0.51, 0.77, and 0.84, respectively (P < 0.0001). Conclusion: EIQ improved following implementation of the longitudinal FDP. Thus, the need for active training and retraining of faculty to achieve better EIQ cannot be overemphasized.
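Cronbach's alpha, used above as the whole-test reliability measure, generalizes KR-20 to items that need not be dichotomous. The following is a minimal sketch of the standard formula, not the authors' code; the simulated data are an assumption.

```python
# Illustrative Cronbach's alpha:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = examinees, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var()      # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical data: 80 examinees x 40 items, driven by a latent ability.
rng = np.random.default_rng(2)
ability = rng.normal(size=(80, 1))
data = (ability + rng.normal(size=(80, 40)) > 0).astype(int)
print(f"alpha = {cronbach_alpha(data):.2f}")
```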
Affiliation(s)
- Lukman Femi Owolabi
- Department of Medicine, University of Bisha Medical College, Bisha, Saudi Arabia
- Bappa Adamu
- Department of Medicine, University of Bisha Medical College, Bisha, Saudi Arabia
- Magaji Garba Taura
- Department of Anatomy, University of Bisha Medical College, Bisha, Saudi Arabia
- Adamu Imam Isa
- Department of Physiology, University of Bisha Medical College, Bisha, Saudi Arabia
- Abubakar Muhammed Jibo
- Department of Community Medicine, University of Bisha Medical College, Bisha, Saudi Arabia
- Reda Abdul-Razek
- Department of Medicine, University of Bisha Medical College, Bisha, Saudi Arabia
- Mushabab Alghamdi
- Department of Medicine, University of Bisha Medical College, Bisha, Saudi Arabia
12. Menon B, Miller J, DeShetler LM. Questioning the questions: Methods used by medical schools to review internal assessment items. MedEdPublish 2021;10:37. [PMID: 38486513; PMCID: PMC10939609; DOI: 10.15694/mep.2021.000037.1]
Abstract
Objective: Review of assessment questions to ensure quality is critical to properly assessing student performance. The purpose of this study was to identify the processes medical schools use to review questions for internal assessments. Methods: The authors recruited professionals involved in writing and/or reviewing questions for their medical school's internal assessments. The survey was administered electronically via an anonymous link, and participation was solicited through DR-ED, an electronic discussion group (listserv) for medical educators. Responses were collected over a two-week period, and one reminder was sent to increase the response rate. The instrument comprised one demographic question, two closed-ended questions, and two open-ended questions. Results: Thirty-nine respondents completed the survey, of whom 22 provided the name of their institution/medical school. No two self-identified respondents appeared to be from the same institution; participants represented institutions from across the United States, with two from other countries. The majority (n=32, 82%) of respondents indicated they had a process to review student assessment questions. Most reported that faculty and course/block directors were responsible for reviewing assessment questions, while some indicated a committee or group of faculty held that responsibility. Most reported focusing equally on content/accuracy, formatting, and grammar. Over 81% (n=22) of respondents indicated they used NBME resources to guide review, and fewer than 19% (n=5) utilized internally developed writing guides. Conclusions: This study identified that medical schools use a wide range of item review strategies and a variety of tools to guide their review. These results will give insight to other medical schools that do not have processes in place to review assessment questions or that are looking to expand upon current procedures.
13. Kowash M, Alhobeira H, Hussein I, Al Halabi M, Khan S. Knowledge of dental faculty in Gulf Cooperation Council states of multiple-choice questions' item writing flaws. Medical Education Online 2020;25:1812224. [PMID: 32835640; PMCID: PMC7482711; DOI: 10.1080/10872981.2020.1812224]
Abstract
Multiple-choice questions provide an objective, cost- and time-effective assessment. Deviation from appropriate question-writing guidelines commonly produces item-writing flaws that impair an assessment's ability to measure students' cognitive levels, thereby seriously affecting students' academic performance outcomes. This study aimed to gauge knowledge of multiple-choice item-writing flaws among dental faculty working at colleges in Gulf Cooperation Council (GCC) countries. A cross-sectional short online SurveyMonkey questionnaire, itself composed of multiple-choice questions, was disseminated to dental faculty working in GCC countries during the academic year 2018/2019. The questionnaire included five flawed test multiple-choice questions and one correctly constructed control question, and participants were asked to identify the flawed items with reference to the 14 known item-writing flaws. Of a total of 460 faculty, 216 respondents completed the questionnaire: 132 (61.1%) were from Saudi Arabia, while 59 (27.3%), 14 (6.5%), and 11 (5.1%) were from the United Arab Emirates, Kuwait, and Oman, respectively. The majority of participants were male (n = 141, 65.9%), compared with 73 females (34.1%). Eighty percent of the participants possessed more than five years of teaching experience, and assistant professors constituted the largest share (43.3%) of academic positions. The overall fail rate ranged from 76.3% to 98.1%, and almost two-thirds of the participants were unable to identify one or more of the flawed items. No significant association was observed between demographics (age, region, academic position, and specialty) and knowledge, except for participant gender (P < 0.009). GCC dental faculty demonstrated below-average knowledge of multiple-choice item-writing flaws. Training and workshops are needed to ensure substantial exposure to proper multiple-choice item construction standards.
Affiliation(s)
- Mawlood Kowash
- Pediatric Dentistry Department, Hamdan Bin Mohammed College of Dental Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
- Hazza Alhobeira
- Restorative Dentistry Department, Hail University, Hail, Saudi Arabia
- Iyad Hussein
- Pediatric Dentistry Department, Hamdan Bin Mohammed College of Dental Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
- Manal Al Halabi
- Pediatric Dentistry Department, Hamdan Bin Mohammed College of Dental Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
- Saif Khan
- Basic Dental and Medical Sciences Department, Hail University, Hail, Saudi Arabia
14. Moore WL. Does Faculty Experience Count? A Quantitative Analysis of Evidence-Based Testing Practices in Baccalaureate Nursing Education. Nurs Educ Perspect 2020;42:17-21. [PMID: 33230018; DOI: 10.1097/01.nep.0000000000000754]
Abstract
AIM This study explored the evidence-based testing practices of nurse faculty teaching in baccalaureate programs. BACKGROUND Faculty-developed examinations are important for determining progression in nursing programs, yet little is known about the faculty who implement such practices most often. METHOD A causal-comparative study with a convenience sample of 177 was used to answer the research question. Participants were grouped according to level of teaching experience based on Benner's novice-to-expert theory. Individual and group means were calculated for the best practices in test development survey, and one-way analysis of variance was used to identify significant differences between groups. RESULTS Expert faculty had higher overall mean scores than the other four groups, with significantly higher scores than both the advanced beginner (p = .007) and proficient (p = .020) groups. CONCLUSION Faculty with more experience appear to implement evidence-based testing practices most often. This information can be used to guide faculty development and peer-mentoring initiatives within nursing programs.
Affiliation(s)
- Wendy L. Moore
- ABSN Program Director, Department of Nursing, Utica College, Utica, New York
15. Gupta P, Meena P, Khan AM, Malhotra RK, Singh T. Effect of Faculty Training on Quality of Multiple-Choice Questions. Int J Appl Basic Med Res 2020;10:210-214. [PMID: 33088746; PMCID: PMC7534721; DOI: 10.4103/ijabmr.ijabmr_30_20]
Abstract
Background The multiple-choice question (MCQ) is a frequently used assessment tool in medical education, both for certification and competitive examinations. Ill-constructed MCQs compromise the utility of the assessment and thus the fate of the examinee. We conducted this study to ascertain whether a short training session for faculty on MCQ writing results in the desired improvement in their item-writing skills. Methods A 1-day workshop on constructing high-quality MCQs, built around a 3-hour training session, was conducted for faculty in a before-after design. Twenty-eight participants wrote pre-workshop (n = 133) and post-workshop (n = 137) MCQs, which were analyzed and compared against 17 item-writing flaws. A mock test of 100 MCQs (selected by stratified random sampling from all the MCQs generated during the workshop) was administered to MBBS graduates for item analysis. Results Item-writing flaws decreased following the training (15% vs. 27.7%, P < 0.05). Improvement mainly occurred in the quality of options; heterogeneity of options dropped from 27.1% before the workshop to 5.8% afterward. The proportion of MCQs failing the cover test remained similarly high (68.4% vs. 60.6%), and there was no improvement in the writing of stems. Item analysis did not reveal any significant improvement in facility value, discrimination index, or the proportion of nonfunctioning distractors. Conclusion A single, short faculty training session is not sufficient to correct flaws in MCQ writing. Focused faculty training in MCQ writing is needed; longer courses, supplemented by repeated or continuous faculty development programs, should be explored.
Affiliation(s)
- Piyush Gupta
- Department of Pediatrics, University College of Medical Sciences, Delhi, India
- Pinky Meena
- Department of Pediatrics, University College of Medical Sciences, Delhi, India
- Amir Maroof Khan
- Department of Community Medicine, Medical Education Unit, University College of Medical Sciences, Delhi, India
- Rajeev Kumar Malhotra
- Delhi Cancer Registry, Dr. BRA Institute Rotary Cancer Hospital, AIIMS, Delhi, India
- Tejinder Singh
- Department of Pediatrics and Medical Education, SGRD Institute of Medical Sciences and Research, Amritsar, Punjab, India
16. Danh T, Desiderio T, Herrmann V, Lyons HM, Patrick F, Wantuch GA, Dell KA. Evaluating the quality of multiple-choice questions in a NAPLEX preparation book. Currents in Pharmacy Teaching & Learning 2020;12:1188-1193. [PMID: 32739055; DOI: 10.1016/j.cptl.2020.05.006]
Abstract
INTRODUCTION There is a plethora of preparatory books and guides available to help study for the North American Pharmacist Licensure Examination (NAPLEX); however, the quality of the questions they include has not been scrutinized. Our objective was to evaluate the quality of multiple-choice question (MCQ) construction in a commonly used NAPLEX preparatory book. METHODS Five students and two faculty members reviewed MCQs from the RxPrep 2018 edition course book. Item structure and the utilization of case-based questions were evaluated against best practices for item construction, and the frequency of item-writing flaws (IWFs) and the utilization of cases in case-based questions were identified. RESULTS A total of 298 questions were reviewed. Twenty-seven (9.1%) questions met all best practices for item construction. Flawed questions contained an average of 2.53 IWFs per MCQ. The most commonly identified best-practice violations were answer choices of differing length and verb tense (21%) and question stems containing too little or too much of the information necessary to eliminate distractors (16.6%). Of the case-based questions, the majority (61.9%) did not require use of the provided case. CONCLUSIONS This pilot analysis identified that a majority of MCQs in one NAPLEX preparatory source contained IWFs. These results align with previous evaluations of test banks in published books outside pharmacy. Further evaluation of other preparatory materials is needed to gauge the pervasiveness of IWFs in preparatory materials and the effect of flawed questions on the utility of study materials.
Affiliation(s)
- Tina Danh
- University of South Florida Taneja College of Pharmacy, 12901 Bruce B. Downs Blvd, MDC 30, Tampa, FL 33612, United States
- Tamara Desiderio
- University of South Florida Taneja College of Pharmacy, 12901 Bruce B. Downs Blvd, MDC 30, Tampa, FL 33612, United States
- Victoria Herrmann
- University of South Florida Taneja College of Pharmacy, 12901 Bruce B. Downs Blvd, MDC 30, Tampa, FL 33612, United States
- Heather M Lyons
- University of South Florida Taneja College of Pharmacy, 12901 Bruce B. Downs Blvd, MDC 30, Tampa, FL 33612, United States
- Frankie Patrick
- University of South Florida Taneja College of Pharmacy, 12901 Bruce B. Downs Blvd, MDC 30, Tampa, FL 33612, United States
- Gwendolyn A Wantuch
- University of South Florida Taneja College of Pharmacy, 12901 Bruce B. Downs Blvd, MDC 30, Tampa, FL 33612, United States
- Kamila A Dell
- University of South Florida Taneja College of Pharmacy, 12901 Bruce B. Downs Blvd, MDC 30, Tampa, FL 33612, United States
17. Karthikeyan S, O’Connor E, Hu W. Motivations of assessment item writers in medical programs: a qualitative study. BMC Medical Education 2020;20:334. [PMID: 32993579; PMCID: PMC7523313; DOI: 10.1186/s12909-020-02229-8]
Abstract
BACKGROUND The challenge of generating sufficient quality items for medical student examinations is a common experience for medical program coordinators. Faculty development strategies are commonly used, but there is little research on the factors influencing medical educators to engage in item writing. To assist with designing evidence-based strategies to improve engagement, we conducted an interview study informed by self-determination theory (SDT) to understand educators' motivations to write items. METHODS We conducted 11 semi-structured interviews with educators in an established medical program. Interviews were transcribed verbatim and underwent open coding and thematic analysis. RESULTS Major themes included: responsibility for item writing and item writers' motivations, barriers, and enablers; perceptions of the level of content expertise required to write items; and differences in the writing process between clinicians and non-clinicians. CONCLUSIONS Our findings suggest that flexible item-writing training, strengthening of peer-review processes, and institutional improvements such as clearer communication of expectations, allocated time for item writing, and pairing new writers with experienced writers for mentorship could enhance writer engagement.
Affiliation(s)
- Sowmiya Karthikeyan
- School of Medicine, Western Sydney University, Narellan Road & Gilchrist Drive, Campbelltown, NSW 2560, Australia
- Elizabeth O’Connor
- School of Medicine, Western Sydney University, Narellan Road & Gilchrist Drive, Campbelltown, NSW 2560, Australia
- Wendy Hu
- School of Medicine, Western Sydney University, Narellan Road & Gilchrist Drive, Campbelltown, NSW 2560, Australia
18. Shaikh S, Kannan SK, Naqvi ZA, Pasha Z, Ahamad M. The Role of Faculty Development in Improving the Quality of Multiple-Choice Questions in Dental Education. J Dent Educ 2020;84:316-322. [PMID: 32176343; DOI: 10.21815/jde.019.189]
Abstract
Valid and reliable assessment of students' knowledge and skills is integral to dental education; however, most faculty members receive no formal training in student assessment techniques. The aim of this study was to quantify the value of a professional development program designed to improve the test item-writing skills of dental faculty members. A quasi-experimental (pretest, intervention, posttest) study was conducted with faculty members in the dental school of Majmaah University, Saudi Arabia. Data assessed were 450 multiple-choice questions (MCQs) from final exams in 15 courses in 2017 (prior to the intervention; pretest) and the same number in 2018 (after the intervention; posttest). The intervention was a faculty development program implemented in 2018 to improve the writing of MCQs. This training highlighted construct-irrelevant variance (the abnormal increase or decrease in test scores due to factors extraneous to the constructs of interest) and provided expert advice to rectify flaws. Item analysis of pre- and post-intervention MCQs determined the difficulty index, discrimination index, and proportion of non-functional distractors for each question, and the 2017 and 2018 exams were compared on each of these parameters. The results showed statistically significant improvements in MCQs from 2017 to 2018 on all parameters: MCQs with low discrimination decreased, those with high discrimination increased, and the proportion of questions with more than two non-functional distractors was reduced. These results provide evidence of improved test-item quality following implementation of a long-term faculty development program. Additionally, the findings underscore the need for an active dental education department and demonstrate its value for dental schools.
19. Philibert I. The International Literature on Teaching Faculty Development in English-Language Journals: A Scoping Review and Recommendations for Core Topics. J Grad Med Educ 2019;11:47-63. [PMID: 31428259; PMCID: PMC6697281; DOI: 10.4300/jgme-d-19-00174]
Abstract
BACKGROUND With increasing physician mobility, there is interest in how medical schools and postgraduate medical education institutions across the world develop and maintain the competence of medical teachers. Published reviews of faculty development (FD) have predominantly included studies from the United States and Canada. OBJECTIVE We synthesized the international FD literature (beyond the US and Canada), focusing on FD type, intended audience, study format, effectiveness, differences among countries, and potential unique features. METHODS We identified English-language publications that addressed FD for teaching and related activities among medical faculty, excluding US and Canadian publications. RESULTS A search of 4 databases identified 149 publications, including 83 intervention studies. There was significant growth in international FD publications in the most recent decade, and a sizable number of studies came from developing economies and/or resulted from international collaborations. Focal areas echo those in earlier published reviews, suggesting that the international FD literature addresses similar faculty needs and organizational concerns. CONCLUSIONS The growth in publications in recent years and a higher proportion of reporting on participant reactions, coupled with less frequent reporting of results, transfer to practice, and impact on learners and the organization, suggest this is an evolving field. To enhance international FD, educators and researchers should focus on addressing common needs expressed by faculty, including curriculum design and evaluation, small-group teaching, assessing professionalism, and providing feedback. Future research should focus on approaches for developing comprehensive institutional FD programs that include communities of learning and practice, and on evaluating their impact.
20. Karthikeyan S, O’Connor E, Hu W. Barriers and facilitators to writing quality items for medical school assessments: a scoping review. BMC Medical Education 2019;19:123. [PMID: 31046744; PMCID: PMC6498649; DOI: 10.1186/s12909-019-1544-8]
Abstract
BACKGROUND Producing a sufficient quantity of quality items for use in medical school examinations is a continuing challenge in medical education. We conducted this scoping review to identify barriers and facilitators to writing good quality items and to note gaps in the literature that are yet to be addressed. METHODS We searched three databases (ERIC, Medline and Scopus) as well as Google Scholar for empirical studies on the barriers and facilitators to writing good quality items for medical school examinations. RESULTS The initial search yielded 1997 articles; after applying pre-determined criteria, 13 articles were selected for the scoping review. Included studies could be broadly categorised into those that directly investigated the barriers and facilitators and those that provided implicit evidence. Key findings were that faculty development and quality assurance facilitated good quality item writing, while barriers at both the individual and institutional level included motivation, time constraints and scheduling. CONCLUSIONS Although studies identified factors that may improve or negatively impact the quality of items written by faculty and clinicians, there was limited research investigating the barriers and facilitators for individual item writers. Investigating these challenges could lead to more targeted and effective interventions to improve both the quality and quantity of assessment items.
Affiliation(s)
- Sowmiya Karthikeyan
- School of Medicine, Western Sydney University, Ainsworth Bldg, Goldsmith Ave, Campbelltown, NSW 2560, Australia
- Elizabeth O’Connor
- School of Medicine, Western Sydney University, Ainsworth Bldg, Goldsmith Ave, Campbelltown, NSW 2560, Australia
- Wendy Hu
- School of Medicine, Western Sydney University, Ainsworth Bldg, Goldsmith Ave, Campbelltown, NSW 2560, Australia
21. Scott KR, King AM, Estes MK, Conlon LW, Jones JS, Phillips AW. Evaluation of an Intervention to Improve Quality of Single-best Answer Multiple-choice Questions. West J Emerg Med 2019;20:11-14. [PMID: 30643595; PMCID: PMC6324722; DOI: 10.5811/westjem.2018.11.39805]
Abstract
Introduction Despite the ubiquity of single-best answer multiple-choice questions (MCQs) in assessments throughout medical education, question writers often receive little to no formal training, potentially decreasing the validity of assessments. While lengthy training opportunities in item writing exist, the availability of brief interventions is limited. Methods We developed and performed an initial validation of an item-quality assessment tool and measured the impact of a brief educational intervention on the quality of single-best answer MCQs. Results The item-quality assessment tool demonstrated moderate internal structure evidence when applied to the 20 practice questions (κ = 0.671, P < 0.001) and excellent internal structure when applied to the true dataset (κ = 0.904, P < 0.001). Quality scale scores for pre-intervention questions ranged from 2 to 6 with a mean ± standard deviation (SD) of 3.79 ± 1.23, while post-intervention scores ranged from 4 to 6 with a mean ± SD of 5.42 ± 0.69. The post-intervention scores were significantly higher than the pre-intervention scores, χ²(1) = 38, P < 0.001. Conclusion Our study demonstrated short-term improvement in single-best answer MCQ writing quality after a brief, open-access lecture, as measured by a simple, novel grading rubric with reasonable validity evidence.
Affiliation(s)
- Kevin R Scott
- Perelman School of Medicine at the University of Pennsylvania, Department of Emergency Medicine, Philadelphia, Pennsylvania
- Andrew M King
- The Ohio State University Wexner Medical Center, Department of Emergency Medicine, Columbus, Ohio
- Molly K Estes
- Loma Linda University Medical Center, Department of Emergency Medicine, Loma Linda, California
- Lauren W Conlon
- Perelman School of Medicine at the University of Pennsylvania, Department of Emergency Medicine, Philadelphia, Pennsylvania
- Jonathan S Jones
- Merit Health Central, Department of Emergency Medicine, Jackson, Mississippi
- Andrew W Phillips
- University of North Carolina, Department of Emergency Medicine, Chapel Hill, North Carolina
22
|
Smeby SS, Lillebo B, Gynnild V, Samstad E, Standal R, Knobel H, Vik A, Slørdahl TS. Improving assessment quality in professional higher education: Could external peer review of items be the answer? COGENT MEDICINE 2019. [DOI: 10.1080/2331205x.2019.1659746] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022] Open
Affiliation(s)
- Susanne Skjervold Smeby
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
| | - Børge Lillebo
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Clinic of Medicine and Rehabilitation, Levanger Hospital, Nord-Trøndelag Hospital Trust, Levanger, Norway
| | - Vidar Gynnild
- Department of Education and Lifelong Learning, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
| | - Eivind Samstad
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Clinic of Medicine and Rehabilitation, Ålesund Hospital, Møre og Romsdal Hospital Trust, Ålesund, Norway
| | - Rune Standal
- Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
| | - Heidi Knobel
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Department of Oncology, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
| | - Anne Vik
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Department of Neurosurgery, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
| | - Tobias S. Slørdahl
- Department of Clinical and Molecular Medicine, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Department of Haematology, St. Olavs Hospital, Trondheim University Hospital, Trondheim, Norway
| |
Collapse
|
23
|
Nayer M, Glover Takahashi S, Hrynchak P. Twelve tips for developing key-feature questions (KFQ) for effective assessment of clinical reasoning. MEDICAL TEACHER 2018; 40:1116-1122. [PMID: 30001652 DOI: 10.1080/0142159x.2018.1481281] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Clinical reasoning is the cognitive process that makes it possible for us to reach conclusions from clinical data. "A key feature (KF) is defined as a significant step in the resolution of a clinical problem. Examinations using key-feature questions (KFQs) focus on a challenging aspect in the diagnosis and management of a clinical problem where the candidates are most likely to make errors." KFs have been used at all levels of medical education and practice, from undergraduate to certification examinations. KFQs illuminate the strengths and limits of an individual's clinical problem-solving ability, and these items are more likely than other forms of assessment to discriminate between stronger and weaker candidates in the area of clinical reasoning. The 12 tips in this article provide guidance to faculty who wish to develop KFQs for their tests.
Collapse
Affiliation(s)
- Marla Nayer
- University of Toronto, Toronto, ON, Canada
| | | | | |
Collapse
|
24
|
Pawluk SA, Shah K, Minhas R, Rainkie D, Wilby KJ. A psychometric analysis of a newly developed summative, multiple choice question assessment adapted from Canada to a Middle Eastern context. CURRENTS IN PHARMACY TEACHING & LEARNING 2018; 10:1026-1032. [PMID: 30314537 DOI: 10.1016/j.cptl.2018.05.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/27/2017] [Revised: 02/12/2018] [Accepted: 05/10/2018] [Indexed: 06/08/2023]
Abstract
INTRODUCTION Accreditation necessitates that assessment methods reflect the standards established by the accrediting body. Adapting assessments to a new context can present unique challenges, with uncertainty around the psychometric defensibility of the adapted exam. METHODS A psychometric analysis of a summative multiple-choice question (MCQ) assessment, adapted from Canada, for graduating pharmacy students in a Canadian-accredited program in Qatar was conducted. Rates of difficult items, item discrimination measured by point-biserial correlation (rpb), and non-functioning distractors (NFDs) were calculated to identify deficiencies and challenges with the adapted exam. Challenges encountered throughout the adaptation process and the resulting recommendations were documented. RESULTS Overall scores on the 90-item, four-option MCQ exam ranged from 46.7% to 78.9% (mean 61.9%). For difficulty, 17 items were answered correctly by fewer than 30% of students, while 29 items had unacceptable or poor discrimination (rpb < 0.1). NFDs occurred in 78 items, with 49 containing at least two NFDs. DISCUSSION AND CONCLUSIONS The difficulty of the exam was deemed acceptable, yet its discriminatory ability required improvement. The high frequency of questions with NFDs suggests that faculty have difficulty developing plausible distractors for an adapted MCQ exam. This could be due to a lack of training or a requirement to include too many distractor options. While it is feasible to implement an assessment adapted from a different learning environment, measures need to be taken to improve psychometric defensibility. The high number of questions with NFDs indicates that the current method of exam development does not encourage the incorporation of functional distractors.
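The item statistics reported here, difficulty (proportion correct) and point-biserial discrimination, are simple to reproduce from a scored response matrix. A minimal sketch follows, assuming a students × items array of 0/1 correctness; the function name and the tiny example matrix are illustrative, and the flagging thresholds mirror those in the abstract (fewer than 30% correct; rpb < 0.1).

import numpy as np

def item_statistics(scores):
    # scores: (n_students, n_items) array of 0/1 item correctness
    p = scores.mean(axis=0)      # difficulty index: proportion answering correctly
    total = scores.sum(axis=1)   # each student's total score
    rpb = np.empty(scores.shape[1])
    for j in range(scores.shape[1]):
        rest = total - scores[:, j]  # item-total score excluding the item itself
        # Pearson r between a 0/1 item and a continuous total is the point-biserial.
        rpb[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return p, rpb

# Hypothetical 5-student, 3-item matrix for illustration only:
scores = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [1, 0, 0], [0, 1, 1]])
p, rpb = item_statistics(scores)
print("too difficult (p < 0.30):", np.where(p < 0.30)[0])
print("poor discrimination (rpb < 0.1):", np.where(rpb < 0.1)[0])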
Collapse
Affiliation(s)
| | - Kieran Shah
- Faculty of Pharmaceutical Sciences, University of British Columbia, 2405 Wesbrook Mall, V6T 1Z3, Vancouver, British Columbia, Canada.
| | - Rajwant Minhas
- Faculty of Pharmaceutical Sciences, University of British Columbia, 2405 Wesbrook Mall, V6T 1Z3, Vancouver, British Columbia, Canada.
| | - Daniel Rainkie
- College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar.
| | - Kyle John Wilby
- College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar.
| |
Collapse
|
25
|
Effectiveness of longitudinal faculty development programs on MCQs items writing skills: A follow-up study. PLoS One 2017; 12:e0185895. [PMID: 29016659 PMCID: PMC5634605 DOI: 10.1371/journal.pone.0185895] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2017] [Accepted: 09/21/2017] [Indexed: 11/20/2022] Open
Abstract
This study examines the long-term impact of faculty development programs on the quality of multiple choice question (MCQ) items and, through that, on students' overall competency levels in their yearly academic assessments. A series of longitudinal, highly structured faculty development workshops was conducted to improve MCQ item-writing skills. A total of 2207 MCQs were constructed by 58 participants to assess the cognitive competency of 882 students during the academic years 2012–2015. The MCQs were analyzed for difficulty index (P-value), discriminating index (DI), presence/absence of item writing flaws (IWFs), non-functioning distractors (NFDs), Bloom's taxonomy cognitive levels, test reliability, and students' scoring rates. Significant improvement in the difficulty index and DI was noticed in each successive academic year. Easy and poorly discriminating questions, NFDs and IWFs decreased significantly, whereas the mean distractor efficiency (DE) score and the proportion of high cognitive level (K2) questions increased substantially in each successive academic year. Improved MCQ quality led to an increased competency level among borderline students. Overall, the longitudinal faculty development workshops improved faculty members' MCQ item-writing skills, which in turn raised students' competency levels.
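Non-functioning distractors and distractor efficiency, two of the metrics tracked in this study, can be computed directly from response frequencies. Below is a minimal sketch using the conventional cutoff (a distractor chosen by fewer than 5% of examinees is non-functioning); the function name, option labels, and response string are assumptions made purely for illustration.

import numpy as np

def nfd_and_de(choices, key, options=("A", "B", "C", "D"), cutoff=0.05):
    # choices: the option each examinee selected for one item
    # key: the correct option for that item
    distractors = [o for o in options if o != key]
    # A distractor is non-functioning if chosen by < cutoff of examinees.
    nfd = sum(np.mean(choices == o) < cutoff for o in distractors)
    de = 100 * (len(distractors) - nfd) / len(distractors)  # % of distractors that function
    return nfd, de

# Hypothetical responses for one four-option item (not the study's data):
choices = np.array(list("AAABACAABAAAAABAACAA"))
print(nfd_and_de(choices, key="A"))  # -> (1, 66.7): option D, never chosen, is non-functioning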
Collapse
|
26
|
Walsh JL, Harris BHL, Denny P, Smith P. Formative student-authored question bank: perceptions, question quality and association with summative performance. Postgrad Med J 2017; 94:97-103. [PMID: 28866607 PMCID: PMC5800328 DOI: 10.1136/postgradmedj-2017-135018] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2017] [Revised: 07/31/2017] [Accepted: 07/31/2017] [Indexed: 11/30/2022]
Abstract
Purpose of the study There are few studies on the value of authoring questions as a study method, the quality of the questions produced by students, or student perceptions of student-authored question banks. Here we evaluate PeerWise, a widely used and free online resource that allows students to author, answer and discuss multiple-choice questions. Study design We introduced two undergraduate medical student cohorts to PeerWise (n=603). We looked at their patterns of PeerWise usage; identified associations between student engagement and summative exam performance; and used focus groups to assess student perceptions of the value of PeerWise for learning. We undertook item analysis to assess question difficulty and quality. Results Over two academic years, the two cohorts wrote 4671 questions, answered questions 606 658 times and posted 7735 comments. Question-writing frequency correlated most strongly with summative performance (Spearman's rank: 0.24, p<0.001). Student focus groups found that: (1) students valued curriculum specificity; and (2) students were concerned about the quality of student-authored questions. Only two of the 300 'most-answered' questions analysed had an unacceptable discriminatory value (point-biserial correlation <0.2). Conclusions Item analysis suggested acceptable question quality despite student concerns. Quantitative and qualitative methods indicated that PeerWise is a valuable study tool.
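The headline association, question-writing frequency versus summative performance, is a Spearman rank correlation, which can be reproduced in a few lines. The sketch below uses made-up per-student inputs for illustration; the actual study correlated each student's count of authored questions with their exam result, and its point-biserial screen (flagging items below 0.2) follows the same pattern via scipy.stats.pointbiserialr.

from scipy.stats import spearmanr

# Hypothetical per-student data (illustration only, not the study's 603 students):
questions_written = [0, 2, 5, 1, 9, 3, 0, 7, 4, 6]
exam_scores = [55, 60, 71, 58, 80, 66, 52, 74, 69, 70]

rho, p_value = spearmanr(questions_written, exam_scores)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3g}")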
Collapse
Affiliation(s)
- Jason L Walsh
- Centre for Medical Education, Cardiff University, Cardiff, UK
| | | | - Paul Denny
- Department of Computer Science, University of Auckland, Auckland, New Zealand
| | - Phil Smith
- Centre for Medical Education, Cardiff University, Cardiff, UK
| |
Collapse
|
27
|
Tariq S, Tariq S, Maqsood S, Jawed S, Baig M. Evaluation of Cognitive levels and Item writing flaws in Medical Pharmacology Internal Assessment Examinations. Pak J Med Sci 2017; 33:866-870. [PMID: 29067055 PMCID: PMC5648954 DOI: 10.12669/pjms.334.12887] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
OBJECTIVES This study aimed to evaluate the cognitive levels of Multiple Choice Questions (MCQs) and Short Answer Questions (SAQs), and the types of Item Writing Flaws (IWFs) in MCQs, in Medical Pharmacology internal assessment exams. METHODS This descriptive study was conducted over a period of six months, from December 2015 to May 2016, and evaluated six internal assessment examinations comprising SAQs and MCQs. A total of 150 MCQs and 43 SAQs administered to third-year medical students in 2015 were analyzed. All SAQs were reviewed for their cognitive levels, and MCQs were reviewed for cognitive levels as well as for IWFs. Items were classified as flawed if they contained one or more flaws. The cognitive level of the questions was determined by modified Bloom's taxonomy. RESULTS The proportion of flawed items across the six exams ranged from 16% to 52%, with 28% of all items flawed. The most common types of flaws were implausible distractors (26; 19.69%), extra detail in the correct option (24; 18.18%), vague terms (13; 9.85%), unfocused stems (12; 9.09%) and absolute terms (12; 9.09%). Nearly two-thirds of MCQs (97; 64.67%) assessed recall of information, while 29 (19.33%) and 24 (16%) assessed interpretation of data and problem-solving skills, respectively. The majority of SAQs (90.7%) assessed recall of information and only 9.3% assessed interpretation of data, while none assessed problem-solving skills. CONCLUSIONS The cognitive level of the assessment tools (SAQs and MCQs) is low, and IWFs are common in the MCQs. Therefore, faculty should be encouraged and trained to design problem-solving questions that are free of flaws.
Collapse
Affiliation(s)
- Saba Tariq
- Dr. Saba Tariq, MBBS, M.Phil, Assistant Professor, Pharmacology, University Medical & Dental College, Faisalabad, Pakistan
| | - Sundus Tariq
- Dr. Sundus Tariq, MBBS, M.Phil, Assistant Professor, Physiology, University Medical & Dental College, Faisalabad, Pakistan
| | - Sadia Maqsood
- Dr. Sadia Maqsood, MBBS, M.Phil, Senior Demonstrator, Pharmacology, Shaikh Zayed Postgraduate Medical Institute, Shaikh Zayed Hospital, Lahore, Pakistan
| | - Shireen Jawed
- Dr. Shireen Jawed, MBBS, M.Phil, Assistant Professor, Physiology, Aziz Fatima Medical College, Faisalabad, Pakistan
| | - Mukhtiar Baig
- Dr. Mukhtiar Baig, MBBS, M.Phil, PhD, Professor of Clinical Biochemistry, Faculty of Medicine, Rabigh, King Abdulaziz University, Jeddah, KSA
| |
Collapse
|
28
|
Hijji BM. Flaws of Multiple Choice Questions in Teacher-Constructed Nursing Examinations: A Pilot Descriptive Study. J Nurs Educ 2017; 56:490-496. [PMID: 28787072 DOI: 10.3928/01484834-20170712-08] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2017] [Accepted: 03/02/2017] [Indexed: 11/20/2022]
Abstract
BACKGROUND In many Middle Eastern universities, English is the medium of instruction and testing. As nurse educators construct multiple choice questions (MCQs), it is essential that items are developed to be valid and reliable assessments of student learning. METHOD This study examined the structure of 98 MCQs included in nursing examinations at three Middle Eastern universities, using a checklist composed of 22 literature-based principles. RESULTS Ninety MCQs (91.8%) contained one or more item-writing flaws, including linguistic errors and various problems with the stem and answer options. Of importance, most faculty did not use item analysis to assess the integrity of the examinations. CONCLUSION Results confirm concerns about the standards faculty use for test construction and item analysis. Universities must ensure that the faculty they hire are fluent in English. Faculty would also benefit from workshops that focus on test construction and the use of item analysis. [J Nurs Educ. 2017;56(8):490-496.].
Collapse
|
29
|
Alamoudi AA, El-Deek BS, Park YS, Al Shawwa LA, Tekian A. Evaluating the long-term impact of faculty development programs on MCQ item analysis. MEDICAL TEACHER 2017; 39:S45-S49. [PMID: 28110583 DOI: 10.1080/0142159x.2016.1254753] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
PURPOSE Evaluating the long-term impact of faculty development programs (FDPs) can help monitor the effectiveness of a program and identify areas for development. This study examined long-term differences in the confidence, knowledge, behaviors, and policies of faculty members who attended FDPs on multiple choice question (MCQ) item analysis versus faculty members who did not. METHODS A cross-sectional study design was used, administering a 24-item survey to a representative sample (simple random selection) of 61 faculty members at King Abdulaziz University Faculty of Medicine. RESULTS Among respondents, 34% did not attend FDPs; 53% attended 1-3 FDPs; and 13% attended more than 3 FDPs on MCQ item analysis. Faculty knowledge of the elements of MCQ item analysis was significantly greater (p = .01) for members who attended the FDPs. Faculty who attended FDPs on MCQ item analysis were twice as likely to conduct item analysis in general (p = .020) and four times as likely to conduct item analysis for more than 70% of module examinations (p = .005). CONCLUSION FDPs focused on MCQ item analysis can yield systematic changes in faculty confidence, knowledge, and behaviors. Moreover, FDPs need departmental backing and sustained strategic support to ensure continued effectiveness.
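The "twice as likely" and "four times as likely" comparisons are, in effect, contrasts on 2×2 tables of FDP attendance versus behavior. A minimal sketch of one way to test such a table follows (the abstract does not state which test the authors used); the counts below are invented purely to show the mechanics and are not the study's data.

from scipy.stats import fisher_exact

# Rows: attended FDPs / did not attend; columns: conducts item analysis / does not.
# Hypothetical counts for illustration only:
table = [[30, 10], [11, 10]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")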
Collapse
Affiliation(s)
- Aliaa Amr Alamoudi
- Department of Clinical Biochemistry, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
| | - Basem Salama El-Deek
- Department of Medical Education, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
| | - Yoon Soo Park
- Department of Medical Education, University of Illinois - College of Medicine at Chicago, Chicago, IL, USA
| | - Lana Adey Al Shawwa
- Department of Medical Education, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
| | - Ara Tekian
- Department of Medical Education, University of Illinois - College of Medicine at Chicago, Chicago, IL, USA
| |
Collapse
|
30
|
Dell KA, Wantuch GA. How-to-guide for writing multiple choice questions for the pharmacy instructor. CURRENTS IN PHARMACY TEACHING & LEARNING 2017; 9:137-144. [PMID: 29180146 DOI: 10.1016/j.cptl.2016.08.036] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/17/2015] [Revised: 07/07/2016] [Accepted: 08/23/2016] [Indexed: 06/07/2023]
Abstract
BACKGROUND Writing multiple choice questions (MCQs) takes considerable practice, and pharmacy practitioners often lack the training to write effective MCQs. Sources of instruction in effective MCQ writing can be overwhelming, with numerous suggestions of what should and should not be done. PURPOSE The following guide is intended as a succinct reference for the creation and revision of MCQs by both novice and seasoned pharmacy faculty practitioners. METHODS The literature is summarized into 12 best practices for writing effective MCQs. Pharmacy-specific examples that demonstrate violations of the best practices, and how they can be corrected, are provided. IMPLICATIONS The guide can serve as a primer for writing new MCQs, as a reference for revising previously created questions, or as a guide to peer review of MCQs.
Collapse
Affiliation(s)
- Kamila A Dell
- College of Pharmacy, University of South Florida, Tampa, FL; College of Medicine, University of South Florida, Tampa, FL.
| | - Gwendolyn A Wantuch
- College of Pharmacy, University of South Florida, Tampa, FL; College of Medicine, University of South Florida, Tampa, FL
| |
Collapse
|
31
|
Singh D, Tripathi PK, Patwardhan K. "What do Ayurveda Postgraduate Entrance Examinations actually assess?" - Results of a five-year period question-paper analysis based on Bloom's taxonomy. J Ayurveda Integr Med 2016; 7:167-172. [PMID: 27637447 PMCID: PMC5052362 DOI: 10.1016/j.jaim.2016.06.005] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2016] [Revised: 06/13/2016] [Accepted: 06/13/2016] [Indexed: 11/09/2022] Open
Abstract
Background The standards of Ayurveda education in India have been questioned in recent years, and many suggestions for educational reform have been put forth by educators and health policy experts. However, the Post Graduate Entrance Examinations (PGEEs) used to select candidates for postgraduate programs have received little attention in this context. Objectives The objective of this study was to classify the Multiple Choice Questions (MCQs) from Ayurveda PGEEs conducted in different universities of India during the five-year period 2010–2014 into the six levels of the cognitive domain of Bloom's taxonomy. Methods This is a retrospective observational study using purposive sampling. In total, 3299 MCQs from 25 question papers from seven universities spread across four zones of India (North, South, West and East) were included in the study and classified according to Bloom's taxonomy. Results About 93.3% of MCQs assessed only the ‘recall’ component, whereas 6.2% assessed ‘comprehension’. A mere 0.3% of MCQs assessed the ‘application’ level, and only 0.2% assessed the ‘analysis’ component. Not a single question assessed the ‘synthesis’ or ‘evaluation’ components. Conclusions We conclude that an appropriate proportion of MCQs assessing ‘higher-order thinking’ must be included in Ayurveda PGEEs. While it is possible to frame MCQs that assess all six levels of the cognitive domain of Bloom's taxonomy, teachers must be well trained in the skills of MCQ writing. We propose that our study be taken as a lead in introducing the required reforms in PGEEs. Clinical Trial Registration No.: Not applicable.
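The percentages reported here are straightforward frequency tallies over reviewer-assigned Bloom levels. A minimal sketch follows, assuming each MCQ has already been hand-coded with one of the six cognitive levels; the function name and the tiny sample are illustrative only.

from collections import Counter

LEVELS = ["recall", "comprehension", "application", "analysis", "synthesis", "evaluation"]

def bloom_breakdown(coded_items):
    # coded_items: one Bloom-level label per MCQ, as assigned by reviewers
    counts = Counter(coded_items)
    total = len(coded_items)
    return {level: 100 * counts.get(level, 0) / total for level in LEVELS}

# Tiny hypothetical sample (the study itself coded 3299 MCQs):
sample = ["recall"] * 14 + ["comprehension"] * 1 + ["application"] * 1
print(bloom_breakdown(sample))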
Collapse
Affiliation(s)
- Deepti Singh
- Department of Kriya Sharir, Faculty of Ayurveda, Banaras Hindu University, Varanasi, 221005, India
| | - Piyush Kumar Tripathi
- Department of Kriya Sharir, Faculty of Ayurveda, Banaras Hindu University, Varanasi, 221005, India
| | - Kishor Patwardhan
- Department of Kriya Sharir, Faculty of Ayurveda, Banaras Hindu University, Varanasi, 221005, India.
| |
Collapse
|
32
|
Walsh JL, Harris BHL, Smith PE. Single best answer question-writing tips for clinicians. Postgrad Med J 2016; 93:76-81. [DOI: 10.1136/postgradmedj-2015-133893] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2015] [Revised: 04/07/2016] [Accepted: 05/31/2016] [Indexed: 11/04/2022]
|
33
|
Webb EM, Phuong JS, Naeger DM. Does Educator Training or Experience Affect the Quality of Multiple-Choice Questions? Acad Radiol 2015; 22:1317-22. [PMID: 26277486 DOI: 10.1016/j.acra.2015.06.012] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2015] [Revised: 06/22/2015] [Accepted: 06/27/2015] [Indexed: 12/19/2022]
Abstract
RATIONALE AND OBJECTIVES Physicians receive little training in proper multiple-choice question (MCQ) writing methods. Well-constructed MCQs follow rules that ensure a question tests what it is intended to test; questions that break these rules are described as "flawed." We examined whether the prevalence of flawed questions differed significantly between writers with and without prior training in question writing, and between writers with different levels of educator experience. MATERIALS AND METHODS We assessed 200 unedited MCQs from a question bank for our senior medical student radiology elective: an equal number of questions (50) were written by faculty with previous training in MCQ writing, other faculty, residents, and medical students. Questions were scored independently by two readers for the presence of 11 distinct flaws described in the literature. RESULTS Questions written by faculty with MCQ-writing training had significantly fewer errors: a mean of 0.4 errors per question compared to means of 1.5-1.7 errors per question for the other groups (P < .001). There were no significant differences in the total number of errors between the untrained faculty, residents, and students (P values .35-.91). Among trained faculty, 17/50 questions (34%) were flawed, whereas other faculty wrote 38/50 (76%) flawed questions, residents 37/50 (74%), and students 44/50 (88%). Trained question writers' higher performance was mainly manifest in the reduced frequency of five specific errors. CONCLUSIONS Faculty with training in effective MCQ writing made fewer errors in MCQ construction. Educator experience alone had no effect on the frequency of flaws; faculty without dedicated training, residents, and students performed similarly.
Collapse
|