1
Newsom LC, Augustine J, Momary K. Development of a script concordance test to assess clinical reasoning in a pharmacy curriculum. Curr Pharm Teach Learn 2022;14:1135-1142. [PMID: 36154958] [DOI: 10.1016/j.cptl.2022.07.028]
Abstract
INTRODUCTION Clinical reasoning is a vital skill for student pharmacists in the provision of patient-centered care, but it is often difficult to assess in the didactic curriculum. A script concordance test (SCT) is an innovative assessment method that can be used to assess clinical reasoning skills. The objective of this study was to develop and refine an SCT to assess the clinical reasoning skills of third-year student pharmacists (P3s). METHODS An SCT was written and administered to P3s. Pharmacy practice faculty members served as the expert group. The SCT was scored and Rasch analysis was performed. RESULTS The SCT included 20 case vignettes and 60 questions. Test reliability was 0.34, with mean square values for all items between 0.7 and 1.3. Forty-two questions had a difficulty score between 0 and -1 logits, indicating that multiple questions had similar difficulty levels. Two case vignettes and 43.3% of questions (n = 26) were revised to enhance clarity and decrease ambiguity. CONCLUSIONS The SCT is a tool to assess clinical reasoning in the didactic curriculum. Faculty can create an SCT and use statistical methods such as Rasch analysis to assess its validity and reliability.
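The item difficulties in logits reported above come from a Rasch model. As a minimal illustration (not the authors' analysis, which used dedicated psychometric software), the dichotomous Rasch model gives the probability that a person of ability theta succeeds on an item of difficulty b, both expressed in logits; an item at -1 logit is easier than one at 0:

```python
import math

def rasch_p(theta, b):
    """Dichotomous Rasch model: probability that a person with ability
    theta succeeds on an item with difficulty b (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person of average ability (theta = 0) has a 50% chance on an item
# of matching difficulty, and a higher chance on an easier (-1 logit) item.
print(rasch_p(0.0, 0.0))              # 0.5
print(rasch_p(0.0, -1.0) > 0.5)       # True
```

This is why the cluster of 42 questions between 0 and -1 logits signals redundancy: items with near-identical difficulty discriminate over the same narrow ability range.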
Affiliation(s)
- Lydia C Newsom
- Department of Pharmacy Practice, Mercer University College of Pharmacy, 3001 Mercer University Drive, Atlanta, GA 30341-4115, United States.
- Jill Augustine
- Department of Pharmacy Practice and the Department of Pharmaceutical Sciences, Mercer University College of Pharmacy, 3001 Mercer University Drive, Atlanta, GA 30341-4115, United States.
- Kathryn Momary
- Department of Pharmacy Practice, Mercer University College of Pharmacy, 3001 Mercer University Drive, Atlanta, GA 30341-4115, United States.
2
Iglesias Gómez C, González Sequeros O, Salmerón Martínez D. Evaluación mediante script concordance test del razonamiento clínico de residentes en Atención Primaria [Script concordance test evaluation of clinical reasoning in primary care residents]. An Pediatr (Barc) 2022. [DOI: 10.1016/j.anpedi.2021.09.009]
3
Iglesias Gómez C, González Sequeros O, Salmerón Martínez D. Clinical reasoning evaluation using script concordance test in primary care residents. An Pediatr (Engl Ed) 2022;97:87-94. [DOI: 10.1016/j.anpede.2022.06.005]
4
Kün-Darbois JD, Annweiler C, Lerolle N, Lebdai S. Script concordance test acceptability and utility for assessing medical students' clinical reasoning: a user's survey and an institutional prospective evaluation of students' scores. BMC Med Educ 2022;22:277. [PMID: 35418078] [PMCID: PMC9008989] [DOI: 10.1186/s12909-022-03339-1]
Abstract
Script concordance testing (SCT) is a method for assessing clinical reasoning in health-care training. Our aim was to assess SCT acceptability and utility with a user survey and an institutional prospective evaluation of students' scores. Using an online survey, we collected opinions and satisfaction data from all graduate students and teachers involved in the SCT setting, and we performed a prospective analysis comparing the scores obtained with SCT to those obtained with the national standard evaluation modality (PCC). General opinions about SCT were mostly negative: students expressed more negative opinions and perceptions, whereas teachers gave a lower proportion of negative responses, a higher proportion of neutral responses, and a higher proportion of positive responses across all questions. PCC scores increased significantly each year, but SCT scores increased only between the first and second tests, and PCC scores were significantly higher than SCT scores on the second and third tests. Overall, medical students' and teachers' global opinion of SCT was negative; SCT scores were initially similar to PCC scores but progressed less over time than PCC scores.
Affiliation(s)
- Jean-Daniel Kün-Darbois
- Maxillofacial Surgery Department, University Hospital of Angers, 49933, Angers Cedex, France.
- Faculty for Health Sciences and Medicine, University of Angers, Angers, France.
- Cédric Annweiler
- Faculty for Health Sciences and Medicine, University of Angers, Angers, France.
- Geriatric Department, University Hospital of Angers, Angers, France.
- Nicolas Lerolle
- Faculty for Health Sciences and Medicine, University of Angers, Angers, France.
- Intensive Care Department, University Hospital of Angers, Angers, France.
- Souhil Lebdai
- Faculty for Health Sciences and Medicine, University of Angers, Angers, France.
- Urology Department, University Hospital of Angers, Angers, France.
5
Brentnall J, Thackray D, Judd B. Evaluating the clinical reasoning of student health professionals in placement and simulation settings: a systematic review. Int J Environ Res Public Health 2022;19:936. [PMID: 35055758] [PMCID: PMC8775520] [DOI: 10.3390/ijerph19020936]
Abstract
(1) Background: Clinical reasoning is essential to the effective practice of autonomous health professionals and is, therefore, an essential capability to develop as students. This review aimed to systematically identify the tools available to health professional educators to evaluate students' attainment of clinical reasoning capabilities in clinical placement and simulation settings. (2) Methods: A systematic review of seven databases was undertaken. Peer-reviewed, English-language publications reporting studies that developed or tested relevant tools were included. Searches included multiple terms related to clinical reasoning and health disciplines. Data regarding each tool's conceptual basis and evaluated constructs were systematically extracted and analysed. (3) Results: Most of the 61 included papers evaluated students in medical and nursing disciplines, and over half reported on the Script Concordance Test or the Lasater Clinical Judgement Rubric. A number of conceptual frameworks were referenced, though many papers did not reference any framework. (4) Conclusions: Overall, the key outcomes highlighted an emphasis on diagnostic reasoning, as opposed to management reasoning. Tools were predominantly aligned with individual health disciplines, with limited cross-referencing within the field. Future research into clinical reasoning evaluation tools should build on and refer to existing approaches and consider contributions across professional disciplinary divides.
Affiliation(s)
- Jennie Brentnall
- Work Integrated Learning, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia.
- Debbie Thackray
- Physiotherapy, School of Health Sciences, University of Southampton, Southampton SO17 1BJ, UK.
- Belinda Judd
- Work Integrated Learning, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2006, Australia.
6
Steinberg E, Cowan E, Lin MP, Sielicki A, Warrington S. Assessment of emergency medicine residents' clinical reasoning: validation of a script concordance test. West J Emerg Med 2020;21:978-984. [PMID: 32726273] [PMCID: PMC7390545] [DOI: 10.5811/westjem.2020.3.46035]
Abstract
INTRODUCTION A primary aim of residency training is to develop competence in clinical reasoning. However, there are few instruments that can accurately, reliably, and efficiently assess residents' clinical decision-making ability. This study aimed to externally validate the script concordance test in emergency medicine (SCT-EM), an assessment tool designed for this purpose. METHODS Using established methodology for the SCT-EM, we compared EM residents' performance on the SCT-EM to that of an expert panel of emergency physicians at three urban academic centers. We performed adjusted pairwise t-tests to compare differences between all residents and attending physicians, as well as among resident postgraduate year (PGY) levels. We tested the correlation between SCT-EM and Accreditation Council for Graduate Medical Education Milestone scores using Pearson's correlation coefficients. Inter-item covariances for SCT items were calculated using Cronbach's alpha statistic. RESULTS The SCT-EM was administered to 68 residents and 13 attendings. There was a significant difference in mean scores among the groups (mean ± standard deviation: PGY-1, 59 ± 7; PGY-2, 62 ± 6; PGY-3, 60 ± 8; PGY-4, 61 ± 8; attendings, 73 ± 8; p < 0.01). Post hoc pairwise comparisons demonstrated that significant differences in mean scores occurred only between each PGY level and the attendings (p < 0.01 for PGY-1 to PGY-4 vs the attending group). Performance on the SCT-EM and EM Milestones was not significantly correlated (r = 0.12, p = 0.35). Internal reliability of the exam, determined using Cronbach's alpha, was 0.67 for all examinees and 0.89 in the expert-only group. CONCLUSION The SCT-EM has limited utility in reliably assessing clinical reasoning among EM residents. Although the SCT-EM was able to differentiate clinical reasoning ability between residents and expert faculty, it did not differentiate between PGY levels, nor did it correlate with Milestone scores. Furthermore, several limitations threaten the validity of the SCT-EM, suggesting further study is needed in more diverse settings.
Affiliation(s)
- Eric Steinberg
- St. Joseph's University Medical Center, Department of Emergency Medicine, Paterson, New Jersey
- Ethan Cowan
- Mount Sinai Beth Israel, Icahn School of Medicine at Mount Sinai, Department of Emergency Medicine, New York, New York
- Michelle P Lin
- Mount Sinai Beth Israel, Icahn School of Medicine at Mount Sinai, Department of Emergency Medicine, New York, New York
- Anthony Sielicki
- Mount Sinai Beth Israel, Icahn School of Medicine at Mount Sinai, Department of Emergency Medicine, New York, New York
- Steven Warrington
- Orange Park Medical Center, Department of Emergency Medicine, Orange Park, Florida
7
Le test de concordance de script : un outil pédagogique multimodal [The script concordance test: a multimodal teaching tool]. Rev Med Interne 2018;39:566-573. [DOI: 10.1016/j.revmed.2017.12.011]
8
Abstract
OBJECTIVES Script concordance testing (SCT) is used to assess clinical decision-making. We explore the use of SCT to (1) quantify practice variations in infant lumbar puncture (LP) and (2) analyze physicians' characteristics affecting LP decision-making. METHODS Using standard SCT processes, a panel of pediatric subspecialty physicians constructed 15 infant LP case vignettes, each with 2 to 4 SCT questions (47 in total). The vignettes were distributed to pediatric attending physicians and fellows at 10 hospitals within the INSPIRE Network. We determined both raw scores (tendency to perform LP) and SCT scores (agreement with the reference panel), as well as their variation with participant factors. RESULTS Two hundred twenty-six respondents completed all 47 SCT questions. Pediatric emergency medicine physicians tended to select LP more frequently than did general pediatricians, with significantly higher raw scores (20.2 ± 10.2) than general pediatricians (13.0 ± 15.0; 95% confidence interval for the difference, 1 to 13). Concordance with the reference panel varied among subspecialties and by the frequency with which practitioners perform LPs in their practices. CONCLUSION Script concordance testing questions can be used as a tool to detect subspecialty practice variation. We were able to detect significant practice variation in self-reported use of LP in infants among different pediatric subspecialties.
9
Cooke S, Lemay JF, Beran T. Evolutions in clinical reasoning assessment: the Evolving Script Concordance Test. Med Teach 2017;39:828-835. [PMID: 28580814] [DOI: 10.1080/0142159X.2017.1327706]
Abstract
INTRODUCTION Script concordance testing (SCT) is a method of assessment of clinical reasoning. We developed a new type of SCT case design, the evolving SCT (E-SCT), in which the patient's clinical story "evolves" and, with thoughtful integration of new information at each stage, clinical decisions become increasingly clear. OBJECTIVES We aimed to: (1) determine whether an E-SCT could differentiate clinical reasoning ability among junior residents (JR), senior residents (SR), and pediatricians, (2) evaluate the reliability of an E-SCT, and (3) obtain qualitative feedback from participants to help inform the potential acceptability of the E-SCT. METHODS A 12-case E-SCT, embedded within a 24-case pediatric SCT (PaedSCT), was administered to 91 pediatric residents (JR: n = 50; SR: n = 41). A total of 21 pediatricians served on the panel of experts (POE). A one-way analysis of variance (ANOVA) was conducted across the levels of experience. Participants' feedback on the E-SCT was obtained with a post-test survey and analyzed using two methods: percentage preference and thematic analysis. RESULTS Significant differences existed across levels of training: F = 19.31 (df = 2); p < 0.001. The POE scored higher than SR (mean difference = 10.34; p < 0.001) and JR (mean difference = 16.00; p < 0.001), and SR scored higher than JR (mean difference = 5.66; p < 0.001). Reliability (Cronbach's α) was 0.83. Participants found the E-SCT engaging, easy to follow, and true to the daily clinical decision-making process. CONCLUSIONS The E-SCT demonstrated very good reliability and was effective in distinguishing clinical reasoning ability across three levels of experience. Participants found it representative of real-life clinical reasoning and decision-making. We suggest that further refinement and use of the evolving-style case will enhance SCT as a robust, engaging, and relevant method for the assessment of clinical reasoning.
Affiliation(s)
- Suzette Cooke
- Department of Paediatrics, Alberta Children's Hospital, University of Calgary, Calgary, Canada
- Department of Paediatrics, Cumming School of Medicine, University of Calgary, Calgary, Canada
- Jean-François Lemay
- Department of Paediatrics, Alberta Children's Hospital, University of Calgary, Calgary, Canada
- Department of Paediatrics, Cumming School of Medicine, University of Calgary, Calgary, Canada
- Tanya Beran
- Department of Paediatrics, Alberta Children's Hospital, University of Calgary, Calgary, Canada
- Department of Community Health Sciences/Medical Education, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
10
Cooke S, Lemay JF, Beran T, Sandhu A, Amin H. Development of a method to measure clinical reasoning in pediatric residents: the Pediatric Script Concordance Test. Creat Educ 2016. [DOI: 10.4236/ce.2016.76084]
11
Dumont K, Loye N, Goudreau J. Le potentiel diagnostique des questions d'un test de concordance de scripts pour évaluer le raisonnement clinique infirmier [The diagnostic potential of script concordance test questions for assessing clinical reasoning in nursing]. Pédagogie Médicale 2015. [DOI: 10.1051/pmed/2015012]
12
Chang TP, Kessler D, McAninch B, Fein DM, Scherzer DJ, Seelbach E, Zaveri P, Jackson JM, Auerbach M, Mehta R, Van Ittersum W, Pusic MV. Script concordance testing: assessing residents' clinical decision-making skills for infant lumbar punctures. Acad Med 2014;89:128-135. [PMID: 24280838] [DOI: 10.1097/ACM.0000000000000059]
Abstract
PURPOSE Residents must learn which infants require a lumbar puncture (LP), a clinical decision-making skill (CDMS) difficult to evaluate because of considerable practice variation. The authors created an assessment model of the CDMS to determine when an LP is indicated, taking practice variation into account. The objective was to detect whether script concordance testing (SCT) could measure CDMS competency among residents for performing infant LPs. METHOD In 2011, using a modified Delphi technique, an expert panel of 14 attending physicians constructed 15 case vignettes (each with 2 to 4 SCT questions) that represented various infant LP scenarios. The authors distributed the vignettes to residents at 10 academic pediatric centers within the International Simulation in Pediatric Innovation, Research, and Education Network. They compared SCT scores among residents of different postgraduate years (PGYs), specialties, training in adult medicine, LP experience, and practice within an endemic Lyme disease area. RESULTS Of 730 eligible residents, 102 completed 47 SCT questions. They could earn a maximum score of 47. Median SCT scores were significantly higher in PGY-3s compared with PGY-1s (difference: 3.0; 95% confidence interval [CI] 1.0-4.9; effect size d = 0.87). Scores also increased with increasing LP experience (difference: 3.3; 95% CI 1.1-5.5) and with adult medicine training (difference: 2.9; 95% CI 0.6-5.0). Residents in Lyme-endemic areas tended to perform more LPs than those in nonendemic areas. CONCLUSIONS SCT questions may be useful as an assessment tool to determine CDMS competency among residents for performing infant LPs.
Affiliation(s)
- Todd P Chang
- Dr. Chang is assistant professor of pediatrics, Division of Emergency Medicine and Transport, Children's Hospital Los Angeles and University of Southern California Keck School of Medicine, Los Angeles, California.
- Dr. Kessler is assistant professor of pediatrics, Department of Pediatrics, Columbia University College of Physicians and Surgeons, New York, New York.
- Dr. McAninch is assistant professor, Division of Pediatric Emergency Medicine, Children's Hospital of Pittsburgh of the University of Pittsburgh Medical Center and University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania.
- Dr. Fein is assistant professor of pediatrics, Division of Pediatric Emergency Medicine, Children's Hospital at Montefiore, affiliated with Albert Einstein College of Medicine, Bronx, New York.
- Dr. Scherzer is clinical associate professor of pediatrics, Division of Emergency Medicine, Nationwide Children's Hospital and Ohio State University, Columbus, Ohio.
- Dr. Seelbach is assistant professor, Department of Pediatrics, University of Kentucky, Lexington, Kentucky.
- Dr. Zaveri is assistant professor of pediatrics and emergency medicine, Division of Emergency Medicine, Children's National Medical Center and George Washington University, Washington, DC.
- Dr. Jackson is assistant professor of pediatrics, Wake Forest School of Medicine, Winston-Salem, North Carolina.
- Dr. Auerbach is assistant professor of pediatrics, Department of Pediatrics, Yale University School of Medicine, New Haven, Connecticut.
- Dr. Mehta is associate professor of pediatrics, Section of Critical Care, Georgia Regents University, Augusta, Georgia.
- Dr. Van Ittersum is assistant professor of pediatrics, Cleveland Clinic Lerner College of Medicine of Case Western Reserve University, Cleveland, Ohio.
- Dr. Pusic is assistant professor of emergency medicine, New York University School of Medicine, New York, New York.
13
Petrucci AM, Nouh T, Boutros M, Gagnon R, Meterissian SH. Assessing clinical judgment using the Script Concordance Test: the importance of using specialty-specific experts to develop the scoring key. Am J Surg 2013;205:137-140. [DOI: 10.1016/j.amjsurg.2012.09.002]
14
Piovezan RD, Custódio O, Cendoroglo MS, Batista NA, Lubarsky S, Charlin B. Assessment of undergraduate clinical reasoning in geriatric medicine: application of a script concordance test. J Am Geriatr Soc 2012;60:1946-1950. [PMID: 23036106] [DOI: 10.1111/j.1532-5415.2012.04152.x]
Abstract
A challenging aspect of geriatric practice is that it often requires decision-making under conditions of uncertainty. The Script Concordance Test (SCT) is an assessment tool designed to measure clinical data interpretation, an important element of clinical reasoning under uncertainty. The purpose of this study was to develop and analyze the validity of results of an SCT administered to undergraduate students in geriatric medicine. An SCT consisting of 13 cases and 104 items covering a spectrum of common geriatric problems was designed and administered to 41 undergraduate medical students at a medical school in São Paulo, Brazil. A reference panel of 21 practicing geriatricians contributed to the test's score key. The responses were analyzed, and the psychometric properties of the tool were investigated. The test's internal consistency and discriminative capacity to distinguish students from experienced geriatricians supported construct validity. The Cronbach alpha for the test was 0.84, and mean scores for the experts were found to be significantly higher than those of the students (80.0 and 70.7, respectively; P < .001). This study demonstrated robust evidence of reliability and validity of an SCT developed for use in geriatric medicine for assessing clinical reasoning skills under conditions of uncertainty in undergraduate medical students. These findings will be of interest to those involved in assessing clinical competence in geriatrics and will have important potential application in medical school examinations.
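The internal-consistency figure quoted above (Cronbach's alpha of 0.84) follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). As a minimal sketch (illustrative only, not the study's code; the toy score matrix is invented):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a score matrix.
    scores: one row per examinee, one column per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(scores[0])  # number of items

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(pvar([row[i] for row in scores]) for i in range(k))
    total_var = pvar([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Perfectly consistent items (every examinee scores identically on both)
print(cronbach_alpha([[1, 1], [0, 0], [1, 1], [0, 0]]))  # 1.0
```

Higher alpha indicates that items covary, i.e. they appear to measure the same underlying construct; values around 0.8, as reported here, are generally taken as good reliability for a test of this length.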
Affiliation(s)
- Ronaldo D Piovezan
- Division of Geriatrics, Federal University of São Paulo, São Paulo, Brazil. rdpiovezan@gmail
15
Dory V, Gagnon R, Vanpee D, Charlin B. How to construct and implement script concordance tests: insights from a systematic review. Med Educ 2012;46:552-563. [PMID: 22626047] [DOI: 10.1111/j.1365-2923.2011.04211.x]
Abstract
CONTEXT Programmes of assessment should measure the various components of clinical competence. Clinical reasoning has been traditionally assessed using written tests and performance-based tests. The script concordance test (SCT) was developed to assess clinical data interpretation skills. A recent review of the literature examined the validity argument concerning the SCT. Our aim was to provide potential users with evidence-based recommendations on how to construct and implement an SCT. METHODS A systematic review of relevant databases (MEDLINE, ERIC [Education Resources Information Center], PsycINFO, the Research and Development Resource Base [RDRB, University of Toronto]) and Google Scholar, medical education journals and conference proceedings was conducted for references in English or French. It was supplemented by ancestry searching and by additional references provided by experts. RESULTS The search yielded 848 references, of which 80 were analysed. Studies suggest that tests with around 100 items (25-30 cases), of which 25% are discarded after item analysis, should provide reliable scores. Panels with 10-20 members are needed to reach adequate precision in terms of estimated reliability. Panellists' responses can be analysed by checking for moderate variability among responses. Studies of alternative scoring methods are inconclusive, but the traditional scoring method is satisfactory. There is little evidence on how best to determine a pass/fail threshold for high-stakes examinations. CONCLUSIONS Our literature search was broad and included references from medical education journals not indexed in the usual databases, conference abstracts and dissertations. There is good evidence on how to construct and implement an SCT for formative purposes or medium-stakes course evaluations.
Further avenues for research include examining the impact of various aspects of SCT construction and implementation on issues such as educational impact, correlations with other assessments, and validity of pass/fail decisions, particularly for high-stakes examinations.
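The "traditional scoring method" the review endorses is aggregate scoring: for each item, an examinee earns the number of panelists who chose that response divided by the number who chose the modal response, so the majority answer earns full credit and minority answers earn partial credit. A minimal sketch under that description (illustrative only; the panel data below are invented):

```python
from collections import Counter

def sct_item_weights(panel_responses):
    """Partial credit per response: votes for that response divided by
    votes for the modal (most popular) response."""
    counts = Counter(panel_responses)
    modal = max(counts.values())
    return {resp: n / modal for resp, n in counts.items()}

def score_test(examinee_answers, panel):
    """Sum per-item credit; a response no panelist chose scores 0."""
    return sum(
        sct_item_weights(panel[i]).get(ans, 0.0)
        for i, ans in enumerate(examinee_answers)
    )

# Hypothetical 10-member panel, two items (Likert responses on -2..+2)
panel = [
    [1, 1, 1, 1, 1, 0, 0, 0, 2, -1],   # modal response: 1 (5 votes)
    [0, 0, 0, 0, -1, -1, 1, 0, 0, 0],  # modal response: 0 (7 votes)
]
print(score_test([1, -1], panel))  # full credit on item 1, 2/7 on item 2
```

This mechanism is what makes the recommended 10-20 member panels matter: with too few panelists, the modal response and the partial-credit weights become unstable.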