1. Naylor K, Hislop J, Torres K, Mani ZA, Goniewicz K. The Impact of Script Concordance Testing on Clinical Decision-Making in Paramedic Education. Healthcare (Basel) 2024;12:282. [PMID: 38275562; PMCID: PMC10815909; DOI: 10.3390/healthcare12020282] [Received: 12/05/2023; Revised: 01/10/2024; Accepted: 01/15/2024] Open access.
Abstract
This study investigates the effectiveness of the Script Concordance Test (SCT) in enhancing clinical reasoning skills within paramedic education. Focusing on the Medical University of Lublin, we evaluated the SCT's application across two cohorts of paramedic students, aiming to understand its potential to improve decision-making skills in emergency scenarios. Our approach, informed by Van der Vleuten's assessment framework, revealed that while the SCT's correlation with traditional methods like multiple-choice questions (MCQs) was limited, its formative nature significantly contributed to improved performance in summative assessments. These findings suggest that the SCT can be an effective tool in paramedic training, particularly in strengthening cognitive abilities critical for emergency responses. The study underscores the importance of incorporating innovative assessment tools like SCTs in paramedic curricula, not only to enhance clinical reasoning but also to prepare students for effective emergency responses. Our research contributes to the ongoing efforts in refining paramedic education and highlights the need for versatile assessment strategies in preparing future healthcare professionals for diverse clinical challenges.
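For readers unfamiliar with how an SCT is marked: the abstract does not detail the scoring rule, but the commonly used aggregate scoring method gives full credit for the modal reference-panel answer and partial credit in proportion to panel agreement. A minimal Python sketch, with an invented panel for illustration:

```python
from collections import Counter

def sct_item_score(panel_answers, examinee_answer):
    """Aggregate SCT scoring: credit equals the number of panel members
    who chose the examinee's response, divided by the count of the modal
    (most frequent) panel response, so the majority answer earns 1.0."""
    counts = Counter(panel_answers)
    modal = max(counts.values())
    return counts.get(examinee_answer, 0) / modal

# Invented panel of 10 on a Likert item: seven chose +1, two 0, one -1.
panel = [1] * 7 + [0] * 2 + [-1]
assert sct_item_score(panel, 1) == 1.0    # modal answer: full credit
assert sct_item_score(panel, 0) == 2 / 7  # minority answer: partial credit
assert sct_item_score(panel, -2) == 0.0   # unchosen option: no credit
```

Because credit is keyed to panel agreement rather than a single correct answer, who sits on the panel directly shapes the scores, which is the issue entry 2 below investigates.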
Affiliation(s)
- Katarzyna Naylor: Independent Unit of Emergency Medical Services and Specialist Emergency, Medical University of Lublin, Chodzki 7, 20-059 Lublin, Poland
- Jane Hislop: Clinical Education, Edinburgh Medical School, The University of Edinburgh, Edinburgh EH16 4SB, UK
- Kamil Torres: Department of Didactics and Medical Simulation, Faculty of Medical Sciences, Medical University of Lublin, Chodźki 7, 20-093 Lublin, Poland
- Zakaria A. Mani: Nursing College, Jazan University, Jazan 45142, Saudi Arabia
- Krzysztof Goniewicz: Department of Security Studies, Polish Air Force University, 08-521 Dęblin, Poland
2. Aniort J, Trefond J, Tanguy G, Bataille S, Burtey S, Pereira B, Garrouste C, Philipponnet C, Clavelou P, Heng AE, Lautrette A. Impact of reference panel composition on scores of script concordance test assessing basic nephrology knowledge in undergraduate medical education. Med Teach 2024;46:110-116. [PMID: 37544894; DOI: 10.1080/0142159x.2023.2239441]
Abstract
PURPOSE: In the assessment of basic medical knowledge, the balance between specialists and primary care (PC) physicians on the reference panel is a contentious issue. We assessed the effect of panel composition on the scores of undergraduate medical students in a script concordance test (SCT).
METHODS: The scoring key of an SCT on basic nephrology knowledge was set either by a panel of nephrologists or by a mixed panel of nephrologists and PC physicians. SCT results were compared with repeated-measures ANOVA, and concordance was assessed with Bland-Altman plots.
RESULTS: Forty-five students completed the SCT. Their scores differed according to panel composition: 65.6 ± 9.73/100 points for the nephrologist panel versus 70.27 ± 8.82 for the mixed panel, p < 0.001. Concordance between the scores was low, with a bias of -4.27 ± 2.19 and 95% limits of agreement of -8.96 to -0.38. Panel composition changed the ranking of 71% of students (mean shift 3.6 ± 2.6 places).
CONCLUSION: The composition of the reference panel, whether specialist or mixed, affects both test results and student rankings in SCT assessment of basic knowledge.
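The Bland-Altman statistics reported above (bias and 95% limits of agreement) follow directly from the paired score differences. A minimal sketch, using invented paired scores rather than the study data:

```python
import statistics

def bland_altman(a, b):
    """Bland-Altman agreement: the bias is the mean of the paired
    differences, and the 95% limits of agreement are
    bias +/- 1.96 * SD of those differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented scores: the same five students marked against a
# nephrologist-only key (a) and a mixed-panel key (b).
a = [65, 70, 62, 68, 71]
b = [70, 73, 68, 72, 74]
bias, (lo, hi) = bland_altman(a, b)
assert bias < 0        # mixed-panel key yields higher scores on average
assert lo < bias < hi  # bias sits inside the limits of agreement
```

A negative bias with narrow limits of agreement, as in the study, means one key systematically scores lower than the other rather than differing at random.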
Affiliation(s)
- Julien Aniort: Nephrology, Dialysis and Transplantation Department, Gabriel Montpied Hospital, Clermont-Ferrand, France; Human Nutrition Unit, Clermont Auvergne University, INRAE UMR 1019, Clermont-Ferrand, France
- Jeromine Trefond: General Practitioner Department, Clermont-Ferrand Medical School, Clermont Auvergne University, Clermont-Ferrand, France
- Gilles Tanguy: General Practitioner Department, Clermont-Ferrand Medical School, Clermont Auvergne University, Clermont-Ferrand, France
- Stanislas Bataille: Phocean Nephrology Institute, ELSAN, Clinique Bouchard, Marseille, France; C2VN, Aix-Marseille Univ, INSERM, INRAE UMR 1076, Marseille, France
- Stephane Burtey: C2VN, Aix-Marseille Univ, INSERM, INRAE UMR 1076, Marseille, France; Centre de Néphrologie et Transplantation Rénale, Assistance Publique des Hôpitaux de Marseille, Marseille, France
- Bruno Pereira: Biostatistics Unit, CHU de Clermont-Ferrand, Clermont-Ferrand, France
- Cyril Garrouste: Nephrology, Dialysis and Transplantation Department, Gabriel Montpied Hospital, Clermont-Ferrand, France
- Carole Philipponnet: Nephrology, Dialysis and Transplantation Department, Gabriel Montpied Hospital, Clermont-Ferrand, France
- Pierre Clavelou: Neuro-Dol, INSERM, CHU Clermont-Ferrand, Université Clermont Auvergne, Clermont-Ferrand, France
- Anne-Elisabeth Heng: Nephrology, Dialysis and Transplantation Department, Gabriel Montpied Hospital, Clermont-Ferrand, France; Human Nutrition Unit, Clermont Auvergne University, INRAE UMR 1019, Clermont-Ferrand, France
- Alexandre Lautrette: Intensive Care Unit, Centre Jean Perrin, Clermont-Ferrand, France; LMGE (Laboratoire MicroOrganisme Genome et Environnement), Clermont Auvergne University, CNRS UMR 6023, Clermont-Ferrand, France
3. Underman K, Kochunilathil M, McLean L, Vinson AH. Online student culture as site for negotiating assessment in medical education. Soc Sci Med 2022;310:115270. [PMID: 36030626; DOI: 10.1016/j.socscimed.2022.115270] [Received: 03/03/2022; Revised: 08/01/2022; Accepted: 08/04/2022]
Abstract
Classic studies of medical education have examined how professional socialization reproduces the prevailing professional culture, as well as how students actively negotiate their place in educational processes. However, sociological research has not re-examined student culture in light of structural transformations in medical education, such as the introduction of new assessment types and their use as modes of commensuration. In this paper, we examine data from two studies of online forums where medical trainees and applicants to medical school discuss their experiences preparing for tests of professional skills, including judgment, empathy, and communication. Examining how medical students talk about these tests on such forums allows us to understand the meaning-making processes at work as students negotiate the commensuration processes such tests enable. We examine how these negotiations take place in online forums, where participants confront common challenges, form common perspectives, and share common solutions, all hallmarks of student culture. Through qualitative analysis, we find that online communities are spaces where students grapple with these new forms of commensuration, interrogate the standards and quantifications that underlie them, and collectively negotiate how to approach these assessments. Using the case of online forum communities, our findings advance past work on student culture in medical sociology by theorizing student culture as an extra-organizational phenomenon that spans multiple career stages. In so doing, we highlight the importance of online forum data for studying social processes.
Affiliation(s)
- Lauren McLean: Central Michigan University College of Medicine, USA
4. Iglesias Gómez C, González Sequeros O, Salmerón Martínez D. Evaluación mediante script concordance test del razonamiento clínico de residentes en Atención Primaria [Clinical reasoning evaluation using the script concordance test in primary care residents]. An Pediatr (Barc) 2022. [DOI: 10.1016/j.anpedi.2021.09.009] Open access.
5. Iglesias Gómez C, González Sequeros O, Salmerón Martínez D. Clinical reasoning evaluation using script concordance test in primary care residents. An Pediatr (Engl Ed) 2022;97:87-94. [DOI: 10.1016/j.anpede.2022.06.005] [Received: 07/01/2021; Accepted: 09/30/2021] Open access.
6. Lam A, Lam L, Blacketer C, Parnis R, Franke K, Wagner M, Wang D, Tan Y, Oakden-Rayner L, Gallagher S, Perry SW, Licinio J, Symonds I, Thomas J, Duggan P, Bacchi S. Professionalism and clinical short answer question marking with machine learning. Intern Med J 2022;52:1268-1271. [PMID: 35879236; DOI: 10.1111/imj.15839] [Received: 01/05/2022; Accepted: 04/14/2022]
Abstract
Machine learning may assist in medical student evaluation. This study scored short-answer questions administered at three centres. Bidirectional encoder representations from transformers (BERT) models were particularly effective for scoring professionalism questions (accuracy ranging from 41.6% to 92.5%). In the scoring of 3-mark professionalism questions, machine learning had a lower classification accuracy than for clinical questions (P < 0.05). The role of machine learning in evaluating medical professionalism warrants further investigation.
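The study fine-tuned transformer models; as a rough illustration of the framing (mark prediction as text classification over scored example answers), the toy sketch below substitutes a bag-of-words cosine similarity for BERT. The answers and marks are invented:

```python
from collections import Counter

def tokenize(text):
    """Bag-of-words token counts, lowercased."""
    return Counter(text.lower().split())

def cosine(c1, c2):
    """Cosine similarity between two token-count vectors."""
    dot = sum(c1[t] * c2[t] for t in c1)
    n1 = sum(v * v for v in c1.values()) ** 0.5
    n2 = sum(v * v for v in c2.values()) ** 0.5
    return dot / (n1 * n2) if n1 and n2 else 0.0

def predict_mark(answer, scored_examples):
    """Assign the mark of the most lexically similar scored answer.
    A crude stand-in for a fine-tuned transformer classifier."""
    best = max(scored_examples,
               key=lambda ex: cosine(tokenize(answer), tokenize(ex[0])))
    return best[1]

# Invented model answers with invented marks for a professionalism item.
examples = [
    ("maintain patient confidentiality and seek senior advice", 3),
    ("ignore the issue", 0),
]
mark = predict_mark("seek advice from a senior and maintain confidentiality",
                    examples)
assert mark == 3  # paraphrase of the full-mark answer scores 3
```

A real system would fine-tune a pretrained encoder on many scored answers per question; the point here is only the classification framing, which is where, per the abstract, nuanced professionalism judgements proved harder than clinical ones.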
Affiliation(s)
- Antoinette Lam: University of Adelaide, Adelaide, South Australia, Australia
- Lydia Lam: University of Adelaide, Adelaide, South Australia, Australia
- Charlotte Blacketer: University of Adelaide, Adelaide, South Australia, Australia; Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Roger Parnis: University of Adelaide, Adelaide, South Australia, Australia; Royal Darwin Hospital, Darwin, Northern Territory, Australia
- Kyle Franke: University of Adelaide, Adelaide, South Australia, Australia
- Morganne Wagner: State University of New York (SUNY) Upstate Medical University, Syracuse, New York, USA
- David Wang: University of Otago, Dunedin, New Zealand
- Yiran Tan: University of Adelaide, Adelaide, South Australia, Australia; Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Lauren Oakden-Rayner: University of Adelaide, Adelaide, South Australia, Australia; Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Seth W Perry: State University of New York (SUNY) Upstate Medical University, Syracuse, New York, USA
- Julio Licinio: State University of New York (SUNY) Upstate Medical University, Syracuse, New York, USA
- Ian Symonds: University of Adelaide, Adelaide, South Australia, Australia
- Josephine Thomas: University of Adelaide, Adelaide, South Australia, Australia; Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Paul Duggan: University of Adelaide, Adelaide, South Australia, Australia; Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Stephen Bacchi: University of Adelaide, Adelaide, South Australia, Australia; Royal Adelaide Hospital, Adelaide, South Australia, Australia
7. An Ontology-Driven Learning Assessment Using the Script Concordance Test. Appl Sci (Basel) 2022. [DOI: 10.3390/app12031472]
Abstract
Assessing the level of domain-specific reasoning acquired by students is one of the major challenges in education, particularly in medical education. Given the importance of clinical reasoning in preclinical and clinical practice, students' learning achievements must be evaluated accordingly. Traditional ways of assessing clinical reasoning include long-case exams, oral exams, and objective structured clinical examinations. However, these techniques no longer meet emerging requirements, owing to their limited scalability and the difficulty of adopting them in online education. In recent decades, the script concordance test (SCT) has emerged as a promising assessment tool, particularly in medical education. The question is whether the usability of the SCT can be raised to match current educational requirements by exploiting the opportunities that new technologies provide, particularly semantic knowledge graphs (SKGs) and ontologies. In this paper, an ontology-driven learning assessment is proposed using a novel automated SCT generation platform. The SCTonto ontology is adopted for knowledge representation in SCT question generation, with a focus on using electronic health record data for medical education. Direct and indirect strategies for generating Likert-type SCT scores are described in detail. The proposed automatic question generation was evaluated against traditional manually created SCTs, and the results showed that the time required for test creation was significantly reduced, confirming substantial scalability improvements over traditional approaches.
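The platform described above generates SCT items by querying the SCTonto ontology. As a loose illustration of template-driven item generation, the sketch below substitutes a plain dictionary for the knowledge graph; the entity names and clinical content are invented, not taken from SCTonto:

```python
# Toy stand-in for an ontology/knowledge-graph lookup: each presenting
# complaint maps to a diagnostic hypothesis and a discriminating finding.
KB = {
    "chest pain": {
        "hypothesis": "acute coronary syndrome",
        "finding": "ST-segment elevation on the ECG",
    },
}

# Classic SCT vignette shape: hypothesis, new information, Likert update.
TEMPLATE = (
    "A patient presents with {complaint}. If you were thinking of "
    "{hypothesis}, and then you find {finding}, this hypothesis "
    "becomes... (-2 ruled out, 0 unchanged, +2 almost certain)"
)

def generate_sct_item(complaint):
    """Fill the SCT vignette template from the knowledge-base entry."""
    return TEMPLATE.format(complaint=complaint, **KB[complaint])

item = generate_sct_item("chest pain")
assert "acute coronary syndrome" in item
```

Scaling this up is where an ontology earns its keep: with relations such as complaint-hypothesis-finding encoded once, every consistent triple in the graph becomes a candidate item, which is how the paper's platform cuts test-creation time relative to manual authoring.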
8. Deschênes MF, Charlin B, Phan V, Grégoire G, Riendeau T, Henri M, Fehlmann A, Moussa A. Educators and practitioners' perspectives in the development of a learning by concordance tool for medical clerkship in the context of the COVID pandemic. Can Med Educ J 2021;12:43-54. [PMID: 35003430; PMCID: PMC8740256; DOI: 10.36834/cmej.72461] Open access.
Abstract
BACKGROUND: The COVID-19 pandemic forced medical schools to create educational material to compensate for the anticipated and observed decrease in clinical experiences during clerkships. An online learning by concordance (LbC) tool was developed to offset the limitation of students' exposure to clinical cases. However, knowledge about the instructional design of an LbC tool is scarce, especially regarding the perspectives of the collaborators involved in its design: (1) educators who wrote the vignette questions, and (2) practitioners who constituted the reference panel by answering the LbC questions. The aim of this study was to describe the key elements that supported the pedagogical design of an LbC tool from the perspectives of educators and practitioners.
METHODS: A descriptive qualitative research design was used. Data were collected through online questionnaires and analysed descriptively.
RESULTS: Six educators and 19 practitioners participated in the study. In designing the LbC tool, educators valued prevalent or high-stakes situations, theoretical knowledge, professional situations they had experienced, and difficulties they perceived among students; they also reported that a preparatory workshop promoted peer discussion and helped solidify the writing process. Practitioners valued standards of practice and consensus among experts, but were uncertain of the educational value of their feedback given the ambiguity of the situations included in the LbC tool.
CONCLUSIONS: The LbC tool is a relatively new training tool in medical education. Further research is needed to refine our understanding of its design and to ensure its content validity meets the pedagogical objectives of the clerkship.
Affiliation(s)
- Marie-France Deschênes: Centre d’innovation en formation infirmière (CIFI) – Center for Innovation in Nursing Education, Université de Montréal, Quebec, Canada
- Véronique Phan: Faculté de Médecine, Université de Montréal, Quebec, Canada
- Tania Riendeau: Faculté de Médecine, Université de Montréal, Quebec, Canada
- Margaret Henri: Faculté de Médecine, Université de Montréal, Quebec, Canada
- Aurore Fehlmann: Department of Paediatrics, Gynecology and Obstetrics, Geneva University Hospitals, Switzerland
- Ahmed Moussa: Faculté de Médecine, Université de Montréal, Quebec, Canada