1
Henrico K, Makkink AW. Use of global rating scales and checklists in clinical simulation-based assessments: a protocol for a scoping review. BMJ Open 2023;13:e065981. PMID: 37173107. DOI: 10.1136/bmjopen-2022-065981.
Abstract
INTRODUCTION Assessment in health sciences education remains a hotly debated topic, with measures of competency and how to determine them in simulation-based assessments enjoying much of the focus. Global rating scales (GRS) and checklists are widely used within simulation-based education, but questions remain about how the two strategies are used within clinical simulation assessment. The aim of this proposed scoping review is to explore, map and summarise the nature, range and extent of published literature relating to the use of GRS and checklists in clinical simulation-based assessment. METHODS We will follow the methodological frameworks and updates described by Arksey and O'Malley; Levac, Colquhoun and O'Brien; and Peters, Marnie, Tricco et al, and will report using the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). We will search PubMed, CINAHL, ERIC, Cochrane Library, Scopus, EBSCO, ScienceDirect, Web of Science, the DOAJ and several sources of grey literature. We will include all identified sources published in English after 1 January 2010 that relate to the use of GRS and/or checklists in clinical simulation-based assessments. The planned search will be conducted from 6 February 2023 to 20 February 2023. ETHICS AND DISSEMINATION An ethical waiver was received from a registered research ethics committee, and findings will be disseminated through publications. The overview of the literature produced will help to identify knowledge gaps and inform future research on the use of GRS and checklists in clinical simulation-based assessments. This information will be valuable for all stakeholders who are interested in clinical simulation-based assessments.
Affiliation(s)
- Karien Henrico
- Emergency Medical Care, University of Johannesburg, Johannesburg, Gauteng, South Africa
2
Rasetshwane I, Sepeng NV, Mooa RS. Psychometric properties of a clinical assessment tool in the postgraduate midwifery programme, Botswana. Curationis 2023;46:e1-e7. PMID: 37042533. PMCID: PMC10157412. DOI: 10.4102/curationis.v46i1.2404.
Abstract
BACKGROUND The psychometric properties of a clinical assessment tool used in the postgraduate midwifery programme in Botswana have not been evaluated. A lack of reliable and valid clinical assessment tools contributes to inconsistencies in clinical assessment in midwifery programmes. OBJECTIVES This study aimed to evaluate the internal consistency and content validity of a clinical assessment tool used in the postgraduate midwifery programme in Botswana. METHOD For internal consistency, we calculated the corrected item-total correlation and Cronbach's alpha coefficient. For content validity, subject matter experts completed a checklist to evaluate the relevance and clarity of each competency in the clinical assessment tool. The checklist included questions with Likert-scale responses indicating the level of agreement. RESULTS The clinical assessment tool had good reliability, with a Cronbach's alpha of 0.837. The corrected item-total correlation values ranged from -0.043 to 0.880, and Cronbach's alpha (if item deleted) ranged from 0.079 to 0.865. The overall content validity ratio was 0.95, and the content validity index was 0.97. Item content validity indices ranged from 0.8 to 1.0. The overall scale content validity index was 0.97, and the scale content validity index using universal agreement was 0.75. CONCLUSION The clinical assessment tool used in the postgraduate midwifery programme in Botswana has acceptable reliability. Most of the competencies included in the clinical assessment tool were relevant and clear. Certain competencies need to be reviewed to improve the reliability and validity of the tool. Contribution: The clinical assessment tool currently used in the postgraduate midwifery programme in Botswana had acceptable internal consistency reliability and validity.
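The reliability statistics reported in this abstract (Cronbach's alpha and corrected item-total correlations) take only a few lines of NumPy to compute. The sketch below uses synthetic data purely for illustration; it is not the study's dataset.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(scores):
    """Correlation of each item with the total of the *remaining* items."""
    totals = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, i], totals - scores[:, i])[0, 1]
        for i in range(scores.shape[1])
    ])

# Synthetic data: 30 respondents, 5 items sharing a common factor
rng = np.random.default_rng(0)
base = rng.normal(size=(30, 1))
items = base + rng.normal(scale=0.8, size=(30, 5))
alpha = cronbach_alpha(items)
```

Items flagged by a low (or negative) corrected item-total correlation, as in the -0.043 reported above, are the usual candidates for review or deletion.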
Affiliation(s)
- Itumeleng Rasetshwane
- Department of Nursing, Faculty of Health Sciences, University of Pretoria, Pretoria.
3
El Hussein MT, Hakkola J. Valid and Reliable Tools to Measure Safety of Nursing Students During Simulated Learning Experiences: A Scoping Review. Teaching and Learning in Nursing 2023. DOI: 10.1016/j.teln.2022.12.009.
4
Chabrera C, Diago E, Curell L. Development, Validity and Reliability of Objective Structured Clinical Examination in Nursing Students. SAGE Open Nurs 2023;9:23779608231207217. PMID: 37822363. PMCID: PMC10563491. DOI: 10.1177/23779608231207217.
Abstract
Introduction The adoption of measurement instruments such as the Objective Structured Clinical Examination (OSCE) is essential to assess clinical competencies in nursing students. Objective The purpose of this study was to develop an OSCE, analyze its validity and reliability within the nursing curriculum, and measure self-assessment, stress and satisfaction. Methods The observational validation study of a measurement instrument was carried out in two phases: the design and development of the OSCE, followed by validity and reliability analysis. Results A total of 118 students participated in the study. Ten scenarios were designed incorporating six competency components extracted from the curriculum. Good results were obtained for face validity, content validity (CVI = .82-.95), criterion validity (r = .71, p < .001), and reliability (Cronbach's α = .84). Satisfaction and stress scores were high, and self-assessment scores were lower than the scores students actually obtained. Conclusion A rigorously designed OSCE provides a reliable and valid method for assessing the clinical competence of nursing students.
Affiliation(s)
- Carolina Chabrera
- Associate Professor, Health Department, TecnoCampus, Universitat Pompeu Fabra, Research Group in Attention to Chronicity and Innovation in Health (GRACIS), Mataró, Barcelona, Spain
- Eva Diago
- Adjunct Professor, Health Department, TecnoCampus, Universitat Pompeu Fabra, Mataró, Barcelona, Spain
- Laura Curell
- Assistant Professor, Health Department, TecnoCampus, Universitat Pompeu Fabra, Research Group in Attention to Chronicity and Innovation in Health (GRACIS), Mataró, Barcelona, Spain
5
Singh M, Moss H, Thomas GM, Dadario NB, Mirante D, Ellsworth K, Shulman J, Bellido S, Amicucci B, Jafri FN. The Development of an Assessment Rubric for the Core and Contingency Team Interaction Among Rapid Response Teams. Simul Healthc 2022;17:149-155. PMID: 34387244. DOI: 10.1097/sih.0000000000000602.
Abstract
INTRODUCTION Teamwork training is critical in the development of high-functioning rapid response teams (RRTs). Rapid response teams involve interactions between a patient's core care team and a hospital contingency team, which can lead to disorganized and unsafe resuscitations, largely due to problems with communication and information dissemination. An extensive literature search found no assessment tools specific to the unique communicative challenges of an RRT, and thus this study sought to develop an assessment rubric validated for training RRTs. METHODS This study elucidates the development, implementation, and testing of an RRT rubric based on Kane's framework for validating testing instruments. Twenty-four inpatient code teams underwent team training using a Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS) didactic, an online module on the TeamSTEPPS RRT program, and a subsequent presimulation and postsimulation experience. Two raters were randomized to give a bedside assessment for each team using the proposed RRT rubric. Simulation scores were assessed with Wilcoxon signed-rank tests. Interrater reliability was assessed using intraclass correlation coefficients. These analyses were then used to argue Kane's scoring, generalization, and extrapolation inferences. RESULTS All teams significantly improved from the presimulation to postsimulation scenarios across all TeamSTEPPS domains. Content validity was obtained from 5 resuscitation experts, with a scale-level content validity index of 0.9 and item-level content validity indices of 0.8 to 1.0. The intraclass correlation coefficient was 0.856 for "pre" scores (n = 24, P < 0.001), 0.738 for "post" scores (n = 24, P < 0.001), and 0.890 overall (n = 48, P < 0.001). CONCLUSIONS The authors argue for the validity of a new RRT rubric based on Kane's framework, with a specific focus on teamwork training to improve coordination and function of core and contingency teams. A follow-up study with longitudinal data, along with external validation of this rubric, is needed.
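Several studies in this list report item- and scale-level content validity indices (I-CVI, S-CVI). A minimal sketch of how these are conventionally computed from expert relevance ratings; the ratings matrix here is invented purely for illustration.

```python
import numpy as np

# Expert relevance ratings on a 4-point scale (1 = not relevant, 4 = highly
# relevant); rows = items, columns = raters. Values are illustrative only.
ratings = np.array([
    [4, 3, 4, 4, 3],
    [4, 4, 4, 4, 4],
    [3, 4, 2, 4, 4],
    [4, 4, 4, 3, 4],
])

relevant = ratings >= 3           # conventional dichotomisation (3-4 = relevant)
i_cvi = relevant.mean(axis=1)     # I-CVI: proportion of raters judging each item relevant
s_cvi_ave = i_cvi.mean()          # S-CVI/Ave: mean of the item-level indices
s_cvi_ua = (i_cvi == 1.0).mean()  # S-CVI/UA: proportion of items with universal agreement
```

Note that S-CVI/Ave and S-CVI/UA can diverge sharply (here 0.95 vs 0.75), which is why abstracts such as the Botswana study above report both.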
Affiliation(s)
- Maninder Singh
- From the Department of Emergency Medicine (M.S.), Jacobi Medical Center; Department of Emergency Medicine (H.M.), Montefiore Medical Center; White Plains Hospital (G.M.T., S.B., B.A.), White Plains; Rutgers Robert Wood Johnson Medical School (N.B.D.), New Brunswick, NJ; and Departments of Emergency Medicine (D.M., J.S., F.N.J.) and Critical Care (K.E.), White Plains Hospital, White Plains, NY
6
7
Incorporating Future of Nursing Competencies Into a Clinical and Simulation Assessment Tool: Validating the Clinical Simulation Competency Assessment Tool. Nurs Educ Perspect 2020;41:280-284. PMID: 32732817. DOI: 10.1097/01.nep.0000000000000709.
Abstract
AIM The purpose of the study was to evaluate the validity of the Clinical Simulation Competency Assessment Tool (ClinSimCAT). BACKGROUND The 2011 Future of Nursing report encouraged nursing programs to move toward a competency-based approach to education. As no tool was found to holistically evaluate nursing student competency in clinical and simulation settings, we developed the ClinSimCAT based on the Institute of Medicine recommended competencies. METHOD A Delphi study with three rounds was conducted. A national sample of nursing education and simulation leaders was used to achieve consensus about the competencies. RESULTS The process resulted in a set of 20 competencies across eight domains (patient-centered care, teamwork and collaboration, evidence-based practice, quality improvement, safety, informatics, professionalism, and systems-based practice). CONCLUSION The ClinSimCAT has demonstrated evidence of content validity and can be used for evaluation in clinical and simulation settings across a variety of undergraduate nursing courses.
8
Taylor I, Bing-Jonsson PC, Johansen E, Levy-Malmberg R, Fagerström L. The Objective Structured Clinical Examination in evolving nurse practitioner education: A study of students' and examiners' experiences. Nurse Educ Pract 2019;37:115-123. DOI: 10.1016/j.nepr.2019.04.001.
9
Bernat-Adell MD, Moles-Julio P, Esteve-Clavero A, Collado-Boira EJ. Psychometric Evaluation of a Rubric to Assess Basic Performance During Simulation in Nursing. Nurs Educ Perspect 2019;40:E3-E6. PMID: 30672850. DOI: 10.1097/01.nep.0000000000000436.
Abstract
AIM This study was conducted to evaluate the psychometric properties of a rubric to assess nursing student performance in medium- and low-fidelity simulation. METHOD A psychometric study was carried out. Content validity was explored by a group of experts. Internal consistency was determined by means of Cronbach's coefficient alpha. Interrater agreement and the level of concordance were established by the kappa coefficient and intraclass correlation index. RESULTS The relevance of the dimensions and the definition of each category scored higher than 3.25 on a Likert-type scale (maximum value of 4); content validity ratio values were close to +1. The kappa index was above 0.61 (p < .001) in all dimensions, thereby indicating a good level of interrater agreement; the intraclass correlation index showed values above .82 (p < .001). CONCLUSION The rubric appears to be psychometrically sound, thus supporting its reliability.
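The interrater agreement statistic this abstract reports, the kappa coefficient, is straightforward to compute from two raters' categorical grades. The sketch below uses invented pass/borderline/fail ratings purely for illustration.

```python
import numpy as np

def cohen_kappa(r1, r2, labels):
    """Cohen's kappa for two raters' categorical ratings of the same students."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_obs = (r1 == r2).mean()  # observed agreement
    # chance agreement, from each rater's marginal category frequencies
    p_exp = sum((r1 == c).mean() * (r2 == c).mean() for c in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical grades from two examiners observing the same 8 students
rater_a = ["pass", "pass", "fail", "pass", "borderline", "pass", "fail", "pass"]
rater_b = ["pass", "pass", "fail", "borderline", "borderline", "pass", "fail", "pass"]
kappa = cohen_kappa(rater_a, rater_b, ["pass", "borderline", "fail"])  # ≈ 0.79
```

A kappa above 0.61 is conventionally read as "substantial" agreement, which is the threshold the study above cites for its dimensions.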
Affiliation(s)
- María Desamparados Bernat-Adell
- The authors are faculty at Unidad Predepartamental de Enfermería, Facultad de Ciencias de la Salud, Universitat Jaume I, Castellón de la Plana, Spain. María Desamparados Bernat-Adell, PhD, MSc, RN; Pilar Moles Julio, PhD, MSc, RN; Aurora Esteve Clavero, PhD, MSc, RN; and Eladio Joaquín Collado Boira, PhD, MSc, RN, are professors. The study described in this article is part of an Educational Innovation Project: Assessment Process, Code 3312/16.
10
Shulruf B, Adelstein BA, Damodaran A, Harris P, Kennedy S, O'Sullivan A, Taylor S. Borderline grades in high stakes clinical examinations: resolving examiner uncertainty. BMC Medical Education 2018;18:272. PMID: 30458741. PMCID: PMC6247637. DOI: 10.1186/s12909-018-1382-0.
Abstract
BACKGROUND Objective Structured Clinical Exams are used to increase reliability and validity, yet they achieve only a modest level of reliability. This low reliability is due in part to examiner variance, which is greater than the variance between students. This variance often represents indecisiveness at the cut score, with apparent confusion over terms such as "borderline pass". It is amplified by a well-reported failure to fail. METHODS A borderline grade (meaning performance is neither a clear pass nor a clear fail) was introduced in a high stakes undergraduate medical clinical skills exam to replace a borderline pass grade (historically resolved as 50%) in a 4-point scale (distinction, pass, borderline, fail). Each borderline grade was then resolved into a pass or fail grade by a formula referencing the difficulty of the station and the student's performance in the same domain at other stations. Raw pass or fail grades were unaltered. Mean scores and 95% CIs were calculated per station and per domain for the unmodified and the modified scores/grades (results are presented as error bars). To estimate the defensibility of these modifications, a similar analysis was performed for the P and F grades that resulted from the modification of the B grades. RESULTS Of 14,634 observations, 4.69% were borderline. Application of the formula did not impact the mean scores in each domain, but the failure rate for the exam increased from 0.7% to 4.1%. Examiners and students expressed satisfaction with the borderline grade, the resolution formula and the outcomes. Mean scores (by station and by domain, respectively) of students whose B grades were modified to P were significantly higher than those of their counterparts whose B grades were modified to F. CONCLUSIONS This study provides a feasible and defensible resolution to situations where the examinee's performance is neither a clear pass nor a clear fail, demonstrating the application of the borderline resolution formula in a high stakes exam. It does not create a new performance standard but utilises real data to make judgements about this small number of candidates. This is perceived as a fair approach to pass/fail decisions.
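The abstract does not reproduce the authors' exact resolution formula. The sketch below shows one plausible reading of such a rule, in which a borderline grade is resolved by comparing the student's mean score at the other stations assessing the same domain with the cohort mean at those stations (a crude proxy for station difficulty). The rule, the data and the station/domain mapping are all hypothetical.

```python
import numpy as np

def resolve_borderline(scores, student, station, domain_stations):
    """Resolve one borderline (B) grade to pass (P) or fail (F).

    `scores` is an (n_students, n_stations) array. The student's mean
    score at the *other* stations assessing the same domain is compared
    with the cohort mean at those stations. This is an illustrative
    rule, not the paper's formula.
    """
    others = [s for s in domain_stations if s != station]
    student_mean = scores[student, others].mean()
    cohort_mean = scores[:, others].mean()
    return "P" if student_mean >= cohort_mean else "F"

# Hypothetical cohort of 3 students across 4 stations
scores = np.array([
    [62, 58, 71, 55],
    [48, 45, 50, 47],
    [70, 66, 68, 64],
])
# Student 1 received a B at station 0; stations 0, 1 and 3 assess the same domain
grade = resolve_borderline(scores, student=1, station=0, domain_stations=[0, 1, 3])  # "F"
```

The key property such a rule shares with the one described above is that it never invents a new standard: raw P and F grades are untouched, and only the small fraction of B grades is adjudicated against existing data.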
Affiliation(s)
- Boaz Shulruf
- Faculty of Medicine, University of New South Wales, Sydney, Australia.
- Arvin Damodaran
- Faculty of Medicine, University of New South Wales, Sydney, Australia.
- Peter Harris
- Faculty of Medicine, University of New South Wales, Sydney, Australia.
- Sean Kennedy
- Faculty of Medicine, University of New South Wales, Sydney, Australia.
- Silas Taylor
- Faculty of Medicine, University of New South Wales, Sydney, Australia.
11
Abstract
AIM The aim of the study was to explore and understand the phenomenon of "failing to fail." BACKGROUND Phase 1 of a mixed-methods study suggested that faculty in clinical settings taught students who should not have passed preceding placements; students in didactic settings also passed exams that merited a fail. Phase 2 explored this phenomenon. METHOD A multisite qualitative case study targeted baccalaureate and community college faculty to support analysis using replication logic. Data were collected via semistructured interviews. RESULTS Eighteen demographically diverse cases (in age, experience, and full-/part-time status) were recruited. Factors supporting failing to fail included being good enough, the clinical/didactic dichotomy, team grading, and being the bad guy. CONCLUSION The consistency of enabling factors suggests a collective approach is required to address failing to fail, including pedagogical preparation and cross-school mechanisms for ensuring grading parity. Effort must address integrity and teaching excellence in all aspects of nursing education.
Affiliation(s)
- Angie Docherty
- Angie Docherty, NursD, MPH, RN, is an assistant professor at Oregon Health & Science University School of Nursing, Monmouth, Oregon. This research was funded by a Nursing Education Research Grant from the National League for Nursing.
12
Facilitating peer based learning through summative assessment – An adaptation of the Objective Structured Clinical Assessment tool for the blended learning environment. Nurse Educ Pract 2018;28:40-45. DOI: 10.1016/j.nepr.2017.09.011.
13
Hernández-Padilla JM, Granero-Molina J, Márquez-Hernández VV, Cortés-Rodríguez AE, Fernández-Sola C. Efeitos de um workshop de simulação sobre a competência em punção arterial de estudantes de enfermagem [Effects of a simulation workshop on nursing students' arterial puncture competency]. Acta Paul Enferm 2016. DOI: 10.1590/1982-0194201600095.
Abstract
OBJECTIVE To evaluate whether a short simulation workshop on radial artery puncture would improve nursing students' competency to a level at which they could perform the procedure on a live patient without compromising patient safety. METHODS Quasi-experimental pre-test/post-test study with a group of 111 third-year nursing students. A 1.5-hour simulation workshop was implemented, comprising a video lecture, live demonstrations, self-directed simulated practice in dyads, and intermittent individual feedback. Participants' arterial puncture skills, knowledge and self-efficacy were measured before and after the workshop. RESULTS After the intervention, 61.1% of participants demonstrated the level of competency required to safely perform radial artery puncture on a live patient under supervision. CONCLUSION Effective simulation-based arterial puncture training for nursing students need not be resource-intensive. Well-planned, evidence-based training sessions using low-technology simulators can help educators achieve good educational outcomes and promote patient safety.