1
Fisher K, Fielding A, Ralston A, Holliday E, Ball J, Tran M, Davey A, Tapley A, Magin P. Exam prediction and the General Practice Registrar Competency Assessment Grid (GPR-CAG). Education for Primary Care 2023;34:268-276. [PMID: 38011869] [DOI: 10.1080/14739879.2023.2269884] [Received: 02/09/2023] [Accepted: 10/09/2023]
Abstract
BACKGROUND In GP training, identifying early predictors of poor summative examination performance can be challenging. We aimed to establish whether external clinical teaching visit (ECTV) performance, measured using a validated instrument (the GP Registrar Competency Assessment Grid, GPR-CAG), is predictive of Royal Australian College of General Practitioners (RACGP) Fellowship examination performance.
METHODS A retrospective cohort study of GP registrars in New South Wales/Australian Capital Territory with ECTV data recorded during their first training term (GPT1), between 2014 and 2018, who attempted at least one Fellowship examination. Independent variables of interest were the four GPR-CAG factors assessed in GPT1 ('patient-centredness/caring', 'formulating hypotheses/management plans', 'professional responsibilities', and 'physical examination skills'). Outcomes of interest were the individual scores on the three summative examinations (the Applied Knowledge Test (AKT), the Key Feature Problem (KFP), and the Objective Structured Clinical Examination (OSCE)) and overall Pass/Fail status. Univariable and multivariable regression analyses were performed.
RESULTS Univariably, there were statistically significant associations (p < 0.01) between all four GPR-CAG factors and all four summative examination outcomes, except between 'formulating hypotheses/management plans' and OSCE score (p = 0.07). On multivariable analysis, each factor was significantly associated (p < 0.05) with at least one exam outcome, and 'physical examination skills' was significantly associated (p < 0.05) with all four exam outcomes.
DISCUSSION ECTV performance, via GPR-CAG scores, is predictive of RACGP Fellowship exam performance. The univariable findings highlight the pragmatic utility of ECTVs in flagging registrars who are at risk of poor exam performance, facilitating early intervention. The multivariable associations between GPR-CAG scores and examination performance suggest that these scores provide predictive ability beyond that of other known predictors.
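The distinction the abstract draws between univariable and multivariable regression can be sketched on simulated data. This is a toy illustration only: the factor names are taken from the abstract, but every number, coefficient, and variable here is invented and bears no relation to the study's actual data or models.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300  # hypothetical number of registrars

# Two simulated GPR-CAG factor scores (standardised, independent by construction).
patient_centredness = rng.normal(0, 1, n)
phys_exam_skills = rng.normal(0, 1, n)

# Simulated AKT score driven by both factors plus noise (coefficients invented).
akt_score = 60 + 3 * patient_centredness + 5 * phys_exam_skills + rng.normal(0, 4, n)

# Univariable fit: one predictor at a time (design matrix = intercept + factor).
X_uni = np.column_stack([np.ones(n), phys_exam_skills])
beta_uni, *_ = np.linalg.lstsq(X_uni, akt_score, rcond=None)

# Multivariable fit: both predictors jointly, so each coefficient is the
# association of that factor adjusted for the other.
X_multi = np.column_stack([np.ones(n), patient_centredness, phys_exam_skills])
beta_multi, *_ = np.linalg.lstsq(X_multi, akt_score, rcond=None)

print("univariable slope:", beta_uni[1])
print("multivariable slopes:", beta_multi[1:])
```

Because the simulated predictors are independent, the univariable and multivariable slopes roughly agree here; in real cohort data, correlated factors are exactly why the multivariable analysis in the study adds information beyond the univariable associations.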
Affiliation(s)
- Katie Fisher
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, NSW, Australia
- Alison Fielding
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, NSW, Australia
- Anna Ralston
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, NSW, Australia
- Elizabeth Holliday
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- Jean Ball
- Clinical Research Design IT and Statistical Support, Hunter Medical Research Institute, New Lambton Heights, NSW, Australia
- Michael Tran
- School of Population Health, University of New South Wales, Sydney, Australia
- Andrew Davey
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, NSW, Australia
- Amanda Tapley
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, NSW, Australia
- Parker Magin
- School of Medicine and Public Health, University of Newcastle, University Drive, Callaghan, NSW, Australia
- GP Synergy NSW and ACT Research and Evaluation Unit, Mayfield West, NSW, Australia
2
On Educational Assessment Theory: A High-Level Discussion of Adolphe Quetelet, Platonism, and Ergodicity. Philosophies 2021. [DOI: 10.3390/philosophies6020046]
Abstract
Educational assessments, specifically standardized and normalized exams, owe most of their foundations to psychological test theory in psychometrics. While the theoretical assumptions of these practices are widespread and relatively uncontroversial in the testing community, at least two are philosophically and mathematically suspect and have troubling implications for education. Assumption 1 is that repeated assessment measures, aggregated into an arithmetic mean, represent some real, stable, quantitative psychological trait or ability plus some error. Assumption 2 is that aggregated, group-level educational data collected from assessments can be interpreted, without explicit justification, to make inferences about a given individual over time. It is argued that the former assumption cannot be taken for granted; it is also argued that, while it is typically attributed to 20th-century thought, the assumption in a rigorous form can be traced back at least to the 1830s via an unattractive Platonistic statistical thesis offered by one of the founders of the social sciences, the Belgian mathematician Adolphe Quetelet (1796–1874). While contemporary research has moved away from using his work directly, it is demonstrated that cognitive psychology still preserves assumption 1, which is increasingly challenged by current paradigms that model human cognition as a dynamical, complex system. However, how to deal with assumption 1, and whether it is broadly justified, is left as an open question. It is then argued that assumption 2 is justified only when assessments have ergodic properties, a criterion rarely met in education; specifically, some forms of normalized standardized exams are intrinsically non-ergodic and should be regarded as invalid for drawing conclusions about individual students and their capability.
The article closes with a call for the introduction of dynamical mathematics into educational assessment at a conceptual level (e.g., through Bayesian networks), the critical analysis of several key psychological testing assumptions, and the introduction of dynamical language into philosophical discourse. Each of these prima facie distinct areas ought to inform the others more closely in educational studies.
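The ergodicity point (that an average taken across a group at one moment need not describe any individual's trajectory over time) can be illustrated with a toy simulation. Everything here is invented for illustration and is not from the article: we give each simulated student a distinct stable ability, so the ensemble average across students and the time average for one student track different quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_tests = 500, 200

# Each simulated student has a distinct, stable underlying ability.
abilities = rng.normal(70, 10, size=n_students)

# Observed scores: ability plus per-test noise (classical "trait + error").
scores = abilities[:, None] + rng.normal(0, 5, size=(n_students, n_tests))

# Ensemble average: across all students on a single test occasion.
ensemble_mean = scores[:, 0].mean()

# Time average: across all test occasions for a single student.
time_mean_one = scores[0, :].mean()

# Non-ergodic: the time average converges to THAT student's ability,
# not to the group mean, so group-level statistics need not transfer
# to inferences about any given individual.
print("ensemble mean:", round(ensemble_mean, 1))
print("time mean, student 0:", round(time_mean_one, 1))
```

In an ergodic process the two averages would converge to the same value; here the time average recovers the individual's own ability, which is the gap the abstract's argument turns on.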