1
Panlilio CC, Famularo L, Masters J, Dore S, Verdiglione N, Yang C, Lehman E, Hamm RM, Fiene R, Bard D, Kapp KM, Levi BH. Integrating Validity Evidence to Revise a Child Abuse Knowledge Test for Early Childhood Education Providers: A Mixed Methods Approach. The American Journal of Evaluation 2022;43:559-583. [PMID: 36507193] [PMCID: PMC9733792] [DOI: 10.1177/10982140211002901]
Abstract
Knowledge tests used to evaluate the effectiveness of child protection training programs for early childhood education providers may suffer threats to construct validity, given the variability in state-specific regulations around mandated reporting requirements. Unfortunately, research on evaluation practice offers little guidance on revising instruments to account for such state-specific requirements. This study therefore explored how collecting and integrating validity evidence within a mixed methods framework can guide the instrument revision process toward a more valid program outcome measure.
Affiliation(s)
- Carlomagno C. Panlilio
- Department of Educational Psychology, Counseling, and Special Education, The Pennsylvania State University, State College, PA, USA
- Sarah Dore
- Department of Humanities and Pediatrics, Penn State College of Medicine, Hershey, PA, USA
- Nicole Verdiglione
- Department of Humanities and Pediatrics, Penn State College of Medicine, Hershey, PA, USA
- Chengwu Yang
- Department of Epidemiology and Health Promotion, College of Dentistry, New York University, NY, USA
- Erik Lehman
- Department of Public Health Sciences, Penn State College of Medicine, Hershey, PA, USA
- Robert M. Hamm
- Department of Family and Preventive Medicine, College of Medicine, The University of Oklahoma, Norman, OK, USA
- Richard Fiene
- Department of Human Development and Family Studies, The Pennsylvania State University, State College, PA, USA
- Department of Psychology, The Pennsylvania State University, State College, PA, USA
- David Bard
- Department of Pediatrics, College of Medicine, The University of Oklahoma, Norman, OK, USA
- Karl M. Kapp
- Department of Instructional Technology, Bloomsburg University, PA, USA
- Benjamin H. Levi
- Department of Humanities and Pediatrics, Penn State College of Medicine, Hershey, PA, USA
2
Yaneva V, Clauser BE, Morales A, Paniagua M. Assessing the validity of test scores using response process data from an eye-tracking study: a new approach. Advances in Health Sciences Education: Theory and Practice 2022;27:1401-1422. [PMID: 35511357] [PMCID: PMC9859888] [DOI: 10.1007/s10459-022-10107-9]
Abstract
Understanding the response process used by test takers when responding to multiple-choice questions (MCQs) is particularly important in evaluating the validity of score interpretations. Previous authors have recommended eye-tracking technology as a useful approach for collecting data on the processes test takers use to respond to test questions. This study proposes a new method for evaluating alternative score interpretations using eye-tracking data and machine learning. We collected eye-tracking data from 26 students responding to clinical MCQs. Analysis was performed by providing 119 eye-tracking features as input to a machine-learning model that classifies correct and incorrect responses. The predictive power of various combinations of features within the model was evaluated to understand how different feature interactions contribute to the predictions. The emerging eye-movement patterns indicate that incorrect responses are associated with working from the options to the stem. By contrast, correct responses are associated with working from the stem to the options, spending more time reading the problem carefully, and a more decisive selection of a response option. The results suggest that the behaviors associated with correct responses are aligned with the real-world model used for score interpretation, while those associated with incorrect responses are not. To the best of our knowledge, this is the first study to perform data-driven, machine-learning experiments with eye-tracking data for the purpose of evaluating score interpretation validity.
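The general approach the abstract describes (classifying correct vs. incorrect responses from eye-tracking features and inspecting which features drive the predictions) can be sketched as follows. This is a minimal illustration, not the study's method: the four feature names, the synthetic data, and the random-forest classifier are all assumptions standing in for the paper's 119 features and actual model.

```python
# Hypothetical sketch: predict response correctness from eye-tracking-style
# features. All data here is synthetic; feature names are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # pretend trials (the study used 26 students' responses)

# Illustrative features: dwell time on the question stem, dwell time on
# the answer options, number of stem-to-option gaze transitions, and time
# from last fixation to the final selection.
stem_dwell = rng.normal(30.0, 5.0, n)
option_dwell = rng.normal(20.0, 5.0, n)
transitions = rng.poisson(6, n).astype(float)
decision_time = rng.normal(10.0, 2.0, n)
X = np.column_stack([stem_dwell, option_dwell, transitions, decision_time])

# Synthetic label loosely encoding the paper's finding that careful stem
# reading tends to accompany correct responses.
y = (stem_dwell + rng.normal(0.0, 5.0, n) > 30.0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # predictive power of the feature set

# Fit once on all data to inspect which features carry the signal.
clf.fit(X, y)
importances = clf.feature_importances_
print(f"mean CV accuracy: {scores.mean():.2f}")
```

In the study's framing, feature subsets would be swapped in and out of `X` and the change in classification performance compared, to see how different feature interactions contribute to the predictions.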
Affiliation(s)
- Victoria Yaneva
- National Board of Medical Examiners, 3750 Market Street, Philadelphia, PA, 19104-3102, USA
- Brian E Clauser
- National Board of Medical Examiners, 3750 Market Street, Philadelphia, PA, 19104-3102, USA
- Amy Morales
- National Board of Medical Examiners, 3750 Market Street, Philadelphia, PA, 19104-3102, USA
- Miguel Paniagua
- National Board of Medical Examiners, 3750 Market Street, Philadelphia, PA, 19104-3102, USA
3
Yaneva V, Clauser BE, Morales A, Paniagua M. Using Eye-Tracking Data as Part of the Validity Argument for Multiple-Choice Questions: A Demonstration. Journal of Educational Measurement 2021. [DOI: 10.1111/jedm.12304]