1
Watkins MW, Dombrowski SC, McGill RJ, Canivez GL, Pritchard AE, Jacobson LA. Bootstrap Exploratory Graph Analysis of the WISC-V with a Clinical Sample. J Intell 2023; 11:137. PMID: 37504780; PMCID: PMC10381339; DOI: 10.3390/jintelligence11070137.
Abstract
One important aspect of construct validity is structural validity. Structural validity refers to the degree to which scores on a psychological test reflect the dimensionality of the construct being measured. Factor analysis, which assumes that unobserved latent variables are responsible for the covariation among observed test scores, has traditionally been employed to provide structural validity evidence. Factor analytic studies have variously suggested either four or five dimensions for the WISC-V, and it is unlikely that any new factor analytic study will resolve this dimensional dilemma. Unlike factor analysis, exploratory graph analysis (EGA) does not assume a common latent cause of the covariances between test scores. Rather, EGA identifies dimensions by locating strongly connected sets of scores that form coherent sub-networks within the overall network. Accordingly, the present study employed a bootstrap EGA technique to investigate the structure of the 10 WISC-V primary subtests in a large clinical sample (N = 7149; mean age 10.7 years, SD 2.8 years). The resulting structure comprised four sub-networks that paralleled the first-order factor structure reported in many studies, with the fluid reasoning and visual-spatial dimensions merging into a single dimension. These results suggest that discrepant construct and scoring structures exist for the WISC-V, which potentially raises serious concerns about the interpretations of psychologists who employ the test structure preferred by the publisher.
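Editor's note: the core idea of EGA can be sketched in a few lines. The toy example below is illustrative only; it simulates two latent dimensions, thresholds the raw correlation matrix, and reads dimensions off the connected components. The actual method (e.g., the EGAnet package in R) estimates a regularized partial-correlation network and applies a community-detection algorithm such as walktrap, and the bootstrap variant repeats the analysis over resampled data.

```python
# Toy sketch of the EGA idea: subtest scores that correlate strongly form
# coherent sub-networks. All data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
verbal = rng.normal(size=n)      # simulated latent dimension 1
spatial = rng.normal(size=n)     # simulated latent dimension 2
scores = np.column_stack(
    [verbal + 0.5 * rng.normal(size=n) for _ in range(3)]
    + [spatial + 0.5 * rng.normal(size=n) for _ in range(3)]
)
corr = np.corrcoef(scores, rowvar=False)

# Connected components of the graph whose edges are correlations > 0.3.
adj = corr > 0.3
groups, seen = [], set()
for start in range(6):
    if start in seen:
        continue
    stack, comp = [start], set()
    while stack:
        v = stack.pop()
        if v in comp:
            continue
        comp.add(v)
        stack.extend(w for w in range(6) if adj[v, w] and w not in comp)
    seen |= comp
    groups.append(sorted(comp))
print(groups)  # the two simulated dimensions: [[0, 1, 2], [3, 4, 5]]
```

Because the first three simulated subtests load on one latent variable and the last three on another, the thresholded network splits into exactly two sub-networks.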
Affiliation(s)
- Marley W. Watkins
- Department of Educational Psychology, Baylor University, Waco, TX 76798, USA
- Stefan C. Dombrowski
- Department of Graduate Education, Leadership and Counseling, Rider University, Lawrenceville, NJ 08648, USA
- Ryan J. McGill
- Department of School Psychology and Counselor Education, William & Mary, Williamsburg, VA 23185, USA
- Gary L. Canivez
- Department of Psychology, Eastern Illinois University, Charleston, IL 61920, USA
- Alison E. Pritchard
- Department of Neuropsychology, Kennedy Krieger Institute, Baltimore, MD 21231, USA
- Lisa A. Jacobson
- Department of Psychiatry & Behavioral Sciences, Johns Hopkins School of Medicine, Baltimore, MD 21231, USA
2
Feldman SJ, Beslow LA, Felling RJ, Malone LA, Waak M, Fraser S, Bakeer N, Lee JEM, Sherman V, Howard MM, Cavanaugh BA, Westmacott R, Jordan LC. Consensus-Based Evaluation of Outcome Measures in Pediatric Stroke Care: A Toolkit. Pediatr Neurol 2023; 141:118-132. PMID: 36812698; PMCID: PMC10042484; DOI: 10.1016/j.pediatrneurol.2023.01.009.
Abstract
Following a pediatric stroke, outcome measures selected for monitoring functional recovery and development vary widely. We sought to develop a toolkit of outcome measures that are currently available to clinicians, possess strong psychometric properties, and are feasible for use within clinical settings. A multidisciplinary group of clinicians and scientists from the International Pediatric Stroke Organization comprehensively reviewed the quality of measures in multiple domains described in pediatric stroke populations, including global performance, motor and cognitive function, language, quality of life, and behavior and adaptive functioning. The quality of each measure was evaluated using guidelines focused on responsiveness and sensitivity, reliability, validity, feasibility, and predictive utility. A total of 48 outcome measures were included and were rated by experts based on the evidence available within the literature supporting the strengths of their psychometric properties and practical use. Only three measures were found to be validated for use in pediatric stroke: the Pediatric Stroke Outcome Measure, the Pediatric Stroke Recurrence and Recovery Questionnaire, and the Pediatric Stroke Quality of Life Measure. However, multiple additional measures were deemed to have good psychometric properties and acceptable utility for assessing pediatric stroke outcomes. Strengths and weaknesses of commonly used measures, including feasibility, are highlighted to guide evidence-based and practicable outcome measure selection. Improving the coherence of outcome assessment will facilitate comparison of studies and enhance research and clinical care in children with stroke. Further work is urgently needed to close this gap and validate measures across all clinically significant domains in the pediatric stroke population.
Affiliation(s)
- Samantha J Feldman
- Department of Psychology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Lauren A Beslow
- Division of Neurology, Children's Hospital of Philadelphia, Departments of Neurology and Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
- Ryan J Felling
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Laura A Malone
- Johns Hopkins University School of Medicine and the Kennedy Krieger Institute, Baltimore, Maryland
- Michaela Waak
- Pediatric Critical Care Research Group, Child Health Research Centre, The University of Queensland, Queensland, Australia; Pediatric Intensive Care Unit, Queensland Children's Hospital, South Brisbane, Australia
- Stuart Fraser
- Division of Vascular Neurology, Department of Pediatrics, University of Texas Health Science Center, Houston, Texas
- Nihal Bakeer
- Indiana Hemophilia and Thrombosis Center, Indianapolis, Indiana
- Jo Ellen M Lee
- Department of Neurology, Nationwide Children's Hospital, Columbus, Ohio
- Melissa M Howard
- Casa Colina Hospital and Centers for Healthcare, Pomona, California
- Beth Anne Cavanaugh
- Division of Pediatric Neurology, Department of Pediatrics, University of Tennessee Health Science Center, Le Bonheur Children's Hospital, Memphis, Tennessee
- Robyn Westmacott
- Department of Psychology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Lori C Jordan
- Division of Pediatric Neurology, Department of Pediatrics, Vanderbilt University Medical Center, Nashville, Tennessee
3
de Jong PF. The Validity of WISC-V Profiles of Strengths and Weaknesses. J Psychoeduc Assess 2023. DOI: 10.1177/07342829221150868.
Abstract
The Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014) provides a general intelligence score, representing g, and five index scores, reflecting underlying broad factors. Within-person differences between overall performance across subtests and the index scores, denoted index difference scores, are often used to examine profiles of strengths and weaknesses. In this study, the validity of such profiles was examined for the Dutch WISC-V. In line with previous studies, broad factors explained little variance in index scores. A simulation study showed that variation in index difference scores likewise reflected little broad factor variance. The simulation further revealed that, as a consequence, a significant discrepancy between an index score and overall performance was accompanied by a discrepancy on the underlying broad factor in only 40%-74% of cases. Overall, these results provide little support for the validity, and thereby the clinical use, of WISC-V profiles.
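Editor's note: the logic behind the simulation can be loosely illustrated as follows. The loadings below are invented for the sketch and are not the Dutch WISC-V estimates; the point is only that when an index score is mostly g plus error, its deviation from the person's mean index score carries little broad-factor signal.

```python
# Hedged sketch: simulate five index scores dominated by a general factor,
# then measure how much difference-score variance the broad factor explains.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
g = rng.normal(size=n)                      # general factor
broad = 0.2 * rng.normal(size=(n, 5))       # weak broad factors (assumed)
error = 0.4 * rng.normal(size=(n, 5))
indexes = 0.8 * g[:, None] + broad + error  # five simulated index scores

# Index difference scores: each index minus the mean of all five.
diffs = indexes - indexes.mean(axis=1, keepdims=True)
r = np.corrcoef(diffs[:, 0], broad[:, 0])[0, 1]
print(f"broad-factor share of difference-score variance: {r**2:.2f}")
```

With these assumed loadings, only a small fraction of the difference-score variance traces back to the broad factor, which is why a significant index discrepancy need not reflect a discrepancy on the underlying construct.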
Affiliation(s)
- Peter F. de Jong
- Research Institute of Child Development and Education, University of Amsterdam, Amsterdam, The Netherlands
4
Billeiter KB, Froiland JM. Diversity of Intelligence is the Norm Within the Autism Spectrum: Full Scale Intelligence Scores Among Children with ASD. Child Psychiatry Hum Dev 2022. PMID: 35083590; DOI: 10.1007/s10578-021-01300-9.
Abstract
Although previous research helped to define differences in intelligence between neurotypical individuals and those with ASD, results were limited by small sample sizes or restricted subtests. Using data from the NIMH Data Archive, this study examined the intelligence of children with ASD (N = 671). Results demonstrate an average standard deviation of 25.75, which is 1.72 times that of the WISC-III normative sample. Moreover, students with ASD are 12 times more likely than the general population of students to score within the intellectual disability range, but are also 1.5 times more likely to score in the superior range, suggesting that more students with ASD should be considered for giftedness. Determining the diversity of intelligence among those with ASD has implications for research, clinical practice, and neurological understanding.
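Editor's note: the reported ratio is easy to verify, since Wechsler full-scale IQ scores are normed to a standard deviation of 15:

```python
# Checking the reported spread: the ASD sample's SD of 25.75 against the
# Wechsler normative SD of 15.
normative_sd = 15.0
asd_sample_sd = 25.75
ratio = asd_sample_sd / normative_sd
print(round(ratio, 2))  # -> 1.72
```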
Affiliation(s)
- Kenzie B Billeiter
- Department of School Psychology, Baylor University, Waco, TX 76706, USA
- John Mark Froiland
- Department of Educational Studies, Purdue University, West Lafayette, IN, USA
5
Barrios-Fernandez S, Gozalo M, Amado-Fuentes M, Carlos-Vivas J, Garcia-Gomez A. A Short Version of the EFECO Online Questionnaire for the Assessment of Executive Functions in School-Age Children. Children (Basel) 2021; 8:799. PMID: 34572231; PMCID: PMC8465183; DOI: 10.3390/children8090799.
Abstract
Executive functions (EF) are a group of processes that allow individuals to be goal-oriented and to function adaptively, so adequate executive performance is essential for success in activities of daily living, at school, and in other settings. The present study aimed to create a short version of the Executive Functioning Questionnaire (EFECO) to fill a gap in the Spanish literature, which lacks behavioural observation questionnaires for school-age children. A total of 3926 participants completed the online questionnaire, and the validity and reliability of the data were then analysed. The results show that the short version, the EFECO-S, has a structure with five dimensions (emotional self-control, initiation, working memory, inhibition, and spatial organisation), a second-order factor (global executive skill), and high reliability (ordinal alpha = 0.68-0.88). The EFECO comprises 67 items, whereas the EFECO-S has 20 items, four per factor, making it a quick and easy test to administer. It is therefore an attractive option for screening children who may be experiencing executive difficulties.
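Editor's note: for readers unfamiliar with the reliability index cited, the sketch below computes ordinary Cronbach's alpha on simulated item scores. Ordinal alpha applies the same formula to a polychoric correlation matrix, which is more appropriate for Likert-type items; the data here are invented.

```python
# Simplified illustration of scale reliability: Cronbach's alpha for a
# four-item factor with simulated continuous item scores.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
trait = rng.normal(size=1000)
items = trait[:, None] + rng.normal(size=(1000, 4))  # four items, one factor
print(f"alpha = {cronbach_alpha(items):.2f}")
```

With equal loadings and equal error variances as simulated here, the theoretical alpha is 0.80, which is roughly what the sketch prints.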
Affiliation(s)
- Sabina Barrios-Fernandez
- Social Impact and Innovation in Health (InHEALTH), University of Extremadura, 10003 Cáceres, Spain
- Margarita Gozalo
- Psychology and Anthropology Department, University of Extremadura, 10003 Cáceres, Spain
- Maria Amado-Fuentes
- Psychology and Anthropology Department, University of Extremadura, 10003 Cáceres, Spain
- Jorge Carlos-Vivas
- Promoting a Healthy Society Research Group (PHeSO), Faculty of Sport Sciences, University of Extremadura, 10003 Cáceres, Spain
- Andres Garcia-Gomez
- Education Sciences Department, University of Extremadura, 10003 Cáceres, Spain
6
Glutting JJ, Davey A, Wahlquist VE, Watkins M, Kaminski TW. Internal (Factorial) Validity of the ANAM using a Cohort of Woman High-School Soccer Players. Arch Clin Neuropsychol 2021; 36:940-953. PMID: 33372968; DOI: 10.1093/arclin/acaa120.
Abstract
INTRODUCTION: Computerized neuropsychological testing is a cornerstone of sport-related concussion assessment. Female soccer players are at increased risk for concussion as well as for exposure to repetitive head impacts from heading a soccer ball. Our primary aim was to examine the factorial validity of the Automated Neuropsychological Assessment Metrics (ANAM) neuropsychological test battery with respect to the multiple neurocognitive constructs it purports to measure in a large cohort of interscholastic female soccer players. METHODS: Study participants included 218 interscholastic female soccer players (age = 17.0±0.7 years; mass = 55.5±6.8 kg; height = 164.7±6.6 cm) drawn from a large (850+) prospective database examining purposeful heading from four area high schools over a 10-year period. The ANAM-2001 measured neurocognitive performance. Three methods were used to identify integral constructs underlying the ANAM: (a) exploratory factor analysis (EFA), (b) first-order confirmatory factor analysis (CFA), and (c) hierarchical CFA. RESULTS: Neuropsychological phenomena measured by the ANAM-2001 were best reproduced by a hierarchical CFA organization composed of two lower-level factors (Simple Reaction Time, Mental Efficiency) and a single, general composite. Although the ANAM was multidimensional, only the composite was found to possess sufficient construct dimensionality and reliability for clinical score interpretation. Findings failed to uphold suppositions that the ANAM measures seven distinct constructs or that any of its seven tests provides unique information, independent of the other constructs or the composite, to support individual interpretation. CONCLUSIONS: These outcomes imply that the ANAM possesses factorial-validity evidence, but only scores from the composite appear sufficiently internally valid and reliable to support applied use by practitioners.
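Editor's note: one quick way to see how a battery can be multidimensional yet composite-dominated is to inspect the eigenvalues of its correlation matrix; when a general factor is strong, the first eigenvalue dwarfs the rest, so individual test scores add little beyond the composite. The data below are simulated, not ANAM scores.

```python
# Hedged illustration: seven simulated tests saturated with a general factor.
import numpy as np

rng = np.random.default_rng(3)
n, k = 1000, 7
general = rng.normal(size=n)
tests = 0.8 * general[:, None] + 0.6 * rng.normal(size=(n, k))
eigvals = np.linalg.eigvalsh(np.corrcoef(tests, rowvar=False))[::-1]
print(np.round(eigvals, 2))  # first eigenvalue far exceeds the remaining six
```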
Affiliation(s)
- Adam Davey
- Department of Behavioral Health and Nutrition, University of Delaware, Newark, DE 19716, USA
- Victoria E Wahlquist
- Department of Kinesiology and Applied Physiology - Athletic Training Research Lab, University of Delaware, Newark, DE 19716, USA
- Marley Watkins
- Department of Educational Psychology, Baylor University, Waco, TX 76798, USA
- Thomas W Kaminski
- Department of Kinesiology and Applied Physiology - Athletic Training Research Lab, University of Delaware, Newark, DE 19716, USA