1. Kofler MJ, Soto EF, Singh LJ, Harmon SL, Jaisle E, Smith JN, Feeney KE, Musser ED. Executive function deficits in attention-deficit/hyperactivity disorder and autism spectrum disorder. Nature Reviews Psychology 2024; 3:701-719. [PMID: 39429646; PMCID: PMC11485171; DOI: 10.1038/s44159-024-00350-9]
Abstract
Executive function deficits have been reported in both autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD). However, little is known regarding which, if any, of these impairments are unique vs. shared in children with ADHD versus ASD. In this Review, we provide an overview of the current literature with a critical eye toward diagnostic, measurement, and third-variable considerations that should be leveraged to provide more definitive answers. We conclude that the field's understanding of ASD and ADHD executive function profiles is highly limited because most research on one disorder has failed to account for their co-occurrence and the presence of symptoms of the other disorder; a vast majority of studies have relied on traditional neuropsychological tests and/or informant-rated executive function scales that have poor specificity and construct validity; and most studies have been unable to account for the well-documented between-person heterogeneity within and across disorders. Currently, the most parsimonious conclusion is that children with ADHD and/or ASD tend to perform moderately worse than neurotypical children on a broad range of neuropsychological tests. However, the extent to which these difficulties are unique vs. shared, or attributable to impairments in specific executive functions subcomponents, remains largely unknown. We end with focused recommendations for future research that we believe will advance this important line of inquiry.
Affiliation(s)
- Michael J. Kofler, Department of Psychology, Florida State University, Tallahassee, FL, USA
- Elia F. Soto, Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Leah J. Singh, Department of Psychology, Florida State University, Tallahassee, FL, USA
- Sherelle L. Harmon, Department of Psychology, Florida State University, Tallahassee, FL, USA
- Emma Jaisle, Department of Psychology, Florida International University, Miami, FL, USA
- Jessica N. Smith, Department of Psychology, Florida International University, Miami, FL, USA
- Kathleen E. Feeney, Department of Psychology, Florida International University, Miami, FL, USA
- Erica D. Musser, Department of Psychology, Barnard College, Columbia University, New York, NY, USA
2. de Jong PF. The Validity of WISC-V Profiles of Strengths and Weaknesses. Journal of Psychoeducational Assessment 2023. [DOI: 10.1177/07342829221150868]
Abstract
The Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014) provides a general intelligence score, representing g, and five index scores, reflecting underlying broad factors. Within-person differences between overall performance across subtests and the index scores, denoted index difference scores, are often used to examine profiles of strengths and weaknesses. In this study, the validity of such profiles was examined for the Dutch WISC-V. In line with previous studies, broad factors explained little variance in index scores. A simulation study showed that variation in index difference scores also reflected little broad factor variance. The simulation further revealed that, as a consequence, a significant discrepancy between an index score and overall performance was accompanied by a discrepancy on the underlying broad factor in only 40%–74% of cases. Overall, these results provide little support for the validity, and thereby the clinical use, of WISC-V profiles.
Affiliation(s)
- Peter F. de Jong, Research Institute of Child Development and Education, University of Amsterdam, Amsterdam, The Netherlands
3. Mao X, Zhang J, Xin T. The Optimal Design of Bifactor Multidimensional Computerized Adaptive Testing with Mixed-format Items. Applied Psychological Measurement 2022; 46:605-621. [PMID: 36131843; PMCID: PMC9483217; DOI: 10.1177/01466216221108382]
Abstract
Multidimensional computerized adaptive testing (MCAT) using mixed-format items holds great potential for next-generation assessments. Two critical factors in mixed-format test design (the order and proportion of polytomous items) and item selection were addressed in the context of mixed-format bifactor MCAT. For item selection, this article presents the derivation of the Fisher information matrix of the bifactor graded response model and the application of the bifactor dimension reduction method to simplify computation of the mutual information (MI) item selection method. In a simulation study, different MCAT designs were compared with varying proportions of polytomous items (0.2-0.6, 1), different item-delivering formats (DPmix: delivering polytomous items at the final stage; RPmix: random delivery), three bifactor patterns (low, middle, and high), and two item selection methods (Bayesian D-optimality and MI). Simulation results suggested that (a) overall estimation precision increased with a higher bifactor pattern; (b) the two item selection methods did not show substantial differences in estimation precision; and (c) the RPmix format always led to more precise interim and final estimates than the DPmix format. Polytomous-item proportions of 0.3 and 0.4 were recommended for the RPmix and DPmix formats, respectively.
Affiliation(s)
- Tao Xin, Beijing Normal University, Beijing, China
4. Wexler D, Pritchard AE, Ludwig NN. Characterizing and comparing adaptive and academic functioning in children with low average and below average intellectual abilities. Clin Neuropsychol 2022:1-18. [PMID: 35833873; DOI: 10.1080/13854046.2022.2096484]
Abstract
Objective: The recent American Academy of Clinical Neuropsychology (AACN) consensus statement on uniform labeling of performance test scores places children who were previously characterized as having "borderline intellectual functioning" within the low average (LA; full scale intelligence quotient (FSIQ) 80-89) or below average (BA; FSIQ 70-79) categories. Given limited research examining functional differences across FSIQ groups using AACN's uniform labeling, this study examined adaptive and academic functioning by FSIQ group in youth referred for (neuro)psychological evaluation. The primary comparisons of interest were between the LA and BA groups. Method: Participants were 2,516 children aged 6 to 13 years with standardized measures of intellectual, adaptive, and academic functioning. Participants were included if their FSIQ ranged from average to exceptionally low. Group differences in adaptive functioning and academic achievement were examined. Results: The LA group did not differ from the BA group in overall adaptive functioning or in several adaptive domains (i.e., social, practical), but demonstrated slightly stronger adaptive skills in the conceptual domain. While the LA group evidenced slightly better word reading and math computation scores than the BA group, these statistically significant differences were not clinically meaningful. Conclusions: In this clinically referred sample, children with LA and BA intellectual abilities demonstrated similar adaptive skills but slightly different academic achievement. Both groups demonstrated lower adaptive and academic functioning than children with average-range FSIQs. These results suggest that adaptive functioning should be assessed during (neuro)psychological evaluations even when children do not have extremely low FSIQs.
Affiliation(s)
- Danielle Wexler, Department of Neuropsychology, Kennedy Krieger Institute, Baltimore, MD, USA; Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Alison E Pritchard, Department of Neuropsychology, Kennedy Krieger Institute, Baltimore, MD, USA; Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Natasha N Ludwig, Department of Neuropsychology, Kennedy Krieger Institute, Baltimore, MD, USA; Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
5. Billeiter KB, Froiland JM. Diversity of Intelligence is the Norm Within the Autism Spectrum: Full Scale Intelligence Scores Among Children with ASD. Child Psychiatry Hum Dev 2022. [PMID: 35083590; DOI: 10.1007/s10578-021-01300-9]
Abstract
Although previous research helped to define differences in intelligence between neurotypicals and those with ASD, results were limited by small sample sizes or restricted subtests. Using data from the NIMH Data Archive, this study examined the intelligence of children with ASD (N = 671). Results demonstrate an average standard deviation of 25.75, which is 1.72 times greater than that of the normative sample for the WISC-III. Moreover, students with ASD are 12 times more likely than the general population of students to score within the intellectual disability range, but are also 1.5 times more likely to score in the superior range, suggesting that more students with ASD should be considered for giftedness. Determining the diversity of intelligence among those with ASD has implications for research, clinical practice, and neurological understanding.
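The dispersion comparisons above reduce to normal tail areas. As a rough illustration only (not the paper's analysis: a common mean of 100 is assumed for both distributions, whereas the ASD sample's mean likely differs, which is partly why the paper's 12-fold risk ratio is larger than the one implied here), one can compare the share of scores below the FSIQ 70 cutoff under the normative SD of 15 versus the reported SD of 25.75:

```python
from math import erf, sqrt

def frac_below(cut, mean, sd):
    """Proportion of a Normal(mean, sd) distribution falling below `cut`."""
    return 0.5 * (1.0 + erf((cut - mean) / (sd * sqrt(2.0))))

# Normative distribution: mean 100, SD 15 -> about 2.3% score below 70.
norm_share = frac_below(70, 100, 15)

# Same mean but SD 25.75 (the dispersion reported for the ASD sample):
# the share below 70 rises to roughly 12%, i.e., severalfold higher.
wide_share = frac_below(70, 100, 25.75)
```

Holding the mean fixed understates the paper's 12-fold figure; a lower ASD-sample mean would push still more of the distribution below the cutoff.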
Affiliation(s)
- Kenzie B Billeiter, Department of School Psychology, Baylor University, Waco, TX, 76706, USA
- John Mark Froiland, Department of Educational Studies, Purdue University, West Lafayette, USA
6. Wechsler SM, Peixoto EM, Gibim QGMT, Bruno Mundim MC, Ribeiro RKSM, de Souza AF. Assessment of Intelligence with Creativity: The Need for a Comprehensive Approach. Creativity Research Journal 2021. [DOI: 10.1080/10400419.2021.1996750]
7. Pauls F, Daseking M. Revisiting the Factor Structure of the German WISC-V for Clinical Interpretability: An Exploratory and Confirmatory Approach on the 10 Primary Subtests. Front Psychol 2021; 12:710929. [PMID: 34594275; PMCID: PMC8476749; DOI: 10.3389/fpsyg.2021.710929]
Abstract
With the exception of a recently published study and the analyses provided in the test manual, structural validity is mostly uninvestigated for the German version of the Wechsler Intelligence Scale for Children - Fifth Edition (WISC-V). Therefore, the main aim of the present study was to examine the latent structure of the 10 WISC-V primary subtests on a bifurcated extended population-representative German standardization sample (N=1,646) by conducting both exploratory (EFA; n=823) and confirmatory (CFA; n=823) factor analyses on the original data. Since no more than one salient subtest loading could be found on the Fluid Reasoning (FR) factor in EFA, results indicated a four-factor rather than a five-factor model solution when the extraction of more than two suggested factors was forced. Likewise, a bifactor model with four group factors was found to be slightly superior in CFA. Variance estimation from both EFA and CFA revealed that the general factor dominantly accounted for most of the subtest variance and construct reliability estimates further supported interpretability of the Full Scale Intelligence Quotient (FSIQ). In both EFA and CFA, most group factors explained rather small proportions of common subtest variance and produced low construct replicability estimates, suggesting that the WISC-V primary indexes were of lower interpretive value and should be evaluated with caution. Clinical interpretation should thus be primarily based on the FSIQ and include a comprehensive analysis of the cognitive profile derived from the WISC-V primary indexes rather than analyses of each single primary index.
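The construct reliability estimates referenced above (omega-hierarchical and related coefficients) follow directly from an orthogonal bifactor loading matrix. A minimal sketch, assuming standardized subtests and orthogonal factors; the loadings in the example are hypothetical, not values from this study:

```python
def omega_hierarchical(general, groups):
    """Omega-hierarchical: the share of total score variance attributable
    to the general factor in an orthogonal bifactor solution.

    general: general-factor loading for each subtest.
    groups:  one list per group factor, giving that factor's loading for
             each subtest (0.0 where a subtest does not load on it).
    """
    g_var = sum(general) ** 2                       # variance due to g
    grp_var = sum(sum(col) ** 2 for col in groups)  # variance due to group factors
    # Uniqueness per subtest: 1 minus its squared loadings.
    uniq = sum(
        1.0 - general[i] ** 2 - sum(col[i] ** 2 for col in groups)
        for i in range(len(general))
    )
    return g_var / (g_var + grp_var + uniq)

# Hypothetical pattern: six subtests loading 0.70 on g, each loading 0.40
# on one of two group factors; omega-hierarchical comes out near 0.78.
omega_h = omega_hierarchical(
    [0.7] * 6,
    [[0.4, 0.4, 0.4, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.4, 0.4, 0.4]],
)
```

A high omega-hierarchical alongside weak group-factor reliabilities is exactly the pattern the study reports in support of FSIQ-first interpretation.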
Affiliation(s)
- Franz Pauls, Department of Clinical Psychology, Helmut-Schmidt-University/University of the Federal Armed Forces, Hamburg, Germany
- Monika Daseking, Department of Educational Psychology, Helmut-Schmidt-University/University of the Federal Armed Forces, Hamburg, Germany
8. Decker SL, Bridges RM, Luedke JC, Eason MJ. Dimensional Evaluation of Cognitive Measures: Methodological Confounds and Theoretical Concerns. Journal of Psychoeducational Assessment 2020. [DOI: 10.1177/0734282920940879]
Abstract
The current study provides a methodological review of studies supporting a general factor of intelligence as the primary model for contemporary measures of cognitive abilities. A further evaluation is provided by an empirical comparison of statistical estimates obtained with different approaches in a large sample of children (ages 9-13 years, N = 780) administered a comprehensive battery of cognitive measures. Results demonstrate the ramifications of the bifactor and Schmid-Leiman (BF/SL) techniques and suggest that using BF/SL methods limits interpretation of cognitive abilities to only a general factor. The inadvertent use of BF/SL methods is shown to impact both model dimensionality and variance estimates for specific measures. As demonstrated in this study, conclusions from both exploratory and confirmatory studies using BF/SL methods are called into question, especially for studies with a questionable theoretical basis. Guidelines for the interpretation of cognitive test scores in applied practice are discussed.
9. McGill RJ, Ward TJ, Canivez GL. Use of translated and adapted versions of the WISC-V: Caveat emptor. School Psychology International 2020. [DOI: 10.1177/0143034320903790]
Abstract
The Wechsler Intelligence Scale for Children (WISC) is the most widely used intelligence test in the world. Now in its fifth edition, the WISC-V has been translated and adapted for use in nearly a dozen countries. Despite its popularity, numerous concerns have been raised about some of the procedures used to develop and validate translated and adapted versions of the test around the world. The purpose of this article is to survey the most salient of those methodological and statistical limitations. In particular, empirical data are presented that call into question the equating procedures used to validate the WISC-V Spanish, suggesting cautious use of that instrument. It is believed that the issues raised in the present article will be instructive for school psychologists engaged in the clinical assessment of intelligence with the WISC-V Spanish and with other translated and adapted versions around the world.
10. Mao X, Zhang J, Xin T. Application of Dimension Reduction to CAT Item Selection Under the Bifactor Model. Applied Psychological Measurement 2019; 43:419-434. [PMID: 31452552; PMCID: PMC6696870; DOI: 10.1177/0146621618813086]
Abstract
Multidimensional computerized adaptive testing (MCAT) based on the bifactor model is suitable for tests with multidimensional bifactor measurement structures. Several item selection methods that proved more advantageous than the maximum Fisher information method are not practical for bifactor MCAT because of the time-consuming computations that result from high dimensionality. To make them applicable in bifactor MCAT, dimension reduction is applied to four item selection methods: the posterior-weighted Fisher D-optimality (PDO) method and three non-Fisher-information-based methods, posterior expected Kullback-Leibler information (PKL), continuous entropy (CE), and mutual information (MI). They were compared with the Bayesian D-optimality (BDO) method in terms of estimation precision. When both the general and group factors are measurement objectives, BDO, PDO, CE, and MI perform equally well and better than PKL. When the group factors represent nuisance dimensions, MI and CE perform best in estimating the general factor, followed by BDO, PDO, and PKL. How bifactor pattern and test length affect estimation accuracy is also discussed.
Affiliation(s)
- Jiahui Zhang, Michigan State University, East Lansing, MI, USA
- Tao Xin, Beijing Normal University, Beijing, China
11. Dombrowski SC, McGill RJ, Morgan GB. Monte Carlo Modeling of Contemporary Intelligence Test (IQ) Factor Structure: Implications for IQ Assessment, Interpretation, and Theory. Assessment 2019; 28:977-993. [PMID: 31431055; DOI: 10.1177/1073191119869828]
Abstract
Researchers continue to debate the constructs measured by commercial ability tests. Factor analytic investigations of these measures have been used to develop and refine widely adopted psychometric theories of intelligence, particularly the Cattell-Horn-Carroll (CHC) model. Even so, this linkage may be problematic because many of these investigations examine a particular instrument in isolation, and CHC model specification across tests and research teams has not been consistent. To address these concerns, the present study used Monte Carlo resampling to investigate the latent structure of four of the most widely used intelligence tests for children and adolescents. The results located the approximate existence of the publisher-posited CHC theoretical group factors in the Differential Ability Scales–Second Edition and the Kaufman Assessment Battery for Children–Second Edition, but not in the Wechsler Intelligence Scale for Children–Fifth Edition or the Woodcock-Johnson IV Tests of Cognitive Abilities. Instead, the results supported alternative conceptualizations from independent factor analytic research. Additionally, whereas a bifactor model produced superior fit indices for two instruments (the Wechsler Intelligence Scale for Children–Fifth Edition and the Woodcock-Johnson IV Tests of Cognitive Abilities), a higher-order structure was superior for the Kaufman Assessment Battery for Children–Second Edition and the Differential Ability Scales–Second Edition. Regardless of the model employed, the general factor captured a significant portion of each instrument's variance. Implications for IQ test assessment, interpretation, and theory are discussed.
12. Grieder S, Grob A. Exploratory Factor Analyses of the Intelligence and Development Scales-2: Implications for Theory and Practice. Assessment 2019; 27:1853-1869. [PMID: 31023061; DOI: 10.1177/1073191119845051]
Abstract
The factor structure of the intelligence and scholastic skills domains of the Intelligence and Development Scales-2 was examined using exploratory factor analyses with the standardization and validation sample (N = 2,030, aged 5 to 20 years). Results partly supported the seven proposed intelligence group factors. However, the theoretical factors Visual Processing and Abstract Reasoning collapsed, as did Verbal Reasoning and Long-Term Memory, resulting in a five-factor structure for intelligence. Adding the three scholastic skills subtests resulted in an additional Reading/Writing factor, with Logical-Mathematical Reasoning loading on abstract Visual Reasoning and showing the highest general factor loading. A data-driven separation of intelligence and scholastic skills is thus not evident. Omega reliability estimates based on Schmid-Leiman transformations revealed a strong general factor that accounted for most of the true-score variance, both overall and at the group factor level. The possible usefulness of factor scores is discussed.
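The Schmid-Leiman transformation underlying the omega estimates above re-expresses a higher-order solution as an orthogonalized, bifactor-like one, separating general-factor loadings from residualized group-factor loadings. A minimal sketch under the standard assumptions; the two-subtest example is hypothetical, not data from this study:

```python
def schmid_leiman(first_order, second_order):
    """Schmid-Leiman transformation of a higher-order factor solution.

    first_order:  rows = subtests, cols = first-order factor loadings.
    second_order: loading of each first-order factor on the general factor.
    Returns (general_loadings, residualized_group_loadings).
    """
    n_tests, n_fac = len(first_order), len(second_order)
    # General loading: first-order loading scaled by the factor's g loading.
    general = [
        sum(first_order[i][k] * second_order[k] for k in range(n_fac))
        for i in range(n_tests)
    ]
    # Residual group loading: first-order loading scaled by the factor's
    # remaining standard deviation after g is removed, sqrt(1 - g_loading^2).
    residual = [
        [first_order[i][k] * (1.0 - second_order[k] ** 2) ** 0.5
         for k in range(n_fac)]
        for i in range(n_tests)
    ]
    return general, residual

# Hypothetical: a subtest loading 0.80 on a factor that itself loads 0.70
# on g gets a general loading of 0.56 and a residual of 0.8 * sqrt(0.51).
g_loadings, group_loadings = schmid_leiman([[0.8, 0.0], [0.0, 0.6]],
                                           [0.7, 0.5])
```

Omega coefficients like those reported above are then computed from the transformed loadings, which is why a strong g shrinks the residual group-factor loadings.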
13. Canivez GL, McGill RJ, Dombrowski SC, Watkins MW, Pritchard AE, Jacobson LA. Construct Validity of the WISC-V in Clinical Cases: Exploratory and Confirmatory Factor Analyses of the 10 Primary Subtests. Assessment 2018; 27:274-296. [PMID: 30516059; DOI: 10.1177/1073191118811609]
Abstract
Independent exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) research with the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) standardization sample has failed to provide support for the five group factors proposed by the publisher, but there have been no independent examinations of the WISC-V structure among clinical samples. The present study examined the latent structure of the 10 WISC-V primary subtests with a large (N = 2,512), bifurcated clinical sample (EFA, n = 1,256; CFA, n = 1,256). EFA did not support five factors as there were no salient subtest factor pattern coefficients on the fifth extracted factor. EFA indicated a four-factor model resembling the WISC-IV with a dominant general factor. A bifactor model with four group factors was supported by CFA as suggested by EFA. Variance estimates from both EFA and CFA found that the general intelligence factor dominated subtest variance and omega-hierarchical coefficients supported interpretation of the general intelligence factor. In both EFA and CFA, group factors explained small portions of common variance and produced low omega-hierarchical subscale coefficients, indicating that the group factors were of poor interpretive value.
Affiliation(s)
- Lisa A Jacobson, Johns Hopkins University School of Medicine, Baltimore, MD, USA
14. McGill RJ, Dombrowski SC, Canivez GL. Cognitive profile analysis in school psychology: History, issues, and continued concerns. J Sch Psychol 2018; 71:108-121. [DOI: 10.1016/j.jsp.2018.10.007]
15. Canivez GL, Watkins MW, McGill RJ. Construct validity of the Wechsler Intelligence Scale For Children - Fifth UK Edition: Exploratory and confirmatory factor analyses of the 16 primary and secondary subtests. British Journal of Educational Psychology 2018; 89:195-224. [DOI: 10.1111/bjep.12230]
16. Dombrowski SC, McGill RJ, Canivez GL, Peterson CH. Investigating the Theoretical Structure of the Differential Ability Scales–Second Edition Through Hierarchical Exploratory Factor Analysis. Journal of Psychoeducational Assessment 2018. [DOI: 10.1177/0734282918760724]
Abstract
When the Differential Ability Scales–Second Edition (DAS-II) was developed, the instrument’s content, structure, and theoretical orientation were amended. Despite these changes, the Technical Handbook did not report results from exploratory factor analytic investigations, and confirmatory factor analyses were implemented using selected subtests across the normative age groups from the total battery. To address these omissions, the present study investigated the theoretical structure of the DAS-II using principal axis factoring followed by the Schmid–Leiman procedure with participants from the 5- to 8-year-old age range to determine the degree to which the DAS-II theoretical structure proposed in the Technical Handbook could be replicated. Unlike other age ranges investigated where at most 14 subtests were administered, the entire DAS-II battery was normed on participants aged 5 to 8 years, making it well suited to test the full instrument’s alignment with theory. Results suggested a six-factor solution that was essentially consistent with the Cattell–Horn–Carroll (CHC)-based theoretical structure suggested by the test publisher and simple structure was attained. The only exception involved two subtests (Picture Similarities and Early Number Concepts) that did not saliently load on a group factor. Implications for clinical practice are discussed.
17. Dombrowski SC, Golay P, McGill RJ, Canivez GL. Investigating the theoretical structure of the DAS-II core battery at school age using Bayesian structural equation modeling. Psychology in the Schools 2017. [DOI: 10.1002/pits.22096]
18. McGill RJ. Exploring the latent structure of the Luria model for the KABC-II at school age: Further insights from confirmatory factor analysis. Psychology in the Schools 2017. [DOI: 10.1002/pits.22037]
19. Watkins MW, Dombrowski SC, Canivez GL. Reliability and factorial validity of the Canadian Wechsler Intelligence Scale for Children–Fifth Edition. 2017. [DOI: 10.1080/21683603.2017.1342580]
Affiliation(s)
- Marley W. Watkins, Department of Educational Psychology, Baylor University, Waco, Texas, USA
- Gary L. Canivez, Department of Psychology, Eastern Illinois University, Charleston, Illinois, USA
20. McGill RJ, Canivez GL. Confirmatory factor analyses of the WISC-IV Spanish core and supplemental subtests: Validation evidence of the Wechsler and CHC models. 2017. [DOI: 10.1080/21683603.2017.1327831]
Affiliation(s)
- Ryan J. McGill, School of Education, College of William & Mary, Williamsburg, Virginia, USA
- Gary L. Canivez, Department of Psychology, Eastern Illinois University, Charleston, Illinois, USA
21. Factor Structure of the CHC Model for the KABC-II: Exploratory Factor Analyses with the 16 Core and Supplementary Subtests. 2017. [DOI: 10.1007/s40688-017-0152-z]
22. Mitchell CAA, Maybery MT, Russell-Smith SN, Collerton D, Gignac GE, Waters F. The Structure and Measurement of Unusual Sensory Experiences in Different Modalities: The Multi-Modality Unusual Sensory Experiences Questionnaire (MUSEQ). Front Psychol 2017; 8:1363. [PMID: 28848477; PMCID: PMC5554527; DOI: 10.3389/fpsyg.2017.01363]
Abstract
Hallucinations and other unusual sensory experiences (USE) can occur in all modalities in the general population. Yet, the existing literature is dominated by investigations into auditory hallucinations (“voices”), while other modalities remain under-researched. Furthermore, there is a paucity of measures which can systematically assess different modalities, which limits our ability to detect individual and group differences across modalities. The current study explored such differences using a new scale, the Multi-Modality Unusual Sensory Experiences Questionnaire (MUSEQ). The MUSEQ is a 43-item self-report measure which assesses USE in six modalities: auditory, visual, olfactory, gustatory, bodily sensations, and sensed presence. Scale development and validation involved a total of 1,300 participants, which included: 513 students and community members for initial development, 32 individuals with schizophrenia spectrum disorder or bipolar disorder for validation, 659 students for factor replication, and 96 students for test-retest reliability. Confirmatory factor analyses showed that a correlated-factors model and bifactor model yielded acceptable model fit, while a unidimensional model fitted poorly. These findings were confirmed in the replication sample. Results showed contributions from a general common factor, as well as modality-specific factors. The latter accounted for less variance than the general factor, but could still detect theoretically meaningful group differences. The MUSEQ showed good reliability, construct validity, and could discriminate non-clinical and clinical groups. The MUSEQ offers a reliable means of measuring hallucinations and other USE in six different modalities.
Affiliation(s)
- Claire A A Mitchell, School of Psychological Science, The University of Western Australia, Crawley, WA, Australia
- Murray T Maybery, School of Psychological Science, The University of Western Australia, Crawley, WA, Australia
- Daniel Collerton, Northumberland, Tyne and Wear NHS Foundation Trust, Bensham Hospital, Gateshead, United Kingdom; Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
- Gilles E Gignac, School of Psychological Science, The University of Western Australia, Crawley, WA, Australia
- Flavie Waters, School of Psychological Science, The University of Western Australia, Crawley, WA, Australia; Clinical Research Centre, Graylands Hospital, North Metro Health Service Mental Health, Mount Claremont, WA, Australia
23. Dombrowski SC, Canivez GL, Watkins MW. Factor Structure of the 10 WISC-V Primary Subtests Across Four Standardization Age Groups. 2017. [DOI: 10.1007/s40688-017-0125-2]
24. Multi-group and hierarchical confirmatory factor analysis of the Wechsler Intelligence Scale for Children–Fifth Edition: What does it measure? Intelligence 2017. [DOI: 10.1016/j.intell.2017.02.005]
25. Canivez GL, Watkins MW, Good R, James K, James T. Construct validity of the Wechsler Intelligence Scale for Children - Fourth UK Edition with a referred Irish sample: Wechsler and Cattell-Horn-Carroll model comparisons with 15 subtests. British Journal of Educational Psychology 2017; 87:383-407. [DOI: 10.1111/bjep.12155]
Affiliation(s)
- Rebecca Good, Éirim: The National Assessment Agency, Ltd., Dublin, Ireland
- Kate James, Éirim: The National Assessment Agency, Ltd., Dublin, Ireland
- Trevor James, Éirim: The National Assessment Agency, Ltd., Dublin, Ireland
26. Meyer EM, Reynolds MR. Scores in Space: Multidimensional Scaling of the WISC-V. Journal of Psychoeducational Assessment 2017. [DOI: 10.1177/0734282917696935]
Abstract
The purpose of this study was to use multidimensional scaling (MDS) to investigate relations among scores from the standardization sample of the Wechsler Intelligence Scale for Children–Fifth Edition (WISC-V; Wechsler, 2014). Nonmetric two-dimensional MDS maps were selected for interpretation. The most cognitively complex subtests and indexes were located near the center of the maps. Subtests also grouped together in the way they are organized into the primary and complementary indexes, not necessarily by surface features of content. Naming Speed and Symbol Translation scores may be best kept as separate indexes, and Digit Span Sequencing appeared to add complexity to the Digit Span subtest. One implication for score interpretation is that the General Ability Index (GAI) and Cognitive Proficiency Index (CPI) may be distinguished by the complexity of the subtests each includes (i.e., more complex subtests in the GAI, less complex subtests in the CPI).