251
John SE, Gurnani AS, Bussell C, Saurman JL, Griffin JW, Gavett BE. The effectiveness and unique contribution of neuropsychological tests and the δ latent phenotype in the differential diagnosis of dementia in the uniform data set. Neuropsychology 2016; 30:946-960. [PMID: 27797542] [PMCID: PMC5130291] [DOI: 10.1037/neu0000315]
Abstract
OBJECTIVE Two main approaches to the interpretation of cognitive test performance have been used to characterize disease: evaluating the shared variance across tests, as with measures of severity, and evaluating the unique variance across tests, as with pattern and error analysis. Both methods provide necessary information, but the unique contributions of each are rarely considered. This study compares the two approaches on their differential diagnostic accuracy while controlling for the influence of other relevant demographic and risk variables. METHOD Archival data requested from the NACC provided clinical diagnostic groups that were paired to one another through a genetic matching procedure. For each diagnostic pairing, two separate logistic regression models predicting clinical diagnosis were fit and compared on their predictive ability. The shared variance approach was represented by the latent phenotype δ, which served as the lone predictor in one set of models. The unique variance approach was represented by raw scores on the 12 neuropsychological test variables comprising δ, which served as the set of predictors in the second group of models. RESULTS Examining the unique patterns of neuropsychological test performance across a battery of tests was the superior method of differentiating between competing diagnoses, accounting for 16-30% of the variance in diagnostic decision making. CONCLUSION Implications for clinical practice, including test selection and interpretation, are discussed.
Affiliation(s)
- Samantha E John
- Department of Psychology, University of Colorado Colorado Springs
- Ashita S Gurnani
- Department of Psychology, University of Colorado Colorado Springs
- Cara Bussell
- Department of Psychology, University of Colorado Colorado Springs
- Jason W Griffin
- Department of Psychology, University of Colorado Colorado Springs
- Brandon E Gavett
- Department of Psychology, University of Colorado Colorado Springs
252
Emmert N, Schwarz L, Vander Wal J, Gfeller J. RBANS factor structure in older adults with suspected cognitive impairment: Evidence for a 5-factor structure. Appl Neuropsychol Adult 2016; 25:38-50. [PMID: 27762635] [DOI: 10.1080/23279095.2016.1238827]
Abstract
Previous research has yielded minimal empirical support for the theoretically formulated five-factor structure of the RBANS, a brief, yet comprehensive standardized neuropsychological test battery used to assess cognitive impairment. The present study tested the theoretically formulated five-factor structure, as well as three alternative factor solutions, using a combination of exploratory and confirmatory factor analytic approaches. The present study utilized archival data from a clinical sample of 150 older adults who were evaluated at an outpatient neuropsychological service. A total of four RBANS models were specified using confirmatory factor analysis. Results of the five-factor model demonstrated good to excellent fit following modifications to the model. Results of chi-square difference tests demonstrated that the five-factor model was statistically superior to the two- and three-factor models (p < .001). In summary, results provide support for the theoretically derived five-factor structure of the RBANS in a clinical sample of older adults. Cautious interpretation of the RBANS index scores as five distinct cognitive domains may be warranted, particularly when there is minimal discrepancy across performance on the tests that comprise each index.
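The model comparison reported above rests on a chi-square difference test between nested CFA models. A minimal sketch of that test follows; the chi-square and degrees-of-freedom values are made up for illustration, not taken from the study.

```python
# Hedged sketch of a chi-square difference test between nested factor models,
# the statistic used to compare the five-factor RBANS solution against
# simpler ones. Fit values below are hypothetical.
from scipy.stats import chi2

chisq_3f, df_3f = 210.4, 51   # hypothetical fit of a 3-factor model
chisq_5f, df_5f = 120.7, 44   # hypothetical fit of the 5-factor model

delta_chisq = chisq_3f - chisq_5f     # improvement in fit
delta_df = df_3f - df_5f              # parameters spent to get it
p = chi2.sf(delta_chisq, delta_df)    # upper-tail p for the difference
print(f"dChi2 = {delta_chisq:.1f}, ddf = {delta_df}, p = {p:.3g}")
```

A significant result, as in the study's p < .001, indicates the less constrained five-factor model fits better than chance parameter spending would explain.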
Affiliation(s)
- Natalie Emmert
- Department of Psychology, Saint Louis University, Saint Louis, Missouri, USA
- Lauren Schwarz
- Department of Neurology & Psychiatry, Saint Louis University, Saint Louis, Missouri, USA
- Jillon Vander Wal
- Department of Psychology, Saint Louis University, Saint Louis, Missouri, USA
- Jeffrey Gfeller
- Department of Psychology, Saint Louis University, Saint Louis, Missouri, USA
253
Gavett BE, Gurnani AS, Saurman JL, Chapman KR, Steinberg EG, Martin B, Chaisson CE, Mez J, Tripodis Y, Stern RA. Practice Effects on Story Memory and List Learning Tests in the Neuropsychological Assessment of Older Adults. PLoS One 2016; 11:e0164492. [PMID: 27711147] [PMCID: PMC5053775] [DOI: 10.1371/journal.pone.0164492]
Abstract
Two of the most commonly used methods of assessing memory functioning in studies of cognitive aging and dementia are story memory and list learning tests. We hypothesized that the most commonly used story memory test, Wechsler's Logical Memory, would generate more pronounced practice effects than a well-validated but less common list learning test, the Neuropsychological Assessment Battery (NAB) List Learning test. Two hundred eighty-seven older adults, ages 51 to 100 at baseline, completed both tests annually as part of a larger neuropsychological test battery. Up to five years of recall scores from participants diagnosed as cognitively normal (n = 96) or with mild cognitive impairment (MCI; n = 72) or Alzheimer's disease (AD; n = 121) at their most recent visit were analyzed with linear mixed effects regression to examine the interaction between the type of test and the number of exposures to the test. Other variables, including age at baseline, sex, education, race, time (years) since baseline, and clinical diagnosis, were also entered as fixed effects predictors. Both tests produced significant practice effects in controls and MCI participants; in contrast, participants with AD declined or remained stable. However, for the delayed, but not the immediate, recall condition, Logical Memory generated more pronounced practice effects than NAB List Learning (b = 0.16, p < .01 for controls). These differential practice effects were moderated by clinical diagnosis, such that controls and MCI participants, but not participants with AD, improved more on Logical Memory delayed recall than on NAB List Learning delayed recall over five annual assessments. Because the Logical Memory test is ubiquitous in cognitive aging and neurodegenerative disease research, its tendency to produce marked practice effects, especially on the delayed recall condition, suggests a threat to its validity as a measure of new learning, an essential construct for dementia diagnosis.
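The test-type by exposure interaction described above can be sketched as a random-intercept mixed model. The simulation below bakes in a larger practice-effect slope for one test and recovers the slope difference as the interaction term; all data, sample sizes, and variable names are illustrative, not the study's.

```python
# Hedged sketch of a linear mixed-effects model testing a test x visit
# interaction on recall scores. Synthetic data: one test gains 0.16 points
# per visit, the other 0.05, so the interaction should come out negative
# (treatment-coded against the faster-improving test).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for subj in range(60):
    subj_effect = rng.normal(0, 1.0)            # random subject intercept
    for visit in range(5):                      # five annual assessments
        for test, gain in [("logical_memory", 0.16), ("nab_list", 0.05)]:
            score = 10 + subj_effect + gain * visit + rng.normal(0, 0.5)
            rows.append({"subject": subj, "visit": visit,
                         "test": test, "score": score})
df = pd.DataFrame(rows)

# Random intercept per subject; the interaction captures the difference in
# practice-effect slopes between the two tests.
fit = smf.mixedlm("score ~ test * visit", df, groups=df["subject"]).fit()
interaction = [k for k in fit.params.index
               if "visit" in k and "nab" in k][0]
print(f"{interaction}: {fit.params[interaction]:.3f}")
```

The fitted interaction coefficient estimates how much flatter the list-learning practice slope is than the story-memory slope, which is the contrast the study reports.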
Affiliation(s)
- Brandon E. Gavett
- Department of Psychology, University of Colorado Colorado Springs, Colorado Springs, Colorado, United States of America
- Ashita S. Gurnani
- Department of Psychology, University of Colorado Colorado Springs, Colorado Springs, Colorado, United States of America
- Jessica L. Saurman
- Department of Psychology, University of Colorado Colorado Springs, Colorado Springs, Colorado, United States of America
- Kimberly R. Chapman
- Alzheimer's Disease Center, Boston University School of Medicine, Boston, Massachusetts, United States of America
- Eric G. Steinberg
- Alzheimer's Disease Center, Boston University School of Medicine, Boston, Massachusetts, United States of America
- Brett Martin
- Boston University School of Public Health, Boston, Massachusetts, United States of America
- Christine E. Chaisson
- Boston University School of Public Health, Boston, Massachusetts, United States of America
- Jesse Mez
- Alzheimer's Disease Center, Boston University School of Medicine, Boston, Massachusetts, United States of America
- Yorghos Tripodis
- Boston University School of Public Health, Boston, Massachusetts, United States of America
- Robert A. Stern
- Alzheimer's Disease Center, Boston University School of Medicine, Boston, Massachusetts, United States of America
254
Egeland J, Løvstad M, Norup A, Nybo T, Persson BA, Rivera DF, Schanke AK, Sigurdardottir S, Arango-Lasprilla JC. Following international trends while subject to past traditions: neuropsychological test use in the Nordic countries. Clin Neuropsychol 2016; 30:1479-1500. [DOI: 10.1080/13854046.2016.1237675]
255
Zane KL, Gfeller JD, Roskos PT, Bucholz RD. The Clinical Utility of the Conners' Continuous Performance Test-II in Traumatic Brain Injury. Arch Clin Neuropsychol 2016; 31:996-1005. [PMID: 27650713] [DOI: 10.1093/arclin/acw078]
Abstract
OBJECTIVE The Conners' Continuous Performance Test, Second Edition (CPT-II) is a measure commonly used with persons with suspected attentional deficits. Our study examined the utility of the CPT-II as a measure of attention in adults with traumatic brain injury (TBI) of varying severity. METHOD As part of a larger investigation, several measures of cognitive functioning, including the CPT-II, were administered to 30 healthy control participants (HCP), 30 mild TBI participants (M-TBI), and 30 moderate to severe TBI participants (MS-TBI). Multivariate and correlational analyses compared group performances and examined convergent and divergent relationships between the CPT-II and various measures, including other tests of attention and neuropsychological function. RESULTS Group differences were found for four of six CPT-II variables, with the MS-TBI group exhibiting greater impairment relative to the M-TBI and HCP groups. In addition, the CPT-II commission and detectability variables correlated significantly with TBI severity. The CPT-II variables also demonstrated correlations of varying magnitude with commonly used neuropsychological measures. CONCLUSIONS These findings support the utility of the CPT-II for assessing attentional abilities in persons with TBI of varying severity, particularly those with moderate to severe injuries. The current study also demonstrates relationships consistent with convergent validity but inconsistent findings with regard to divergent validity, suggesting that the CPT-II may not measure components of attention distinct from those tapped by other commonly used neuropsychological measures of attentional functioning. Further research examining CPT-II performance in TBI populations is recommended.
Affiliation(s)
- Katherine L Zane
- Department of Psychology, Saint Louis University, St. Louis, MO 63108, United States
- Jeffrey D Gfeller
- Department of Psychology, Saint Louis University, St. Louis, MO 63108, United States
- P Tyler Roskos
- Department of Physical Medicine and Rehabilitation Oakwood, Wayne State University School of Medicine, Dearborn, MI 48201, United States
- Richard D Bucholz
- Department of Neurosurgery, Saint Louis University School of Medicine, St. Louis, MO 63104, United States
256
Meltzer EP, Kapoor A, Fogel J, Elbulok-Charcape MM, Roth RM, Katz MJ, Lipton RB, Rabin LA. Association of psychological, cognitive, and functional variables with self-reported executive functioning in a sample of nondemented community-dwelling older adults. Appl Neuropsychol Adult 2016; 24:364-375. [PMID: 27282245] [DOI: 10.1080/23279095.2016.1185428]
Abstract
Subjective executive functioning (EF) measures provide valuable information about real-world difficulties, although it is unclear what variables actually associate with subjective EF scores. We investigated subjective EF in 245 nondemented, community-dwelling older adults (aged 70 and above) from the Einstein Aging Study. Partial correlational analyses controlling for age were performed between the nine Behavior Rating Inventory of Executive Function-Adult version (BRIEF-A) clinical scales and objective EF tests, self-reported mood and personality, and informant-reported activities of daily living. The significance level was set at p < .006 for all analyses (two-tailed). Most notably, higher worry/oversensitivity, physiological anxiety, and fear of aging were significantly associated with increased EF difficulties on all nine BRIEF-A scales. Additionally, increased EF difficulties on five or more BRIEF-A scales were significantly associated with lower conscientiousness, higher neuroticism, and higher depressive symptom scores. The only objective neuropsychological test that significantly correlated with increased EF difficulties (on four BRIEF-A scales) was a measure of practical judgment. Overall, results indicate that interpretation of subjective EF scores must account for self-report of mood and personality. Moreover, the BRIEF-A only minimally taps objective EF as measured by performance-based measures. We discuss the theoretical and practical implications of these findings.
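The age-controlled partial correlations described above amount to correlating the residuals of each variable after regressing out age. A minimal sketch on synthetic data follows; the variable names and effect sizes are invented for illustration and do not reproduce the study's measures.

```python
# Hedged sketch of a partial correlation controlling for age, the analysis
# used to relate BRIEF-A scale scores to mood and personality variables.
# Data are simulated: both variables drift with age, plus a true association.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 245                                          # matches the sample size
age = rng.normal(80, 5, n)
worry = 0.05 * age + rng.normal(0, 1, n)         # age-related worry score
brief_a = 0.05 * age + 0.5 * worry + rng.normal(0, 1, n)

def residualize(y, x):
    """Remove the linear effect of x from y via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# correlate what is left of each variable once age is partialed out
r, p = stats.pearsonr(residualize(worry, age), residualize(brief_a, age))
print(f"partial r = {r:.2f}, p = {p:.3g}")
```

With the study's Bonferroni-style threshold of p < .006, only associations surviving the age adjustment at that level would be reported as significant.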
Affiliation(s)
- Erica P Meltzer
- Department of Psychology, Queens College of the City University of New York, Queens, NY, USA; Department of Psychology, Brooklyn College and the Graduate Center of the City University of New York, New York, NY, USA
- Ashu Kapoor
- Department of Psychology, Ferkauf Graduate School of Psychology, Yeshiva University, Bronx, NY, USA; Department of Neurology, Albert Einstein College of Medicine, Bronx, NY, USA
- Joshua Fogel
- Department of Business Management, Brooklyn College of the City University of New York, Brooklyn, NY, USA
- Milushka M Elbulok-Charcape
- Department of Psychology, Brooklyn College and the Graduate Center of the City University of New York, New York, NY, USA
- Robert M Roth
- Department of Psychiatry, Geisel School of Medicine at Dartmouth College, Lebanon, NH, USA
- Mindy J Katz
- Department of Neurology, Albert Einstein College of Medicine, Bronx, NY, USA
- Richard B Lipton
- Department of Neurology, Albert Einstein College of Medicine, Bronx, NY, USA
- Laura A Rabin
- Department of Psychology, Queens College of the City University of New York, Queens, NY, USA; Department of Psychology, Brooklyn College and the Graduate Center of the City University of New York, New York, NY, USA; Department of Neurology, Albert Einstein College of Medicine, Bronx, NY, USA; Department of Psychiatry, Geisel School of Medicine at Dartmouth College, Lebanon, NH, USA