1.
Keezer RD, Leib SI, Scimeca LM, Smith JT, Holbrook LR, Sharp DW, Jennette KJ, Ovsiew GP, Resch ZJ, Soble JR. Masking effect of high IQ on the Rey Auditory Verbal Learning Test in an adult sample with attention deficit/hyperactivity disorder. Appl Neuropsychol Adult 2024; 31:1-9. [PMID: 34623950] [DOI: 10.1080/23279095.2021.1983575]
Abstract
OBJECTIVE Adults with high intelligence (IQ) and attention-deficit/hyperactivity disorder (ADHD) often perform better on neuropsychological tests than adults with ADHD and average IQ, despite commensurate functional impairment. This study compared adults with ADHD and high versus average IQ on the Rey Auditory Verbal Learning Test (RAVLT) to specifically assess this proposed masking effect of IQ on verbal learning/memory performance among those undergoing neuropsychological evaluation. METHOD RAVLT performance was compared between patients with ADHD with average versus high IQ, as estimated by the Test of Premorbid Functioning. Latent growth curve modeling (LGCM) evaluated learning acquisition across trials. RESULTS RAVLT total learning, immediate, and delayed free recall performances were significantly better in the high IQ group than in the average IQ group. LGCM showed similar quadratic growth trajectories for both IQ groups. Both groups reported equivalent symptom severity and functional complaints in childhood and adulthood. CONCLUSIONS Adults with ADHD and high IQ performed normally on a verbal learning/memory test, whereas adults with average IQ scored 0.5-1.0 standard deviations below the mean. These results suggest that performance-based memory deficits are masked in the context of higher IQ in adults with ADHD, supporting growing evidence that higher IQ masks neurocognitive deficits during the assessment of adults with ADHD.
Affiliation(s)
- Richard D Keezer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Wheaton College, Wheaton, IL, USA
- Sophie I Leib
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Lauren M Scimeca
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Justin T Smith
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Wheaton College, Wheaton, IL, USA
- Lindsey R Holbrook
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Dillon W Sharp
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
2.
Crișan I, Sava FA. Validity assessment in Eastern Europe: cross-validation of the Dot Counting Test and MODEMM against the TOMM-1 and Rey-15 in a Romanian mixed clinical sample. Arch Clin Neuropsychol 2023:acad085. [PMID: 37961918] [DOI: 10.1093/arclin/acad085]
Abstract
OBJECTIVE This study investigated performance validity in the understudied Romanian clinical population by exploring the classification accuracies of the Dot Counting Test (DCT) and the first Romanian performance validity test (PVT), the Memory of Objects and Digits and Evaluation of Memory Malingering (MODEMM), in a heterogeneous clinical sample. METHODS We evaluated 54 outpatients (26 females; MAge = 62.02, SDAge = 12.3; MEducation = 2.41, SDEducation = 2.82) with the Test of Memory Malingering 1 (TOMM-1), the Rey Fifteen-Item Test (Rey-15; free recall and recognition trials), the DCT, the MODEMM, and the MMSE/MoCA as part of their neuropsychological assessment. Accuracy parameters and base failure rates were computed for the DCT and MODEMM indicators against the TOMM-1 and Rey-15. Two patient groups were constructed according to psychometrically defined credible/noncredible performance (i.e., pass/fail both TOMM-1 and Rey-15). RESULTS Consistent with findings from other cultures, a cutoff of ≥18 on the DCT E-score produced the best combination of sensitivity (0.50-0.57) and specificity (≥0.90). MODEMM indicators based on recognition accuracy, inconsistencies, and inclusion false positives generated sensitivities of 0.75-0.86 at specificities of ≥0.90. Multivariable models of MODEMM indicators reached perfect sensitivity at ≥0.90 specificity against two PVTs. Patients who failed both the TOMM-1 and the Rey-15 were significantly more likely to fail the DCT and MODEMM than patients who passed both PVTs. CONCLUSIONS Our results offer proof of concept for the DCT's cross-cultural validity and the applicability of the MODEMM to Romanian clinical examinees, further supporting the use of heterogeneous validity indicators in clinical assessments.
Affiliation(s)
- Iulia Crișan
- Department of Psychology, West University of Timișoara, Timișoara 300223, Romania
- Florin Alin Sava
- Department of Psychology, West University of Timișoara, Timișoara 300223, Romania
3.
Finley JCA, Brooks JM, Nili AN, Oh A, VanLandingham HB, Ovsiew GP, Ulrich DM, Resch ZJ, Soble JR. Multivariate examination of embedded indicators of performance validity for ADHD evaluations: A targeted approach. Appl Neuropsychol Adult 2023:1-14. [PMID: 37703401] [DOI: 10.1080/23279095.2023.2256440]
Abstract
This study investigated the individual and combined utility of 10 embedded validity indicators (EVIs) within executive functioning, attention/working memory, and processing speed measures in 585 adults referred for an attention-deficit/hyperactivity disorder (ADHD) evaluation. Participants were categorized into invalid and valid performance groups as determined by scores from empirical performance validity indicators. Analyses revealed that all of the EVIs could meaningfully discriminate invalid from valid performers (AUCs = .69-.78), with high specificity (≥90%) but low sensitivity (19%-51%). However, none of them explained more than 20% of the variance in validity status. Combining any of these 10 EVIs into a multivariate model significantly improved classification accuracy, explaining up to 36% of the variance in validity status. Integrating six EVIs from the Stroop Color and Word Test, Trail Making Test, Verbal Fluency Test, and Wechsler Adult Intelligence Scale-Fourth Edition was as efficacious (AUC = .86) as using all 10 EVIs together. Failing any two of these six EVIs or any three of the 10 EVIs yielded clinically acceptable specificity (≥90%) with moderate sensitivity (60%). Findings support the use of multivariate models to improve the identification of performance invalidity in ADHD evaluations, but chaining multiple EVIs may only be helpful to an extent.
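The failure-count aggregation reported above (flagging a profile that fails any two of six EVIs, at ≥90% specificity) can be sketched with synthetic data. All per-EVI failure rates, sample sizes, and the threshold below are hypothetical illustrations, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def multivariate_evi_flag(failures, threshold=2):
    """Flag a profile as invalid when >= `threshold` of its binary EVIs are failed (1 = failed)."""
    return int(sum(failures) >= threshold)

# Simulated profiles across 6 EVIs: valid performers rarely fail any single
# indicator, invalid performers fail each indicator far more often.
valid = rng.binomial(1, 0.05, size=(500, 6))    # low per-EVI failure rate
invalid = rng.binomial(1, 0.45, size=(500, 6))  # elevated per-EVI failure rate

specificity = np.mean([1 - multivariate_evi_flag(p) for p in valid])
sensitivity = np.mean([multivariate_evi_flag(p) for p in invalid])
```

The design point this illustrates: requiring two or more failures keeps false positives rare even though each individual EVI has imperfect specificity, while the aggregate recovers sensitivity that no single indicator achieves alone.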
Affiliation(s)
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Julia M Brooks
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, University of Illinois at Chicago, Chicago, IL, USA
- Amanda N Nili
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Medical Social Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Alison Oh
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Hannah B VanLandingham
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Devin M Ulrich
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
4.
Alkhouli M, Al-Nerabieah Z, Dashash M. A Novel Scale to Assess Parental Satisfaction of Dental Local Anesthetic Techniques in Children: A Cross-Sectional Study. Pain Res Manag 2023; 2023:9973749. [PMID: 37251688] [PMCID: PMC10219770] [DOI: 10.1155/2023/9973749]
Abstract
Background Pain control is one of the most important factors affecting parental satisfaction with the dental care provided to children, and dental local anesthesia has the greatest impact on children's pain sensation. However, no scale exists in the literature to assess parental satisfaction with dental local anesthetic techniques. Objectives This study aimed to design a scale reflecting parental satisfaction with the dental local anesthetic techniques used for their children and to examine the scale's validity and reliability. Methods A cross-sectional observational study was conducted with 150 parents (102 mothers and 48 fathers). Two local anesthetic techniques were used for each participating child: inferior alveolar nerve block and computerized intraosseous anesthesia. The developed scale consisted of 20 items rated on a 5-point Likert scale, half written in a negative format. Internal consistency, validity, and factor analyses were performed. Independent t-tests compared the two anesthetic techniques, boys versus girls, and fathers versus mothers. Results Mean parental satisfaction was higher for computerized intraosseous anesthesia than for inferior alveolar nerve block (P < 0.05). There was no difference between boys and girls in parental satisfaction (P > 0.05), but fathers showed lower satisfaction in the computerized intraosseous anesthesia group (P < 0.05). The scale demonstrated excellent internal consistency (Cronbach's alpha = 0.985), and factor analysis with varimax rotation retained seven factor components. Conclusions The parental satisfaction with dental local anesthetic techniques scale (PSLAS) is a valid and reliable instrument, and parental satisfaction was higher with computerized intraosseous anesthesia than with inferior alveolar nerve block.
Affiliation(s)
- Muaaz Alkhouli
- Pediatric Dentistry, Faculty of Dentistry, Damascus University, Damascus, Syria
- Zuhair Al-Nerabieah
- Pediatric Dentistry, Faculty of Dentistry, Damascus University, Damascus, Syria
- Mayssoon Dashash
- Pediatric Dentistry, Faculty of Dentistry, Damascus University, Damascus, Syria
5.
Cutler L, Greenacre M, Abeare CA, Sirianni CD, Roth R, Erdodi LA. Multivariate models provide an effective psychometric solution to the variability in classification accuracy of D-KEFS Stroop performance validity cutoffs. Clin Neuropsychol 2023; 37:617-649. [PMID: 35946813] [DOI: 10.1080/13854046.2022.2073914]
Abstract
OBJECTIVE The study was designed to expand on the results of previous investigations of the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions. METHOD The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). RESULTS Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 on the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. CONCLUSIONS A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the "invalid-before-impaired" paradox.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Matthew Greenacre
- Schulich School of Medicine, Western University, London, Ontario, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Robert Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
6.
Hansen ND, Rhoads T, Jennette KJ, Reynolds TP, Ovsiew GP, Resch ZJ, Critchfield EA, Marceaux JC, O'Rourke JJF, Soble JR. Validation of alternative Dot Counting Test E-score cutoffs based on degree of cognitive impairment in veteran and civilian clinical samples. Clin Neuropsychol 2023; 37:402-415. [PMID: 35343379] [DOI: 10.1080/13854046.2022.2054863]
Abstract
OBJECTIVE This study examined Dot Counting Test (DCT) performance among patient populations with no/minimal impairment and mild impairment in an attempt to cross-validate a more parsimonious interpretive strategy and to derive optimal E-score cutoffs. METHOD Participants included clinically referred patients from VA (n = 101) and academic medical center (AMC, n = 183) settings. Patients were separated by validity status (valid/invalid), and two comparison groups were then formed from each sample's valid group: Group 1 included patients with no to minimal cognitive impairment, and Group 2 included those with mild neurocognitive disorder. Analysis of variance tested for differences in rounded and unrounded DCT E-scores across both comparison groups and the invalid group. Receiver operating characteristic curve analyses identified optimal validity cut scores for each sample, stratified by comparison group. RESULTS In the VA sample, cut scores of ≥13 (rounded) and ≥12.58 (unrounded) differentiated Group 1 from the invalid performers (87% sensitivity/88% specificity), and cut scores of ≥17 (rounded; 58% sensitivity/90% specificity) and ≥16.49 (unrounded; 61% sensitivity/90% specificity) differentiated Group 2 from the invalid group. Similarly, in the AMC sample, a cut score of ≥13 (rounded and unrounded; 75% sensitivity/90% specificity) differentiated Group 1 from the invalid group, whereas cut scores of ≥18 (rounded; 43% sensitivity/94% specificity) and ≥16.94 (unrounded; 46% sensitivity/90% specificity) differentiated Group 2 from the invalid performers. CONCLUSIONS Different cut scores were indicated based on degree of cognitive impairment, providing proof of concept for a more parsimonious interpretive paradigm than deriving individual cut scores for specific diagnostic groups.
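The cutoff-derivation logic described above (choosing the most liberal cut score that still holds specificity at or above .90) can be illustrated with a short sketch. The score distributions, sample sizes, and resulting cutoff below are synthetic, purely for demonstration:

```python
import numpy as np

def cutoff_at_specificity(valid_scores, invalid_scores, min_spec=0.90):
    """Return (cutoff, sensitivity, specificity) for the lowest cutoff meeting min_spec.

    Higher E-scores indicate worse performance; a score >= cutoff is flagged invalid.
    """
    for c in np.unique(np.concatenate([valid_scores, invalid_scores])):
        spec = np.mean(valid_scores < c)      # valid cases NOT flagged
        sens = np.mean(invalid_scores >= c)   # invalid cases flagged
        if spec >= min_spec:
            # Candidate cutoffs ascend, so the first qualifying one
            # maximizes sensitivity among those meeting the specificity floor.
            return float(c), float(sens), float(spec)
    return None

rng = np.random.default_rng(1)
valid = rng.normal(10, 2.5, 300)     # hypothetical intact patients' E-scores
invalid = rng.normal(17, 3.0, 120)   # hypothetical noncredible performers (higher scores)

cutoff, sens, spec = cutoff_at_specificity(valid, invalid)
```

This mirrors why the derived cut scores differ by impairment level: shift the valid-group distribution upward (more impairment) and the cutoff that preserves .90 specificity rises, trading away sensitivity.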
Affiliation(s)
- Nicholas D Hansen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Tristan P Reynolds
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Edan A Critchfield
- Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Justin J F O'Rourke
- Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
7.
Horner MD, Denning JH, Cool DL. Self-reported disability-seeking predicts PVT failure in veterans undergoing clinical neuropsychological evaluation. Clin Neuropsychol 2023; 37:387-401. [PMID: 35387574] [DOI: 10.1080/13854046.2022.2056923]
Abstract
Objective: This study examined disability-related factors as predictors of PVT performance in Veterans who underwent neuropsychological evaluation for clinical purposes, not for determination of disability benefits. Method: Participants were 1,438 Veterans who were seen for clinical evaluation in a VA Medical Center's Neuropsychology Clinic. All were administered the TOMM, MSVT, or both. Predictors of PVT performance included (1) whether Veterans were receiving VA disability benefits ("service connection") for psychiatric or neurological conditions at the time of evaluation, and (2) whether Veterans reported on clinical interview that they were in the process of applying for disability benefits. Data were analyzed using binary logistic regression, with PVT performance as the dependent variable in separate analyses for the TOMM and MSVT. Results: Veterans who were already receiving VA disability benefits for psychiatric or neurological conditions were significantly more likely to fail both the TOMM and the MSVT, compared to Veterans who were not receiving benefits for such conditions. Independently of receiving such benefits, Veterans who reported that they were applying for disability benefits were significantly more likely to fail the TOMM and MSVT than were Veterans who denied applying for benefits at the time of evaluation. Conclusions: These findings demonstrate that simply being in the process of applying for disability benefits increases the likelihood of noncredible performance. The presence of external incentives can predict the validity of neuropsychological performance even in clinical, non-forensic settings.
Affiliation(s)
- Michael David Horner
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- John H Denning
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Danielle L Cool
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
8.
Chang F, Cerny BM, Tse PKY, Rauch AA, Khan H, Phillips MS, Fletcher NB, Resch ZJ, Ovsiew GP, Jennette KJ, Soble JR. Using the Grooved Pegboard Test as an Embedded Validity Indicator in a Mixed Neuropsychiatric Sample with Varying Cognitive Impairment: Cross-Validation Problems. Percept Mot Skills 2023; 130:770-789. [PMID: 36634223] [DOI: 10.1177/00315125231151779]
Abstract
Embedded validity indicators (EVIs) derived from motor tests have received less empirical attention than those derived from tests of other neuropsychological abilities, particularly memory. Preliminary evidence suggests that the Grooved Pegboard Test (GPB) may function as an EVI, but existing studies were largely conducted using simulators and population samples without cognitive impairment. In this study we aimed to evaluate the GPB's classification accuracy as an EVI among a mixed clinical neuropsychiatric sample with and without cognitive impairment. This cross-sectional study comprised 223 patients clinically referred for neuropsychological testing. GPB raw and T-scores for both dominant and nondominant hands were examined as EVIs. A known-groups design, based on ≤1 failure on a battery of validated, independent criterion PVTs, showed that GPB performance differed significantly by validity group. Within the valid group, receiver operating characteristic curve analyses revealed that only the dominant hand raw score displayed acceptable classification accuracy for detecting invalid performance (area under curve [AUC] = .72), with an optimal cut-score of ≥106 seconds (33% sensitivity/88% specificity). All other scores had marginally lower classification accuracy (AUCs = .65-.68) for differentiating valid from invalid performers. Therefore, the GPB demonstrated limited utility as an EVI in a clinical sample containing patients with bona fide cognitive impairment.
Affiliation(s)
- Fini Chang
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, University of Illinois at Chicago, Chicago, Illinois, United States
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, Illinois Institute of Technology, Chicago, Illinois, United States
- Phoebe Ka Yin Tse
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Clinical Psychology, The Chicago School of Professional Psychology, Chicago, Illinois, United States
- Andrew A Rauch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, Loyola University Chicago, Chicago, Illinois, United States
- Humza Khan
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, Illinois Institute of Technology, Chicago, Illinois, United States
- Matthew S Phillips
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Clinical Psychology, The Chicago School of Professional Psychology, Chicago, Illinois, United States
- Noah B Fletcher
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Neurology, University of Illinois College of Medicine, Chicago, Illinois, United States
9.
Pain Influences Neuropsychological Performance Following Electrical Injury: A Cross-Sectional Study. J Int Neuropsychol Soc 2023; 29:35-45. [PMID: 35039108] [DOI: 10.1017/s1355617721001478]
Abstract
OBJECTIVE Electrical injury (EI) is a significant, multifaceted trauma often with multi-domain cognitive sequelae, even when the expected current path does not pass through the brain. Chronic pain (CP) research suggests pain may affect cognition directly, and indirectly by influencing emotional distress, which in turn impacts cognitive functioning. As chronic pain may be critical to understanding EI-related cognitive difficulties, the aims of the current study were to examine the direct and indirect effects of pain on cognition following EI and to compare the relationship between pain and cognition in EI and CP populations. METHOD This cross-sectional study used data from a clinical sample of 50 patients with EI (84.0% male; Mage = 43.7 years) administered standardized measures of pain (Pain Patient Profile), depression, and neurocognitive functioning. A CP comparison sample of 93 patients was also included. RESULTS Higher pain levels were associated with poorer attention/processing speed and executive functioning performance among patients with EI. Depression was significantly correlated with pain and mediated the relationship between pain and attention/processing speed in patients with EI. The relationship between pain and cognition was similar in both clinical groups. CONCLUSIONS Findings indicate that pain impacts mood and cognition in patients with EI, and the influence of pain on cognition should be considered in the assessment and treatment of patients who have experienced an electrical injury.
10.
Weigard A, Spencer RJ. Benefits and challenges of using logistic regression to assess neuropsychological performance validity: Evidence from a simulation study. Clin Neuropsychol 2023; 37:34-59. [PMID: 35006042] [PMCID: PMC9273108] [DOI: 10.1080/13854046.2021.2023650]
Abstract
Logistic regression (LR) is recognized as a promising method for making decisions about neuropsychological performance validity by integrating information across multiple measures. However, this method has yet to be widely adopted in clinical practice, likely because several open questions remain about its utility relative to simpler methods, its effectiveness across different clinical contexts, and its feasibility at sample sizes common in the field. The current study addresses these questions by assessing the classification performance of logistic regression and alternative methods across an array of simulated data sets. We simulated scores of valid and invalid performers on 6 tests designed to mimic the psychometric and distributional properties of real performance validity measures. Out-of-sample predictive performance of LR and a commonly used alternative ("vote counting") was assessed across different base rates, validity measure properties, and sample sizes. LR improved classification accuracy by 2%-12% across simulation conditions, primarily by improving sensitivity. False positives and negatives can be further reduced when LR predictions are interpreted as continuous, rather than binary. LR made robust predictions at sample sizes feasible for neuropsychology research (N = 307) and when as few as 2 tests with good psychometric properties were used. Although training and test data sets of at least several hundred individuals may be required to develop and evaluate LR models for use in clinical practice, LR promises to be an efficient and powerful tool for improving judgments about performance validity. We offer several recommendations for model development and LR interpretation in a clinical setting.
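A rough illustration of the comparison this abstract describes, logistic regression versus a fixed-cutoff "vote counting" rule, can be sketched as follows. The six simulated validity measures, the cutoff of 42, the base rate, and the minimal gradient-descent logistic model are all invented for demonstration and are not the study's simulation design:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 6 validity measures; invalid performers (y = 1) score lower.
n = 400
y = rng.binomial(1, 0.3, n)                       # 1 = invalid performer (30% base rate)
X = rng.normal(50 - 8 * y[:, None], 10, (n, 6))   # invalid -> lower mean scores

# Vote counting: flag as invalid when >= 2 tests fall below a fixed cutoff.
votes = (X < 42).sum(axis=1) >= 2

# Minimal logistic regression fit by gradient descent (no external ML library).
Xz = (X - X.mean(0)) / X.std(0)                   # standardize predictors
w, b = np.zeros(6), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Xz @ w + b)))           # predicted P(invalid)
    w -= 0.5 * (Xz.T @ (p - y) / n)               # gradient of mean log-loss
    b -= 0.5 * np.mean(p - y)
lr_pred = (1 / (1 + np.exp(-(Xz @ w + b)))) > 0.5

acc_votes = np.mean(votes == y)
acc_lr = np.mean(lr_pred == y)
```

The continuous probabilities `p` also show the abstract's point about interpreting LR output continuously: borderline cases near .5 can be treated as indeterminate rather than forced into a binary call.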
Affiliation(s)
- Robert J. Spencer
- Department of Psychiatry, University of Michigan
- VA Ann Arbor Healthcare System
11.
Resch ZJ, Cerny BM, Ovsiew GP, Jennette KJ, Bing-Canar H, Rhoads T, Soble JR. A Direct Comparison of 10 WAIS-IV Digit Span Embedded Validity Indicators among a Mixed Neuropsychiatric Sample with Varying Degrees of Cognitive Impairment. Arch Clin Neuropsychol 2022; 38:619-632. [DOI: 10.1093/arclin/acac082]
Abstract
Objective
Reliable Digit Span (RDS), RDS-Revised (RDS-R), and age-corrected scaled score (ACSS) have been previously validated as embedded performance validity tests (PVTs) from the Wechsler Adult Intelligence Scale-IV Digit Span subtest (WAIS-IV DS). However, few studies have directly compared the relative utility of these and other proposed WAIS-IV DS validity indicators within a single sample.
Method
This study compared classification accuracies of 10 WAIS-IV DS indices in a mixed neuropsychiatric sample of 227 outpatients who completed a standardized neuropsychological battery. Participants with ≤1 failure on the four freestanding criterion PVTs constituted the valid group (n = 181), whereas those with ≥2 PVT failures formed the invalid group (n = 46). Among the valid group, 113 met criteria for mild cognitive impairment (MCI).
Results
Classification accuracies for all DS indicators were statistically significant across the overall sample and subsamples with and without MCI, apart from indices derived from the Forward trial in the MCI sample. DS Sequencing ACSS, working memory RDS (wmRDS), and DS ACSS emerged as the most effective predictors of validity status, with acceptable to excellent classification accuracy for the overall sample (AUCs = 0.792–0.816; 35%–50% sensitivity/88%–96% specificity).
Conclusions
Although most DS indices demonstrated clinical utility as embedded PVTs, DS Sequencing ACSS, wmRDS, and DS ACSS may be particularly robust to cognitive impairment, minimizing risk of false positive errors while identifying noncredible performance. Moreover, DS indices incorporating data from multiple trials (i.e., wmRDS, DS ACSS) also generally yielded greater classification accuracy than those derived from a single trial.
Affiliation(s)
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Hanaan Bing-Canar
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, University of Illinois at Chicago, Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
12
Jennette KJ, Rhoads T, Resch ZJ, Cerny BM, Leib SI, Sharp DW, Ovsiew GP, Soble JR. Multivariable analysis of the relative utility and additive value of eight embedded performance validity tests for classifying invalid neuropsychological test performance. J Clin Exp Neuropsychol 2022; 44:451-460. [PMID: 36197342 DOI: 10.1080/13803395.2022.2128067]
Abstract
INTRODUCTION This study investigated a combination of eight embedded performance validity tests (PVTs) derived from commonly administered neuropsychological tests to optimize sensitivity/specificity for detecting invalid neuropsychological test performance. The goal was to evaluate which combination of these common embedded PVTs has the most robust predictive power for detecting invalid neuropsychological test performance in a single diverse clinical sample. METHOD Eight previously validated memory- and nonmemory-based embedded PVTs were examined among 231 patients undergoing neuropsychological evaluation. Patients were classified into valid/invalid groups based on four independent criterion PVTs. Embedded PVT accuracy was assessed using standard and stepwise multiple logistic regression models. RESULTS Three PVTs, the Brief Visuospatial Memory Test-Revised Recognition Discrimination (BVMT-R-RD), Rey Auditory Verbal Learning Test Forced Choice, and WAIS-IV Digit Span Age-Corrected Scaled Score, predicted 45.5% of the variance in validity group membership. BVMT-R-RD independently accounted for 32% of the variance in prediction of independent, criterion-defined validity group membership. CONCLUSIONS This study demonstrated the incremental predictive power of multiple embedded PVTs derived from common neuropsychological measures in detecting invalid test performance and identified the measures accounting for the greatest portion of the variance. These results provide guidance for selecting the most fruitful embedded PVTs and offer proof of concept to better guide selection of embedded validity indices. Further, this offers clinicians an efficient, empirically derived approach to assessing performance validity when time constraints limit the use of freestanding PVTs.
Affiliation(s)
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Sophie I Leib
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Dillon W Sharp
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
13
Jennette KJ, Williams CP, Resch ZJ, Ovsiew GP, Durkin NM, O'Rourke JJF, Marceaux JC, Critchfield EA, Soble JR. Assessment of differential neurocognitive performance based on the number of performance validity tests failures: A cross-validation study across multiple mixed clinical samples. Clin Neuropsychol 2022; 36:1915-1932. [PMID: 33759699 DOI: 10.1080/13854046.2021.1900398]
Abstract
Objective: This cross-sectional study examined the effect of number of Performance Validity Test (PVT) failures on neuropsychological test performance among a demographically diverse Veteran (VA) sample (n = 76) and academic medical sample (AMC; n = 128). A secondary goal was to investigate the psychometric implications of including versus excluding those with one PVT failure when cross-validating a series of embedded PVTs. Method: All patients completed the same six criterion PVTs, with the AMC sample completing three additional embedded PVTs. Neurocognitive test performance differences were examined based on number of PVT failures (0, 1, 2+) for both samples, and effect of number of criterion failures on embedded PVT performance was analyzed among the AMC sample. Results: Both groups with 0 or 1 PVT failures performed better than those with ≥2 PVT failures across most cognitive tests. There were nonsignificant differences between those with 0 or 1 PVT failures except for one test in the AMC sample. Receiver operating characteristic curve analyses found no differences in optimal cut score based on number of PVT failures when retaining/excluding one PVT failure. Conclusion: Findings support the use of ≥2 PVT failures as indicative of performance invalidity. These findings strongly support including those with one PVT failure with those with zero PVT failures in diagnostic accuracy studies, given that their inclusion reflects actual clinical practice, does not reduce sample sizes, and does not artificially deflate neurocognitive test results or inflate PVT classification accuracy statistics.
Affiliation(s)
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Christopher P Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Nicole M Durkin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Justin J F O'Rourke
- Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Edan A Critchfield
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
14
Weitzner DS, Miller BI, Webber TA. Embedded cognitive and emotional/affective self-reported symptom validity indices on the patient competency rating scale. J Clin Exp Neuropsychol 2022; 44:533-549. [PMID: 36369702 DOI: 10.1080/13803395.2022.2138270]
Abstract
OBJECTIVE Although there is an abundance of research on stand-alone and embedded performance validity tests and stand-alone symptom validity tests (SVTs), less emphasis has been placed on embedded SVTs. The goal of the current study was to examine the ability of embedded indicators within the Patient Competency Rating Scale (PCRS) to separately detect invalid cognitive and/or emotional/affective symptom responding. METHOD Participants included 299 veterans assessed in a VA medical center epilepsy monitoring unit from 2013-2017 (mean age = 48.8 years, SD = 13.5 years). Two SVT composites were created: self-reported cognitive symptom validity (SVT-C) and self-reported emotional/affective symptom validity (SVT-E). Groups were compared on PCRS total and index scores (i.e., cognitive, activities of daily living, emotional, and interpersonal competencies) using ANOVAs. Receiver operating characteristic (ROC) curve analyses assessed the classification accuracy of the PCRS total and index scores for SVT-C and SVT-E. RESULTS In ANOVAs, SVT-C was significantly associated with all PCRS indices, while SVT-E was only significantly associated with the PCRS total, emotional, and interpersonal competency indices. Although the PCRS-T ≤ 90 had the strongest classification of SVT-C and SVT-E (specificities: .90, sensitivities: .44 to .50), PCRS index scores showed suggestive evidence of domain specificity, with PCRS-ADL ≤ 22, PCRS-C ≤ 20, and PCRS-CADL ≤ 45 best classifying SVT-C (specificities: .92, sensitivities: .33) and PCRS-E ≤ 18 best classifying the SVT-E group (specificity: .93, sensitivity: .40). CONCLUSION Results suggest the PCRS may be used to obtain clinically useful information while including embedded indicators that can assess cognitive and/or emotional/affective symptom invalidity.
Affiliation(s)
- Daniel S Weitzner
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Brian I Miller
- Neurology Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
- Troy A Webber
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
15
Cohen CD, Rhoads T, Keezer RD, Jennette KJ, Williams CP, Hansen ND, Ovsiew GP, Resch ZJ, Soble JR. All of the accuracy in half of the time: Assessing abbreviated versions of the Test of Memory Malingering in the context of verbal and visual memory impairment. Clin Neuropsychol 2022; 36:1933-1949. [PMID: 33836622 DOI: 10.1080/13854046.2021.1908596]
Abstract
Objective: The Test of Memory Malingering (TOMM) Trial 1 (T1) and errors on the first 10 items of T1 (T1-e10) were developed as briefer versions of the TOMM to minimize evaluation time and burden, although the effect of genuine memory impairment on these indices is not well established. This study examined whether increasing material-specific verbal and visual memory impairment affected T1 and T1-e10 performance and accuracy for detecting invalidity. Method: Data from 155 neuropsychiatric patients administered the TOMM, Rey Auditory Verbal Learning Test (RAVLT), and Brief Visuospatial Memory Test-Revised (BVMT-R) during outpatient evaluation were examined. Valid (N = 125) and invalid (N = 30) groups were established by four independent criterion performance validity tests. Verbal/visual memory impairment was classified as ≥37T (normal memory), 30T-36T (mild impairment), and ≤29T (severe impairment). Results: Overall, T1 had outstanding accuracy, with 77% sensitivity/90% specificity. T1-e10 was less accurate but had excellent discriminability, with 60% sensitivity/87% specificity. T1 maintained excellent accuracy regardless of memory impairment severity, with 77% sensitivity/≥88% specificity and a relatively invariant cut-score even among those with severe verbal/visual memory impairment. T1-e10 had excellent classification accuracy among those with normal memory and mild impairment, but accuracy and sensitivity dropped with severe impairment and the optimal cut-score had to be increased to maintain adequate specificity. Conclusion: TOMM T1 is an effective performance validity test with strong psychometric properties regardless of material-specificity and severity of memory impairment. By contrast, T1-e10 functions relatively well in the context of mild memory impairment but has reduced discriminability with severe memory impairment.
Affiliation(s)
- Cari D Cohen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Richard D Keezer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- School of Psychology, Counseling, and Family Therapy, Wheaton College, Wheaton, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Christopher P Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Nicholas D Hansen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
16
Ausloos-Lozano JE, Bing-Canar H, Khan H, Singh PG, Wisinger AM, Rauch AA, Ogram Buckley CM, Petry LG, Jennette KJ, Soble JR, Resch ZJ. Assessing performance validity during attention-deficit/hyperactivity disorder evaluations: Cross-validation of non-memory embedded validity indicators. Dev Neuropsychol 2022; 47:247-257. [PMID: 35787068 DOI: 10.1080/87565641.2022.2096889]
Abstract
Embedded performance validity tests (PVTs) are key components of neuropsychological evaluations. However, most are memory-based and may be less useful in the assessment of attention-deficit/hyperactivity disorder (ADHD). Four non-memory-based validity indices derived from processing speed and executive functioning measures commonly included in ADHD evaluations, namely Verbal Fluency (VF) and the Trail Making Test (TMT), were cross-validated using the Rey 15-Item Test (RFIT) Recall and Recall/Recognition as memory-based comparison measures. This consecutive case series included data from 416 demographically diverse adults who underwent outpatient neuropsychological evaluation for ADHD. Validity classifications were established, with ≤1 failure among five independent criterion PVTs indicative of valid performance (374 valid performers/42 invalid performers). Among the statistically significant validity indicators, TMT-A and TMT-B T-scores (AUCs = .707-.723) had acceptable classification accuracy ranges and sensitivities ranging from 29%-36% (≥89% specificity). RFIT Recall/Recognition produced similar results as the TMT-B T-score, with 42% sensitivity/90% specificity but lower classification accuracy. In evaluating adult ADHD, VF and TMT embedded PVTs demonstrated sensitivity and specificity values comparable to those found in other clinical populations but necessitated alternate cut-scores. Results also support use of RFIT Recall/Recognition over the standard RFIT Recall as a PVT for adult ADHD evaluations.
Affiliation(s)
- Jenna E Ausloos-Lozano
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Hanaan Bing-Canar
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Humza Khan
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Palak G Singh
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Amanda M Wisinger
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Andrew A Rauch
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Caitlin M Ogram Buckley
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Luke G Petry
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Department of Neurology, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
17
Ali S, Elliott L, Biss RK, Abumeeiz M, Brantuo M, Kuzmenka P, Odenigbo P, Erdodi LA. The BNT-15 provides an accurate measure of English proficiency in cognitively intact bilinguals - a study in cross-cultural assessment. Appl Neuropsychol Adult 2022; 29:351-363. [PMID: 32449371 DOI: 10.1080/23279095.2020.1760277]
Abstract
This study was designed to replicate earlier reports of the utility of the Boston Naming Test - Short Form (BNT-15) as an index of limited English proficiency (LEP). Twenty-eight English-Arabic bilingual student volunteers were administered the BNT-15 as part of a brief battery of cognitive tests. The majority (23) were women, and half had LEP. Mean age was 21.1 years. The BNT-15 was an excellent psychometric marker of LEP status (area under the curve: .990-.995). Participants with LEP underperformed on several cognitive measures (verbal comprehension, visuomotor processing speed, single word reading, and performance validity tests). Although no participant with LEP failed the accuracy cutoff on the Word Choice Test, 35.7% of them failed the time cutoff. Overall, LEP was associated with an increased risk of failing performance validity tests. Previously published BNT-15 validity cutoffs had unacceptably low specificity (.33-.52) among participants with LEP. The BNT-15 has the potential to serve as a quick and effective objective measure of LEP. Students with LEP may need academic accommodations to compensate for slower test completion time. Likewise, LEP status should be considered for exemption from failing performance validity tests to protect against false positive errors.
Affiliation(s)
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, Canada
- Lauren Elliott
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Renee K Biss
- Department of Psychology, University of Windsor, Windsor, Canada
- Mustafa Abumeeiz
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, Canada
- Paula Odenigbo
- Department of Psychology, University of Windsor, Windsor, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
18
Bing-Canar H, Phillips MS, Shields AN, Ogram Buckley CM, Chang F, Khan H, Skymba HV, Ovsiew GP, Resch ZJ, Jennette KJ, Soble JR. Cross-Validation of Multiple WAIS-IV Digit Span Embedded Performance Validity Indices Among a Large Sample of Adult Attention Deficit/Hyperactivity Disorder Clinical Referrals. J Psychoeduc Assess 2022. [DOI: 10.1177/07342829221081921]
Abstract
This study investigated the utility of four WAIS-IV Digit Span (DS) indices (traditional Reliable Digit Span [RDS], RDS-Working Memory [RDS-WM], RDS-Revised [RDS-R], and DS Age-Corrected Scaled Score [ACSS]) as embedded performance validity tests (PVTs) among a sample of 342 consecutive adults referred for neuropsychological evaluation of ADHD. All DS indices had acceptable classification accuracy (areas under the curve: .73–.76) for detecting invalid performance with optimal cut-scores of RDS ≤7 (35% sensitivity/93% specificity), RDS-WM ≤7 (56% sensitivity/86% specificity), RDS-R ≤12 (48% sensitivity/85% specificity), and ACSS ≤7 (46% sensitivity/87% specificity). Although all indices were able to detect invalid performance, DS indices incorporating the more complex working memory trials of the task yielded the best accuracy for identification of invalid test performance among adults referred for ADHD evaluation.
Affiliation(s)
- Fini Chang
- University of Illinois College of Medicine, Chicago, IL, USA
- Humza Khan
- University of Illinois College of Medicine, Chicago, IL, USA
- Haley V. Skymba
- University of Illinois College of Medicine, Chicago, IL, USA
- Jason R. Soble
- University of Illinois College of Medicine, Chicago, IL, USA
19
DiCarlo GM, Ernst WJ, Kneavel ME. An exploratory study of the convergent validity of the Test of Effort (TOE) in adults with acquired brain injury. Brain Inj 2022; 36:424-431. [PMID: 35113759 DOI: 10.1080/02699052.2022.2034953]
Abstract
PRIMARY OBJECTIVE To examine the convergent validity of the Test of Effort (TOE), a performance validity test (PVT) currently under development that employs a two-subtest (one verbal, one visual), forced-choice recognition memory format. RESEARCH DESIGN A descriptive, correlational design was employed to describe performance on the TOE and examine the convergent validity between the TOE and comparison measures. METHODS AND PROCEDURES A sample of 53 individuals with chronic acquired brain injury (ABI) were administered the TOE and three well-validated PVTs (Reliable Digit Span [RDS], Test of Memory Malingering [TOMM] and Dot Counting Test [DCT]). MAIN OUTCOMES AND RESULTS The TOE appeared more difficult than it actually was, suggesting adequate face validity. Medium-to-large correlations were observed between the TOE and established PVTs, suggesting good convergent validity. Provisional cutoff scores are offered based on performance of a subgroup of participants with "sufficient effort." CONCLUSIONS Overall, the TOE shows promise as a PVT measure for clinical use. Future studies with larger and more diverse samples are needed to more fully determine the psychometric characteristics of the TOE.
Affiliation(s)
- William J Ernst
- Department of Professional Psychology, Chestnut Hill College, Philadelphia, Pennsylvania, USA
- Meredith E Kneavel
- School of Nursing and Health Sciences, La Salle University, Philadelphia, Pennsylvania, USA
20
Cross-Validation of Multiple Embedded Performance Validity Indices in the Rey Auditory Verbal Learning Test and Brief Visuospatial Memory Test-Revised in an Adult Attention Deficit/Hyperactivity Disorder Clinical Sample. Psychol Inj Law 2022. [DOI: 10.1007/s12207-022-09443-3]
21
Stocks JK, Shields AN, DeBoer AB, Cerny BM, Ogram Buckley CM, Ovsiew GP, Jennette KJ, Resch ZJ, Basurto KS, Song W, Pliskin NH, Soble JR. The impact of visual memory impairment on Victoria Symptom Validity Test performance: A known-groups analysis. Appl Neuropsychol Adult 2022:1-10. [PMID: 34985401 DOI: 10.1080/23279095.2021.2021911]
Abstract
OBJECTIVE We assessed the effect of visual learning and recall impairment on Victoria Symptom Validity Test (VSVT) accuracy and response latency for Easy, Difficult, and Total Items. METHOD A sample of 163 adult patients who were administered the VSVT and Brief Visuospatial Memory Test-Revised was classified into valid (114/163) or invalid (49/163) groups via independent criterion performance validity tests (PVTs). Classification accuracies for all VSVT indices were examined for the overall sample, and separately for subgroups based on visual memory functioning. RESULTS In the overall sample, all indices produced acceptable classification accuracy (areas under the curve [AUCs] ≥ 0.79). When stratified by visual learning/recall impairment, accuracy indices yielded acceptable classification for both the unimpaired (AUCs ≥ 0.79) and impaired subsamples (AUCs ≥ 0.75). Latency indices had acceptable classification accuracy for the unimpaired subsample (AUCs ≥ 0.74), but accuracy and sensitivity dropped for the impaired sample (AUCs ≥ 0.67). CONCLUSIONS VSVT accuracy and response latency yielded acceptable classification accuracies in the overall sample, and this effect was maintained in those with and without visual learning/recall impairment for the accuracy indices. Findings indicate that the VSVT is a psychometrically robust PVT with largely invariant cut-scores, even in the presence of bona fide visual learning/recall impairment.
Affiliation(s)
- Jane K Stocks
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Allison N Shields
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Adam B DeBoer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Wheaton College, Wheaton, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S Basurto
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Woojin Song
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Neil H Pliskin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
22
White DJ, Ovsiew GP, Rhoads T, Resch ZJ, Lee M, Oh AJ, Soble JR. The Divergent Roles of Symptom and Performance Validity in the Assessment of ADHD. J Atten Disord 2022; 26:101-108. [PMID: 33084457 DOI: 10.1177/1087054720964575]
Abstract
OBJECTIVE This study examined concordance between symptom and performance validity among clinically referred patients undergoing neuropsychological evaluation for Attention-Deficit/Hyperactivity Disorder (ADHD). METHOD Data from 203 patients who completed the WAIS-IV Working Memory Index, the Clinical Assessment of Attention Deficit-Adult (CAT-A), and ≥4 criterion performance validity tests (PVTs) were analyzed. RESULTS Symptom and performance validity were concordant in 76% of cases, with the majority being valid performance. Of the remaining 24% of cases with divergent validity findings, patients were more likely to exhibit symptom invalidity (15%) than performance invalidity (9%). Patients demonstrating symptom invalidity endorsed significantly more ADHD symptoms than those with credible symptom reporting (ηp² = .06-.15), but comparable working memory test performance, whereas patients with performance invalidity had significantly worse working memory performance than those with valid PVT performance (ηp² = .18). CONCLUSION Symptom and performance invalidity represent dissociable constructs in patients undergoing neuropsychological evaluation of ADHD and should be evaluated independently.
Affiliation(s)
- Daniel J White
- University of Illinois College of Medicine, Chicago, IL, USA
- Roosevelt University, Chicago, IL, USA
- Tasha Rhoads
- University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- University of Illinois College of Medicine, Chicago, IL, USA
- Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Mary Lee
- University of Illinois College of Medicine, Chicago, IL, USA
- Alison J Oh
- University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- University of Illinois College of Medicine, Chicago, IL, USA
23
Leib SI, Keezer RD, Cerny BM, Holbrook LR, Gallagher VT, Jennette KJ, Ovsiew GP, Soble JR. Distinct Latent Profiles of Working Memory and Processing Speed in Adults with ADHD. Dev Neuropsychol 2021; 46:574-587. [PMID: 34743616 DOI: 10.1080/87565641.2021.1999454]
Abstract
This study examined the neuropsychological profile of patients with Attention-Deficit/Hyperactivity Disorder (ADHD) based on Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) working memory and processing speed indices. We aimed to establish whether distinct ADHD subtypes emerge based on neuropsychological testing and determine whether ADHD subgroups differ based on neurocognitive and demographic factors in 179 adult patients with ADHD. Latent Profile Analysis (LPA) revealed four discrete latent subgroups within the sample, each with distinct patterns of working memory and processing speed. Classes significantly differed in demographically predicted IQ, education, and self-reported depression and anxiety. Results reveal heterogeneity in cognitive performance in adult ADHD.
Affiliation(s)
- Sophie I Leib
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, Illinois, USA
- Richard D Keezer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA; Department of Psychology, Wheaton College, Wheaton, Illinois, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA; Department of Psychology, Illinois Institute of Technology, Chicago, Illinois, USA
- Lindsey R Holbrook
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, Illinois, USA
- Virginia T Gallagher
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, Illinois, USA
24
Messerly J, Soble JR, Webber TA, Alverson WA, Fullen C, Kraemer LD, Marceaux JC. Evaluation of the classification accuracy of multiple performance validity tests in a mixed clinical sample. Appl Neuropsychol Adult 2021; 28:727-736. [PMID: 31835915 DOI: 10.1080/23279095.2019.1698581]
Abstract
The Test of Memory Malingering (TOMM) and Word Memory Test (WMT) are among the most well-known performance validity tests (PVTs) and are regarded as gold-standard measures. Because many factors influence PVT selection, it is imperative that clinicians make informed decisions about additional or alternative PVTs that demonstrate classification accuracy similar to that of these well-validated measures. The present archival study evaluated the agreement and classification accuracy of a large battery of other freestanding and embedded PVTs in a mixed clinical sample of 126 veterans. We examined failure rates for all standalone/embedded PVTs using established cut-scores and calculated pass/fail agreement rates and diagnostic odds ratios for various combinations of PVTs, using the TOMM and WMT as criterion measures. The TOMM and WMT demonstrated the best agreement, followed by the Word Choice Test (WCT). The Rey Fifteen Item Test had an excessive number of false-negative errors and reduced classification accuracy. The Digit Span age-corrected scaled score (DS-ACSS) had the highest agreement among the embedded PVTs. Findings lend further support to the use of a combination of embedded and standalone PVTs in identifying suboptimal performance, and provide data to enhance clinical decision making for neuropsychologists who implement combinations of PVTs in a larger clinical battery.
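The diagnostic odds ratio used above to index agreement between a candidate PVT and a criterion PVT has a simple closed form: the odds of failing the candidate given criterion failure, divided by the odds of failing it given a criterion pass. A minimal illustrative sketch (the cell counts below are hypothetical, not data from the study):

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """Diagnostic odds ratio of a candidate PVT against a criterion PVT.

    tp: both fail    fp: candidate fails, criterion passes
    fn: candidate passes, criterion fails    tn: both pass
    Applies the Haldane-Anscombe 0.5 correction if any cell is zero.
    """
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    return (tp * tn) / (fp * fn)


# Hypothetical 2x2 pass/fail table: 20 concordant failures, 70 concordant
# passes, and 5 disagreements in each direction.
print(diagnostic_odds_ratio(20, 5, 5, 70))  # 56.0
```

Higher values indicate stronger pass/fail agreement; the zero-cell correction keeps the ratio defined when a table cell is empty.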
Affiliation(s)
- Johanna Messerly
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Jason R Soble
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Departments of Psychiatry and Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Troy A Webber
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Mental Health and Rehabilitation and Extended Care Lines, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- W Alex Alverson
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Chrystal Fullen
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Lindsay D Kraemer
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Department of Neurology, University of Texas Health Science Center, San Antonio, TX, USA
25
Rhoads T, Leib SI, Resch ZJ, Basurto KS, Castillo LR, Jennette KJ, Soble JR. Relative Rates of Invalidity for the Test of Memory Malingering and the Dot Counting Test Among Spanish-Speaking Patients Residing in the USA. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09423-z]
26
Future Directions in Performance Validity Assessment to Optimize Detection of Invalid Neuropsychological Test Performance: Special Issue Introduction. Psychol Inj Law 2021; 14:227-231. [PMID: 34567346 PMCID: PMC8455301 DOI: 10.1007/s12207-021-09425-x]
27
Lace JW, Merz ZC, Galioto R. Nonmemory Composite Embedded Performance Validity Formulas in Patients with Multiple Sclerosis. Arch Clin Neuropsychol 2021; 37:309-321. [PMID: 34467368 DOI: 10.1093/arclin/acab066]
Abstract
OBJECTIVE Research regarding performance validity tests (PVTs) in patients with multiple sclerosis (MS) is scant, and recommended batteries for neuropsychological evaluations in this population lack suggestions to include PVTs. Moreover, limited work has examined embedded PVTs in this population. As previous investigations indicated that nonmemory-based embedded PVTs provide clinical utility in other populations, this study sought to determine whether a logistic regression-derived PVT formula could be identified from selected nonmemory variables in a sample of patients with MS. METHOD A total of 184 patients (M age = 48.45; 76.6% female) with MS were referred for neuropsychological assessment at a large, Midwestern academic medical center. Patients were placed into "credible" (n = 146) or "noncredible" (n = 38) groups according to performance on a standalone PVT. Missing data were imputed with HOTDECK. RESULTS Classification statistics for a variety of embedded PVTs were examined, with none appearing psychometrically appropriate in isolation (areas under the curve [AUCs] = .48-.64). Four exponentiated equations were created via logistic regression. The six-, five-, and three-predictor equations yielded acceptable discriminability (AUCs = .71-.74) with modest sensitivity (.34-.39) while maintaining good specificity (≥.90). The two-predictor equation appeared unacceptable (AUC = .67). CONCLUSIONS Results suggest that multivariate combinations of embedded PVTs may provide some clinical utility while minimizing test burden in determining performance validity in patients with MS. Nonetheless, the authors recommend routine inclusion of several PVTs and utilization of comprehensive clinical judgment to maximize signal detection of noncredible performance and avoid incorrect conclusions. Clinical implications, limitations, and avenues for future research are discussed.
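The classification-accuracy logic described above (hold specificity at ≥.90 in the credible group, then report whatever sensitivity the composite score achieves) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the scores are hypothetical composite values, and the function simply scans candidate cutoffs.

```python
def cutoff_for_specificity(scores_valid, scores_invalid, min_specificity=0.90):
    """Scan candidate cutoffs on a composite validity score (lower = more
    suspect; score <= cutoff flags performance as noncredible) and return the
    cutoff that maximizes sensitivity while holding specificity at or above
    min_specificity.

    Returns (cutoff, sensitivity, specificity); (None, 0.0, 1.0) if no
    cutoff satisfies the specificity floor.
    """
    best = (None, 0.0, 1.0)
    for c in sorted(set(scores_valid) | set(scores_invalid)):
        sensitivity = sum(s <= c for s in scores_invalid) / len(scores_invalid)
        specificity = sum(s > c for s in scores_valid) / len(scores_valid)
        if specificity >= min_specificity and sensitivity > best[1]:
            best = (c, sensitivity, specificity)
    return best


# Hypothetical composite scores (e.g., a logistic-regression-weighted sum of
# nonmemory embedded indicators, rescaled 0-100), credible vs. noncredible.
valid_scores = [72, 80, 85, 66, 90, 78, 83, 69, 75, 88]
invalid_scores = [55, 60, 74, 52, 68]
cutoff, sens, spec = cutoff_for_specificity(valid_scores, invalid_scores)
# Here: cutoff == 68, sensitivity 0.80, specificity 0.90
```

This mirrors the trade-off reported in the abstract: pinning specificity at ≥.90 typically leaves only modest sensitivity.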
Affiliation(s)
- John W Lace
- Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA
- Zachary C Merz
- LeBauer Department of Neurology, The Moses H. Cone Memorial Hospital, Greensboro, NC, USA
- Rachel Galioto
- Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA; Mellen Center for Multiple Sclerosis, Cleveland Clinic, Cleveland, OH, USA
28
McClintock SM, Minto L, Denney DA, Bailey KC, Cullum CM, Dotson VM. Clinical Neuropsychological Evaluation in Older Adults With Major Depressive Disorder. Curr Psychiatry Rep 2021; 23:55. [PMID: 34255167 PMCID: PMC8764751 DOI: 10.1007/s11920-021-01267-3]
Abstract
PURPOSE OF THE REVIEW Older adults with major depressive disorder (MDD) are particularly vulnerable to MDD-associated adverse cognitive effects, including slowed processing speed, decreased attention, and executive dysfunction. The purpose of this review is to describe the approach to a clinical neuropsychological evaluation in older adults with MDD. Specifically, this review compares and contrasts neurocognitive screening and clinical neuropsychological evaluation procedures and details the multiple components of the clinical neuropsychological evaluation. RECENT FINDINGS Research has shown that neurocognitive screening serves a useful purpose in providing a rapid assessment of global cognitive function; however, it has limited sensitivity and specificity. The clinical neuropsychological evaluation process is multifaceted and encompasses a review of available medical records, neurobehavioral status and diagnostic interview, comprehensive cognitive and clinical assessment, examination of inclusion and diversity factors as well as symptom and performance validity, and therapeutic feedback. As such, the evaluation provides invaluable information on multiple cognitive functions, establishes brain-behavior relationships, clarifies neuropsychiatric diagnoses, and can inform the etiology of cognitive impairment. Clinical neuropsychological evaluation plays a unique and critical role in integrated healthcare for older adults with MDD. Indeed, the evaluation can serve as a nexus to synthesize information across healthcare providers in order to maximize measurement-based care that can optimize personalized medicine and overall health outcomes.
Affiliation(s)
- Shawn M McClintock
- Division of Psychology, Department of Psychiatry, UT Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-8898, USA
- Division of Brain Stimulation and Neurophysiology, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, NC, USA
- Lex Minto
- Georgia State University, Atlanta, GA, USA
- David A Denney
- Division of Psychology, Department of Psychiatry, UT Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-8898, USA
- K Chase Bailey
- Division of Psychology, Department of Psychiatry, UT Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-8898, USA
- C Munro Cullum
- Division of Psychology, Department of Psychiatry, UT Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-8898, USA
- Vonetta M Dotson
- Department of Psychology, Georgia State University, P.O. Box 5010, Atlanta, GA, 30302-5010, USA
- Gerontology Institute, Georgia State University, Atlanta, GA, USA
29
Scimeca LM, Holbrook L, Rhoads T, Cerny BM, Jennette KJ, Resch ZJ, Obolsky MA, Ovsiew GP, Soble JR. Examining Conners Continuous Performance Test-3 (CPT-3) Embedded Performance Validity Indicators in an Adult Clinical Sample Referred for ADHD Evaluation. Dev Neuropsychol 2021; 46:347-359. [PMID: 34256665 DOI: 10.1080/87565641.2021.1951270]
Abstract
This study evaluated multiple previously identified Continuous Performance Test-Third Edition (CPT-3) scores as embedded validity indicators (EVIs) among 201 adults undergoing neuropsychological evaluation for Attention-Deficit/Hyperactivity Disorder (ADHD), divided into valid (n = 169) and invalid (n = 32) groups based on seven criterion measures. Although 6 of 10 CPT-3 scores detected invalidity, only two reached the minimally acceptable classification accuracy of ≥0.70; the remaining four had unacceptably low accuracy (AUCs = 0.62-0.69), with 0.19-0.41 sensitivity at ≥0.90 specificity. Composite scores did not provide better classification accuracy than individual CPT-3 scores. In sum, individual and composite CPT-3 scores generally are not accurate validity indicators among adults undergoing clinical evaluation for ADHD.
Affiliation(s)
- Lauren M Scimeca
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA; Department of Psychology, Illinois Institute of Technology, Chicago, Illinois, USA
- Lindsey Holbrook
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, Illinois, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, Illinois, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA; Department of Psychology, Illinois Institute of Technology, Chicago, Illinois, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, Illinois, USA
- Maximillian A Obolsky
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA; Department of Psychology, Roosevelt University, Chicago, Illinois, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, Illinois, USA
30
Rhoads T, Neale AC, Resch ZJ, Cohen CD, Keezer RD, Cerny BM, Jennette KJ, Ovsiew GP, Soble JR. Psychometric implications of failure on one performance validity test: a cross-validation study to inform criterion group definition. J Clin Exp Neuropsychol 2021; 43:437-448. [PMID: 34233580 DOI: 10.1080/13803395.2021.1945540]
Abstract
Introduction: Research to date has supported the use of multiple performance validity tests (PVTs) for determining validity status in clinical settings. However, the implications of including versus excluding patients who fail one PVT remain a source of debate, and methodological guidelines for PVT research are lacking. This study evaluated three validity classification approaches (i.e., 0 vs. ≥2, 0-1 vs. ≥2, and 0 vs. ≥1 PVT failures) using three reference standards (i.e., criterion PVT groupings) to recommend approaches best suited to establishing validity groups in PVT research methodology. Method: A mixed clinical sample of 157 patients was administered freestanding PVTs (Medical Symptom Validity Test, Dot Counting Test, Test of Memory Malingering, Word Choice Test) and embedded PVTs (Reliable Digit Span, RAVLT Effort Score, Stroop Word Reading, BVMT-R Recognition Discrimination) during outpatient neuropsychological evaluation. Three reference standards (i.e., two freestanding and three embedded PVTs from the above list) were created. The Rey 15-Item Test and RAVLT Forced Choice were used solely as outcome measures, in addition to two freestanding PVTs not employed in the reference standard. Receiver operating characteristic curve analyses evaluated classification accuracy using the three validity classification approaches for each reference standard. Results: When patients failing only one PVT were excluded or classified as valid, classification accuracy ranged from acceptable to excellent. However, classification accuracy was poor to acceptable when patients failing one PVT were classified as invalid. Sensitivity/specificity across two of the validity classification approaches (0 vs. ≥2; 0-1 vs. ≥2) remained reasonably stable. Conclusions: These results indicate that both inclusion and exclusion of patients failing one PVT are acceptable approaches to PVT research methodology, and the choice of method likely depends on the study rationale. However, including such patients in the invalid group yields unacceptably poor classification accuracy across a number of psychometrically robust outcome measures and therefore is not recommended.
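The three criterion-grouping approaches compared above can be made concrete with a small sketch: assign validity labels from each patient's criterion PVT failure count under each scheme, then score an outcome PVT's sensitivity/specificity against those labels. All data below are hypothetical; this only reconstructs the grouping logic, not the study's analysis.

```python
def build_groups(failures, scheme):
    """Assign validity labels from each patient's criterion PVT failure count.

    '0_vs_2plus'  : 0 failures -> valid, >=2 -> invalid, 1 failure excluded (None)
    '01_vs_2plus' : 0-1 failures -> valid, >=2 -> invalid
    '0_vs_1plus'  : 0 failures -> valid, >=1 -> invalid
    """
    if scheme == "0_vs_2plus":
        return [None if f == 1 else f >= 2 for f in failures]
    if scheme == "01_vs_2plus":
        return [f >= 2 for f in failures]
    if scheme == "0_vs_1plus":
        return [f >= 1 for f in failures]
    raise ValueError(f"unknown scheme: {scheme}")


def sens_spec(labels, outcome_failed):
    """Sensitivity/specificity of an outcome PVT against the group labels,
    skipping patients excluded from the criterion grouping (label None)."""
    pairs = [(lab, out) for lab, out in zip(labels, outcome_failed) if lab is not None]
    invalid = [out for lab, out in pairs if lab]
    valid = [out for lab, out in pairs if not lab]
    sensitivity = sum(invalid) / len(invalid)                 # failed among invalid
    specificity = sum(not out for out in valid) / len(valid)  # passed among valid
    return sensitivity, specificity


# Hypothetical data: criterion PVT failure counts and outcome-PVT pass/fail.
fails = [0, 0, 0, 1, 1, 2, 3, 2, 0, 4]
outcome = [False, False, True, False, True, True, True, False, False, True]
results = {s: sens_spec(build_groups(fails, s), outcome)
           for s in ("0_vs_2plus", "01_vs_2plus", "0_vs_1plus")}
# e.g., results["0_vs_2plus"] == (0.75, 0.75)
```

Moving the one-failure patients between groups (or excluding them) changes which cases count toward each rate, which is exactly why the three schemes yield different classification accuracy.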
Affiliation(s)
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Alec C Neale
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Cari D Cohen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Richard D Keezer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Wheaton College, Wheaton, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
31
Nayar K, Ventura LM, DeDios-Stern S, Oh A, Soble JR. The Impact of Learning and Memory on Performance Validity Tests in a Mixed Clinical Pediatric Population. Arch Clin Neuropsychol 2021; 37:50-62. [PMID: 34050354 DOI: 10.1093/arclin/acab040]
Abstract
OBJECTIVE This study examined the degree to which verbal and visuospatial memory abilities influence performance validity test (PVT) performance in a mixed clinical pediatric sample. METHOD Data from 252 consecutive clinical pediatric cases (Mage = 11.23 years, SD = 4.02; 61.9% male) seen for outpatient neuropsychological assessment were collected. Measures of learning and memory (e.g., California Verbal Learning Test-Children's Version; Child and Adolescent Memory Profile [ChAMP]), performance validity (Test of Memory Malingering Trial 1 [TOMM T1]; Wechsler Intelligence Scale for Children-Fifth Edition [WISC-V] or Wechsler Adult Intelligence Scale-Fourth Edition Digit Span indices; ChAMP Overall Validity Index), and intellectual abilities (e.g., WISC-V) were included. RESULTS Learning/memory abilities were not significantly correlated with TOMM T1 and accounted for relatively little variance in overall TOMM T1 performance (i.e., ≤6%). Conversely, ChAMP Validity Index scores were significantly correlated with verbal and visual learning/memory abilities, and learning/memory accounted for significant variance in PVT performance (12%-26%). Verbal learning/memory performance accounted for 5%-16% of the variance across the Digit Span PVTs. No significant differences in TOMM T1 and Digit Span PVT scores emerged between verbal/visual learning/memory impairment groups. ChAMP validity scores were lower for the visual learning/memory impairment group relative to the nonimpaired group. CONCLUSIONS Findings highlight the utility of including PVTs as standard practice for pediatric populations, particularly when memory is a concern. Consistent with the adult literature, TOMM T1 outperformed the other PVTs even in this diverse clinical sample with and without learning/memory impairment. In contrast, Digit Span indices appear best suited for use in the presence of visuospatial (but not verbal) learning/memory concerns. Finally, the ChAMP's embedded validity measure was most strongly impacted by learning/memory performance.
Affiliation(s)
- Kritika Nayar
- Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Lea M Ventura
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Pediatrics, University of Illinois College of Medicine, Chicago, IL, USA
- Samantha DeDios-Stern
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Alison Oh
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
32
Cerny BM, Resch ZJ, Rhoads T, Jennette KJ, Singh PG, Ovsiew GP, Soble JR. Examining Traditional and Novel Validity Indicators from the Medical Symptom Validity Test Across Levels of Verbal and Visual Memory Impairment. Arch Clin Neuropsychol 2021; 37:146-159. [PMID: 34050349 DOI: 10.1093/arclin/acab038]
Abstract
OBJECTIVE This cross-sectional study examined the accuracy of traditional Medical Symptom Validity Test (MSVT) validity indicators, including immediate recognition (IR), delayed recognition (DR), and consistency (CNS), as well as a novel indicator derived from the mean performance on IR, DR, and CNS, across verbal, visual, and combined learning and memory impairment bands. METHOD A sample of 180 adult outpatients was divided into valid (n = 150) and invalid (n = 30) groups based on the results of four independent criterion performance validity tests. Verbal and visual learning and recall were classified as indicative of no impairment, mild impairment, or severe impairment based on performance on the Rey Auditory Verbal Learning Test and Brief Visuospatial Memory Test-Revised, respectively. RESULTS In general, individual MSVT subtests accurately classified performance as valid or invalid, even in the context of severe learning and memory deficits. However, as verbal and visual memory impairment increased, optimal MSVT cut-scores diverged from manual-specified cutoffs, such that DR and CNS cut-scores had to be lowered to maintain adequate specificity. By contrast, the newly proposed scoring algorithm generally showed more robust psychometric properties across the memory impairment bands. CONCLUSIONS The mean performance index, a novel scoring algorithm using the mean of the three primary MSVT subtests, may be a more robust validity indicator than the individual MSVT subtests in the context of bona fide memory impairment.
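The proposed mean performance index is arithmetically simple: average the three primary MSVT subtest scores and compare the result to a single cut-score, rather than checking IR, DR, and CNS against separate cutoffs. A minimal sketch (the cut-score of 85 below is a placeholder for illustration, not the study's empirically derived value):

```python
def mean_performance_index(ir, dr, cns):
    """Mean of the three primary MSVT subtest scores (percent correct):
    immediate recognition (IR), delayed recognition (DR), consistency (CNS)."""
    return (ir + dr + cns) / 3


def flag_invalid(ir, dr, cns, cutoff=85.0):
    """Flag performance as invalid when the mean index falls below the
    cut-score. The default cutoff here is hypothetical; the study derived
    optimal cut-scores empirically across memory-impairment bands."""
    return mean_performance_index(ir, dr, cns) < cutoff


print(flag_invalid(90, 80, 75))   # True  (mean ~81.7, below the cut-score)
print(flag_invalid(95, 90, 100))  # False (mean 95.0)
```

Pooling the subtests this way dampens the influence of any single depressed subtest score, which is one plausible reason an averaged index could hold up better under genuine memory impairment.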
Affiliation(s)
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Palak G Singh
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
33
Ovsiew GP, Carter DA, Rhoads T, Resch ZJ, Jennette KJ, Soble JR. Concordance Between Standard and Abbreviated Administrations of the Test of Memory Malingering: Implications for Streamlining Performance Validity Assessment. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09408-y]
34
Resch ZJ, Paxton JL, Obolsky MA, Lapitan F, Cation B, Schulze ET, Calderone V, Fink JW, Lee RC, Pliskin NH, Soble JR. Establishing the base rate of performance invalidity in a clinical electrical injury sample: Implications for neuropsychological test performance. J Clin Exp Neuropsychol 2021; 43:213-223. [PMID: 33858295 DOI: 10.1080/13803395.2021.1914002]
Abstract
Objective: The base rate of neuropsychological performance invalidity in electrical injury, a clinically distinct and frequently compensation-seeking population, is not well established. This study determined the base rate of performance invalidity in a large electrical injury sample and examined patient characteristics, injury parameters, and neuropsychological test performance by validity status. Method: This cross-sectional study included data from 101 patients with electrical injury consecutively referred for post-acute neuropsychological evaluation. Eighty-five percent of the sample was compensation-seeking. Multiple performance validity tests (PVTs) were administered as part of the standard clinical evaluation. For patients administered four or more PVTs, valid performance was operationalized as one or fewer PVT failures and invalid performance as two or more failures. Results: Frequency analysis revealed that 66% (n = 67) had valid performance, while 29% (n = 29) demonstrated probable invalid performance; the remaining 5% (n = 5) had indeterminate validity. No significant differences in demographics or injury parameters emerged between validity groups (0 vs. 1 vs. ≥2 PVT failures). In contrast, the group with invalid performance performed significantly worse on tests of processing speed and executive abilities than those with valid performance (ps < .05, ηp2 = .19-.25). Conclusions: This study is the first to establish the base rate of neuropsychological performance invalidity in electrical injury survivors using empirical methods and current practice standards. Patient and clinical variables, including compensation-seeking status, did not differ between validity groups; however, neuropsychological test performance did, supporting the need for multi-method, objective performance validity assessment.
Affiliation(s)
- Zachary J Resch
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jessica L Paxton
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Maximillian A Obolsky
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Franchezka Lapitan
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Bailey Cation
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Evan T Schulze
- Department of Neurology, Saint Louis University, St. Louis, MO, USA
- Veroly Calderone
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA
- Joseph W Fink
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA; Department of Psychiatry and Behavioral Neuroscience, University of Chicago, Chicago, IL, USA
- Raphael C Lee
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA; Departments of Surgery, Medicine and Organismal Biology, University of Chicago, Chicago, IL, USA
- Neil H Pliskin
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA; Department of Neurology, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
Collapse
35
Abeare K, Razvi P, Sirianni CD, Giromini L, Holcomb M, Cutler L, Kuzmenka P, Erdodi LA. Introducing Alternative Validity Cutoffs to Improve the Detection of Non-credible Symptom Report on the BRIEF. PSYCHOLOGICAL INJURY & LAW 2021. [DOI: 10.1007/s12207-021-09402-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
36
Leib SI, Schieszler-Ockrassa C, White DJ, Gallagher VT, Carter DA, Basurto KS, Ovsiew GP, Resch ZJ, Jennette KJ, Soble JR. Concordance between the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) and Clinical Assessment of Attention Deficit-Adult (CAT-A) over-reporting validity scales for detecting invalid ADHD symptom reporting. APPLIED NEUROPSYCHOLOGY-ADULT 2021; 29:1522-1529. [PMID: 33719792 DOI: 10.1080/23279095.2021.1894150] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
This study investigated the relationship between symptom validity scales on the Clinical Assessment of Attention Deficit-Adult (CAT-A) and the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) in the context of Attention-Deficit/Hyperactivity Disorder (ADHD) evaluation. The sample comprised 140 consecutive patients referred for neuropsychological evaluation of ADHD who were administered the CAT-A, the MMPI-2-RF, and a battery of performance-based neurocognitive tests. Results indicated CAT-A/MMPI-2-RF symptom validity concordance of 51% between measures, with 38% concordant valid and 13% concordant invalid responses. Among those with discordant symptom validity results, valid CAT-A/invalid MMPI-2-RF responding (41%) was more common than invalid CAT-A/valid MMPI-2-RF responding (8%). Results also indicated higher levels of reported ADHD symptoms among those with invalid responding on the CAT-A, whereas the MMPI-2-RF Cognitive Complaints scale did not differ by CAT-A validity status. Finally, symptom validity scales on both the CAT-A and MMPI-2-RF were largely discordant from neuropsychological test validity status per performance validity tests. Findings highlight the need for symptom validity testing when assessing ADHD and indicate that validity indices on broad personality assessments may assess different constructs than embedded validity indices in ADHD-specific measures.
Affiliation(s)
- Sophie I Leib
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Christine Schieszler-Ockrassa
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Daniel J White
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Virginia T Gallagher
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Dustin A Carter
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S Basurto
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
37
Cerny BM, Rhoads T, Leib SI, Jennette KJ, Basurto KS, Durkin NM, Ovsiew GP, Resch ZJ, Soble JR. Mean response latency indices on the Victoria Symptom Validity Test do not contribute meaningful predictive value over accuracy scores for detecting invalid performance. APPLIED NEUROPSYCHOLOGY-ADULT 2021; 29:1304-1311. [PMID: 33470869 DOI: 10.1080/23279095.2021.1872575] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
The utility of the Victoria Symptom Validity Test (VSVT) as a performance validity test (PVT) has been primarily established using response accuracy scores. However, the degree to which response latency may contribute to accurate classification of performance invalidity over and above accuracy scores remains understudied. Therefore, this study investigated whether combining VSVT accuracy and response latency scores would increase predictive utility beyond use of accuracy scores alone. Data from a mixed clinical sample of 163 patients, who were administered the VSVT as part of a larger neuropsychological battery, were analyzed. At least four independent criterion PVTs were used to establish validity groups (121 valid/42 invalid). Logistic regression models examining each difficulty level revealed that all VSVT measures were useful in classifying validity groups, both independently and when combined. Individual predictor classification accuracy ranged from 77.9 to 81.6%, indicating acceptable to excellent discriminability across the validity indices. The results of this study support the value of both accuracy and latency scores on the VSVT to identify performance invalidity, although the accuracy scores had superior classification statistics compared to response latency, and mean latency indices provided no unique benefit for classification accuracy beyond dimensional accuracy scores alone.
Affiliation(s)
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Sophie I Leib
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S Basurto
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Nicole M Durkin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
38
A Systematic Review and Meta-Analysis of the Diagnostic Accuracy of the Advanced Clinical Solutions Word Choice Test as a Performance Validity Test. Neuropsychol Rev 2021; 31:349-359. [PMID: 33447952 DOI: 10.1007/s11065-020-09468-y] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Accepted: 11/29/2020] [Indexed: 10/22/2022]
Abstract
Thorough assessment of performance validity has become an established standard of practice in neuropsychological assessment. While there has been a large focus on the development and cross-validation of embedded performance validity tests (PVTs) in recent years, new freestanding PVTs have also been developed, including the Word Choice Test (WCT) as part of the Advanced Clinical Solutions Effort System. Although the WCT's general utility for identifying invalid performance has been demonstrated in the ensuing decade since its initial publication, optimal cut-scores and associated psychometric properties have varied widely across studies. This study sought to synthesize the existing diagnostic accuracy literature regarding the WCT via a systematic review and to conduct a meta-analysis to determine the performance validity cut-score that best maximizes sensitivity while maintaining acceptable specificity. A systematic search of the literature resulted in 14 studies for synthesis, with eight of those available for meta-analysis. Meta-analytic results revealed an optimal cut-score of ≤ 42 with 54% sensitivity and 93% specificity for identifying invalid neuropsychological test performance. Collectively, the WCT demonstrated adequate diagnostic accuracy as a PVT across a variety of populations. Recommendations for future studies are also provided.
39
Victoria Symptom Validity Test: A Systematic Review and Cross-Validation Study. Neuropsychol Rev 2021; 31:331-348. [PMID: 33433828 DOI: 10.1007/s11065-021-09477-5] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2020] [Accepted: 01/03/2021] [Indexed: 12/12/2022]
Abstract
The Victoria Symptom Validity Test (VSVT) is a performance validity test (PVT) with over two decades of empirical backing, although methodological limitations within the extant literature restrict its clinical and research generalizability. Chief among these constraints is limited consensus on the most accurate index within the VSVT and the most appropriate cut-scores within each VSVT validity index. The current systematic review synthesizes existing VSVT validation studies and provides additional cross-validation in an independent sample using a known-groups design. We completed a systematic search of the literature, identifying 17 peer-reviewed studies for synthesis (7 simulation designs, 7 differential prevalence designs, and 3 known-groups designs). The independent cross-validation sample consisted of 200 mixed clinical neuropsychiatric patients referred for outpatient neuropsychological evaluation. Across all indices, Total item accuracy produced the strongest psychometric properties at an optimal cut-score of ≤ 40 (62% sensitivity/88% specificity). However, ROC curve analyses for all VSVT indices yielded statistically significant areas under the curve (AUCs = .73-.81), suggestive of moderate classification accuracy. Cut-scores derived using the independent cross-validation sample converged with some previous findings supporting cut-scores of ≤ 22 for Easy item accuracy and ≤ 40 for Total item accuracy, although divergent findings were noted for Difficult item accuracy. Overall, VSVT validity indicators have adequate diagnostic accuracy across populations, with the current study providing additional support for its use as a psychometrically sound PVT in clinical settings. However, caution is recommended among patients with certain verified clinical conditions (e.g., dementia) and those with pronounced working memory deficits due to concerns for increased risk of false positives.
40
Resch ZJ, Rhoads T, Ovsiew GP, Soble JR. A Known-Groups Validation of the Medical Symptom Validity Test and Analysis of the Genuine Memory Impairment Profile. Assessment 2020; 29:455-466. [DOI: 10.1177/1073191120983919] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
This study cross-validated the Medical Symptom Validity Test (MSVT) in a mixed neuropsychiatric sample and examined its accuracy for identifying invalid neuropsychological performance using a known-groups design. Cross-sectional data from 129 clinical patients who completed the MSVT were examined. Validity groups were established using six independent criterion performance validity tests, which yielded 98 patients in the valid group and 31 in the invalid group. All MSVT subtest scores were significantly lower in the invalid group (ηp2 = .22-.39). Using published cut-scores, sensitivities of 42% to 71% were found among the primary effort subtests, and 74% sensitivity/90% specificity was observed for the overall MSVT. Among this sample, the MSVT component validity scales produced areas under the curve of .78-.86, suggesting moderate classification accuracy. At optimal cut-scores, the MSVT primary effort validity scales demonstrated 55% to 71% sensitivity/91% to 93% specificity, with the Consistency subtest exhibiting the strongest psychometric properties. The MSVT exhibited relatively robust sensitivity and specificity, supporting its utility as a briefer freestanding performance validity test relative to its predecessor, the Word Memory Test. Finally, the Genuine Memory Impairment Profile appears promising for patients with Major Neurocognitive Disorder, but is cautioned against at this time for those without significant functional decline in activities of daily living.
Affiliation(s)
- Zachary J. Resch
- University of Illinois College of Medicine, Chicago, IL, USA; Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Tasha Rhoads
- University of Illinois College of Medicine, Chicago, IL, USA; Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R. Soble
- University of Illinois College of Medicine, Chicago, IL, USA
41
Kosky KM, Lace JW, Austin TA, Seitz DJ, Clark B. The utility of the Wisconsin card sorting test, 64-card version to detect noncredible attention-deficit/hyperactivity disorder. APPLIED NEUROPSYCHOLOGY-ADULT 2020; 29:1231-1241. [DOI: 10.1080/23279095.2020.1864633] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Affiliation(s)
- Karen M. Kosky
- Department of Health Psychology, University of Missouri, Columbia, MO, USA
- John W. Lace
- Department of Neurology, Cleveland Clinic, Cleveland, OH, USA
- Tara A. Austin
- University of Texas at Austin Dell Medical School, Austin, TX, USA
- Dylan J. Seitz
- Department of Neurology, Indiana University School of Medicine, Indianapolis, IN, USA
- Brook Clark
- Department of Health Psychology, University of Missouri, Columbia, MO, USA
42
Neale AC, Ovsiew GP, Resch ZJ, Soble JR. Feigning or forgetfulness: The effect of memory impairment severity on word choice test performance. Clin Neuropsychol 2020; 36:584-599. [DOI: 10.1080/13854046.2020.1799076] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
Affiliation(s)
- Alec C. Neale
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J. Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
43
Identifying Novel Embedded Performance Validity Test Formulas Within the Repeatable Battery for the Assessment of Neuropsychological Status: a Simulation Study. PSYCHOLOGICAL INJURY & LAW 2020. [DOI: 10.1007/s12207-020-09382-x] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
44
Ovsiew GP, Resch ZJ, Nayar K, Williams CP, Soble JR. Not so fast! Limitations of processing speed and working memory indices as embedded performance validity tests in a mixed neuropsychiatric sample. J Clin Exp Neuropsychol 2020; 42:473-484. [DOI: 10.1080/13803395.2020.1758635] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Affiliation(s)
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kritika Nayar
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychiatry and Behavioral Sciences, Northwestern Feinberg School of Medicine, Chicago, IL, USA
- Christopher P. Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
45
White DJ, Korinek D, Bernstein MT, Ovsiew GP, Resch ZJ, Soble JR. Cross-validation of non-memory-based embedded performance validity tests for detecting invalid performance among patients with and without neurocognitive impairment. J Clin Exp Neuropsychol 2020; 42:459-472. [DOI: 10.1080/13803395.2020.1758634] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Affiliation(s)
- Daniel J. White
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Dale Korinek
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Matthew T. Bernstein
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J. Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
46
Resch ZJ, Pham AT, Abramson DA, White DJ, DeDios-Stern S, Ovsiew GP, Castillo LR, Soble JR. Examining independent and combined accuracy of embedded performance validity tests in the California Verbal Learning Test-II and Brief Visuospatial Memory Test-Revised for detecting invalid performance. APPLIED NEUROPSYCHOLOGY-ADULT 2020; 29:252-261. [DOI: 10.1080/23279095.2020.1742718] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Affiliation(s)
- Zachary J. Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Amber T. Pham
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, DePaul University, Chicago, IL, USA
- Dayna A. Abramson
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Daniel J. White
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Samantha DeDios-Stern
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Liliam R. Castillo
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
47
Soble JR, Alverson WA, Phillips JI, Critchfield EA, Fullen C, O’Rourke JJF, Messerly J, Highsmith JM, Bailey KC, Webber TA, Marceaux JC. Strength in Numbers or Quality over Quantity? Examining the Importance of Criterion Measure Selection to Define Validity Groups in Performance Validity Test (PVT) Research. PSYCHOLOGICAL INJURY & LAW 2020. [DOI: 10.1007/s12207-019-09370-w] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
48
Ventura LM, DeDios-Stern S, Oh A, Soble JR. They're not just little adults: The utility of adult performance validity measures in a mixed clinical pediatric sample. APPLIED NEUROPSYCHOLOGY-CHILD 2019; 10:297-307. [PMID: 31703167 DOI: 10.1080/21622965.2019.1685522] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Performance validity tests (PVTs) have become a standard part of adult neuropsychological practice; however, they are less widely used in pediatric testing. The current study aimed to obtain a better understanding of the application of PVTs within a mixed clinical pediatric sample with a wide range of diagnoses, IQs, and ages. Cross-sectional data were analyzed from 130 consecutive pediatric patients evaluated as part of clinical care and diagnosed with a variety of medical/neurological, developmental, and psychiatric disorders. Patients were administered a battery of neuropsychological tests; results of intellectual functioning measures (i.e., Wechsler Intelligence Scale for Children-Fifth Edition [WISC-V] or Wechsler Adult Intelligence Scale-Fourth Edition [WAIS-IV]) and PVTs (i.e., Test of Memory Malingering [TOMM] and Digit Span [DS] subtests of the WISC-V/WAIS-IV) were analyzed to assess PVT performance across the sample, as well as age- and Full-Scale IQ-related (FSIQ) effects on pass rate. Results suggested that the TOMM is an effective validity test for youth, as the TOMM adult cutoff score was also valid for children (88% pass rate on TOMM trial 1 cut-score ≥41, 71% pass rate on TOMM trial 1 cut-score ≥45). In contrast, Reliable Digit Span (RDS) was less accurate (34% failed RDS [cut-score ≤6], 54% failed RDS-r [cut-score ≤10], and 25% failed DS ACSS [cut-score ≤5]) using standard adult cutoffs. Notably, although TOMM scores were not strongly influenced by IQ, DS scores increased as IQ increased. Overall, further analysis of PVTs can champion new standards of practice through additional research establishing PVT accuracy within pediatric populations.
Affiliation(s)
- Lea M Ventura
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Pediatrics, University of Illinois College of Medicine, Chicago, IL, USA
- Samantha DeDios-Stern
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Alison Oh
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
49
Abeare C, Sabelli A, Taylor B, Holcomb M, Dumitrescu C, Kirsch N, Erdodi L. The Importance of Demographically Adjusted Cutoffs: Age and Education Bias in Raw Score Cutoffs Within the Trail Making Test. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09353-x] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
50
Bain KM, Soble JR, Webber TA, Messerly JM, Bailey KC, Kirton JW, McCoy KJM. Cross-validation of three Advanced Clinical Solutions performance validity tests: Examining combinations of measures to maximize classification of invalid performance. APPLIED NEUROPSYCHOLOGY-ADULT 2019; 28:24-34. [PMID: 30987451 DOI: 10.1080/23279095.2019.1585352] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
Use of multiple performance validity tests (PVTs) may best identify invalid performance, though few studies have examined the utility and accuracy of combining PVTs. This study examined the following PVTs in the Advanced Clinical Solutions (ACS) package to determine their utility alone and in concert: Word Choice Test (WCT), Reliable Digit Span (RDS), and Logical Memory Recognition (LMR). Ninety-three veterans participated in clinical neuropsychological evaluations to determine presence of cognitive impairment; 25% of the performances were deemed invalid via criterion PVTs. Classification accuracy of the ACS measures was assessed via receiver operating characteristic curves, while logistic regressions determined utility of combining these PVTs. The WCT demonstrated superior classification accuracy compared to the two embedded measures of the ACS, even in veterans with cognitive impairment. The two embedded measures (even when used in concert) exhibited inadequate classification accuracy. A combined model with all three ACS PVTs similarly demonstrated little benefit of the embedded indicators over the WCT alone. Results suggest the ACS WCT has utility for detecting invalid performance in a clinical sample with likely cognitive impairment, though the embedded ACS measures (RDS and LMR) may have limited incremental utility, particularly in individuals with cognitive impairment.
Affiliation(s)
- Kathleen M Bain
- South Texas Veterans Health Care System, San Antonio, Texas, USA
- Jason R Soble
- University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Troy A Webber
- Michael E. DeBakey VA Medical Center, Houston, Texas, USA
- K Chase Bailey
- University of Texas Southwestern Medical Center at Dallas, Dallas, Texas, USA
- Joshua W Kirton
- Fort Carson Army Medical Center, Colorado Springs, Colorado, USA
- Karin J M McCoy
- South Texas Veterans Health Care System, San Antonio, Texas, USA