1. Beach J, Bain K, Valencia J, Marceaux J, Soble J. Validation and psychometric properties of the Word Choice Test-10 as an abbreviated performance validity test. Clin Neuropsychol 2024;38:493-507. PMID: 37266928; DOI: 10.1080/13854046.2023.2218576.
Abstract
Objective: The objective of the current investigation was to validate and establish the psychometric properties of an abbreviated, 10-item version of the Word Choice Test (WCT). Method: Data from 110 clinically referred participants (M age = 55.92, SD = 14.07; M education = 13.74, SD = 2.43; 84.5% male) in a Veterans Affairs neuropsychology outpatient clinic were analyzed. All participants completed the WCT, the Test of Memory Malingering Trial 1 (TOMM T1), the Word Memory Test (WMT), and the Digit Span subtest of the WAIS-IV as part of a larger battery of neuropsychological tests. Results: Correlation analyses revealed significant relationships between the WCT-10 and the TOMM T1, Reliable Digit Span (RDS) forward/backward, and the IR, DR, and CNS subtests of the WMT. ROC analysis for the WCT-10 indicated an optimal cutoff of 2 or more errors, with 52% sensitivity and 97% specificity (AUC = .786, p < .001), compared with the standard administration of the WCT with a cutoff of 8 or more errors, which had 67% sensitivity and 91% specificity. Sensitivity/specificity values remained adequate at a cutoff of two or more errors when participants with cognitive impairment (sensitivity = .52, specificity = .92) and without cognitive impairment (sensitivity = .52, specificity = 1.0) were examined separately. Conclusions: The present investigation revealed that the WCT-10, an abbreviated free-standing performance validity test (PVT) comprising the initial 10 items of the WCT, demonstrated clinical utility in a mixed clinical sample of Veterans and was robust to cognitive impairment. This abbreviated PVT may benefit researchers and clinicians by adequately identifying invalid performance while minimizing completion time.
Affiliation(s)
- Jameson Beach: Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Kathleen Bain: Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Julianna Valencia: Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Janice Marceaux: Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Jason Soble: Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
2. Rohling ML, Demakis GJ, Langhinrichsen-Rohling J. Lowered cutoffs to reduce false positives on the Word Memory Test. J Clin Exp Neuropsychol 2024;46:67-79. PMID: 38362939; DOI: 10.1080/13803395.2024.2314736.
Abstract
OBJECTIVE To adjust the decision criterion for the Word Memory Test (WMT; Green, 2003) to minimize the frequency of false positives. METHOD Archival data were combined into a database (n = 3,210) to examine the best cut score for the WMT. We compared results based on the original scoring rules with those based on adjusted scoring rules, using a criterion composed of 16 performance validity tests (PVTs) exclusive of the WMT. Cutoffs based on peer-reviewed publications and test manuals were used, and the resulting PVT composite was considered the best estimate of validity status. We targeted a specificity of .90, with a false-positive rate of less than .10 across multiple samples. RESULTS Each examinee was administered the WMT as well as, on average, 5.5 (SD = 2.5) other PVTs. Based on the original scoring rules of the WMT, 31.8% of examinees failed. Using a single failure on the criterion PVT (C-PVT), the base rate of failure was 45.9%; when two or more C-PVT failures were required, the failure rate dropped to 22.8%. Applying a contingency analysis (i.e., χ2) to the two-failures C-PVT model and the original WMT rules yielded only 65.3% agreement. However, using our adjusted rules for the WMT, which relied on only the IR and DR subtest scores with a cutoff of 77.5%, agreement between the adjusted rules and the C-PVT criterion equaled 80.8%, an improvement of 12.1% in identification. The adjustment produced a 49.2% reduction in false positives while preserving a sensitivity of 53.6%; the specificity of the new rules was 88.8%, for a false-positive rate of 11.2%. CONCLUSIONS Results supported lowering the cut score for correct responding from 82.5% to 77.5%. We also recommend discontinuing use of the Consistency subtest score in determining WMT failure.
3. Monjazeb S, Crowell TA. Performance validity of the Dot Counting Test in a dementia clinic setting. Appl Neuropsychol Adult 2023:1-11. PMID: 37119265; DOI: 10.1080/23279095.2023.2207125.
Abstract
OBJECTIVE This study examined the utility of a performance validity test (PVT), the Dot Counting Test (DCT), in individuals undergoing neuropsychological evaluations for dementia. We investigated specificity rates of the DCT Effort Index score (E-Score) and various individual DCT scores (based on completion time/errors) to further establish appropriate cutoff scores. METHOD This cross-sectional study included 56 non-litigating, validly performing older adults with no/minimal, mild, or major cognitive impairment. Cutoffs associated with ≥90% specificity were established for 7 DCT scoring methods across impairment severity subgroups. RESULTS Performance on 5 of 7 DCT scoring methods significantly differed based on impairment severity. Overall, more severely impaired participants had significantly higher E-Scores and longer completion times but demonstrated comparable errors to their less impaired counterparts. Contrary to the previously established E-Score cutoff of ≥17, a cutoff of ≥22 was required to maintain adequate specificity in our total sample, with significantly higher adjustments required in the Mild and Major Neurocognitive Disorder subgroups (≥27 and ≥40, respectively). A cutoff of >3 errors achieved adequate specificity in our sample, suggesting that error scores may produce lower false positive rates than E-Scores and completion time scores, both of which overemphasize speed and could inadvertently penalize more severely impaired individuals. CONCLUSIONS In a dementia clinic setting, error scores on the DCT may have greater utility in detecting non-credible performance than E-Scores and completion time scores, particularly among more severely impaired individuals. Future research should establish and cross-validate the sensitivity and specificity of the DCT for assessing performance validity.
Affiliation(s)
- Sanam Monjazeb: Department of Psychology, Simon Fraser University, Burnaby, Canada
- Timothy A Crowell: Department of Psychiatry, University of British Columbia, Vancouver, Canada
4. Hansen ND, Rhoads T, Jennette KJ, Reynolds TP, Ovsiew GP, Resch ZJ, Critchfield EA, Marceaux JC, O'Rourke JJF, Soble JR. Validation of alternative Dot Counting Test E-score cutoffs based on degree of cognitive impairment in veteran and civilian clinical samples. Clin Neuropsychol 2023;37:402-415. PMID: 35343379; DOI: 10.1080/13854046.2022.2054863.
Abstract
OBJECTIVE This study examined Dot Counting Test (DCT) performance among patient populations with no/minimal impairment and mild impairment in an attempt to cross-validate a more parsimonious interpretative strategy and to derive optimal E-Score cutoffs. METHOD Participants included clinically referred patients from VA (n = 101) and academic medical center (AMC; n = 183) settings. Patients were separated by validity status (valid/invalid), and two comparison groups were then formed from each sample's valid group: Group 1 included patients with no to minimal cognitive impairment, and Group 2 included those with mild neurocognitive disorder. Analysis of variance tested for differences between rounded and unrounded DCT E-Scores across both comparison groups and the invalid group. Receiver operating characteristic curve analyses identified optimal validity cut scores for each sample, stratified by comparison group. RESULTS In the VA sample, cut scores of ≥13 (rounded) and ≥12.58 (unrounded) differentiated Group 1 from the invalid performers (87% sensitivity/88% specificity), and cut scores of ≥17 (rounded; 58% sensitivity/90% specificity) and ≥16.49 (unrounded; 61% sensitivity/90% specificity) differentiated Group 2 from the invalid group. Similarly, in the AMC sample, a cut score of ≥13 (rounded and unrounded; 75% sensitivity/90% specificity) differentiated Group 1 from the invalid group, whereas cut scores of ≥18 (rounded; 43% sensitivity/94% specificity) and ≥16.94 (unrounded; 46% sensitivity/90% specificity) differentiated Group 2 from the invalid performers. CONCLUSIONS Different cut scores were indicated based on degree of cognitive impairment, providing proof of concept for a more parsimonious interpretative paradigm than individual cut scores derived for specific diagnostic groups.
Affiliation(s)
- Nicholas D Hansen: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Tasha Rhoads: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kyle J Jennette: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Tristan P Reynolds: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Edan A Critchfield: Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA; Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Janice C Marceaux: Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Justin J F O'Rourke: Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA; Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Jason R Soble: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
5. Jennette KJ, Rhoads T, Resch ZJ, Cerny BM, Leib SI, Sharp DW, Ovsiew GP, Soble JR. Multivariable analysis of the relative utility and additive value of eight embedded performance validity tests for classifying invalid neuropsychological test performance. J Clin Exp Neuropsychol 2022;44:451-460. PMID: 36197342; DOI: 10.1080/13803395.2022.2128067.
Abstract
INTRODUCTION This study investigated a combination of eight embedded performance validity tests (PVTs) derived from commonly administered neuropsychological tests to optimize sensitivity/specificity for detecting invalid neuropsychological test performance. The goal was to identify which combination of these common embedded PVTs has the most robust predictive power for detecting invalid performance in a single diverse clinical sample. METHOD Eight previously validated memory- and nonmemory-based embedded PVTs were examined among 231 patients undergoing neuropsychological evaluation. Patients were classified into valid/invalid groups based on four independent criterion PVTs. Embedded PVT accuracy was assessed using standard and stepwise multiple logistic regression models. RESULTS Three PVTs, the Brief Visuospatial Memory Test-Revised Recognition Discrimination (BVMT-R-RD), Rey Auditory Verbal Learning Test Forced Choice, and WAIS-IV Digit Span Age-Corrected Scaled Score, predicted 45.5% of the variance in validity group membership. The BVMT-R-RD independently accounted for 32% of the variance in predicting independent, criterion-defined validity group membership. CONCLUSIONS This study demonstrated the incremental predictive power of multiple embedded PVTs derived from common neuropsychological measures in detecting invalid test performance, and identified the measures accounting for the greatest portion of the variance. These results provide guidance for selecting the most fruitful embedded PVTs and proof of concept to better guide selection of embedded validity indices. Further, they offer clinicians an efficient, empirically derived approach to assessing performance validity when time constraints limit the use of freestanding PVTs.
Affiliation(s)
- Kyle J Jennette: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Tasha Rhoads: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Zachary J Resch: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Brian M Cerny: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Sophie I Leib: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Dillon W Sharp: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
6. Jennette KJ, Williams CP, Resch ZJ, Ovsiew GP, Durkin NM, O'Rourke JJF, Marceaux JC, Critchfield EA, Soble JR. Assessment of differential neurocognitive performance based on the number of performance validity test failures: A cross-validation study across multiple mixed clinical samples. Clin Neuropsychol 2022;36:1915-1932. PMID: 33759699; DOI: 10.1080/13854046.2021.1900398.
Abstract
Objective: This cross-sectional study examined the effect of the number of performance validity test (PVT) failures on neuropsychological test performance in a demographically diverse Veteran (VA) sample (n = 76) and an academic medical center (AMC) sample (n = 128). A secondary goal was to investigate the psychometric implications of including versus excluding those with one PVT failure when cross-validating a series of embedded PVTs. Method: All patients completed the same six criterion PVTs, with the AMC sample completing three additional embedded PVTs. Neurocognitive test performance differences were examined based on number of PVT failures (0, 1, 2+) for both samples, and the effect of number of criterion failures on embedded PVT performance was analyzed in the AMC sample. Results: Groups with 0 or 1 PVT failures performed better than those with ≥2 PVT failures across most cognitive tests. Differences between those with 0 or 1 PVT failures were nonsignificant except for one test in the AMC sample. Receiver operating characteristic curve analyses found no differences in optimal cut score based on number of PVT failures when retaining/excluding one PVT failure. Conclusion: Findings support the use of ≥2 PVT failures as indicative of performance invalidity. They also strongly support including those with one PVT failure alongside those with zero PVT failures in diagnostic accuracy studies, given that their inclusion reflects actual clinical practice, does not reduce sample sizes, and does not artificially deflate neurocognitive test results or inflate PVT classification accuracy statistics.
Affiliation(s)
- Kyle J Jennette: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Christopher P Williams: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Zachary J Resch: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Gabriel P Ovsiew: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Nicole M Durkin: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Justin J F O'Rourke: Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Janice C Marceaux: Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Edan A Critchfield: Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Jason R Soble: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
7. Cohen CD, Rhoads T, Keezer RD, Jennette KJ, Williams CP, Hansen ND, Ovsiew GP, Resch ZJ, Soble JR. All of the accuracy in half of the time: Assessing abbreviated versions of the Test of Memory Malingering in the context of verbal and visual memory impairment. Clin Neuropsychol 2022;36:1933-1949. PMID: 33836622; DOI: 10.1080/13854046.2021.1908596.
Abstract
Objective: The Test of Memory Malingering (TOMM) Trial 1 (T1) and errors on the first 10 items of T1 (T1-e10) were developed as briefer versions of the TOMM to minimize evaluation time and burden, although the effect of genuine memory impairment on these indices is not well established. This study examined whether increasing material-specific verbal and visual memory impairment affected T1 and T1-e10 performance and accuracy for detecting invalidity. Method: Data from 155 neuropsychiatric patients administered the TOMM, Rey Auditory Verbal Learning Test (RAVLT), and Brief Visuospatial Memory Test-Revised (BVMT-R) during outpatient evaluation were examined. Valid (N = 125) and invalid (N = 30) groups were established by four independent criterion performance validity tests. Verbal/visual memory impairment was classified as ≥37T (normal memory), 30T-36T (mild impairment), and ≤29T (severe impairment). Results: Overall, T1 had outstanding accuracy, with 77% sensitivity/90% specificity. T1-e10 was less accurate but had excellent discriminability, with 60% sensitivity/87% specificity. T1 maintained excellent accuracy regardless of memory impairment severity, with 77% sensitivity/≥88% specificity and a relatively invariant cut score even among those with severe verbal/visual memory impairment. T1-e10 had excellent classification accuracy among those with normal memory and mild impairment, but its accuracy and sensitivity dropped with severe impairment, and the optimal cut score had to be increased to maintain adequate specificity. Conclusion: TOMM T1 is an effective performance validity test with strong psychometric properties regardless of material specificity and severity of memory impairment. By contrast, T1-e10 functions relatively well in the context of mild memory impairment but has reduced discriminability with severe memory impairment.
Affiliation(s)
- Cari D Cohen: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Tasha Rhoads: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Richard D Keezer: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; School of Psychology, Counseling, and Family Therapy, Wheaton College, Wheaton, IL, USA
- Kyle J Jennette: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Christopher P Williams: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Nicholas D Hansen: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Gabriel P Ovsiew: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R Soble: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
8. Soble JR, Cerny BM, Ovsiew GP, Rhoads T, Reynolds TP, Sharp DW, Jennette KJ, Marceaux JC, O'Rourke JJF, Critchfield EA, Resch ZJ. Comparing the independent and aggregated accuracy of Trial 1 and the first 10 TOMM items for detecting invalid neuropsychological test performance across civilian and veteran clinical samples. Percept Mot Skills 2022;129:269-288. PMID: 35139315; DOI: 10.1177/00315125211066399.
Abstract
Previous studies support using two abbreviated versions of the Test of Memory Malingering (TOMM), (a) Trial 1 (T1) and (b) the number of errors on the first 10 items of T1 (T1e10), as performance validity tests (PVTs). In this study, we examined the independent and aggregated predictive utility of TOMM T1 and T1e10 for identifying invalid neuropsychological test performance across two clinical samples. Using a cross-sectional design, we examined two independent, demographically diverse mixed samples of patients who underwent neuropsychological evaluations: military veterans (VA; n = 108) and civilians from an academic medical center (n = 234). We determined validity groups from patient performance on four independent criterion PVTs. We established concordance between passing/failing T1e10 and T1, then used logistic regression to determine the individual and aggregated accuracy of T1e10 and T1 for predicting validity group membership. Concordance between passing T1e10 and T1 was high, as was agreement with overall validity status (87-98%), across samples. By contrast, T1e10 failure was more highly concordant with T1 failure (69-77%) than with overall invalidity status (59-60%) per criterion PVTs, whereas T1 failure was more highly concordant with invalidity status (72-88%) per criterion PVTs. Logistic regression analyses demonstrated similar results, with T1 accounting for more variance than T1e10. However, combining T1e10 and T1 accounted for the most variance of any model, with T1e10 and T1 each emerging as significant predictors. TOMM T1 and, to a lesser extent, T1e10 were significant predictors of independent criterion-derived validity status across two distinct clinical samples, but they did not offer improved classification accuracy when aggregated.
Affiliation(s)
- Jason R Soble: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Brian M Cerny: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Tasha Rhoads: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Tristan P Reynolds: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Dillon W Sharp: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Kyle J Jennette: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Janice C Marceaux: Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Justin J F O'Rourke: Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Edan A Critchfield: Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA; Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Zachary J Resch: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
9. DiCarlo GM, Ernst WJ, Kneavel ME. An exploratory study of the convergent validity of the Test of Effort (TOE) in adults with acquired brain injury. Brain Inj 2022;36:424-431. PMID: 35113759; DOI: 10.1080/02699052.2022.2034953.
Abstract
PRIMARY OBJECTIVE To examine the convergent validity of the Test of Effort (TOE), a performance validity test (PVT) currently under development that employs a two-subtest (one verbal, one visual), forced-choice recognition memory format. RESEARCH DESIGN A descriptive, correlational design was employed to describe performance on the TOE and examine the convergent validity between the TOE and comparison measures. METHODS AND PROCEDURES A sample of 53 individuals with chronic acquired brain injury (ABI) were administered the TOE and three well-validated PVTs (Reliable Digit Span [RDS], Test of Memory Malingering [TOMM] and Dot Counting Test [DCT]). MAIN OUTCOMES AND RESULTS The TOE appeared more difficult than it actually was, suggesting adequate face validity. Medium-to-large correlations were observed between the TOE and established PVTs, suggesting good convergent validity. Provisional cutoff scores are offered based on performance of a subgroup of participants with "sufficient effort." CONCLUSIONS Overall, the TOE shows promise as a PVT measure for clinical use. Future studies with larger and more diverse samples are needed to more fully determine the psychometric characteristics of the TOE.
Affiliation(s)
- William J Ernst: Department of Professional Psychology, Chestnut Hill College, Philadelphia, Pennsylvania, USA
- Meredith E Kneavel: School of Nursing and Health Sciences, La Salle University, Philadelphia, Pennsylvania, USA
10. White DJ, Ovsiew GP, Rhoads T, Resch ZJ, Lee M, Oh AJ, Soble JR. The divergent roles of symptom and performance validity in the assessment of ADHD. J Atten Disord 2022;26:101-108. PMID: 33084457; DOI: 10.1177/1087054720964575.
Abstract
OBJECTIVE This study examined concordance between symptom and performance validity among clinically-referred patients undergoing neuropsychological evaluation for Attention-Deficit/Hyperactivity Disorder (ADHD). METHOD Data from 203 patients who completed the WAIS-IV Working Memory Index, the Clinical Assessment of Attention Deficit-Adult (CAT-A), and ≥4 criterion performance validity tests (PVTs) were analyzed. RESULTS Symptom and performance validity were concordant in 76% of cases, with the majority being valid performance. Of the remaining 24% of cases with divergent validity findings, patients were more likely to exhibit symptom invalidity (15%) than performance invalidity (9%). Patients demonstrating symptom invalidity endorsed significantly more ADHD symptoms than those with credible symptom reporting (ηp2 = .06-.15), but comparable working memory test performance, whereas patients with performance invalidity had significantly worse working memory performance than those with valid PVT performance (ηp2 = .18). CONCLUSION Symptom and performance invalidity represent dissociable constructs in patients undergoing neuropsychological evaluation of ADHD and should be evaluated independently.
Affiliation(s)
- Daniel J White: University of Illinois College of Medicine, Chicago, IL, USA; Roosevelt University, Chicago, IL, USA
- Tasha Rhoads: University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch: University of Illinois College of Medicine, Chicago, IL, USA; Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Mary Lee: University of Illinois College of Medicine, Chicago, IL, USA
- Alison J Oh: University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble: University of Illinois College of Medicine, Chicago, IL, USA
11. OUP accepted manuscript. Arch Clin Neuropsychol 2022;37:1158-1176. DOI: 10.1093/arclin/acac020.
12. Messerly J, Soble JR, Webber TA, Alverson WA, Fullen C, Kraemer LD, Marceaux JC. Evaluation of the classification accuracy of multiple performance validity tests in a mixed clinical sample. Appl Neuropsychol Adult 2021;28:727-736. PMID: 31835915; DOI: 10.1080/23279095.2019.1698581.
Abstract
The Test of Memory Malingering (TOMM) and Word Memory Test (WMT) are among the most well-known performance validity tests (PVTs) and are regarded as gold-standard measures. Because many factors affect PVT selection, it is imperative that clinicians make informed decisions about additional or alternative PVTs that demonstrate classification accuracy similar to these well-validated measures. The present archival study evaluated the agreement/classification accuracy of a large battery of other freestanding/embedded PVTs in a mixed clinical sample of 126 veterans. We examined failure rates for all standalone/embedded PVTs using established cut scores and calculated pass/fail agreement rates and diagnostic odds ratios for various combinations of PVTs, using the TOMM and WMT as criterion measures. The TOMM and WMT demonstrated the best agreement, followed by the Word Choice Test (WCT). The Rey Fifteen Item Test had an excessive number of false-negative errors and reduced classification accuracy. The Digit Span age-corrected scaled score (DS-ACSS) had the highest agreement. Findings lend further support to using a combination of embedded and standalone PVTs to identify suboptimal performance, and provide data to enhance clinical decision making for neuropsychologists who implement combinations of PVTs in a larger clinical battery.
Affiliation(s)
- Johanna Messerly: Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Jason R Soble: Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA; Departments of Psychiatry and Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Troy A Webber: Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA; Mental Health and Rehabilitation and Extended Care Lines, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- W Alex Alverson: Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Chrystal Fullen: Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Lindsay D Kraemer: Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Janice C Marceaux: Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA; Department of Neurology, University of Texas Health Science Center, San Antonio, TX, USA
|
13
|
Mulligan R, Basso MR, Hoffmeister J, Lau L, Whiteside DM, Combs D. Classification accuracy of the word memory test genuine memory impairment index. J Clin Exp Neuropsychol 2021; 43:655-662. [PMID: 34686108 DOI: 10.1080/13803395.2021.1988520] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
OBJECTIVE The Word Memory Test (WMT) assesses non-credible performance in neuropsychological assessment. To mitigate the risk of false positives among patients with severe cognitive dysfunction, the Genuine Memory Impairment Profile (GMIP) was derived. Only a modest number of investigations have evaluated classification accuracy among clinical samples, leaving the GMIP's accuracy largely uncertain. Accordingly, a simulation experiment evaluated the classification accuracy of the GMIP in a group of healthy individuals coached to simulate mild traumatic brain injury (TBI)-related memory impairment on the WMT. PARTICIPANTS AND METHODS Eighty healthy individuals were randomly assigned to one of four experimental groups. One group was provided superficial information concerning TBI symptoms (naïve simulators), another was provided extensive information concerning TBI symptoms (sophisticated simulators), and a third group was provided extensive TBI symptom information and tactics to evade detection by performance validity tests (PVTs) (test-coached). An honest-responding control group was directed to give their best performance. All participants were administered the California Verbal Learning Test-2 (CVLT-2) and the WMT. RESULTS Among the TBI simulators, 90% of the test-coached, 95% of the sophisticated, and 100% of the naïve simulators were correctly classified as exaggerating memory impairment on the primary WMT indices. The simulator groups performed worse than the honest-responding group on the CVLT-2. Of those who exceeded the WMT cutoffs, 60%, 27%, and 6% of the naïve, sophisticated, and test-coached simulators, respectively, manifested the GMIP profile. CONCLUSIONS The GMIP is apt to misclassify individuals as having genuine memory impairment, especially when a naïve or unsophisticated effort is made to exert non-credible performance. Indeed, individuals who employ the least sophisticated efforts to exaggerate cognitive impairment appear most likely to manifest the GMIP. The GMIP should be used cautiously to discriminate genuine impairment from non-credible performance, especially among people with mild TBI.
Affiliation(s)
- Ryan Mulligan: Department of Psychology, University of Tulsa, Tulsa, OK, USA
- Michael R Basso: Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
- Lily Lau: Department of Psychology, University of Tulsa, Tulsa, OK, USA
- Douglas M Whiteside: Department of Rehabilitation Medicine, University of Minnesota, Minneapolis, MN, USA
- Dennis Combs: Department of Psychology, University of Texas, Austin, TX, USA
|
14
|
Sanborn V, Lace J, Gunstad J, Galioto R. Considerations regarding noncredible performance in the neuropsychological assessment of patients with multiple sclerosis: A case series. Appl Neuropsychol Adult 2021; 30:458-467. [PMID: 34514920 DOI: 10.1080/23279095.2021.1971229] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Determining the validity of data during clinical neuropsychological assessment is crucial for proper interpretation, and an extensive literature has described myriad methods for doing so in diverse samples. However, little research has considered noncredible presentation in persons with multiple sclerosis (pwMS). PwMS often experience one or more factors known to impact data validity, including major neurocognitive impairment, psychological distress/psychogenic interference, and secondary gain. This case series aimed to illustrate the potential relationships between these factors and performance validity testing in pwMS. Six cases involving at least one of the above-stated factors were identified from an IRB-approved database of pwMS referred for neuropsychological assessment at a large academic medical center. Backgrounds, neuropsychological test data, and clinical considerations for each were reviewed. Interestingly, no pwMS diagnosed with major neurocognitive impairment demonstrated noncredible performance, nor did any patient demonstrate noncredible performance in the absence of notable psychological distress. Given the variability of noncredible performance and the multiplicity of factors affecting performance validity in pwMS, clinicians are strongly encouraged to consider psychometrically appropriate methods for evaluating the validity of cognitive data in pwMS. Additional research aiming to elucidate base rates of, mechanisms begetting, and methods for assessing noncredible performance in pwMS is imperative.
Affiliation(s)
- John Lace: Cleveland Clinic, Neurological Institute, Section of Neuropsychology, Cleveland, OH, USA
- John Gunstad: Psychological Sciences, Kent State University, Kent, OH, USA; Brain Health Research Institute, Kent State University, Kent, OH, USA
- Rachel Galioto: Cleveland Clinic, Neurological Institute, Section of Neuropsychology, Cleveland, OH, USA; Cleveland Clinic, Mellen Center for Multiple Sclerosis, Cleveland, OH, USA
|
15
|
Rhoads T, Neale AC, Resch ZJ, Cohen CD, Keezer RD, Cerny BM, Jennette KJ, Ovsiew GP, Soble JR. Psychometric implications of failure on one performance validity test: a cross-validation study to inform criterion group definition. J Clin Exp Neuropsychol 2021; 43:437-448. [PMID: 34233580 DOI: 10.1080/13803395.2021.1945540] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Introduction: Research to date has supported the use of multiple performance validity tests (PVTs) for determining validity status in clinical settings. However, the implications of including versus excluding patients failing one PVT remain a source of debate, and methodological guidelines for PVT research are lacking. This study evaluated three validity classification approaches (i.e., 0 vs. ≥2, 0-1 vs. ≥2, and 0 vs. ≥1 PVT failures) using three reference standards (i.e., criterion PVT groupings) to recommend approaches best suited to establishing validity groups in PVT research methodology. Method: A mixed clinical sample of 157 patients was administered freestanding PVTs (Medical Symptom Validity Test, Dot Counting Test, Test of Memory Malingering, Word Choice Test) and embedded PVTs (Reliable Digit Span, RAVLT Effort Score, Stroop Word Reading, BVMT-R Recognition Discrimination) during outpatient neuropsychological evaluation. Three reference standards (i.e., two freestanding and three embedded PVTs from the above list) were created. The Rey 15-Item Test and RAVLT Forced Choice were used solely as outcome measures, in addition to two freestanding PVTs not employed in the reference standard. Receiver operating characteristic curve analyses evaluated classification accuracy using the three validity classification approaches for each reference standard. Results: When patients failing only one PVT were excluded or classified as valid, classification accuracy ranged from acceptable to excellent. However, classification accuracy was poor to acceptable when patients failing one PVT were classified as invalid. Sensitivity/specificity across two of the validity classification approaches (0 vs. ≥2; 0-1 vs. ≥2) remained reasonably stable. Conclusions: These results indicate that both inclusion and exclusion of patients failing one PVT are acceptable approaches to PVT research methodology, and the choice of method likely depends on the study rationale.
However, including such patients in the invalid group yields unacceptably poor classification accuracy across a number of psychometrically robust outcome measures and therefore is not recommended.
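The receiver operating characteristic analyses these studies rely on amount to sweeping a cutoff across a PVT score and recording sensitivity/specificity at each candidate cut-score; Youden's J is one common way to pick an "optimal" cutoff. A generic sketch under fabricated data (not the study's scores or analysis code):

```python
def roc_points(errors, invalid):
    """Sweep cutoffs over PVT error scores (higher = more errors = worse).

    errors: list of error counts; invalid: parallel list of booleans
    (True = invalid per the criterion PVTs). Returns (cutoff, sens, spec)
    for each candidate rule "errors >= cutoff -> flag invalid".
    """
    pts = []
    for c in sorted(set(errors)):
        tp = sum(e >= c and v for e, v in zip(errors, invalid))
        fn = sum(e < c and v for e, v in zip(errors, invalid))
        fp = sum(e >= c and not v for e, v in zip(errors, invalid))
        tn = sum(e < c and not v for e, v in zip(errors, invalid))
        pts.append((c, tp / (tp + fn), tn / (tn + fp)))
    return pts

def youden_optimal(points):
    """Cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    return max(points, key=lambda p: p[1] + p[2] - 1)

# Fabricated error counts and criterion-defined validity status
errors = [0, 0, 1, 1, 2, 3, 5, 6]
invalid = [False, False, False, False, True, False, True, True]
best = youden_optimal(roc_points(errors, invalid))  # (cutoff, sens, spec)
```

In practice, published cut-scores are usually chosen to hold specificity at or above .90 rather than to maximize J alone, which is why reported cutoffs can differ from the Youden-optimal point.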
Affiliation(s)
- Tasha Rhoads: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Alec C Neale: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Zachary J Resch: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Cari D Cohen: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Richard D Keezer: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Wheaton College, Wheaton, IL, USA
- Brian M Cerny: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Kyle J Jennette: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
|
16
|
Nayar K, Ventura LM, DeDios-Stern S, Oh A, Soble JR. The Impact of Learning and Memory on Performance Validity Tests in a Mixed Clinical Pediatric Population. Arch Clin Neuropsychol 2021; 37:50-62. [PMID: 34050354 DOI: 10.1093/arclin/acab040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/04/2021] [Indexed: 11/13/2022] Open
Abstract
OBJECTIVE This study examined the degree to which verbal and visuospatial memory abilities influence performance validity test (PVT) performance in a mixed clinical pediatric sample. METHOD Data from 252 consecutive clinical pediatric cases (M age = 11.23 years, SD = 4.02; 61.9% male) seen for outpatient neuropsychological assessment were collected. Measures of learning and memory (e.g., the California Verbal Learning Test-Children's Version; Child and Adolescent Memory Profile [ChAMP]), performance validity (Test of Memory Malingering Trial 1 [TOMM T1]; Wechsler Intelligence Scale for Children-Fifth Edition [WISC-V] or Wechsler Adult Intelligence Scale-Fourth Edition Digit Span indices; ChAMP Overall Validity Index), and intellectual abilities (e.g., WISC-V) were included. RESULTS Learning/memory abilities were not significantly correlated with TOMM T1 and accounted for relatively little variance in overall TOMM T1 performance (i.e., ≤6%). Conversely, ChAMP Validity Index scores were significantly correlated with verbal and visual learning/memory abilities, and learning/memory accounted for significant variance in PVT performance (12%-26%). Verbal learning/memory performance accounted for 5%-16% of the variance across the Digit Span PVTs. No significant differences in TOMM T1 and Digit Span PVT scores emerged between verbal/visual learning/memory impairment groups. ChAMP validity scores were lower for the visual learning/memory impairment group relative to the nonimpaired group. CONCLUSIONS Findings highlight the utility of including PVTs as standard practice for pediatric populations, particularly when memory is a concern. Consistent with the adult literature, TOMM T1 outperformed the other PVTs even in this diverse clinical sample with/without learning/memory impairment. In contrast, the Digit Span indices appear best suited to cases with visuospatial (but not verbal) learning/memory concerns.
Finally, the ChAMP's embedded validity measure was most strongly impacted by learning/memory performance.
Affiliation(s)
- Kritika Nayar: Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Lea M Ventura: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Pediatrics, University of Illinois College of Medicine, Chicago, IL, USA
- Samantha DeDios-Stern: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Alison Oh: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
|
17
|
Cerny BM, Resch ZJ, Rhoads T, Jennette KJ, Singh PG, Ovsiew GP, Soble JR. Examining Traditional and Novel Validity Indicators from the Medical Symptom Validity Test Across Levels of Verbal and Visual Memory Impairment. Arch Clin Neuropsychol 2021; 37:146-159. [PMID: 34050349 DOI: 10.1093/arclin/acab038] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Revised: 04/05/2021] [Accepted: 05/01/2021] [Indexed: 11/14/2022] Open
Abstract
OBJECTIVE This cross-sectional study examined accuracy of traditional Medical Symptom Validity Test (MSVT) validity indicators, including immediate recognition (IR), delayed recognition (DR), and consistency (CNS), as well as a novel indicator derived from the mean performance on IR, DR, and CNS across verbal, visual, and combined learning and memory impairment bands. METHOD A sample of 180 adult outpatients was divided into valid (n = 150) and invalid (n = 30) groups based on results of four independent criterion performance validity tests. Verbal and visual learning and recall were classified as indicative of no impairment, mild impairment, or severe impairment based on performance on the Rey Auditory Verbal Learning Test and Brief Visuospatial Memory Test-Revised, respectively. RESULTS In general, individual MSVT subtests were able to accurately classify performance as valid or invalid, even in the context of severe learning and memory deficits. However, as verbal and visual memory impairment increased, optimal MSVT cut-scores diverged from manual-specified cutoffs such that DR and CNS required cut-scores to be lowered to maintain adequate specificity. By contrast, the newly proposed scoring algorithm generally showed more robust psychometric properties across the memory impairment bands. CONCLUSIONS The mean performance index, a novel scoring algorithm using the mean of the three primary MSVT subtests, may be a more robust validity indicator than the individual MSVT subtests in the context of bona fide memory impairment.
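The mean performance index proposed above is simply the average of the three primary subtest scores compared against a single cutoff, rather than flagging any one subtest that falls below its own cutoff. A schematic illustration (the function name and both cutoff values are placeholders, not the manual's or the study's optimal cut-scores):

```python
def msvt_flags(ir, dr, cns, subtest_cut=85.0, mean_cut=85.0):
    """Contrast the traditional per-subtest rule with a mean-based index.

    ir, dr, cns: percent-correct on Immediate Recognition, Delayed
    Recognition, and Consistency. Both cutoffs here are illustrative
    assumptions, not published values.
    """
    traditional = min(ir, dr, cns) < subtest_cut      # any subtest below cutoff
    mean_index = (ir + dr + cns) / 3.0                # novel mean performance index
    return {
        "traditional_fail": traditional,
        "mean_index": mean_index,
        "mean_fail": mean_index < mean_cut,
    }
```

Averaging lets one low subtest score be offset by two intact ones, which is why a mean-based index can hold specificity in genuinely memory-impaired patients where a single-subtest rule would require lowered cut-scores.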
Affiliation(s)
- Brian M Cerny: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Zachary J Resch: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Tasha Rhoads: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kyle J Jennette: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Palak G Singh: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Gabriel P Ovsiew: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
|
18
|
Ovsiew GP, Carter DA, Rhoads T, Resch ZJ, Jennette KJ, Soble JR. Concordance Between Standard and Abbreviated Administrations of the Test of Memory Malingering: Implications for Streamlining Performance Validity Assessment. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09408-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
19
|
Resch ZJ, Paxton JL, Obolsky MA, Lapitan F, Cation B, Schulze ET, Calderone V, Fink JW, Lee RC, Pliskin NH, Soble JR. Establishing the base rate of performance invalidity in a clinical electrical injury sample: Implications for neuropsychological test performance. J Clin Exp Neuropsychol 2021; 43:213-223. [PMID: 33858295 DOI: 10.1080/13803395.2021.1914002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Objective: The base rate of neuropsychological performance invalidity in electrical injury, a clinically-distinct and frequently compensation-seeking population, is not well established. This study determined the base rate of performance invalidity in a large electrical injury sample, and examined patient characteristics, injury parameters, and neuropsychological test performance based on validity status. Method: This cross-sectional study included data from 101 patients with electrical injury consecutively referred for post-acute neuropsychological evaluation. Eighty-five percent of the sample was compensation-seeking. Multiple performance validity tests (PVTs) were administered as part of standard clinical evaluation. For patients with four or more PVTs, valid performance was operationalized as one or fewer PVT failures and invalid performance as two or more failures. Results: Frequency analysis revealed 66% (n = 67) had valid performance while 29% (n = 29) demonstrated probable invalid performance; the remaining 5% (n = 5) had indeterminate validity. No significant differences in demographics or injury parameters emerged between validity groups (0 vs. 1 vs. ≥2 PVT failures). In contrast, the electrical injury group with invalid performance performed significantly worse across tests of processing speed and executive abilities than those with valid performance (ps < .05, ηp² = .19-.25). Conclusions: The current study is the first to establish the base rate of neuropsychological performance invalidity in electrical injury survivors using empirical methods and current practice standards. Patient and clinical variables, including compensation-seeking status, did not differ between validity groups; however, neuropsychological test performance did, supporting the need for multi-method, objective performance validity assessment.
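The decision rule described in the Method section (one or fewer failures = valid, two or more = invalid, given at least four PVTs) can be expressed directly. In this sketch, treating too few administered PVTs as the source of the indeterminate cases is an assumption, since the abstract does not state how indeterminate validity arose:

```python
def classify_validity(failures, n_pvts, min_pvts=4):
    """Operationalize validity status from PVT failure counts.

    failures: number of PVTs failed; n_pvts: number of PVTs administered.
    Thresholds mirror the rule quoted above; min_pvts is an assumed
    minimum-battery condition for returning "indeterminate".
    """
    if n_pvts < min_pvts:
        return "indeterminate"
    return "invalid" if failures >= 2 else "valid"
```

The same ≥2-failure criterion appears across several of the studies listed here, which is what makes their base-rate estimates comparable.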
Affiliation(s)
- Zachary J Resch: Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jessica L Paxton: Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Maximillian A Obolsky: Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Franchezka Lapitan: Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Bailey Cation: Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Evan T Schulze: Department of Neurology, Saint Louis University, St. Louis, MO, USA
- Veroly Calderone: The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA
- Joseph W Fink: The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA; Department of Psychiatry and Behavioral Neuroscience, University of Chicago, Chicago, IL, USA
- Raphael C Lee: The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA; Departments of Surgery, Medicine and Organismal Biology, University of Chicago, Chicago, IL, USA
- Neil H Pliskin: Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA; Department of Neurology, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Jason R Soble: Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
|
20
|
Cerny BM, Rhoads T, Leib SI, Jennette KJ, Basurto KS, Durkin NM, Ovsiew GP, Resch ZJ, Soble JR. Mean response latency indices on the Victoria Symptom Validity Test do not contribute meaningful predictive value over accuracy scores for detecting invalid performance. Appl Neuropsychol Adult 2021; 29:1304-1311. [PMID: 33470869 DOI: 10.1080/23279095.2021.1872575] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
The utility of the Victoria Symptom Validity Test (VSVT) as a performance validity test (PVT) has been primarily established using response accuracy scores. However, the degree to which response latency may contribute to accurate classification of performance invalidity over and above accuracy scores remains understudied. Therefore, this study investigated whether combining VSVT accuracy and response latency scores would increase predictive utility beyond use of accuracy scores alone. Data from a mixed clinical sample of 163 patients, who were administered the VSVT as part of a larger neuropsychological battery, were analyzed. At least four independent criterion PVTs were used to establish validity groups (121 valid/42 invalid). Logistic regression models examining each difficulty level revealed that all VSVT measures were useful in classifying validity groups, both independently and when combined. Individual predictor classification accuracy ranged from 77.9 to 81.6%, indicating acceptable to excellent discriminability across the validity indices. The results of this study support the value of both accuracy and latency scores on the VSVT to identify performance invalidity, although the accuracy scores had superior classification statistics compared to response latency, and mean latency indices provided no unique benefit for classification accuracy beyond dimensional accuracy scores alone.
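Evaluating whether latency adds predictive value beyond accuracy is typically done by comparing logistic regression models with and without the extra predictor. As a schematic of the underlying model only (one predictor, plain gradient descent, fabricated scores; not the authors' analysis):

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=3000):
    """Batch gradient descent for one-predictor logistic regression.

    xs: a single validity index (e.g., an accuracy score); ys: 0 = valid,
    1 = invalid per criterion PVTs. Returns (weight, bias).
    """
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted P(invalid | x)
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Fabricated, cleanly separable scores for illustration only
xs = [0, 1, 2, 8, 9, 10]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
preds = [1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 for x in xs]
```

Adding a second predictor (e.g., mean latency) means adding one more weight; comparing the classification accuracy of the one- and two-predictor models is the incremental-validity question this study asks.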
Affiliation(s)
- Brian M Cerny: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Tasha Rhoads: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Sophie I Leib: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kyle J Jennette: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S Basurto: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Nicole M Durkin: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R Soble: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
|
21
|
Victoria Symptom Validity Test: A Systematic Review and Cross-Validation Study. Neuropsychol Rev 2021; 31:331-348. [PMID: 33433828 DOI: 10.1007/s11065-021-09477-5] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2020] [Accepted: 01/03/2021] [Indexed: 12/12/2022]
Abstract
The Victoria Symptom Validity Test (VSVT) is a performance validity test (PVT) with over two decades of empirical backing, although methodological limitations within the extant literature restrict its clinical and research generalizability. Chief among these constraints are limited consensus on the most accurate index within the VSVT and the most appropriate cut-scores within each VSVT validity index. The current systematic review synthesizes existing VSVT validation studies and provides additional cross-validation in an independent sample using a known-groups design. We completed a systematic search of the literature, identifying 17 peer-reviewed studies for synthesis (7 simulation designs, 7 differential prevalence designs, and 3 known-groups designs). The independent cross-validation sample consisted of 200 mixed clinical neuropsychiatric patients referred for outpatient neuropsychological evaluation. Across all indices, Total item accuracy produced the strongest psychometric properties at an optimal cut-score of ≤ 40 (62% sensitivity/88% specificity). However, ROC curve analyses for all VSVT indices yielded statistically significant areas under the curve (AUCs = .73-.81), suggestive of moderate classification accuracy. Cut-scores derived using the independent cross-validation sample converged with some previous findings supporting cut-scores of ≤ 22 for Easy item accuracy and ≤ 40 for Total item accuracy, although divergent findings were noted for Difficult item accuracy. Overall, VSVT validity indicators have adequate diagnostic accuracy across populations, with the current study providing additional support for its use as a psychometrically sound PVT in clinical settings. However, caution is recommended among patients with certain verified clinical conditions (e.g., dementia) and those with pronounced working memory deficits due to concerns for an increased risk of false positives.
|
22
|
Resch ZJ, Rhoads T, Ovsiew GP, Soble JR. A Known-Groups Validation of the Medical Symptom Validity Test and Analysis of the Genuine Memory Impairment Profile. Assessment 2020; 29:455-466. [DOI: 10.1177/1073191120983919] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
This study cross-validated the Medical Symptom Validity Test (MSVT) in a mixed neuropsychiatric sample and examined its accuracy for identifying invalid neuropsychological performance using a known-groups design. Cross-sectional data from 129 clinical patients who completed the MSVT were examined. Validity groups were established using six independent criterion performance validity tests, which yielded 98 patients in the valid group and 31 in the invalid group. All MSVT subtest scores were significantly lower in the invalid group (ηp² = .22-.39). Using published cut-scores, sensitivities of 42% to 71% were found among the primary effort subtests, and 74% sensitivity/90% specificity was observed for the overall MSVT. Among this sample, the MSVT component validity scales produced areas under the curve of .78-.86, suggesting moderate classification accuracy. At optimal cut-scores, the MSVT primary effort validity scales demonstrated 55% to 71% sensitivity/91% to 93% specificity, with the Consistency subtest exhibiting the strongest psychometric properties. The MSVT exhibited relatively robust sensitivity and specificity, supporting its utility as a briefer freestanding performance validity test alternative to its predecessor, the Word Memory Test. Finally, the Genuine Memory Impairment Profile appears promising for patients with Major Neurocognitive Disorder, but is cautioned against at this time for those without significant functional decline in activities of daily living.
Affiliation(s)
- Zachary J. Resch: University of Illinois College of Medicine, Chicago, IL, USA; Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Tasha Rhoads: University of Illinois College of Medicine, Chicago, IL, USA; Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R. Soble: University of Illinois College of Medicine, Chicago, IL, USA
|
23
|
Abstract
OBJECTIVES A number of commonly used performance validity tests (PVTs) may be prone to high failure rates when used for individuals with severe neurocognitive deficits. This study investigated the validity of 10 PVT scores in justice-involved adults with fetal alcohol spectrum disorder (FASD), a neurodevelopmental disability stemming from prenatal alcohol exposure and linked with severe neurocognitive deficits. METHOD The sample comprised 80 justice-involved adults (ages 19-40) including 25 with confirmed or possible FASD and 55 where FASD was ruled out. Ten PVT scores were calculated, derived from Word Memory Test, Genuine Memory Impairment Profile, Advanced Clinical Solutions (Word Choice), the Wechsler Adult Intelligence Scale - Fourth Edition (Reliable Digit Span and age-corrected scaled scores (ACSS) from Digit Span, Coding, Symbol Search, Coding - Symbol Search, Vocabulary - Digit Span), and the Wechsler Memory Scale - Fourth Edition (Logical Memory II Recognition). RESULTS Participants with diagnosed/possible FASD were more likely to fail any single PVT, and failed a greater number of PVTs overall, compared to those without FASD. They were also more likely to fail based on Word Memory Test, Digit Span ACSS, Coding ACSS, Symbol Search ACSS, and Logical Memory II Recognition, compared to controls (35-76%). Across both groups, substantially more participants with IQ <70 failed two or more PVTs (90%), compared to those with an IQ ≥70 (44%). CONCLUSIONS Results highlight the need for additional research examining the use of PVTs in justice-involved populations with FASD.
24
Neale AC, Ovsiew GP, Resch ZJ, Soble JR. Feigning or forgetfulness: The effect of memory impairment severity on word choice test performance. Clin Neuropsychol 2020; 36:584-599. [DOI: 10.1080/13854046.2020.1799076]
Affiliation(s)
- Alec C. Neale
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J. Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
25
Abeare CA, Hurtubise JL, Cutler L, Sirianni C, Brantuo M, Makhzoum N, Erdodi LA. Introducing a forced choice recognition trial to the Hopkins Verbal Learning Test – Revised. Clin Neuropsychol 2020; 35:1442-1470. [DOI: 10.1080/13854046.2020.1779348]
Affiliation(s)
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nadeen Makhzoum
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
26
Abramson DA, Resch ZJ, Ovsiew GP, White DJ, Bernstein MT, Basurto KS, Soble JR. Impaired or invalid? Limitations of assessing performance validity using the Boston Naming Test. Appl Neuropsychol Adult 2020; 29:486-491. [DOI: 10.1080/23279095.2020.1774378]
Affiliation(s)
- Dayna A. Abramson
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Zachary J. Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Daniel J. White
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Matthew T. Bernstein
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S. Basurto
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
27
Ovsiew GP, Resch ZJ, Nayar K, Williams CP, Soble JR. Not so fast! Limitations of processing speed and working memory indices as embedded performance validity tests in a mixed neuropsychiatric sample. J Clin Exp Neuropsychol 2020; 42:473-484. [DOI: 10.1080/13803395.2020.1758635]
Affiliation(s)
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J. Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kritika Nayar
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychiatry and Behavioral Sciences, Northwestern Feinberg School of Medicine, Chicago, IL, USA
- Christopher P. Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
28
Graver C, Green P. Misleading conclusions about word memory test results in multiple sclerosis (MS) by Loring and Goldstein (2019). Appl Neuropsychol Adult 2020; 29:315-323. [DOI: 10.1080/23279095.2020.1748035]
29
Resch ZJ, Pham AT, Abramson DA, White DJ, DeDios-Stern S, Ovsiew GP, Castillo LR, Soble JR. Examining independent and combined accuracy of embedded performance validity tests in the California Verbal Learning Test-II and Brief Visuospatial Memory Test-Revised for detecting invalid performance. Appl Neuropsychol Adult 2020; 29:252-261. [DOI: 10.1080/23279095.2020.1742718]
Affiliation(s)
- Zachary J. Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Amber T. Pham
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, DePaul University, Chicago, IL, USA
- Dayna A. Abramson
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Daniel J. White
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Samantha DeDios-Stern
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Liliam R. Castillo
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
30
Soble JR, Alverson WA, Phillips JI, Critchfield EA, Fullen C, O’Rourke JJF, Messerly J, Highsmith JM, Bailey KC, Webber TA, Marceaux JC. Strength in numbers or quality over quantity? Examining the importance of criterion measure selection to define validity groups in performance validity test (PVT) research. Psychol Inj Law 2020. [DOI: 10.1007/s12207-019-09370-w]
31
Bailey KC, Webber TA, Phillips JI, Kraemer LDR, Marceaux JC, Soble JR. When time is of the essence: Preliminary findings for a quick administration of the Dot Counting Test. Arch Clin Neuropsychol 2019; 36:403-413. [DOI: 10.1093/arclin/acz058]
Abstract
Objective
Performance validity research has emphasized the need for briefer measures and, more recently, abbreviated versions of established free-standing tests to minimize neuropsychological evaluation costs/time burden. This study examined the accuracy of multiple abbreviated versions of the Dot Counting Test (“quick” DCT) for detecting invalid performance in isolation and in combination with the Test of Memory Malingering Trial 1 (TOMMT1).
Method
Data from a mixed clinical sample of 107 veterans (80 valid/27 invalid per independent validity measures and structured criteria) were included in this cross-sectional study; 47% of valid participants were cognitively impaired. Sensitivities/specificities of various 6- and 4-card DCT combinations were calculated and compared to the full, 12-card DCT. Combined models with the most accurate 6- and 4-card combinations and TOMMT1 were then examined.
Results
Receiver operating characteristic curve analyses were significant for all 6- and 4-card DCT combinations, with areas under the curve of .868–.897. The best 6-card combination (cards 1, 3, 5, 8, 11, and 12) had 56% sensitivity/90% specificity (E-score cut-off ≥14.5), and the best 4-card combination (cards 3, 4, 8, and 11) had 63% sensitivity/94% specificity (cut-off ≥16.75). The full DCT had 70% sensitivity/90% specificity (cut-off ≥16.00). Logistic regression revealed 95% classification accuracy when the 6-card or 4-card “quick” combinations were combined with TOMMT1, with the DCT combinations and TOMMT1 both emerging as significant predictors.
Conclusions
Abbreviated DCT versions utilizing 6- and 4-card combinations yielded comparable sensitivity/specificity as the full DCT. When these “quick” DCT combinations were further combined with an abbreviated memory-based performance validity test (i.e., TOMMT1), overall classification accuracy for identifying invalid performance was 95%.
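The sensitivity/specificity figures reported for these cut-offs follow directly from their definitions, which can be sketched as below. The score lists are made-up illustrations, not the study's data:

```python
# Illustrative sketch: sensitivity/specificity of a "flag if score >= cutoff"
# rule, as with the DCT E-score, where higher scores mean worse performance.

def sens_spec(invalid_scores, valid_scores, cutoff):
    """Sensitivity: share of truly invalid performances flagged.
    Specificity: share of truly valid performances NOT flagged."""
    sensitivity = sum(s >= cutoff for s in invalid_scores) / len(invalid_scores)
    specificity = sum(s < cutoff for s in valid_scores) / len(valid_scores)
    return sensitivity, specificity

# Hypothetical E-scores for invalid and valid groups, with a cut-off of 14.5.
sens, spec = sens_spec([18.0, 15.0, 12.0, 20.0], [5.0, 8.0, 14.0, 6.0, 9.0], 14.5)
```

Raising the cut-off trades sensitivity for specificity, which is why the abbreviated and full DCT versions report different cut-off/accuracy pairs.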
Affiliation(s)
- K Chase Bailey
- Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Troy A Webber
- Rehabilitation and Extended Care Line, Michael E. DeBakey VA Medical Center, Houston, TX 77030, USA
- Jacob I Phillips
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX 78229, USA
- Lindsay D R Kraemer
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX 78229, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX 78229, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL 60612, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL 60612, USA
32
Ventura LM, DeDios-Stern S, Oh A, Soble JR. They're not just little adults: The utility of adult performance validity measures in a mixed clinical pediatric sample. Appl Neuropsychol Child 2019; 10:297-307. [PMID: 31703167 DOI: 10.1080/21622965.2019.1685522]
Abstract
Performance validity tests (PVTs) have become a standard part of adult neuropsychological practice; however, they are less widely used in pediatric testing. The current study aimed to obtain a better understanding of the application of PVTs within a mixed clinical pediatric sample with a wide range of diagnoses, IQs, and ages. Cross-sectional data were analyzed from 130 consecutive pediatric patients evaluated as part of clinical care and diagnosed with a variety of medical/neurological, developmental, and psychiatric disorders. Patients were administered a battery of neuropsychological tests; results of intellectual functioning measures (i.e., Wechsler Intelligence Scale for Children-Fifth Edition [WISC-V] or Wechsler Adult Intelligence Scale-Fourth Edition [WAIS-IV]) and PVTs (i.e., Test of Memory Malingering [TOMM] and Digit Span [DS] subtests of the WISC-V/WAIS-IV) were analyzed to assess PVT performance across the sample, as well as age- and Full Scale IQ-related (FSIQ) effects on pass rate. Results suggested that the TOMM is an effective validity test for youth, as the TOMM adult cutoff score was also valid for children (88% pass rate at a TOMM Trial 1 cut-score ≥41; 71% pass rate at a TOMM Trial 1 cut-score ≥45). In contrast, Reliable Digit Span (RDS) was less accurate using standard adult cutoffs (34% failed RDS [cut-score ≤6], 54% failed RDS-r [cut-score ≤10], and 25% failed DS ACSS [cut-score ≤5]). Notably, although TOMM scores were not strongly influenced by IQ, DS scores increased as IQ increased. Overall, additional research establishing PVT accuracy within pediatric populations can inform new standards of practice.
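The pass rates reported at different TOMM cut-scores illustrate a general point: the pass rate is just the share of examinees scoring at or above the cut-score, so stricter cut-scores lower it. A minimal sketch with made-up scores (not the study's data):

```python
# Illustrative sketch: pass rate of a PVT at a given cut-score, where
# passing means scoring at or above the cut-score (as with TOMM Trial 1).

def pass_rate(scores, cut_score):
    """Proportion of examinees passing (score >= cut_score)."""
    return sum(s >= cut_score for s in scores) / len(scores)

# Hypothetical TOMM Trial 1 scores for eight examinees.
tomm_t1 = [50, 48, 39, 45, 41, 36, 49, 44]
rate_41 = pass_rate(tomm_t1, 41)  # more lenient cut-score
rate_45 = pass_rate(tomm_t1, 45)  # stricter cut-score yields a lower rate
```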
Affiliation(s)
- Lea M Ventura
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Pediatrics, University of Illinois College of Medicine, Chicago, IL, USA
- Samantha DeDios-Stern
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Alison Oh
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
33
Suchy Y. Introduction to special issue: Current trends in empirical examinations of performance and symptom validity. Clin Neuropsychol 2019; 33:1349-1353. [PMID: 31595824 DOI: 10.1080/13854046.2019.1672334]