1
Huston CA, Poreh AM. Preliminary validation of the computerized N-Tri - A Tri-Choice naming and response bias test. Appl Neuropsychol Adult 2024;31:1125-1131. PMID: 35995131. DOI: 10.1080/23279095.2022.2110872.
Abstract
The study describes the validation of a computerized adaptation of the novel Tri-Choice Naming and Response Bias Measure (N-Tri), developed to detect untruthful responding while being less susceptible to coaching than existing measures. We hypothesized that the N-Tri would have sensitivity and specificity comparable to traditional tests but improved accuracy for detecting coached simulators. Four hundred volunteers were randomly assigned to one of three groups: an uncoached simulator group (n = 118), a coached simulator group (n = 136), or a control group (n = 146). Both simulator groups were asked to feign concussion symptoms, but the coached group also received a test-taking strategy and a description of concussion symptoms. Participants were administered the computerized version of the new measure along with computerized adaptations of two well-validated response bias tests commonly used to detect cognitive malingering: the Reliable Digit Span (RDS) and the Portland Digit Recognition Test (PDRT). Our data show that the new measure correlated highly with the established measures; however, classification accuracy did not significantly increase compared to the traditional tests. Our findings indicate that the N-Tri performs comparably to existing forced-choice measures of response bias. Nevertheless, the N-Tri could improve the detection of response bias as existing tests become more recognizable to the public.
Affiliation(s)
- Chloe A Huston
- Department of Psychology, Cleveland State University, Cleveland, OH, USA
- Amir M Poreh
- Department of Psychology, Cleveland State University, Cleveland, OH, USA
- Department of Psychiatry, Case Western Reserve University School of Medicine, Cleveland, OH, USA
2
Kanser RJ, Rohling ML, Davis JJ. Determining whether false positive rates increase with performance validity test battery expansion. Clin Neuropsychol 2024:1-13. PMID: 39415334. DOI: 10.1080/13854046.2024.2416543.
Abstract
OBJECTIVE Performance validity test (PVT) misclassification is an important concern for neuropsychologists. The present study examined whether expanding PVT analysis from 4 PVTs to 8 PVTs could lead to elevated rates of false-positive performance validity misclassifications. METHOD Retrospective analysis of 443 patients who underwent a fixed neuropsychological test battery in a mixed clinical and forensic setting. Rates of failing two PVTs were compared to those predicted by Monte Carlo simulations when PVT analysis was extended from 4 PVTs to 8 PVTs. Indeterminate performers (IDT; n = 42; those who failed two PVTs only after PVT analysis was extended from 4 to 8 PVTs) were compared to a PVT-Fail group (n = 148; those who failed two PVTs in the 4-PVT battery or failed >2 PVTs). RESULTS The rate of failing two PVTs remained stable when PVT analysis was extended from 4 to 8 PVTs (12.9% to 11.9%) and was significantly lower than rates predicted by Monte Carlo simulations. Compared to the PVT-Fail group, the IDT group was significantly younger, had stronger neuropsychological test performance, and demonstrated comparable rates of forensic referral and of conditions with known neurocognitive sequelae (e.g., stroke, moderate-to-severe TBI). CONCLUSIONS Monte Carlo simulations significantly overestimated the rate of individuals failing two PVTs as PVT battery length doubled. The IDT group did not differ from the PVT-Fail group across variables with known PVT effects (e.g., age, referral context, neurologic diagnoses), lowering concern that this group consists entirely of false-positive PVT classifications. More research is needed to determine the effect of PVT battery length on validity classification accuracy.
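The Monte Carlo logic discussed in this abstract can be sketched in a few lines. This is an illustrative model only: it assumes independent PVTs that share a single hypothetical per-test false-positive rate of 10%, which is not taken from the study's data.

```python
import random

def p_fail_two_or_more(n_pvts: int, fp_rate: float) -> float:
    """Probability of failing >= 2 of n independent PVTs by chance alone,
    assuming every test has the same per-test false-positive rate."""
    p0 = (1 - fp_rate) ** n_pvts                            # fail none
    p1 = n_pvts * fp_rate * (1 - fp_rate) ** (n_pvts - 1)   # fail exactly one
    return 1 - p0 - p1

def monte_carlo(n_pvts: int, fp_rate: float, trials: int = 100_000, seed: int = 0) -> float:
    """Simulate valid performers and count how often >= 2 PVTs are failed."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if sum(rng.random() < fp_rate for _ in range(n_pvts)) >= 2
    )
    return hits / trials

# Under this independence assumption, doubling the battery from 4 to 8 PVTs
# sharply raises the predicted chance rate of >= 2 failures:
print(round(p_fail_two_or_more(4, 0.10), 3))  # 0.052
print(round(p_fail_two_or_more(8, 0.10), 3))  # 0.187
```

The gap between these predicted rates and the stable observed rate (12.9% to 11.9%) is what drives the study's conclusion that the independence-based simulations overestimate false positives.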
Affiliation(s)
- Robert J Kanser
- Department of Physical Medicine and Rehabilitation, University of North Carolina School of Medicine, Chapel Hill, NC, USA
- Jeremy J Davis
- Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
3
Grewal KS, Trites M, Kirk A, MacDonald SWS, Morgan D, Gowda-Sookochoff R, O'Connell ME. CVLT-II short form forced choice recognition in a clinical dementia sample: Cautions for performance validity assessment. Appl Neuropsychol Adult 2024;31:839-848. PMID: 35635794. DOI: 10.1080/23279095.2022.2079088.
Abstract
Performance validity tests are susceptible to false positives arising from genuine cognitive impairment (e.g., dementia); this has not been explored with the short form of the California Verbal Learning Test II (CVLT-II-SF). In a memory clinic sample, we examined whether CVLT-II-SF Forced Choice Recognition (FCR) scores differed across diagnostic groups, and how severity of impairment [Clinical Dementia Rating Sum of Boxes (CDR-SOB) or Mini-Mental State Examination (MMSE)] modulated test performance. Three diagnostic groups were identified: subjective cognitive impairment (SCI; n = 85), amnestic mild cognitive impairment (a-MCI; n = 17), and dementia due to Alzheimer's disease (AD; n = 50). Significant group differences in FCR were observed using one-way ANOVA; post hoc analysis indicated that the AD group performed significantly worse than the other groups. Using multiple regression, FCR performance was modeled as a function of diagnostic group, severity (MMSE or CDR-SOB), and their interaction. Results yielded significant main effects for MMSE and diagnostic group, with a significant interaction; CDR-SOB analyses were non-significant. Increases in impairment disproportionately affected FCR performance for persons with AD, warranting caution when applying research-based performance validity cutoffs in dementia populations. Future research should examine whether the CVLT-II-SF FCR is sufficiently specific for inclusion in best-practice dementia testing batteries.
Affiliation(s)
- Karl S Grewal
- Department of Psychology, University of Saskatchewan, Saskatoon, Canada
- Michaella Trites
- Department of Psychology, University of Victoria, Victoria, Canada
- Andrew Kirk
- Department of Medicine, University of Saskatchewan, Saskatoon, Canada
- Debra Morgan
- Canadian Centre for Health and Safety in Agriculture, University of Saskatchewan, Saskatoon, Canada
- Megan E O'Connell
- Department of Psychology, University of Saskatchewan, Saskatoon, Canada
4
Rohling ML, Binder LM, Larrabee GJ, Langhinrichsen-Rohling J. Forced choice test score of p ≤ .20 and failures on ≥ six performance validity tests results in similar Overall Test Battery Means. Clin Neuropsychol 2024;38:1193-1209. PMID: 38041021. DOI: 10.1080/13854046.2023.2284975.
Abstract
Objective: To determine whether similar levels of performance on the Overall Test Battery Mean (OTBM) occur at different forced-choice test (FCT) p-value failure levels, and to determine the OTBM levels associated with above-chance failures on various performance validity tests (PVTs). Method: OTBMs were computed from archival data obtained from four practices. We calculated each examinee's Estimated Premorbid Global Ability (EPGA) and OTBM. The sample comprised 5,103 examinees, 282 (5.5%) of whom scored below chance at p ≤ .20 on at least one FCT. Results: The OTBM associated with a failure at p ≤ .20 was equivalent to the OTBM associated with failing six or more PVTs at above-chance cutoffs. The mean OTBMs at increasingly strict FCT p cutoffs were similar (T scores in the 30s). As expected, there was an inverse relationship between the number of PVTs failed and examinees' OTBMs. Conclusions: The data support the use of p ≤ .20 as the probability level for testing the significance of below-chance performance on FCTs. The OTBM can be used to index the influence of invalid performance on outcomes, especially when an examinee scores below chance.
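The below-chance criterion described here is an exact one-tailed binomial test on a forced-choice score. A minimal sketch, using a hypothetical score rather than the study's data:

```python
from math import comb

def below_chance_p(correct: int, trials: int, p_chance: float = 0.5) -> float:
    """One-tailed binomial probability of obtaining `correct` or fewer items
    when every response is a guess (p_chance per item on a forced-choice test)."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
        for k in range(correct + 1)
    )

# Hypothetical examinee: 20 of 50 correct on a two-alternative forced-choice
# test. The one-tailed p is roughly .10, so this score counts as below chance
# under the p <= .20 criterion but not under a stricter p <= .05 criterion.
p = below_chance_p(20, 50)
print(round(p, 3))
```

This illustrates why the choice of probability level matters: a more lenient p ≤ .20 criterion flags scores that a p ≤ .05 criterion would miss.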
5
Kim S, Currao A, Brown E, Milberg WP, Fortier CB. Importance of validity testing in psychiatric assessment: evidence from a sample of multimorbid post-9/11 veterans. J Int Neuropsychol Soc 2024;30:410-419. PMID: 38014547. DOI: 10.1017/s1355617723000711.
Abstract
OBJECTIVE Performance validity tests (PVTs) and symptom validity tests (SVTs) are necessary components of neuropsychological testing to identify suboptimal performance and response bias that may impact diagnosis and treatment. The current study examined the clinical and functional characteristics of veterans who failed PVTs and the relationship between PVT and SVT failures. METHOD Five hundred and sixteen post-9/11 veterans participated in clinical interviews, neuropsychological testing, and several validity measures. RESULTS Veterans who failed 2+ PVTs performed significantly worse than veterans who failed one PVT in verbal memory (Cohen's d = .60-.69), processing speed (Cohen's d = .68), working memory (Cohen's d = .98), and visual memory (Cohen's d = .88-1.10). Individuals with 2+ PVT failures had greater posttraumatic stress (PTS; β = 0.16; p = .0002) and worse self-reported depression (β = 0.17; p = .0001), anxiety (β = 0.15; p = .0007), sleep (β = 0.10; p = .0233), and functional outcomes (β = 0.15; p = .0009) compared to veterans who passed PVTs. Of the sample, 7.8% failed the SVT (Validity-10; ≥19 cutoff); multiple PVT failures were significantly associated with Validity-10 failure at the ≥19 and ≥23 cutoffs (p's < .0012). The Validity-10 had moderate correspondence in predicting 2+ PVT failures (AUC = 0.83; 95% CI = 0.76, 0.91). CONCLUSION PVT failures are associated with psychiatric factors, but not traumatic brain injury (TBI). PVT failures predict SVT failure and vice versa. Standard care should include SVTs and PVTs in all clinical assessments, not just neuropsychological assessments, particularly in clinically complex populations.
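The group contrasts above are reported as Cohen's d. A minimal sketch of that computation with a pooled standard deviation, on made-up scores rather than the study's data:

```python
from statistics import mean, stdev

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Cohen's d: mean difference divided by the pooled sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = (
        (na - 1) * stdev(group_a) ** 2 + (nb - 1) * stdev(group_b) ** 2
    ) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Illustrative (invented) memory T-scores for a PVT-pass group vs. a
# 2+ PVT-failure group:
passed = [52.0, 48.0, 50.0, 55.0, 49.0]
failed = [47.0, 44.0, 49.0, 50.0, 43.0]
print(round(cohens_d(passed, failed), 2))
```

With real samples of this study's size, d values in the .60-1.10 range (as reported) indicate medium-to-large separations between the pass and fail groups.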
Affiliation(s)
- Sahra Kim
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Alyssa Currao
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Emma Brown
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- William P Milberg
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- Catherine B Fortier
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA
6
Henry GK. Ability of the Wisconsin Card-Sorting Test-64 as an embedded measure to identify noncredible neurocognitive performance in mild traumatic brain injury litigants. Appl Neuropsychol Adult 2024:1-7. PMID: 38684109. DOI: 10.1080/23279095.2024.2348012.
Abstract
OBJECTIVE To investigate the ability of selective measures on the Wisconsin Card Sorting Test-64 (WCST-64) to predict noncredible neurocognitive dysfunction in a large sample of mild traumatic brain injury (mTBI) litigants. METHOD Participants included 114 adults who underwent a comprehensive neuropsychological examination. Criterion groups were formed based upon performance on stand-alone measures of cognitive performance validity (PVTs). RESULTS Participants failing PVTs performed worse across all WCST-64 dependent variables of interest compared to participants who passed PVTs. Receiver operating characteristic curve analysis revealed that only the number of categories completed was a significant predictor of PVT status. Multivariate logistic regression did not add to classification accuracy. CONCLUSION Consideration of noncredible executive functioning may be warranted in mTBI litigants who complete ≤1 category on the WCST-64.
7
Salo SK, Harries CA, Riddoch MJ, Smith AD. Visuospatial memory in apraxia: Exploring quantitative drawing metrics to assess the representation of local and global information. Mem Cognit 2024. PMID: 38334870. DOI: 10.3758/s13421-024-01531-w.
Abstract
Neuropsychological evidence suggests that visuospatial memory is subserved by two separable processing systems, with dorsal underpinnings for global form and ventral underpinnings for the integration of part elements. Previous drawing studies have explored the effects of Gestalt organisation upon memory for hierarchical stimuli, and here we present an exploratory study of the performance of an apraxic patient (MH) with dorsal stream damage. We presented MH with a stimulus set (previously reported by Riddoch et al., Cognitive Neuropsychology, 20(7), 641-671, 2003) and devised a novel quantitative scoring system to obtain a finer-grained insight into performance. Stimuli possessed either good or poor Gestalt qualities and were reproduced in a copy condition and two visual memory conditions (with unlimited viewing before the model was removed, or with 3 s viewing). MH's copying performance was impaired in comparison to younger adult and age-matched older adult controls, with a variety of errors at the local level but relatively few at the global level. However, his performance in the visual memory conditions revealed impairments at the global level. For all participants, drawing errors were modulated by the Gestalt qualities of the stimuli, with accuracy at both the global and local levels lower for stimuli with poor Gestalt qualities in all conditions. These data extend previous observations of this patient and support theories that posit interaction between dorsal and ventral streams in the representation of hierarchical stimuli. We discuss the implications of these findings for our understanding of visuospatial memory in neurological patients, and we also evaluate the application of quantitative metrics to the interpretation of drawings.
Affiliation(s)
- Sarah K Salo
- School of Psychology, University of Plymouth, Plymouth, UK
- Brain Research and Imaging Centre, University of Plymouth, Plymouth, UK
- M Jane Riddoch
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- School of Psychology, University of Birmingham, Birmingham, UK
- Alastair D Smith
- School of Psychology, University of Plymouth, Plymouth, UK
- Brain Research and Imaging Centre, University of Plymouth, Plymouth, UK
8
Rohling ML, Demakis GJ, Langhinrichsen-Rohling J. Lowered cutoffs to reduce false positives on the Word Memory Test. J Clin Exp Neuropsychol 2024;46:67-79. PMID: 38362939. DOI: 10.1080/13803395.2024.2314736.
Abstract
OBJECTIVE To adjust the decision criterion for the Word Memory Test (WMT; Green, 2003) to minimize the frequency of false positives. METHOD Archival data were combined into a database (n = 3,210) to examine the best cut score for the WMT. We compared results based on the original scoring rules with those based on adjusted scoring rules, using a criterion based on 16 performance validity tests (PVTs) exclusive of the WMT. Cutoffs based on peer-reviewed publications and test manuals were used, and the resulting PVT composite was considered the best estimate of validity status. We targeted a specificity of .90, with a false-positive rate of less than .10 across multiple samples. RESULTS Each examinee was administered the WMT as well as, on average, 5.5 (SD = 2.5) other PVTs. Based on the original WMT scoring rules, 31.8% of examinees failed. Using a single failure on the criterion PVT (C-PVT), the base rate of failure was 45.9%; requiring two or more C-PVT failures dropped the failure rate to 22.8%. Applying a contingency analysis (i.e., χ2) to the two-failures C-PVT model and the original WMT rules resulted in only 65.3% agreement. However, our adjusted rules for the WMT, which relied on only the Immediate Recognition (IR) and Delayed Recognition (DR) subtest scores with a cutoff of 77.5%, yielded 80.8% agreement with the C-PVT criterion, an improvement of 12.1% in identification. The adjustment resulted in a 49.2% reduction in false positives while preserving a sensitivity of 53.6%; the specificity of the new rules was 88.8%, for a false-positive rate of 11.2%. CONCLUSIONS Results supported lowering the cut score for correct responding from 82.5% to 77.5%. We also recommend discontinuing use of the Consistency subtest score in determining WMT failure.
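The sensitivity/specificity trade-off reported for the adjusted cutoff falls out mechanically from a 2x2 table against the criterion. A sketch with invented scores (the 77.5% and 82.5% cutoffs match the abstract; the data do not):

```python
def classification_stats(scores, invalid, cutoff):
    """Sensitivity, specificity, and false-positive rate when percent-correct
    scores at or below `cutoff` are classified as PVT failures. `invalid`
    holds the criterion (C-PVT) determination for each examinee."""
    pairs = list(zip(scores, invalid))
    tp = sum(s <= cutoff and inv for s, inv in pairs)        # true positives
    fp = sum(s <= cutoff and not inv for s, inv in pairs)    # false positives
    tn = sum(s > cutoff and not inv for s, inv in pairs)     # true negatives
    fn = sum(s > cutoff and inv for s, inv in pairs)         # false negatives
    return tp / (tp + fn), tn / (tn + fp), fp / (fp + tn)

# Invented percent-correct scores and criterion-PVT statuses:
scores = [95, 90, 85, 80, 76, 70, 88, 74, 79, 92]
invalid = [False, False, False, False, True, True, False, True, False, False]

print(classification_stats(scores, invalid, 82.5))  # higher (original-style) cutoff
print(classification_stats(scores, invalid, 77.5))  # lowered cutoff
```

In this toy data set, lowering the cutoff removes the false positives without losing any true positives, which is the direction of effect the study reports (at the cost, in real data, of some sensitivity).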
9
Crişan I, Sava FA, Maricuţoiu LP. Strategies of feigning mild head injuries related to validity indicators and types of coaching: Results of two experimental studies. Appl Neuropsychol Adult 2023;30:705-715. PMID: 34510965. DOI: 10.1080/23279095.2021.1973004.
Abstract
OBJECTIVE In this paper, we analyzed differences among uncoached, symptom-coached, and test-coached simulators in their strategies for feigning mild head injuries. METHOD Healthy undergraduates (n = 67 in the first study; n = 48 in the second study), randomized into three simulator groups, were assessed with four experimental memory tests. In the first study, tests were administered face-to-face; in the second study, the procedure was adapted for online testing. RESULTS Online simulators approached testing differently than face-to-face participants (U tests < 920, p < .05). Nevertheless, both samples favored strategies such as memory loss, error making, concentration difficulties, and slow responding. Except for slow responding and concentration difficulties, the favored strategies correlated with validity indicators. In the first study, test-coached simulators (m = 4.58-5.68, SD = 2.2-3) used these strategies less often than uncoached participants (m = 5.25-5.88, SD = 2.26-2.84). In the second study, test-coached participants (m = 3.8-5.6, SD = 1.51-2.2) employed them less often than uncoached (m = 6.21-7.29, SD = 1.25-1.85) and symptom-coached participants (m = 6.14-6.79, SD = 1.69-2.76). DISCUSSION Similarities and differences between online and face-to-face assessments are discussed, and we recommend combining heterogeneous indicators to detect feigning strategies.
Affiliation(s)
- Iulia Crişan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Florin Alin Sava
- Department of Psychology, West University of Timişoara, Timişoara, Romania
10
Bütz MR, English JV, Meyers JE, Cohen LJ. Threats to the integrity of psychological assessment: The misuse of test raw data and materials. Appl Neuropsychol Adult 2023:1-20. PMID: 37573544. DOI: 10.1080/23279095.2023.2241094.
Abstract
In the practice of psychological assessment, the American Psychological Association (APA), the National Academy of Neuropsychology (NAN), other associations, and test vendors have warned for decades against the disclosure of test raw data and test materials. Psychological assessment occurs across several different practice environments, and test raw data are a particularly sensitive aspect of practice considering what they implicitly represent about a client/patient, a concept developed further in this paper. Test materials are often intellectual property protected by copyrights and user agreements. It follows that improper management of the release of test raw data and test materials threatens the scientific integrity of psychological assessment. Here, the matters of test raw data, test materials, and different practice environments are addressed to highlight the challenges involved with improper releases and to offer guidance concerning good-faith efforts to preserve the integrity of psychological assessment and legal agreements. The unique demands of forensic practice are also discussed, including attorneys' needs for cross-examination and discovery, which may place psychologists (and other duly vetted evaluators) in conflict with their commitment to professional ethical codes and legal agreements. Important threats to the proper use of test raw data and test materials include uninformed professionals and compromised evaluators. In this paper, the mishandling of test raw data and materials by both psychologists and other evaluators is reviewed; representative case examples, including those from the literature, are provided; pertinent case law is discussed; and practical stepwise conflict resolutions are offered.
Affiliation(s)
- Michael R Bütz
- Aspen Practice, P.C. and Intermountain Healthcare, Billings, MT, USA
- John E Meyers
- Meyers Neuropsychological Services, Clermont, FL, USA
11
Keatley ES, Bombardier CH, Watson E, Kumar RG, Novack T, Monden KR, Dams-O'Connor K. Cognitive Performance, Depression, and Anxiety 1 Year After Traumatic Brain Injury. J Head Trauma Rehabil 2023;38:E195-E202. PMID: 36730989. PMCID: PMC10102243. DOI: 10.1097/htr.0000000000000819.
Abstract
OBJECTIVES To evaluate associations between depression, anxiety, and cognitive impairment among individuals with complicated mild to severe traumatic brain injury (TBI) 1 year after injury. SETTING Multiple inpatient rehabilitation units across the United States. PARTICIPANTS A total of 498 adults 16 years and older who completed inpatient rehabilitation for complicated mild to severe TBI. DESIGN Secondary analysis of a prospective, multicenter, cross-sectional observational cohort study. MAIN MEASURES Assessments of depression (Traumatic Brain Injury Quality of Life [TBI-QOL] Depression) and anxiety (TBI-QOL Anxiety) as well as a telephone-based brief screening measure of cognitive functioning (Brief Test of Adult Cognition by Telephone [BTACT]). RESULTS We found an inverse relationship between self-reported depression symptoms and the BTACT Composite score (β = -0.18, P < .01) and between anxiety symptoms and the BTACT Composite score (β = -0.20, P < .01). There was no evidence that this relationship varied by injury severity. Exploratory analyses showed that depression and anxiety were negatively correlated with both the BTACT Executive Function factor score and the BTACT Memory factor score. CONCLUSIONS Both depression and anxiety have a small but significant negative association with cognitive performance in the context of complicated mild to severe TBI. These findings highlight the importance of considering depression and anxiety when interpreting TBI-related neuropsychological impairments, even among those with more severe TBI.
Affiliation(s)
- Eva S Keatley
- Department of Physical Medicine and Rehabilitation, Johns Hopkins Medicine, Baltimore, Maryland (Dr Keatley); Departments of Rehabilitation and Human Performance (Drs Watson, Kumar, and Dams-O'Connor) and Neurology (Dr Dams-O'Connor), Icahn School of Medicine at Mount Sinai, New York, New York; Department of Physical Medicine and Rehabilitation, University of Washington, Seattle (Dr Bombardier); Department of Physical Medicine and Rehabilitation, University of Alabama at Birmingham, Birmingham (Dr Novack); and Research Department, Craig Hospital, Englewood, Colorado, and Department of Rehabilitation Medicine, University of Minnesota Medical School, Minneapolis (Dr Monden)
12
Jinkerson JD, Lu LH, Kennedy J, Armistead-Jehle P, Nelson JT, Seegmiller RA. Grooved Pegboard adds incremental value over memory-apparent performance validity tests in predicting psychiatric symptom report. Appl Neuropsychol Adult 2023:1-9. PMID: 37094095. DOI: 10.1080/23279095.2023.2192409.
Abstract
The present study evaluated whether Grooved Pegboard (GPB), when used as a performance validity test (PVT), can incrementally predict psychiatric symptom report elevations beyond memory-apparent PVTs. Participants (N = 111) were military personnel, predominantly White (84%) and male (76%), with a mean age of 43 years (SD = 12) and a mean of 16 years of education (SD = 2). Individuals with disorders potentially compromising motor dexterity were excluded. Participants were administered the GPB, three memory-apparent PVTs (Medical Symptom Validity Test, Non-Verbal Medical Symptom Validity Test, Reliable Digit Span), and a symptom validity test (Personality Assessment Inventory Negative Impression Management [NIM]). Results from the three memory-apparent PVTs were entered into a model predicting NIM, where failure of two or more PVTs was categorized as evidence of non-credible responding. Hierarchical regression revealed that the non-dominant hand GPB T-score incrementally predicted NIM beyond the memory-apparent PVTs (F(2,108) = 16.30, p < .001; R2 change = .05, β = -0.24, p < .01). In a second hierarchical regression, GPB performance was dichotomized into pass or fail using T-score cutoffs (≤29 for either hand, ≤31 for both). Non-dominant hand GPB again predicted NIM beyond the memory-apparent PVTs (F(2,108) = 18.75, p < .001; R2 change = .08, β = -0.28, p < .001). Results indicated that noncredible/failing GPB performance adds incremental value over memory-apparent PVTs in predicting psychiatric symptom report.
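The hierarchical (incremental) regression logic used here can be sketched with two OLS fits: R² with the memory-apparent PVT predictors alone, then R² after adding the GPB score, with the difference being the R² change. The data below are simulated; only the structure of the analysis mirrors the abstract.

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 from an ordinary least squares fit with an intercept column added."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 200
memory_pvts = rng.normal(size=(n, 3))   # step 1: memory-apparent PVT scores
gpb = rng.normal(size=n)                # step 2: Grooved Pegboard T-score
# Simulated symptom-report outcome with a real (invented) GPB contribution:
nim = memory_pvts @ np.array([0.3, 0.2, 0.1]) - 0.4 * gpb + rng.normal(size=n)

r2_step1 = r_squared(memory_pvts, nim)
r2_step2 = r_squared(np.column_stack([memory_pvts, gpb]), nim)
print(round(r2_step2 - r2_step1, 3))    # incremental R^2 attributable to GPB
```

The study's R² change values (.05 and .08) are this same quantity computed on the clinical data, with significance assessed via the reported F tests.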
Affiliation(s)
- Lisa H Lu
- Brooke Army Medical Center, JBSA - Ft Sam Houston, San Antonio, TX, USA
- TBI Center of Excellence (TBICoE), Arlington, VA, USA
- General Dynamics Information Technology, Falls Church, VA, USA
- Jan Kennedy
- Brooke Army Medical Center, JBSA - Ft Sam Houston, San Antonio, TX, USA
- TBI Center of Excellence (TBICoE), Arlington, VA, USA
- General Dynamics Information Technology, Falls Church, VA, USA
13
Robinson A, Huber M, Breaux E, Pugh E, Calamia M. Failing The b Test: The influence of cutoff scores and criterion group approaches in a sample of adults referred for psychoeducational evaluation. J Clin Exp Neuropsychol 2022;44:619-626. PMID: 36727266. DOI: 10.1080/13803395.2022.2153805.
Abstract
OBJECTIVE Previous research has shown that both criterion grouping approaches and cutoff scores can impact PVT classification accuracy statistics. This study examined the influence of cutoff scores and criterion grouping approaches on The b Test, a measure designed to identify feigned impairment in visual scanning, processing speed, and letter identification. METHOD Two hundred ninety-seven adults referred for psychoeducational testing were included, the majority of whom were seeking academic accommodations (n = 215). Cutoff scores of ≥82, ≥90, and ≥120 were utilized along with two criterion group approaches: 0 PVT failures vs. ≥2 PVT failures, and 0 PVT failures vs. ≥1 PVT failure. RESULTS Failure rates for The b Test in the overall sample ranged from 12.5% to 16.2%. Subgroup analyses in those referred specifically for ADHD revealed failure rates ranging from 10.5% to 14.2%. ROC curves in the full sample and the ADHD subsample demonstrated significant AUCs under both criterion group approaches (AUC = .66-.78). Sensitivity and specificity varied as a function of criterion group approach and cutoff score, with 0 PVT failures vs. ≥2 PVT failures yielding the greatest sensitivity when maximizing specificity at ≥.90 in both the full sample and the ADHD sample. CONCLUSIONS The results demonstrate that criterion approaches and cutoff scores impact classification accuracy of The b Test, with the 0 PVT vs. ≥2 PVT failures approach demonstrating the greatest classification accuracy. Special consideration should be given to clinical decision making in the context of psychoeducational evaluations, given that a large portion of individuals seeking accommodations fail only one PVT. Limitations of this study are also discussed.
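The "maximize sensitivity while holding specificity at ≥ .90" selection described here is easy to make concrete. A sketch with invented b Test scores (higher = worse, so failure means scoring at or above the cutoff), using the same candidate cutoffs as the abstract:

```python
def pick_cutoff(valid_scores, invalid_scores, cutoffs, min_specificity=0.90):
    """Return (cutoff, sensitivity, specificity) maximizing sensitivity among
    cutoffs whose specificity is at least min_specificity; None if none qualify."""
    best = None
    for c in cutoffs:
        # specificity: valid performers correctly classified as passing (< cutoff)
        specificity = sum(s < c for s in valid_scores) / len(valid_scores)
        # sensitivity: invalid performers correctly classified as failing (>= cutoff)
        sensitivity = sum(s >= c for s in invalid_scores) / len(invalid_scores)
        if specificity >= min_specificity and (best is None or sensitivity > best[1]):
            best = (c, sensitivity, specificity)
    return best

# Invented scores for examinees passing vs. failing the criterion PVTs:
valid_scores = [60, 70, 75, 80, 85, 88, 65, 72, 78, 95]
invalid_scores = [85, 95, 130, 110, 88, 100]

print(pick_cutoff(valid_scores, invalid_scores, [82, 90, 120]))
```

With these toy data, the lowest cutoff fails the specificity constraint, the highest one sacrifices sensitivity, and the middle cutoff wins, illustrating why reported sensitivities depend jointly on cutoff and criterion grouping.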
Affiliation(s)
- Anthony Robinson
- Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Marissa Huber
- Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Eathan Breaux
- Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Erika Pugh
- Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Matthew Calamia
- Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
14
Kanser RJ, Rapport LJ, Hanks RA, Patrick SD. Utility of WAIS-IV Digit Span indices as measures of performance validity in moderate to severe traumatic brain injury. Clin Neuropsychol 2022;36:1950-1963. PMID: 34044725. DOI: 10.1080/13854046.2021.1921277.
Abstract
Objective: The addition of Sequencing to WAIS-IV Digit Span (DS) brought about new Reliable Digit Span (RDS) indices and an Age-Corrected Scaled Score that includes Sequencing trials. Reports have indicated that these new performance validity tests (PVTs) are superior to the traditional RDS; however, comparisons in the context of known neurocognitive impairment are sparse. This study compared DS-derived PVT classification accuracies in a design that included adults with verified TBI. Methods: Participants included 64 adults with moderate-to-severe TBI (TBI), 51 healthy adults coached to simulate TBI (SIM), and 78 healthy comparisons (HC). Participants completed the WAIS-IV DS subtest in the context of a larger test battery. Results: Kruskal-Wallis tests indicated that all DS indices differed significantly across groups. Post hoc contrasts revealed that only RDS Forward and the traditional RDS differed significantly between SIM and TBI. ROC analyses indicated that RDS variables were comparable predictors of SIM vs. HC; however, the traditional RDS showed the highest sensitivity when approximating 90% specificity for SIM vs. TBI. A greater percentage of TBI scored RDS Sequencing < 1 compared to SIM and HC. Conclusion: In the context of moderate-to-severe TBI, the DS-derived PVTs showed comparable discriminability. However, the Greiffenstein et al. traditional RDS demonstrated the best classification accuracy with respect to specificity/sensitivity balance. This relative superiority may reflect that individuals with verified TBI are more likely to perseverate on prior instructions during DS Sequencing. Findings highlight the importance of including individuals with verified TBI when evaluating and developing PVTs.
Affiliation(s)
- Robert J Kanser
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Lisa J Rapport
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Robin A Hanks
- Department of Physical Medicine and Rehabilitation, Wayne State University, Detroit, MI, USA
- Sarah D Patrick
- Department of Psychology, Wayne State University, Detroit, MI, USA

15
Jennette KJ, Williams CP, Resch ZJ, Ovsiew GP, Durkin NM, O'Rourke JJF, Marceaux JC, Critchfield EA, Soble JR. Assessment of differential neurocognitive performance based on the number of performance validity test failures: A cross-validation study across multiple mixed clinical samples. Clin Neuropsychol 2022; 36:1915-1932. [PMID: 33759699] [DOI: 10.1080/13854046.2021.1900398]
Abstract
Objective: This cross-sectional study examined the effect of number of Performance Validity Test (PVT) failures on neuropsychological test performance among a demographically diverse Veteran (VA) sample (n = 76) and academic medical sample (AMC; n = 128). A secondary goal was to investigate the psychometric implications of including versus excluding those with one PVT failure when cross-validating a series of embedded PVTs. Method: All patients completed the same six criterion PVTs, with the AMC sample completing three additional embedded PVTs. Neurocognitive test performance differences were examined based on number of PVT failures (0, 1, 2+) for both samples, and effect of number of criterion failures on embedded PVT performance was analyzed among the AMC sample. Results: Both groups with 0 or 1 PVT failures performed better than those with ≥2 PVT failures across most cognitive tests. There were nonsignificant differences between those with 0 or 1 PVT failures except for one test in the AMC sample. Receiver operating characteristic curve analyses found no differences in optimal cut score based on number of PVT failures when retaining/excluding one PVT failure. Conclusion: Findings support the use of ≥2 PVT failures as indicative of performance invalidity. These findings strongly support including those with one PVT failure with those with zero PVT failures in diagnostic accuracy studies, given that their inclusion reflects actual clinical practice, does not reduce sample sizes, and does not artificially deflate neurocognitive test results or inflate PVT classification accuracy statistics.
Affiliation(s)
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Christopher P Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Nicole M Durkin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Justin J F O'Rourke
- Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Edan A Critchfield
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA

16
Profile of Embedded Validity Indicators in Criminal Defendants with Verified Valid Neuropsychological Test Performance. Arch Clin Neuropsychol 2022; 38:513-524. [DOI: 10.1093/arclin/acac073]
Abstract
Objective
Few studies have examined the use of embedded validity indicators (EVIs) in criminal-forensic practice settings, where judgements regarding performance validity can carry severe consequences for the individual and society. This study sought to examine how various EVIs perform in criminal defendant populations and to determine relationships between EVI scores and intrapersonal variables thought to influence performance validity.
Method
Performance on 16 empirically established EVI cutoffs was examined in a sample of 164 criminal defendants with valid performance who were referred for forensic neuropsychological evaluation. Subsequent analyses examined the relationship between EVI scores and intrapersonal variables in 83 of these defendants.
Results
Half of the EVIs (within the Wechsler Adult Intelligence Scale Digit Span Total, Conners’ Continuous Performance Test Commissions, Wechsler Memory Scale Logical Memory I and II, Controlled Oral Word Association Test, Trail Making Test Part B, and Stroop Word and Color) performed as intended in this sample. The EVIs that did not perform as intended were significantly influenced by relevant intrapersonal variables, including below-average intellectual functioning and history of moderate–severe traumatic brain injury and neurodevelopmental disorder.
Conclusions
This study identifies multiple EVIs appropriate for use in criminal-forensic settings. However, based on these findings, practitioners may wish to be selective in choosing and interpreting EVIs for forensic evaluations of criminal court defendants.
17
Meyers JE, Miller RM, Vincent AS. A Validity Measure for the Automated Neuropsychological Assessment Metrics. Arch Clin Neuropsychol 2022; 37:1765-1771. [PMID: 35780310] [DOI: 10.1093/arclin/acac046]
Abstract
The Automated Neuropsychological Assessment Metrics (ANAM) is one of the most widely used and validated neuropsychological instruments for assessing cognition. The ANAM Test System includes a reporting tool, the ANAM Validity Indicator Report that generates scores for the embedded effort measure, the ANAM Performance Validity Index (APVI). The current study seeks to develop a proxy for the APVI, using raw subtest summary test scores. This would be useful for situations where the APVI score is unavailable (e.g., validity report not generated at the time of the assessment) or when the item level data needed to generate this score are inaccessible. ANAM scores from a large data set of 1,000,000+ observations were used for this retrospective analysis. Results of linear regression analysis suggest that the APVI can be reasonably estimated from the raw subtest summary test scores that are presented on the ANAM Performance Report. Clinically, this means that an important step in the interpretation process, checking the validity of test data, can still be performed even when the APVI is not available.
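The estimation step this abstract describes — predicting a validity index from raw subtest summary scores by linear regression — can be sketched with ordinary least squares. All scores and weights below are simulated for illustration; this is not the published APVI model or its coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw subtest summary scores (500 examinees x 4 subtests)
X = rng.normal(50, 10, size=(500, 4))
# Hypothetical "true" index: a weighted combination of subtests plus noise
true_w = np.array([0.40, 0.25, 0.20, 0.15])
y = X @ true_w + rng.normal(0, 2.0, size=500)

# Fit OLS: prepend an intercept column, solve for the weights
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Proxy index for a new examinee's subtest scores (leading 1 = intercept)
new_scores = np.array([1.0, 55, 48, 52, 45])
proxy = new_scores @ coef
print(round(float(proxy), 1))
```

With enough observations the recovered weights track the generating weights closely, which is the sense in which the abstract's proxy can "reasonably estimate" the index when the item-level data are unavailable.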
Affiliation(s)
- John E Meyers
- Meyers Neuropsychological Services, 11727 Graces Way, Clermont, FL 34711, USA
- Ronald Mellado Miller
- Utah Valley University, Woodbury School of Business, Strategic Management and Operations, MS 119, 800 W. University Parkway, Orem, UT 84058-8703, USA

18
Uiterwijk D, Stargatt R, Crowe SF. Objective Cognitive Outcomes and Subjective Emotional Sequelae in Litigating Adults with a Traumatic Brain Injury: The Impact of Performance and Symptom Validity Measures. Arch Clin Neuropsychol 2022; 37:1662-1687. [PMID: 35704852] [DOI: 10.1093/arclin/acac039]
Abstract
OBJECTIVE This study examined the relative contribution of performance and symptom validity in litigating adults with traumatic brain injury (TBI), as a function of TBI severity, and examined the relationship between self-reported emotional symptoms and cognitive tests scores while controlling for validity test performance. METHOD Participants underwent neuropsychological assessment between January 2012 and June 2021 in the context of compensation-seeking claims related to a TBI. All participants completed a cognitive test battery, the Personality Assessment Inventory (including symptom validity tests; SVTs), and multiple performance validity tests (PVTs). Data analyses included independent t-tests, one-way ANOVAs, correlation analyses, and hierarchical multiple regression. RESULTS A total of 370 participants were included. Atypical PVT and SVT performance were associated with poorer cognitive test performance and higher emotional symptom report, irrespective of TBI severity. PVTs and SVTs had an additive effect on cognitive test performance for uncomplicated mTBI, but less so for more severe TBI. The relationship between emotional symptoms and cognitive test performance diminished substantially when validity test performance was controlled, and validity test performance had a substantially larger impact than emotional symptoms on cognitive test performance. CONCLUSION Validity test performance has a significant impact on the neuropsychological profiles of people with TBI, irrespective of TBI severity, and plays a significant role in the relationship between emotional symptoms and cognitive test performance. Adequate validity testing should be incorporated into every neuropsychological assessment, and associations between emotional symptoms and cognitive outcomes that do not consider validity testing should be interpreted with extreme caution.
Affiliation(s)
- Daniel Uiterwijk
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Robyn Stargatt
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Simon F Crowe
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia

19
Jurick SM, Eglit GML, Delis DC, Bondi MW, Jak AJ. D-KEFS trail making test as an embedded performance validity measure. J Clin Exp Neuropsychol 2022; 44:62-72. [DOI: 10.1080/13803395.2022.2073334]
Affiliation(s)
- S. M. Jurick
- Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, California, USA
- Department of Psychiatry, University of California San Diego, San Diego, California, USA
- Research Service, VA San Diego Healthcare System, San Diego, California, USA
- G. M. L. Eglit
- Department of Psychiatry, University of California San Diego, San Diego, California, USA
- Research Service, VA San Diego Healthcare System, San Diego, California, USA
- D. C. Delis
- Department of Psychiatry, University of California San Diego, San Diego, California, USA
- M. W. Bondi
- Department of Psychiatry, University of California San Diego, San Diego, California, USA
- Research Service, VA San Diego Healthcare System, San Diego, California, USA
- A. J. Jak
- Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, California, USA
- Department of Psychiatry, University of California San Diego, San Diego, California, USA
- Research Service, VA San Diego Healthcare System, San Diego, California, USA

20
The Relationship Between Cognitive Functioning and Symptoms of Depression, Anxiety, and Post-Traumatic Stress Disorder in Adults with a Traumatic Brain Injury: a Meta-Analysis. Neuropsychol Rev 2021; 32:758-806. [PMID: 34694543] [DOI: 10.1007/s11065-021-09524-1]
Abstract
A thorough understanding of the relationship between cognitive test performance and symptoms of depression, anxiety, or post-traumatic stress disorder (PTSD) in people with traumatic brain injury (TBI) is important given the high prevalence of these emotional symptoms following injury. It is also important to understand whether these relationships are affected by TBI severity, and the validity of test performance and symptom report. This meta-analysis was conducted to investigate whether these symptoms are associated with cognitive test performance alterations in adults with a TBI. This meta-analysis was prospectively registered on the PROSPERO International Prospective Register of Systematic Reviews website (registration number: CRD42018089194). The electronic databases Medline, PsycINFO, and CINAHL were searched for journal articles published up until May 2020. In total, 61 studies were included, which enabled calculation of pooled effect sizes for the cognitive domains of immediate memory (verbal and visual), recent memory (verbal and visual), attention, executive function, processing speed, and language. Depression had a small, negative relationship with most cognitive domains. These relationships remained, for the most part, when samples with mild TBI (mTBI)-only were analysed separately, but not for samples with more severe TBI (sTBI)-only. A similar pattern of results was found in the anxiety analysis. PTSD had a small, negative relationship with verbal memory, in samples with mTBI-only. No data were available for the PTSD analysis with sTBI samples. Moderator analyses indicated that the relationships between emotional symptoms and cognitive test performance may be impacted to some degree by exclusion of participants with atypical performance on performance validity tests (PVTs) or symptom validity tests (SVTs); however, study numbers were small and changes in effect size were not statistically significant. These findings are useful in synthesising what is currently known about the relationship between cognitive test performance and emotional symptoms in adults with TBI, demonstrating significant, albeit small, relationships between emotional symptoms and cognitive test performance in multiple domains, in non-military samples. Some of these relationships appeared to be mildly impacted by controlling for performance validity or symptom validity, however this was based on the relatively few studies using validity tests. More research including PVTs and SVTs whilst examining the relationship between emotional symptoms and cognitive outcomes is needed.
21
Lace JW, Merz ZC, Galioto R. Examining the Clinical Utility of Selected Memory-Based Embedded Performance Validity Tests in Neuropsychological Assessment of Patients with Multiple Sclerosis. Neurol Int 2021; 13:477-486. [PMID: 34698256] [PMCID: PMC8544445] [DOI: 10.3390/neurolint13040047]
Abstract
Within the neuropsychological assessment, clinicians are responsible for ensuring the validity of obtained cognitive data. As such, increased attention is being paid to performance validity in patients with multiple sclerosis (pwMS). Experts have proposed batteries of neuropsychological tests for use in this population, though none contain recommendations for standalone performance validity tests (PVTs). The California Verbal Learning Test, Second Edition (CVLT-II) and Brief Visuospatial Memory Test, Revised (BVMT-R)—both of which are included in the aforementioned recommended neuropsychological batteries—include previously validated embedded PVTs (which offer some advantages, including expedience and reduced costs), with no prior work exploring their utility in pwMS. The purpose of the present study was to determine the potential clinical utility of embedded PVTs to detect the signal of non-credibility as operationally defined by below criterion standalone PVT performance. 133 patients (M age = 48.28; 76.7% women; 85.0% White) with MS were referred for neuropsychological assessment at a large, Midwestern academic medical center. Patients were placed into "credible" (n = 100) or "noncredible" (n = 33) groups based on a standalone PVT criterion. Classification statistics for four CVLT-II and BVMT-R PVTs of interest in isolation were poor (AUCs = 0.58–0.62). Several arithmetic and logistic regression-derived multivariate formulas were calculated, all of which similarly demonstrated poor discriminability (AUCs = 0.61–0.64). Although embedded PVTs may arguably maximize efficiency and minimize test burden in pwMS, common ones in the CVLT-II and BVMT-R may not be psychometrically appropriate, sufficiently sensitive, or substitutable for standalone PVTs in this population. Clinical neuropsychologists who evaluate such patients are encouraged to include standalone PVTs in their assessment batteries to ensure that clinical care conclusions drawn from neuropsychological data are valid.
Affiliation(s)
- John W. Lace
- Neurological Institute, Section of Neuropsychology, Cleveland Clinic Foundation, Cleveland, OH 44195, USA
- Zachary C Merz
- LeBauer Department of Neurology, The Moses H. Cone Memorial Hospital, Greensboro, NC 27401, USA
- Rachel Galioto
- Neurological Institute, Section of Neuropsychology, Cleveland Clinic Foundation, Cleveland, OH 44195, USA
- Mellen Center for Multiple Sclerosis, Cleveland Clinic Foundation, Cleveland, OH 44195, USA

22
Lace JW, Merz ZC, Galioto R. Nonmemory Composite Embedded Performance Validity Formulas in Patients with Multiple Sclerosis. Arch Clin Neuropsychol 2021; 37:309-321. [PMID: 34467368] [DOI: 10.1093/arclin/acab066]
Abstract
OBJECTIVE Research regarding performance validity tests (PVTs) in patients with multiple sclerosis (MS) is scant, with recommended batteries for neuropsychological evaluations in this population lacking suggestions to include PVTs. Moreover, limited work has examined embedded PVTs in this population. As previous investigations indicated that nonmemory-based embedded PVTs provide clinical utility in other populations, this study sought to determine if a logistic regression-derived PVT formula can be identified from selected nonmemory variables in a sample of patients with MS. METHOD A total of 184 patients (M age = 48.45; 76.6% female) with MS were referred for neuropsychological assessment at a large, Midwestern academic medical center. Patients were placed into "credible" (n = 146) or "noncredible" (n = 38) groups according to performance on standalone PVT. Missing data were imputed with HOTDECK. RESULTS Classification statistics for a variety of embedded PVTs were examined, with none appearing psychometrically appropriate in isolation (areas under the curve [AUCs] = .48-.64). Four exponentiated equations were created via logistic regression. Six, five, and three predictor equations yielded acceptable discriminability (AUC = .71-.74) with modest sensitivity (.34-.39) while maintaining good specificity (≥.90). The two predictor equation appeared unacceptable (AUC = .67). CONCLUSIONS Results suggest that multivariate combinations of embedded PVTs may provide some clinical utility while minimizing test burden in determining performance validity in patients with MS. Nonetheless, the authors recommend routine inclusion of several PVTs and utilization of comprehensive clinical judgment to maximize signal detection of noncredible performance and avoid incorrect conclusions. Clinical implications, limitations, and avenues for future research are discussed.
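The approach this abstract describes — combining several embedded indicators through logistic regression, then choosing a probability cutoff that holds specificity at or above .90 while reporting the resulting sensitivity — can be sketched as follows. The group sizes echo the abstract, but the scores and fitted weights are simulated; this is not the published formula.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated embedded-PVT scores: the "credible" group scores higher on average
n_cred, n_noncred = 146, 38
X = np.vstack([rng.normal(0.5, 1.0, size=(n_cred, 3)),
               rng.normal(-0.5, 1.0, size=(n_noncred, 3))])
y = np.concatenate([np.zeros(n_cred), np.ones(n_noncred)])  # 1 = noncredible

# Logistic regression by plain gradient descent (intercept included)
A = np.column_stack([np.ones(len(X)), X])
w = np.zeros(A.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-A @ w))
    w -= 0.1 * A.T @ (p - y) / len(y)

p = 1.0 / (1.0 + np.exp(-A @ w))

# Scan probability cutoffs: keep the most sensitive one with specificity >= .90
best = None
for t in np.unique(p):
    pred = p >= t
    spec = np.mean(~pred[y == 0])  # credible cases correctly not flagged
    sens = np.mean(pred[y == 1])   # noncredible cases correctly flagged
    if spec >= 0.90 and (best is None or sens > best[1]):
        best = (t, sens, spec)

cutoff, sens, spec = best
print(f"sensitivity={sens:.2f} at specificity={spec:.2f}")
```

Fixing specificity first, as here, mirrors the abstract's priority of keeping false positives rare even at the cost of modest sensitivity.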
Affiliation(s)
- John W Lace
- Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA
- Zachary C Merz
- LeBauer Department of Neurology, The Moses H. Cone Memorial Hospital, Greensboro, NC, USA
- Rachel Galioto
- Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA
- Mellen Center for Multiple Sclerosis, Cleveland Clinic, Cleveland, OH, USA

23
Uiterwijk D, Wong D, Stargatt R, Crowe SF. Performance and symptom validity testing in neuropsychological assessments in Australia: a survey of practices and beliefs. Australian Psychologist 2021. [DOI: 10.1080/00050067.2021.1948797]
Affiliation(s)
- Daniel Uiterwijk
- School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Dana Wong
- School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Robyn Stargatt
- School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Simon F. Crowe
- School of Psychology and Public Health, La Trobe University, Victoria, Australia

24
Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021; 49:179-213. [PMID: 34420986] [DOI: 10.3233/nre-218020]
Abstract
OBJECTIVE This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD Archival data were collected from 167 patients (52.4% male; M age = 39.7) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to individual components. Instrumentation artifacts are endemic to PVTs, and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical models. As the number/severity of embedded PVT failures accumulates, assessors must consider the possibility of non-credible presentation and its clinical implications to neurorehabilitation.
25
Rhoads T, Neale AC, Resch ZJ, Cohen CD, Keezer RD, Cerny BM, Jennette KJ, Ovsiew GP, Soble JR. Psychometric implications of failure on one performance validity test: a cross-validation study to inform criterion group definition. J Clin Exp Neuropsychol 2021; 43:437-448. [PMID: 34233580] [DOI: 10.1080/13803395.2021.1945540]
Abstract
Introduction: Research to date has supported the use of multiple performance validity tests (PVTs) for determining validity status in clinical settings. However, the implications of including versus excluding patients failing one PVT remain a source of debate, and methodological guidelines for PVT research are lacking. This study evaluated three validity classification approaches (i.e., 0 vs. ≥2, 0-1 vs. ≥2, and 0 vs. ≥1 PVT failures) using three reference standards (i.e., criterion PVT groupings) to recommend approaches best suited to establishing validity groups in PVT research methodology. Method: A mixed clinical sample of 157 patients was administered freestanding (Medical Symptom Validity Test, Dot Counting Test, Test of Memory Malingering, Word Choice Test) and embedded PVTs (Reliable Digit Span, RAVLT Effort Score, Stroop Word Reading, BVMT-R Recognition Discrimination) during outpatient neuropsychological evaluation. Three reference standards (i.e., two freestanding and three embedded PVTs from the above list) were created. The Rey 15-Item Test and RAVLT Forced Choice, along with two freestanding PVTs not employed in the reference standard, were used solely as outcome measures. Receiver operating characteristic curve analyses evaluated classification accuracy using the three validity classification approaches for each reference standard. Results: When patients failing only one PVT were excluded or classified as valid, classification accuracy ranged from acceptable to excellent. However, classification accuracy was poor to acceptable when patients failing one PVT were classified as invalid. Sensitivity/specificity across two of the validity classification approaches (0 vs. ≥2; 0-1 vs. ≥2) remained reasonably stable. Conclusions: These results indicate that both inclusion and exclusion of patients failing one PVT are acceptable approaches to PVT research methodology, and the choice of method likely depends on the study rationale. However, including such patients in the invalid group yields unacceptably poor classification accuracy across a number of psychometrically robust outcome measures and is therefore not recommended.
Affiliation(s)
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Alec C Neale
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Cari D Cohen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Richard D Keezer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Wheaton College, Wheaton, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA

26
Nayar K, Ventura LM, DeDios-Stern S, Oh A, Soble JR. The Impact of Learning and Memory on Performance Validity Tests in a Mixed Clinical Pediatric Population. Arch Clin Neuropsychol 2021; 37:50-62. [PMID: 34050354] [DOI: 10.1093/arclin/acab040]
Abstract
OBJECTIVE This study examined the degree to which verbal and visuospatial memory abilities influence performance validity test (PVT) performance in a mixed clinical pediatric sample. METHOD Data from 252 consecutive clinical pediatric cases (M age = 11.23 years, SD = 4.02; 61.9% male) seen for outpatient neuropsychological assessment were collected. Measures of learning and memory (e.g., the California Verbal Learning Test-Children's Version; Child and Adolescent Memory Profile [ChAMP]), performance validity (Test of Memory Malingering Trial 1 [TOMM T1]; Wechsler Intelligence Scale for Children-Fifth Edition [WISC-V] or Wechsler Adult Intelligence Scale-Fourth Edition Digit Span indices; ChAMP Overall Validity Index), and intellectual abilities (e.g., WISC-V) were included. RESULTS Learning/memory abilities were not significantly correlated with TOMM T1 and accounted for relatively little variance in overall TOMM T1 performance (i.e., ≤6%). Conversely, ChAMP Validity Index scores were significantly correlated with verbal and visual learning/memory abilities, and learning/memory accounted for significant variance in PVT performance (12%-26%). Verbal learning/memory performance accounted for 5%-16% of the variance across the Digit Span PVTs. No significant differences in TOMM T1 and Digit Span PVT scores emerged between verbal/visual learning/memory impairment groups. ChAMP validity scores were lower for the visual learning/memory impairment group relative to the nonimpaired group. CONCLUSIONS Findings highlight the utility of including PVTs as standard practice for pediatric populations, particularly when memory is a concern. Consistent with the adult literature, TOMM T1 outperformed other PVTs in its utility even among this diverse clinical sample with/without learning/memory impairment. In contrast, use of Digit Span indices appears to be best suited in the presence of visuospatial (but not verbal) learning/memory concerns. Finally, the ChAMP's embedded validity measure was most strongly impacted by learning/memory performance.
Affiliation(s)
- Kritika Nayar
- Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
| | - Lea M Ventura
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Pediatrics, University of Illinois College of Medicine, Chicago, IL, USA
| | - Samantha DeDios-Stern
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Alison Oh
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
27
Resch ZJ, Paxton JL, Obolsky MA, Lapitan F, Cation B, Schulze ET, Calderone V, Fink JW, Lee RC, Pliskin NH, Soble JR. Establishing the base rate of performance invalidity in a clinical electrical injury sample: Implications for neuropsychological test performance. J Clin Exp Neuropsychol 2021; 43:213-223. [PMID: 33858295 DOI: 10.1080/13803395.2021.1914002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Objective: The base rate of neuropsychological performance invalidity in electrical injury, a clinically distinct and frequently compensation-seeking population, is not well established. This study determined the base rate of performance invalidity in a large electrical injury sample, and examined patient characteristics, injury parameters, and neuropsychological test performance based on validity status. Method: This cross-sectional study included data from 101 patients with electrical injury consecutively referred for post-acute neuropsychological evaluation. Eighty-five percent of the sample was compensation-seeking. Multiple performance validity tests (PVTs) were administered as part of standard clinical evaluation. For patients with four or more PVTs, valid performance was operationalized as one or fewer PVT failures and invalid performance as two or more failures. Results: Frequency analysis revealed 66% (n = 67) had valid performance while 29% (n = 29) demonstrated probable invalid performance; the remaining 5% (n = 5) had indeterminate validity. No significant differences in demographics or injury parameters emerged between validity groups (0 vs. 1 vs. ≥2 PVT failures). In contrast, the electrical injury group with invalid performance performed significantly worse across tests of processing speed and executive abilities than those with valid performance (ps < .05, ηp² = .19-.25). Conclusions: The current study is the first to establish the base rate of neuropsychological performance invalidity in electrical injury survivors using empirical methods and current practice standards. Patient and clinical variables, including compensation-seeking status, did not differ between validity groups; however, neuropsychological test performance did, supporting the need for multi-method, objective performance validity assessment.
Affiliation(s)
- Zachary J Resch
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jessica L Paxton
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Maximillian A Obolsky
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Franchezka Lapitan
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Bailey Cation
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Evan T Schulze
- Department of Neurology, Saint Louis University, St. Louis, MO, USA
- Veroly Calderone
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA
- Joseph W Fink
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA
- Department of Psychiatry and Behavioral Neuroscience, University of Chicago, Chicago, IL, USA
- Raphael C Lee
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA
- Departments of Surgery, Medicine and Organismal Biology, University of Chicago, Chicago, IL, USA
- Neil H Pliskin
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA
- Department of Neurology, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
28
Ord AS, Shura RD, Sansone AR, Martindale SL, Taber KH, Rowland JA. Performance validity and symptom validity tests: Are they measuring different constructs? Neuropsychology 2021; 35:241-251. [PMID: 33829824 DOI: 10.1037/neu0000722] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
OBJECTIVE To evaluate the relationships among performance validity, symptom validity, symptom self-report, and objective cognitive testing. METHOD Combat Veterans (N = 338) completed a neurocognitive assessment battery and several self-report symptom measures assessing depression, posttraumatic stress disorder (PTSD) symptoms, sleep quality, pain interference, and neurobehavioral complaints. All participants also completed two performance validity tests (PVTs) and one stand-alone symptom validity test (SVT) along with two embedded SVTs. RESULTS Results of an exploratory factor analysis revealed a three-factor solution: performance validity, cognitive performance, and symptom report (SVTs loaded on the third factor). Results of t tests demonstrated that participants who failed PVTs displayed significantly more severe symptoms and significantly worse performance on most measures of neurocognitive functioning compared to those who passed. Participants who failed a stand-alone SVT also reported significantly more severe symptomatology on all symptom report measures, but the pattern of cognitive performance differed based on the selected SVT cutoff. Multiple linear regressions revealed that both SVT and PVT failure explained unique variance in symptom report, but only PVT failure significantly predicted cognitive performance. CONCLUSIONS Performance and symptom validity tests measure distinct but related constructs. SVTs and PVTs are significantly related to both cognitive performance and symptom report; however, the relationship between symptom validity and symptom report is strongest. SVTs are also differentially related to cognitive performance and symptom report based on the utilized cutoff score.
Affiliation(s)
- Anna S Ord
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center (MA-MIRECC)
29
Identifying Novel Embedded Performance Validity Test Formulas Within the Repeatable Battery for the Assessment of Neuropsychological Status: A Simulation Study. Psychol Inj Law 2020. [DOI: 10.1007/s12207-020-09382-x] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
30
Pliskin JI, DeDios Stern S, Resch ZJ, Saladino KF, Ovsiew GP, Carter DA, Soble JR. Comparing the Psychometric Properties of Eight Embedded Performance Validity Tests in the Rey Auditory Verbal Learning Test, Wechsler Memory Scale Logical Memory, and Brief Visuospatial Memory Test–Revised Recognition Trials for Detecting Invalid Neuropsychological Test Performance. Assessment 2020; 28:1871-1881. [DOI: 10.1177/1073191120929093] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
This cross-sectional study evaluated eight embedded performance validity tests (PVTs) previously derived from the Rey Auditory Verbal Learning Test (RAVLT), Wechsler Memory Scale–IV–Logical Memory (LM), and Brief Visuospatial Memory Test–Revised (BVMT-R) recognition trials among a single mixed clinical sample of 108 neuropsychiatric patients (83 valid/25 invalid) with (n = 54) and without (n = 29) mild neurocognitive disorder. Among the overall sample, all eight recognition PVTs significantly differentiated valid from invalid performance (areas under the curve [AUCs] = .64-.81) with 26% to 44% sensitivity (≥89% specificity) at optimal cut-scores depending on the specific PVT. After subdividing the sample by cognitive impairment status, all eight PVTs continued to reliably identify invalid performance (AUC = .68-.91) with markedly increased sensitivities of 56% to 80% (≥89% specificity) in the unimpaired group. In contrast, among those with mild neurocognitive disorder, RAVLT False Positives and LM became nonsignificant, whereas the other six PVTs remained significant (AUC = .64-.77), albeit with reduced sensitivities of 32% to 44% (≥89% specificity) at optimal cut-scores. Taken together, results cross-validated BVMT-R and most RAVLT recognition indices as effective embedded PVTs for identifying invalid neuropsychological test performance with diverse populations including examinees with and without suspected mild neurocognitive disorder, whereas LM had more limited utility as an embedded PVT, particularly when mild neurocognitive disorder was present.
Affiliation(s)
- Zachary J. Resch
- University of Illinois College of Medicine, Chicago, IL, USA
- Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R. Soble
- University of Illinois College of Medicine, Chicago, IL, USA
31
White DJ, Korinek D, Bernstein MT, Ovsiew GP, Resch ZJ, Soble JR. Cross-validation of non-memory-based embedded performance validity tests for detecting invalid performance among patients with and without neurocognitive impairment. J Clin Exp Neuropsychol 2020; 42:459-472. [DOI: 10.1080/13803395.2020.1758634] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Affiliation(s)
- Daniel J. White
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Dale Korinek
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Matthew T. Bernstein
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J. Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
32

33
Schroeder RW, Olsen DH, Martin PK. Classification accuracy rates of four TOMM validity indices when examined independently and jointly. Clin Neuropsychol 2019; 33:1373-1387. [DOI: 10.1080/13854046.2019.1619839] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Ryan W. Schroeder
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Daniel H. Olsen
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Phillip K. Martin
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
34
Jurick SM, Crocker LD, Keller AV, Hoffman SN, Bomyea J, Jacobson MW, Jak AJ. The Minnesota Multiphasic Personality Inventory-2-RF in Treatment-Seeking Veterans with History of Mild Traumatic Brain Injury. Arch Clin Neuropsychol 2019; 34:366-380. [PMID: 29850866 DOI: 10.1093/arclin/acy048] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2017] [Revised: 03/26/2018] [Accepted: 05/09/2018] [Indexed: 11/13/2022] Open
Abstract
OBJECTIVE This study examined the Minnesota Multiphasic Personality Inventory-Second Edition-Restructured Form (MMPI-2-RF) to better understand symptom presentation in a sample of treatment-seeking Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) Veterans with self-reported history of mild traumatic brain injury (mTBI). METHOD Participants underwent a comprehensive clinical neuropsychological battery including performance and symptom validity measures and self-report measures of depressive, posttraumatic, and post-concussive symptomatology. Those with possible symptom exaggeration (SE+) on the MMPI-2-RF were compared with those without (SE-) with regard to injury, psychiatric, validity, and cognitive variables. RESULTS Between 50% and 87% of participants demonstrated possible symptom exaggeration on one or more MMPI-2-RF validity scales, and a large majority were elevated on content scales related to cognitive, somatic, and emotional complaints. The SE+ group reported higher depressive, posttraumatic, and post-concussive symptomatology, had higher scores on symptom validity measures, and performed more poorly on neuropsychological measures compared with the SE- group. There were no group differences with regard to injury variables or performance validity measures. Participants were more likely to exhibit possible symptom exaggeration on cognitive/somatic compared with traditional psychopathological validity scales. CONCLUSIONS A sizable portion of treatment-seeking OEF/OIF Veterans demonstrated possible symptom exaggeration on MMPI-2-RF validity scales, which was associated with elevated scores on self-report measures and poorer cognitive performance, but not higher rates of performance validity failure, suggesting symptom and performance validity are distinct concepts. These findings have implications for the interpretation of clinical data in the context of possible symptom exaggeration and treatment in Veterans with persistent post-concussive symptoms.
Affiliation(s)
- S M Jurick
- Department of Psychiatry, San Diego State University/University of California San Diego Joint Doctoral Program in Clinical Psychology, San Diego, CA, USA
- Veterans Medical Research Foundation, San Diego, CA, USA
- L D Crocker
- Psychology Service, VA San Diego Healthcare System, San Diego, CA, USA
- Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA
- A V Keller
- Psychology Service, VA San Diego Healthcare System, San Diego, CA, USA
- S N Hoffman
- Psychology Service, VA San Diego Healthcare System, San Diego, CA, USA
- J Bomyea
- Psychology Service, VA San Diego Healthcare System, San Diego, CA, USA
- M W Jacobson
- Psychology Service, VA San Diego Healthcare System, San Diego, CA, USA
- Department of Psychiatry, University of California San Diego, San Diego, CA, USA
- A J Jak
- Psychology Service, VA San Diego Healthcare System, San Diego, CA, USA
- Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA
- Department of Psychiatry, University of California San Diego, San Diego, CA, USA
35
Roye S, Calamia M, Bernstein JPK, De Vito AN, Hill BD. A multi-study examination of performance validity in undergraduate research participants. Clin Neuropsychol 2019; 33:1138-1155. [DOI: 10.1080/13854046.2018.1520303] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Scott Roye
- Department of Psychology, Louisiana State University, Baton Rouge, LA, United States
- Matthew Calamia
- Department of Psychology, Louisiana State University, Baton Rouge, LA, United States
- John P. K. Bernstein
- Department of Psychology, Louisiana State University, Baton Rouge, LA, United States
- Alyssa N. De Vito
- Department of Psychology, Louisiana State University, Baton Rouge, LA, United States
- Benjamin D. Hill
- Department of Psychology, University of South Alabama, Mobile, AL, United States
36
Meyers JE, Miller RM, Rohling ML, Kalat SS. Premorbid estimates of neuropsychological functioning for diverse groups. Appl Neuropsychol Adult 2019; 27:364-375. [DOI: 10.1080/23279095.2018.1550412] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- John E. Meyers
- Meyers Neuropsychological Services, Nokomis, Florida, USA
- Ronald M. Miller
- Woodbury School of Business, Utah Valley University, Orem, Utah, USA
37
Gabel NM, Waldron-Perrine B, Spencer RJ, Pangilinan PH, Hale AC, Bieliauskas LA. Suspiciously slow: timed digit span as an embedded performance validity measure in a sample of veterans with mTBI. Brain Inj 2018; 33:377-382. [DOI: 10.1080/02699052.2018.1553311] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Nicolette M. Gabel
- Department of Physical Medicine and Rehabilitation, Michigan Medicine, Ann Arbor, USA
- Robert J. Spencer
- Mental Health Services, VA Ann Arbor Healthcare System, Ann Arbor, USA
- Percival H. Pangilinan
- Department of Physical Medicine and Rehabilitation, Michigan Medicine/VA Ann Arbor Healthcare System, Ann Arbor, USA
- Andrew C. Hale
- Mental Health Services, VA Ann Arbor Healthcare System, Ann Arbor, USA
- Linas A. Bieliauskas
- Department of Neuropsychology, University of Michigan Health System, Ann Arbor, USA
38
Critchfield E, Soble JR, Marceaux JC, Bain KM, Chase Bailey K, Webber TA, Alex Alverson W, Messerly J, Andrés González D, O’Rourke JJF. Cognitive impairment does not cause invalid performance: analyzing performance patterns among cognitively unimpaired, impaired, and noncredible participants across six performance validity tests. Clin Neuropsychol 2018; 33:1083-1101. [DOI: 10.1080/13854046.2018.1508615] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Edan Critchfield
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Jason R. Soble
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Janice C. Marceaux
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Kathleen M. Bain
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- K. Chase Bailey
- Division of Psychology, UT Southwestern Medical Center, Dallas, TX, USA
- Troy A. Webber
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- W. Alex Alverson
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Johanna Messerly
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- David Andrés González
- Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
39
Olsen DH, Schroeder RW, Heinrichs RJ, Martin PK. Examination of optimal embedded PVTs within the BVMT-R in an outpatient clinical sample. Clin Neuropsychol 2018; 33:732-742. [DOI: 10.1080/13854046.2018.1501096] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
40
Schroeder RW, Martin PK, Heinrichs RJ, Baade LE. Research methods in performance validity testing studies: Criterion grouping approach impacts study outcomes. Clin Neuropsychol 2018; 33:466-477. [PMID: 29884112 DOI: 10.1080/13854046.2018.1484517] [Citation(s) in RCA: 80] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
OBJECTIVE Performance validity test (PVT) research studies commonly utilize a known-groups design, but the criterion grouping approaches within the design vary greatly from one study to another. At the present time, it is unclear as to what degree different criterion grouping approaches might impact PVT classification accuracy statistics. METHOD To analyze this, the authors used three different criterion grouping approaches to examine how classification accuracy statistics of a PVT (Word Choice Test; WCT) would differ. The three criterion grouping approaches included: (1) failure of 2+ PVTs versus failure of 0 PVTs, (2) failure of 2+ PVTs versus failure of 0-1 PVT, and (3) failure of a stand-alone PVT versus passing of a stand-alone PVT (Test of Memory Malingering). RESULTS When setting specificity at ≥.90, WCT cutoff scores ranged from 41 to 44 and associated sensitivity values ranged from .64 to .88, depending on the criterion grouping approach that was utilized. CONCLUSIONS When using a stand-alone PVT to define criterion group status, classification accuracy rates of the WCT were higher than expected, likely due to strong correlations between the reference PVT and the WCT. This held true even when considering evidence that this grouping approach results in higher rates of criterion group misclassification. Conversely, when using criterion grouping approaches that utilized failure of 2+ PVTs, accuracy rates were more consistent with expectations. These findings demonstrate that criterion grouping approaches can impact PVT classification accuracy rates and resultant cutoff scores. Strengths, weaknesses, and practical implications of each of the criterion grouping approaches are discussed.
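The cutoff-setting procedure this abstract relies on (fix specificity at ≥ .90, then take the most lenient cutoff that still meets it and read off its sensitivity) can be sketched in a few lines of Python. This is an illustrative sketch only: the score distributions and group labels below are fabricated for demonstration and are not data from the study.

```python
# Sketch of choosing a PVT cutoff at >= .90 specificity (fabricated data).

def cutoff_at_specificity(valid_scores, invalid_scores, min_specificity=0.90):
    """Return (cutoff, sensitivity, specificity) for the most lenient
    'fail if score <= cutoff' rule that keeps specificity >= min_specificity."""
    best = None
    for cutoff in sorted(set(valid_scores + invalid_scores)):
        # Specificity: proportion of the valid group scoring above the cutoff.
        specificity = sum(s > cutoff for s in valid_scores) / len(valid_scores)
        # Sensitivity: proportion of the invalid group at or below the cutoff.
        sensitivity = sum(s <= cutoff for s in invalid_scores) / len(invalid_scores)
        if specificity >= min_specificity:
            # Higher cutoffs can only raise sensitivity, so keep overwriting.
            best = (cutoff, sensitivity, specificity)
    return best

valid = [45, 47, 48, 44, 46, 49, 50, 43, 48, 47]    # e.g., failed 0 reference PVTs
invalid = [38, 41, 35, 43, 40, 36, 42, 39, 37, 44]  # e.g., failed 2+ reference PVTs
print(cutoff_at_specificity(valid, invalid))
```

Because the valid and invalid groups are defined by the criterion grouping, swapping in a different grouping (e.g., 2+ failures vs. 0-1 failures) changes both distributions and therefore the cutoff and sensitivity this search returns, which is the effect the study measures.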
Affiliation(s)
- Ryan W Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
- Phillip K Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
- Robin J Heinrichs
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
- Lyle E Baade
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
41
Persinger VC, Whiteside DM, Bobova L, Saigal SD, Vannucci MJ, Basso MR. Using the California Verbal Learning Test, Second Edition as an embedded performance validity measure among individuals with TBI and individuals with psychiatric disorders. Clin Neuropsychol 2017; 32:1039-1053. [DOI: 10.1080/13854046.2017.1419507] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Virginia C. Persinger
- Department of Neuropsychology, Methodist Rehabilitation Center, Jackson, MS, USA
- Department of Clinical Psychology, Adler University, Chicago, IL, USA
- Lyuba Bobova
- Department of Clinical Psychology, Adler University, Chicago, IL, USA
- Seema D. Saigal
- Department of Clinical Psychology, Adler University, Chicago, IL, USA
- Marla J. Vannucci
- Department of Clinical Psychology, Adler University, Chicago, IL, USA
42
Lippa SM. Performance validity testing in neuropsychology: a clinical guide, critical review, and update on a rapidly evolving literature. Clin Neuropsychol 2017; 32:391-421. [DOI: 10.1080/13854046.2017.1406146] [Citation(s) in RCA: 65] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Sara M. Lippa
- Defense and Veterans Brain Injury Center, Silver Spring, MD, USA
- Walter Reed National Military Medical Center, Bethesda, MD, USA
- National Intrepid Center of Excellence, Bethesda, MD, USA
43
Manderino LM, Gunstad J. Performance of the Immediate Post-Concussion Assessment and Cognitive Testing Protocol Validity Indices. Arch Clin Neuropsychol 2017; 33:596-605. [DOI: 10.1093/arclin/acx102] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2017] [Accepted: 10/04/2017] [Indexed: 11/12/2022] Open
Affiliation(s)
- L M Manderino
- Department of Psychological Sciences, Kent State University, Kent, OH, USA
- J Gunstad
- Department of Psychological Sciences, Kent State University, Kent, OH, USA
44
Denning JH, Shura RD. Cost of malingering mild traumatic brain injury-related cognitive deficits during compensation and pension evaluations in the Veterans Benefits Administration. Appl Neuropsychol Adult 2017; 26:1-16. [DOI: 10.1080/23279095.2017.1350684] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- John H. Denning
- Department of Veteran Affairs, Mental Health Service, Ralph H. Johnson Veterans Affairs Medical Center, Charleston, South Carolina, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, South Carolina, USA
- Robert D. Shura
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Salisbury, North Carolina, USA
- Mental Health and Behavioral Science Service Line, W. G. (Bill) Hefner Veterans Affairs Medical Center (VAMC), Salisbury, North Carolina, USA
- Department of Psychiatry and Behavioral Medicine, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
45
Morin RT, Axelrod BN. Use of Latent Class Analysis to define groups based on validity, cognition, and emotional functioning. Clin Neuropsychol 2017. [PMID: 28632025 DOI: 10.1080/13854046.2017.1341550] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
OBJECTIVE Latent Class Analysis (LCA) was used to classify a heterogeneous sample of neuropsychology data. In particular, we used measures of performance validity, symptom validity, cognition, and emotional functioning to assess and describe latent groups of functioning in these areas. METHOD A dataset of 680 neuropsychological evaluation protocols was analyzed using LCA. Data were collected from evaluations performed for clinical purposes at an urban medical center. RESULTS A four-class model emerged as the best-fitting model of latent classes. The resulting classes were distinct based on measures of performance validity and symptom validity. Class A performed poorly on both performance and symptom validity measures. Class B had intact performance validity and heightened symptom reporting. The remaining two classes performed adequately on both performance and symptom validity measures, differing only in cognitive and emotional functioning. In general, performance invalidity was associated with worse cognitive performance, while symptom invalidity was associated with elevated emotional distress. CONCLUSIONS LCA appears useful in identifying groups within a heterogeneous sample with distinct performance patterns. Further, the orthogonal nature of performance and symptom validity is supported.
Affiliation(s)
- Ruth T Morin
- Department of Counseling and Clinical Psychology, Teachers College, Columbia University, New York, NY, USA
- John D. Dingell VA Medical Center, Detroit, MI, USA
46
Rickards TA, Cranston CC, Touradji P, Bechtold KT. Embedded performance validity testing in neuropsychological assessment: Potential clinical tools. Appl Neuropsychol Adult 2017; 25:219-230. [DOI: 10.1080/23279095.2017.1278602] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Tyler A. Rickards
- Department of Physical Medicine & Rehabilitation, Division of Rehabilitation Psychology & Neuropsychology, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Christopher C. Cranston
- Department of Physical Medicine & Rehabilitation, Division of Rehabilitation Psychology & Neuropsychology, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Pegah Touradji
- Department of Physical Medicine & Rehabilitation, Division of Rehabilitation Psychology & Neuropsychology, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Kathleen T. Bechtold
- Department of Physical Medicine & Rehabilitation, Division of Rehabilitation Psychology & Neuropsychology, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
47
Barry DM, Ettenhofer ML. Assessment of Performance Validity Using Embedded Saccadic and Manual Indices on a Continuous Performance Test. Arch Clin Neuropsychol 2016; 31:963-975. [PMID: 27625047 DOI: 10.1093/arclin/acw070] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/10/2016] [Indexed: 11/14/2022] Open
Abstract
OBJECTIVE In addition to manual (i.e., "button press") metrics, oculomotor metrics demonstrate considerable promise as tools for detecting invalid responding in neurocognitive assessment. This study was conducted to evaluate saccadic and manual metrics from a computerized continuous performance test as embedded indices of performance validity. METHOD Receiver operating characteristic analyses, logistic regressions, and ANOVAs were performed to evaluate saccadic and manual metrics in classification of healthy adults instructed to feign deficits ("Fake Bad" group; n = 24), healthy adults instructed to perform their best ("Best Effort" group; n = 26), and adults with a history of mild traumatic brain injury (TBI) who passed a series of validity indices ("mTBI-Pass" group; n = 19). RESULTS Several saccadic and manual metrics achieved outstanding classification accuracy between Fake Bad versus Best Effort and mTBI-Pass groups, including variability (consistency) of saccadic and manual response time (RT), saccadic commission errors, and manual omission errors. Very large effect sizes were obtained between Fake Bad and Best Effort groups (Cohen's d range: 1.89-2.90; r range: .75-.78) as well as between Fake Bad and mTBI-Pass groups (Cohen's d range: 1.32-2.21; r range: .69-.71). The Fake Bad group consistently had higher saccadic and manual RT variability, more saccadic commission errors, and more manual omission errors than the Best Effort and mTBI-Pass groups. CONCLUSIONS These findings are the first to demonstrate that eye movements can be used to detect invalid responding in neurocognitive assessment. These results also provide compelling evidence that concurrently measured saccadic and manual metrics can detect invalid responding with high levels of sensitivity and specificity.
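The very large effect sizes cited above (Cohen's d of roughly 1.3 to 2.9) come from the standard two-group formula: the difference in group means divided by the pooled standard deviation. A minimal sketch, using fabricated response-time variability scores chosen purely to illustrate the arithmetic, not data from the study:

```python
from statistics import mean, variance

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    # Pooled variance weights each group's sample variance by its degrees of freedom.
    pooled_var = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Fabricated RT-variability scores: simulators are slower and more erratic.
fake_bad = [135, 155, 145, 165, 125]
best_effort = [120, 130, 110, 125, 115]
print(cohens_d(fake_bad, best_effort))  # prints 2.0
```

A d of 2.0 means the group means sit two pooled standard deviations apart, which is the scale on which the "outstanding classification accuracy" claim above should be read.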
Affiliation(s)
- David M Barry
- Department of Medical and Clinical Psychology, F. Edward Hebert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, USA
- Mark L Ettenhofer
- Department of Medical and Clinical Psychology, F. Edward Hebert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, USA
48
Poreh A, Tolfo S, Krivenko A, Teaford M. Base-rate data and norms for the Rey Auditory Verbal Learning Embedded Performance Validity Indicator. Appl Neuropsychol Adult 2016; 24:540-547. [DOI: 10.1080/23279095.2016.1223670] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Indexed: 10/21/2022]
Affiliation(s)
- Amir Poreh
- Department of Psychology, Cleveland State University, Cleveland, Ohio
- Sarah Tolfo
- Department of Psychology, Cleveland State University, Cleveland, Ohio
- Anna Krivenko
- Department of Psychology, Cleveland State University, Cleveland, Ohio
- Max Teaford
- Department of Psychology, Cleveland State University, Cleveland, Ohio
49
Martin PK, Schroeder RW, Wyman-Chick KA, Hunter BP, Heinrichs RJ, Baade LE. Rates of Abnormally Low TOPF Word Reading Scores in Individuals Failing Versus Passing Performance Validity Testing. Assessment 2016; 25:640-652. [DOI: 10.1177/1073191116656796] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Indexed: 11/15/2022]
Abstract
The present study examined the impact of performance validity test (PVT) failure on the Test of Premorbid Functioning (TOPF) in a sample of 252 neuropsychological patients. Word reading performance differed significantly according to PVT failure status, and the number of PVTs failed accounted for 7.4% of the variance in word reading performance, even after controlling for education. Furthermore, individuals failing ≥2 PVTs were twice as likely as individuals passing all PVTs (33% vs. 16%) to have abnormally low obtained word reading scores relative to demographically predicted scores when using a normative base rate of 10% to define abnormality. When compared with standardization study clinical groups, those failing ≥2 PVTs were twice as likely as patients with moderate to severe traumatic brain injury, and as likely as patients with Alzheimer’s dementia, to obtain abnormally low TOPF word reading scores. Findings indicate that TOPF word reading-based estimates of premorbid functioning should not be interpreted in individuals invalidating cognitive testing.
Affiliation(s)
- Kathryn A. Wyman-Chick
- University of Kansas School of Medicine, Wichita, KS, USA
- University of Virginia School of Medicine, Charlottesville, VA, USA
- Ben P. Hunter
- University of Kansas School of Medicine, Wichita, KS, USA
- Lyle E. Baade
- University of Kansas School of Medicine, Wichita, KS, USA
50
Armistead-Jehle P, Green P. Model for the effects of invalid styles of response. Appl Neuropsychol Adult 2016; 23:449-458. [DOI: 10.1080/23279095.2016.1178646] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Indexed: 10/21/2022]
Affiliation(s)
- Paul Green
- Private Practice, Edmonton, Alberta, Canada