1. Rohling ML, Binder LM, Larrabee GJ, Langhinrichsen-Rohling J. Forced choice test score of p ≤ .20 and failures on ≥ six performance validity tests results in similar Overall Test Battery Means. Clin Neuropsychol 2024;38:1193-1209. PMID: 38041021. DOI: 10.1080/13854046.2023.2284975.
Abstract
Objective: To determine whether similar levels of performance on the Overall Test Battery Mean (OTBM) occur at different forced choice test (FCT) p-value score failures, and to determine the OTBM levels associated with above-chance failures on various performance validity tests (PVTs). Method: OTBMs were computed from archival data obtained from four practices. We calculated each examinee's Estimated Premorbid Global Ability (EPGA) and OTBM. The sample comprised 5,103 examinees, 282 (5.5%) of whom scored below chance at p ≤ .20 on at least one FCT. Results: The OTBM associated with a failure at p ≤ .20 was equivalent to the OTBM associated with failing six or more PVTs at above-chance cutoffs. The mean OTBMs relative to increasingly strict FCT p cutoffs were similar (T scores in the 30s). As expected, there was an inverse relationship between the number of PVTs failed and examinees' OTBMs. Conclusions: The data support the use of p ≤ .20 as the probability level for testing the significance of below-chance performance on FCTs. The OTBM can be used to index the influence of invalid performance on outcomes, especially when an examinee scores below chance.
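The OTBM referenced throughout this entry is simply the arithmetic mean of all T scores in an examinee's test battery. A minimal sketch; the score values below are hypothetical, not the study's data:

```python
from statistics import mean

def overall_test_battery_mean(t_scores):
    """OTBM: the mean of all neuropsychological T scores in a battery.
    T scores have a normative mean of 50 and SD of 10."""
    return mean(t_scores)

# Hypothetical battery with scores in the 30s-40s, the range the study
# associates with FCT failure at p <= .20:
print(round(overall_test_battery_mean([38, 42, 35, 31, 44, 36]), 1))  # 37.7
```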
2. Mikolic A, Panenka WJ, Iverson GL, Cotton E, Burke MJ, Silverberg ND. Litigation, Performance Validity Testing, and Treatment Outcomes in Adults With Mild Traumatic Brain Injury. J Head Trauma Rehabil 2024;39:E153-E161. PMID: 37773600. DOI: 10.1097/htr.0000000000000903.
Abstract
OBJECTIVE To investigate whether involvement in litigation and performance validity test (PVT) failure predict adherence to treatment and treatment outcomes in adults with persistent symptoms after mild traumatic brain injury (mTBI). SETTING Outpatient concussion clinics in British Columbia, Canada. Participants were assessed at intake (average 12.9 weeks postinjury) and again following 3 to 4 months of rehabilitation. PARTICIPANTS Adults who met the World Health Organization Neurotrauma Task Force definition of mTBI. Litigation status was known for 69 participants (n = 21 reported litigation), and 62 participants completed a PVT (n = 13 failed the Test of Memory Malingering) at clinic intake. DESIGN Secondary analysis of a clinical trial (ClinicalTrials.gov #NCT03972579). MAIN MEASURES Outcomes included number of completed sessions, homework adherence, symptoms (Rivermead Post Concussion Symptoms Questionnaire), disability ratings (World Health Organization Disability Assessment Schedule 2.0), and patient-rated global impression of change. RESULTS We did not observe substantial differences in session and homework adherence associated with litigation or PVT failure. Disability and postconcussion symptoms generally improved with treatment. Involvement in litigation was associated with smaller improvement in outcomes, particularly disability (B = 2.57, 95% confidence interval [CI] [0.25-4.89], P = .03) and patient-reported global impression of change (odds ratio [OR] = 4.19, 95% CI [1.40-12.57], P = .01). PVT failure was not associated with considerable differences in treatment outcomes. However, participants who failed the PVT had a higher rate of missing outcomes (31% vs 8%) and perceived somewhat less global improvement (OR = 3.47, 95% CI [0.86-14.04]; P = .08). CONCLUSION Adults with mTBI who are in litigation or who failed PVTs tend to adhere to treatment and improve following it. However, involvement in litigation may be associated with attenuated improvements, and pretreatment PVT failure may predict lower engagement in the treatment process.
Affiliation(s)
- Ana Mikolic
- Departments of Psychology (Drs Mikolic and Silverberg) and Psychiatry (Dr Panenka), The University of British Columbia, Vancouver, British Columbia, Canada; Rehabilitation Research Program, Centre for Aging SMART at Vancouver Coastal Health, Vancouver, British Columbia, Canada (Drs Mikolic and Silverberg); British Columbia Mental Health and Substance Use Services Research Institute, Vancouver, British Columbia, Canada (Dr Panenka); BC Neuropsychiatry Program, Vancouver, British Columbia, Canada (Dr Panenka); Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, Massachusetts (Dr Iverson); Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital, Charlestown, Massachusetts (Dr Iverson); Department of Physical Medicine and Rehabilitation, Schoen Adams Research Institute at Spaulding Rehabilitation, Charlestown, Massachusetts (Dr Iverson); Home Base, A Red Sox Foundation and Massachusetts General Hospital Program, Charlestown (Dr Iverson); Departments of Psychiatry and Behavioral Sciences (Dr Cotton) and Neurology (Dr Cotton), Northwestern University, Chicago, Illinois; Neuropsychiatry Program, Department of Psychiatry and Division of Neurology and Department of Medicine, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada (Dr Burke); and Hurvitz Brain Sciences Program, Sunnybrook Research Institute, Toronto, Ontario, Canada (Dr Burke)
3. Henry GK. Ability of the Wisconsin Card-Sorting Test-64 as an embedded measure to identify noncredible neurocognitive performance in mild traumatic brain injury litigants. Appl Neuropsychol Adult 2024:1-7. PMID: 38684109. DOI: 10.1080/23279095.2024.2348012.
Abstract
OBJECTIVE To investigate the ability of selective measures on the Wisconsin Card Sorting Test-64 (WCST-64) to predict noncredible neurocognitive dysfunction in a large sample of mild traumatic brain injury (mTBI) litigants. METHOD Participants included 114 adults who underwent a comprehensive neuropsychological examination. Criterion groups were formed based upon their performance on stand-alone performance validity tests (PVTs). RESULTS Participants failing PVTs performed worse across all WCST-64 dependent variables of interest compared to participants who passed PVTs. Receiver operating characteristic curve analysis revealed that only categories completed was a significant predictor of PVT status. Multivariate logistic regression did not add to classification accuracy. CONCLUSION Consideration of noncredible executive functioning may be warranted in mTBI litigants who complete ≤ 1 category on the WCST-64.
4. van Vliet FIM, van Schothorst HP, Donker-Cools BHPM, Schaafsma FG, Ponds RWHM, Geurtsen GJ. Validity of the Groningen Effort Test in patients with suspected chronic solvent-induced encephalopathy. Arch Clin Neuropsychol 2024:acae025. PMID: 38572600. DOI: 10.1093/arclin/acae025.
Abstract
INTRODUCTION The use of performance validity tests (PVTs) in neuropsychological assessment to detect indications of invalid performance has been common practice for over a decade. Most PVTs are memory-based; therefore, the Groningen Effort Test (GET), a non-memory-based PVT, was developed. OBJECTIVES This study aimed to validate the GET in patients with suspected chronic solvent-induced encephalopathy (CSE) against a criterion standard of two PVTs, and to determine the diagnostic accuracy of the GET. METHOD Sixty patients with suspected CSE referred for neuropsychological assessment were included. The GET was compared to a criterion standard based on the Test of Memory Malingering and the Amsterdam Short Term Memory Test. RESULTS The frequency of invalid performance on the GET was significantly higher than on the two-PVT criterion (51.7% vs. 20.0%, respectively; p < .001). For the GET index, sensitivity was 75% and specificity was 54%, with a Youden's Index of 27. CONCLUSION The GET flagged significantly more performances as invalid than the two-PVT criterion, suggesting a high number of false positives. The generally accepted minimum specificity for PVTs of >90% was not met; therefore, the GET is of limited use in clinical practice with suspected CSE patients.
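The accuracy statistics reported here derive from a 2×2 table of GET classification against the two-PVT criterion, with Youden's J = sensitivity + specificity − 1. A sketch with hypothetical counts, chosen only to be consistent with the reported 20% criterion base rate, sensitivity, and specificity (not the study's raw data):

```python
def diagnostics(tp, fn, tn, fp):
    """Sensitivity, specificity, and Youden's J from a 2x2 validity table."""
    sens = tp / (tp + fn)  # criterion-invalid cases correctly flagged
    spec = tn / (tn + fp)  # criterion-valid cases correctly passed
    return sens, spec, sens + spec - 1

# Hypothetical counts: 12 criterion-invalid and 48 criterion-valid examinees,
# matching a 20% base rate in n = 60.
sens, spec, j = diagnostics(tp=9, fn=3, tn=26, fp=22)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} J={j:.2f}")
```

With rounded sensitivity and specificity this gives J ≈ 0.29; the paper's reported value of 27 presumably reflects unrounded estimates.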
Affiliation(s)
- Fabienne I M van Vliet
- Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Department of Medical Psychology, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Henrita P van Schothorst
- Department of Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Birgit H P M Donker-Cools
- Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Research Centre for Insurance Medicine, Amsterdam, The Netherlands
- Frederieke G Schaafsma
- Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Research Centre for Insurance Medicine, Amsterdam, The Netherlands
- Rudolf W H M Ponds
- Department of Medical Psychology, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Gert J Geurtsen
- Department of Medical Psychology, Amsterdam University Medical Centres, Amsterdam, The Netherlands
5. Ladowsky-Brooks RL. Recall and recognition of similarities items in neuropsychological assessment: Memory, validity, and meaning. Appl Neuropsychol Adult 2024:1-8. PMID: 38557276. DOI: 10.1080/23279095.2024.2334344.
Abstract
The current study examined whether the Memory Similarities Extended Test (M-SET), a memory test based on the Similarities subtest of the Wechsler Abbreviated Scale of Intelligence, Second Edition (WASI-II), has value in neuropsychological testing. The relationship of M-SET measures of cued recall (CR) and recognition memory (REC) to brain injury severity and memory scores from the Wechsler Memory Scale, Fourth Edition (WMS-IV) was analyzed in examinees with traumatic brain injuries ranging from mild to severe. Examinees who passed standard validity tests were divided into groups with intracranial injury (CT+ve, n = 18) and without intracranial injury (CT-ve, n = 50). In CT+ve only, CR was significantly correlated with Logical Memory I (LMI: rs = .62) and Logical Memory II (LMII: rs = .65). In both groups, there were smaller correlations with delayed visual memory (VRII: rs = .38; rs = .44) and psychomotor speed (Coding: rs = .29; rs = .29). The REC score was neither an indicator of memory ability nor an internal indicator of performance validity. There were no differences in M-SET or WMS-IV scores between the CT-ve and CT+ve groups, and reasons for this are discussed. It is concluded that the M-SET has utility as an incidental cued recall measure.
6. Roor JJ, Peters MJV, Dandachi-FitzGerald B, Ponds RWHM. Performance Validity Test Failure in the Clinical Population: A Systematic Review and Meta-Analysis of Prevalence Rates. Neuropsychol Rev 2024;34:299-319. PMID: 36872398. PMCID: PMC10920461. DOI: 10.1007/s11065-023-09582-7.
Abstract
Performance validity tests (PVTs) are used to measure the validity of the obtained neuropsychological test data. However, when an individual fails a PVT, the likelihood that failure truly reflects invalid performance (i.e., the positive predictive value) depends on the base rate in the context in which the assessment takes place. Therefore, accurate base rate information is needed to guide interpretation of PVT performance. This systematic review and meta-analysis examined the base rate of PVT failure in the clinical population (PROSPERO number: CRD42020164128). PubMed/MEDLINE, Web of Science, and PsycINFO were searched to identify articles published up to November 5, 2021. Main eligibility criteria were a clinical evaluation context and utilization of stand-alone and well-validated PVTs. Of the 457 articles scrutinized for eligibility, 47 were selected for systematic review and meta-analyses. The pooled base rate of PVT failure for all included studies was 16%, 95% CI [14, 19]. High heterogeneity existed among these studies (Cochran's Q = 697.97, p < .001; I2 = 91%; τ2 = 0.08). Subgroup analysis indicated that pooled PVT failure rates varied across clinical context, presence of external incentives, clinical diagnosis, and utilized PVT. Our findings can be used for calculating clinically applied statistics (i.e., positive and negative predictive values, and likelihood ratios) to increase the diagnostic accuracy of performance validity determination in clinical evaluation. Future research is necessary with more detailed recruitment procedures and sample descriptions to further improve the accuracy of the base rate of PVT failure in clinical practice.
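The clinically applied statistics the authors mention follow directly from Bayes' theorem. A sketch of the computation; only the 16% base rate comes from the review, while the 60% sensitivity and 90% specificity below are assumed illustrative values:

```python
def predictive_values(sensitivity, specificity, base_rate):
    """Convert test accuracy plus a base rate into PPV, NPV, and likelihood ratios."""
    tp = sensitivity * base_rate              # true-positive proportion
    fp = (1 - specificity) * (1 - base_rate)  # false-positive proportion
    fn = (1 - sensitivity) * base_rate
    tn = specificity * (1 - base_rate)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv, sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# A hypothetical PVT with 60% sensitivity and 90% specificity, applied at the
# pooled clinical base rate of 16% reported above:
ppv, npv, lr_pos, lr_neg = predictive_values(0.60, 0.90, 0.16)
print(f"PPV={ppv:.2f} NPV={npv:.2f} LR+={lr_pos:.1f} LR-={lr_neg:.2f}")
# PPV=0.53 NPV=0.92 LR+=6.0 LR-=0.44
```

At this base rate, roughly half of single-PVT failures would be false positives even with 90% specificity, which is the interpretive point the review makes.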
Affiliation(s)
- Jeroen J Roor
- Department of Medical Psychology, VieCuri Medical Center, Venlo, The Netherlands.
- School for Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands.
- Maarten J V Peters
- Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Brechje Dandachi-FitzGerald
- Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Faculty of Psychology, Open University, Heerlen, The Netherlands
- Rudolf W H M Ponds
- School for Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Department of Medical Psychology, Amsterdam University Medical Centres, location VU, Amsterdam, The Netherlands
7. Boress K, Gaasedelen O, Kim JH, Basso MR, Whiteside DM. Examination of the relationship between symptom and performance validity measures across referral subtypes. J Clin Exp Neuropsychol 2024;46:162-171. PMID: 37791494. DOI: 10.1080/13803395.2023.2261633.
Abstract
INTRODUCTION The extent to which performance validity tests (PVTs) and symptom validity tests (SVTs) measure separate constructs is unclear. Prior research using the Minnesota Multiphasic Personality Inventory (MMPI-2 and MMPI-2-RF) suggested that PVTs and SVTs are separate but related constructs. However, the relationship between Personality Assessment Inventory (PAI) SVTs and PVTs has not been explored. This study aimed to replicate previous MMPI research using the PAI, exploring the relationship between PVTs and overreporting SVTs across three subsamples: neurodevelopmental (attention-deficit/hyperactivity disorder (ADHD)/learning disorder), psychiatric, and mild traumatic brain injury (mTBI). METHODS Participants included 561 consecutive referrals who completed the Test of Memory Malingering (TOMM) and the PAI. Three subgroups were created based on referral question. The relationship between PAI SVTs and the PVT was evaluated through multiple regression analysis. RESULTS The relationship between PAI symptom-overreporting SVTs, including Negative Impression Management (NIM), Malingering Index (MAL), and Cognitive Bias Scale (CBS), and the PVT varied by referral subgroup. Specifically, overreporting on CBS, but not NIM or MAL, significantly predicted poorer PVT performance in the full sample and the mTBI sample. In contrast, none of the overreporting SVTs significantly predicted PVT performance in the ADHD/learning disorder sample, whereas all SVTs predicted PVT performance in the psychiatric sample. CONCLUSIONS The results partially replicated prior research comparing SVTs and PVTs and suggested that the constructs measured by SVTs and PVTs vary by population. The results support the necessity of both PVTs and SVTs in clinical neuropsychological practice.
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Jeong Hye Kim
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Douglas M Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
8. Denning JH, Horner MD. The impact of race and other demographic factors on the false positive rates of five embedded Performance Validity Tests (PVTs) in a Veteran sample. J Clin Exp Neuropsychol 2024;46:25-35. PMID: 38353039. DOI: 10.1080/13803395.2024.2314737.
Abstract
INTRODUCTION It is common to use normative adjustments based on race to maintain accuracy when interpreting cognitive test results during neuropsychological assessment. However, embedded performance validity tests (PVTs) do not adjust for these racial differences and may produce elevated false-positive rates in African American/Black (AA) samples compared to European American/White (EA) samples. METHODS Veterans without Major Neurocognitive Disorder completed an outpatient neuropsychological assessment and were deemed to be performing in a valid manner (i.e., passing both Trial 1 of the Test of Memory Malingering (TOMM1) and the Medical Symptom Validity Test (MSVT); n = 531, EA = 473, AA = 58). Five embedded PVTs were administered to all patients: WAIS-III/IV Processing Speed Index (PSI), Brief Visuospatial Memory Test-Revised Discrimination Index (BVMT-R), Trail Making Test Part A (TMT-A, seconds), California Verbal Learning Test-II (CVLT-II) Forced Choice, and WAIS-III/IV Digit Span scaled score. Individual PVT false-positive rates, as well as the rate of failing two or more embedded PVTs, were calculated. RESULTS Failure rates on two embedded PVTs (PSI, TMT-A), and the total number of PVTs failed, were higher in the AA sample. The PSI and TMT-A remained significantly impacted by race after accounting for age, education, sex, and presence of Mild Neurocognitive Disorder. PVT failure rates greater than 10% (considered false positives) occurred in both groups (AA: PSI, TMT-A, and BVMT-R, 12-24%; EA: BVMT-R, 17%). Failing two or more PVTs (AA = 9%, EA = 4%) was impacted by education and Mild Neurocognitive Disorder but not by race. CONCLUSIONS Individual (timed) PVTs showed higher false-positive rates in the AA sample even after accounting for demographic factors and diagnosis of Mild Neurocognitive Disorder. Requiring failure on two or more embedded PVTs reduced false-positive rates to acceptable levels (10% or less) in both groups and was not significantly influenced by race.
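Why requiring two or more failures helps: under a simplified independence assumption, the probability that a validly performing examinee fails at least k of n embedded PVTs is binomial. A sketch using an illustrative 15% per-test false-positive rate, in the range reported above (real PVT failures are correlated, so this is only a rough model):

```python
from math import comb

def p_fail_at_least(k, n, p):
    """P(failing >= k of n PVTs) for a valid examinee, given per-test
    false-positive rate p and assuming failures are independent."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Flagging on any single failure among 5 tests vs. requiring >= 2 failures:
print(f"{p_fail_at_least(1, 5, 0.15):.3f}")  # 0.556
print(f"{p_fail_at_least(2, 5, 0.15):.3f}")  # 0.165
```

Even this idealized model shows the two-failure rule cutting the aggregate false-positive rate by more than two-thirds relative to flagging on any single failure.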
Affiliation(s)
- John H Denning
- Mental Health Service, Ralph H. Johnson Veterans Affairs Health Care System, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Michael David Horner
- Mental Health Service, Ralph H. Johnson Veterans Affairs Health Care System, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
9. Peak AM, Marceaux JC, Chicota-Carroll C, Soble JR. Cross-validation of the Trail Making Test as a non-memory-based embedded performance validity test among veterans with and without cognitive impairment. J Clin Exp Neuropsychol 2024;46:16-24. PMID: 38007610. DOI: 10.1080/13803395.2023.2287784.
Abstract
OBJECTIVE This study cross-validated multiple Trail Making Test (TMT) Parts A and B scores as non-memory-based embedded performance validity tests (PVTs) for detecting invalid neuropsychological performance among veterans with and without cognitive impairment. METHOD Data were collected from a demographically and diagnostically diverse mixed clinical sample of 100 veterans undergoing outpatient neuropsychological evaluation at a Southwestern VA Medical Center. As part of a larger battery of neuropsychological tests, all veterans completed TMT A and B and four independent criterion PVTs, which were used to classify veterans into valid (n = 75) and invalid (n = 25) groups. Within the valid group, 47% (n = 35) were cognitively impaired. RESULTS In the overall sample, all embedded PVTs derived from TMT A and B raw and demographically corrected T-scores significantly differed between validity groups (ηp² = .21-.31), with significant areas under the curve (AUCs) of .72-.78 and 32-48% sensitivity (≥91% specificity) at optimal cut-scores. When subdivided by cognitive impairment status (i.e., valid-unimpaired vs. invalid; valid-impaired vs. invalid), all TMT scores yielded significant AUCs of .80-.88 and 56-72% sensitivity (≥90% specificity) at optimal cut-scores. Among veterans with cognitive impairment, neither TMT A nor B raw scores significantly differentiated the invalid group from the valid-cognitively impaired group; demographically corrected T-scores did differentiate the groups but had poor classification accuracy (AUCs = .66-.68) and reduced sensitivity of 28-44% (≥91% specificity). CONCLUSIONS Embedded PVTs derived from TMT Parts A and B raw and T-scores accurately differentiated valid from invalid neuropsychological performance among veterans without cognitive impairment; the demographically corrected T-scores were generally more robust and more consistent with prior studies than raw scores. By contrast, TMT embedded PVTs had poor accuracy and low sensitivity among veterans with cognitive impairment, suggesting limited utility as PVTs in populations with cognitive dysfunction.
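The "optimal cut-scores (≥90% specificity)" reported here reflect a standard procedure: scan candidate cutoffs and keep the one that maximizes sensitivity while holding specificity at or above the floor. A sketch with simulated completion-time scores; all values and distributions below are hypothetical, not the study's data:

```python
import numpy as np

def optimal_cutoff(valid_scores, invalid_scores, min_specificity=0.90):
    """Return (cut, sensitivity, specificity) maximizing sensitivity subject to
    the specificity floor; assumes higher scores (e.g., TMT seconds) suggest
    invalid performance."""
    best = None
    for cut in np.unique(np.concatenate([valid_scores, invalid_scores])):
        spec = float(np.mean(valid_scores < cut))     # valid examinees pass
        sens = float(np.mean(invalid_scores >= cut))  # invalid examinees flagged
        if spec >= min_specificity and (best is None or sens > best[1]):
            best = (float(cut), sens, spec)
    return best

rng = np.random.default_rng(0)
valid = rng.normal(45, 10, 75)    # hypothetical TMT-A seconds, valid group
invalid = rng.normal(75, 20, 25)  # hypothetical invalid group
cut, sens, spec = optimal_cutoff(valid, invalid)
print(f"cut={cut:.0f}s sensitivity={sens:.0%} specificity={spec:.0%}")
```

Holding the specificity floor fixed while group overlap increases (as with cognitive impairment) is exactly what drives the sensitivity losses the abstract describes.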
Affiliation(s)
- Ashley M Peak
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
10. Rohling ML, Demakis GJ, Langhinrichsen-Rohling J. Lowered cutoffs to reduce false positives on the Word Memory Test. J Clin Exp Neuropsychol 2024;46:67-79. PMID: 38362939. DOI: 10.1080/13803395.2024.2314736.
Abstract
OBJECTIVE To adjust the decision criterion for the Word Memory Test (WMT; Green, 2003) to minimize the frequency of false positives. METHOD Archival data were combined into a database (n = 3,210) to examine the best cut score for the WMT. We compared results based on the original scoring rules with those based on adjusted scoring rules, using a criterion composed of 16 performance validity tests (PVTs) exclusive of the WMT. Cutoffs based on peer-reviewed publications and test manuals were used, and the resulting PVT composite was considered the best estimate of validity status. We targeted a specificity of .90, with a false-positive rate of less than .10 across multiple samples. RESULTS Each examinee was administered the WMT, as well as an average of 5.5 (SD = 2.5) other PVTs. Based on the original scoring rules of the WMT, 31.8% of examinees failed. Using a single failure on the criterion PVT (C-PVT), the base rate of failure was 45.9%; requiring two or more failures on the C-PVT dropped the failure rate to 22.8%. Applying a contingency analysis (χ²) to the two-failures C-PVT model and the original WMT rules yielded only 65.3% agreement. However, using our adjusted rules for the WMT, which relied on only the IR and DR WMT subtest scores with a cutoff of 77.5%, agreement between the adjusted rules and the C-PVT criterion equaled 80.8%, an improvement of 12.1%. The adjustment resulted in a 49.2% reduction in false positives while preserving a sensitivity of 53.6%. The specificity of the new rules was 88.8%, for a false-positive rate of 11.2%. CONCLUSIONS Results supported lowering the cut score for correct responding from 82.5% to 77.5%. We also recommend discontinuing use of the Consistency subtest score in determining WMT failure.
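The agreement analysis pairs each examinee's WMT classification with the C-PVT classification in a 2×2 table: percent agreement is the diagonal proportion, and the contingency test is a Pearson χ². A sketch with small hypothetical counts (not the study's data):

```python
def two_by_two(a, b, c, d):
    """Percent agreement and Pearson chi-square for the 2x2 table
    [[a, b], [c, d]]: rows = WMT fail/pass, cols = C-PVT fail/pass."""
    n = a + b + c + d
    agreement = (a + d) / n  # both classifications agree on the diagonal
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return agreement, chi2

# Hypothetical counts for illustration:
agreement, chi2 = two_by_two(a=40, b=10, c=15, d=60)
print(f"agreement={agreement:.0%} chi2={chi2:.1f}")  # agreement=80% chi2=43.8
```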
11. Williamson ES, Arentsen TJ, Roper BL, Pedersen HA, Shultz LA, Crouse EM. The Importance of the Morel Emotional Numbing Test Instructions: A Diagnosis Threat Induction Study. Arch Clin Neuropsychol 2024;39:35-50. PMID: 37449530. DOI: 10.1093/arclin/acad048.
Abstract
OBJECTIVE Marketed as a validity test that detects feigning of posttraumatic stress disorder (PTSD), the Morel Emotional Numbing Test for PTSD (MENT) instructs examinees that PTSD may negatively affect performance on the measure. This study explored whether MENT performance depends on inclusion of "PTSD" in its instructions, and the nature of the MENT as a performance validity test versus a symptom validity test (PVT/SVT). METHOD A total of 358 participants completed the MENT as part of a clinical neuropsychological evaluation. Participants were administered either the standard instructions (SI) that referenced "PTSD" or revised instructions (RI) that did not; others were administered instructions that referenced "ADHD" rather than PTSD (AI). Comparisons were conducted on those who presented with concerns for potential traumatic-stress-related symptoms (SI vs. RI-1) or attention deficit (AI vs. RI-2). RESULTS Participants in either the SI or AI condition produced more MENT errors than those in their respective RI conditions. The relationship between MENT errors and other SVTs/PVTs was significantly stronger in the SI vs. RI-1 comparison, such that errors correlated with self-reported trauma-related symptoms in the SI but not the RI-1 condition. MENT failure also predicted PVT failure at nearly four times the rate of SVT failure. CONCLUSIONS Findings suggest that the MENT relies on overt reference to PTSD in its instructions, which links it to the growing body of literature on "diagnosis threat" effects. The MENT may be considered a measure of suggestibility. Ethical considerations are discussed, as are the construct(s) measured by PVTs versus SVTs.
Affiliation(s)
- Emily S Williamson
- Department of Veterans Affairs, Lt. Col. Luke Weathers, Jr. VA Medical Center, Memphis, TN, USA
- Timothy J Arentsen
- Department of Veterans Affairs, Lt. Col. Luke Weathers, Jr. VA Medical Center, Memphis, TN, USA
- Department of Psychiatry, University of Tennessee Health Science Center, Memphis, TN, USA
- Brad L Roper
- Department of Veterans Affairs, Lt. Col. Luke Weathers, Jr. VA Medical Center, Memphis, TN, USA
- Department of Psychiatry, University of Tennessee Health Science Center, Memphis, TN, USA
- Heather A Pedersen
- Department of Veterans Affairs, Lt. Col. Luke Weathers, Jr. VA Medical Center, Memphis, TN, USA
- Laura A Shultz
- Department of Veterans Affairs, Lt. Col. Luke Weathers, Jr. VA Medical Center, Memphis, TN, USA
- Ellen M Crouse
- Department of Veterans Affairs, Lt. Col. Luke Weathers, Jr. VA Medical Center, Memphis, TN, USA
- Department of Psychiatry, University of Tennessee Health Science Center, Memphis, TN, USA
12. Ton Loy AF, Lee JE, Asimakopoulos G, Sakamoto MS, Merritt VC. Symptom attribution is a stronger predictor of PVT-failure than symptom endorsement in treatment-seeking Veterans with remote mTBI history: A pilot study. Appl Neuropsychol Adult 2023:1-6. PMID: 38113857. DOI: 10.1080/23279095.2023.2293979.
Abstract
OBJECTIVE To examine relationships between performance validity testing (PVT), neurobehavioral symptom endorsement, and symptom attribution in Veterans with a history of mild traumatic brain injury (mTBI). METHOD Participants included treatment-seeking Veterans (n = 37) with remote mTBI histories who underwent a neuropsychological assessment and completed a modified version of the Neurobehavioral Symptom Inventory (NSI) to assess symptom endorsement and symptom attribution (the latter evaluated by having Veterans indicate whether they believed each NSI symptom was caused by their mTBI). Veterans were divided into two subgroups, PVT-Valid (n = 25) and PVT-Invalid (n = 12). RESULTS Independent-samples t-tests showed that two of five symptom endorsement variables and all five symptom attribution variables differed significantly between PVT groups (PVT-Invalid > PVT-Valid; Cohen's d = 0.67-1.02). Logistic regression analyses adjusting for PTSD symptoms showed that symptom endorsement (Nagelkerke's R² = .233) and symptom attribution (Nagelkerke's R² = .279) significantly distinguished between PVT groups. According to the Wald criterion, greater symptom endorsement (OR = 1.09) and higher attribution of symptoms to mTBI (OR = 1.21) each reliably predicted PVT failure. CONCLUSIONS While both symptom endorsement and symptom attribution were significantly associated with PVT failure, our preliminary results suggest that symptom attribution is the stronger predictor. Results highlight the importance of assessing symptom attribution to mTBI in this population.
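The odds ratios reported here come from logistic regression: each OR equals exp(b) for a one-point increase in the predictor. A sketch of how an OR of 1.21 per attribution point translates into predicted failure probabilities; the intercept below is hypothetical, chosen only for illustration, not estimated from the study:

```python
from math import exp, log

def p_pvt_failure(intercept, coef, x):
    """Logistic model: P(failure) = 1 / (1 + exp(-(b0 + b1 * x)))."""
    return 1 / (1 + exp(-(intercept + coef * x)))

b1 = log(1.21)  # coefficient implied by OR = 1.21 per attribution point
b0 = -2.0       # hypothetical intercept for illustration
low, high = p_pvt_failure(b0, b1, 5), p_pvt_failure(b0, b1, 15)
print(f"P(failure) at 5 points: {low:.2f}; at 15 points: {high:.2f}")
# P(failure) at 5 points: 0.26; at 15 points: 0.70
```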
Affiliation(s)
- Adan F Ton Loy
- Research Service, VA San Diego Healthcare System (VASDHS), San Diego, CA, USA
- Jeong-Eun Lee
- Research Service, VA San Diego Healthcare System (VASDHS), San Diego, CA, USA
| | | | - McKenna S Sakamoto
- Department of Psychology, Penn State University, University Park, PA, USA
| | - Victoria C Merritt
- Research Service, VA San Diego Healthcare System (VASDHS), San Diego, CA, USA
- Department of Psychiatry, School of Medicine, UC San Diego, La Jolla, CA, USA
- Center of Excellence for Stress and Mental Health, VASDHS, San Diego, CA, USA
| |
13
Henry GK. Detection of noncredible cognitive performance with Wechsler Memory Scale-IV measures in mild traumatic brain injury litigants. Appl Neuropsychol Adult 2023:1-8. [PMID: 38039520 DOI: 10.1080/23279095.2023.2287139]
Abstract
OBJECTIVE To investigate the operating characteristics of selective measures on the Wechsler Memory Scale-IV (WMS-IV) to predict noncredible neurocognitive dysfunction in a sample of mild traumatic brain injury (mTBI) litigants. METHOD Participants included 110 adults who underwent a comprehensive neuropsychological examination. Criterion groups were formed based upon their performance on stand-alone measures of cognitive performance validity testing (PVT). RESULTS Participants failing two stand-alone PVTs exhibited significantly lower scores across all WMS-IV dependent variables of interest compared to participants who passed both PVTs. Participants who failed one PVT were excluded. Bivariate logistic regression revealed that all six dependent variables were significant predictors of PVT status. The best prediction model consisted of three WMS-IV variables: Logical Memory Delayed Recall (LM2), Logical Memory Recognition (LMR), and Visual Reproduction Recognition (VRR). This model demonstrated 90.2% classification accuracy, 0.89 sensitivity, 0.92 specificity, and an area under the receiver operating characteristic (ROC) curve of 0.957. CONCLUSION The current empirically derived cut scores and logit equation for the WMS-IV may be an additional consideration in analyzing data validity and noncredible performance in mTBI personal injury litigants ages 18-69.
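The accuracy figures quoted in abstracts like this one (sensitivity, specificity, overall accuracy) all derive from a 2×2 classification table crossing predicted against actual validity status. A minimal sketch with hypothetical cell counts chosen only to roughly mirror the reported operating characteristics, not the study's actual data:

```python
def classification_stats(tp, fn, tn, fp):
    """Basic diagnostic accuracy statistics from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall hit rate
    return sensitivity, specificity, accuracy

# Hypothetical counts: 40 of 45 invalid performers flagged, 55 of 60 valid
# performers cleared (~0.89 sensitivity, ~0.92 specificity).
sens, spec, acc = classification_stats(tp=40, fn=5, tn=55, fp=5)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, accuracy={acc:.3f}")
```

Note that overall accuracy depends on the mix of valid and invalid cases in the sample, which is why sensitivity and specificity are reported separately.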
14
Picon EL, Wardell V, Palombo DJ, Todd RM, Aziz B, Bedi S, Silverberg ND. Factors perpetuating functional cognitive symptoms after mild traumatic brain injury. J Clin Exp Neuropsychol 2023; 45:988-1002. [PMID: 37602857 DOI: 10.1080/13803395.2023.2247601]
Abstract
INTRODUCTION Self-reported memory difficulties (forgetting familiar names, misplacing objects) often persist long after a mild traumatic brain injury (mTBI), despite normal neuropsychological test performance. This clinical presentation may be a manifestation of a functional cognitive disorder (FCD). Several mechanisms underlying FCD have been proposed, including metacognitive impairment, memory perfectionism, and misdirected attention, as well as depression or anxiety-related explanations. This study aims to explore these candidate perpetuating factors in mTBI, to advance our understanding of why memory symptoms frequently persist following mTBI. METHODS A cross-sectional study of 67 adults (n = 39 with mTBI, on average 25 months post-injury, and n = 28 healthy controls). Participants completed standardized questionnaires (including the Functional Memory Disorder Inventory; FMDI), a metacognitive task (to quantify discrepancies between their trial-by-trial accuracy and confidence), and a brief neuropsychological test battery. We assessed candidate mechanisms in two ways: (1) between-groups, comparing participants with mTBI to healthy controls, and (2) within-group, examining their associations with functional memory symptom severity (FMDI) in the mTBI group. RESULTS Participants with mTBI performed similarly to controls on objective measures of memory ability but reported much more frequent memory lapses in daily life. Contrary to expectations, metacognitive efficiency did not differentiate the mTBI and control groups and was not associated with functional memory symptoms. Memory perfectionism was strongly associated with greater functional memory symptoms among participants with mTBI but did not differ between groups when accounting for age. Depression and checking behaviors produced consistent results across between-groups and within-group analyses: both were elevated in the mTBI group relative to controls and were associated with greater functional memory symptoms within the mTBI group. CONCLUSIONS This study highlights promising (e.g., depression, checking behaviors) and unlikely (e.g., metacognitive impairment) mechanisms underlying functional memory symptoms after mTBI, to guide future research and treatment.
Affiliation(s)
- Edwina L Picon
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Victoria Wardell
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Daniela J Palombo
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
- Rebecca M Todd
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
- Bilal Aziz
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Sanjana Bedi
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Noah D Silverberg
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
- Rehabilitation Research Program, Centre for Aging SMART, Vancouver Coastal Health Research Institute, Vancouver, Canada
15
Crișan I, Sava FA. Validity assessment in Eastern Europe: cross-validation of the Dot Counting Test and MODEMM against the TOMM-1 and Rey-15 in a Romanian mixed clinical sample. Arch Clin Neuropsychol 2023:acad085. [PMID: 37961918 DOI: 10.1093/arclin/acad085]
Abstract
OBJECTIVE This study investigated performance validity in the understudied Romanian clinical population by exploring classification accuracies of the Dot Counting Test (DCT) and the first Romanian performance validity test (PVT), the Memory of Objects and Digits and Evaluation of Memory Malingering (MODEMM), in a heterogeneous clinical sample. METHODS We evaluated 54 outpatients (26 females; M age = 62.02, SD = 12.3; M education = 2.41, SD = 2.82) with the Test of Memory Malingering 1 (TOMM-1), Rey Fifteen Item Test (Rey-15; free recall and recognition trials), DCT, MODEMM, and MMSE/MoCA as part of their neuropsychological assessment. Accuracy parameters and failure base rates were computed for the DCT and MODEMM indicators against the TOMM-1 and Rey-15. Two patient groups were constructed according to psychometrically defined credible/noncredible performance (i.e., pass/fail both TOMM-1 and Rey-15). RESULTS Similar to findings from other cultures, a cutoff of ≥18 on the DCT E-score produced the best combination of sensitivity (0.50-0.57) and specificity (≥0.90). MODEMM indicators based on recognition accuracy, inconsistencies, and inclusion false positives generated sensitivities of 0.75-0.86 at ≥0.90 specificities. Multivariable models of MODEMM indicators reached perfect sensitivity at ≥0.90 specificity against two PVTs. Patients who failed the TOMM-1 and Rey-15 were significantly more likely to fail the DCT and MODEMM than patients who passed both PVTs. CONCLUSIONS Our results offer proof of concept for the DCT's cross-cultural validity and the applicability of the MODEMM to Romanian clinical examinees, further recommending the use of heterogeneous validity indicators in clinical assessments.
Affiliation(s)
- Iulia Crișan
- Department of Psychology, West University of Timișoara, Timișoara 300223, Romania
- Florin Alin Sava
- Department of Psychology, West University of Timișoara, Timișoara 300223, Romania
16
Kim MS, Torres K, Kang HJ, Drane DL. Specificity of performance validity tests in patients with confirmed epilepsy. Clin Neuropsychol 2023; 37:1530-1547. [PMID: 36219095 DOI: 10.1080/13854046.2022.2127424]
Abstract
Objective: While assessment of performance validity is essential to neuropsychological evaluations, use of performance validity tests (PVTs) in an epilepsy population has raised concerns due to factors that may result in performance fluctuations. The current study assessed whether specificity was maintained at previously suggested cutoffs in a confirmed epilepsy population on the Warrington Recognition Memory Test (WRMT) - Words and the Test of Memory Malingering (TOMM). Method: Eighty-two confirmed epilepsy patients were administered the WRMT-Words and TOMM as part of a standardized neuropsychological evaluation. Frequency tables were used to investigate specificity rates on these two PVTs. Results: The suggested WRMT-Words Accuracy Score cutoff of ≤42 was associated with a specificity rate of 90.2%. Five of the 8 individuals falling at or below the Accuracy Score cutoff scored exactly 42, suggesting specificity could be further improved by slightly lowering the cutoff. The WRMT-Words Total Time cutoff of ≥207 seconds was associated with 95.1% specificity. A TOMM Trial 1 cutoff of <40 was associated with 93.9% specificity, while the established cutoff of <45 on Trial 2 and the Retention Trial yielded specificity rates of 98.6% and 100.0%, respectively. Conclusions: Our findings demonstrate acceptable performance on two PVTs in a select confirmed epilepsy population without a history of brain surgery, active seizures during testing, or low IQ, irrespective of factors such as seizure type, seizure lateralization/localization, and language lateralization. The possible presence of interictal discharges was not controlled for in the current study, which may have contributed to reduced PVT performances.
Affiliation(s)
- Michelle S Kim
- Department of Neurology, University of Washington, Seattle, USA
- Karen Torres
- Department of Neurology, University of Washington, Seattle, USA
- Hyun Jin Kang
- Department of Neurology, University of Washington, Seattle, USA
- Daniel L Drane
- Department of Neurology, University of Washington, Seattle, USA
- Department of Neurology, Emory University, Atlanta, GA, USA
- Department of Pediatrics, Emory University, Atlanta, GA, USA
17
Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part II-Methodological Issues. Neuropsychol Rev 2023; 33:604-623. [PMID: 37594690 DOI: 10.1007/s11065-023-09602-6]
Abstract
Forensic neuropsychological examinations to detect malingering in patients with neurocognitive, physical, and psychological dysfunction have tremendous social, legal, and economic importance. Thousands of studies have been published to develop and validate methods to forensically detect malingering, based largely on approximately 50 validity tests, including embedded and stand-alone performance and symptom validity tests. This is Part II of a two-part review of statistical and methodological issues in the forensic prediction of malingering based on validity tests. The Part I companion paper explored key statistical issues. Part II examines related methodological issues through conceptual analysis, statistical simulations, and reanalysis of findings from prior validity test validation studies. Methodological issues examined include the distinction between analog simulation and forensic studies, the effect of excluding too-close-to-call (TCTC) cases from analyses, the distinction between criterion-referenced and construct validation studies, and the application of the Revised Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) to assess risk of bias in the Test of Memory Malingering (TOMM) validation studies published within approximately the first 20 years after the test's initial publication. Findings include that analog studies are commonly mistaken for forensic validation studies and that construct validation studies are routinely presented as if they were criterion-referenced validation studies. After accounting for the exclusion of TCTC cases, actual classification accuracy was found to be well below claimed levels. QUADAS-2 results revealed that all extant TOMM validation studies had a high risk of bias; not a single one had a low risk of bias. Recommendations include adoption of well-established guidelines from the biomedical diagnostics literature for good-quality criterion-referenced validation studies and examination of implications for malingering determination practices. Design of future studies may hinge on the availability of an incontrovertible reference standard for the malingering status of examinees.
Affiliation(s)
- Christoph Leonhard
- The Chicago School of Professional Psychology at Xavier University of Louisiana, 1 Drexel Dr, Box 200, New Orleans, LA, 70125, USA.
18
Erdodi LA. From "below chance" to "a single error is one too many": Evaluating various thresholds for invalid performance on two forced choice recognition tests. Behav Sci Law 2023; 41:445-462. [PMID: 36893020 DOI: 10.1002/bsl.2609]
Abstract
This study was designed to empirically evaluate the classification accuracy of various definitions of invalid performance on two forced-choice recognition performance validity tests (PVTs): the CVLT-II Forced-Choice Recognition trial (FCR) and the Test of Memory Malingering (TOMM-2). The proportions of examinees responding at or below chance level (as defined by binomial theory) and of examinees making any errors were computed across two mixed clinical samples from the United States and Canada (N = 470) and two sets of criterion PVTs. There was virtually no overlap between the binomial and empirical distributions. Over 95% of patients who passed all PVTs obtained a perfect score. At-chance responding was limited to patients who failed ≥2 PVTs (91% of them failed 3 PVTs). No one scored below chance level on the FCR or TOMM-2. All 40 patients with dementia scored above chance. Although at or below chance level performance provides very strong evidence of non-credible responding, scores above chance level have no negative predictive value. Even at-chance scores on PVTs provide compelling evidence of non-credible presentation. A single error on the FCR or TOMM-2 is highly specific (0.95) to psychometrically defined invalid performance. Defining non-credible responding as below-chance scores is an unnecessarily restrictive threshold that gives most examinees with invalid profiles a Pass.
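"Below chance" on a two-alternative forced-choice test is defined against the binomial distribution: with n items and a 0.5 guessing rate, the probability of scoring at or below k by guessing alone is the binomial CDF at k. A minimal sketch, assuming for illustration a 50-item test (item counts for the specific instruments are in the source articles):

```python
from math import comb

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p): chance of k or fewer correct by guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# On a hypothetical 50-item forced-choice test, chance performance is 25/50.
# A score of 18 or fewer is improbable under pure guessing (one-tailed p < .05),
# which is why at/below-chance scores are such strong evidence of non-credible
# responding, while above-chance scores carry no such guarantee.
print(f"P(score <= 18 | guessing) = {binom_cdf(18, 50):.4f}")
print(f"P(score <= 19 | guessing) = {binom_cdf(19, 50):.4f}")
```

This asymmetry is the study's point: the binomial tail makes below-chance scores nearly impossible for a cooperative examinee, but it says nothing about examinees who feign poorly and still land above chance.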
Affiliation(s)
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
19
Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part I: Statistical Issues. Neuropsychol Rev 2023; 33:581-603. [PMID: 37612531 DOI: 10.1007/s11065-023-09601-7]
Abstract
Forensic neuropsychological examinations with determination of malingering have tremendous social, legal, and economic consequences. Thousands of studies have been published aimed at developing and validating methods to diagnose malingering in forensic settings, based largely on approximately 50 validity tests, including embedded and stand-alone performance validity tests. This is the first part of a two-part review. Part I explores three statistical issues related to the validation of validity tests as predictors of malingering, including (a) the need to report a complete set of classification accuracy statistics, (b) how to detect and handle collinearity among validity tests, and (c) how to assess the classification accuracy of algorithms for aggregating information from multiple validity tests. In the Part II companion paper, three closely related research methodological issues will be examined. Statistical issues are explored through conceptual analysis, statistical simulations, and reanalysis of findings from prior validation studies. Findings suggest extant neuropsychological validity tests are collinear and contribute redundant information to the prediction of malingering among forensic examinees. Findings further suggest that existing diagnostic algorithms may miss diagnostic accuracy targets under most realistic conditions. The review makes several recommendations to address these concerns, including (a) reporting of full confusion table statistics with 95% confidence intervals in diagnostic trials, (b) the use of logistic regression, and (c) adoption of the consensus model on the "transparent reporting of a multivariable prediction model for individual prognosis or diagnosis" (TRIPOD) in the malingering literature.
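The recommendation to report full confusion-table statistics with 95% confidence intervals can be illustrated with a Wilson score interval for a single proportion such as sensitivity. This is generic statistics, not the review's own code, and the counts are hypothetical:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion,
    e.g. sensitivity = hits / (hits + misses)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical trial: 45 of 50 invalid performers detected (sensitivity = .90).
lo, hi = wilson_ci(45, 50)
print(f"sensitivity = 0.90, 95% CI [{lo:.3f}, {hi:.3f}]")
```

With a validation sample of only 50 cases, the interval spans roughly 17 percentage points, which is precisely why point estimates of sensitivity or specificity reported without intervals can overstate how well a cutoff is pinned down.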
Affiliation(s)
- Christoph Leonhard
- The Chicago School of Professional Psychology at Xavier University of Louisiana, Box 200, 1 Drexel Dr, New Orleans, LA, 70125, USA.
20
Malik HB, Norman JB. Best Practices and Methodological Strategies for Addressing Generalizability in Neuropsychological Assessment. J Pediatr Neuropsychol 2023; 9:47-63. [PMID: 37250805 PMCID: PMC10182845 DOI: 10.1007/s40817-023-00145-5]
Abstract
Generalizability considerations are widely discussed and a core foundation for understanding when and why treatment effects will replicate across sample demographics. However, guidelines on assessing and reporting generalizability-related factors differ across fields and are inconsistently applied. This paper synthesizes obstacles and best practices to apply recent work on measurement and sample diversity. We present a brief history of how knowledge in psychology has been constructed, with implications for who has been historically prioritized in research. We then review how generalizability remains a contemporary threat to neuropsychological assessment and outline best practices for researchers and clinical neuropsychologists. In doing so, we provide concrete tools to evaluate whether a given assessment is generalizable across populations and assist researchers in effectively testing and reporting treatment differences across sample demographics.
Affiliation(s)
- Hinza B. Malik
- Department of Psychology, University of North Carolina Wilmington, 601 South College Road, Wilmington, NC 28403-5612 USA
- Jasmine B. Norman
- Department of Psychology, University of North Carolina Wilmington, 601 South College Road, Wilmington, NC 28403-5612 USA
21
Davis JJ. Time is money: Examining the time cost and associated charges of common performance validity tests. Clin Neuropsychol 2023; 37:475-490. [PMID: 35414332 DOI: 10.1080/13854046.2022.2063190]
Abstract
Objective: This study presents data on the time cost and associated charges for common performance validity tests (PVTs). It also applies an approach from cost-effectiveness research to the comparison of tests that incorporates both cost and classification accuracy. Method: A recent test usage survey was used to identify PVTs in common use among adult neuropsychologists. Data on test administration and scoring time were aggregated. Charges per test were calculated. A cost-effectiveness approach was applied to compare pairs of tests from three studies using data on test administration time and classification accuracy, operationalized as improvement in posterior probability beyond base rate. Charges per unit increase in posterior probability over base rate were calculated for base rates of invalidity ranging from 10 to 40%. Results: Ten commonly used PVT measures showed a wide range in administration and scoring time, from 1 to 3 minutes to over 40 minutes, with associated charge estimates from $4 to $284. Cost-effectiveness comparisons illustrated the nuance in test selection and the benefit of considering cost in relation to outcome rather than prioritizing time (i.e., cost minimization) or classification accuracy alone. Conclusions: Findings extend recent research efforts to fill knowledge gaps related to the cost of neuropsychological evaluation. The cost-effectiveness approach warrants further study in other samples with different neuropsychological and outcome measures.
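The outcome metric used here, improvement in posterior probability beyond base rate, follows from a standard Bayes computation: combine the base rate of invalidity with a test's sensitivity and false-positive rate to get the post-test probability after a PVT failure. A minimal sketch with illustrative numbers (the sensitivity, specificity, and $50 charge are assumptions, not the paper's figures):

```python
def posterior_after_failure(base_rate, sensitivity, specificity):
    """Post-test probability of invalid performance given a failed PVT (Bayes' theorem)."""
    p_fail = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    return sensitivity * base_rate / p_fail

# Hypothetical PVT: sensitivity .60, specificity .90, charge $50 per administration.
# "Gain" is the improvement in posterior probability beyond base rate, expressed
# in percentage points; the ratio is the charge per point of gain.
for base_rate in (0.10, 0.20, 0.30, 0.40):
    post = posterior_after_failure(base_rate, 0.60, 0.90)
    gain_points = 100 * (post - base_rate)
    print(f"base={base_rate:.0%}  posterior={post:.2f}  charge per point of gain=${50 / gain_points:.2f}")
```

Running the loop over the 10-40% base-rate range mirrors the paper's design: the same test buys different amounts of diagnostic information, and therefore different value per dollar, depending on the setting's base rate of invalidity.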
Affiliation(s)
- Jeremy J Davis
- Department of Neurology, Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
22
Bajjaleh C, Braw YC, Elkana O. Adaptation and initial validation of the Arabic version of the Word Memory Test (WMT-ARB). Appl Neuropsychol Adult 2023; 30:204-213. [PMID: 34043924 DOI: 10.1080/23279095.2021.1923495]
Abstract
BACKGROUND Feigning of cognitive impairment is common in neuropsychological assessments, especially in medicolegal settings. The Word Memory Test (WMT) is a forced-choice recognition memory performance validity test (PVT) widely used to detect noncredible performance. Though the WMT has been translated into several languages, no version existed in Arabic, one of the world's most widely spoken languages. The aim of the current study was to evaluate the convergent validity of the Arabic adaptation of the WMT (WMT-ARB) among Israeli Arabic speakers. METHODS We adapted the WMT to Arabic using the back-translation method and in accordance with relevant guidelines. We then randomly assigned healthy Arabic-speaking adults (N = 63) to either a simulation or an honest-control condition. Participants then performed neuropsychological tests that included the WMT-ARB and the Test of Memory Malingering (TOMM), a well-validated nonverbal PVT. RESULTS The WMT-ARB had high split-half reliability, and its measures were significantly correlated with those of the TOMM (p < .001). High concordance was found in the classification of participants using the WMT-ARB and TOMM (specificity = 94.29% and sensitivity = 100% using the conventional TOMM Trial 2 cutoff as the gold standard). As expected, simulators' accuracy on the WMT-ARB was significantly lower than that of honest controls. None of the demographic variables significantly correlated with WMT-ARB measures. CONCLUSION The WMT-ARB shows initial evidence of reliability and validity, underscoring its potential use in the large population of Arabic speakers and the universality of detecting noncredible performance. The findings, however, are preliminary and mandate validation in clinical settings.
Affiliation(s)
- Christine Bajjaleh
- Department of Psychology, the Academic College of Tel Aviv-Yaffo, Tel Aviv-Yaffo, Israel
- Yoram C Braw
- Department of Psychology, Ariel University, Ariel, Israel
- Odelia Elkana
- Department of Psychology, the Academic College of Tel Aviv-Yaffo, Tel Aviv-Yaffo, Israel
23
Low rate of performance validity failures among individuals with bipolar disorder. J Int Neuropsychol Soc 2023; 29:298-305. [PMID: 35403599 DOI: 10.1017/s1355617722000145]
Abstract
OBJECTIVE Assessing performance validity is imperative in both clinical and research contexts, as data interpretation presupposes adequate participation from examinees. Performance validity tests (PVTs) are used to identify instances in which results cannot be interpreted at face value. This study explored the hit rates for two frequently used PVTs in a research sample of individuals with and without histories of bipolar disorder (BD). METHOD As part of an ongoing longitudinal study of individuals with BD, we examined the performance of 736 individuals with BD and 255 individuals with no history of mental health disorder on the Test of Memory Malingering (TOMM) and the California Verbal Learning Test forced choice trial (CVLT-FC) at three time points. RESULTS Undiagnosed individuals demonstrated a 100% pass rate on PVTs, and individuals with BD passed over 98% of the time. A mixed effects model adjusting for relevant demographic variables revealed no significant difference in TOMM scores between the groups (b = .07, SE = .07, p = .31). On the CVLT-FC, group differences were statistically significant (ps < .001) but not clinically meaningful. CONCLUSIONS Perfect PVT scores were obtained by the majority of individuals, with no differences in failure rates between groups. The tests have specificity above 98% in BD and 100% among non-diagnosed individuals. Further, nearly 90% of individuals with BD obtained perfect scores on both measures, a trend observed at each time point.
24
Hansen ND, Rhoads T, Jennette KJ, Reynolds TP, Ovsiew GP, Resch ZJ, Critchfield EA, Marceaux JC, O'Rourke JJF, Soble JR. Validation of alternative Dot Counting Test E-score cutoffs based on degree of cognitive impairment in veteran and civilian clinical samples. Clin Neuropsychol 2023; 37:402-415. [PMID: 35343379 DOI: 10.1080/13854046.2022.2054863]
Abstract
OBJECTIVE This study examined Dot Counting Test (DCT) performance among patient populations with no/minimal impairment and mild impairment in an attempt to cross-validate a more parsimonious interpretative strategy and to derive optimal E-Score cutoffs. METHOD Participants included clinically referred patients from VA (n = 101) and academic medical center (AMC, n = 183) settings. Patients were separated by validity status (valid/invalid), and two comparison groups were subsequently formed from each sample's valid group: Group 1 included patients with no to minimal cognitive impairment, and Group 2 included those with mild neurocognitive disorder. Analysis of variance tested for differences in rounded and unrounded DCT E-Scores across both comparison groups and the invalid group. Receiver operating characteristic curve analyses identified optimal validity cut scores for each sample, stratified by comparison group. RESULTS In the VA sample, cut scores of ≥13 (rounded) and ≥12.58 (unrounded) differentiated Group 1 from the invalid performers (87% sensitivity/88% specificity), and cut scores of ≥17 (rounded; 58% sensitivity/90% specificity) and ≥16.49 (unrounded; 61% sensitivity/90% specificity) differentiated Group 2 from the invalid group. Similarly, in the AMC sample, a cut score of ≥13 (rounded and unrounded; 75% sensitivity/90% specificity) differentiated Group 1 from the invalid group, whereas cut scores of ≥18 (rounded; 43% sensitivity/94% specificity) and ≥16.94 (unrounded; 46% sensitivity/90% specificity) differentiated Group 2 from the invalid performers. CONCLUSIONS Different cut scores were indicated based on degree of cognitive impairment, providing proof of concept for a more parsimonious interpretative paradigm than using individual cut scores derived for specific diagnostic groups.
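The cut-score derivation described in studies like this can be sketched as a simple sweep: for each candidate E-Score cutoff, compute sensitivity and specificity against the criterion groups, then keep the cutoff that maximizes sensitivity while holding specificity at or above the conventional .90 floor. The scores below are toy data, not the study's:

```python
def best_cutoff(valid_scores, invalid_scores, min_specificity=0.90):
    """Pick the cutoff (score >= cutoff flagged invalid) that maximizes
    sensitivity subject to a minimum specificity constraint."""
    best = None
    for cutoff in sorted(set(valid_scores + invalid_scores)):
        sens = sum(s >= cutoff for s in invalid_scores) / len(invalid_scores)
        spec = sum(s < cutoff for s in valid_scores) / len(valid_scores)
        if spec >= min_specificity and (best is None or sens > best[1]):
            best = (cutoff, sens, spec)
    return best

# Toy example: valid performers cluster at low E-Scores, invalid at high ones.
valid = [8, 9, 10, 10, 11, 11, 12, 12, 13, 20]
invalid = [12, 14, 15, 16, 17, 18, 19, 21, 22, 25]
cutoff, sens, spec = best_cutoff(valid, invalid)
print(f"cutoff >= {cutoff}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Because the valid-group distribution shifts upward with genuine cognitive impairment, running the same sweep separately for more-impaired comparison groups yields higher cutoffs with lower sensitivity, which is the pattern the study reports.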
Affiliation(s)
- Nicholas D Hansen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Tristan P Reynolds
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Edan A Critchfield
- Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Justin J F O'Rourke
- Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
25
Crocker LD, Sullan MJ, Jurick SM, Thomas KR, Davey DK, Hoffman SN, Twamley EW, Jak AJ. Baseline executive functioning moderates treatment-related changes in quality of life in veterans with posttraumatic stress disorder and comorbid traumatic brain injury. J Trauma Stress 2023; 36:94-105. [PMID: 36204974 DOI: 10.1002/jts.22883]
Abstract
Posttraumatic stress disorder (PTSD) treatment has been associated with improvement in quality of life (QOL); however, little is known about the factors, particularly cognitive factors, that moderate treatment-related changes in QOL. Executive functioning (EF) is important for success across all aspects of everyday life and predicts better psychological and physical health, but more work is needed to understand the association between EF and QOL improvements following interventions. We hypothesized that poorer baseline EF would be associated with less improvement in overall life satisfaction and satisfaction with health following PTSD treatment. U.S. veterans who served after the September 11, 2001, terrorist attacks (post-9/11; N = 80) with PTSD and a history of mild-to-moderate traumatic brain injury were randomized to standard cognitive processing therapy (CPT) or CPT combined with cognitive rehabilitation (SMART-CPT). Multilevel modeling was used to examine whether baseline EF performance was associated with changes in QOL scores from pretreatment to follow-up across both groups. Results indicated that poorer baseline performance on EF tests of working memory and inhibition was associated with smaller treatment-related improvements in general life satisfaction and satisfaction with health, rs = .26-.36. Treatment condition did not moderate any results. Future research should examine whether implementing EF-focused techniques before and/or concurrently with CPT for individuals with poorer baseline working memory and inhibition enhances QOL treatment gains, particularly in terms of general life and health-related satisfaction.
Affiliation(s)
- Laura D Crocker: Research Service, VA San Diego Healthcare System, San Diego, California, USA; Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, California, USA
- Molly J Sullan: Psychology Service, VA San Diego Healthcare System, San Diego, California, USA; Department of Psychiatry, University of California San Diego, San Diego, California, USA
- Sarah M Jurick: Research Service, VA San Diego Healthcare System, San Diego, California, USA; Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, California, USA; Department of Psychiatry, University of California San Diego, San Diego, California, USA
- Kelsey R Thomas: Research Service, VA San Diego Healthcare System, San Diego, California, USA; Department of Psychiatry, University of California San Diego, San Diego, California, USA
- Delaney K Davey: Research Service, VA San Diego Healthcare System, San Diego, California, USA
- Samantha N Hoffman: San Diego State University/University of California San Diego Joint Doctoral Program in Clinical Psychology, San Diego, California, USA
- Elizabeth W Twamley: Research Service, VA San Diego Healthcare System, San Diego, California, USA; Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, California, USA; Department of Psychiatry, University of California San Diego, San Diego, California, USA
- Amy J Jak: Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, California, USA; Psychology Service, VA San Diego Healthcare System, San Diego, California, USA; Department of Psychiatry, University of California San Diego, San Diego, California, USA
26
Horner MD, Denning JH, Cool DL. Self-reported disability-seeking predicts PVT failure in veterans undergoing clinical neuropsychological evaluation. Clin Neuropsychol 2023; 37:387-401. [PMID: 35387574] [DOI: 10.1080/13854046.2022.2056923]
Abstract
Objective: This study examined disability-related factors as predictors of PVT performance in Veterans who underwent neuropsychological evaluation for clinical purposes, not for determination of disability benefits. Method: Participants were 1,438 Veterans who were seen for clinical evaluation in a VA Medical Center's Neuropsychology Clinic. All were administered the TOMM, MSVT, or both. Predictors of PVT performance included (1) whether Veterans were receiving VA disability benefits ("service connection") for psychiatric or neurological conditions at the time of evaluation, and (2) whether Veterans reported on clinical interview that they were in the process of applying for disability benefits. Data were analyzed using binary logistic regression, with PVT performance as the dependent variable in separate analyses for the TOMM and MSVT. Results: Veterans who were already receiving VA disability benefits for psychiatric or neurological conditions were significantly more likely to fail both the TOMM and the MSVT, compared to Veterans who were not receiving benefits for such conditions. Independently of receiving such benefits, Veterans who reported that they were applying for disability benefits were significantly more likely to fail the TOMM and MSVT than were Veterans who denied applying for benefits at the time of evaluation. Conclusions: These findings demonstrate that simply being in the process of applying for disability benefits increases the likelihood of noncredible performance. The presence of external incentives can predict the validity of neuropsychological performance even in clinical, non-forensic settings.
Affiliation(s)
- Michael David Horner: Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA; Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- John H Denning: Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA; Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Danielle L Cool: Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
27
Henry GK. The Pain Disability Index: Effects of performance and symptom validity in mild traumatic brain injury litigants with persistent post-concussion pain complaints. Clin Neuropsychol 2023; 37:448-458. [PMID: 35109767] [DOI: 10.1080/13854046.2022.2029576]
Abstract
OBJECTIVE The objectives of the current study were to cross-validate the Pain Disability Index (PDI) as a measure of symptom validity in a large sample of mild traumatic brain injury (MTBI) litigants with persistent post-concussive pain complaints, and to investigate the effects of performance and symptom validity testing on PDI scores. METHODS Participants included 91 adults who underwent a comprehensive neuropsychological examination. Criterion groups were formed based upon their performance on stand-alone measures of cognitive performance validity (PVT) and the MMPI-2-RF Symptom Validity Scale (FBS-r) as a measure of symptom validity (SVT). RESULTS Participants who failed both the PVT and SVT scored significantly higher on the PDI than participants who passed both. Failing both was associated with a large effect size. Failing the PVT but passing the SVT was associated with a medium effect on PDI scores, while passing the PVT but failing the SVT demonstrated a small effect. A PDI cut score of 49 was associated with .90 specificity and .47 sensitivity. CONCLUSION The PDI demonstrates external validity as a self-report measure of symptom validity in MTBI litigants with persistent post-concussive pain complaints. A dose-response relationship exists between PVT, SVT, and PDI scores. Forensic examiners should include both PVTs and SVTs to optimize clinical decision making when evaluating MTBI litigants with complaints of pain-related disability years post-incident.
28
Comparative Data for the Morel Emotional Numbing Test: High False-Positive Rate in Older Bona-Fide Neurological Patients. Psychol Inj Law 2023. [DOI: 10.1007/s12207-023-09470-8]
29
Denning JH. The TOMM1 discrepancy index (TDI): A new performance validity test (PVT) that differentiates between invalid cognitive testing and those diagnosed with dementia. Appl Neuropsychol Adult 2023; 30:83-90. [PMID: 33945362] [DOI: 10.1080/23279095.2021.1910951]
Abstract
There is a need to develop performance validity tests (PVTs) that accurately identify those with severe cognitive decline while remaining sensitive to those suspected of invalid cognitive testing. The TOMM1 Discrepancy Index (TDI) attempts to address both of these issues. Veterans diagnosed with dementia (n = 251) were administered TOMM1 and the MSVT in order to develop the TDI (TOMM1 percent correct minus MSVT Free Recall percent correct). Cutoffs based on the dementia sample were then used to identify those in the non-dementia sample (n = 1,226) suspected of invalid test performance (n = 401). Combining TOMM1 and the TDI in the dementia sample greatly reduced the false-positive rate (specificity = 0.97) at a cutoff of 28 points or less on the TDI. Those suspected of invalid testing were identified at much higher rates (sensitivity = 0.75) compared to the MSVT genuine memory impairment profile (GMIP, sensitivity = 0.49). By utilizing a neurologically plausible pattern of scores across two PVTs, the TDI correctly classified those with dementia and identified a large percentage of those with invalid test performance. PVTs utilizing a complex pattern of performance may help reduce one's ability to fabricate cognitive deficits.
Affiliation(s)
- John H Denning: Department of Veteran Affairs, Mental Health Service, Ralph H. Johnson Veterans Affairs Medical Center, Charleston, SC, USA; Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
30
Donders J, Vos M. Utility of CVLT-3 response bias as a measure of performance validity after traumatic brain injury. Clin Neuropsychol 2023; 37:91-100. [PMID: 35285406] [DOI: 10.1080/13854046.2022.2051152]
Abstract
OBJECTIVE We sought to determine the utility of a recently proposed performance validity index. In particular, we wanted to determine if this index would be associated with a specificity of at least .90, a sensitivity of at least .40, and an Area Under the Curve of at least .70 in a traumatic brain injury (TBI) sample. METHOD We used logistic regression to investigate how well this new index could distinguish valid from invalid performance among persons with TBI (n = 148) who were evaluated within 1-36 months after injury. All participants had been classified on the basis of at least two independent performance validity tests as having provided valid performance (n = 128) or invalid performance (n = 20). RESULTS The new performance validity index had acceptable specificity (.96) but suboptimal sensitivity (.35) and Area Under the Curve (.66). It was concerning that almost half (5/12) of the cases identified by this index as providing invalid effort were false positives. Although a slightly more liberal cut-off improved sensitivity, the problem of poor positive predictive power remained. The conventional Forced Choice index had relatively better classification accuracy. CONCLUSION Differences in base rates between the original sample of Martin et al. and the current one most likely affected positive predictive power of the new index. Although the index has excellent specificity, the current results do not support its application in the clinical evaluation of patients with traumatic brain injury when base rates of invalid performance differ markedly from those in the original study.
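The base-rate dependence of positive predictive power described in the conclusion follows directly from Bayes' rule. A minimal Python sketch (the function name and the 50% comparison base rate are illustrative assumptions; the sensitivity/specificity values and the ~13.5% base rate of invalidity, 20/148, come from the abstract):

```python
def ppv(sensitivity: float, specificity: float, base_rate: float) -> float:
    """Positive predictive value: P(truly invalid | index flags invalid)."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Same index (sensitivity .35, specificity .96) at two base rates of invalidity:
print(round(ppv(0.35, 0.96, 0.50), 2))   # ~.90 at a hypothetical 50% base rate
print(round(ppv(0.35, 0.96, 0.135), 2))  # ~.58 at the sample's ~13.5% base rate (20/148)
```

The same cutoff thus flags mostly true positives in a high-base-rate sample but produces many false positives in a routine clinical sample, which is the pattern the study reports.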
Affiliation(s)
- Jacobus Donders: Department of Psychology, Mary Free Bed Rehabilitation Hospital, Grand Rapids, MI, USA
- Matthew Vos: Department of Psychology, Calvin College, Grand Rapids, MI, USA
31
Comprehensive Analysis of MMPI-2-RF Symptom Validity Scales and Performance Validity Test Relationships in a Diverse Mixed Neuropsychiatric Setting. Psychol Inj Law 2023; 16:61-72. [PMID: 36348958] [PMCID: PMC9633118] [DOI: 10.1007/s12207-022-09467-9]
Abstract
The utility of symptom (SVT) and performance (PVT) validity tests has been independently established in neuropsychological evaluations, yet research on the relationship between these two types of validity indices is limited to circumscribed populations and measures. This study examined the relationship between SVTs on the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) and PVTs in a mixed neuropsychiatric setting. This cross-sectional study included data from 181 diagnostically and demographically diverse patients with neuropsychiatric conditions referred for outpatient clinical neuropsychological evaluation at an academic medical center. All patients were administered a uniform neuropsychological battery, including the MMPI-2-RF and five PVTs (i.e., Dot Counting Test; Medical Symptom Validity Test; Reliable Digit Span; Test of Memory Malingering-Trial 1; Word Choice Test). Nonsignificant associations emerged between SVT and PVT performance. Although the Response Bias Scale was most predictive of PVT performance, MMPI-2-RF SVTs generally had low classification accuracy for predicting PVT performance. Neuropsychological test performance was related to MMPI-2-RF SVT status only when overreporting elevations were at extreme scores. The current study further supports that SVTs and PVTs measure unique and dissociable constructs among diverse patients with neuropsychiatric conditions, consistent with literature from other clinical contexts. Therefore, objective evidence of symptom overreporting on MMPI-2-RF SVTs cannot be interpreted as definitively indicating invalid performance on tests of neurocognitive abilities. As such, clinicians should include both SVTs and PVTs as part of a comprehensive neuropsychological evaluation as they provide unique information regarding performance and symptom validity.
32
Henry GK. The Continuous Visual Memory Test: Update and extension on the operating characteristics as an embedded measure of cognitive performance validity. Clin Neuropsychol 2023; 37:194-206. [PMID: 34890307] [DOI: 10.1080/13854046.2021.2010807]
Abstract
Objective: To compare and update predictive models composed of embedded measures from the Continuous Visual Memory Test (CVMT) in their ability to predict performance validity in personal injury litigants. Methods: Ninety-two personal injury litigants underwent a comprehensive neuropsychological examination. Criterion groups (PVT-Pass and PVT-Fail) were formed based upon performance on stand-alone measures of performance validity (PVT). Independent-samples t-tests investigated group differences on dependent variables of interest, while logistic regression analyses and a decision tree classification procedure were employed to identify the best predictive model. Results: The PVT-Fail group scored significantly lower on the 20-item Larrabee Index (LI) and on three CVMT variables comprising the Henry-Enders Index (HEI), including Hits, Total Score, and Delayed Recall, but significantly higher on False Alarm Errors. Although the Total score was the best single predictor of PVT status, the addition of LI improved sensitivity. The best predictive model was derived via a classification and regression tree analysis that selected LI and CVMT-FA, resulting in .91 specificity, .60 sensitivity, and an area under the ROC curve of .832. Conclusion: In the current study, total CVMT scores < 70 and LI scores < 18 were rare among personal injury litigants with MTBI and were not seen in litigants with moderate or severe brain injury who passed PVTs. Three predictive CVMT models were derived. When failure on one of the models is observed, concerns about the credibility of visual memory performance should be raised, with particular attention to other stand-alone and embedded measures of performance validity.
33
Obolsky MA, Resch ZJ, Fellin TJ, Cerny BM, Khan H, Bing-Canar H, McCollum K, Lee RC, Fink JW, Pliskin NH, Soble JR. Concordance of Performance and Symptom Validity Tests Within an Electrical Injury Sample. Psychol Inj Law 2022. [DOI: 10.1007/s12207-022-09469-7]
34
Krynicki CR, Hacker D, Jones CA. An evaluation of the convergent validity of a face-to-face and virtual neuropsychological assessment counterbalanced. J Neuropsychol 2022. [DOI: 10.1111/jnp.12300]
Affiliation(s)
- Carl R. Krynicki: School of Psychology, The University of Birmingham, Birmingham, UK; Birmingham and Solihull Mental Health NHS Foundation Trust, Birmingham, UK
- David Hacker: Clinical Neuropsychology Department, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
35
Binder LM, Tadrous-Furnanz SK, Storzbach D, Larrabee GJ, Salinsky MC. The rate of psychiatric disorders in veterans undergoing intensive EEG monitoring is associated with symptom and performance invalidity. Clin Neuropsychol 2022; 36:2120-2134. [PMID: 34632958] [DOI: 10.1080/13854046.2021.1974564]
Abstract
Objective: To determine whether the number of participants with psychiatric disorders increased in association with failures on symptom validity tests (SVTs) and a performance validity test (PVT) among Veterans admitted for evaluation of possible seizures. The 254 participants were Veterans undergoing inpatient video-EEG monitoring for the diagnosis of possible seizures. DSM-IV psychiatric disorders were diagnosed with the SCID IV. Symptom exaggeration was assessed with the MMPI-2-RF and performance validity with the TOMM. On the MMPI-2-RF, 27.6%-32.7% showed symptom exaggeration. Participants who exaggerated on the MMPI-2-RF were more often diagnosed with psychiatric disorders. The TOMM was failed by 15.4% of the sample. Participants who failed the TOMM were more often diagnosed with an Axis I disorder but not with a personality disorder. The MMPI-2-RF was invalid in more cases than the TOMM, but 7.9% of the sample generated a valid MMPI-2-RF and an invalid TOMM. The correlational design does not allow conclusions about cause and effect, and the invalid groups may have had a higher rate of psychopathology. The number of participants with psychiatric disorders increased in association with symptom exaggeration and performance invalidity. Symptom exaggeration was more frequent than performance invalidity, but the TOMM made a unique contribution to the identification of invalidity. The routine clinical use of SVTs and PVTs is supported. The results also suggest the need for caution in diagnosing psychiatric disorders when there is symptom exaggeration or performance invalidity, because diagnostic validity is dependent on the accuracy of symptom reporting.
Affiliation(s)
- Martin C Salinsky: VA Healthcare System, Portland, Oregon, USA; Oregon Health and Science University, Portland, Oregon, USA
36
Guty E, Horner MD. The minimal effect of depression on cognitive functioning when accounting for TOMM performance in a sample of U.S. veterans. Appl Neuropsychol Adult 2022:1-9. [PMID: 36315488] [DOI: 10.1080/23279095.2022.2137026]
Abstract
While many studies have demonstrated a relationship between depression and cognitive deficits, most have neglected to include measurements of performance validity. This study examined the relationship between depression and cognition after accounting for noncredible performance. Participants were veterans referred for outpatient clinical evaluation. The first set of regression analyses (N = 187) included age, sex, and education in Model 1, the Beck Depression Inventory-2 (BDI-2) added in Model 2, and pass/failure of the Test of Memory Malingering (TOMM) added in Model 3 as predictors of 12 neuropsychological test indices. The second set of analyses (N = 559) mirrored the first but with Major Depressive Disorder (MDD) diagnosis in Models 2 and 3. In the first analyses, after including TOMM in the model, only the relationship between BDI-2 and verbal fluency remained significant, but this did not survive a Bonferroni correction. In the second analyses, after including TOMM and Bonferroni correction, MDD diagnosis was a significant predictor only for CVLT-II Short Delay Free Recall. Therefore, the relationship between depression and cognition may not be driven by frank cognitive impairment, but rather by psychological mechanisms, which has implications for addressing depressed individuals' concerns about their cognitive functioning and suggests the value of providing psychoeducation and reassurance.
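The Bonferroni correction applied above simply divides the familywise alpha by the number of comparisons. A minimal sketch (the function name is ours and the familywise alpha of .05 is an assumption; the 12-index count comes from the abstract):

```python
def bonferroni_alpha(alpha: float, n_tests: int) -> float:
    """Per-comparison significance threshold under a Bonferroni correction."""
    return alpha / n_tests

# With 12 neuropsychological test indices and familywise alpha = .05,
# an individual p value must fall below ~.0042 to remain significant.
threshold = bonferroni_alpha(0.05, 12)
print(round(threshold, 4))  # 0.0042

# A nominally significant result (e.g., p = .03) fails to survive the correction:
print(0.03 < threshold)  # False
```

This is why a predictor can be "significant" in the uncorrected model yet, as reported above, not survive the correction.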
Affiliation(s)
- Erin Guty: Psychology, The Pennsylvania State University, University Park, PA, USA; Mental Health Service, Ralph H. Johnson VAMC, Charleston, SC, USA
- Michael David Horner: Mental Health, Ralph H. Johnson VA Medical Center, Charleston, SC, USA; Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
37
Resch ZJ, Cerny BM, Ovsiew GP, Jennette KJ, Bing-Canar H, Rhoads T, Soble JR. A Direct Comparison of 10 WAIS-IV Digit Span Embedded Validity Indicators among a Mixed Neuropsychiatric Sample with Varying Degrees of Cognitive Impairment. Arch Clin Neuropsychol 2022; 38:619-632. [DOI: 10.1093/arclin/acac082]
Abstract
Objective
Reliable Digit Span (RDS), RDS-Revised (RDS-R), and age-corrected scaled score (ACSS) have been previously validated as embedded performance validity tests (PVTs) from the Wechsler Adult Intelligence Scale-IV Digit Span subtest (WAIS-IV DS). However, few studies have directly compared the relative utility of these and other proposed WAIS-IV DS validity indicators within a single sample.
Method
This study compared classification accuracies of 10 WAIS-IV DS indices in a mixed neuropsychiatric sample of 227 outpatients who completed a standardized neuropsychological battery. Participants who failed ≤1 of the four freestanding criterion PVTs constituted the valid group (n = 181), whereas those who failed ≥2 formed the invalid group (n = 46). Among the valid group, 113 met criteria for mild cognitive impairment (MCI).
Results
Classification accuracies for all DS indicators were statistically significant across the overall sample and subsamples with and without MCI, apart from indices derived from the Forward trial in the MCI sample. DS Sequencing ACSS, working memory RDS (wmRDS), and DS ACSS emerged as the most effective predictors of validity status, with acceptable to excellent classification accuracy for the overall sample (AUCs = 0.792–0.816; 35%–50% sensitivity/88%–96% specificity).
Conclusions
Although most DS indices demonstrated clinical utility as embedded PVTs, DS Sequencing ACSS, wmRDS, and DS ACSS may be particularly robust to cognitive impairment, minimizing risk of false positive errors while identifying noncredible performance. Moreover, DS indices incorporating data from multiple trials (i.e., wmRDS, DS ACSS) also generally yielded greater classification accuracy than those derived from a single trial.
Affiliation(s)
- Zachary J Resch: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Brian M Cerny: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Kyle J Jennette: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Hanaan Bing-Canar: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, University of Illinois at Chicago, Chicago, IL, USA
- Tasha Rhoads: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R Soble: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
38
Bosso T, Vischia F, Keller R, Vai D, Imperiale D, Vercelli A. A case report and literature review of cognitive malingering and psychopathology. Front Psychiatry 2022; 13:981475. [PMID: 36311526] [PMCID: PMC9613951] [DOI: 10.3389/fpsyt.2022.981475]
Abstract
Malingering of cognitive difficulties constitutes a major issue in psychiatric forensic settings. Here, we present a selective literature review related to the topic of cognitive malingering, psychopathology, and their possible connections. Furthermore, we report a single case study of a 60-year-old man with a long and ongoing judicial history who exhibits a suspicious multi-domain neurocognitive disorder with significant reduction of autonomy in daily living, alongside a longtime history of depressive symptoms. Building on this, we suggest the importance of evaluating malingering conditions through both psychiatric and neuropsychological assessment tools. More specifically, the use of Performance Validity Tests (PVTs), commonly though not entirely correctly regarded as tests of "malingering", alongside the collection of clinical history and routine psychometric testing, appears crucial for detecting discrepancies between patients' self-reported symptoms, embedded validity indicators, and psychometric results.
Affiliation(s)
- Tea Bosso: Department of Psychology, University of Turin, Turin, Italy
- Flavio Vischia: Cognitive Disorders Diagnosis and Treatment Centre, North-West Unit, Amedeo di Savoia Hospital, ASL Città di Torino, Turin, Italy
- Roberto Keller: Mental Health Department, North-West Unit, Local Health Unit, ASL Città di Torino, Turin, Italy
- Daniela Vai: Cognitive Disorders Diagnosis and Treatment Centre, North-West Unit, Amedeo di Savoia Hospital, ASL Città di Torino, Turin, Italy
- Daniele Imperiale: Cognitive Disorders Diagnosis and Treatment Centre, North-West Unit, Amedeo di Savoia Hospital, ASL Città di Torino, Turin, Italy
- Alessandro Vercelli: Department of Neuroscience "Rita Levi Montalcini", University of Turin, Turin, Italy
39
Doddato FR, Forde J, Wang Y, Puente AE. An alternative approach to TOMM cutoff scores using a large sample of military personnel. Appl Neuropsychol Adult 2022:1-9. [PMID: 36227693] [DOI: 10.1080/23279095.2022.2119391]
Abstract
The accuracy of neuropsychological assessments relies on participants exhibiting their true abilities during administration. The Test of Memory Malingering (TOMM) is a popular performance validity test used to determine whether an individual is providing honest answers. While the TOMM has proven highly sensitive to those who deliberately exaggerate their symptoms, there is limited justification for using 45 as a cutoff score. The present study aims to further investigate this question by examining TOMM scores obtained in a large sample of active-duty military personnel (N = 859, mean age = 26 years, SD = 6.14, 97.31% male, 72.44% white). Results indicated that no notable discrepancies existed between the frequency of participants who scored a 45 and those who scored slightly below a 45 on the TOMM. The sensitivity and specificity of the TOMM were derived using the forced-choice recognition (FCR) scores obtained by participants on the California Verbal Learning Test, Second Edition (CVLT-II). The sensitivity for each trial of the TOMM was 0.84, 0.55, and 0.63, respectively; the specificity for each trial was 0.69, 0.93, and 0.92, respectively. Because sensitivity and specificity are both of importance in this study, balanced accuracy scores were also reported. Results suggested that various alternative cutoff scores produced more accurate classification than the traditional cutoff of 45. Further analyses using Fisher's exact test also indicated that there were no significant differences in performance on the FCR of the CVLT-II between individuals who received a 44 and individuals who received a 45 on the TOMM. The current study provides evidence on why the traditional cutoff may not be the most effective score. Future research should consider employing alternative methods that do not rely on a single score.
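Balanced accuracy, which the abstract reports because both sensitivity and specificity matter here, is simply the mean of the two. A minimal sketch using the per-trial values quoted above (the trial labels are our assumption, based on the standard TOMM administration of two learning trials plus a retention trial):

```python
def balanced_accuracy(sensitivity: float, specificity: float) -> float:
    """Mean of sensitivity and specificity; robust to class imbalance."""
    return (sensitivity + specificity) / 2

# Per-trial sensitivity/specificity pairs reported in the abstract:
trials = {"Trial 1": (0.84, 0.69), "Trial 2": (0.55, 0.93), "Retention": (0.63, 0.92)}
for name, (sens, spec) in trials.items():
    print(name, balanced_accuracy(sens, spec))
```

On these figures the three trials land close together (roughly .74 to .78), which is consistent with the study's point that no single score dominates.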
Affiliation(s)
- Felicity R Doddato: Department of Psychology, University of North Carolina Wilmington, Wilmington, NC, USA
- Jessica Forde: Naval Hospital, Marine Corps Base Camp LeJeune, Hampstead, NC, USA
- Yishi Wang: Department of Mathematics and Statistics, University of North Carolina Wilmington, Wilmington, NC, USA
- Antonio E Puente: Department of Psychology, University of North Carolina Wilmington, Wilmington, NC, USA
40
Jennette KJ, Rhoads T, Resch ZJ, Cerny BM, Leib SI, Sharp DW, Ovsiew GP, Soble JR. Multivariable analysis of the relative utility and additive value of eight embedded performance validity tests for classifying invalid neuropsychological test performance. J Clin Exp Neuropsychol 2022; 44:451-460. [PMID: 36197342] [DOI: 10.1080/13803395.2022.2128067]
Abstract
INTRODUCTION This study investigated a combination of eight embedded performance validity tests (PVTs) derived from commonly administered neuropsychological tests to optimize sensitivity/specificity for detecting invalid neuropsychological test performance. The goal was to evaluate which combination of these common embedded PVTs has the most robust predictive power for detecting invalid test performance in a single diverse clinical sample. METHOD Eight previously validated memory- and nonmemory-based embedded PVTs were examined among 231 patients undergoing neuropsychological evaluation. Patients were classified into valid/invalid groups based on four independent criterion PVTs. Embedded PVT accuracy was assessed using standard and stepwise multiple logistic regression models. RESULTS Three PVTs, the Brief Visuospatial Memory Test-Revised Recognition Discrimination (BVMT-R-RD), Rey Auditory Verbal Learning Test Forced Choice, and WAIS-IV Digit Span Age Corrected Scaled Score, predicted 45.5% of the variance in validity group membership. BVMT-R-RD independently accounted for 32% of the variance in the prediction of independent, criterion-defined validity group membership. CONCLUSIONS This study demonstrated the incremental predictive power of multiple embedded PVTs derived from common neuropsychological measures in detecting invalid test performance, and identified the measures accounting for the greatest portion of the variance. These results provide guidance for selecting the most fruitful embedded PVTs and proof of concept to better guide selection of embedded validity indices. Further, this offers clinicians an efficient, empirically derived approach to assessing performance validity when time constraints potentially limit the use of freestanding PVTs.
Collapse
Affiliation(s)
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
| | - Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
| | - Sophie I Leib
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
| | - Dillon W Sharp
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
| |
Collapse
|
41
|
Schroeder RW, Clark HA, Martin PK. Base rates of invalidity when patients undergoing routine clinical evaluations have social security disability as an external incentive. Clin Neuropsychol 2022; 36:1902-1914. [PMID: 33706657 DOI: 10.1080/13854046.2021.1895322] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Objective: Social Security Disability is a common external incentive in neuropsychological evaluations. This study determined base rates of invalidity when patients referred for routine clinical evaluations have Social Security Disability as an external incentive. Method: Patients (n = 242) were grouped as validly or invalidly performing based on the use of multiple performance validity tests. Frequency analyses were then conducted. Results: As a whole, 46.0% of clinically referred patients with Social Security Disability as an external incentive produced invalid data. When divided by disability pursuit status, 58.6% of individuals already receiving Social Security Disability, 44.6% of individuals actively seeking Social Security Disability, and 39.3% of individuals considering seeking Social Security Disability produced invalid data. By comparison, only 8.5% of clinically referred patients without known external incentives produced invalid data. Conclusions: Beyond establishing base rates, these data indicate that the external incentive, not necessarily the evaluation setting, increases the rate of invalidity, as obtained base rates mirror those observed in independent medical examinations. In addition, this study highlights that even patients who report that they are considering but have not committed themselves to pursuing an external incentive frequently invalidate testing.
Collapse
Affiliation(s)
- Ryan W Schroeder
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, Kansas, USA
| | - Hilary A Clark
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, Kansas, USA
| | - Phillip K Martin
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, Kansas, USA
| |
Collapse
|
42
|
Boress K, Gaasedelen OJ, Croghan A, Johnson MK, Caraher K, Basso MR, Whiteside DM. Replication and cross-validation of the personality assessment inventory (PAI) cognitive bias scale (CBS) in a mixed clinical sample. Clin Neuropsychol 2022; 36:1860-1877. [PMID: 33612093 PMCID: PMC8454137 DOI: 10.1080/13854046.2021.1889681] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Accepted: 02/08/2021] [Indexed: 01/27/2023]
Abstract
Objective: This study is a cross-validation of the Cognitive Bias Scale (CBS) from the Personality Assessment Inventory (PAI), a ten-item scale designed to assess symptom endorsement associated with performance validity test failure in neuropsychological samples. The study utilized a mixed neuropsychological sample of consecutively referred patients at a large academic medical center in the Midwest. Participants and Methods: Participants were 332 patients who completed embedded and free-standing performance validity tests (PVTs) and the PAI. Pass and fail groups were created based on PVT performance to evaluate classification accuracy of the CBS. Results: The results were generally consistent with the initial study for overall classification accuracy, sensitivity, and cut-off score. Consistent with the validation study, the CBS had better classification accuracy than the original PAI validity scales and a comparable effect size to that obtained in the original validation publication; however, the Somatic Complaints scale (SOM) and the Conversion subscale (SOM-C) also demonstrated good classification accuracy. The CBS had incremental predictive ability compared to existing PAI scales. Conclusions: The results supported the CBS, but further research is needed on specific populations. Findings from the present study also suggest the relationship between conversion tendencies and PVT failure may be stronger in some geographic locations or population types (forensic versus clinical patients).
Collapse
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
| | | | - Anna Croghan
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
| | - Marcie King Johnson
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, USA
| | - Kristen Caraher
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
| | - Michael R. Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, USA
| | - Douglas M. Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, USA
| |
Collapse
|
43
|
Boress K, Gaasedelen OJ, Croghan A, Johnson MK, Caraher K, Basso MR, Whiteside DM. Validation of the Personality Assessment Inventory (PAI) scale of scales in a mixed clinical sample. Clin Neuropsychol 2022; 36:1844-1859. [PMID: 33730975 PMCID: PMC8474121 DOI: 10.1080/13854046.2021.1900400] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Objective: This exploratory study examined the classification accuracy of three derived scales aimed at detecting cognitive response bias in neuropsychological samples. The derived scales are composed of existing scales from the Personality Assessment Inventory (PAI). A mixed clinical sample of consecutive outpatients referred for neuropsychological assessment at a large Midwestern academic medical center was utilized. Participants and Methods: Participants included 332 patients who completed the study's embedded and free-standing performance validity tests (PVTs) and the PAI. PASS and FAIL groups were created based on PVT performance to evaluate the classification accuracy of the derived scales. Three new scales, Cognitive Bias Scale of Scales 1-3 (CB-SOS1-3), were derived by combining existing scales, either by summing the scales together and dividing by the total number of scales summed, or by logistically deriving a variable from the contributions of several scales. Results: All of the newly derived scales significantly differentiated between PASS and FAIL groups. All of the derived SOS scales demonstrated acceptable classification accuracy (i.e. CB-SOS1 AUC = 0.72; CB-SOS2 AUC = 0.73; CB-SOS3 AUC = 0.75). Conclusions: This exploratory study demonstrates that attending to scale-level PAI data may be a promising area of research in improving prediction of PVT failure.
Collapse
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
| | | | - Anna Croghan
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
| | - Marcie King Johnson
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA,Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
| | - Kristen Caraher
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
| | - Michael R. Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
| | - Douglas M. Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
| |
Collapse
|
44
|
Jennette KJ, Williams CP, Resch ZJ, Ovsiew GP, Durkin NM, O'Rourke JJF, Marceaux JC, Critchfield EA, Soble JR. Assessment of differential neurocognitive performance based on the number of performance validity tests failures: A cross-validation study across multiple mixed clinical samples. Clin Neuropsychol 2022; 36:1915-1932. [PMID: 33759699 DOI: 10.1080/13854046.2021.1900398] [Citation(s) in RCA: 38] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Objective: This cross-sectional study examined the effect of number of Performance Validity Test (PVT) failures on neuropsychological test performance among a demographically diverse Veteran (VA) sample (n = 76) and academic medical sample (AMC; n = 128). A secondary goal was to investigate the psychometric implications of including versus excluding those with one PVT failure when cross-validating a series of embedded PVTs. Method: All patients completed the same six criterion PVTs, with the AMC sample completing three additional embedded PVTs. Neurocognitive test performance differences were examined based on number of PVT failures (0, 1, 2+) for both samples, and effect of number of criterion failures on embedded PVT performance was analyzed among the AMC sample. Results: Both groups with 0 or 1 PVT failures performed better than those with ≥2 PVT failures across most cognitive tests. There were nonsignificant differences between those with 0 or 1 PVT failures except for one test in the AMC sample. Receiver operating characteristic curve analyses found no differences in optimal cut score based on number of PVT failures when retaining/excluding one PVT failure. Conclusion: Findings support the use of ≥2 PVT failures as indicative of performance invalidity. These findings strongly support including those with one PVT failure with those with zero PVT failures in diagnostic accuracy studies, given that their inclusion reflects actual clinical practice, does not reduce sample sizes, and does not artificially deflate neurocognitive test results or inflate PVT classification accuracy statistics.
Collapse
Affiliation(s)
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Christopher P Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
| | - Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
| | - Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Nicole M Durkin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Justin J F O'Rourke
- Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
| | - Janice C Marceaux
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
| | - Edan A Critchfield
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
| | - Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
| |
Collapse
|
45
|
Weitzner DS, Miller BI, Webber TA. Embedded cognitive and emotional/affective self-reported symptom validity indices on the patient competency rating scale. J Clin Exp Neuropsychol 2022; 44:533-549. [PMID: 36369702 DOI: 10.1080/13803395.2022.2138270] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
OBJECTIVE Although there is an abundance of research on stand-alone and embedded performance validity tests and stand-alone symptom validity tests (SVTs), less emphasis has been placed on embedded SVTs. The goal of the current study was to examine the ability of embedded indicators within the Patient Competency Rating Scale (PCRS) to separately detect invalid cognitive and/or emotional/affective symptom responding. METHOD Participants included 299 veterans assessed in a VA medical center epilepsy monitoring unit from 2013-2017 (mean age = 48.8 years, SD = 13.5 years). Two SVT composites were created: self-reported cognitive symptom validity (SVT-C) and self-reported emotional/affective symptom validity (SVT-E). Groups were compared on PCRS total and index scores (i.e., cognitive, activities of daily living, emotional, and interpersonal competencies) using ANOVAs. Receiver operating characteristic (ROC) curve analyses assessed the classification accuracy of the PCRS total and index scores for SVT-C and SVT-E. RESULTS In ANOVAs, SVT-C was significantly associated with all PCRS indices, while SVT-E was only significantly associated with the PCRS total, emotional, and interpersonal competency indices. Although the PCRS-T ≤ 90 had the strongest classification of SVT-C and SVT-E (specificities: .90, sensitivities: .44 to .50), PCRS index scores showed suggestive evidence of domain specificity, with PCRS-ADL ≤22, PCRS-C ≤ 20, and PCRS-CADL ≤45 best classifying SVT-C (specificities: .92, sensitivities: .33) and the PCRS-E ≤ 18 best classifying the SVT-E group (specificity: .93, sensitivity: .40). CONCLUSION Results suggest the PCRS may be used to obtain clinically useful information while including embedded indicators that can assess cognitive and/or emotional/affective symptom invalidity.
Collapse
Affiliation(s)
- Daniel S Weitzner
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
| | - Brian I Miller
- Neurology Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA.,Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
| | - Troy A Webber
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA.,Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
| |
Collapse
|
46
|
Donders J, Hayden A. Utility of the D-KEFS color word interference test as an embedded measure of performance validity after traumatic brain injury. Clin Neuropsychol 2022; 36:1964-1974. [PMID: 33327855 DOI: 10.1080/13854046.2020.1861659] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Objective: We sought to determine the accuracy of embedded performance measures for the D-KEFS Color Word Interference Test that were recently proposed by Eglit et al. In particular, we wanted to determine if these indices would be associated with a specificity of at least .90, an Area Under the Curve of at least .70, and a positive likelihood ratio of at least 2. Method: We used logistic regression to investigate how well these indices could distinguish persons with traumatic brain injury (n = 169) who were evaluated within 1-12 months after injury. All participants had been classified on the basis of at least three independent performance validity tests as valid performance (n = 145) or invalid performance (n = 24). Results: None of the three indices that Eglit et al. had proposed as embedded performance measures for the D-KEFS Color Word Interference Test achieved the a priori defined minimally acceptable level of specificity. One of them did meet the criteria for Area Under the Curve as well as positive likelihood ratio. Conclusion: The current results do not support the application of the Eglit et al. embedded performance validity measures for the D-KEFS Color Word Interference Test in the clinical evaluation of patients with traumatic brain injury.
Collapse
Affiliation(s)
- Jacobus Donders
- Department of Psychology, Mary Free Bed Rehabilitation Hospital, Grand Rapids, MI, USA
| | - Ashley Hayden
- Department of Psychology, Hope College, Holland, MI, USA
| |
Collapse
|
47
|
Cohen CD, Rhoads T, Keezer RD, Jennette KJ, Williams CP, Hansen ND, Ovsiew GP, Resch ZJ, Soble JR. All of the accuracy in half of the time: Assessing abbreviated versions of the Test of Memory Malingering in the context of verbal and visual memory impairment. Clin Neuropsychol 2022; 36:1933-1949. [PMID: 33836622 DOI: 10.1080/13854046.2021.1908596] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Objective: The Test of Memory Malingering (TOMM) Trial 1 (T1) and errors on the first 10 items of T1 (T1-e10) were developed as briefer versions of the TOMM to minimize evaluation time and burden, although the effect of genuine memory impairment on these indices is not well established. This study examined whether increasing material-specific verbal and visual memory impairment affected T1 and T1-e10 performance and accuracy for detecting invalidity. Method: Data from 155 neuropsychiatric patients administered the TOMM, Rey Auditory Verbal Learning Test (RAVLT), and Brief Visuospatial Memory Test-Revised (BVMT-R) during outpatient evaluation were examined. Valid (N = 125) and invalid (N = 30) groups were established by four independent criterion performance validity tests. Verbal/visual memory impairment was classified as ≥37T (normal memory); 30T-36T (mild impairment); and ≤29T (severe impairment). Results: Overall, T1 had outstanding accuracy, with 77% sensitivity/90% specificity. T1-e10 was less accurate but had excellent discriminability, with 60% sensitivity/87% specificity. T1 maintained excellent accuracy regardless of memory impairment severity, with 77% sensitivity/≥88% specificity and a relatively invariant cut-score even among those with severe verbal/visual memory impairment. T1-e10 had excellent classification accuracy among those with normal memory and mild impairment, but accuracy and sensitivity dropped with severe impairment and the optimal cut-score had to be increased to maintain adequate specificity. Conclusion: TOMM T1 is an effective performance validity test with strong psychometric properties regardless of material-specificity and severity of memory impairment. By contrast, T1-e10 functions relatively well in the context of mild memory impairment but has reduced discriminability with severe memory impairment.
Collapse
Affiliation(s)
- Cari D Cohen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
| | - Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
| | - Richard D Keezer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,School of Psychology, Counseling, and Family Therapy, Wheaton College, Wheaton, IL, USA
| | - Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Christopher P Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
| | - Nicholas D Hansen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Psychology, Roosevelt University, Chicago, IL, USA
| | - Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
| | - Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
| |
Collapse
|
48
|
Henry GK. Response time measures on the Word Memory Test do not add incremental validity to accuracy scores in predicting noncredible neurocognitive dysfunction in mild traumatic brain injury litigants. APPLIED NEUROPSYCHOLOGY. ADULT 2022:1-7. [PMID: 36170848 DOI: 10.1080/23279095.2022.2126320] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
The objective of the current study was to investigate whether response time measures on the Word Memory Test (WMT) increase predictive validity in determining noncredible neurocognitive dysfunction in a large sample of mild traumatic brain injury (MTBI) litigants. Participants included 203 adults who underwent a comprehensive neuropsychological examination. Criterion groups were formed based upon their performance on stand-alone measures of cognitive performance validity (PVT). Participants failing PVTs exhibited significantly slower response times and lower accuracy on the WMT compared to participants who passed PVTs. Response time measures did not add significant incremental validity beyond that afforded by WMT accuracy measures alone. The best predictor of PVT status was the WMT Consistency Score (CNS), which was associated with an extremely large effect size (d = 16.44), followed by Immediate Recognition (IR: d = 10.68) and Delayed Recognition (DR: d = 10.10).
Collapse
|
49
|
Ali S, Crisan I, Abeare CA, Erdodi LA. Cross-Cultural Performance Validity Testing: Managing False Positives in Examinees with Limited English Proficiency. Dev Neuropsychol 2022; 47:273-294. [PMID: 35984309 DOI: 10.1080/87565641.2022.2105847] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
Abstract
Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation (even multivariate PVT models could not contain BRFail). In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.
Collapse
Affiliation(s)
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | - Iulia Crisan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
| | - Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | - Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| |
Collapse
|
50
|
Abeare K, Cutler L, An KY, Razvi P, Holcomb M, Erdodi LA. BNT-15: Revised Performance Validity Cutoffs and Proposed Clinical Classification Ranges. Cogn Behav Neurol 2022; 35:155-168. [PMID: 35507449 DOI: 10.1097/wnn.0000000000000304] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Accepted: 08/09/2021] [Indexed: 02/06/2023]
Abstract
BACKGROUND Abbreviated neurocognitive tests offer a practical alternative to full-length versions but often lack clear interpretive guidelines, thereby limiting their clinical utility. OBJECTIVE To replicate validity cutoffs for the Boston Naming Test-Short Form (BNT-15) and to introduce a clinical classification system for the BNT-15 as a measure of object-naming skills. METHOD We collected data from 43 university students and 46 clinical patients. Classification accuracy was computed against psychometrically defined criterion groups. Clinical classification ranges were developed using a z-score transformation. RESULTS Previously suggested validity cutoffs (≤11 and ≤12) produced comparable classification accuracy among the university students. However, a more conservative cutoff (≤10) was needed with the clinical patients to contain the false-positive rate (0.20-0.38 sensitivity at 0.92-0.96 specificity). As a measure of cognitive ability, a perfect BNT-15 score suggests above average performance; ≤11 suggests clinically significant deficits. Demographically adjusted prorated BNT-15 T-scores correlated strongly (0.86) with the newly developed z-scores. CONCLUSION Given its brevity (<5 minutes) and ease of administration and scoring, the BNT-15 can function as a useful and cost-effective screening measure for both object-naming/English proficiency and performance validity. The proposed clinical classification ranges provide useful guidelines for practitioners.
Collapse
Affiliation(s)
| | | | - Kelly Y An
- Private Practice, London, Ontario, Canada
| | - Parveen Razvi
- Faculty of Nursing, University of Windsor, Windsor, Ontario, Canada
| | | | | |
Collapse
|