1. Webber TA, Lorkiewicz S, Woods SP, Miller B, Soble JR. Does neuropsychological intraindividual variability index cognitive dysfunction, an invalid presentation, or both? Preliminary findings from a mixed clinical older adult veteran sample. J Clin Exp Neuropsychol 2024; 46:535-556. [PMID: 39120111] [DOI: 10.1080/13803395.2024.2388096]
Abstract
INTRODUCTION Intraindividual variability across a battery of neuropsychological tests (IIV-dispersion) can reflect normal variation in scores or arise from cognitive impairment. An alternate interpretation is that IIV-dispersion reflects reduced engagement/invalid test data, although extant research addressing this interpretation is significantly limited. METHOD We used a sample of 97 older adult (mean age: 69.92 years), predominantly White (57%) or Black/African American (34%), and predominantly cisgender male (87%) veterans. Examinees completed a comprehensive neuropsychological battery, including measures of reduced engagement/invalid test data (a symptom validity test [SVT] and multiple performance validity tests [PVTs]), as part of a clinical evaluation. IIV-dispersion was indexed using the coefficient of variance (CoV). We tested 1) the relationships of raw scores and "failures" on the SVT/PVTs with IIV-dispersion, 2) the relationship between IIV-dispersion and validity/neurocognitive disorder status, and 3) whether IIV-dispersion discriminated the validity/neurocognitive disorder groups using receiver operating characteristic (ROC) curves. RESULTS IIV-dispersion was significantly and independently associated with a selection of PVTs, with small to very large effect sizes. Participants with invalid profiles and cognitively impaired participants with valid profiles exhibited medium to large (d = .55-1.09) elevations in IIV-dispersion compared to cognitively unimpaired participants with valid profiles. A non-significant but small to medium (d = .35-.60) elevation in IIV-dispersion was observed for participants with invalid profiles compared to those with a neurocognitive disorder.
IIV-dispersion was largely accurate at differentiating participants without a neurocognitive disorder from participants with invalid profiles and from those with a neurocognitive disorder (areas under the curve [AUCs] = .69-.83), whereas accuracy was low for differentiating participants with invalid profiles from those with a neurocognitive disorder (AUCs = .58-.65). CONCLUSIONS These preliminary data suggest IIV-dispersion may be sensitive to both neurocognitive disorders and compromised engagement. Clinicians and researchers should exercise due diligence and consider test validity (e.g., PVTs, behavioral signs of engagement) as an alternate explanation before interpreting intraindividual variability as an indicator of cognitive impairment.
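The dispersion index described in this abstract can be illustrated with a short sketch. The data and the function name `cov_dispersion` are hypothetical, not from the article; the computation assumes the common definition of the coefficient of variance: the intraindividual standard deviation of an examinee's scores divided by their mean.

```python
import statistics

def cov_dispersion(scores):
    """Coefficient of variance (CoV) across one examinee's test battery:
    intraindividual standard deviation divided by the intraindividual mean."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample SD across the battery's tests
    return sd / mean

# Hypothetical T-scores across a five-test battery.
flat = [48, 50, 52, 49, 51]       # uniform profile -> low dispersion
scattered = [30, 62, 41, 58, 35]  # scattered profile -> high dispersion

print(round(cov_dispersion(flat), 3))       # small value (~0.03)
print(round(cov_dispersion(scattered), 3))  # roughly ten times larger
```

Under this index, both genuine impairment and fluctuating engagement can inflate CoV, which is why the study checks validity status before interpreting elevated dispersion.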
Affiliation(s)
- Troy A Webber
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Department of Psychiatry & Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
- Department of Psychology, University of Houston, Houston, TX, USA
- Sara Lorkiewicz
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Brian Miller
- Department of Psychiatry & Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
- Neurology Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
2. Horner MD, Denning JH, Cool DL. Self-reported disability-seeking predicts PVT failure in veterans undergoing clinical neuropsychological evaluation. Clin Neuropsychol 2023; 37:387-401. [PMID: 35387574] [DOI: 10.1080/13854046.2022.2056923]
Abstract
Objective: This study examined disability-related factors as predictors of performance validity test (PVT) performance in Veterans who underwent neuropsychological evaluation for clinical purposes, not for determination of disability benefits. Method: Participants were 1,438 Veterans seen for clinical evaluation in a VA Medical Center's Neuropsychology Clinic. All were administered the Test of Memory Malingering (TOMM), the Medical Symptom Validity Test (MSVT), or both. Predictors of PVT performance included (1) whether Veterans were receiving VA disability benefits ("service connection") for psychiatric or neurological conditions at the time of evaluation, and (2) whether Veterans reported on clinical interview that they were in the process of applying for disability benefits. Data were analyzed using binary logistic regression, with PVT performance as the dependent variable in separate analyses for the TOMM and MSVT. Results: Veterans who were already receiving VA disability benefits for psychiatric or neurological conditions were significantly more likely to fail both the TOMM and the MSVT than Veterans who were not receiving benefits for such conditions. Independently of receiving such benefits, Veterans who reported that they were applying for disability benefits were significantly more likely to fail the TOMM and MSVT than were Veterans who denied applying for benefits at the time of evaluation. Conclusions: These findings demonstrate that simply being in the process of applying for disability benefits increases the likelihood of noncredible performance. External incentives can predict the validity of neuropsychological performance even in clinical, non-forensic settings.
Affiliation(s)
- Michael David Horner
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- John H Denning
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Danielle L Cool
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
3. Abeare K, Cutler L, An KY, Razvi P, Holcomb M, Erdodi LA. BNT-15: Revised Performance Validity Cutoffs and Proposed Clinical Classification Ranges. Cogn Behav Neurol 2022; 35:155-168. [PMID: 35507449] [DOI: 10.1097/wnn.0000000000000304]
Abstract
BACKGROUND Abbreviated neurocognitive tests offer a practical alternative to full-length versions but often lack clear interpretive guidelines, thereby limiting their clinical utility. OBJECTIVE To replicate validity cutoffs for the Boston Naming Test-Short Form (BNT-15) and to introduce a clinical classification system for the BNT-15 as a measure of object-naming skills. METHOD We collected data from 43 university students and 46 clinical patients. Classification accuracy was computed against psychometrically defined criterion groups. Clinical classification ranges were developed using a z-score transformation. RESULTS Previously suggested validity cutoffs (≤11 and ≤12) produced comparable classification accuracy among the university students. However, a more conservative cutoff (≤10) was needed with the clinical patients to contain the false-positive rate (0.20-0.38 sensitivity at 0.92-0.96 specificity). As a measure of cognitive ability, a perfect BNT-15 score suggests above-average performance, whereas a score ≤11 suggests clinically significant deficits. Demographically adjusted prorated BNT-15 T-scores correlated strongly (0.86) with the newly developed z-scores. CONCLUSION Given its brevity (<5 minutes) and ease of administration and scoring, the BNT-15 can function as a useful and cost-effective screening measure of both object-naming/English proficiency and performance validity. The proposed clinical classification ranges provide useful guidelines for practitioners.
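Validity-cutoff studies like this one report classification accuracy as sensitivity and specificity at a chosen cutoff. The sketch below shows the underlying arithmetic under the rule "score ≤ cutoff flags invalid performance"; the scores, criterion labels, and the helper name `cutoff_accuracy` are hypothetical, not data or code from the article.

```python
def cutoff_accuracy(scores, invalid_flags, cutoff):
    """Sensitivity/specificity of a 'score <= cutoff -> flag invalid' rule.

    scores: raw test scores (e.g., BNT-15-style totals, hypothetical here)
    invalid_flags: True where the criterion (e.g., independent PVTs)
        classed that examinee as invalid
    """
    tp = sum(s <= cutoff and inv for s, inv in zip(scores, invalid_flags))
    fp = sum(s <= cutoff and not inv for s, inv in zip(scores, invalid_flags))
    fn = sum(s > cutoff and inv for s, inv in zip(scores, invalid_flags))
    tn = sum(s > cutoff and not inv for s, inv in zip(scores, invalid_flags))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # invalid cases caught
    specificity = tn / (tn + fp) if tn + fp else 0.0  # valid cases spared
    return sensitivity, specificity

# Hypothetical scores; True marks examinees the criterion judged invalid.
sens, spec = cutoff_accuracy([8, 9, 13, 14], [True, True, False, False], 10)
print(sens, spec)  # → 1.0 1.0 (clean separation in this toy example)
```

Lowering the cutoff trades sensitivity for specificity, which is exactly the move the abstract describes when shifting from ≤12 to ≤10 in the clinical group to contain false positives.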
Affiliation(s)
- Kelly Y An
- Private Practice, London, Ontario, Canada
- Parveen Razvi
- Faculty of Nursing, University of Windsor, Windsor, Ontario, Canada
4. Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing Performance Validity Cutoffs on the Boston Naming Test (BNT) Is Specific, but Insensitive to Non-Credible Responding. Dev Neuropsychol 2022; 47:17-31. [PMID: 35157548] [DOI: 10.1080/87565641.2022.2038602]
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
Affiliation(s)
- Shayna Nussbaum
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson
- Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
5. Braun SE, Fountain-Zaragoza S, Halliday CA, Horner MD. Demographic differences in performance validity test failure. Appl Neuropsychol Adult 2021:1-9. [PMID: 34428386] [DOI: 10.1080/23279095.2021.1958814]
Abstract
OBJECTIVE The present study investigated demographic differences in performance validity test (PVT) failure in a Veteran sample. METHOD Data were extracted from clinical neuropsychological evaluations. Only Veterans who identified as men and as either European American/White (EA) or African American/Black (AA) were included (n = 1,261). We investigated whether performance on two frequently used PVTs, the Test of Memory Malingering (TOMM) and the Medical Symptom Validity Test (MSVT), differed by age, education, and race using separate logistic regressions. RESULTS Veterans who were younger, had less education, or had a Veterans Affairs (VA) service-connected disability were significantly more likely to fail both PVTs. Race was not a significant predictor of MSVT failure, but AA patients were significantly more likely than EA patients to fail the TOMM. For all significant demographic predictors in the models, effects were small. In a subsample of patients who were given both PVTs (n = 461), the effect of race on performance remained. CONCLUSIONS Performance on the TOMM and MSVT differed by age and level of education. Performance on the TOMM differed between EA and AA patients, whereas performance on the MSVT did not. These results suggest that demographic factors may play a small but measurable role in performance on specific PVTs.
Affiliation(s)
- Sarah Ellen Braun
- Department of Neurology, Virginia Commonwealth University, Richmond, VA, USA
- Massey Cancer Center, Richmond, VA, USA
- Colleen A Halliday
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Michael David Horner
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
6. The Multi-Level Pattern Memory Test (MPMT): Initial Validation of a Novel Performance Validity Test. Brain Sci 2021; 11:1039. [PMID: 34439658] [PMCID: PMC8393330] [DOI: 10.3390/brainsci11081039]
Abstract
Performance validity tests (PVTs) are used to detect noncredible performance in neuropsychological assessments. The aim of this study was to assess the efficacy (i.e., discrimination capacity) of a novel PVT, the Multi-Level Pattern Memory Test (MPMT). It includes stages that allow profile analysis (i.e., detecting noncredible performance based on an analysis of participants' performance across stages) and minimizes the likelihood that examinees will perceive it as a PVT. In addition, it utilizes nonverbal stimuli and is therefore more likely to be cross-culturally valid. In Experiment 1 (n = 67), participants instructed to simulate cognitive impairment performed less accurately than honest controls on the MPMT. Importantly, the MPMT showed adequate discrimination capacity, though somewhat lower than an established PVT (the Test of Memory Malingering; TOMM). Experiment 2 (n = 77) replicated the findings of the first experiment while also indicating a dissociation between the simulators' objective performance and their perceived cognitive load while performing the MPMT. The MPMT, and the profile analysis based on its outcome measures, shows initial promise in detecting noncredible performance. It may therefore expand the range of PVTs at clinicians' disposal, though further validation in clinical settings is warranted. As open-source software, it will hopefully also encourage the development of research programs aimed at clarifying the cognitive processes involved in noncredible performance and the impact of PVT characteristics on clinical utility.
7. Psychological Symptoms and Rates of Performance Validity Improve Following Trauma-Focused Treatment in Veterans with PTSD and History of Mild-to-Moderate TBI. J Int Neuropsychol Soc 2020; 26:108-118. [PMID: 31658923] [DOI: 10.1017/s1355617719000997]
Abstract
OBJECTIVE Iraq and Afghanistan Veterans with posttraumatic stress disorder (PTSD) and a history of traumatic brain injury (TBI) have high rates of performance validity test (PVT) failure. The study aimed to determine whether those with scores in the invalid versus valid range on PVTs show similar benefit from psychotherapy and whether psychotherapy improves PVT performance. METHOD Veterans (N = 100) with PTSD, mild-to-moderate TBI history, and cognitive complaints underwent neuropsychological testing at baseline, post-treatment, and 3 months post-treatment. Veterans were randomly assigned to cognitive processing therapy (CPT) or a novel hybrid intervention (SMART-CPT) integrating CPT with TBI psychoeducation and cognitive rehabilitation strategies from Cognitive Symptom Management and Rehabilitation Therapy (CogSMART). Performance below standard cutoffs on any PVT trial across three different PVT measures was considered invalid (PVT-Fail), whereas performance above cutoffs on all measures was considered valid (PVT-Pass). RESULTS Although both PVT groups exhibited clinically significant improvement in PTSD symptoms, the PVT-Pass group demonstrated greater symptom reduction than the PVT-Fail group. Measures of post-concussive and depressive symptoms improved to a similar degree across groups. Treatment condition did not moderate these results. The rate of valid test performance increased from baseline to follow-up across conditions, with a stronger effect in the SMART-CPT condition than in the CPT condition. CONCLUSION Both PVT groups experienced improved psychological symptoms following treatment. Veterans who failed PVTs at baseline demonstrated better test engagement following treatment, resulting in higher rates of valid PVTs at follow-up. Veterans with invalid PVTs should be enrolled in trauma-focused treatment and may benefit from neuropsychological assessment after, rather than before, treatment.
8. Bailey KC, Webber TA, Phillips JI, Kraemer LDR, Marceaux JC, Soble JR. When Time is of the Essence: Preliminary Findings for a Quick Administration of the Dot Counting Test. Arch Clin Neuropsychol 2019; 36:403-413. [DOI: 10.1093/arclin/acz058]
Abstract
Objective
Performance validity research has emphasized the need for briefer measures and, more recently, abbreviated versions of established free-standing tests to minimize neuropsychological evaluation costs/time burden. This study examined the accuracy of multiple abbreviated versions of the Dot Counting Test (“quick” DCT) for detecting invalid performance in isolation and in combination with the Test of Memory Malingering Trial 1 (TOMMT1).
Method
Data from a mixed clinical sample of 107 veterans (80 valid/27 invalid per independent validity measures and structured criteria) were included in this cross-sectional study; 47% of valid participants were cognitively impaired. Sensitivities/specificities of various 6- and 4-card DCT combinations were calculated and compared to the full, 12-card DCT. Combined models with the most accurate 6- and 4-card combinations and TOMMT1 were then examined.
Results
Receiver operating characteristic curve analyses were significant for all 6- and 4-card DCT combinations, with areas under the curve of .868–.897. The best 6-card combination (cards 1-3-5-8-11-12) had 56% sensitivity/90% specificity (E-score cutoff ≥14.5), and the best 4-card combination (cards 3-4-8-11) had 63% sensitivity/94% specificity (cutoff ≥16.75). The full DCT had 70% sensitivity/90% specificity (cutoff ≥16.00). Logistic regression revealed 95% classification accuracy when the 6-card or 4-card "quick" combinations were combined with TOMMT1, with the DCT combinations and TOMMT1 both emerging as significant predictors.
Conclusions
Abbreviated DCT versions utilizing 6- and 4-card combinations yielded comparable sensitivity/specificity as the full DCT. When these “quick” DCT combinations were further combined with an abbreviated memory-based performance validity test (i.e., TOMMT1), overall classification accuracy for identifying invalid performance was 95%.
Affiliation(s)
- K Chase Bailey
- Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Troy A Webber
- Rehabilitation and Extended Care Line, Michael E. DeBakey VA Medical Center, Houston, TX 77030, USA
- Jacob I Phillips
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX 78229, USA
- Lindsay D R Kraemer
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX 78229, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX 78229, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL 60612, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL 60612, USA
9. Martin PK, Schroeder RW, Olsen DH, Maloy H, Boettcher A, Ernst N, Okut H. A systematic review and meta-analysis of the Test of Memory Malingering in adults: Two decades of deception detection. Clin Neuropsychol 2019; 34:88-119. [DOI: 10.1080/13854046.2019.1637027]
Affiliation(s)
- Phillip K. Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Ryan W. Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Daniel H. Olsen
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Halley Maloy
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Nathan Ernst
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Hayrettin Okut
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
10. Denning JH. When 10 is enough: Errors on the first 10 items of the Test of Memory Malingering (TOMMe10) and administration time predict freestanding performance validity tests (PVTs) and underperformance on memory measures. Appl Neuropsychol Adult 2019; 28:35-47. [PMID: 30950290] [DOI: 10.1080/23279095.2019.1588122]
Abstract
It is critical that we develop more efficient performance validity tests (PVTs). A shorter version of the Test of Memory Malingering (TOMM) that utilizes errors on the first 10 items (TOMMe10) has shown promise as a freestanding PVT. A retrospective review included 397 consecutive veterans administered TOMM Trial 1 (TOMM1), the Medical Symptom Validity Test (MSVT), and the Brief Visuospatial Memory Test-Revised (BVMT-R). TOMMe10 accuracy and administration time were used to predict performance on freestanding PVTs (TOMM1, MSVT). The impact of failing TOMMe10 (2 or more errors) on independent memory measures was also explored. TOMMe10 was a robust predictor of TOMM1 (area under the curve [AUC] = 0.97) and the MSVT (AUC = 0.88), with sensitivities of 0.76 to 0.89 and specificities of 0.89 to 0.96. Administration time predicted PVT performance but did not improve accuracy over TOMMe10 alone. Failing TOMMe10 was associated with clinically and statistically significant declines on the BVMT-R and the MSVT Paired Associates and Free Recall memory tests (d = -0.32 to -1.31). Consistent with prior research, TOMMe10 at 2 or more errors was highly accurate in predicting performance on other well-validated freestanding PVTs. Failing even a single freestanding PVT (TOMMe10) significantly impacted memory measures and likely reflects invalid test performance.
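The AUC values reported for TOMMe10 can be read through the rank-sum (Mann-Whitney) identity: AUC is the probability that a randomly chosen invalid case shows a higher index value (here, more errors on the first 10 TOMM items) than a randomly chosen valid case. A minimal sketch with made-up error counts; the function name `roc_auc` and the data are illustrative, not from the article:

```python
def roc_auc(scores, invalid_flags):
    """AUC via the Mann-Whitney identity: probability that a randomly drawn
    invalid case scores higher on the index than a randomly drawn valid case,
    counting ties as half a win."""
    invalid = [s for s, f in zip(scores, invalid_flags) if f]
    valid = [s for s, f in zip(scores, invalid_flags) if not f]
    wins = sum((i > v) + 0.5 * (i == v) for i in invalid for v in valid)
    return wins / (len(invalid) * len(valid))

# Made-up first-10-item error counts; True marks profiles that failed
# an independent criterion PVT.
errors = [0, 0, 1, 1, 2, 4, 5, 7]
flags = [False, False, False, False, True, True, True, True]
print(roc_auc(errors, flags))  # → 1.0 (complete separation in this toy data)
```

Real samples overlap, which is why the abstract's AUCs sit below 1.0 and why a cutoff (2 or more errors) must trade off sensitivity against specificity.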
Affiliation(s)
- John H Denning
- Department of Veteran Affairs, Mental Health Service, Ralph H. Johnson Veterans Affairs Medical Center, Charleston, South Carolina, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, South Carolina, USA
11. Differentiating epilepsy from psychogenic nonepileptic seizures using neuropsychological test data. Epilepsy Behav 2018; 87:39-45. [PMID: 30172082] [DOI: 10.1016/j.yebeh.2018.08.010]
Abstract
OBJECTIVE Differentiating epileptic seizures (ES) from psychogenic nonepileptic seizures (PNES) represents a challenging differential diagnosis with important treatment implications. This study was designed to explore the utility of neuropsychological test scores in differentiating ES from PNES. METHOD Psychometric data from 72 patients with ES and 33 patients with PNES were compared on various tests of cognitive ability and performance validity. Individual measures that best discriminated the diagnoses were then entered as predictors in a logistic regression equation with group membership (ES vs. PNES) as the criterion. RESULTS On most tests of cognitive ability, the PNES sample outperformed the ES sample (medium-large effects) and was less likely to fail the Reliable Digit Span. However, patients with PNES failed two embedded validity indicators at significantly higher rates (risk ratios [RR]: 2.45-4.16). There were no group differences on the Test of Memory Malingering (TOMM). A logistic regression equation based on seven neuropsychological tests correctly classified 85.1% of patients. The cutoff with perfect specificity was associated with .47 sensitivity. CONCLUSIONS Consistent with previous research, the utility of psychometric methods of differential diagnosis is limited by the complex neurocognitive profiles associated with ES and PNES. Although individual measures might help differentiate ES from PNES, multivariate assessment models have superior discriminant power. The strongest psychometric evidence for PNES appears to be a consistent lack of impairment on tests sensitive to diffuse neurocognitive deficits, such as processing speed, working memory, and verbal fluency. While video-electroencephalogram (EEG) monitoring is the gold standard of differential diagnosis, psychometric testing has the potential to enhance clinical decision-making, particularly in complex or unclear cases such as patients with nondiagnostic video-EEGs. Adopting a standardized, fixed neuropsychological battery at epilepsy centers would advance research on the differential diagnostic power of psychometric testing.
12. Erdodi LA, Dunn AG, Seke KR, Charron C, McDermott A, Enache A, Maytham C, Hurtubise JL. The Boston Naming Test as a Measure of Performance Validity. Psychol Inj Law 2018. [DOI: 10.1007/s12207-017-9309-3]
13. Lippa SM. Performance validity testing in neuropsychology: a clinical guide, critical review, and update on a rapidly evolving literature. Clin Neuropsychol 2017; 32:391-421. [DOI: 10.1080/13854046.2017.1406146]
Affiliation(s)
- Sara M. Lippa
- Defense and Veterans Brain Injury Center, Silver Spring, MD, USA
- Walter Reed National Military Medical Center, Bethesda, MD, USA
- National Intrepid Center of Excellence, Bethesda, MD, USA
14. Grabyan JM, Collins RL, Alverson WA, Chen DK. Performance on the Test of Memory Malingering is predicted by the number of errors on its first 10 items on an inpatient epilepsy monitoring unit. Clin Neuropsychol 2017; 32:468-478. [DOI: 10.1080/13854046.2017.1368715]
Affiliation(s)
- Jonathan M. Grabyan
- Neurology Care Line, Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA
- Department of Psychiatry, Baylor College of Medicine, Houston, TX, USA
- Robert L. Collins
- Neurology Care Line, Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA
- Department of Neurology, Baylor College of Medicine, Houston, TX, USA
- W. Alexander Alverson
- Neurology Care Line, Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA
- Department of Psychology, University of Houston, Houston, TX, USA
- David K. Chen
- Neurology Care Line, Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA
- Department of Neurology, Baylor College of Medicine, Houston, TX, USA
15. Denning JH, Shura RD. Cost of malingering mild traumatic brain injury-related cognitive deficits during compensation and pension evaluations in the Veterans Benefits Administration. Appl Neuropsychol Adult 2017; 26:1-16. [DOI: 10.1080/23279095.2017.1350684]
Affiliation(s)
- John H. Denning
- Department of Veteran Affairs, Mental Health Service, Ralph H. Johnson Veterans Affairs Medical Center, Charleston, South Carolina, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, South Carolina, USA
- Robert D. Shura
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Salisbury, North Carolina, USA
- Mental Health and Behavioral Science Service Line, W. G. (Bill) Hefner Veterans Affairs Medical Center (VAMC), Salisbury, North Carolina, USA
- Department of Psychiatry and Behavioral Medicine, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
16. Fazio RL, Denning JH, Denney RL. TOMM Trial 1 as a performance validity indicator in a criminal forensic sample. Clin Neuropsychol 2016; 31:251-267. [DOI: 10.1080/13854046.2016.1213316]
Affiliation(s)
- John H. Denning
- Ralph H. Johnson VA Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Robert L. Denney
- Neuropsychological Associates of Southwest Missouri, Springfield, MO, USA