1
Finley JCA. Performance validity testing: the need for digital technology and where to go from here. Front Psychol 2024;15:1452462. PMID: 39193033; PMCID: PMC11347285; DOI: 10.3389/fpsyg.2024.1452462.
Affiliation(s)
- John-Christopher A. Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, United States
2
Ladowsky-Brooks RL. Recall and recognition of similarities items in neuropsychological assessment: Memory, validity, and meaning. Appl Neuropsychol Adult 2024:1-8. PMID: 38557276; DOI: 10.1080/23279095.2024.2334344.
Abstract
The current study examined whether the Memory Similarities Extended Test (M-SET), a memory test based on the Similarities subtest of the Wechsler Abbreviated Scale of Intelligence, Second Edition (WASI-II), has value in neuropsychological testing. The relationship of M-SET measures of cued recall (CR) and recognition memory (REC) to brain injury severity and memory scores from the Wechsler Memory Scale, Fourth Edition (WMS-IV) was analyzed in examinees with traumatic brain injuries ranging from mild to severe. Examinees who passed standard validity tests were divided into groups with intracranial injury (CT+ve, n = 18) and without intracranial injury (CT-ve, n = 50). In CT+ve only, CR was significantly correlated with Logical Memory I (LMI: rs = .62) and Logical Memory II (LMII: rs = .65). In both groups, there were smaller correlations with delayed visual memory (VRII: rs = .38; rs = .44) and psychomotor speed (Coding: rs = .29; rs = .29). The REC score was neither an indicator of memory ability nor an internal indicator of performance validity. There were no differences in M-SET or WMS-IV scores between the CT-ve and CT+ve groups, and reasons for this are discussed. It is concluded that the M-SET has utility as an incidental cued recall measure.
3
Giromini L, Pignolo C, Zennaro A, Sellbom M. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT in the In-Person and Remote Administration Formats: A Simulation Study on Feigned mTBI. Assessment 2024:10731911241235465. PMID: 38468147; DOI: 10.1177/10731911241235465.
Abstract
Our study compared the impact of administering Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs) in in-person versus remote formats and assessed different approaches to combining validity test results. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT, we assessed 164 adults, with half instructed to feign mild traumatic brain injury (mTBI) and half to respond honestly. Within each subgroup, half completed the tests in person, and the other half completed them online via videoconferencing. Results from 2 × 2 analyses of variance showed no significant effects of administration format on SVT and PVT scores. When comparing feigners to controls, the MMPI-2-RF RBS exhibited the largest effect size (d = 3.05) among all examined measures. Accordingly, we conducted a series of two-step hierarchical logistic regression models by entering the MMPI-2-RF RBS first, followed by each other SVT and PVT individually. We found that the IOP-29 and IOP-M were the only measures that yielded incremental validity beyond the effects of the MMPI-2-RF RBS in predicting group membership. Taken together, these findings suggest that administering these SVTs and PVTs in person or remotely yields similar results, and the combination of MMPI and IOP indexes might be particularly effective in identifying feigned mTBI.
4
Tierney SM, Matchanova A, Miller BI, Troyanskaya M, Romesser J, Sim A, Pastorek NJ. Cognitive "success" in the setting of performance validity test failure. J Clin Exp Neuropsychol 2024;46:46-54. PMID: 37555316; DOI: 10.1080/13803395.2023.2244161.
Abstract
BACKGROUND Although studies have shown unique variance contributions from performance invalidity, it is difficult to interpret the meaning of cognitive data in the setting of performance validity test (PVT) failure. The current study aimed to examine cognitive outcomes in this context. METHOD Two hundred and twenty-two veterans with a history of mild traumatic brain injury referred for clinical evaluation completed cognitive and performance validity measures. Standardized scores were characterized as Within Normal Limits (≥16th normative percentile) and Below Normal Limits (<16th percentile). Cognitive outcomes were examined across four commonly used PVTs. Self-reported employment and student status were used as indicators of "productivity" to assess potential functional differences related to lower cognitive performance. RESULTS Among participants who performed in the invalid range on Test of Memory Malingering trial 1, the Word Memory Test, the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span age-corrected scaled score, and the California Verbal Learning Test-Second Edition Forced Choice index, 16-88% earned broadly within normal limits scores across cognitive testing. Depending on which PVT measure was applied, the average number of cognitive performances below the 16th percentile ranged from 5 to 7 of 14 tasks. There were no differences in the total number of below normal limits performances on cognitive measures between "productive" and "non-productive" participants (T = 1.65, p = 1.00). CONCLUSIONS Results of the current study suggest that the range of within normal limits cognitive performance in the context of failed PVTs varies greatly. Importantly, our findings indicate that neurocognitive data may still provide important practical information regarding cognitive abilities, despite poor PVT outcomes. Further, given that rates of below normal limits cognitive performance did not differ among "productivity" groups, the results have important implications for functional abilities and recommendations in a clinical setting.
Affiliation(s)
- Savanna M Tierney
- Rehabilitation and Extended Care Line, Michael E DeBakey VA Medical Center, Houston, TX, USA
- Anastasia Matchanova
- Rehabilitation and Extended Care Line, Michael E DeBakey VA Medical Center, Houston, TX, USA
- Brian I Miller
- Rehabilitation and Extended Care Line, Michael E DeBakey VA Medical Center, Houston, TX, USA
- H. Ben Taub Department of Physical Medicine and Rehabilitation, Baylor College of Medicine, Houston, TX, USA
- Maya Troyanskaya
- Rehabilitation and Extended Care Line, Michael E DeBakey VA Medical Center, Houston, TX, USA
- H. Ben Taub Department of Physical Medicine and Rehabilitation, Baylor College of Medicine, Houston, TX, USA
- Jennifer Romesser
- Department of Psychology, VA Salt Lake City Health Care System, Salt Lake City, UT, USA
- Anita Sim
- Physical Medicine & Rehabilitation, Minneapolis VA Health Care System, Minneapolis, MN, USA
- Nicholas J Pastorek
- Rehabilitation and Extended Care Line, Michael E DeBakey VA Medical Center, Houston, TX, USA
- H. Ben Taub Department of Physical Medicine and Rehabilitation, Baylor College of Medicine, Houston, TX, USA
5
Erdodi LA. From "below chance" to "a single error is one too many": Evaluating various thresholds for invalid performance on two forced choice recognition tests. Behav Sci Law 2023;41:445-462. PMID: 36893020; DOI: 10.1002/bsl.2609.
Abstract
This study was designed to empirically evaluate the classification accuracy of various definitions of invalid performance on two forced-choice recognition performance validity tests (PVTs): the Forced-Choice Recognition trial of the California Verbal Learning Test, Second Edition (FCR-CVLT-II) and the Test of Memory Malingering (TOMM-2). The proportions of examinees responding at or below chance level (as defined by binomial theory) and of those making any errors were computed across two mixed clinical samples from the United States and Canada (N = 470) and two sets of criterion PVTs. There was virtually no overlap between the binomial and empirical distributions. Over 95% of patients who passed all PVTs obtained a perfect score. At-chance-level responding was limited to patients who failed ≥2 PVTs (91% of them failed 3 PVTs). No one scored below chance level on the FCR-CVLT-II or TOMM-2. All 40 patients with dementia scored above chance. Although at or below chance level performance provides very strong evidence of non-credible responding, scores above chance level have no negative predictive value. Even at-chance-level scores on PVTs provide compelling evidence of a non-credible presentation. A single error on the FCR-CVLT-II or TOMM-2 is highly specific (0.95) to psychometrically defined invalid performance. Defining non-credible responding as below chance level scores is an unnecessarily restrictive threshold that gives most examinees with invalid profiles a Pass.
Affiliation(s)
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
6
Del Bene VA, Gerstenecker A, Lazar RM. Formal Neuropsychological Testing: Test Batteries, Interpretation, and Added Value in Practice. Clin Geriatr Med 2023;39:27-43. PMID: 36404031; DOI: 10.1016/j.cger.2022.07.003.
Abstract
Neuropsychologists evaluate patients for cognitive decline and dementia, using validated psychometric tests along with behavioral observation, record review, clinical interview, and information about psychological functioning to evaluate brain-behavior relationships and aid in differential diagnosis and treatment planning. Premorbid functioning, education, sex, socioeconomic status, primary language, culture, and race-related health disparities are also considered when selecting tests, interpreting performance, and providing a diagnostic impression. Neuropsychologists provide diagnostic clarity, explain symptoms and likely disease course to patients and family members, and assist the family with future planning, behavioral management strategies, and ways to mitigate caregiver burden.
Affiliation(s)
- Victor A Del Bene
- Department of Neurology, Division of Neuropsychology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL 35294, USA; The Evelyn F. McKnight Brain Institute, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL 35294, USA
- Adam Gerstenecker
- Department of Neurology, Division of Neuropsychology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL 35294, USA; The Evelyn F. McKnight Brain Institute, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL 35294, USA
- Ronald M Lazar
- Department of Neurology, Division of Neuropsychology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL 35294, USA; The Evelyn F. McKnight Brain Institute, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL 35294, USA; Department of Neurobiology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL 35294, USA
7
Bellew D, Davenport L, Monaghan R, Cogley C, Gaughan M, Yap SM, Tubridy N, Bramham J, McGuigan C, O'Keeffe F. Interpreting the clinical importance of the relationship between subjective fatigue and cognitive impairment in multiple sclerosis (MS): How BICAMS performance is affected by MS-related fatigue. Mult Scler Relat Disord 2022;67:104161. PMID: 36126538; DOI: 10.1016/j.msard.2022.104161.
Abstract
BACKGROUND There is evidence that subjective fatigue can influence cognitive functioning in multiple sclerosis (MS). DeLuca et al.'s (2004) Relative Consequence Model proposes that impairments to other high-level cognitive functions, such as memory, result from the disease's effect on information processing speed. OBJECTIVE The primary aims of the study were to investigate 1) the relationship between subjective fatigue and cognitive functioning, as measured by the widely used Brief International Cognitive Assessment for Multiple Sclerosis (BICAMS), in MS; and 2) the consequential effect of fatigue on information processing speed, as predicted by the Relative Consequence Model. METHODS 192 participants with MS attending a tertiary referral MS centre completed the Modified Fatigue Impact Scale and the BICAMS. RESULTS Multiple correlation analyses determined that there were statistically significant relationships between all domains assessed by the BICAMS and levels of fatigue, such that higher levels of self-reported fatigue were associated with lower performance on information processing speed and on visual and verbal learning. After controlling for information processing speed, the strength of the correlation between fatigue and learning performance weakened. Linear regression analysis showed that fatigue predicted the most variance in verbal learning and 11.7% of the overall variance in BICAMS performance. CONCLUSION Subjective fatigue and objective cognitive performance in MS are related. Caution is advised in the interpretation of BICAMS scores in cases where high levels of fatigue are present, and more detailed neuropsychological assessment may be required to accurately identify objective cognitive impairment independent of subjective fatigue.
Affiliation(s)
- David Bellew
- School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Laura Davenport
- Department of Neurology, St. Vincent's University Hospital, Elm Park, Dublin 4, Ireland
- Ruth Monaghan
- Department of Neurology, St. Vincent's University Hospital, Elm Park, Dublin 4, Ireland
- Clodagh Cogley
- Department of Neurology, St. Vincent's University Hospital, Elm Park, Dublin 4, Ireland
- Maria Gaughan
- Department of Neurology, St. Vincent's University Hospital, Elm Park, Dublin 4, Ireland
- Siew Mei Yap
- Department of Neurology, St. Vincent's University Hospital, Elm Park, Dublin 4, Ireland
- Niall Tubridy
- Department of Neurology, St. Vincent's University Hospital, Elm Park, Dublin 4, Ireland
- Jessica Bramham
- School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Christopher McGuigan
- Department of Neurology, St. Vincent's University Hospital, Elm Park, Dublin 4, Ireland
- Fiadhnait O'Keeffe
- Department of Neurology, St. Vincent's University Hospital, Elm Park, Dublin 4, Ireland
8
Erdodi LA. Multivariate Models of Performance Validity: The Erdodi Index Captures the Dual Nature of Non-Credible Responding (Continuous and Categorical). Assessment 2022:10731911221101910. PMID: 35757996; DOI: 10.1177/10731911221101910.
Abstract
This study was designed to examine the classification accuracy of the Erdodi Index (EI-5), a novel method for aggregating validity indicators that takes into account both the number and extent of performance validity test (PVT) failures. Archival data were collected from a mixed clinical/forensic sample of 452 adults referred for neuropsychological assessment. The classification accuracy of the EI-5 was evaluated against established free-standing PVTs. The EI-5 achieved a good combination of sensitivity (.65) and specificity (.97), correctly classifying 92% of the sample. Its classification accuracy was comparable with that of another free-standing PVT. An indeterminate range between Pass and Fail emerged as a legitimate third outcome of performance validity assessment, indicating that the underlying construct is an inherently continuous variable. Results support the use of the EI model as a practical and psychometrically sound method of aggregating multiple embedded PVTs into a single-number summary of performance validity. Combining free-standing PVTs with the EI-5 resulted in a better separation between credible and non-credible profiles, demonstrating incremental validity. Findings are consistent with recent endorsements of a three-way outcome for PVTs (Pass, Borderline, and Fail).
9
Uiterwijk D, Stargatt R, Crowe SF. Objective Cognitive Outcomes and Subjective Emotional Sequelae in Litigating Adults with a Traumatic Brain Injury: The Impact of Performance and Symptom Validity Measures. Arch Clin Neuropsychol 2022;37:1662-1687. PMID: 35704852; DOI: 10.1093/arclin/acac039.
Abstract
OBJECTIVE This study examined the relative contribution of performance and symptom validity in litigating adults with traumatic brain injury (TBI), as a function of TBI severity, and examined the relationship between self-reported emotional symptoms and cognitive test scores while controlling for validity test performance. METHOD Participants underwent neuropsychological assessment between January 2012 and June 2021 in the context of compensation-seeking claims related to a TBI. All participants completed a cognitive test battery, the Personality Assessment Inventory (including symptom validity tests; SVTs), and multiple performance validity tests (PVTs). Data analyses included independent t-tests, one-way ANOVAs, correlation analyses, and hierarchical multiple regression. RESULTS A total of 370 participants were included. Atypical PVT and SVT performance were associated with poorer cognitive test performance and higher emotional symptom report, irrespective of TBI severity. PVTs and SVTs had an additive effect on cognitive test performance for uncomplicated mTBI, but less so for more severe TBI. The relationship between emotional symptoms and cognitive test performance diminished substantially when validity test performance was controlled, and validity test performance had a substantially larger impact than emotional symptoms on cognitive test performance. CONCLUSION Validity test performance has a significant impact on the neuropsychological profiles of people with TBI, irrespective of TBI severity, and plays a significant role in the relationship between emotional symptoms and cognitive test performance. Adequate validity testing should be incorporated into every neuropsychological assessment, and associations between emotional symptoms and cognitive outcomes that do not consider validity testing should be interpreted with extreme caution.
Affiliation(s)
- Daniel Uiterwijk
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Robyn Stargatt
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Simon F Crowe
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
10
Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing Performance Validity Cutoffs on the Boston Naming Test (BNT) Is Specific, but Insensitive to Non-Credible Responding. Dev Neuropsychol 2022;47:17-31. PMID: 35157548; DOI: 10.1080/87565641.2022.2038602.
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
Affiliation(s)
- Shayna Nussbaum
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson
- Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
11
The Relationship Between Cognitive Functioning and Symptoms of Depression, Anxiety, and Post-Traumatic Stress Disorder in Adults with a Traumatic Brain Injury: A Meta-Analysis. Neuropsychol Rev 2021;32:758-806. PMID: 34694543; DOI: 10.1007/s11065-021-09524-1.
Abstract
A thorough understanding of the relationship between cognitive test performance and symptoms of depression, anxiety, or post-traumatic stress disorder (PTSD) in people with traumatic brain injury (TBI) is important given the high prevalence of these emotional symptoms following injury. It is also important to understand whether these relationships are affected by TBI severity and by the validity of test performance and symptom report. This meta-analysis was conducted to investigate whether these symptoms are associated with cognitive test performance alterations in adults with a TBI. The meta-analysis was prospectively registered on the PROSPERO International Prospective Register of Systematic Reviews website (registration number: CRD42018089194). The electronic databases Medline, PsycINFO, and CINAHL were searched for journal articles published up until May 2020. In total, 61 studies were included, which enabled calculation of pooled effect sizes for the cognitive domains of immediate memory (verbal and visual), recent memory (verbal and visual), attention, executive function, processing speed, and language. Depression had a small, negative relationship with most cognitive domains. These relationships remained, for the most part, when samples with mild TBI (mTBI) only were analysed separately, but not for samples with more severe TBI (sTBI) only. A similar pattern of results was found in the anxiety analysis. PTSD had a small, negative relationship with verbal memory in samples with mTBI only. No data were available for the PTSD analysis with sTBI samples. Moderator analyses indicated that the relationships between emotional symptoms and cognitive test performance may be impacted to some degree by exclusion of participants with atypical performance on performance validity tests (PVTs) or symptom validity tests (SVTs); however, study numbers were small and changes in effect size were not statistically significant. These findings synthesise what is currently known about the relationship between cognitive test performance and emotional symptoms in adults with TBI, demonstrating significant, albeit small, relationships between emotional symptoms and cognitive test performance in multiple domains in non-military samples. Some of these relationships appeared to be mildly impacted by controlling for performance validity or symptom validity; however, this was based on the relatively few studies using validity tests. More research including PVTs and SVTs whilst examining the relationship between emotional symptoms and cognitive outcomes is needed.
12
Exploring the Structured Inventory of Malingered Symptomatology in Patients with Multiple Sclerosis. Psychol Inj Law 2021. DOI: 10.1007/s12207-021-09424-y.
13
Lace JW, Merz ZC, Galioto R. Examining the Clinical Utility of Selected Memory-Based Embedded Performance Validity Tests in Neuropsychological Assessment of Patients with Multiple Sclerosis. Neurol Int 2021;13:477-486. PMID: 34698256; PMCID: PMC8544445; DOI: 10.3390/neurolint13040047.
Abstract
Within the neuropsychological assessment, clinicians are responsible for ensuring the validity of obtained cognitive data. As such, increased attention is being paid to performance validity in patients with multiple sclerosis (pwMS). Experts have proposed batteries of neuropsychological tests for use in this population, though none contain recommendations for standalone performance validity tests (PVTs). The California Verbal Learning Test, Second Edition (CVLT-II) and Brief Visuospatial Memory Test, Revised (BVMT-R), both of which are included in the aforementioned recommended neuropsychological batteries, include previously validated embedded PVTs (which offer some advantages, including expedience and reduced costs), with no prior work exploring their utility in pwMS. The purpose of the present study was to determine the potential clinical utility of embedded PVTs to detect the signal of non-credibility, as operationally defined by below-criterion standalone PVT performance. One hundred thirty-three (133) patients with MS (M age = 48.28; 76.7% women; 85.0% White) were referred for neuropsychological assessment at a large, Midwestern academic medical center. Patients were placed into "credible" (n = 100) or "noncredible" (n = 33) groups based on a standalone PVT criterion. Classification statistics for four CVLT-II and BVMT-R PVTs of interest in isolation were poor (AUCs = 0.58-0.62). Several arithmetic and logistic regression-derived multivariate formulas were calculated, all of which similarly demonstrated poor discriminability (AUCs = 0.61-0.64). Although embedded PVTs may arguably maximize efficiency and minimize test burden in pwMS, common ones in the CVLT-II and BVMT-R may not be psychometrically appropriate, sufficiently sensitive, or substitutable for standalone PVTs in this population. Clinical neuropsychologists who evaluate such patients are encouraged to include standalone PVTs in their assessment batteries to ensure that clinical care conclusions drawn from neuropsychological data are valid.
Affiliation(s)
- John W. Lace
- Neurological Institute, Section of Neuropsychology, Cleveland Clinic Foundation, Cleveland, OH 44195, USA
- Zachary C. Merz
- LeBauer Department of Neurology, The Moses H. Cone Memorial Hospital, Greensboro, NC 27401, USA
- Rachel Galioto
- Neurological Institute, Section of Neuropsychology, Cleveland Clinic Foundation, Cleveland, OH 44195, USA
- Mellen Center for Multiple Sclerosis, Cleveland Clinic Foundation, Cleveland, OH 44195, USA
14
Sanborn V, Lace J, Gunstad J, Galioto R. Considerations regarding noncredible performance in the neuropsychological assessment of patients with multiple sclerosis: A case series. Appl Neuropsychol Adult 2021;30:458-467. PMID: 34514920; DOI: 10.1080/23279095.2021.1971229.
Abstract
Determining the validity of data during clinical neuropsychological assessment is crucial for proper interpretation, and extensive literature has emphasized myriad methods of doing so in diverse samples. However, little research has considered noncredible presentation in persons with multiple sclerosis (pwMS). PwMS often experience one or more factors known to impact the validity of data, including major neurocognitive impairment, psychological distress/psychogenic interference, and secondary gain. This case series aimed to illustrate the potential relationships between these factors and performance validity testing in pwMS. Six cases involving at least one of the above-stated factors were identified from an IRB-approved database of pwMS referred for neuropsychological assessment at a large, academic medical center. Backgrounds, neuropsychological test data, and clinical considerations for each were reviewed. Interestingly, no pwMS diagnosed with major neurocognitive impairment was found to have noncredible performance, nor did any patient show noncredible performance in the absence of notable psychological distress. Given the variability of noncredible performance and the multiplicity of factors affecting performance validity in pwMS, clinicians are strongly encouraged to use psychometrically appropriate methods for evaluating the validity of cognitive data in pwMS. Additional research aiming to elucidate base rates of, mechanisms underlying, and methods for assessing noncredible performance in pwMS is imperative.
Affiliation(s)
- John Lace
- Cleveland Clinic, Neurological Institute, Section of Neuropsychology, Cleveland, OH, USA
- John Gunstad
- Psychological Sciences, Kent State University, Kent, OH, USA; Brain Health Research Institute, Kent State University, Kent, OH, USA
- Rachel Galioto
- Cleveland Clinic, Neurological Institute, Section of Neuropsychology, Cleveland, OH, USA; Cleveland Clinic, Mellen Center for Multiple Sclerosis, Cleveland, OH, USA
15
Bonete-López B, Oltra-Cucarella J, Marín M, Antón C, Balao N, López E, Macià ES. Validation and Norms for a Recognition Task for the Spanish Version of the Free and Cued Selective Reminding Test. Arch Clin Neuropsychol 2021;36:954-964. PMID: 33264394; DOI: 10.1093/arclin/acaa117.
Abstract
OBJECTIVE The aim of the present work was to develop and validate a recognition task to be used with the Spanish version of the 16-item Free and Cued Selective Reminding Test (FCSRT). METHOD A total of 96 (67.7% women) cognitively healthy, functionally independent community-dwelling participants aged 55 years or older underwent a comprehensive neuropsychological assessment. A recognition task for the FCSRT was developed that included the original 16 items, 16 semantically related items, and eight unrelated foils. Indices of discriminability (d') and response bias (C), as well as 95% confidence intervals for chance-level responding, were calculated. RESULTS On average, our sample was 65.71 years old (SD = 6.68, range: 55-87), had 11.39 years of formal education (SD = 3.37, range: 3-19), and had a Mini-Mental State Examination score of 28.42 (SD = 1.49, range: 25-30). Recognition scores did not differ statistically between sexes, nor did they correlate with demographics. Participants scored at ceiling levels (mean number of Hits = 15.52, SD = 0.906; mean number of False Alarms = 0.27, SD = 0.589). All the participants scored above chance levels. CONCLUSIONS Normative data from a novel recognition task for the Spanish version of the FCSRT are provided for use in clinical and research settings. Including a recognition task in the assessment of memory functioning might help uncover the pattern of memory impairments in older adults, and can help improve the memory profile of people with amnestic Mild Cognitive Impairment. Future research is warranted to validate and expand the recognition task.
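The discriminability and bias indices named in this abstract follow directly from signal detection theory: d' is the difference between the z-transformed hit and false-alarm rates, and C is their negated mean. The sketch below illustrates the arithmetic on counts matching the task's shape (16 targets, 24 foils); the function name and the log-linear (+0.5) correction are illustrative choices, not taken from the paper.

```python
# Illustrative signal-detection indices: discriminability (d') and response
# bias (C) from hits and false alarms. The log-linear (+0.5) correction is an
# assumption here, used to avoid infinite z-scores at ceiling performance.
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    bias_c = -(z(hit_rate) + z(fa_rate)) / 2.0
    return d_prime, bias_c

# A near-ceiling profile like the sample average (about 15.5 hits of 16
# targets, under one false alarm on 24 foils), rounded to whole counts:
d, c = sdt_indices(hits=15, misses=1, false_alarms=0, correct_rejections=24)
```

At ceiling, uncorrected rates of 1.0 or 0.0 would map to infinite z-scores, which is why some correction is required before d' can be computed for samples like this one.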
Affiliation(s)
- Beatriz Bonete-López
- Departamento de Psicología de la Salud, Universidad Miguel Hernández de Elche, Alicante, Spain; SABIEX, Universidad Miguel Hernández de Elche, Alicante, Spain
- Javier Oltra-Cucarella
- Departamento de Psicología de la Salud, Universidad Miguel Hernández de Elche, Alicante, Spain; SABIEX, Universidad Miguel Hernández de Elche, Alicante, Spain
- Marta Marín
- SABIEX, Universidad Miguel Hernández de Elche, Alicante, Spain
- Carolina Antón
- SABIEX, Universidad Miguel Hernández de Elche, Alicante, Spain
- Nerea Balao
- SABIEX, Universidad Miguel Hernández de Elche, Alicante, Spain
- Elena López
- SABIEX, Universidad Miguel Hernández de Elche, Alicante, Spain
- Esther Sitges Macià
- Departamento de Psicología de la Salud, Universidad Miguel Hernández de Elche, Alicante, Spain; SABIEX, Universidad Miguel Hernández de Elche, Alicante, Spain
16
Lace JW, Merz ZC, Galioto R. Nonmemory Composite Embedded Performance Validity Formulas in Patients with Multiple Sclerosis. Arch Clin Neuropsychol 2021; 37:309-321. PMID: 34467368. DOI: 10.1093/arclin/acab066.
Abstract
OBJECTIVE Research regarding performance validity tests (PVTs) in patients with multiple sclerosis (MS) is scant, and recommended batteries for neuropsychological evaluations in this population lack suggestions to include PVTs. Moreover, limited work has examined embedded PVTs in this population. As previous investigations indicated that nonmemory-based embedded PVTs provide clinical utility in other populations, this study sought to determine whether a logistic regression-derived PVT formula could be identified from selected nonmemory variables in a sample of patients with MS. METHOD A total of 184 patients (M age = 48.45; 76.6% female) with MS were referred for neuropsychological assessment at a large, Midwestern academic medical center. Patients were placed into "credible" (n = 146) or "noncredible" (n = 38) groups according to performance on a standalone PVT. Missing data were imputed with HOTDECK. RESULTS Classification statistics for a variety of embedded PVTs were examined, with none appearing psychometrically appropriate in isolation (areas under the curve [AUCs] = .48-.64). Four exponentiated equations were created via logistic regression. The six-, five-, and three-predictor equations yielded acceptable discriminability (AUC = .71-.74) with modest sensitivity (.34-.39) while maintaining good specificity (≥.90). The two-predictor equation appeared unacceptable (AUC = .67). CONCLUSIONS Results suggest that multivariate combinations of embedded PVTs may provide some clinical utility while minimizing test burden in determining performance validity in patients with MS. Nonetheless, the authors recommend routine inclusion of several PVTs and utilization of comprehensive clinical judgment to maximize signal detection of noncredible performance and avoid incorrect conclusions. Clinical implications, limitations, and avenues for future research are discussed.
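As a rough illustration of the trade-off this abstract reports (modest sensitivity bought at specificity ≥ .90), the sketch below picks the lowest flagging threshold on a composite score that keeps specificity at or above .90 in the credible group, then reads off sensitivity in the noncredible group. All scores and group counts are invented for illustration; the paper's actual exponentiated logistic regression equations are not reproduced here.

```python
# Illustrative only: choose the lowest composite cutoff that keeps specificity
# at or above a floor, then evaluate sensitivity. Scores here are invented
# counts of embedded-PVT failures (0-6), not the paper's data or equations.
def cutoff_for_specificity(credible_scores, floor=0.90):
    """Smallest threshold t (flagging scores >= t) with specificity >= floor."""
    candidates = sorted(set(credible_scores)) + [max(credible_scores) + 1]
    for t in candidates:
        spec = sum(s < t for s in credible_scores) / len(credible_scores)
        if spec >= floor:
            return t, spec
    raise ValueError("no threshold reaches the requested specificity")

# Invented failure-count distributions for the two groups.
credible = [0] * 60 + [1] * 25 + [2] * 10 + [3] * 4 + [4] * 1
noncredible = [1] * 5 + [2] * 10 + [3] * 12 + [4] * 8 + [5] * 5

t, spec = cutoff_for_specificity(credible)
sens = sum(s >= t for s in noncredible) / len(noncredible)
```

Anchoring the cutoff to a specificity floor, as above, mirrors the convention in this literature of tolerating lower sensitivity in exchange for a low false-positive rate.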
Affiliation(s)
- John W Lace
- Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA
- Zachary C Merz
- LeBauer Department of Neurology, The Moses H. Cone Memorial Hospital, Greensboro, NC, USA
- Rachel Galioto
- Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA; Mellen Center for Multiple Sclerosis, Cleveland Clinic, Cleveland, OH, USA
17
Uiterwijk D, Wong D, Stargatt R, Crowe SF. Performance and symptom validity testing in neuropsychological assessments in Australia: a survey of practises and beliefs. Australian Psychologist 2021. DOI: 10.1080/00050067.2021.1948797.
Affiliation(s)
- Daniel Uiterwijk
- School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Dana Wong
- School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Robyn Stargatt
- School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Simon F. Crowe
- School of Psychology and Public Health, La Trobe University, Victoria, Australia
18
Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021; 49:179-213. PMID: 34420986. DOI: 10.3233/nre-218020.
Abstract
OBJECTIVE This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD Archival data were collected from 167 patients (52.4% male; M age = 39.7) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to individual components. Instrumentation artifacts are endemic to PVTs, and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical models. As the number/severity of embedded PVT failures accumulates, assessors must consider the possibility of non-credible presentation and its clinical implications to neurorehabilitation.
19
Nauta IM, Bertens D, van Dam M, Huiskamp M, Driessen S, Geurts J, Uitdehaag B, Fasotti L, Hulst HE, de Jong BA, Klein M. Performance validity in outpatients with multiple sclerosis and cognitive complaints. Mult Scler 2021; 28:642-653. PMID: 34212754. PMCID: PMC8961248. DOI: 10.1177/13524585211025780.
Abstract
BACKGROUND Suboptimal performance during neuropsychological assessment renders cognitive test results invalid. However, suboptimal performance has rarely been investigated in multiple sclerosis (MS). OBJECTIVES To investigate potential underlying mechanisms of suboptimal performance in MS. METHODS Performance validity testing, neuropsychological assessments, neuroimaging, and questionnaires were analyzed in 99 MS outpatients with cognitive complaints. Based on performance validity testing, patients were classified as valid or invalid performers; based on neuropsychological test results, as cognitively impaired or preserved. Group comparisons and correlational analyses were performed on demographics, patient-reported, and disease-related outcomes. RESULTS Twenty percent displayed invalid performance. Invalid and valid performers did not differ regarding demographic, patient-reported, or disease-related outcomes. Disease severity of invalid and valid performers with cognitive impairment was comparable, but worse than that of cognitively preserved valid performers. Lower performance validity scores were related to lower cognitive functioning, lower education, being male, and higher disability levels (p < 0.05). CONCLUSION Suboptimal performance frequently occurs in patients with MS and cognitive complaints. In both clinical practice and cognitive research, suboptimal performance should be considered in the interpretation of cognitive outcomes. Identification of factors that differentiate between suboptimal and optimal performers with cognitive impairment needs further exploration.
Affiliation(s)
- I M Nauta
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Neurology, MS Center Amsterdam, Amsterdam Neuroscience, Amsterdam, The Netherlands
- D Bertens
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Klimmendaal Rehabilitation Center, Arnhem, The Netherlands
- M van Dam
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Anatomy and Neurosciences, MS Center Amsterdam, Amsterdam Neuroscience, Amsterdam, The Netherlands
- M Huiskamp
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Anatomy and Neurosciences, MS Center Amsterdam, Amsterdam Neuroscience, Amsterdam, The Netherlands
- S Driessen
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Medical Psychology, Amsterdam Neuroscience, Amsterdam, The Netherlands
- JJG Geurts
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Anatomy and Neurosciences, MS Center Amsterdam, Amsterdam Neuroscience, Amsterdam, The Netherlands
- BMJ Uitdehaag
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Neurology, MS Center Amsterdam, Amsterdam Neuroscience, Amsterdam, The Netherlands
- L Fasotti
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Klimmendaal Rehabilitation Center, Arnhem, The Netherlands
- H E Hulst
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Anatomy and Neurosciences, MS Center Amsterdam, Amsterdam Neuroscience, Amsterdam, The Netherlands
- B A de Jong
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Neurology, MS Center Amsterdam, Amsterdam Neuroscience, Amsterdam, The Netherlands
- M Klein
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Medical Psychology, Amsterdam Neuroscience, Amsterdam, The Netherlands
20
Martinez KA, Sayers C, Hayes C, Martin PK, Clark CB, Schroeder RW. Normal cognitive test scores cannot be interpreted as accurate measures of ability in the context of failed performance validity testing: A symptom- and detection-coached simulation study. J Clin Exp Neuropsychol 2021; 43:301-309. PMID: 33998369. DOI: 10.1080/13803395.2021.1926435.
Abstract
Introduction: While use of performance validity tests (PVTs) has become a standard of practice in neuropsychology, there are differing opinions regarding whether to interpret cognitive test data when standard scores fall within normal limits despite PVTs being failed. This study is the first to empirically determine whether normal cognitive test scores underrepresent functioning when PVTs are failed. Method: Participants, randomly assigned to either a simulated malingering group (n = 50) instructed to mildly suppress test performances or a best-effort/control group (n = 50), completed neuropsychological tests which included the North American Adult Reading Test (NAART), California Verbal Learning Test - 2nd Edition (CVLT-II), and Test of Memory Malingering (TOMM). Results: Groups were not significantly different in age, sex, education, or NAART predicted intellectual ability, but simulators performed significantly worse than controls on the TOMM, CVLT-II Forced Choice Recognition, and CVLT-II Short Delay Free Recall. The groups did not significantly differ on other examined CVLT-II measures. Of simulators who failed validity testing, 36% scored no worse than average and 73% scored no worse than low average on any of the examined CVLT-II indices. Conclusions: Of simulated malingerers who failed validity testing, nearly three-fourths were able to produce cognitive test scores that were within normal limits, which indicates that normal cognitive performances cannot be interpreted as accurately reflecting an individual's capabilities when obtained in the presence of validity test failure. At the same time, only 2 of 50 simulators were successful in passing validity testing while scoring within an impaired range on cognitive testing. This latter finding indicates that successfully feigning cognitive deficits is difficult when PVTs are utilized within the examination.
Affiliation(s)
- Karen A Martinez
- Department of Psychology, Wichita State University, Wichita, KS, USA
- Courtney Sayers
- Department of Psychology, Wichita State University, Wichita, KS, USA
- Charles Hayes
- Department of Psychology, Wichita State University, Wichita, KS, USA
- Phillip K Martin
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
- C Brendan Clark
- Department of Psychology, Wichita State University, Wichita, KS, USA
- Ryan W Schroeder
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
21
DaCosta A, Webbe F, LoGalbo A. The Rey Dot Counting Test as a Tool for Detecting Suboptimal Performance in Athlete Baseline Testing. Arch Clin Neuropsychol 2021; 36:414-423. PMID: 32719864. DOI: 10.1093/arclin/acaa052.
Abstract
OBJECTIVE The limitations of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT)'s embedded validity measures (EVMs) are well-documented, as estimates suggest up to 35% of invalid baseline performances go undetected. Few studies have examined standalone performance validity tests (PVT) as a supplement to ImPACT's EVMs. METHOD College athletes (n = 1,213) were administered a preseason baseline assessment that included ImPACT and the Rey Dot Counting Test (DCT), a standalone PVT, among other measures. RESULTS Sixty-nine athletes (5.69%) met criteria for suboptimal effort on either ImPACT or the DCT. The DCT detected more cases of suboptimal effort (n = 50) than ImPACT (n = 21). A χ2 test of independence detected significant disagreement between the two measures, as only two individuals produced suboptimal effort on both (χ2(2) = 1.568, p = .210). Despite this disagreement, there were significant differences between the suboptimal effort DCT group and the adequate effort DCT group across all four ImPACT neurocognitive domains (U = 19,225.000, p < .001; U = 17,859.000, p < .001; U = 13,854.000, p < .001; U = 17,850.500, p < .001). CONCLUSIONS The DCT appears to detect suboptimal effort otherwise undetected by ImPACT's EVMs.
Affiliation(s)
- Andrew DaCosta
- School of Psychology, Florida Institute of Technology, Melbourne, FL, USA
- Frank Webbe
- School of Psychology, Florida Institute of Technology, Melbourne, FL, USA
- Anthony LoGalbo
- School of Psychology, Florida Institute of Technology, Melbourne, FL, USA
22
Yue JK, Phelps RR, Hemmerle DD, Upadhyayula PS, Winkler EA, Deng H, Chang D, Vassar MJ, Taylor SR, Schnyer DM, Lingsma HF, Puccio AM, Yuh EL, Mukherjee P, Huang MC, Ngwenya LB, Valadka AB, Markowitz AJ, Okonkwo DO, Manley GT. Predictors of six-month inability to return to work in previously employed subjects after mild traumatic brain injury: A TRACK-TBI pilot study. Journal of Concussion 2021; 5. PMID: 34046212. PMCID: PMC8153496. DOI: 10.1177/20597002211007271.
Abstract
Introduction: Return to work (RTW) is an important milestone of mild traumatic brain injury (mTBI) recovery. The objective of this study was to evaluate whether baseline clinical variables, three-month RTW, and three-month postconcussional symptoms (PCS) were associated with six-month RTW after mTBI. Methods: Adult subjects from the prospective multicenter Transforming Research and Clinical Knowledge in Traumatic Brain Injury Pilot study with mTBI (Glasgow Coma Scale 13-15) who were employed at baseline, with completed three- and six-month RTW status and three-month Acute Concussion Evaluation (ACE), were extracted. Univariate and multivariable analyses were performed for six-month RTW, with focus on baseline employment, three-month RTW, and three-month ACE domains (physical, cognitive, sleep, and/or emotional PCS). Odds ratios (OR) and 95% confidence intervals [CI] were reported. Significance was assessed at p < 0.05. Results: In 152 patients aged 40.7 ± 15.0 years, 72% were employed full-time at baseline. Three- and six-month RTW rates were 77.6% and 78.9%, respectively. At three months, 59.2%, 47.4%, 46.1%, and 31.6% scored positive for the ACE physical, cognitive, sleep, and emotional PCS domains, respectively. Three-month RTW predicted six-month RTW (OR = 19.80, 95% CI [7.61-51.52]). On univariate analysis, scoring positive in any three-month ACE domain predicted inability to RTW at six months (OR = 0.10-0.11). On multivariable analysis, emotional symptoms predicted inability to RTW at six months (OR = 0.19 [0.04-0.85]). Subjects who scored positive in all four ACE domains were more likely to be unable to RTW at six months (four domains: 58.3% vs. zero-to-three domains: 9.5%; multivariable OR = 0.09 [0.02-0.33]). Conclusions: Three months post-injury is an important time point at which RTW status and PCS should be assessed, as both are prognostic markers for six-month RTW. Clinicians should be particularly vigilant of patients who present with emotional symptoms, and of patients with symptoms across multiple PCS categories, as these patients are at further risk of inability to RTW and may benefit from targeted evaluation and support.
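The odds ratios with 95% confidence intervals reported in abstracts like this one are conventionally computed from a 2x2 table using the Wald method on the log scale. The sketch below shows that calculation on an invented table; the counts are not the study's data.

```python
# Illustrative odds ratio with a Wald 95% CI from a 2x2 table.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for the table [[a, b], [c, d]], where rows are
    predictor present/absent and columns are outcome present/absent."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# Invented counts: of 98 who had returned to work at three months, 90 also
# returned by six months; of 30 who had not, 10 did.
or_, lo, hi = odds_ratio_ci(90, 8, 10, 20)
```

With these invented counts the point estimate is large and the interval wide, which is the typical picture when one cell of the table is small, as in the study's OR of 19.80 with CI [7.61-51.52].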
Affiliation(s)
- John K Yue
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, USA; Brain and Spinal Injury Center, Zuckerberg San Francisco General Hospital, San Francisco, CA, USA
- Ryan RL Phelps
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, USA; Brain and Spinal Injury Center, Zuckerberg San Francisco General Hospital, San Francisco, CA, USA
- Debra D Hemmerle
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, USA; Brain and Spinal Injury Center, Zuckerberg San Francisco General Hospital, San Francisco, CA, USA
- Pavan S Upadhyayula
- Department of Neurological Surgery, University of California San Diego, San Diego, CA, USA
- Ethan A Winkler
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, USA; Brain and Spinal Injury Center, Zuckerberg San Francisco General Hospital, San Francisco, CA, USA
- Hansen Deng
- Department of Neurological Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Diana Chang
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, USA; Brain and Spinal Injury Center, Zuckerberg San Francisco General Hospital, San Francisco, CA, USA
- Mary J Vassar
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, USA; Brain and Spinal Injury Center, Zuckerberg San Francisco General Hospital, San Francisco, CA, USA
- Sabrina R Taylor
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, USA; Brain and Spinal Injury Center, Zuckerberg San Francisco General Hospital, San Francisco, CA, USA
- David M Schnyer
- Department of Psychology, University of Texas, Austin, TX, USA
- Hester F Lingsma
- Department of Public Health, Erasmus Medical Center, Rotterdam, The Netherlands
- Ava M Puccio
- Department of Neurological Surgery, University of California San Diego, San Diego, CA, USA
- Esther L Yuh
- Brain and Spinal Injury Center, Zuckerberg San Francisco General Hospital, San Francisco, CA, USA; Department of Radiology, University of California San Francisco, San Francisco, CA, USA
- Pratik Mukherjee
- Brain and Spinal Injury Center, Zuckerberg San Francisco General Hospital, San Francisco, CA, USA; Department of Radiology, University of California San Francisco, San Francisco, CA, USA
- Michael C Huang
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, USA; Brain and Spinal Injury Center, Zuckerberg San Francisco General Hospital, San Francisco, CA, USA
- Laura B Ngwenya
- Department of Neurological Surgery, University of Cincinnati, Cincinnati, OH, USA
- Alex B Valadka
- Department of Neurological Surgery, Virginia Commonwealth University, Richmond, VA, USA
- Amy J Markowitz
- Brain and Spinal Injury Center, Zuckerberg San Francisco General Hospital, San Francisco, CA, USA
- David O Okonkwo
- Department of Neurological Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Geoffrey T Manley
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, USA; Brain and Spinal Injury Center, Zuckerberg San Francisco General Hospital, San Francisco, CA, USA
23
Braw Y. Response Time Measures as Supplementary Validity Indicators in Forced-Choice Recognition Memory Performance Validity Tests: A Systematic Review. Neuropsychol Rev 2021; 32:71-98. PMID: 33821424. DOI: 10.1007/s11065-021-09499-z.
Abstract
Performance validity tests (PVTs) based on the forced-choice recognition memory (FCRM) paradigm are commonly used for the detection of noncredible performance. Examinees' response times (RTs) are affected by cognitive processes associated with deception and can also be gathered without lengthening the duration of the assessment. Consequently, interest in the utility of these measures as supplementary validity indicators in FCRM-PVTs has grown over the years. The current systematic review summarizes both clinical and simulation (i.e., healthy participants simulating cognitive impairment) studies of RTs in FCRM-PVTs. The findings of 25 peer-reviewed articles (n = 26 empirical studies) indicate that noncredible performance in FCRM-PVTs is associated with longer RTs. Additionally, there are indications that noncredible performance is associated with larger variability in RTs. RT measures, however, have lower discrimination capacity than conventional accuracy measures. Their utility may therefore lie in reaching decisions regarding cases with border zone accuracy scores, as well as aiding in the detection of more sophisticated examinees who are aware of the use of accuracy-based validity indicators in FCRM-PVTs. More research, however, is required before these measures are incorporated in daily practice and clinical decision-making processes.
Affiliation(s)
- Yoram Braw
- Department of Psychology, Ariel University, Ariel, Israel.
24
Cutler L, Abeare CA, Messa I, Holcomb M, Erdodi LA. This will only take a minute: Time cutoffs are superior to accuracy cutoffs on the forced choice recognition trial of the Hopkins Verbal Learning Test - Revised. Appl Neuropsychol Adult 2021; 29:1425-1439. PMID: 33631077. DOI: 10.1080/23279095.2021.1884555.
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of the recently introduced forced-choice recognition trial of the Hopkins Verbal Learning Test - Revised (FCRHVLT-R) as a performance validity test (PVT) in a clinical sample. Time-to-completion (T2C) for the FCRHVLT-R was also examined. METHOD Forty-three students were assigned to either the control or the experimental malingering (expMAL) condition. Archival data were collected from 52 adults clinically referred for neuropsychological assessment. Invalid performance was defined using expMAL status, two free-standing PVTs, and two validity composites. RESULTS Among students, FCRHVLT-R ≤11 or T2C ≥45 seconds was specific (0.86-0.93) to invalid performance. Among patients, an FCRHVLT-R ≤11 was specific (0.94-1.00), but relatively insensitive (0.38-0.60) to non-credible responding. T2C ≥35 s produced notably higher sensitivity (0.71-0.89), but variable specificity (0.83-0.96). The T2C achieved superior overall correct classification (81-86%) compared to the accuracy score (68-77%). The FCRHVLT-R provided incremental utility in performance validity assessment compared to previously introduced validity cutoffs on Recognition Discrimination. CONCLUSIONS Combined with T2C, the FCRHVLT-R has the potential to function as a quick, inexpensive, and effective embedded PVT. The time cutoff effectively attenuated the low ceiling of the accuracy scores, increasing sensitivity by 19%. Replication in larger and more geographically and demographically diverse samples is needed before the FCRHVLT-R can be endorsed for routine clinical application.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Isabelle Messa
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
25
Gegner J, Erdodi LA, Giromini L, Viglione DJ, Bosi J, Brusadelli E. An Australian study on feigned mTBI using the Inventory of Problems - 29 (IOP-29), its Memory Module (IOP-M), and the Rey Fifteen Item Test (FIT). Appl Neuropsychol Adult 2021; 29:1221-1230. PMID: 33403885. DOI: 10.1080/23279095.2020.1864375.
Abstract
We investigated the classification accuracy of the Inventory of Problems - 29 (IOP-29), its newly developed memory module (IOP-M) and the Fifteen Item Test (FIT) in an Australian community sample (N = 275). One third of the participants (n = 93) were asked to respond honestly, two thirds were instructed to feign mild TBI. Half of the feigners (n = 90) were coached to avoid detection by not exaggerating, half were not (n = 92). All measures successfully discriminated between honest responders and feigners, with large effect sizes (d ≥ 1.96). The effect size for the IOP-29 (d ≥ 4.90), however, was about two-to-three times larger than those produced by the IOP-M and FIT. Also noteworthy, the IOP-29 and IOP-M showed excellent sensitivity (>90% the former, > 80% the latter), in both the coached and uncoached feigning conditions, at perfect specificity. Instead, the sensitivity of the FIT was 71.7% within the uncoached simulator group and 53.3% within the coached simulator group, at a nearly perfect specificity of 98.9%. These findings suggest that the validity of the IOP-29 and IOP-M should generalize to Australian examinees and that the IOP-29 and IOP-M likely outperform the FIT in the detection of feigned mTBI.
Affiliation(s)
- Jennifer Gegner
- Department of Psychology, University of Wollongong, Wollongong, Australia
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
26
Erdodi LA, Abeare CA. Stronger Together: The Wechsler Adult Intelligence Scale-Fourth Edition as a Multivariate Performance Validity Test in Patients with Traumatic Brain Injury. Arch Clin Neuropsychol 2020; 35:188-204. PMID: 31696203. DOI: 10.1093/arclin/acz032.
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of a multivariate model of performance validity assessment using embedded validity indicators (EVIs) within the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). METHOD Archival data were collected from 100 adults with traumatic brain injury (TBI) consecutively referred for neuropsychological assessment in a clinical setting. The classification accuracy of previously published individual EVIs nested within the WAIS-IV and a composite measure based on six independent EVIs were evaluated against psychometrically defined non-credible performance. RESULTS Univariate validity cutoffs based on age-corrected scaled scores on Coding, Symbol Search, Digit Span, Letter-Number-Sequencing, Vocabulary minus Digit Span, and Coding minus Symbol Search were strong predictors of psychometrically defined non-credible responding. Failing ≥3 of these six EVIs at the liberal cutoff improved specificity (.91-.95) over univariate cutoffs (.78-.93). Conversely, failing ≥2 EVIs at the more conservative cutoff increased and stabilized sensitivity (.43-.67) compared to univariate cutoffs (.11-.63) while maintaining consistently high specificity (.93-.95). CONCLUSIONS In addition to being a widely used test of cognitive functioning, the WAIS-IV can also function as a measure of performance validity. Consistent with previous research, combining information from multiple EVIs enhanced the classification accuracy of individual cutoffs and provided more stable parameter estimates. If the current findings are replicated in larger, diagnostically and demographically heterogeneous samples, the WAIS-IV has the potential to become a powerful multivariate model of performance validity assessment. 
BRIEF SUMMARY Using a combination of multiple performance validity indicators embedded within the subtests of the Wechsler Adult Intelligence Scale, the credibility of the response set can be established with a high level of confidence. Multivariate models improve classification accuracy over individual tests. Relying on existing test data is a cost-effective approach to performance validity assessment.
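The failure-counting logic described in this abstract can be sketched in a few lines. The cutoff values and the three-subtest example below are placeholders for illustration, not the published WAIS-IV cutoffs:

```python
# Hypothetical sketch of multivariate EVI aggregation: count embedded
# validity indicator (EVI) failures at a liberal and a conservative
# cutoff level, then flag the profile per the ">=3 liberal or >=2
# conservative failures" rules described in the abstract.
# All cutoff values here are placeholders, not the published ones.

def flag_non_credible(scaled_scores, liberal_cutoffs, conservative_cutoffs):
    """Return (liberal_failures, conservative_failures, flagged)."""
    lib_fails = sum(
        1 for evi, score in scaled_scores.items()
        if score <= liberal_cutoffs[evi]
    )
    con_fails = sum(
        1 for evi, score in scaled_scores.items()
        if score <= conservative_cutoffs[evi]
    )
    flagged = lib_fails >= 3 or con_fails >= 2
    return lib_fails, con_fails, flagged

# Example with placeholder cutoffs for three of the six EVIs
liberal = {"Coding": 6, "Symbol Search": 6, "Digit Span": 6}
conservative = {"Coding": 4, "Symbol Search": 4, "Digit Span": 4}
scores = {"Coding": 5, "Symbol Search": 7, "Digit Span": 3}
print(flag_non_credible(scores, liberal, conservative))  # → (2, 1, False)
```

The design choice the abstract highlights is that aggregating several weak indicators trades a small specificity cost at each cutoff for a large, stable gain in overall sensitivity.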
Collapse
Affiliation(s)
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | - Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| |
Collapse
|
27
|
Begali VL. Neuropsychology and the dementia spectrum: Differential diagnosis, clinical management, and forensic utility. NeuroRehabilitation 2020; 46:181-194. [PMID: 32083596 DOI: 10.3233/nre-192965] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND The utility of neuropsychology in the treatment and evaluation of neuropsychological disorders and neurodegenerative diseases is supported by scientific study. As a discipline, neuropsychology's value and efficacy when applied to the dementia spectrum are rooted in its inherent adaptability as a practical, cost-effective, and scientifically based resource for differential diagnosis, treatment planning, and forensic decision making. OBJECTIVES This article provides a framework for conceptualizing dementia as a spectrum of disorders and outlines a rationale for preferential reliance upon neuropsychological tenets. The function of neuropsychology in differential diagnosis, clinical management, integrative care, and forensic applications is delineated for use as a contemporary interdisciplinary reference. METHODOLOGY An overview of the literature on dementia as a spectrum of disorders has been integrated with the science and practice of neuropsychology. CONCLUSIONS The utility of neuropsychology emanates from its focus on brain functioning and the discipline's appreciation for the relationship between brain functioning and cognition, mental state, and behavior. Early and routine referral for neuropsychological assessment allows for the objective determination of normal versus abnormal neurocognitive functioning, provides a baseline for serial reassessment, and leads to the more rapid deployment of effective treatments. Beyond the hospital and clinic, neuropsychological expertise is increasingly sought after as integral to the legal system when decisions regarding eligibility for long term care and questions about capacity require objective and reliable measurement.
Collapse
Affiliation(s)
- Vivian L Begali
- Neuropsychology and Psychological Healthcare, Fountain Park Medical Offices, 9327 Midlothian Turnpike, Suite 1-C, Richmond, VA 23235, USA Tel.: +1 804 728 2964; E-mail: ; Web: http://www.drvivianbegali.com
| |
Collapse
|
28
|
Rinaldi A, Stewart-Willis JJ, Scarisbrick D, Proctor-Weber Z. Clinical utility of the TOMMe10 scoring criteria for detecting suboptimal effort in an mTBI veteran sample. APPLIED NEUROPSYCHOLOGY-ADULT 2020; 29:670-676. [PMID: 32780587 DOI: 10.1080/23279095.2020.1803870] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
In the context of diminishing reimbursement and patient access demands, researchers continually refine performance validity measures (PVMs) to maximize efficiency while maintaining confidence in obtained data. This is particularly true for populations with high PVM failure rates (e.g., mTBI patients). The TOMMe10 (number of errors on the first 10 TOMM items) was the method this study used for classifying PVM performance as pass/fail (fail defined as failure on 2 of 6 PVM scores, pass defined as 0/1 failures). The present study hypothesized that the TOMMe10 would have comparable sensitivity/specificity for identifying non-credible cognitive performance among veterans with mTBI relative to previous research findings and commonly used performance validity measures (e.g., TOMM or WMT). Data were analyzed from 54 veterans assigned to pass and fail groups based on their performance across six recognized PVMs. Results revealed the pass/fail groups did not differ significantly in age, educational, or racial background. ROC analyses found the TOMMe10 demonstrated excellent discriminability (AUC = .803 ± .128), indicating that the TOMMe10 could have clinical utility within an mTBI veteran sample, particularly in conjunction with a second PVM. Specific population limitations are discussed. Additional research should elucidate this measure's performance in additional populations, including non-veteran mTBI, dementia, moderate-severe TBI, and inpatient populations.
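The AUC statistic reported in this abstract can be understood through its rank-based (Mann-Whitney) formulation: the probability that a randomly chosen fail-group case produces more TOMMe10 errors than a randomly chosen pass-group case. The error counts below are invented for illustration and are not the study's data:

```python
# Rank-based AUC: the probability that a randomly chosen case from the
# "fail" group scores higher (more errors) than a randomly chosen case
# from the "pass" group, with ties counted as 0.5.

def auc_from_scores(fail_scores, pass_scores):
    wins = 0.0
    for f in fail_scores:
        for p in pass_scores:
            if f > p:
                wins += 1.0
            elif f == p:
                wins += 0.5
    return wins / (len(fail_scores) * len(pass_scores))

# Synthetic TOMMe10-style error counts (hypothetical, not the study's data)
fail_group = [3, 4, 5, 6, 2]
pass_group = [0, 0, 1, 2, 1]
print(round(auc_from_scores(fail_group, pass_group), 3))  # → 0.98
```

An AUC of .80, as reported, means a fail-group case would out-score a pass-group case about 80% of the time.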
Collapse
Affiliation(s)
- Anthony Rinaldi
- Department of Psychology, Gaylord Specialty Healthcare, Wallingford, CT, USA
| | | | - David Scarisbrick
- WVU Department of Behavioral Medicine and Psychiatry, WVU Department of Neuroscience, West Virginia University School of Medicine, Morgantown, WV, USA
| | - Zoe Proctor-Weber
- Department of Psychology, C.W. Bill Young Bay Pines VAHCS, Bay Pines, FL, USA
| |
Collapse
|
29
|
Hurtubise J, Baher T, Messa I, Cutler L, Shahein A, Hastings M, Carignan-Querqui M, Erdodi LA. Verbal fluency and digit span variables as performance validity indicators in experimentally induced malingering and real world patients with TBI. APPLIED NEUROPSYCHOLOGY-CHILD 2020; 9:337-354. [DOI: 10.1080/21622965.2020.1719409] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Affiliation(s)
| | - Tabarak Baher
- Department of Psychology, University of Windsor, Windsor, Canada
| | - Isabelle Messa
- Department of Psychology, University of Windsor, Windsor, Canada
| | - Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Canada
| | - Ayman Shahein
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
| | | | | | - Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
| |
Collapse
|
30
|
Hsu NH, Dukarm P. Neuropsychological Assessment. Concussion 2020. [DOI: 10.1016/b978-0-323-65384-8.00002-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
|
31
|
Shura RD, Taber KH, Armistead-Jehle P, Denning JH, Rowland JA. Effects of Distraction on Performance Validity: A Pilot Study with Veterans. Arch Clin Neuropsychol 2019; 34:1432-1437. [PMID: 31329819 DOI: 10.1093/arclin/acz014] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2019] [Revised: 02/05/2019] [Indexed: 11/13/2022] Open
Abstract
OBJECTIVE The purpose of this experimental pilot study was to evaluate whether distraction can affect results of performance validity testing. METHOD Thirty-three veterans who had served in the US military since 09/11/2001 (M age = 38.60, SD = 10.85 years) completed the Test of Memory Malingering (TOMM), Trail Making Test, and Medical Symptom Validity Test (MSVT). Subjects were randomly assigned to complete the MSVT in one of three experimental conditions: standard administration, while performing serial 2s (Cognitive Distraction), or while submerging a hand in ice water (Physical Distraction). RESULTS All participants included in primary analyses passed the TOMM (n = 30). Physical distraction did not affect performance on the MSVT. Cognitive distraction negatively affected MSVT performance. CONCLUSIONS Cognitive distraction can substantially affect MSVT performance in a subgroup of individuals; physical distraction did not significantly affect MSVT performance.
Collapse
Affiliation(s)
- Robert D Shura
- VA Mid-Atlantic Mental Illness, Research, Education, and Clinical Center (MA-MIRECC), Salisbury, NC, USA.,Mental Health & Behavioral Sciences Service Line, Salisbury Veterans Affairs Health Care System, Salisbury, NC, USA.,Department of Neurology, Wake Forest School of Medicine, Winston-Salem, NC, USA
| | - Katherine H Taber
- VA Mid-Atlantic Mental Illness, Research, Education, and Clinical Center (MA-MIRECC), Salisbury, NC, USA.,Research & Academic Affairs Service Line, Salisbury Veterans Affairs Health Care System, Salisbury, NC, USA.,Division of Biomedical Sciences, Via College of Osteopathic Medicine, Blacksburg, VA, USA
| | | | - John H Denning
- Mental Health Service Line, Ralph H. Johnson Veterans Affairs Medical Center, Charleston, SC, USA.,Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
| | - Jared A Rowland
- VA Mid-Atlantic Mental Illness, Research, Education, and Clinical Center (MA-MIRECC), Salisbury, NC, USA.,Research & Academic Affairs Service Line, Salisbury Veterans Affairs Health Care System, Salisbury, NC, USA.,Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
| |
Collapse
|
32
|
Ventura LM, DeDios-Stern S, Oh A, Soble JR. They're not just little adults: The utility of adult performance validity measures in a mixed clinical pediatric sample. APPLIED NEUROPSYCHOLOGY-CHILD 2019; 10:297-307. [PMID: 31703167 DOI: 10.1080/21622965.2019.1685522] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Performance validity tests (PVTs) have become a standard part of adult neuropsychological practice; however, they are less widely used in pediatric testing. The current study aimed to obtain a better understanding of the application of PVTs within a mixed clinical pediatric sample with a wide range of diagnoses, IQs, and ages. Cross-sectional data were analyzed from 130 consecutive pediatric patients evaluated as part of clinical care and diagnosed with a variety of medical/neurological, developmental, and psychiatric disorders. Patients were administered a battery of neuropsychological tests; results of intellectual functioning measures (i.e., Wechsler Intelligence Scale for Children-Fifth Edition [WISC-V] or Wechsler Adult Intelligence Scale-Fourth Edition [WAIS-IV]) and PVTs (i.e., Test of Memory Malingering [TOMM] and Digit Span [DS] subtests of the WISC-V/WAIS-IV) were analyzed to assess PVT performance across the sample as well as age- and Full-Scale IQ-related (FSIQ) effects on pass rate. Results suggested that the TOMM is an effective validity test for youth, as the TOMM adult cutoff score was also valid for children (88% pass rate on TOMM Trial 1 cut-score ≥41, 71% pass rate on TOMM Trial 1 cut-score ≥45). In contrast, Reliable Digit Span (RDS) was less accurate (34% failed RDS [cut-score ≤6], 54% failed RDS-r [cut-score ≤10], and 25% failed DS ACSS [cut-score ≤5]) using standard adult cutoffs. Notably, although TOMM scores were not strongly influenced by IQ, DS scores increased as IQ increased. Overall, additional research establishing PVT accuracy within pediatric populations can support new standards of practice for validity assessment in youth.
Collapse
Affiliation(s)
- Lea M Ventura
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Pediatrics, University of Illinois College of Medicine, Chicago, IL, USA
| | - Samantha DeDios-Stern
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Alison Oh
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA.,Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
| |
Collapse
|
33
|
Vander Werff KR, Rieger B. Auditory and Cognitive Behavioral Performance Deficits and Symptom Reporting in Postconcussion Syndrome Following Mild Traumatic Brain Injury. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:2501-2518. [PMID: 31260387 PMCID: PMC6808357 DOI: 10.1044/2019_jslhr-h-18-0281] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2018] [Revised: 12/13/2018] [Accepted: 02/15/2019] [Indexed: 05/07/2023]
Abstract
Purpose This study examined auditory deficits and symptom reporting in individuals with long-term postconcussion symptoms following a single mild traumatic brain injury (mTBI) compared to age- and gender-matched controls without a history of mTBI. Method Case history interviews, symptom questionnaires, and a battery of central auditory and neuropsychological tests were administered to 2 groups. The mTBI group was a civilian population recruited from a local concussion management program who were seeking rehabilitation for postconcussion-related problems in a postacute period between 3 and 18 months following injury. Symptom validity testing was included to assess the rate of possible insufficient test effort and its influence on scores for all outcome measures. Analyses of group differences in test scores were performed both with and without the participants who showed insufficient test effort. Rates of symptom reporting, correlations among symptoms and behavioral test outcomes, and the relationships between auditory and cognitive test performance were analyzed. Results The mTBI group reported a high rate of auditory symptoms and general postconcussion symptoms. Performance on neuropsychological tests of cognitive function showed some differences in raw scores between groups, but when effort was considered, there were no significant differences in the rate of abnormal performance between groups. In contrast, there were significant differences in both raw scores and the rate of abnormal performance between groups for some auditory tests when only considering participants with sufficient effort. Auditory symptoms were strongly correlated with other general postconcussion symptoms. Conclusions Significant auditory symptoms and evidence of long-term central auditory dysfunction were found in a subset of individuals who had chronic postconcussion symptoms after a single mTBI unrelated to blast trauma. 
The rate of abnormal performance on auditory behavioral tests exceeded the rate of abnormal performance on tests of cognitive function. Supplemental Material https://doi.org/10.23641/asha.8329955.
Collapse
Affiliation(s)
| | - Brian Rieger
- Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY
| |
Collapse
|
34
|
Zasler ND, Bender SD. Validity Assessment in Traumatic Brain Injury Impairment and Disability Evaluations. Phys Med Rehabil Clin N Am 2019; 30:621-636. [PMID: 31227137 DOI: 10.1016/j.pmr.2019.03.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
This article provides an overview of validity assessment in persons with traumatic brain injury, including evaluation caveats. Specific discussion is provided on post-concussive disorders, malingering, examination techniques to assess validity, response bias, effort, and non-organic/functional presentations. Examinee and examiner bias issues are also explored. Discussion is also provided regarding judicial trends in limiting examiner scope of testing and/or testimony, and the risk of liability when providing expert witness opinions on the validity of examinee presentations. The hope is to encourage physiatrists to become more aware and skilled in validity assessment given its importance in the differential diagnosis of impairment following traumatic brain injury.
Collapse
Affiliation(s)
- Nathan D Zasler
- Concussion Care Centre of Virginia, Ltd, Tree of Life Services, Inc, 3721 Westerre Parkway, Suite B, Richmond, VA 23233, USA; Department of Physical Medicine and Rehabilitation, Virginia Commonwealth University, Richmond, VA, USA; Department of Physical Medicine and Rehabilitation, University of Virginia, Charlottesville, VA, USA; International Brain Injury Association, Alexandria, VA, USA.
| | - Scott D Bender
- Institute of Law, Psychiatry and Public Policy, Department of Psychiatry and Neurobehavioral Science, University of Virginia, 1230 Cedars Court, Suite 108, Charlottesville, VA 22903, USA
| |
Collapse
|
35
|
Geographic Variation and Instrumentation Artifacts: in Search of Confounds in Performance Validity Assessment in Adults with Mild TBI. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09354-w] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
|
36
|
Dodd JN, Murphy S, Bosworth C. Sensitivity of the Memory Validity Profile (MVP): Raising the bar. Child Neuropsychol 2019; 26:137-144. [DOI: 10.1080/09297049.2019.1620714] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Jonathan N. Dodd
- Psychological Services, WellStar Medical Group, Marietta, GA, USA
| | - Samantha Murphy
- Department of Psychology, Southern Illinois University, Edwardsville, IL, USA
| | - Christopher Bosworth
- Department of Psychology, St. Louis Children’s Hospital/Washington University, St. Louis, MO, USA
| |
Collapse
|
37
|
Elbaum T, Golan L, Lupu T, Wagner M, Braw Y. Establishing supplementary response time validity indicators in the Word Memory Test (WMT) and directions for future research. APPLIED NEUROPSYCHOLOGY-ADULT 2019; 27:403-413. [DOI: 10.1080/23279095.2018.1555161] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Tomer Elbaum
- Department of Psychology, Ariel University, Ariel, Israel
- Department of Industrial Engineering & Management, Ariel University, Ariel, Israel
| | - Lior Golan
- Department of Psychology, Ariel University, Ariel, Israel
| | - Tamar Lupu
- Department of Psychology, Ariel University, Ariel, Israel
| | - Michael Wagner
- Department of Industrial Engineering & Management, Ariel University, Ariel, Israel
| | - Yoram Braw
- Department of Psychology, Ariel University, Ariel, Israel
| |
Collapse
|
38
|
Paulo R, Albuquerque PB. Detecting memory performance validity with DETECTS: A computerized performance validity test. APPLIED NEUROPSYCHOLOGY. ADULT 2019; 26:48-57. [PMID: 28922010 DOI: 10.1080/23279095.2017.1359179] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Evaluating performance validity is essential in neuropsychological and forensic assessments. Nonetheless, most psychological assessment tests are unable to detect performance validity and other methods must be used for this purpose. A new Performance Validity Test (DETECTS - Memory Performance Validity Test) was developed with several characteristics that enhance test utility. Moreover, precise response time measurement was added to DETECTS. Two groups of participants (normative and simulator group) completed DETECTS and three memory tests from the Wechsler Memory Scale III. Simulators achieved considerably lower scores (hits) and higher response times in DETECTS compared with the normative group. All participants in the normative group were classified correctly and no simulator was classified as having legitimate memory deficits. Thus, DETECTS seems to be a valuable computerized Performance Validity Test with reduced application time and effective cut-off scores as well as high sensitivity, specificity, and positive and negative predictive power values. Lastly, response time may be a very useful measure for detecting memory malingering.
Collapse
Affiliation(s)
- Rui Paulo
- College of Liberal Arts, Bath Spa University, Bath, United Kingdom
- School of Psychology, University of Minho, Braga, Portugal
| | | |
Collapse
|
39
|
Critchfield E, Soble JR, Marceaux JC, Bain KM, Chase Bailey K, Webber TA, Alex Alverson W, Messerly J, Andrés González D, O’Rourke JJF. Cognitive impairment does not cause invalid performance: analyzing performance patterns among cognitively unimpaired, impaired, and noncredible participants across six performance validity tests. Clin Neuropsychol 2018; 33:1083-1101. [DOI: 10.1080/13854046.2018.1508615] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Edan Critchfield
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
| | - Jason R. Soble
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
| | - Janice C. Marceaux
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
| | - Kathleen M. Bain
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
| | - K. Chase Bailey
- Division of Psychology, UT Southwestern Medical Center, Dallas, TX, USA
| | - Troy A. Webber
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
| | - W. Alex Alverson
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
| | - Johanna Messerly
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
| | - David Andrés González
- Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
| | | |
Collapse
|
40
|
Zoccoli C, Li CF, Black D, Smith DB, Sheehan J, Harbaugh RE, Glantz M. The Honest Palm Sign: Detecting Incomplete Effort on Physical Examination. World Neurosurg 2018; 122:e1354-e1358. [PMID: 30448572 DOI: 10.1016/j.wneu.2018.11.047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2018] [Revised: 11/05/2018] [Accepted: 11/07/2018] [Indexed: 11/18/2022]
Abstract
BACKGROUND We investigated a simple, novel diagnostic test for detecting incomplete effort during the motor portion of the neurological examination. METHODS The results from the honest palm sign (HPS) were evaluated for 162 consecutive neuro-oncology patients who had undergone upper extremity strength testing. Deltoid, bicep, and wrist extensor strength was assessed in all patients. During the examination, patients were repeatedly encouraged to "try as hard as possible" and to "resist with all your strength." The absence of nail prints on the palms constituted a positive HPS test result (i.e., indicative of incomplete effort). The presence of nail prints constituted a negative HPS test result (i.e., indicative of full effort). RESULTS A total of 162 patients were tested. Their mean age was 55.5 ± 14.9 years, the median Karnofsky performance scale score was 80 (range, 60-100), and 63 patients (39%) were men. Of the 162 patients, 102 (63%) had malignant gliomas, 28 (17%) had brain metastases, 21 (13%) had other primary brain tumors, and 11 (6.8%) had primary central nervous system lymphomas. Of the 162 patients, 48 (30%) had positive HPS test results. The test sensitivity (84.6%), specificity (75.2%), positive likelihood ratio (3.41), and negative likelihood ratio (0.205) were good. After excluding 33 patients with characteristics that rendered them unsuitable for testing, the results from the remaining 129 patients were analyzed. The sensitivity was unchanged (84.6%), but the specificity (96.6%), positive likelihood ratio (24.5), and negative likelihood ratio (0.16) improved dramatically. CONCLUSIONS The HPS test is a simple, sensitive, and very specific test for detecting incomplete effort during the motor portion of neurological evaluations.
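The likelihood ratios reported in this abstract follow directly from the sensitivity and specificity via the standard formulas, LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity, which can be checked in a few lines:

```python
# Worked arithmetic for the likelihood ratios reported above.
# Sensitivity and specificity are taken from the abstract; the LR
# formulas are the standard diagnostic-test definitions:
#   LR+ = sensitivity / (1 - specificity)
#   LR- = (1 - sensitivity) / specificity
sens, spec = 0.846, 0.752
lr_pos = sens / (1 - spec)
lr_neg = (1 - sens) / spec
print(round(lr_pos, 2), round(lr_neg, 3))  # → 3.41 0.205
```

The same arithmetic on the restricted sample (specificity .966) reproduces the dramatically improved LR+ of roughly 24.5 that the abstract reports.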
Collapse
Affiliation(s)
| | - Christina F Li
- Kaiser Permanente Oakland Medical Center, Oakland, California, USA
| | - David Black
- Department of Neurosurgery, Penn State Milton S. Hershey Medical Center, Hershey, Pennsylvania, USA
| | - Don B Smith
- Department of Neurology, University of Colorado School of Medicine, Denver, Colorado, USA
| | - Jonas Sheehan
- Geisinger Health Systems, Camp Hill, Pennsylvania, USA
| | - Robert E Harbaugh
- Department of Neurosurgery, Penn State Milton S. Hershey Medical Center, Hershey, Pennsylvania, USA
| | - Michael Glantz
- Department of Neurosurgery, Penn State Milton S. Hershey Medical Center, Hershey, Pennsylvania, USA.
| |
Collapse
|
41
|
The Grooved Pegboard Test as a Validity Indicator—a Study on Psychogenic Interference as a Confound in Performance Validity Research. PSYCHOLOGICAL INJURY & LAW 2018. [DOI: 10.1007/s12207-018-9337-7] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
42
|
Butler O, Herr K, Willmund G, Gallinat J, Zimmermann P, Kühn S. Neural correlates of response bias: Larger hippocampal volume correlates with symptom aggravation in combat-related posttraumatic stress disorder. Psychiatry Res Neuroimaging 2018; 279:1-7. [PMID: 30014966 DOI: 10.1016/j.pscychresns.2018.06.010] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/30/2017] [Revised: 06/25/2018] [Accepted: 06/26/2018] [Indexed: 01/04/2023]
Abstract
The diagnosis of posttraumatic stress disorder (PTSD) is vulnerable to the simulation or exaggeration of symptoms because it depends on the individual's self-report of symptoms. The use of symptom validity tests (SVTs) is recommended to detect malingering in PTSD. However, in neuroimaging research, PTSD diagnosis is often taken at face value. To date, no neuroimaging study has compared credible PTSD patients with those identified as malingering, and the potential impact of including malingerers along with credible patients is unclear. We classified male patients with combat-related PTSD as either credible (n = 37) or malingerers (n = 9) based on the Morel Emotional Numbing Test and compared structural neuroimaging and psychological questionnaire data. Patients identified as malingerers had larger gray matter volumes in the hippocampus, right inferior frontal gyrus, and thalamus, and reported more severe PTSD symptoms than credible PTSD patients. This is the first structural neuroimaging study to compare credible PTSD patients and malingerers. We find evidence of structural differences between these groups in regions implicated in PTSD, inhibition, and deception. These results emphasize the need for the inclusion of SVTs in neuroimaging studies of PTSD to ensure future findings are not confounded by an unknown mix of valid PTSD patients and malingerers.
Collapse
Affiliation(s)
- Oisin Butler
- Max Planck Institute for Human Development, Center for Lifespan Psychology, Lentzeallee 94, Berlin 14195, Germany.
| | - Kerstin Herr
- Center for Military Mental Health, Military Hospital Berlin, Scharnhorststr. 13, Berlin 10115, Germany
| | - Gerd Willmund
- Center for Military Mental Health, Military Hospital Berlin, Scharnhorststr. 13, Berlin 10115, Germany
| | - Jürgen Gallinat
- University Medical Centre Hamburg-Eppendorf, Department of Psychiatry and Psychotherapy, Martinistrasse 52, Hamburg 20246, Germany
| | - Peter Zimmermann
- Center for Military Mental Health, Military Hospital Berlin, Scharnhorststr. 13, Berlin 10115, Germany
| | - Simone Kühn
- Max Planck Institute for Human Development, Center for Lifespan Psychology, Lentzeallee 94, Berlin 14195, Germany; University Medical Centre Hamburg-Eppendorf, Department of Psychiatry and Psychotherapy, Martinistrasse 52, Hamburg 20246, Germany
| |
Collapse
|
43
|
An KY, Charles J, Ali S, Enache A, Dhuga J, Erdodi LA. Reexamining performance validity cutoffs within the Complex Ideational Material and the Boston Naming Test–Short Form using an experimental malingering paradigm. J Clin Exp Neuropsychol 2018; 41:15-25. [DOI: 10.1080/13803395.2018.1483488] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- Kelly Y. An
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Jordan Charles
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Sami Ali
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Anca Enache
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Jasmine Dhuga
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| |
Collapse
|
44
|
Ngwenya LB, Gardner RC, Yue JK, Burke JF, Ferguson AR, Huang MC, Winkler EA, Pirracchio R, Satris GG, Yuh EL, Mukherjee P, Valadka AB, Okonkwo DO, Manley GT. Concordance of common data elements for assessment of subjective cognitive complaints after mild-traumatic brain injury: a TRACK-TBI Pilot Study. Brain Inj 2018; 32:1071-1078. [DOI: 10.1080/02699052.2018.1481527] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Affiliation(s)
- Laura B. Ngwenya
- Department of Neurosurgery, University of Cincinnati, Cincinnati, OH, USA
- Department of Neurology and Rehabilitation Medicine, University of Cincinnati, Cincinnati, OH, USA
| | - Raquel C. Gardner
- Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Department of Neurology, San Francisco Veterans Administration Medical Center, San Francisco, CA, USA
| | - John K. Yue
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Brain and Spinal Injury Center, San Francisco General Hospital, San Francisco, CA, USA
| | - John F. Burke
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Brain and Spinal Injury Center, San Francisco General Hospital, San Francisco, CA, USA
| | - Adam R. Ferguson
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Brain and Spinal Injury Center, San Francisco General Hospital, San Francisco, CA, USA
| | - Michael C. Huang
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Brain and Spinal Injury Center, San Francisco General Hospital, San Francisco, CA, USA
| | - Ethan A. Winkler
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Brain and Spinal Injury Center, San Francisco General Hospital, San Francisco, CA, USA
| | - Romain Pirracchio
- Department of Anesthesia and Perioperative Care, University of California, San Francisco, San Francisco, CA, USA
| | - Gabriela G. Satris
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Brain and Spinal Injury Center, San Francisco General Hospital, San Francisco, CA, USA
| | - Esther L. Yuh
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Department of Radiology, University of California, San Francisco, San Francisco, CA, USA
| | - Pratik Mukherjee
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Department of Radiology, University of California, San Francisco, San Francisco, CA, USA
| | - Alex B. Valadka
- Department of Neurological Surgery, Virginia Commonwealth University, Richmond, VA, USA
| | - David O. Okonkwo
- Department of Neurological Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | - Geoffrey T. Manley
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Brain and Spinal Injury Center, San Francisco General Hospital, San Francisco, CA, USA
| |
Collapse
|
45
|
Terry DP, Iverson GL, Panenka W, Colantonio A, Silverberg ND. Workplace and non-workplace mild traumatic brain injuries in an outpatient clinic sample: A case-control study. PLoS One 2018; 13:e0198128. [PMID: 29856799 PMCID: PMC5983513 DOI: 10.1371/journal.pone.0198128] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2017] [Accepted: 05/14/2018] [Indexed: 11/29/2022] Open
Abstract
Individuals who are injured in the workplace typically have a greater risk of delayed return to work (RTW) and other poor health outcomes compared to those not injured at work. It is not known whether these differences hold true for mild traumatic brain injuries (MTBI). The present study examined differences associated with workplace and non-workplace MTBI upon intake to a specialty MTBI clinic, their outcomes, and risk factors that influence RTW. Slow-to-recover participants were recruited from consecutive referrals to four outpatient MTBI clinics from March 2015 to February 2017. Two clinics treat Workers' Compensation claimants and two clinics serve patients with non-work-related injuries in the publicly funded health care system. Of 273 eligible patients, 102 completed an initial study assessment (M age = 41.2 years, SD = 11.7; 54% women) at an average of 2–3 months post injury. Participants were interviewed about their MTBI and completed a battery of standardized questionnaires and performance validity testing. Outcomes, including RTW, were assessed via telephone follow-up 4–5 months later. Workplace injuries comprised 45.1% of the sample. The workplace MTBI group had a greater proportion of men and lower education levels compared to the non-workplace MTBI group. The two groups had a comparable post-concussion symptom burden and performance validity test failure rate. Workplace MTBI was associated with greater post-traumatic stress symptoms. Fifteen patients (14.7%) were lost to follow-up. There were no workplace/non-workplace MTBI differences in RTW outcome at 6–7 months post injury. Of the entire sample, 42.5% of patients had full RTW, 18.4% had partial RTW, and 39.1% had no RTW. Greater post-concussion symptom burden was most predictive of no RTW at follow-up. There was no evidence that the workplace and non-workplace MTBI groups had different risk factors associated with prolonged work absence.
Despite systemic differences in compensation and health care access, the workplace and non-workplace MTBI groups were similar at clinic intake and indistinguishable at follow-up, 6–7 months post injury.
Collapse
Affiliation(s)
- Douglas P Terry
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, Massachusetts, United States of America; Spaulding Rehabilitation Hospital, Boston, Massachusetts, United States of America; MassGeneral Hospital for Children™ Sports Concussion Program, Boston, Massachusetts, United States of America; Home Base, A Red Sox Foundation and Massachusetts General Hospital Program, Boston, Massachusetts, United States of America
| | - Grant L Iverson
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, Massachusetts, United States of America; Spaulding Rehabilitation Hospital, Boston, Massachusetts, United States of America; MassGeneral Hospital for Children™ Sports Concussion Program, Boston, Massachusetts, United States of America; Home Base, A Red Sox Foundation and Massachusetts General Hospital Program, Boston, Massachusetts, United States of America
| | - William Panenka
- British Columbia Neuropsychiatry Program, Vancouver, British Columbia, Canada; Department of Psychiatry, University of British Columbia, Vancouver, British Columbia, Canada
| | - Angela Colantonio
- Department of Occupational Science and Occupational Therapy, University of Toronto, Toronto, Ontario, Canada; Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada
| | - Noah D Silverberg
- Division of Physical Medicine & Rehabilitation, University of British Columbia, Vancouver, British Columbia, Canada; Rehabilitation Research Program, Vancouver Coastal Health Research Institute, Vancouver, British Columbia, Canada
| |
Collapse
|
46
|
Terry DP, Brassil M, Iverson GL, Panenka WJ, Silverberg ND. Effect of depression on cognition after mild traumatic brain injury in adults. Clin Neuropsychol 2018; 33:124-136. [DOI: 10.1080/13854046.2018.1459853] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Affiliation(s)
- Douglas P. Terry
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, USA
- Spaulding Rehabilitation Hospital, Boston, MA, USA
- Home Base, A Red Sox Foundation and Massachusetts General Hospital Program, Boston, MA, USA
- MassGeneral Hospital for Children Sport Concussion Program, Boston, MA, USA
| | - Michelle Brassil
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, USA
- Spaulding Rehabilitation Hospital, Boston, MA, USA
| | - Grant L. Iverson
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, USA
- Spaulding Rehabilitation Hospital, Boston, MA, USA
- Home Base, A Red Sox Foundation and Massachusetts General Hospital Program, Boston, MA, USA
- MassGeneral Hospital for Children Sport Concussion Program, Boston, MA, USA
| | - William J. Panenka
- British Columbia Neuropsychiatry Program, University of British Columbia, Vancouver, Canada
- Department of Psychiatry, University of British Columbia, Vancouver, Canada
| | - Noah D. Silverberg
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, USA
- Division of Physical Medicine & Rehabilitation, University of British Columbia, Vancouver, Canada
- Rehabilitation Research Program, Vancouver Coastal Health Research Institute, GF Strong Rehab Centre, Vancouver, Canada
| |
Collapse
|
47
|
White Matter Associations With Performance Validity Testing in Veterans With Mild Traumatic Brain Injury: The Utility of Biomarkers in Complicated Assessment. J Head Trauma Rehabil 2018; 31:346-59. [PMID: 26360002 DOI: 10.1097/htr.0000000000000183] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE Failure on performance validity tests (PVTs) is common in Veterans with histories of mild traumatic brain injury (mTBI), leading to questionable validity of clinical presentations. PARTICIPANTS Using diffusion tensor imaging, we investigated white matter (WM) integrity and cognition in 79 Veterans with history of mTBI who passed PVTs (n = 43; traumatic brain injury [TBI]-passed), history of mTBI who failed at least 1 PVT (n = 13; TBI-failed), and military controls (n = 23; MCs) with no history of TBI. RESULTS The TBI-failed group demonstrated significantly lower cognitive scores relative to MCs and the TBI-passed group; however, no such differences were observed between MCs and the TBI-passed group. On a global measure of WM integrity (ie, WM burden), the TBI-failed group showed more overall WM abnormalities than the other groups. However, no differences were observed between the MCs and TBI-passed group on WM burden. Interestingly, regional WM analyses revealed abnormalities in the anterior internal capsule and cingulum of both TBI subgroups relative to MCs. Moreover, compared with the TBI-passed group, the TBI-failed group demonstrated significantly decreased WM integrity in the corpus callosum. CONCLUSIONS Findings revealed that, within our sample, WM abnormalities are evident in those who fail PVTs. This study adds to the burgeoning PVT literature by suggesting that poor PVT performance does not negate the possibility of underlying WM abnormalities in military personnel with history of mTBI.
Collapse
|
48
|
Erdodi LA. Aggregating validity indicators: The salience of domain specificity and the indeterminate range in multivariate models of performance validity assessment. APPLIED NEUROPSYCHOLOGY. ADULT 2017; 26:155-172. [PMID: 29111772 DOI: 10.1080/23279095.2017.1384925] [Citation(s) in RCA: 42] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
This study was designed to examine the "domain specificity" hypothesis in performance validity tests (PVTs) and the epistemological status of an "indeterminate range" when evaluating the credibility of a neuropsychological profile using a multivariate model of performance validity assessment. While previous research suggests that aggregating PVTs produces superior classification accuracy compared to individual instruments, the effect of the congruence between the criterion and predictor variable on signal detection and the issue of classifying borderline cases remain understudied. Data from a mixed clinical sample of 234 adults referred for cognitive evaluation (M age = 46.6; M education = 13.5) were collected. Two validity composites were created: one based on five verbal PVTs (EI-5VER) and one based on five nonverbal PVTs (EI-5NV); both were compared against several other PVTs. Overall, language-based tests of cognitive ability were more sensitive to elevations on the EI-5VER compared to visual-perceptual tests, whereas the opposite was observed with the EI-5NV. However, the match between predictor and criterion variable had a more complex relationship with classification accuracy, suggesting the confluence of multiple factors (sensory modality, cognitive domain, testing paradigm). An "indeterminate range" of performance validity emerged that was distinctly different from both the Pass and the Fail group. Trichotomized criterion PVTs (Pass-Borderline-Fail) had a negative linear relationship with performance on tests of cognitive ability, providing further support for an "in-between" category separating the unequivocal Pass and unequivocal Fail classification range. The choice of criterion variable can influence classification accuracy in PVT research. Establishing a Borderline range between Pass and Fail more accurately reflected the distribution of scores on multiple PVTs.
The traditional binary classification system imposes an artificial dichotomy on PVTs that was not fully supported by the data. Accepting "indeterminate" as a legitimate third outcome of performance validity assessment has the potential to improve the clinical utility of PVTs and defuse debates regarding "near-Passes" and "soft Fails."
Collapse
Affiliation(s)
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| |
Collapse
|
49
|
Erdodi LA, Rai JK. A single error is one too many: Examining alternative cutoffs on Trial 2 of the TOMM. Brain Inj 2017; 31:1362-1368. [DOI: 10.1080/02699052.2017.1332386] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Jaspreet K. Rai
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| |
Collapse
|
50
|
Harrison AG. Clinical, Ethical, and Forensic Implications of a Flexible Threshold for LD and ADHD in Postsecondary Settings. PSYCHOLOGICAL INJURY & LAW 2017. [DOI: 10.1007/s12207-017-9291-9] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|