1
Kiselica AM, Karr JE, Mikula CM, Ranum RM, Benge JF, Medina LD, Woods SP. Recent Advances in Neuropsychological Test Interpretation for Clinical Practice. Neuropsychol Rev 2024;34:637-667. [PMID: 37594687] [DOI: 10.1007/s11065-023-09596-1]
Abstract
Much attention in the field of clinical neuropsychology has focused on adapting to the modern healthcare environment by advancing telehealth and promoting technological innovation in assessment. Perhaps as important (but less discussed) are advances in the development and interpretation of normative neuropsychological test data. These techniques can yield improvement in diagnostic decision-making and treatment planning with little additional cost. Brooks and colleagues (Can Psychol 50: 196-209, 2009) eloquently summarized best practices in normative data creation and interpretation, providing a practical overview of norm development, measurement error, the base rates of low scores, and methods for assessing change. Since the publication of this seminal work, there have been several important advances in research on development and interpretation of normative neuropsychological test data, which may be less familiar to the practicing clinician. Specifically, we provide a review of the literature on regression-based normed scores, item response theory, multivariate base rates, summary/factor scores, cognitive intraindividual variability, and measuring change over time. For each topic, we include (1) an overview of the method, (2) a rapid review of the recent literature, (3) a relevant case example, and (4) a discussion of limitations and controversies. Our goal was to provide a primer for use of normative neuropsychological test data in neuropsychological practice.
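One of the techniques this review covers, regression-based norming, can be sketched in a few lines: a demographic regression predicts the expected raw score, and the residual, scaled by the standard error of the estimate (SEE), becomes a demographically adjusted z-score. The coefficients and SEE below are invented for illustration; in practice they come from published regression equations for a specific test.

```python
# Hypothetical regression-based norming. A demographic regression predicts the
# expected raw score; the residual divided by the SEE gives an adjusted z-score.
# All coefficients here are placeholders, not values from any published norm.

def regression_based_z(raw_score, age, education,
                       b0=55.0, b_age=-0.25, b_edu=1.2, see=6.0):
    """Return a z-score relative to the demographically predicted score."""
    predicted = b0 + b_age * age + b_edu * education
    return (raw_score - predicted) / see

# Example: a 70-year-old with 12 years of education scoring 48 raw points.
z = regression_based_z(48, age=70, education=12)
```

The resulting z-score can then be interpreted with the same base-rate logic applied to conventional norm-referenced scores.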
Affiliation(s)
- Andrew M Kiselica
- Department of Health Psychology, University of Missouri, 115 Business Loop 70 W, Columbia, MO 65203, USA
- Justin E Karr
- Department of Psychology, University of Kentucky, Lexington, KY, USA
- Cynthia M Mikula
- Institute of Human Nutrition, Columbia University, New York, NY, USA
- Rylea M Ranum
- Department of Health Psychology, University of Missouri, 115 Business Loop 70 W, Columbia, MO 65203, USA
- Jared F Benge
- Department of Neurology, University of Texas at Austin, Austin, TX, USA
- Luis D Medina
- Department of Psychology, University of Houston, Houston, TX, USA
2
Kim S, Currao A, Brown E, Milberg WP, Fortier CB. Importance of validity testing in psychiatric assessment: evidence from a sample of multimorbid post-9/11 veterans. J Int Neuropsychol Soc 2024;30:410-419. [PMID: 38014547] [DOI: 10.1017/s1355617723000711]
Abstract
OBJECTIVE Performance validity tests (PVTs) and symptom validity tests (SVTs) are necessary components of neuropsychological testing to identify suboptimal performance and response bias that may impact diagnosis and treatment. The current study examined the clinical and functional characteristics of veterans who failed PVTs and the relationship between PVT and SVT failures. METHOD Five hundred and sixteen post-9/11 veterans participated in clinical interviews, neuropsychological testing, and several validity measures. RESULTS Veterans who failed 2+ PVTs performed significantly worse than veterans who failed one PVT in verbal memory (Cohen's d = .60-.69), processing speed (Cohen's d = .68), working memory (Cohen's d = .98), and visual memory (Cohen's d = .88-1.10). Individuals with 2+ PVT failures had greater posttraumatic stress (PTS; β = 0.16; p = .0002) and worse self-reported depression (β = 0.17; p = .0001), anxiety (β = 0.15; p = .0007), sleep (β = 0.10; p = .0233), and functional outcomes (β = 0.15; p = .0009) compared to veterans who passed PVTs. 7.8% of veterans failed the SVT (Validity-10; ≥19 cutoff), and multiple PVT failures were significantly associated with Validity-10 failure at both the ≥19 and ≥23 cutoffs (p's < .0012). The Validity-10 showed moderate correspondence in predicting 2+ PVT failures (AUC = 0.83; 95% CI = 0.76, 0.91). CONCLUSION PVT failures are associated with psychiatric factors, but not traumatic brain injury (TBI). PVT failures predict SVT failure and vice versa. Standard care should include SVTs and PVTs in all clinical assessments, not just neuropsychological assessments, particularly in clinically complex populations.
Affiliation(s)
- Sahra Kim
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Alyssa Currao
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Emma Brown
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- William P Milberg
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- Catherine B Fortier
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA
3
van Vliet FIM, van Schothorst HP, Donker-Cools BHPM, Schaafsma FG, Ponds RWHM, Geurtsen GJ. Validity of the Groningen Effort Test in patients with suspected chronic solvent-induced encephalopathy. Arch Clin Neuropsychol 2024:acae025. [PMID: 38572600] [DOI: 10.1093/arclin/acae025]
Abstract
INTRODUCTION The use of performance validity tests (PVTs) in a neuropsychological assessment to identify indications of invalid performance has been common practice for over a decade. Most PVTs are memory-based; therefore, the Groningen Effort Test (GET), a non-memory-based PVT, was developed. OBJECTIVES This study aimed to validate the GET in patients with suspected chronic solvent-induced encephalopathy (CSE) against a criterion standard of two failed PVTs. A second goal was to determine the diagnostic accuracy of the GET. METHOD Sixty patients with suspected CSE referred for neuropsychological assessment were included. The GET was compared to the criterion standard of two PVTs, based on the Test of Memory Malingering and the Amsterdam Short Term Memory Test. RESULTS The frequency of invalid performance using the GET was significantly higher than under the two-PVT criterion (51.7% vs. 20.0%, respectively; p < 0.001). For the GET index, sensitivity was 75% and specificity was 54%, with a Youden's index of 27. CONCLUSION The GET indicated invalid performance significantly more often than the two-PVT criterion, suggesting a high number of false positives. The generally accepted minimum specificity for PVTs of >90% was not met. Therefore, the GET is of limited use in clinical practice with patients with suspected CSE.
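For reference, the Youden's index reported above combines a cutoff's sensitivity and specificity into a single value, J = sensitivity + specificity - 1, often reported multiplied by 100 as here. A minimal sketch; note that the rounded figures in the abstract give approximately .29, so the published value of 27 presumably reflects unrounded inputs.

```python
# Youden's J statistic for a diagnostic cutoff: 0 means the cutoff performs at
# chance, 1 means perfect discrimination.

def youden_j(sensitivity, specificity):
    return sensitivity + specificity - 1.0

# Rounded values reported for the GET index.
j = youden_j(0.75, 0.54)
```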
Affiliation(s)
- Fabienne I M van Vliet
- Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Department of Medical Psychology, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Henrita P van Schothorst
- Department of Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Birgit H P M Donker-Cools
- Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Research Centre for Insurance Medicine, Amsterdam, The Netherlands
- Frederieke G Schaafsma
- Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Research Centre for Insurance Medicine, Amsterdam, The Netherlands
- Rudolf W H M Ponds
- Department of Medical Psychology, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Gert J Geurtsen
- Department of Medical Psychology, Amsterdam University Medical Centres, Amsterdam, The Netherlands
4
Roor JJ, Peters MJV, Dandachi-FitzGerald B, Ponds RWHM. Performance Validity Test Failure in the Clinical Population: A Systematic Review and Meta-Analysis of Prevalence Rates. Neuropsychol Rev 2024;34:299-319. [PMID: 36872398] [PMCID: PMC10920461] [DOI: 10.1007/s11065-023-09582-7]
Abstract
Performance validity tests (PVTs) are used to measure the validity of the obtained neuropsychological test data. However, when an individual fails a PVT, the likelihood that failure truly reflects invalid performance (i.e., the positive predictive value) depends on the base rate in the context in which the assessment takes place. Therefore, accurate base rate information is needed to guide interpretation of PVT performance. This systematic review and meta-analysis examined the base rate of PVT failure in the clinical population (PROSPERO number: CRD42020164128). PubMed/MEDLINE, Web of Science, and PsycINFO were searched to identify articles published up to November 5, 2021. Main eligibility criteria were a clinical evaluation context and utilization of stand-alone and well-validated PVTs. Of the 457 articles scrutinized for eligibility, 47 were selected for systematic review and meta-analyses. The pooled base rate of PVT failure across all included studies was 16%, 95% CI [14, 19]. High heterogeneity existed among these studies (Cochran's Q = 697.97, p < .001; I2 = 91%; τ2 = 0.08). Subgroup analysis indicated that pooled PVT failure rates varied across clinical context, presence of external incentives, clinical diagnosis, and utilized PVT. Our findings can be used for calculating clinically applied statistics (i.e., positive and negative predictive values, and likelihood ratios) to increase the diagnostic accuracy of performance validity determination in clinical evaluation. Future research with more detailed recruitment procedures and sample descriptions is necessary to further improve the accuracy of the base rate of PVT failure in clinical practice.
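The clinically applied statistics the authors mention follow directly from Bayes' rule. The sketch below is illustrative only: the sensitivity and specificity are placeholder values, not figures for any specific PVT, combined with the pooled 16% base rate reported in the meta-analysis.

```python
# Predictive values and likelihood ratios from sensitivity, specificity, and
# the base rate of invalid performance in the assessment context.

def clinical_stats(sensitivity, specificity, base_rate):
    ppv = (sensitivity * base_rate) / (
        sensitivity * base_rate + (1 - specificity) * (1 - base_rate))
    npv = (specificity * (1 - base_rate)) / (
        specificity * (1 - base_rate) + (1 - sensitivity) * base_rate)
    lr_pos = sensitivity / (1 - specificity)   # likelihood ratio of a failure
    lr_neg = (1 - sensitivity) / specificity   # likelihood ratio of a pass
    return ppv, npv, lr_pos, lr_neg

# Placeholder sensitivity/specificity with the pooled 16% base rate.
ppv, npv, lr_pos, lr_neg = clinical_stats(0.70, 0.90, 0.16)
```

At a 16% base rate even a fairly specific test yields a PPV well below its specificity, which is the interpretive point the meta-analysis makes about context-appropriate base rates.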
Affiliation(s)
- Jeroen J Roor
- Department of Medical Psychology, VieCuri Medical Center, Venlo, The Netherlands
- School for Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Maarten J V Peters
- Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Brechje Dandachi-FitzGerald
- Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Faculty of Psychology, Open University, Heerlen, The Netherlands
- Rudolf W H M Ponds
- School for Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Department of Medical Psychology, Amsterdam University Medical Centres, location VU, Amsterdam, The Netherlands
5
Karr JE, Pinheiro CN, Harp JP. Performance Validity Testing on the NIH Toolbox Cognition Battery: Base Rates of Failed Embedded Validity Indicators in the Adult Normative Sample. Arch Clin Neuropsychol 2024;39:204-213. [PMID: 37718664] [PMCID: PMC10879920] [DOI: 10.1093/arclin/acad071]
Abstract
OBJECTIVE The goal of this study was to determine the base rates of failing proposed embedded validity indicators (EVIs) for the National Institutes of Health Toolbox Cognition Battery (NIHTB-CB) in the normative sample. METHOD Participants included adults in the NIHTB-CB normative sample with data to calculate age-adjusted standard scores (n = 855; ages: M(SD) = 46.9(17.3), range: 18-85; 65.0% women; education: M(SD) = 14.1(2.5) years) or demographically adjusted T-scores (n = 803; ages: M(SD) = 47.3(17.3), range: 18-85; 65.3% women; education: M(SD) = 14.2(2.5) years) for all tests. The NIHTB-CB includes two tests of crystallized cognition and five tests of fluid cognition. Individual norm-referenced test performances were categorized as falling above or below liberal and conservative cutoffs based on proposed univariate EVIs. The number of univariate EVI failures was summed to compute multivariable EVIs. EVI failure rates above 10% were considered high false-positive rates, indicating specificity < .90. Using chi-square analyses, the frequencies of EVI failures were compared based on gender, race/ethnicity, education, and crystallized composite. RESULTS The multivariable EVIs had predominantly low false-positive rates in the normative sample. EVI failures were most common among participants with low crystallized composite scores. Using age-adjusted standard scores, EVI failure rates varied by education, race/ethnicity, and estimated premorbid intelligence. These differences were mostly eliminated when using demographically adjusted T-scores. CONCLUSIONS Multivariable EVIs requiring ≥ 4 failures using liberal cutoffs or ≥ 3 failures using conservative cutoffs had acceptable false-positive rates (i.e., < 10%) using both age-adjusted standard scores and demographically adjusted T-scores. These multivariable EVIs could be applied to large data sets with NIHTB-CB data to screen for potentially invalid test performances.
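The multivariable EVI logic described above reduces to counting univariate failures and flagging only when the count reaches the threshold shown to keep false positives under 10% (≥ 3 failures at conservative cutoffs in this study). The test names, scores, and cutoff values below are placeholders, not the NIHTB-CB's actual indicators.

```python
# Sketch of a multivariable embedded validity indicator: count univariate EVI
# failures across tests, then flag when the count meets the threshold.
# Test names and cutoffs are hypothetical, for illustration only.

def count_evi_failures(scores, cutoffs):
    """scores/cutoffs: dicts keyed by test; failure = score at or below cutoff."""
    return sum(scores[test] <= cutoffs[test] for test in cutoffs)

def flag_invalid(scores, cutoffs, threshold=3):
    """Conservative-cutoff rule from the study: flag at >= 3 failures."""
    return count_evi_failures(scores, cutoffs) >= threshold

cutoffs = {"fluid_1": 35, "fluid_2": 35, "fluid_3": 35, "fluid_4": 35}
examinee = {"fluid_1": 33, "fluid_2": 30, "fluid_3": 34, "fluid_4": 41}
flagged = flag_invalid(examinee, cutoffs)  # three failures, so flagged
```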
Affiliation(s)
- Justin E Karr
- Department of Psychology, College of Arts and Sciences, University of Kentucky, Lexington, KY, USA
- Cristina N Pinheiro
- Department of Psychology, College of Arts and Sciences, University of Kentucky, Lexington, KY, USA
- Jordan P Harp
- Department of Neurology, College of Medicine, University of Kentucky, Lexington, KY, USA
6
Basso MR, Whiteside DM, Combs D. Introduction to the special issue on performance validity: what are we doing? What should we do? J Clin Exp Neuropsychol 2024;46:1-5. [PMID: 38678395] [DOI: 10.1080/13803395.2024.2347119]
7
Tyson BT, Shahein A, Abeare CA, Baker SD, Kent K, Roth RM, Erdodi LA. Replicating a Meta-Analysis: The Search for the Optimal Word Choice Test Cutoff Continues. Assessment 2023;30:2476-2490. [PMID: 36752050] [DOI: 10.1177/10731911221147043]
Abstract
This study was designed to expand on a recent meta-analysis that identified ≤42 as the optimal cutoff on the Word Choice Test (WCT). We examined the base rate of failure and the classification accuracy of various WCT cutoffs in four independent clinical samples (N = 252) against various psychometrically defined criterion groups. WCT ≤ 47 achieved acceptable combinations of specificity (.86-.89) at .49 to .54 sensitivity. Lowering the cutoff to ≤45 improved specificity (.91-.98) at a reasonable cost to sensitivity (.39-.50). Making the cutoff even more conservative (≤42) disproportionately sacrificed sensitivity (.30-.38) for specificity (.98-1.00), while still classifying 26.7% of patients with genuine and severe deficits as non-credible. Critical item (.23-.45 sensitivity at .89-1.00 specificity) and time-to-completion cutoffs (.48-.71 sensitivity at .87-.96 specificity) were effective alternative/complementary detection methods. Although WCT ≤ 45 produced the best overall classification accuracy, scores in the 43 to 47 range provide comparable objective psychometric evidence of non-credible responding. Results question the need for designating a single cutoff as "optimal," given the heterogeneity of signal detection environments in which individual assessors operate. As meta-analyses often fail to replicate, ongoing research is needed on the classification accuracy of various WCT cutoffs.
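The cutoff-sweep analysis behind these classification-accuracy figures can be sketched as follows. The two score lists are fabricated for illustration; the sketch only shows the mechanics of how raising a cutoff buys sensitivity at the cost of specificity, the tradeoff the abstract describes for WCT ≤ 42 vs. ≤ 45 vs. ≤ 47.

```python
# For each candidate cutoff, sensitivity is the share of the non-credible group
# scoring at or below it; specificity is the share of the credible group
# scoring above it. Score lists are invented, not WCT data.

def sens_spec_at_cutoff(noncredible, credible, cutoff):
    sens = sum(s <= cutoff for s in noncredible) / len(noncredible)
    spec = sum(s > cutoff for s in credible) / len(credible)
    return sens, spec

noncredible = [38, 40, 41, 43, 44, 46, 48, 49]  # hypothetical invalid group
credible = [43, 45, 46, 47, 48, 49, 50, 50]     # hypothetical valid group
table = {c: sens_spec_at_cutoff(noncredible, credible, c) for c in (42, 45, 47)}
```

In this toy data, as in the study, the most conservative cutoff maximizes specificity while sacrificing sensitivity, and intermediate cutoffs trade between the two.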
Affiliation(s)
- Robert M Roth
- Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
8
Dong H, Koerts J, Pijnenborg GHM, Scherbaum N, Müller BW, Fuermaier ABM. Cognitive Underperformance in a Mixed Neuropsychiatric Sample at Diagnostic Evaluation of Adult ADHD. J Clin Med 2023;12:6926. [PMID: 37959391] [PMCID: PMC10647211] [DOI: 10.3390/jcm12216926]
Abstract
(1) Background: The clinical assessment of attention-deficit/hyperactivity disorder (ADHD) in adulthood is known to show non-trivial base rates of noncredible performance and requires thorough validity assessment. (2) Objectives: The present study estimated base rates of noncredible performance in clinical evaluations of adult ADHD, defined as failure on one or more of 17 embedded validity indicators (EVIs). It further examined the effect of the order of test administration on EVI failure rates, the association between cognitive underperformance and symptom overreporting, and the prediction of cognitive underperformance from clinical information. (3) Methods: A mixed neuropsychiatric sample (N = 464, of whom 227 had ADHD) completed a comprehensive neuropsychological assessment battery on the Vienna Test System (VTS; CFADHD). Test performance allowed computation of 17 embedded performance validity indicators derived from eight neuropsychological tests. All participants also completed several self- and other-report symptom rating scales assessing depressive symptoms and cognitive functioning; the Conners' Adult ADHD Rating Scale and the Beck Depression Inventory-II were administered to derive embedded symptom validity measures (SVTs). (4) Results and conclusion: Noncredible performance occurred in a sizeable proportion, about 10% up to 30% of individuals, across the battery. Tests of attention and concentration appeared the most adequate and sensitive for detecting underperformance. Cognitive underperformance represented a coherent construct and seemed dissociable from symptom overreporting. These results emphasize the importance of administering multiple PVTs at different time points and support more accurate calculation of the positive and negative predictive values of a given validity measure for noncredible performance during clinical assessments. Future studies should examine whether and how the present results hold in other clinical populations by implementing rigorous reference standards of noncredible performance, characterizing those who fail PVT assessments, and differentiating between underlying motivations.
Affiliation(s)
- Hui Dong
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, University of Groningen, 9712 TS Groningen, The Netherlands
- Janneke Koerts
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, University of Groningen, 9712 TS Groningen, The Netherlands
- Gerdina H. M. Pijnenborg
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, University of Groningen, 9712 TS Groningen, The Netherlands
- Norbert Scherbaum
- LVR University Hospital, Department of Psychiatry and Psychotherapy, Faculty of Medicine, University of Duisburg-Essen, 45147 Essen, Germany
- Bernhard W. Müller
- LVR University Hospital, Department of Psychiatry and Psychotherapy, Faculty of Medicine, University of Duisburg-Essen, 45147 Essen, Germany
- Department of Psychology, University of Wuppertal, 42119 Wuppertal, Germany
- Anselm B. M. Fuermaier
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, University of Groningen, 9712 TS Groningen, The Netherlands
9
Weymann T, Achenbach J, Guevara JE, Bassler M, Karst M, Lambrecht A. EMG measured reaction time as a predictor of invalid symptom report in psychosomatic patients. Clin Neuropsychol 2023:1-17. [PMID: 37917133] [DOI: 10.1080/13854046.2023.2276480]
Abstract
Background: Symptom validity tests (SVTs) and performance validity tests (PVTs) are important tools in sociomedical assessments, especially in psychosomatic contexts where diagnoses depend mainly on clinical observation and self-report measures. This study examined the relationship between reaction times (RTs) and scores on the Structured Inventory of Malingered Symptomatology (SIMS). It was proposed that slower RTs and larger standard deviations of reaction times (RTSDs) would be observed in participants who scored above the SIMS cut-off (>16). Methods: Direct surface electromyography (EMG) was used to capture RTs during a computer-based RT test in 152 inpatients from a psychosomatic rehabilitation clinic in Germany. Correlation analyses and Mann-Whitney U tests were used to examine the relationship between RTs and SIMS scores and to assess the potential impact of covariates such as demographics, medical history, and vocational challenges on RTs; to this end, groups dichotomized on each potential covariate were compared. Results: Significantly longer RTs and larger RTSDs were found in participants who scored above the SIMS cut-off. Current treatment with psychopharmacological medication, diagnosis of depression, and age had no significant influence on the RT measures; however, work-related problems had a significant impact on RTSDs. Conclusion: There was a significant relationship between longer and more inconsistent RTs and indicators of exaggerated or feigned symptom report on the SIMS in psychosomatic rehabilitation inpatients. These findings provide a basis for future research developing a new RT-based PVT.
Affiliation(s)
- Thorben Weymann
- Department of Psychosomatic Medicine, Rehazentrum Oberharz, Clausthal-Zellerfeld, Germany
- Johannes Achenbach
- Department of Anesthesiology, Intensive Care Medicine, Emergency Medicine and Pain Medicine, KRH Klinikum Nordstadt, Hannover, Germany
- Department of Anesthesiology and Intensive Care Medicine, Pain Clinic, Hannover Medical School, Hannover, Germany
- Jasmin E Guevara
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Markus Bassler
- Department of Economics and Social Sciences, University of Applied Science Nordhausen, Nordhausen, Germany
- Matthias Karst
- Department of Anesthesiology and Intensive Care Medicine, Pain Clinic, Hannover Medical School, Hannover, Germany
- Alexandra Lambrecht
- Department of Psychosomatic Medicine, Rehazentrum Oberharz, Clausthal-Zellerfeld, Germany
10
Mascarenhas MA, Cocunato JL, Armstrong IT, Harrison AG, Zakzanis KK. Base rates of non-credible performance in a post-secondary student sample seeking accessibility accommodations. Clin Neuropsychol 2023;37:1608-1628. [PMID: 36646463] [DOI: 10.1080/13854046.2023.2167737]
Abstract
Objective: Performance validity tests (PVTs) have been used to identify non-credible performance in clinical, medicolegal, forensic, and, more recently, academic settings. The inclusion of PVTs in psychoeducational assessments is essential, given that specific accommodations such as flexible deadlines and increased writing time can provide an external incentive for students without disabilities to feign symptoms. Method: The present study used archival data to establish base rates of non-credible performance in a sample of post-secondary students (n = 1045) who underwent a comprehensive psychoeducational evaluation for the purpose of obtaining academic accommodations. In accordance with current guidelines, non-credible performance was defined as failure on two or more freestanding or embedded PVTs. Results: 9.4% of participants failed at least two of the PVTs they were administered: 8.5% failed exactly two PVTs and approximately 1% failed three. Base rates of failure for specific PVTs ranged from 11.2% (TOVA) to 25% (b Test). Conclusions: The present study found a lower base rate of non-credible performance than previously observed in comparable populations, likely reflecting the use of conservative criteria to avoid false positives. By contrast, the inconsistent base rates previously reported in the literature may reflect inconsistent methodologies. These results further emphasize the importance of administering multiple PVTs during psychoeducational assessments; they can inform clinicians conducting assessments in academic settings and aid the appropriate use of PVTs in psychoeducational evaluations to determine accessibility accommodations.
Affiliation(s)
- Melanie A Mascarenhas
- Graduate Department of Psychological Clinical Science, University of Toronto Scarborough, Toronto, Canada
- Jessica L Cocunato
- Department of Psychology, University of Toronto Scarborough, Toronto, Canada
- Irene T Armstrong
- Regional Assessment and Resource Centre, Queen's University, Kingston, Canada
- Allyson G Harrison
- Regional Assessment and Resource Centre, Queen's University, Kingston, Canada
- Konstantine K Zakzanis
- Graduate Department of Psychological Clinical Science, University of Toronto Scarborough, Toronto, Canada
- Department of Psychology, University of Toronto Scarborough, Toronto, Canada
11
Aguilar C, Bailey C, Karyadi KA, Kinney DI, Nitch SR. The use of performance validity tests among inpatient forensic monolingual Spanish-speakers. Appl Neuropsychol Adult 2023;30:671-679. [PMID: 34491851] [DOI: 10.1080/23279095.2021.1970555]
Abstract
Performance validity tests (PVTs) are an integral part of neuropsychological assessments. Yet no studies have examined how Spanish-speaking forensic inpatients perform on PVTs, making it difficult to interpret these tests in this population. The present study examined archival data collected from monolingual Spanish-speaking forensic inpatients (n = 55; Mage = 49.6 years, SD = 12.0; 84.9% male; 93.5% diagnosed with a Psychotic Spectrum Disorder) to determine how this population performs on several PVTs. Most participants' scores on the Dot Counting Test (DCT; 82.2%; n = 45), Repeatable Battery for Assessment of Neuropsychological Status-Effort Index (RBANS EI; 84.4%; n = 33), and Test of Memory Malingering (TOMM; 79.1%; n = 43) were indicative of valid performance. Few participants, however, had Rey-15 Item Test (FIT) scores in the valid range (24.5% to 48.0%; Recall n = 50 and Combined n = 49, respectively); although FIT Recall specificity was improved when cutoff scores were lowered. Total years of education, but not other educational factors, were significantly associated with performance on PVTs (r = .33-.40, p = .01-.03). Study results suggest the DCT, TOMM, and RBANS EI may be more appropriate PVTs for Spanish-speaking forensic inpatients compared to the FIT.
Affiliation(s)
- Cynthia Aguilar
- Department of Psychology, Patton State Hospital, Patton, CA, USA
- Cassandra Bailey
- Department of Psychology, Patton State Hospital, Patton, CA, USA
- Kenny A Karyadi
- Department of Psychology, Patton State Hospital, Patton, CA, USA
- Stephen R Nitch
- Department of Psychology, Patton State Hospital, Patton, CA, USA
12
Scott JC, Moore TM, Roalf DR, Satterthwaite TD, Wolf DH, Port AM, Butler ER, Ruparel K, Nievergelt CM, Risbrough VB, Baker DG, Gur RE, Gur RC. Development and application of novel performance validity metrics for computerized neurocognitive batteries. J Int Neuropsychol Soc 2023;29:789-797. [PMID: 36503573] [PMCID: PMC10258222] [DOI: 10.1017/s1355617722000893]
Abstract
OBJECTIVES Data from neurocognitive assessments may not be accurate in the context of factors impacting validity, such as disengagement, unmotivated responding, or intentional underperformance. Performance validity tests (PVTs) were developed to address these phenomena and assess underperformance on neurocognitive tests. However, PVTs can be burdensome, rely on cutoff scores that reduce information, do not examine potential variations in task engagement across a battery, and are typically not well-suited to acquisition of large cognitive datasets. Here we describe the development of novel performance validity measures that could address some of these limitations by leveraging psychometric concepts using data embedded within the Penn Computerized Neurocognitive Battery (PennCNB). METHODS We first developed these validity measures using simulations of invalid response patterns with parameters drawn from real data. Next, we examined their application in two large, independent samples: 1) children and adolescents from the Philadelphia Neurodevelopmental Cohort (n = 9498); and 2) adult servicemembers from the Marine Resiliency Study-II (n = 1444). RESULTS Our performance validity metrics detected patterns of invalid responding in simulated data, even at subtle levels. Furthermore, a combination of these metrics significantly predicted previously established validity rules for these tests in both developmental and adult datasets. Moreover, most clinical diagnostic groups did not show reduced validity estimates. CONCLUSIONS These results provide proof-of-concept evidence for multivariate, data-driven performance validity metrics. These metrics offer a novel method for determining the performance validity for individual neurocognitive tests that is scalable, applicable across different tests, less burdensome, and dimensional. However, more research is needed into their application.
Affiliation(s)
- J. Cobb Scott
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- VISN4 Mental Illness Research, Education, and Clinical Center at the Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA
- Tyler M. Moore
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- David R. Roalf
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Theodore D. Satterthwaite
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Daniel H. Wolf
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Allison M. Port
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ellyn R. Butler
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Kosha Ruparel
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Caroline M. Nievergelt
- Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA
- Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Victoria B. Risbrough
- Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA
- Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Dewleen G. Baker
- Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA
- Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Raquel E. Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Lifespan Brain Institute, Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Ruben C. Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- VISN4 Mental Illness Research, Education, and Clinical Center at the Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA
- Lifespan Brain Institute, Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
13
Ovsiew GP, Cerny BM, De Boer AB, Petry LG, Resch ZJ, Durkin NM, Soble JR. Performance and symptom validity assessment in attention deficit/hyperactivity disorder: Base rates of invalidity, concordance, and relative impact on cognitive performance. Clin Neuropsychol 2023; 37:1498-1515. [PMID: 36594201 DOI: 10.1080/13854046.2022.2162440]
Abstract
Objective: Differential diagnosis of attention deficit/hyperactivity disorder (ADHD) is one of the most common referral questions for neuropsychological evaluation but is complicated by the presence of external incentives. Validity assessment is therefore critical in such evaluations, employing symptom validity tests (SVTs) and performance validity tests (PVTs) to assess the validity of reported symptoms and cognitive test performance, respectively. This study aimed to establish the base rate of symptom and performance invalidity in adults referred for ADHD evaluation, compare concordance between performance and symptom validity, and assess the impact of each type of invalidity on cognitive test performance. Method: This consecutive case series included data from 392 demographically diverse adults who underwent outpatient neuropsychological evaluation for ADHD. All patients were administered the Clinical Assessment of Attention Deficit-Adult (CAT-A) and a uniform cognitive test battery, including seven PVTs. Results: Invalid symptom reporting and invalid PVT performance were found in 22% and 16% of the sample, respectively. Sixty-eight percent had concordantly valid SVTs/PVTs and 6% had concordantly invalid SVTs/PVTs, whereas the remaining 26% had either invalid SVTs or invalid PVTs (but not both). Invalid PVT performance was associated with a significant decrease across all cognitive test scores, with generally large effects (ηp² = .01-.18). Invalid symptom reporting had minimal effects on cognitive test performance (ηp² ≤ .04). Conclusions: PVTs and SVTs are dissociable and therefore should not be used interchangeably in the context of adult ADHD evaluations. Rather, symptom and performance validity should continue to be assessed independently, as they provide largely non-redundant information.
Affiliation(s)
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Adam B De Boer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Luke G Petry
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Nicole M Durkin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
14
Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part II-Methodological Issues. Neuropsychol Rev 2023; 33:604-623. [PMID: 37594690 DOI: 10.1007/s11065-023-09602-6]
Abstract
Forensic neuropsychological examinations to detect malingering in patients with neurocognitive, physical, and psychological dysfunction have tremendous social, legal, and economic importance. Thousands of studies have been published to develop and validate methods for the forensic detection of malingering, based largely on approximately 50 validity tests, including embedded and stand-alone performance and symptom validity tests. This is Part II of a two-part review of statistical and methodological issues in the forensic prediction of malingering from validity tests. The Part I companion paper explored key statistical issues. Part II examines related methodological issues through conceptual analysis, statistical simulations, and reanalysis of findings from prior validity test validation studies. Methodological issues examined include the distinction between analog simulation and forensic studies, the effect of excluding too-close-to-call (TCTC) cases from analyses, the distinction between criterion-related and construct validation studies, and the application of the Revised Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) to assess risk of bias in all Test of Memory Malingering (TOMM) validation studies published within approximately the first 20 years following its initial publication. Findings include that analog studies are commonly mistaken for forensic validation studies and that construct validation studies are routinely presented as if they were criterion-referenced validation studies. After accounting for the exclusion of TCTC cases, actual classification accuracy was found to be well below claimed levels. QUADAS-2 results revealed that all extant TOMM validation studies had a high risk of bias, with not a single study achieving low risk of bias. Recommendations include adoption of well-established guidelines from the biomedical diagnostics literature for good-quality criterion-referenced validation studies and examination of the implications for malingering determination practices. Design of future studies may hinge on the availability of an incontrovertible reference standard for the malingering status of examinees.
Affiliation(s)
- Christoph Leonhard
- The Chicago School of Professional Psychology at Xavier University of Louisiana, 1 Drexel Dr, Box 200, New Orleans, LA, 70125, USA.
15
Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part I: Statistical Issues. Neuropsychol Rev 2023; 33:581-603. [PMID: 37612531 DOI: 10.1007/s11065-023-09601-7]
Abstract
Forensic neuropsychological examinations with determination of malingering have tremendous social, legal, and economic consequences. Thousands of studies have been published aimed at developing and validating methods to diagnose malingering in forensic settings, based largely on approximately 50 validity tests, including embedded and stand-alone performance validity tests. This is the first part of a two-part review. Part I explores three statistical issues related to the validation of validity tests as predictors of malingering, including (a) the need to report a complete set of classification accuracy statistics, (b) how to detect and handle collinearity among validity tests, and (c) how to assess the classification accuracy of algorithms for aggregating information from multiple validity tests. In the Part II companion paper, three closely related research methodological issues are examined. Statistical issues are explored through conceptual analysis, statistical simulations, and reanalysis of findings from prior validation studies. Findings suggest that extant neuropsychological validity tests are collinear and contribute redundant information to the prediction of malingering among forensic examinees. Findings further suggest that existing diagnostic algorithms may miss diagnostic accuracy targets under most realistic conditions. The review makes several recommendations to address these concerns, including (a) reporting of full confusion table statistics with 95% confidence intervals in diagnostic trials, (b) the use of logistic regression, and (c) adoption of the consensus model on the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) in the malingering literature.
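The reporting recommendation above (full confusion-table statistics with 95% confidence intervals) can be illustrated with a brief sketch. The function names and the Wilson score interval choice are ours, not the paper's; they simply show what "a complete set of classification accuracy statistics" might look like in practice.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion; behaves better than the
    naive Wald interval when proportions are near 0 or 1, as validity-test
    specificities often are."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def confusion_stats(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, and NPV from a 2x2 confusion
    table, each as (point estimate, (ci_lower, ci_upper))."""
    return {
        "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
        "ppv": (tp / (tp + fp), wilson_ci(tp, tp + fp)),
        "npv": (tn / (tn + fn), wilson_ci(tn, tn + fn)),
    }
```

Reporting the intervals alongside the point estimates makes clear how wide the plausible range of classification accuracy is in the typically modest validation samples.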
Affiliation(s)
- Christoph Leonhard
- The Chicago School of Professional Psychology at Xavier University of Louisiana, Box 200, 1 Drexel Dr, New Orleans, LA, 70125, USA.
16
Harrison AG, Beal AL, Armstrong IT. Predictive value of performance validity testing and symptom validity testing in psychoeducational assessment. Appl Neuropsychol Adult 2023; 30:315-329. [PMID: 34261385 DOI: 10.1080/23279095.2021.1943396]
Abstract
Using archival data from 2463 psychoeducational assessments of postsecondary students, we investigated whether failure on either symptom or performance validity tests (SVTs or PVTs) was associated with score differences on various cognitive, achievement, or executive functioning performance measures or on symptom report measures related to mental health or attention complaints. In total, 14.6% of students failed one or more PVTs, 33.6% failed one or more SVTs, and 41.6% failed at least one validity test. Individuals who failed SVTs tended to have the highest levels of self-reported symptoms relative to other groups but did not score worse on performance-based psychological tests. Those who failed PVTs scored worse on performance-based tests relative to other groups. Failure on at least one PVT and one SVT was associated with both test performance and self-reported symptoms suggestive of greater impairment compared with those who passed all validity measures. Findings also highlight the need for domain-specific SVTs: failing ADHD SVTs was associated only with extreme reports of ADHD and executive functioning symptoms, while failing mental health SVTs related only to extreme reports of mental health complaints. Results support using at least one PVT and one SVT in psychoeducational assessments to aid diagnostic certainty, given the frequency of non-credible presentation in this population of postsecondary students.
Affiliation(s)
- Allyson G Harrison
- Regional Assessment and Resource Centre, Queen's University, Kingston, Canada
- Irene T Armstrong
- Regional Assessment and Resource Centre, Queen's University, Kingston, Canada
17
Monjazeb S, Crowell TA. Performance validity of the Dot Counting Test in a dementia clinic setting. Appl Neuropsychol Adult 2023:1-11. [PMID: 37119265 DOI: 10.1080/23279095.2023.2207125]
Abstract
OBJECTIVE This study examined the utility of a performance validity test (PVT), the Dot Counting Test (DCT), in individuals undergoing neuropsychological evaluations for dementia. We investigated specificity rates of the DCT Effort Index score (E-Score) and various individual DCT scores (based on completion time/errors) to further establish appropriate cutoff scores. METHOD This cross-sectional study included 56 non-litigating, validly performing older adults with no/minimal, mild, or major cognitive impairment. Cutoffs associated with ≥90% specificity were established for 7 DCT scoring methods across impairment severity subgroups. RESULTS Performance on 5 of 7 DCT scoring methods significantly differed based on impairment severity. Overall, more severely impaired participants had significantly higher E-Scores and longer completion times but demonstrated comparable errors to their less impaired counterparts. Contrary to the previously established E-Score cutoff of ≥17, a cutoff of ≥22 was required to maintain adequate specificity in our total sample, with significantly higher adjustments required in the Mild and Major Neurocognitive Disorder subgroups (≥27 and ≥40, respectively). A cutoff of >3 errors achieved adequate specificity in our sample, suggesting that error scores may produce lower false positive rates than E-Scores and completion time scores, both of which overemphasize speed and could inadvertently penalize more severely impaired individuals. CONCLUSIONS In a dementia clinic setting, error scores on the DCT may have greater utility in detecting non-credible performance than E-Scores and completion time scores, particularly among more severely impaired individuals. Future research should establish and cross-validate the sensitivity and specificity of the DCT for assessing performance validity.
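The cutoff-setting logic described above (raising a flagging threshold until at least 90% of credibly performing patients are not flagged) can be sketched in a few lines. The function name and data are illustrative, not from the study; the convention assumed here is that higher scores (e.g., DCT E-Scores or completion times) indicate worse, potentially non-credible performance.

```python
def min_cutoff_for_specificity(credible_scores, target_specificity=0.90):
    """Find the smallest 'score >= cutoff' flagging threshold such that at
    least target_specificity of known-credible examinees are NOT flagged.
    Returns the cutoff; the final candidate (max + 1) flags no one, so a
    result is always found."""
    n = len(credible_scores)
    for cutoff in sorted(set(credible_scores)) + [max(credible_scores) + 1]:
        not_flagged = sum(1 for s in credible_scores if s < cutoff)
        if not_flagged / n >= target_specificity:
            return cutoff
```

Because more impaired credible patients produce higher (slower) scores, this procedure naturally pushes cutoffs upward in impaired subgroups, which is exactly the pattern of adjusted cutoffs (≥17 to ≥22 to ≥27 to ≥40) the study reports.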
Affiliation(s)
- Sanam Monjazeb
- Department of Psychology, Simon Fraser University, Burnaby, Canada
- Timothy A Crowell
- Department of Psychiatry, University of British Columbia, Vancouver, Canada
18
Cutler L, Greenacre M, Abeare CA, Sirianni CD, Roth R, Erdodi LA. Multivariate models provide an effective psychometric solution to the variability in classification accuracy of D-KEFS Stroop performance validity cutoffs. Clin Neuropsychol 2023; 37:617-649. [PMID: 35946813 DOI: 10.1080/13854046.2022.2073914]
Abstract
Objective: The study was designed to expand on the results of previous investigations of the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions. Method: The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). Results: Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 on the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. Conclusions: A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the "invalid-before-impaired" paradox.
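The multivariate decision rule quoted in the abstract (≥3 failures at ACSS ≤6, or ≥2 failures at ACSS ≤5, across the four subtests) is simple enough to sketch directly. The function name is ours, and this is only the failure-count rule, not the separate D-KEFS Stroop Index scale the authors also propose.

```python
def dkefs_stroop_multivariate_flag(scaled_scores):
    """Flag performance as potentially invalid under the multivariate rule
    described by Cutler et al.: given the four subtest age-corrected scaled
    scores (ACSSs), return True if >=3 scores are <=6 or >=2 scores are <=5.
    Sketch for illustration only, not a clinical decision tool."""
    failures_lenient = sum(1 for s in scaled_scores if s <= 6)
    failures_strict = sum(1 for s in scaled_scores if s <= 5)
    return failures_lenient >= 3 or failures_strict >= 2
```

Requiring multiple concurrent failures is what buys back specificity: a single borderline subtest score no longer triggers a flag, while consistently low scores across subtests still do.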
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Matthew Greenacre
- Schulich School of Medicine, Western University, London, Ontario, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Robert Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
19
Bajjaleh C, Braw YC, Elkana O. Adaptation and initial validation of the Arabic version of the Word Memory Test (WMT-ARB). Appl Neuropsychol Adult 2023; 30:204-213. [PMID: 34043924 DOI: 10.1080/23279095.2021.1923495]
Abstract
BACKGROUND The feigning of cognitive impairment is common in neuropsychological assessments, especially in medicolegal settings. The Word Memory Test (WMT) is a forced-choice recognition memory performance validity test (PVT) that is widely used to detect noncredible performance. Though translated into several languages, the WMT had not been adapted to Arabic, one of the most widely spoken languages. The aim of the current study was to evaluate the convergent validity of the Arabic adaptation of the WMT (WMT-ARB) among Israeli Arabic speakers. METHODS We adapted the WMT to Arabic using the back-translation method and in accordance with relevant guidelines. We then randomly assigned healthy Arabic-speaking adults (N = 63) to either a simulation or an honest control condition. The participants then performed neuropsychological tests that included the WMT-ARB and the Test of Memory Malingering (TOMM), a well-validated nonverbal PVT. RESULTS The WMT-ARB had high split-half reliability, and its measures were significantly correlated with those of the TOMM (p < .001). High concordance was found in the classification of participants using the WMT-ARB and TOMM (specificity = 94.29% and sensitivity = 100% using the conventional TOMM Trial 2 cutoff as the gold standard). As expected, simulators' accuracy on the WMT-ARB was significantly lower than that of honest controls. None of the demographic variables significantly correlated with WMT-ARB measures. CONCLUSION The WMT-ARB shows initial evidence of reliability and validity, supporting its potential use in the large population of Arabic speakers and its universality in detecting noncredible performance. The findings, however, are preliminary and mandate validation in clinical settings.
Affiliation(s)
- Christine Bajjaleh
- Department of Psychology, The Academic College of Tel Aviv-Yaffo, Tel Aviv-Yaffo, Israel
- Yoram C Braw
- Department of Psychology, Ariel University, Ariel, Israel
- Odelia Elkana
- Department of Psychology, The Academic College of Tel Aviv-Yaffo, Tel Aviv-Yaffo, Israel
20
Sullivan K, Keyter A, Jones K, Ameratunga S, Starkey N, Barker-Collo S, Webb J, Theadom A. Atypical symptom reporting after mild traumatic brain injury. Brain Impair 2023; 24:114-123. [PMID: 38167586 DOI: 10.1017/brimp.2021.30]
Abstract
OBJECTIVE Early reporting of atypical symptoms following a mild traumatic brain injury (mTBI) may be an early indicator of poor prognosis. This study aimed to determine the percentage of people reporting atypical symptoms 1 month post-mTBI and to explore links to recovery 12 months later in a community-dwelling mTBI sample. METHODS Adult participants (>16 years) who had experienced an mTBI were identified from a longitudinal incidence study (BIONIC). At 1 month post-injury, 260 participants completed the Rivermead Post-Concussion Symptoms Questionnaire (typical symptoms) plus four atypical symptom items (hemiplegia, difficulty swallowing, digestion problems, and difficulties with fine motor tasks). At 12 months post-injury, 73.9% (n = 193) rated their overall recovery on a 100-point scale. An ordinal regression explored the association between atypical symptoms at 1 month and recovery at 12 months post-injury (low = 0-80, moderate = 81-99, and complete recovery = 100), whilst controlling for age, sex, rehabilitation received, ethnicity, mental and physical comorbidities, and additional injuries sustained at the time of injury. RESULTS At 1 month post-injury, <1% of participants reported hemiplegia, 5.4% difficulty swallowing, 10% digestion problems, and 15.4% difficulties with fine motor tasks. The ordinal regression model revealed that atypical symptoms were not significant predictors of self-rated recovery at 12 months. Older age at injury and higher typical symptoms at 1 month were independently associated with poorer recovery at 12 months, p < 0.01. CONCLUSION Atypical symptoms on initial presentation were not linked to global self-reported recovery at 12 months. Age at injury and typical symptoms are stronger early indicators of longer-term prognosis. Further research is needed to determine whether atypical symptoms predict other outcomes following mTBI.
Affiliation(s)
- Karen Sullivan
- School of Psychology and Counselling, Queensland University of Technology, Brisbane, Australia
- Anna Keyter
- Auckland University of Technology, Auckland, New Zealand
- Kelly Jones
- National Institute for Stroke and Applied Neuroscience, Auckland University of Technology, Auckland, New Zealand
- Shanthi Ameratunga
- School of Population Health, University of Auckland, Auckland, New Zealand
- Nicola Starkey
- Faculty of Arts and Social Sciences, University of Waikato, Hamilton, New Zealand
- Alice Theadom
- National Institute for Stroke and Applied Neuroscience, Auckland University of Technology, Auckland, New Zealand
21
Becke M, Tucha L, Butzbach M, Aschenbrenner S, Weisbrod M, Tucha O, Fuermaier ABM. Feigning adult ADHD on a comprehensive neuropsychological test battery: An analogue study. Int J Environ Res Public Health 2023; 20:4070. [PMID: 36901080 PMCID: PMC10001580 DOI: 10.3390/ijerph20054070]
Abstract
The evaluation of performance validity is an essential part of any neuropsychological evaluation. Validity indicators embedded in routine neuropsychological tests offer a time-efficient option for sampling performance validity throughout the assessment while reducing vulnerability to coaching. By administering a comprehensive neuropsychological test battery to 57 adults with ADHD, 60 neurotypical controls, and 151 instructed simulators, we examined each test's utility in detecting noncredible performance. Cutoff scores were derived for all available outcome variables. Although all cutoffs ensured at least 90% specificity in the ADHD group, sensitivity differed significantly between tests, ranging from 0% to 64.9%. Tests of selective attention, vigilance, and inhibition were most useful in detecting the instructed simulation of adult ADHD, whereas figural fluency and task switching lacked sensitivity. Five or more test variables with results in the second to fourth percentile were rare among cases of genuine adult ADHD but identified approximately 58% of instructed simulators.
Affiliation(s)
- Miriam Becke
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Lara Tucha
- Department of Psychiatry and Psychotherapy, University Medical Center Rostock, Gehlsheimer Str. 20, 18147 Rostock, Germany
- Marah Butzbach
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Steffen Aschenbrenner
- Department of Clinical Psychology and Neuropsychology, SRH Clinic Karlsbad-Langensteinbach, 76307 Karlsbad, Germany
- Matthias Weisbrod
- Department of Psychiatry and Psychotherapy, SRH Clinic Karlsbad-Langensteinbach, 76307 Karlsbad, Germany
- Department of General Psychiatry, Center of Psychosocial Medicine, University of Heidelberg, 69115 Heidelberg, Germany
- Oliver Tucha
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Department of Psychiatry and Psychotherapy, University Medical Center Rostock, Gehlsheimer Str. 20, 18147 Rostock, Germany
- Department of Psychology, National University of Ireland, W23 F2K8 Maynooth, Ireland
- Anselm B. M. Fuermaier
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
22
Legemaat AM, Haagedoorn MAS, Burger H, Denys D, Bockting CL, Geurtsen GJ. Is suboptimal effort an issue? A systematic review on neuropsychological performance validity in major depressive disorder. J Affect Disord 2023; 323:731-740. [PMID: 36528136 DOI: 10.1016/j.jad.2022.12.043]
Abstract
BACKGROUND In Major Depressive Disorder (MDD), emotion- and motivation-related symptoms may affect effort during neuropsychological testing. Performance validity tests (PVTs) are therefore essential but are rarely mentioned in research on cognitive functioning in MDD. We aimed to assess the proportion of MDD patients with demonstrated valid performance and to determine cognitive functioning in patients with valid performance. This is the first systematic review of neuropsychological performance validity in MDD. METHODS The databases PubMed, PsycINFO, Embase, and Cochrane Library were searched for studies reporting PVT results of adult MDD patients. We meta-analyzed the proportion of MDD patients with PVT scores indicative of valid performance. RESULTS Seven studies with a total of 409 MDD patients fulfilled the inclusion criteria. Six studies reported the exact proportion of patients with PVT scores indicative of valid performance, which ranged from 60% to 100%, with a pooled proportion estimate of 94%. Four studies reported on cognitive functioning in MDD patients with valid performance: two found memory impairment in a minority of MDD patients, and two found no cognitive impairment. LIMITATIONS Small number of studies and small sample sizes. CONCLUSIONS A surprisingly small number of studies reported on PVTs in MDD. About 94% of MDD patients in studies using PVTs had valid neuropsychological test performance. Conclusive information regarding cognitive functioning in MDD patients with valid performance was lacking. Neuropsychological performance validity should be taken into account, since it may alter conclusions regarding cognitive functioning.
Affiliation(s)
- Amanda M Legemaat
- Department of Psychiatry, Amsterdam University Medical Centers, Location AMC, University of Amsterdam, Amsterdam Neuroscience & Amsterdam Public Health, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands
- Marcella A S Haagedoorn
- Department of Geriatric Psychiatry, Mental Health Care North-Holland North, Maelsonstraat 1, 1624 NP Hoorn, the Netherlands
- Huibert Burger
- Department of General Practice and Elderly Care Medicine, University Medical Center Groningen, University of Groningen, Antonius Deusinglaan 1, 9713 AV Groningen, the Netherlands
- Damiaan Denys
- Department of Psychiatry, Amsterdam University Medical Centers, Location AMC, University of Amsterdam, Amsterdam Neuroscience & Amsterdam Public Health, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands
- Claudi L Bockting
- Department of Psychiatry, Amsterdam University Medical Centers, Location AMC, University of Amsterdam, Amsterdam Neuroscience & Amsterdam Public Health, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands
- Centre for Urban Mental Health, University of Amsterdam, Oude Turfmarkt 147, 1012 GC Amsterdam, the Netherlands
- Gert J Geurtsen
- Department of Medical Psychology, Amsterdam University Medical Centers, Location AMC, University of Amsterdam, Amsterdam Neuroscience & Amsterdam Public Health, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands
23
Horner MD, Denning JH, Cool DL. Self-reported disability-seeking predicts PVT failure in veterans undergoing clinical neuropsychological evaluation. Clin Neuropsychol 2023; 37:387-401. [PMID: 35387574 DOI: 10.1080/13854046.2022.2056923]
Abstract
Objective: This study examined disability-related factors as predictors of PVT performance in Veterans who underwent neuropsychological evaluation for clinical purposes, not for determination of disability benefits. Method: Participants were 1,438 Veterans who were seen for clinical evaluation in a VA Medical Center's Neuropsychology Clinic. All were administered the TOMM, MSVT, or both. Predictors of PVT performance included (1) whether Veterans were receiving VA disability benefits ("service connection") for psychiatric or neurological conditions at the time of evaluation, and (2) whether Veterans reported on clinical interview that they were in the process of applying for disability benefits. Data were analyzed using binary logistic regression, with PVT performance as the dependent variable in separate analyses for the TOMM and MSVT. Results: Veterans who were already receiving VA disability benefits for psychiatric or neurological conditions were significantly more likely to fail both the TOMM and the MSVT, compared to Veterans who were not receiving benefits for such conditions. Independently of receiving such benefits, Veterans who reported that they were applying for disability benefits were significantly more likely to fail the TOMM and MSVT than were Veterans who denied applying for benefits at the time of evaluation. Conclusions: These findings demonstrate that simply being in the process of applying for disability benefits increases the likelihood of noncredible performance. The presence of external incentives can predict the validity of neuropsychological performance even in clinical, non-forensic settings.
Affiliation(s)
- Michael David Horner
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- John H Denning
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Danielle L Cool
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
|
24
|
Chang F, Cerny BM, Tse PKY, Rauch AA, Khan H, Phillips MS, Fletcher NB, Resch ZJ, Ovsiew GP, Jennette KJ, Soble JR. Using the Grooved Pegboard Test as an Embedded Validity Indicator in a Mixed Neuropsychiatric Sample with Varying Cognitive Impairment: Cross-Validation Problems. Percept Mot Skills 2023; 130:770-789. [PMID: 36634223 DOI: 10.1177/00315125231151779] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
Abstract
Embedded validity indicators (EVIs) derived from motor tests have received less empirical attention than those derived from tests of other neuropsychological abilities, particularly memory. Preliminary evidence suggests that the Grooved Pegboard Test (GPB) may function as an EVI, but existing studies were largely conducted using simulators and population samples without cognitive impairment. In this study we aimed to evaluate the GPB's classification accuracy as an EVI among a mixed clinical neuropsychiatric sample with and without cognitive impairment. This cross-sectional study comprised 223 patients clinically referred for neuropsychological testing. GPB raw and T-scores for both dominant and nondominant hands were examined as EVIs. A known-groups design, based on ≤1 failure on a battery of validated, independent criterion PVTs, showed that GPB performance differed significantly by validity group. Within the valid group, receiver operating characteristic curve analyses revealed that only the dominant hand raw score displayed acceptable classification accuracy for detecting invalid performance (area under curve [AUC] = .72), with an optimal cut-score of ≥106 seconds (33% sensitivity/88% specificity). All other scores had marginally lower classification accuracy (AUCs = .65-.68) for differentiating valid from invalid performers. Therefore, the GPB demonstrated limited utility as an EVI in a clinical sample containing patients with bona fide cognitive impairment.
Affiliation(s)
- Fini Chang
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, University of Illinois at Chicago, Chicago, Illinois, United States
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, Illinois Institute of Technology, Chicago, Illinois, United States
- Phoebe Ka Yin Tse
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Clinical Psychology, The Chicago School of Professional Psychology, Chicago, Illinois, United States
- Andrew A Rauch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, Loyola University Chicago, Chicago, Illinois, United States
- Humza Khan
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, Illinois Institute of Technology, Chicago, Illinois, United States
- Matthew S Phillips
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Clinical Psychology, The Chicago School of Professional Psychology, Chicago, Illinois, United States
- Noah B Fletcher
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Neurology, University of Illinois College of Medicine, Chicago, Illinois, United States
|
25
|
Brooks KJL, Sullivan KA. Validating the modified Rivermead Post-concussion Symptoms Questionnaire (mRPQ). Clin Neuropsychol 2023; 37:207-226. [PMID: 34348079 DOI: 10.1080/13854046.2021.1942555] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2020] [Accepted: 06/09/2021] [Indexed: 02/07/2023]
Abstract
OBJECTIVE Response distortions in the reporting of postconcussion symptoms can occur for many reasons. The Rivermead Post-concussion Symptoms Questionnaire (RPQ) was recently modified to include an embedded symptom validity indicator that tests for atypical symptoms. The present study used a simulation design to investigate the psychometric properties of the modified RPQ (mRPQ). METHOD 298 adult volunteers were randomised into three groups: honest responders (Controls, C), who reported actual, current symptoms; mild traumatic brain injury (mTBI) simulators (MS), who role-played being injured; and biased mTBI simulators (BMS), who role-played being injured and were asked to bias (exaggerate) their responses. The MS and BMS participants received instructions to support the simulation. All participants completed the mRPQ and a modified Neurobehavioral Symptom Inventory (mNSI). RESULTS A 2 × 3 mixed ANOVA with one within-group variable (Symptom type: Standard or Atypical) and one between-group variable (Instruction type: C, MS, BMS) found a significant two-way interaction (p < .05, ηp2 = .08). CONCLUSIONS The BMS group showed score elevations for both standard and atypical postconcussion symptoms; therefore, both symptom types should be considered when evaluating for biased responding. The mRPQ has promising psychometric properties and should be developed further.
Affiliation(s)
- Kelly Jack Lee Brooks
- School of Psychology and Counselling, Queensland University of Technology, Brisbane, QLD, Australia
- Karen A Sullivan
- School of Psychology and Counselling, Queensland University of Technology, Brisbane, QLD, Australia
- Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, QLD, Australia
|
26
|
Robinson A, Reed C, Davis K, Divers R, Miller L, Erdodi LA, Calamia M. Settling the Score: Can CPT-3 Embedded Validity Indicators Distinguish Between Credible and Non-Credible Responders Referred for ADHD and/or SLD? J Atten Disord 2023; 27:80-88. [PMID: 36113024 DOI: 10.1177/10870547221121781] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
OBJECTIVE The purpose of the present study was to further investigate the clinical utility of individual and composite indicators within the CPT-3 as embedded validity indicators (EVIs) given the discrepant findings of previous investigations. METHODS A total of 201 adults undergoing psychoeducational evaluation for ADHD and/or Specific Learning Disorder (SLD) were divided into credible (n = 159) and non-credible (n = 42) groups based on five criterion measures. RESULTS Receiver operating characteristic curves (ROC) revealed that 5/9 individual indicators and 2/4 composite indicators met minimally acceptable classification accuracy of ≥0.70 (AUC = 0.43-0.78). Individual (0.16-0.45) and composite indicators (0.23-0.35) demonstrated low sensitivity when using cutoffs that maintained specificity ≥90%. CONCLUSION Given the lack of stability across studies, further research is needed before recommending any specific cutoff be used in clinical practice with individuals seeking psychoeducational assessment.
Affiliation(s)
- Ross Divers
- Louisiana State University, Baton Rouge, USA
- Luke Miller
- Louisiana State University, Baton Rouge, USA
|
27
|
Ausloos-Lozano JE, Bing-Canar H, Khan H, Singh PG, Wisinger AM, Rauch AA, Ogram Buckley CM, Petry LG, Jennette KJ, Soble JR, Resch ZJ. Assessing performance validity during attention-deficit/hyperactivity disorder evaluations: Cross-validation of non-memory embedded validity indicators. Dev Neuropsychol 2022; 47:247-257. [PMID: 35787068 DOI: 10.1080/87565641.2022.2096889] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
Abstract
Embedded performance validity tests (PVTs) are key components of neuropsychological evaluations. However, most are memory-based and may be less useful in the assessment of attention-deficit/hyperactivity disorder (ADHD). Four non-memory-based validity indices derived from processing speed and executive functioning measures commonly included in ADHD evaluations, namely Verbal Fluency (VF) and the Trail Making Test (TMT), were cross-validated using the Rey 15-Item Test (RFIT) Recall and Recall/Recognition as memory-based comparison measures. This consecutive case series included data from 416 demographically-diverse adults who underwent outpatient neuropsychological evaluation for ADHD. Validity classifications were established, with ≤1 PVT failure of five independent criterion PVTs as indicative of valid performance (374 valid performers/42 invalid performers). Among the statistically significant validity indicators, TMT-A and TMT-B T-scores (AUCs = .707-.723) had acceptable classification accuracy ranges and sensitivities ranging from 29%-36% (≥89% specificity). RFIT Recall/Recognition produced similar results as TMT-B T-score with 42% sensitivity/90% specificity, but with lower classification accuracy. In evaluating adult ADHD, VF and TMT embedded PVTs demonstrated comparable sensitivity and specificity values to those found in other clinical populations but necessitated alternate cut-scores. Results also support use of RFIT Recall/Recognition over the standard RFIT Recall as a PVT for adult ADHD evaluations.
Affiliation(s)
- Jenna E Ausloos-Lozano
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Hanaan Bing-Canar
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Humza Khan
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Palak G Singh
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Amanda M Wisinger
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Andrew A Rauch
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Caitlin M Ogram Buckley
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Luke G Petry
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Department of Neurology, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
|
28
|
Fox ME, King TZ. Considerations for Reliable Digit Span as a performance validity test for long-term survivors of childhood brain tumors. Appl Neuropsychol Adult 2022; 29:469-477. [PMID: 32503366 DOI: 10.1080/23279095.2020.1771714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The Reliable Digit Span (RDS) is a performance validity test (PVT) used widely in non-clinical samples, but its utility is in question in clinical groups with cognitive impairment. To investigate, RDS scores were calculated and correlated with the Neurological Predictor Scale, an informant-reported Activities of Daily Living score, and a proxy measure of intelligence (Vocabulary) for 83 adult survivors of childhood brain tumors and 105 healthy controls. Participants were divided into passing and failing groups at each RDS cutoff, and ANCOVAs for each of the three variables of interest, covarying for age at examination, were run. RDS was correlated with all three variables of interest in survivors but only with Vocabulary in controls. At the ≤7 cutoff, passing and failing survivors differed significantly across all variables of interest, while passing and failing controls differed only on Vocabulary. Differences were also found between passing and failing survivors at lower cutoffs. RDS is related to, and likely impacted by, the various neurological and cognitive challenges faced by brain tumor survivors. Using the standard RDS cutoff of ≤7 may result in inaccurate interpretation of valid performance in this population; therefore, the use of other PVTs is recommended.
Affiliation(s)
- Tricia Z King
- Department of Psychology and the Neuroscience Institute, Georgia State University, Atlanta, GA, USA
|
29
|
Erdodi LA. Multivariate Models of Performance Validity: The Erdodi Index Captures the Dual Nature of Non-Credible Responding (Continuous and Categorical). Assessment 2022:10731911221101910. [PMID: 35757996 DOI: 10.1177/10731911221101910] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
This study was designed to examine the classification accuracy of the Erdodi Index (EI-5), a novel method for aggregating validity indicators that takes into account both the number and extent of performance validity test (PVT) failures. Archival data were collected from a mixed clinical/forensic sample of 452 adults referred for neuropsychological assessment. The classification accuracy of the EI-5 was evaluated against established free-standing PVTs. The EI-5 achieved a good combination of sensitivity (.65) and specificity (.97), correctly classifying 92% of the sample. Its classification accuracy was comparable with that of another free-standing PVT. An indeterminate range between Pass and Fail emerged as a legitimate third outcome of performance validity assessment, indicating that the underlying construct is an inherently continuous variable. Results support the use of the EI model as a practical and psychometrically sound method of aggregating multiple embedded PVTs into a single-number summary of performance validity. Combining free-standing PVTs with the EI-5 resulted in a better separation between credible and non-credible profiles, demonstrating incremental validity. Findings are consistent with recent endorsements of a three-way outcome for PVTs (Pass, Borderline, and Fail).
|
30
|
Grewal KS, Trites M, Kirk A, MacDonald SWS, Morgan D, Gowda-Sookochoff R, O'Connell ME. CVLT-II short form forced choice recognition in a clinical dementia sample: Cautions for performance validity assessment. Appl Neuropsychol Adult 2022:1-10. [PMID: 35635794 DOI: 10.1080/23279095.2022.2079088] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Performance validity tests are susceptible to false positives from genuine cognitive impairment (e.g., dementia); this has not been explored with the short form of the California Verbal Learning Test II (CVLT-II-SF). In a memory clinic sample, we examined whether CVLT-II-SF Forced Choice Recognition (FCR) scores differed across diagnostic groups, and how the severity of impairment [Clinical Dementia Rating Sum of Boxes (CDR-SOB) or Mini-Mental State Examination (MMSE)] modulated test performance. Three diagnostic groups were identified: subjective cognitive impairment (SCI; n = 85), amnestic mild cognitive impairment (a-MCI; n = 17), and dementia due to Alzheimer's Disease (AD; n = 50). Significant group differences in FCR were observed using one-way ANOVA; post-hoc analysis indicated the AD group performed significantly worse than the other groups. Using multiple regression, FCR performance was modeled as a function of diagnostic group, severity (MMSE or CDR-SOB), and their interaction. Results yielded significant main effects for MMSE and diagnostic group, with a significant interaction; CDR-SOB analyses were non-significant. Increases in impairment disproportionately impacted FCR performance for persons with AD, warranting caution in applying research-based performance validity cutoffs in dementia populations. Future research should examine whether the CVLT-II-SF-FCR is appropriately specific for best-practice testing batteries for dementia.
Affiliation(s)
- Karl S Grewal
- Department of Psychology, University of Saskatchewan, Saskatoon, Canada
- Michaella Trites
- Department of Psychology, University of Victoria, Victoria, Canada
- Andrew Kirk
- Department of Medicine, University of Saskatchewan, Saskatoon, Canada
- Debra Morgan
- Canadian Centre for Health and Safety in Agriculture, University of Saskatchewan, Saskatoon, Canada
- Megan E O'Connell
- Department of Psychology, University of Saskatchewan, Saskatoon, Canada
|
31
|
Varela JL, Ord AS, Phillips JI, Shura RD, Sautter SW. Preliminary evidence for digit span performance validity indicators within the Neuropsychological Assessment Battery. Appl Neuropsychol Adult 2022:1-7. [PMID: 35603608 DOI: 10.1080/23279095.2022.2076602] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The purpose of this study was to evaluate multiple embedded performance validity indicators within the Digits Forward and Digits Backward subtests of the Neuropsychological Assessment Battery (NAB), including Reliable Digit Span (RDS), as no published papers have examined embedded digit span validity indicators within these subtests of the NAB. Retrospective archival chart review was conducted at an outpatient neuropsychology clinic. Participants were 92 adults (ages 19-68) who completed NAB Digits Forward and Digits Backward, and the Word Choice Test (WCT). Receiver operating characteristic (ROC) curves, t-tests, and sensitivity and specificity analyses were conducted. Analyses showed that RDS demonstrated acceptable classification accuracy between those who passed the WCT and those who did not. The area under the curve (AUC) value for RDS was 0.702; however, AUC values for all other digit span indices were unacceptably low. The optimal cutoff for RDS was identified (<8). RDS for the NAB appears to be an adequate indicator of performance validity; however, considering the very small number of participants who were invalid on the WCT (n = 15), as well as the utilization of only one stand-alone PVT to classify validity status, these findings are preliminary and in need of replication.
Affiliation(s)
- Jacob L Varela
- College of Health and Behavioral Sciences, Regent University, Virginia Beach, VA, USA
- Anna S Ord
- College of Health and Behavioral Sciences, Regent University, Virginia Beach, VA, USA
- W.G. Hefner VA Medical Center, Salisbury, NC, USA
- Mid-Atlantic Mental Illness Research Education and Clinical Center, Durham, NC, USA
- Jacob I Phillips
- College of Health and Behavioral Sciences, Regent University, Virginia Beach, VA, USA
- Independent Private Practice, Virginia Beach, VA, USA
- Robert D Shura
- W.G. Hefner VA Medical Center, Salisbury, NC, USA
- Mid-Atlantic Mental Illness Research Education and Clinical Center, Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
- Scott W Sautter
- College of Health and Behavioral Sciences, Regent University, Virginia Beach, VA, USA
- Independent Private Practice, Virginia Beach, VA, USA
|
32
|
Performance Validity and Outcome of Cognitive Behavior Therapy in Patients with Chronic Fatigue Syndrome. J Int Neuropsychol Soc 2022; 28:473-482. [PMID: 34130768 DOI: 10.1017/s1355617721000643] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
OBJECTIVE There is limited research examining the impact of the validity of cognitive test performance on treatment outcome. All known studies to date have operationalized performance validity dichotomously, leading to the loss of predictive information. Using the range of scores on a performance validity test (PVT), we hypothesized that lower performance at baseline would be related to worse treatment outcome following cognitive behavioral therapy (CBT) in patients with Chronic Fatigue Syndrome (CFS) and to lower adherence to treatment. METHOD Archival data of 1081 outpatients treated with CBT for CFS were used in this study. At baseline, all patients were assessed with a PVT, the Amsterdam Short-Term Memory test (ASTM). Questionnaires assessing fatigue, physical disabilities, psychological distress, and level of functional impairment were administered before and after CBT. RESULTS Our main hypothesis was not confirmed: the total ASTM score was not significantly associated with outcomes at follow-up. However, patients with a missing follow-up assessment had lower ASTM performance at baseline, reported higher levels of physical limitations, and completed fewer therapy sessions. CONCLUSIONS CFS patients who scored low on the ASTM at baseline assessment were more likely to complete fewer therapy sessions and not to complete the follow-up assessment, indicative of limited adherence to treatment. However, if these patients were retained in the intervention, their response to CBT for CFS was comparable with that of patients who scored high on the ASTM. This finding calls for more research to better understand the impact of performance validity on engagement with treatment and outcomes.
|
33
|
Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing Performance Validity Cutoffs on the Boston Naming Test (BNT) Is Specific, but Insensitive to Non-Credible Responding. Dev Neuropsychol 2022; 47:17-31. [PMID: 35157548 DOI: 10.1080/87565641.2022.2038602] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
Affiliation(s)
- Shayna Nussbaum
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson
- Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
|
34
|
DiCarlo GM, Ernst WJ, Kneavel ME. An exploratory study of the convergent validity of the Test of Effort (TOE) in adults with acquired brain injury. Brain Inj 2022; 36:424-431. [PMID: 35113759 DOI: 10.1080/02699052.2022.2034953] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
PRIMARY OBJECTIVE To examine the convergent validity of the Test of Effort (TOE), a performance validity test (PVT) currently under development that employs a two-subtest (one verbal, one visual), forced-choice recognition memory format. RESEARCH DESIGN A descriptive, correlational design was employed to describe performance on the TOE and examine the convergent validity between the TOE and comparison measures. METHODS AND PROCEDURES A sample of 53 individuals with chronic acquired brain injury (ABI) were administered the TOE and three well-validated PVTs (Reliable Digit Span [RDS], Test of Memory Malingering [TOMM] and Dot Counting Test [DCT]). MAIN OUTCOMES AND RESULTS The TOE appeared more difficult than it actually was, suggesting adequate face validity. Medium-to-large correlations were observed between the TOE and established PVTs, suggesting good convergent validity. Provisional cutoff scores are offered based on performance of a subgroup of participants with "sufficient effort." CONCLUSIONS Overall, the TOE shows promise as a PVT measure for clinical use. Future studies with larger and more diverse samples are needed to more fully determine the psychometric characteristics of the TOE.
Affiliation(s)
- William J Ernst
- Department of Professional Psychology, Chestnut Hill College, Philadelphia, Pennsylvania, USA
- Meredith E Kneavel
- School of Nursing and Health Sciences, La Salle University, Philadelphia, Pennsylvania, USA
|
35
|
Rosen AS, King LC, Kinney DI, Nitch SR, Glassmire DM. Are TOPF and WRAT WR Interchangeable Measures among Psychiatric Inpatients? Arch Clin Neuropsychol 2022; 37:641-653. [PMID: 35034118 DOI: 10.1093/arclin/acab098] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2021] [Revised: 11/16/2021] [Accepted: 12/15/2021] [Indexed: 11/13/2022] Open
Abstract
OBJECTIVE To examine whether the Test of Premorbid Functioning (TOPF) and Wide Range Achievement Test-Word Reading subtest (WRAT WR) are interchangeable measures, and the relationship between these measures and intelligence, among patients with schizophrenia. METHOD In this archival study, the authors examined neuropsychology referrals of an inpatient forensic state hospital. Patients with a schizophrenia spectrum disorder (SSD) who received the Wechsler Adult Intelligence Scale-Fourth Edition or the Wechsler Abbreviated Scale of Intelligence-Second Edition and either the TOPF or WRAT WR were considered for inclusion. The final sample consisted of 119 individuals (73.1% male). RESULTS Although there was a linear relationship between most TOPF variables and WRAT WR, their concordance was weak (concordance correlation coefficients [CCC] < 0.90). Poor concordance was also observed between current FSIQ and all standard scores (SS) derived from word reading measures. FSIQ-word reading measure discrepancy scores differed significantly from a hypothesized mean of 0 (mean discrepancy range = -7.42 to -16.60). Discrepancies greater than one standard deviation (>1 SD) were highest among demographics-based SS (i.e., TOPF Predicted and Simple without TOPF). Performance-based SS, particularly TOPF Actual and WRAT4 WR, had the fewest discrepancy scores >1 SD from FSIQ. CONCLUSIONS The TOPF and WRAT WR should not be used interchangeably among institutionalized patients with SSDs. The TOPF and WRAT WR were discrepant from FSIQ, with demographic variables producing higher SS relative to performance-based variables. Future research is needed to determine which of these measures more accurately estimates intelligence among inpatients with SSDs.
Affiliation(s)
- Alexis S Rosen
- Department of Psychology, Department of State Hospitals-Patton, Patton, CA 92369, USA
- Loren C King
- Department of Psychology, Department of State Hospitals-Patton, Patton, CA 92369, USA
- Dominique I Kinney
- Department of Psychology, Department of State Hospitals-Patton, Patton, CA 92369, USA
- Stephen R Nitch
- Department of Psychology, Department of State Hospitals-Patton, Patton, CA 92369, USA
- David M Glassmire
- Department of Psychology, Department of State Hospitals-Patton, Patton, CA 92369, USA
|
36
|
OUP accepted manuscript. Arch Clin Neuropsychol 2022; 37:1158-1176. [DOI: 10.1093/arclin/acac020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/22/2022] [Indexed: 11/14/2022] Open
|
37
|
Omer E, Braw Y. The Effects of Cognitive Load on Strategy Utilization in a Forced-Choice Recognition Memory Performance Validity Test. EUROPEAN JOURNAL OF PSYCHOLOGICAL ASSESSMENT 2022. [DOI: 10.1027/1015-5759/a000636] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Despite the importance of detecting feigned cognitive impairment, we have a limited understanding of the theoretical foundation of the phenomenon and the factors that affect it. Studies of the formation and implementation of feigning strategies during neuropsychological assessments are few, though there are indications that such strategies tax cognitive resources. The current study assessed the effect of a cognitive load manipulation on feigning strategies. To achieve this aim, we utilized a 2 × 2 experimental design; condition (simulators/honest responders) and cognitive load (load/no load) were manipulated while participants (N = 154) performed a well-established performance validity test (PVT). The cognitive load manipulation reduced the quantity of feigning strategies while also affecting their composition (i.e., strategies tended to be more intuitive). This suggests that reduced cognitive resources among those feigning cognitive impairment may impact the use of in-vivo feigning strategies. These findings, though preliminary, will hopefully encourage further research to uncover the cognitive factors involved in the utilization of feigning strategies in neuropsychological assessments.
Affiliation(s)
- Elad Omer
- Department of Psychology, Ariel University, Israel
- Yoram Braw
- Department of Psychology, Ariel University, Israel
38
Messerly J, Soble JR, Webber TA, Alverson WA, Fullen C, Kraemer LD, Marceaux JC. Evaluation of the classification accuracy of multiple performance validity tests in a mixed clinical sample. APPLIED NEUROPSYCHOLOGY. ADULT 2021; 28:727-736. [PMID: 31835915 DOI: 10.1080/23279095.2019.1698581] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The Test of Memory Malingering (TOMM) and Word Memory Test (WMT) are among the most well-known performance validity tests (PVTs) and are regarded as gold-standard measures. Because many factors affect PVT selection, it is imperative that clinicians make informed clinical decisions about additional or alternative PVTs that demonstrate classification accuracy similar to these well-validated measures. The present archival study evaluated the agreement/classification accuracy of a large battery consisting of multiple other freestanding/embedded PVTs in a mixed clinical sample of 126 veterans. We examined failure rates for all standalone/embedded PVTs using established cut-scores and calculated pass/fail agreement rates and diagnostic odds ratios for various combinations of PVTs using the TOMM and WMT as criterion measures. The TOMM and WMT demonstrated the best agreement, followed by the Word Choice Test (WCT). The Rey Fifteen Item Test had an excessive number of false-negative errors and reduced classification accuracy. Of the embedded measures, the Digit Span age-corrected scaled score (DS-ACSS) had the highest agreement. Findings lend further support to the use of a combination of embedded and standalone PVTs in identifying suboptimal performance. Results provide data to enhance clinical decision making for neuropsychologists who implement combinations of PVTs in a larger clinical battery.
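The agreement statistics this abstract describes can be reproduced from a 2 × 2 cross-tabulation of a candidate PVT against a criterion PVT (e.g., the TOMM). A minimal Python sketch; the cell counts below are hypothetical illustrations, not the study's data:

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP/FN) / (FP/TN) = (TP*TN) / (FP*FN) from a 2x2 table
    of candidate-PVT fail/pass vs. criterion-PVT fail/pass."""
    if fp == 0 or fn == 0:
        raise ValueError("zero cell; apply a continuity correction (e.g., +0.5 per cell)")
    return (tp * tn) / (fp * fn)

def agreement_rate(tp, fp, fn, tn):
    """Proportion of cases where the two PVTs agree (both fail or both pass)."""
    return (tp + tn) / (tp + fp + fn + tn)

# Hypothetical cross-tabulation: candidate PVT vs. criterion PVT
tp, fp, fn, tn = 18, 6, 10, 92
print(round(diagnostic_odds_ratio(tp, fp, fn, tn), 1))  # 27.6
print(round(agreement_rate(tp, fp, fn, tn), 3))         # 0.873
```

A DOR of 1 means the candidate PVT fails at the same odds regardless of criterion status; larger values indicate stronger correspondence with the criterion measure.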
Affiliation(s)
- Johanna Messerly
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Jason R Soble
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Departments of Psychiatry and Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Troy A Webber
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Mental Health and Rehabilitation and Extended Carelines, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- W Alex Alverson
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Chrystal Fullen
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Lindsay D Kraemer
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Department of Neurology, University of Texas Health Science Center, San Antonio, TX, USA
39
Koenitzer JC, Herron JE, Whitlow JW, Barbuscak CM, Patel NR, Pletcher R, Christensen J. Development and Initial Validation of the Perceptual Assessment of Memory (PASSOM): A Simulator Study. Arch Clin Neuropsychol 2021; 36:1326-1340. [PMID: 33388765 DOI: 10.1093/arclin/acaa126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/08/2020] [Indexed: 11/12/2022] Open
Abstract
OBJECTIVE Performance validity tests (PVTs) are an integral component of neuropsychological assessment. There is a need for the development of more PVTs, especially those employing covert determinations. The aim of the present study was to provide initial validation of a new computerized PVT, the Perceptual Assessment of Memory (PASSOM). METHOD Participants were 58 undergraduate students randomly assigned to a simulator (SIM) or control (CON) group. All participants were provided written instructions for their role prior to testing and were administered the PASSOM as part of a brief battery of neurocognitive tests. Indices of interest included response accuracy for Trials 1 and 2 and total errors across trials, as well as response time (RT) for Trials 1 and 2 and total RT for both trials. RESULTS The SIM group produced significantly more errors than the CON group for Trials 1 and 2, and committed more total errors across trials. Significantly longer response latencies were found for the SIM group compared to the CON group for all RT indices examined. Linear regression modeling indicated excellent group classification for all indices studied, with areas under the curve ranging from 0.92 to 0.95. Sensitivity and specificity rates were good for several cut scores across all of the accuracy and RT indices, and sensitivity improved greatly by combining RT cut scores with the more traditional accuracy cut scores. CONCLUSION Findings demonstrate the ability of the PASSOM to distinguish individuals instructed to feign cognitive impairment from those told to perform to the best of their ability.
Affiliation(s)
- Justin C Koenitzer
- Neuropsychology Department, Orlando VA Medical Center, Orlando, FL 32827, USA
- Janice E Herron
- Neuropsychology Department, Orlando VA Medical Center, Orlando, FL 32827, USA
- Jesse W Whitlow
- Psychology Department, Rutgers University, Camden, NJ 08102, USA
- Nitin R Patel
- Department of Veterans Affairs, VHA Office of Community Care, Washington, DC 20420 USA
- Ryan Pletcher
- Psychology Department, Rutgers University, Camden, NJ 08102, USA
40
Exploring the Structured Inventory of Malingered Symptomatology in Patients with Multiple Sclerosis. PSYCHOLOGICAL INJURY & LAW 2021. [DOI: 10.1007/s12207-021-09424-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
41
Dunn A, Pyne S, Tyson B, Roth R, Shahein A, Erdodi L. Critical Item Analysis Enhances the Classification Accuracy of the Logical Memory Recognition Trial as a Performance Validity Indicator. Dev Neuropsychol 2021; 46:327-346. [PMID: 34525856 DOI: 10.1080/87565641.2021.1956499] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
OBJECTIVE: Replicate previous research on Logical Memory Recognition (LMRecog) and perform a critical item analysis. METHOD: Performance validity was psychometrically operationalized in a mixed clinical sample of 213 adults. Classification accuracy of the LMRecog and nine critical items (CR-9) was computed. RESULTS: LMRecog ≤20 produced a good combination of sensitivity (.30-.35) and specificity (.89-.90). CR-9 ≥5 and ≥6 had comparable classification accuracy. CR-9 ≥5 increased sensitivity by 4% over LMRecog ≤20; CR-9 ≥6 increased specificity by 6-8% over LMRecog ≤20; CR-9 ≥7 increased specificity by 8-15%. CONCLUSIONS: Critical item analysis enhances the classification accuracy of the optimal LMRecog cutoff (≤20).
Affiliation(s)
- Alexa Dunn
- Department of Psychology, University of Windsor, Windsor, Canada
- Sadie Pyne
- Windsor Neuropsychology, Windsor, Canada
- Brad Tyson
- Neuroscience Institute, Evergreen Neuroscience Institute, EvergreenHealth Medical Center, Kirkland, USA
- Robert Roth
- Neuropsychology Services, Dartmouth-Hitchcock Medical Center, USA
- Ayman Shahein
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
- Laszlo Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
42
Lace JW, Merz ZC, Galioto R. Nonmemory Composite Embedded Performance Validity Formulas in Patients with Multiple Sclerosis. Arch Clin Neuropsychol 2021; 37:309-321. [PMID: 34467368 DOI: 10.1093/arclin/acab066] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/21/2021] [Indexed: 11/13/2022] Open
Abstract
OBJECTIVE Research regarding performance validity tests (PVTs) in patients with multiple sclerosis (MS) is scant, and recommended batteries for neuropsychological evaluations in this population lack suggestions to include PVTs. Moreover, limited work has examined embedded PVTs in this population. As previous investigations indicated that nonmemory-based embedded PVTs provide clinical utility in other populations, this study sought to determine whether a logistic regression-derived PVT formula can be identified from selected nonmemory variables in a sample of patients with MS. METHOD A total of 184 patients (M age = 48.45; 76.6% female) with MS were referred for neuropsychological assessment at a large, Midwestern academic medical center. Patients were placed into "credible" (n = 146) or "noncredible" (n = 38) groups according to performance on a standalone PVT. Missing data were imputed with HOTDECK. RESULTS Classification statistics for a variety of embedded PVTs were examined, with none appearing psychometrically appropriate in isolation (areas under the curve [AUCs] = .48-.64). Four exponentiated equations were created via logistic regression. Six-, five-, and three-predictor equations yielded acceptable discriminability (AUC = .71-.74) with modest sensitivity (.34-.39) while maintaining good specificity (≥.90). The two-predictor equation appeared unacceptable (AUC = .67). CONCLUSIONS Results suggest that multivariate combinations of embedded PVTs may provide some clinical utility while minimizing test burden in determining performance validity in patients with MS. Nonetheless, the authors recommend routine inclusion of several PVTs and utilization of comprehensive clinical judgment to maximize signal detection of noncredible performance and avoid incorrect conclusions. Clinical implications, limitations, and avenues for future research are discussed.
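The exponentiated logistic-regression equations this abstract describes combine several embedded PVT scores into one predicted probability of noncredible performance, whose discriminability is then summarized by an AUC. A minimal sketch; the coefficients and score lists below are illustrative assumptions, not the study's equations or data:

```python
import math

def invalid_probability(scores, coefs, intercept):
    """Logistic model: P(noncredible) = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = intercept + sum(b * x for b, x in zip(coefs, scores))
    return 1.0 / (1.0 + math.exp(-z))

def auc(invalid_scores, valid_scores):
    """Rank-based AUC: probability a noncredible case outscores a credible one
    (ties count as half)."""
    wins = ties = 0
    for i in invalid_scores:
        for v in valid_scores:
            if i > v:
                wins += 1
            elif i == v:
                ties += 1
    return (wins + 0.5 * ties) / (len(invalid_scores) * len(valid_scores))

# Hypothetical predicted probabilities for each criterion group
invalid_p = [0.81, 0.64, 0.55, 0.47]
valid_p = [0.12, 0.22, 0.35, 0.51, 0.09]
print(auc(invalid_p, valid_p))  # 0.95
```

Sensitivity and specificity at a chosen probability cutoff are then read off this model, which is how a composite can hold specificity ≥ .90 while sensitivity stays modest.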
Affiliation(s)
- John W Lace
- Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA
- Zachary C Merz
- LeBauer Department of Neurology, The Moses H. Cone Memorial Hospital, Greensboro, NC, USA
- Rachel Galioto
- Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA
- Mellen Center for Multiple Sclerosis, Cleveland Clinic, Cleveland, OH, USA
43
Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021; 49:179-213. [PMID: 34420986 DOI: 10.3233/nre-218020] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
OBJECTIVE This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD Archival data were collected from 167 patients (52.4% male; MAge = 39.7) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to individual components. Instrumentation artifacts are endemic to PVTs, and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical models. As the number/severity of embedded PVT failures accumulates, assessors must consider the possibility of non-credible presentation and its clinical implications to neurorehabilitation.
44
Waldron-Perrine B, Rai JK, Chao D. Therapeutic assessment and the art of feedback: A model for integrating evidence-based assessment and therapy techniques in neurological rehabilitation. NeuroRehabilitation 2021; 49:293-306. [PMID: 34420989 DOI: 10.3233/nre-218027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Therapeutic assessment involves the integration of evidence-based approaches and humanistic principles, and there is empirical support for the use of this approach in the context of neuropsychological assessment broadly. OBJECTIVE We propose that therapeutic assessment (TA) and collaborative therapeutic neuropsychological assessment (CTNA) principles are appropriate and effective for application within a neurological rehabilitation population specifically. METHODS We review TA and CTNA principles and propose a model for their application to a neurological rehabilitation population, with an emphasis on describing the strengths of the collaborative approach, guidelines and principles for maximizing the efficacy of feedback, and transitioning the patient into psychotherapy services to further address their personal goals. A case example of a neurologically injured individual engaged in CTNA and subsequent intervention is shared to highlight the principles discussed. RESULTS AND CONCLUSION The proposed model and case study demonstrate the clinical utility of TA and CTNA principles with a neurological rehabilitation population.
Affiliation(s)
- Brigid Waldron-Perrine
- Department of Physical Medicine and Rehabilitation, Rehabilitation Psychology and Neuropsychology, University of Michigan, Ann Arbor, MI, USA
- Jaspreet K Rai
- Precision Neuropsychological Assessments, Edmonton, AB, Canada
- Dominique Chao
- Department of Physical Medicine and Rehabilitation, Rehabilitation Psychology and Neuropsychology, University of Michigan, Ann Arbor, MI, USA
45
Braun SE, Fountain-Zaragoza S, Halliday CA, Horner MD. Demographic differences in performance validity test failure. APPLIED NEUROPSYCHOLOGY. ADULT 2021:1-9. [PMID: 34428386 DOI: 10.1080/23279095.2021.1958814] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
OBJECTIVE The present study investigated demographic differences in performance validity test (PVT) failure in a Veteran sample. METHOD Data were extracted from clinical neuropsychological evaluations. Only veterans who identified as men and as either European American/White (EA) or African American/Black (AA) were included (n = 1261). We investigated whether performance on two frequently used PVTs, the Test of Memory Malingering (TOMM) and the Medical Symptom Validity Test (MSVT), differed by age, education, and race using separate logistic regressions. RESULTS Veterans with younger age, less education, and Veterans Affairs (VA) service-connected disability were significantly more likely to fail both PVTs. Race was not a significant predictor of MSVT failure, but AA patients were significantly more likely than EA patients to fail the TOMM. For all significant demographic predictors in the models, effects were small. In a subsample of patients who were given both PVTs (n = 461), the effects of race on performance remained. CONCLUSIONS Performance on the TOMM and MSVT differed by age and level of education. Performance on the TOMM differed between EA and AA patients, whereas performance on the MSVT did not. These results suggest that demographic factors may play a small but measurable role in performance on specific PVTs.
Affiliation(s)
- Sarah Ellen Braun
- Department of Neurology, Virginia Commonwealth University, Richmond, VA, USA
- Massey Cancer Center, Richmond, VA, USA
- Colleen A Halliday
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Michael David Horner
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
46
Abeare CA, An K, Tyson B, Holcomb M, Cutler L, May N, Erdodi LA. The emotion word fluency test as an embedded performance validity indicator - Alone and in a multivariate validity composite. APPLIED NEUROPSYCHOLOGY. CHILD 2021; 11:713-724. [PMID: 34424798 DOI: 10.1080/21622965.2021.1939027] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
OBJECTIVE This project was designed to cross-validate existing performance validity cutoffs embedded within measures of verbal fluency (FAS and animals) and develop new ones for the Emotion Word Fluency Test (EWFT), a novel measure of category fluency. METHOD The classification accuracy of the verbal fluency tests was examined in two samples (70 cognitively healthy university students and 52 clinical patients) against psychometrically defined criterion measures. RESULTS A demographically adjusted T-score of ≤31 on the FAS was specific (.88-.97) to noncredible responding in both samples. Animals T ≤ 29 achieved high specificity (.90-.93) among students at .27-.38 sensitivity. A more conservative cutoff (T ≤ 27) was needed in the patient sample for a similar combination of sensitivity (.24-.45) and specificity (.87-.93). An EWFT raw score ≤5 was highly specific (.94-.97) but insensitive (.10-.18) to invalid performance. Failing multiple cutoffs improved specificity (.90-1.00) at variable sensitivity (.19-.45). CONCLUSIONS Results help resolve the inconsistency in previous reports, and confirm the overall utility of existing verbal fluency tests as embedded validity indicators. Multivariate models of performance validity assessment are superior to single indicators. The clinical utility and limitations of the EWFT as a novel measure are discussed.
Affiliation(s)
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Kelly An
- Private Practice, London, Ontario, Canada
- Brad Tyson
- Evergreen Health Medical Center, Kirkland, Washington, USA
- Matthew Holcomb
- Jefferson Neurobehavioral Group, New Orleans, Louisiana, USA
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
47
Kanser R, O'Rourke J, Silva MA. Performance validity testing via telehealth and failure rate in veterans with moderate-to-severe traumatic brain injury: A veterans affairs TBI model systems study. NeuroRehabilitation 2021; 49:169-177. [PMID: 34397429 DOI: 10.3233/nre-218019] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND The COVID-19 pandemic has led to increased utilization of teleneuropsychology (TeleNP) services. Unfortunately, investigations of performance validity tests (PVTs) delivered via TeleNP are sparse. OBJECTIVE The purpose of this study was to examine the specificity of the Reliable Digit Span (RDS) and the 21-item test administered via telephone. METHOD Participants were 51 veterans with moderate-to-severe traumatic brain injury (TBI). All participants completed the RDS and 21-item test in the context of a larger TeleNP battery. Specificity rates were examined across multiple cutoffs for both PVTs. RESULTS Consistent with research employing traditional face-to-face neuropsychological evaluations, both PVTs maintained adequate specificity (i.e., > 90%) across previously established cutoffs. Specifically, defining performance invalidity as RDS < 7 or 21-item test forced-choice total correct < 11 led to < 10% false-positive classification errors. CONCLUSIONS Findings add to the limited body of research on telephone-administered PVTs and provide preliminary support for the use of the RDS and 21-item test in TeleNP via telephone. Both measures maintained adequate specificity in veterans with moderate-to-severe TBI. Future investigations including clinical or experimental "feigners" in a counter-balanced cross-over design (i.e., face-to-face vs. TeleNP) are recommended.
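Because every participant in this design is presumed to be performing validly, specificity at a cutoff is simply the proportion of examinees who are not flagged. A brief sketch using hypothetical RDS scores (not the study's data), with failure defined as RDS < 7 as in the abstract:

```python
def specificity_at_cutoff(scores, cutoff):
    """In a presumed-valid sample, specificity = proportion passing
    (scoring at or above the cutoff); every failure is a false positive."""
    return sum(1 for s in scores if s >= cutoff) / len(scores)

# Hypothetical RDS scores from a presumed-valid group
rds_scores = [9, 8, 7, 11, 6, 10, 12, 7, 8, 9]
print(specificity_at_cutoff(rds_scores, 7))  # 0.9
```

A specificity of 0.90 here corresponds to the ≤ 10% false-positive rate the study reports as adequate.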
Affiliation(s)
- Robert Kanser
- Mental Health & Behavioral Sciences Section (MHBSS), James A. Haley Veterans' Hospital, Tampa, FL, USA
- Justin O'Rourke
- Polytrauma Section, Audie L. Murphy Memorial Veterans' Hospital, San Antonio, TX, USA
- Marc A Silva
- Mental Health & Behavioral Sciences Section (MHBSS), James A. Haley Veterans' Hospital, Tampa, FL, USA
- Department of Internal Medicine, University of South Florida, Tampa, FL, USA
- Department of Psychiatry and Behavioral Neurosciences, University of South Florida, Tampa, FL, USA
- Department of Psychology, University of South Florida, Tampa, FL, USA
48
The Multi-Level Pattern Memory Test (MPMT): Initial Validation of a Novel Performance Validity Test. Brain Sci 2021; 11:brainsci11081039. [PMID: 34439658 PMCID: PMC8393330 DOI: 10.3390/brainsci11081039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 07/30/2021] [Accepted: 08/01/2021] [Indexed: 11/16/2022] Open
Abstract
Performance validity tests (PVTs) are used for the detection of noncredible performance in neuropsychological assessments. The aim of the study was to assess the efficacy (i.e., discrimination capacity) of a novel PVT, the Multi-Level Pattern Memory Test (MPMT). It includes stages that allow profile analysis (i.e., detecting noncredible performance based on an analysis of participants' performance across stages) and minimizes the likelihood that it would be perceived as a PVT by examinees. In addition, it utilizes nonverbal stimuli and is therefore more likely to be cross-culturally valid. In Experiment 1, participants that were instructed to simulate cognitive impairment performed less accurately than honest controls in the MPMT (n = 67). Importantly, the MPMT has shown an adequate discrimination capacity, though somewhat lower than an established PVT (i.e., Test of Memory Malingering-TOMM). Experiment 2 (n = 77) validated the findings of the first experiment while also indicating a dissociation between the simulators' objective performance and their perceived cognitive load while performing the MPMT. The MPMT and the profile analysis based on its outcome measures show initial promise in detecting noncredible performance. It may, therefore, increase the range of available PVTs at the disposal of clinicians, though further validation in clinical settings is mandated. The fact that it is an open-source software will hopefully also encourage the development of research programs aimed at clarifying the cognitive processes involved in noncredible performance and the impact of PVT characteristics on clinical utility.
49
Sullivan KA, Bennett D. An Experimental Study of the Effects of Biased Responding on the Modified Rivermead Post-concussion Symptoms Questionnaire and Validity Indicators. PSYCHOLOGICAL INJURY & LAW 2021. [DOI: 10.1007/s12207-021-09419-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
50
Rhoads T, Neale AC, Resch ZJ, Cohen CD, Keezer RD, Cerny BM, Jennette KJ, Ovsiew GP, Soble JR. Psychometric implications of failure on one performance validity test: a cross-validation study to inform criterion group definition. J Clin Exp Neuropsychol 2021; 43:437-448. [PMID: 34233580 DOI: 10.1080/13803395.2021.1945540] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Introduction: Research to date has supported the use of multiple performance validity tests (PVTs) for determining validity status in clinical settings. However, the implications of including versus excluding patients failing one PVT remain a source of debate, and methodological guidelines for PVT research are lacking. This study evaluated three validity classification approaches (i.e., 0 vs. ≥2, 0-1 vs. ≥2, and 0 vs. ≥1 PVT failures) using three reference standards (i.e., criterion PVT groupings) to recommend approaches best suited to establishing validity groups in PVT research methodology. Method: A mixed clinical sample of 157 patients was administered freestanding PVTs (Medical Symptom Validity Test, Dot Counting Test, Test of Memory Malingering, Word Choice Test) and embedded PVTs (Reliable Digit Span, RAVLT Effort Score, Stroop Word Reading, BVMT-R Recognition Discrimination) during outpatient neuropsychological evaluation. Three reference standards (i.e., two freestanding and three embedded PVTs from the above list) were created. The Rey 15-Item Test and RAVLT Forced Choice were used solely as outcome measures, in addition to two freestanding PVTs not employed in the reference standard. Receiver operating characteristic curve analyses evaluated classification accuracy using the three validity classification approaches for each reference standard. Results: When patients failing only one PVT were excluded or classified as valid, classification accuracy ranged from acceptable to excellent. However, classification accuracy was poor to acceptable when patients failing one PVT were classified as invalid. Sensitivity/specificity across two of the validity classification approaches (0 vs. ≥2; 0-1 vs. ≥2) remained reasonably stable. Conclusions: These results reflect that both inclusion and exclusion of patients failing one PVT are acceptable approaches to PVT research methodology, and the choice of method likely depends on the study rationale. However, including such patients in the invalid group yields unacceptably poor classification accuracy across a number of psychometrically robust outcome measures and therefore is not recommended.
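The three criterion-group definitions compared in this study differ only in how patients with exactly one PVT failure are handled: excluded, counted as valid, or counted as invalid. A small sketch of that grouping logic (the failure counts in the usage line are hypothetical):

```python
def classify(n_failures, scheme):
    """Assign a patient to a validity group by PVT failure count.
    Returns 'valid', 'invalid', or None (excluded from analysis)."""
    if scheme == "0 vs >=2":
        # Single-failure patients are excluded entirely
        if n_failures == 0:
            return "valid"
        if n_failures >= 2:
            return "invalid"
        return None
    if scheme == "0-1 vs >=2":
        # Single failures count as valid
        return "valid" if n_failures <= 1 else "invalid"
    if scheme == "0 vs >=1":
        # Single failures count as invalid
        return "valid" if n_failures == 0 else "invalid"
    raise ValueError(f"unknown scheme: {scheme}")

for count in (0, 1, 2):
    print(count, [classify(count, s) for s in ("0 vs >=2", "0-1 vs >=2", "0 vs >=1")])
```

The study's finding is that the first two schemes yield acceptable-to-excellent classification accuracy, while the third (single failures treated as invalid) does not.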
Affiliation(s)
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
| | - Alec C Neale
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
| | - Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
| | - Cari D Cohen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
| | - Richard D Keezer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Wheaton College, Wheaton, IL, USA
| | - Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
| | - Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA