1
Ramanauskas B, Nixon TM, Finley JCA, VanLandingham HB, Leese MI, Ulrich DM, Ovsiew GP, Cerny BM, Phillips MS, Soble JR, Robinson AD. Analyzing the relationship between processing speed impairment and Rey-15 item test performance. J Clin Exp Neuropsychol 2024:1-11. PMID: 39329256. DOI: 10.1080/13803395.2024.2406241.
Abstract
OBJECTIVE This study investigated the relationship between processing speed impairment severity and performance on the Rey 15-Item Test (RFIT) and RFIT + Recognition. METHOD Cross-sectional data from 285 examinees (228 valid/57 invalid) referred for neuropsychological assessment were included; all were administered the RFIT, Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) Processing Speed Index (PSI), Brief Visuospatial Memory Test-Revised, Rey Auditory Verbal Learning Test, and three independent criterion PVTs. PSI bands were operationalized as Intact (≥85 SS; n = 163), Reduced/Possibly Impaired (77-84 SS; n = 36), or Impaired (≤76 SS; n = 29). Receiver operating characteristic (ROC) curve analyses tested the RFIT and RFIT + Recognition's classification accuracy for detecting invalid performance in the overall sample and by PSI impairment status. RESULTS Those with intact processing speed performed significantly better on the RFIT and RFIT + Recognition than those with reduced/possibly impaired and impaired processing speed. Though verbal/visual memory independently predicted RFIT scores, PSI contributed additional variance. ROC curves for the RFIT and RFIT + Recognition were significant (AUC = .64-.84). Optimal cut-scores yielded modest sensitivity (30%-63%) and high specificity (89%-93%) among those with intact and reduced processing speed but unacceptable accuracy in those with impaired speed (AUC = .59-.62). CONCLUSIONS Although the RFIT and RFIT + Recognition demonstrated acceptable classification accuracy in those with intact processing speed, accuracy diminished with increasing speed impairment. This finding was more pronounced for the RFIT + Recognition than for the traditional RFIT. As such, the RFIT may have limited clinical utility in examinees with more significant processing speed deficits.
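The ROC analyses described in this abstract rest on a simple probabilistic identity: AUC equals the probability that a randomly chosen invalid protocol scores below a randomly chosen valid one (ties counting half). A minimal sketch of that computation, using made-up scores rather than the study's data:

```python
def roc_auc(invalid_scores, valid_scores):
    """AUC via the Mann-Whitney formulation: the probability that a random
    invalid case scores below a random valid case (ties count as 0.5).
    Assumes lower scores indicate invalid performance, as on the RFIT."""
    pairs = [(i, v) for i in invalid_scores for v in valid_scores]
    wins = sum(1.0 if i < v else 0.5 if i == v else 0.0 for i, v in pairs)
    return wins / len(pairs)

# Hypothetical RFIT-style scores (illustration only, not the study's data)
invalid = [7, 9, 10, 11]
valid = [11, 13, 14, 15]
print(round(roc_auc(invalid, valid), 2))
```

An AUC of .5 means the test carries no signal; values in the .64-.84 range reported above indicate a fair-to-good separation between valid and invalid performers.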
Affiliation(s)
- Brian Ramanauskas
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, The Chicago School, Chicago, IL, USA
- Tana M Nixon
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Midwestern University, Downers Grove, IL, USA
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Hannah B VanLandingham
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Mira I Leese
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Devin M Ulrich
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Matthew S Phillips
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Anthony D Robinson
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
2
Webber TA, Lorkiewicz S, Woods SP, Miller B, Soble JR. Does neuropsychological intraindividual variability index cognitive dysfunction, an invalid presentation, or both? Preliminary findings from a mixed clinical older adult veteran sample. J Clin Exp Neuropsychol 2024; 46:535-556. PMID: 39120111. DOI: 10.1080/13803395.2024.2388096.
Abstract
INTRODUCTION Intraindividual variability across a battery of neuropsychological tests (IIV-dispersion) can reflect normal variation in scores or arise from cognitive impairment. An alternate interpretation is that IIV-dispersion reflects reduced engagement/invalid test data, although extant research addressing this interpretation is limited. METHOD We used a sample of 97 older adult (mean age: 69.92), predominantly White (57%) or Black/African American (34%), and predominantly cis-gender male (87%) veterans. Examinees completed a comprehensive neuropsychological battery, including measures of reduced engagement/invalid test data (a symptom validity test [SVT] and multiple performance validity tests [PVTs]), as part of a clinical evaluation. IIV-dispersion was indexed using the coefficient of variance (CoV). We tested 1) the relationships of raw scores and "failures" on the SVT/PVTs with IIV-dispersion, 2) the relationship between IIV-dispersion and validity/neurocognitive disorder status, and 3) whether IIV-dispersion discriminated the validity/neurocognitive disorder groups using receiver operating characteristic (ROC) curves. RESULTS IIV-dispersion was significantly and independently associated with a selection of PVTs, with small to very large effect sizes. Participants with invalid profiles and cognitively impaired participants with valid profiles exhibited medium to large (d = .55-1.09) elevations in IIV-dispersion compared to cognitively unimpaired participants with valid profiles. A non-significant but small to medium (d = .35-.60) elevation in IIV-dispersion was observed for participants with invalid profiles compared to those with a neurocognitive disorder. IIV-dispersion was largely accurate at differentiating participants without a neurocognitive disorder from invalid participants and those with a neurocognitive disorder (areas under the curve [AUCs] = .69-.83), while accuracy was low for differentiating invalid participants from those with a neurocognitive disorder (AUCs = .58-.65). CONCLUSIONS These preliminary data suggest IIV-dispersion may be sensitive to both neurocognitive disorders and compromised engagement. Clinicians and researchers should exercise due diligence and consider test validity (e.g., PVTs, behavioral signs of engagement) as an alternate explanation prior to interpreting intraindividual variability as an indicator of cognitive impairment.
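The dispersion index used in this abstract, the coefficient of variance (CoV), is simply the standard deviation of one examinee's scores across the battery divided by their mean. A minimal sketch with made-up profiles (scores must share a common metric, e.g., T-scores):

```python
import statistics

def dispersion_cov(scores):
    """IIV-dispersion across a test battery, indexed as the coefficient
    of variation: SD of one examinee's scores divided by their mean.
    The profiles below are hypothetical, not the study's data."""
    return statistics.stdev(scores) / statistics.mean(scores)

flat_profile = [50, 51, 49, 50, 52, 48]      # low dispersion
variable_profile = [62, 38, 55, 29, 60, 41]  # high dispersion
print(round(dispersion_cov(flat_profile), 3),
      round(dispersion_cov(variable_profile), 3))
```

The study's point is interpretive, not computational: a high CoV by itself cannot distinguish genuine impairment from disengagement, so validity indicators must be consulted before the dispersion is read as cognitive dysfunction.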
Affiliation(s)
- Troy A Webber
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Department of Psychiatry & Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
- Department of Psychology, University of Houston, Houston, TX, USA
- Sara Lorkiewicz
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Brian Miller
- Department of Psychiatry & Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
- Neurology Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
3
Parsons J, Rodrigues NB, Erdodi LA. The classification accuracy of Warrington's Recognition Memory Test (Words) as a performance validity test in a neurorehabilitation setting. Appl Neuropsychol Adult 2024:1-11. PMID: 38913011. DOI: 10.1080/23279095.2024.2337130.
Abstract
This study was designed to evaluate the classification accuracy of Warrington's Recognition Memory Test (RMT) as a performance validity test (PVT) in 167 patients (97, or 58.1%, men; MAge = 40.4; MEducation = 13.8) medically referred for neuropsychological evaluation, against five psychometrically defined criterion groups. At the optimal cutoff (≤42), the RMT produced an acceptable combination of sensitivity (.36-.60) and specificity (.85-.95), correctly classifying 68.4-83.3% of the sample. Making the cutoff more conservative (≤41) improved specificity (.88-.95) at the expense of sensitivity (.30-.60). Lowering the cutoff to ≤40 achieved uniformly high specificity (.91-.95) but diminished sensitivity (.27-.48). RMT scores were unrelated to lateral dominance, education, or gender. The RMT was sensitive to a three-way classification of performance validity (Pass/Borderline/Fail), further demonstrating its discriminant power. Despite a notable decline in research focused on its classification accuracy within the last decade, the RMT remains an effective free-standing PVT that is robust to demographic variables. Relatively low sensitivity is its main liability. Further research is needed on its cross-cultural validity (sensitivity to limited English proficiency).
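The cutoff analysis in this abstract illustrates the standard sensitivity/specificity trade-off: lowering a "score ≤ cutoff fails" threshold flags fewer invalid performers (lower sensitivity) but misclassifies fewer valid ones (higher specificity). A minimal sketch with hypothetical scores (not the study's data):

```python
def cutoff_accuracy(invalid_scores, valid_scores, cutoff):
    """Sensitivity, specificity, and overall correct classification (OCC)
    of a 'score <= cutoff fails' decision rule. Inputs are illustrative."""
    sens = sum(s <= cutoff for s in invalid_scores) / len(invalid_scores)
    spec = sum(s > cutoff for s in valid_scores) / len(valid_scores)
    n = len(invalid_scores) + len(valid_scores)
    occ = (sens * len(invalid_scores) + spec * len(valid_scores)) / n
    return sens, spec, occ

invalid = [38, 40, 41, 42, 44]    # hypothetical invalid-group scores
valid = [41, 43, 44, 46, 48, 50]  # hypothetical valid-group scores
for cut in (42, 41, 40):          # progressively more conservative cutoffs
    print(cut, cutoff_accuracy(invalid, valid, cut))
```

In validity research specificity is usually prioritized (≥.90 is the conventional floor), which is why the abstract treats diminished sensitivity, rather than false positives, as the RMT's main liability.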
Affiliation(s)
- Jenna Parsons
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Nelson B Rodrigues
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Star UBB Institute, Babeș-Bolyai University, Cluj-Napoca, Romania
4
Giromini L, Pignolo C, Zennaro A, Sellbom M. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT in the In-Person and Remote Administration Formats: A Simulation Study on Feigned mTBI. Assessment 2024:10731911241235465. PMID: 38468147. DOI: 10.1177/10731911241235465.
Abstract
Our study compared the impact of administering Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs) in in-person versus remote formats and assessed different approaches to combining validity test results. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT, we assessed 164 adults, with half instructed to feign mild traumatic brain injury (mTBI) and half to respond honestly. Within each subgroup, half completed the tests in person, and the other half completed them online via videoconferencing. Results from 2 × 2 analyses of variance showed no significant effects of administration format on SVT and PVT scores. When comparing feigners to controls, the MMPI-2-RF RBS exhibited the largest effect size (d = 3.05) among all examined measures. Accordingly, we conducted a series of two-step hierarchical logistic regression models, entering the MMPI-2-RF RBS first, followed by each other SVT and PVT individually. The IOP-29 and IOP-M were the only measures that yielded incremental validity beyond the effects of the MMPI-2-RF RBS in predicting group membership. Taken together, these findings suggest that administering these SVTs and PVTs in person or remotely yields similar results, and that the combination of MMPI and IOP indexes may be particularly effective in identifying feigned mTBI.
5
Tyson BT, Shahein A, Abeare CA, Baker SD, Kent K, Roth RM, Erdodi LA. Replicating a Meta-Analysis: The Search for the Optimal Word Choice Test Cutoff Continues. Assessment 2023; 30:2476-2490. PMID: 36752050. DOI: 10.1177/10731911221147043.
Abstract
This study was designed to expand on a recent meta-analysis that identified ≤42 as the optimal cutoff on the Word Choice Test (WCT). We examined the base rate of failure and the classification accuracy of various WCT cutoffs in four independent clinical samples (N = 252) against various psychometrically defined criterion groups. WCT ≤ 47 achieved acceptable combinations of specificity (.86-.89) at .49 to .54 sensitivity. Lowering the cutoff to ≤45 improved specificity (.91-.98) at a reasonable cost to sensitivity (.39-.50). Making the cutoff even more conservative (≤42) disproportionately sacrificed sensitivity (.30-.38) for specificity (.98-1.00), while still classifying 26.7% of patients with genuine and severe deficits as non-credible. Critical item (.23-.45 sensitivity at .89-1.00 specificity) and time-to-completion cutoffs (.48-.71 sensitivity at .87-.96 specificity) were effective alternative/complementary detection methods. Although WCT ≤ 45 produced the best overall classification accuracy, scores in the 43 to 47 range provide comparable objective psychometric evidence of non-credible responding. Results question the need for designating a single cutoff as "optimal," given the heterogeneity of signal detection environments in which individual assessors operate. As meta-analyses often fail to replicate, ongoing research is needed on the classification accuracy of various WCT cutoffs.
Affiliation(s)
- Robert M Roth
- Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
6
Dong H, Koerts J, Pijnenborg GHM, Scherbaum N, Müller BW, Fuermaier ABM. Cognitive Underperformance in a Mixed Neuropsychiatric Sample at Diagnostic Evaluation of Adult ADHD. J Clin Med 2023; 12:6926. PMID: 37959391. PMCID: PMC10647211. DOI: 10.3390/jcm12216926.
Abstract
(1) Background: The clinical assessment of attention-deficit/hyperactivity disorder (ADHD) in adulthood is known to show non-trivial base rates of noncredible performance and requires thorough validity assessment. (2) Objectives: The present study estimated base rates of noncredible performance in clinical evaluations of adult ADHD on one or more of 17 embedded validity indicators (EVIs). It further examined the effect of the order of test administration on EVI failure rates, the association between cognitive underperformance and symptom overreporting, and the prediction of cognitive underperformance by clinical information. (3) Methods: A mixed neuropsychiatric sample (N = 464, ADHD = 227) completed a comprehensive neuropsychological assessment battery on the Vienna Test System (VTS; CFADHD). Test performance allows the computation of 17 embedded performance validity indicators derived from eight different neuropsychological tests. Further, all participants completed several self- and other-report symptom rating scales assessing depressive symptoms and cognitive functioning. The Conners' Adult ADHD Rating Scale and the Beck Depression Inventory-II were administered to derive embedded symptom validity measures (SVTs). (4) Results and conclusion: Noncredible performance occurred in a sizeable proportion (about 10% to 30%) of individuals throughout the battery. Tests of attention and concentration appear to be the most adequate and sensitive for detecting underperformance. Cognitive underperformance represents a coherent construct and seems dissociable from symptom overreporting. These results emphasize the importance of administering multiple PVTs at different time points and support more accurate calculation of the positive and negative predictive values of a given validity measure for noncredible performance during clinical assessments. Future studies should examine whether the present results hold in other clinical populations by implementing rigorous reference standards of noncredible performance, characterizing those failing PVT assessments, and differentiating between underlying motivations.
Affiliation(s)
- Hui Dong
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, University of Groningen, 9712 TS Groningen, The Netherlands
- Janneke Koerts
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, University of Groningen, 9712 TS Groningen, The Netherlands
- Gerdina H. M. Pijnenborg
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, University of Groningen, 9712 TS Groningen, The Netherlands
- Norbert Scherbaum
- LVR University Hospital, Department of Psychiatry and Psychotherapy, Faculty of Medicine, University of Duisburg-Essen, 45147 Essen, Germany
- Bernhard W. Müller
- LVR University Hospital, Department of Psychiatry and Psychotherapy, Faculty of Medicine, University of Duisburg-Essen, 45147 Essen, Germany
- Department of Psychology, University of Wuppertal, 42119 Wuppertal, Germany
- Anselm B. M. Fuermaier
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, University of Groningen, 9712 TS Groningen, The Netherlands
7
Finley JCA, Brooks JM, Nili AN, Oh A, VanLandingham HB, Ovsiew GP, Ulrich DM, Resch ZJ, Soble JR. Multivariate examination of embedded indicators of performance validity for ADHD evaluations: A targeted approach. Appl Neuropsychol Adult 2023:1-14. PMID: 37703401. DOI: 10.1080/23279095.2023.2256440.
Abstract
This study investigated the individual and combined utility of 10 embedded validity indicators (EVIs) within executive functioning, attention/working memory, and processing speed measures in 585 adults referred for an attention-deficit/hyperactivity disorder (ADHD) evaluation. Participants were categorized into invalid and valid performance groups as determined by scores from empirical performance validity indicators. Analyses revealed that all of the EVIs could meaningfully discriminate invalid from valid performers (AUCs = .69-.78), with high specificity (≥90%) but low sensitivity (19%-51%). However, none of them explained more than 20% of the variance in validity status. Combining any of these 10 EVIs into a multivariate model significantly improved classification accuracy, explaining up to 36% of the variance in validity status. Integrating six EVIs from the Stroop Color and Word Test, Trail Making Test, Verbal Fluency Test, and Wechsler Adult Intelligence Scale-Fourth Edition was as efficacious (AUC = .86) as using all 10 EVIs together. Failing any two of these six EVIs or any three of the 10 EVIs yielded clinically acceptable specificity (≥90%) with moderate sensitivity (60%). Findings support the use of multivariate models to improve the identification of performance invalidity in ADHD evaluations, but chaining multiple EVIs may only be helpful to an extent.
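The multivariate decision rule described in this abstract ("failing any two of six EVIs, or any three of ten") is a simple k-of-n count over indicator-specific cutoffs. A minimal sketch with hypothetical scores and cutoffs (the real EVIs and thresholds come from the cited instruments, not from this code):

```python
def evi_failures(scores, cutoffs):
    """Count failed embedded validity indicators; an EVI is failed when
    its score falls at or below its cutoff (direction is illustrative)."""
    return sum(s <= c for s, c in zip(scores, cutoffs))

def flag_invalid(scores, cutoffs, min_failures=2):
    """Multivariate rule: flag the profile only when at least
    `min_failures` EVIs are failed, which preserves specificity relative
    to acting on any single EVI failure."""
    return evi_failures(scores, cutoffs) >= min_failures

# Six hypothetical EVI scores against six hypothetical cutoffs
cutoffs = [30, 25, 40, 35, 28, 33]
print(flag_invalid([31, 24, 39, 36, 29, 34], cutoffs))  # two failures -> flagged
```

Requiring multiple failures is what lets each individual EVI stay lenient (high specificity, modest sensitivity) while the aggregate rule recovers sensitivity, which is the pattern the study reports.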
Affiliation(s)
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School, Chicago, IL, USA
- Julia M Brooks
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, University of Illinois at Chicago, Chicago, IL, USA
- Amanda N Nili
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Medical Social Sciences, Northwestern University Feinberg School, Chicago, IL, USA
- Alison Oh
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Hannah B VanLandingham
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Devin M Ulrich
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
8
Erdodi LA. From "below chance" to "a single error is one too many": Evaluating various thresholds for invalid performance on two forced choice recognition tests. Behav Sci Law 2023; 41:445-462. PMID: 36893020. DOI: 10.1002/bsl.2609.
Abstract
This study was designed to empirically evaluate the classification accuracy of various definitions of invalid performance on two forced-choice recognition performance validity tests (PVTs; FCRCVLT-II and Test of Memory Malingering [TOMM-2]). The proportion of at- and below-chance-level responding (defined by binomial theory) and of making any errors was computed across two mixed clinical samples from the United States and Canada (N = 470) and two sets of criterion PVTs. There was virtually no overlap between the binomial and empirical distributions. Over 95% of patients who passed all PVTs obtained a perfect score. At-chance-level responding was limited to patients who failed ≥2 PVTs (91% of them failed 3 PVTs). No one scored below chance level on the FCRCVLT-II or TOMM-2. All 40 patients with dementia scored above chance. Although at- or below-chance-level performance provides very strong evidence of non-credible responding, scores above chance level have no negative predictive value. Even at-chance-level scores on PVTs provide compelling evidence of a non-credible presentation. A single error on the FCRCVLT-II or TOMM-2 is highly specific (.95) to psychometrically defined invalid performance. Defining non-credible responding as below-chance-level scores is an unnecessarily restrictive threshold that gives most examinees with invalid profiles a Pass.
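The "below chance" threshold this abstract critiques comes straight from the binomial distribution: on a two-alternative forced-choice trial, a pure guesser gets each item right with probability .5, so the chance of any given score can be computed exactly. A minimal sketch (item count and alpha are generic, not tied to a specific instrument):

```python
from math import comb

def p_at_or_below(score, n_items, p_guess=0.5):
    """Binomial tail probability: chance of getting `score` or fewer items
    correct by guessing alone on an n-item two-alternative forced-choice
    trial. 'Below chance' flags scores whose tail probability falls under
    a chosen alpha (e.g., .05)."""
    return sum(comb(n_items, k) * p_guess ** k * (1 - p_guess) ** (n_items - k)
               for k in range(score + 1))

# Highest score on a 50-item trial still 'significantly below chance'
# at alpha = .05:
threshold = max(s for s in range(51) if p_at_or_below(s, 50) < 0.05)
print(threshold)
```

The study's point is that this statistically elegant threshold is almost never reached in practice, so insisting on it passes most invalid profiles; even a single error carried more diagnostic information in these samples.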
Affiliation(s)
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
9
Tyson BT, Pyne SR, Crisan I, Calamia M, Holcomb M, Giromini L, Erdodi LA. Logical memory, visual reproduction, and verbal paired associates are effective embedded validity indicators in patients with traumatic brain injury. Appl Neuropsychol Adult 2023:1-10. PMID: 36881969. DOI: 10.1080/23279095.2023.2179400.
Abstract
OBJECTIVE This study was designed to evaluate the potential of the recognition trials of the Logical Memory (LM), Visual Reproduction (VR), and Verbal Paired Associates (VPA) subtests of the Wechsler Memory Scale-Fourth Edition (WMS-IV) to serve as embedded performance validity tests (PVTs). METHOD The classification accuracy of the three WMS-IV subtests was computed against three different criterion PVTs in a sample of 103 adults with traumatic brain injury (TBI). RESULTS The optimal cutoffs (LM ≤ 20, VR ≤ 3, VPA ≤ 36) produced good combinations of sensitivity (.33-.87) and specificity (.92-.98). An age-corrected scaled score of ≤5 on either of the free recall trials of the VPA was specific (.91-.92) and relatively sensitive (.48-.57) to psychometrically defined invalid performance. A VR I ≤ 5 or VR II ≤ 4 had comparable specificity but lower sensitivity (.25-.42). There was no difference in failure rate as a function of TBI severity. CONCLUSIONS In addition to LM, the VR and VPA subtests can also function as embedded PVTs. Failing validity cutoffs on these subtests signals an increased risk of a non-credible presentation and is robust to genuine neurocognitive impairment. However, they should not be used in isolation to determine the validity of an overall neurocognitive profile.
Affiliation(s)
- Brad T Tyson
- Evergreen Neuroscience Institute, Evergreen Health Medical Center, Kirkland, WA, USA
- Iulia Crisan
- Department of Psychology, West University of Timisoara, Timisoara, Romania
- Matthew Calamia
- Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Laszlo A Erdodi
- Jefferson Neurobehavioral Group, New Orleans, LA, USA
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
10
Becke M, Tucha L, Butzbach M, Aschenbrenner S, Weisbrod M, Tucha O, Fuermaier ABM. Feigning Adult ADHD on a Comprehensive Neuropsychological Test Battery: An Analogue Study. Int J Environ Res Public Health 2023; 20:4070. PMID: 36901080. PMCID: PMC10001580. DOI: 10.3390/ijerph20054070.
Abstract
The evaluation of performance validity is an essential part of any neuropsychological evaluation. Validity indicators embedded in routine neuropsychological tests offer a time-efficient option for sampling performance validity throughout the assessment while reducing vulnerability to coaching. By administering a comprehensive neuropsychological test battery to 57 adults with ADHD, 60 neurotypical controls, and 151 instructed simulators, we examined each test's utility in detecting noncredible performance. Cut-off scores were derived for all available outcome variables. Although all ensured at least 90% specificity in the ADHD group, sensitivity differed significantly between tests, ranging from 0% to 64.9%. Tests of selective attention, vigilance, and inhibition were most useful in detecting the instructed simulation of adult ADHD, whereas figural fluency and task switching lacked sensitivity. Scores in the second to fourth percentile on five or more test variables were rare among cases of genuine adult ADHD but identified approximately 58% of instructed simulators.
Affiliation(s)
- Miriam Becke
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Lara Tucha
- Department of Psychiatry and Psychotherapy, University Medical Center Rostock, Gehlsheimer Str. 20, 18147 Rostock, Germany
- Marah Butzbach
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Steffen Aschenbrenner
- Department of Clinical Psychology and Neuropsychology, SRH Clinic Karlsbad-Langensteinbach, 76307 Karlsbad, Germany
- Matthias Weisbrod
- Department of Psychiatry and Psychotherapy, SRH Clinic Karlsbad-Langensteinbach, 76307 Karlsbad, Germany
- Department of General Psychiatry, Center of Psychosocial Medicine, University of Heidelberg, 69115 Heidelberg, Germany
- Oliver Tucha
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Department of Psychiatry and Psychotherapy, University Medical Center Rostock, Gehlsheimer Str. 20, 18147 Rostock, Germany
- Department of Psychology, National University of Ireland, W23 F2K8 Maynooth, Ireland
- Anselm B. M. Fuermaier
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
11
Ali S, Crisan I, Abeare CA, Erdodi LA. Cross-Cultural Performance Validity Testing: Managing False Positives in Examinees with Limited English Proficiency. Dev Neuropsychol 2022; 47:273-294. PMID: 35984309. DOI: 10.1080/87565641.2022.2105847.
Abstract
Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation (even multivariate PVT models could not contain BRFail). In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.
Affiliation(s)
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Iulia Crisan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
12
Holcomb M, Pyne S, Cutler L, Oikle DA, Erdodi LA. Take Their Word for It: The Inventory of Problems Provides Valuable Information on Both Symptom and Performance Validity. J Pers Assess 2022:1-11. PMID: 36041087. DOI: 10.1080/00223891.2022.2114358.
Abstract
This study was designed to compare the validity of the Inventory of Problems (IOP-29) and its newly developed memory module (IOP-M) in 150 patients clinically referred for neuropsychological assessment. Criterion groups were psychometrically derived based on established performance and symptom validity tests (PVTs and SVTs). The criterion-related validity of the IOP-29 was compared to that of the Negative Impression Management scale of the Personality Assessment Inventory (NIMPAI), and the criterion-related validity of the IOP-M was compared to that of Trial 1 of the Test of Memory Malingering (TOMM-1). The IOP-29 correlated significantly more strongly (z = 2.50, p = .01) with criterion PVTs than the NIMPAI (rIOP-29 = .34; rNIMPAI = .06), generating similar overall correct classification values (OCCIOP-29: 79-81%; OCCNIMPAI: 71-79%). Similarly, the IOP-M correlated significantly more strongly (z = 2.26, p = .02) with criterion PVTs than the TOMM-1 (rIOP-M = .79; rTOMM-1 = .59), generating similar overall correct classification values (OCCIOP-M: 89-91%; OCCTOMM-1: 84-86%). Findings converge with the cumulative evidence that the IOP-29 and IOP-M are valuable additions to comprehensive neuropsychological batteries. Results also confirm that symptom and performance validity are distinct clinical constructs and that domain specificity should be considered when calibrating instruments.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor