1. Buchholz AS, Reckess GZ, Del Bene VA, Testa SM, Crawford JL, Schretlen DJ. Within-Person Test Score Distributions: How Typical Is "Normal"? Assessment 2024;31:1089-1099. [PMID: 37876148] [DOI: 10.1177/10731911231201159]
Abstract
We evaluated within-person variability across a cognitive test battery by analyzing the shape of the distribution of each individual's scores within a battery of tests. We hypothesized that most healthy adults would produce test scores that are normally distributed around their own personal battery-wide, within-person (wp) mean. Using cross-sectional data from 327 neurologically healthy adults, we computed each person's mean, standard deviation, skew, and kurtosis for 30 neuropsychological measures. Raw scores were converted to T-scores using three degrees of calibration: (a) none, (b) age, and (c) age, sex, race, education, and estimated premorbid IQ. Regardless of calibration, no participant showed abnormal within-person skew (wpskew) and only 10 (3.1%) to 16 (4.9%) showed wpkurtosis greater than 2. If replicated in other samples and measures, these findings could illuminate how healthy individuals are endowed with different cognitive abilities and provide the foundation for a new method of inference in clinical neuropsychology.
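The within-person statistics this study computes for each examinee (mean, SD, skew, and kurtosis across one person's battery of scores) can be sketched with standard moment formulas. This is an illustrative sketch only; the function name and the example T-scores are hypothetical, not the study's data.

```python
def within_person_stats(scores):
    """Moment-based within-person statistics for one examinee's
    battery of test scores: mean, SD, skew, and excess kurtosis."""
    n = len(scores)
    m = sum(scores) / n
    m2 = sum((s - m) ** 2 for s in scores) / n   # second central moment (variance)
    m3 = sum((s - m) ** 3 for s in scores) / n   # third central moment
    m4 = sum((s - m) ** 4 for s in scores) / n   # fourth central moment
    return {
        "wp_mean": m,
        "wp_sd": m2 ** 0.5,
        "wp_skew": m3 / m2 ** 1.5,          # 0 for a symmetric distribution
        "wp_kurtosis": m4 / m2 ** 2 - 3,    # excess kurtosis; 0 for a normal curve
    }

# Hypothetical, perfectly symmetric set of T-scores: skew should be 0
stats = within_person_stats([40, 45, 50, 55, 60])
```

A within-person distribution would be flagged as unusual under the study's logic when its skew or kurtosis departs markedly from these normal-curve reference values.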
Affiliation(s)
- Gila Z Reckess
- Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Victor A Del Bene
- The University of Alabama at Birmingham Heersink School of Medicine, USA
- S Marc Testa
- Johns Hopkins University School of Medicine, Baltimore, MD, USA
- The Sandra and Malcolm Berman Brain & Spine Institute, Baltimore, MD, USA
2. Tyson BT, Shahein A, Abeare CA, Baker SD, Kent K, Roth RM, Erdodi LA. Replicating a Meta-Analysis: The Search for the Optimal Word Choice Test Cutoff Continues. Assessment 2023;30:2476-2490. [PMID: 36752050] [DOI: 10.1177/10731911221147043]
Abstract
This study was designed to expand on a recent meta-analysis that identified ≤42 as the optimal cutoff on the Word Choice Test (WCT). We examined the base rate of failure and the classification accuracy of various WCT cutoffs in four independent clinical samples (N = 252) against various psychometrically defined criterion groups. WCT ≤ 47 achieved acceptable combinations of specificity (.86-.89) at .49 to .54 sensitivity. Lowering the cutoff to ≤45 improved specificity (.91-.98) at a reasonable cost to sensitivity (.39-.50). Making the cutoff even more conservative (≤42) disproportionately sacrificed sensitivity (.30-.38) for specificity (.98-1.00), while still classifying 26.7% of patients with genuine and severe deficits as non-credible. Critical item (.23-.45 sensitivity at .89-1.00 specificity) and time-to-completion cutoffs (.48-.71 sensitivity at .87-.96 specificity) were effective alternative/complementary detection methods. Although WCT ≤ 45 produced the best overall classification accuracy, scores in the 43 to 47 range provide comparable objective psychometric evidence of non-credible responding. Results question the need for designating a single cutoff as "optimal," given the heterogeneity of signal detection environments in which individual assessors operate. As meta-analyses often fail to replicate, ongoing research is needed on the classification accuracy of various WCT cutoffs.
Affiliation(s)
- Robert M Roth
- Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
3. Jinkerson JD, Lu LH, Kennedy J, Armistead-Jehle P, Nelson JT, Seegmiller RA. Grooved Pegboard adds incremental value over memory-apparent performance validity tests in predicting psychiatric symptom report. Appl Neuropsychol Adult 2023:1-9. [PMID: 37094095] [DOI: 10.1080/23279095.2023.2192409]
Abstract
The present study evaluated whether Grooved Pegboard (GPB), when used as a performance validity test (PVT), can incrementally predict psychiatric symptom report elevations beyond memory-apparent PVTs. Participants (N = 111) were military personnel and were predominantly White (84%) and male (76%), with a mean age of 43 (SD = 12) and on average 16 years of education (SD = 2). Individuals with disorders potentially compromising motor dexterity were excluded. Participants were administered the GPB, three memory-apparent PVTs (Medical Symptom Validity Test, Non-Verbal Medical Symptom Validity Test, Reliable Digit Span), and a symptom validity test (Personality Assessment Inventory Negative Impression Management [NIM]). Results from the three memory-apparent PVTs were entered into a model for predicting NIM, where failure of two or more PVTs was categorized as evidence of non-credible responding. Hierarchical regression revealed that non-dominant hand GPB T-score incrementally predicted NIM beyond memory-apparent PVTs (F(2,108) = 16.30, p < .001; R2 change = .05, β = -0.24, p < .01). In a second hierarchical regression, GPB performance was dichotomized into pass or fail, using T-score cutoffs (≤29 for either hand, ≤31 for both). Non-dominant hand GPB again predicted NIM beyond memory-apparent PVTs (F(2,108) = 18.75, p < .001; R2 change = .08, β = -0.28, p < .001). Results indicated that noncredible/failing GPB performance adds incremental value over memory-apparent PVTs in predicting psychiatric symptom report.
Affiliation(s)
- Lisa H Lu
- Brooke Army Medical Center, JBSA - Ft Sam Houston, San Antonio, TX, USA
- TBI Center of Excellence (TBICoE), Arlington, VA, USA
- General Dynamics Information Technology, Falls Church, VA, USA
- Jan Kennedy
- Brooke Army Medical Center, JBSA - Ft Sam Houston, San Antonio, TX, USA
- TBI Center of Excellence (TBICoE), Arlington, VA, USA
- General Dynamics Information Technology, Falls Church, VA, USA
4. Cutler L, Greenacre M, Abeare CA, Sirianni CD, Roth R, Erdodi LA. Multivariate models provide an effective psychometric solution to the variability in classification accuracy of D-KEFS Stroop performance validity cutoffs. Clin Neuropsychol 2023;37:617-649. [PMID: 35946813] [DOI: 10.1080/13854046.2022.2073914]
Abstract
OBJECTIVE: The study was designed to expand on the results of previous investigations of the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions. METHOD: The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). RESULTS: Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 on the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. CONCLUSIONS: A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the "invalid before impaired" paradox.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Matthew Greenacre
- Schulich School of Medicine, Western University, London, Ontario, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Robert Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
5. Chang F, Cerny BM, Tse PKY, Rauch AA, Khan H, Phillips MS, Fletcher NB, Resch ZJ, Ovsiew GP, Jennette KJ, Soble JR. Using the Grooved Pegboard Test as an Embedded Validity Indicator in a Mixed Neuropsychiatric Sample with Varying Cognitive Impairment: Cross-Validation Problems. Percept Mot Skills 2023;130:770-789. [PMID: 36634223] [DOI: 10.1177/00315125231151779]
Abstract
Embedded validity indicators (EVIs) derived from motor tests have received less empirical attention than those derived from tests of other neuropsychological abilities, particularly memory. Preliminary evidence suggests that the Grooved Pegboard Test (GPB) may function as an EVI, but existing studies were largely conducted using simulators and population samples without cognitive impairment. In this study we aimed to evaluate the GPB's classification accuracy as an EVI among a mixed clinical neuropsychiatric sample with and without cognitive impairment. This cross-sectional study comprised 223 patients clinically referred for neuropsychological testing. GPB raw and T-scores for both dominant and nondominant hands were examined as EVIs. A known-groups design, based on ≤1 failure on a battery of validated, independent criterion PVTs, showed that GPB performance differed significantly by validity group. Within the valid group, receiver operating characteristic curve analyses revealed that only the dominant hand raw score displayed acceptable classification accuracy for detecting invalid performance (area under curve [AUC] = .72), with an optimal cut-score of ≥106 seconds (33% sensitivity/88% specificity). All other scores had marginally lower classification accuracy (AUCs = .65-.68) for differentiating valid from invalid performers. Therefore, the GPB demonstrated limited utility as an EVI in a clinical sample containing patients with bona fide cognitive impairment.
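The sensitivity/specificity figures reported for a raw-score cut like ≥106 seconds follow from a simple confusion-matrix count against the criterion validity groups. A minimal sketch of that computation, using invented completion times rather than the study's data (slower times, i.e. higher values, count as failures):

```python
def classification_accuracy(times, invalid_flags, cut):
    """Sensitivity and specificity of a 'time >= cut' rule for flagging
    invalid performance, where longer completion time means worse performance."""
    tp = sum(1 for t, inv in zip(times, invalid_flags) if inv and t >= cut)
    fn = sum(1 for t, inv in zip(times, invalid_flags) if inv and t < cut)
    tn = sum(1 for t, inv in zip(times, invalid_flags) if not inv and t < cut)
    fp = sum(1 for t, inv in zip(times, invalid_flags) if not inv and t >= cut)
    return tp / (tp + fn), tn / (tn + fp)

# Invented completion times (seconds) with known validity status from criterion PVTs
times = [80, 95, 120, 70, 110, 130, 85, 140]
invalid = [False, False, True, False, False, True, False, True]
sens, spec = classification_accuracy(times, invalid, cut=106)
```

Sweeping `cut` over the observed score range and plotting sensitivity against (1 - specificity) yields the ROC curve whose area (AUC) the study reports.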
Affiliation(s)
- Fini Chang
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, University of Illinois at Chicago, Chicago, Illinois, United States
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, Illinois Institute of Technology, Chicago, Illinois, United States
- Phoebe Ka Yin Tse
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Clinical Psychology, The Chicago School of Professional Psychology, Chicago, Illinois, United States
- Andrew A Rauch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, Loyola University Chicago, Chicago, Illinois, United States
- Humza Khan
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, Illinois Institute of Technology, Chicago, Illinois, United States
- Matthew S Phillips
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Clinical Psychology, The Chicago School of Professional Psychology, Chicago, Illinois, United States
- Noah B Fletcher
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Neurology, University of Illinois College of Medicine, Chicago, Illinois, United States
6. Link JS, Lu LH, Armistead-Jehle P, Seegmiller RA. Validation of grooved pegboard cutoffs as an additional embedded measure of performance validity. Clin Neuropsychol 2022;36:2331-2341. [PMID: 34495812] [DOI: 10.1080/13854046.2021.1942556]
Abstract
OBJECTIVE: Using embedded performance validity test (PVT) comparisons, Erdodi et al. suggested that Grooved Pegboard (GPB) T-score cutoffs for either hand (≤29) or both hands (≤31) could be used as additional embedded PVTs. The current study evaluated the relationship between these proposed cutoff scores and established PVTs (Medical Symptom Validity Test [MSVT], Non-Verbal Medical Symptom Validity Test [NV-MSVT], and Reliable Digit Span [RDS]). METHOD: Participants (N = 178) were predominantly Caucasian (84%) males (79%) with a mean age and education of 41 (SD = 11.7) and 15.8 years (SD = 2.3), respectively. Participants were stratified as "passing" or "failing" the GPB via Erdodi's proposed criteria. "Failures" on the MSVT, NV-MSVT, and RDS were based on conventional recommendations. RESULTS: Moderate correlations between GPB classification and a condition of interest (COI; i.e., at least two failures on reference PVTs) were observed for dominant (χ2 (1, n = 178) = 34.72, ϕ = .44, p < .001), non-dominant (χ2 (1, n = 178) = 16.46, ϕ = .30, p = .001), and both-hand conditions (χ2 (1, n = 178) = 32.48, ϕ = .43, p < .001). Sensitivity, specificity, and predictive power were generally higher than in Erdodi et al.'s initial findings. CONCLUSION: These findings provide support for the clinical utility of the GPB as an additional embedded PVT. More specifically, dominant and both-hand cutoffs were found to be more robust measures of non-genuine performance in those without motor deficits. While promising, sensitivity continues to be low; therefore, it is ill-advised to use the GPB as a sole measure of performance validity.
Affiliation(s)
- Jared S Link
- Brooke Army Medical Center, JBSA - Ft Sam Houston, San Antonio, TX, USA
- Lisa H Lu
- Brooke Army Medical Center, JBSA - Ft Sam Houston, San Antonio, TX, USA
- Traumatic Brain Injury Center of Excellence (TBICoE), JBSA - Ft Sam Houston, San Antonio, TX, USA
- General Dynamics Information Technology, Falls Church, VA, USA
7. Ali S, Crisan I, Abeare CA, Erdodi LA. Cross-Cultural Performance Validity Testing: Managing False Positives in Examinees with Limited English Proficiency. Dev Neuropsychol 2022;47:273-294. [PMID: 35984309] [DOI: 10.1080/87565641.2022.2105847]
Abstract
Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation (even multivariate models of PVTs could not contain BRFail). In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.
Affiliation(s)
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Iulia Crisan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
8. Abeare K, Cutler L, An KY, Razvi P, Holcomb M, Erdodi LA. BNT-15: Revised Performance Validity Cutoffs and Proposed Clinical Classification Ranges. Cogn Behav Neurol 2022;35:155-168. [PMID: 35507449] [DOI: 10.1097/wnn.0000000000000304]
Abstract
BACKGROUND: Abbreviated neurocognitive tests offer a practical alternative to full-length versions but often lack clear interpretive guidelines, thereby limiting their clinical utility. OBJECTIVE: To replicate validity cutoffs for the Boston Naming Test-Short Form (BNT-15) and to introduce a clinical classification system for the BNT-15 as a measure of object-naming skills. METHOD: We collected data from 43 university students and 46 clinical patients. Classification accuracy was computed against psychometrically defined criterion groups. Clinical classification ranges were developed using a z-score transformation. RESULTS: Previously suggested validity cutoffs (≤11 and ≤12) produced comparable classification accuracy among the university students. However, a more conservative cutoff (≤10) was needed with the clinical patients to contain the false-positive rate (0.20-0.38 sensitivity at 0.92-0.96 specificity). As a measure of cognitive ability, a perfect BNT-15 score suggests above-average performance; ≤11 suggests clinically significant deficits. Demographically adjusted prorated BNT-15 T-scores correlated strongly (0.86) with the newly developed z-scores. CONCLUSION: Given its brevity (<5 minutes) and ease of administration and scoring, the BNT-15 can function as a useful and cost-effective screening measure for both object-naming/English proficiency and performance validity. The proposed clinical classification ranges provide useful guidelines for practitioners.
Affiliation(s)
- Kelly Y An
- Private Practice, London, Ontario, Canada
- Parveen Razvi
- Faculty of Nursing, University of Windsor, Windsor, Ontario, Canada
9. Introducing the ImPACT-5: An Empirically Derived Multivariate Validity Composite. J Head Trauma Rehabil 2021;36:103-113. [PMID: 32472832] [DOI: 10.1097/htr.0000000000000576]
Abstract
OBJECTIVE: To create novel Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT)-based embedded validity indicators (EVIs) and to compare their classification accuracy to four existing ImPACT-based EVIs. METHOD: The ImPACT was administered to 82 male varsity football players during preseason baseline cognitive testing. The classification accuracy of existing ImPACT-based EVIs was compared with a newly developed index (ImPACT-5A and B). The ImPACT-5A represents the number of cutoffs failed on the five ImPACT composite scores at a liberal cutoff (0.85 specificity); the ImPACT-5B is the sum of failures on conservative cutoffs (≥0.90 specificity). RESULTS: ImPACT-5A ≥1 was sensitive (0.81) but not specific (0.49) to invalid performance, consistent with ImPACT-based EVIs developed by independent researchers (0.68 sensitivity at 0.73-0.75 specificity). Conversely, ImPACT-5B ≥3 was highly specific (0.98) but insensitive (0.22), similar to the default ImPACT-based EVI (0.04 sensitivity at 1.00 specificity). ImPACT-5A ≥3 or ImPACT-5B ≥2 met forensic standards of specificity (0.91-0.93) at 0.33 to 0.37 sensitivity. The ImPACT-5s also had the strongest linear relationship with clinically meaningful levels of invalid performance among existing ImPACT-based EVIs. CONCLUSIONS: The ImPACT-5s were superior to the standard ImPACT-based EVIs and comparable to existing aftermarket ones, with the flexibility to optimize the detection model for either sensitivity or specificity. The wide range of ImPACT-5 cutoffs allows for a more nuanced clinical interpretation.
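The ImPACT-5 logic described above, counting how many composite scores fall at or below a per-scale validity cutoff, can be sketched generically. The scale names and cutoff values below are hypothetical placeholders, not the published ImPACT cutoffs:

```python
# Hypothetical per-scale cutoffs: a score at or below its cutoff counts as one failure
LIBERAL_CUTOFFS = {
    "verbal_memory": 85,
    "visual_memory": 78,
    "visual_motor_speed": 34,
    "reaction_time_scaled": 55,   # assumed already scaled so lower = worse
    "impulse_control_scaled": 90,
}

def failure_count(scores, cutoffs):
    """Sum of embedded validity failures across composite scores."""
    return sum(1 for scale, cut in cutoffs.items() if scores[scale] <= cut)

# One examinee's (invented) composite scores
scores = {
    "verbal_memory": 80,
    "visual_memory": 90,
    "visual_motor_speed": 30,
    "reaction_time_scaled": 60,
    "impulse_control_scaled": 95,
}
n_failures = failure_count(scores, LIBERAL_CUTOFFS)
```

The resulting count is then compared against a multivariate threshold (e.g., flag at ≥3 failures for a specificity-oriented rule, or ≥1 for a sensitivity-oriented one, in the spirit of the cutoffs the abstract reports).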
10. Dunn A, Pyne S, Tyson B, Roth R, Shahein A, Erdodi L. Critical Item Analysis Enhances the Classification Accuracy of the Logical Memory Recognition Trial as a Performance Validity Indicator. Dev Neuropsychol 2021;46:327-346. [PMID: 34525856] [DOI: 10.1080/87565641.2021.1956499]
Abstract
OBJECTIVE: Replicate previous research on the Logical Memory Recognition trial (LMRecog) and perform a critical item analysis. METHOD: Performance validity was psychometrically operationalized in a mixed clinical sample of 213 adults. The classification accuracy of the LMRecog and nine critical items (CR-9) was computed. RESULTS: LMRecog ≤20 produced a good combination of sensitivity (.30-.35) and specificity (.89-.90). CR-9 ≥5 and ≥6 had comparable classification accuracy. CR-9 ≥5 increased sensitivity by 4% over LMRecog ≤20; CR-9 ≥6 increased specificity by 6-8% over LMRecog ≤20; CR-9 ≥7 increased specificity by 8-15%. CONCLUSIONS: Critical item analysis enhances the classification accuracy of the optimal LMRecog cutoff (≤20).
Affiliation(s)
- Alexa Dunn
- Department of Psychology, University of Windsor, Windsor, Canada
- Sadie Pyne
- Windsor Neuropsychology, Windsor, Canada
- Brad Tyson
- Neuroscience Institute, EvergreenHealth Medical Center, Kirkland, USA
- Robert Roth
- Neuropsychology Services, Dartmouth-Hitchcock Medical Center, USA
- Ayman Shahein
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
- Laszlo Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
11. Sanborn V, Lace J, Gunstad J, Galioto R. Considerations regarding noncredible performance in the neuropsychological assessment of patients with multiple sclerosis: A case series. Appl Neuropsychol Adult 2021;30:458-467. [PMID: 34514920] [DOI: 10.1080/23279095.2021.1971229]
Abstract
Determining the validity of data during clinical neuropsychological assessment is crucial for proper interpretation, and extensive literature has emphasized myriad methods of doing so in diverse samples. However, little research has considered noncredible presentation in persons with multiple sclerosis (pwMS). PwMS often experience one or more factors known to impact the validity of data, including major neurocognitive impairment, psychological distress/psychogenic interference, and secondary gain. This case series aimed to illustrate the potential relationships between these factors and performance validity testing in pwMS. Six cases involving at least one of the above-stated factors were identified from an IRB-approved database of pwMS referred for neuropsychological assessment at a large academic medical center. Backgrounds, neuropsychological test data, and clinical considerations for each were reviewed. Interestingly, no pwMS diagnosed with major neurocognitive impairment was found to have noncredible performance, nor did any patient show noncredible performance in the absence of notable psychological distress. Given the variability of noncredible performance and the multiplicity of factors affecting performance validity in pwMS, clinicians are strongly encouraged to consider psychometrically appropriate methods for evaluating the validity of cognitive data in pwMS. Additional research aiming to elucidate base rates of, mechanisms begetting, and methods for assessing noncredible performance in pwMS is imperative.
Affiliation(s)
- John Lace
- Cleveland Clinic, Neurological Institute, Section of Neuropsychology, Cleveland, OH, USA
- John Gunstad
- Psychological Sciences, Kent State University, Kent, OH, USA
- Brain Health Research Institute, Kent State University, Kent, OH, USA
- Rachel Galioto
- Cleveland Clinic, Neurological Institute, Section of Neuropsychology, Cleveland, OH, USA
- Cleveland Clinic, Mellen Center for Multiple Sclerosis, Cleveland, OH, USA
12. Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021;49:179-213. [PMID: 34420986] [DOI: 10.3233/nre-218020]
Abstract
OBJECTIVE: This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD: Archival data were collected from 167 patients (52.4% male; mean age = 39.7) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS: MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS: Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to their individual components. Instrumentation artifacts are endemic to PVTs and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical, models. As the number/severity of embedded PVT failures accumulates, assessors must consider the possibility of non-credible presentation and its clinical implications for neurorehabilitation.
13. Messa I, Holcomb M, Lichtenstein JD, Tyson BT, Roth RM, Erdodi LA. They are not destined to fail: a systematic examination of scores on embedded performance validity indicators in patients with intellectual disability. Aust J Forensic Sci 2021. [DOI: 10.1080/00450618.2020.1865457]
Affiliation(s)
- Isabelle Messa
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Brad T Tyson
- Neuropsychological Service, EvergreenHealth Medical Center, Kirkland, WA, USA
- Robert M Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
14. Abeare CA, An K, Tyson B, Holcomb M, Cutler L, May N, Erdodi LA. The emotion word fluency test as an embedded performance validity indicator - Alone and in a multivariate validity composite. Appl Neuropsychol Child 2021;11:713-724. [PMID: 34424798] [DOI: 10.1080/21622965.2021.1939027]
Abstract
OBJECTIVE This project was designed to cross-validate existing performance validity cutoffs embedded within measures of verbal fluency (FAS and animals) and develop new ones for the Emotion Word Fluency Test (EWFT), a novel measure of category fluency. METHOD The classification accuracy of the verbal fluency tests was examined in two samples (70 cognitively healthy university students and 52 clinical patients) against psychometrically defined criterion measures. RESULTS A demographically adjusted T-score of ≤31 on the FAS was specific (.88-.97) to noncredible responding in both samples. Animals T ≤ 29 achieved high specificity (.90-.93) among students at .27-.38 sensitivity. A more conservative cutoff (T ≤ 27) was needed in the patient sample for a similar combination of sensitivity (.24-.45) and specificity (.87-.93). An EWFT raw score ≤5 was highly specific (.94-.97) but insensitive (.10-.18) to invalid performance. Failing multiple cutoffs improved specificity (.90-1.00) at variable sensitivity (.19-.45). CONCLUSIONS Results help resolve the inconsistency in previous reports, and confirm the overall utility of existing verbal fluency tests as embedded validity indicators. Multivariate models of performance validity assessment are superior to single indicators. The clinical utility and limitations of the EWFT as a novel measure are discussed.
Affiliation(s)
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Kelly An
- Private Practice, London, Ontario, Canada
- Brad Tyson
- Evergreen Health Medical Center, Kirkland, Washington, USA
- Matthew Holcomb
- Jefferson Neurobehavioral Group, New Orleans, Louisiana, USA
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
15. Sabelli AG, Messa I, Giromini L, Lichtenstein JD, May N, Erdodi LA. Symptom Versus Performance Validity in Patients with Mild TBI: Independent Sources of Non-credible Responding. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09400-6]
16. Cutler L, Abeare CA, Messa I, Holcomb M, Erdodi LA. This will only take a minute: Time cutoffs are superior to accuracy cutoffs on the forced choice recognition trial of the Hopkins Verbal Learning Test - Revised. Appl Neuropsychol Adult 2021;29:1425-1439. [PMID: 33631077] [DOI: 10.1080/23279095.2021.1884555]
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of the recently introduced forced-choice recognition trial to the Hopkins Verbal Learning Test - Revised (FCRHVLT-R) as a performance validity test (PVT) in a clinical sample. Time-to-completion (T2C) for the FCRHVLT-R was also examined. METHOD Forty-three students were assigned to either the control or the experimental malingering (expMAL) condition. Archival data were collected from 52 adults clinically referred for neuropsychological assessment. Invalid performance was defined using expMAL status, two free-standing PVTs and two validity composites. RESULTS Among students, FCRHVLT-R ≤11 or T2C ≥45 seconds was specific (0.86-0.93) to invalid performance. Among patients, FCRHVLT-R ≤11 was specific (0.94-1.00), but relatively insensitive (0.38-0.60) to non-credible responding. T2C ≥35 seconds produced notably higher sensitivity (0.71-0.89), but variable specificity (0.83-0.96). The T2C achieved superior overall correct classification (81-86%) compared to the accuracy score (68-77%). The FCRHVLT-R provided incremental utility in performance validity assessment compared to previously introduced validity cutoffs on Recognition Discrimination. CONCLUSIONS Combined with T2C, the FCRHVLT-R has the potential to function as a quick, inexpensive and effective embedded PVT. The time cutoff effectively attenuated the low ceiling of the accuracy scores, increasing sensitivity by 19%. Replication in larger and more geographically and demographically diverse samples is needed before the FCRHVLT-R can be endorsed for routine clinical application.
Collapse
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | - Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | - Isabelle Messa
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | | | - Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| |
Collapse
|
17
|
Erdodi LA, Abeare CA. Stronger Together: The Wechsler Adult Intelligence Scale-Fourth Edition as a Multivariate Performance Validity Test in Patients with Traumatic Brain Injury. Arch Clin Neuropsychol 2020; 35:188-204. [PMID: 31696203 DOI: 10.1093/arclin/acz032] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2019] [Revised: 06/18/2019] [Accepted: 06/22/2019] [Indexed: 12/17/2022] Open
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of a multivariate model of performance validity assessment using embedded validity indicators (EVIs) within the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). METHOD Archival data were collected from 100 adults with traumatic brain injury (TBI) consecutively referred for neuropsychological assessment in a clinical setting. The classification accuracy of previously published individual EVIs nested within the WAIS-IV and a composite measure based on six independent EVIs were evaluated against psychometrically defined non-credible performance. RESULTS Univariate validity cutoffs based on age-corrected scaled scores on Coding, Symbol Search, Digit Span, Letter-Number-Sequencing, Vocabulary minus Digit Span, and Coding minus Symbol Search were strong predictors of psychometrically defined non-credible responding. Failing ≥3 of these six EVIs at the liberal cutoff improved specificity (.91-.95) over univariate cutoffs (.78-.93). Conversely, failing ≥2 EVIs at the more conservative cutoff increased and stabilized sensitivity (.43-.67) compared to univariate cutoffs (.11-.63) while maintaining consistently high specificity (.93-.95). CONCLUSIONS In addition to being a widely used test of cognitive functioning, the WAIS-IV can also function as a measure of performance validity. Consistent with previous research, combining information from multiple EVIs enhanced the classification accuracy of individual cutoffs and provided more stable parameter estimates. If the current findings are replicated in larger, diagnostically and demographically heterogeneous samples, the WAIS-IV has the potential to become a powerful multivariate model of performance validity assessment.
BRIEF SUMMARY Using a combination of multiple performance validity indicators embedded within the subtests of the Wechsler Adult Intelligence Scale, the credibility of the response set can be established with a high level of confidence. Multivariate models improve classification accuracy over individual tests. Relying on existing test data is a cost-effective approach to performance validity assessment.
Collapse
Affiliation(s)
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | - Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| |
Collapse
|
18
|
Abeare CA, Hurtubise JL, Cutler L, Sirianni C, Brantuo M, Makhzoum N, Erdodi LA. Introducing a forced choice recognition trial to the Hopkins Verbal Learning Test – Revised. Clin Neuropsychol 2020; 35:1442-1470. [DOI: 10.1080/13854046.2020.1779348] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Affiliation(s)
| | | | - Laura Cutler
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | | | - Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Nadeen Makhzoum
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| |
Collapse
|
19
|
Giromini L, Viglione DJ, Zennaro A, Maffei A, Erdodi LA. SVT Meets PVT: Development and Initial Validation of the Inventory of Problems – Memory (IOP-M). PSYCHOLOGICAL INJURY & LAW 2020. [DOI: 10.1007/s12207-020-09385-8] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
20
|
Mørkved N, Johnsen E, Kroken R, Gjestad R, Winje D, Thimm J, Fathian F, Rettenbacher M, Anda L, Løberg E. Does childhood trauma influence cognitive functioning in schizophrenia? The association of childhood trauma and cognition in schizophrenia spectrum disorders. SCHIZOPHRENIA RESEARCH-COGNITION 2020; 21:100179. [PMID: 32461919 PMCID: PMC7240182 DOI: 10.1016/j.scog.2020.100179] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/21/2020] [Revised: 04/29/2020] [Accepted: 05/02/2020] [Indexed: 02/06/2023]
Abstract
Childhood trauma (CT) is a risk factor for schizophrenia spectrum disorders (SSDs), and cognitive impairment is a core feature and a vulnerability marker of SSDs. Studies of the relationship between CT and cognitive impairment in SSDs are inconclusive. In addition, few studies have examined differential effects of CT subtypes, e.g. physical, sexual or emotional abuse/neglect, on cognitive functioning. The present study therefore aimed to examine the effects of CT and CT subtypes on cognitive impairment in SSDs. Participants (n = 78) with SSDs completed a comprehensive neuropsychological test battery and the Childhood Trauma Questionnaire Short-Form (CTQ-SF). We compared global cognitive performance as well as scores in seven subdomains (verbal abilities, visuospatial abilities, learning, memory, attention/working memory, executive abilities and processing speed) between participants reporting no CT and those reporting CT experiences, using independent samples t-tests as well as linear regression analyses to control for possible confounders. The CT subtype of physical neglect was associated with attention/working memory performance after controlling for positive and negative psychosis symptoms, years of education, antipsychotics, gender and age, with adjustment for multiple testing. Our results indicate that the observed heterogeneity in cognitive impairment in SSDs, especially attention/working memory abilities, may in part be associated with childhood physical neglect. Research on childhood trauma and cognitive impairment in SSDs is inconclusive. Few studies have investigated whether CT subtypes (abuse and neglect) could explain the heterogeneity in cognitive impairment in SSDs. The CT subtype of physical neglect was associated with impairment in attention/working memory abilities. The observed heterogeneity in cognitive impairment in SSDs may in part be associated with CT subtypes.
Collapse
Affiliation(s)
- N. Mørkved
- Mosjøen District Psychiatric Centre, Helgeland Hospital, Skjervengan 17, 8657 Mosjøen, Norway
- Department of Psychology, UiT The Arctic University of Norway, Pb 6050 Langnes, 9037 Tromsø, Norway
- Corresponding author at: Skjervengan 17, 8657 Mosjøen, Norway.
| | - E. Johnsen
- NORMENT Centre of Excellence and Division of Psychiatry, Haukeland University Hospital, Jonas Lies vei 65, 5021 Bergen, Norway
- Department of Clinical Medicine, University of Bergen, Pb 7800, 5020 Bergen, Norway
| | - R.A. Kroken
- NORMENT Centre of Excellence and Division of Psychiatry, Haukeland University Hospital, Jonas Lies vei 65, 5021 Bergen, Norway
- Department of Clinical Medicine, University of Bergen, Pb 7800, 5020 Bergen, Norway
| | - R. Gjestad
- NORMENT Centre of Excellence and Division of Psychiatry, Haukeland University Hospital, Jonas Lies vei 65, 5021 Bergen, Norway
- Centre for Research and Education in Forensic Psychiatry, Haukeland University Hospital, Sandviksleitet 1, 5036 Bergen, Norway
| | - D. Winje
- Faculty of Psychology, Department of Clinical Psychology, University of Bergen, Christies gate 13, 5015 Bergen, Norway
| | - J. Thimm
- Department of Psychology, UiT The Arctic University of Norway, Pb 6050 Langnes, 9037 Tromsø, Norway
| | - F. Fathian
- NKS Olaviken Gerontopsychiatric Hospital, Ulriksdal 8, 5009 Bergen, Norway
| | - M. Rettenbacher
- Department of Psychiatry and Psychotherapy, Medical University Innsbruck, Innsbruck, Austria
| | - L.G. Anda
- Department of Biological and Medical Psychology, Faculty of Psychology, University of Bergen, Jonas Liesvei 91, BB-building, 5009 Bergen, Norway
- Clinics for Mental Health Care, Stavanger University Hospital, Jan Johnsens gate 12, 4011 Stavanger, Norway
| | - E.M. Løberg
- NORMENT Centre of Excellence and Division of Psychiatry, Haukeland University Hospital, Jonas Lies vei 65, 5021 Bergen, Norway
- Faculty of Psychology, Department of Clinical Psychology, University of Bergen, Christies gate 13, 5015 Bergen, Norway
- Department of Addiction Medicine, Haukeland University Hospital, Østre Murallmenningen 7, 5012 Bergen, Norway
| |
Collapse
|
21
|
Hurtubise J, Baher T, Messa I, Cutler L, Shahein A, Hastings M, Carignan-Querqui M, Erdodi LA. Verbal fluency and digit span variables as performance validity indicators in experimentally induced malingering and real world patients with TBI. APPLIED NEUROPSYCHOLOGY-CHILD 2020; 9:337-354. [DOI: 10.1080/21622965.2020.1719409] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Affiliation(s)
| | - Tabarak Baher
- Department of Psychology, University of Windsor, Windsor, Canada
| | - Isabelle Messa
- Department of Psychology, University of Windsor, Windsor, Canada
| | - Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Canada
| | - Ayman Shahein
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
| | | | | | - Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
| |
Collapse
|
22
|
Geographic Variation and Instrumentation Artifacts: in Search of Confounds in Performance Validity Assessment in Adults with Mild TBI. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09354-w] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
|
24
|
Rai JK, Erdodi LA. Impact of criterion measures on the classification accuracy of TOMM-1. APPLIED NEUROPSYCHOLOGY-ADULT 2019; 28:185-196. [PMID: 31187632 DOI: 10.1080/23279095.2019.1613994] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
This study was designed to examine the effect of various criterion measures on the classification accuracy of Trial 1 of the Test of Memory Malingering (TOMM-1), a free-standing performance validity test (PVT). Archival data were collected from a case sequence of 91 patients (M age = 42.2 years; M education = 12.7 years) clinically referred for neuropsychological assessment. Trial 2 and the Retention trial of the TOMM, the Word Choice Test, and three validity composites were used as criterion PVTs. Classification accuracy varied systematically as a function of criterion PVT. TOMM-1 ≤ 43 emerged as the optimal cutoff, resulting in a wide range of sensitivity (.47-1.00) with perfect overall specificity. Failing the TOMM-1 was unrelated to age, education or gender, but was associated with elevated self-reported depression. Results support the utility of TOMM-1 as an independent, free-standing, single-trial PVT. Consistent with previous reports, the choice of criterion measure influences parameter estimates of the PVT being calibrated. The methodological implications of modality specificity for PVT research and clinical/forensic practice should be considered when evaluating cutoffs or interpreting scores in the failing range.
Collapse
Affiliation(s)
- Jaspreet K Rai
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada.,University of Windsor, Edmonton, Alberta, Canada
| | - Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
| |
Collapse
|
25
|
Erdodi LA, Taylor B, Sabelli AG, Malleck M, Kirsch NL, Abeare CA. Demographically Adjusted Validity Cutoffs on the Finger Tapping Test Are Superior to Raw Score Cutoffs in Adults with TBI. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09352-y] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
26
|
Abeare C, Sabelli A, Taylor B, Holcomb M, Dumitrescu C, Kirsch N, Erdodi L. The Importance of Demographically Adjusted Cutoffs: Age and Education Bias in Raw Score Cutoffs Within the Trail Making Test. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09353-x] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
|