1. Crişan I, Erdodi L. Examining the cross-cultural validity of the Test of Memory Malingering and the Rey 15-Item Test. Appl Neuropsychol Adult 2024; 31:721-731. [PMID: 35476611] [DOI: 10.1080/23279095.2022.2064753]
Abstract
OBJECTIVE: This study was designed to investigate the cross-cultural validity of two freestanding performance validity tests (PVTs), the Test of Memory Malingering - Trial 1 (TOMM-1) and the Rey Fifteen Item Test (Rey-15), in Romanian-speaking patients. METHODS: The TOMM-1 and the Rey-15 free recall (FR) and combination score incorporating the recognition trial (COMB) were administered to a mixed clinical sample of 61 adults referred for cognitive evaluation, 24 of whom had external incentives to appear impaired. Average PVT scores were compared between the two groups. Classification accuracies were computed using one PVT against another. RESULTS: Patients with identifiable external incentives to appear impaired produced significantly lower scores and more errors on validity indicators. The largest effect sizes emerged on TOMM-1 (Cohen's d = 1.00-1.19). TOMM-1 was a significant predictor of the Rey-15 COMB ≤20 (AUC = .80; .38 sensitivity; .89 specificity at a cutoff of ≤39). Similarly, both Rey-15 indicators were significant predictors of TOMM-1 ≤39 as the criterion (AUCs = .73-.76; .33 sensitivity; .89-.90 specificity). CONCLUSION: Results offer a proof of concept for the cross-cultural validity of the TOMM-1 and Rey-15 in a Romanian clinical sample.
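The cutoff-based classification logic used throughout these studies (a score at or below a cutoff is flagged as invalid, then checked against a criterion PVT) can be sketched in a few lines; the scores, criterion labels, and the ≤39 cutoff below are illustrative stand-ins, not data from the study above:

```python
def classification_accuracy(scores, invalid_flags, cutoff):
    """Sensitivity/specificity of a 'score <= cutoff -> invalid' rule.

    scores        -- PVT scores, one per examinee
    invalid_flags -- criterion classification (True = non-credible)
    cutoff        -- fail the test when score <= cutoff
    """
    tp = sum(1 for s, inv in zip(scores, invalid_flags) if inv and s <= cutoff)
    fn = sum(1 for s, inv in zip(scores, invalid_flags) if inv and s > cutoff)
    tn = sum(1 for s, inv in zip(scores, invalid_flags) if not inv and s > cutoff)
    fp = sum(1 for s, inv in zip(scores, invalid_flags) if not inv and s <= cutoff)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

# Hypothetical scores (max 50) and criterion-PVT outcomes:
scores = [48, 50, 36, 45, 32, 49, 38, 50, 44, 29]
invalid = [False, False, True, False, True, False, False, False, True, True]
sens, spec = classification_accuracy(scores, invalid, cutoff=39)
```

The same helper works for any of the cutoffs quoted in these abstracts; only the score vector and threshold change.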
Affiliation(s)
- Iulia Crişan: Department of Psychology, West University of Timişoara, Timişoara, Romania
- Laszlo Erdodi: Department of Psychology, University of Windsor, Windsor, ON, Canada
2. Ashendorf L, Withrow S, Ward SH, Sullivan SK, Sugarman MA. Decision rules for an abbreviated administration of the Test of Memory Malingering. Appl Neuropsychol Adult 2024; 31:382-391. [PMID: 35068279] [DOI: 10.1080/23279095.2022.2026948]
Abstract
The present study investigated abbreviation methods for the Test of Memory Malingering (TOMM) in relation to traditional manual-based cutoffs and to more stringent, independently derived cutoffs suggested by recent research (≤48 on Trial 2 or 3). Consecutively referred outpatient U.S. military veterans (n = 260) were seen for neuropsychological evaluation for mild traumatic brain injury or possible attention-deficit/hyperactivity disorder. Performance on TOMM Trial 1 was evaluated, including the total score and errors on the first 10 items (TOMMe10), to determine correspondence and redundancy with Trials 2 and 3. Using the traditional cutoff, valid performance on Trials 2 and 3 was predicted by zero errors on TOMMe10 and by Trial 1 scores greater than 41. Invalid performance was predicted by more than three errors on TOMMe10 and by Trial 1 scores less than 34. For the revised TOMM cutoffs, a Trial 1 score above 46 was predictive of a valid score, and a TOMMe10 score of three or more errors or a Trial 1 score below 36 was associated with invalid TOMM performance. Conditional abbreviation of the TOMM is feasible in the vast majority of cases without sacrificing information regarding performance validity. Decision trees are provided to facilitate administration of the three trials.
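The decision rules summarized above can be written out directly. This sketch encodes only the traditional-cutoff rules as stated; the abstract does not specify precedence when the two rules conflict, so checking the "valid" rule first is an assumption:

```python
def abbreviated_tomm_decision(trial1_score, tomme10_errors):
    """Conditional abbreviation rule for the TOMM (traditional cutoffs).

    Returns 'valid' or 'invalid' when Trial 1 evidence suffices,
    or 'continue' when Trials 2 and 3 should still be administered.
    Rule precedence is an assumption, not specified in the abstract.
    """
    if tomme10_errors == 0 or trial1_score > 41:
        return "valid"      # Trials 2/3 predicted to pass
    if tomme10_errors > 3 or trial1_score < 34:
        return "invalid"    # Trials 2/3 predicted to fail
    return "continue"       # indeterminate: full administration needed
```

Examinees falling in the indeterminate band (e.g., Trial 1 = 38 with two TOMMe10 errors) would receive the full three-trial administration.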
Affiliation(s)
- Lee Ashendorf: Mental Health Service Line, VA Central Western Massachusetts, Worcester, MA, USA; Department of Psychiatry, University of Massachusetts Medical School, Worcester, MA, USA
- Susanne Withrow: Behavioral Health Service Line, VA Pittsburgh Healthcare System, Pittsburgh, PA, USA
- Sarah H Ward: Mental Health Service Line, VA Central Western Massachusetts, Worcester, MA, USA; Department of Psychiatry, University of Massachusetts Medical School, Worcester, MA, USA
- Sara K Sullivan: Psychology Service, VA Bedford Healthcare System, Bedford, MA, USA
- Michael A Sugarman: Department of Neurology, Medical University of South Carolina, Charleston, SC, USA
3. Crișan I, Ali S, Cutler L, Matei A, Avram L, Erdodi LA. Geographic variability in limited English proficiency: A cross-cultural study of cognitive profiles. J Int Neuropsychol Soc 2023; 29:972-983. [PMID: 37246143] [DOI: 10.1017/s1355617723000280]
Abstract
OBJECTIVE: This study was designed to evaluate the effect of limited English proficiency (LEP) on neurocognitive profiles. METHOD: Romanian (LEP-RO; n = 59) and Arabic (LEP-AR; n = 30) native speakers were compared to Canadian native speakers of English (NSE; n = 24) on a strategically selected battery of neuropsychological tests. RESULTS: As predicted, participants with LEP demonstrated significantly lower performance on tests with high verbal mediation relative to US norms and the NSE sample (large effects). In contrast, several tests with low verbal mediation were robust to LEP. However, clinically relevant deviations from this general pattern were observed. The level of English proficiency varied significantly within the LEP-RO group and was associated with a predictable performance pattern on tests with high verbal mediation. CONCLUSIONS: The heterogeneity in cognitive profiles among individuals with LEP challenges the notion that LEP status is a unitary construct. The level of verbal mediation is an imperfect predictor of the performance of LEP examinees during neuropsychological testing. Several commonly used measures were identified that are robust to the deleterious effects of LEP. Administering tests in the examinee's native language may not be the optimal solution to contain the confounding effect of LEP in cognitive evaluations.
Affiliation(s)
- Iulia Crișan: Department of Psychology, West University of Timișoara, Timișoara, Romania
- Sami Ali: Department of Psychology, University of Windsor, Windsor, Canada
- Laura Cutler: Department of Psychology, University of Windsor, Windsor, Canada
- Alina Matei: Department of Psychology, West University of Timișoara, Timișoara, Romania
- Luisa Avram: Department of Psychology, West University of Timișoara, Timișoara, Romania
- Laszlo A Erdodi: Department of Psychology, University of Windsor, Windsor, Canada
4. Tyson BT, Shahein A, Abeare CA, Baker SD, Kent K, Roth RM, Erdodi LA. Replicating a Meta-Analysis: The Search for the Optimal Word Choice Test Cutoff Continues. Assessment 2023; 30:2476-2490. [PMID: 36752050] [DOI: 10.1177/10731911221147043]
Abstract
This study was designed to expand on a recent meta-analysis that identified ≤42 as the optimal cutoff on the Word Choice Test (WCT). We examined the base rate of failure and the classification accuracy of various WCT cutoffs in four independent clinical samples (N = 252) against various psychometrically defined criterion groups. WCT ≤ 47 achieved acceptable combinations of specificity (.86-.89) at .49 to .54 sensitivity. Lowering the cutoff to ≤45 improved specificity (.91-.98) at a reasonable cost to sensitivity (.39-.50). Making the cutoff even more conservative (≤42) disproportionately sacrificed sensitivity (.30-.38) for specificity (.98-1.00), while still classifying 26.7% of patients with genuine and severe deficits as non-credible. Critical item cutoffs (.23-.45 sensitivity at .89-1.00 specificity) and time-to-completion cutoffs (.48-.71 sensitivity at .87-.96 specificity) were effective alternative/complementary detection methods. Although WCT ≤ 45 produced the best overall classification accuracy, scores in the 43 to 47 range provide comparable objective psychometric evidence of non-credible responding. Results question the need for designating a single cutoff as "optimal," given the heterogeneity of signal detection environments in which individual assessors operate. As meta-analyses often fail to replicate, ongoing research is needed on the classification accuracy of various WCT cutoffs.
Affiliation(s)
- Robert M Roth: Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
5. Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part II: Methodological Issues. Neuropsychol Rev 2023; 33:604-623. [PMID: 37594690] [DOI: 10.1007/s11065-023-09602-6]
Abstract
Forensic neuropsychological examinations to detect malingering in patients with neurocognitive, physical, and psychological dysfunction have tremendous social, legal, and economic importance. Thousands of studies have been published to develop and validate methods to forensically detect malingering, based largely on approximately 50 validity tests, including embedded and stand-alone performance and symptom validity tests. This is Part II of a two-part review of statistical and methodological issues in the forensic prediction of malingering based on validity tests. The Part I companion paper explored key statistical issues. Part II examines related methodological issues through conceptual analysis, statistical simulations, and reanalysis of findings from prior validity test validation studies. Methodological issues examined include the distinction between analog simulation and forensic studies, the effect of excluding too-close-to-call (TCTC) cases from analyses, the distinction between criterion-related and construct validation studies, and the application of the Revised Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) to all Test of Memory Malingering (TOMM) validation studies published within approximately the first 20 years after its initial publication to assess risk of bias. Findings include that analog studies are commonly mistaken for forensic validation studies, and that construct validation studies are routinely presented as if they were criterion-referenced validation studies. After accounting for the exclusion of TCTC cases, actual classification accuracy was found to be well below claimed levels. QUADAS-2 results revealed that all extant TOMM validation studies had a high risk of bias. Recommendations include adopting well-established guidelines from the biomedical diagnostics literature for good-quality criterion-referenced validation studies and examining implications for malingering determination practices. Design of future studies may hinge on the availability of an incontrovertible reference standard for the malingering status of examinees.
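The TCTC point generalizes: accuracy computed only on decided cases overstates performance across all cases examined. A small illustration with hypothetical numbers:

```python
def overall_accuracy_floor(n_total, n_tctc, accuracy_on_decided):
    """Lower bound on accuracy over all examined cases when too-close-to-call
    (TCTC) cases are excluded before accuracy is computed.

    Treats every excluded TCTC case as contributing no correct
    classification, which is the worst case for the reported figure.
    All numbers here are hypothetical.
    """
    n_decided = n_total - n_tctc
    n_correct = accuracy_on_decided * n_decided
    return n_correct / n_total

# A study that drops 40 of 100 cases as TCTC and reports 90% accuracy
# on the remaining 60 has classified at most 54 of the 100 correctly:
floor = overall_accuracy_floor(100, 40, 0.90)
```

The gap between the reported figure (.90) and the floor (.54) widens with the TCTC exclusion rate, which is the reanalysis point made above.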
Affiliation(s)
- Christoph Leonhard: The Chicago School of Professional Psychology at Xavier University of Louisiana, 1 Drexel Dr, Box 200, New Orleans, LA 70125, USA
6. Erdodi LA. From "below chance" to "a single error is one too many": Evaluating various thresholds for invalid performance on two forced choice recognition tests. Behav Sci Law 2023; 41:445-462. [PMID: 36893020] [DOI: 10.1002/bsl.2609]
Abstract
This study was designed to empirically evaluate the classification accuracy of various definitions of invalid performance on two forced-choice recognition performance validity tests (PVTs): the CVLT-II Forced Choice Recognition trial (FCR-CVLT-II) and the Test of Memory Malingering (TOMM-2). The proportions of at-chance and below-chance responding (defined by binomial theory) and of making any errors were computed across two mixed clinical samples from the United States and Canada (N = 470) and two sets of criterion PVTs. There was virtually no overlap between the binomial and empirical distributions. Over 95% of patients who passed all PVTs obtained a perfect score. At-chance responding was limited to patients who failed ≥2 PVTs (91% of them failed 3 PVTs). No one scored below chance on the FCR-CVLT-II or TOMM-2. All 40 patients with dementia scored above chance. Although at- or below-chance performance provides very strong evidence of non-credible responding, scores above chance have no negative predictive value. Even at-chance scores on PVTs provide compelling evidence of a non-credible presentation. A single error on the FCR-CVLT-II or TOMM-2 is highly specific (.95) to psychometrically defined invalid performance. Defining non-credible responding as below-chance scores is an unnecessarily restrictive threshold that gives most examinees with invalid profiles a Pass.
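The binomial logic behind "significantly below chance" on a two-alternative forced-choice PVT can be made concrete. This sketch finds the highest score that qualifies as below chance for a test of a given length; the item counts and alpha are illustrative and not tied to either instrument's published cutoffs:

```python
import math

def below_chance_cutoff(n_items, alpha=0.05, p=0.5):
    """Largest score k such that P(X <= k) < alpha when an examinee
    responds at random (X ~ Binomial(n_items, p)). Scores at or below
    k are 'significantly below chance' at the given alpha."""
    def binom_cdf(k):
        return sum(math.comb(n_items, i) for i in range(k + 1)) * p ** n_items
    k = -1
    while binom_cdf(k + 1) < alpha:
        k += 1
    return k

# For a hypothetical 50-item test, random responding averages 25 correct;
# only markedly lower scores clear the one-tailed .05 threshold:
cutoff = below_chance_cutoff(50)
```

The narrowness of this band is exactly why the study argues that below-chance thresholds give most invalid profiles a Pass: random responders rarely land far enough below the expected value.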
Affiliation(s)
- Laszlo A Erdodi: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
7. Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part I: Statistical Issues. Neuropsychol Rev 2023; 33:581-603. [PMID: 37612531] [DOI: 10.1007/s11065-023-09601-7]
Abstract
Forensic neuropsychological examinations with determination of malingering have tremendous social, legal, and economic consequences. Thousands of studies have been published aimed at developing and validating methods to diagnose malingering in forensic settings, based largely on approximately 50 validity tests, including embedded and stand-alone performance validity tests. This is the first part of a two-part review. Part I explores three statistical issues related to the validation of validity tests as predictors of malingering, including (a) the need to report a complete set of classification accuracy statistics, (b) how to detect and handle collinearity among validity tests, and (c) how to assess the classification accuracy of algorithms for aggregating information from multiple validity tests. In the Part II companion paper, three closely related research methodological issues will be examined. Statistical issues are explored through conceptual analysis, statistical simulations, and through reanalysis of findings from prior validation studies. Findings suggest extant neuropsychological validity tests are collinear and contribute redundant information to the prediction of malingering among forensic examinees. Findings further suggest that existing diagnostic algorithms may miss diagnostic accuracy targets under most realistic conditions. The review makes several recommendations to address these concerns, including (a) reporting of full confusion table statistics with 95% confidence intervals in diagnostic trials, (b) the use of logistic regression, and (c) adoption of the consensus model on the "transparent reporting of multivariate prediction models for individual prognosis or diagnosis" (TRIPOD) in the malingering literature.
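The recommendation above to report full confusion-table statistics with 95% confidence intervals can be illustrated with a short helper; the Wilson score interval used here is one standard choice for proportions such as sensitivity, and the counts in the example are hypothetical:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion such as
    sensitivity or specificity (z = 1.96 for a two-sided 95% CI)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

# Hypothetical confusion table: 30 of 40 non-credible cases flagged,
# i.e., a point-estimate sensitivity of .75 with its 95% CI:
lo, hi = wilson_ci(30, 40)
```

Reporting the interval alongside the point estimate makes clear how imprecise sensitivity figures from small validation samples can be.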
Affiliation(s)
- Christoph Leonhard: The Chicago School of Professional Psychology at Xavier University of Louisiana, 1 Drexel Dr, Box 200, New Orleans, LA 70125, USA
8. Cutler L, Greenacre M, Abeare CA, Sirianni CD, Roth R, Erdodi LA. Multivariate models provide an effective psychometric solution to the variability in classification accuracy of D-KEFS Stroop performance validity cutoffs. Clin Neuropsychol 2023; 37:617-649. [PMID: 35946813] [DOI: 10.1080/13854046.2022.2073914]
Abstract
OBJECTIVE: The study was designed to expand on the results of previous investigations of the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions. METHOD: The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). RESULTS: Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 on the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. CONCLUSIONS: A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the "invalid before impaired" paradox.
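The multivariate decision rule reported above (≥3 subtest failures at ACSS ≤6, or ≥2 at ACSS ≤5) is simple to state as code; the function encodes that published rule, while the example score profiles are hypothetical:

```python
def dkefs_stroop_multivariate_fail(acss_scores):
    """Multivariate D-KEFS Stroop validity rule from the study above:
    flag performance as invalid when >=3 of the four subtest
    age-corrected scaled scores (ACSSs) are <=6, or >=2 are <=5."""
    at_most_6 = sum(1 for s in acss_scores if s <= 6)
    at_most_5 = sum(1 for s in acss_scores if s <= 5)
    return at_most_6 >= 3 or at_most_5 >= 2

# Hypothetical four-subtest profiles:
flagged = dkefs_stroop_multivariate_fail([6, 6, 5, 9])   # three scores <= 6
passed = dkefs_stroop_multivariate_fail([7, 6, 9, 10])   # one isolated low score
```

Requiring multiple concurrent failures is what buys the specificity the univariate ACSS ≤6 cutoffs lacked: a single low subtest no longer triggers the flag.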
Affiliation(s)
- Laura Cutler: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Matthew Greenacre: Schulich School of Medicine, Western University, London, Ontario, Canada
- Christopher A Abeare: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Robert Roth: Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Laszlo A Erdodi: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
9. Ali S, Crisan I, Abeare CA, Erdodi LA. Cross-Cultural Performance Validity Testing: Managing False Positives in Examinees with Limited English Proficiency. Dev Neuropsychol 2022; 47:273-294. [PMID: 35984309] [DOI: 10.1080/87565641.2022.2105847]
Abstract
Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation (even multivariate PVT models could not contain BRFail). In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.
Affiliation(s)
- Sami Ali: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Iulia Crisan: Department of Psychology, West University of Timişoara, Timişoara, Romania
- Christopher A Abeare: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
10. Abeare K, Cutler L, An KY, Razvi P, Holcomb M, Erdodi LA. BNT-15: Revised Performance Validity Cutoffs and Proposed Clinical Classification Ranges. Cogn Behav Neurol 2022; 35:155-168. [PMID: 35507449] [DOI: 10.1097/wnn.0000000000000304]
Abstract
BACKGROUND: Abbreviated neurocognitive tests offer a practical alternative to full-length versions but often lack clear interpretive guidelines, thereby limiting their clinical utility. OBJECTIVE: To replicate validity cutoffs for the Boston Naming Test-Short Form (BNT-15) and to introduce a clinical classification system for the BNT-15 as a measure of object-naming skills. METHOD: We collected data from 43 university students and 46 clinical patients. Classification accuracy was computed against psychometrically defined criterion groups. Clinical classification ranges were developed using a z-score transformation. RESULTS: Previously suggested validity cutoffs (≤11 and ≤12) produced comparable classification accuracy among the university students. However, a more conservative cutoff (≤10) was needed with the clinical patients to contain the false-positive rate (0.20-0.38 sensitivity at 0.92-0.96 specificity). As a measure of cognitive ability, a perfect BNT-15 score suggests above-average performance; ≤11 suggests clinically significant deficits. Demographically adjusted prorated BNT-15 T-scores correlated strongly (0.86) with the newly developed z-scores. CONCLUSION: Given its brevity (<5 minutes) and ease of administration and scoring, the BNT-15 can function as a useful and cost-effective screening measure of both object-naming/English proficiency and performance validity. The proposed clinical classification ranges provide useful guidelines for practitioners.
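The z-score and T-score metrics mentioned above are related by a fixed linear transformation (T = 10z + 50). A minimal sketch; the reference mean and SD below are placeholder values, not BNT-15 norms:

```python
def to_z_and_t(raw, reference_mean, reference_sd):
    """Convert a raw score to a z-score and the equivalent T-score.

    The reference mean/SD must come from an appropriate normative
    sample; the values used in the example are placeholders, not
    published BNT-15 norms.
    """
    z = (raw - reference_mean) / reference_sd
    return z, 50 + 10 * z

z, t = to_z_and_t(raw=12, reference_mean=13, reference_sd=2)
```

A raw score half an SD below the reference mean thus maps to z = -0.5 and T = 45, which is how classification ranges expressed in one metric translate directly into the other.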
Affiliation(s)
- Kelly Y An: Private Practice, London, Ontario, Canada
- Parveen Razvi: Faculty of Nursing, University of Windsor, Windsor, Ontario, Canada
11. Holcomb M, Pyne S, Cutler L, Oikle DA, Erdodi LA. Take Their Word for It: The Inventory of Problems Provides Valuable Information on Both Symptom and Performance Validity. J Pers Assess 2022:1-11. [PMID: 36041087] [DOI: 10.1080/00223891.2022.2114358]
Abstract
This study was designed to compare the validity of the Inventory of Problems (IOP-29) and its newly developed memory module (IOP-M) in 150 patients clinically referred for neuropsychological assessment. Criterion groups were psychometrically derived based on established performance and symptom validity tests (PVTs and SVTs). The criterion-related validity of the IOP-29 was compared to that of the Negative Impression Management scale of the Personality Assessment Inventory (NIM-PAI), and the criterion-related validity of the IOP-M was compared to that of Trial 1 of the Test of Memory Malingering (TOMM-1). The IOP-29 correlated significantly more strongly (z = 2.50, p = .01) with criterion PVTs than the NIM-PAI (r(IOP-29) = .34; r(NIM-PAI) = .06), generating similar overall correct classification values (OCC(IOP-29): 79-81%; OCC(NIM-PAI): 71-79%). Similarly, the IOP-M correlated significantly more strongly (z = 2.26, p = .02) with criterion PVTs than the TOMM-1 (r(IOP-M) = .79; r(TOMM-1) = .59), generating similar overall correct classification values (OCC(IOP-M): 89-91%; OCC(TOMM-1): 84-86%). Findings converge with the cumulative evidence that the IOP-29 and IOP-M are valuable additions to comprehensive neuropsychological batteries. Results also confirm that symptom and performance validity are distinct clinical constructs, and that domain specificity should be considered when calibrating instruments.
Affiliation(s)
| | | | - Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor
| | | | - Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor
| |
12. Erdodi LA. Multivariate Models of Performance Validity: The Erdodi Index Captures the Dual Nature of Non-Credible Responding (Continuous and Categorical). Assessment 2022:10731911221101910. [PMID: 35757996] [DOI: 10.1177/10731911221101910]
Abstract
This study was designed to examine the classification accuracy of the Erdodi Index (EI-5), a novel method for aggregating validity indicators that takes into account both the number and extent of performance validity test (PVT) failures. Archival data were collected from a mixed clinical/forensic sample of 452 adults referred for neuropsychological assessment. The classification accuracy of the EI-5 was evaluated against established free-standing PVTs. The EI-5 achieved a good combination of sensitivity (.65) and specificity (.97), correctly classifying 92% of the sample. Its classification accuracy was comparable with that of another free-standing PVT. An indeterminate range between Pass and Fail emerged as a legitimate third outcome of performance validity assessment, indicating that the underlying construct is an inherently continuous variable. Results support the use of the EI model as a practical and psychometrically sound method of aggregating multiple embedded PVTs into a single-number summary of performance validity. Combining free-standing PVTs with the EI-5 resulted in a better separation between credible and non-credible profiles, demonstrating incremental validity. Findings are consistent with recent endorsements of a three-way outcome for PVTs (Pass, Borderline, and Fail).
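The aggregation idea behind the EI (counting failures while also weighting how far each score falls below its cutoff) can be sketched as follows; the banding and cutoff values here are illustrative placeholders, not the published EI-5 calibration:

```python
def ei_component(score, cutoffs):
    """Convert one embedded PVT score into an ordinal failure rating.

    cutoffs -- descending list of thresholds; score <= cutoffs[i] earns
               rating i + 1 (higher = stronger evidence of invalidity).
    The banding is illustrative, not the published EI-5 calibration.
    """
    rating = 0
    for i, c in enumerate(cutoffs):
        if score <= c:
            rating = i + 1
    return rating

def erdodi_index(component_ratings):
    """Sum the ratings: the total preserves both the number and the
    severity of embedded PVT failures."""
    return sum(component_ratings)

# Three hypothetical embedded PVTs sharing the same illustrative bands:
bands = [45, 42, 39]
ratings = [ei_component(41, bands),   # two bands below the liberal cutoff
           ei_component(50, bands),   # clean pass
           ei_component(37, bands)]   # deepest band
index = erdodi_index(ratings)
```

Because a profile with one deep failure and a profile with several shallow failures can reach the same total, the index behaves as a continuous measure on which Pass, indeterminate, and Fail ranges can then be imposed.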
13. Brantuo MA, An K, Biss RK, Ali S, Erdodi LA. Neurocognitive Profiles Associated With Limited English Proficiency in Cognitively Intact Adults. Arch Clin Neuropsychol 2022; 37:1579-1600. [PMID: 35694764] [DOI: 10.1093/arclin/acac019]
Abstract
OBJECTIVE: The objective of the present study was to examine the neurocognitive profiles associated with limited English proficiency (LEP). METHOD: A brief neuropsychological battery including measures with high (HVM) and low verbal mediation (LVM) was administered to 80 university students: 40 native speakers of English (NSEs) and 40 with LEP. RESULTS: Consistent with previous research, individuals with LEP performed more poorly on HVM measures and equivalent to NSEs on LVM measures, with some notable exceptions. CONCLUSIONS: Low scores on HVM tests should not be interpreted as evidence of acquired cognitive impairment in individuals with LEP, because these measures may systematically underestimate cognitive ability in this population. These findings have important clinical and educational implications.
Affiliation(s)
- Maame A Brantuo: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Kelly An: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Renee K Biss: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Sami Ali: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
14. Ali S, Elliott L, Biss RK, Abumeeiz M, Brantuo M, Kuzmenka P, Odenigbo P, Erdodi LA. The BNT-15 provides an accurate measure of English proficiency in cognitively intact bilinguals: a study in cross-cultural assessment. Appl Neuropsychol Adult 2022; 29:351-363. [PMID: 32449371] [DOI: 10.1080/23279095.2020.1760277]
Abstract
This study was designed to replicate earlier reports on the utility of the Boston Naming Test - Short Form (BNT-15) as an index of limited English proficiency (LEP). Twenty-eight English-Arabic bilingual student volunteers were administered the BNT-15 as part of a brief battery of cognitive tests. The majority (23) were women, and half had LEP. Mean age was 21.1 years. The BNT-15 was an excellent psychometric marker of LEP status (area under the curve: .990-.995). Participants with LEP underperformed on several cognitive measures (verbal comprehension, visuomotor processing speed, single word reading, and performance validity tests). Although no participant with LEP failed the accuracy cutoff on the Word Choice Test, 35.7% of them failed the time cutoff. Overall, LEP was associated with an increased risk of failing performance validity tests. Previously published BNT-15 validity cutoffs had unacceptably low specificity (.33-.52) among participants with LEP. The BNT-15 has the potential to serve as a quick and effective objective measure of LEP. Students with LEP may need academic accommodations to compensate for slower test completion times. Likewise, LEP status should be considered when interpreting performance validity test failures, to protect against false-positive errors.
Affiliation(s)
- Sami Ali: Department of Psychology, University of Windsor, Windsor, Canada
- Lauren Elliott: Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Renee K Biss: Department of Psychology, University of Windsor, Windsor, Canada
- Mustafa Abumeeiz: Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Maame Brantuo: Department of Psychology, University of Windsor, Windsor, Canada
- Paula Odenigbo: Department of Psychology, University of Windsor, Windsor, Canada
- Laszlo A Erdodi: Department of Psychology, University of Windsor, Windsor, Canada
15. Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing Performance Validity Cutoffs on the Boston Naming Test (BNT) Is Specific, but Insensitive to Non-Credible Responding. Dev Neuropsychol 2022; 47:17-31. [PMID: 35157548] [DOI: 10.1080/87565641.2022.2038602]
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
Affiliation(s)
- Shayna Nussbaum: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson: Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
|
16
|
Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021; 49:179-213. [PMID: 34420986 DOI: 10.3233/nre-218020] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
OBJECTIVE This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD Archival data were collected from 167 patients (52.4% male; mean age = 39.7 years) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to their individual components. Instrumentation artifacts are endemic to PVTs, and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical, models. As the number/severity of embedded PVT failures accumulates, assessors must consider the possibility of a non-credible presentation and its clinical implications for neurorehabilitation.
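The multivariate logic described above (aggregating failures across embedded PVTs, with Borderline as a legitimate third outcome) can be sketched as follows; the thresholds and function name are illustrative placeholders, not the study's calibrated cutoffs:

```python
def multivariate_outcome(pvt_failures, borderline_at=2, fail_at=3):
    """Three-way performance validity outcome from embedded PVT results.

    pvt_failures: one boolean per embedded PVT (True = failed).
    Thresholds are placeholder assumptions; real multivariate cutoffs
    must be derived empirically to control the false positive rate.
    """
    n_fail = sum(pvt_failures)
    if n_fail >= fail_at:
        return "Fail"
    if n_fail >= borderline_at:
        return "Borderline"
    return "Pass"
```

Raising `fail_at` is the "more stringent multivariate cutoff" move the abstract credits with controlling false positive rates.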
|
17
|
Abeare CA, An K, Tyson B, Holcomb M, Cutler L, May N, Erdodi LA. The emotion word fluency test as an embedded performance validity indicator - Alone and in a multivariate validity composite. APPLIED NEUROPSYCHOLOGY. CHILD 2021; 11:713-724. [PMID: 34424798 DOI: 10.1080/21622965.2021.1939027] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
OBJECTIVE This project was designed to cross-validate existing performance validity cutoffs embedded within measures of verbal fluency (FAS and animals) and develop new ones for the Emotion Word Fluency Test (EWFT), a novel measure of category fluency. METHOD The classification accuracy of the verbal fluency tests was examined in two samples (70 cognitively healthy university students and 52 clinical patients) against psychometrically defined criterion measures. RESULTS A demographically adjusted T-score of ≤31 on the FAS was specific (.88-.97) to noncredible responding in both samples. Animals T ≤ 29 achieved high specificity (.90-.93) among students at .27-.38 sensitivity. A more conservative cutoff (T ≤ 27) was needed in the patient sample for a similar combination of sensitivity (.24-.45) and specificity (.87-.93). An EWFT raw score ≤5 was highly specific (.94-.97) but insensitive (.10-.18) to invalid performance. Failing multiple cutoffs improved specificity (.90-1.00) at variable sensitivity (.19-.45). CONCLUSIONS Results help resolve the inconsistency in previous reports, and confirm the overall utility of existing verbal fluency tests as embedded validity indicators. Multivariate models of performance validity assessment are superior to single indicators. The clinical utility and limitations of the EWFT as a novel measure are discussed.
Affiliation(s)
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
| | - Kelly An
- Private Practice, London, Ontario, Canada
| | - Brad Tyson
- Evergreen Health Medical Center, Kirkland, Washington, USA
| | - Matthew Holcomb
- Jefferson Neurobehavioral Group, New Orleans, Louisiana, USA
| | - Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
| | - Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
| | - Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
| |
|
18
|
Abeare K, Romero K, Cutler L, Sirianni CD, Erdodi LA. Flipping the Script: Measuring Both Performance Validity and Cognitive Ability with the Forced Choice Recognition Trial of the RCFT. Percept Mot Skills 2021; 128:1373-1408. [PMID: 34024205 PMCID: PMC8267081 DOI: 10.1177/00315125211019704] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFT-FCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 & ≤17), the RCFT-FCR remained specific (.84-1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFT-FCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFT-FCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
Affiliation(s)
- Kaitlyn Abeare
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | - Kristoffer Romero
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | - Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | | | - Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| |
|
19
|
Sweet JJ, Heilbronner RL, Morgan JE, Larrabee GJ, Rohling ML, Boone KB, Kirkwood MW, Schroeder RW, Suhr JA. American Academy of Clinical Neuropsychology (AACN) 2021 consensus statement on validity assessment: Update of the 2009 AACN consensus conference statement on neuropsychological assessment of effort, response bias, and malingering. Clin Neuropsychol 2021; 35:1053-1106. [PMID: 33823750 DOI: 10.1080/13854046.2021.1896036] [Citation(s) in RCA: 160] [Impact Index Per Article: 53.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Objective: Citation and download data pertaining to the 2009 AACN consensus statement on validity assessment indicated that the topic maintained high interest in subsequent years, during which key terminology evolved and relevant empirical research proliferated. With a general goal of providing current guidance to the clinical neuropsychology community regarding this important topic, the specific update goals were to: identify current key definitions of terms relevant to validity assessment; learn what experts believe should be reaffirmed from the original consensus paper, as well as new consensus points; and incorporate the latest recommendations regarding the use of validity testing, as well as current application of the term 'malingering.' Methods: In the spring of 2019, four of the original 2009 work group chairs and additional experts for each work group were impaneled. A total of 20 individuals shared ideas and writing drafts until reaching consensus on January 21, 2021. Results: Consensus was reached regarding affirmation of prior salient points that continue to garner clinical and scientific support, as well as creation of new points. The resulting consensus statement addresses definitions and differential diagnosis, performance and symptom validity assessment, and research design and statistical issues. Conclusions/Importance: In order to provide bases for diagnoses and interpretations, the current consensus is that all clinical and forensic evaluations must proactively address the degree to which results of neuropsychological and psychological testing are valid. There is a strong and continually growing evidence-based literature on which practitioners can confidently base their judgments regarding the selection and interpretation of validity measures.
Affiliation(s)
- Jerry J Sweet
- Department of Psychiatry & Behavioral Sciences, NorthShore University HealthSystem, Evanston, IL, USA
| | | | | | | | - Martin L Rohling
- Psychology Department, University of South Alabama, Mobile, AL, USA
| | - Kyle B Boone
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
| | - Michael W Kirkwood
- Department of Physical Medicine & Rehabilitation, University of Colorado School of Medicine and Children's Hospital Colorado, Aurora, CO, USA
| | - Ryan W Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine, Wichita, KS, USA
| | - Julie A Suhr
- Psychology Department, Ohio University, Athens, OH, USA
| | | |
|
20
|
Abeare K, Razvi P, Sirianni CD, Giromini L, Holcomb M, Cutler L, Kuzmenka P, Erdodi LA. Introducing Alternative Validity Cutoffs to Improve the Detection of Non-credible Symptom Report on the BRIEF. PSYCHOLOGICAL INJURY & LAW 2021. [DOI: 10.1007/s12207-021-09402-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
|
21
|
Cutler L, Abeare CA, Messa I, Holcomb M, Erdodi LA. This will only take a minute: Time cutoffs are superior to accuracy cutoffs on the forced choice recognition trial of the Hopkins Verbal Learning Test - Revised. APPLIED NEUROPSYCHOLOGY-ADULT 2021; 29:1425-1439. [PMID: 33631077 DOI: 10.1080/23279095.2021.1884555] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of the recently introduced forced-choice recognition trial of the Hopkins Verbal Learning Test - Revised (FCR-HVLT-R) as a performance validity test (PVT) in a clinical sample. Time-to-completion (T2C) for the FCR-HVLT-R was also examined. METHOD Forty-three students were assigned to either the control or the experimental malingering (expMAL) condition. Archival data were collected from 52 adults clinically referred for neuropsychological assessment. Invalid performance was defined using expMAL status, two free-standing PVTs and two validity composites. RESULTS Among students, FCR-HVLT-R ≤11 or T2C ≥45 seconds was specific (0.86-0.93) to invalid performance. Among patients, FCR-HVLT-R ≤11 was specific (0.94-1.00), but relatively insensitive (0.38-0.60) to non-credible responding. T2C ≥35 seconds produced notably higher sensitivity (0.71-0.89), but variable specificity (0.83-0.96). The T2C achieved superior overall correct classification (81-86%) compared to the accuracy score (68-77%). The FCR-HVLT-R provided incremental utility in performance validity assessment compared to previously introduced validity cutoffs on Recognition Discrimination. CONCLUSIONS Combined with T2C, the FCR-HVLT-R has the potential to function as a quick, inexpensive and effective embedded PVT. The time cutoff effectively attenuated the low ceiling of the accuracy score, increasing sensitivity by 19%. Replication in larger and more geographically and demographically diverse samples is needed before the FCR-HVLT-R can be endorsed for routine clinical application.
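Combining the accuracy cutoff with the time-to-completion cutoff, as described above, amounts to an either-indicator decision rule. A sketch using the patient-sample cutoffs quoted in the abstract (≤11 correct, ≥35 seconds); the disjunctive rule and function name are assumptions for illustration, not the authors' scoring algorithm:

```python
def fcr_validity_flag(n_correct, seconds, acc_cutoff=11, time_cutoff=35):
    """Flag non-credible responding on a forced-choice recognition trial.

    Flags if EITHER the accuracy score is at/below the cutoff OR the
    time-to-completion (T2C) is at/above the cutoff. Combining the two
    indicators attenuates the low ceiling of the accuracy score alone.
    """
    fail_accuracy = n_correct <= acc_cutoff
    fail_time = seconds >= time_cutoff
    return fail_accuracy or fail_time
```

A slow but accurate performance (e.g., 12 correct in 40 seconds) is still flagged, which is how the time cutoff raises sensitivity.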
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | - Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | - Isabelle Messa
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | | | - Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| |
|
22
|
Sherman EMS, Slick DJ, Iverson GL. Multidimensional Malingering Criteria for Neuropsychological Assessment: A 20-Year Update of the Malingered Neuropsychological Dysfunction Criteria. Arch Clin Neuropsychol 2020; 35:735-764. [PMID: 32377667 PMCID: PMC7452950 DOI: 10.1093/arclin/acaa019] [Citation(s) in RCA: 152] [Impact Index Per Article: 38.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2019] [Accepted: 03/12/2020] [Indexed: 11/17/2022] Open
Abstract
OBJECTIVES Empirically informed neuropsychological opinion is critical for determining whether cognitive deficits and symptoms are legitimate, particularly in settings where there are significant external incentives for successful malingering. The Slick, Sherman, and Iverson (1999) criteria for malingered neurocognitive dysfunction (MND) are considered a major milestone in the field's operationalization of neurocognitive malingering and have strongly influenced the development of malingering detection methods, including serving as the criterion of malingering in the validation of several performance validity tests (PVTs) and symptom validity tests (SVTs) (Slick, D.J., Sherman, E.M.S., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13(4), 545-561). However, the MND criteria are long overdue for revision to address advances in malingering research and limitations identified by experts in the field. METHOD The MND criteria were critically reviewed, updated with reference to research on malingering, and expanded to address other forms of malingering pertinent to neuropsychological evaluation, such as exaggeration of self-reported somatic and psychiatric symptoms. RESULTS The new proposed criteria simplify diagnostic categories, expand and clarify external incentives, more clearly define the role of compelling inconsistencies, address issues concerning PVTs and SVTs (i.e., number administered, false positives, and redundancy), better define the role of SVTs and of marked discrepancies indicative of malingering, and most importantly, clearly define exclusionary criteria based on the last two decades of research on malingering in neuropsychology. Lastly, the new criteria provide specifiers to better describe clinical presentations for use in neuropsychological assessment.
CONCLUSIONS The proposed multidimensional malingering criteria that define cognitive, somatic, and psychiatric malingering for use in neuropsychological assessment are presented.
Affiliation(s)
| | | | - Grant L Iverson
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, USA
- Spaulding Rehabilitation Hospital and Spaulding Research Institute, Charlestown, MA, USA
- Home Base, A Red Sox Foundation and Massachusetts General Hospital Program, Charlestown, MA, USA
| |
|
23
|
Giromini L, Viglione DJ, Zennaro A, Maffei A, Erdodi LA. SVT Meets PVT: Development and Initial Validation of the Inventory of Problems – Memory (IOP-M). PSYCHOLOGICAL INJURY & LAW 2020. [DOI: 10.1007/s12207-020-09385-8] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
24
|
Martin PK, Schroeder RW, Olsen DH, Maloy H, Boettcher A, Ernst N, Okut H. A systematic review and meta-analysis of the Test of Memory Malingering in adults: Two decades of deception detection. Clin Neuropsychol 2019; 34:88-119. [DOI: 10.1080/13854046.2019.1637027] [Citation(s) in RCA: 68] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Phillip K. Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | - Ryan W. Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | - Daniel H. Olsen
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | - Halley Maloy
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | | | - Nathan Ernst
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | - Hayrettin Okut
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| |
|
25
|
Geographic Variation and Instrumentation Artifacts: in Search of Confounds in Performance Validity Assessment in Adults with Mild TBI. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09354-w] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
|
26
|
|
27
|
Mooney SR, Stafford J, Seats E. Medical Evaluation Board Involvement, Non-Credible Cognitive Testing, and Emotional Response Bias in Concussed Service Members. Mil Med 2019; 183:e546-e554. [PMID: 29590406 DOI: 10.1093/milmed/usy038] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2017] [Accepted: 02/26/2018] [Indexed: 11/13/2022] Open
Abstract
Introduction Military Service Members (SMs) with post-concussive symptoms are commonly referred for further evaluation and possible treatment to Department of Defense Traumatic Brain Injury (TBI) Clinics, where neuropsychological screenings/evaluations are conducted. Although understudied to date, base rates of noncredible task engagement on performance validity testing (PVT) during cognitive screenings/evaluations in military settings appear to be high. The current study objectives were to: (1) examine the base rates of noncredible PVT performances among SMs undergoing routine clinical or Medical Evaluation Board (MEB) related workups, using multiple objective performance-based indicators; (2) determine whether involvement in the MEB is associated with PVT or symptom exaggeration/symptom validity testing (SVT) results; (3) elucidate which psychiatric symptoms are associated with noncredible PVT performances; and (4) determine whether MEB participation moderates the relationship between psychological symptom exaggeration and whether or not the SM goes on to demonstrate PVT failures - or vice versa. Materials and Methods Retrospective study of 71 consecutive military concussion cases drawn from a DoD TBI Clinic neuropsychology database. As part of the neuropsychological evaluations, patients completed several objective performance-based PVTs and SVTs. Results Mean (SD) age of SMs was 36.0 (9.5) years, ranging from 19 to 59, and 93% of the sample was male. Self-identified ethnicity was 62% Non-Hispanic White, 22.5% African American, and 15.5% Hispanic or Latino. The majority of the sample (97%) was Active Duty Army, and 51% were involved in the MEB at the time of evaluation. About one-third (35.9%) of routine clinical patients failed one or more PVT indicators (12.8% failed 2), while PVT failure rates among MEB patients ranged from 15.6% to 37.5% (i.e., failed 2 or 1 PVTs, respectively).
Base rates of failure on one or more PVTs did not differ between routine clinical and MEB patients (p = 0.94). MEB involvement was not associated with increased emotional symptom response bias compared to routine clinical patients. PVT failures were positively correlated with somatization, anxiety, depressive symptoms, suspiciousness and hostility, atypical perceptions/alienation/subjective cognitive difficulties, borderline personality traits/features, and a penchant for aggression, in addition to symptom over-endorsement/exaggeration. No differences between routine clinical and MEB patients were found across other SVT indicators. MEB status did not moderate the relationship between SVT and PVT results. Conclusion Study results are broadly consistent with prior published studies documenting low to moderately high base rates of noncredible task engagement during neuropsychological evaluations in military and veteran settings. Results contrast with prior studies suggesting that MEB involvement is associated with an increased likelihood of poor PVT performance. This is the first study to show that MEB involvement did not enhance/strengthen the association between PVT performances and evidence of symptom exaggeration on SVTs. Consistent with prior studies, these results highlight that the same SMs who fail PVTs also tend to be the ones who go on to endorse a myriad of psychiatric symptoms and proclivities. The implications of variable or poor task engagement during routine clinical and MEB neuropsychological evaluations in military settings for treatment and disposition planning cannot be overstated.
Affiliation(s)
- Scott R Mooney
- Dwight D. Eisenhower Army Medical Center - TBI Clinic, Neuroscience & Rehabilitation Center, 300 E. Hospital Road, Fort Gordon, GA
| | - Jane Stafford
- University of South Carolina-Aiken, 471 University Parkway, Aiken, SC
| | - Elizabeth Seats
- University of South Carolina-Aiken, 471 University Parkway, Aiken, SC
| |
|
28
|
Rai JK, Erdodi LA. Impact of criterion measures on the classification accuracy of TOMM-1. APPLIED NEUROPSYCHOLOGY-ADULT 2019; 28:185-196. [PMID: 31187632 DOI: 10.1080/23279095.2019.1613994] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
This study was designed to examine the effect of various criterion measures on the classification accuracy of Trial 1 of the Test of Memory Malingering (TOMM-1), a free-standing performance validity test (PVT). Archival data were collected from a case sequence of 91 patients (mean age = 42.2 years; mean education = 12.7 years) clinically referred for neuropsychological assessment. Trial 2 and the Retention trial of the TOMM, the Word Choice Test, and three validity composites were used as criterion PVTs. Classification accuracy varied systematically as a function of the criterion PVT. TOMM-1 ≤ 43 emerged as the optimal cutoff, resulting in a wide range of sensitivity (.47-1.00), with perfect overall specificity. Failing the TOMM-1 was unrelated to age, education or gender, but was associated with elevated self-reported depression. Results support the utility of the TOMM-1 as an independent, free-standing, single-trial PVT. Consistent with previous reports, the choice of criterion measure influenced parameter estimates of the PVT being calibrated. The methodological implications of modality specificity for PVT research and clinical/forensic practice should be considered when evaluating cutoffs or interpreting scores in the failing range.
Affiliation(s)
- Jaspreet K Rai
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada; University of Windsor, Edmonton, Alberta, Canada
| | - Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
| |
|
29
|
Erdodi LA, Taylor B, Sabelli AG, Malleck M, Kirsch NL, Abeare CA. Demographically Adjusted Validity Cutoffs on the Finger Tapping Test Are Superior to Raw Score Cutoffs in Adults with TBI. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09352-y] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
30
|
Lippa SM, Lange RT, French LM, Iverson GL. Performance Validity, Neurocognitive Disorder, and Post-concussion Symptom Reporting in Service Members with a History of Mild Traumatic Brain Injury. Arch Clin Neuropsychol 2019; 33:606-618. [PMID: 29069278 DOI: 10.1093/arclin/acx098] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2017] [Accepted: 09/26/2017] [Indexed: 11/13/2022] Open
Abstract
Objective To examine the influence of different performance validity test (PVT) cutoffs on neuropsychological performance, post-concussion symptoms, and rates of neurocognitive disorder and postconcussional syndrome following mild traumatic brain injury (MTBI) in active duty service members. Method Participants were 164 service members (Age: M = 28.1 years [SD = 7.3]) evaluated on average 4.1 months (SD = 5.0) following injury. Participants were divided into three mutually exclusive groups using original and alternative cutoff scores on the Test of Memory Malingering (TOMM) and the Effort Index (EI) from the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS): (a) PVT-Pass, n = 85; (b) Alternative PVT-Fail, n = 53; and (c) Original PVT-Fail, n = 26. Participants also completed the Neurobehavioral Symptom Inventory. Results The PVT-Pass group performed better on cognitive testing and reported fewer symptoms than the two PVT-Fail groups. The Original PVT-Fail group performed more poorly on cognitive testing and reported more symptoms than the Alternative PVT-Fail group. Both PVT-Fail groups were more likely to meet DSM-5 Category A criteria for mild and major neurocognitive disorder and symptom reporting criteria for postconcussional syndrome than the PVT-Pass group. When alternative PVT cutoffs were used instead of original PVT cutoffs, the number of participants with valid data meeting cognitive testing criteria for neurocognitive disorder or postconcussional syndrome decreased dramatically. Conclusion PVT performance is significantly and meaningfully related to overall neuropsychological outcome. By using only original cutoffs, clinicians and researchers may miss people with invalid performances.
Affiliation(s)
- Sara M Lippa
- Defense and Veterans Brain Injury Center, Bethesda, MD, USA; National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, MD, USA
| | - Rael T Lange
- Defense and Veterans Brain Injury Center, Bethesda, MD, USA; National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, MD, USA; Department of Psychiatry, University of British Columbia, Vancouver, BC, Canada
| | - Louis M French
- Defense and Veterans Brain Injury Center, Bethesda, MD, USA; National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, MD, USA; Center for Neuroscience and Regenerative Medicine, Bethesda, MD, USA; Department of Physical Medicine and Rehabilitation, Center for Rehabilitation Sciences Research, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
| | - Grant L Iverson
- Defense and Veterans Brain Injury Center, Bethesda, MD, USA; Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, USA; Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital, Charlestown, MA, USA; Home Base, A Red Sox Foundation and Massachusetts General Hospital Program, Boston, MA, USA
| |
|
31
|
Schroeder RW, Olsen DH, Martin PK. Classification accuracy rates of four TOMM validity indices when examined independently and jointly. Clin Neuropsychol 2019; 33:1373-1387. [DOI: 10.1080/13854046.2019.1619839] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Ryan W. Schroeder
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | - Daniel H. Olsen
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | - Phillip K. Martin
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| |
|
32
|
Waldron-Perrine B, Gabel NM, Seagly K, Kraal AZ, Pangilinan P, Spencer RJ, Bieliauskas L. Montreal Cognitive Assessment as a screening tool: Influence of performance and symptom validity. Neurol Clin Pract 2019; 9:101-108. [PMID: 31041123 PMCID: PMC6461423 DOI: 10.1212/cpj.0000000000000604] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2018] [Accepted: 12/03/2018] [Indexed: 11/15/2022]
Abstract
BACKGROUND We evaluated Montreal Cognitive Assessment (MoCA) performance in a veteran traumatic brain injury (TBI) population, considering performance validity test (PVT) and symptom validity test (SVT) data, and explored associations of MoCA performance with neuropsychological test performance and self-reported distress. METHODS Of 198 consecutively referred veterans to a Veterans Administration TBI/Polytrauma Clinic, 117 were included in the final sample. The MoCA was administered as part of the evaluation. Commonly used measures of neuropsychological functioning and performance and symptom validity were also administered to aid in diagnosis. RESULTS Successively worse MoCA performances were associated with a greater number of PVT failures (ps < 0.05). Failure of both the SVT and at least 1 PVT yielded the lowest MoCA scores. Self-reported distress (both posttraumatic stress disorder symptoms and neurobehavioral cognitive symptoms) was also related to MoCA performance. CONCLUSIONS Performance on the MoCA is influenced by task engagement and symptom validity. Causal inferences about neurologic and neurocognitive impairment, particularly in the context of mild TBI, wherein the natural course of recovery is well known, should therefore be made cautiously when such inferences are based heavily on MoCA scores. Neuropsychologists are well versed in the assessment of performance and symptom validity and thus may be well suited to explore the influences of abnormal performances on cognitive screening.
Collapse
Affiliation(s)
- Brigid Waldron-Perrine
- Department of Physical Medicine and Rehabilitation (BW-P), Wayne State University School of Medicine, Detroit; Department of Physical Medicine and Rehabilitation (NMG, KS, PP), Michigan Medicine, University of Michigan; Department of Psychology (AZK), University of Michigan; VA Health System (PP); Mental Health Service (116B) (RJS), VA Ann Arbor Healthcare; Neuropsychology Section of Psychiatry (RJS, LB), Michigan Medicine, Ann Arbor, MI
| | - Nicolette M Gabel
| | - Katharine Seagly
| | - A Zarina Kraal
| | - Percival Pangilinan
| | - Robert J Spencer
| | - Linas Bieliauskas
| |
Collapse
|
33
|
The Grooved Pegboard Test as a Validity Indicator—a Study on Psychogenic Interference as a Confound in Performance Validity Research. PSYCHOLOGICAL INJURY & LAW 2018. [DOI: 10.1007/s12207-018-9337-7] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
34
|
One-Minute PVT: Further Evidence for the Utility of the California Verbal Learning Test—Children’s Version Forced Choice Recognition Trial. JOURNAL OF PEDIATRIC NEUROPSYCHOLOGY 2018. [DOI: 10.1007/s40817-018-0057-4] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
35
|
Differentiating epilepsy from psychogenic nonepileptic seizures using neuropsychological test data. Epilepsy Behav 2018; 87:39-45. [PMID: 30172082 DOI: 10.1016/j.yebeh.2018.08.010] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/06/2018] [Revised: 07/25/2018] [Accepted: 08/12/2018] [Indexed: 11/21/2022]
Abstract
OBJECTIVE Differentiating epileptic seizures (ES) from psychogenic nonepileptic seizures (PNES) represents a challenging differential diagnosis with important treatment implications. This study was designed to explore the utility of neuropsychological test scores in differentiating ES from PNES. METHOD Psychometric data from 72 patients with ES and 33 patients with PNES were compared on various tests of cognitive ability and performance validity. Individual measures that best discriminated the diagnoses were then entered as predictors in a logistic regression equation with group membership (ES vs. PNES) as the criterion. RESULTS On most tests of cognitive ability, the PNES sample outperformed the ES sample (medium-large effect) and was less likely to fail the Reliable Digit Span. However, patients with PNES failed two embedded validity indicators at significantly higher rates (risk ratios (RR): 2.45-4.16). There were no group differences on the Test of Memory Malingering (TOMM). A logistic regression equation based on seven neuropsychological tests correctly classified 85.1% of patients. The cutoff with perfect specificity was associated with 0.47 sensitivity. CONCLUSIONS Consistent with previous research, the utility of psychometric methods of differential diagnosis is limited by the complex neurocognitive profiles associated with ES and PNES. Although individual measures might help differentiate ES from PNES, multivariate assessment models have superior discriminant power. The strongest psychometric evidence for PNES appears to be a consistent lack of impairment on tests sensitive to diffuse neurocognitive deficits such as processing speed, working memory, and verbal fluency. While video-electroencephalogram (EEG) monitoring is the gold standard of differential diagnosis, psychometric testing has the potential to enhance clinical decision-making, particularly in complex or unclear cases such as patients with nondiagnostic video-EEGs. Adopting a standardized, fixed neuropsychological battery at epilepsy centers would advance research on the differential diagnostic power of psychometric testing.
Collapse
|
36
|
Radomski MV, Davidson LF, Smith L, Finkelstein M, Cecchini A, Heaton KJ, McCulloch K, Scherer M, Weightman MM. Toward Return to Duty Decision-Making After Military Mild Traumatic Brain Injury: Preliminary Validation of the Charge of Quarters Duty Test. Mil Med 2018; 183:e214-e222. [DOI: 10.1093/milmed/usx045] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2017] [Indexed: 11/13/2022] Open
Affiliation(s)
- Mary V Radomski
- Courage Kenny Research Center, 800 E. 28th Street at Chicago, Minneapolis, MN
| | - Leslie F Davidson
- Clinical Research and Leadership, School of Medicine and Health Sciences, George Washington University, 2100 Pennsylvania Avenue, Suite 358, Washington, DC
| | - Laurel Smith
- Military Performance Division, United States Army Research Institute of Environmental Medicine, 10 General Greene Ave, Natick, MA
| | - Marsha Finkelstein
- Courage Kenny Research Center, 800 E. 28th Street at Chicago, Minneapolis, MN
| | - Amy Cecchini
- Womack Army Medical Center, Intrepid Spirit, 3908 Longstreet Road Building 3-4303, Fort Bragg, NC
| | - Kristin J Heaton
- Military Performance Division, United States Army Research Institute of Environmental Medicine, 10 General Greene Ave, Natick, MA
| | - Karen McCulloch
- CB 7135, Division of Physical Therapy, Department of Allied Health, School of Medicine, University of North Carolina-Chapel Hill, Chapel Hill, NC
| | - Matthew Scherer
- Clinical and Rehabilitative Medicine Research Program, Medical Research and Materiel Command, 504 Scott Street Building 722, Fort Detrick, MD
| | | |
Collapse
|
37
|
An KY, Charles J, Ali S, Enache A, Dhuga J, Erdodi LA. Reexamining performance validity cutoffs within the Complex Ideational Material and the Boston Naming Test–Short Form using an experimental malingering paradigm. J Clin Exp Neuropsychol 2018; 41:15-25. [DOI: 10.1080/13803395.2018.1483488] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- Kelly Y. An
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Jordan Charles
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Sami Ali
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Anca Enache
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Jasmine Dhuga
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| |
Collapse
|
38
|
Kanser RJ, Rapport LJ, Bashem JR, Hanks RA. Detecting malingering in traumatic brain injury: Combining response time with performance validity test accuracy. Clin Neuropsychol 2018; 33:90-107. [DOI: 10.1080/13854046.2018.1440006] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Robert J. Kanser
- Department of Psychology, Wayne State University, Detroit, MI, USA
| | - Lisa J. Rapport
- Department of Psychology, Wayne State University, Detroit, MI, USA
| | - Jesse R. Bashem
- Department of Psychology, Wayne State University, Detroit, MI, USA
| | - Robin A. Hanks
- Department of Physical Medicine and Rehabilitation, Wayne State University, Detroit, MI, USA
| |
Collapse
|
39
|
Measuring Soldier Performance During the Patrol-Exertion Multitask: Preliminary Validation of a Postconcussive Functional Return-to-Duty Metric. Arch Phys Med Rehabil 2018; 99:S79-S85. [DOI: 10.1016/j.apmr.2017.04.012] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Revised: 04/12/2017] [Accepted: 04/15/2017] [Indexed: 11/22/2022]
|
40
|
Erdodi LA, Dunn AG, Seke KR, Charron C, McDermott A, Enache A, Maytham C, Hurtubise JL. The Boston Naming Test as a Measure of Performance Validity. PSYCHOLOGICAL INJURY & LAW 2018. [DOI: 10.1007/s12207-017-9309-3] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
41
|
Erdodi LA, Rai JK. A single error is one too many: Examining alternative cutoffs on Trial 2 of the TOMM. Brain Inj 2017; 31:1362-1368. [DOI: 10.1080/02699052.2017.1332386] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Jaspreet K. Rai
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| |
Collapse
|
42
|
Dorociak KE, Schulze ET, Piper LE, Molokie RE, Janecek JK. Performance validity testing in a clinical sample of adults with sickle cell disease. Clin Neuropsychol 2017. [PMID: 28632024 DOI: 10.1080/13854046.2017.1339830] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
OBJECTIVE Neuropsychologists utilize performance validity tests (PVTs) as objective means for drawing inferences about performance validity. The Test of Memory Malingering (TOMM) is a well-validated, stand-alone PVT, and the Reliable Digit Span (RDS) and Reliable Digit Span-Revised (RDS-R) from the Digit Span subtest of the WAIS-IV are commonly employed embedded PVTs. While research has demonstrated the utility of these PVTs with various clinical samples, no research has investigated their use in adults with sickle cell disease (SCD), a condition associated with multiple neurological, physical, and psychiatric symptoms. Thus, the purpose of this study was to explore PVT performance in adults with SCD. METHOD Fifty-four adults with SCD (mean age = 40.61, SD = 12.35) were consecutively referred by their hematologist for a routine clinical outpatient neuropsychological evaluation. During the evaluation, participants were administered the TOMM (Trials 1 and 2), neuropsychological measures including the WAIS-IV Digit Span subtest, and mood and behavioral questionnaires. RESULTS The average score on the TOMM was 47.70 (SD = 3.47, range = 34-50) for Trial 1 and 49.69 (SD = 1.66, range = 38-50) for Trial 2. Only one participant failed Trial 2 of the TOMM, yielding a 98.1% pass rate for the sample. Pass rates at various RDS and RDS-R values were calculated with TOMM Trial 2 performance as an external criterion. CONCLUSIONS Results support the use of the TOMM as a measure of performance validity for individuals with SCD, while RDS and RDS-R should be interpreted with caution in this population.
Collapse
Affiliation(s)
- Katherine E Dorociak
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
| | - Evan T Schulze
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
| | - Lauren E Piper
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
| | - Robert E Molokie
- Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
| | - Julie K Janecek
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
| |
Collapse
|
43
|
Psychometric Markers of Genuine and Feigned Neurodevelopmental Disorders in the Context of Applying for Academic Accommodations. PSYCHOLOGICAL INJURY & LAW 2017. [DOI: 10.1007/s12207-017-9287-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
44
|
Erdodi LA, Lichtenstein JD. Invalid before impaired: an emerging paradox of embedded validity indicators. Clin Neuropsychol 2017; 31:1029-1046. [DOI: 10.1080/13854046.2017.1323119] [Citation(s) in RCA: 52] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Laszlo A. Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Canada
| | - Jonathan D. Lichtenstein
- Department of Psychiatry, Neuropsychology Services, Geisel School of Medicine at Dartmouth, Lebanon, NH, USA
| |
Collapse
|
45
|
Erdodi LA, Tyson BT, Abeare CA, Zuccato BG, Rai JK, Seke KR, Sagar S, Roth RM. Utility of critical items within the Recognition Memory Test and Word Choice Test. APPLIED NEUROPSYCHOLOGY. ADULT 2017; 25:327-339. [DOI: 10.1080/23279095.2017.1298600] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Department of Psychiatry, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire, USA
| | - Bradley T. Tyson
- Department of Psychiatry, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire, USA
- Western Washington Medical Group, Everett, Washington, USA
| | | | - Brandon G. Zuccato
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | - Jaspreet K. Rai
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | - Kristian R. Seke
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | - Sanya Sagar
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | - Robert M. Roth
- Department of Psychiatry, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire, USA
| |
Collapse
|
46
|
An KY, Kaploun K, Erdodi LA, Abeare CA. Performance validity in undergraduate research participants: a comparison of failure rates across tests and cutoffs. Clin Neuropsychol 2016; 31:193-206. [DOI: 10.1080/13854046.2016.1217046] [Citation(s) in RCA: 53] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Affiliation(s)
- Kelly Y. An
- Department of Psychology, University of Windsor, Windsor, Canada
| | - Kristen Kaploun
- Department of Psychology, University of Windsor, Windsor, Canada
| | - Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
| | | |
Collapse
|
47
|
Eglit GML, Lynch JK, McCaffrey RJ. Not all performance validity tests are created equal: The role of recollection and familiarity in the Test of Memory Malingering and Word Memory Test. J Clin Exp Neuropsychol 2016; 39:173-189. [DOI: 10.1080/13803395.2016.1210573] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
48
|
Fazio RL, Denning JH, Denney RL. TOMM Trial 1 as a performance validity indicator in a criminal forensic sample. Clin Neuropsychol 2016; 31:251-267. [DOI: 10.1080/13854046.2016.1213316] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Affiliation(s)
| | - John H. Denning
- Ralph H. Johnson VA Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
| | - Robert L. Denney
- Neuropsychological Associates of Southwest Missouri, Springfield, MO, USA
| |
Collapse
|
49
|
Jones A. Cutoff Scores for MMPI-2 and MMPI-2-RF Cognitive-Somatic Validity Scales for Psychometrically Defined Malingering Groups in a Military Sample. Arch Clin Neuropsychol 2016; 31:786-801. [DOI: 10.1093/arclin/acw035] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/02/2016] [Indexed: 11/13/2022] Open
|
50
|
The BDAE Complex Ideational Material—a Measure of Receptive Language or Performance Validity? PSYCHOLOGICAL INJURY & LAW 2016. [DOI: 10.1007/s12207-016-9254-6] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|