1. Crişan I, Erdodi L. Examining the cross-cultural validity of the test of memory malingering and the Rey 15-item test. Appl Neuropsychol Adult 2024; 31:721-731. [PMID: 35476611 DOI: 10.1080/23279095.2022.2064753]
Abstract
OBJECTIVE This study was designed to investigate the cross-cultural validity of two freestanding performance validity tests (PVTs), the Test of Memory Malingering - Trial 1 (TOMM-1) and the Rey Fifteen Item Test (Rey-15) in Romanian-speaking patients. METHODS The TOMM-1 and Rey-15 free recall (FR) and the combination score incorporating the recognition trial (COMB) were administered to a mixed clinical sample of 61 adults referred for cognitive evaluation, 24 of whom had external incentives to appear impaired. Average scores on PVTs were compared between the two groups. Classification accuracies were computed using one PVT against another. RESULTS Patients with identifiable external incentives to appear impaired produced significantly lower scores and more errors on validity indicators. The largest effect sizes emerged on TOMM-1 (Cohen's d = 1.00-1.19). TOMM-1 was a significant predictor of the Rey-15 COMB ≤20 (AUC = .80; .38 sensitivity; .89 specificity at a cutoff of ≤39). Similarly, both Rey-15 indicators were significant predictors of TOMM-1 at ≤39 as the criterion (AUCs = .73-.76; .33 sensitivity; .89-.90 specificity). CONCLUSION Results offer a proof of concept for the cross-cultural validity of the TOMM-1 and Rey-15 in a Romanian clinical sample.
Affiliation(s)
- Iulia Crişan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Laszlo Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
2. Crișan I, Ali S, Cutler L, Matei A, Avram L, Erdodi LA. Geographic variability in limited English proficiency: A cross-cultural study of cognitive profiles. J Int Neuropsychol Soc 2023; 29:972-983. [PMID: 37246143 DOI: 10.1017/s1355617723000280]
Abstract
OBJECTIVE This study was designed to evaluate the effect of limited English proficiency (LEP) on neurocognitive profiles. METHOD Romanian (LEP-RO; n = 59) and Arabic (LEP-AR; n = 30) native speakers were compared to Canadian native speakers of English (NSE; n = 24) on a strategically selected battery of neuropsychological tests. RESULTS As predicted, participants with LEP demonstrated significantly lower performance on tests with high verbal mediation relative to US norms and the NSE sample (large effects). In contrast, several tests with low verbal mediation were robust to LEP. However, clinically relevant deviations from this general pattern were observed. The level of English proficiency varied significantly within the LEP-RO and was associated with a predictable performance pattern on tests with high verbal mediation. CONCLUSIONS The heterogeneity in cognitive profiles among individuals with LEP challenges the notion that LEP status is a unitary construct. The level of verbal mediation is an imperfect predictor of the performance of LEP examinees during neuropsychological testing. Several commonly used measures were identified that are robust to the deleterious effects of LEP. Administering tests in the examinee's native language may not be the optimal solution to contain the confounding effect of LEP in cognitive evaluations.
Affiliation(s)
- Iulia Crișan
- Department of Psychology, West University of Timișoara, Timișoara, Romania
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Canada
- Alina Matei
- Department of Psychology, West University of Timișoara, Timișoara, Romania
- Luisa Avram
- Department of Psychology, West University of Timișoara, Timișoara, Romania
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
3. Tyson BT, Shahein A, Abeare CA, Baker SD, Kent K, Roth RM, Erdodi LA. Replicating a Meta-Analysis: The Search for the Optimal Word Choice Test Cutoff Continues. Assessment 2023; 30:2476-2490. [PMID: 36752050 DOI: 10.1177/10731911221147043]
Abstract
This study was designed to expand on a recent meta-analysis that identified ≤42 as the optimal cutoff on the Word Choice Test (WCT). We examined the base rate of failure and the classification accuracy of various WCT cutoffs in four independent clinical samples (N = 252) against various psychometrically defined criterion groups. WCT ≤ 47 achieved acceptable combinations of specificity (.86-.89) at .49 to .54 sensitivity. Lowering the cutoff to ≤45 improved specificity (.91-.98) at a reasonable cost to sensitivity (.39-.50). Making the cutoff even more conservative (≤42) disproportionately sacrificed sensitivity (.30-.38) for specificity (.98-1.00), while still classifying 26.7% of patients with genuine and severe deficits as non-credible. Critical item (.23-.45 sensitivity at .89-1.00 specificity) and time-to-completion cutoffs (.48-.71 sensitivity at .87-.96 specificity) were effective alternative/complementary detection methods. Although WCT ≤ 45 produced the best overall classification accuracy, scores in the 43 to 47 range provide comparable objective psychometric evidence of non-credible responding. Results question the need for designating a single cutoff as "optimal," given the heterogeneity of signal detection environments in which individual assessors operate. As meta-analyses often fail to replicate, ongoing research is needed on the classification accuracy of various WCT cutoffs.
Affiliation(s)
- Robert M Roth
- Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
4. Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part II-Methodological Issues. Neuropsychol Rev 2023; 33:604-623. [PMID: 37594690 DOI: 10.1007/s11065-023-09602-6]
Abstract
Forensic neuropsychological examinations to detect malingering in patients with neurocognitive, physical, and psychological dysfunction have tremendous social, legal, and economic importance. Thousands of studies have been published to develop and validate methods to forensically detect malingering, based largely on approximately 50 validity tests, including embedded and stand-alone performance and symptom validity tests. This is Part II of a two-part review of statistical and methodological issues in the forensic prediction of malingering based on validity tests. The Part I companion paper explored key statistical issues. Part II examines related methodological issues through conceptual analysis, statistical simulations, and reanalysis of findings from prior validity test validation studies. Methodological issues examined include the distinction between analog simulation and forensic studies, the effect of excluding too-close-to-call (TCTC) cases from analyses, the distinction between criterion-related and construct validation studies, and the application of the Revised Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) to all Test of Memory Malingering (TOMM) validation studies published within approximately the first 20 years following its initial publication, to assess risk of bias. Findings include that analog studies are commonly mistaken for forensic validation studies, and that construct validation studies are routinely presented as if they were criterion-referenced validation studies. After accounting for the exclusion of TCTC cases, actual classification accuracy was found to be well below claimed levels. QUADAS-2 results revealed that all extant TOMM validation studies had a high risk of bias; not a single one achieved a low risk of bias. Recommendations include adopting well-established guidelines from the biomedical diagnostics literature for good-quality criterion-referenced validation studies and examining the implications for malingering determination practices. The design of future studies may hinge on the availability of an incontrovertible reference standard for the malingering status of examinees.
Affiliation(s)
- Christoph Leonhard
- The Chicago School of Professional Psychology at Xavier University of Louisiana, 1 Drexel Dr, Box 200, New Orleans, LA, 70125, USA
5. Cutler L, Greenacre M, Abeare CA, Sirianni CD, Roth R, Erdodi LA. Multivariate models provide an effective psychometric solution to the variability in classification accuracy of D-KEFS Stroop performance validity cutoffs. Clin Neuropsychol 2023; 37:617-649. [PMID: 35946813 DOI: 10.1080/13854046.2022.2073914]
Abstract
Objective: The study was designed to expand on the results of previous investigations of the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions. Method: The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). Results: Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 on the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. Conclusions: A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the invalid-before-impaired paradox.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Matthew Greenacre
- Schulich School of Medicine, Western University, London, Ontario, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Robert Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
6. Horner MD, Denning JH, Cool DL. Self-reported disability-seeking predicts PVT failure in veterans undergoing clinical neuropsychological evaluation. Clin Neuropsychol 2023; 37:387-401. [PMID: 35387574 DOI: 10.1080/13854046.2022.2056923]
Abstract
Objective: This study examined disability-related factors as predictors of PVT performance in Veterans who underwent neuropsychological evaluation for clinical purposes, not for determination of disability benefits. Method: Participants were 1,438 Veterans who were seen for clinical evaluation in a VA Medical Center's Neuropsychology Clinic. All were administered the TOMM, MSVT, or both. Predictors of PVT performance included (1) whether Veterans were receiving VA disability benefits ("service connection") for psychiatric or neurological conditions at the time of evaluation, and (2) whether Veterans reported on clinical interview that they were in the process of applying for disability benefits. Data were analyzed using binary logistic regression, with PVT performance as the dependent variable in separate analyses for the TOMM and MSVT. Results: Veterans who were already receiving VA disability benefits for psychiatric or neurological conditions were significantly more likely to fail both the TOMM and the MSVT, compared to Veterans who were not receiving benefits for such conditions. Independently of receiving such benefits, Veterans who reported that they were applying for disability benefits were significantly more likely to fail the TOMM and MSVT than were Veterans who denied applying for benefits at the time of evaluation. Conclusions: These findings demonstrate that simply being in the process of applying for disability benefits increases the likelihood of noncredible performance. The presence of external incentives can predict the validity of neuropsychological performance even in clinical, non-forensic settings.
Affiliation(s)
- Michael David Horner
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA; Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- John H Denning
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA; Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Danielle L Cool
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
7. Cohen CD, Rhoads T, Keezer RD, Jennette KJ, Williams CP, Hansen ND, Ovsiew GP, Resch ZJ, Soble JR. All of the accuracy in half of the time: Assessing abbreviated versions of the Test of Memory Malingering in the context of verbal and visual memory impairment. Clin Neuropsychol 2022; 36:1933-1949. [PMID: 33836622 DOI: 10.1080/13854046.2021.1908596]
Abstract
Objective: The Test of Memory Malingering (TOMM) Trial 1 (T1) and errors on the first 10 items of T1 (T1-e10) were developed as briefer versions of the TOMM to minimize evaluation time and burden, although the effect of genuine memory impairment on these indices is not well established. This study examined whether increasing material-specific verbal and visual memory impairment affected T1 and T1-e10 performance and accuracy for detecting invalidity. Method: Data from 155 neuropsychiatric patients administered the TOMM, Rey Auditory Verbal Learning Test (RAVLT), and Brief Visuospatial Memory Test-Revised (BVMT-R) during outpatient evaluation were examined. Valid (N = 125) and invalid (N = 30) groups were established by four independent criterion performance validity tests. Verbal/visual memory impairment was classified as ≥37T (normal memory); 30T-36T (mild impairment); and ≤29T (severe impairment). Results: Overall, T1 had outstanding accuracy, with 77% sensitivity/90% specificity. T1-e10 was less accurate but had excellent discriminability, with 60% sensitivity/87% specificity. T1 maintained excellent accuracy regardless of memory impairment severity, with 77% sensitivity/≥88% specificity and a relatively invariant cut-score even among those with severe verbal/visual memory impairment. T1-e10 had excellent classification accuracy among those with normal memory and mild impairment, but accuracy and sensitivity dropped with severe impairment, and the optimal cut-score had to be increased to maintain adequate specificity. Conclusion: TOMM T1 is an effective performance validity test with strong psychometric properties regardless of material-specificity and severity of memory impairment. By contrast, T1-e10 functions relatively well in the context of mild memory impairment but has reduced discriminability with severe memory impairment.
Affiliation(s)
- Cari D Cohen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Richard D Keezer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; School of Psychology, Counseling, and Family Therapy, Wheaton College, Wheaton, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Christopher P Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Nicholas D Hansen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
8. Ali S, Crisan I, Abeare CA, Erdodi LA. Cross-Cultural Performance Validity Testing: Managing False Positives in Examinees with Limited English Proficiency. Dev Neuropsychol 2022; 47:273-294. [PMID: 35984309 DOI: 10.1080/87565641.2022.2105847]
Abstract
Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation; even multivariate PVT models could not contain BRFail. In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several verbally mediated embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.
Affiliation(s)
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Iulia Crisan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
9. Abeare K, Cutler L, An KY, Razvi P, Holcomb M, Erdodi LA. BNT-15: Revised Performance Validity Cutoffs and Proposed Clinical Classification Ranges. Cogn Behav Neurol 2022; 35:155-168. [PMID: 35507449 DOI: 10.1097/wnn.0000000000000304]
Abstract
BACKGROUND Abbreviated neurocognitive tests offer a practical alternative to full-length versions but often lack clear interpretive guidelines, thereby limiting their clinical utility. OBJECTIVE To replicate validity cutoffs for the Boston Naming Test-Short Form (BNT-15) and to introduce a clinical classification system for the BNT-15 as a measure of object-naming skills. METHOD We collected data from 43 university students and 46 clinical patients. Classification accuracy was computed against psychometrically defined criterion groups. Clinical classification ranges were developed using a z-score transformation. RESULTS Previously suggested validity cutoffs (≤11 and ≤12) produced comparable classification accuracy among the university students. However, a more conservative cutoff (≤10) was needed with the clinical patients to contain the false-positive rate (0.20-0.38 sensitivity at 0.92-0.96 specificity). As a measure of cognitive ability, a perfect BNT-15 score suggests above average performance; ≤11 suggests clinically significant deficits. Demographically adjusted prorated BNT-15 T-scores correlated strongly (0.86) with the newly developed z-scores. CONCLUSION Given its brevity (<5 minutes) and ease of administration and scoring, the BNT-15 can function as a useful and cost-effective screening measure for both object-naming/English proficiency and performance validity. The proposed clinical classification ranges provide useful guidelines for practitioners.
Affiliation(s)
- Kelly Y An
- Private Practice, London, Ontario, Canada
- Parveen Razvi
- Faculty of Nursing, University of Windsor, Windsor, Ontario, Canada
10. Holcomb M, Pyne S, Cutler L, Oikle DA, Erdodi LA. Take Their Word for It: The Inventory of Problems Provides Valuable Information on Both Symptom and Performance Validity. J Pers Assess 2022:1-11. [PMID: 36041087 DOI: 10.1080/00223891.2022.2114358]
Abstract
This study was designed to compare the validity of the Inventory of Problems (IOP-29) and its newly developed memory module (IOP-M) in 150 patients clinically referred for neuropsychological assessment. Criterion groups were psychometrically derived based on established performance and symptom validity tests (PVTs and SVTs). The criterion-related validity of the IOP-29 was compared to that of the Negative Impression Management scale of the Personality Assessment Inventory (NIM-PAI), and the criterion-related validity of the IOP-M was compared to that of Trial 1 of the Test of Memory Malingering (TOMM-1). The IOP-29 correlated significantly more strongly with criterion PVTs than the NIM-PAI (r = .34 vs. r = .06; z = 2.50, p = .01), while generating similar overall correct classification values (79-81% vs. 71-79%). Similarly, the IOP-M correlated significantly more strongly with criterion PVTs than the TOMM-1 (r = .79 vs. r = .59; z = 2.26, p = .02), generating similar overall correct classification values (89-91% vs. 84-86%). Findings converge with the cumulative evidence that the IOP-29 and IOP-M are valuable additions to comprehensive neuropsychological batteries. Results also confirm that symptom and performance validity are distinct clinical constructs, and that domain specificity should be considered when calibrating instruments.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor
11. Erdodi LA. Multivariate Models of Performance Validity: The Erdodi Index Captures the Dual Nature of Non-Credible Responding (Continuous and Categorical). Assessment 2022:10731911221101910. [PMID: 35757996 DOI: 10.1177/10731911221101910]
Abstract
This study was designed to examine the classification accuracy of the Erdodi Index (EI-5), a novel method for aggregating validity indicators that takes into account both the number and extent of performance validity test (PVT) failures. Archival data were collected from a mixed clinical/forensic sample of 452 adults referred for neuropsychological assessment. The classification accuracy of the EI-5 was evaluated against established free-standing PVTs. The EI-5 achieved a good combination of sensitivity (.65) and specificity (.97), correctly classifying 92% of the sample. Its classification accuracy was comparable with that of another free-standing PVT. An indeterminate range between Pass and Fail emerged as a legitimate third outcome of performance validity assessment, indicating that the underlying construct is an inherently continuous variable. Results support the use of the EI model as a practical and psychometrically sound method of aggregating multiple embedded PVTs into a single-number summary of performance validity. Combining free-standing PVTs with the EI-5 resulted in a better separation between credible and non-credible profiles, demonstrating incremental validity. Findings are consistent with recent endorsements of a three-way outcome for PVTs (Pass, Borderline, and Fail).
12. Brantuo MA, An K, Biss RK, Ali S, Erdodi LA. Neurocognitive Profiles Associated With Limited English Proficiency in Cognitively Intact Adults. Arch Clin Neuropsychol 2022; 37:1579-1600. [PMID: 35694764 DOI: 10.1093/arclin/acac019]
Abstract
OBJECTIVE The objective of the present study was to examine the neurocognitive profiles associated with limited English proficiency (LEP). METHOD A brief neuropsychological battery including measures with high (HVM) and low verbal mediation (LVM) was administered to 80 university students: 40 native speakers of English (NSEs) and 40 with LEP. RESULTS Consistent with previous research, individuals with LEP performed more poorly on HVM measures and equivalent to NSEs on LVM measures, with some notable exceptions. CONCLUSIONS Low scores on HVM tests should not be interpreted as evidence of acquired cognitive impairment in individuals with LEP, because these measures may systematically underestimate cognitive ability in this population. These findings have important clinical and educational implications.
Affiliation(s)
- Maame A Brantuo
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Kelly An
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Renee K Biss
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
13. Ali S, Elliott L, Biss RK, Abumeeiz M, Brantuo M, Kuzmenka P, Odenigbo P, Erdodi LA. The BNT-15 provides an accurate measure of English proficiency in cognitively intact bilinguals - a study in cross-cultural assessment. Appl Neuropsychol Adult 2022; 29:351-363. [PMID: 32449371 DOI: 10.1080/23279095.2020.1760277]
Abstract
This study was designed to replicate earlier reports of the utility of the Boston Naming Test - Short Form (BNT-15) as an index of limited English proficiency (LEP). Twenty-eight English-Arabic bilingual student volunteers were administered the BNT-15 as part of a brief battery of cognitive tests. The majority (23) were women, and half had LEP. Mean age was 21.1 years. The BNT-15 was an excellent psychometric marker of LEP status (area under the curve: .990-.995). Participants with LEP underperformed on several cognitive measures (verbal comprehension, visuomotor processing speed, single word reading, and performance validity tests). Although no participant with LEP failed the accuracy cutoff on the Word Choice Test, 35.7% of them failed the time cutoff. Overall, LEP was associated with an increased risk of failing performance validity tests. Previously published BNT-15 validity cutoffs had unacceptably low specificity (.33-.52) among participants with LEP. The BNT-15 has the potential to serve as a quick and effective objective measure of LEP. Students with LEP may need academic accommodations to compensate for slower test completion time. Likewise, LEP status should be considered when interpreting failed performance validity tests, to protect against false positive errors.
Affiliation(s)
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, Canada
- Lauren Elliott
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Renee K Biss
- Department of Psychology, University of Windsor, Windsor, Canada
- Mustafa Abumeeiz
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, Canada
- Paula Odenigbo
- Department of Psychology, University of Windsor, Windsor, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
14. Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing Performance Validity Cutoffs on the Boston Naming Test (BNT) Is Specific, but Insensitive to Non-Credible Responding. Dev Neuropsychol 2022; 47:17-31. [PMID: 35157548 DOI: 10.1080/87565641.2022.2038602]
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
Affiliation(s)
- Shayna Nussbaum
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson
- Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
15
|
Soble JR, Cerny BM, Ovsiew GP, Rhoads T, Reynolds TP, Sharp DW, Jennette KJ, Marceaux JC, O'Rourke JJF, Critchfield EA, Resch ZJ. Comparing the Independent and Aggregated Accuracy of Trial 1 and the First 10 TOMM Items for Detecting Invalid Neuropsychological Test Performance Across Civilian and Veteran Clinical Samples. Percept Mot Skills 2022; 129:269-288. [PMID: 35139315 DOI: 10.1177/00315125211066399] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Previous studies support using two abbreviated tests of the Test of Memory Malingering (TOMM), including (a) Trial 1 (T1) and (b) the number of errors on the first 10 items of T1 (T1e10), as performance validity tests (PVTs). In this study, we examined the independent and aggregated predictive utility of TOMM T1 and T1e10 for identifying invalid neuropsychological test performance across two clinical samples. We employed cross-sectional research to examine two independent and demographically diverse mixed samples of military veterans and civilians (VA = 108; academic medical center = 234) of patients who underwent neuropsychological evaluations. We determined validity groups by patient performance on four independent criterion PVTs. We established concordances between passing/failing the TOMM T1e10 and T1, followed by logistic regression to determine individual and aggregated accuracy of T1e10 and T1 for predicting validity group membership. Concordance between passing T1e10 and T1 was high, as was overall validity (87-98%) across samples. By contrast, T1e10 failure was more highly concordant with T1 failure (69-77%) than with overall invalidity status (59-60%) per criterion PVTs, whereas T1 failure was more highly concordant with invalidity status (72-88%) per criterion PVTs. Logistic regression analyses demonstrated similar results, with T1 accounting for more variance than T1e10. However, combining T1e10 and T1 accounted for the most variance of any model, with T1e10 and T1 each emerging as significant predictors. TOMM T1 and, to a lesser extent, T1e10 were significant predictors of independent criterion-derived validity status across two distinct clinical samples, but they did not offer improved classification accuracy when aggregated.
Collapse
Affiliation(s)
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA., Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
| | - Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA., Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
| | - Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA., Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
| | - Tristan P Reynolds
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Dillion W Sharp
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| | - Janice C Marceaux
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
| | - Justin J F O'Rourke
- Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
| | - Edan A Critchfield
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA., Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
| | - Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
| |
Collapse
|
16
|
Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021; 49:179-213. [PMID: 34420986 DOI: 10.3233/nre-218020] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
OBJECTIVE This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD Archival data were collected from 167 patients (52.4% male; MAge = 39.7) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to individual components. Instrumentation artifacts are endemic to PVTs, and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical models. As the number/severity of embedded PVT failures accumulates, assessors must consider the possibility of non-credible presentation and its clinical implications to neurorehabilitation.
Collapse
|
17
|
Abeare CA, An K, Tyson B, Holcomb M, Cutler L, May N, Erdodi LA. The emotion word fluency test as an embedded performance validity indicator - Alone and in a multivariate validity composite. APPLIED NEUROPSYCHOLOGY. CHILD 2021; 11:713-724. [PMID: 34424798 DOI: 10.1080/21622965.2021.1939027] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
OBJECTIVE This project was designed to cross-validate existing performance validity cutoffs embedded within measures of verbal fluency (FAS and animals) and develop new ones for the Emotion Word Fluency Test (EWFT), a novel measure of category fluency. METHOD The classification accuracy of the verbal fluency tests was examined in two samples (70 cognitively healthy university students and 52 clinical patients) against psychometrically defined criterion measures. RESULTS A demographically adjusted T-score of ≤31 on the FAS was specific (.88-.97) to noncredible responding in both samples. Animals T ≤ 29 achieved high specificity (.90-.93) among students at .27-.38 sensitivity. A more conservative cutoff (T ≤ 27) was needed in the patient sample for a similar combination of sensitivity (.24-.45) and specificity (.87-.93). An EWFT raw score ≤5 was highly specific (.94-.97) but insensitive (.10-.18) to invalid performance. Failing multiple cutoffs improved specificity (.90-1.00) at variable sensitivity (.19-.45). CONCLUSIONS Results help resolve the inconsistency in previous reports, and confirm the overall utility of existing verbal fluency tests as embedded validity indicators. Multivariate models of performance validity assessment are superior to single indicators. The clinical utility and limitations of the EWFT as a novel measure are discussed.
Collapse
Affiliation(s)
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
| | - Kelly An
- Private Practice, London, Ontario, Canada
| | - Brad Tyson
- Evergreen Health Medical Center, Kirkland, Washington, USA
| | - Matthew Holcomb
- Jefferson Neurobehavioral Group, New Orleans, Louisiana, USA
| | - Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
| | - Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
| | - Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
| |
Collapse
|
18
|
Abeare K, Romero K, Cutler L, Sirianni CD, Erdodi LA. Flipping the Script: Measuring Both Performance Validity and Cognitive Ability with the Forced Choice Recognition Trial of the RCFT. Percept Mot Skills 2021; 128:1373-1408. [PMID: 34024205 PMCID: PMC8267081 DOI: 10.1177/00315125211019704] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFTFCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 & ≤17), the RCFTFCR remained specific (.84-1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFTFCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFTFCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
Collapse
Affiliation(s)
- Kaitlyn Abeare
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | - Kristoffer Romero
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | - Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | | | - Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| |
Collapse
|
19
|
Martinez KA, Sayers C, Hayes C, Martin PK, Clark CB, Schroeder RW. Normal cognitive test scores cannot be interpreted as accurate measures of ability in the context of failed performance validity testing: A symptom- and detection-coached simulation study. J Clin Exp Neuropsychol 2021; 43:301-309. [PMID: 33998369 DOI: 10.1080/13803395.2021.1926435] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Introduction: While use of performance validity tests (PVTs) has become a standard of practice in neuropsychology, there are differing opinions regarding whether to interpret cognitive test data when standard scores fall within normal limits despite PVTs being failed. This study is the first to empirically determine whether normal cognitive test scores underrepresent functioning when PVTs are failed. Method: Participants, randomly assigned to either a simulated malingering group (n = 50) instructed to mildly suppress test performances or a best-effort/control group (n = 50), completed neuropsychological tests which included the North American Adult Reading Test (NAART), California Verbal Learning Test - 2nd Edition (CVLT-II), and Test of Memory Malingering (TOMM). Results: Groups were not significantly different in age, sex, education, or NAART predicted intellectual ability, but simulators performed significantly worse than controls on the TOMM, CVLT-II Forced Choice Recognition, and CVLT-II Short Delay Free Recall. The groups did not significantly differ on other examined CVLT-II measures. Of simulators who failed validity testing, 36% scored no worse than average and 73% scored no worse than low average on any of the examined CVLT-II indices. Conclusions: Of simulated malingerers who failed validity testing, nearly three-fourths were able to produce cognitive test scores that were within normal limits, which indicates that normal cognitive performances cannot be interpreted as accurately reflecting an individual's capabilities when obtained in the presence of validity test failure. At the same time, only 2 of 50 simulators were successful in passing validity testing while scoring within an impaired range on cognitive testing. This latter finding indicates that successfully feigning cognitive deficits is difficult when PVTs are utilized within the examination.
Collapse
Affiliation(s)
- Karen A Martinez
- Department of Psychology, Wichita State University, Wichita, KS, USA
| | - Courtney Sayers
- Department of Psychology, Wichita State University, Wichita, KS, USA
| | - Charles Hayes
- Department of Psychology, Wichita State University, Wichita, KS, USA
| | - Phillip K Martin
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
| | - C Brendan Clark
- Department of Psychology, Wichita State University, Wichita, KS, USA
| | - Ryan W Schroeder
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
| |
Collapse
|
20
|
Ovsiew GP, Carter DA, Rhoads T, Resch ZJ, Jennette KJ, Soble JR. Concordance Between Standard and Abbreviated Administrations of the Test of Memory Malingering: Implications for Streamlining Performance Validity Assessment. PSYCHOLOGICAL INJURY & LAW 2021. [DOI: 10.1007/s12207-021-09408-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
21
|
Abeare K, Razvi P, Sirianni CD, Giromini L, Holcomb M, Cutler L, Kuzmenka P, Erdodi LA. Introducing Alternative Validity Cutoffs to Improve the Detection of Non-credible Symptom Report on the BRIEF. PSYCHOLOGICAL INJURY & LAW 2021. [DOI: 10.1007/s12207-021-09402-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
|
22
|
Cutler L, Abeare CA, Messa I, Holcomb M, Erdodi LA. This will only take a minute: Time cutoffs are superior to accuracy cutoffs on the forced choice recognition trial of the Hopkins Verbal Learning Test - Revised. APPLIED NEUROPSYCHOLOGY-ADULT 2021; 29:1425-1439. [PMID: 33631077 DOI: 10.1080/23279095.2021.1884555] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of the recently introduced forced-choice recognition trial to the Hopkins Verbal Learning Test - Revised (FCRHVLT-R) as a performance validity test (PVT) in a clinical sample. Time-to-completion (T2C) for FCRHVLT-R was also examined. METHOD Forty-three students were assigned to either the control or the experimental malingering (expMAL) condition. Archival data were collected from 52 adults clinically referred for neuropsychological assessment. Invalid performance was defined using expMAL status, two free-standing PVTs and two validity composites. RESULTS Among students, FCRHVLT-R ≤11 or T2C ≥45 seconds was specific (0.86-0.93) to invalid performance. Among patients, an FCRHVLT-R ≤11 was specific (0.94-1.00), but relatively insensitive (0.38-0.60) to non-credible responding. T2C ≥35 seconds produced notably higher sensitivity (0.71-0.89), but variable specificity (0.83-0.96). The T2C achieved superior overall correct classification (81-86%) compared to the accuracy score (68-77%). The FCRHVLT-R provided incremental utility in performance validity assessment compared to previously introduced validity cutoffs on Recognition Discrimination. CONCLUSIONS Combined with T2C, the FCRHVLT-R has the potential to function as a quick, inexpensive and effective embedded PVT. The time cutoff effectively attenuated the low ceiling of the accuracy scores, increasing sensitivity by 19%. Replication in larger and more geographically and demographically diverse samples is needed before the FCRHVLT-R can be endorsed for routine clinical application.
Collapse
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | - Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | - Isabelle Messa
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | | | - Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| |
Collapse
|
23
|
Rinaldi A, Stewart-Willis JJ, Scarisbrick D, Proctor-Weber Z. Clinical utility of the TOMMe10 scoring criteria for detecting suboptimal effort in an mTBI veteran sample. APPLIED NEUROPSYCHOLOGY-ADULT 2020; 29:670-676. [PMID: 32780587 DOI: 10.1080/23279095.2020.1803870] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
In the context of diminishing reimbursement and patient access demands, researchers continually refine performance validity measures (PVMs) to maximize efficiency while maintaining confidence in obtained data. This is particularly true for high PVM failure populations (e.g., mTBI patients). The TOMMe10 (number of errors on the first 10 TOMM items) is one method this study utilized for classifying PVM performance as pass/fail (fail defined as failure on 2 of 6 PVM scores, pass defined as 0/1 failures). The present study hypothesized that the TOMMe10 would have comparable sensitivity/specificity for identifying non-credible cognitive performance among veterans with mTBI relative to previous research findings and commonly used performance validity measures (e.g., TOMM or WMT). Data were analyzed from 54 veterans assigned to a pass and fail group based on their performance across six recognized PVMs. Results revealed pass/fail groups were not significantly different regarding age, educational, or racial background. ROC analyses found the TOMMe10 demonstrated excellent discriminability (AUC = .803 ± .128), indicating that the TOMMe10 could have clinical utility within an mTBI veteran sample, particularly in conjunction with a second PVM. Specific population limitations are discussed. Additional research should elucidate this measure's performance with additional populations, including non-veteran mTBI, dementia, moderate-severe TBI, and inpatient populations.
Collapse
Affiliation(s)
- Anthony Rinaldi
- Department of Psychology, Gaylord Specialty Healthcare, Wallingford, CT, USA
| | | | - David Scarisbrick
- WVU Department of Behavioral Medicine and Psychiatry, WVU Department of Neuroscience, West Virginia University School of Medicine, Morgantown, WV, USA
| | - Zoe Proctor-Weber
- Department of Psychology, C.W. Bill Young Bay Pines VAHCS, Bay Pines, FL, USA
| |
Collapse
|
24
|
Psychological Symptoms and Rates of Performance Validity Improve Following Trauma-Focused Treatment in Veterans with PTSD and History of Mild-to-Moderate TBI. J Int Neuropsychol Soc 2020; 26:108-118. [PMID: 31658923 DOI: 10.1017/s1355617719000997] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE Iraq and Afghanistan Veterans with posttraumatic stress disorder (PTSD) and traumatic brain injury (TBI) history have high rates of performance validity test (PVT) failure. The study aimed to determine whether those with scores in the invalid versus valid range on PVTs show similar benefit from psychotherapy and if psychotherapy improves PVT performance. METHOD Veterans (N = 100) with PTSD, mild-to-moderate TBI history, and cognitive complaints underwent neuropsychological testing at baseline, post-treatment, and 3-month post-treatment. Veterans were randomly assigned to cognitive processing therapy (CPT) or a novel hybrid intervention integrating CPT with TBI psychoeducation and cognitive rehabilitation strategies from Cognitive Symptom Management and Rehabilitation Therapy (CogSMART). Performance below standard cutoffs on any PVT trial across three different PVT measures was considered invalid (PVT-Fail), whereas performance above cutoffs on all measures was considered valid (PVT-Pass). RESULTS Although both PVT groups exhibited clinically significant improvement in PTSD symptoms, the PVT-Pass group demonstrated greater symptom reduction than the PVT-Fail group. Measures of post-concussive and depressive symptoms improved to a similar degree across groups. Treatment condition did not moderate these results. Rate of valid test performance increased from baseline to follow-up across conditions, with a stronger effect in the SMART-CPT compared to CPT condition. CONCLUSION Both PVT groups experienced improved psychological symptoms following treatment. Veterans who failed PVTs at baseline demonstrated better test engagement following treatment, resulting in higher rates of valid PVTs at follow-up. Veterans with invalid PVTs should be enrolled in trauma-focused treatment and may benefit from neuropsychological assessment after, rather than before, treatment.
Collapse
|
25
|
Olsen DH, Schroeder RW, Martin PK. Cross-validation of the Invalid Forgetting Frequency Index (IFFI) from the Test of Memory Malingering. Arch Clin Neuropsychol 2019; 36:437-441. [DOI: 10.1093/arclin/acz064] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2019] [Revised: 10/08/2019] [Indexed: 11/14/2022] Open
Abstract
Abstract
Objective
To increase sensitivity of the Test of Memory Malingering (TOMM), adjustments have been proposed, including adding consistency indices. The Invalid Forgetting Frequency Index (IFFI) is the most recently developed consistency index. While strong classification accuracy rates were originally reported, it currently lacks cross-validation.
Method
A sample of 184 outpatients was utilized. Valid performers passed all criterion performance validity tests (PVTs) and invalid performers failed two or more PVTs. Classification accuracy statistics were calculated.
Results
AUC for the IFFI was 0.80, demonstrating adequate discrimination between valid and invalid groups. A score of 3 or more inconsistent responses resulted in sensitivity and specificity rates of 63% and 92%, respectively.
Conclusions
This is the first article to cross-validate the IFFI. In both the original IFFI study and the current study, the same cut-off was found to maintain at least 90% specificity while producing higher sensitivity rates than those achieved by traditional TOMM indices.
Collapse
Affiliation(s)
- Daniel H Olsen
- University of Kansas School of Medicine – Wichita, Department of Psychiatry and Behavioral Sciences, Wichita, Kansas, United States
| | - Ryan W Schroeder
- University of Kansas School of Medicine – Wichita, Department of Psychiatry and Behavioral Sciences, Wichita, Kansas, United States
| | - Phillip K Martin
- University of Kansas School of Medicine – Wichita, Department of Psychiatry and Behavioral Sciences, Wichita, Kansas, United States
| |
Collapse
|
26
|
Martin PK, Schroeder RW, Olsen DH, Maloy H, Boettcher A, Ernst N, Okut H. A systematic review and meta-analysis of the Test of Memory Malingering in adults: Two decades of deception detection. Clin Neuropsychol 2019; 34:88-119. [DOI: 10.1080/13854046.2019.1637027] [Citation(s) in RCA: 68] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Phillip K. Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | - Ryan W. Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | - Daniel H. Olsen
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | - Halley Maloy
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | | | - Nathan Ernst
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | - Hayrettin Okut
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| |
Collapse
|
27
|
Geographic Variation and Instrumentation Artifacts: in Search of Confounds in Performance Validity Assessment in Adults with Mild TBI. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09354-w] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
|
28
|
|
29
|
Rai JK, Erdodi LA. Impact of criterion measures on the classification accuracy of TOMM-1. APPLIED NEUROPSYCHOLOGY-ADULT 2019; 28:185-196. [PMID: 31187632 DOI: 10.1080/23279095.2019.1613994] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
This study was designed to examine the effect of various criterion measures on the classification accuracy of Trial 1 of the Test of Memory Malingering (TOMM-1), a free-standing performance validity test (PVT). Archival data were collected from a case sequence of 91 patients (MAge = 42.2 years; MEducation = 12.7) clinically referred for neuropsychological assessment. Trials 2 and Retention of the TOMM, the Word Choice Test, and three validity composites were used as criterion PVTs. Classification accuracy varied systematically as a function of criterion PVT. TOMM-1 ≤ 43 emerged as the optimal cutoff, resulting in a wide range of sensitivity (.47-1.00), with perfect overall specificity. Failing the TOMM-1 was unrelated to age, education or gender, but was associated with elevated self-reported depression. Results support the utility of TOMM-1 as an independent, free-standing, single-trial PVT. Consistent with previous reports, the choice of criterion measure influences parameter estimates of the PVT being calibrated. The methodological implications of modality specificity to PVT research and clinical/forensic practice should be considered when evaluating cutoffs or interpreting scores in the failing range.
Collapse
Affiliation(s)
- Jaspreet K Rai
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada., University of Windsor, Edmonton, Alberta, Canada
| | - Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
| |
Collapse
|
30
|
Erdodi LA, Taylor B, Sabelli AG, Malleck M, Kirsch NL, Abeare CA. Demographically Adjusted Validity Cutoffs on the Finger Tapping Test Are Superior to Raw Score Cutoffs in Adults with TBI. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09352-y] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
31
|
Alverson WA, O’Rourke JJF, Soble JR. The Word Memory Test genuine memory impairment profile discriminates genuine memory impairment from invalid performance in a mixed clinical sample with cognitive impairment. Clin Neuropsychol 2019; 33:1420-1435. [DOI: 10.1080/13854046.2019.1599071] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- W. Alex Alverson
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
| | | | - Jason R. Soble
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
| |
Collapse
|
32
|
Denning JH. When 10 is enough: Errors on the first 10 items of the Test of Memory Malingering (TOMMe10) and administration time predict freestanding performance validity tests (PVTs) and underperformance on memory measures. APPLIED NEUROPSYCHOLOGY-ADULT 2019; 28:35-47. [PMID: 30950290 DOI: 10.1080/23279095.2019.1588122] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
It is critical that we develop more efficient performance validity tests (PVTs). A shorter version of the Test of Memory Malingering (TOMM) that utilizes errors on the first 10 items (TOMMe10) has shown promise as a freestanding PVT. Retrospective review included 397 consecutive veterans administered TOMM trial 1 (TOMM1), the Medical Symptom Validity Test (MSVT), and the Brief Visuospatial Memory Test-Revised (BVMT-R). TOMMe10 accuracy and administration time were used to predict performance on freestanding PVTs (TOMM1, MSVT). The impact of failing TOMMe10 (2 or more errors) on independent memory measures was also explored. TOMMe10 was a robust predictor of TOMM1 (area under the curve [AUC] = 0.97) and MSVT (AUC = 0.88) with sensitivities = 0.76 to 0.89 and specificities = 0.89 to 0.96. Administration time predicted PVT performance but did not improve accuracy compared to TOMMe10 alone. Failing TOMMe10 was associated with clinically and statistically significant declines on the BVMT-R and MSVT Paired Associates and Free Recall memory tests (d = -0.32 to -1.31). Consistent with prior research, TOMMe10 at 2 or more errors was highly accurate in predicting performance on other well-validated freestanding PVTs. Failing just 1 freestanding PVT (TOMMe10) significantly impacted memory measures and likely reflects invalid test performance.
Collapse
Affiliation(s)
- John H Denning
- Department of Veterans Affairs, Mental Health Service, Ralph H. Johnson Veterans Affairs Medical Center, Charleston, South Carolina, USA., Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, South Carolina, USA
| |
Collapse
|
33
|
The Grooved Pegboard Test as a Validity Indicator—a Study on Psychogenic Interference as a Confound in Performance Validity Research. PSYCHOLOGICAL INJURY & LAW 2018. [DOI: 10.1007/s12207-018-9337-7] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
34
|
Further Validation of the Test of Memory Malingering (TOMM) Trial 1 Performance Validity Index: Examination of False Positives and Convergent Validity. PSYCHOLOGICAL INJURY & LAW 2018. [DOI: 10.1007/s12207-018-9335-9] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
35
|
One-Minute PVT: Further Evidence for the Utility of the California Verbal Learning Test—Children’s Version Forced Choice Recognition Trial. JOURNAL OF PEDIATRIC NEUROPSYCHOLOGY 2018. [DOI: 10.1007/s40817-018-0057-4] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
36
|
Differentiating epilepsy from psychogenic nonepileptic seizures using neuropsychological test data. Epilepsy Behav 2018; 87:39-45. [PMID: 30172082 DOI: 10.1016/j.yebeh.2018.08.010] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/06/2018] [Revised: 07/25/2018] [Accepted: 08/12/2018] [Indexed: 11/21/2022]
Abstract
OBJECTIVE Differentiating epileptic seizures (ES) from psychogenic nonepileptic seizures (PNES) represents a challenging differential diagnosis with important treatment implications. This study was designed to explore the utility of neuropsychological test scores in differentiating ES from PNES. METHOD Psychometric data from 72 patients with ES and 33 patients with PNES were compared on various tests of cognitive ability and performance validity. Individual measures that best discriminated the diagnoses were then entered as predictors in a logistic regression equation with group membership (ES vs. PNES) as the criterion. RESULTS On most tests of cognitive ability, the PNES sample outperformed the ES sample (medium-large effect) and was less likely to fail the Reliable Digit Span. However, patients with PNES failed two embedded validity indicators at significantly higher rates (risk ratios (RR): 2.45-4.16). There were no group differences on the Test of Memory Malingering (TOMM). A logistic regression equation based on seven neuropsychological tests correctly classified 85.1% of patients. The cutoff with perfect specificity was associated with 0.47 sensitivity. CONCLUSIONS Consistent with previous research, the utility of psychometric methods of differential diagnosis is limited by the complex neurocognitive profiles associated with ES and PNES. Although individual measures might help differentiate ES from PNES, multivariate assessment models have superior discriminant power. The strongest psychometric evidence for PNES appears to be a consistent lack of impairment on tests sensitive to diffuse neurocognitive deficits such as processing speed, working memory, and verbal fluency. While video-electroencephalogram (EEG) monitoring is the gold standard of differential diagnosis, psychometric testing has the potential to enhance clinical decision-making, particularly in complex or unclear cases such as patients with nondiagnostic video-EEGs. Adopting a standardized, fixed neuropsychological battery at epilepsy centers would advance research on the differential diagnostic power of psychometric testing.
Collapse
|
37
|
An KY, Charles J, Ali S, Enache A, Dhuga J, Erdodi LA. Reexamining performance validity cutoffs within the Complex Ideational Material and the Boston Naming Test–Short Form using an experimental malingering paradigm. J Clin Exp Neuropsychol 2018; 41:15-25. [DOI: 10.1080/13803395.2018.1483488] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- Kelly Y. An
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Jordan Charles
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Sami Ali
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Anca Enache
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Jasmine Dhuga
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| |
Collapse
|
38
|
Verroulx K, Hirst RB, Lin G, Peery S. Embedded performance validity indicator for children: California Verbal Learning Test – Children’s Edition, forced choice. APPLIED NEUROPSYCHOLOGY-CHILD 2018; 8:206-212. [DOI: 10.1080/21622965.2018.1426463] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Kristin Verroulx
- San Francisco Neuropsychology PC, San Francisco, California, USA
| | - Rayna B. Hirst
- San Francisco Neuropsychology PC, San Francisco, California, USA
- Pacific Graduate School of Psychology, Palo Alto University, Palo Alto, California, USA
| | - George Lin
- Department of Psychiatry, Geisel School of Medicine at Dartmouth/DHMC, Hanover, New Hampshire, USA
| | - Shelley Peery
- San Francisco Neuropsychology PC, San Francisco, California, USA
| |
Collapse
|
39
|
Erdodi LA, Dunn AG, Seke KR, Charron C, McDermott A, Enache A, Maytham C, Hurtubise JL. The Boston Naming Test as a Measure of Performance Validity. PSYCHOLOGICAL INJURY & LAW 2018. [DOI: 10.1007/s12207-017-9309-3] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
40
|
Lippa SM. Performance validity testing in neuropsychology: a clinical guide, critical review, and update on a rapidly evolving literature. Clin Neuropsychol 2017; 32:391-421. [DOI: 10.1080/13854046.2017.1406146] [Citation(s) in RCA: 65] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Sara M. Lippa
- Defense and Veterans Brain Injury Center, Silver Spring, MD, USA
- Walter Reed National Military Medical Center, Bethesda, MD, USA
- National Intrepid Center of Excellence, Bethesda, MD, USA
| |
Collapse
|
41
|
Grabyan JM, Collins RL, Alverson WA, Chen DK. Performance on the Test of Memory Malingering is predicted by the number of errors on its first 10 items on an inpatient epilepsy monitoring unit. Clin Neuropsychol 2017; 32:468-478. [DOI: 10.1080/13854046.2017.1368715] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Jonathan M. Grabyan
- Neurology Care Line, Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA
- Department of Psychiatry, Baylor College of Medicine, Houston, TX, USA
| | - Robert L. Collins
- Neurology Care Line, Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA
- Department of Neurology, Baylor College of Medicine, Houston, TX, USA
| | - W. Alexander Alverson
- Neurology Care Line, Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA
- Department of Psychology, University of Houston, Houston, TX, USA
| | - David K. Chen
- Neurology Care Line, Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, USA
- Department of Neurology, Baylor College of Medicine, Houston, TX, USA
| |
Collapse
|
42
|
Denning JH, Shura RD. Cost of malingering mild traumatic brain injury-related cognitive deficits during compensation and pension evaluations in the veterans benefits administration. APPLIED NEUROPSYCHOLOGY-ADULT 2017; 26:1-16. [DOI: 10.1080/23279095.2017.1350684] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- John H. Denning
- Department of Veteran Affairs, Mental Health Service, Ralph H. Johnson Veterans Affairs Medical Center, Charleston, South Carolina, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, South Carolina, USA
| | - Robert D. Shura
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Salisbury, North Carolina, USA
- Mental Health and Behavioral Science Service Line, W. G. (Bill) Hefner Veterans Affairs Medical Center (VAMC), Salisbury, North Carolina, USA
- Department of Psychiatry and Behavioral Medicine, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
| |
Collapse
|
43
|
Erdodi LA, Rai JK. A single error is one too many: Examining alternative cutoffs on Trial 2 of the TOMM. Brain Inj 2017; 31:1362-1368. [DOI: 10.1080/02699052.2017.1332386] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| | - Jaspreet K. Rai
- Department of Psychology, University of Windsor, Windsor, ON, Canada
| |
Collapse
|
44
|
Young G. PTSD in Court III: Malingering, assessment, and the law. INTERNATIONAL JOURNAL OF LAW AND PSYCHIATRY 2017; 52:81-102. [PMID: 28366496 DOI: 10.1016/j.ijlp.2017.03.001] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/24/2017] [Accepted: 03/02/2017] [Indexed: 06/07/2023]
Abstract
This journal's third article on PTSD in Court focuses especially on the topic's "court" component. It first considers the topic of malingering, including in terms of its definition, certainties, and uncertainties. As with other areas of the study of psychological injury and law, generally, and PTSD (posttraumatic stress disorder), specifically, malingering is a contentious area not only definitionally but also empirically, in terms of establishing its base rate in the index populations assessed in the field. Both current research and re-analysis of past research indicate that the malingering prevalence rate at issue is more like 15±15% as opposed to 40±10%. As for psychological tests used to assess PTSD, some of the better ones include the TSI-2 (Trauma Symptom Inventory, Second Edition; Briere, 2011), the MMPI-2-RF (Minnesota Multiphasic Personality Inventory, Second Edition, Restructured Form; Ben-Porath & Tellegen, 2008/2011), and the CAPS-5 (The Clinician-Administered PTSD Scale for DSM-5; Weathers, Blake, Schnurr, Kaloupek, Marx, & Keane, 2013b). Assessors need to know their own possible biases, the applicable laws (e.g., the Daubert trilogy), and how to write court-admissible reports. Overall conclusions reflect a moderate approach that navigates the territory between the extreme plaintiff or defense allegiances one frequently encounters in this area of forensic practice.
Collapse
|
45
|
Erdodi LA, Tyson BT, Abeare CA, Zuccato BG, Rai JK, Seke KR, Sagar S, Roth RM. Utility of critical items within the Recognition Memory Test and Word Choice Test. APPLIED NEUROPSYCHOLOGY-ADULT 2017; 25:327-339. [DOI: 10.1080/23279095.2017.1298600] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Department of Psychiatry, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire, USA
| | - Bradley T. Tyson
- Department of Psychiatry, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire, USA
- Western Washington Medical Group, Everett, Washington, USA
| | | | - Brandon G. Zuccato
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | - Jaspreet K. Rai
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | - Kristian R. Seke
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | - Sanya Sagar
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
| | - Robert M. Roth
- Department of Psychiatry, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire, USA
| |
Collapse
|
46
|
CVLT-II Forced Choice Recognition Trial as an Embedded Validity Indicator: A Systematic Review of the Evidence. J Int Neuropsychol Soc 2016; 22:851-8. [PMID: 27619108 DOI: 10.1017/s1355617716000746] [Citation(s) in RCA: 77] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVES The Forced Choice Recognition (FCR) trial of the California Verbal Learning Test, 2nd edition, was designed as an embedded performance validity test (PVT). To our knowledge, this is the first systematic review of classification accuracy against reference PVTs. METHODS Results from peer-reviewed studies with FCR data published since 2002 encompassing a variety of clinical, research, and forensic samples were summarized, including 37 studies with FCR failure rates (N=7575) and 17 with concordance rates with established PVTs (N=4432). RESULTS All healthy controls scored >14 on FCR. On average, 16.9% of the entire sample scored ≤14, while 25.9% failed reference PVTs. Presence or absence of external incentives to appear impaired (as identified by researchers) resulted in different failure rates (13.6% vs. 3.5%), as did failing or passing reference PVTs (49.0% vs. 6.4%). FCR ≤14 produced an overall classification accuracy of 72%, demonstrating higher specificity (.93) than sensitivity (.50) to invalid performance. Failure rates increased with the severity of cognitive impairment. CONCLUSIONS In the absence of serious neurocognitive disorder, FCR ≤14 is highly specific, but only moderately sensitive to invalid responding. Passing FCR does not rule out a non-credible presentation, but failing FCR rules it in with high accuracy. The heterogeneity in sample characteristics and reference PVTs, as well as the quality of the criterion measure across studies, is a major limitation of this review and the basic methodology of PVT research in general. (JINS, 2016, 22, 851-858).
Collapse
|
47
|
An KY, Kaploun K, Erdodi LA, Abeare CA. Performance validity in undergraduate research participants: a comparison of failure rates across tests and cutoffs. Clin Neuropsychol 2016; 31:193-206. [DOI: 10.1080/13854046.2016.1217046] [Citation(s) in RCA: 53] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Affiliation(s)
- Kelly Y. An
- Department of Psychology, University of Windsor, Windsor, Canada
| | - Kristen Kaploun
- Department of Psychology, University of Windsor, Windsor, Canada
| | - Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
| | | |
Collapse
|
48
|
Fazio RL, Denning JH, Denney RL. TOMM Trial 1 as a performance validity indicator in a criminal forensic sample. Clin Neuropsychol 2016; 31:251-267. [DOI: 10.1080/13854046.2016.1213316] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Affiliation(s)
| | - John H. Denning
- Ralph H. Johnson VA Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
| | - Robert L. Denney
- Neuropsychological Associates of Southwest Missouri, Springfield, MO, USA
| |
Collapse
|
49
|
Affiliation(s)
- Kristi Morin
- Department of Educational Psychology, Texas A&M University, College Station, Texas, United States
| | - John L. Davis
- Department of Educational Psychology, University of Utah, Salt Lake City, Utah, United States
| |
Collapse
|
50
|
Ashendorf L, Sugarman MA. Evaluation of performance validity using a Rey Auditory Verbal Learning Test forced-choice trial. Clin Neuropsychol 2016; 30:599-609. [DOI: 10.1080/13854046.2016.1172668] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Affiliation(s)
- Lee Ashendorf
- Edith Nourse Rogers Memorial Veterans Hospital, Bedford, MA, USA
- Boston University School of Medicine, Boston, MA, USA
| | - Michael A. Sugarman
- Edith Nourse Rogers Memorial Veterans Hospital, Bedford, MA, USA
- Wayne State University, Detroit, MI, USA
| |
Collapse
|