1
Ramanauskas B, Nixon TM, Finley JCA, VanLandingham HB, Leese MI, Ulrich DM, Ovsiew GP, Cerny BM, Phillips MS, Soble JR, Robinson AD. Analyzing the relationship between processing speed impairment and Rey-15 item test performance. J Clin Exp Neuropsychol 2024:1-11. [PMID: 39329256 DOI: 10.1080/13803395.2024.2406241]
Abstract
OBJECTIVE This study investigated the relationship between processing speed impairment severity and performance on the Rey 15-Item Test (RFIT) and RFIT + Recognition. METHOD Cross-sectional data were included from 285 examinees (228 valid/57 invalid) referred for neuropsychological assessment who were administered the RFIT, Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) Processing Speed Index (PSI), Brief Visuospatial Memory Test - Revised, Rey Auditory Verbal Learning Test, and three independent criterion PVTs. PSI bands were operationalized as Intact (≥85 SS; n = 163), Reduced/Possibly Impaired (77-84 SS; n = 36), or Impaired (≤76 SS; n = 29). Receiver operating characteristic (ROC) curve analyses tested the RFIT and RFIT + Recognition's classification accuracy for detecting invalid performance for the overall sample and by PSI impairment status. RESULTS Those with intact processing speed performed significantly better on the RFIT and RFIT + Recognition than those with reduced/possibly impaired and impaired processing speed. Though verbal/visual memory predicted RFIT scores independently, PSI contributed additional variance. ROC curves for RFIT and RFIT + Recognition were significant (AUC=.64-.84). Optimal cut-scores yielded modest sensitivity (30%-63%) and high specificity (89%-93%) among those with intact and reduced processing speed but yielded unacceptable accuracy in those with impaired speed (AUC=.59-.62). CONCLUSIONS Although the RFIT and RFIT + Recognition demonstrated acceptable classification accuracy in those with intact processing speed, accuracy diminished with increasing speed impairment. This finding was more pronounced for RFIT + Recognition compared to the traditional RFIT. As such, the RFIT may have limited clinical utility in examinees with more significant processing speed deficits.
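As an illustration of the ROC approach described in this abstract, the sketch below (Python, with simulated placeholder scores rather than the study's data) computes an AUC for a hypothetical RFIT score distribution and selects the cut-score with the best sensitivity among thresholds that keep specificity at or above .89:

```python
# Minimal sketch (not the authors' code) of the ROC workflow described above:
# compute AUC for a PVT score against a criterion-defined validity grouping,
# then pick the cut-score that maximizes sensitivity while holding specificity >= .89.
# The simulated scores below are placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# 1 = invalid (criterion PVT failure), 0 = valid; lower RFIT scores suggest invalidity
group = np.concatenate([np.zeros(228), np.ones(57)]).astype(int)
rfit = np.concatenate([rng.normal(12, 2.5, 228), rng.normal(8, 3.0, 57)]).clip(0, 15)

# ROC analysis: invert scores so that higher values index invalid responding
auc = roc_auc_score(group, -rfit)
fpr, tpr, thresholds = roc_curve(group, -rfit)
specificity, sensitivity = 1 - fpr, tpr

# Optimal cut-score: best sensitivity among thresholds with specificity >= .89
ok = specificity >= 0.89
best = np.argmax(np.where(ok, sensitivity, -1))
cut = -thresholds[best]  # back to the original RFIT metric
print(f"AUC = {auc:.2f}; cut-score <= {cut:.0f}: "
      f"sensitivity = {sensitivity[best]:.2f}, specificity = {specificity[best]:.2f}")
```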
Affiliation(s)
- Brian Ramanauskas
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, The Chicago School, Chicago, IL, USA
- Tana M Nixon
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Midwestern University, Downers Grove, IL, USA
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Hannah B VanLandingham
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Mira I Leese
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Devin M Ulrich
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Matthew S Phillips
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Anthony D Robinson
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
2
Crişan I, Erdodi L. Examining the cross-cultural validity of the test of memory malingering and the Rey 15-item test. Appl Neuropsychol Adult 2024; 31:721-731. [PMID: 35476611 DOI: 10.1080/23279095.2022.2064753]
Abstract
OBJECTIVE This study was designed to investigate the cross-cultural validity of two freestanding performance validity tests (PVTs), the Test of Memory Malingering - Trial 1 (TOMM-1) and the Rey Fifteen Item Test (Rey-15) in Romanian-speaking patients. METHODS The TOMM-1 and Rey-15 free recall (FR) and the combination score incorporating the recognition trial (COMB) were administered to a mixed clinical sample of 61 adults referred for cognitive evaluation, 24 of whom had external incentives to appear impaired. Average scores on PVTs were compared between the two groups. Classification accuracies were computed using one PVT against another. RESULTS Patients with identifiable external incentives to appear impaired produced significantly lower scores and more errors on validity indicators. The largest effect sizes emerged on TOMM-1 (Cohen's d = 1.00-1.19). TOMM-1 was a significant predictor of the Rey-15 COMB ≤20 (AUC = .80; .38 sensitivity; .89 specificity at a cutoff of ≤39). Similarly, both Rey-15 indicators were significant predictors of TOMM-1 at ≤39 as the criterion (AUCs = .73-.76; .33 sensitivity; .89-.90 specificity). CONCLUSION Results offer a proof of concept for the cross-cultural validity of the TOMM-1 and Rey-15 in a Romanian clinical sample.
Affiliation(s)
- Iulia Crişan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Laszlo Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
3
Phillips MS, Wisinger AM, Cerny BM, Khan H, Chang F, Tse KYP, Ovsiew GP, Resch ZJ, Shapiro G, Soble JR, Jennette KJ. Effect of processing speed and memory performance on classification accuracy of the dot counting test in a mixed neuropsychiatric sample. J Clin Exp Neuropsychol 2024; 46:522-534. [PMID: 38847827 DOI: 10.1080/13803395.2024.2363978]
Abstract
OBJECTIVE This study examined the impact of impairment in two specific cognitive abilities, processing speed and memory, on Dot Counting Test (DCT) classification accuracy by evaluating performance validity classification accuracy across cognitively unimpaired, single-domain impairment, and multidomain impairment subgroups within a mixed clinical sample. METHOD Cross-sectional data were analyzed from 348 adult outpatients classified as valid (n = 284) or invalid (n = 64) based on four independent criterion performance validity tests (PVTs). Unimpaired (n = 164), single-domain processing speed impairment (n = 24), single-domain memory impairment (n = 53), and multidomain processing speed and memory impairment (n = 43) clinical subgroups were established among the valid group. Both the traditional DCT E-score and unrounded E-score were examined. RESULTS Overall, the DCT demonstrated acceptable to excellent classification accuracy across the unimpaired (area under the curve [AUC] traditional E-score=.855; unrounded E-score=.855) and single-domain impairment groups (traditional E-score AUCs = .690-.754; unrounded E-score AUCs = .692-.747). However, it did not reliably discriminate the multidomain processing speed and memory impairment group from the invalid performers (traditional and unrounded E-scores AUC = .557). CONCLUSIONS Findings support the DCT as a non-memory-based freestanding PVT for use with single-domain cognitive impairment, with traditional E-score ≥17 (unrounded E-score ≥16.95) recommended for those with memory impairment and traditional E-score ≥19 (unrounded ≥18.08) with processing speed impairment. Moreover, results replicated previously established optimal cutoffs for unimpaired groups using both the traditional (≥14) and unrounded (≥13.84) E-scores. However, the DCT did not reliably discriminate between invalid performance and multidomain cognitive impairment, indicating caution is warranted when using the DCT with patients suspected of greater cognitive impairment.
Affiliation(s)
- Matthew S Phillips
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Amanda M Wisinger
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Humza Khan
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Fini Chang
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, University of Illinois, Chicago, IL, USA
- Ka Yin Phoebe Tse
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Clinical Psychology, The Chicago School, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Greg Shapiro
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Clinical Psychology, The Chicago School, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
4
Deloria R, Kivisto AJ, Swier-Vosnos A, Elwood L. Optimal per test cutoff scores and combinations of failure on multiple embedded performance validity tests in detecting performance invalidity in a mixed clinical sample. Appl Neuropsychol Adult 2023; 30:716-726. [PMID: 34528833 DOI: 10.1080/23279095.2021.1973005]
Abstract
We tested the usefulness of six embedded performance validity tests (EPVTs) in identifying performance invalidity in a mixed clinical sample. Using a retrospective design, 181 adults were classified as providing valid (n = 146) or invalid (n = 35) performance based upon their performance on one of three standalone PVTs (Test of Memory Malingering, Victoria Symptom Validity Test, Dot Counting Test). Multiple cutoffs were identified corresponding to predetermined false positive rates of 0, 5, 10, and 15% for each of six EPVTs. EPVT cutoffs corresponding to the predetermined false positive benchmarks were generally more conservative than currently established scores. Sensitivity was low (0%-42.9%) for individual EPVTs across these cutoffs and was moderately improved by the combination of multiple EPVT failures. The optimal number of EPVT failures using the 10% false positive rate was ≥2. Although the overall classification accuracy of 80.7% and specificity of 89.0% were comparable to prior research, the sensitivity of 45.7% was more modest than previous estimates. Low sensitivities indicate that this combination of EPVTs failed to detect a majority of invalid performers.
Affiliation(s)
- Rebecca Deloria
- Graduate Department of Clinical Psychology, University of Indianapolis, Indianapolis, IN, United States
- Aaron J Kivisto
- Graduate Department of Clinical Psychology, University of Indianapolis, Indianapolis, IN, United States
- Lisa Elwood
- Graduate Department of Clinical Psychology, University of Indianapolis, Indianapolis, IN, United States
5
Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part II-Methodological Issues. Neuropsychol Rev 2023; 33:604-623. [PMID: 37594690 DOI: 10.1007/s11065-023-09602-6]
Abstract
Forensic neuropsychological examinations to detect malingering in patients with neurocognitive, physical, and psychological dysfunction have tremendous social, legal, and economic importance. Thousands of studies have been published to develop and validate methods to forensically detect malingering based largely on approximately 50 validity tests, including embedded and stand-alone performance and symptom validity tests. This is Part II of a two-part review of statistical and methodological issues in the forensic prediction of malingering based on validity tests. The Part I companion paper explored key statistical issues. Part II examines related methodological issues through conceptual analysis, statistical simulations, and reanalysis of findings from prior validity test validation studies. Methodological issues examined include the distinction between analog simulation and forensic studies, the effect of excluding too-close-to-call (TCTC) cases from analyses, the distinction between criterion-related and construct validation studies, and the application of the Revised Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) to assess risk of bias in all Test of Memory Malingering (TOMM) validation studies published within approximately the first 20 years following its initial publication. Findings include that analog studies are commonly mistaken for forensic validation studies, and that construct validation studies are routinely presented as if they were criterion-referenced validation studies. After accounting for the exclusion of TCTC cases, actual classification accuracy was found to be well below claimed levels. QUADAS-2 results revealed that extant TOMM validation studies all had a high risk of bias, with not a single TOMM validation study with low risk of bias. Recommendations include adoption of well-established guidelines from the biomedical diagnostics literature for good quality criterion-referenced validation studies and examination of implications for malingering determination practices. Design of future studies may hinge on the availability of an incontrovertible reference standard of the malingering status of examinees.
Affiliation(s)
- Christoph Leonhard
- The Chicago School of Professional Psychology at Xavier University of Louisiana, 1 Drexel Dr, Box 200, New Orleans, LA, 70125, USA.
6
Cutler L, Greenacre M, Abeare CA, Sirianni CD, Roth R, Erdodi LA. Multivariate models provide an effective psychometric solution to the variability in classification accuracy of D-KEFS Stroop performance validity cutoffs. Clin Neuropsychol 2023; 37:617-649. [PMID: 35946813 DOI: 10.1080/13854046.2022.2073914]
Abstract
Objective: The study was designed to expand on the results of previous investigations on the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions. Method: The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). Results: Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 on the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. Conclusions: A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the invalid-before-impaired paradox.
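The multivariate rule summarized above (≥3 subtest failures at ACSS ≤6, or ≥2 at ACSS ≤5) reduces to a simple count over the four D-KEFS Stroop conditions; a minimal sketch, with hypothetical score profiles, is shown below:

```python
# Illustrative sketch of the multivariate decision rule described above (not the
# authors' code): flag a profile as invalid when >=3 of the four D-KEFS Stroop
# age-corrected scaled scores (ACSS) are <=6, or >=2 are <=5. Input values are
# hypothetical.
from typing import Sequence

def dkefs_stroop_flag(acss: Sequence[int]) -> bool:
    """Return True if the four-subtest ACSS profile meets the multivariate validity cutoff."""
    if len(acss) != 4:
        raise ValueError("Expected ACSS for the four D-KEFS Stroop conditions")
    failures_at_6 = sum(score <= 6 for score in acss)
    failures_at_5 = sum(score <= 5 for score in acss)
    return failures_at_6 >= 3 or failures_at_5 >= 2

print(dkefs_stroop_flag([6, 6, 7, 5]))   # True: three scores are <=6
print(dkefs_stroop_flag([8, 7, 6, 9]))   # False: only one score is <=6
```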
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Matthew Greenacre
- Schulich School of Medicine, Western University, London, Ontario, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Robert Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
7
Brooks BL, Fay-McClymont TB, MacAllister WS, Vasserman M, Mish S, Sherman EMS. New Child and Adolescent Memory Profile Embedded Performance Validity Test. Arch Clin Neuropsychol 2023:6972889. [PMID: 36617240 DOI: 10.1093/arclin/acac110]
Abstract
OBJECTIVE It is essential to interpret performance validity tests (PVTs) that are well-established and have strong psychometrics. This study evaluated the Child and Adolescent Memory Profile (ChAMP) Validity Indicator (VI) using a pediatric sample with traumatic brain injury (TBI). METHOD A cross-sectional sample of N = 110 youth (mean age = 15.1 years, standard deviation [SD] = 2.4, range = 8-18) on average 32.7 weeks (SD = 40.9) post TBI (71.8% mild/concussion; 3.6% complicated mild; 24.6% moderate-to-severe) were administered the ChAMP and two stand-alone PVTs. Criterion for valid performance was scores above cutoffs on both PVTs; criterion for invalid performance was scores below cutoffs on both PVTs. Classification statistics were used to evaluate the existing ChAMP VI and establish a new VI cutoff score if needed. RESULTS There were no significant differences in demographics or time since injury between those deemed valid (n = 96) or invalid (n = 14), but all ChAMP scores were significantly lower in those deemed invalid. The original ChAMP VI cutoff score was highly specific (no false positives) but also highly insensitive (sensitivity [SN] = .07, specificity [SP] = 1.0). Based on area under the curve (AUC) analysis (0.94), a new cutoff score was established using the sum of scaled scores (VI-SS). A ChAMP VI-SS score of 32 or lower achieved strong SN (86%) and SP (92%). Using a 15% base rate, positive predictive value was 64% and negative predictive value was 97%. CONCLUSIONS The originally proposed ChAMP VI has insufficient SN in pediatric TBI. However, this study yields a promising new ChAMP VI-SS, with classification metrics that exceed any other current embedded PVT in pediatrics.
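The predictive values quoted above follow directly from sensitivity, specificity, and the assumed base rate via Bayes' rule; a short worked example (not the authors' code) reproduces them approximately:

```python
# Worked example of how predictive values follow from sensitivity, specificity,
# and base rate. The inputs are the values reported in the abstract; the function
# itself is a generic illustration, not code from the paper.
def predictive_values(sensitivity: float, specificity: float, base_rate: float):
    tp = sensitivity * base_rate              # true positives per examinee
    fp = (1 - specificity) * (1 - base_rate)  # false positives
    fn = (1 - sensitivity) * base_rate        # false negatives
    tn = specificity * (1 - base_rate)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)     # (PPV, NPV)

# SN = .86, SP = .92, and a 15% base rate of invalid performance
ppv, npv = predictive_values(0.86, 0.92, 0.15)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # ~.65 and ~.97, in line with the reported 64% and 97%
```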
Affiliation(s)
- Brian L Brooks
- Neurosciences Program, Alberta Children's Hospital, Calgary, Alberta T3B 6A8, Canada
- Departments of Pediatrics, Clinical Neurosciences, and Psychology, University of Calgary, Calgary, Alberta T2N 1N4, Canada
- Child Brain and Mental Health Section, Alberta Children's Hospital Research Institute, University of Calgary, Calgary, Alberta T2N 1N4, Canada
- Taryn B Fay-McClymont
- Neurosciences Program, Alberta Children's Hospital, Calgary, Alberta T3B 6A8, Canada
- Child Brain and Mental Health Section, Alberta Children's Hospital Research Institute, University of Calgary, Calgary, Alberta T2N 1N4, Canada
- Department of Pediatrics, University of Calgary, Calgary, Alberta T2N 1N4, Canada
- Department of Psychology, University of British Columbia Okanagan, Kelowna, British Columbia V1V 1V7, Canada
- William S MacAllister
- Neurosciences Program, Alberta Children's Hospital, Calgary, Alberta T3B 6A8, Canada
- Child Brain and Mental Health Section, Alberta Children's Hospital Research Institute, University of Calgary, Calgary, Alberta T2N 1N4, Canada
- Department of Pediatrics, University of Calgary, Calgary, Alberta T2N 1N4, Canada
- Marsha Vasserman
- Neurosciences Program, Alberta Children's Hospital, Calgary, Alberta T3B 6A8, Canada
- Child Brain and Mental Health Section, Alberta Children's Hospital Research Institute, University of Calgary, Calgary, Alberta T2N 1N4, Canada
- Department of Pediatrics, University of Calgary, Calgary, Alberta T2N 1N4, Canada
- Sandra Mish
- Neurosciences Program, Alberta Children's Hospital, Calgary, Alberta T3B 6A8, Canada
8
Donders J, Vos M. Utility of CVLT-3 response bias as a measure of performance validity after traumatic brain injury. Clin Neuropsychol 2023; 37:91-100. [PMID: 35285406 DOI: 10.1080/13854046.2022.2051152]
Abstract
OBJECTIVE We sought to determine the utility of a new performance validity index that was recently proposed. In particular, we wanted to determine if this index would be associated with a specificity of at least .90, a sensitivity of at least .40, and an Area Under the Curve of at least .70 in a traumatic brain injury (TBI) sample. METHOD We used logistic regression to investigate how well this new index could distinguish persons with TBI (n = 148) who were evaluated within 1-36 months after injury. All participants had been classified on the basis of at least two independent performance validity tests as having provided valid performance (n = 128) or invalid performance (n = 20). RESULTS The new performance validity index had acceptable specificity (.96) but had suboptimal sensitivity (.35) and Area Under the Curve (.66). It was concerning that almost half (5/12) of the cases that were identified by this index as providing invalid effort were false positives. Although a slightly more liberal cut-off improved sensitivity, the problem with poor positive predictive power remained. The conventional Forced Choice index had relatively better classification accuracy. CONCLUSION Differences in base rates between the original sample of Martin et al. and the current one most likely affected positive predictive power of the new index. Although the new index has excellent specificity, the current results do not support its application in the clinical evaluation of patients with traumatic brain injury when base rates of invalid performance differ markedly from those in the original study.
Affiliation(s)
- Jacobus Donders
- Department of Psychology, Mary Free Bed Rehabilitation Hospital, Grand Rapids, MI, USA
- Matthew Vos
- Department of Psychology, Calvin College, Grand Rapids, MI, USA
9
Gur N, Hegedish O, Hoofien D, Pilowsky Peleg T. The Temporal Memory Sequence Test (TMST) in children: Validity test performance in clinically referred children. Appl Neuropsychol Child 2023; 12:9-16. [PMID: 34870554 DOI: 10.1080/21622965.2021.2008936]
Abstract
Validity evaluation is fundamental in neuropsychological assessment in adults, with increasing interest among pediatric neuropsychologists. Although some measures exist, given time constraints placed on clinicians, and children's limited sustained attention, development of less time-consuming measures is beneficial. We explored the use of the Temporal Memory Sequence Test (TMST), a new performance validity test, in clinically referred children. One minor adaptation included reading the instructions and labels to non-fluent readers. Participants were 68 consecutive clinically referred children and adolescents, aged 6-18 years, with neurological (n = 46) or behavioral (n = 22) difficulties. Applying the adult cutoff, 83.8% passed the TMST. Age, gender, and diagnosis did not differ between children passing the TMST cutoff and those who failed it. Classification accuracy calculated against three embedded measures of performance validity (Wechsler scale Digit Span, Coding, and Processing Speed Index) indicated specificity over 90% (Digit Span: 94%, Coding: 96%, Processing Speed Index: 92%) and sensitivity between 30 and 33%. For individuals without Intellectual Disability (ID), 90.9% passed the TMST, and intelligence did not predict success. Thus, the use of the TMST with the adult cutoff was supported in children without ID, offering an additional validity measure for clinically referred children.
Affiliation(s)
- N Gur
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- The Neuropsychological Unit, Schneider Children's Medical Center, Petach Tikvah, Israel
- O Hegedish
- Department of Psychology, University of Haifa, Haifa, Israel
- D Hoofien
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Psychology, Tel Aviv-Jaffa Academic College, Israel
- The National Institute for Neuropsychological Rehabilitation, Israel
- T Pilowsky Peleg
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- The Neuropsychological Unit, Schneider Children's Medical Center, Petach Tikvah, Israel
10
Weigard A, Spencer RJ. Benefits and challenges of using logistic regression to assess neuropsychological performance validity: Evidence from a simulation study. Clin Neuropsychol 2023; 37:34-59. [PMID: 35006042 PMCID: PMC9273108 DOI: 10.1080/13854046.2021.2023650]
Abstract
Logistic regression (LR) is recognized as a promising method for making decisions about neuropsychological performance validity by integrating information across multiple measures. However, this method has yet to be widely adopted in clinical practice, likely because several open questions remain about its utility relative to simpler methods, its effectiveness across different clinical contexts, and its feasibility at sample sizes common in the field. The current study addresses these questions by assessing classification performance of logistic regression and alternative methods across an array of simulated data sets. We simulated scores of valid and invalid performers on 6 tests designed to mimic the psychometric and distributional properties of real performance validity measures. Out-of-sample predictive performance of LR and a commonly used alternative ("vote counting") was assessed across different base rates, validity measure properties, and sample sizes. LR improved classification accuracy by 2%-12% across simulation conditions, primarily by improving sensitivity. False positives and negatives can be further reduced when LR predictions are interpreted as continuous, rather than binary. LR made robust predictions at sample sizes feasible for neuropsychology research (N = 307) and when as few as 2 tests with good psychometric properties were used. Although training and test data sets of at least several hundred individuals may be required to develop and evaluate LR models for use in clinical practice, LR promises to be an efficient and powerful tool for improving judgements about performance validity. We offer several recommendations for model development and LR interpretation in a clinical setting.
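A conceptual sketch of the two aggregation strategies contrasted in this simulation study is given below; the data, cutoffs, and sample sizes are illustrative assumptions, not the authors' simulation parameters:

```python
# Conceptual sketch (simulated placeholder data, not the authors' simulation code)
# contrasting the two aggregation strategies discussed above: a logistic regression
# over several PVT scores versus a "vote counting" rule (fail >=2 per-test cutoffs).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_valid, n_invalid, n_tests = 250, 60, 6
X = np.vstack([rng.normal(50, 10, (n_valid, n_tests)),     # valid performers
               rng.normal(40, 10, (n_invalid, n_tests))])  # invalid performers score lower
y = np.concatenate([np.zeros(n_valid), np.ones(n_invalid)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Logistic regression: weights each PVT and yields a continuous probability of invalidity
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
lr_acc = lr.score(X_te, y_te)

# Vote counting: flag as invalid if >=2 tests fall below a fixed per-test cutoff (here, T < 40)
votes = (X_te < 40).sum(axis=1)
vote_acc = ((votes >= 2).astype(float) == y_te).mean()

print(f"logistic regression accuracy = {lr_acc:.2f}, vote counting accuracy = {vote_acc:.2f}")
```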
Affiliation(s)
- Robert J. Spencer
- Department of Psychiatry, University of Michigan
- VA Ann Arbor Healthcare System
11
Boone KB, Sherman D, Mishler J, Daoud G, Cottingham M, Victor TL, Ziegler E, Zeller MA, Wright M. Cross-validation of RAVLT performance validity indicators and the RAVLT/RO discriminant function in a large known groups sample. Clin Neuropsychol 2022; 36:2342-2360. [PMID: 34311662 DOI: 10.1080/13854046.2021.1948611]
Abstract
OBJECTIVE To cross-validate RAVLT performance validity cut-offs and the RAVLT/RO discriminant function in a large neuropsychological sample. METHOD RAVLT scores and the RAVLT/RO discriminant function were compared in credible (n = 100) and noncredible (n = 353) neuropsychology referrals. RESULTS Noncredible patients scored lower than credible patients on RAVLT scores and the RAVLT/RO discriminant function. With cut-offs set to ≥90% specificity, highest sensitivities were observed for the discriminant function (cut-off ≤.064; 55.8%), recognition total (cut-off ≤9; 53.1%), the recognition combination score (≤10; 47.7%), and total learning across trials (cut-off ≤31; 45.3%). Individuals with histories of learning difficulties were over-represented in the 10% of credible patients exceeding cut-offs. When these individuals were removed, cut-offs could be tightened while still maintaining at least 90% specificity, and thereby increasing sensitivity (e.g., recognition total cut-off ≤10, 65% sensitivity; RAVLT/RO discriminant function cut-off ≤.176, 58% sensitivity). When three of the most sensitive, non-overlapping scores were considered in combination, 17% of credible patients failed ≥1 of the three cut-offs, while 3% failed two, and only 1% failed all three. In contrast, in the noncredible sample, more than two-thirds failed one or more of the three cut-offs, nearly half failed ≥2, and nearly a quarter failed all three. CONCLUSIONS RAVLT PVT cut-offs and the RAVLT/RO discriminant function achieve approximately 50% sensitivity, and approach 65% sensitivity when cut-offs specific to samples without histories of learning problems are employed, confirming that RAVLT cut-offs and the RAVLT/RO discriminant function continue to be valuable techniques in the identification of performance invalidity.
Affiliation(s)
- Kyle B Boone
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Dale Sherman
- University of Southern California, Los Angeles, CA, USA
- Jamie Mishler
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Georg Daoud
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Maria Cottingham
- Mental Health Care Line, Veterans Administration Tennessee Valley Healthcare System, Nashville, TN, USA
- Tara L Victor
- California State University, Dominguez Hills, Carson, CA, USA
- Michelle A Zeller
- West Los Angeles Veterans Administration Medical Center, Los Angeles, CA, USA
12
Donders J, Hayden A. Utility of the D-KEFS color word interference test as an embedded measure of performance validity after traumatic brain injury. Clin Neuropsychol 2022; 36:1964-1974. [PMID: 33327855 DOI: 10.1080/13854046.2020.1861659]
Abstract
Objective: We sought to determine the accuracy of embedded performance measures for the D-KEFS Color Word Interference Test that were recently proposed by Eglit et al. In particular, we wanted to determine if these indices would be associated with a specificity of at least .90, an Area Under the Curve of at least .70 and a positive likelihood ratio of at least 2. Method: We used logistic regression to investigate how well these indices could distinguish persons with traumatic brain injury (n = 169) who were evaluated within 1-12 months after injury. All participants had been classified on the basis of at least three independent performance validity tests as valid performance (n = 145) or invalid performance (n = 24). Results: None of the three indices that Eglit et al. had proposed as embedded performance measures for the D-KEFS Color Word Interference Test achieved the a priori defined minimally acceptable level of specificity. One of them did meet the criteria for Area Under the Curve as well as positive likelihood ratio. Conclusion: The current results do not support the application of the Eglit et al. embedded performance validity measures for the D-KEFS Color Word Interference Test in the clinical evaluation of patients with traumatic brain injury.
Affiliation(s)
- Jacobus Donders
- Department of Psychology, Mary Free Bed Rehabilitation Hospital, Grand Rapids, MI, USA
- Ashley Hayden
- Department of Psychology, Hope College, Holland, MI, USA
13
Ali S, Crisan I, Abeare CA, Erdodi LA. Cross-Cultural Performance Validity Testing: Managing False Positives in Examinees with Limited English Proficiency. Dev Neuropsychol 2022; 47:273-294. [PMID: 35984309 DOI: 10.1080/87565641.2022.2105847]
Abstract
Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation (even multivariate PVT models could not contain BRFail). In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.
Affiliation(s)
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Iulia Crisan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
14
Fox ME, King TZ. Considerations for Reliable Digit Span as a performance validity test for long-term survivors of childhood brain tumors. Appl Neuropsychol Adult 2022; 29:469-477. [PMID: 32503366 DOI: 10.1080/23279095.2020.1771714]
Abstract
The Reliable Digit Span (RDS) is a performance validity test (PVT) used widely within non-clinical samples, but its utility is in question in clinical groups with cognitive impairment. To investigate, RDS scores were calculated and correlated with the Neurological Predictor Scale, an informant-reported Activities of Daily Living score, and a proxy measure of intelligence (Vocabulary) for 83 adult survivors of childhood brain tumors and 105 healthy controls. Analyses were covaried for age at examination. Participants were divided into passing and failing groups at each RDS cutoff, and ANCOVAs were run for each of the three variables of interest. RDS was correlated with all three variables of interest in survivors but only Vocabulary in controls. At the ≤7 cutoff, passing and failing survivors demonstrated significant differences across all variables of interest, while passing and failing controls differed only on Vocabulary. Differences were also found between passing and failing survivors at lower cutoffs. RDS is related to and likely impacted by various neurological and cognitive challenges faced by brain tumor survivors. Using the standard RDS cutoff of ≤7 may result in inaccurate interpretation of valid performance in this population; therefore, the use of other PVTs is recommended.
Affiliation(s)
- Tricia Z King
- Department of Psychology and the Neuroscience Institute, Georgia State University, Atlanta, GA, USA
15
Ali S, Elliott L, Biss RK, Abumeeiz M, Brantuo M, Kuzmenka P, Odenigbo P, Erdodi LA. The BNT-15 provides an accurate measure of English proficiency in cognitively intact bilinguals - a study in cross-cultural assessment. Appl Neuropsychol Adult 2022; 29:351-363. [PMID: 32449371 DOI: 10.1080/23279095.2020.1760277]
Abstract
This study was designed to replicate earlier reports of the utility of the Boston Naming Test - Short Form (BNT-15) as an index of limited English proficiency (LEP). Twenty-eight English-Arabic bilingual student volunteers were administered the BNT-15 as part of a brief battery of cognitive tests. The majority (23) were women, and half had LEP. Mean age was 21.1 years. The BNT-15 was an excellent psychometric marker of LEP status (area under the curve: .990-.995). Participants with LEP underperformed on several cognitive measures (verbal comprehension, visuomotor processing speed, single word reading, and performance validity tests). Although no participant with LEP failed the accuracy cutoff on the Word Choice Test, 35.7% of them failed the time cutoff. Overall, LEP was associated with an increased risk of failing performance validity tests. Previously published BNT-15 validity cutoffs had unacceptably low specificity (.33-.52) among participants with LEP. The BNT-15 has the potential to serve as a quick and effective objective measure of LEP. Students with LEP may need academic accommodations to compensate for slower test completion time. Likewise, LEP status should be considered for exemption from failing performance validity tests to protect against false positive errors.
Affiliation(s)
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, Canada
- Lauren Elliott
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Renee K Biss
- Department of Psychology, University of Windsor, Windsor, Canada
- Mustafa Abumeeiz
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, Canada
- Paula Odenigbo
- Department of Psychology, University of Windsor, Windsor, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
16
Hood ED, Boone KB, Miora DS, Cottingham ME, Victor TL, Zeigler EA, Zeller MA, Wright MJ. Are there differences in performance validity test scores between African American and White American neuropsychology clinic patients? J Clin Exp Neuropsychol 2022; 44:31-41. [PMID: 35670549 DOI: 10.1080/13803395.2022.2069230]
Abstract
OBJECTIVE The purpose of the present study was to compare performance on a wide range of PVTs in a neuropsychology clinic sample of African Americans and White Americans to determine if there are differences in mean scores or cut-off failure rates between the two groups, and to identify factors that may account for false positive PVT results in African American patients. METHOD African American and White American non-compensation-seeking neuropsychology clinic patients were compared on a wide range of standalone and embedded PVTs: Dot Counting Test, b Test, Warrington Recognition Memory Test, Rey 15-item plus recognition, Rey Word Recognition Test, Digit Span (ACSS, RDS, 3-digit time, 4-digit time), WAIS-III Picture Completion (Most discrepant index), WAIS-III Digit Symbol/Coding (recognition equation), Rey Auditory Verbal Learning Test, Rey Complex figure, WMS-III Logical Memory, Comalli Stroop Test, Trails A, and Wisconsin Card Sorting Test. RESULTS When groups were equated for age and education, African Americans obtained mean performances significantly worse than White Americans on only four of 25 PVT scores across the 14 different measures (Stroop Word Reading and Color Naming, Trails A, Digit Span 3-digit time); however, FSIQ was also significantly higher in White American patients. When subjects with borderline IQ (FSIQ = 70 to 79) were excluded (resulting in 74 White Americans and 25 African Americans), groups no longer differed in IQ and only continued to differ on a single PVT cutoff (Trails A). Further, specificity rates in African Americans were comparable to those of White Americans with the exception of the b Test, the Dot Counting Test, and Stroop B. CONCLUSIONS PVT performance generally does not differ as a function of Black versus White race once the impact of intellectual level is controlled, and most PVT cutoffs appear appropriate for use in African Americans of low average IQ or higher.
Affiliation(s)
- Elexsia D Hood
- California School of Forensic Studies, Alliant International University, Los Angeles, USA
- Kyle B Boone
- California School of Forensic Studies, Alliant International University, Los Angeles, USA
- Department of Psychiatry and Biobehavioral Sciences, UCLA, Los Angeles, USA
- Deborah S Miora
- California School of Forensic Studies, Alliant International University, Los Angeles, USA
- Maria E Cottingham
- Mental Health Care Line, Veterans Administration Tennessee Valley Healthcare System, Nashville, USA
- Tara L Victor
- Department of Psychology, California State University, Dominguez Hills, Carson, USA
- Michelle A Zeller
- West Los Angeles Veterans Administration Medical Center, Los Angeles, USA
- Matthew J Wright
- Department of Psychiatry, Harbor-UCLA Medical Center, Torrance, USA
17
Messa I, Holcomb M, Lichtenstein JD, Tyson BT, Roth RM, Erdodi LA. They are not destined to fail: a systematic examination of scores on embedded performance validity indicators in patients with intellectual disability. Aust J Forensic Sci 2021. [DOI: 10.1080/00450618.2020.1865457]
Affiliation(s)
- Isabelle Messa
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Brad T Tyson
- Neuropsychological Service, EvergreenHealth Medical Center, Kirkland, WA, USA
- Robert M Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
18
Differentiating Functional Cognitive Disorder from Early Neurodegeneration: A Clinic-Based Study. Brain Sci 2021; 11(6):800. [PMID: 34204389 PMCID: PMC8234331 DOI: 10.3390/brainsci11060800]
Abstract
Functional cognitive disorder (FCD) is a relatively common cause of cognitive symptoms, characterised by inconsistency between symptoms and observed or self-reported cognitive functioning. We aimed to improve the clinical characterisation of FCD, in particular its differentiation from early neurodegeneration. Two patient cohorts were recruited from a UK-based tertiary cognitive clinic, diagnosed following clinical assessment, investigation and expert multidisciplinary team review: FCD, (n = 21), and neurodegenerative Mild Cognitive Impairment (nMCI, n = 17). We separately recruited a healthy control group (n = 25). All participants completed an assessment battery including: Hopkins Verbal Learning Test-Revised (HVLT-R), Trail Making Test Part B (TMT-B); Depression Anxiety and Stress Scale (DASS) and Minnesota Multiphasic Personality Inventory (MMPI-2RF). In comparison to healthy controls, the FCD and nMCI groups were equally impaired on trail making, immediate recall, and recognition tasks; had equally elevated mood symptoms; showed similar aberration on a range of personality measures; and had similar difficulties on inbuilt performance validity tests. However, participants with FCD performed significantly better than nMCI on HVLT-R delayed free recall and retention (regression coefficient −10.34, p = 0.01). Mood, personality and certain cognitive abilities were similarly altered across nMCI and FCD groups. However, those with FCD displayed spared delayed recall and retention, in comparison to impaired immediate recall and recognition. This pattern, which is distinct from that seen in prodromal neurodegeneration, is a marker of internal inconsistency. Differentiating FCD from nMCI is challenging, and the identification of positive neuropsychometric features of FCD is an important contribution to this emerging area of cognitive neurology.
19
Abeare K, Romero K, Cutler L, Sirianni CD, Erdodi LA. Flipping the Script: Measuring Both Performance Validity and Cognitive Ability with the Forced Choice Recognition Trial of the RCFT. Percept Mot Skills 2021; 128:1373-1408. [PMID: 34024205 PMCID: PMC8267081 DOI: 10.1177/00315125211019704]
Abstract
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFT FCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 & ≤17), the RCFT FCR remained specific (.84-1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFT FCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFT FCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
Affiliation(s)
- Kaitlyn Abeare
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Kristoffer Romero
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
20
Grant AF, Werner NJ. Retrospective Analysis of the Test of Memory Malingering in a Low Intellectual Quotient Intractable Epilepsy Sample. Arch Clin Neuropsychol 2020; 35:726-734. [PMID: 32377674 DOI: 10.1093/arclin/acaa022]
Abstract
OBJECTIVE The Test of Memory Malingering (TOMM) is commonly used by neuropsychologists (Sharland, M. J., & Gfeller, J. D. (2007). A survey of neuropsychologists' beliefs and practices with respect to the assessment of effort. Archives of Clinical Neuropsychology, 22 (2), 213-223); however there is variable research regarding its use in low intelligence and epileptic populations (Hill, S. K., Ryan, L. M., Kennedy, C. H., & Malamut, B. L. (2003). The relationship between measures of declarative memory and the Test of Memory Malingering in patients with and without temporal lobe dysfunction. Journal of Forensic Neuropsychology, 3 (3), 1-18; Hurley, K. E., & Deal, W. P. (2006). Assessment instruments measuring malingering used with individuals who have mental retardation: Potential problems and issues. Mental Retardation, 44 (2), 112-119; Simon, M. J. (2007). Performance of mentally retarded forensic patients on the Test of Memory Malingering. Journal of Clinical Psychology, 63 (4), 339-344). The present study evaluates whether the standard TOMM cutoffs are resistant to low estimated IQ (≤80) in a clinical sample of patients with intractable epilepsy. A second aim is to decipher possible relationships between the TOMM and memory performance. METHODS Retrospective data analysis was conducted between 2010 and 2019 on 42 adults with intractable epilepsy who completed a comprehensive neuropsychological evaluation as part of screening procedures for epilepsy surgery. IQ estimates and TOMM were administered to all participants. Some were also administered memory- and mood-related measures. RESULTS Traditional TOMM cutoffs demonstrated excellent specificity with only one participant scoring below the cutoff score on the Retention Trial, but not on Trial 2. The TOMM significantly correlated with several scores on various memory tests. CONCLUSIONS The TOMM may be appropriate for use in low intellectually functioning populations with intractable epilepsy given the excellent specificity seen in this study. Future studies may seek to better understand the relationship between TOMM and memory performance in other low-functioning populations.
Affiliation(s)
- Alexandra F Grant
- Department of Psychology, Saint Louis University, St. Louis, MO, USA
- Nicole J Werner
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
21
Sherman EMS, Slick DJ, Iverson GL. Multidimensional Malingering Criteria for Neuropsychological Assessment: A 20-Year Update of the Malingered Neuropsychological Dysfunction Criteria. Arch Clin Neuropsychol 2020; 35:735-764. [PMID: 32377667 PMCID: PMC7452950 DOI: 10.1093/arclin/acaa019]
Abstract
OBJECTIVES Empirically informed neuropsychological opinion is critical for determining whether cognitive deficits and symptoms are legitimate, particularly in settings where there are significant external incentives for successful malingering. The Slick, Sherman, and Iverson (1999) criteria for malingered neurocognitive dysfunction (MND) are considered a major milestone in the field's operationalization of neurocognitive malingering and have strongly influenced the development of malingering detection methods, including serving as the criterion of malingering in the validation of several performance validity tests (PVTs) and symptom validity tests (SVTs) (Slick, D.J., Sherman, E.M.S., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13(4), 545-561). However, the MND criteria are long overdue for revision to address advances in malingering research and to address limitations identified by experts in the field. METHOD The MND criteria were critically reviewed, updated with reference to research on malingering, and expanded to address other forms of malingering pertinent to neuropsychological evaluation such as exaggeration of self-reported somatic and psychiatric symptoms. RESULTS The new proposed criteria simplify diagnostic categories, expand and clarify external incentives, more clearly define the role of compelling inconsistencies, address issues concerning PVTs and SVTs (i.e., number administered, false positives, and redundancy), better define the role of SVTs and of marked discrepancies indicative of malingering, and most importantly, clearly define exclusionary criteria based on the last two decades of research on malingering in neuropsychology. Lastly, the new criteria provide specifiers to better describe clinical presentations for use in neuropsychological assessment. CONCLUSIONS The proposed multidimensional malingering criteria that define cognitive, somatic, and psychiatric malingering for use in neuropsychological assessment are presented.
Affiliation(s)
- Grant L Iverson
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, USA
- Spaulding Rehabilitation Hospital and Spaulding Research Institute, Charlestown, MA, USA
- Home Base, A Red Sox Foundation and Massachusetts General Hospital Program, Charlestown, MA, USA
22
A Meta-Analysis of Neuropsychological Effort Test Performance in Psychotic Disorders. Neuropsychol Rev 2020; 30:407-424. [PMID: 32766940 DOI: 10.1007/s11065-020-09448-2]
Abstract
Psychotic disorders are characterized by a generalized neurocognitive deficit (i.e., performance 1.5 SD below controls across neuropsychological domains with no specific profile of differential deficits). A motivational account of the generalized neurocognitive deficit has been proposed, which attributes poor neuropsychological testing performance to low effort. However, findings are inconsistent regarding effort test failure rate in individuals with psychotic disorders across studies (0-72%), and moderators are unclear, making it difficult to know whether the motivational explanation is viable. To address these issues, a meta-analysis was performed on data from 2205 individuals with psychotic disorders across 19 studies with 24 independent effects. Effort failure rate was examined along with moderators of effort test type, forensic status, IQ, positive symptoms, negative symptoms, diagnosis, age, gender, education, and antipsychotic use. The pooled weighted effort test failure rate was 18% across studies and there was a moderate pooled association between effort failure rate and global neurocognitive performance (r = .57). IQ and education significantly moderated failure rate. Collectively, these findings suggest that a nontrivial proportion of individuals with a psychotic disorder fail effort testing, and failure rate is associated with global neuropsychological impairment. However, given that effort tests are not immune to the effects of IQ in psychotic disorders, these results cannot attest to the viability of the motivational account of the generalized neurocognitive deficit. Furthermore, the significant moderating effect of IQ and education on effort test performance suggests that effort tests have questionable validity in this population and should be interpreted with caution.
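For readers unfamiliar with how a pooled rate such as the 18% reported here is derived, the sketch below pools hypothetical study-level failure proportions with a DerSimonian-Laird random-effects model on the logit scale; the study counts are invented for illustration and are not the meta-analysis dataset:

```python
# Illustrative sketch of a pooled failure rate: logit-transform each study's
# proportion, pool with DerSimonian-Laird random-effects weights, back-transform.
# All counts below are hypothetical.
import numpy as np

failures = np.array([12, 30, 8, 45, 20])   # hypothetical counts of effort-test failures
n = np.array([80, 150, 60, 220, 110])      # hypothetical study sample sizes

p = failures / n
y = np.log(p / (1 - p))                    # logit of each proportion
v = 1 / failures + 1 / (n - failures)      # approximate variance of each logit

w = 1 / v                                  # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)         # heterogeneity statistic
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_star = 1 / (v + tau2)                    # random-effects weights
y_pooled = np.sum(w_star * y) / np.sum(w_star)
pooled_rate = 1 / (1 + np.exp(-y_pooled))  # back-transform to a proportion
print(f"pooled failure rate = {pooled_rate:.1%} (tau^2 = {tau2:.3f})")
```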
23
Abstract
OBJECTIVES A number of commonly used performance validity tests (PVTs) may be prone to high failure rates when used for individuals with severe neurocognitive deficits. This study investigated the validity of 10 PVT scores in justice-involved adults with fetal alcohol spectrum disorder (FASD), a neurodevelopmental disability stemming from prenatal alcohol exposure and linked with severe neurocognitive deficits. METHOD The sample comprised 80 justice-involved adults (ages 19-40), including 25 with confirmed or possible FASD and 55 where FASD was ruled out. Ten PVT scores were calculated, derived from the Word Memory Test, Genuine Memory Impairment Profile, Advanced Clinical Solutions (Word Choice), the Wechsler Adult Intelligence Scale - Fourth Edition (Reliable Digit Span and age-corrected scaled scores (ACSS) from Digit Span, Coding, Symbol Search, Coding - Symbol Search, Vocabulary - Digit Span), and the Wechsler Memory Scale - Fourth Edition (Logical Memory II Recognition). RESULTS Participants with diagnosed/possible FASD were more likely to fail any single PVT, and failed a greater number of PVTs overall, compared to those without FASD. They were also more likely than those without FASD to fail the Word Memory Test, Digit Span ACSS, Coding ACSS, Symbol Search ACSS, and Logical Memory II Recognition (failure rates of 35-76%). Across both groups, substantially more participants with IQ <70 failed two or more PVTs (90%), compared to those with an IQ ≥70 (44%). CONCLUSIONS Results highlight the need for additional research examining the use of PVTs in justice-involved populations with FASD.
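As an illustration of the kind of group comparison reported above (the proportion failing two or more PVTs by IQ band), the sketch below runs a simple Pearson chi-square on a 2x2 table. The cell counts are hypothetical, chosen only to approximate the percentages quoted in the abstract, and the study itself may have used different tests.

```python
# Sketch of a 2x2 comparison of multi-PVT failure rates by IQ band.
# Counts are hypothetical, not taken from the study.
import numpy as np
from math import erfc, sqrt

# rows: IQ < 70, IQ >= 70; columns: failed >= 2 PVTs, did not
observed = np.array([[18, 2],
                     [26, 34]], dtype=float)

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals @ col_totals / observed.sum()

chi2 = ((observed - expected) ** 2 / expected).sum()
p_value = erfc(sqrt(chi2 / 2))   # upper tail of chi-square with 1 df via the normal tail

rates = observed[:, 0] / row_totals.ravel()
print(f"Failure rates: IQ<70 = {rates[0]:.0%}, IQ>=70 = {rates[1]:.0%}")
print(f"Pearson chi-square = {chi2:.2f}, p = {p_value:.4f}")
```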
24
Martin PK, Schroeder RW, Olsen DH. Performance validity in the dementia clinic: Specificity of validity tests when used individually and in aggregate across levels of cognitive impairment severity. Clin Neuropsychol 2020; 36:165-188. [DOI: 10.1080/13854046.2020.1778790] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Affiliation(s)
- Phillip K. Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Ryan W. Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
25
Martin PK, Schroeder RW, Olsen DH, Maloy H, Boettcher A, Ernst N, Okut H. A systematic review and meta-analysis of the Test of Memory Malingering in adults: Two decades of deception detection. Clin Neuropsychol 2019; 34:88-119. [DOI: 10.1080/13854046.2019.1637027] [Citation(s) in RCA: 68] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Phillip K. Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Ryan W. Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Daniel H. Olsen
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Halley Maloy
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Nathan Ernst
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Hayrettin Okut
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
26
Rai JK, Erdodi LA. Impact of criterion measures on the classification accuracy of TOMM-1. Appl Neuropsychol Adult 2019; 28:185-196. [PMID: 31187632 DOI: 10.1080/23279095.2019.1613994] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
This study was designed to examine the effect of various criterion measures on the classification accuracy of Trial 1 of the Test of Memory Malingering (TOMM-1), a free-standing performance validity test (PVT). Archival data were collected from a case sequence of 91 patients (M age = 42.2 years; M education = 12.7 years) clinically referred for neuropsychological assessment. Trial 2 and the Retention trial of the TOMM, the Word Choice Test, and three validity composites were used as criterion PVTs. Classification accuracy varied systematically as a function of criterion PVT. TOMM-1 ≤ 43 emerged as the optimal cutoff, resulting in a wide range of sensitivity (.47-1.00), with perfect overall specificity. Failing the TOMM-1 was unrelated to age, education, or gender, but was associated with elevated self-reported depression. Results support the utility of TOMM-1 as an independent, free-standing, single-trial PVT. Consistent with previous reports, the choice of criterion measure influences parameter estimates of the PVT being calibrated. The methodological implications of modality specificity for PVT research and clinical/forensic practice should be considered when evaluating cutoffs or interpreting scores in the failing range.
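A minimal sketch of this style of cut-score calibration is shown below: a PVT score is evaluated against a dichotomous validity criterion with an ROC curve, and sensitivity and specificity are read off at each candidate cutoff. The score distributions, group sizes, and resulting cutoffs are simulated stand-ins rather than the archival data.

```python
# Sketch: ROC-based cut-score analysis for a PVT against a dichotomous criterion.
# Scores and labels are simulated; lower TOMM-1 scores are treated as more suspicious,
# so scores are negated before the ROC call.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
valid_scores   = np.clip(rng.normal(49, 2, 70), 0, 50).round()   # criterion-valid group
invalid_scores = np.clip(rng.normal(38, 7, 21), 0, 50).round()   # criterion-invalid group
scores = np.concatenate([valid_scores, invalid_scores])
labels = np.concatenate([np.zeros(70), np.ones(21)])              # 1 = invalid performance

auc = roc_auc_score(labels, -scores)
fpr, tpr, thr = roc_curve(labels, -scores)

# sensitivity/specificity at each candidate cutoff (skip the initial sentinel threshold)
for f, t, c in zip(fpr[1:], tpr[1:], thr[1:]):
    cutoff = -c
    print(f"TOMM-1 <= {cutoff:.0f}: sensitivity = {t:.2f}, specificity = {1 - f:.2f}")
print(f"AUC = {auc:.2f}")
```

Rerunning the same sketch against a different criterion grouping is one way to see how parameter estimates shift with the criterion measure, which is the point the abstract makes.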
Affiliation(s)
- Jaspreet K Rai
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- University of Windsor, Edmonton, Alberta, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
27
Gabel NM, Waldron-Perrine B, Spencer RJ, Pangilinan PH, Hale AC, Bieliauskas LA. Suspiciously slow: timed digit span as an embedded performance validity measure in a sample of veterans with mTBI. Brain Inj 2018; 33:377-382. [DOI: 10.1080/02699052.2018.1553311] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Nicolette M. Gabel
- Department of Physical Medicine and Rehabilitation, Michigan Medicine, Ann Arbor, USA
- Robert J. Spencer
- Mental Health Services, VA Ann Arbor Healthcare System, Ann Arbor, USA
- Percival H. Pangilinan
- Department of Physical Medicine and Rehabilitation, Michigan Medicine/VA Ann Arbor Healthcare System, Ann Arbor, USA
- Andrew C. Hale
- Mental Health Services, VA Ann Arbor Healthcare System, Ann Arbor, USA
- Linas A. Bieliauskas
- Department of Neuropsychology, University of Michigan Health System, Ann Arbor, USA
28
Poynter K, Boone KB, Ermshar A, Miora D, Cottingham M, Victor TL, Ziegler E, Zeller MA, Wright M. Wait, There’s a Baby in this Bath Water! Update on Quantitative and Qualitative Cut-Offs for Rey 15-Item Recall and Recognition. Arch Clin Neuropsychol 2018; 34:1367-1380. [DOI: 10.1093/arclin/acy087] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2018] [Revised: 10/10/2018] [Accepted: 10/17/2018] [Indexed: 11/14/2022] Open
Abstract
Objective
Evaluate the effectiveness of Rey 15-item plus recognition data in a large neuropsychological sample.
Method
Rey 15-item plus recognition scores were compared in credible (n = 138) and noncredible (n = 353) neuropsychology referrals.
Results
Noncredible patients scored significantly worse than credible patients on all Rey 15-item plus recognition scores. When cut-offs were selected to maintain at least 89.9% specificity, cut-offs could be made more stringent, with the highest sensitivity found for recognition correct (cut-off ≤11; 62.6% sensitivity) and the combination score (recall + recognition – false positives; cut-off ≤22; 60.6% sensitivity), followed by recall correct (cut-off ≤11; 49.3% sensitivity), and recognition false positive errors (≥3; 17.9% sensitivity). A cut-off of ≥4 applied to a summed qualitative error score for the recall trial resulted in 19.4% sensitivity. Approximately 10% of credible subjects failed either recall correct or recognition correct, whereas two-thirds of noncredible patients (67.7%) showed this pattern. Thirteen percent of credible patients failed either recall correct, recognition correct, or the recall qualitative error score, whereas nearly 70% of noncredible patients failed at least one of the three. Some individual qualitative recognition errors had low false positive rates (<2%) indicating that their presence was virtually pathognomonic for noncredible performance. Older age (>50) and IQ < 80 were associated with increased false positive rates in credible patients.
Conclusions
Data on a larger sample than that available in the 2002 validation study show that Rey 15-item plus recognition cut-offs can be made more stringent, and thereby detect up to 70% of noncredible test takers, but the test should be used cautiously in older individuals and in individuals with lowered IQ.
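The cut-off selection logic described above, tightening a cut-score while holding specificity near 90%, can be sketched as a simple constrained search. The credible and noncredible score distributions below are simulated for illustration and do not reproduce the validation sample.

```python
# Sketch: choose the cut-off that maximizes sensitivity subject to specificity >= 90%.
# Scores are simulated; lower combination scores (recall + recognition - FP) are worse.
import numpy as np

rng = np.random.default_rng(1)
credible    = np.clip(rng.normal(28, 3, 138), 0, 30).round()   # credible group
noncredible = np.clip(rng.normal(20, 6, 353), 0, 30).round()   # noncredible group

best = None
for cutoff in range(0, 31):                    # "fail" = score <= cutoff
    specificity = np.mean(credible > cutoff)   # credible examinees correctly passed
    sensitivity = np.mean(noncredible <= cutoff)
    if specificity >= 0.90 and (best is None or sensitivity > best[1]):
        best = (cutoff, sensitivity, specificity)

cutoff, sens, spec = best
print(f"<= {cutoff}: sensitivity = {sens:.1%} at specificity = {spec:.1%}")
```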
Affiliation(s)
- Kellie Poynter
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Kyle Brauer Boone
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Annette Ermshar
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Deborah Miora
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Maria Cottingham
- Mental Health Care Line, Veterans Administration Tennessee Valley Healthcare System, Nashville, TN, USA
- Tara L Victor
- California State University, Dominguez Hills, Carson, CA, USA
- Michelle A Zeller
- West Los Angeles Veterans Administration Medical Center, Los Angeles, CA, USA
29
Webber TA, Critchfield EA, Soble JR. Convergent, Discriminant, and Concurrent Validity of Nonmemory-Based Performance Validity Tests. Assessment 2018; 27:1399-1415. [DOI: 10.1177/1073191118804874] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
To supplement memory-based Performance Validity Tests (PVTs) in identifying noncredible performance, we examined the validity of the two most commonly used nonmemory-based PVTs—Dot Counting Test (DCT) and Wechsler Adult Intelligence Scale–Fourth Edition (WAIS-IV) Reliable Digit Span (RDS)—as well as two alternative WAIS-IV Digit Span (DS) subtest PVTs. Examinees completed DCT, WAIS-IV DS, and the following criterion PVTs: Test of Memory Malingering, Word Memory Test, and Word Choice Test. Validity groups were determined by passing all 3 (valid; n = 69) or failing ≥2 (noncredible; n = 30) criterion PVTs. DCT, RDS, RDS–Revised (RDS-R), and WAIS-IV DS Age-Corrected Scaled Score (ACSS) were significantly correlated (but uncorrelated with memory-based PVTs). Combining RDS, RDS-R, and ACSS with DCT improved classification accuracy (particularly for DCT/ACSS) for detecting noncredible performance among valid-unimpaired, but largely not valid-impaired, examinees. Combining DCT with ACSS may uniquely assess and best supplement memory-based PVTs to identify noncredible neuropsychological test performance in cognitively unimpaired examinees.
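One way to illustrate the idea of combining two nonmemory-based indicators is a simple logistic model that pools both scores and compares discrimination with each score alone. The distributions, group sizes, and in-sample AUC comparison below are illustrative assumptions, not the study's actual combination procedure.

```python
# Sketch: combine two continuous PVT indicators in a logistic model and compare
# discrimination with either indicator alone. Data are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_valid, n_invalid = 69, 30
# higher DCT E-score = worse; lower DS ACSS = worse (illustrative distributions)
dct  = np.concatenate([rng.normal(10, 3, n_valid), rng.normal(17, 4, n_invalid)])
acss = np.concatenate([rng.normal(9, 2, n_valid),  rng.normal(6, 2, n_invalid)])
y    = np.concatenate([np.zeros(n_valid), np.ones(n_invalid)])   # 1 = noncredible

X = np.column_stack([dct, acss])
combined = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

print(f"AUC, DCT alone:  {roc_auc_score(y, dct):.2f}")
print(f"AUC, ACSS alone: {roc_auc_score(y, -acss):.2f}")
print(f"AUC, combined:   {roc_auc_score(y, combined):.2f}")
```

In practice the combined model's AUC should be cross-validated, since the in-sample estimate shown here is optimistic.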
Affiliation(s)
- Troy A. Webber
- South Texas Veterans Health Care System, San Antonio, TX, USA
- Jason R. Soble
- South Texas Veterans Health Care System, San Antonio, TX, USA
- University of Illinois College of Medicine, Chicago, IL, USA
30
An KY, Charles J, Ali S, Enache A, Dhuga J, Erdodi LA. Reexamining performance validity cutoffs within the Complex Ideational Material and the Boston Naming Test–Short Form using an experimental malingering paradigm. J Clin Exp Neuropsychol 2018; 41:15-25. [DOI: 10.1080/13803395.2018.1483488] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- Kelly Y. An
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jordan Charles
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Anca Enache
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jasmine Dhuga
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
31
McCaul C, Boone KB, Ermshar A, Cottingham M, Victor TL, Ziegler E, Zeller MA, Wright M. Cross-validation of the Dot Counting Test in a large sample of credible and non-credible patients referred for neuropsychological testing. Clin Neuropsychol 2018; 32:1054-1067. [DOI: 10.1080/13854046.2018.1425481] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Courtney McCaul
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Kyle B. Boone
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Annette Ermshar
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Maria Cottingham
- Mental Health Care Line, Veterans Administration Tennessee Valley Healthcare System, Nashville, TN, USA
- Tara L. Victor
- Department of Psychology, California State University, Dominguez Hills, Carson, CA, USA
- Michelle A. Zeller
- West Los Angeles Veterans Administration Medical Center, Los Angeles, CA, USA
- Matthew Wright
- Department of Psychiatry, Harbor-UCLA Medical Center, Torrance, CA, USA
32
Lippa SM. Performance validity testing in neuropsychology: a clinical guide, critical review, and update on a rapidly evolving literature. Clin Neuropsychol 2017; 32:391-421. [DOI: 10.1080/13854046.2017.1406146] [Citation(s) in RCA: 65] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Sara M. Lippa
- Defense and Veterans Brain Injury Center, Silver Spring, MD, USA
- Walter Reed National Military Medical Center, Bethesda, MD, USA
- National Intrepid Center of Excellence, Bethesda, MD, USA
33
Soble JR, Santos OA, Bain KM, Kirton JW, Bailey KC, Critchfield EA, O’Rourke JJF, Highsmith JM, González DA. The Dot Counting Test adds up: Validation and response pattern analysis in a mixed clinical veteran sample. J Clin Exp Neuropsychol 2017; 40:317-325. [DOI: 10.1080/13803395.2017.1342773] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Jason R. Soble
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Octavio A. Santos
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Kathleen M. Bain
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Joshua W. Kirton
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- K. Chase Bailey
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Edan A. Critchfield
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- David Andrés González
- Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
34
Erdodi LA, Rai JK. A single error is one too many: Examining alternative cutoffs on Trial 2 of the TOMM. Brain Inj 2017; 31:1362-1368. [DOI: 10.1080/02699052.2017.1332386] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jaspreet K. Rai
- Department of Psychology, University of Windsor, Windsor, ON, Canada
35
Rickards TA, Cranston CC, Touradji P, Bechtold KT. Embedded performance validity testing in neuropsychological assessment: Potential clinical tools. Appl Neuropsychol Adult 2017; 25:219-230. [DOI: 10.1080/23279095.2017.1278602] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Tyler A. Rickards
- Department of Physical Medicine & Rehabilitation, Division of Rehabilitation Psychology & Neuropsychology, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Christopher C. Cranston
- Department of Physical Medicine & Rehabilitation, Division of Rehabilitation Psychology & Neuropsychology, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Pegah Touradji
- Department of Physical Medicine & Rehabilitation, Division of Rehabilitation Psychology & Neuropsychology, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Kathleen T. Bechtold
- Department of Physical Medicine & Rehabilitation, Division of Rehabilitation Psychology & Neuropsychology, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
36
Chafetz MD, Williams MA, Ben-Porath YS, Bianchini KJ, Boone KB, Kirkwood MW, Larrabee GJ, Ord JS. Official Position of the American Academy of Clinical Neuropsychology Social Security Administration Policy on Validity Testing: Guidance and Recommendations for Change. Clin Neuropsychol 2015; 29:723-40. [DOI: 10.1080/13854046.2015.1099738] [Citation(s) in RCA: 68] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
37
Larrabee GJ. Minimizing false positive error with multiple performance validity tests: response to Bilder, Sugar, and Hellemann (2014 this issue). Clin Neuropsychol 2014; 28:1230-42. [PMID: 25491180 DOI: 10.1080/13854046.2014.988754] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Bilder, Sugar, and Hellemann (2014 this issue) contend that empirical support is lacking for use of multiple performance validity tests (PVTs) in evaluation of the individual case, differing from the conclusions of Davis and Millis (2014), and Larrabee (2014), who found no substantial increase in false positive rates using a criterion of failure of ≥ 2 PVTs and/or Symptom Validity Tests (SVTs) out of multiple tests administered. Reconsideration of data presented in Larrabee (2014) supports a criterion of ≥ 2 out of up to 7 PVTs/SVTs, as keeping false positive rates close to and in most cases below 10% in cases with bona fide neurologic, psychiatric, and developmental disorders. Strategies to minimize risk of false positive error are discussed, including (1) adjusting individual PVT cutoffs or criterion for number of PVTs failed, for examinees who have clinical histories placing them at risk for false positive identification (e.g., severe TBI, schizophrenia), (2) using the history of the individual case to rule out conditions known to result in false positive errors, (3) using normal performance in domains mimicked by PVTs to show that sufficient native ability exists for valid performance on the PVT(s) that have been failed, and (4) recognizing that as the number of PVTs/SVTs failed increases, the likelihood of valid clinical presentation decreases, with a corresponding increase in the likelihood of invalid test performance and symptom report.
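The aggregation arithmetic behind a "≥2 PVTs failed" criterion can be illustrated with a simple binomial calculation. The 10% per-test false-positive rate and the independence assumption are simplifications; real PVTs are partially correlated and clinical false-positive rates vary, which is why the empirical estimates discussed above differ from the naive figures produced by this sketch.

```python
# Sketch: chance of failing at least min_fails of k independent PVTs, each with a
# fixed per-test false-positive rate. Independence is a simplifying assumption.
from math import comb

def prob_fail_at_least(k_tests: int, min_fails: int, fp_rate: float) -> float:
    """Binomial probability of failing at least `min_fails` of `k_tests` PVTs."""
    return sum(comb(k_tests, j) * fp_rate**j * (1 - fp_rate)**(k_tests - j)
               for j in range(min_fails, k_tests + 1))

for k in (2, 4, 7):
    print(f">=2 of {k} PVTs failed at 10% FPR each: {prob_fail_at_least(k, 2, 0.10):.1%}")
```

Under these toy assumptions the chance of failing two or more tests grows with the number of tests administered, which is the trade-off the adjustment strategies above are meant to manage.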