1
Kanser RJ, Rapport LJ, Hanks RA, Patrick SD. Time and money: Exploring enhancements to performance validity research designs. Appl Neuropsychol Adult 2024; 31:256-263. [PMID: 34932422] [DOI: 10.1080/23279095.2021.2019740]
Abstract
INTRODUCTION The study examined the effect of preparation time and financial incentives on healthy adults' ability to simulate traumatic brain injury (TBI) during neuropsychological evaluation. METHOD A retrospective comparison of two TBI simulator group designs: a traditional design employing a single session of standard coaching immediately before participation (SIM-SC; n = 46) and a novel design that provided financial incentive and preparation time (SIM-IP; n = 49). Both groups completed an ecologically valid neuropsychological test battery that included widely used cognitive tests and five common performance validity tests (PVTs). RESULTS Compared to SIM-SC, SIM-IP performed significantly worse and had higher rates of impairment on tests of processing speed and executive functioning (Trails A and B). SIM-IP were more likely than SIM-SC to avoid detection on one of the PVTs and performed somewhat better on three of the PVTs, but the effects were small and nonsignificant. SIM-IP did not demonstrate significantly higher rates of successful simulation (i.e., performing impaired on cognitive tests with <2 PVT failures). Overall, the rate of successful simulation was ∼40% with a liberal criterion that defined cognitive impairment as performance >1 SD below the normative mean. With a more rigorous impairment criterion (>1.5 SD below the normative mean), successful simulation approached 35%. CONCLUSIONS Incentive and preparation time appear to add limited incremental effect over traditional, single-session coaching in analog studies of TBI simulation. Moreover, these design modifications did not translate to meaningfully higher rates of successful simulation and avoidance of detection by PVTs.
Affiliation(s)
- Robert J Kanser
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Department of Physical Medicine and Rehabilitation, University of North Carolina, Chapel Hill, NC, USA
- Lisa J Rapport
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Robin A Hanks
- Department of Physical Medicine and Rehabilitation, Wayne State University, Detroit, MI, USA
- Sarah D Patrick
- Department of Psychology, Wayne State University, Detroit, MI, USA
2
Crişan I, Sava FA, Maricuţoiu LP. Strategies of feigning mild head injuries related to validity indicators and types of coaching: Results of two experimental studies. Appl Neuropsychol Adult 2023; 30:705-715. [PMID: 34510965] [DOI: 10.1080/23279095.2021.1973004]
Abstract
OBJECTIVE In this paper, we analyzed differences between uncoached, symptom-coached, and test-coached simulators regarding strategies of feigning mild head injuries. METHOD Healthy undergraduates (n = 67 in the first study; n = 48 in the second study), randomized into three simulator groups, were assessed with four experimental memory tests. In the first study, tests were administered face-to-face; in the second study, the procedure was adapted for online testing. RESULTS Online simulators showed a different approach to testing than face-to-face participants (U tests < 920, p < .05). Nevertheless, both samples favored strategies like memory loss, error making, concentration difficulties, and slow responding. Except for slow responding and concentration difficulties, the favored strategies correlated with validity indicators. In the first study, test-coached simulators (m = 4.58-5.68, SD = 2.2-3) used strategies less than uncoached participants (m = 5.25-5.88, SD = 2.26-2.84). In the second study, test-coached participants (m = 3.8-5.6, SD = 1.51-2.2) employed strategies less than uncoached (m = 6.21-7.29, SD = 1.25-1.85) and symptom-coached participants (m = 6.14-6.79, SD = 1.69-2.76). DISCUSSION Similarities and differences between online and face-to-face assessments are discussed. Recommendations are offered for combining heterogeneous indicators to detect feigning strategies.
Affiliation(s)
- Iulia Crişan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Florin Alin Sava
- Department of Psychology, West University of Timişoara, Timişoara, Romania
3
Scott JC, Moore TM, Roalf DR, Satterthwaite TD, Wolf DH, Port AM, Butler ER, Ruparel K, Nievergelt CM, Risbrough VB, Baker DG, Gur RE, Gur RC. Development and application of novel performance validity metrics for computerized neurocognitive batteries. J Int Neuropsychol Soc 2023; 29:789-797. [PMID: 36503573] [PMCID: PMC10258222] [DOI: 10.1017/s1355617722000893]
Abstract
OBJECTIVES Data from neurocognitive assessments may not be accurate in the context of factors impacting validity, such as disengagement, unmotivated responding, or intentional underperformance. Performance validity tests (PVTs) were developed to address these phenomena and assess underperformance on neurocognitive tests. However, PVTs can be burdensome, rely on cutoff scores that reduce information, do not examine potential variations in task engagement across a battery, and are typically not well-suited to acquisition of large cognitive datasets. Here we describe the development of novel performance validity measures that could address some of these limitations by leveraging psychometric concepts using data embedded within the Penn Computerized Neurocognitive Battery (PennCNB). METHODS We first developed these validity measures using simulations of invalid response patterns with parameters drawn from real data. Next, we examined their application in two large, independent samples: 1) children and adolescents from the Philadelphia Neurodevelopmental Cohort (n = 9498); and 2) adult servicemembers from the Marine Resiliency Study-II (n = 1444). RESULTS Our performance validity metrics detected patterns of invalid responding in simulated data, even at subtle levels. Furthermore, a combination of these metrics significantly predicted previously established validity rules for these tests in both developmental and adult datasets. Moreover, most clinical diagnostic groups did not show reduced validity estimates. CONCLUSIONS These results provide proof-of-concept evidence for multivariate, data-driven performance validity metrics. These metrics offer a novel method for determining the performance validity for individual neurocognitive tests that is scalable, applicable across different tests, less burdensome, and dimensional. However, more research is needed into their application.
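The simulation-based approach described in this abstract can be illustrated with a toy example. This is a hedged sketch under my own assumptions, not the PennCNB metrics: it simulates one simple invalid response pattern (pure guessing on a two-alternative forced-choice test) and flags protocols whose accuracy falls significantly below chance using an exact binomial test.

```python
# Hypothetical sketch (my own toy example, not the PennCNB metrics): simulate
# one invalid response pattern -- pure guessing on a two-alternative
# forced-choice test -- and flag below-chance accuracy with an exact
# binomial test.
import random
from math import comb

def p_at_or_below(k, n, p=0.5):
    """Probability of k or fewer correct out of n under chance responding."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def flag_below_chance(num_correct, num_items, alpha=0.05):
    """Flag a protocol whose accuracy is significantly below chance."""
    return p_at_or_below(num_correct, num_items) < alpha

random.seed(0)
n_items = 50

# A guessing examinee answers each two-choice item at random;
# an engaged examinee answers ~95% of items correctly.
guessing_score = sum(random.random() < 0.50 for _ in range(n_items))
engaged_score = sum(random.random() < 0.95 for _ in range(n_items))

print(flag_below_chance(guessing_score, n_items))
print(flag_below_chance(engaged_score, n_items))
print(flag_below_chance(10, n_items))  # True: 10/50 is far below chance
```

Note that significantly below-chance performance is only one, fairly blunt, signature of invalid responding; the dimensional metrics the study proposes aim to catch subtler patterns that a binary cutoff like this would miss.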
Affiliation(s)
- J. Cobb Scott
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- VISN4 Mental Illness Research, Education, and Clinical Center at the Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA
- Tyler M. Moore
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- David R. Roalf
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Theodore D. Satterthwaite
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Daniel H. Wolf
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Allison M. Port
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ellyn R. Butler
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Kosha Ruparel
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Caroline M. Nievergelt
- Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA
- Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Victoria B. Risbrough
- Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA
- Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Dewleen G. Baker
- Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA
- Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Raquel E. Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Lifespan Brain Institute, Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Ruben C. Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- VISN4 Mental Illness Research, Education, and Clinical Center at the Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA
- Lifespan Brain Institute, Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
4
Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part II-Methodological Issues. Neuropsychol Rev 2023; 33:604-623. [PMID: 37594690] [DOI: 10.1007/s11065-023-09602-6]
Abstract
Forensic neuropsychological examinations to detect malingering in patients with neurocognitive, physical, and psychological dysfunction have tremendous social, legal, and economic importance. Thousands of studies have been published to develop and validate methods to forensically detect malingering, based largely on approximately 50 validity tests, including embedded and stand-alone performance and symptom validity tests. This is Part II of a two-part review of statistical and methodological issues in the forensic prediction of malingering based on validity tests. The Part I companion paper explored key statistical issues. Part II examines related methodological issues through conceptual analysis, statistical simulations, and reanalysis of findings from prior validity test validation studies. Methodological issues examined include the distinction between analog simulation and forensic studies, the effect of excluding too-close-to-call (TCTC) cases from analyses, the distinction between criterion-related and construct validation studies, and the application of the Revised Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) to assess risk of bias in all Test of Memory Malingering (TOMM) validation studies published within approximately the first 20 years following its initial publication. Findings include that analog studies are commonly confused with forensic validation studies, and that construct validation studies are routinely presented as if they were criterion-referenced validation studies. After accounting for the exclusion of TCTC cases, actual classification accuracy was found to be well below claimed levels. QUADAS-2 results revealed that extant TOMM validation studies all had a high risk of bias, with not a single TOMM validation study at low risk of bias.
Recommendations include adoption of well-established guidelines from the biomedical diagnostics literature for good quality criterion-referenced validation studies and examination of implications for malingering determination practices. Design of future studies may hinge on the availability of an incontrovertible reference standard of the malingering status of examinees.
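The review's point about excluding too-close-to-call cases can be made concrete with a small simulation. The numbers below are my own illustrative assumptions, not data from the review: scores from valid and invalid responders overlap, an indeterminate band is declared TCTC, and the accuracy computed after dropping TCTC cases exceeds the accuracy obtained when every case must be called.

```python
# Illustrative simulation (assumptions mine, not Leonhard's data): excluding
# "too-close-to-call" (TCTC) cases before computing classification accuracy
# inflates the apparent accuracy of a validity test cutoff.
import random

random.seed(42)

# Simulated test scores: valid responders score high, invalid low, with overlap.
valid = [random.gauss(45, 5) for _ in range(1000)]
invalid = [random.gauss(30, 5) for _ in range(1000)]

LOW, HIGH = 35, 40  # scores in [LOW, HIGH) are deemed TCTC

def classify(score):
    if score >= HIGH:
        return "valid"
    if score < LOW:
        return "invalid"
    return "tctc"

labels = [("valid", s) for s in valid] + [("invalid", s) for s in invalid]
calls = [(truth, classify(s)) for truth, s in labels]

# Accuracy after silently dropping the indeterminate (hardest) cases.
decided = [(t, c) for t, c in calls if c != "tctc"]
acc_excluding = sum(t == c for t, c in decided) / len(decided)

# Accuracy when a single midpoint cutoff forces a call on every case.
mid = (LOW + HIGH) / 2
acc_all = sum((s >= mid) == (t == "valid") for t, s in labels) / len(labels)

print(f"accuracy with TCTC excluded: {acc_excluding:.3f}")
print(f"accuracy on all cases:       {acc_all:.3f}")
# The first figure is higher: the hardest cases were simply dropped.
```

The gap between the two figures is the optimism introduced by reporting accuracy only on cases the test was already confident about.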
Affiliation(s)
- Christoph Leonhard
- The Chicago School of Professional Psychology at Xavier University of Louisiana, 1 Drexel Dr, Box 200, New Orleans, LA, 70125, USA
5
Becke M, Tucha L, Butzbach M, Aschenbrenner S, Weisbrod M, Tucha O, Fuermaier ABM. Feigning Adult ADHD on a Comprehensive Neuropsychological Test Battery: An Analogue Study. Int J Environ Res Public Health 2023; 20:4070. [PMID: 36901080] [PMCID: PMC10001580] [DOI: 10.3390/ijerph20054070]
Abstract
The evaluation of performance validity is an essential part of any neuropsychological evaluation. Validity indicators embedded in routine neuropsychological tests offer a time-efficient option for sampling performance validity throughout the assessment while reducing vulnerability to coaching. By administering a comprehensive neuropsychological test battery to 57 adults with ADHD, 60 neurotypical controls, and 151 instructed simulators, we examined each test's utility in detecting noncredible performance. Cut-off scores were derived for all available outcome variables. Although all cut-offs ensured at least 90% specificity in the ADHD group, sensitivity differed significantly between tests, ranging from 0% to 64.9%. Tests of selective attention, vigilance, and inhibition were most useful in detecting the instructed simulation of adult ADHD, whereas figural fluency and task switching lacked sensitivity. Five or more test variables with results in the second to fourth percentile were rare among cases of genuine adult ADHD but identified approximately 58% of instructed simulators.
Affiliation(s)
- Miriam Becke
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Lara Tucha
- Department of Psychiatry and Psychotherapy, University Medical Center Rostock, Gehlsheimer Str. 20, 18147 Rostock, Germany
- Marah Butzbach
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Steffen Aschenbrenner
- Department of Clinical Psychology and Neuropsychology, SRH Clinic Karlsbad-Langensteinbach, 76307 Karlsbad, Germany
- Matthias Weisbrod
- Department of Psychiatry and Psychotherapy, SRH Clinic Karlsbad-Langensteinbach, 76307 Karlsbad, Germany
- Department of General Psychiatry, Center of Psychosocial Medicine, University of Heidelberg, 69115 Heidelberg, Germany
- Oliver Tucha
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Department of Psychiatry and Psychotherapy, University Medical Center Rostock, Gehlsheimer Str. 20, 18147 Rostock, Germany
- Department of Psychology, National University of Ireland, W23 F2K8 Maynooth, Ireland
- Anselm B. M. Fuermaier
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
6
Winter D, Braw Y. Validating Embedded Validity Indicators of Feigned ADHD-Associated Cognitive Impairment Using the MOXO-d-CPT. J Atten Disord 2022; 26:1907-1913. [PMID: 35861241] [DOI: 10.1177/10870547221112947]
Abstract
BACKGROUND The current study aimed to validate the utility of previously established validity indicators derived from the MOXO-d-CPT continuous performance test. METHOD Healthy simulators feigned impairment after searching online for relevant information, an ecologically valid coaching condition (n = 39). They were compared to ADHD patients (n = 36) and healthy controls (n = 38). RESULTS Simulators performed significantly worse than ADHD patients on all MOXO-d-CPT indices, as well as on a scale that integrates their contributions (feigned ADHD scale). Three indices (attention, hyperactivity, and impulsivity) and the latter scale exhibited adequate discriminative capacity. Higher education was associated with exaggerated impairment among simulators, easing their detection. CONCLUSION The similarity between the current study and a previous study that examined the utility of the MOXO-d-CPT validity indicators increases our confidence in the efficacy of its embedded validity indicators. Though the findings provide initial validation of these validity indicators, generalizing beyond highly functioning participants necessitates further research.
7
Winter D, Braw Y. Online search strategies utilized in feigning attention deficit/hyperactivity disorder (ADHD) while performing a continuous performance test (CPT). Appl Neuropsychol Adult 2022:1-10. [PMID: 36201363] [DOI: 10.1080/23279095.2022.2128356]
Abstract
BACKGROUND The availability of information regarding neuropsychological tests threatens their confidentiality. This concern may be particularly relevant to Attention-Deficit/Hyperactivity Disorder (ADHD) considering its widespread online coverage. The present study explored simulators' online search strategies. METHOD Simulators (n = 39) searched for information before undergoing an evaluation that included a continuous performance test (CPT). Their search strategies were analyzed, and their performance was compared to that of ADHD patients (n = 36) and healthy controls (n = 38). RESULTS Most simulators reached high-risk websites that provided written and video-based information regarding the test. Sixty percent, mostly third-year students, reached Google Scholar. These students were also easier to detect as simulators. Common strategies included performing the CPT in accordance with typical ADHD symptoms and avoiding the endorsement of both unusual and stereotypical symptoms. CONCLUSION Simulators can access online information that contains key test data. Higher education may increase the ability to reach academic research while decreasing the ability to convincingly feign impairment. While additional research is needed to examine coaching effects on neuropsychological testing, the risk to test security that many websites pose should be acknowledged, and steps, including ones taken by test publishers, should be undertaken to minimize it.
Affiliation(s)
- Yoram Braw
- Department of Psychology, Ariel University, Ariel, Israel
8
Kanser RJ, Rapport LJ, Hanks RA, Patrick SD. Utility of WAIS-IV Digit Span indices as measures of performance validity in moderate to severe traumatic brain injury. Clin Neuropsychol 2022; 36:1950-1963. [PMID: 34044725] [DOI: 10.1080/13854046.2021.1921277]
Abstract
Objective: The addition of Sequencing to WAIS-IV Digit Span (DS) brought about new Reliable Digit Span (RDS) indices and an Age-Corrected Scaled Score that includes Sequencing trials. Reports have indicated that these new performance validity tests (PVTs) are superior to the traditional RDS; however, comparisons in the context of known neurocognitive impairment are sparse. This study compared DS-derived PVT classification accuracies in a design that included adults with verified TBI. Methods: Participants included 64 adults with moderate-to-severe TBI (TBI), 51 healthy adults coached to simulate TBI (SIM), and 78 healthy comparisons (HC). Participants completed the WAIS-IV DS subtest in the context of a larger test battery. Results: Kruskal-Wallis tests indicated that all DS indices differed significantly across groups. Post hoc contrasts revealed that only RDS Forward and the traditional RDS differed significantly between SIM and TBI. ROC analyses indicated that RDS variables were comparable predictors of SIM vs. HC; however, the traditional RDS showed the highest sensitivity when approximating 90% specificity for SIM vs. TBI. A greater percentage of TBI scored RDS Sequencing < 1 compared to SIM and HC. Conclusion: In the context of moderate-to-severe TBI, the DS-derived PVTs showed comparable discriminability. However, the Greiffenstein et al. traditional RDS demonstrated the best classification accuracy with respect to specificity/sensitivity balance. This relative superiority may reflect that individuals with verified TBI are more likely to perseverate on prior instructions during DS Sequencing. Findings highlight the importance of including individuals with verified TBI when evaluating and developing PVTs.
Affiliation(s)
- Robert J Kanser
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Lisa J Rapport
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Robin A Hanks
- Department of Physical Medicine and Rehabilitation, Wayne State University, Detroit, MI, USA
- Sarah D Patrick
- Department of Psychology, Wayne State University, Detroit, MI, USA
9
Henry GK. Response time measures on the Word Memory Test do not add incremental validity to accuracy scores in predicting noncredible neurocognitive dysfunction in mild traumatic brain injury litigants. Appl Neuropsychol Adult 2022:1-7. [PMID: 36170848] [DOI: 10.1080/23279095.2022.2126320]
Abstract
The objective of the current study was to investigate whether response time measures on the Word Memory Test (WMT) add predictive validity in determining noncredible neurocognitive dysfunction in a large sample of mild traumatic brain injury (MTBI) litigants. Participants included 203 adults who underwent a comprehensive neuropsychological examination. Criterion groups were formed based on performance on stand-alone measures of cognitive performance validity (PVTs). Participants failing PVTs exhibited significantly slower response times and lower accuracy on the WMT compared to participants who passed PVTs. Response time measures did not add significant incremental validity beyond that afforded by WMT accuracy measures alone. The best predictor of PVT status was the WMT Consistency Score (CNS), which was associated with an extremely large effect size (d = 16.44), followed by Immediate Recognition (IR; d = 10.68) and Delayed Recognition (DR; d = 10.10).
10
Ali S, Elliott L, Biss RK, Abumeeiz M, Brantuo M, Kuzmenka P, Odenigbo P, Erdodi LA. The BNT-15 provides an accurate measure of English proficiency in cognitively intact bilinguals - a study in cross-cultural assessment. Appl Neuropsychol Adult 2022; 29:351-363. [PMID: 32449371] [DOI: 10.1080/23279095.2020.1760277]
Abstract
This study was designed to replicate earlier reports of the utility of the Boston Naming Test - Short Form (BNT-15) as an index of limited English proficiency (LEP). Twenty-eight English-Arabic bilingual student volunteers were administered the BNT-15 as part of a brief battery of cognitive tests. The majority (23) were women, and half had LEP. Mean age was 21.1 years. The BNT-15 was an excellent psychometric marker of LEP status (area under the curve: .990-.995). Participants with LEP underperformed on several cognitive measures (verbal comprehension, visuomotor processing speed, single word reading, and performance validity tests). Although no participant with LEP failed the accuracy cutoff on the Word Choice Test, 35.7% of them failed the time cutoff. Overall, LEP was associated with an increased risk of failing performance validity tests. Previously published BNT-15 validity cutoffs had unacceptably low specificity (.33-.52) among participants with LEP. The BNT-15 has the potential to serve as a quick and effective objective measure of LEP. Students with LEP may need academic accommodations to compensate for slower test completion time. Likewise, LEP status should be considered for exemption from failing performance validity tests to protect against false positive errors.
Affiliation(s)
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, Canada
- Lauren Elliott
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Renee K Biss
- Department of Psychology, University of Windsor, Windsor, Canada
- Mustafa Abumeeiz
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, Canada
- Paula Odenigbo
- Department of Psychology, University of Windsor, Windsor, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
11
Stocks JK, Shields AN, DeBoer AB, Cerny BM, Ogram Buckley CM, Ovsiew GP, Jennette KJ, Resch ZJ, Basurto KS, Song W, Pliskin NH, Soble JR. The impact of visual memory impairment on Victoria Symptom Validity Test performance: A known-groups analysis. Appl Neuropsychol Adult 2022:1-10. [PMID: 34985401] [DOI: 10.1080/23279095.2021.2021911]
Abstract
OBJECTIVE We assessed the effect of visual learning and recall impairment on Victoria Symptom Validity Test (VSVT) accuracy and response latency for Easy, Difficult, and Total items. METHOD A sample of 163 adult patients who were administered the VSVT and the Brief Visuospatial Memory Test-Revised was classified into valid (114/163) and invalid (49/163) groups via independent criterion performance validity tests (PVTs). Classification accuracies for all VSVT indices were examined for the overall sample and separately for subgroups based on visual memory functioning. RESULTS In the overall sample, all indices produced acceptable classification accuracy (areas under the curve [AUCs] ≥ 0.79). When stratified by visual learning/recall impairment, accuracy indices yielded acceptable classification for both the unimpaired (AUCs ≥ 0.79) and impaired subsamples (AUCs ≥ 0.75). Latency indices had acceptable classification accuracy for the unimpaired subsample (AUCs ≥ 0.74), but accuracy and sensitivity dropped in the impaired sample (AUCs ≥ 0.67). CONCLUSIONS VSVT accuracy and response latency indices yielded acceptable classification accuracies in the overall sample, and this effect was maintained for the accuracy indices in those with and without visual learning/recall impairment. Findings indicate that the VSVT is a psychometrically robust PVT with largely invariant cut scores, even in the presence of bona fide visual learning/recall impairment.
Affiliation(s)
- Jane K Stocks
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Allison N Shields
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Adam B DeBoer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Wheaton College, Wheaton, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S Basurto
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Woojin Song
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Neil H Pliskin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
12
Omer E, Braw Y. The Effects of Cognitive Load on Strategy Utilization in a Forced-Choice Recognition Memory Performance Validity Test. Eur J Psychol Assess 2022. [DOI: 10.1027/1015-5759/a000636]
Abstract
Despite the importance of detecting feigned cognitive impairment, we have a limited understanding of the theoretical foundation of the phenomenon and the factors that affect it. Studies of the formation and implementation of feigning strategies during neuropsychological assessments are few, though there are indications that these strategies tax cognitive resources. The current study assessed the effect of a cognitive load manipulation on feigning strategies. To achieve this aim, we utilized a 2 × 2 experimental design; condition (simulators/honest responders) and cognitive load (load/no load) were manipulated while participants (N = 154) performed a well-established performance validity test (PVT). The cognitive load manipulation reduced the quantity of feigning strategies while also affecting their composition (i.e., strategies tended to be more intuitive). This suggests that reduced cognitive resources among those feigning cognitive impairment may affect the use of in-vivo feigning strategies. These findings, though preliminary, will hopefully encourage further research to uncover the cognitive factors involved in the utilization of feigning strategies in neuropsychological assessments.
Affiliation(s)
- Elad Omer
- Department of Psychology, Ariel University, Israel
- Yoram Braw
- Department of Psychology, Ariel University, Israel
13
Motor Reaction Times as an Embedded Measure of Performance Validity: a Study with a Sample of Austrian Early Retirement Claimants. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09431-z]
Abstract
Among embedded measures of performance validity, reaction time parameters appear to be less common. However, their potential may be underestimated. In the German-speaking countries, reaction time is often examined using the Alertness subtest of the Test of Attention Performance (TAP). Several previous studies have examined its suitability for validity assessment. The current study was conceived to examine a variety of reaction time parameters of the TAP Alertness subtest with a sample of 266 Austrian civil forensic patients. Classification results from the Word Memory Test (WMT) were used as an external indicator to distinguish between valid and invalid symptom presentations. Results demonstrated that the WMT fail group performed worse in reaction time as well as its intraindividual variation across trials when compared to the WMT pass group. Receiver operating characteristic analyses revealed areas under the curve of .775–.804. Logistic regression models indicated the parameter intraindividual variation of motor reaction time with warning sound as being the best predictor of invalid test performance. Suggested cut scores yielded a sensitivity of .62 and a specificity of .90, or .45 and .95, respectively, when the accepted false-positive rate was set lower. The results encourage the use of the Alertness subtest as an embedded measure of performance validity.
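Cut-score results of the kind reported above (e.g., sensitivity .62 at specificity .90) follow directly from the 2 × 2 classification table against the external criterion. A minimal sketch, using hypothetical reaction-time variability scores and a hypothetical cut value rather than the study's data:

```python
# Hypothetical illustration of deriving sensitivity/specificity for an
# RT-based embedded validity cut score. Labels: 1 = invalid (e.g., WMT fail),
# 0 = valid (WMT pass). Scores and cut value are invented for this sketch.

def sens_spec(scores, labels, cut):
    """Classify a case as invalid when its RT index is >= cut (slower / more variable)."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cut)  # invalid, flagged
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < cut)   # invalid, missed
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cut)   # valid, passed
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cut)  # valid, flagged
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical intraindividual RT variability (ms) for ten examinees
scores = [55, 60, 88, 90, 110, 95, 70, 130, 42, 80]
labels = [0, 0, 0, 1, 1, 1, 0, 1, 0, 1]

sens, spec = sens_spec(scores, labels, cut=85)  # -> (0.8, 0.8) for these data
```

Raising the cut lowers sensitivity and raises specificity, which is the trade-off behind the study's alternative .45/.95 operating point.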
14
Koenitzer JC, Herron JE, Whitlow JW, Barbuscak CM, Patel NR, Pletcher R, Christensen J. Development and Initial Validation of the Perceptual Assessment of Memory (PASSOM): A Simulator Study. Arch Clin Neuropsychol 2021; 36:1326-1340. [PMID: 33388765 DOI: 10.1093/arclin/acaa126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/08/2020] [Indexed: 11/12/2022] Open
Abstract
OBJECTIVE Performance validity tests (PVTs) are an integral component of neuropsychological assessment. There is a need for the development of more PVTs, especially those employing covert determinations. The aim of the present study was to provide initial validation of a new computerized PVT, the Perceptual Assessment of Memory (PASSOM). METHOD Participants were 58 undergraduate students randomly assigned to a simulator (SIM) or control (CON) group. All participants were provided written instructions for their role prior to testing and were administered the PASSOM as part of a brief battery of neurocognitive tests. Indices of interest included response accuracy for Trials 1 and 2, and total errors across Trials, as well as response time (RT) for Trials 1 and 2, and total RT for both Trials. RESULTS The SIM group produced significantly more errors than the CON group for Trials 1 and 2, and committed more total errors across trials. Significantly longer response latencies were found for the SIM group compared to the CON group for all RT indices examined. Linear regression modeling indicated excellent group classification for all indices studied, with areas under the curve ranging from 0.92 to 0.95. Sensitivity and specificity rates were good for several cut scores across all of the accuracy and RT indices, and sensitivity improved greatly by combining RT cut scores with the more traditional accuracy cut scores. CONCLUSION Findings demonstrate the ability of the PASSOM to distinguish individuals instructed to feign cognitive impairment from those told to perform to the best of their ability.
Affiliation(s)
- Justin C Koenitzer
- Neuropsychology Department, Orlando VA Medical Center, Orlando, FL 32827, USA
- Janice E Herron
- Neuropsychology Department, Orlando VA Medical Center, Orlando, FL 32827, USA
- Jesse W Whitlow
- Psychology Department, Rutgers University, Camden, NJ 08102, USA
- Nitin R Patel
- Department of Veterans Affairs, VHA Office of Community Care, Washington, DC 20420, USA
- Ryan Pletcher
- Psychology Department, Rutgers University, Camden, NJ 08102, USA
15
Braun SE, Fountain-Zaragoza S, Halliday CA, Horner MD. Demographic differences in performance validity test failure. APPLIED NEUROPSYCHOLOGY. ADULT 2021:1-9. [PMID: 34428386 DOI: 10.1080/23279095.2021.1958814] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
OBJECTIVE The present study investigated demographic differences in performance validity test (PVT) failure in a Veteran sample. METHOD Data were extracted from clinical neuropsychological evaluations. Only veterans who identified as men and as either European American/White (EA) or African American/Black (AA) were included (n = 1261). We investigated whether performance on two frequently used PVTs, the Test of Memory Malingering (TOMM) and the Medical Symptom Validity Test (MSVT), differed by age, education, and race using separate logistic regressions. RESULTS Veterans with younger age, less education, and Veterans Affairs (VA) service-connected disability were significantly more likely to fail both PVTs. Race was not a significant predictor of MSVT failure, but AA patients were significantly more likely than EA patients to fail the TOMM. For all significant demographic predictors in the models, effects were small. In a subsample of patients who were given both PVTs (n = 461), the effect of race on performance remained. CONCLUSIONS Performance on the TOMM and MSVT differed by age and level of education. Performance on the TOMM differed between EA and AA patients, whereas performance on the MSVT did not. These results suggest that demographic factors may play a small but measurable role in performance on specific PVTs.
Affiliation(s)
- Sarah Ellen Braun
- Department of Neurology, Virginia Commonwealth University, Richmond, VA, USA
- Massey Cancer Center, Richmond, VA, USA
- Colleen A Halliday
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Michael David Horner
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
16
Patrick SD, Rapport LJ, Kanser RJ, Hanks RA, Bashem JR. Performance validity assessment using response time on the Warrington Recognition Memory Test. Clin Neuropsychol 2021; 35:1154-1173. [PMID: 32068486 DOI: 10.1080/13854046.2020.1716997] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2019] [Revised: 01/07/2020] [Accepted: 01/09/2020] [Indexed: 10/25/2022]
Abstract
OBJECTIVE The present study tested the incremental utility of response time (RT) on the Warrington Recognition Memory Test - Words (RMT-W) in classifying bona fide versus feigned TBI. METHOD Participants were 173 adults: 55 with moderate to severe TBI, 69 healthy comparisons (HC) instructed to perform their best, and 49 healthy adults coached to simulate TBI (SIM). Participants completed a computerized version of the RMT-W in the context of a comprehensive neuropsychological battery. Groups were compared on RT indices including mean RT (overall, correct trials, incorrect trials) and variability, as well as the traditional RMT-W accuracy score. RESULTS Several RT indices differed significantly across groups, although RMT-W accuracy predicted group membership more strongly than any individual RT index. SIM showed longer average RT than both TBI and HC. RT variability and RT for incorrect trials distinguished SIM-HC but not SIM-TBI comparisons. In general, results for SIM-TBI comparisons were weaker than SIM-HC results. For SIM-HC comparisons, classification accuracy was excellent for all multivariable models incorporating RMT-W accuracy with one of the RT indices. For SIM-TBI comparisons, classification accuracies for multivariable models ranged from acceptable to excellent discriminability. In addition to mean RT and RT on correct trials, the ratio of RT on correct items to RT on incorrect items showed incremental predictive value beyond accuracy. CONCLUSION Findings add to the growing body of research supporting the value of combining RT with PVTs in discriminating between verified and feigned TBI. The diagnostic accuracy of the RMT-W can be improved by incorporating RT.
Affiliation(s)
- Sarah D Patrick
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Lisa J Rapport
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Robert J Kanser
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Robin A Hanks
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Department of Physical Medicine and Rehabilitation, Wayne State University School of Medicine, Detroit, MI, USA
- Jesse R Bashem
- Department of Psychology, Wayne State University, Detroit, MI, USA
17
Patrick SD, Rapport LJ, Kanser RJ, Hanks RA, Bashem JR. Detecting simulated versus bona fide traumatic brain injury using pupillometry. Neuropsychology 2021; 35:472-485. [PMID: 34014751 DOI: 10.1037/neu0000747] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
Objective: Pupil dilation patterns are outside of conscious control and provide information regarding neuropsychological processes related to deception, cognitive effort, and familiarity. This study examined the incremental utility of pupillometry on the Test of Memory Malingering (TOMM) in classifying individuals with verified traumatic brain injury (TBI), individuals simulating TBI, and healthy comparisons. Method: Participants were 177 adults across three groups: verified TBI (n = 53), feigned cognitive impairment due to TBI (SIM, n = 52), and healthy comparisons (HC, n = 72). Results: Logistic regression and ROC curve analyses identified several pupil indices that discriminated the groups. Pupillometry discriminated best for the comparison of greatest clinical interest, verified TBI versus simulators, adding information beyond traditional accuracy scores. Simulators showed evidence of greater cognitive load than both groups instructed to perform at their best ability (HC and TBI). Additionally, the typically robust phenomenon of dilating to familiar stimuli was relatively diminished among TBI simulators compared to TBI and HC. This finding may reflect competing, interfering effects of cognitive effort that are frequently observed in pupillary reactivity during deception. However, the familiarity effect appeared on nearly half the trials for SIM participants. Among those trials evidencing the familiarity response, selection of the unfamiliar stimulus (i.e., dilation-response inconsistency) was associated with a sizeable increase in the likelihood of being a simulator. Conclusions: Taken together, these findings provide strong support for multimethod assessment: adding unique performance assessments such as biometrics to standard accuracy scores. Continued study of pupillometry will enhance the identification of simulators who are not detected by traditional performance validity test scoring metrics.
18
Sweet JJ, Heilbronner RL, Morgan JE, Larrabee GJ, Rohling ML, Boone KB, Kirkwood MW, Schroeder RW, Suhr JA. American Academy of Clinical Neuropsychology (AACN) 2021 consensus statement on validity assessment: Update of the 2009 AACN consensus conference statement on neuropsychological assessment of effort, response bias, and malingering. Clin Neuropsychol 2021; 35:1053-1106. [PMID: 33823750 DOI: 10.1080/13854046.2021.1896036] [Citation(s) in RCA: 145] [Impact Index Per Article: 48.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Objective: Citation and download data pertaining to the 2009 AACN consensus statement on validity assessment indicated that the topic maintained high interest in subsequent years, during which key terminology evolved and relevant empirical research proliferated. With a general goal of providing current guidance to the clinical neuropsychology community regarding this important topic, the specific update goals were to: identify current key definitions of terms relevant to validity assessment; learn what experts believe should be reaffirmed from the original consensus paper, as well as new consensus points; and incorporate the latest recommendations regarding the use of validity testing, as well as current application of the term 'malingering.' Methods: In the spring of 2019, four of the original 2009 work group chairs and additional experts for each work group were impaneled. A total of 20 individuals shared ideas and writing drafts until reaching consensus on January 21, 2021. Results: Consensus was reached regarding affirmation of prior salient points that continue to garner clinical and scientific support, as well as creation of new points. The resulting consensus statement addresses definitions and differential diagnosis, performance and symptom validity assessment, and research design and statistical issues. Conclusions/Importance: In order to provide bases for diagnoses and interpretations, the current consensus is that all clinical and forensic evaluations must proactively address the degree to which results of neuropsychological and psychological testing are valid. There is a strong and continually-growing evidence-based literature on which practitioners can confidently base their judgments regarding the selection and interpretation of validity measures.
Affiliation(s)
- Jerry J Sweet
- Department of Psychiatry & Behavioral Sciences, NorthShore University HealthSystem, Evanston, IL, USA
- Martin L Rohling
- Psychology Department, University of South Alabama, Mobile, AL, USA
- Kyle B Boone
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Michael W Kirkwood
- Department of Physical Medicine & Rehabilitation, University of Colorado School of Medicine and Children's Hospital Colorado, Aurora, CO, USA
- Ryan W Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine, Wichita, KS, USA
- Julie A Suhr
- Psychology Department, Ohio University, Athens, OH, USA
19
Braw Y. Response Time Measures as Supplementary Validity Indicators in Forced-Choice Recognition Memory Performance Validity Tests: A Systematic Review. Neuropsychol Rev 2021; 32:71-98. [PMID: 33821424 DOI: 10.1007/s11065-021-09499-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Accepted: 03/05/2021] [Indexed: 01/17/2023]
Abstract
Performance validity tests (PVTs) based on the forced-choice recognition memory (FCRM) paradigm are commonly used for the detection of noncredible performance. Examinees' response times (RTs) are affected by cognitive processes associated with deception and can also be gathered without lengthening the duration of the assessment. Consequently, interest in the utility of these measures as supplementary validity indicators in FCRM-PVTs has grown over the years. The current systematic review summarizes both clinical and simulation (i.e., healthy participants simulating cognitive impairment) studies of RTs in FCRM-PVTs. The findings of 25 peer-reviewed articles (n = 26 empirical studies) indicate that noncredible performance in FCRM-PVTs is associated with longer RTs. Additionally, there are indications that noncredible performance is associated with larger variability in RTs. RT measures, however, have lower discrimination capacity than conventional accuracy measures. Their utility may therefore lie in reaching decisions regarding cases with border zone accuracy scores, as well as aiding in the detection of more sophisticated examinees who are aware of the use of accuracy-based validity indicators in FCRM-PVTs. More research, however, is required before these measures are incorporated in daily practice and clinical decision-making processes.
Affiliation(s)
- Yoram Braw
- Department of Psychology, Ariel University, Ariel, Israel.
20
Strategies to detect invalid performance in cognitive testing: An updated and extended meta-analysis. CURRENT PSYCHOLOGY 2021. [DOI: 10.1007/s12144-021-01659-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
21
Sabelli AG, Messa I, Giromini L, Lichtenstein JD, May N, Erdodi LA. Symptom Versus Performance Validity in Patients with Mild TBI: Independent Sources of Non-credible Responding. PSYCHOLOGICAL INJURY & LAW 2021. [DOI: 10.1007/s12207-021-09400-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
22
Cerny BM, Rhoads T, Leib SI, Jennette KJ, Basurto KS, Durkin NM, Ovsiew GP, Resch ZJ, Soble JR. Mean response latency indices on the Victoria Symptom Validity Test do not contribute meaningful predictive value over accuracy scores for detecting invalid performance. APPLIED NEUROPSYCHOLOGY-ADULT 2021; 29:1304-1311. [PMID: 33470869 DOI: 10.1080/23279095.2021.1872575] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
The utility of the Victoria Symptom Validity Test (VSVT) as a performance validity test (PVT) has been primarily established using response accuracy scores. However, the degree to which response latency may contribute to accurate classification of performance invalidity over and above accuracy scores remains understudied. Therefore, this study investigated whether combining VSVT accuracy and response latency scores would increase predictive utility beyond use of accuracy scores alone. Data from a mixed clinical sample of 163 patients, who were administered the VSVT as part of a larger neuropsychological battery, were analyzed. At least four independent criterion PVTs were used to establish validity groups (121 valid/42 invalid). Logistic regression models examining each difficulty level revealed that all VSVT measures were useful in classifying validity groups, both independently and when combined. Individual predictor classification accuracy ranged from 77.9 to 81.6%, indicating acceptable to excellent discriminability across the validity indices. The results of this study support the value of both accuracy and latency scores on the VSVT to identify performance invalidity, although the accuracy scores had superior classification statistics compared to response latency, and mean latency indices provided no unique benefit for classification accuracy beyond dimensional accuracy scores alone.
Affiliation(s)
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Sophie I Leib
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S Basurto
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Nicole M Durkin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
23
Victoria Symptom Validity Test: A Systematic Review and Cross-Validation Study. Neuropsychol Rev 2021; 31:331-348. [PMID: 33433828 DOI: 10.1007/s11065-021-09477-5] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2020] [Accepted: 01/03/2021] [Indexed: 12/12/2022]
Abstract
The Victoria Symptom Validity Test (VSVT) is a performance validity test (PVT) with over two decades of empirical backing, although methodological limitations within the extant literature restrict its clinical and research generalizability. Chief among these constraints is the limited consensus on the most accurate index within the VSVT and on the most appropriate cut-scores within each VSVT validity index. The current systematic review synthesizes existing VSVT validation studies and provides additional cross-validation in an independent sample using a known-groups design. We completed a systematic search of the literature, identifying 17 peer-reviewed studies for synthesis (7 simulation designs, 7 differential prevalence designs, and 3 known-groups designs). The independent cross-validation sample consisted of 200 mixed clinical neuropsychiatric patients referred for outpatient neuropsychological evaluation. Across all indices, Total item accuracy produced the strongest psychometric properties at an optimal cut-score of ≤ 40 (62% sensitivity/88% specificity). However, ROC curve analyses for all VSVT indices yielded statistically significant areas under the curve (AUCs = .73–.81), suggestive of moderate classification accuracy. Cut-scores derived using the independent cross-validation sample converged with some previous findings supporting cut-scores of ≤ 22 for Easy item accuracy and ≤ 40 for Total item accuracy, although divergent findings were noted for Difficult item accuracy. Overall, VSVT validity indicators have adequate diagnostic accuracy across populations, with the current study providing additional support for its use as a psychometrically sound PVT in clinical settings. However, caution is recommended among patients with certain verified clinical conditions (e.g., dementia) and those with pronounced working memory deficits due to concerns for an increased risk of false positives.
24
Nikolai T, Cechova K, Bukacova K, Fendrych Mazancova A, Markova H, Bezdicek O, Hort J, Vyhnalek M. Delayed matching to sample task 48: assessment of malingering with simulating design. AGING NEUROPSYCHOLOGY AND COGNITION 2020; 28:797-811. [PMID: 32998629 DOI: 10.1080/13825585.2020.1826898] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
The results of neuropsychological tests may be distorted by patients who exaggerate cognitive deficits. Eighty-three patients with cognitive deficit [Amnestic Mild Cognitive Impairment (aMCI), n = 53; Alzheimer's disease (AD) dementia, n = 30], 44 healthy older adults (HA), and 30 simulators of AD (s-AD) underwent comprehensive neuropsychological assessment. Receiver Operating Characteristic (ROC) analysis revealed high specificity but low sensitivity of the Delayed Matching to Sample Task (DMS48) in differentiating s-AD from AD dementia (87 and 53%, respectively) and from aMCI (96 and 57%). The sensitivity was considerably increased by using the DMS48/Rey Auditory Verbal Learning Test (RAVLT) ratio (specificity and sensitivity 93% and 93% for AD dementia and 96% and 80% for aMCI). The DMS48 differentiates s-AD from both aMCI and AD dementia with high specificity but low sensitivity. Its predictive value greatly increased when evaluated together with the RAVLT.
Affiliation(s)
- T Nikolai
- Department of Neurology, Neuropsychology Laboratory, 1st Faculty of Medicine and General University Hospital, Prague, Czech Republic
- International Clinical Research Center, St. Anne's University Hospital Brno, Brno, Czech Republic
- K Cechova
- International Clinical Research Center, St. Anne's University Hospital Brno, Brno, Czech Republic
- Department of Neurology, Memory Clinic, 2nd Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- K Bukacova
- Department of Neurology, Neuropsychology Laboratory, 1st Faculty of Medicine and General University Hospital, Prague, Czech Republic
- A Fendrych Mazancova
- Department of Neurology, Neuropsychology Laboratory, 1st Faculty of Medicine and General University Hospital, Prague, Czech Republic
- International Clinical Research Center, St. Anne's University Hospital Brno, Brno, Czech Republic
- H Markova
- International Clinical Research Center, St. Anne's University Hospital Brno, Brno, Czech Republic
- Department of Neurology, Memory Clinic, 2nd Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- O Bezdicek
- Department of Neurology, Neuropsychology Laboratory, 1st Faculty of Medicine and General University Hospital, Prague, Czech Republic
- J Hort
- International Clinical Research Center, St. Anne's University Hospital Brno, Brno, Czech Republic
- Department of Neurology, Memory Clinic, 2nd Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- M Vyhnalek
- International Clinical Research Center, St. Anne's University Hospital Brno, Brno, Czech Republic
- Department of Neurology, Memory Clinic, 2nd Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
25
Omer E, Elbaum T, Braw Y. Identifying Feigned Cognitive Impairment: Investigating the Utility of Diffusion Model Analyses. Assessment 2020; 29:198-208. [PMID: 32988242 DOI: 10.1177/1073191120962317] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Forced-choice performance validity tests are routinely used for the detection of feigned cognitive impairment. The drift diffusion model deconstructs performance into distinct cognitive processes using accuracy and response time measures. It thereby offers a unique approach for gaining insight into examinees' speed-accuracy trade-offs and the cognitive processes that underlie their performance. The current study is the first to perform such analyses using a well-established forced-choice performance validity test. To achieve this aim, archival data of healthy participants, either simulating cognitive impairment in the Word Memory Test or performing it to the best of their ability, were analyzed using the EZ-diffusion model (N = 198). The groups differed in the three model parameters, with drift rate emerging as the best predictor of group membership. These findings provide initial evidence for the usefulness of the drift diffusion model in clarifying the cognitive processes underlying feigned cognitive impairment and encourage further research.
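The EZ-diffusion model used in this study has a closed-form solution (Wagenmakers, van der Maas, & Grasman, 2007) that recovers its three parameters from proportion correct, RT variance, and mean RT. A minimal sketch of that closed form, with invented input values rather than the study's archival Word Memory Test data:

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion: proportion correct (pc), RT variance in s^2 (vrt),
    and mean RT in s (mrt) -> drift rate v, boundary separation a, non-decision
    time ter. Edge cases (pc of 0, .5, or 1) need a correction and are excluded here."""
    assert 0.0 < pc < 1.0 and pc != 0.5
    L = math.log(pc / (1.0 - pc))                    # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x**0.25   # drift rate
    a = s**2 * L / v                                 # boundary separation
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))  # mean decision time
    return v, a, mrt - mdt

# Hypothetical honest responder: accurate and fast -> positive, higher drift rate
v_honest, _, _ = ez_diffusion(pc=0.95, vrt=0.05, mrt=0.5)
# Hypothetical simulator suppressing accuracy below chance -> negative drift rate
v_sim, _, _ = ez_diffusion(pc=0.30, vrt=0.09, mrt=0.9)
```

The sketch illustrates why drift rate can separate groups: deliberately suppressing accuracy pushes the estimated drift toward or below zero, consistent with the study's finding that drift rate was the best predictor of group membership.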
26
Abeare CA, Hurtubise JL, Cutler L, Sirianni C, Brantuo M, Makhzoum N, Erdodi LA. Introducing a forced choice recognition trial to the Hopkins Verbal Learning Test – Revised. Clin Neuropsychol 2020; 35:1442-1470. [DOI: 10.1080/13854046.2020.1779348] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Affiliation(s)
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nadeen Makhzoum
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
27
Ovsiew GP, Resch ZJ, Nayar K, Williams CP, Soble JR. Not so fast! Limitations of processing speed and working memory indices as embedded performance validity tests in a mixed neuropsychiatric sample. J Clin Exp Neuropsychol 2020; 42:473-484. [DOI: 10.1080/13803395.2020.1758635] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Affiliation(s)
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kritika Nayar
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychiatry and Behavioral Sciences, Northwestern Feinberg School of Medicine, Chicago, IL, USA
- Christopher P. Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
28
Hurtubise J, Baher T, Messa I, Cutler L, Shahein A, Hastings M, Carignan-Querqui M, Erdodi LA. Verbal fluency and digit span variables as performance validity indicators in experimentally induced malingering and real world patients with TBI. APPLIED NEUROPSYCHOLOGY-CHILD 2020; 9:337-354. [DOI: 10.1080/21622965.2020.1719409] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Affiliation(s)
- Tabarak Baher
- Department of Psychology, University of Windsor, Windsor, Canada
- Isabelle Messa
- Department of Psychology, University of Windsor, Windsor, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Canada
- Ayman Shahein
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
29
Lace JW, Grant AF, Ruppert P, Kaufman DAS, Teague CL, Lowell K, Gfeller JD. Detecting noncredible performance with the neuropsychological assessment battery, screening module: A simulation study. Clin Neuropsychol 2019; 35:572-596. [DOI: 10.1080/13854046.2019.1694703] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Affiliation(s)
- John W. Lace
- Department of Psychology, Saint Louis University, St. Louis, MO, USA
- Alex F. Grant
- Department of Psychology, Saint Louis University, St. Louis, MO, USA
- Phillip Ruppert
- Department of Psychiatry and Behavioral Neuroscience, Saint Louis University, St. Louis, MO, USA
- Carson L. Teague
- Department of Psychology, Saint Louis University, St. Louis, MO, USA
- Kimberly Lowell
- Department of Psychology, Saint Louis University, St. Louis, MO, USA
30
Neal J, Strothkamp S, Bedingar E, Cordero P, Wagner B, Vagnini V, Jiang Y. Discriminating Fake From True Brain Injury Using Latency of Left Frontal Neural Responses During Old/New Memory Recognition. Front Neurosci 2019; 13:988. [PMID: 31611760 PMCID: PMC6777439 DOI: 10.3389/fnins.2019.00988] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Received: 04/10/2019] [Accepted: 09/02/2019] [Indexed: 11/13/2022]
Abstract
Traumatic brain injury (TBI) is a major public health concern that affects 69 million individuals worldwide each year. Neuropsychologists report that up to 40% of individuals undergoing evaluations for TBI may be malingering neurocognitive deficits for a compensatory reward. Memory recognition tests for malingering detection are effective but can be behaviorally coached. There is a great need to develop a novel neural-based method for discriminating fake from true brain injury. Here we test the hypothesis that the decision making involved in faking memory deficits prolongs frontal neural responses. We applied an advanced method that measures decision latency in milliseconds to discriminate true TBI from malingerers who fake brain injury. To test this hypothesis, latencies of memory-related brain potentials were compared among true patients with moderate or severe TBI and healthy age-matched individuals who were assigned either to respond honestly or to fake memory deficits. Scalp electroencephalography (EEG) signals were recorded with a 32-channel cap during an Old/New memory recognition task in three age- and education-matched groups: honest (n = 12), malingering (n = 15), and brain-injured (n = 14) individuals. Bilateral fractional latencies of the late positive ERP at frontal sites were compared among the three groups under both studied (Old) and non-studied (New) memory recognition conditions. Results show a significant difference at frontal sites between the fractional latencies of the late positive component during recognition of studied items in malingerers (average latency = 396 ms) and the true brain-injured subjects (mean = 312 ms). Only malingerers showed asymmetrical frontal activity compared with the two other groups. These new findings support the hypothesis that the additional frontal processing of malingering individuals is measurably different from that of actual patients with brain injury. In contrast to our previously reported method using difference waves of amplitudes at frontal-to-posterior midline sites during new-item recognition (Vagnini et al., 2008), there was no significant latency difference among groups during recognition of New items. The current method, using delayed left frontal neural responses during studied items, reached a sensitivity of 80% and specificity of 79% in detecting malingerers from true brain injury.
Affiliation(s)
- Jennifer Neal
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States
- Stephanie Strothkamp
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States
- Esias Bedingar
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States; Harvard T.H. Chan School of Public Health, Boston, MA, United States
- Patrick Cordero
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States
- Benjamin Wagner
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States
- Victoria Vagnini
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States; Louisville VA Medical Center, Louisville, KY, United States
- Yang Jiang
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States
31
Martin PK, Schroeder RW, Olsen DH, Maloy H, Boettcher A, Ernst N, Okut H. A systematic review and meta-analysis of the Test of Memory Malingering in adults: Two decades of deception detection. Clin Neuropsychol 2019; 34:88-119. [DOI: 10.1080/13854046.2019.1637027] [Citation(s) in RCA: 68] [Impact Index Per Article: 13.6] [Indexed: 10/26/2022]
Affiliation(s)
- Phillip K. Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Ryan W. Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Daniel H. Olsen
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Halley Maloy
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Nathan Ernst
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Hayrettin Okut
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
32
Geographic Variation and Instrumentation Artifacts: In Search of Confounds in Performance Validity Assessment in Adults with Mild TBI. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09354-w] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Indexed: 01/29/2023]
33
Denning JH. When 10 is enough: Errors on the first 10 items of the Test of Memory Malingering (TOMMe10) and administration time predict freestanding performance validity tests (PVTs) and underperformance on memory measures. APPLIED NEUROPSYCHOLOGY-ADULT 2019; 28:35-47. [PMID: 30950290 DOI: 10.1080/23279095.2019.1588122] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Indexed: 10/27/2022]
Abstract
It is critical that more efficient performance validity tests (PVTs) be developed. A shorter version of the Test of Memory Malingering (TOMM) that utilizes errors on the first 10 items (TOMMe10) has shown promise as a freestanding PVT. A retrospective review included 397 consecutive veterans who were administered TOMM Trial 1 (TOMM1), the Medical Symptom Validity Test (MSVT), and the Brief Visuospatial Memory Test-Revised (BVMT-R). TOMMe10 accuracy and administration time were used to predict performance on the freestanding PVTs (TOMM1, MSVT). The impact of failing TOMMe10 (2 or more errors) on independent memory measures was also explored. TOMMe10 was a robust predictor of TOMM1 (area under the curve [AUC] = 0.97) and MSVT (AUC = 0.88), with sensitivities of 0.76 to 0.89 and specificities of 0.89 to 0.96. Administration time predicted PVT performance but did not improve accuracy compared to TOMMe10 alone. Failing TOMMe10 was associated with clinically and statistically significant declines on the BVMT-R and the MSVT Paired Associates and Free Recall memory tests (d = -0.32 to -1.31). Consistent with prior research, TOMMe10 at 2 or more errors was highly accurate in predicting performance on other well-validated freestanding PVTs. Failing just one freestanding PVT (TOMMe10) significantly impacted memory measures and likely reflects invalid test performance.
Affiliation(s)
- John H Denning
- Department of Veterans Affairs, Mental Health Service, Ralph H. Johnson Veterans Affairs Medical Center, Charleston, South Carolina, USA; Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, South Carolina, USA
34
Elbaum T, Golan L, Lupu T, Wagner M, Braw Y. Establishing supplementary response time validity indicators in the Word Memory Test (WMT) and directions for future research. APPLIED NEUROPSYCHOLOGY-ADULT 2019; 27:403-413. [DOI: 10.1080/23279095.2018.1555161] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Indexed: 10/27/2022]
Affiliation(s)
- Tomer Elbaum
- Department of Psychology, Ariel University, Ariel, Israel
- Department of Industrial Engineering & Management, Ariel University, Ariel, Israel
- Lior Golan
- Department of Psychology, Ariel University, Ariel, Israel
- Tamar Lupu
- Department of Psychology, Ariel University, Ariel, Israel
- Michael Wagner
- Department of Industrial Engineering & Management, Ariel University, Ariel, Israel
- Yoram Braw
- Department of Psychology, Ariel University, Ariel, Israel
35
An KY, Charles J, Ali S, Enache A, Dhuga J, Erdodi LA. Reexamining performance validity cutoffs within the Complex Ideational Material and the Boston Naming Test–Short Form using an experimental malingering paradigm. J Clin Exp Neuropsychol 2018; 41:15-25. [DOI: 10.1080/13803395.2018.1483488] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Indexed: 10/28/2022]
Affiliation(s)
- Kelly Y. An
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jordan Charles
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Anca Enache
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jasmine Dhuga
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
36
Huang B, Xu Q, Ye R, Xu J. Influence of tranexamic acid on cerebral hemorrhage: A meta-analysis of randomized controlled trials. Clin Neurol Neurosurg 2018; 171:174-178. [PMID: 29929173 DOI: 10.1016/j.clineuro.2018.06.017] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Received: 03/03/2018] [Revised: 05/07/2018] [Accepted: 06/11/2018] [Indexed: 11/28/2022]
Abstract
Tranexamic acid might be beneficial for cerebral hemorrhage; however, the results remain controversial. We conducted a systematic review and meta-analysis to explore the influence of tranexamic acid on cerebral hemorrhage. The PubMed, Embase, Web of Science, EBSCO, and Cochrane Library databases were systematically searched. Randomized controlled trials (RCTs) assessing the effect of tranexamic acid on cerebral hemorrhage were included. Two investigators independently searched articles, extracted data, and assessed the quality of the included studies. The meta-analysis was performed using a random-effects model. Seven RCTs involving 1702 patients were included. Overall, compared with control intervention in cerebral hemorrhage, tranexamic acid significantly reduced growth of the hemorrhagic mass (RR = 0.78; 95% CI = 0.61-0.99; P = 0.04) and unfavorable outcome (RR = 0.75; 95% CI = 0.61-0.93; P = 0.008), but demonstrated no substantial influence on volume of the hemorrhagic lesion (Std. MD = -0.10; 95% CI = -0.27 to 0.08; P = 0.28), neurologic deterioration (RR = 1.25; 95% CI = 0.60-2.60; P = 0.56), rebleeding (RR = 0.62; 95% CI = 0.35-1.09; P = 0.10), surgery requirement (RR = 0.78; 95% CI = 0.40-1.51; P = 0.46), or mortality (RR = 0.86; 95% CI = 0.69-1.05; P = 0.14). In summary, tranexamic acid significantly decreased growth of the hemorrhagic mass and unfavorable outcome, but showed no notable impact on volume of the hemorrhagic lesion, neurologic deterioration, rebleeding, surgery requirement, or mortality.
Affiliation(s)
- Beilei Huang
- Emergency Department, Wenzhou People's Hospital, Wenzhou Maternal and Child Health Care Hospital, The Third Clinical Institute Affiliated To Wenzhou Medical University, Wenzhou, Zhejiang Province, 400700, PR China.
- Qiusheng Xu
- Emergency Department, Wenzhou People's Hospital, Wenzhou Maternal and Child Health Care Hospital, The Third Clinical Institute Affiliated To Wenzhou Medical University, Wenzhou, Zhejiang Province, 400700, PR China
- Ru Ye
- Emergency Department, Wenzhou People's Hospital, Wenzhou Maternal and Child Health Care Hospital, The Third Clinical Institute Affiliated To Wenzhou Medical University, Wenzhou, Zhejiang Province, 400700, PR China
- Jun Xu
- Emergency Department, Wenzhou People's Hospital, Wenzhou Maternal and Child Health Care Hospital, The Third Clinical Institute Affiliated To Wenzhou Medical University, Wenzhou, Zhejiang Province, 400700, PR China