1
Gomes F, Ferreira I, Rosa B, Martins da Silva A, Cavaco S. Using behavior and eye-fixations to detect feigned memory impairment. Front Psychol 2024; 15:1395434. [PMID: 39372958] [PMCID: PMC11450296] [DOI: 10.3389/fpsyg.2024.1395434]
Abstract
Background Detecting invalid cognitive performance is an important clinical challenge in neuropsychological assessment. The aim of this study was to explore behavioral and eye-fixation responses during performance of a computerized version of the Test of Memory Malingering (TOMM-C) under standard vs. feigning conditions. Participants and methods The TOMM-C with eye-tracking recording was performed by 60 healthy individuals (31 given standard instructions - SI; 29 instructed to feign memory impairment: 21 Naïve Simulators - NS and 8 Coached Simulators - CS) and by 14 patients with Multiple Sclerosis (MS) and memory complaints. Number of correct responses, response time, number of fixations, and fixation time on old vs. new stimuli were recorded. Nonparametric tests were applied for group comparisons. Results NS produced fewer correct responses and had longer response times than SI on all three trials. SI showed more fixations and longer fixation time on previously presented stimuli (i.e., familiarity preference), especially on Trial 1, whereas NS had more fixations and longer fixation time on new stimuli (i.e., novelty preference), especially on the Retention trial. MS patients produced longer response times and had a different fixation pattern than SI subjects. No behavioral or oculomotor difference was observed between NS and CS. Conclusion Healthy simulators show a distinct behavioral and eye-fixation response pattern, reflecting a novelty preference. Oculomotor measures may be useful for detecting exaggeration or fabrication of cognitive dysfunction, though their application in clinical populations may be limited.
Affiliation(s)
- Filomena Gomes
- Neuropsychology Service, Centro Hospitalar Universitário de Santo António, Porto, Portugal
- Laboratory of Neurobiology of Human Behavior, Centro Hospitalar Universitário de Santo António, Porto, Portugal
- Inês Ferreira
- Laboratory of Neurobiology of Human Behavior, Centro Hospitalar Universitário de Santo António, Porto, Portugal
- Bruno Rosa
- Laboratory of Neurobiology of Human Behavior, Centro Hospitalar Universitário de Santo António, Porto, Portugal
- Ana Martins da Silva
- UMIB - Unit for Multidisciplinary Research in Biomedicine, ICBAS - School of Medicine and Biomedical Sciences, University of Porto, Porto, Portugal
- ITR - Laboratory for Integrative and Translational Research in Population Health, Porto, Portugal
- Department of Neurology, Centro Hospitalar Universitário de Santo António, Porto, Portugal
- Sara Cavaco
- Neuropsychology Service, Centro Hospitalar Universitário de Santo António, Porto, Portugal
- Laboratory of Neurobiology of Human Behavior, Centro Hospitalar Universitário de Santo António, Porto, Portugal
- UMIB - Unit for Multidisciplinary Research in Biomedicine, ICBAS - School of Medicine and Biomedical Sciences, University of Porto, Porto, Portugal
- ITR - Laboratory for Integrative and Translational Research in Population Health, Porto, Portugal
2
Stocks JK, Shields AN, DeBoer AB, Cerny BM, Ogram Buckley CM, Ovsiew GP, Jennette KJ, Resch ZJ, Basurto KS, Song W, Pliskin NH, Soble JR. The impact of visual memory impairment on Victoria Symptom Validity Test performance: A known-groups analysis. Appl Neuropsychol Adult 2024; 31:329-338. [PMID: 34985401] [DOI: 10.1080/23279095.2021.2021911]
Abstract
OBJECTIVE We assessed the effect of visual learning and recall impairment on Victoria Symptom Validity Test (VSVT) accuracy and response latency for Easy, Difficult, and Total items. METHOD A sample of 163 adult patients who were administered the VSVT and the Brief Visuospatial Memory Test-Revised was classified into valid (114/163) and invalid (49/163) groups via independent criterion performance validity tests (PVTs). Classification accuracies for all VSVT indices were examined for the overall sample and separately for subgroups based on visual memory functioning. RESULTS In the overall sample, all indices produced acceptable classification accuracy (areas under the curve [AUCs] ≥ 0.79). When stratified by visual learning/recall impairment, accuracy indices yielded acceptable classification for both the unimpaired (AUCs ≥ 0.79) and impaired subsamples (AUCs ≥ 0.75). Latency indices had acceptable classification accuracy for the unimpaired subsample (AUCs ≥ 0.74), but accuracy and sensitivity dropped for the impaired subsample (AUCs ≥ 0.67). CONCLUSIONS VSVT accuracy and response latency yielded acceptable classification accuracies in the overall sample, and this effect was maintained for the accuracy indices in those with and without visual learning/recall impairment. Findings indicate that the VSVT is a psychometrically robust PVT with largely invariant cut-scores, even in the presence of bona fide visual learning/recall impairment.
Affiliation(s)
- Jane K Stocks
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Allison N Shields
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Adam B DeBoer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Wheaton College, Wheaton, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S Basurto
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Woojin Song
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Neil H Pliskin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
3
Patrick SD, Rapport LJ, Hanks RA, Kanser RJ. Detecting feigned cognitive impairment using pupillometry on the Warrington Recognition Memory Test for Words. J Clin Exp Neuropsychol 2024; 46:36-45. [PMID: 38402625] [PMCID: PMC11087194] [DOI: 10.1080/13803395.2024.2312624]
Abstract
OBJECTIVE Pupillometry provides information about physiological and psychological processes related to cognitive load, familiarity, and deception, and it is outside of conscious control. This study examined pupillary dilation patterns during a performance validity test (PVT) among adults with true and feigned cognitive impairment due to traumatic brain injury (TBI). PARTICIPANTS AND METHODS Participants were 214 adults in three groups: adults with bona fide moderate to severe TBI (TBI; n = 51), healthy comparisons instructed to perform their best (HC; n = 72), and healthy adults instructed and incentivized to simulate cognitive impairment due to TBI (SIM; n = 91). The Recognition Memory Test (RMT) was administered in the context of a comprehensive neuropsychological battery. Three pupillary indices were evaluated. Two pure pupil dilation (PD) indices assessed a simple measure of baseline arousal (PD-Baseline) and a nuanced measure of dynamic engagement (PD-Range). A pupillary-behavioral index was also evaluated: dilation-response inconsistency (DRI) captured the frequency with which examinees displayed a pupillary familiarity response to the correct answer but selected the unfamiliar stimulus (incorrect answer). RESULTS All three indices differed significantly among the groups, with medium-to-large effect sizes. PD-Baseline appeared sensitive to oculomotor dysfunction due to TBI; adults with TBI displayed significantly lower chronic arousal than the two groups of healthy adults (SIM, HC). Dynamic engagement (PD-Range) yielded a hierarchical structure such that SIM were more dynamically engaged than TBI, followed by HC. As predicted, simulators engaged in DRI significantly more frequently than the other groups. Moreover, subgroup analyses indicated that DRI differed significantly for simulators who scored in the invalid range on the RMT (n = 45) versus adults with genuine TBI who scored invalidly (n = 15). CONCLUSIONS The findings support continued research on the application of pupillometry to performance validity assessment. Overall, they highlight the promise of biometric indices in multimethod assessments of performance validity.
Affiliation(s)
- Sarah D Patrick
- Department of Psychology, Wayne State University, Detroit, Michigan, USA
- Lisa J Rapport
- Department of Psychology, Wayne State University, Detroit, Michigan, USA
- Robin A Hanks
- Department of Physical Medicine and Rehabilitation, Wayne State University School of Medicine, Detroit, Michigan, USA
- Robert J Kanser
- Department of Psychology, Wayne State University, Detroit, Michigan, USA
- The University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina, USA
4
Crişan I, Sava FA, Maricuţoiu LP. Strategies of feigning mild head injuries related to validity indicators and types of coaching: Results of two experimental studies. Appl Neuropsychol Adult 2023; 30:705-715. [PMID: 34510965] [DOI: 10.1080/23279095.2021.1973004]
Abstract
OBJECTIVE In this paper, we analyzed differences between uncoached, symptom-coached, and test-coached simulators in their strategies for feigning mild head injuries. METHOD Healthy undergraduates (n = 67 in the first study; n = 48 in the second study), randomized into three simulator groups, were assessed with four experimental memory tests. In the first study, tests were administered face-to-face; in the second study, the procedure was adapted for online testing. RESULTS Online simulators showed a different approach to testing than face-to-face participants (U tests < 920, p < .05). Nevertheless, both samples favored strategies such as memory loss, error making, concentration difficulties, and slow responding. Except for slow responding and concentration difficulties, the favored strategies correlated with validity indicators. In the first study, test-coached simulators (m = 4.58-5.68, SD = 2.2-3) used strategies less than uncoached participants (m = 5.25-5.88, SD = 2.26-2.84). In the second study, test-coached participants (m = 3.8-5.6, SD = 1.51-2.2) employed strategies less than uncoached (m = 6.21-7.29, SD = 1.25-1.85) and symptom-coached participants (m = 6.14-6.79, SD = 1.69-2.76). DISCUSSION Similarities and differences between online and face-to-face assessments are discussed, and recommendations are made to combine heterogeneous indicators for detecting feigning strategies.
Affiliation(s)
- Iulia Crişan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Florin Alin Sava
- Department of Psychology, West University of Timişoara, Timişoara, Romania
5
Erdodi LA. From "below chance" to "a single error is one too many": Evaluating various thresholds for invalid performance on two forced choice recognition tests. Behav Sci Law 2023; 41:445-462. [PMID: 36893020] [DOI: 10.1002/bsl.2609]
Abstract
This study was designed to empirically evaluate the classification accuracy of various definitions of invalid performance on two forced-choice recognition performance validity tests (PVTs; FCRCVLT-II and Test of Memory Malingering [TOMM-2]). The proportions of examinees responding at or below the chance level defined by binomial theory, and of those making any errors, were computed across two mixed clinical samples from the United States and Canada (N = 470) and two sets of criterion PVTs. There was virtually no overlap between the binomial and empirical distributions. Over 95% of patients who passed all PVTs obtained a perfect score. At-chance-level responding was limited to patients who failed ≥2 PVTs (91% of them failed 3 PVTs). No one scored below chance level on the FCRCVLT-II or TOMM-2. All 40 patients with dementia scored above chance. Although at or below chance level performance provides very strong evidence of non-credible responding, scores above chance level have no negative predictive value; even at-chance-level scores on PVTs provide compelling evidence of non-credible presentation. A single error on the FCRCVLT-II or TOMM-2 is highly specific (0.95) to psychometrically defined invalid performance. Defining non-credible responding as below-chance-level scores is an unnecessarily restrictive threshold that gives most examinees with invalid profiles a pass.
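The binomial criterion referenced in this abstract can be made concrete with a short sketch (illustrative only, not the study's code; the 50-item, two-alternative format assumed below is a common forced-choice PVT length, not a figure taken from the abstract):

```python
from math import comb

def p_at_or_below(k: int, n: int, p: float = 0.5) -> float:
    """Cumulative binomial probability P(X <= k): the chance that an examinee
    responding randomly on n two-alternative items gets k or fewer correct."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# For a hypothetical 50-item forced-choice test, "below chance" at the
# conventional one-tailed .05 level is the highest score that random
# guessing would produce less than 5% of the time.
n = 50
below_chance_cutoff = max(k for k in range(n + 1) if p_at_or_below(k, n) < 0.05)
print(below_chance_cutoff)
```

Under these assumptions the threshold works out to 18 of 50 correct: scoring 18 or fewer is "significantly below chance," while 19-31 correct is statistically indistinguishable from guessing, which is why the abstract stresses that above-chance scores carry no negative predictive value.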
Affiliation(s)
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
6
Kanser RJ, Rapport LJ, Hanks RA, Patrick SD. Utility of WAIS-IV Digit Span indices as measures of performance validity in moderate to severe traumatic brain injury. Clin Neuropsychol 2022; 36:1950-1963. [PMID: 34044725] [DOI: 10.1080/13854046.2021.1921277]
Abstract
Objective: The addition of Sequencing to WAIS-IV Digit Span (DS) brought about new Reliable Digit Span (RDS) indices and an Age-Corrected Scaled Score that includes Sequencing trials. Reports have indicated that these new performance validity tests (PVTs) are superior to the traditional RDS; however, comparisons in the context of known neurocognitive impairment are sparse. This study compared DS-derived PVT classification accuracies in a design that included adults with verified TBI. Methods: Participants included 64 adults with moderate-to-severe TBI (TBI), 51 healthy adults coached to simulate TBI (SIM), and 78 healthy comparisons (HC). Participants completed the WAIS-IV DS subtest in the context of a larger test battery. Results: Kruskal-Wallis tests indicated that all DS indices differed significantly across groups. Post hoc contrasts revealed that only RDS Forward and the traditional RDS differed significantly between SIM and TBI. ROC analyses indicated that RDS variables were comparable predictors of SIM vs. HC; however, the traditional RDS showed the highest sensitivity when approximating 90% specificity for SIM vs. TBI. A greater percentage of the TBI group than of SIM or HC scored RDS Sequencing < 1. Conclusion: In the context of moderate-to-severe TBI, the DS-derived PVTs showed comparable discriminability. However, the Greiffenstein et al. traditional RDS demonstrated the best classification accuracy with respect to specificity/sensitivity balance. This relative superiority may reflect that individuals with verified TBI are more likely to perseverate on prior instructions during DS Sequencing. Findings highlight the importance of including individuals with verified TBI when evaluating and developing PVTs.
Affiliation(s)
- Robert J Kanser
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Lisa J Rapport
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Robin A Hanks
- Department of Physical Medicine and Rehabilitation, Wayne State University, Detroit, MI, USA
- Sarah D Patrick
- Department of Psychology, Wayne State University, Detroit, MI, USA
7
Jennette KJ, Williams CP, Resch ZJ, Ovsiew GP, Durkin NM, O'Rourke JJF, Marceaux JC, Critchfield EA, Soble JR. Assessment of differential neurocognitive performance based on the number of performance validity tests failures: A cross-validation study across multiple mixed clinical samples. Clin Neuropsychol 2022; 36:1915-1932. [PMID: 33759699] [DOI: 10.1080/13854046.2021.1900398]
Abstract
Objective: This cross-sectional study examined the effect of number of Performance Validity Test (PVT) failures on neuropsychological test performance among a demographically diverse Veteran (VA) sample (n = 76) and academic medical sample (AMC; n = 128). A secondary goal was to investigate the psychometric implications of including versus excluding those with one PVT failure when cross-validating a series of embedded PVTs. Method: All patients completed the same six criterion PVTs, with the AMC sample completing three additional embedded PVTs. Neurocognitive test performance differences were examined based on number of PVT failures (0, 1, 2+) for both samples, and the effect of number of criterion failures on embedded PVT performance was analyzed in the AMC sample. Results: Both groups with 0 or 1 PVT failures performed better than those with ≥2 PVT failures across most cognitive tests. Differences between those with 0 or 1 PVT failures were nonsignificant except for one test in the AMC sample. Receiver operating characteristic curve analyses found no differences in optimal cut score based on number of PVT failures when retaining/excluding one PVT failure. Conclusion: Findings support the use of ≥2 PVT failures as indicative of performance invalidity. These findings strongly support including those with one PVT failure with those with zero PVT failures in diagnostic accuracy studies, given that their inclusion reflects actual clinical practice, does not reduce sample sizes, and does not artificially deflate neurocognitive test results or inflate PVT classification accuracy statistics.
Affiliation(s)
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Christopher P Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Nicole M Durkin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Justin J F O'Rourke
- Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Edan A Critchfield
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
8
Motor Reaction Times as an Embedded Measure of Performance Validity: a Study with a Sample of Austrian Early Retirement Claimants. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09431-z]
Abstract
Among embedded measures of performance validity, reaction time parameters appear to be less common. However, their potential may be underestimated. In German-speaking countries, reaction time is often examined using the Alertness subtest of the Test of Attention Performance (TAP). Several previous studies have examined its suitability for validity assessment. The current study was conceived to examine a variety of reaction time parameters from the TAP Alertness subtest in a sample of 266 Austrian civil forensic patients. Classification results from the Word Memory Test (WMT) were used as an external indicator to distinguish between valid and invalid symptom presentations. Results demonstrated that the WMT fail group performed worse than the WMT pass group in reaction time as well as in its intraindividual variation across trials. Receiver operating characteristic analyses revealed areas under the curve of .775-.804. Logistic regression models indicated that the intraindividual variation of motor reaction time with warning sound was the best predictor of invalid test performance. Suggested cut scores yielded a sensitivity of .62 at a specificity of .90, or a sensitivity of .45 at a specificity of .95 when the accepted false-positive rate was set lower. The results encourage the use of the Alertness subtest as an embedded measure of performance validity.
9
Patrick SD, Rapport LJ, Kanser RJ, Hanks RA, Bashem JR. Detecting simulated versus bona fide traumatic brain injury using pupillometry. Neuropsychology 2021; 35:472-485. [PMID: 34014751] [DOI: 10.1037/neu0000747]
Abstract
Objective: Pupil dilation patterns are outside of conscious control and provide information regarding neuropsychological processes related to deception, cognitive effort, and familiarity. This study examined the incremental utility of pupillometry on the Test of Memory Malingering (TOMM) in classifying individuals with verified traumatic brain injury (TBI), individuals simulating TBI, and healthy comparisons. Method: Participants were 177 adults across three groups: verified TBI (n = 53), feigned cognitive impairment due to TBI (SIM, n = 52), and healthy comparisons (HC, n = 72). Results: Logistic regression and ROC curve analyses identified several pupil indices that discriminated the groups. Pupillometry discriminated best for the comparison of greatest clinical interest, verified TBI versus simulators, adding information beyond traditional accuracy scores. Simulators showed evidence of greater cognitive load than both groups instructed to perform at their best ability (HC and TBI). Additionally, the typically robust phenomenon of dilating to familiar stimuli was relatively diminished among TBI simulators compared to TBI and HC. This finding may reflect competing, interfering effects of cognitive effort that are frequently observed in pupillary reactivity during deception. However, the familiarity effect appeared on nearly half the trials for SIM participants. Among those trials evidencing the familiarity response, selection of the unfamiliar stimulus (i.e., dilation-response inconsistency) was associated with a sizeable increase in likelihood of being a simulator. Conclusions: Taken together, these findings provide strong support for multimethod assessment: adding unique performance assessments such as biometrics to standard accuracy scores. Continued study of pupillometry will enhance the identification of simulators who are not detected by traditional performance validity test scoring metrics.
10
Ovsiew GP, Carter DA, Rhoads T, Resch ZJ, Jennette KJ, Soble JR. Concordance Between Standard and Abbreviated Administrations of the Test of Memory Malingering: Implications for Streamlining Performance Validity Assessment. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09408-y]
11
Braw Y. Response Time Measures as Supplementary Validity Indicators in Forced-Choice Recognition Memory Performance Validity Tests: A Systematic Review. Neuropsychol Rev 2021; 32:71-98. [PMID: 33821424] [DOI: 10.1007/s11065-021-09499-z]
Abstract
Performance validity tests (PVTs) based on the forced-choice recognition memory (FCRM) paradigm are commonly used for the detection of noncredible performance. Examinees' response times (RTs) are affected by cognitive processes associated with deception and can also be gathered without lengthening the duration of the assessment. Consequently, interest in the utility of these measures as supplementary validity indicators in FCRM-PVTs has grown over the years. The current systematic review summarizes both clinical and simulation (i.e., healthy participants simulating cognitive impairment) studies of RTs in FCRM-PVTs. The findings of 25 peer-reviewed articles (n = 26 empirical studies) indicate that noncredible performance in FCRM-PVTs is associated with longer RTs. Additionally, there are indications that noncredible performance is associated with larger variability in RTs. RT measures, however, have lower discrimination capacity than conventional accuracy measures. Their utility may therefore lie in reaching decisions regarding cases with border zone accuracy scores, as well as aiding in the detection of more sophisticated examinees who are aware of the use of accuracy-based validity indicators in FCRM-PVTs. More research, however, is required before these measures are incorporated in daily practice and clinical decision-making processes.
Affiliation(s)
- Yoram Braw
- Department of Psychology, Ariel University, Ariel, Israel
12
Cerny BM, Rhoads T, Leib SI, Jennette KJ, Basurto KS, Durkin NM, Ovsiew GP, Resch ZJ, Soble JR. Mean response latency indices on the Victoria Symptom Validity Test do not contribute meaningful predictive value over accuracy scores for detecting invalid performance. Appl Neuropsychol Adult 2021; 29:1304-1311. [PMID: 33470869] [DOI: 10.1080/23279095.2021.1872575]
Abstract
The utility of the Victoria Symptom Validity Test (VSVT) as a performance validity test (PVT) has been primarily established using response accuracy scores. However, the degree to which response latency may contribute to accurate classification of performance invalidity over and above accuracy scores remains understudied. Therefore, this study investigated whether combining VSVT accuracy and response latency scores would increase predictive utility beyond use of accuracy scores alone. Data from a mixed clinical sample of 163 patients, who were administered the VSVT as part of a larger neuropsychological battery, were analyzed. At least four independent criterion PVTs were used to establish validity groups (121 valid/42 invalid). Logistic regression models examining each difficulty level revealed that all VSVT measures were useful in classifying validity groups, both independently and when combined. Individual predictor classification accuracy ranged from 77.9% to 81.6%, indicating acceptable to excellent discriminability across the validity indices. The results support the value of both accuracy and latency scores on the VSVT for identifying performance invalidity, although accuracy scores had superior classification statistics, and mean latency indices provided no unique benefit for classification beyond the accuracy scores alone.
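The discriminability figures in this abstract rest on ROC/AUC analysis. As a sketch of the underlying idea, with toy latency data rather than the study's: the AUC equals the probability that a randomly chosen invalid case scores higher (here, responds slower) than a randomly chosen valid case, the Mann-Whitney formulation.

```python
def auc(scores_invalid, scores_valid):
    """Area under the ROC curve via the Mann-Whitney relationship:
    the fraction of (invalid, valid) pairs in which the invalid case
    has the higher score, counting ties as half."""
    pairs = [(i, v) for i in scores_invalid for v in scores_valid]
    wins = sum(1.0 if i > v else 0.5 if i == v else 0.0 for i, v in pairs)
    return wins / len(pairs)

# Toy response latencies in seconds (hypothetical, not study data):
# invalid performers tend to respond more slowly.
invalid = [2.9, 3.4, 4.1, 3.8]
valid = [1.2, 1.9, 2.5, 3.0]
print(auc(invalid, valid))  # 15 of 16 pairs ordered correctly -> 0.9375
```

An AUC of 0.5 means chance-level separation; values approaching 1.0 mean the index nearly always ranks invalid above valid cases.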
Affiliation(s)
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Sophie I Leib
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S Basurto
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Nicole M Durkin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
13
A Systematic Review and Meta-Analysis of the Diagnostic Accuracy of the Advanced Clinical Solutions Word Choice Test as a Performance Validity Test. Neuropsychol Rev 2021; 31:349-359. [PMID: 33447952] [DOI: 10.1007/s11065-020-09468-y]
Abstract
Thorough assessment of performance validity has become an established standard of practice in neuropsychological assessment. While there has been a large focus on the development and cross-validation of embedded performance validity tests (PVTs) in recent years, new freestanding PVTs have also been developed, including the Word Choice Test (WCT) as part of the Advanced Clinical Solutions Effort System. While the WCT's general utility for identifying invalid performance has been demonstrated in the decade since its initial publication, optimal cut-scores and associated psychometric properties have varied widely across studies. This study sought to synthesize the existing diagnostic accuracy literature regarding the WCT via a systematic review and to conduct a meta-analysis to determine the performance validity cut-score that best maximizes sensitivity while maintaining acceptable specificity. A systematic search of the literature yielded 14 studies for synthesis, eight of which were available for meta-analysis. Meta-analytic results revealed an optimal cut-score of ≤ 42 with 54% sensitivity and 93% specificity for identifying invalid neuropsychological test performance. Collectively, the WCT demonstrated adequate diagnostic accuracy as a PVT across a variety of populations. Recommendations for future studies are also provided.
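Sensitivity and specificity figures such as the 54%/93% reported here translate into predictive values only at an assumed base rate of invalid performance; the 30% base rate in the sketch below is purely illustrative, not a figure from the review:

```python
def predictive_values(sensitivity, specificity, base_rate):
    """Apply Bayes' theorem to convert a cut-score's sensitivity and
    specificity into positive and negative predictive values at a
    given base rate of invalid performance."""
    tp = sensitivity * base_rate              # true positives
    fp = (1 - specificity) * (1 - base_rate)  # false positives
    fn = (1 - sensitivity) * base_rate        # false negatives
    tn = specificity * (1 - base_rate)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Meta-analytic operating characteristics from the abstract, with an
# assumed (hypothetical) 30% base rate of invalid performance.
ppv, npv = predictive_values(0.54, 0.93, 0.30)
print(round(ppv, 2), round(npv, 2))
```

The asymmetry this produces (a failing score is far more informative than a passing one at modest sensitivity) is the usual rationale for pairing high-specificity cut-scores with multiple PVTs rather than relying on any single test.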
14
Victoria Symptom Validity Test: A Systematic Review and Cross-Validation Study. Neuropsychol Rev 2021; 31:331-348. [PMID: 33433828 DOI: 10.1007/s11065-021-09477-5] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Received: 06/15/2020] [Accepted: 01/03/2021] [Indexed: 12/12/2022]
Abstract
The Victoria Symptom Validity Test (VSVT) is a performance validity test (PVT) with over two decades of empirical backing, although methodological limitations within the extant literature restrict its clinical and research generalizability. Chief among these constraints is the limited consensus on the most accurate VSVT index and the most appropriate cut-scores within each validity index. The current systematic review synthesizes existing VSVT validation studies and provides additional cross-validation in an independent sample using a known-groups design. We completed a systematic search of the literature, identifying 17 peer-reviewed studies for synthesis (7 simulation designs, 7 differential prevalence designs, and 3 known-groups designs). The independent cross-validation sample consisted of 200 mixed clinical neuropsychiatric patients referred for outpatient neuropsychological evaluation. Across all indices, Total item accuracy produced the strongest psychometric properties at an optimal cut-score of ≤ 40 (62% sensitivity/88% specificity). However, ROC curve analyses for all VSVT indices yielded statistically significant areas under the curve (AUCs = .73-.81), suggestive of moderate classification accuracy. Cut-scores derived using the independent cross-validation sample converged with some previous findings supporting cut-scores of ≤ 22 for Easy item accuracy and ≤ 40 for Total item accuracy, although divergent findings were noted for Difficult item accuracy. Overall, VSVT validity indicators have adequate diagnostic accuracy across populations, with the current study providing additional support for its use as a psychometrically sound PVT in clinical settings. However, caution is recommended among patients with certain verified clinical conditions (e.g., dementia) and those with pronounced working memory deficits, due to concerns for increased risk of false positives.
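The AUC values reported in such ROC analyses have a useful rank-based interpretation: the area under the ROC curve equals the probability that a randomly drawn invalid-performance case scores below a randomly drawn valid case (the Mann-Whitney statistic, with ties counted half). A minimal sketch of that computation, independent of any actual VSVT data:

```python
def auc_lower_flags_invalid(scores_invalid, scores_valid):
    """Area under the ROC curve for a test where LOWER scores indicate
    invalid performance, computed as the proportion of (invalid, valid)
    pairs in which the invalid case scores lower (ties count 0.5).
    This is the Mann-Whitney interpretation of the AUC."""
    favorable = 0.0
    for i in scores_invalid:
        for v in scores_valid:
            if i < v:
                favorable += 1.0
            elif i == v:
                favorable += 0.5
    return favorable / (len(scores_invalid) * len(scores_valid))
```

An AUC of .5 means the index cannot separate the groups at all, while values in the .73-.81 range (as reported here) correspond to moderate separation before any specific cut-score is chosen.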
15
Omer E, Elbaum T, Braw Y. Identifying Feigned Cognitive Impairment: Investigating the Utility of Diffusion Model Analyses. Assessment 2020; 29:198-208. [PMID: 32988242 DOI: 10.1177/1073191120962317] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/17/2022]
Abstract
Forced-choice performance validity tests are routinely used for the detection of feigned cognitive impairment. The drift diffusion model deconstructs performance into distinct cognitive processes using accuracy and response time measures. It thereby offers a unique approach for gaining insight into examinees' speed-accuracy trade-offs and the cognitive processes that underlie their performance. The current study is the first to perform such analyses using a well-established forced-choice performance validity test. To achieve this aim, archival data of healthy participants, either simulating cognitive impairment in the Word Memory Test or performing it to the best of their ability, were analyzed using the EZ-diffusion model (N = 198). The groups differed in the three model parameters, with drift rate emerging as the best predictor of group membership. These findings provide initial evidence for the usefulness of the drift diffusion model in clarifying the cognitive processes underlying feigned cognitive impairment and encourage further research.
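The EZ-diffusion model named above has closed-form estimators (Wagenmakers et al., 2007) that recover the three model parameters, drift rate, boundary separation, and non-decision time, from just three summary statistics: proportion correct, response-time variance, and mean response time. A minimal sketch of those equations, not the study's exact analysis pipeline:

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion estimates: drift rate v, boundary
    separation a, and non-decision time Ter, from proportion correct
    (pc), RT variance (vrt), and mean RT (mrt), with RTs in seconds.
    pc must not be exactly 0, 0.5, or 1 (apply an edge correction
    first); s is the conventional scaling parameter."""
    s2 = s * s
    logit = math.log(pc / (1.0 - pc))
    x = logit * (logit * pc * pc - logit * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x ** 0.25   # drift rate
    a = s2 * logit / v                                  # boundary separation
    y = -v * a / s2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    ter = mrt - mdt                                     # non-decision time
    return v, a, ter
```

Because the estimators depend only on accuracy and RT summaries, a simulator who slows down and errs deliberately shifts all three parameters at once, which is why drift rate (the quality of evidence accumulation) can separate feigned from genuine performance even when raw accuracy alone is ambiguous.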
16
Abramson DA, Resch ZJ, Ovsiew GP, White DJ, Bernstein MT, Basurto KS, Soble JR. Impaired or invalid? Limitations of assessing performance validity using the Boston Naming Test. Appl Neuropsychol Adult 2020; 29:486-491. [DOI: 10.1080/23279095.2020.1774378] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Indexed: 10/24/2022]
Affiliation(s)
- Dayna A. Abramson
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Zachary J. Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Daniel J. White
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Matthew T. Bernstein
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S. Basurto
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA