1. Brown CC, Stewart-Willis JJ. A preliminary investigation of the utility of the Word Memory Test Immediate Recognition trial as a screener for noncredible performance. Applied Neuropsychology: Adult 2024:1-5. PMID: 39099003. DOI: 10.1080/23279095.2024.2387233.
Abstract
The assessment of performance validity is an important consideration in the interpretation of neuropsychological data. However, commonly used performance validity tests such as the Test of Memory Malingering (TOMM) and Word Memory Test (WMT) have lengthy administration times (20-30 minutes). Alternatively, using a screener of performance validity (e.g., the TOMM T1 or TOMMe10) has proven to be an effective method of assessing performance validity while conserving time. The present study investigates the use of WMT Immediate Recognition (IR) trial scores as a screening measure for performance validity in an archival mTBI polytrauma sample (n = 48). Results show that the WMT IR predicts WMT Delayed Recognition (DR) trial performance with a high degree of accuracy across a range of base rates, suggesting that the WMT IR is a useful screening measure for noncredible performance. Clinical implications and the selection of an optimal cutoff are discussed.
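A screener's accuracy alone does not determine its practical value; positive and negative predictive values shift with the base rate of noncredible performance. The following is a minimal illustration of that base-rate dependence, using hypothetical sensitivity and specificity values rather than figures from the study:

    # Predictive values of a validity screener across base rates (Python).
    # sens/spec below are hypothetical placeholders, not study results.
    def predictive_values(sens, spec, base_rate):
        tp = sens * base_rate              # true positives
        fp = (1 - spec) * (1 - base_rate)  # false positives
        fn = (1 - sens) * base_rate        # false negatives
        tn = spec * (1 - base_rate)        # true negatives
        return tp / (tp + fp), tn / (tn + fn)  # (PPV, NPV)

    for br in (0.10, 0.30, 0.50):
        ppv, npv = predictive_values(sens=0.80, spec=0.90, base_rate=br)
        print(f"base rate {br:.0%}: PPV = {ppv:.2f}, NPV = {npv:.2f}")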
Affiliation(s)
- C C Brown
- Neuropsychology Department, Bay Pines Veterans Affairs Health Care System, Bay Pines, FL, USA
- J J Stewart-Willis
- Neuropsychology Department, Bay Pines Veterans Affairs Health Care System, Bay Pines, FL, USA
2. Kanser RJ, Rapport LJ, Hanks RA, Patrick SD. Time and money: Exploring enhancements to performance validity research designs. Applied Neuropsychology: Adult 2024;31:256-263. PMID: 34932422. DOI: 10.1080/23279095.2021.2019740.
Abstract
INTRODUCTION: The study examined the effect of preparation time and financial incentives on healthy adults' ability to simulate traumatic brain injury (TBI) during neuropsychological evaluation. METHOD: A retrospective comparison was made between two TBI simulator group designs: a traditional design employing a single session of standard coaching immediately before participation (SIM-SC; n = 46) and a novel design that provided financial incentive and preparation time (SIM-IP; n = 49). Both groups completed an ecologically valid neuropsychological test battery that included widely used cognitive tests and five common performance validity tests (PVTs). RESULTS: Compared to SIM-SC, SIM-IP performed significantly worse and had higher rates of impairment on tests of processing speed and executive functioning (Trails A and B). SIM-IP were more likely than SIM-SC to avoid detection on one of the PVTs and performed somewhat better on three of the PVTs, but the effects were small and nonsignificant. SIM-IP did not demonstrate significantly higher rates of successful simulation (i.e., performing impaired on cognitive tests with <2 PVT failures). Overall, the rate of successful simulation was approximately 40% under a liberal criterion that defined cognitive impairment as performance >1 SD below the normative mean. Under a more rigorous impairment criterion (>1.5 SD below the normative mean), successful simulation approached 35%. CONCLUSIONS: Incentive and preparation time appear to add limited incremental effect over traditional, single-session coaching analog studies of TBI simulation. Moreover, these design modifications did not translate to meaningfully higher rates of successful simulation and avoidance of detection by PVTs.
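The "successful simulation" outcome described above is a simple conjunction of two conditions and can be stated as a decision rule. A minimal sketch, with the score format and inputs assumed for illustration (the study's exact scoring procedure is not reproduced here):

    # "Successful simulation": impaired on cognitive testing while
    # failing fewer than 2 PVTs. Inputs are hypothetical z-scores.
    def successful_simulation(cognitive_z_scores, pvt_failures, sd_cutoff=1.0):
        impaired = any(z < -sd_cutoff for z in cognitive_z_scores)
        return impaired and pvt_failures < 2

    scores = [-1.2, -0.4, -0.9]
    print(successful_simulation(scores, pvt_failures=1, sd_cutoff=1.0))  # True (liberal, >1 SD)
    print(successful_simulation(scores, pvt_failures=1, sd_cutoff=1.5))  # False (rigorous, >1.5 SD)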
Affiliation(s)
- Robert J Kanser
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Department of Physical Medicine and Rehabilitation, University of North Carolina, Chapel Hill, NC, USA
- Lisa J Rapport
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Robin A Hanks
- Department of Physical Medicine and Rehabilitation, Wayne State University, Detroit, MI, USA
- Sarah D Patrick
- Department of Psychology, Wayne State University, Detroit, MI, USA
3. Basso MR, Guzman D, Hoffmeister J, Mulligan R, Whiteside DM, Combs D. Use of perceptual memory as a performance validity indicator: initial validation with simulated mild traumatic brain injury. J Clin Exp Neuropsychol 2024;46:55-66. PMID: 38346160. DOI: 10.1080/13803395.2024.2314991.
Abstract
INTRODUCTION: Many commonly employed performance validity tests (PVTs) are several decades old and vulnerable to compromise, leading to a need for novel instruments. Because implicit/non-declarative memory may be robust to brain damage, tasks that rely upon such memory may serve as effective PVTs. Using a simulation design, this experiment evaluated whether novel tasks that rely upon perceptual memory hold promise as PVTs. METHOD: Sixty healthy participants were instructed to simulate symptoms of mild traumatic brain injury (TBI), and they were compared to a group of 20 honestly responding individuals. Simulator groups received varying levels of information concerning TBI symptoms, resulting in naïve, sophisticated, and test-coached groups. The Word Memory Test, Test of Memory Malingering, and California Verbal Learning Test-II Forced Choice Recognition Test were administered. To assess perceptual memory, selected images from the Gollin Incomplete Figures and Mooney Closure Test were presented as visual perception tasks. After brief delays, memory for the images was assessed. RESULTS: No group differences emerged on the perception trials of the Gollin and Mooney figures, but simulators remembered fewer images than the honest responders. Simulator groups differed on the standard PVTs, but they performed equivalently on the Gollin and Mooney figures, implying robustness to coaching. At a criterion of 90% specificity, the Gollin and Mooney figures achieved at least 90% sensitivity, comparing favorably to the standard PVTs. CONCLUSIONS: The Gollin and Mooney figures hold promise as novel PVTs. As perceptual memory tests, they may be relatively robust to brain damage, but future research involving clinical samples is necessary to substantiate this assertion.
Affiliation(s)
- Ryan Mulligan
- VA Central Western Massachusetts, Leeds, Massachusetts, USA
4. Crişan I, Sava FA, Maricuţoiu LP. Strategies of feigning mild head injuries related to validity indicators and types of coaching: Results of two experimental studies. Applied Neuropsychology: Adult 2023;30:705-715. PMID: 34510965. DOI: 10.1080/23279095.2021.1973004.
Abstract
OBJECTIVE: In this paper, we analyzed differences between uncoached, symptom-coached, and test-coached simulators regarding strategies for feigning mild head injuries. METHOD: Healthy undergraduates (n = 67 in the first study; n = 48 in the second study), randomized into three simulator groups, were assessed with four experimental memory tests. In the first study, tests were administered face-to-face, while in the second study the procedure was adapted for online testing. RESULTS: Online simulators approached testing differently than face-to-face participants (U tests < 920, p < .05). Nevertheless, both samples favored strategies such as memory loss, error making, concentration difficulties, and slow responding. Except for slow responding and concentration difficulties, the favored strategies correlated with validity indicators. In the first study, test-coached simulators (M = 4.58-5.68, SD = 2.2-3) used these strategies less than uncoached participants (M = 5.25-5.88, SD = 2.26-2.84). In the second study, test-coached participants (M = 3.8-5.6, SD = 1.51-2.2) employed them less than uncoached (M = 6.21-7.29, SD = 1.25-1.85) and symptom-coached participants (M = 6.14-6.79, SD = 1.69-2.76). DISCUSSION: Similarities and differences between online and face-to-face assessments are discussed, and we recommend combining heterogeneous indicators to detect feigning strategies.
Affiliation(s)
- Iulia Crişan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Florin Alin Sava
- Department of Psychology, West University of Timişoara, Timişoara, Romania
5. Cutler L, Greenacre M, Abeare CA, Sirianni CD, Roth R, Erdodi LA. Multivariate models provide an effective psychometric solution to the variability in classification accuracy of D-KEFS Stroop performance validity cutoffs. Clin Neuropsychol 2023;37:617-649. PMID: 35946813. DOI: 10.1080/13854046.2022.2073914.
Abstract
Objective: The study was designed to expand on the results of previous investigations of the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions. Method: The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). Results: Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 on the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. Conclusions: A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the "invalid-before-impaired" paradox.
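The multivariate rule reported above is explicit enough to express directly as code. A minimal sketch of that decision rule, with the input format assumed for illustration:

    # Multivariate D-KEFS Stroop validity flag, per the abstract:
    # flag if >=3 of the 4 subtests have age-corrected scaled scores
    # (ACSS) <= 6, or >=2 of the 4 have ACSS <= 5.
    def dkefs_stroop_mv_flag(acss):
        assert len(acss) == 4  # one ACSS per D-KEFS Stroop subtest
        return sum(s <= 6 for s in acss) >= 3 or sum(s <= 5 for s in acss) >= 2

    print(dkefs_stroop_mv_flag([6, 6, 6, 9]))    # True: three scores <= 6
    print(dkefs_stroop_mv_flag([5, 5, 10, 10]))  # True: two scores <= 5
    print(dkefs_stroop_mv_flag([6, 7, 9, 10]))   # False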
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Matthew Greenacre
- Schulich School of Medicine, Western University, London, Ontario, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Robert Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
6. Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing performance validity cutoffs on the Boston Naming Test (BNT) is specific, but insensitive to non-credible responding. Dev Neuropsychol 2022;47:17-31. PMID: 35157548. DOI: 10.1080/87565641.2022.2038602.
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
Affiliation(s)
- Shayna Nussbaum
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson
- Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
7. Omer E, Braw Y. The effects of cognitive load on strategy utilization in a forced-choice recognition memory performance validity test. European Journal of Psychological Assessment 2022. DOI: 10.1027/1015-5759/a000636.
Abstract
Despite the importance of detecting feigned cognitive impairment, we have a limited understanding of the theoretical foundation of the phenomenon and the factors that affect it. Studies regarding the formation and implementation of feigning strategies during neuropsychological assessments are scarce, though there are indications that such strategies tax cognitive resources. The current study assessed the effect of a cognitive load manipulation on feigning strategies. To achieve this aim, we utilized a 2 × 2 experimental design; condition (simulators/honest responders) and cognitive load (load/no load) were manipulated while participants (N = 154) performed a well-established performance validity test (PVT). The cognitive load manipulation reduced the quantity of feigning strategies while also affecting their composition (i.e., strategies tended to be more intuitive). This suggests that reduced cognitive resources among those feigning cognitive impairment may affect the use of in-vivo feigning strategies. These findings, though preliminary, will hopefully encourage further research to uncover the cognitive factors involved in the utilization of feigning strategies in neuropsychological assessments.
Affiliation(s)
- Elad Omer
- Department of Psychology, Ariel University, Israel
- Yoram Braw
- Department of Psychology, Ariel University, Israel
8. Mulligan R, Basso MR, Hoffmeister J, Lau L, Whiteside DM, Combs D. Classification accuracy of the Word Memory Test genuine memory impairment index. J Clin Exp Neuropsychol 2021;43:655-662. PMID: 34686108. DOI: 10.1080/13803395.2021.1988520.
Abstract
OBJECTIVE: The Word Memory Test (WMT) assesses non-credible performance in neuropsychological assessment. To mitigate the risk of false positives among patients with severe cognitive dysfunction, the Genuine Memory Impairment Profile (GMIP) was derived. Only a modest number of investigations have evaluated its classification accuracy among clinical samples, leaving the GMIP's accuracy largely uncertain. Accordingly, a simulation experiment evaluated the classification accuracy of the GMIP in a group of healthy individuals coached to simulate mild traumatic brain injury (TBI)-related memory impairment on the WMT. PARTICIPANTS AND METHODS: Eighty healthy individuals were randomly assigned to one of four experimental groups. One group was provided superficial information concerning TBI symptoms (naïve simulators), another was provided extensive information concerning TBI symptoms (sophisticated simulators), and a third group was provided extensive TBI symptom information plus tactics to evade detection by performance validity tests (PVTs) (test-coached). An honestly responding control group was directed to give their best performance. All participants were administered the California Verbal Learning Test-2 (CVLT-2) and the WMT. RESULTS: Among the TBI simulators, 90% of the test-coached, 95% of the sophisticated, and 100% of the naïve simulators were correctly classified as exaggerating memory impairment by the primary WMT indices. The simulator groups performed worse than the honestly responding group on the CVLT-2. Of those who exceeded the WMT cutoffs, 60%, 27%, and 6% of the naïve, sophisticated, and test-coached simulators, respectively, manifested the GMIP. CONCLUSIONS: The GMIP is apt to misclassify individuals as having genuine memory impairment, especially if a naïve or unsophisticated effort is made to exert non-credible performance. Indeed, individuals who employ the least sophisticated efforts to exaggerate cognitive impairment appear most likely to manifest the GMIP. The GMIP should be used cautiously to discriminate genuine impairment from non-credible performance, especially among people with mild TBI.
Affiliation(s)
- Ryan Mulligan
- Department of Psychology, University of Tulsa, Tulsa, USA
- Michael R Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, USA
- Lily Lau
- Department of Psychology, University of Tulsa, Tulsa, USA
- Douglas M Whiteside
- Department of Rehabilitation Medicine, University of Minnesota, Minneapolis, USA
- Dennis Combs
- Department of Psychology, University of Texas, Austin, USA
9. Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021;49:179-213. PMID: 34420986. DOI: 10.3233/nre-218020.
Abstract
OBJECTIVE: This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and to explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD: Archival data were collected from 167 patients (52.4% male; mean age = 39.7) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS: MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False-positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS: Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to their individual components. Instrumentation artifacts are endemic to PVTs and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical, models. As the number and severity of embedded PVT failures accumulate, assessors must consider the possibility of non-credible presentation and its clinical implications for neurorehabilitation.
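One intuition behind controlling false-positive rates with more stringent multivariate cutoffs: even if each embedded PVT occasionally fails a credible examinee, the probability of several simultaneous failures drops rapidly as the required number of failures increases. The sketch below assumes independent tests with a common, hypothetical per-test false-positive rate; real PVTs are correlated, so this is only an illustration, not the study's model:

    # P(credible examinee fails >= k of n embedded PVTs) under an
    # (unrealistic) independence assumption with per-test FPR = 0.10.
    from math import comb

    def p_at_least_k(n, k, fpr):
        return sum(comb(n, i) * fpr**i * (1 - fpr)**(n - i)
                   for i in range(k, n + 1))

    for k in range(1, 6):
        print(f"P(>= {k} of 5 failures): {p_at_least_k(5, k, 0.10):.4f}")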
10. Sullivan KA, Bennett D. An experimental study of the effects of biased responding on the Modified Rivermead Post-concussion Symptoms Questionnaire and validity indicators. Psychological Injury and Law 2021. DOI: 10.1007/s12207-021-09419-9.
11. Abeare K, Romero K, Cutler L, Sirianni CD, Erdodi LA. Flipping the script: Measuring both performance validity and cognitive ability with the Forced Choice Recognition trial of the RCFT. Percept Mot Skills 2021;128:1373-1408. PMID: 34024205. PMCID: PMC8267081. DOI: 10.1177/00315125211019704.
Abstract
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFT FCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery, and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 and ≤17), the RCFT FCR remained specific (.84-1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFT FCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFT FCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
Affiliation(s)
- Kaitlyn Abeare
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Kristoffer Romero
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
12. Carvalho LDF, Reis A, Colombarolli MS, Pasian SR, Miguel FK, Erdodi LA, Viglione DJ, Giromini L. Discriminating feigned from credible PTSD symptoms: a validation of a Brazilian version of the Inventory of Problems-29 (IOP-29). Psychological Injury and Law 2021. DOI: 10.1007/s12207-021-09403-3.
13. Omer E, Elbaum T, Braw Y. Identifying feigned cognitive impairment: Investigating the utility of diffusion model analyses. Assessment 2020;29:198-208. PMID: 32988242. DOI: 10.1177/1073191120962317.
Abstract
Forced-choice performance validity tests are routinely used for the detection of feigned cognitive impairment. The drift diffusion model deconstructs performance into distinct cognitive processes using accuracy and response time measures. It thereby offers a unique approach for gaining insight into examinees' speed-accuracy trade-offs and the cognitive processes that underlie their performance. The current study is the first to perform such analyses using a well-established forced-choice performance validity test. To achieve this aim, archival data of healthy participants, either simulating cognitive impairment in the Word Memory Test or performing it to the best of their ability, were analyzed using the EZ-diffusion model (N = 198). The groups differed in the three model parameters, with drift rate emerging as the best predictor of group membership. These findings provide initial evidence for the usefulness of the drift diffusion model in clarifying the cognitive processes underlying feigned cognitive impairment and encourage further research.
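For readers unfamiliar with the EZ-diffusion model, its three parameters (drift rate, boundary separation, non-decision time) have closed-form estimates from three behavioral summaries: proportion correct, the variance of correct response times, and the mean correct response time (Wagenmakers, van der Maas, & Grasman, 2007). A minimal sketch of those standard estimating equations; this is not the study's analysis code:

    # EZ-diffusion parameter recovery (Wagenmakers et al., 2007).
    # pc: proportion correct; vrt: variance of correct RTs (s^2);
    # mrt: mean correct RT (s). pc must not be exactly 0, 0.5, or 1
    # (the original paper describes an edge correction for those cases).
    from math import log, exp, copysign

    def ez_diffusion(pc, vrt, mrt, s=0.1):
        L = log(pc / (1 - pc))                 # logit of accuracy
        x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
        v = copysign(s * x**0.25, pc - 0.5)    # drift rate
        a = s**2 * L / v                       # boundary separation
        y = -v * a / s**2
        mdt = (a / (2 * v)) * (1 - exp(y)) / (1 + exp(y))
        return v, a, mrt - mdt                 # (v, a, non-decision time)

    v, a, ter = ez_diffusion(pc=0.85, vrt=0.10, mrt=0.60)
    print(f"drift = {v:.3f}, boundary = {a:.3f}, Ter = {ter:.3f}")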
14. Abeare CA, Hurtubise JL, Cutler L, Sirianni C, Brantuo M, Makhzoum N, Erdodi LA. Introducing a forced choice recognition trial to the Hopkins Verbal Learning Test – Revised. Clin Neuropsychol 2020;35:1442-1470. DOI: 10.1080/13854046.2020.1779348.
Affiliation(s)
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nadeen Makhzoum
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
15. Bhowmick C, Hirst R, Green P. Comparison of the Word Memory Test and the Test of Memory Malingering in detecting invalid performance in neuropsychological testing. Applied Neuropsychology: Adult 2019;28:486-496. PMID: 31519112. DOI: 10.1080/23279095.2019.1658585.
Abstract
Given the prevalence of compensation-seeking patients who exaggerate or fabricate their symptoms, the assessment of performance and symptom validity throughout testing is vital in neuropsychological evaluations. Two of the most commonly utilized performance validity tests (PVTs) are the Word Memory Test (WMT) and the Test of Memory Malingering (TOMM). While both have proven successful in detecting invalid performance, some studies suggest greater sensitivity for the WMT relative to the TOMM. To improve upon previous research, this study compared performance in individuals who completed both the WMT and the TOMM during a neuropsychological evaluation. Participants included 268 cases from a clinical private practice consisting primarily of disability claimants. A one-way multivariate analysis of variance (MANOVA) compared the neuropsychological performance of participants who passed both PVTs (n = 198) versus those who failed the WMT but passed the TOMM (n = 70). Global suppression of neuropsychological scores was found for participants who failed the WMT but passed the TOMM, and they also reported more psychiatric symptoms on questionnaires relative to those who passed both PVTs. These findings suggest that those passing the TOMM but failing the WMT demonstrated performance invalidity, which illustrates the WMT's greater sensitivity.
Affiliation(s)
- Chloe Bhowmick
- Department of Psychology, Palo Alto University, Palo Alto, CA, USA
- Rayna Hirst
- Department of Psychology, Palo Alto University, Palo Alto, CA, USA
- Paul Green
- Greens Publishing, Kelowna, British Columbia, Canada
16. Elbaum T, Golan L, Lupu T, Wagner M, Braw Y. Establishing supplementary response time validity indicators in the Word Memory Test (WMT) and directions for future research. Applied Neuropsychology: Adult 2019;27:403-413. DOI: 10.1080/23279095.2018.1555161.
Affiliation(s)
- Tomer Elbaum
- Department of Psychology, Ariel University, Ariel, Israel
- Department of Industrial Engineering & Management, Ariel University, Ariel, Israel
- Lior Golan
- Department of Psychology, Ariel University, Ariel, Israel
- Tamar Lupu
- Department of Psychology, Ariel University, Ariel, Israel
- Michael Wagner
- Department of Industrial Engineering & Management, Ariel University, Ariel, Israel
- Yoram Braw
- Department of Psychology, Ariel University, Ariel, Israel
17. Binder LM. The patient-psychologist relationship and informed consent in neuropsychological evaluations. Clin Neuropsychol 2018;33:988-1015. PMID: 30545281. DOI: 10.1080/13854046.2018.1529816.
Abstract
Objective: To discuss specific issues regarding consent for neuropsychological evaluation and the patient-psychologist relationship within the context of the Ethics Code of the American Psychological Association and relevant literature. Method: The author makes recommendations based on the Ethics Code and published sources. This article is advisory and does not prescribe ethical practice. Conclusions: The presence or absence of a patient-psychologist relationship is an essential consideration. The consent process varies, depending on the absence or existence of a patient-psychologist relationship and the type of evaluation. Circumstances when the examiner has the option of establishing a patient-psychologist relationship and guidelines regarding multiple relationships affecting legal testimony by treating providers are considered. Differences in the consent process between clinical and forensic evaluations, and the need for tailoring the consent process for the specific type of clinical or forensic evaluation, are emphasized. Specific provisions that can be included in consent forms in clinical and forensic evaluations, the rationale for their inclusion, and the benefits of consent to both the examiner and the examinee are considered. Circumstances are defined that dictate the need for assent rather than consent. The consent process is discussed in relation to evaluations of fitness for duty and civil capacity. Mandatory reporting of impaired drivers in some jurisdictions, fee agreements, and other issues are considered. Guidance is provided on role limitations in legal testimony by a clinical evaluator that addresses conflicting recommendations now in the literature.
18. Elbaum T, Lupu T, Golan L, Wagner M, Braw Y. Eye tracking as a means to detect feigned cognitive impairment in the Word Memory Test. Applied Neuropsychology: Adult 2018;27:49-61. PMID: 30183408. DOI: 10.1080/23279095.2018.1480483.
Abstract
Eye movements have shown initial promise for the detection of deception and may be harder to consciously manipulate than conventional accuracy measures. Therefore, we integrated an eye tracker with the Word Memory Test (WMT) and tested its usefulness for the detection of feigned cognitive impairment. As part of the study, simulators (n = 44) and honest controls (n = 41) performed the WMT's immediate-recognition (IR) subtest while their eye movements were recorded. In comparison to the control group, simulators spent less time gazing at relevant stimuli, spent more time gazing at irrelevant stimuli, and had a lower saccade rate. Group classification using a scale that combined the eye movement measures with the WMT's accuracy measure showed tentative promise (i.e., it enhanced classification compared to using the accuracy measure as the sole predictor of group membership). Overall, integrating an eye tracker with the WMT was found to be feasible, and the eye movement measures showed initial promise for the detection of feigned cognitive impairment. Moreover, the eye movement measures proved useful in enhancing our understanding of the strategies utilized by simulators and the cognitive processes that affect their behavior. While the findings are clearly preliminary, we hope they will encourage further research on these promising psychophysiological measures.
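A combined scale of this kind is commonly built as a weighted combination of predictors; logistic regression is one standard choice. The sketch below is a generic illustration with simulated data and hypothetical feature names, not the authors' scale:

    # Combining eye-movement features with a WMT accuracy score via
    # logistic regression. All data and feature choices are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 85  # comparable to the study's 44 simulators + 41 controls
    y = rng.integers(0, 2, n)  # 1 = simulator, 0 = honest control
    X = np.column_stack([
        rng.normal(0.6 - 0.1 * y, 0.1),  # dwell proportion, relevant stimuli
        rng.normal(0.2 + 0.1 * y, 0.1),  # dwell proportion, irrelevant stimuli
        rng.normal(3.0 - 0.5 * y, 0.5),  # saccade rate (per second)
        rng.normal(95 - 20 * y, 10),     # WMT accuracy (%)
    ])

    model = LogisticRegression(max_iter=1000)
    print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())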
Affiliation(s)
- Tomer Elbaum
- Department of Psychology, Ariel University, Ariel, Israel
- Tamar Lupu
- Department of Psychology, Ariel University, Ariel, Israel
- Lior Golan
- Department of Psychology, Ariel University, Ariel, Israel
- Michael Wagner
- Department of Psychology, Ariel University, Ariel, Israel
- Yoram Braw
- Department of Psychology, Ariel University, Ariel, Israel
- Emotion and Cognition Research Center, Shalvata Mental Health Center, Hod HaSharon, Israel
19. An KY, Charles J, Ali S, Enache A, Dhuga J, Erdodi LA. Reexamining performance validity cutoffs within the Complex Ideational Material and the Boston Naming Test–Short Form using an experimental malingering paradigm. J Clin Exp Neuropsychol 2018;41:15-25. DOI: 10.1080/13803395.2018.1483488.
Affiliation(s)
- Kelly Y. An
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jordan Charles
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Anca Enache
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jasmine Dhuga
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
20. Lippa SM. Performance validity testing in neuropsychology: a clinical guide, critical review, and update on a rapidly evolving literature. Clin Neuropsychol 2017;32:391-421. DOI: 10.1080/13854046.2017.1406146.
Affiliation(s)
- Sara M. Lippa
- Defense and Veterans Brain Injury Center, Silver Spring, MD, USA
- Walter Reed National Military Medical Center, Bethesda, MD, USA
- National Intrepid Center of Excellence, Bethesda, MD, USA