1. Crișan I, Ali S, Cutler L, Matei A, Avram L, Erdodi LA. Geographic variability in limited English proficiency: A cross-cultural study of cognitive profiles. J Int Neuropsychol Soc 2023;29:972-983. [PMID: 37246143] [DOI: 10.1017/s1355617723000280]
Abstract
OBJECTIVE This study was designed to evaluate the effect of limited English proficiency (LEP) on neurocognitive profiles. METHOD Romanian (LEP-RO; n = 59) and Arabic (LEP-AR; n = 30) native speakers were compared to Canadian native speakers of English (NSE; n = 24) on a strategically selected battery of neuropsychological tests. RESULTS As predicted, participants with LEP demonstrated significantly lower performance on tests with high verbal mediation relative to US norms and the NSE sample (large effects). In contrast, several tests with low verbal mediation were robust to LEP. However, clinically relevant deviations from this general pattern were observed. The level of English proficiency varied significantly within the LEP-RO sample and was associated with a predictable performance pattern on tests with high verbal mediation. CONCLUSIONS The heterogeneity in cognitive profiles among individuals with LEP challenges the notion that LEP status is a unitary construct. The level of verbal mediation is an imperfect predictor of the performance of LEP examinees during neuropsychological testing. Several commonly used measures were identified that are robust to the deleterious effects of LEP. Administering tests in the examinee's native language may not be the optimal solution to contain the confounding effect of LEP in cognitive evaluations.
Affiliation(s)
- Iulia Crișan: Department of Psychology, West University of Timișoara, Timișoara, Romania
- Sami Ali: Department of Psychology, University of Windsor, Windsor, Canada
- Laura Cutler: Department of Psychology, University of Windsor, Windsor, Canada
- Alina Matei: Department of Psychology, West University of Timișoara, Timișoara, Romania
- Luisa Avram: Department of Psychology, West University of Timișoara, Timișoara, Romania
- Laszlo A Erdodi: Department of Psychology, University of Windsor, Windsor, Canada
2. Scott JC, Moore TM, Roalf DR, Satterthwaite TD, Wolf DH, Port AM, Butler ER, Ruparel K, Nievergelt CM, Risbrough VB, Baker DG, Gur RE, Gur RC. Development and application of novel performance validity metrics for computerized neurocognitive batteries. J Int Neuropsychol Soc 2023;29:789-797. [PMID: 36503573] [PMCID: PMC10258222] [DOI: 10.1017/s1355617722000893]
Abstract
OBJECTIVES Data from neurocognitive assessments may not be accurate in the context of factors impacting validity, such as disengagement, unmotivated responding, or intentional underperformance. Performance validity tests (PVTs) were developed to address these phenomena and assess underperformance on neurocognitive tests. However, PVTs can be burdensome, rely on cutoff scores that reduce information, do not examine potential variations in task engagement across a battery, and are typically not well-suited to acquisition of large cognitive datasets. Here we describe the development of novel performance validity measures that could address some of these limitations by leveraging psychometric concepts using data embedded within the Penn Computerized Neurocognitive Battery (PennCNB). METHODS We first developed these validity measures using simulations of invalid response patterns with parameters drawn from real data. Next, we examined their application in two large, independent samples: 1) children and adolescents from the Philadelphia Neurodevelopmental Cohort (n = 9498); and 2) adult servicemembers from the Marine Resiliency Study-II (n = 1444). RESULTS Our performance validity metrics detected patterns of invalid responding in simulated data, even at subtle levels. Furthermore, a combination of these metrics significantly predicted previously established validity rules for these tests in both developmental and adult datasets. Moreover, most clinical diagnostic groups did not show reduced validity estimates. CONCLUSIONS These results provide proof-of-concept evidence for multivariate, data-driven performance validity metrics. These metrics offer a novel method for determining the performance validity for individual neurocognitive tests that is scalable, applicable across different tests, less burdensome, and dimensional. However, more research is needed into their application.
Affiliation(s)
- J. Cobb Scott: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; VISN4 Mental Illness Research, Education, and Clinical Center at the Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA
- Tyler M. Moore: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- David R. Roalf: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Theodore D. Satterthwaite: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Daniel H. Wolf: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Allison M. Port: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ellyn R. Butler: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Kosha Ruparel: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Caroline M. Nievergelt: Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Victoria B. Risbrough: Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Dewleen G. Baker: Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Raquel E. Gur: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Lifespan Brain Institute, Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Ruben C. Gur: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; VISN4 Mental Illness Research, Education, and Clinical Center at the Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA; Lifespan Brain Institute, Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children's Hospital of Philadelphia, Philadelphia, PA, USA
3. Bajjaleh C, Braw YC, Elkana O. Adaptation and initial validation of the Arabic version of the Word Memory Test (WMT-ARB). Appl Neuropsychol Adult 2023;30:204-213. [PMID: 34043924] [DOI: 10.1080/23279095.2021.1923495]
Abstract
BACKGROUND The feigning of cognitive impairment is common in neuropsychological assessments, especially in medicolegal settings. The Word Memory Test (WMT) is a forced-choice recognition memory performance validity test (PVT) widely used to detect noncredible performance. Though the WMT has been translated into several languages, no adaptation existed for Arabic, one of the most widely spoken languages. The aim of the current study was to evaluate the convergent validity of the Arabic adaptation of the WMT (WMT-ARB) among Israeli Arabic speakers. METHODS We adapted the WMT to Arabic using the back-translation method and in accordance with relevant guidelines. We then randomly assigned healthy Arabic-speaking adults (N = 63) to either a simulation or an honest control condition. The participants then completed neuropsychological tests, including the WMT-ARB and the Test of Memory Malingering (TOMM), a well-validated nonverbal PVT. RESULTS The WMT-ARB had high split-half reliability, and its measures were significantly correlated with those of the TOMM (p < .001). High concordance was found in classifying participants using the WMT-ARB and TOMM (specificity = 94.29% and sensitivity = 100%, using the conventional TOMM Trial 2 cutoff as the gold standard). As expected, simulators' accuracy on the WMT-ARB was significantly lower than that of honest controls. None of the demographic variables significantly correlated with WMT-ARB measures. CONCLUSION The WMT-ARB shows initial evidence of reliability and validity, emphasizing its potential use in the large population of Arabic speakers and its universality in detecting noncredible performance. The findings, however, are preliminary and mandate validation in clinical settings.
Affiliation(s)
- Christine Bajjaleh: Department of Psychology, The Academic College of Tel Aviv-Yaffo, Tel Aviv-Yaffo, Israel
- Yoram C Braw: Department of Psychology, Ariel University, Ariel, Israel
- Odelia Elkana: Department of Psychology, The Academic College of Tel Aviv-Yaffo, Tel Aviv-Yaffo, Israel
4. Henry GK. Response time measures on the Word Memory Test do not add incremental validity to accuracy scores in predicting noncredible neurocognitive dysfunction in mild traumatic brain injury litigants. Appl Neuropsychol Adult 2022:1-7. [PMID: 36170848] [DOI: 10.1080/23279095.2022.2126320]
Abstract
The objective of the current study was to investigate whether response time measures on the Word Memory Test (WMT) add predictive validity in determining noncredible neurocognitive dysfunction in a large sample of mild traumatic brain injury (MTBI) litigants. Participants included 203 adults who underwent a comprehensive neuropsychological examination. Criterion groups were formed based on performance on stand-alone performance validity tests (PVTs). Participants failing PVTs exhibited significantly slower response times and lower accuracy on the WMT compared to participants who passed PVTs. Response time measures did not add significant incremental validity beyond that afforded by WMT accuracy measures alone. The best predictor of PVT status was the WMT Consistency Score (CNS), which was associated with an extremely large effect size (d = 16.44), followed by Immediate Recognition (IR: d = 10.68) and Delayed Recognition (DR: d = 10.10).
5. Ali S, Crisan I, Abeare CA, Erdodi LA. Cross-Cultural Performance Validity Testing: Managing False Positives in Examinees with Limited English Proficiency. Dev Neuropsychol 2022;47:273-294. [PMID: 35984309] [DOI: 10.1080/87565641.2022.2105847]
Abstract
Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation (even multivariate PVT models could not contain BRFail). In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.
Affiliation(s)
- Sami Ali: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Iulia Crisan: Department of Psychology, West University of Timişoara, Timişoara, Romania
- Christopher A Abeare: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
6. Abeare K, Cutler L, An KY, Razvi P, Holcomb M, Erdodi LA. BNT-15: Revised Performance Validity Cutoffs and Proposed Clinical Classification Ranges. Cogn Behav Neurol 2022;35:155-168. [PMID: 35507449] [DOI: 10.1097/wnn.0000000000000304]
Abstract
BACKGROUND Abbreviated neurocognitive tests offer a practical alternative to full-length versions but often lack clear interpretive guidelines, thereby limiting their clinical utility. OBJECTIVE To replicate validity cutoffs for the Boston Naming Test-Short Form (BNT-15) and to introduce a clinical classification system for the BNT-15 as a measure of object-naming skills. METHOD We collected data from 43 university students and 46 clinical patients. Classification accuracy was computed against psychometrically defined criterion groups. Clinical classification ranges were developed using a z-score transformation. RESULTS Previously suggested validity cutoffs (≤11 and ≤12) produced comparable classification accuracy among the university students. However, a more conservative cutoff (≤10) was needed with the clinical patients to contain the false-positive rate (0.20-0.38 sensitivity at 0.92-0.96 specificity). As a measure of cognitive ability, a perfect BNT-15 score suggests above-average performance; ≤11 suggests clinically significant deficits. Demographically adjusted prorated BNT-15 T-scores correlated strongly (0.86) with the newly developed z-scores. CONCLUSION Given its brevity (<5 minutes) and ease of administration and scoring, the BNT-15 can function as a useful and cost-effective screening measure for both object-naming/English proficiency and performance validity. The proposed clinical classification ranges provide useful guidelines for practitioners.
Affiliation(s)
- Kelly Y An: Private Practice, London, Ontario, Canada
- Parveen Razvi: Faculty of Nursing, University of Windsor, Windsor, Ontario, Canada
7. Ali S, Elliott L, Biss RK, Abumeeiz M, Brantuo M, Kuzmenka P, Odenigbo P, Erdodi LA. The BNT-15 provides an accurate measure of English proficiency in cognitively intact bilinguals - a study in cross-cultural assessment. Appl Neuropsychol Adult 2022;29:351-363. [PMID: 32449371] [DOI: 10.1080/23279095.2020.1760277]
Abstract
This study was designed to replicate earlier reports on the utility of the Boston Naming Test - Short Form (BNT-15) as an index of limited English proficiency (LEP). Twenty-eight English-Arabic bilingual student volunteers were administered the BNT-15 as part of a brief battery of cognitive tests. The majority (23) were women, and half had LEP. Mean age was 21.1 years. The BNT-15 was an excellent psychometric marker of LEP status (area under the curve: .990-.995). Participants with LEP underperformed on several cognitive measures (verbal comprehension, visuomotor processing speed, single-word reading, and performance validity tests). Although no participant with LEP failed the accuracy cutoff on the Word Choice Test, 35.7% of them failed the time cutoff. Overall, LEP was associated with an increased risk of failing performance validity tests. Previously published BNT-15 validity cutoffs had unacceptably low specificity (.33-.52) among participants with LEP. The BNT-15 has the potential to serve as a quick and effective objective measure of LEP. Students with LEP may need academic accommodations to compensate for slower test completion times. Likewise, LEP status should be considered when interpreting performance validity test failures to protect against false-positive errors.
Affiliation(s)
- Sami Ali: Department of Psychology, University of Windsor, Windsor, Canada
- Lauren Elliott: Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Renee K Biss: Department of Psychology, University of Windsor, Windsor, Canada
- Mustafa Abumeeiz: Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Maame Brantuo: Department of Psychology, University of Windsor, Windsor, Canada
- Paula Odenigbo: Department of Psychology, University of Windsor, Windsor, Canada
- Laszlo A Erdodi: Department of Psychology, University of Windsor, Windsor, Canada
8. Motor Reaction Times as an Embedded Measure of Performance Validity: a Study with a Sample of Austrian Early Retirement Claimants. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09431-z]
Abstract
Among embedded measures of performance validity, reaction time parameters appear to be less common. However, their potential may be underestimated. In the German-speaking countries, reaction time is often examined using the Alertness subtest of the Test of Attentional Performance (TAP). Several previous studies have examined its suitability for validity assessment. The current study was conceived to examine a variety of reaction time parameters of the TAP Alertness subtest in a sample of 266 Austrian civil forensic patients. Classification results from the Word Memory Test (WMT) were used as an external criterion to distinguish between valid and invalid symptom presentations. Results demonstrated that the WMT fail group performed worse on reaction time, as well as on its intraindividual variation across trials, compared to the WMT pass group. Receiver operating characteristic analyses revealed areas under the curve of .775-.804. Logistic regression models indicated that the intraindividual variation of motor reaction time with warning sound was the best predictor of invalid test performance. Suggested cut scores yielded a sensitivity of .62 at a specificity of .90, or .45 at .95 when the accepted false-positive rate was set lower. The results encourage the use of the Alertness subtest as an embedded measure of performance validity.
9. Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021;49:179-213. [PMID: 34420986] [DOI: 10.3233/nre-218020]
Abstract
OBJECTIVE This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD Archival data were collected from 167 patients (52.4% male; mean age = 39.7 years) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False-positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to their individual components. Instrumentation artifacts are endemic to PVTs and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical, models. As the number and severity of embedded PVT failures accumulate, assessors must consider the possibility of non-credible presentation and its clinical implications for neurorehabilitation.
10. Abeare CA, An K, Tyson B, Holcomb M, Cutler L, May N, Erdodi LA. The emotion word fluency test as an embedded performance validity indicator - Alone and in a multivariate validity composite. Appl Neuropsychol Child 2021;11:713-724. [PMID: 34424798] [DOI: 10.1080/21622965.2021.1939027]
Abstract
OBJECTIVE This project was designed to cross-validate existing performance validity cutoffs embedded within measures of verbal fluency (FAS and animals) and develop new ones for the Emotion Word Fluency Test (EWFT), a novel measure of category fluency. METHOD The classification accuracy of the verbal fluency tests was examined in two samples (70 cognitively healthy university students and 52 clinical patients) against psychometrically defined criterion measures. RESULTS A demographically adjusted T-score of ≤31 on the FAS was specific (.88-.97) to noncredible responding in both samples. Animals T ≤ 29 achieved high specificity (.90-.93) among students at .27-.38 sensitivity. A more conservative cutoff (T ≤ 27) was needed in the patient sample for a similar combination of sensitivity (.24-.45) and specificity (.87-.93). An EWFT raw score ≤5 was highly specific (.94-.97) but insensitive (.10-.18) to invalid performance. Failing multiple cutoffs improved specificity (.90-1.00) at variable sensitivity (.19-.45). CONCLUSIONS Results help resolve the inconsistency in previous reports and confirm the overall utility of existing verbal fluency tests as embedded validity indicators. Multivariate models of performance validity assessment are superior to single indicators. The clinical utility and limitations of the EWFT as a novel measure are discussed.
Affiliation(s)
- Christopher A Abeare: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Kelly An: Private Practice, London, Ontario, Canada
- Brad Tyson: Evergreen Health Medical Center, Kirkland, Washington, USA
- Matthew Holcomb: Jefferson Neurobehavioral Group, New Orleans, Louisiana, USA
- Laura Cutler: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Natalie May: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
11. Patrick SD, Rapport LJ, Kanser RJ, Hanks RA, Bashem JR. Performance validity assessment using response time on the Warrington Recognition Memory Test. Clin Neuropsychol 2021;35:1154-1173. [PMID: 32068486] [DOI: 10.1080/13854046.2020.1716997]
Abstract
OBJECTIVE The present study tested the incremental utility of response time (RT) on the Warrington Recognition Memory Test - Words (RMT-W) in classifying bona fide versus feigned TBI. METHOD Participants were 173 adults: 55 with moderate to severe TBI, 69 healthy comparisons (HC) instructed to perform their best, and 49 healthy adults coached to simulate TBI (SIM). Participants completed a computerized version of the RMT-W in the context of a comprehensive neuropsychological battery. Groups were compared on RT indices, including mean RT (overall, correct trials, incorrect trials) and variability, as well as the traditional RMT-W accuracy score. RESULTS Several RT indices differed significantly across groups, although RMT-W accuracy predicted group membership more strongly than any individual RT index. SIM showed longer average RT than both TBI and HC. RT variability and RT for incorrect trials distinguished SIM-HC but not SIM-TBI comparisons. In general, results for SIM-TBI comparisons were weaker than SIM-HC results. For SIM-HC comparisons, classification accuracy was excellent for all multivariable models incorporating RMT-W accuracy with one of the RT indices. For SIM-TBI comparisons, classification accuracies for multivariable models ranged from acceptable to excellent discriminability. In addition to mean RT and RT on correct trials, the ratio of RT on correct items to RT on incorrect items showed incremental predictive value beyond accuracy. CONCLUSION Findings add to the growing body of research supporting the value of combining RT with PVTs in discriminating between verified and feigned TBI. The diagnostic accuracy of the RMT-W can be improved by incorporating RT.
Affiliation(s)
- Sarah D Patrick: Department of Psychology, Wayne State University, Detroit, MI, USA
- Lisa J Rapport: Department of Psychology, Wayne State University, Detroit, MI, USA
- Robert J Kanser: Department of Psychology, Wayne State University, Detroit, MI, USA
- Robin A Hanks: Department of Psychology, Wayne State University, Detroit, MI, USA; Department of Physical Medicine and Rehabilitation, Wayne State University School of Medicine, Detroit, MI, USA
- Jesse R Bashem: Department of Psychology, Wayne State University, Detroit, MI, USA
12. Patrick SD, Rapport LJ, Kanser RJ, Hanks RA, Bashem JR. Detecting simulated versus bona fide traumatic brain injury using pupillometry. Neuropsychology 2021;35:472-485. [PMID: 34014751] [DOI: 10.1037/neu0000747]
Abstract
Objective: Pupil dilation patterns are outside of conscious control and provide information regarding neuropsychological processes related to deception, cognitive effort, and familiarity. This study examined the incremental utility of pupillometry on the Test of Memory Malingering (TOMM) in classifying individuals with verified traumatic brain injury (TBI), individuals simulating TBI, and healthy comparisons. Method: Participants were 177 adults across three groups: verified TBI (n = 53), feigned cognitive impairment due to TBI (SIM, n = 52), and healthy comparisons (HC, n = 72). Results: Logistic regression and ROC curve analyses identified several pupil indices that discriminated the groups. Pupillometry discriminated best for the comparison of greatest clinical interest, verified TBI versus simulators, adding information beyond traditional accuracy scores. Simulators showed evidence of greater cognitive load than both groups instructed to perform at their best ability (HC and TBI). Additionally, the typically robust phenomenon of dilating to familiar stimuli was relatively diminished among TBI simulators compared to TBI and HC. This finding may reflect the competing, interfering effects of cognitive effort that are frequently observed in pupillary reactivity during deception. However, the familiarity effect appeared on nearly half the trials for SIM participants. Among those trials evidencing the familiarity response, selection of the unfamiliar stimulus (i.e., dilation-response inconsistency) was associated with a sizeable increase in the likelihood of being a simulator. Conclusions: Taken together, these findings provide strong support for multimethod assessment: adding unique performance assessments such as biometrics to standard accuracy scores. Continued study of pupillometry will enhance the identification of simulators who are not detected by traditional performance validity test scoring metrics.
13. Berger C, Lev A, Braw Y, Elbaum T, Wagner M, Rassovsky Y. Detection of Feigned ADHD Using the MOXO-d-CPT. J Atten Disord 2021;25:1032-1047. [PMID: 31364437] [DOI: 10.1177/1087054719864656]
Abstract
Objective: The objective of this study was to assess the utility of the MOXO-d-CPT in detecting feigned ADHD and to establish cutoffs with adequate specificity and sensitivity. Method: The study had two phases. First, using a prospective design, healthy adults who simulated ADHD were compared with healthy controls and ADHD patients who performed the tasks to the best of their ability (n = 47 per group). Participants performed the MOXO-d-CPT and an established performance validity test (PVT). Second, the MOXO-d-CPT classification accuracy established in Phase 1 was retrospectively compared with archival data of 47 ADHD patients and age-matched healthy controls. Results: Simulators performed significantly worse on all MOXO-d-CPT indices than healthy controls and ADHD patients. Three MOXO-d-CPT indices (attention, hyperactivity, impulsivity) and a scale combining these indices showed adequate discriminative capacity. Conclusion: The MOXO-d-CPT showed promise for the detection of feigned ADHD and, pending replication, can be employed for this aim in clinical practice and ADHD research.
Affiliation(s)
- Astar Lev: Bar-Ilan University, Ramat Gan, Israel
- Yuri Rassovsky: Bar-Ilan University, Ramat Gan, Israel; University of California, Los Angeles, USA
14. Braw Y. Response Time Measures as Supplementary Validity Indicators in Forced-Choice Recognition Memory Performance Validity Tests: A Systematic Review. Neuropsychol Rev 2021;32:71-98. [PMID: 33821424] [DOI: 10.1007/s11065-021-09499-z]
Abstract
Performance validity tests (PVTs) based on the forced-choice recognition memory (FCRM) paradigm are commonly used for the detection of noncredible performance. Examinees' response times (RTs) are affected by cognitive processes associated with deception and can also be gathered without lengthening the duration of the assessment. Consequently, interest in the utility of these measures as supplementary validity indicators in FCRM-PVTs has grown over the years. The current systematic review summarizes both clinical and simulation (i.e., healthy participants simulating cognitive impairment) studies of RTs in FCRM-PVTs. The findings of 25 peer-reviewed articles (n = 26 empirical studies) indicate that noncredible performance in FCRM-PVTs is associated with longer RTs. Additionally, there are indications that noncredible performance is associated with larger variability in RTs. RT measures, however, have lower discrimination capacity than conventional accuracy measures. Their utility may therefore lie in reaching decisions regarding cases with border zone accuracy scores, as well as aiding in the detection of more sophisticated examinees who are aware of the use of accuracy-based validity indicators in FCRM-PVTs. More research, however, is required before these measures are incorporated in daily practice and clinical decision-making processes.
Affiliation(s)
- Yoram Braw
- Department of Psychology, Ariel University, Ariel, Israel.
15
Sabelli AG, Messa I, Giromini L, Lichtenstein JD, May N, Erdodi LA. Symptom Versus Performance Validity in Patients with Mild TBI: Independent Sources of Non-credible Responding. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09400-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
16
Cutler L, Abeare CA, Messa I, Holcomb M, Erdodi LA. This will only take a minute: Time cutoffs are superior to accuracy cutoffs on the forced choice recognition trial of the Hopkins Verbal Learning Test - Revised. Appl Neuropsychol Adult 2021; 29:1425-1439. [PMID: 33631077 DOI: 10.1080/23279095.2021.1884555] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of the recently introduced forced-choice recognition trial of the Hopkins Verbal Learning Test - Revised (FCRHVLT-R) as a performance validity test (PVT) in a clinical sample. Time-to-completion (T2C) for the FCRHVLT-R was also examined. METHOD Forty-three students were assigned to either the control or the experimental malingering (expMAL) condition. Archival data were collected from 52 adults clinically referred for neuropsychological assessment. Invalid performance was defined using expMAL status, two free-standing PVTs, and two validity composites. RESULTS Among students, FCRHVLT-R ≤11 or T2C ≥45 s was specific (0.86-0.93) to invalid performance. Among patients, FCRHVLT-R ≤11 was specific (0.94-1.00) but relatively insensitive (0.38-0.60) to non-credible responding. T2C ≥35 s produced notably higher sensitivity (0.71-0.89) but variable specificity (0.83-0.96). The T2C achieved superior overall correct classification (81-86%) compared to the accuracy score (68-77%). The FCRHVLT-R provided incremental utility in performance validity assessment compared to previously introduced validity cutoffs on Recognition Discrimination. CONCLUSIONS Combined with T2C, the FCRHVLT-R has the potential to function as a quick, inexpensive, and effective embedded PVT. The time cutoff effectively attenuated the low ceiling of the accuracy scores, increasing sensitivity by 19%. Replication in larger and more geographically and demographically diverse samples is needed before the FCRHVLT-R can be endorsed for routine clinical application.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Isabelle Messa
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
17
Omer E, Elbaum T, Braw Y. Identifying Feigned Cognitive Impairment: Investigating the Utility of Diffusion Model Analyses. Assessment 2020; 29:198-208. [PMID: 32988242 DOI: 10.1177/1073191120962317] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Forced-choice performance validity tests are routinely used for the detection of feigned cognitive impairment. The drift diffusion model deconstructs performance into distinct cognitive processes using accuracy and response time measures. It thereby offers a unique approach for gaining insight into examinees' speed-accuracy trade-offs and the cognitive processes that underlie their performance. The current study is the first to perform such analyses using a well-established forced-choice performance validity test. To achieve this aim, archival data of healthy participants, either simulating cognitive impairment in the Word Memory Test or performing it to the best of their ability, were analyzed using the EZ-diffusion model (N = 198). The groups differed in the three model parameters, with drift rate emerging as the best predictor of group membership. These findings provide initial evidence for the usefulness of the drift diffusion model in clarifying the cognitive processes underlying feigned cognitive impairment and encourage further research.
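The EZ-diffusion model used in this study has a closed-form solution (Wagenmakers, van der Maas, & Grasman, 2007) that maps three observables — proportion correct, RT variance, and mean RT — onto the three model parameters the groups differed on. A minimal sketch, assuming RTs in seconds and the conventional scaling parameter s = 0.1:

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion parameter estimates.

    pc  -- proportion correct (must not be 0, 0.5, or 1; apply an
           edge correction before calling if needed)
    vrt -- variance of correct response times (seconds squared)
    mrt -- mean of correct response times (seconds)
    s   -- scaling parameter (0.1 by convention)

    Returns (v, a, ter): drift rate, boundary separation,
    nondecision time.
    """
    s2 = s * s
    logit = math.log(pc / (1.0 - pc))
    x = logit
    # Drift rate from accuracy and RT variance
    v = math.copysign(1.0, pc - 0.5) * s * (
        (x * (x * pc * pc - x * pc + pc - 0.5)) / vrt) ** 0.25
    # Boundary separation
    a = s2 * logit / v
    # Mean decision time, then nondecision time as the remainder
    y = -v * a / s2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    ter = mrt - mdt
    return v, a, ter
```

With the worked example from the EZ-diffusion literature (Pc = 0.802, VRT = 0.112, MRT = 0.723), this yields a drift rate near 0.10, boundary separation near 0.14, and nondecision time near 0.30 s.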
18
Abeare CA, Hurtubise JL, Cutler L, Sirianni C, Brantuo M, Makhzoum N, Erdodi LA. Introducing a forced choice recognition trial to the Hopkins Verbal Learning Test – Revised. Clin Neuropsychol 2020; 35:1442-1470. [DOI: 10.1080/13854046.2020.1779348] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Affiliation(s)
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nadeen Makhzoum
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
19
Hurtubise J, Baher T, Messa I, Cutler L, Shahein A, Hastings M, Carignan-Querqui M, Erdodi LA. Verbal fluency and digit span variables as performance validity indicators in experimentally induced malingering and real world patients with TBI. Appl Neuropsychol Child 2020; 9:337-354. [DOI: 10.1080/21622965.2020.1719409] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Affiliation(s)
- Tabarak Baher
- Department of Psychology, University of Windsor, Windsor, Canada
- Isabelle Messa
- Department of Psychology, University of Windsor, Windsor, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Canada
- Ayman Shahein
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
20

21
Elbaum T, Golan L, Lupu T, Wagner M, Braw Y. Establishing supplementary response time validity indicators in the Word Memory Test (WMT) and directions for future research. Appl Neuropsychol Adult 2019; 27:403-413. [DOI: 10.1080/23279095.2018.1555161] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Tomer Elbaum
- Department of Psychology, Ariel University, Ariel, Israel
- Department of Industrial Engineering & Management, Ariel University, Ariel, Israel
- Lior Golan
- Department of Psychology, Ariel University, Ariel, Israel
- Tamar Lupu
- Department of Psychology, Ariel University, Ariel, Israel
- Michael Wagner
- Department of Industrial Engineering & Management, Ariel University, Ariel, Israel
- Yoram Braw
- Department of Psychology, Ariel University, Ariel, Israel
22
Tomer E, Lupu T, Golan L, Wagner M, Braw Y. Eye tracking as a mean to detect feigned cognitive impairment in the word memory test. Appl Neuropsychol Adult 2018; 27:49-61. [PMID: 30183408 DOI: 10.1080/23279095.2018.1480483] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
Eye movements showed initial promise for the detection of deception and may be harder to consciously manipulate than conventional accuracy measures. Therefore, we integrated an eye-tracker with the Word Memory Test (WMT) and tested its usefulness for the detection of feigned cognitive impairment. As part of the study, simulators (n = 44) and honest controls (n = 41) performed WMT's immediate-recognition (IR) subtest while their eye movements were recorded. In comparison to the control group, simulators spent less time gazing at relevant stimuli, spent more time gazing at irrelevant stimuli, and had a lower saccade rate. Group classification using a scale that combined the eye movement measures and the WMT's accuracy measure showed tentative promise (i.e., it enhanced classification compared to the use of the accuracy measure as the sole predictor of group membership). Overall, integration of an eye-tracker with the WMT was found to be feasible and the eye movement measures showed initial promise for the detection of feigned cognitive impairment. Moreover, eye movement measures proved useful in enhancing our understanding of strategies utilized by the simulators and the cognitive processes that affect their behavior. While the findings are clearly preliminary, we hope that they will encourage further research of these promising psychophysiological measures.
Affiliation(s)
- Tomer Elbaum
- Department of Psychology, Ariel University, Ariel, Israel
- Tamar Lupu
- Department of Psychology, Ariel University, Ariel, Israel
- Lior Golan
- Department of Psychology, Ariel University, Ariel, Israel
- Michael Wagner
- Department of Psychology, Ariel University, Ariel, Israel
- Yoram Braw
- Department of Psychology, Ariel University, Ariel, Israel; Emotion and Cognition Research Center, Shalvata Mental Health Center, Hod HaSharon, Israel