1
Weymann T, Achenbach J, Guevara JE, Bassler M, Karst M, Lambrecht A. EMG measured reaction time as a predictor of invalid symptom report in psychosomatic patients. Clin Neuropsychol 2024;38:1210-1226. PMID: 37917133. DOI: 10.1080/13854046.2023.2276480.
Abstract
Background: Symptom validity tests (SVTs) and performance validity tests (PVTs) are important tools in sociomedical assessments, especially in the psychosomatic context, where diagnoses depend mainly on clinical observation and self-report measures. This study examined the relationship between reaction times (RTs) and scores on the Structured Inventory of Malingered Symptomatology (SIMS). It was proposed that slower RTs and larger standard deviations of reaction times (RTSDs) would be observed in participants who scored above the SIMS cut-off (>16). Methods: Direct surface electromyography (EMG) was used to capture RTs during a computer-based RT test in 152 inpatients from a psychosomatic rehabilitation clinic in Germany. Correlation analyses and Mann-Whitney U tests were used to examine the relationship between RTs and SIMS scores and to assess the potential impact of covariates such as demographics, medical history, and vocational challenges on RTs; to this end, groups dichotomized on each potential covariate were compared. Results: Significantly longer RTs and larger RTSDs were found in participants who scored above the SIMS cut-off. Current treatment with psychopharmacological medication, diagnosis of depression, and age had no significant influence on the RT measures; however, work-related problems had a significant impact on RTSDs. Conclusion: There was a significant relationship between longer and more inconsistent RTs and indicators of exaggerated or feigned symptom report on the SIMS in psychosomatic rehabilitation inpatients. These findings provide a basis for future research developing a new RT-based PVT.
Affiliation(s)
- Thorben Weymann
- Department of Psychosomatic Medicine, Rehazentrum Oberharz, Clausthal-Zellerfeld, Germany
- Johannes Achenbach
- Department of Anesthesiology, Intensive Care Medicine, Emergency Medicine and Pain Medicine, KRH Klinikum Nordstadt, Hannover, Germany
- Department of Anesthesiology and Intensive Care Medicine, Pain Clinic, Hannover Medical School, Hannover, Germany
- Jasmin E Guevara
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Markus Bassler
- Department of Economics and Social Sciences, University of Applied Science Nordhausen, Nordhausen, Germany
- Matthias Karst
- Department of Anesthesiology and Intensive Care Medicine, Pain Clinic, Hannover Medical School, Hannover, Germany
- Alexandra Lambrecht
- Department of Psychosomatic Medicine, Rehazentrum Oberharz, Clausthal-Zellerfeld, Germany
2
Becke M, Tucha L, Butzbach M, Aschenbrenner S, Weisbrod M, Tucha O, Fuermaier ABM. Feigning Adult ADHD on a Comprehensive Neuropsychological Test Battery: An Analogue Study. Int J Environ Res Public Health 2023;20:4070. PMID: 36901080. PMCID: PMC10001580. DOI: 10.3390/ijerph20054070.
Abstract
The evaluation of performance validity is an essential part of any neuropsychological evaluation. Validity indicators embedded in routine neuropsychological tests offer a time-efficient option for sampling performance validity throughout the assessment while reducing vulnerability to coaching. By administering a comprehensive neuropsychological test battery to 57 adults with ADHD, 60 neurotypical controls, and 151 instructed simulators, we examined each test's utility in detecting noncredible performance. Cut-off scores were derived for all available outcome variables. Although all ensured at least 90% specificity in the ADHD group, sensitivity differed significantly between tests, ranging from 0% to 64.9%. Tests of selective attention, vigilance, and inhibition were most useful in detecting the instructed simulation of adult ADHD, whereas figural fluency and task switching lacked sensitivity. Five or more test variables with scores in the second to fourth percentile were rare among cases of genuine adult ADHD but identified approximately 58% of instructed simulators.
Affiliation(s)
- Miriam Becke
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Lara Tucha
- Department of Psychiatry and Psychotherapy, University Medical Center Rostock, Gehlsheimer Str. 20, 18147 Rostock, Germany
- Marah Butzbach
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Steffen Aschenbrenner
- Department of Clinical Psychology and Neuropsychology, SRH Clinic Karlsbad-Langensteinbach, 76307 Karlsbad, Germany
- Matthias Weisbrod
- Department of Psychiatry and Psychotherapy, SRH Clinic Karlsbad-Langensteinbach, 76307 Karlsbad, Germany
- Department of General Psychiatry, Center of Psychosocial Medicine, University of Heidelberg, 69115 Heidelberg, Germany
- Oliver Tucha
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Department of Psychiatry and Psychotherapy, University Medical Center Rostock, Gehlsheimer Str. 20, 18147 Rostock, Germany
- Department of Psychology, National University of Ireland, W23 F2K8 Maynooth, Ireland
- Anselm B. M. Fuermaier
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
3
The Construct Validity of Intellect and Openness as Distinct Aspects of Personality through Differential Associations with Reaction Time. J Intell 2023;11(2):30. PMID: 36826928. PMCID: PMC9961456. DOI: 10.3390/jintelligence11020030.
Abstract
The construct validity of group factor models of personality, which are typically derived from factor analysis of questionnaire items, relies on the ability of each factor to predict meaningful and differentiated real-world outcomes. In a sample of 481 participants, we used the Big Five Aspect Scales (BFAS) personality questionnaire, two laboratory-measured reaction time (RT) tasks, and a short-form test of cognitive ability (ICAR-16) to test the hypothesis that the Intellect and Openness aspects of Big Five Openness to Experience differentially correlate with reaction time moments. We found that higher scores on the Intellect aspect significantly correlate with faster and less variable response times, while no such association is observed for the Openness aspect. Further, we found that this advantage lies solely in the decisional, but not perceptual, stage of information processing; no other Big Five aspect showed a similar pattern of results. In sum, this is the largest and most comprehensive study to date on personality factors and reaction time, and the first to demonstrate a mechanistic validation of BFAS Intellect through a differential pattern of associations with RT across the Big Five personality aspects.
4
Fuermaier ABM, Dandachi-Fitzgerald B, Lehrner J. Attention Performance as an Embedded Validity Indicator in the Cognitive Assessment of Early Retirement Claimants. Psychol Inj Law 2022. DOI: 10.1007/s12207-022-09468-8.
Abstract
The assessment of performance validity is essential in any neuropsychological evaluation. However, relatively few measures exist that are based on attention performance embedded within routine cognitive tasks. The present study explores the potential value of a computerized attention test, the Cognitrone, as an embedded validity indicator in the neuropsychological assessment of early retirement claimants. Two hundred and sixty-five early retirement claimants were assessed with the Word Memory Test (WMT) and the Cognitrone. WMT scores were used as the independent criterion to determine performance validity. Speed and accuracy measures of the Cognitrone were analyzed in receiver operating characteristics (ROC) to classify group membership. The Cognitrone was sensitive in revealing attention deficits in early retirement claimants. Further, 54% (n = 143) of the individuals showed noncredible cognitive performance, whereas 46% (n = 122) showed credible cognitive performance. Individuals failing the performance validity assessment showed slower (AUC = 79.1%) and more inaccurate (AUC = 79.5%) attention performance than those passing the performance validity assessment. A compound score integrating speed and accuracy revealed incremental value as indicated by AUC = 87.9%. Various cut scores are suggested, resulting in equal rates of 80% sensitivity and specificity (cut score = 1.297) or 69% sensitivity with 90% specificity (cut score = 0.734). The present study supports the sensitivity of the Cognitrone for the assessment of attention deficits in early retirement claimants and its potential value as an embedded validity indicator. Further research on different samples and with multidimensional criteria for determining invalid performance is required before clinical application can be recommended.
5
Ali S, Crisan I, Abeare CA, Erdodi LA. Cross-Cultural Performance Validity Testing: Managing False Positives in Examinees with Limited English Proficiency. Dev Neuropsychol 2022;47:273-294. PMID: 35984309. DOI: 10.1080/87565641.2022.2105847.
Abstract
Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation (even multivariate models of PVTs could not contain BRFail). In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.
Affiliation(s)
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Iulia Crisan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
6
Hirsch O, Fuermaier ABM, Tucha O, Albrecht B, Chavanon ML, Christiansen H. Symptom and performance validity in samples of adults at clinical evaluation of ADHD: a replication study using machine learning algorithms. J Clin Exp Neuropsychol 2022;44:171-184. PMID: 35906728. DOI: 10.1080/13803395.2022.2105821.
Abstract
Introduction: Research has shown non-trivial base rates of noncredible symptom report and performance in the clinical evaluation of attention-deficit/hyperactivity disorder (ADHD) in adulthood. The goal of this study was to estimate and replicate base rates of symptom and performance validity test failure in the clinical evaluation of adult ADHD and to derive prediction models based on routine clinical measures. Methods: This study reused data from a previous publication of 196 adults seeking ADHD assessment and replicated the findings in an independent sample of 700 adults recruited in the same referral context. Measures of symptom and performance validity (one SVT, two PVTs) were applied to estimate base rates. Prediction models were developed using machine learning. Results: Both samples showed substantial rates of noncredible symptom report (one SVT failure: 35.7%-36.6%), noncredible test performance (one PVT failure: 32.1%-49.3%; two PVT failures: 18.9%-27.3%), or both (one SVT and one PVT failure: 13.3%-22.4%; one SVT and two PVT failures: 9.7%-13.7%). Machine learning algorithms resulted in generally moderate to weak prediction models, with advantages for the reused sample compared with the independent replication sample. Associations between measures of symptom and performance validity were negligible to small. Conclusions: This study highlights the necessity of including measures of symptom and performance validity in the clinical evaluation of adult ADHD, and demonstrates the difficulty of characterizing the group failing symptom or performance validity assessment.
Affiliation(s)
- Oliver Hirsch
- Department of Psychology, FOM University of Applied Sciences, Siegen, Germany
- Anselm B M Fuermaier
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, University of Groningen, Groningen, The Netherlands
- Oliver Tucha
- Department of Psychiatry and Psychotherapy, University Medical Center Rostock, Rostock, Germany
- Department of Psychology, Maynooth University, National University of Ireland, Maynooth, Ireland
- Björn Albrecht
- Department of Psychology, Clinical Child and Adolescent Psychology, Philipps University Marburg, Marburg, Germany
- Mira-Lynn Chavanon
- Department of Psychology, Clinical Child and Adolescent Psychology, Philipps University Marburg, Marburg, Germany
- Hanna Christiansen
- Department of Psychology, Clinical Child and Adolescent Psychology, Philipps University Marburg, Marburg, Germany
7
Ali S, Elliott L, Biss RK, Abumeeiz M, Brantuo M, Kuzmenka P, Odenigbo P, Erdodi LA. The BNT-15 provides an accurate measure of English proficiency in cognitively intact bilinguals - a study in cross-cultural assessment. Appl Neuropsychol Adult 2022;29:351-363. PMID: 32449371. DOI: 10.1080/23279095.2020.1760277.
Abstract
This study was designed to replicate earlier reports of the utility of the Boston Naming Test - Short Form (BNT-15) as an index of limited English proficiency (LEP). Twenty-eight English-Arabic bilingual student volunteers were administered the BNT-15 as part of a brief battery of cognitive tests. The majority (23) were women, and half had LEP. Mean age was 21.1 years. The BNT-15 was an excellent psychometric marker of LEP status (area under the curve: .990-.995). Participants with LEP underperformed on several cognitive measures (verbal comprehension, visuomotor processing speed, single word reading, and performance validity tests). Although no participant with LEP failed the accuracy cutoff on the Word Choice Test, 35.7% of them failed the time cutoff. Overall, LEP was associated with an increased risk of failing performance validity tests. Previously published BNT-15 validity cutoffs had unacceptably low specificity (.33-.52) among participants with LEP. The BNT-15 has the potential to serve as a quick and effective objective measure of LEP. Students with LEP may need academic accommodations to compensate for slower test completion time. Likewise, LEP status should be considered for exemption from failing performance validity tests to protect against false positive errors.
Affiliation(s)
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, Canada
- Lauren Elliott
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Renee K Biss
- Department of Psychology, University of Windsor, Windsor, Canada
- Mustafa Abumeeiz
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, Canada
- Paula Odenigbo
- Department of Psychology, University of Windsor, Windsor, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
8
Scores in Self-Report Questionnaires Assessing Adult ADHD Can Be Influenced by Negative Response Bias but Are Unrelated to Performance on Executive Function and Attention Tests. Psychol Inj Law 2022. DOI: 10.1007/s12207-022-09448-y.
Abstract
Self-report questionnaires are in widespread use in the assessment of adults with suspected attention-deficit/hyperactivity disorder (ADHD). Notwithstanding the high degree of validity these questionnaires are considered to possess, their stand-alone use in assessment for adult ADHD may result in false-positive diagnoses due to the risk of negative response bias. Most of the self-report questionnaires in typical use are based on the diagnostic systems DSM-5 or ICD-10. From a neuropsychological point of view, however, testing of various executive function abilities and attentional performance is important in the assessment of adult ADHD. The present study (N = 211) found no evidence linking executive function (working memory and inhibitory processes) and attentional performance (processing speed) to the results of a self-report questionnaire, the ADHS-LE. The number of failures on the three symptom or performance validity tests (SVTs/PVTs) used provided the sole, and significant, explanation for the response behavior reported on the ADHS-LE. Of these three SVTs/PVTs (the German version of the Structured Inventory of Malingered Symptomatology, SIMS; the reliable digit span; and the standard deviation of simple reaction time), only the SIMS was found to be a significant predictor variable. In the clinical context of this study, 32.6% of subjects produced at least one invalid SVT/PVT result. A more conservative criterion (failure on at least two of the three SVTs/PVTs before a presentation is deemed feigned) reduced the proportion of participants generating invalid values to 5%.
9
Motor Reaction Times as an Embedded Measure of Performance Validity: a Study with a Sample of Austrian Early Retirement Claimants. Psychol Inj Law 2021. DOI: 10.1007/s12207-021-09431-z.
Abstract
Among embedded measures of performance validity, reaction time parameters appear to be less common; however, their potential may be underestimated. In the German-speaking countries, reaction time is often examined using the Alertness subtest of the Test of Attention Performance (TAP). Several previous studies have examined its suitability for validity assessment. The current study examined a variety of reaction time parameters of the TAP Alertness subtest in a sample of 266 Austrian civil forensic patients. Classification results from the Word Memory Test (WMT) were used as an external indicator to distinguish between valid and invalid symptom presentations. The WMT fail group showed slower reaction times, and greater intraindividual variation across trials, than the WMT pass group. Receiver operating characteristic analyses revealed areas under the curve of .775-.804. Logistic regression models indicated that intraindividual variation of motor reaction time with warning sound was the best predictor of invalid test performance. Suggested cut scores yielded a sensitivity of .62 at a specificity of .90, or .45 at .95 when the accepted false-positive rate was set lower. The results encourage the use of the Alertness subtest as an embedded measure of performance validity.
10
Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021;49:179-213. PMID: 34420986. DOI: 10.3233/nre-218020.
Abstract
Objective: This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and to explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. Method: Archival data were collected from 167 patients (52.4% male; mean age = 39.7 years) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. Results: MV-PVTs had superior classification accuracy compared with univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False-positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. Conclusions: Concerns about elevated FPR in MV-PVTs are unsubstantiated; in fact, MV-PVTs are psychometrically superior to their individual components. Instrumentation artifacts are endemic to PVTs and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical, models. As the number and severity of embedded PVT failures accumulate, assessors must consider the possibility of non-credible presentation and its clinical implications for neurorehabilitation.
11
Sirianni CD, Abeare CA, Ali S, Razvi P, Kennedy A, Pyne SR, Erdodi LA. The V-5 provides quick, accurate and cross-culturally valid measures of psychiatric symptoms. Psychiatry Res 2021;298:113651. PMID: 33618234. DOI: 10.1016/j.psychres.2020.113651.
Abstract
This study was designed to cross-validate the V-5, a quick psychiatric screener, across administration formats and levels of examinee acculturation. The V-5 was administered twice (once at the beginning and once at the end of the testing session) to three samples (N = 277) with varying levels of symptom severity and English language proficiency and varying types of administration, alongside traditional self-reported symptom inventories as criterion measures. The highest test-retest reliability was observed on the Depression (.84) and Pain (.85) scales. The V-5 was sensitive to the variability in symptom severity. Classification accuracy was driven by the base rate of the target construct and was invariant across administration format (in-person or online) and level of English proficiency. The V-5 demonstrated promise as a cross-culturally robust screening instrument that is sensitive to change over time, lends itself to online administration, and is suitable for examinees with limited English proficiency.
Affiliation(s)
- Sami Ali
- School of Social Work, University of Windsor, Windsor ON, Canada
- Parveen Razvi
- Faculty of Nursing, University of Windsor, Windsor ON, Canada
- Arianna Kennedy
- School of Social Work, University of Windsor, Windsor ON, Canada
- Sadie R Pyne
- Learning Disability Association of Windsor-Essex, Windsor ON, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor ON, Canada
12
Cutler L, Abeare CA, Messa I, Holcomb M, Erdodi LA. This will only take a minute: Time cutoffs are superior to accuracy cutoffs on the forced choice recognition trial of the Hopkins Verbal Learning Test - Revised. Appl Neuropsychol Adult 2021;29:1425-1439. PMID: 33631077. DOI: 10.1080/23279095.2021.1884555.
Abstract
Objective: This study was designed to evaluate the classification accuracy of the recently introduced forced-choice recognition trial of the Hopkins Verbal Learning Test - Revised (FCRHVLT-R) as a performance validity test (PVT) in a clinical sample. Time-to-completion (T2C) for the FCRHVLT-R was also examined. Method: Forty-three students were assigned to either the control or the experimental malingering (expMAL) condition. Archival data were collected from 52 adults clinically referred for neuropsychological assessment. Invalid performance was defined using expMAL status, two free-standing PVTs, and two validity composites. Results: Among students, FCRHVLT-R ≤11 or T2C ≥45 s was specific (0.86-0.93) to invalid performance. Among patients, FCRHVLT-R ≤11 was specific (0.94-1.00) but relatively insensitive (0.38-0.60) to non-credible responding. T2C ≥35 s produced notably higher sensitivity (0.71-0.89) but variable specificity (0.83-0.96). The T2C achieved superior overall correct classification (81-86%) compared with the accuracy score (68-77%). The FCRHVLT-R provided incremental utility in performance validity assessment compared with previously introduced validity cutoffs on Recognition Discrimination. Conclusions: Combined with T2C, the FCRHVLT-R has the potential to function as a quick, inexpensive, and effective embedded PVT. The time cutoff effectively attenuated the low ceiling of the accuracy scores, increasing sensitivity by 19%. Replication in larger and more geographically and demographically diverse samples is needed before the FCRHVLT-R can be endorsed for routine clinical application.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Isabelle Messa
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
13
Abeare CA, Hurtubise JL, Cutler L, Sirianni C, Brantuo M, Makhzoum N, Erdodi LA. Introducing a forced choice recognition trial to the Hopkins Verbal Learning Test - Revised. Clin Neuropsychol 2020;35:1442-1470. DOI: 10.1080/13854046.2020.1779348.
Affiliation(s)
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nadeen Makhzoum
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
14
Ovsiew GP, Resch ZJ, Nayar K, Williams CP, Soble JR. Not so fast! Limitations of processing speed and working memory indices as embedded performance validity tests in a mixed neuropsychiatric sample. J Clin Exp Neuropsychol 2020;42:473-484. DOI: 10.1080/13803395.2020.1758635.
Affiliation(s)
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kritika Nayar
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychiatry and Behavioral Sciences, Northwestern Feinberg School of Medicine, Chicago, IL, USA
- Christopher P. Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
15
Bailey KC, Webber TA, Phillips JI, Kraemer LDR, Marceaux JC, Soble JR. When Time is of the Essence: Preliminary Findings for a Quick Administration of the Dot Counting Test. Arch Clin Neuropsychol 2019;36:403-413. DOI: 10.1093/arclin/acz058.
Abstract
Objective
Performance validity research has emphasized the need for briefer measures and, more recently, abbreviated versions of established free-standing tests to minimize neuropsychological evaluation costs/time burden. This study examined the accuracy of multiple abbreviated versions of the Dot Counting Test (“quick” DCT) for detecting invalid performance in isolation and in combination with the Test of Memory Malingering Trial 1 (TOMMT1).
Method
Data from a mixed clinical sample of 107 veterans (80 valid/27 invalid per independent validity measures and structured criteria) were included in this cross-sectional study; 47% of valid participants were cognitively impaired. Sensitivities/specificities of various 6- and 4-card DCT combinations were calculated and compared to the full, 12-card DCT. Combined models with the most accurate 6- and 4-card combinations and TOMMT1 were then examined.
Results
Receiver operating characteristic curve analyses were significant for all 6- and 4-card DCT combinations, with areas under the curve of .868-.897. The best 6-card combination (cards 1-3-5-8-11-12) had 56% sensitivity/90% specificity (E-score cut-off ≥14.5), and the best 4-card combination (cards 3-4-8-11) had 63% sensitivity/94% specificity (cut-off ≥16.75). The full DCT had 70% sensitivity/90% specificity (cut-off ≥16.00). Logistic regression revealed 95% classification accuracy when the 6-card or 4-card "quick" combinations were combined with TOMMT1, with the DCT combinations and TOMMT1 both emerging as significant predictors.
Conclusions
Abbreviated DCT versions utilizing 6- and 4-card combinations yielded comparable sensitivity/specificity as the full DCT. When these “quick” DCT combinations were further combined with an abbreviated memory-based performance validity test (i.e., TOMMT1), overall classification accuracy for identifying invalid performance was 95%.
Affiliation(s)
- K Chase Bailey
- Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Troy A Webber
- Rehabilitation and Extended Care Line, Michael E. DeBakey VA Medical Center, Houston, TX 77030, USA
- Jacob I Phillips
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX 78229, USA
- Lindsay D R Kraemer
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX 78229, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX 78229, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL 60612, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL 60612, USA
16
Mahoney EJ, Kapur N, Osmon DC, Hannula DE. Eye Tracking as a Tool for the Detection of Simulated Memory Impairment. J Appl Res Mem Cogn 2018. DOI: 10.1016/j.jarmac.2018.05.004.