1. Webber TA, Lorkiewicz S, Woods SP, Miller B, Soble JR. Does neuropsychological intraindividual variability index cognitive dysfunction, an invalid presentation, or both? Preliminary findings from a mixed clinical older adult veteran sample. J Clin Exp Neuropsychol 2024:1-22. [PMID: 39120111] [DOI: 10.1080/13803395.2024.2388096]
Abstract
INTRODUCTION Intraindividual variability across a battery of neuropsychological tests (IIV-dispersion) can reflect normal variation in scores or arise from cognitive impairment. An alternate interpretation is that IIV-dispersion reflects reduced engagement/invalid test data, although extant research addressing this interpretation is limited. METHOD We used a sample of 97 older adult (mean age: 69.92 years), predominantly White (57%) or Black/African American (34%), and predominantly cisgender male (87%) veterans. Examinees completed a comprehensive neuropsychological battery, including measures of reduced engagement/invalid test data (a symptom validity test [SVT] and multiple performance validity tests [PVTs]), as part of a clinical evaluation. IIV-dispersion was indexed using the coefficient of variance (CoV). We tested 1) the relationships of raw scores and "failures" on the SVT/PVTs with IIV-dispersion, 2) the relationship between IIV-dispersion and validity/neurocognitive disorder status, and 3) whether IIV-dispersion discriminated the validity/neurocognitive disorder groups using receiver operating characteristic (ROC) curves. RESULTS IIV-dispersion was significantly and independently associated with a selection of PVTs, with small to very large effect sizes. Participants with invalid profiles and cognitively impaired participants with valid profiles exhibited medium to large (d = .55-1.09) elevations in IIV-dispersion compared to cognitively unimpaired participants with valid profiles. A non-significant but small to medium (d = .35-.60) elevation in IIV-dispersion was observed for participants with invalid profiles compared to those with a neurocognitive disorder. IIV-dispersion was largely accurate at differentiating participants without a neurocognitive disorder from invalid participants and those with a neurocognitive disorder (areas under the curve [AUCs] = .69-.83), while accuracy was low at differentiating invalid participants from those with a neurocognitive disorder (AUCs = .58-.65). CONCLUSIONS These preliminary data suggest IIV-dispersion may be sensitive to both neurocognitive disorders and compromised engagement. Clinicians and researchers should exercise due diligence and consider test validity (e.g., PVTs, behavioral signs of engagement) as an alternate explanation before interpreting intraindividual variability as an indicator of cognitive impairment.
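The two quantities at the heart of this design — a coefficient-of-variance dispersion index and the AUC of an ROC analysis — can be sketched in a few lines. A minimal stdlib-only illustration; the T-score profiles below are invented for demonstration and are not study data:

```python
import math
from itertools import product

def cov_dispersion(scores):
    """IIV-dispersion as a coefficient of variance (CoV): the
    within-person SD of battery scores divided by the within-person mean."""
    m = sum(scores) / len(scores)
    sd = math.sqrt(sum((s - m) ** 2 for s in scores) / (len(scores) - 1))
    return sd / m

def auc(group_a, group_b):
    """Area under the ROC curve via its rank interpretation: the
    probability that a randomly drawn member of group_a scores higher
    than a randomly drawn member of group_b (ties count as 0.5)."""
    wins = sum((a > b) + 0.5 * (a == b) for a, b in product(group_a, group_b))
    return wins / (len(group_a) * len(group_b))

# Hypothetical T-score profiles
flat = [48, 50, 52, 49, 51]       # consistent performance, low dispersion
scattered = [30, 55, 62, 28, 50]  # variable performance, high dispersion
print(cov_dispersion(flat) < cov_dispersion(scattered))
```

An AUC of .5 means the dispersion index cannot separate two groups at all, and 1.0 means perfect separation, which is why the .69-.83 range above reads as "largely accurate" and .58-.65 as low.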
Affiliation(s)
- Troy A Webber
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Department of Psychiatry & Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
- Department of Psychology, University of Houston, Houston, TX, USA
- Sara Lorkiewicz
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Brian Miller
- Department of Psychiatry & Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
- Neurology Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
2. Brown CC, Stewart-Willis JJ. A preliminary investigation of the utility of the Word Memory Test Immediate Recognition trial as a screener for noncredible performance. Appl Neuropsychol Adult 2024:1-5. [PMID: 39099003] [DOI: 10.1080/23279095.2024.2387233]
Abstract
The assessment of performance validity is an important consideration in the interpretation of neuropsychological data. However, commonly used performance validity tests such as the Test of Memory Malingering (TOMM) and Word Memory Test (WMT) have lengthy administration times (20-30 minutes). Alternatively, utilizing a performance validity screener (e.g., the TOMM T1 or TOMMe10) has proven to be an effective method of assessing performance validity while conserving time. The present study investigates the use of WMT Immediate Recognition (IR) trial scores as a screening measure for performance validity in an archival mTBI polytrauma sample (n = 48). Results show that the WMT IR demonstrates a high degree of accuracy in predicting WMT Delayed Recognition (DR) trial performance across a range of base rates, suggesting that the WMT IR is a useful screening measure for noncredible performance. Clinical implications and selection of an optimal cutoff are discussed.
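The emphasis on accuracy "across a range of base rates" reflects Bayes' theorem: a screener's predictive values depend on the local prevalence of noncredible performance, not just on its sensitivity and specificity. A small sketch; the sensitivity/specificity values are illustrative, not the study's:

```python
def predictive_values(sensitivity, specificity, base_rate):
    """Positive and negative predictive values of a screener at a given
    base rate of noncredible performance (straight Bayes' theorem)."""
    p = base_rate
    ppv = sensitivity * p / (sensitivity * p + (1 - specificity) * (1 - p))
    npv = specificity * (1 - p) / (specificity * (1 - p) + (1 - sensitivity) * p)
    return ppv, npv

# Illustrative screener with sensitivity .85 and specificity .90:
# PPV rises and NPV falls as the base rate of noncredible performance grows
for rate in (0.1, 0.3, 0.5):
    ppv, npv = predictive_values(0.85, 0.90, rate)
    print(f"base rate {rate:.0%}: PPV={ppv:.2f}, NPV={npv:.2f}")
```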
Affiliation(s)
- C C Brown
- Neuropsychology Department, Bay Pines Veterans' Affairs Health Care System, Bay Pines, FL, USA
- J J Stewart-Willis
- Neuropsychology Department, Bay Pines Veterans' Affairs Health Care System, Bay Pines, FL, USA
3. O'Connor V, Shura R, Armistead-Jehle P, Cooper DB. Neuropsychological Evaluation in Traumatic Brain Injury. Phys Med Rehabil Clin N Am 2024; 35:593-605. [PMID: 38945653] [DOI: 10.1016/j.pmr.2024.02.010]
Abstract
Neuropsychological evaluations can be helpful in the aftermath of traumatic brain injury. Cognitive functioning is assessed using standardized assessment tools and by comparing an individual's test scores to normative data. These evaluations examine objective cognitive functioning as well as other factors that have been shown to influence performance on cognitive tests (e.g., psychiatric conditions, sleep) in an attempt to answer a specific question from referring providers. Referral questions may focus on the extent of impairment, the trajectory of recovery, or the ability to return to work, sport, or other previous activities.
Affiliation(s)
- Victoria O'Connor
- Department of Veterans Affairs, W. G. (Bill) Hefner VA Healthcare System, 1601 Brenner Avenue (11M), Salisbury, NC 28144, USA; Veterans Integrated Service Networks (VISN)-6 Mid-Atlantic Mental Illness, Research Education and Clinical Center (MIRECC), Durham, NC, USA; Wake Forest School of Medicine, Winston-Salem, NC, USA.
- Robert Shura
- Department of Veterans Affairs, W. G. (Bill) Hefner VA Healthcare System, 1601 Brenner Avenue (11M), Salisbury, NC 28144, USA; Veterans Integrated Service Networks (VISN)-6 Mid-Atlantic Mental Illness, Research Education and Clinical Center (MIRECC), Durham, NC, USA; Wake Forest School of Medicine, Winston-Salem, NC, USA; Via College of Osteopathic Medicine, Blacksburg, VA, USA
- Patrick Armistead-Jehle
- Department of Veterans Affairs, Concussion Clinic, Munson Army Health Center, 550 Pope Avenue, Fort Leavenworth, KS 66027, USA
- Douglas B Cooper
- Department of Psychiatry, University of Texas Health Science Center (UT-Health), South Texas VA Healthcare System, San Antonio Polytrauma Rehabilitation Center, 7400 Merton Minter Boulevard, San Antonio, TX 78229, USA; Department of Rehabilitation Medicine, University of Texas Health Science Center (UT-Health), South Texas VA Healthcare System, San Antonio Polytrauma Rehabilitation Center, 7400 Merton Minter Boulevard, San Antonio, TX 78229, USA
4. Finley JCA, Leese MI, Roseberry JE, Hill SK. Multivariable utility of the Memory Integrated Language and Making Change Test. Appl Neuropsychol Adult 2024:1-8. [PMID: 39073594] [DOI: 10.1080/23279095.2024.2385439]
Abstract
Recent reports indicate that the Memory Integrated Language Test (MIL) and Making Change Test Abbreviated Index (MCT-AI), two web-based performance validity tests (PVTs), have good sensitivity and specificity when used independently. This study investigated whether using these PVTs together could improve the detection of invalid performance in a mixed neuropsychiatric sample. Participants were 129 adult outpatients who underwent a neuropsychological evaluation and were classified into valid (n = 104) or invalid (n = 25) performance groups based on several commonly used PVTs. Using cut scores of ≤41 on the MIL and ≥1.05 on the MCT-AI together enhanced classification accuracy, yielding an area under the curve of .84 (95% CI: .75, .93). Compared to using the MIL and MCT-AI independently, the combined use increased sensitivity from .10-.31 to .70 while maintaining ≥.90 specificity. Findings also indicated that failing either the MIL or MCT-AI was associated with somewhat lower cognitive test scores, but failing both was associated with markedly lower scores. Overall, using the MIL and MCT-AI together may be an effective way to identify invalid test performance during a neuropsychological evaluation. Furthermore, pairing these tests is consistent with current practice guidelines to include multiple PVTs in a neuropsychological test battery.
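Chaining the two reported cut scores into a single decision rule is straightforward to sketch. In the fragment below, the "flag when either PVT is failed" combination rule and the example scores are assumptions for illustration only; the paper's exact aggregation may differ:

```python
MIL_CUT = 41      # fail if MIL score <= 41 (cut score from the abstract)
MCT_AI_CUT = 1.05 # fail if MCT-AI score >= 1.05 (cut score from the abstract)

def pvt_failures(mil, mct_ai):
    """Count how many of the two PVTs are failed."""
    return int(mil <= MIL_CUT) + int(mct_ai >= MCT_AI_CUT)

def flag_invalid(mil, mct_ai):
    """Assumed combination rule: flag a profile as invalid
    when either PVT is failed."""
    return pvt_failures(mil, mct_ai) >= 1

# Hypothetical examinees: (MIL, MCT-AI)
sample = [(55, 0.80), (40, 1.20), (60, 0.95), (38, 0.90)]
flags = [flag_invalid(m, a) for m, a in sample]
```

Counting failures separately (rather than returning only a boolean) preserves the abstract's distinction between failing one PVT (somewhat lower cognitive scores) and failing both (markedly lower scores).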
Affiliation(s)
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Mira I Leese
- Department of Psychology, Rosalind Franklin University of Medicine and Science, Chicago, IL, USA
- S Kristian Hill
- Department of Psychology, Rosalind Franklin University of Medicine and Science, Chicago, IL, USA
5. Roor JJ, Dandachi-FitzGerald B, Peters MJV, Ponds RWHM. Providing a brief corrective statement does not improve test performance in patients invalidating testing: A multisite, single-blind randomized controlled trial. Clin Neuropsychol 2024:1-23. [PMID: 39056491] [DOI: 10.1080/13854046.2024.2382340]
Abstract
Objective: Performance below the actual abilities of the examinee can be measured using performance validity tests (PVTs), and PVT failure negatively impacts the quality of a neuropsychological assessment. In our study, we addressed this issue by providing a brief corrective statement regarding invalidity to improve test-taking behavior. Methods: This study is a multisite, single-blind randomized controlled trial in a consecutive sample of clinically referred adult patients (N = 196) in a general hospital setting. Patients who failed a PVT (n = 71) were randomly allocated to a corrective statement approach (CS; n = 39), in which a brief verbal corrective statement was given by the technician, or received no corrective statement upon indications of invalid performance (NO-CS; n = 32). Both groups were then administered the same set of repeated and newly administered tests. Results: There were no group differences (CS vs. NO-CS) on either the repeated or the newly administered PVTs and standard cognitive tests. Furthermore, participants performing invalidly benefited significantly less from repeated test administration than participants performing validly. Conclusions: A brief within-session corrective statement intended to address PVT failure and improve test-taking behavior did not improve subsequent test performance. These results suggest that a brief verbal corrective statement has limited value for influencing performance below one's best capabilities, and they highlight the need for research to identify more effective approaches to enhancing patients' test-taking behavior. Ultimately, such efforts are critical to ensuring accurate diagnoses and effective treatment recommendations for patients.
Affiliation(s)
- Jeroen J Roor
- Department of Medical Psychology, VieCuri Medical Center, Venlo, the Netherlands
- School for Mental Health and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Brechje Dandachi-FitzGerald
- Faculty of Psychology, Open University Heerlen, Heerlen, the Netherlands
- Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Maarten J V Peters
- Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Rudolf W H M Ponds
- School for Mental Health and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Department of Medical Psychology, Amsterdam University Medical Centres, location VU, Amsterdam, the Netherlands
6. Lavigne S, Rios A, Davis JJ. Does generative artificial intelligence pose a risk to performance validity test security? Clin Neuropsychol 2024:1-14. [PMID: 39034486] [DOI: 10.1080/13854046.2024.2379023]
Abstract
OBJECTIVE We examined the performance validity test (PVT) security risk posed by artificial intelligence (AI) chatbots by asking questions about neuropsychological evaluation and PVTs on two popular generative AI sites. METHOD In 2023 and 2024, multiple questions were posed to ChatGPT-3 and Bard (now Gemini). One set started generally and refined follow-up questions based on AI responses. A second set asked how to feign, fake, or cheat. Responses were aggregated and independently rated for inaccuracy and threat. Responses not identified as inaccurate were assigned a four-level threat rating (no, mild, moderate, or high threat). Combined inaccuracy and threat ratings were examined cross-sectionally and longitudinally. RESULTS Combined inaccuracy rating percentages were 35 to 42% in 2023 and 16 to 28% in 2024. Combined moderate/high threat ratings were observed in 24 to 41% of responses in 2023 and in 17 to 31% of responses in 2024. More ChatGPT-3 responses were rated moderate or high threat than Bard/Gemini responses. Over time, ChatGPT-3 responses became more accurate at a similar threat level, whereas Bard/Gemini responses did not change in accuracy or threat. Responses to "how to feign" queries demonstrated ethical opposition to feigning, and responses to similar queries in 2024 showed even stronger opposition. CONCLUSIONS AI chatbots are a threat to PVT security: a substantial proportion of responses were rated moderate or high threat. Although ethical opposition to providing feigning guidance increased over time, the natural language interface and the volume of AI chatbot responses represent a potentially greater threat than traditional search engines.
Affiliation(s)
- Shannon Lavigne
- Department of Neurology, The University of Texas Health Science Center at San Antonio, TX, USA
- Anthony Rios
- Department of Information Systems and Cyber Security, The University of Texas at San Antonio, TX, USA
- Jeremy J Davis
- Department of Neurology, The University of Texas Health Science Center at San Antonio, TX, USA
7. Verveen A, Verfaillie SCJ, Visser D, Koch DW, Verwijk E, Geurtsen GJ, Roor J, Appelman B, Boellaard R, van Heugten CM, Horn J, Hulst HE, de Jong MD, Kuut TA, van der Maaden T, van Os YMG, Prins M, Visser-Meily JMA, van Vugt M, van den Wijngaard CC, Nieuwkerk PT, van Berckel B, Tolboom N, Knoop H. Neuropsychological functioning after COVID-19: minor differences between individuals with and without persistent complaints after SARS-CoV-2 infection. Clin Neuropsychol 2024:1-16. [PMID: 39016843] [DOI: 10.1080/13854046.2024.2379508]
Abstract
Objective: It is unclear how self-reported severe fatigue and difficulty concentrating after SARS-CoV-2 infection relate to objective neuropsychological functioning. The study aimed to compare neuropsychological functioning between individuals with and without these persistent subjective complaints. Method: Individuals with and without persistent severe fatigue (Checklist Individual Strength (CIS) fatigue ≥ 35) and difficulty concentrating (CIS concentration ≥ 18) at least 3 months after SARS-CoV-2 infection were included. Neuropsychological assessment covered overall cognitive functioning, attention, processing speed, executive functioning, memory, visuo-construction, and language (18 tests). T-scores 1.5 SD below population normative data (T ≤ 35) were classified as "impaired". Results: 230 participants were included in the study, of whom 22 were excluded from the analysis due to invalid performance. Of the participants included in the analysis, 111 reported persistent complaints of severe fatigue and difficulty concentrating and 97 did not. Median age was 54 years, 59% (n = 126) were female, and participants were assessed a median of 23 months after first infection (IQR: 16-28). In bivariate logistic regression, individuals with persistent complaints had an increased likelihood of slower processing speed performance on Stroop word reading (OR = 2.45, 95%CI = 1.02-5.84) compared to those without persistent complaints. Demographic and clinical covariates (e.g., hospitalization) did not influence this association. With linear regression techniques, persistent complaints were associated with lower t-scores on the D2 CP, TMT B, and TMT B|A. There were no differences in performance on the other neuropsychological tests. Conclusions: Individuals with subjective severe fatigue and difficulty concentrating after COVID-19 do not typically demonstrate cognitive impairment on extensive neuropsychological testing.
Affiliation(s)
- Anouk Verveen
- Department of Medical Psychology, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Public Health, Amsterdam, The Netherlands
- Sander C J Verfaillie
- Department of Medical Psychology, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Radiology & Nuclear Medicine, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Amsterdam Neuroscience, Amsterdam, The Netherlands
- GGz inGeest Specialized Mental Health Care, Amsterdam, The Netherlands
- Denise Visser
- Radiology & Nuclear Medicine, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Amsterdam Neuroscience, Amsterdam, The Netherlands
- Dook W Koch
- Department of Medical Psychology, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Department of Radiology and Nuclear Medicine, Division of Imaging and Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- Esmée Verwijk
- Department of Medical Psychology, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Neuroscience, Amsterdam, The Netherlands
- Psychology department, Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Gert J Geurtsen
- Department of Medical Psychology, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Jeroen Roor
- Department of Medical Psychology, VieCuri Medical Center, Venlo, The Netherlands
- School for Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Brent Appelman
- Center for Experimental and Molecular Medicine, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Ronald Boellaard
- Radiology & Nuclear Medicine, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Amsterdam Neuroscience, Amsterdam, The Netherlands
- Caroline M van Heugten
- Department of Neuropsychology and Psychopharmacology, and Limburg Brain Injury Center, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Janneke Horn
- Amsterdam Neuroscience, Amsterdam, The Netherlands
- Intensive Care, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Hanneke E Hulst
- Department of Medical, Health and Neuropsychology, Leiden University, Leiden, The Netherlands
- Menno D de Jong
- Infectious Diseases, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Medical Microbiology & Infection Prevention, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Tanja A Kuut
- Department of Medical Psychology, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Public Health, Amsterdam, The Netherlands
- Tessa van der Maaden
- Center for Infectious Disease Control, National Institute for Public Health and the Environment (RIVM), Bilthoven, The Netherlands
- Yvonne M G van Os
- Occupational Health Office, Department of Human Resources, University Medical Center Utrecht, Utrecht, The Netherlands
- Maria Prins
- Infectious Diseases, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Infectious Diseases, Amsterdam Institute for Infection and Immunity, Amsterdam, The Netherlands
- Department of Infectious Diseases, Public Health Service of Amsterdam, Amsterdam, The Netherlands
- Johanna M A Visser-Meily
- Department of Rehabilitation, Physical Therapy Science and Sports, University Medical Centre Utrecht, Utrecht, The Netherlands
- Michele van Vugt
- Internal Medicine, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Cees C van den Wijngaard
- Center for Infectious Disease Control, National Institute for Public Health and the Environment (RIVM), Bilthoven, The Netherlands
- Pythia T Nieuwkerk
- Department of Medical Psychology, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Public Health, Amsterdam, The Netherlands
- Bart van Berckel
- Radiology & Nuclear Medicine, Amsterdam UMC location Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Amsterdam Neuroscience, Amsterdam, The Netherlands
- Nelleke Tolboom
- Department of Radiology and Nuclear Medicine, Division of Imaging and Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- Hans Knoop
- Department of Medical Psychology, Amsterdam UMC location University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Public Health, Amsterdam, The Netherlands
8. Weymann T, Achenbach J, Guevara JE, Bassler M, Karst M, Lambrecht A. EMG measured reaction time as a predictor of invalid symptom report in psychosomatic patients. Clin Neuropsychol 2024; 38:1210-1226. [PMID: 37917133] [DOI: 10.1080/13854046.2023.2276480]
Abstract
Background: Symptom validity tests (SVTs) and performance validity tests (PVTs) are important tools in sociomedical assessments, especially in the psychosomatic context, where diagnoses depend mainly on clinical observation and self-report measures. This study examined the relationship between reaction times (RTs) and scores on the Structured Inventory of Malingered Symptomatology (SIMS). It was proposed that slower RTs and larger standard deviations of reaction times (RTSDs) would be observed in participants who scored above the SIMS cut-off (>16). Methods: Direct surface electromyography (EMG) was used to capture RTs during a computer-based RT test in 152 inpatients from a psychosomatic rehabilitation clinic in Germany. Correlation analyses and Mann-Whitney U tests were used to examine the relationship between RTs and SIMS scores and to assess the potential impact of covariates such as demographics, medical history, and vocational challenges on RTs; to this end, groups dichotomized on each potential covariate were compared. Results: Significantly longer RTs and larger RTSDs were found in participants who scored above the SIMS cut-off. Current treatment with psychopharmacological medication, diagnosis of depression, and age had no significant influence on the RT measures. However, work-related problems had a significant impact on RTSDs. Conclusion: There was a significant relationship between longer and more inconsistent RTs and indicators of exaggerated or feigned symptom report on the SIMS in psychosomatic rehabilitation inpatients. Findings from this study provide a basis for future research developing a new RT-based PVT.
Affiliation(s)
- Thorben Weymann
- Department of Psychosomatic Medicine, Rehazentrum Oberharz, Clausthal-Zellerfeld, Germany
- Johannes Achenbach
- Department of Anesthesiology, Intensive Care Medicine, Emergency Medicine and Pain Medicine, KRH Klinikum Nordstadt, Hannover, Germany
- Department of Anesthesiology and Intensive Care Medicine, Pain Clinic, Hannover Medical School, Hannover, Germany
- Jasmin E Guevara
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Markus Bassler
- Department of Economics and Social Sciences, University of Applied Science Nordhausen, Nordhausen, Germany
- Matthias Karst
- Department of Anesthesiology and Intensive Care Medicine, Pain Clinic, Hannover Medical School, Hannover, Germany
- Alexandra Lambrecht
- Department of Psychosomatic Medicine, Rehazentrum Oberharz, Clausthal-Zellerfeld, Germany
9. Dandachi-FitzGerald B, Merckelbach H, Merten T. Cry for help as a root cause of poor symptom validity: A critical note. Appl Neuropsychol Adult 2024; 31:527-532. [PMID: 35196463] [DOI: 10.1080/23279095.2022.2040025]
Abstract
When patients fail symptom validity tests (SVTs) and/or performance validity tests (PVTs), their self-reported symptoms and test profiles are unreliable and cannot be taken at face value. There are many well-established causes of poor symptom validity, and malingering is only one of them. Some authors have proposed that a cry for help may underlie poor symptom validity. In this commentary, we argue that cry for help is (1) a metaphorical concept that is (2) difficult to operationalize and, at present, (3) impossible to falsify. We conclude that clinicians and forensic experts should not invoke cry for help as an explanation for poor symptom validity. To encourage conceptual clarity, we propose a tentative framework for explaining poor symptom validity.
Affiliation(s)
- Harald Merckelbach
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Thomas Merten
- Vivantes Klinikum im Friedrichshain, Berlin, Germany
10. Liu BC, Iverson GL, Cook NE, Schatz P, Berkner P, Gaudet CE. The prevalence and correlates of scores falling below ImPACT embedded validity indicators among adolescent student athletes. Clin Neuropsychol 2024; 38:1175-1192. [PMID: 38233364] [DOI: 10.1080/13854046.2023.2287777]
Abstract
Objective: Valid performance on preseason baseline neurocognitive testing is essential for accurate comparison between preseason and post-concussion test results. Immediate Post-Concussion and Cognitive Testing (ImPACT) is commonly used to measure baseline neurocognitive function in athletes. We examined the prevalence of invalid performance on ImPACT baseline testing and identified correlates of invalid performance. Method: The sample included 66,998 adolescents (ages 14-18, M = 15.51 years, SD = 1.22) who completed ImPACT baseline tests between 2009 and 2019. Invalid performance was determined by the embedded validity indicators (EVIs). Associations between invalid performance, demographic characteristics, and health conditions were assessed using chi-square tests and odds ratios (ORs). Results: Overall, 7.2% of adolescents had baseline tests identified as invalid by one or more of the EVIs. Individual validity indicators classified between 0.5% and 3.7% of tests as invalid. Higher frequencies of invalid scores were observed among youth with neurodevelopmental, academic, and medical conditions. Youth who reported having learning disabilities (n = 3126), receiving special education (n = 3563), or problems with attention-deficit/hyperactivity disorder (ADHD; n = 5104) obtained invalid baselines at frequencies of 16.4%, 16.0%, and 11.1%, respectively. Moreover, youth who reported receiving treatment for a substance use disorder (n = 311) or epilepsy (n = 718) obtained invalid baselines at frequencies of 17.0% and 11.1%, respectively. Conclusions: The base rate of invalid performance on ImPACT's EVIs was approximately 7%, consistent with prior research. Adolescents self-reporting neurodevelopmental conditions, academic difficulties, or a history of treatment for medical conditions obtained invalid baseline tests at higher frequencies. More research is needed to better understand invalid scores in youth with pre-existing conditions.
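The odds ratios reported above come from 2x2 tables crossing a self-reported condition with baseline validity status. A minimal sketch; the counts are hypothetical, chosen only to echo the reported frequencies (roughly 16% invalid with a given condition vs. roughly 7% without):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = condition present, invalid baseline
    b = condition present, valid baseline
    c = condition absent, invalid baseline
    d = condition absent, valid baseline"""
    return (a / b) / (c / d)

# Hypothetical counts (not the study's data): per 1,000 youth,
# 164 invalid baselines with the condition vs. 72 without
or_condition = odds_ratio(164, 836, 72, 928)
print(round(or_condition, 2))
```

An odds ratio of 1.0 would mean the condition carries no extra risk of an invalid baseline; values above 1.0 quantify the elevated frequencies the abstract describes.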
Affiliation(s)
- Brian C Liu
- Mass General for Children Sports Concussion Program, Waltham, MA, USA
- Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital, Charlestown, MA, USA
- Grant L Iverson
- Mass General for Children Sports Concussion Program, Waltham, MA, USA
- Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital, Charlestown, MA, USA
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, USA
- Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital and the Schoen Adams Research Institute at Spaulding Rehabilitation, Charlestown, MA, USA
- Nathan E Cook
- Mass General for Children Sports Concussion Program, Waltham, MA, USA
- Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital, Charlestown, MA, USA
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, USA
- Philip Schatz
- Department of Psychology, Saint Joseph's University, Philadelphia, PA, USA
- Paul Berkner
- College of Osteopathic Medicine, University of New England, Biddeford, ME, USA
- Charles E Gaudet
- Mass General for Children Sports Concussion Program, Waltham, MA, USA
- Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital, Charlestown, MA, USA
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, USA
11. Rohling ML, Binder LM, Larrabee GJ, Langhinrichsen-Rohling J. Forced choice test score of p ≤ .20 and failures on ≥ six performance validity tests results in similar Overall Test Battery Means. Clin Neuropsychol 2024; 38:1193-1209. [PMID: 38041021] [DOI: 10.1080/13854046.2023.2284975]
Abstract
Objective: To determine if similar levels of performance on the Overall Test Battery Mean (OTBM) occur at different forced choice test (FCT) p-value score failures. Second, to determine the OTBM levels that are associated with failures at above chance on various performance validity tests (PVTs). Method: OTBMs were computed from archival data obtained from four practices. We calculated each examinee's Estimated Premorbid Global Ability (EPGA) and OTBM. The sample size was 5,103 examinees with 282 (5.5%) of these scoring below chance at p ≤ .20 on at least one FCT. Results: The OTBM associated with a failure at p ≤ .20 was equivalent to the OTBM that was associated with failing 6 or more PVTs at above-chance cutoffs. The mean OTBMs relative to increasingly strict FCT p cutoffs were similar (T scores in the 30s). As expected, there was an inverse relationship between the number of PVTs failed and examinees' OTBMs. Conclusions: The data support the use of p ≤ .20 as the probability level for testing the significance of below chance performance on FCTs. The OTBM can be used to index the influence of invalid performance on outcomes, especially when an examinee scores below chance.
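The below-chance significance testing discussed above rests on the binomial distribution: on a two-alternative forced-choice test, a disengaged examinee guessing at random is expected to score near 50%, so the one-tailed probability of the observed total or fewer correct can be computed directly and compared against the p ≤ .20 level the authors support. A minimal illustrative sketch, not the authors' code (the function name is ours):

```python
from math import comb

def below_chance_p(correct, n_items, p_chance=0.5):
    """One-tailed binomial probability of scoring `correct` or fewer
    on an n-item forced-choice test by guessing alone.
    Illustrative helper only; cutoff interpretation follows the abstract."""
    return sum(comb(n_items, k) * p_chance**k * (1 - p_chance)**(n_items - k)
               for k in range(correct + 1))

# Example: 18/50 correct on a two-choice test is well below chance
p = below_chance_p(18, 50)
flag_below_chance = p <= 0.20  # probability level supported by the study
```

For production use a library routine (e.g. an exact one-sided binomial test) would be preferable to this hand-rolled sum.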
12
Marcopulos BA, Kaufmann P, Patel AC. Forensic neuropsychological assessment. Behav Sci Law 2024;42:265-277. [PMID: 38583136] [DOI: 10.1002/bsl.2656]
Abstract
With its firm establishment as a neuropsychology subspecialty, forensic neuropsychological assessment is integral to many criminal and civil forensic evaluations. In addition to evaluating cognitive deficits, forensic neuropsychologists can provide reliable information regarding symptom magnification, malingering, and other neurocognitive and psychological issues that may impact the outcome of a particular legal case. This article is an overview and introduction to neuropsychological assessment in the forensic mental health context. Major issues impacting the current practice of forensic neuropsychology are summarized, and several examples from case law are highlighted.
Affiliation(s)
- Bernice A Marcopulos: Department of Graduate Psychology, James Madison University, Harrisonburg, Virginia, USA; Department of Psychiatry and Neurobehavioral Sciences, University of Virginia School of Medicine, Charlottesville, Virginia, USA
- Paul Kaufmann: Wayne State University School of Medicine, Detroit, Michigan, USA
- Anisha C Patel: Department of Psychology, East Tennessee State University, Johnson City, Tennessee, USA
13
Ashendorf L, Withrow S, Ward SH, Sullivan SK, Sugarman MA. Decision rules for an abbreviated administration of the Test of Memory Malingering. Appl Neuropsychol Adult 2024;31:382-391. [PMID: 35068279] [DOI: 10.1080/23279095.2022.2026948]
Abstract
The present study investigated abbreviation methods for the Test of Memory Malingering (TOMM) in relation to traditional manual-based test cutoffs and independently derived more stringent cutoffs suggested by recent research (≤48 on Trial 2 or 3). Consecutively referred outpatient U.S. military veterans (n = 260) were seen for neuropsychological evaluation for mild traumatic brain injury or possible attention-deficit/hyperactivity disorder. Performance on TOMM Trial 1 was evaluated, including the total score and errors on the first 10 items (TOMMe10), to determine correspondence and redundancy with Trials 2 and 3. Using the traditional cutoff, valid performance on Trials 2 and 3 was predicted by zero errors on TOMMe10 and by Trial 1 scores greater than 41. Invalid performance was predicted by commission of more than three errors on TOMMe10 and by Trial 1 scores less than 34. For revised TOMM cutoffs, a Trial 1 score above 46 was predictive of a valid score, and a TOMMe10 score of three or more errors or a Trial 1 score below 36 was associated with invalid TOMM performance. Conditional abbreviation of the TOMM is feasible in a vast majority of cases without sacrificing information regarding performance validity. Decision trees are provided to facilitate administration of the three trials.
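The conditional abbreviation logic described above, using the Trial 1 total and errors on the first 10 items (TOMMe10), can be sketched as a simple decision rule. This is an illustrative reconstruction from the cutoffs reported in the abstract, not the authors' published decision trees; the function name and return labels are hypothetical:

```python
def tomm_screen(trial1_total, tomme10_errors, traditional=True):
    """Sketch of conditional TOMM abbreviation.
    traditional=True uses the manual-based cutoffs; traditional=False uses
    the rules tied to the more stringent revised cutoffs (<=48 on Trial 2/3).
    Illustrative only; cutoffs taken from the abstract."""
    if traditional:
        if tomme10_errors == 0 and trial1_total > 41:
            return "likely valid: Trials 2/3 may be abbreviated"
        if tomme10_errors > 3 or trial1_total < 34:
            return "likely invalid"
    else:
        if trial1_total > 46:
            return "likely valid: Trials 2/3 may be abbreviated"
        if tomme10_errors >= 3 or trial1_total < 36:
            return "likely invalid"
    return "indeterminate: administer Trials 2 and 3"
```

In the indeterminate band the examiner continues with the full administration, which is why the authors describe the abbreviation as feasible "in a vast majority of cases" rather than universally.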
Affiliation(s)
- Lee Ashendorf: Mental Health Service Line, VA Central Western Massachusetts, Worcester, MA, USA; Department of Psychiatry, University of Massachusetts Medical School, Worcester, MA, USA
- Susanne Withrow: Behavioral Health Service Line, VA Pittsburgh Healthcare System, Pittsburgh, PA, USA
- Sarah H Ward: Mental Health Service Line, VA Central Western Massachusetts, Worcester, MA, USA; Department of Psychiatry, University of Massachusetts Medical School, Worcester, MA, USA
- Sara K Sullivan: Psychology Service, VA Bedford Healthcare System, Bedford, MA, USA
- Michael A Sugarman: Department of Neurology, Medical University of South Carolina, Charleston, SC, USA
14
Stocks JK, Shields AN, DeBoer AB, Cerny BM, Ogram Buckley CM, Ovsiew GP, Jennette KJ, Resch ZJ, Basurto KS, Song W, Pliskin NH, Soble JR. The impact of visual memory impairment on Victoria Symptom Validity Test performance: A known-groups analysis. Appl Neuropsychol Adult 2024;31:329-338. [PMID: 34985401] [DOI: 10.1080/23279095.2021.2021911]
Abstract
OBJECTIVE We assessed the effect of visual learning and recall impairment on Victoria Symptom Validity Test (VSVT) accuracy and response latency for Easy, Difficult, and Total Items. METHOD A sample of 163 adult patients who were administered the VSVT and the Brief Visuospatial Memory Test-Revised was classified into valid (114/163) and invalid (49/163) groups via independent criterion performance validity tests (PVTs). Classification accuracies for all VSVT indices were examined for the overall sample, and separately for subgroups based on visual memory functioning. RESULTS In the overall sample, all indices produced acceptable classification accuracy (areas under the curve [AUCs] ≥ 0.79). When stratified by visual learning/recall impairment, accuracy indices yielded acceptable classification for both the unimpaired (AUCs ≥0.79) and impaired subsamples (AUCs ≥0.75). Latency indices had acceptable classification accuracy for the unimpaired subsample (AUCs ≥0.74), but accuracy and sensitivity dropped for the impaired sample (AUCs ≥0.67). CONCLUSIONS VSVT accuracy and response latency yielded acceptable classification accuracies in the overall sample, and this effect was maintained in those with and without visual learning/recall impairment for the accuracy indices. Findings indicate that the VSVT is a psychometrically robust PVT with largely invariant cut-scores, even in the presence of bona fide visual learning/recall impairment.
Affiliation(s)
- Jane K Stocks: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Allison N Shields: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Northwestern University, Evanston, IL, USA
- Adam B DeBoer: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Wheaton College, Wheaton, IL, USA
- Brian M Cerny: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Kyle J Jennette: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S Basurto: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Woojin Song: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Neil H Pliskin: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
15
Parsons J, Rodrigues NB, Erdodi LA. The classification accuracy of Warrington's Recognition Memory Test (Words) as a performance validity test in a neurorehabilitation setting. Appl Neuropsychol Adult 2024:1-11. [PMID: 38913011] [DOI: 10.1080/23279095.2024.2337130]
Abstract
This study was designed to evaluate the classification accuracy of Warrington's Recognition Memory Test (RMT) in 167 patients (97 [58.1%] men; mean age = 40.4 years; mean education = 13.8 years) medically referred for neuropsychological evaluation against five psychometrically defined criterion groups. At the optimal cutoff (≤42), the RMT produced an acceptable combination of sensitivity (.36-.60) and specificity (.85-.95), correctly classifying 68.4-83.3% of the sample. Making the cutoff more conservative (≤41) improved specificity (.88-.95) at the expense of sensitivity (.30-.60). Lowering the cutoff to ≤40 achieved uniformly high specificity (.91-.95) but diminished sensitivity (.27-.48). RMT scores were unrelated to lateral dominance, education, or gender. The RMT was sensitive to a three-way classification of performance validity (Pass/Borderline/Fail), further demonstrating its discriminant power. Despite a notable decline in research studies focused on its classification accuracy within the last decade, the RMT remains an effective free-standing performance validity test (PVT) that is robust to demographic variables. Relatively low sensitivity is its main liability. Further research is needed on its cross-cultural validity (sensitivity to limited English proficiency).
Affiliation(s)
- Jenna Parsons: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Nelson B Rodrigues: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada; Star UBB Institute, Babeș-Bolyai University, Cluj-Napoca, Romania
16
Potter BS, Crabtree VM, Ashford JM, Li Y, Liang J, Guo Y, Wise MS, Skoda ES, Merchant TE, Conklin HM. Performance and symptom validity indicators among children undergoing cognitive surveillance following treatment for craniopharyngioma. Neurooncol Pract 2024;11:319-327. [PMID: 38737617] [PMCID: PMC11085848] [DOI: 10.1093/nop/npae005]
Abstract
Background Performance validity tests (PVTs) and symptom validity tests (SVTs) are essential to neuropsychological evaluations, helping ensure findings reflect true abilities or concerns. It is unclear how PVTs and SVTs perform in children who received radiotherapy for brain tumors. Accordingly, we investigated the rate of noncredible performance on validity indicators as well as associations with fatigue and lower intellectual functioning. Methods Embedded PVTs and SVTs were investigated in 98 patients with pediatric craniopharyngioma undergoing proton radiotherapy (PRT). The contribution of fatigue, sleepiness, and lower intellectual functioning to embedded PVT performance was examined. Further, we investigated PVTs and SVTs in relation to cognitive performance at pre-PRT baseline and change over time. Results SVTs on parent measures were not an area of concern. PVTs identified 0-31% of the cohort as demonstrating possible noncredible performance at baseline, with stable findings 1 year following PRT. Reliable digit span (RDS) showed the highest PVT failure rate; RDS has been criticized for false positives in pediatric populations, especially children with neurological impairment. Objective sleepiness was strongly associated with PVT failure, stressing the need to consider arousal level when interpreting cognitive performance in children with craniopharyngioma. Lower intellectual functioning also needs to be considered when interpreting task engagement indices as it was strongly associated with PVT failure. Conclusions Embedded PVTs should be used with caution in pediatric craniopharyngioma patients who have received PRT. Future research should investigate different cut-off scores and validity indicator combinations to best differentiate noncredible performance due to task engagement versus variable arousal and/or lower intellectual functioning.
Affiliation(s)
- Brian S Potter: Department of Psychology and Biobehavioral Sciences, St. Jude Children’s Research Hospital, Memphis, Tennessee, USA
- Valerie McLaughlin Crabtree: Department of Psychology and Biobehavioral Sciences, St. Jude Children’s Research Hospital, Memphis, Tennessee, USA
- Jason M Ashford: Department of Psychology and Biobehavioral Sciences, St. Jude Children’s Research Hospital, Memphis, Tennessee, USA
- Yimei Li: Department of Biostatistics, St. Jude Children’s Research Hospital, Memphis, Tennessee, USA
- Jia Liang: Department of Biostatistics, St. Jude Children’s Research Hospital, Memphis, Tennessee, USA
- Yian Guo: Department of Biostatistics, St. Jude Children’s Research Hospital, Memphis, Tennessee, USA
- Merrill S Wise: Mid-South Pulmonary and Sleep Specialists, PC, Memphis, Tennessee, USA
- Evelyn S Skoda: Department of Psychology and Biobehavioral Sciences, St. Jude Children’s Research Hospital, Memphis, Tennessee, USA
- Thomas E Merchant: Department of Radiation Oncology, St. Jude Children’s Research Hospital, Memphis, Tennessee, USA
- Heather M Conklin: Department of Psychology and Biobehavioral Sciences, St. Jude Children’s Research Hospital, Memphis, Tennessee, USA
17
Boone KB, Kaufmann PM, Sweet JJ, Leatherberry D, Beattey RA, Silva D, Victor TL, Boone RP, Spector J, Hebben N, Hanks RA, James J. Attorney demands for protected psychological test information: Is access necessary for cross examination or does it lead to misinformation? An interorganizational position paper. Clin Neuropsychol 2024;38:889-906. [PMID: 38418959] [DOI: 10.1080/13854046.2024.2323222]
Abstract
Objective: Some attorneys claim that to adequately cross examine neuropsychological experts, they require direct access to protected test information, rather than having test data analyzed by retained neuropsychological experts. The objective of this paper is to critically examine whether direct access to protected test materials by attorneys is indeed necessary, appropriate, and useful to the trier-of-fact. Method: Examples are provided of the types of nonscientific misinformation that occur when attorneys, who lack adequate training in testing, attempt to independently interpret neurocognitive/psychological test data. Results: Release of protected test information to attorneys introduces inaccurate information to the trier of fact, and jeopardizes future use of tests because non-psychologists are not ethically bound to protect test content. Conclusion: The public policy underlying the right of attorneys to seek possibly relevant documents should not outweigh the damage to tests and resultant misinformation that arise when protected test information is released directly to attorneys. The solution recommended by neuropsychological/psychological organizations and test publishers is to have protected psychological test information exchanged directly and only between clinical psychologist/neuropsychologist experts.
Affiliation(s)
- Jerry J Sweet: NorthShore University HealthSystem, Evanston, Illinois, USA
- David Leatherberry: Leatherberry Law, a Professional Corporation, San Diego, California, USA
- Robert A Beattey: University of California, Davis School of Medicine, Sacramento, California, USA
- Delia Silva: Independent Practice, San Diego, California, USA
- Tara L Victor: California State University, Dominguez Hills, California, USA
- Jack Spector: Independent Practice, Baltimore, Maryland, USA; Independent Practice, Alexandria, Virginia, USA; Independent Practice, Charlotte, North Carolina, USA
- Nancy Hebben: Department of Psychiatry, Harvard Medical School, Boston, Massachusetts, USA; Department of Psychiatry, Cambridge Health Alliance, Cambridge, Massachusetts, USA; Independent Practice, Newton, Massachusetts, USA
- Robin A Hanks: Department of Physical Medicine and Rehabilitation, Wayne State University School of Medicine, Detroit, Michigan, USA
- Joette James: Alina Assessment Services, Washington, District of Columbia, USA
18
Kanser RJ, Rapport LJ, Hanks RA, Patrick SD. Time and money: Exploring enhancements to performance validity research designs. Appl Neuropsychol Adult 2024;31:256-263. [PMID: 34932422] [DOI: 10.1080/23279095.2021.2019740]
Abstract
INTRODUCTION The study examined the effect of preparation time and financial incentives on healthy adults' ability to simulate traumatic brain injury (TBI) during neuropsychological evaluation. METHOD A retrospective comparison of two TBI simulator group designs: a traditional design employing a single-session of standard coaching immediately before participation (SIM-SC; n = 46) and a novel design that provided financial incentive and preparation time (SIM-IP; n = 49). Both groups completed an ecologically valid neuropsychological test battery that included widely used cognitive tests and five common performance validity tests (PVTs). RESULTS Compared to SIM-SC, SIM-IP performed significantly worse and had higher rates of impairment on tests of processing speed and executive functioning (Trails A and B). SIM-IP were more likely than SIM-SC to avoid detection on one of the PVTs and performed somewhat better on three of the PVTs, but the effects were small and non-significant. SIM-IP did not demonstrate significantly higher rates of successful simulation (i.e., performing impaired on cognitive tests with <2 PVT failures). Overall, the rate of successful simulation was ∼40% with a liberal criterion, requiring cognitive impairment defined as performance >1 SD below the normative mean. At a more rigorous criterion defining impairment (>1.5 SD below the normative mean), successful simulation approached 35%. CONCLUSIONS Incentive and preparation time appear to add limited incremental effect over traditional, single-session coaching analog studies of TBI simulation. Moreover, these design modifications did not translate to meaningfully higher rates of successful simulation and avoidance of detection by PVTs.
Affiliation(s)
- Robert J Kanser: Department of Psychology, Wayne State University, Detroit, MI, USA; Department of Physical Medicine and Rehabilitation, University of North Carolina, Chapel Hill, NC, USA
- Lisa J Rapport: Department of Psychology, Wayne State University, Detroit, MI, USA
- Robin A Hanks: Department of Physical Medicine and Rehabilitation, Wayne State University, Detroit, MI, USA
- Sarah D Patrick: Department of Psychology, Wayne State University, Detroit, MI, USA
19
Kim S, Currao A, Brown E, Milberg WP, Fortier CB. Importance of validity testing in psychiatric assessment: evidence from a sample of multimorbid post-9/11 veterans. J Int Neuropsychol Soc 2024;30:410-419. [PMID: 38014547] [DOI: 10.1017/s1355617723000711]
Abstract
OBJECTIVE Performance validity tests (PVTs) and symptom validity tests (SVTs) are necessary components of neuropsychological testing to identify suboptimal performances and response bias that may impact diagnosis and treatment. The current study examined the clinical and functional characteristics of veterans who failed PVTs and the relationship between PVT and SVT failures. METHOD Five hundred and sixteen post-9/11 veterans participated in clinical interviews, neuropsychological testing, and several validity measures. RESULTS Veterans who failed 2+ PVTs performed significantly worse than veterans who failed one PVT in verbal memory (Cohen's d = .60-.69), processing speed (Cohen's d = .68), working memory (Cohen's d = .98), and visual memory (Cohen's d = .88-1.10). Individuals with 2+ PVT failures had greater posttraumatic stress (PTS; β = 0.16; p = .0002), and worse self-reported depression (β = 0.17; p = .0001), anxiety (β = 0.15; p = .0007), sleep (β = 0.10; p = .0233), and functional outcomes (β = 0.15; p = .0009) compared to veterans who passed PVTs. 7.8% of veterans failed the SVT (Validity-10; ≥19 cutoff); multiple PVT failures were significantly associated with Validity-10 failure at the ≥19 and ≥23 cutoffs (p's < .0012). The Validity-10 had moderate correspondence in predicting 2+ PVT failures (AUC = 0.83; 95% CI = 0.76, 0.91). CONCLUSION PVT failures are associated with psychiatric factors, but not traumatic brain injury (TBI). PVT failures predict SVT failure and vice versa. Standard care should include SVTs and PVTs in all clinical assessments, not just neuropsychological assessments, particularly in clinically complex populations.
Affiliation(s)
- Sahra Kim: Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Alyssa Currao: Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Emma Brown: Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- William P Milberg: Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA; Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- Catherine B Fortier: Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA; Department of Psychiatry, Harvard Medical School, Boston, MA, USA
20
Henry GK. Ability of the Wisconsin Card-Sorting Test-64 as an embedded measure to identify noncredible neurocognitive performance in mild traumatic brain injury litigants. Appl Neuropsychol Adult 2024:1-7. [PMID: 38684109] [DOI: 10.1080/23279095.2024.2348012]
Abstract
OBJECTIVE To investigate the ability of selective measures on the Wisconsin Card Sorting Test-64 (WCST-64) to predict noncredible neurocognitive dysfunction in a large sample of mild traumatic brain injury (mTBI) litigants. METHOD Participants included 114 adults who underwent a comprehensive neuropsychological examination. Criterion groups were formed based upon their performance on stand-alone measures of cognitive performance validity (PVTs). RESULTS Participants failing PVTs performed worse across all WCST-64 dependent variables of interest compared to participants who passed PVTs. Receiver operating characteristic (ROC) curve analysis revealed that only categories completed was a significant predictor of PVT status. Multivariate logistic regression did not add to classification accuracy. CONCLUSION Consideration of noncredible executive functioning may be warranted in mTBI litigants who complete ≤1 category on the WCST-64.
21
Garmoe W, Rao K, Gorter B, Kantor R. Neurocognitive Impairment in Post-COVID-19 Condition in Adults: Narrative Review of the Current Literature. Arch Clin Neuropsychol 2024;39:276-289. [PMID: 38520374] [DOI: 10.1093/arclin/acae017]
Abstract
The severe acute respiratory syndrome coronavirus 2 virus has, up to the time of this article, resulted in >770 million cases of COVID-19 illness worldwide and approximately 7 million deaths, including >1.1 million in the United States. Although the virus is defined as a respiratory pathogen, it became apparent early in the pandemic that considerable numbers of people recovering from COVID-19 illness experienced persistence or new onset of multi-system health problems, including neurologic, cognitive, and behavioral health concerns. Persistent multi-system health problems are defined as Post-COVID-19 Condition (PCC), Post-Acute Sequelae of COVID-19, or Long COVID. A significant number of those with PCC report cognitive problems. This paper reviews the current state of scientific knowledge on persisting cognitive symptoms in adults following COVID-19 illness. A brief history is provided of the emergence of concerns about persisting cognitive problems following COVID-19 illness and the definition of PCC. Methodologic factors that complicate clear understanding of PCC are reviewed. The review then examines research on patterns of cognitive impairment that have been found, factors that may contribute to increased risk, behavioral health variables, and interventions being used to ameliorate persisting symptoms. Finally, recommendations are made about ways neuropsychologists can improve the quality of existing research.
Affiliation(s)
- William Garmoe: Director of Psychology, MedStar National Rehabilitation Network, Washington, DC, USA
- Kavitha Rao: Clinical Neuropsychologist, MedStar Good Samaritan Hospital, Baltimore, MD, USA
- Bethany Gorter: Neuropsychology Post-Doctoral Fellow, MedStar National Rehabilitation Hospital, Washington, DC, USA
- Rachel Kantor: Neuropsychology Post-Doctoral Fellow, MedStar National Rehabilitation Hospital, Washington, DC, USA
22
van Vliet FIM, van Schothorst HP, Donker-Cools BHPM, Schaafsma FG, Ponds RWHM, Geurtsen GJ. Validity of the Groningen Effort Test in patients with suspected chronic solvent-induced encephalopathy. Arch Clin Neuropsychol 2024:acae025. [PMID: 38572600] [DOI: 10.1093/arclin/acae025]
Abstract
INTRODUCTION The use of performance validity tests (PVTs) in a neuropsychological assessment to determine indications of invalid performance has been a common practice for over a decade. Most PVTs are memory-based; therefore, the Groningen Effort Test (GET), a non-memory-based PVT, has been developed. OBJECTIVES This study aimed to validate the GET in patients with suspected chronic solvent-induced encephalopathy (CSE) using a criterion standard of two PVTs. A second goal was to determine the diagnostic accuracy of the GET. METHOD Sixty patients with suspected CSE referred for neuropsychological assessment were included. The GET was compared to the criterion standard of two PVTs based on the Test of Memory Malingering and the Amsterdam Short Term Memory Test. RESULTS The frequency of invalid performance using the GET was significantly higher compared to the two-PVT criterion (51.7% vs. 20.0%, respectively; p < 0.001). For the GET index, the sensitivity was 75% and the specificity was 54%, with a Youden's Index of 27. CONCLUSION The GET showed significantly more invalid performance compared to the two-PVT criterion, suggesting a high number of false positives. The generally accepted minimum norm of specificity for PVTs of >90% was not met. Therefore, the GET is of limited use in clinical practice with suspected CSE patients.
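For readers unfamiliar with the Youden's Index reported above: it is simply J = sensitivity + specificity - 1 (often reported multiplied by 100), so a test no better than chance yields 0 and a perfect test yields 1. A one-line illustrative helper, not tied to the GET data:

```python
def youden_index(sensitivity, specificity):
    """Youden's J statistic: sensitivity + specificity - 1.
    Both arguments are proportions in [0, 1]; illustrative helper only."""
    return sensitivity + specificity - 1

# Example with generic values: a test with .90 sensitivity and .80
# specificity has J = 0.70
j = youden_index(0.90, 0.80)
```

Because J weights sensitivity and specificity equally, a PVT with high sensitivity but poor specificity (as reported for the GET) can still post a modest J while failing the field's >90% specificity expectation.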
Affiliation(s)
- Fabienne I M van Vliet: Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam University Medical Centres, Amsterdam, The Netherlands; Department of Medical Psychology, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Henrita P van Schothorst: Department of Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Birgit H P M Donker-Cools: Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam University Medical Centres, Amsterdam, The Netherlands; Research Centre for Insurance Medicine, Amsterdam, The Netherlands
- Frederieke G Schaafsma: Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam University Medical Centres, Amsterdam, The Netherlands; Research Centre for Insurance Medicine, Amsterdam, The Netherlands
- Rudolf W H M Ponds: Department of Medical Psychology, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Gert J Geurtsen: Department of Medical Psychology, Amsterdam University Medical Centres, Amsterdam, The Netherlands
23
Puente-López E, Pina D, Rambaud-Quiñones P, Ruiz-Hernández JA, Nieto-Cañaveras MD, Shura RD, Alcazar-Crevillén A, Martinez-Jarreta B. Classification accuracy and resistance to coaching of the Spanish version of the Inventory of Problems-29 and the Inventory of Problems-Memory: A simulation study with mTBI patients. Clin Neuropsychol 2024;38:738-762. [PMID: 37615421] [DOI: 10.1080/13854046.2023.2249171]
Abstract
Objective: The present study aims to evaluate the classification accuracy and resistance to coaching of the Inventory of Problems-29 (IOP-29) and the IOP-Memory (IOP-M) with a Spanish sample of patients diagnosed with mild traumatic brain injury (mTBI) and healthy participants instructed to feign. Method: Using a simulation design, 37 outpatients with mTBI (clinical control group) and 213 non-clinical instructed feigners under several coaching conditions completed the Spanish versions of the IOP-29, IOP-M, Structured Inventory of Malingered Symptomatology (SIMS), and Rivermead Post Concussion Symptoms Questionnaire. Results: The IOP-29 discriminated well between clinical patients and instructed feigners, with an excellent classification accuracy for the recommended cutoff score (FDS ≥ .50; sensitivity = 87.10% for the coached group and 89.09% for the uncoached; specificity = 95.12%). The IOP-M also showed an excellent classification accuracy (cutoff ≤ 29; sensitivity = 87.27% for the coached group and 93.55% for the uncoached; specificity = 97.56%). Both instruments proved to be resistant to symptom information coaching and performance warnings. Conclusions: The results confirm that both of the IOP measures offer a similarly valid but different perspective compared to the SIMS when assessing the credibility of symptoms of mTBI. The encouraging findings indicate that both tests are a valuable addition to the symptom validity practices of forensic professionals. Additional research in multiple contexts and with diverse conditions is warranted.
Affiliation(s)
- David Pina
- Applied Psychology Service, Universidad de Murcia, Murcia, Spain
- Robert D Shura
- Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Salisbury VA Medical Center, Salisbury, NC, USA
- Begoña Martinez-Jarreta
- Mutua MAZ, Zaragoza, Spain
- Department of Pathological Anatomy, Forensic and Legal Medicine and Toxicology, Universidad de Zaragoza, Zaragoza, Spain
24
Finley JCA, Cladek A, Gonzalez C, Brook M. Perceived cognitive impairment is related to internalizing psychopathology but unrelated to objective cognitive performance among nongeriatric adults presenting for outpatient neuropsychological evaluation. Clin Neuropsychol 2024; 38:644-667. [PMID: 37518890 DOI: 10.1080/13854046.2023.2241190]
Abstract
Objective: This study investigated the relationship between perceived cognitive impairment, objective cognitive performance, and intrapersonal variables thought to influence ratings of perceived cognitive impairment. Method: Study sample comprised 194 nongeriatric adults who were seen in a general outpatient neuropsychology clinic for a variety of referral questions. The cognition subscale score from the WHO Disability Assessment Schedule served as the measure of perceived cognitive impairment. Objective cognitive performance was indexed via a composite score derived from a comprehensive neuropsychological battery. Internalizing psychopathology was indexed via a composite score derived from anxiety and depression measures. Medical and neuropsychiatric comorbidities were indexed by the number of different ICD diagnostic categories documented in medical records. Demographics included age, sex, race, and years of education. Results: Objective cognitive performance was unrelated to subjective concerns, explaining <1% of the variance in perceived cognitive impairment ratings. Conversely, internalizing psychopathology was significantly predictive, explaining nearly one-third of the variance in perceived cognitive impairment ratings, even after accounting for test performance, demographics, and number of comorbidities. Internalizing psychopathology was also highly associated with a greater discrepancy between scores on perceived and objective cognitive measures among participants with greater cognitive concerns. Clinically significant somatic symptoms uniquely contributed to the explained variance in perceived cognitive impairment (by ∼13%) when analyzed in a model with internalizing symptoms. Conclusions: These findings suggest that perceived cognitive impairment may be more indicative of the extent of internalizing psychopathology and somatic concerns than cognitive ability.
Affiliation(s)
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Andrea Cladek
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
- Michael Brook
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
25
Khalid E, VanLandingham HB, Basurto KS, Nili AN, Gonzalez C, Guilfoyle JL, Ovsiew GP, Durkin NM, Ulrich DM, Resch ZJ, Pliskin NH, Soble JR, Cerny BM. Exploring Subfactors of Adult Cognitive Disengagement Syndrome and Impact on Neuropsychological Performance. J Atten Disord 2024; 28:957-969. [PMID: 38178579 DOI: 10.1177/10870547231218945]
Abstract
OBJECTIVE This study investigated subfactors of cognitive disengagement syndrome (CDS; previously referred to as sluggish cognitive tempo) among adults referred for neuropsychological evaluation of attention-deficit/hyperactivity disorder (ADHD). METHOD Retrospective analyses of data from 164 outpatient neuropsychological evaluations examined associations between CDS subfactors and self-reported psychological symptoms and cognitive performance. RESULTS Factor analysis produced two distinct but positively correlated constructs: "Cognitive Complaints" and "Lethargy." Both correlated positively with symptom reports (rs = 0.26-0.57). Cognitive Complaints correlated negatively with working memory, processing speed, and executive functioning performance (rs = -0.21 to -0.37), whereas Lethargy correlated negatively only with processing speed and executive functioning performance (rs = -0.26 to -0.42). Both predicted depression symptoms, but only Cognitive Complaints predicted inattention symptoms. Both subfactors demonstrated modest to nonsignificant associations with cognitive performance after accounting for estimated premorbid intelligence and inattention. CONCLUSION Findings indicate a bidimensional conceptualization of CDS, with differential associations between its constituent subfactors, reported symptoms, and cognitive performance.
Affiliation(s)
- Elmma Khalid
- University of Illinois College of Medicine, Chicago, IL, USA
- Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Hannah B VanLandingham
- University of Illinois College of Medicine, Chicago, IL, USA
- Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Karen S Basurto
- University of Illinois College of Medicine, Chicago, IL, USA
- Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Amanda N Nili
- University of Illinois College of Medicine, Chicago, IL, USA
- Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Christopher Gonzalez
- University of Illinois College of Medicine, Chicago, IL, USA
- Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Janna L Guilfoyle
- University of Illinois College of Medicine, Chicago, IL, USA
- Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Nicole M Durkin
- University of Illinois College of Medicine, Chicago, IL, USA
- Devin M Ulrich
- University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- University of Illinois College of Medicine, Chicago, IL, USA
- Neil H Pliskin
- University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- University of Illinois College of Medicine, Chicago, IL, USA
- Brian M Cerny
- University of Illinois College of Medicine, Chicago, IL, USA
- Illinois Institute of Technology, Chicago, IL, USA
26
Lace JW, Sanborn V, Galioto R. Standalone Performance Validity Tests May Be Differentially Related to Measures of Working Memory, Processing Speed, and Verbal Memory in Patients With Multiple Sclerosis. Assessment 2024; 31:732-744. [PMID: 37303186 DOI: 10.1177/10731911231178289]
Abstract
Cognitive functioning may account for minimal levels (i.e., 5%-14%) of variance in performance validity test (PVT) scores in clinical examinees. The present study extended this research twofold: (a) by determining the variance cognitive functioning explains within three distinct PVTs and (b) by doing so in a sample of patients with multiple sclerosis (pwMS). Seventy-five pwMS (Mage = 48.50, 70.6% female, 80.9% White) completed the Victoria Symptom Validity Test (VSVT), Word Choice Test (WCT), Dot Counting Test (DCT), and three objective measures of working memory, processing speed, and verbal memory as part of clinical neuropsychological assessment. Regression analyses in credible groups (ns ranged from 54 to 63) indicated that cognitive functioning explained 24% to 38% of the variance in logarithmically transformed PVT variables. Variance from cognitive testing differed across PVTs: verbal memory significantly influenced both VSVT and WCT scores; working memory influenced VSVT and DCT scores; and processing speed influenced DCT scores. The WCT appeared least related to cognitive functioning of the included PVTs. Alternative plausible explanations, including the apparent domain/modality specificity hypothesis of PVTs versus the potential sensitivity of these PVTs to neurocognitive dysfunction in pwMS, are discussed. Continued psychometric investigations into factors affecting performance validity, especially in multiple sclerosis, are warranted.
Affiliation(s)
- John W Lace
- Cleveland Clinic Foundation, OH, USA
- Prevea Health, Green Bay, WI, USA
- Victoria Sanborn
- Kent State University, OH, USA
- VA Boston Healthcare System, Boston, MA, USA
- Rachel Galioto
- Cleveland Clinic Foundation, Mellen Center for Multiple Sclerosis, OH, USA
27
Giromini L, Pignolo C, Zennaro A, Sellbom M. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT in the In-Person and Remote Administration Formats: A Simulation Study on Feigned mTBI. Assessment 2024:10731911241235465. [PMID: 38468147 DOI: 10.1177/10731911241235465]
Abstract
Our study compared the impact of administering Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs) in in-person versus remote formats and assessed different approaches to combining validity test results. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT, we assessed 164 adults, with half instructed to feign mild traumatic brain injury (mTBI) and half to respond honestly. Within each subgroup, half completed the tests in person, and the other half completed them online via videoconferencing. Results from 2 × 2 analyses of variance showed no significant effects of administration format on SVT and PVT scores. When comparing feigners to controls, the MMPI-2-RF RBS exhibited the largest effect size (d = 3.05) among all examined measures. Accordingly, we conducted a series of two-step hierarchical logistic regression models by entering the MMPI-2-RF RBS first, followed by each other SVT and PVT individually. We found that the IOP-29 and IOP-M were the only measures that yielded incremental validity beyond the effects of the MMPI-2-RF RBS in predicting group membership. Taken together, these findings suggest that administering these SVTs and PVTs in person or remotely yields similar results, and the combination of MMPI and IOP indexes might be particularly effective in identifying feigned mTBI.
28
Roor JJ, Peters MJV, Dandachi-FitzGerald B, Ponds RWHM. Performance Validity Test Failure in the Clinical Population: A Systematic Review and Meta-Analysis of Prevalence Rates. Neuropsychol Rev 2024; 34:299-319. [PMID: 36872398 PMCID: PMC10920461 DOI: 10.1007/s11065-023-09582-7]
Abstract
Performance validity tests (PVTs) are used to measure the validity of the obtained neuropsychological test data. However, when an individual fails a PVT, the likelihood that failure truly reflects invalid performance (i.e., the positive predictive value) depends on the base rate in the context in which the assessment takes place. Therefore, accurate base rate information is needed to guide interpretation of PVT performance. This systematic review and meta-analysis examined the base rate of PVT failure in the clinical population (PROSPERO number: CRD42020164128). PubMed/MEDLINE, Web of Science, and PsycINFO were searched to identify articles published up to November 5, 2021. Main eligibility criteria were a clinical evaluation context and utilization of stand-alone and well-validated PVTs. Of the 457 articles scrutinized for eligibility, 47 were selected for systematic review and meta-analyses. The pooled base rate of PVT failure for all included studies was 16%, 95% CI [14, 19]. High heterogeneity existed among these studies (Cochran's Q = 697.97, p < .001; I2 = 91%; τ2 = 0.08). Subgroup analysis indicated that pooled PVT failure rates varied across clinical context, presence of external incentives, clinical diagnosis, and utilized PVT. Our findings can be used for calculating clinically applied statistics (i.e., positive and negative predictive values, and likelihood ratios) to increase the diagnostic accuracy of performance validity determination in clinical evaluation. Future research is necessary with more detailed recruitment procedures and sample descriptions to further improve the accuracy of the base rate of PVT failure in clinical practice.
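The clinically applied statistics this abstract mentions follow directly from Bayes' theorem, combining a PVT's sensitivity and specificity with the contextual base rate. A minimal sketch — the 16% base rate is the pooled estimate reported above, while the sensitivity and specificity values are illustrative assumptions, not figures from any specific PVT:

```python
def validity_statistics(sensitivity, specificity, base_rate):
    """Positive/negative predictive values and likelihood ratios for a PVT,
    given the base rate of invalid performance in the evaluation context."""
    tp = sensitivity * base_rate              # true-positive proportion
    fp = (1 - specificity) * (1 - base_rate)  # false-positive proportion
    fn = (1 - sensitivity) * base_rate        # false-negative proportion
    tn = specificity * (1 - base_rate)        # true-negative proportion
    return {
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sensitivity / (1 - specificity),
        "LR-": (1 - sensitivity) / specificity,
    }

# With the pooled 16% clinical base rate and an assumed 60%-sensitive,
# 90%-specific PVT, PPV is only about .53 despite the high specificity.
result = validity_statistics(sensitivity=0.60, specificity=0.90, base_rate=0.16)
```

This illustrates the abstract's central point: the same PVT failure carries very different evidentiary weight as the contextual base rate shifts.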
Affiliation(s)
- Jeroen J Roor
- Department of Medical Psychology, VieCuri Medical Center, Venlo, The Netherlands
- School for Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Maarten J V Peters
- Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Brechje Dandachi-FitzGerald
- Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Faculty of Psychology, Open University, Heerlen, The Netherlands
- Rudolf W H M Ponds
- School for Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Department of Medical Psychology, Amsterdam University Medical Centres, location VU, Amsterdam, The Netherlands
29
Ingram PB, Armistead-Jehle P, Childers LG, Herring TT. Cross validation of the response bias scale and the response bias scale-19 in active-duty personnel: use on the MMPI-2-RF and MMPI-3. J Clin Exp Neuropsychol 2024; 46:141-151. [PMID: 38493366 DOI: 10.1080/13803395.2024.2330727]
Abstract
The Response Bias Scale (RBS) is the central measure of cognitive over-reporting in the MMPI family of instruments. Relative to other clinical populations, the research evaluating the detection of over-reporting is more limited in Veteran and Active-Duty personnel, which has produced some psychometric variability across studies. Some have suggested that the original scale construction methods resulted in items which negatively impact classification accuracy, and in response crafted an abbreviated version of the RBS (RBS-19; Ratcliffe et al., 2022; Spencer et al., 2022). In addition, the most recent edition of the MMPI is based on new normative data, which impacts the ability to use existing literature to determine effective cut-scores for the RBS (despite all items having been retained across MMPI versions). To date, no published research exists for the MMPI-3 RBS. The current study examined the utility of the RBS and the RBS-19 in a sample of Active-Duty personnel (n = 186) referred for neuropsychological evaluation. Using performance validity tests as the study criterion, we found that the RBS-19 performed generally equivalently to the RBS in classification accuracy. Correlations with other MMPI-2-RF over- and under-reporting symptom validity tests were slightly stronger for the RBS-19. Implications and directions for research and practice with the RBS/RBS-19 are discussed, along with implications for neuropsychological assessment and response validity theory.
Affiliation(s)
- Paul B Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Dwight D. Eisenhower Veterans Affairs Medical Center, Eastern Kansas Veterans Healthcare System, Leavenworth, KS, USA
- Lucas G Childers
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Tristan T Herring
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
30
Ingram PB, Keen MA, Greene TE, Morris C, Armistead-Jehle PJ. Development and initial validation of the Scale of Scales (SOS) overreporting scores for the MMPI family of instruments. J Clin Exp Neuropsychol 2024; 46:95-110. [PMID: 38726688 DOI: 10.1080/13803395.2024.2320453]
Abstract
Overreporting is a common problem that complicates psychological evaluations. A challenge facing the effective detection of overreporting is that many of the identified strategies (e.g., symptom severity approaches; see Rogers & Bender, 2020) are not incorporated into broadband measures of personality and psychopathology (e.g., the Minnesota Multiphasic Personality Inventory family of instruments). While recent efforts have worked to incorporate some of these newer strategies, no such work has been conducted on the MMPI-3. For instance, recent symptom severity approaches have been used to identify patterns of multivariate base rate "skyline" elevations on the BASC, and similar strategies have been adopted into the PAI to measure psychopathology (Multi-Feigning Index; Gaines et al., 2013) and cognitive symptoms (Cognitive Bias Scale of Scales; Boress et al., 2022b). This study used data from a simulation study (n = 318) and an Active-Duty (AD) clinical sample (n = 290) to develop and cross-validate such a scale on the MMPI-2-RF and MMPI-3. Results suggest that the MMPI SOS (Scale of Scales) scores perform comparably to existing measures of overreporting on the MMPI-2-RF and MMPI-3 and incrementally predict a PVT-classified "known-group" of Active-Duty service members. Effects were generally large in magnitude. Classification accuracy achieved desired specificity (.90) and approximated expected sensitivity (.30). Implications of these findings are discussed, emphasizing how alternative overreporting detection strategies may be useful to consider for the MMPI. These alternative strategies have room for expansion and refinement.
Affiliation(s)
- Paul B Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas
- Eastern Kansas Veterans Affairs Healthcare System, Leavenworth, Kansas
- Megan A Keen
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas
- Tina E Greene
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas
- Cole Morris
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas
31
Schroeder RW, Bieu RK. Exploration of PCL-5 symptom validity indices for detection of exaggerated and feigned PTSD. J Clin Exp Neuropsychol 2024; 46:152-161. [PMID: 38353609 DOI: 10.1080/13803395.2024.2314728]
Abstract
INTRODUCTION There are very few symptom validity indices directly examining overreported posttraumatic stress disorder (PTSD) symptomatology, and, until recently, there were no symptom validity indices embedded within the PTSD Checklist for the DSM-5 (PCL-5), which is one of the most commonly used PTSD measures. Given this, the current study sought to develop and cross-validate symptom validity indices for the PCL-5. METHOD Multiple criterion groups comprising Veteran patients were utilized (N = 210). Patients were determined to be valid or invalid responders based on Personality Assessment Inventory symptom validity indices. Three PCL-5 symptom validity indices were then examined: the PCL-5 Symptom Severity scale (PSS), the PCL-5 Extreme Symptom scale (PES), and the PCL-5 Rare Items scale (PRI). RESULTS Area under the curve statistics ranged from .78 to .85. The PSS and PES both met classification accuracy statistic goals, with the PES achieving the highest sensitivity rate (.39) while maintaining specificity at .90 or above across all criterion groups. When an ad hoc analysis was performed, which included only patients with exceptionally strong evidence of invalidity, sensitivity rates increased to .60 for the PES while maintaining specificity at .90. CONCLUSIONS These findings provide preliminary support for new PTSD symptom validity indices embedded within one of the most frequently used PTSD measures.
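The convention above — choosing cut-scores that maximize sensitivity while holding specificity at or above .90 — can be made concrete with a small scan over candidate cutoffs. A sketch using made-up toy scores (the values and the "higher score = more likely invalid" direction are illustrative assumptions, not the actual metrics of the PCL-5 scales):

```python
def optimal_cutoff(valid_scores, invalid_scores, min_specificity=0.90):
    """Return (cutoff, sensitivity, specificity) for the cut-score that
    maximizes sensitivity while keeping specificity >= min_specificity.
    Scores at or above the cutoff are classified as invalid."""
    best = None
    for c in sorted(set(valid_scores) | set(invalid_scores)):
        specificity = sum(s < c for s in valid_scores) / len(valid_scores)
        sensitivity = sum(s >= c for s in invalid_scores) / len(invalid_scores)
        if specificity >= min_specificity and (best is None or sensitivity > best[1]):
            best = (c, sensitivity, specificity)
    return best

# Toy example: 10 valid responders and 6 invalid responders
valid = [1, 2, 2, 3, 3, 3, 4, 4, 5, 6]
invalid = [4, 5, 5, 6, 7, 8]
cutoff, sens, spec = optimal_cutoff(valid, invalid)  # cutoff 6: spec .90, sens .50
```

Raising the specificity floor to 1.0 in this toy data would push the cutoff to 7 and drop sensitivity to one-third, which is the trade-off the abstract's .39 and .60 sensitivity figures reflect.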
Affiliation(s)
- Ryan W Schroeder
- Department of Behavioral Health, Robert J. Dole VA Medical Center, Wichita, KS, USA
- Rachel K Bieu
- Department of Behavioral Health, Robert J. Dole VA Medical Center, Wichita, KS, USA
32
Shura RD, Sapp A, Ingram PB, Brearly TW. Evaluation of telehealth administration of MMPI symptom validity scales. J Clin Exp Neuropsychol 2024; 46:86-94. [PMID: 38375629 DOI: 10.1080/13803395.2024.2314734]
Abstract
INTRODUCTION Telehealth assessment (TA) is a quickly emerging practice, offered with increasing frequency across many different clinical contexts. TA is also well received by most patients, and there are numerous guidelines and training opportunities that can support effective telehealth practice. Although there are extensive recommended practices, these guidelines have rarely been evaluated empirically, particularly for personality measures. While existing research is limited, it generally supports the idea that TA and in-person assessment (IA) produce fairly equivalent test scores. The MMPI-3, a recently released and highly popular personality and psychopathology measure, has been the subject of several such experimental or student (non-client) based studies; however, no study to date has evaluated these trends within a clinical sample. This study empirically tests for differences between TA and IA scores on the MMPI validity scales when recommended administration procedures are followed. METHOD Data were from a retrospective chart review. Veterans (n = 550) who underwent psychological assessment in a Veterans Affairs Medical Center ADHD evaluation clinic were contrasted between in-person and telehealth assessment modalities on the MMPI-2-RF and MMPI-3. Groups were compared using t tests, chi-square, and base rates. RESULTS Results suggest that there were minimal differences in elevation rates or mean scores across modality, supporting the use of TA. CONCLUSIONS This study's findings support the use of the MMPI via TA in ADHD evaluations, with Veterans, and in neuro/psychological evaluation settings more generally. Observed elevation rates and mean scores in this study were notably different from those seen in other VA service clinics sampled nationally, which is an area for future investigation.
Affiliation(s)
- Robert D Shura
- Research & Academic Affairs Service Line, Salisbury VA Healthcare System, Salisbury, NC, USA
- Neurocognition Research Lab, VA Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Durham, NC, USA
- Department of Neurology, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Alison Sapp
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Paul B Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Department of Veterans Affairs Eastern Kansas Healthcare, Leavenworth VAMC, Leavenworth, KS, USA
- Timothy W Brearly
- Department of Neurology, Penn State Milton S. Hershey Medical Center, Hershey, PA, USA
- Department of Neurology, Penn State College of Medicine, Hershey, PA, USA
33
Denney RL, Thinda S, Finn PM, Fazio RL, Chen MJ, Walsh MR. Development of a measure for assessing malingered incompetency in criminal proceedings: Denney competency related test (D-CRT). J Clin Exp Neuropsychol 2024; 46:124-140. [PMID: 38346168 DOI: 10.1080/13803395.2024.2314731]
Abstract
INTRODUCTION Experts frequently assess competency in criminal settings, where the rate of feigned cognitive deficit is demonstrably elevated. We describe the construction and validation of the Denney Competency Related Test (D-CRT) to assess feigned incompetency of defendants in the criminal adjudicative setting. It was expected that the D-CRT would prove effective at identifying feigned incompetence based on its two-alternative forced-choice format and performance curve characteristics. METHOD Development and validation of the D-CRT occurred in several phases. Items were developed to measure competency based upon expert review. Item analysis and adjustments were completed with 304 young teenage volunteers to obtain a proper spread of item difficulty in preparation for eventual performance curve analysis (PCA). Test-retest reliability was assessed with 44 adult community volunteers. Validation included an analog simulation design with 101 jail detainees using the MacArthur Competence Assessment Tool-Criminal Adjudication and Word Memory Test as criterion measures. Effects of racial/ethnic demographic differences were examined in a separate study of 208 undergraduate volunteers. D-CRT specificity was identified with 46 elderly clinic referrals diagnosed with mild cognitive impairment and dementia. RESULTS Item development, adjustment, and repeat analysis resulted in item probabilities evenly spread from .28 to 1.0. Test-retest correlation was good (.83). Internal consistency of items was excellent (KR-20 > .91). The D-CRT demonstrated convergent validity with respect to measuring competency-related information as well as malingering. The test successfully differentiated between jail inmates asked to perform their best and inmates asked to simulate incompetency (AUC = .945). There were no statistically significant differences in performance across racial/ethnic backgrounds. D-CRT specificity remained excellent among elderly clinic referrals with significant cognitive compromise at the recommended total score cutoff. CONCLUSIONS The D-CRT is an effective measure of feigned criminal incompetency in the context of potential cognitive deficiency, and PCA is assistive in the determination. Additional validation using known-groups designs with various mental health-related conditions is needed.
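The KR-20 internal-consistency coefficient reported for the D-CRT has a closed form: KR-20 = (k/(k-1)) * (1 - Σ p_i·q_i / σ²), where p_i is the proportion passing item i, q_i = 1 - p_i, and σ² is the variance of total scores. A minimal sketch on made-up dichotomous item data (the four response rows are illustrative only, not D-CRT data):

```python
def kr20(item_responses):
    """Kuder-Richardson 20 internal consistency for dichotomous (0/1) items.

    item_responses: one row per examinee, each row a list of 0/1 item scores.
    """
    n = len(item_responses)
    k = len(item_responses[0])
    # Sum of p*q (item difficulty times its complement) across items
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_responses) / n
        pq += p * (1 - p)
    # Population variance of total scores
    totals = [sum(row) for row in item_responses]
    mean = sum(totals) / n
    variance = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - pq / variance)

# Four examinees answering three items (illustrative data only)
rows = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
reliability = kr20(rows)
```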
Affiliation(s)
- Robert L Denney
- Missouri Memory Center, Citizens Memorial Healthcare, Bolivar, MO, USA
- Patrick M Finn
- Department of Mental Health, William S. Middleton Memorial Veterans Hospital, Madison, WI, USA
- Michelle J Chen
- Department of Psychological Sciences, University of North Carolina, Charlotte, NC, USA
- Michael R Walsh
- Departments of Forensic Psychology and Neuropsychology, Burrell Behavioral Health, Springfield, MO, USA
34
Boress K, Gaasedelen O, Kim JH, Basso MR, Whiteside DM. Examination of the relationship between symptom and performance validity measures across referral subtypes. J Clin Exp Neuropsychol 2024; 46:162-171. [PMID: 37791494 DOI: 10.1080/13803395.2023.2261633]
Abstract
INTRODUCTION The extent to which performance validity tests (PVTs) and symptom validity tests (SVTs) measure separate constructs is unclear. Prior research using the Minnesota Multiphasic Personality Inventory (MMPI-2 and MMPI-2-RF) suggested that PVTs and SVTs are separate but related constructs. However, the relationship between Personality Assessment Inventory (PAI) SVTs and PVTs has not been explored. This study aimed to replicate previous MMPI research using the PAI, exploring the relationship between PVTs and overreporting SVTs across three subsamples: neurodevelopmental (attention-deficit/hyperactivity disorder [ADHD]/learning disorder), psychiatric, and mild traumatic brain injury (mTBI). METHODS Participants included 561 consecutive referrals who completed the Test of Memory Malingering (TOMM) and the PAI. Three subgroups were created based on referral question. The relationship between PAI SVTs and the PVT was evaluated through multiple regression analysis. RESULTS The results demonstrated that the relationship between PAI symptom overreporting SVTs, including Negative Impression Management (NIM), Malingering Index (MAL), and Cognitive Bias Scale (CBS), and PVTs varied by referral subgroup. Specifically, overreporting on the CBS, but not NIM or MAL, significantly predicted poorer PVT performance in the full sample and the mTBI sample. In contrast, none of the overreporting SVTs significantly predicted PVT performance in the ADHD/learning disorder sample, whereas all SVTs predicted PVT performance in the psychiatric sample. CONCLUSIONS The results partially replicated prior research comparing SVTs and PVTs and suggested that the constructs measured by SVTs and PVTs vary depending upon population. The results support the necessity of both PVTs and SVTs in clinical neuropsychological practice.
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Jeong Hye Kim
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Douglas M Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
35
Whiteside DM, Basso MR. Innovations in performance and symptom validity testing: introduction to symptom validity section of the special issue. J Clin Exp Neuropsychol 2024; 46:81-85. [PMID: 38654620 DOI: 10.1080/13803395.2024.2346022]
Affiliation(s)
- Michael R Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
36
Finley JCA, Cerny BM, Brooks JM, Obolsky MA, Haneda A, Ovsiew GP, Ulrich DM, Resch ZJ, Soble JR. Cross-validating the Clinical Assessment of Attention Deficit-Adult symptom validity scales for assessment of attention deficit/hyperactivity disorder in adults. J Clin Exp Neuropsychol 2024; 46:111-123. [PMID: 37994688 DOI: 10.1080/13803395.2023.2283940]
Abstract
INTRODUCTION The Clinical Assessment of Attention Deficit-Adult is among the few questionnaires that offer validity indicators (i.e., Negative Impression [NI], Infrequency [IF], and Positive Impression [PI]) for classifying underreporting and overreporting of attention-deficit/hyperactivity disorder (ADHD) symptoms. This is the first study to cross-validate the NI, IF, and PI scales in a sample of adults with suspected or known ADHD. METHOD Univariate and multivariate analyses were conducted to examine the independent and combined value of the NI, IF, and PI scores in predicting invalid symptom reporting and neurocognitive performance in a sample of 543 adults undergoing ADHD evaluation. RESULTS The NI scale demonstrated better classification accuracy than the IF scale in discriminating patients with and without valid scores on measures of overreporting. Only NI scores significantly predicted validity status when used in combination with IF scores. Optimal cut-scores for the NI (≤51; 30% sensitivity / 90% specificity) and IF (≥4; 18% sensitivity / 90% specificity) scales were consistent with those reported in the original manual; however, these indicators poorly discriminated patients with invalid and valid neurocognitive performance. The PI scale demonstrated acceptable classification accuracy in discriminating patients with invalid and valid scores on measures of underreporting, albeit with an optimal cut-score (≥27; 36% sensitivity / 90% specificity) lower than that described in the manual. CONCLUSION Findings provide preliminary evidence of construct validity for these scales as embedded validity indicators of symptom overreporting and underreporting. However, these scales should not be used to guide clinical judgment regarding the validity of neurocognitive test performance.
Affiliation(s)
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Brian M Cerny
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Julia M Brooks
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, University of Illinois College of Medicine, Chicago, IL, USA
- Maximillian A Obolsky
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Aya Haneda
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Devin M Ulrich
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
37
Walitt B, Singh K, LaMunion SR, Hallett M, Jacobson S, Chen K, Enose-Akahata Y, Apps R, Barb JJ, Bedard P, Brychta RJ, Buckley AW, Burbelo PD, Calco B, Cathay B, Chen L, Chigurupati S, Chen J, Cheung F, Chin LMK, Coleman BW, Courville AB, Deming MS, Drinkard B, Feng LR, Ferrucci L, Gabel SA, Gavin A, Goldstein DS, Hassanzadeh S, Horan SC, Horovitz SG, Johnson KR, Govan AJ, Knutson KM, Kreskow JD, Levin M, Lyons JJ, Madian N, Malik N, Mammen AL, McCulloch JA, McGurrin PM, Milner JD, Moaddel R, Mueller GA, Mukherjee A, Muñoz-Braceras S, Norato G, Pak K, Pinal-Fernandez I, Popa T, Reoma LB, Sack MN, Safavi F, Saligan LN, Sellers BA, Sinclair S, Smith B, Snow J, Solin S, Stussman BJ, Trinchieri G, Turner SA, Vetter CS, Vial F, Vizioli C, Williams A, Yang SB, Nath A. Deep phenotyping of post-infectious myalgic encephalomyelitis/chronic fatigue syndrome. Nat Commun 2024; 15:907. [PMID: 38383456 PMCID: PMC10881493 DOI: 10.1038/s41467-024-45107-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2023] [Accepted: 01/16/2024] [Indexed: 02/23/2024] Open
Abstract
Post-infectious myalgic encephalomyelitis/chronic fatigue syndrome (PI-ME/CFS) is a disabling disorder, yet the clinical phenotype is poorly defined, the pathophysiology is unknown, and no disease-modifying treatments are available. We used rigorous criteria to recruit PI-ME/CFS participants and matched controls for deep phenotyping. Among the many physical and cognitive complaints, one defining feature of PI-ME/CFS was an alteration of effort preference, rather than physical or central fatigue, due to dysfunction of integrative brain regions potentially associated with central catechol pathway dysregulation, with consequences for autonomic functioning and physical conditioning. Immune profiling suggested chronic antigenic stimulation, with an increase in naïve and a decrease in switched memory B-cells. Alterations in gene expression profiles of peripheral blood mononuclear cells and metabolic pathways were consistent with cellular phenotypic studies and demonstrated differences according to sex. Together, these clinical abnormalities and biomarker differences provide unique insight into the underlying pathophysiology of PI-ME/CFS, which may guide future intervention.
Affiliation(s)
- Brian Walitt
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Komudi Singh
- National Heart, Lung, and Blood Institute (NHLBI), Bethesda, MD, USA
- Samuel R LaMunion
- National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), Bethesda, MD, USA
- Mark Hallett
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Steve Jacobson
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Kong Chen
- National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), Bethesda, MD, USA
- Richard Apps
- NIH Center for Human Immunology, Autoimmunity, and Inflammation (CHI), Bethesda, MD, USA
- Patrick Bedard
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Robert J Brychta
- National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), Bethesda, MD, USA
- Peter D Burbelo
- National Institute of Dental and Craniofacial Research (NIDCR), Bethesda, MD, USA
- Brice Calco
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Brianna Cathay
- Texas A&M School of Engineering Medicine, College Station, TX, USA
- Li Chen
- Affiliated Hospital of North Sichuan Medical College, Sichuan, China
- Snigdha Chigurupati
- George Washington University Hospital, Washington, DC, USA
- Jinguo Chen
- NIH Center for Human Immunology, Autoimmunity, and Inflammation (CHI), Bethesda, MD, USA
- Foo Cheung
- NIH Center for Human Immunology, Autoimmunity, and Inflammation (CHI), Bethesda, MD, USA
- Amber B Courville
- National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), Bethesda, MD, USA
- Scott A Gabel
- National Institute of Environmental Health Sciences (NIEHS), Chapel Hill, NC, USA
- Angelique Gavin
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- David S Goldstein
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Sean C Horan
- Sidney Kimmel Medical College, Philadelphia, PA, USA
- Silvina G Horovitz
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Kory R Johnson
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Anita Jones Govan
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Kristine M Knutson
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Joy D Kreskow
- National Institute of Nursing Research (NINR), Bethesda, MD, USA
- Mark Levin
- National Heart, Lung, and Blood Institute (NHLBI), Bethesda, MD, USA
- Jonathan J Lyons
- National Institute of Allergy and Infectious Diseases (NIAID), Bethesda, MD, USA
- Nicholas Madian
- National Center for Complementary and Integrative Health (NCCIH), Bethesda, MD, USA
- Nasir Malik
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Andrew L Mammen
- National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), Bethesda, MD, USA
- Patrick M McGurrin
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Ruin Moaddel
- National Institute on Aging (NIA), Baltimore, MD, USA
- Geoffrey A Mueller
- National Institute of Environmental Health Sciences (NIEHS), Chapel Hill, NC, USA
- Amrita Mukherjee
- NIH Center for Human Immunology, Autoimmunity, and Inflammation (CHI), Bethesda, MD, USA
- Sandra Muñoz-Braceras
- National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), Bethesda, MD, USA
- Gina Norato
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Katherine Pak
- National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), Bethesda, MD, USA
- Iago Pinal-Fernandez
- National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), Bethesda, MD, USA
- Traian Popa
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Lauren B Reoma
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Michael N Sack
- National Heart, Lung, and Blood Institute (NHLBI), Bethesda, MD, USA
- Farinaz Safavi
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- National Institute of Allergy and Infectious Diseases (NIAID), Bethesda, MD, USA
- Leorey N Saligan
- National Institute of Nursing Research (NINR), Bethesda, MD, USA
- Brian A Sellers
- NIH Center for Human Immunology, Autoimmunity, and Inflammation (CHI), Bethesda, MD, USA
- Bryan Smith
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Joseph Snow
- National Institute of Mental Health (NIMH), Bethesda, MD, USA
- Barbara J Stussman
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- National Center for Complementary and Integrative Health (NCCIH), Bethesda, MD, USA
- Felipe Vial
- Clínica Alemana Universidad del Desarrollo, Santiago, Chile
- Carlotta Vizioli
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA
- Ashley Williams
- Oakland University William Beaumont School of Medicine, Rochester, MI, USA
- Avindra Nath
- National Institute of Neurological Disorders and Stroke (NINDS), Bethesda, MD, USA.
38
Basso MR, Whiteside DM, Combs D. Introduction to the special issue on performance validity: what are we doing? What should we do? J Clin Exp Neuropsychol 2024; 46:1-5. [PMID: 38678395 DOI: 10.1080/13803395.2024.2347119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/30/2024]
39
Silverberg ND, Rush BK. Neuropsychological evaluation of functional cognitive disorder: A narrative review. Clin Neuropsychol 2024; 38:302-325. [PMID: 37369579 DOI: 10.1080/13854046.2023.2228527] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Accepted: 06/17/2023] [Indexed: 06/29/2023]
Abstract
Objective: To critically review contemporary theoretical models, diagnostic approaches, clinical features, and assessment findings in Functional Cognitive Disorder (FCD), and to make recommendations for neuropsychological evaluation of this condition. Method: Narrative review. Results: FCD is common in neuropsychological practice. It is characterized by cognitive symptoms that are not better explained by another medical or psychiatric disorder. The cognitive symptoms are associated with distress and/or limitations in daily functioning, but are potentially reversible with appropriate identification and treatment. Historically, a variety of diagnostic frameworks have attempted to capture this condition. A contemporary conceptualization of FCD positions it as a subtype of Functional Neurological Disorder, with shared and unique etiological factors. Patients with FCD tend to perform normally on neuropsychological testing or to demonstrate relatively weak memory acquisition (e.g., list-learning trials) in comparison to strong attention and delayed recall performance. Careful history-taking and behavioral observations are essential to support the diagnosis of FCD. Areas of ongoing controversy include operationalizing "internal inconsistencies" and the role of performance validity testing. Evidence for targeted interventions remains scarce. Conclusions: Neuropsychologists familiar with FCD can uniquely contribute to the care of patients with this condition by improving diagnostic clarity, enriching case formulation, communicating effectively with referrers, and leading clinical management. Further research is needed to refine diagnosis, prognosis, and treatment.
Affiliation(s)
- Noah D Silverberg
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Rehabilitation Research Program, Centre for Aging SMART, Vancouver Coastal Health Research Institute, Vancouver, British Columbia, Canada
- Djavad Mowafaghian Centre for Brain Health, Vancouver, British Columbia, Canada
- Beth K Rush
- Department of Psychiatry & Psychology, Mayo Clinic, Jacksonville, Florida, USA
40
Denning JH, Horner MD. The impact of race and other demographic factors on the false positive rates of five embedded Performance Validity Tests (PVTs) in a Veteran sample. J Clin Exp Neuropsychol 2024; 46:25-35. [PMID: 38353039 DOI: 10.1080/13803395.2024.2314737] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2023] [Accepted: 01/11/2024] [Indexed: 05/12/2024]
Abstract
INTRODUCTION It is common to use normative adjustments based on race to maintain accuracy when interpreting cognitive test results during neuropsychological assessment. However, embedded performance validity tests (PVTs) do not adjust for these racial differences and may produce elevated rates of false positives in African American/Black (AA) samples compared to European American/White (EA) samples. METHODS Veterans without Major Neurocognitive Disorder completed an outpatient neuropsychological assessment and were deemed to be performing in a valid manner (e.g., passing both the Test of Memory Malingering Trial 1 [TOMM1] and the Medical Symptom Validity Test [MSVT]; n = 531, EA = 473, AA = 58). Five embedded PVTs were administered to all patients: WAIS-III/IV Processing Speed Index (PSI), Brief Visuospatial Memory Test-Revised Discrimination Index (BVMT-R), Trail Making Test Part A (TMT-A; seconds), California Verbal Learning Test-II (CVLT-II) Forced Choice, and WAIS-III/IV Digit Span Scaled Score. Individual PVT false positive rates, as well as the rate of failing two or more embedded PVTs, were calculated. RESULTS Failure rates for two embedded PVTs (PSI, TMT-A), and the total number of PVTs failed, were higher in the AA sample. The PSI and TMT-A remained significantly impacted by race after accounting for age, education, sex, and presence of Mild Neurocognitive Disorder. PVT failure rates greater than 10% (considered false positives) occurred in both groups (AA: PSI, TMT-A, and BVMT-R, 12-24%; EA: BVMT-R, 17%). Failing 2 or more PVTs (AA = 9%, EA = 4%) was impacted by education and Mild Neurocognitive Disorder but not by race. CONCLUSIONS Individual (timed) PVTs showed higher false positive rates in the AA sample even after accounting for demographic factors and diagnosis of Mild Neurocognitive Disorder. Requiring failure on 2 or more embedded PVTs reduced false positive rates to acceptable levels (10% or less) across both groups and was not significantly influenced by race.
Affiliation(s)
- John H Denning
- Mental Health Service, Ralph H. Johnson Veterans Affairs Health Care System, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Michael David Horner
- Mental Health Service, Ralph H. Johnson Veterans Affairs Health Care System, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
41
Whiteside DM, Basso MR, Shen C, Fry L, Naini S, Waldron EJ, Holker E, Porter J, Eskridge C, Logemann A, Minor GN. The relationship between performance validity testing, external incentives, and cognitive functioning in long COVID. J Clin Exp Neuropsychol 2024; 46:6-15. [PMID: 38299800 DOI: 10.1080/13803395.2024.2312625] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Accepted: 12/13/2023] [Indexed: 02/02/2024]
Abstract
INTRODUCTION Performance validity test (PVT) failures occur in clinical practice, and at higher rates when external incentives are present. However, little PVT research has been applied to the Long COVID population. This study aims to address this gap. METHODS Participants were 247 consecutive individuals with Long COVID seen for neuropsychological evaluation who completed 4 PVTs and a standardized neuropsychological battery. The sample was 84.2% White and 66% female. The mean age was 51.16 years and mean education was 14.75 years. Medical records were searched for external incentives (e.g., disability claims). Three groups were created based on PVT failures (Pass [no failures], Intermediate [1 failure], and Fail [2+ failures]). RESULTS A total of 8.9% of participants failed 2+ PVTs, 6.4% failed one PVT, and 85% passed all PVTs. In the full sample, 25.1% were identified as having an external incentive. The rate of external incentives was significantly higher in the Fail group (54.5%) than in the Pass (22.1%) and Intermediate (20%) groups. Further, the Fail group had lower cognitive scores and a higher frequency of impaired-range scores, consistent with PVT research in other populations. External incentives were uncorrelated with cognitive performance. CONCLUSIONS Consistent with other populations, results suggest Long COVID cases are not immune to PVT failure and that external incentives are associated with PVT failure. Individuals in the Pass and Intermediate groups showed no evidence of significant cognitive deficits, but the Fail group had significantly poorer cognitive performance. Thus, PVTs should be routinely administered in Long COVID clinical cases and research.
Affiliation(s)
- Douglas M Whiteside
- Department of Rehabilitation Medicine, University of Minnesota, Minneapolis, MN, USA
- Michael R Basso
- Department of Psychiatry and Psychology, Mayo Clinic-Rochester, Rochester, MN, USA
- Chen Shen
- Department of Rehabilitation Medicine, University of Minnesota, Minneapolis, MN, USA
- Laura Fry
- Department of Rehabilitation Medicine, University of Minnesota, Minneapolis, MN, USA
- Savana Naini
- Department of Neurology, University of Virginia, Charlottesville, VA, USA
- Eric J Waldron
- Department of Rehabilitation Medicine, University of Minnesota, Minneapolis, MN, USA
- Erin Holker
- Department of Rehabilitation Medicine, University of Minnesota, Minneapolis, MN, USA
- Jim Porter
- Department of Rehabilitation Medicine, University of Minnesota, Minneapolis, MN, USA
- Courtney Eskridge
- Department of Rehabilitation Medicine, University of Minnesota, Minneapolis, MN, USA
- Allison Logemann
- Department of Rehabilitation Medicine, University of Minnesota, Minneapolis, MN, USA
- Greta N Minor
- Department of Rehabilitation Medicine, University of Minnesota, Minneapolis, MN, USA
42
Beach J, Bain K, Valencia J, Marceaux J, Soble J. Validation and psychometric properties of the Word Choice Test-10 as an abbreviated performance validity test. Clin Neuropsychol 2024; 38:493-507. [PMID: 37266928 DOI: 10.1080/13854046.2023.2218576] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Accepted: 05/23/2023] [Indexed: 06/03/2023]
Abstract
Objective: To validate and establish the psychometric properties of an abbreviated, 10-item version of the Word Choice Test (WCT). Method: Data from 110 clinically referred participants (M age = 55.92, SD = 14.07; M education = 13.74, SD = 2.43; 84.5% male) in a Veterans Affairs neuropsychology outpatient clinic were analyzed. All participants completed the WCT, the Test of Memory Malingering Trial 1 (TOMM T1), the Word Memory Test (WMT), and the Digit Span subtest of the WAIS-IV as part of a larger battery of neuropsychological tests. Results: Correlation analyses revealed significant relationships between the WCT-10 and the TOMM T1, Reliable Digit Span (RDS) forward/backward, and the IR, DR, and CNS subtests of the WMT. ROC analysis for the WCT-10 indicated an optimal cutoff of 2 or more errors, with 52% sensitivity and 97% specificity (AUC = .786, p < .001), compared with the standard administration of the WCT at a cutoff of 8 or more errors, which had 67% sensitivity and 91% specificity. Sensitivity/specificity values remained adequate at a cutoff of two or more errors when participants with cognitive impairment (sensitivity = .52, specificity = .92) and without cognitive impairment (sensitivity = .52, specificity = 1.0) were examined separately. Conclusions: The WCT-10, an abbreviated free-standing PVT comprising the first 10 items of the WCT, demonstrated clinical utility in a mixed clinical sample of veterans and was robust to cognitive impairment. This abbreviated PVT may benefit researchers and clinicians by adequately identifying invalid performance while minimizing administration time.
Affiliation(s)
- Jameson Beach
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Kathleen Bain
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Julianna Valencia
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Janice Marceaux
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Jason Soble
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
43
Peak AM, Marceaux JC, Chicota-Carroll C, Soble JR. Cross-validation of the Trail Making Test as a non-memory-based embedded performance validity test among veterans with and without cognitive impairment. J Clin Exp Neuropsychol 2024; 46:16-24. [PMID: 38007610 DOI: 10.1080/13803395.2023.2287784] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2023] [Accepted: 11/20/2023] [Indexed: 11/27/2023]
Abstract
OBJECTIVE This study cross-validated multiple Trail Making Test (TMT) Parts A and B scores as non-memory-based embedded performance validity tests (PVTs) for detecting invalid neuropsychological performance among veterans with and without cognitive impairment. METHOD Data were collected from a demographically and diagnostically diverse mixed clinical sample of 100 veterans undergoing outpatient neuropsychological evaluation at a Southwestern VA Medical Center. As part of a larger battery of neuropsychological tests, all veterans completed TMT A and B and four independent criterion PVTs, which were used to classify veterans into valid (n = 75) and invalid (n = 25) groups. Within the valid group, 47% (n = 35) were cognitively impaired. RESULTS In the overall sample, all embedded PVTs derived from TMT A and B raw and demographically corrected T-scores significantly differed between validity groups (ηp2 = .21-.31), with significant areas under the curve (AUCs) of .72-.78 and 32-48% sensitivity (≥91% specificity) at optimal cut-scores. When subdivided by cognitive impairment status (i.e., valid-unimpaired vs. invalid; valid-impaired vs. invalid), all TMT scores yielded significant AUCs of .80-.88 and 56-72% sensitivity (≥90% specificity) at optimal cut-scores. Among veterans with cognitive impairment, neither TMT A nor B raw scores significantly differentiated the invalid from the valid-cognitively impaired group; demographically corrected T-scores did differentiate the groups but had poor classification accuracy (AUCs = .66-.68) and reduced sensitivity of 28-44% (≥91% specificity). CONCLUSIONS Embedded PVTs derived from TMT Parts A and B raw and T-scores accurately differentiated valid from invalid neuropsychological performance among veterans without cognitive impairment; the demographically corrected T-scores were generally more robust and more consistent with prior studies than raw scores. By contrast, TMT embedded PVTs had poor accuracy and low sensitivity among veterans with cognitive impairment, suggesting limited utility as PVTs in populations with cognitive dysfunction.
Affiliation(s)
- Ashley M Peak
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
44
Rohling ML, Demakis GJ, Langhinrichsen-Rohling J. Lowered cutoffs to reduce false positives on the Word Memory Test. J Clin Exp Neuropsychol 2024; 46:67-79. [PMID: 38362939 DOI: 10.1080/13803395.2024.2314736] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2023] [Accepted: 01/11/2024] [Indexed: 02/17/2024]
Abstract
OBJECTIVE To adjust the decision criterion for the Word Memory Test (WMT; Green, 2003) to minimize the frequency of false positives. METHOD Archival data were combined into a database (n = 3,210) to examine the best cut score for the WMT. We compared results based on the original scoring rules with those based on adjusted scoring rules, using a criterion composed of 16 performance validity tests (PVTs) exclusive of the WMT. Cutoffs based on peer-reviewed publications and test manuals were used. The resulting PVT composite was considered the best estimate of validity status. We targeted a specificity of .90, that is, a false-positive rate of less than .10, across multiple samples. RESULTS Each examinee was administered the WMT as well as, on average, 5.5 (SD = 2.5) other PVTs. Based on the original scoring rules of the WMT, 31.8% of examinees failed. Using a single failure on the criterion PVT (C-PVT), the base rate of failure was 45.9%. When two or more failures on the C-PVT were required, the failure rate dropped to 22.8%. A contingency analysis (i.e., χ2) of the two-failure C-PVT model against the original WMT rules yielded only 65.3% agreement. However, using our adjusted rules for the WMT, which relied on only the IR and DR WMT subtest scores with a cutoff of 77.5%, agreement between the adjusted rules and the C-PVT criterion equaled 80.8%, for an improvement of 12.1%. The adjustment produced a 49.2% reduction in false positives while preserving a sensitivity of 53.6%. The specificity for the new rules was 88.8%, for a false positive rate of 11.2%. CONCLUSIONS Results supported lowering the cut score for correct responding from 82.5% to 77.5%. We also recommend discontinuing use of the Consistency subtest score in determining WMT failure.
45
Patrick SD, Rapport LJ, Hanks RA, Kanser RJ. Detecting feigned cognitive impairment using pupillometry on the Warrington Recognition Memory Test for Words. J Clin Exp Neuropsychol 2024; 46:36-45. [PMID: 38402625 PMCID: PMC11087194 DOI: 10.1080/13803395.2024.2312624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Accepted: 01/05/2024] [Indexed: 02/27/2024]
Abstract
OBJECTIVE Pupillometry provides information about physiological and psychological processes related to cognitive load, familiarity, and deception, and it is outside of conscious control. This study examined pupillary dilation patterns during a performance validity test (PVT) among adults with true and feigned cognitive impairment from traumatic brain injury (TBI). PARTICIPANTS AND METHODS Participants were 214 adults in three groups: adults with bona fide moderate to severe TBI (TBI; n = 51), healthy comparisons instructed to perform their best (HC; n = 72), and healthy adults instructed and incentivized to simulate cognitive impairment due to TBI (SIM; n = 91). The Recognition Memory Test (RMT) was administered in the context of a comprehensive neuropsychological battery. Three pupillary indices were evaluated. Two pure pupil dilation (PD) indices assessed a simple measure of baseline arousal (PD-Baseline) and a nuanced measure of dynamic engagement (PD-Range). A pupillary-behavioral index, dilation-response inconsistency (DRI), captured the frequency with which examinees displayed a pupillary familiarity response to the correct answer but selected the unfamiliar stimulus (incorrect answer). RESULTS All three indices differed significantly among the groups, with medium-to-large effect sizes. PD-Baseline appeared sensitive to oculomotor dysfunction due to TBI; adults with TBI displayed significantly lower chronic arousal than the two groups of healthy adults (SIM, HC). Dynamic engagement (PD-Range) yielded a hierarchical structure such that SIM were more dynamically engaged than TBI, followed by HC. As predicted, simulators engaged in DRI significantly more frequently than the other groups. Moreover, subgroup analyses indicated that DRI differed significantly between simulators who scored in the invalid range on the RMT (n = 45) and adults with genuine TBI who scored invalidly (n = 15). CONCLUSIONS The findings support continued research on the application of pupillometry to performance validity assessment. Overall, they highlight the promise of biometric indices in multimethod assessments of performance validity.
Affiliation(s)
- Sarah D Patrick
- Department of Psychology, Wayne State University, Detroit, Michigan, USA
- Lisa J Rapport
- Department of Psychology, Wayne State University, Detroit, Michigan, USA
- Robin A Hanks
- Department of Physical Medicine and Rehabilitation, Wayne State University School of Medicine, Detroit, Michigan, USA
- Robert J Kanser
- Department of Psychology, Wayne State University, Detroit, Michigan, USA
- The University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina, USA
46
Basso MR, Guzman D, Hoffmeister J, Mulligan R, Whiteside DM, Combs D. Use of perceptual memory as a performance validity indicator: initial validation with simulated mild traumatic brain injury. J Clin Exp Neuropsychol 2024; 46:55-66. [PMID: 38346160 DOI: 10.1080/13803395.2024.2314991] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2023] [Accepted: 01/21/2024] [Indexed: 05/12/2024]
Abstract
INTRODUCTION Many commonly employed performance validity tests (PVTs) are several decades old and vulnerable to compromise, creating a need for novel instruments. Because implicit/non-declarative memory may be robust to brain damage, tasks that rely upon such memory may serve as effective PVTs. Using a simulation design, this experiment evaluated whether novel tasks that rely upon perceptual memory hold promise as PVTs. METHOD Sixty healthy participants were instructed to simulate symptoms of mild traumatic brain injury (TBI), and they were compared to a group of 20 honest-responding individuals. Simulator groups received varying levels of information concerning TBI symptoms, resulting in naïve, sophisticated, and test-coached groups. The Word Memory Test, Test of Memory Malingering, and California Verbal Learning Test-II Forced Choice Recognition Test were administered. To assess perceptual memory, selected images from the Gollin Incomplete Figures and Mooney Closure Test were presented as visual perception tasks. After brief delays, memory for the images was assessed. RESULTS No group differences emerged on the perception trials of the Gollin and Mooney figures, but simulators remembered fewer images than honest responders. Simulator groups differed on the standard PVTs but performed equivalently on the Gollin and Mooney figures, implying robustness to coaching. At a criterion of 90% specificity, the Gollin and Mooney figures achieved at least 90% sensitivity, comparing favorably to the standard PVTs. CONCLUSIONS The Gollin and Mooney figures hold promise as novel PVTs. As perceptual memory tests, they may be relatively robust to brain damage, but future research involving clinical samples is necessary to substantiate this assertion.
Affiliation(s)
- Ryan Mulligan
- VA Central Western Massachusetts, Leeds, Massachusetts
47
Scheeren AM, Olde Dubbelink L, Lever AG, Geurts HM. Two validation studies of a performance validity test for autistic adults. Appl Neuropsychol Adult 2024:1-13. [PMID: 38279835 DOI: 10.1080/23279095.2024.2305206]
Abstract
In two studies we examined the potential of a simple emotion recognition task, the Morel Emotional Numbing Test (MENT), as a performance validity test (PVT) for autism-related cognitive difficulties in adulthood. The aim of a PVT is to indicate non-credible performance, which can aid the interpretation of psychological assessments. There are currently no validated PVTs for autism-related difficulties in adulthood. In Study 1, non-autistic university students (aged 18-46 years) were instructed to simulate being autistic during a psychological assessment (simulation condition; n = 26). These students made more errors on the MENT than those instructed to do their best (control condition; n = 26). In Study 2, we tested how well autistic adults performed on the MENT. We found that clinically diagnosed autistic adults and non-autistic adults (both n = 25; 27-57 years; IQ > 80) performed equally well on the MENT. Moreover, autistic adults made significantly fewer errors than the instructed simulators in Study 1. The MENT reached a specificity of ≥98% (identifying 100% of non-simulators as non-simulators in Study 1 and 98% in Study 2) and a sensitivity of 96% (identifying 96% of simulators as simulators). Together these findings provide the first empirical evidence for the validity of the MENT as a potential PVT for autism-related cognitive difficulties.
Affiliation(s)
- Anke M Scheeren
- Dutch Autism & ADHD Research Center, Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Linda Olde Dubbelink
- Dutch Autism & ADHD Research Center, Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Anne Geeke Lever
- Dutch Autism & ADHD Research Center, Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Hilde M Geurts
- Dutch Autism & ADHD Research Center, Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Dr. Leo Kannerhuis, autism clinic, Amsterdam, the Netherlands
48
Ton Loy AF, Lee JE, Asimakopoulos G, Sakamoto MS, Merritt VC. Symptom attribution is a stronger predictor of PVT-failure than symptom endorsement in treatment-seeking Veterans with remote mTBI history: A pilot study. Appl Neuropsychol Adult 2023:1-6. [PMID: 38113857 DOI: 10.1080/23279095.2023.2293979]
Abstract
OBJECTIVE To examine relationships between performance validity testing (PVT), neurobehavioral symptom endorsement, and symptom attribution in Veterans with a history of mild traumatic brain injury (mTBI). METHOD Participants included treatment-seeking Veterans (n = 37) with remote mTBI histories who underwent a neuropsychological assessment and completed a modified version of the Neurobehavioral Symptom Inventory (NSI) to assess symptom endorsement and symptom attribution (the latter evaluated by having Veterans indicate whether they believed each NSI symptom was caused by their mTBI). Veterans were divided into two subgroups, PVT-Valid (n = 25) and PVT-Invalid (n = 12). RESULTS Independent samples t-tests showed that two of five symptom endorsement variables and all five symptom attribution variables were significantly different between PVT groups (PVT-Invalid > PVT-Valid; Cohen's d = 0.67-1.02). Logistic regression analyses adjusting for PTSD symptoms showed that symptom endorsement (Nagelkerke's R2 = .233) and symptom attribution (Nagelkerke's R2 = .279) significantly distinguished between PVT groups. According to the Wald criterion, greater symptom endorsement (OR = 1.09) and higher attribution of symptoms to mTBI (OR = 1.21) each reliably predicted PVT-failure. CONCLUSIONS While both symptom endorsement and symptom attribution were significantly associated with PVT-failure, our preliminary results suggest that symptom attribution is a stronger predictor of PVT-failure. Results highlight the importance of assessing symptom attribution to mTBI in this population.
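The odds ratios reported here (OR = 1.09 for endorsement, OR = 1.21 for attribution) describe how the odds of PVT-failure scale per one-point increase in each predictor; in logistic regression, OR = exp(β), and odds multiply across units. A minimal sketch, using the abstract's ORs but otherwise invented quantities:

```python
import math

# How a per-unit odds ratio from logistic regression compounds.
# ORs are taken from the abstract; the 10-point increase is an
# illustrative assumption, not a value from the study.

def odds_multiplier(odds_ratio, units):
    """Total multiplicative change in odds of PVT-failure for a
    `units`-point increase in the predictor."""
    return odds_ratio ** units

# A 10-point rise in symptom attribution (OR = 1.21 per point)
# multiplies the odds of PVT-failure by roughly 6.7:
m = odds_multiplier(1.21, 10)

# The OR is the exponentiated regression coefficient: OR = exp(beta).
beta = math.log(1.21)
recovered_or = math.exp(beta)
```

This compounding is why a seemingly modest per-point OR of 1.21 can separate groups strongly across a realistic range of attribution scores.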
Affiliation(s)
- Adan F Ton Loy
- Research Service, VA San Diego Healthcare System (VASDHS), San Diego, CA, USA
- Jeong-Eun Lee
- Research Service, VA San Diego Healthcare System (VASDHS), San Diego, CA, USA
- McKenna S Sakamoto
- Department of Psychology, Penn State University, University Park, PA, USA
- Victoria C Merritt
- Research Service, VA San Diego Healthcare System (VASDHS), San Diego, CA, USA
- Department of Psychiatry, School of Medicine, UC San Diego, La Jolla, CA, USA
- Center of Excellence for Stress and Mental Health, VASDHS, San Diego, CA, USA
49
Henry GK. Detection of noncredible cognitive performance with Wechsler Memory Scale-IV measures in mild traumatic brain injury litigants. Appl Neuropsychol Adult 2023:1-8. [PMID: 38039520 DOI: 10.1080/23279095.2023.2287139]
Abstract
OBJECTIVE To investigate the operating characteristics of selective measures on the Wechsler Memory Scale-IV (WMS-IV) to predict noncredible neurocognitive dysfunction in a sample of mild traumatic brain injury (mTBI) litigants. METHOD Participants included 110 adults who underwent a comprehensive neuropsychological examination. Criterion groups were formed based upon their performance on stand-alone performance validity tests (PVTs). RESULTS Participants failing two stand-alone PVTs exhibited significantly lower scores across all WMS-IV dependent variables of interest compared to participants who passed both PVTs. Participants who failed one PVT were excluded. Bivariate logistic regression revealed that all six dependent variables were significant predictors of PVT status. The best prediction model consisted of three WMS-IV variables: Logical Memory Delayed Recall (LM2), Logical Memory Recognition (LMR), and Visual Reproduction Recognition (VRR). This model demonstrated an accuracy of 90.2%, 0.89 sensitivity, 0.92 specificity, and an area under the receiver operating characteristic (ROC) curve of 0.957. CONCLUSION The current empirically derived cut scores and logit equation for the WMS-IV may be an additional consideration in analyzing data validity and noncredible performance in mTBI personal injury litigants ages 18-69.
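The 0.957 figure is an area under the ROC curve, which can be read as the probability that a randomly chosen PVT-pass examinee outscores a randomly chosen PVT-fail examinee. A minimal sketch of that pairwise-comparison computation (equivalent to the Mann-Whitney U statistic), using invented subtest scores rather than the study's data:

```python
# Hypothetical sketch of AUC as a pairwise win rate between criterion groups.
# Scores below are assumed values, not the study's WMS-IV data.

def roc_auc(passed, failed):
    """AUC via pairwise comparison: fraction of (pass, fail) score pairs
    where the passing examinee scores higher, counting ties as half."""
    pairs = [(p, f) for p in passed for f in failed]
    wins = sum(1.0 if p > f else 0.5 if p == f else 0.0 for p, f in pairs)
    return wins / len(pairs)

passed_pvts = [12, 11, 13, 10, 14, 12]  # assumed scaled scores, PVT-pass group
failed_pvts = [6, 8, 5, 9, 7, 10]       # assumed scaled scores, PVT-fail group

auc = roc_auc(passed_pvts, failed_pvts)
```

An AUC near 1.0 (as with the 0.957 reported above) means the two criterion groups' score distributions barely overlap, which is what makes a variable useful for classification.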
50
Crișan I, Sava FA. Validity assessment in Eastern Europe: cross-validation of the Dot Counting Test and MODEMM against the TOMM-1 and Rey-15 in a Romanian mixed clinical sample. Arch Clin Neuropsychol 2023:acad085. [PMID: 37961918 DOI: 10.1093/arclin/acad085]
Abstract
OBJECTIVE This study investigated performance validity in the understudied Romanian clinical population by exploring classification accuracies of the Dot Counting Test (DCT) and the first Romanian performance validity test (PVT) (Memory of Objects and Digits and Evaluation of Memory Malingering/MODEMM) in a heterogeneous clinical sample. METHODS We evaluated 54 outpatients (26 females; age: M = 62.02, SD = 12.3; education: M = 2.41, SD = 2.82) with the Test of Memory Malingering 1 (TOMM-1), Rey Fifteen Items Test (Rey-15) (free recall and recognition trials), DCT, MODEMM, and MMSE/MoCA as part of their neuropsychological assessment. Accuracy parameters and failure base rates were computed for the DCT and MODEMM indicators against the TOMM-1 and Rey-15. Two patient groups were constructed according to psychometrically defined credible/noncredible performance (i.e., passing or failing both the TOMM-1 and Rey-15). RESULTS Similar to other cultures, a cutoff of ≥18 on the DCT E score produced the best combination of sensitivity (0.50-0.57) and specificity (≥0.90). MODEMM indicators based on recognition accuracy, inconsistencies, and inclusion false positives generated 0.75-0.86 sensitivities at ≥0.90 specificities. Multivariable models of MODEMM indicators reached perfect sensitivities at ≥0.90 specificities against two PVTs. Patients who failed the TOMM-1 and Rey-15 were significantly more likely to fail the DCT and MODEMM than patients who passed both PVTs. CONCLUSIONS Our results offer proof of concept for the DCT's cross-cultural validity and the applicability of the MODEMM to Romanian clinical examinees, further recommending the use of heterogeneous validity indicators in clinical assessments.
Affiliation(s)
- Iulia Crișan
- Department of Psychology, West University of Timișoara, Timișoara 300223, Romania
- Florin Alin Sava
- Department of Psychology, West University of Timişoara, Timișoara 300223, Romania