1
Turner TH, Scott EP, Barlis K, Rodriguez-Porcel F, Sartori AC, Joseph J. The Rapid Access Memory Program for Addressing Concerns of Incipient Dementia in Academic Primary Care Settings. J Geriatr Psychiatry Neurol 2024; 37:255-262. [PMID: 38156442] [DOI: 10.1177/08919887231225482]
Abstract
BACKGROUND Expedient diagnosis of incipient dementia is often hindered by time constraints in primary care visits, a shortage of dementia specialists, and extended waitlists for comprehensive neuropsychological evaluations. METHODS We developed the Rapid Access Memory Program (RAMP) to improve access to neuropsychological services for older adults presenting to our institutional primary care clinics with concerns of cognitive decline. RAMP provides abbreviated neurocognitive assessment, same-day patient feedback, and expedited reporting to referring providers, and is financially self-supporting. Here, we describe the development of RAMP and clinical outcomes from the first 3 years. RESULTS Of 160 patients seen, dementia was diagnosed in 30% and Mild Cognitive Impairment in 50%; Alzheimer's disease was the most common suspected etiology. A new psychiatric diagnosis was made in about one-third (n = 54). The most frequent recommendations involved medication adjustments (initiating cholinesterase inhibitors, deprescribing anticholinergics), safety (driving, decision-making), and specialist referrals. Additionally, 27 (17%) subsequently enrolled in local research. CONCLUSIONS Results support the feasibility and utility of RAMP for connecting older adults in primary care with neuropsychological services.
Affiliation(s)
- Travis H Turner
- Department of Neurology, Medical University of South Carolina, Charleston, SC, USA
- WCG Clinical Endpoint Solutions, Princeton, NJ, USA
- Emmi P Scott
- Department of Neurology, Medical University of South Carolina, Charleston, SC, USA
- Katherine Barlis
- Department of Neurosciences, Medical University of South Carolina, Charleston, SC, USA
- Department of Psychology, University of Arizona, Tucson, AZ, USA
- Andrea C Sartori
- Department of Neurology, Medical University of South Carolina, Charleston, SC, USA
- Jane Joseph
- Department of Neurosciences, Medical University of South Carolina, Charleston, SC, USA
2
Parsons J, Rodrigues NB, Erdodi LA. The classification accuracy of Warrington's Recognition Memory Test (Words) as a performance validity test in a neurorehabilitation setting. Appl Neuropsychol Adult 2024:1-11. [PMID: 38913011] [DOI: 10.1080/23279095.2024.2337130]
Abstract
This study was designed to evaluate the classification accuracy of Warrington's Recognition Memory Test (RMT) as a performance validity test (PVT) in 167 patients (97, or 58.1%, men; MAge = 40.4; MEducation = 13.8) medically referred for neuropsychological evaluation, against five psychometrically defined criterion groups. At the optimal cutoff (≤42), the RMT produced an acceptable combination of sensitivity (.36-.60) and specificity (.85-.95), correctly classifying 68.4-83.3% of the sample. Making the cutoff more conservative (≤41) improved specificity (.88-.95) at the expense of sensitivity (.30-.60). Lowering the cutoff to ≤40 achieved uniformly high specificity (.91-.95) but diminished sensitivity (.27-.48). RMT scores were unrelated to lateral dominance, education, or gender. The RMT was sensitive to a three-way classification of performance validity (Pass/Borderline/Fail), further demonstrating its discriminant power. Despite a notable decline in research on its classification accuracy within the last decade, the RMT remains an effective free-standing PVT that is robust to demographic variables. Relatively low sensitivity is its main liability. Further research is needed on its cross-cultural validity (sensitivity to limited English proficiency).
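The cutoff trade-off described above can be made concrete with a small calculation. A minimal Python sketch, using invented scores and criterion-group labels (not the study's data), in which a score at or below the cutoff counts as a PVT failure:

```python
def classification_stats(scores, invalid, cutoff):
    """Sensitivity, specificity, and overall classification rate when a
    score <= cutoff is flagged as invalid performance."""
    tp = sum(1 for s, bad in zip(scores, invalid) if bad and s <= cutoff)
    fn = sum(1 for s, bad in zip(scores, invalid) if bad and s > cutoff)
    tn = sum(1 for s, bad in zip(scores, invalid) if not bad and s > cutoff)
    fp = sum(1 for s, bad in zip(scores, invalid) if not bad and s <= cutoff)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(scores)
    return sensitivity, specificity, accuracy

# Invented example data: True marks a noncredible-criterion case.
scores  = [38, 43, 41, 44, 40, 45, 39, 42]
invalid = [True, True, True, False, False, False, True, False]
```

Making the cutoff more conservative can only shrink the set of flagged examinees, which is why specificity rises while sensitivity falls as the cutoff moves from ≤42 toward ≤40.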
Affiliation(s)
- Jenna Parsons
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Nelson B Rodrigues
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Star UBB Institute, Babeș-Bolyai University, Cluj-Napoca, Romania
3
Floden DP, Hogue O, Postle AF, Busch RM. Validation of Self-Administered Visual and Verbal Episodic Memory Tasks in Healthy Controls and a Clinical Sample. Assessment 2024; 31:933-946. [PMID: 37710410] [DOI: 10.1177/10731911231195844]
Abstract
This study evaluated the performance characteristics, construct validity, and reliability of two computerized, self-administered verbal and visual recognition memory tests based on the Remember-Know paradigm. Around 250 healthy control participants and 440 patients referred for neuropsychological assessment used an iPad to complete the Words and Faces recognition memory tests before or after concurrent neuropsychological testing. Performance accuracy was high but without ceiling effects. Education, but not age, was related to overall performance in both samples, while the influence of gender and race differed across samples. In the clinical sample, overall performance was worse in patients demonstrating memory impairment on clinical assessment. The Words and Faces subtests demonstrated the strongest correlations with neuropsychological measures of verbal and nonverbal memory, respectively. Both showed moderate correlations with processing speed, while Faces was also correlated with visuospatial skills. The memory tests showed good test-retest reliability over two testing sessions. These findings demonstrate acceptable psychometric properties in clinical and community samples and suggest that this computerized format is feasible for memory assessment in clinical contexts.
4
Kanser RJ, Rapport LJ, Hanks RA, Patrick SD. Time and money: Exploring enhancements to performance validity research designs. Appl Neuropsychol Adult 2024; 31:256-263. [PMID: 34932422] [DOI: 10.1080/23279095.2021.2019740]
Abstract
INTRODUCTION The study examined the effect of preparation time and financial incentives on healthy adults' ability to simulate traumatic brain injury (TBI) during neuropsychological evaluation. METHOD We retrospectively compared two TBI simulator group designs: a traditional design employing a single session of standard coaching immediately before participation (SIM-SC; n = 46) and a novel design that provided financial incentive and preparation time (SIM-IP; n = 49). Both groups completed an ecologically valid neuropsychological test battery that included widely used cognitive tests and five common performance validity tests (PVTs). RESULTS Compared to SIM-SC, SIM-IP performed significantly worse and had higher rates of impairment on tests of processing speed and executive functioning (Trails A and B). SIM-IP were more likely than SIM-SC to avoid detection on one of the PVTs and performed somewhat better on three of the PVTs, but the effects were small and non-significant. SIM-IP did not demonstrate significantly higher rates of successful simulation (i.e., performing as impaired on cognitive tests with <2 PVT failures). Overall, the rate of successful simulation was ∼40% under a liberal criterion that defined cognitive impairment as performance >1 SD below the normative mean. Under a more rigorous impairment criterion (>1.5 SD below the normative mean), successful simulation approached 35%. CONCLUSIONS Incentive and preparation time appear to add limited incremental effect over traditional, single-session coaching in analog studies of TBI simulation. Moreover, these design modifications did not translate to meaningfully higher rates of successful simulation and avoidance of detection by PVTs.
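The successful-simulation criterion above (looking impaired while failing fewer than two PVTs) can be sketched as a simple rule. The exact operationalization of "impaired" is an assumption here (any cognitive z-score more than the stated number of SDs below the normative mean); the function and its inputs are illustrative, not the study's scoring code:

```python
def simulates_successfully(z_scores, pvt_failures, sd_cut=1.0):
    """True if the examinee appears impaired (any cognitive z-score more
    than sd_cut SDs below the normative mean; assumed operationalization)
    while failing fewer than two PVTs -- impairment without detection."""
    impaired = any(z < -sd_cut for z in z_scores)
    undetected = pvt_failures < 2
    return impaired and undetected
```

Raising `sd_cut` from 1.0 to 1.5 shrinks the set of profiles that count as impaired, which is the mechanism behind the drop from ∼40% to ∼35% under the stricter criterion reported above.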
Affiliation(s)
- Robert J Kanser
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Department of Physical Medicine and Rehabilitation, University of North Carolina, Chapel Hill, NC, USA
- Lisa J Rapport
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Robin A Hanks
- Department of Physical Medicine and Rehabilitation, Wayne State University, Detroit, MI, USA
- Sarah D Patrick
- Department of Psychology, Wayne State University, Detroit, MI, USA
5
Kim S, Currao A, Brown E, Milberg WP, Fortier CB. Importance of validity testing in psychiatric assessment: evidence from a sample of multimorbid post-9/11 veterans. J Int Neuropsychol Soc 2024; 30:410-419. [PMID: 38014547] [DOI: 10.1017/s1355617723000711]
Abstract
OBJECTIVE Performance validity tests (PVTs) and symptom validity tests (SVTs) are necessary components of neuropsychological testing to identify suboptimal performance and response bias that may impact diagnosis and treatment. The current study examined the clinical and functional characteristics of veterans who failed PVTs and the relationship between PVT and SVT failures. METHOD Five hundred sixteen post-9/11 veterans participated in clinical interviews, neuropsychological testing, and several validity measures. RESULTS Veterans who failed 2+ PVTs performed significantly worse than veterans who failed one PVT in verbal memory (Cohen's d = .60-.69), processing speed (Cohen's d = .68), working memory (Cohen's d = .98), and visual memory (Cohen's d = .88-1.10). Individuals with 2+ PVT failures had greater posttraumatic stress (PTS; β = 0.16, p = .0002) and worse self-reported depression (β = 0.17, p = .0001), anxiety (β = 0.15, p = .0007), sleep (β = 0.10, p = .0233), and functional outcomes (β = 0.15, p = .0009) compared to veterans who passed PVTs. Of the sample, 7.8% failed the SVT (Validity-10; ≥19 cutoff); multiple PVT failures were significantly associated with Validity-10 failure at the ≥19 and ≥23 cutoffs (ps < .0012). The Validity-10 showed moderate correspondence in predicting 2+ PVT failures (AUC = 0.83; 95% CI = 0.76, 0.91). CONCLUSION PVT failures are associated with psychiatric factors, but not traumatic brain injury (TBI). PVT failures predict SVT failure and vice versa. Standard care should include SVTs and PVTs in all clinical assessments, not just neuropsychological assessments, particularly in clinically complex populations.
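The group contrasts above are reported as Cohen's d. As a reference point, here is a pooled-standard-deviation version of the statistic; the abstract does not state which variant the authors used, and the example numbers below are invented:

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference using the pooled
    (sample) standard deviation of the two groups."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd
```

By the usual rule of thumb, the working-memory effect above (d = .98) is "large": the group means differ by about one pooled standard deviation.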
Affiliation(s)
- Sahra Kim
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Alyssa Currao
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Emma Brown
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- William P Milberg
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- Catherine B Fortier
- Translational Research Center for TBI and Stress Disorders and Geriatric Research Education and Clinical Center, VA Boston Healthcare System, Boston, MA, USA
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA
6
Lippa SM, Bailie JM, French LM, Brickell TA, Lange RT. Lifetime blast exposure is not related to cognitive performance or psychiatric symptoms in US military personnel. Clin Neuropsychol 2024:1-23. [PMID: 38494345] [DOI: 10.1080/13854046.2024.2328881]
Abstract
Objective: The present study aimed to examine the impact of lifetime blast exposure (LBE) on neuropsychological functioning in service members and veterans (SMVs). Method: Participants were 282 SMVs, with and without a history of traumatic brain injury (TBI), who were prospectively enrolled in the Defense and Veterans Brain Injury Center (DVBIC)/Traumatic Brain Injury Center of Excellence (TBICoE) Longitudinal TBI Study. A cross-sectional analysis of baseline data was conducted. LBE was based on two factors: Military Occupational Specialty (MOS) and SMV self-report. Participants were divided into three groups based on LBE: Blast Naive (n = 61), Blast + Low Risk MOS (n = 96), and Blast + High Risk MOS (n = 125). Multivariate analysis of variance (MANOVA) was used to examine group differences on neurocognitive domains and the Minnesota Multiphasic Personality Inventory-2 Restructured Form. Results: There were no statistically significant differences in attention/working memory, processing speed, executive functioning, or memory (Fs < 1.75, ps > .1, ηp2s < .032), or in General Cognition (Fs < 0.95, ps > .3, ηp2s < .008). Prior to correction for covariates, lifetime blast exposure was related to the Restructured Clinical (F(18,542) = 1.77, p = .026, ηp2 = .055), Somatic/Cognitive (F(10,550) = 1.99, p = .033, ηp2 = .035), and Externalizing Scales (F(8,552) = 2.17, p = .028, ηp2 = .030); however, these relationships did not remain significant after correction for covariates (Fs < 1.53, ps > .145, ηp2s < .032). Conclusions: We did not find evidence of a relationship between LBE and neurocognitive performance or psychiatric symptoms. This stands in contrast to prior studies demonstrating an association between lifetime blast exposure and highly sensitive blood biomarkers and/or neuroimaging. Overall, findings suggest the neuropsychological impact of lifetime blast exposure is minimal in individuals remaining in, or recently retired from, military service.
Affiliation(s)
- Sara M Lippa
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Jason M Bailie
- Traumatic Brain Injury Center of Excellence, Bethesda, MD, USA
- Naval Hospital Camp Pendleton, Oceanside, CA, USA
- General Dynamics Information Technology, Fairfax, VA, USA
- Louis M French
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Traumatic Brain Injury Center of Excellence, Bethesda, MD, USA
- Tracey A Brickell
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Traumatic Brain Injury Center of Excellence, Bethesda, MD, USA
- General Dynamics Information Technology, Fairfax, VA, USA
- Rael T Lange
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Traumatic Brain Injury Center of Excellence, Bethesda, MD, USA
- General Dynamics Information Technology, Fairfax, VA, USA
- Department of Psychiatry, University of British Columbia, Vancouver, BC, Canada
7
Boress K, Gaasedelen O, Kim JH, Basso MR, Whiteside DM. Examination of the relationship between symptom and performance validity measures across referral subtypes. J Clin Exp Neuropsychol 2024; 46:162-171. [PMID: 37791494] [DOI: 10.1080/13803395.2023.2261633]
Abstract
INTRODUCTION The extent to which performance validity tests (PVTs) and symptom validity tests (SVTs) measure separate constructs is unclear. Prior research using the Minnesota Multiphasic Personality Inventory (MMPI-2 and MMPI-2-RF) suggested that PVTs and SVTs are separate but related constructs. However, the relationship between Personality Assessment Inventory (PAI) SVTs and PVTs has not been explored. This study aimed to replicate previous MMPI research using the PAI, exploring the relationship between PVTs and overreporting SVTs across three subsamples: neurodevelopmental (attention-deficit/hyperactivity disorder (ADHD)/learning disorder), psychiatric, and mild traumatic brain injury (mTBI). METHODS Participants included 561 consecutive referrals who completed the Test of Memory Malingering (TOMM) and the PAI. Three subgroups were created based on referral question. The relationship between PAI SVTs and the PVT was evaluated through multiple regression analysis. RESULTS The relationship between PAI symptom overreporting SVTs, including Negative Impression Management (NIM), the Malingering Index (MAL), and the Cognitive Bias Scale (CBS), and PVT performance varied by referral subgroup. Specifically, overreporting on CBS, but not NIM or MAL, significantly predicted poorer PVT performance in the full sample and the mTBI sample. In contrast, none of the overreporting SVTs significantly predicted PVT performance in the ADHD/learning disorder sample, whereas all SVTs predicted PVT performance in the psychiatric sample. CONCLUSIONS The results partially replicated prior research comparing SVTs and PVTs and suggest that the constructs measured by SVTs and PVTs vary by population. The results support the necessity of both PVTs and SVTs in clinical neuropsychological practice.
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Jeong Hye Kim
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Douglas M Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
8
Robertson-Benta CR, Pabbathi Reddy S, Stephenson DD, Sicard V, Hergert DC, Dodd AB, Campbell RA, Phillips JP, Meier TB, Quinn DK, Mayer AR. Cognition and post-concussive symptom status after pediatric mild traumatic brain injury. Child Neuropsychol 2024; 30:203-220. [PMID: 36825526] [PMCID: PMC10447629] [DOI: 10.1080/09297049.2023.2181946]
Abstract
Cognitive impairment and post-concussive symptoms (PCS) represent hallmark sequelae of pediatric mild traumatic brain injury (pmTBI). Few studies have directly compared cognition as a function of PCS status longitudinally. Cognitive outcomes were therefore compared for asymptomatic pmTBI, symptomatic pmTBI, and healthy controls (HC) during the sub-acute (SA; 1-11 days) and early chronic (EC; approximately 4 months) post-injury phases. We predicted worse cognitive performance for both pmTBI groups relative to HC at the SA visit. At the EC visit, we predicted continued impairment for the symptomatic group, but no difference between asymptomatic pmTBI and HC. A battery of clinical (semi-structured interviews and self-report questionnaires) and neuropsychological measures was administered to 203 pmTBI and 139 HC participants, with greater than 80% retention at the EC visit. A standardized change method classified pmTBI into binary categories of asymptomatic or symptomatic based on PCS scores. Symptomatic pmTBI performed significantly worse than HC on processing speed, attention, and verbal memory at the SA visit, whereas lower performance was present only for verbal memory in asymptomatic pmTBI. Lower performance in verbal memory persisted for both pmTBI groups at the EC visit. Surprisingly, a minority (16%) of pmTBI switched from asymptomatic to symptomatic status at the EC visit. Current findings suggest that PCS and cognition are more closely coupled during the first week after injury but become decoupled several months post-injury. Evidence of lower performance in verbal memory for both asymptomatic and symptomatic pmTBI suggests that cognitive recovery may be a process separate from the resolution of subjective symptomatology.
Affiliation(s)
- Cidney R Robertson-Benta
- The Mind Research Network/Lovelace Biomedical and Environmental Research Institute, Albuquerque, NM, USA
- Sharvani Pabbathi Reddy
- The Mind Research Network/Lovelace Biomedical and Environmental Research Institute, Albuquerque, NM, USA
- David D Stephenson
- The Mind Research Network/Lovelace Biomedical and Environmental Research Institute, Albuquerque, NM, USA
- Veronik Sicard
- The Mind Research Network/Lovelace Biomedical and Environmental Research Institute, Albuquerque, NM, USA
- Danielle C Hergert
- The Mind Research Network/Lovelace Biomedical and Environmental Research Institute, Albuquerque, NM, USA
- Andrew B Dodd
- The Mind Research Network/Lovelace Biomedical and Environmental Research Institute, Albuquerque, NM, USA
- Richard A Campbell
- Department of Psychiatry and Behavioral Sciences, University of New Mexico, Albuquerque, NM, USA
- John P Phillips
- The Mind Research Network/Lovelace Biomedical and Environmental Research Institute, Albuquerque, NM, USA
- Departments of Psychology and Neurology, University of New Mexico, Albuquerque, NM, USA
- Timothy B Meier
- Department of Neurosurgery, Medical College of Wisconsin, Milwaukee, WI, USA
- Department of Biomedical Engineering, Medical College of Wisconsin, Milwaukee, WI, USA
- Department of Neurobiology and Anatomy, Medical College of Wisconsin, Milwaukee, WI, USA
- Davin K Quinn
- Department of Psychiatry and Behavioral Sciences, University of New Mexico, Albuquerque, NM, USA
- Andrew R Mayer
- The Mind Research Network/Lovelace Biomedical and Environmental Research Institute, Albuquerque, NM, USA
- Department of Psychiatry and Behavioral Sciences, University of New Mexico, Albuquerque, NM, USA
- Departments of Psychology and Neurology, University of New Mexico, Albuquerque, NM, USA
9
Denning JH, Horner MD. The impact of race and other demographic factors on the false positive rates of five embedded Performance Validity Tests (PVTs) in a Veteran sample. J Clin Exp Neuropsychol 2024; 46:25-35. [PMID: 38353039] [DOI: 10.1080/13803395.2024.2314737]
Abstract
INTRODUCTION It is common to use normative adjustments based on race to maintain accuracy when interpreting cognitive test results during neuropsychological assessment. However, embedded performance validity tests (PVTs) do not adjust for these racial differences and may produce elevated false positive rates in African American/Black (AA) samples compared to European American/White (EA) samples. METHODS Veterans without Major Neurocognitive Disorder completed an outpatient neuropsychological assessment and were deemed to be performing in a valid manner (i.e., passing both the Test of Memory Malingering Trial 1 (TOMM1) and the Medical Symptom Validity Test (MSVT); n = 531, EA = 473, AA = 58). Five embedded PVTs were administered to all patients: WAIS-III/IV Processing Speed Index (PSI), Brief Visuospatial Memory Test-Revised Discrimination Index (BVMT-R), Trail Making Test Part A (TMT-A; seconds), California Verbal Learning Test-II (CVLT-II) Forced Choice, and WAIS-III/IV Digit Span scaled score. Individual PVT false positive rates, as well as the rate of failing two or more embedded PVTs, were calculated. RESULTS Failure rates on two embedded PVTs (PSI, TMT-A), and the total number of PVTs failed, were higher in the AA sample. The PSI and TMT-A remained significantly impacted by race after accounting for age, education, sex, and presence of Mild Neurocognitive Disorder. Individual PVT failure rates exceeded 10% (and were considered false positives) in both groups (AA: PSI, TMT-A, and BVMT-R, 12-24%; EA: BVMT-R, 17%). Failing 2 or more PVTs (AA = 9%, EA = 4%) was impacted by education and Mild Neurocognitive Disorder but not by race. CONCLUSIONS Individual (timed) PVTs showed higher false positive rates in the AA sample even after accounting for demographic factors and diagnosis of Mild Neurocognitive Disorder. Requiring failure on 2 or more embedded PVTs reduced false positive rates to acceptable levels across both groups (10% or less) and was not significantly influenced by race.
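Why the 2-or-more rule tames false positives can be illustrated with a back-of-the-envelope probability model. Assuming, unrealistically, that the five embedded PVTs are independent and each carries a 10% false positive rate (embedded PVTs are in fact correlated, so real-world rates differ from this sketch):

```python
from itertools import combinations
from math import prod

def prob_at_least_k(fp_rates, k):
    """P(a validly performing examinee fails >= k of the listed tests),
    treating the tests as independent (a simplifying assumption)."""
    n = len(fp_rates)
    total = 0.0
    for m in range(k, n + 1):
        for failed in combinations(range(n), m):
            fs = set(failed)
            total += prod(fp_rates[i] if i in fs else 1 - fp_rates[i]
                          for i in range(n))
    return total

rates = [0.10] * 5          # five PVTs, each with an assumed 10% FP rate
p_any = prob_at_least_k(rates, 1)   # fail at least one: about 0.41
p_two = prob_at_least_k(rates, 2)   # fail at least two: about 0.08
```

Under these toy assumptions, a single failure occurs in roughly 41% of valid examinees, but two or more failures in only about 8%, mirroring why the multi-failure criterion keeps false positives near or below the 10% level reported above.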
Affiliation(s)
- John H Denning
- Mental Health Service, Ralph H. Johnson Veterans Affairs Health Care System, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Michael David Horner
- Mental Health Service, Ralph H. Johnson Veterans Affairs Health Care System, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
10
Beach J, Bain K, Valencia J, Marceaux J, Soble J. Validation and psychometric properties of the Word Choice Test-10 as an abbreviated performance validity test. Clin Neuropsychol 2024; 38:493-507. [PMID: 37266928] [DOI: 10.1080/13854046.2023.2218576]
Abstract
Objective: The objective of the current investigation was to validate and establish the psychometric properties of an abbreviated, 10-item version of the Word Choice Test (WCT). Method: Data from 110 clinically referred participants (M age = 55.92, SD = 14.07; M education = 13.74, SD = 2.43; 84.5% male) in a Veterans Affairs neuropsychology outpatient clinic were analyzed. All participants completed the WCT, the TOMM Trial 1 (TOMM T1), the Word Memory Test (WMT), and the Digit Span subtest of the WAIS-IV as part of a larger battery of neuropsychological tests. Results: Correlation analyses revealed significant relationships between the WCT-10 and the TOMM T1, Reliable Digit Span (RDS) forward/backward, and the IR, DR, and CNS subtests of the WMT. ROC analysis for the WCT-10 indicated an optimal cutoff of 2 or more errors, with 52% sensitivity and 97% specificity (AUC = .786, p < .001), compared with the standard administration of the WCT, whose cutoff of 8 or more errors had 67% sensitivity and 91% specificity. Sensitivity/specificity values remained adequate at a cutoff of two or more errors when participants with cognitive impairment (sensitivity = .52, specificity = .92) and without cognitive impairment (sensitivity = .52, specificity = 1.0) were examined separately. Conclusions: The present investigation revealed that the WCT-10, an abbreviated free-standing PVT comprising the first 10 items of the WCT, demonstrated clinical utility in a mixed clinical sample of veterans and was robust to cognitive impairment. This abbreviated PVT may benefit researchers and clinicians by adequately identifying invalid performance while minimizing completion time.
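The ROC-based selection of an "optimal" cutoff above can be approximated by scanning candidate error-count cutoffs and maximizing Youden's J (sensitivity + specificity − 1). This is a sketch on invented data; the study's actual optimality criterion is not stated in the abstract and may differ:

```python
def best_error_cutoff(errors, invalid):
    """Scan 'fail if errors >= cut' rules and return (cut, sens, spec)
    for the rule with the highest Youden's J."""
    best = None
    for cut in range(0, max(errors) + 2):
        tp = sum(1 for e, bad in zip(errors, invalid) if bad and e >= cut)
        fn = sum(1 for e, bad in zip(errors, invalid) if bad and e < cut)
        fp = sum(1 for e, bad in zip(errors, invalid) if not bad and e >= cut)
        tn = sum(1 for e, bad in zip(errors, invalid) if not bad and e < cut)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, cut, sens, spec)
    return best[1:]

# Invented error counts; True marks a criterion-invalid case.
errors  = [0, 0, 1, 3, 5, 2, 0, 4]
invalid = [False, False, False, True, True, True, False, True]
```

In clinical PVT work the cutoff is often then nudged toward the specificity side, since false positives (flagging credible patients) are usually considered the costlier error.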
Affiliation(s)
- Jameson Beach
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Kathleen Bain
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Julianna Valencia
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Janice Marceaux
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Jason Soble
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
11
Ashton Rennison VL, Chovaz CJ, Zirul S. Cognition and psychological well-being in adults with post COVID-19 condition and analyses of symptom sequelae. Clin Neuropsychol 2024; 38:326-353. [PMID: 37350239] [DOI: 10.1080/13854046.2023.2227407]
Abstract
OBJECTIVE As the coronavirus disease 2019 (COVID-19) pandemic moves into its fourth year, gaining a better clinical understanding of individuals with post COVID-19 condition is paramount. The current study examined the neurocognitive and psychological status of adults with post COVID-19 condition, explored the impact of high psychological burden on objective neurocognitive functioning, and examined the relationship between subjective cognitive concerns and objective neurocognitive findings. METHOD Valid neuropsychological assessments were completed with 51 symptomatic adults an average of 297.55 days after a confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. Participants completed brief self-report depression, anxiety, and PTSD questionnaires; a questionnaire with subjective ratings of cognitive abilities; and standardized neurocognitive tests that examined performance validity, attention, processing speed, verbal learning and memory, naming, visual construction, and executive functioning. RESULTS The participants were mostly Caucasian (80.39%), middle-aged (mean age 47.37 years) women (82.35%) who had never been hospitalized (86.27%). Despite all individuals reporting cognitive problems in daily life, mean performances on objective testing did not reveal any neurocognitive deficits (at or below the 8th percentile) at the group level. Approximately half (49.02%) of the participants reported co-occurring mental health symptoms that were considered clinically elevated based on questionnaire results. High psychological symptom burden was associated with greater subjective cognitive difficulties but did not result in neurocognitive dysfunction on objective testing. CONCLUSIONS This study contributes to the literature regarding post COVID-19 condition in adults, including the relationship between cognitive and psychological symptoms. Results are summarized in key clinical learning points.
Affiliation(s)
- V Lynn Ashton Rennison
- Psychology Department, London Health Sciences Centre, London, ON, Canada
- Schulich School of Medicine & Dentistry Department of Psychiatry, Western University, London, ON, Canada
- Cathy J Chovaz
- Psychology Department, King's University College at Western University, London, ON, Canada
- Sandra Zirul
- Psychology Department, London Health Sciences Centre, London, ON, Canada
| |
12
Crișan I, Sava FA. Validity assessment in Eastern Europe: cross-validation of the Dot Counting Test and MODEMM against the TOMM-1 and Rey-15 in a Romanian mixed clinical sample. Arch Clin Neuropsychol 2023:acad085. [PMID: 37961918 DOI: 10.1093/arclin/acad085]
Abstract
OBJECTIVE This study investigated performance validity in the understudied Romanian clinical population by exploring classification accuracies of the Dot Counting Test (DCT) and the first Romanian performance validity test (PVT), the Memory of Objects and Digits and Evaluation of Memory Malingering (MODEMM), in a heterogeneous clinical sample. METHODS We evaluated 54 outpatients (26 females; age: M = 62.02, SD = 12.3; education: M = 2.41, SD = 2.82) with the Test of Memory Malingering 1 (TOMM-1), Rey Fifteen Item Test (Rey-15; free recall and recognition trials), DCT, MODEMM, and MMSE/MoCA as part of their neuropsychological assessment. Accuracy parameters and failure base rates were computed for the DCT and MODEMM indicators against the TOMM-1 and Rey-15. Two patient groups were constructed according to psychometrically defined credible/noncredible performance (i.e., pass/fail both TOMM-1 and Rey-15). RESULTS Similar to other cultures, a cutoff of ≥18 on the DCT E score produced the best combination of sensitivity (0.50-0.57) and specificity (≥0.90). MODEMM indicators based on recognition accuracy, inconsistencies, and inclusion false positives generated 0.75-0.86 sensitivities at ≥0.90 specificities. Multivariable models of MODEMM indicators reached perfect sensitivities at ≥0.90 specificities against two PVTs. Patients who failed the TOMM-1 and Rey-15 were significantly more likely to fail the DCT and MODEMM than patients who passed both PVTs. CONCLUSIONS Our results offer proof of concept for the DCT's cross-cultural validity and the applicability of the MODEMM to Romanian clinical examinees, further recommending the use of heterogeneous validity indicators in clinical assessments.
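The accuracy analysis described above (sensitivity of a candidate cutoff at a fixed ≥0.90 specificity, against criterion groups defined by the reference PVTs) can be illustrated with a short sketch. The scores and group labels below are invented for illustration and are not values from the study:

```python
# Sensitivity/specificity of a candidate cutoff against criterion-defined
# groups, mirroring the analysis above (e.g., DCT E score >= 18 scored as a
# failure). The scores and criterion labels are invented for illustration.

def sens_spec(scores, noncredible, cutoff):
    """Classification accuracy of the rule `score >= cutoff` = fail, where
    noncredible[i] is True for criterion-defined noncredible examinees."""
    tp = sum((s >= cutoff) and nc for s, nc in zip(scores, noncredible))
    fn = sum((s < cutoff) and nc for s, nc in zip(scores, noncredible))
    tn = sum((s < cutoff) and not nc for s, nc in zip(scores, noncredible))
    fp = sum((s >= cutoff) and not nc for s, nc in zip(scores, noncredible))
    return tp / (tp + fn), tn / (tn + fp)

dct_e = [12, 15, 22, 16, 9, 14, 25, 17, 31, 19]           # hypothetical E scores
noncred = [False, False, True, True, False,
           False, True, False, True, False]                # criterion groups
sensitivity, specificity = sens_spec(dct_e, noncred, cutoff=18)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

With real data, the cutoff would be swept across candidate values and the lowest one meeting the field's ≥0.90 specificity convention retained.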
Affiliation(s)
- Iulia Crișan
- Department of Psychology, West University of Timișoara, Timișoara 300223, Romania
- Florin Alin Sava
- Department of Psychology, West University of Timișoara, Timișoara 300223, Romania
13
Davis JJ. Time is money: Examining the time cost and associated charges of common performance validity tests. Clin Neuropsychol 2023; 37:475-490. [PMID: 35414332 DOI: 10.1080/13854046.2022.2063190]
Abstract
Objective: This study presents data on the time cost and associated charges for common performance validity tests (PVTs). It also applies an approach from cost-effectiveness research to the comparison of tests that incorporates both cost and classification accuracy. Method: A recent test usage survey was used to identify PVTs in common use among adult neuropsychologists. Data on test administration and scoring time were aggregated, and charges per test were calculated. A cost-effectiveness approach was applied to compare pairs of tests from three studies using data on test administration time and classification accuracy, operationalized as improvement in posterior probability beyond base rate. Charges per unit increase in posterior probability over base rate were calculated for base rates of invalidity ranging from 10 to 40%. Results: Ten commonly used PVT measures showed a wide range in test administration and scoring time, from 1 to 3 minutes to over 40 minutes, with associated charge estimates from $4 to $284. Cost-effectiveness comparisons illustrated the nuance in test selection and the benefit of considering cost in relation to outcome rather than prioritizing time (i.e., cost minimization) or classification accuracy alone. Conclusions: Findings extend recent research efforts to fill knowledge gaps related to the cost of neuropsychological evaluation. The cost-effectiveness approach warrants further study in other samples with different neuropsychological and outcome measures.
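The charge-per-gain metric described above reduces to a short Bayes calculation. A minimal sketch, assuming hypothetical charges and accuracy figures (none of the numbers below come from the study):

```python
# Cost-effectiveness sketch: charge for a PVT divided by its improvement in
# posterior probability of invalidity beyond the base rate, following the
# abstract's operationalization. All charges and accuracy values here are
# hypothetical illustrations, not figures from the study.

def posterior_after_failure(base_rate, sensitivity, specificity):
    """P(invalid | PVT failure) via Bayes' theorem."""
    p_fail = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    return sensitivity * base_rate / p_fail

def charge_per_gain(charge, base_rate, sensitivity, specificity):
    """Charge per unit increase in posterior probability over the base rate."""
    gain = posterior_after_failure(base_rate, sensitivity, specificity) - base_rate
    return charge / gain

# Two hypothetical PVTs at a 20% base rate of invalidity: a quick, cheap test
# versus a longer, more accurate one.
for name, charge, sn, sp in [("quick", 10, 0.50, 0.90), ("long", 150, 0.80, 0.95)]:
    print(f"{name}: ${charge_per_gain(charge, 0.20, sn, sp):.2f} per unit gain")
```

The point of the comparison, as in the abstract, is that a cheaper test can cost less per unit of diagnostic information even when a longer test is more accurate in absolute terms.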
Affiliation(s)
- Jeremy J Davis
- Department of Neurology, Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
14
Bajjaleh C, Braw YC, Elkana O. Adaptation and initial validation of the Arabic version of the Word Memory Test (WMT ARB). Appl Neuropsychol Adult 2023; 30:204-213. [PMID: 34043924 DOI: 10.1080/23279095.2021.1923495]
Abstract
BACKGROUND The feigning of cognitive impairment is common in neuropsychological assessments, especially in medicolegal settings. The Word Memory Test (WMT) is a forced-choice recognition memory performance validity test (PVT) widely used to detect noncredible performance. Though the WMT has been translated into several languages, no adaptation existed for Arabic, one of the most widely spoken languages. The aim of the current study was to evaluate the convergent validity of the Arabic adaptation of the WMT (WMT-ARB) among Israeli Arabic speakers. METHODS We adapted the WMT to Arabic using the back-translation method and in accordance with relevant guidelines. We then randomly assigned healthy Arabic-speaking adults (N = 63) to either a simulation or an honest control condition. The participants then performed neuropsychological tests that included the WMT-ARB and the Test of Memory Malingering (TOMM), a well-validated nonverbal PVT. RESULTS The WMT-ARB had high split-half reliability, and its measures were significantly correlated with those of the TOMM (p < .001). High concordance was found in the classification of participants using the WMT-ARB and TOMM (specificity = 94.29% and sensitivity = 100% using the conventional TOMM Trial 2 cutoff as the gold standard). As expected, simulators' accuracy on the WMT-ARB was significantly lower than that of honest controls. None of the demographic variables significantly correlated with WMT-ARB measures. CONCLUSION The WMT-ARB shows initial evidence of reliability and validity, supporting its potential use in the large population of Arabic speakers and the universality of detecting noncredible performance. The findings, however, are preliminary and mandate validation in clinical settings.
Affiliation(s)
- Christine Bajjaleh
- Department of Psychology, The Academic College of Tel Aviv-Yaffo, Tel Aviv-Yaffo, Israel
- Yoram C Braw
- Department of Psychology, Ariel University, Ariel, Israel
- Odelia Elkana
- Department of Psychology, The Academic College of Tel Aviv-Yaffo, Tel Aviv-Yaffo, Israel
15
Low rate of performance validity failures among individuals with bipolar disorder. J Int Neuropsychol Soc 2023; 29:298-305. [PMID: 35403599 DOI: 10.1017/s1355617722000145]
Abstract
OBJECTIVE Assessing performance validity is imperative in both clinical and research contexts, as data interpretation presupposes adequate participation from examinees. Performance validity tests (PVTs) are utilized to identify instances in which results cannot be interpreted at face value. This study explored the hit rates for two frequently used PVTs in a research sample of individuals with and without histories of bipolar disorder (BD). METHOD As part of an ongoing longitudinal study of individuals with BD, we examined the performance of 736 individuals with BD and 255 individuals with no history of mental health disorder on the Test of Memory Malingering (TOMM) and the California Verbal Learning Test forced choice trial (CVLT-FC) at three time points. RESULTS Undiagnosed individuals demonstrated a 100% pass rate on PVTs, and individuals with BD passed over 98% of the time. A mixed effects model adjusting for relevant demographic variables revealed no significant difference in TOMM scores between the groups, a = .07, SE = .07, p = .31. On the CVLT-FC, no clinically significant differences were observed (ps < .001). CONCLUSIONS Perfect PVT scores were obtained by the majority of individuals, with no differences in failure rates between groups. The tests have greater than 98% specificity in BD and 100% specificity among non-diagnosed individuals. Further, nearly 90% of individuals with BD obtained perfect scores on both measures, a trend observed at each time point.
16
Horner MD, Denning JH, Cool DL. Self-reported disability-seeking predicts PVT failure in veterans undergoing clinical neuropsychological evaluation. Clin Neuropsychol 2023; 37:387-401. [PMID: 35387574 DOI: 10.1080/13854046.2022.2056923]
Abstract
Objective: This study examined disability-related factors as predictors of PVT performance in Veterans who underwent neuropsychological evaluation for clinical purposes, not for determination of disability benefits. Method: Participants were 1,438 Veterans who were seen for clinical evaluation in a VA Medical Center's Neuropsychology Clinic. All were administered the TOMM, MSVT, or both. Predictors of PVT performance included (1) whether Veterans were receiving VA disability benefits ("service connection") for psychiatric or neurological conditions at the time of evaluation, and (2) whether Veterans reported on clinical interview that they were in the process of applying for disability benefits. Data were analyzed using binary logistic regression, with PVT performance as the dependent variable in separate analyses for the TOMM and MSVT. Results: Veterans who were already receiving VA disability benefits for psychiatric or neurological conditions were significantly more likely to fail both the TOMM and the MSVT, compared to Veterans who were not receiving benefits for such conditions. Independently of receiving such benefits, Veterans who reported that they were applying for disability benefits were significantly more likely to fail the TOMM and MSVT than were Veterans who denied applying for benefits at the time of evaluation. Conclusions: These findings demonstrate that simply being in the process of applying for disability benefits increases the likelihood of noncredible performance. The presence of external incentives can predict the validity of neuropsychological performance even in clinical, non-forensic settings.
Affiliation(s)
- Michael David Horner
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- John H Denning
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Danielle L Cool
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
17
Cerny BM, Reynolds TP, Chang F, Scimeca LM, Phillips MS, Ogram Buckley CM, Leib SI, Resch ZJ, Pliskin NH, Soble JR. Cognitive Performance and Psychiatric Self-Reports Across Adult Cognitive Disengagement Syndrome and ADHD Diagnostic Groups. J Atten Disord 2023; 27:258-269. [PMID: 36354066 DOI: 10.1177/10870547221136216]
Abstract
OBJECTIVE Cognitive disengagement syndrome (CDS) is characterized by inattention, under-arousal, and fatigue, and frequently co-occurs with attention-deficit/hyperactivity disorder (ADHD). Although CDS is associated with cognitive complaints, its association with objective cognitive performance is less well understood. METHOD This study investigated neuropsychological correlates of CDS symptoms among 169 adults (mean age = 29.4) referred for outpatient neuropsychological evaluation following inattention complaints. We evaluated cognitive and self-report differences across four high/low CDS and positive/negative ADHD groups, as well as cognitive and self-report correlates of CDS symptomology. RESULTS There were no differences in cognitive performance; there were significant differences in self-reported psychiatric symptoms (greater CDS symptomatology and impulsivity among the high CDS groups, greater inattention among the positive ADHD/high CDS groups, and greater hyperactivity among the positive ADHD groups); and there were significant intercorrelations within cognitive and self-report measures but nonsignificant correlations between them. CONCLUSION Findings support prior work demonstrating weak to null associations between ADHD and CDS symptoms and cognitive performance among adults.
Affiliation(s)
- Brian M Cerny
- University of Illinois College of Medicine, Chicago, USA
- Illinois Institute of Technology, Chicago, USA
- Fini Chang
- University of Illinois College of Medicine, Chicago, USA
- University of Illinois at Chicago, USA
- Lauren M Scimeca
- University of Illinois College of Medicine, Chicago, USA
- Illinois Institute of Technology, Chicago, USA
- Matthew S Phillips
- University of Illinois College of Medicine, Chicago, USA
- The Chicago School of Professional Psychology, IL, USA
- Caitlin M Ogram Buckley
- University of Illinois College of Medicine, Chicago, USA
- University of Rhode Island, Kingston, USA
- Sophie I Leib
- University of Illinois College of Medicine, Chicago, USA
- Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Neil H Pliskin
- University of Illinois College of Medicine, Chicago, USA
- Jason R Soble
- University of Illinois College of Medicine, Chicago, USA
18
Chang F, Cerny BM, Tse PKY, Rauch AA, Khan H, Phillips MS, Fletcher NB, Resch ZJ, Ovsiew GP, Jennette KJ, Soble JR. Using the Grooved Pegboard Test as an Embedded Validity Indicator in a Mixed Neuropsychiatric Sample with Varying Cognitive Impairment: Cross-Validation Problems. Percept Mot Skills 2023; 130:770-789. [PMID: 36634223 DOI: 10.1177/00315125231151779]
Abstract
Embedded validity indicators (EVIs) derived from motor tests have received less empirical attention than those derived from tests of other neuropsychological abilities, particularly memory. Preliminary evidence suggests that the Grooved Pegboard Test (GPB) may function as an EVI, but existing studies were largely conducted using simulators and population samples without cognitive impairment. In this study we aimed to evaluate the GPB's classification accuracy as an EVI among a mixed clinical neuropsychiatric sample with and without cognitive impairment. This cross-sectional study comprised 223 patients clinically referred for neuropsychological testing. GPB raw and T-scores for both dominant and nondominant hands were examined as EVIs. A known-groups design, based on ≤1 failure on a battery of validated, independent criterion PVTs, showed that GPB performance differed significantly by validity group. Within the valid group, receiver operating characteristic curve analyses revealed that only the dominant hand raw score displayed acceptable classification accuracy for detecting invalid performance (area under curve [AUC] = .72), with an optimal cut-score of ≥106 seconds (33% sensitivity/88% specificity). All other scores had marginally lower classification accuracy (AUCs = .65-.68) for differentiating valid from invalid performers. Therefore, the GPB demonstrated limited utility as an EVI in a clinical sample containing patients with bona fide cognitive impairment.
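The ROC analysis described above (an AUC for the candidate score plus a cut score held to high specificity) can be sketched as follows. The completion times below are invented, and the ≥0.90 specificity floor is the field's usual convention rather than a figure from this study (which reported 88% specificity at its optimal cut):

```python
# Sketch of deriving an embedded validity cutoff from a timed score: compute
# the ROC area for "slower = more likely invalid," then pick the lowest cut
# score whose specificity meets a target. All times below are invented.

def auc(valid_times, invalid_times):
    """Probability a random invalid case is slower than a random valid case
    (ties count half) -- the ROC area for a higher-is-invalid score."""
    pairs = [(v, i) for v in valid_times for i in invalid_times]
    wins = sum(1.0 if i > v else 0.5 if i == v else 0.0 for v, i in pairs)
    return wins / len(pairs)

def pick_cut(valid_times, invalid_times, min_specificity=0.90):
    """Smallest `time >= cut` rule whose specificity meets the target."""
    for cut in sorted(set(valid_times + invalid_times)):
        spec = sum(t < cut for t in valid_times) / len(valid_times)
        if spec >= min_specificity:
            sens = sum(t >= cut for t in invalid_times) / len(invalid_times)
            return cut, sens, spec
    return None

valid = [62, 70, 75, 80, 84, 88, 92, 95, 99, 104]    # hypothetical seconds
invalid = [85, 98, 107, 115, 130]
print("AUC:", auc(valid, invalid))
print("cut, sensitivity, specificity:", pick_cut(valid, invalid))
```

The low sensitivity that survives the specificity floor in this toy example mirrors the pattern the study reports: embedded motor cutoffs protect credible examinees at the cost of missing many invalid ones.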
Affiliation(s)
- Fini Chang
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, University of Illinois at Chicago, Chicago, Illinois, United States
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, Illinois Institute of Technology, Chicago, Illinois, United States
- Phoebe Ka Yin Tse
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Clinical Psychology, The Chicago School of Professional Psychology, Chicago, Illinois, United States
- Andrew A Rauch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, Loyola University Chicago, Chicago, Illinois, United States
- Humza Khan
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Psychology, Illinois Institute of Technology, Chicago, Illinois, United States
- Matthew S Phillips
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Clinical Psychology, The Chicago School of Professional Psychology, Chicago, Illinois, United States
- Noah B Fletcher
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, Illinois, United States
- Department of Neurology, University of Illinois College of Medicine, Chicago, Illinois, United States
19
Denning JH. The TOMM1 discrepancy index (TDI): A new performance validity test (PVT) that differentiates between invalid cognitive testing and those diagnosed with dementia. Appl Neuropsychol Adult 2023; 30:83-90. [PMID: 33945362 DOI: 10.1080/23279095.2021.1910951]
Abstract
There is a need to develop performance validity tests (PVTs) that accurately identify those with severe cognitive decline while remaining sensitive to those suspected of invalid cognitive testing. The TOMM1 Discrepancy Index (TDI) attempts to address both of these issues. Veterans diagnosed with dementia (n = 251) were administered TOMM1 and the MSVT in order to develop the TDI (TOMM1 percent correct minus MSVT Free Recall percent correct). Cutoffs based on the dementia sample were then used to identify those in the non-dementia sample (n = 1,226) suspected of invalid test performance (n = 401). Combining TOMM1 and the TDI in the dementia sample greatly reduced the false positive rate (specificity = 0.97) at a cutoff of 28 points or less on the TDI. Those suspected of invalid testing were identified at much higher rates (sensitivity = 0.75) compared to the MSVT genuine memory impairment profile (GMIP; sensitivity = 0.49). By utilizing a neurologically plausible pattern of scores across two PVTs, the TDI correctly classified those with dementia and identified a large percentage with invalid test performance. PVTs utilizing a complex pattern of performance may help reduce one's ability to fabricate cognitive deficits.
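The TDI formula given in the abstract is simple enough to sketch directly. The score profiles below are hypothetical, and the direction of the flag (discrepancies at or below the cutoff suggesting invalid performance) is our reading of the abstract rather than a published decision rule:

```python
# Sketch of the TOMM1 Discrepancy Index (TDI) from the abstract above:
# TDI = TOMM1 percent correct minus MSVT Free Recall percent correct.
# Genuine memory impairment typically spares recognition (TOMM1) relative to
# free recall, producing a large positive discrepancy; on our reading, scores
# at or below the 28-point, dementia-derived cutoff flag possible invalid
# performance. The profiles and interpretation here are illustrative only.

TDI_CUTOFF = 28  # points, per the abstract's dementia-derived cut score

def tdi(tomm1_pct_correct, msvt_free_recall_pct_correct):
    """TOMM1 percent correct minus MSVT Free Recall percent correct."""
    return tomm1_pct_correct - msvt_free_recall_pct_correct

def flags_invalid(tomm1_pct, msvt_fr_pct):
    """True when the discrepancy is at or below the cutoff."""
    return tdi(tomm1_pct, msvt_fr_pct) <= TDI_CUTOFF

# A hypothetical dementia-like profile: intact recognition, poor free recall.
print(tdi(90.0, 40.0), flags_invalid(90.0, 40.0))  # large discrepancy: not flagged
# A hypothetical invalid profile: recognition and recall both suppressed.
print(tdi(55.0, 45.0), flags_invalid(55.0, 45.0))  # small discrepancy: flagged
```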
Affiliation(s)
- John H Denning
- Department of Veterans Affairs, Mental Health Service, Ralph H. Johnson Veterans Affairs Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
20
Mayer AR, Ling JM, Dodd AB, Stephenson DD, Pabbathi Reddy S, Robertson-Benta CR, Erhardt EB, Harms RL, Meier TB, Vakhtin AA, Campbell RA, Sapien RE, Phillips JP. Multicompartmental models and diffusion abnormalities in paediatric mild traumatic brain injury. Brain 2022; 145:4124-4137. [PMID: 35727944 DOI: 10.1093/brain/awac221]
Abstract
The underlying pathophysiology of paediatric mild traumatic brain injury and the time-course for biological recovery remain widely debated, with clinical care principally informed by subjective self-report. Similarly, clinical evidence indicates that adolescence is a risk factor for prolonged recovery, but the impact of age-at-injury on biomarkers has not been determined in large, homogeneous samples. The current study collected diffusion MRI data in consecutively recruited patients (n = 203; 8-18 years old) and age- and sex-matched healthy controls (n = 170) in a prospective cohort design. Patients were evaluated subacutely (1-11 days post-injury) as well as at 4 months post-injury (early chronic phase). Healthy participants were evaluated at similar times to control for neurodevelopment and practice effects. Clinical findings indicated persistent symptoms at 4 months for a significant minority of patients (22%), along with residual executive dysfunction and verbal memory deficits. Results indicated increased fractional anisotropy and reduced mean diffusivity for patients, with abnormalities persisting up to 4 months post-injury. Multicompartmental geometric models indicated that estimates of intracellular volume fractions were increased in patients, whereas estimates of free water fractions were decreased. Critically, unique areas of white matter pathology (increased free water fractions or increased neurite dispersion) were observed when standard assumptions regarding parallel diffusivity were altered in multicompartmental models to be more biologically plausible. Cross-validation analyses indicated that some diffusion findings were more reproducible when ∼70% of the total sample (142 patients, 119 controls) was used in analyses, highlighting the need for large sample sizes to detect abnormalities. Supervised machine learning approaches (random forests) indicated that diffusion abnormalities increased overall diagnostic accuracy (patients versus controls) by ∼10% after controlling for current clinical gold standards, with each diffusion metric accounting for only a few unique percentage points. In summary, current results suggest that novel multicompartmental models are more sensitive to paediatric mild traumatic brain injury pathology, and that this sensitivity is increased when using parameters that more accurately reflect diffusion in healthy tissue. Results also indicate that diffusion data may be insufficient to achieve a high degree of objective diagnostic accuracy in patients when used in isolation, which is to be expected given known heterogeneities in pathophysiology, mechanism of injury, and even diagnostic criteria. Finally, current results indicate ongoing clinical and physiological recovery at 4 months post-injury.
Affiliation(s)
- Andrew R Mayer
- The Mind Research Network/LBERI, Albuquerque, NM 87106, USA
- Department of Psychology, University of New Mexico, Albuquerque, NM 87131, USA
- Department of Neurology, University of New Mexico, Albuquerque, NM 87131, USA
- Department of Psychiatry and Behavioral Sciences, University of New Mexico, Albuquerque, NM 87131, USA
- Josef M Ling
- The Mind Research Network/LBERI, Albuquerque, NM 87106, USA
- Andrew B Dodd
- The Mind Research Network/LBERI, Albuquerque, NM 87106, USA
- Erik B Erhardt
- Department of Mathematics and Statistics, University of New Mexico, Albuquerque, NM 87131, USA
- Timothy B Meier
- Department of Neurosurgery, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Department of Cell Biology, Neurobiology and Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Department of Biomedical Engineering, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Richard A Campbell
- Department of Psychiatry and Behavioral Sciences, University of New Mexico, Albuquerque, NM 87131, USA
- Robert E Sapien
- Department of Emergency Medicine, University of New Mexico, Albuquerque, NM 87131, USA
- John P Phillips
- The Mind Research Network/LBERI, Albuquerque, NM 87106, USA
- Department of Neurology, University of New Mexico, Albuquerque, NM 87131, USA
21
Guty E, Horner MD. The minimal effect of depression on cognitive functioning when accounting for TOMM performance in a sample of U.S. veterans. Appl Neuropsychol Adult 2022:1-9. [PMID: 36315488 DOI: 10.1080/23279095.2022.2137026]
Abstract
While many studies have demonstrated a relationship between depression and cognitive deficits, most have neglected to include measures of performance validity. This study examined the relationship between depression and cognition after accounting for noncredible performance. Participants were veterans referred for outpatient clinical evaluation. The first set of regression analyses (N = 187) included age, sex, and education in Model 1, the Beck Depression Inventory-2 (BDI-2) added in Model 2, and pass/failure of the Test of Memory Malingering (TOMM) added in Model 3 as predictors of 12 neuropsychological test indices. The second set of analyses (N = 559) mirrored the first but with Major Depressive Disorder (MDD) diagnosis in Models 2 and 3. In the first analyses, after including TOMM in the model, only the relationship between the BDI-2 and verbal fluency remained significant, but this did not survive a Bonferroni correction. In the second analyses, after including TOMM and applying a Bonferroni correction, MDD diagnosis was a significant predictor only for CVLT-II Short Delay Free Recall. Therefore, the relationship between depression and cognition may be driven not by frank cognitive impairment but by psychological mechanisms, which has implications for addressing depressed individuals' concerns about their cognitive functioning and suggests the value of providing psychoeducation and reassurance.
Affiliation(s)
- Erin Guty
- Psychology, The Pennsylvania State University, University Park, PA, USA
- Mental Health Service, Ralph H. Johnson VAMC, Charleston, SC, USA
- Michael David Horner
- Mental Health, Ralph H. Johnson VA Medical Center, Charleston, SC, USA
- Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
22
Boress K, Gaasedelen OJ, Croghan A, Johnson MK, Caraher K, Basso MR, Whiteside DM. Replication and cross-validation of the personality assessment inventory (PAI) cognitive bias scale (CBS) in a mixed clinical sample. Clin Neuropsychol 2022; 36:1860-1877. [PMID: 33612093 PMCID: PMC8454137 DOI: 10.1080/13854046.2021.1889681]
Abstract
Objective: This study is a cross-validation of the Cognitive Bias Scale (CBS) from the Personality Assessment Inventory (PAI), a ten-item scale designed to assess symptom endorsement associated with performance validity test failure in neuropsychological samples. The study utilized a mixed neuropsychological sample of consecutively referred patients at a large academic medical center in the Midwest. Participants and Methods: Participants were 332 patients who completed embedded and free-standing performance validity tests (PVTs) and the PAI. Pass and fail groups were created based on PVT performance to evaluate classification accuracy of the CBS. Results: The results were generally consistent with the initial study for overall classification accuracy, sensitivity, and cut-off score. Consistent with the validation study, CBS had better classification accuracy than the original PAI validity scales and a comparable effect size to that obtained in the original validation publication; however, the Somatic Complaints scale (SOM) and the Conversion subscale (SOM-C) also demonstrated good classification accuracy. The CBS had incremental predictive ability compared to existing PAI scales. Conclusions: The results supported the CBS, but further research is needed on specific populations. Findings from this present study also suggest the relationship between conversion tendencies and PVT failure may be stronger in some geographic locations or population types (forensic versus clinical patients).
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
- Anna Croghan
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
- Marcie King Johnson
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, USA
- Kristen Caraher
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
- Michael R. Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, USA
- Douglas M. Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, USA
23
Boress K, Gaasedelen OJ, Croghan A, Johnson MK, Caraher K, Basso MR, Whiteside DM. Validation of the Personality Assessment Inventory (PAI) scale of scales in a mixed clinical sample. Clin Neuropsychol 2022; 36:1844-1859. [PMID: 33730975 PMCID: PMC8474121 DOI: 10.1080/13854046.2021.1900400]
Abstract
Objective: This exploratory study examined the classification accuracy of three derived scales aimed at detecting cognitive response bias in neuropsychological samples. The derived scales are composed of existing scales from the Personality Assessment Inventory (PAI). A mixed clinical sample of consecutive outpatients referred for neuropsychological assessment at a large Midwestern academic medical center was utilized. Participants and Methods: Participants included 332 patients who completed the study's embedded and free-standing performance validity tests (PVTs) and the PAI. PASS and FAIL groups were created based on PVT performance to evaluate the classification accuracy of the derived scales. Three new scales, Cognitive Bias Scale of Scales 1-3 (CB-SOS1-3), were derived by combining existing scales, either by summing the scales and dividing by the number of scales summed or by logistically deriving a variable from the contributions of several scales. Results: All of the newly derived scales significantly differentiated between PASS and FAIL groups, and all demonstrated acceptable classification accuracy (CB-SOS1 AUC = 0.72; CB-SOS2 AUC = 0.73; CB-SOS3 AUC = 0.75). Conclusions: This exploratory study demonstrates that attending to scale-level PAI data may be a promising avenue for improving prediction of PVT failure.
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Anna Croghan
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Marcie King Johnson
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA; Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Kristen Caraher
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Michael R. Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
- Douglas M. Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
24
Jennette KJ, Williams CP, Resch ZJ, Ovsiew GP, Durkin NM, O'Rourke JJF, Marceaux JC, Critchfield EA, Soble JR. Assessment of differential neurocognitive performance based on the number of performance validity test failures: A cross-validation study across multiple mixed clinical samples. Clin Neuropsychol 2022; 36:1915-1932. [PMID: 33759699 DOI: 10.1080/13854046.2021.1900398]
Abstract
Objective: This cross-sectional study examined the effect of number of Performance Validity Test (PVT) failures on neuropsychological test performance among a demographically diverse Veteran (VA) sample (n = 76) and academic medical sample (AMC; n = 128). A secondary goal was to investigate the psychometric implications of including versus excluding those with one PVT failure when cross-validating a series of embedded PVTs. Method: All patients completed the same six criterion PVTs, with the AMC sample completing three additional embedded PVTs. Neurocognitive test performance differences were examined based on number of PVT failures (0, 1, 2+) for both samples, and the effect of the number of criterion failures on embedded PVT performance was analyzed in the AMC sample. Results: Both groups with 0 or 1 PVT failures performed better than those with ≥2 PVT failures across most cognitive tests. Differences between those with 0 or 1 PVT failures were nonsignificant except for one test in the AMC sample. Receiver operating characteristic curve analyses found no differences in optimal cut score based on number of PVT failures when retaining/excluding one PVT failure. Conclusion: Findings support the use of ≥2 PVT failures as indicative of performance invalidity. These findings strongly support including those with one PVT failure with those with zero PVT failures in diagnostic accuracy studies, given that their inclusion reflects actual clinical practice, does not reduce sample sizes, and does not artificially deflate neurocognitive test results or inflate PVT classification accuracy statistics.
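The 0 / 1 / 2+ failure-count grouping used in this study design can be sketched in a few lines. This is an illustrative sketch only: the grouping rule (treating ≥2 PVT failures as the invalid band) follows the abstract, but the data, function names, and resulting means below are invented for demonstration.

```python
# Hypothetical sketch: bin examinees by number of performance validity
# test (PVT) failures and compare mean cognitive scores per group.

def failure_group(n_failures):
    """Bin a PVT failure count into the 0 / 1 / 2+ groups."""
    return "2+" if n_failures >= 2 else str(n_failures)

def group_means(records):
    """records: list of (n_pvt_failures, cognitive_score) tuples."""
    sums, counts = {}, {}
    for n_fail, score in records:
        g = failure_group(n_fail)
        sums[g] = sums.get(g, 0.0) + score
        counts[g] = counts.get(g, 0) + 1
    return {g: sums[g] / counts[g] for g in sums}

# Fabricated data illustrating the reported pattern: scores drop
# appreciably only in the 2+ failure group.
sample = [(0, 50), (0, 52), (1, 49), (1, 51), (2, 38), (3, 35)]
means = group_means(sample)
```

Run on the fabricated sample, the 0- and 1-failure groups land close together while the 2+ group falls well below, mirroring the pattern the study reports.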
Affiliation(s)
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Christopher P Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Nicole M Durkin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Justin J F O'Rourke
- Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Edan A Critchfield
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
25
Cohen CD, Rhoads T, Keezer RD, Jennette KJ, Williams CP, Hansen ND, Ovsiew GP, Resch ZJ, Soble JR. All of the accuracy in half of the time: Assessing abbreviated versions of the Test of Memory Malingering in the context of verbal and visual memory impairment. Clin Neuropsychol 2022; 36:1933-1949. [PMID: 33836622 DOI: 10.1080/13854046.2021.1908596]
Abstract
Objective: The Test of Memory Malingering (TOMM) Trial 1 (T1) and errors on the first 10 items of T1 (T1-e10) were developed as briefer versions of the TOMM to minimize evaluation time and burden, although the effect of genuine memory impairment on these indices is not well established. This study examined whether increasing material-specific verbal and visual memory impairment affected T1 and T1-e10 performance and accuracy for detecting invalidity. Method: Data from 155 neuropsychiatric patients who were administered the TOMM, Rey Auditory Verbal Learning Test (RAVLT), and Brief Visuospatial Memory Test-Revised (BVMT-R) during outpatient evaluation were examined. Valid (N = 125) and invalid (N = 30) groups were established by four independent criterion performance validity tests. Verbal/visual memory impairment was classified as ≥37T (normal memory), 30T-36T (mild impairment), and ≤29T (severe impairment). Results: Overall, T1 had outstanding accuracy, with 77% sensitivity/90% specificity. T1-e10 was less accurate but had excellent discriminability, with 60% sensitivity/87% specificity. T1 maintained excellent accuracy regardless of memory impairment severity, with 77% sensitivity/≥88% specificity and a relatively invariant cut-score even among those with severe verbal/visual memory impairment. T1-e10 had excellent classification accuracy among those with normal memory and mild impairment, but accuracy and sensitivity dropped with severe impairment, and the optimal cut-score had to be increased to maintain adequate specificity. Conclusion: TOMM T1 is an effective performance validity test with strong psychometric properties regardless of material-specificity and severity of memory impairment. By contrast, T1-e10 functions relatively well in the context of mild memory impairment but has reduced discriminability with severe memory impairment.
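The sensitivity/specificity figures quoted throughout these abstracts follow the standard definitions for a "score ≤ cutoff fails" decision rule. A minimal sketch of that computation, with fabricated scores and an arbitrary cutoff (not the study's data or its published cut-scores):

```python
# Hedged illustration of PVT classification statistics: sensitivity and
# specificity of the rule "flag as invalid if score <= cutoff".
# All scores and the cutoff below are invented for demonstration.

def classification_stats(valid_scores, invalid_scores, cutoff):
    """Return (sensitivity, specificity) for 'fail if score <= cutoff'.

    Sensitivity: proportion of non-credible profiles correctly flagged.
    Specificity: proportion of credible profiles correctly NOT flagged.
    """
    true_pos = sum(1 for s in invalid_scores if s <= cutoff)
    true_neg = sum(1 for s in valid_scores if s > cutoff)
    return true_pos / len(invalid_scores), true_neg / len(valid_scores)

valid = [49, 50, 48, 47, 50]    # fabricated scores, credible group
invalid = [30, 36, 41, 44, 49]  # fabricated scores, non-credible group
sens, spec = classification_stats(valid, invalid, cutoff=44)
```

Moving the cutoff trades the two numbers against each other, which is why the studies above report lower sensitivity whenever a more conservative cutoff is needed to protect specificity.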
Affiliation(s)
- Cari D Cohen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Richard D Keezer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; School of Psychology, Counseling, and Family Therapy, Wheaton College, Wheaton, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Christopher P Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Nicholas D Hansen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Roosevelt University, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
26
Ali S, Crisan I, Abeare CA, Erdodi LA. Cross-Cultural Performance Validity Testing: Managing False Positives in Examinees with Limited English Proficiency. Dev Neuropsychol 2022; 47:273-294. [PMID: 35984309 DOI: 10.1080/87565641.2022.2105847]
Abstract
Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation; even multivariate PVT models could not contain BRFail. In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several verbally mediated embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.
Affiliation(s)
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Iulia Crisan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
27
Abeare K, Cutler L, An KY, Razvi P, Holcomb M, Erdodi LA. BNT-15: Revised Performance Validity Cutoffs and Proposed Clinical Classification Ranges. Cogn Behav Neurol 2022; 35:155-168. [PMID: 35507449 DOI: 10.1097/wnn.0000000000000304]
Abstract
BACKGROUND Abbreviated neurocognitive tests offer a practical alternative to full-length versions but often lack clear interpretive guidelines, thereby limiting their clinical utility. OBJECTIVE To replicate validity cutoffs for the Boston Naming Test-Short Form (BNT-15) and to introduce a clinical classification system for the BNT-15 as a measure of object-naming skills. METHOD We collected data from 43 university students and 46 clinical patients. Classification accuracy was computed against psychometrically defined criterion groups. Clinical classification ranges were developed using a z-score transformation. RESULTS Previously suggested validity cutoffs (≤11 and ≤12) produced comparable classification accuracy among the university students. However, a more conservative cutoff (≤10) was needed with the clinical patients to contain the false-positive rate (0.20-0.38 sensitivity at 0.92-0.96 specificity). As a measure of cognitive ability, a perfect BNT-15 score suggests above-average performance, whereas ≤11 suggests clinically significant deficits. Demographically adjusted prorated BNT-15 T-scores correlated strongly (0.86) with the newly developed z-scores. CONCLUSION Given its brevity (<5 minutes) and ease of administration and scoring, the BNT-15 can function as a useful and cost-effective screening measure of both object-naming/English proficiency and performance validity. The proposed clinical classification ranges provide useful guidelines for practitioners.
Affiliation(s)
- Kelly Y An
- Private Practice, London, Ontario, Canada
- Parveen Razvi
- Faculty of Nursing, University of Windsor, Windsor, Ontario, Canada
28
Holcomb M, Pyne S, Cutler L, Oikle DA, Erdodi LA. Take Their Word for It: The Inventory of Problems Provides Valuable Information on Both Symptom and Performance Validity. J Pers Assess 2022:1-11. [PMID: 36041087 DOI: 10.1080/00223891.2022.2114358]
Abstract
This study was designed to compare the validity of the Inventory of Problems (IOP-29) and its newly developed memory module (IOP-M) in 150 patients clinically referred for neuropsychological assessment. Criterion groups were psychometrically derived based on established performance and symptom validity tests (PVTs and SVTs). The criterion-related validity of the IOP-29 was compared to that of the Negative Impression Management scale of the Personality Assessment Inventory (NIMPAI), and the criterion-related validity of the IOP-M was compared to that of Trial 1 of the Test of Memory Malingering (TOMM-1). The IOP-29 correlated significantly more strongly (z = 2.50, p = .01) with criterion PVTs than the NIMPAI (rIOP-29 = .34; rNIMPAI = .06), generating similar overall correct classification values (OCCIOP-29: 79-81%; OCCNIMPAI: 71-79%). Similarly, the IOP-M correlated significantly more strongly (z = 2.26, p = .02) with criterion PVTs than the TOMM-1 (rIOP-M = .79; rTOMM-1 = .59), generating similar overall correct classification values (OCCIOP-M: 89-91%; OCCTOMM-1: 84-86%). Findings converge with the cumulative evidence that the IOP-29 and IOP-M are valuable additions to comprehensive neuropsychological batteries. Results also confirm that symptom and performance validity are distinct clinical constructs and that domain specificity should be considered when calibrating instruments.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor
29
Test-Retest Reliability of a Semi-Structured Interview to Aid in Pediatric Traumatic Brain Injury Diagnosis. J Int Neuropsychol Soc 2022; 28:687-699. [PMID: 34376268 PMCID: PMC8831656 DOI: 10.1017/s1355617721000928]
Abstract
OBJECTIVE Retrospective self-report is typically used for diagnosing previous pediatric traumatic brain injury (TBI). This study investigated the test-retest reliability of a new semi-structured interview instrument (New Mexico Assessment of Pediatric TBI; NewMAP TBI) for TBI characteristics, both for the TBI that qualified for study inclusion and for lifetime history of TBI. METHOD One hundred eighty-four patients with mTBI (aged 8-18), 156 matched healthy controls (HC), and their parents completed the NewMAP TBI within 11 days (subacute; SA) and 4 months (early chronic; EC) of injury, with a subset returning at 1 year (late chronic; LC). RESULTS The test-retest reliability of common TBI characteristics [loss of consciousness (LOC), post-traumatic amnesia (PTA), retrograde amnesia, confusion/disorientation] and post-concussion symptoms (PCS) was examined across study visits. Aside from PTA, binary reporting (present/absent) for all TBI characteristics exhibited acceptable (≥0.60) test-retest reliability for both Qualifying and Remote TBIs across all three visits. In contrast, reliability for continuous data (exact duration) was generally unacceptable, with LOC and PCS meeting acceptable criteria at only half of the assessments. Transforming continuous self-report ratings into discrete categories based on injury severity resulted in acceptable reliability. Reliability was not strongly affected by which parent completed the NewMAP TBI. CONCLUSIONS Categorical reporting of TBI characteristics in children and adolescents can aid clinicians in retrospectively obtaining reliable estimates of TBI severity up to a year post-injury. However, test-retest reliability is strongly impacted by the initial data distribution, selected statistical methods, and potentially by patient difficulty in distinguishing among conceptually similar medical concepts (i.e., PTA vs. confusion).
30
Erdodi LA. Multivariate Models of Performance Validity: The Erdodi Index Captures the Dual Nature of Non-Credible Responding (Continuous and Categorical). Assessment 2022:10731911221101910. [PMID: 35757996 DOI: 10.1177/10731911221101910]
Abstract
This study was designed to examine the classification accuracy of the Erdodi Index (EI-5), a novel method for aggregating validity indicators that takes into account both the number and extent of performance validity test (PVT) failures. Archival data were collected from a mixed clinical/forensic sample of 452 adults referred for neuropsychological assessment. The classification accuracy of the EI-5 was evaluated against established free-standing PVTs. The EI-5 achieved a good combination of sensitivity (.65) and specificity (.97), correctly classifying 92% of the sample. Its classification accuracy was comparable with that of another free-standing PVT. An indeterminate range between Pass and Fail emerged as a legitimate third outcome of performance validity assessment, indicating that the underlying construct is an inherently continuous variable. Results support the use of the EI model as a practical and psychometrically sound method of aggregating multiple embedded PVTs into a single-number summary of performance validity. Combining free-standing PVTs with the EI-5 resulted in a better separation between credible and non-credible profiles, demonstrating incremental validity. Findings are consistent with recent endorsements of a three-way outcome for PVTs (Pass, Borderline, and Fail).
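The idea of aggregating the number and extent of embedded PVT failures into a single Pass/Borderline/Fail outcome can be sketched as follows. This is a loose, hypothetical illustration of the general approach only; the ordinal bands, cut points, example scores, and function names are invented and are not the published EI-5 parameters.

```python
# Hypothetical sketch of failure-count-and-extent aggregation: each
# embedded PVT is scored ordinally (0 = pass, 1 = borderline, 2 = clear
# fail), the ordinals are summed, and the sum maps onto a three-way
# outcome. All cutoffs below are invented for illustration.

def pvt_ordinal(score, fail_cut, borderline_cut):
    """Score one PVT: 2 = unambiguous fail, 1 = borderline, 0 = pass."""
    if score <= fail_cut:
        return 2
    if score <= borderline_cut:
        return 1
    return 0

def aggregate_validity(ordinals, borderline_band=(2, 3)):
    """Map the summed ordinals onto Pass / Borderline / Fail."""
    total = sum(ordinals)
    lo, hi = borderline_band
    if total < lo:
        return "Pass"
    if total <= hi:
        return "Borderline"
    return "Fail"

# Five fabricated embedded PVT scores with invented cutoffs.
ordinals = [pvt_ordinal(s, fail_cut=35, borderline_cut=39)
            for s in (45, 38, 33, 41, 34)]
outcome = aggregate_validity(ordinals)
```

The explicit Borderline band reflects the abstract's point that performance validity behaves as a continuous construct with a legitimate indeterminate range between Pass and Fail.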
31
Ali S, Elliott L, Biss RK, Abumeeiz M, Brantuo M, Kuzmenka P, Odenigbo P, Erdodi LA. The BNT-15 provides an accurate measure of English proficiency in cognitively intact bilinguals - a study in cross-cultural assessment. Appl Neuropsychol Adult 2022; 29:351-363. [PMID: 32449371 DOI: 10.1080/23279095.2020.1760277]
Abstract
This study was designed to replicate earlier reports of the utility of the Boston Naming Test - Short Form (BNT-15) as an index of limited English proficiency (LEP). Twenty-eight English-Arabic bilingual student volunteers were administered the BNT-15 as part of a brief battery of cognitive tests. The majority (23) were women, and half had LEP. Mean age was 21.1 years. The BNT-15 was an excellent psychometric marker of LEP status (area under the curve: .990-.995). Participants with LEP underperformed on several cognitive measures (verbal comprehension, visuomotor processing speed, single word reading, and performance validity tests). Although no participant with LEP failed the accuracy cutoff on the Word Choice Test, 35.7% of them failed the time cutoff. Overall, LEP was associated with an increased risk of failing performance validity tests. Previously published BNT-15 validity cutoffs had unacceptably low specificity (.33-.52) among participants with LEP. The BNT-15 has the potential to serve as a quick and effective objective measure of LEP. Students with LEP may need academic accommodations to compensate for slower test completion time. Likewise, LEP status should be considered for exemption from failing performance validity tests to protect against false positive errors.
Affiliation(s)
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, Canada
- Lauren Elliott
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Renee K Biss
- Department of Psychology, University of Windsor, Windsor, Canada
- Mustafa Abumeeiz
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, Canada
- Paula Odenigbo
- Department of Psychology, University of Windsor, Windsor, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
32
Crişan I, Erdodi L. Examining the cross-cultural validity of the Test of Memory Malingering and the Rey 15-item test. Appl Neuropsychol Adult 2022:1-11. [PMID: 35476611 DOI: 10.1080/23279095.2022.2064753]
Abstract
OBJECTIVE This study was designed to investigate the cross-cultural validity of two freestanding performance validity tests (PVTs), the Test of Memory Malingering - Trial 1 (TOMM-1) and the Rey Fifteen Item Test (Rey-15) in Romanian-speaking patients. METHODS The TOMM-1 and Rey-15 free recall (FR) and the combination score incorporating the recognition trial (COMB) were administered to a mixed clinical sample of 61 adults referred for cognitive evaluation, 24 of whom had external incentives to appear impaired. Average scores on PVTs were compared between the two groups. Classification accuracies were computed using one PVT against another. RESULTS Patients with identifiable external incentives to appear impaired produced significantly lower scores and more errors on validity indicators. The largest effect sizes emerged on TOMM-1 (Cohen's d = 1.00-1.19). TOMM-1 was a significant predictor of the Rey-15 COMB ≤20 (AUC = .80; .38 sensitivity; .89 specificity at a cutoff of ≤39). Similarly, both Rey-15 indicators were significant predictors of TOMM-1 at ≤39 as the criterion (AUCs = .73-.76; .33 sensitivity; .89-.90 specificity). CONCLUSION Results offer a proof of concept for the cross-cultural validity of the TOMM-1 and Rey-15 in a Romanian clinical sample.
Affiliation(s)
- Iulia Crişan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Laszlo Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
33
Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing Performance Validity Cutoffs on the Boston Naming Test (BNT) Is Specific, but Insensitive to Non-Credible Responding. Dev Neuropsychol 2022; 47:17-31. [PMID: 35157548 DOI: 10.1080/87565641.2022.2038602]
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
Affiliation(s)
- Shayna Nussbaum
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson
- Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
34
Soble JR, Cerny BM, Ovsiew GP, Rhoads T, Reynolds TP, Sharp DW, Jennette KJ, Marceaux JC, O'Rourke JJF, Critchfield EA, Resch ZJ. Comparing the Independent and Aggregated Accuracy of Trial 1 and the First 10 TOMM Items for Detecting Invalid Neuropsychological Test Performance Across Civilian and Veteran Clinical Samples. Percept Mot Skills 2022; 129:269-288. [PMID: 35139315 DOI: 10.1177/00315125211066399]
Abstract
Previous studies support using two abbreviated indices from the Test of Memory Malingering (TOMM) as performance validity tests (PVTs): (a) Trial 1 (T1) and (b) the number of errors on the first 10 items of T1 (T1e10). In this study, we examined the independent and aggregated predictive utility of TOMM T1 and T1e10 for identifying invalid neuropsychological test performance across two clinical samples. We employed a cross-sectional design to examine two independent, demographically diverse mixed samples of patients who underwent neuropsychological evaluations: military veterans (VA; n = 108) and civilians at an academic medical center (n = 234). We determined validity groups by patient performance on four independent criterion PVTs. We established concordances between passing/failing the TOMM T1e10 and T1, followed by logistic regression to determine the individual and aggregated accuracy of T1e10 and T1 for predicting validity group membership. Concordance between passing T1e10 and T1 was high, as was overall validity (87-98%) across samples. By contrast, T1e10 failure was more highly concordant with T1 failure (69-77%) than with overall invalidity status (59-60%) per criterion PVTs, whereas T1 failure was more highly concordant with invalidity status (72-88%) per criterion PVTs. Logistic regression analyses demonstrated similar results, with T1 accounting for more variance than T1e10. However, combining T1e10 and T1 accounted for the most variance of any model, with T1e10 and T1 each emerging as significant predictors. TOMM T1 and, to a lesser extent, T1e10 were significant predictors of independent criterion-derived validity status across two distinct clinical samples, but they did not offer improved classification accuracy when aggregated.
Affiliation(s)
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Tristan P Reynolds
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Dillion W Sharp
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Justin J F O'Rourke
- Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Edan A Critchfield
- Psychology Service, South Texas Veterans Healthcare System, San Antonio, TX, USA; Polytrauma Rehabilitation Center, South Texas Veterans Healthcare System, San Antonio, TX, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
35
DiCarlo GM, Ernst WJ, Kneavel ME. An exploratory study of the convergent validity of the Test of Effort (TOE) in adults with acquired brain injury. Brain Inj 2022; 36:424-431. [PMID: 35113759 DOI: 10.1080/02699052.2022.2034953]
Abstract
PRIMARY OBJECTIVE To examine the convergent validity of the Test of Effort (TOE), a performance validity test (PVT) currently under development that employs a two-subtest (one verbal, one visual), forced-choice recognition memory format. RESEARCH DESIGN A descriptive, correlational design was employed to describe performance on the TOE and examine the convergent validity between the TOE and comparison measures. METHODS AND PROCEDURES A sample of 53 individuals with chronic acquired brain injury (ABI) were administered the TOE and three well-validated PVTs (Reliable Digit Span [RDS], Test of Memory Malingering [TOMM] and Dot Counting Test [DCT]). MAIN OUTCOMES AND RESULTS The TOE appeared more difficult than it actually was, suggesting adequate face validity. Medium-to-large correlations were observed between the TOE and established PVTs, suggesting good convergent validity. Provisional cutoff scores are offered based on performance of a subgroup of participants with "sufficient effort." CONCLUSIONS Overall, the TOE shows promise as a PVT measure for clinical use. Future studies with larger and more diverse samples are needed to more fully determine the psychometric characteristics of the TOE.
Affiliation(s)
- William J Ernst
- Department of Professional Psychology, Chestnut Hill College, Philadelphia, Pennsylvania, USA
- Meredith E Kneavel
- School of Nursing and Health Sciences, La Salle University, Philadelphia, Pennsylvania, USA
36
Ashendorf L, Withrow S, Ward SH, Sullivan SK, Sugarman MA. Decision rules for an abbreviated administration of the Test of Memory Malingering. Appl Neuropsychol Adult 2022:1-10. [PMID: 35068279 DOI: 10.1080/23279095.2022.2026948]
Abstract
The present study investigated abbreviation methods for the Test of Memory Malingering (TOMM) in relation to traditional manual-based test cutoffs and independently derived more stringent cutoffs suggested by recent research (≤48 on Trial 2 or 3). Consecutively referred outpatient U.S. military veterans (n = 260) were seen for neuropsychological evaluation for mild traumatic brain injury or possible attention-deficit/hyperactivity disorder. Performance on TOMM Trial 1 was evaluated, including the total score and errors on the first 10 items (TOMMe10), to determine correspondence and redundancy with Trials 2 and 3. Using the traditional cutoff, valid performance on Trials 2 and 3 was predicted by zero errors on TOMMe10 and by Trial 1 scores greater than 41. Invalid performance was predicted by commission of more than three errors on TOMMe10 and by Trial 1 scores less than 34. For revised TOMM cutoffs, a Trial 1 score above 46 was predictive of a valid score, and a TOMMe10 score of three or more errors or a Trial 1 score below 36 was associated with invalid TOMM performance. Conditional abbreviation of the TOMM is feasible in a vast majority of cases without sacrificing information regarding performance validity. Decision trees are provided to facilitate administration of the three trials.
Affiliation(s)
- Lee Ashendorf
- Mental Health Service Line, VA Central Western Massachusetts, Worcester, MA, USA
- Department of Psychiatry, University of Massachusetts Medical School, Worcester, MA, USA
- Susanne Withrow
- Behavioral Health Service Line, VA Pittsburgh Healthcare System, Pittsburgh, PA, USA
- Sarah H Ward
- Mental Health Service Line, VA Central Western Massachusetts, Worcester, MA, USA
- Department of Psychiatry, University of Massachusetts Medical School, Worcester, MA, USA
- Sara K Sullivan
- Psychology Service, VA Bedford Healthcare System, Bedford, MA, USA
- Michael A Sugarman
- Department of Neurology, Medical University of South Carolina, Charleston, SC, USA
37
Stocks JK, Shields AN, DeBoer AB, Cerny BM, Ogram Buckley CM, Ovsiew GP, Jennette KJ, Resch ZJ, Basurto KS, Song W, Pliskin NH, Soble JR. The impact of visual memory impairment on Victoria Symptom Validity Test performance: A known-groups analysis. Appl Neuropsychol Adult 2022:1-10. [PMID: 34985401] [DOI: 10.1080/23279095.2021.2021911]
Abstract
OBJECTIVE We assessed the effect of visual learning and recall impairment on Victoria Symptom Validity Test (VSVT) accuracy and response latency for Easy, Difficult, and Total Items. METHOD A sample of 163 adult patients who were administered the VSVT and the Brief Visuospatial Memory Test-Revised was classified into valid (114/163) or invalid (49/163) groups via independent criterion performance validity tests (PVTs). Classification accuracies for all VSVT indices were examined for the overall sample, and separately for subgroups based on visual memory functioning. RESULTS In the overall sample, all indices produced acceptable classification accuracy (areas under the curve [AUCs] ≥ 0.79). When stratified by visual learning/recall impairment, accuracy indices yielded acceptable classification for both the unimpaired (AUCs ≥ 0.79) and impaired subsamples (AUCs ≥ 0.75). Latency indices had acceptable classification accuracy for the unimpaired subsample (AUCs ≥ 0.74), but accuracy and sensitivity dropped for the impaired sample (AUCs ≥ 0.67). CONCLUSIONS VSVT accuracy and response latency yielded acceptable classification accuracies in the overall sample, and this effect was maintained in those with and without visual learning/recall impairment for the accuracy indices. Findings indicate that the VSVT is a psychometrically robust PVT with largely invariant cut-scores, even in the presence of bona fide visual learning/recall impairment.
Affiliation(s)
- Jane K Stocks
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Allison N Shields
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Adam B DeBoer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Wheaton College, Wheaton, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S Basurto
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Woojin Song
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Neil H Pliskin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
38
White DJ, Ovsiew GP, Rhoads T, Resch ZJ, Lee M, Oh AJ, Soble JR. The Divergent Roles of Symptom and Performance Validity in the Assessment of ADHD. J Atten Disord 2022; 26:101-108. [PMID: 33084457] [DOI: 10.1177/1087054720964575]
Abstract
OBJECTIVE This study examined concordance between symptom and performance validity among clinically-referred patients undergoing neuropsychological evaluation for Attention-Deficit/Hyperactivity Disorder (ADHD). METHOD Data from 203 patients who completed the WAIS-IV Working Memory Index, the Clinical Assessment of Attention Deficit-Adult (CAT-A), and ≥4 criterion performance validity tests (PVTs) were analyzed. RESULTS Symptom and performance validity were concordant in 76% of cases, with the majority being valid performance. Of the remaining 24% of cases with divergent validity findings, patients were more likely to exhibit symptom invalidity (15%) than performance invalidity (9%). Patients demonstrating symptom invalidity endorsed significantly more ADHD symptoms than those with credible symptom reporting (ηp2 = .06-.15), but comparable working memory test performance, whereas patients with performance invalidity had significantly worse working memory performance than those with valid PVT performance (ηp2 = .18). CONCLUSION Symptom and performance invalidity represent dissociable constructs in patients undergoing neuropsychological evaluation of ADHD and should be evaluated independently.
Affiliation(s)
- Daniel J White
- University of Illinois College of Medicine, Chicago, IL, USA
- Roosevelt University, Chicago, IL, USA
- Tasha Rhoads
- University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- University of Illinois College of Medicine, Chicago, IL, USA
- Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Mary Lee
- University of Illinois College of Medicine, Chicago, IL, USA
- Alison J Oh
- University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- University of Illinois College of Medicine, Chicago, IL, USA
39
Fountain-Zaragoza S, Braun SE, Horner MD, Benitez A. Comparison of conventional and actuarial neuropsychological criteria for mild cognitive impairment in a clinical setting. J Clin Exp Neuropsychol 2021; 43:753-765. [PMID: 34962226] [DOI: 10.1080/13803395.2021.2007857]
Abstract
INTRODUCTION Evidence-based practice in neuropsychology involves the use of validated tests, cutoff scores, and interpretive algorithms to identify clinically significant cognitive deficits. Recently, actuarial neuropsychological criteria (ANP) for identifying mild cognitive impairment were developed, demonstrating improved criterion validity and temporal stability compared to conventional criteria (CNP). However, benefits of the ANP criteria have not been investigated in non-research, clinical settings with varied etiologies, severities, and comorbidities. This study compared the utility of CNP and ANP criteria using data from a memory disorders clinic. METHOD Data from 500 non-demented older adults evaluated in a Veterans Affairs Medical Center memory disorders clinic were retrospectively analyzed. We applied CNP and ANP criteria to the Repeatable Battery for the Assessment of Neuropsychological Status, compared outcomes to consensus clinical diagnoses, and conducted cluster analyses of scores from each group. RESULTS The majority (72%) of patients met both the CNP and ANP criteria and both approaches were susceptible to confounding factors such as invalid test data and mood disturbance. However, the CNP approach mislabeled impairment in more patients with non-cognitive disorders and intact cognition. Comparatively, the ANP approach misdiagnosed patients with depression at a third of the rate and those with no diagnosis at nearly half the rate of CNP. Cluster analyses revealed groups with: 1) minimal impairment, 2) amnestic impairment, and 3) multi-domain impairment. The ANP approach yielded subgroups with more distinct neuropsychological profiles. CONCLUSIONS We replicated previous findings that the CNP approach is over-inclusive, particularly for those determined to have no cognitive disorder by a consensus team. The ANP approach yielded fewer false positives and better diagnostic specificity than the CNP. Despite clear benefits of the ANP vs. CNP, there was substantial overlap in their performance in this heterogeneous sample. These findings highlight the critical role of clinical interpretation when wielding these empirically-derived tools.
Affiliation(s)
- Stephanie Fountain-Zaragoza
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Ralph H. Johnson Department of Veterans Affairs Medical Center, Mental Health Service, Charleston, SC, USA
- Sarah Ellen Braun
- Department of Neurology, School of Medicine, Virginia Commonwealth University, Richmond, VA, USA
- Michael David Horner
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Ralph H. Johnson Department of Veterans Affairs Medical Center, Mental Health Service, Charleston, SC, USA
- Andreana Benitez
- Department of Neurology, Medical University of South Carolina, Charleston, SC, USA
40
Gold DM, Rizzo JR, Lee YSC, Childs A, Hudson TE, Martone J, Matsuzawa YK, Fraser F, Ricker JH, Dai W, Selesnick I, Balcer LJ, Galetta SL, Rucker JC. King-Devick Test Performance and Cognitive Dysfunction after Concussion: A Pilot Eye Movement Study. Brain Sci 2021; 11:1571. [PMID: 34942873] [PMCID: PMC8699706] [DOI: 10.3390/brainsci11121571]
Abstract
(1) Background: The King-Devick (KD) rapid number naming test is sensitive for concussion diagnosis, with increased test time from baseline as the outcome measure. Eye tracking during KD performance in concussed individuals shows an association between inter-saccadic interval (ISI; the time between saccades) prolongation and prolonged testing time. This pilot study retrospectively assesses the relation between ISI prolongation during KD testing and cognitive performance in persistently symptomatic individuals post-concussion. (2) Results: Fourteen participants (median age 34 years; 6 women) with prior neuropsychological assessment and KD testing with eye tracking were included. KD test times (72.6 ± 20.7 s) and median ISI (379.1 ± 199.1 msec) were prolonged compared to published normative values. Greater ISI prolongation was associated with lower scores for processing speed (WAIS-IV Coding, r = 0.72, p = 0.0017), attention/working memory (Trail Making Test A, r = −0.65, p = 0.006) (Digit Span Forward, r = −0.57, p = 0.017) (Digit Span Backward, r = −0.55, p = 0.021) (Digit Span Total, r = −0.74, p = 0.001), and executive function (Stroop Color Word Interference, r = −0.8, p = 0.0003). (3) Conclusions: This pilot study provides preliminary evidence suggesting that cognitive dysfunction may be associated with prolonged ISI and KD test times in concussion.
Affiliation(s)
- Doria M. Gold
- Department of Neurology, New York University Grossman School of Medicine, New York, NY 10016, USA
- John-Ross Rizzo
- Department of Neurology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Physical Medicine & Rehabilitation, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Mechanical & Aerospace Engineering, New York University Tandon School of Engineering, New York, NY 11201, USA
- Department of Biomedical Engineering, New York University Tandon School of Engineering, New York, NY 11201, USA
- Yuen Shan Christine Lee
- Department of Physical Medicine & Rehabilitation, New York University Grossman School of Medicine, New York, NY 10016, USA
- Amanda Childs
- Department of Physical Medicine & Rehabilitation, New York University Grossman School of Medicine, New York, NY 10016, USA
- Todd E. Hudson
- Department of Neurology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Physical Medicine & Rehabilitation, New York University Grossman School of Medicine, New York, NY 10016, USA
- John Martone
- Department of Neurology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Yuka K. Matsuzawa
- Department of Physical Medicine & Rehabilitation, New York University Grossman School of Medicine, New York, NY 10016, USA
- Felicia Fraser
- Department of Physical Medicine & Rehabilitation, MetroHealth System, Cleveland, OH 44109, USA
- Joseph H. Ricker
- Department of Physical Medicine & Rehabilitation, New York University Grossman School of Medicine, New York, NY 10016, USA
- Weiwei Dai
- Department of Neurology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Electrical & Computer Engineering, New York University Tandon School of Engineering, New York, NY 11201, USA
- Ivan Selesnick
- Department of Electrical & Computer Engineering, New York University Tandon School of Engineering, New York, NY 11201, USA
- Laura J. Balcer
- Department of Neurology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Population Health, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Ophthalmology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Steven L. Galetta
- Department of Neurology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Ophthalmology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Janet C. Rucker
- Department of Neurology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Ophthalmology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Correspondence: ; Tel.: +1-212-263-7744
41
Lippa SM, French LM, Brickell TA, Driscoll AE, Glazer ME, Tippett CE, Sullivan JK, Lange RT. Post-Traumatic Stress Disorder Symptoms Are Related to Cognition after Complicated Mild and Moderate Traumatic Brain Injury but Not Severe and Penetrating Traumatic Brain Injury. J Neurotrauma 2021; 38:3137-3145. [PMID: 34409857] [DOI: 10.1089/neu.2021.0120]
Abstract
Although post-traumatic stress disorder (PTSD) has been associated with worse cognitive outcomes after mild traumatic brain injury (TBI), its impact has not been evaluated after more severe TBI. This study aimed to determine whether PTSD symptoms are related to cognition after complicated mild, moderate, severe, and penetrating TBI. Service members (n = 137) with a history of complicated mild/moderate TBI (n = 64) or severe/penetrating TBI (n = 73) were prospectively enrolled from United States Military Treatment Facilities. Participants completed a neuropsychological assessment one year or more post-injury. Six neuropsychological composite scores and an overall test battery mean (OTBM) were considered. Participants were excluded if there was evidence of invalid responding. Hierarchical linear regressions were conducted evaluating neuropsychological performance. The interaction between TBI severity and PTSD Checklist-Civilian version total score was significant for processing speed (β = 0.208, p = 0.034) and delayed memory (β = 0.239, p = 0.021) and trended toward significance for immediate memory (β = 0.190, p = 0.057) and the OTBM (β = 0.181, p = 0.063). For each of these composite scores, the relationship between PTSD symptoms and cognition was stronger in the complicated mild/moderate TBI group than the severe/penetrating TBI group. Within the severe/penetrating TBI group, PTSD symptoms were unrelated to cognitive performance. In contrast, within the complicated mild/moderate TBI group, PTSD symptoms were significantly related to processing speed (R2Δ = 0.077, β = -0.280, p = 0.019), immediate memory (R2Δ = 0.197, β = -0.448, p < 0.001), delayed memory (R2Δ = 0.176, β = -0.423, p < 0.001), executive functioning (R2Δ = 0.100, β = -0.317, p = 0.008), and the OTBM (R2Δ = 0.162, β = -0.405, p < 0.001). 
The potential impact of PTSD symptoms on cognition, over and above the impact of brain injury alone, should be considered with service members and veterans with a history of complicated mild/moderate TBI. In addition, in research comparing cognitive outcomes between patients with histories of complicated-mild, moderate, severe, and/or penetrating TBI, it will be important to account for PTSD symptoms.
Affiliation(s)
- Sara M Lippa
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
- Louis M French
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
- Traumatic Brain Injury Center of Excellence, Silver Spring, Maryland, USA
- Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
- Tracey A Brickell
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
- Traumatic Brain Injury Center of Excellence, Silver Spring, Maryland, USA
- Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
- Contractor, General Dynamics Information Technology, Falls Church, Virginia, USA
- Centre of Excellence on Post-traumatic Stress Disorder, Ottawa, ON, Canada
- Angela E Driscoll
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
- Megan E Glazer
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
- Traumatic Brain Injury Center of Excellence, Silver Spring, Maryland, USA
- Contractor, General Dynamics Information Technology, Falls Church, Virginia, USA
- Corie E Tippett
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
- Traumatic Brain Injury Center of Excellence, Silver Spring, Maryland, USA
- Contractor, General Dynamics Information Technology, Falls Church, Virginia, USA
- Jamie K Sullivan
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
- Traumatic Brain Injury Center of Excellence, Silver Spring, Maryland, USA
- Contractor, General Dynamics Information Technology, Falls Church, Virginia, USA
- Rael T Lange
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
- Traumatic Brain Injury Center of Excellence, Silver Spring, Maryland, USA
- Contractor, General Dynamics Information Technology, Falls Church, Virginia, USA
- University of British Columbia, Vancouver, British Columbia, Canada
- Centre of Excellence on Post-traumatic Stress Disorder, Ottawa, ON, Canada
42
Messerly J, Soble JR, Webber TA, Alverson WA, Fullen C, Kraemer LD, Marceaux JC. Evaluation of the classification accuracy of multiple performance validity tests in a mixed clinical sample. Appl Neuropsychol Adult 2021; 28:727-736. [PMID: 31835915] [DOI: 10.1080/23279095.2019.1698581]
Abstract
The Test of Memory Malingering (TOMM) and Word Memory Test (WMT) are among the most well-known performance validity tests (PVTs) and are regarded as gold-standard measures. Because many factors influence PVT selection, it is imperative that clinicians make informed decisions about additional or alternative PVTs that demonstrate classification accuracy similar to these well-validated measures. The present archival study evaluated the agreement/classification accuracy of a large battery consisting of multiple other freestanding/embedded PVTs in a mixed clinical sample of 126 veterans. We examined failure rates for all standalone/embedded PVTs using established cut-scores and calculated pass/fail agreement rates and diagnostic odds ratios for various combinations of PVTs, using the TOMM and WMT as criterion measures. The TOMM and WMT demonstrated the best agreement, followed by the Word Choice Test (WCT). The Rey Fifteen Item Test had an excessive number of false-negative errors and reduced classification accuracy. The Digit Span age-corrected scaled score (DS-ACSS) had the highest agreement among the embedded PVTs. Findings lend further support to the use of a combination of embedded and standalone PVTs in identifying suboptimal performance. Results provide data to enhance clinical decision making for neuropsychologists who implement combinations of PVTs in a larger clinical battery.
Affiliation(s)
- Johanna Messerly
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Jason R Soble
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Departments of Psychiatry and Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Troy A Webber
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Mental Health and Rehabilitation and Extended Care Lines, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- W Alex Alverson
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Chrystal Fullen
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Lindsay D Kraemer
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Janice C Marceaux
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Department of Neurology, University of Texas Health Science Center, San Antonio, TX, USA
43
Rhoads T, Leib SI, Resch ZJ, Basurto KS, Castillo LR, Jennette KJ, Soble JR. Relative Rates of Invalidity for the Test of Memory Malingering and the Dot Counting Test Among Spanish-Speaking Patients Residing in the USA. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09423-z]
44
Braun SE, Fountain-Zaragoza S, Halliday CA, Horner MD. Demographic differences in performance validity test failure. Appl Neuropsychol Adult 2021:1-9. [PMID: 34428386] [DOI: 10.1080/23279095.2021.1958814]
Abstract
OBJECTIVE The present study investigated demographic differences in performance validity test (PVT) failure in a Veteran sample. METHOD Data were extracted from clinical neuropsychological evaluations. Only veterans who identified as men and as either European American/White (EA) or African American/Black (AA) were included (n = 1261). We investigated whether performance on two frequently used PVTs, the Test of Memory Malingering (TOMM) and the Medical Symptom Validity Test (MSVT), differed by age, education, and race using separate logistic regressions. RESULTS Veterans with younger age, less education, and Veterans Affairs (VA) service-connected disability were significantly more likely to fail both PVTs. Race was not a significant predictor of MSVT failure, but AA patients were significantly more likely than EA patients to fail the TOMM. For all significant demographic predictors in the models, effects were small. In a subsample of patients who were given both PVTs (n = 461), the effects of race on performance remained. CONCLUSIONS Performance on the TOMM and MSVT differed by age and level of education. Performance on the TOMM differed between EA and AA patients, whereas performance on the MSVT did not. These results suggest that demographic factors may play a small but measurable role in performance on specific PVTs.
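Analyses like the one above model PVT failure as a binary outcome and report effect sizes for group differences. As a minimal, self-contained illustration of the underlying arithmetic (an odds ratio with a Wald 95% confidence interval), using hypothetical counts rather than the study's data:

```python
import math

def odds_ratio_ci(fail_a, pass_a, fail_b, pass_b, z=1.96):
    """Odds ratio for PVT failure in group A relative to group B, with a
    Wald 95% confidence interval. Counts are hypothetical, not study data."""
    or_ = (fail_a / pass_a) / (fail_b / pass_b)
    # Standard error of log(OR) from the four cell counts
    se = math.sqrt(1 / fail_a + 1 / pass_a + 1 / fail_b + 1 / pass_b)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical TOMM failure counts for two demographic groups
or_ab, lo, hi = odds_ratio_ci(30, 170, 20, 180)
```

A confidence interval spanning 1.0, as in this hypothetical example, would indicate the group difference is not statistically reliable; the study's "small effects" conclusion reflects odds ratios close to 1.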
Affiliation(s)
- Sarah Ellen Braun
- Department of Neurology, Virginia Commonwealth University, Richmond, VA, USA
- Massey Cancer Center, Richmond, VA, USA
- Colleen A Halliday
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Michael David Horner
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
45
The Multi-Level Pattern Memory Test (MPMT): Initial Validation of a Novel Performance Validity Test. Brain Sci 2021; 11:1039. [PMID: 34439658] [PMCID: PMC8393330] [DOI: 10.3390/brainsci11081039]
Abstract
Performance validity tests (PVTs) are used for the detection of noncredible performance in neuropsychological assessments. The aim of the study was to assess the efficacy (i.e., discrimination capacity) of a novel PVT, the Multi-Level Pattern Memory Test (MPMT). It includes stages that allow profile analysis (i.e., detecting noncredible performance based on an analysis of participants' performance across stages) and minimizes the likelihood that it would be perceived as a PVT by examinees. In addition, it utilizes nonverbal stimuli and is therefore more likely to be cross-culturally valid. In Experiment 1, participants that were instructed to simulate cognitive impairment performed less accurately than honest controls in the MPMT (n = 67). Importantly, the MPMT has shown an adequate discrimination capacity, though somewhat lower than an established PVT (i.e., Test of Memory Malingering-TOMM). Experiment 2 (n = 77) validated the findings of the first experiment while also indicating a dissociation between the simulators' objective performance and their perceived cognitive load while performing the MPMT. The MPMT and the profile analysis based on its outcome measures show initial promise in detecting noncredible performance. It may, therefore, increase the range of available PVTs at the disposal of clinicians, though further validation in clinical settings is mandated. The fact that it is an open-source software will hopefully also encourage the development of research programs aimed at clarifying the cognitive processes involved in noncredible performance and the impact of PVT characteristics on clinical utility.
46
Severity of Ongoing Post-Concussive Symptoms as a Predictor of Cognitive Performance Following a Pediatric Mild Traumatic Brain Injury. J Int Neuropsychol Soc 2021; 27:686-696. [PMID: 33243310] [DOI: 10.1017/s1355617720001228]
Abstract
OBJECTIVE This study aimed to examine the predictors of cognitive performance in patients with pediatric mild traumatic brain injury (pmTBI) and to determine whether group differences in cognitive performance on a computerized test battery could be observed between pmTBI patients and healthy controls (HC) in the sub-acute (SA) and the early chronic (EC) phases of injury. METHOD 203 pmTBI patients recruited from emergency settings and 159 age- and sex-matched HC aged 8-18 rated their ongoing post-concussive symptoms (PCS) on the Post-Concussion Symptom Inventory and completed the Cogstate brief battery in the SA (1-11 days) phase of injury. A subset (156 pmTBI patients; 144 HC) completed testing in the EC (~4 months) phase. RESULTS Within the SA phase, a group difference was only observed for the visual learning task (One-Card Learning), with pmTBI patients being less accurate relative to HC. Follow-up analyses indicated higher ongoing PCS and higher 5P clinical risk scores were significant predictors of lower One-Card Learning accuracy within SA phase, while premorbid variables (estimates of intellectual functioning, parental education, and presence of learning disabilities or attention-deficit/hyperactivity disorder) were not. CONCLUSIONS The absence of group differences at EC phase is supportive of cognitive recovery by 4 months post-injury. While the severity of ongoing PCS and the 5P score were better overall predictors of cognitive performance on the Cogstate at SA relative to premorbid variables, the full regression model explained only 4.1% of the variance, highlighting the need for future work on predictors of cognitive outcomes.
47
McClintock SM, Minto L, Denney DA, Bailey KC, Cullum CM, Dotson VM. Clinical Neuropsychological Evaluation in Older Adults With Major Depressive Disorder. Curr Psychiatry Rep 2021; 23:55. [PMID: 34255167] [PMCID: PMC8764751] [DOI: 10.1007/s11920-021-01267-3]
Abstract
PURPOSE OF THE REVIEW Older adults with major depressive disorder (MDD) are particularly vulnerable to MDD-associated adverse cognitive effects, including slowed processing speed, decreased attention, and executive dysfunction. The purpose of this review is to describe the approach to a clinical neuropsychological evaluation in older adults with MDD. Specifically, this review compares and contrasts neurocognitive screening and clinical neuropsychological evaluation procedures and details the multiple components of the clinical neuropsychological evaluation. RECENT FINDINGS Research has shown that neurocognitive screening serves a useful purpose by providing a rapid assessment of global cognitive function; however, it has limited sensitivity and specificity. The clinical neuropsychological evaluation process is multifaceted and encompasses a review of available medical records, a neurobehavioral status and diagnostic interview, comprehensive cognitive and clinical assessment, examination of inclusion and diversity factors as well as symptom and performance validity, and therapeutic feedback. As such, the evaluation provides invaluable information on multiple cognitive functions, establishes brain-behavior relationships, clarifies neuropsychiatric diagnoses, and can inform the etiology of cognitive impairment. Clinical neuropsychological evaluation plays a unique and critical role in integrated healthcare for older adults with MDD. Indeed, the evaluation can serve as a nexus to synthesize information across healthcare providers in order to maximize measurement-based care that can optimize personalized medicine and overall health outcomes.
Affiliation(s)
- Shawn M McClintock
- Division of Psychology, Department of Psychiatry, UT Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-8898, USA.
- Division of Brain Stimulation and Neurophysiology, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, NC, USA.
- Lex Minto
- Georgia State University, Atlanta, GA, USA
- David A Denney
- Division of Psychology, Department of Psychiatry, UT Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-8898, USA
- K Chase Bailey
- Division of Psychology, Department of Psychiatry, UT Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-8898, USA
- C Munro Cullum
- Division of Psychology, Department of Psychiatry, UT Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX, 75390-8898, USA
- Vonetta M Dotson
- Department of Psychology, Georgia State University, P.O. Box 5010, Atlanta, GA, 30302-5010, USA
- Gerontology Institute, Georgia State University, Atlanta, GA, USA
48
Rhoads T, Neale AC, Resch ZJ, Cohen CD, Keezer RD, Cerny BM, Jennette KJ, Ovsiew GP, Soble JR. Psychometric implications of failure on one performance validity test: a cross-validation study to inform criterion group definition. J Clin Exp Neuropsychol 2021; 43:437-448. [PMID: 34233580 DOI: 10.1080/13803395.2021.1945540] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Indexed: 10/20/2022]
Abstract
Introduction: Research to date has supported the use of multiple performance validity tests (PVTs) for determining validity status in clinical settings. However, the implications of including versus excluding patients failing one PVT remain a source of debate, and methodological guidelines for PVT research are lacking. This study evaluated three validity classification approaches (i.e., 0 vs. ≥2, 0-1 vs. ≥2, and 0 vs. ≥1 PVT failures) using three reference standards (i.e., criterion PVT groupings) to recommend approaches best suited to establishing validity groups in PVT research methodology. Method: A mixed clinical sample of 157 patients was administered freestanding PVTs (Medical Symptom Validity Test, Dot Counting Test, Test of Memory Malingering, Word Choice Test) and embedded PVTs (Reliable Digit Span, RAVLT Effort Score, Stroop Word Reading, BVMT-R Recognition Discrimination) during outpatient neuropsychological evaluation. Three reference standards (i.e., two freestanding and three embedded PVTs from the above list) were created. The Rey 15-Item Test and RAVLT Forced Choice, along with the two freestanding PVTs not employed in the reference standard, were used solely as outcome measures. Receiver operating characteristic curve analyses evaluated classification accuracy using the three validity classification approaches for each reference standard. Results: When patients failing only one PVT were excluded or classified as valid, classification accuracy ranged from acceptable to excellent. However, classification accuracy was poor to acceptable when patients failing one PVT were classified as invalid. Sensitivity/specificity across two of the validity classification approaches (0 vs. ≥2; 0-1 vs. ≥2) remained reasonably stable. Conclusions: These results indicate that both inclusion and exclusion of patients failing one PVT are acceptable approaches to PVT research methodology, and the choice of method likely depends on the study rationale. However, including such patients in the invalid group yields unacceptably poor classification accuracy across a number of psychometrically robust outcome measures and is therefore not recommended.
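The three validity-grouping approaches compared in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the study's code: the failure counts, scores, and cutoff below are invented, and sensitivity/specificity of an outcome PVT would in practice come from ROC analysis rather than a single cutoff.

```python
def classify(failures: int, approach: str):
    """Assign 'valid'/'invalid' (or None = excluded) from a criterion PVT
    failure count, under the three grouping approaches described above."""
    if approach == "0_vs_2plus":       # exclude patients failing exactly one PVT
        if failures == 0:
            return "valid"
        return "invalid" if failures >= 2 else None
    if approach == "01_vs_2plus":      # single failures count as valid
        return "valid" if failures <= 1 else "invalid"
    if approach == "0_vs_1plus":       # single failures count as invalid
        return "valid" if failures == 0 else "invalid"
    raise ValueError(approach)

def sens_spec(cases, approach, cutoff):
    """Sensitivity/specificity of an outcome PVT score against a grouping.
    Each case is (criterion_failures, outcome_score); lower score = worse."""
    tp = fn = tn = fp = 0
    for failures, score in cases:
        group = classify(failures, approach)
        if group is None:              # excluded under this approach
            continue
        flagged = score <= cutoff      # outcome PVT calls the record invalid
        if group == "invalid":
            tp += flagged
            fn += not flagged
        else:
            fp += flagged
            tn += not flagged
    return tp / (tp + fn), tn / (tn + fp)

# Made-up cases: (number of criterion PVT failures, outcome PVT score)
cases = [(0, 45), (0, 50), (1, 28), (2, 20), (3, 25)]
print(sens_spec(cases, "0_vs_2plus", cutoff=30))
```

Note how the single-failure case `(1, 28)` is simply dropped under the 0 vs. ≥2 approach but shifts between the valid and invalid groups under the other two, which is exactly the methodological choice the study evaluates.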
Affiliation(s)
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Alec C Neale
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Cari D Cohen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Richard D Keezer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Wheaton College, Wheaton, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
49
Lippa SM, Kenney K, Riedy G, Ollinger J. White Matter Hyperintensities Are Not Related to Symptomatology or Cognitive Functioning in Service Members with a Remote History of Traumatic Brain Injury. Neurotrauma Rep 2021; 2:245-254. [PMID: 34223555 PMCID: PMC8244514 DOI: 10.1089/neur.2021.0002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 02/06/2023]
Abstract
This study aimed to determine whether magnetic resonance imaging (MRI) white matter hyperintensities (WMHs) are associated with symptom reporting and/or cognitive performance in 1202 active-duty service members with prior single or multiple mild traumatic brain injury (mTBI). Patients with mTBI evaluated at the National Intrepid Center of Excellence (NICoE) at Walter Reed National Military Medical Center (WRNMMC) were divided into those with (n = 632) and without (n = 570) WMHs. The groups were compared on several self-report scales including the Neurobehavioral Symptom Inventory (NSI), Post-Traumatic Stress Disorder (PTSD) Checklist-Civilian Version (PCL-C), Satisfaction with Life Scale (SWLS), and Short Form-36 Health Survey (SF-36). They were also compared on several neuropsychological measures, including tests of attention, working memory, learning and memory, executive functioning, and psychomotor functioning. After correction for multiple comparisons, there were no significant differences between the two groups on any self-reported symptom scale or cognitive test. When comparing a subgroup with the highest (20+) WMH burden (n = 60) with those with no WMHs (n = 60; matched on age, education, sex, race, rank, and TBI number), only SF-36 Health Change significantly differed between the subgroups; the multiple WMH subgroup reported worsening health over the past year (t[53] = 3.52, p = 0.001, d = 0.67) compared with the no WMH subgroup. These findings build on prior research suggesting total WMHs are not associated with significant changes in self-reported symptoms or cognitive performance in patients with a remote history of mTBI. As such, clinicians are encouraged to use caution when reporting such imaging findings.
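The matched-subgroup comparison above reports a t statistic and a Cohen's d effect size. As a hedged illustration of how those quantities are computed (the data below are invented, not the WRNMMC sample), a pooled-variance two-sample t and Cohen's d can be written as:

```python
from statistics import mean, stdev

def pooled_sd(a, b):
    """Pooled standard deviation of two independent samples."""
    na, nb = len(a), len(b)
    return (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
            / (na + nb - 2)) ** 0.5

def cohens_d(a, b):
    """Cohen's d: mean difference in pooled-SD units."""
    return (mean(a) - mean(b)) / pooled_sd(a, b)

def t_stat(a, b):
    """Pooled-variance independent-samples t statistic."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / (pooled_sd(a, b) * (1 / na + 1 / nb) ** 0.5)

# Hypothetical SF-36 Health Change scores for two matched subgroups
high_wmh = [2, 3, 2, 4, 3]
no_wmh = [3, 4, 4, 5, 4]
print(t_stat(high_wmh, no_wmh), cohens_d(high_wmh, no_wmh))
```

When many scales are compared, as here, the resulting p-values would additionally be corrected for multiple comparisons before interpretation.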
Affiliation(s)
- Sara M Lippa
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
- Kimbra Kenney
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, Maryland, USA; Department of Neurology, Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
- Gerard Riedy
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
- John Ollinger
- National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, Maryland, USA
50
Nayar K, Ventura LM, DeDios-Stern S, Oh A, Soble JR. The Impact of Learning and Memory on Performance Validity Tests in a Mixed Clinical Pediatric Population. Arch Clin Neuropsychol 2021; 37:50-62. [PMID: 34050354 DOI: 10.1093/arclin/acab040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Accepted: 05/04/2021] [Indexed: 11/13/2022]
Abstract
OBJECTIVE This study examined the degree to which verbal and visuospatial memory abilities influence performance validity test (PVT) performance in a mixed clinical pediatric sample. METHOD Data from 252 consecutive clinical pediatric cases (Mage = 11.23 years, SD = 4.02; 61.9% male) seen for outpatient neuropsychological assessment were collected. Measures of learning and memory (e.g., the California Verbal Learning Test-Children's Version; Child and Adolescent Memory Profile [ChAMP]), performance validity (Test of Memory Malingering Trial 1 [TOMM T1]; Wechsler Intelligence Scale for Children-Fifth Edition [WISC-V] or Wechsler Adult Intelligence Scale-Fourth Edition Digit Span indices; ChAMP Overall Validity Index), and intellectual abilities (e.g., WISC-V) were included. RESULTS Learning/memory abilities were not significantly correlated with TOMM T1 and accounted for relatively little variance in overall TOMM T1 performance (i.e., ≤6%). Conversely, ChAMP Validity Index scores were significantly correlated with verbal and visual learning/memory abilities, and learning/memory accounted for significant variance in PVT performance (12%-26%). Verbal learning/memory performance accounted for 5%-16% of the variance across the Digit Span PVTs. No significant differences in TOMM T1 and Digit Span PVT scores emerged between verbal/visual learning/memory impairment groups. ChAMP validity scores were lower for the visual learning/memory impairment group relative to the nonimpaired group. CONCLUSIONS Findings highlight the utility of including PVTs as standard practice in pediatric populations, particularly when memory is a concern. Consistent with the adult literature, TOMM T1 outperformed the other PVTs even in this diverse clinical sample with/without learning/memory impairment. In contrast, Digit Span indices appear best suited to cases with visuospatial (but not verbal) learning/memory concerns. Finally, the ChAMP's embedded validity measure was most strongly impacted by learning/memory performance.
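"Variance accounted for" figures like the 12%-26% above are typically the squared correlation (r²) between an ability score and a PVT score. A minimal sketch, with entirely hypothetical scores standing in for memory and PVT measures:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

memory = [45, 50, 38, 60, 52, 41]   # hypothetical memory test scores
pvt = [92, 95, 85, 99, 96, 88]      # hypothetical PVT accuracy (%)

r = pearson_r(memory, pvt)
variance_explained = r ** 2          # proportion of PVT variance explained
```

A large r² is precisely the problem flagged for the ChAMP's embedded validity index: when memory ability itself explains much of a PVT's variance, low scores may reflect genuine impairment rather than invalid performance.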
Affiliation(s)
- Kritika Nayar
- Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Lea M Ventura
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Pediatrics, University of Illinois College of Medicine, Chicago, IL, USA
- Samantha DeDios-Stern
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Alison Oh
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA