1. Finley JCA, Brooks JM, Nili AN, Oh A, VanLandingham HB, Ovsiew GP, Ulrich DM, Resch ZJ, Soble JR. Multivariate examination of embedded indicators of performance validity for ADHD evaluations: A targeted approach. Appl Neuropsychol Adult 2023:1-14. PMID: 37703401. DOI: 10.1080/23279095.2023.2256440.
Abstract
This study investigated the individual and combined utility of 10 embedded validity indicators (EVIs) within executive functioning, attention/working memory, and processing speed measures in 585 adults referred for an attention-deficit/hyperactivity disorder (ADHD) evaluation. Participants were categorized into invalid and valid performance groups as determined by scores from empirical performance validity indicators. Analyses revealed that all of the EVIs could meaningfully discriminate invalid from valid performers (AUCs = .69-.78), with high specificity (≥90%) but low sensitivity (19%-51%). However, none of them explained more than 20% of the variance in validity status. Combining any of these 10 EVIs into a multivariate model significantly improved classification accuracy, explaining up to 36% of the variance in validity status. Integrating six EVIs from the Stroop Color and Word Test, Trail Making Test, Verbal Fluency Test, and Wechsler Adult Intelligence Scale-Fourth Edition was as efficacious (AUC = .86) as using all 10 EVIs together. Failing any two of these six EVIs or any three of the 10 EVIs yielded clinically acceptable specificity (≥90%) with moderate sensitivity (60%). Findings support the use of multivariate models to improve the identification of performance invalidity in ADHD evaluations, but chaining multiple EVIs may only be helpful to an extent.
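The "fail any k of n EVIs" decision rule summarized above can be sketched as a simple failure count compared against a threshold. The sketch below is illustrative only: the function names, the toy cases, and the resulting sensitivity/specificity values are invented, not the study's data.

```python
# Toy sketch of a "fail any k of n embedded validity indicators" rule.
# All names and data here are hypothetical, for illustration only.

def classify_invalid(evi_failures, k):
    """Flag a case as invalid if it fails at least k EVIs (1 = fail, 0 = pass)."""
    return sum(evi_failures) >= k

def sensitivity_specificity(cases, k):
    """cases: list of (evi_failures, truly_invalid) pairs; returns (sens, spec)."""
    tp = fn = tn = fp = 0
    for failures, truly_invalid in cases:
        flagged = classify_invalid(failures, k)
        if truly_invalid:
            tp, fn = tp + flagged, fn + (not flagged)
        else:
            fp, tn = fp + flagged, tn + (not flagged)
    return tp / (tp + fn), tn / (tn + fp)

# Invented sample: six EVI pass/fail flags per examinee plus criterion status.
sample = [
    ([1, 1, 0, 0, 0, 0], True),   # fails two EVIs, truly invalid -> flagged
    ([1, 0, 0, 0, 0, 0], True),   # fails only one EVI -> missed (lowers sensitivity)
    ([0, 0, 0, 0, 0, 0], False),  # valid performer, no failures
    ([1, 1, 0, 0, 0, 0], False),  # valid performer, two failures -> false positive
]
sens, spec = sensitivity_specificity(sample, k=2)
```

Raising k trades sensitivity for specificity, which is why the study reports the ≥2-of-6 and ≥3-of-10 thresholds as the points where specificity stays at or above 90%.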
Affiliation(s)
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School, Chicago, IL, USA
- Julia M Brooks
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, University of Illinois at Chicago, Chicago, IL, USA
- Amanda N Nili
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Medical Social Sciences, Northwestern University Feinberg School, Chicago, IL, USA
- Alison Oh
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Hannah B VanLandingham
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Devin M Ulrich
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
2. Harrison AG, Beal AL, Armstrong IT. Predictive value of performance validity testing and symptom validity testing in psychoeducational assessment. Appl Neuropsychol Adult 2023;30:315-329. PMID: 34261385. DOI: 10.1080/23279095.2021.1943396.
Abstract
Using archival data from 2463 psychoeducational assessments of postsecondary students, we investigated whether failure on either symptom or performance validity tests (SVTs or PVTs) was associated with score differences on various cognitive, achievement, or executive functioning performance measures, or on symptom report measures related to mental health or attention complaints. In total, 14.6% of students failed one or more PVTs, 33.6% failed one or more SVTs, and 41.6% failed at least one validity test. Individuals who failed SVTs tended to have the highest levels of self-reported symptoms relative to other groups but did not score worse on performance-based psychological tests. Those who failed PVTs scored worse on performance-based tests relative to other groups. Failure on at least one PVT and one SVT was associated with both performance and self-reported symptoms suggestive of greater impairment compared with those who passed all validity measures. Findings also highlight the need for domain-specific SVTs: failing ADHD SVTs was associated only with extreme reports of ADHD and executive functioning symptoms, while failing mental health SVTs was associated only with extreme reports of mental health complaints. Results support using at least one PVT and one SVT in psychoeducational assessments to aid diagnostic certainty, given the frequency of non-credible presentation in this population of postsecondary students.
Affiliation(s)
- Allyson G Harrison
- Regional Assessment and Resource Centre, Queen's University, Kingston, Canada
- Irene T Armstrong
- Regional Assessment and Resource Centre, Queen's University, Kingston, Canada
3. Aryal K, Merten T, Akehurst L, Boskovic I. The English-language version of the Self-Report Symptom Inventory: a pilot analogue study with feigned head injury sequelae. Appl Neuropsychol Adult 2022:1-5. PMID: 35944507. DOI: 10.1080/23279095.2022.2109158.
Abstract
Questionnaire-based symptom validity tests (SVTs) are an indispensable diagnostic tool for evaluating the credibility of patients' claimed symptomatology, both in forensic and in clinical assessment contexts. In 2019, the comprehensive professional manual of a new SVT, the Self-Report Symptom Inventory (SRSI), was published in German. Its English-language version was first tested in the UK. This experimental analogue study investigated 20 adults simulating minor head injury symptoms and 21 honestly responding participants. The effect sizes of differences between the two groups were large, with the simulating group endorsing a higher number of pseudosymptoms, both on the SRSI and the Structured Inventory of Malingered Symptomatology, and scoring lower on the Reliable Digit Span than the control group. The results are similar to those obtained in previous research of different SRSI language versions, supporting the effort to validate the English-language SRSI version.
Affiliation(s)
- Kirsten Aryal
- Department of Psychology, University of Portsmouth, Portsmouth, UK
- Thomas Merten
- Neurology, Vivantes Klinikum im Friedrichshain, Berlin, Germany
- Lucy Akehurst
- Department of Psychology, University of Portsmouth, Portsmouth, UK
- Irena Boskovic
- Erasmus University Rotterdam, Rotterdam, Netherlands
- Maastricht University, Maastricht, Netherlands
4. Grewal KS, Trites M, Kirk A, MacDonald SWS, Morgan D, Gowda-Sookochoff R, O'Connell ME. CVLT-II short form forced choice recognition in a clinical dementia sample: Cautions for performance validity assessment. Appl Neuropsychol Adult 2022:1-10. PMID: 35635794. DOI: 10.1080/23279095.2022.2079088.
Abstract
Performance validity tests are susceptible to false positives from genuine cognitive impairment (e.g., dementia); this has not been explored with the short form of the California Verbal Learning Test II (CVLT-II-SF). In a memory clinic sample, we examined whether CVLT-II-SF Forced Choice Recognition (FCR) scores differed across diagnostic groups, and how severity of impairment [Clinical Dementia Rating Sum of Boxes (CDR-SOB) or Mini-Mental State Examination (MMSE)] modulated test performance. Three diagnostic groups were identified: subjective cognitive impairment (SCI; n = 85), amnestic mild cognitive impairment (a-MCI; n = 17), and dementia due to Alzheimer's disease (AD; n = 50). One-way ANOVA revealed significant group differences in FCR; post-hoc analysis indicated the AD group performed significantly worse than the other groups. Using multiple regression, FCR performance was modeled as a function of diagnostic group, severity (MMSE or CDR-SOB), and their interaction. Results yielded significant main effects for MMSE and diagnostic group, with a significant interaction; CDR-SOB analyses were non-significant. Increasing impairment disproportionately affected FCR performance for persons with AD, so caution is warranted when applying research-based performance validity cutoffs in dementia populations. Future research should examine whether CVLT-II-SF FCR is appropriately specific for best-practice testing batteries for dementia.
Affiliation(s)
- Karl S Grewal
- Department of Psychology, University of Saskatchewan, Saskatoon, Canada
- Michaella Trites
- Department of Psychology, University of Victoria, Victoria, Canada
- Andrew Kirk
- Department of Medicine, University of Saskatchewan, Saskatoon, Canada
- Debra Morgan
- Canadian Centre for Health and Safety in Agriculture, University of Saskatchewan, Saskatoon, Canada
- Megan E O'Connell
- Department of Psychology, University of Saskatchewan, Saskatoon, Canada
5. Harris M, Merz ZC. High elevation rates of the Structured Inventory of Malingered Symptomatology (SIMS) in neuropsychological patients. Appl Neuropsychol Adult 2021;29:1344-1351. PMID: 33662216. DOI: 10.1080/23279095.2021.1875227.
Abstract
The current study examined characteristics of the Structured Inventory of Malingered Symptomatology (SIMS) in a sample of 110 patients at an adult neuropsychology clinic. Subjects with especially high or low suspicion of invalid reporting were identified based on clinician-completed questions. SIMS elevation rates were examined at different cutoffs, compared between these groups, and correlated with other indicators of validity. High rates of SIMS elevations were found at the standard cutoff (>14) for the total sample (45.5%), low suspicion cases (24.4%), and high suspicion cases (95.7%). Other indicators of invalidity were low (secondary gain = 8.5%, clinical suspicion of exaggeration in interview M = 2.37/5, medical records concerning for invalidity = 2.4%, mixed/poor performance validity = 6.1%). Elevations correlated with clinician concern for over-reporting in interview, subject-reported cognitive concern (r = -.610), and psychological measures (BDI-II r = -.602, PROMIS r = -.409), but not with neuropsychological memory tests or performance validity measures (all p > .23). The SIMS should be interpreted with caution, as elevations appeared largely related to cognitive concern and psychiatric distress rather than true malingering. A cutoff of >16 could be used in neuropsychological populations, although this still offers only modest specificity.
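The cutoff comparison above (>14 standard vs. >16 stricter) amounts to counting the proportion of total scores exceeding each threshold. A minimal sketch, with invented scores and a hypothetical helper rather than the study's data:

```python
# Hypothetical SIMS total scores; invented for illustration only.
sims_totals = [9, 12, 15, 15, 17, 21, 23, 30]

def elevation_rate(scores, cutoff):
    """Proportion of cases whose SIMS total exceeds the cutoff."""
    return sum(s > cutoff for s in scores) / len(scores)

rate_standard = elevation_rate(sims_totals, 14)  # standard cutoff (>14)
rate_strict = elevation_rate(sims_totals, 16)    # stricter cutoff (>16)
```

Raising the cutoff necessarily lowers the elevation rate, which is the trade-off behind the authors' suggestion of >16 for neuropsychological populations.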
Affiliation(s)
- Matthew Harris
- Department of Physical Medicine and Rehabilitation, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Zachary C Merz
- LeBauer Department of Neurology, Moses H. Cone Memorial Hospital, Greensboro, NC, USA
6. Nikolai T, Cechova K, Bukacova K, Fendrych Mazancova A, Markova H, Bezdicek O, Hort J, Vyhnalek M. Delayed matching to sample task 48: assessment of malingering with simulating design. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2020;28:797-811. PMID: 32998629. DOI: 10.1080/13825585.2020.1826898.
Abstract
The results of neuropsychological tests may be distorted by patients who exaggerate cognitive deficits. Eighty-three patients with cognitive deficit [Amnestic Mild Cognitive Impairment (aMCI), n = 53; Alzheimer's disease (AD) dementia, n = 30], 44 healthy older adults (HA), and 30 simulators of AD (s-AD) underwent comprehensive neuropsychological assessment. Receiver Operating Characteristic (ROC) analysis revealed high specificity but low sensitivity of the Delayed Matching to Sample Task (DMS48) in differentiating s-AD from AD dementia (87 and 53%, respectively) and from aMCI (96 and 57%). The sensitivity was considerably increased by using the DMS48/Rey Auditory Verbal Learning Test (RAVLT) ratio (specificity and sensitivity 93% and 93% for AD dementia and 96% and 80% for aMCI). The DMS48 differentiates s-AD from both aMCI and AD dementia with high specificity but low sensitivity. Its predictive value greatly increased when evaluated together with the RAVLT.
Affiliation(s)
- T Nikolai
- Department of Neurology, Neuropsychology Laboratory, 1st Faculty of Medicine and General University Hospital, Prague, Czech Republic
- International Clinical Research Center, St. Anne's University Hospital Brno, Brno, Czech Republic
- K Cechova
- International Clinical Research Center, St. Anne's University Hospital Brno, Brno, Czech Republic
- Department of Neurology, Memory Clinic, 2nd Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- K Bukacova
- Department of Neurology, Neuropsychology Laboratory, 1st Faculty of Medicine and General University Hospital, Prague, Czech Republic
- A Fendrych Mazancova
- Department of Neurology, Neuropsychology Laboratory, 1st Faculty of Medicine and General University Hospital, Prague, Czech Republic
- International Clinical Research Center, St. Anne's University Hospital Brno, Brno, Czech Republic
- H Markova
- International Clinical Research Center, St. Anne's University Hospital Brno, Brno, Czech Republic
- Department of Neurology, Memory Clinic, 2nd Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- O Bezdicek
- Department of Neurology, Neuropsychology Laboratory, 1st Faculty of Medicine and General University Hospital, Prague, Czech Republic
- J Hort
- International Clinical Research Center, St. Anne's University Hospital Brno, Brno, Czech Republic
- Department of Neurology, Memory Clinic, 2nd Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
- M Vyhnalek
- International Clinical Research Center, St. Anne's University Hospital Brno, Brno, Czech Republic
- Department of Neurology, Memory Clinic, 2nd Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czech Republic
7. Bhowmick C, Hirst R, Green P. Comparison of the Word Memory Test and the Test of Memory Malingering in detecting invalid performance in neuropsychological testing. Appl Neuropsychol Adult 2019;28:486-496. PMID: 31519112. DOI: 10.1080/23279095.2019.1658585.
Abstract
Given the prevalence of compensation-seeking patients who exaggerate or fabricate their symptoms, assessing performance and symptom validity throughout testing is vital in neuropsychological evaluations. Two of the most commonly used performance validity tests (PVTs) are the Word Memory Test (WMT) and the Test of Memory Malingering (TOMM). While both have proven successful in detecting invalid performance, some studies suggest greater sensitivity for the WMT relative to the TOMM. To improve upon previous research, this study compared performance in individuals who completed both the WMT and TOMM during a neuropsychological evaluation. Participants included 268 cases from a clinical private practice consisting primarily of disability claimants. One-way multivariate analysis of variance (MANOVA) compared neuropsychological performance of participants who passed both PVTs (n = 198) versus those who failed the WMT but passed the TOMM (n = 70). Participants who failed the WMT but passed the TOMM showed global suppression of neuropsychological scores and reported more psychiatric symptoms on questionnaires relative to those who passed both PVTs. These findings suggest that those passing the TOMM but failing the WMT demonstrated performance invalidity, illustrating the WMT's greater sensitivity.
Affiliation(s)
- Chloe Bhowmick
- Department of Psychology, Palo Alto University, Palo Alto, CA, USA
- Rayna Hirst
- Department of Psychology, Palo Alto University, Palo Alto, CA, USA
- Paul Green
- William Green, Greens Publishing, Kelowna, British Columbia, Canada
8. Reilly KJ, Kalat SS, Richardson AH, Armistead-Jehle P. Preliminary investigation of the Denver Attention Test (DAT) in a mixed clinical sample. Appl Neuropsychol Adult 2019;28:158-164. PMID: 31091990. DOI: 10.1080/23279095.2019.1607736.
Abstract
In this pilot study, the clinical utility of a new computerized performance validity test (PVT) called the Denver Attention Test (DAT) was evaluated in a known-groups experimental design. Subjects consisted of 130 adults with mixed neurological conditions evaluated in an outpatient setting. Using the Word Memory Test (WMT) to categorize subjects into valid and invalid groups, the DAT was found to have adequate discrimination. Classification statistics for the DAT demonstrated low to moderate sensitivity and excellent specificity relative to the WMT. ROC analyses demonstrated AUCs of at least .78 for select DAT subtests. Overall, data from this pilot study suggest that the DAT has potential to serve as a useful PVT. Future research directions are discussed.
Affiliation(s)
- Anne H Richardson
- Graduate School of Professional Psychology, University of Denver, Denver, Colorado, USA
9. Hirsch O, Christiansen H. Faking ADHD? Symptom Validity Testing and Its Relation to Self-Reported, Observer-Reported Symptoms, and Neuropsychological Measures of Attention in Adults With ADHD. J Atten Disord 2018;22:269-280. PMID: 26246589. DOI: 10.1177/1087054715596577.
Abstract
OBJECTIVE: To compare ADHD patients who failed a symptom validity test with those who passed it, to explore whether there are signs of negative response bias at the group level. METHOD: In our outpatient department, 196 adults were diagnosed with ADHD using a comprehensive diagnostic strategy featuring a detailed clinical history, clinical interview, observer rating, several self-rating scales, and neuropsychological attention tests. The Amsterdam Short Term Memory Test (AKGT) was applied as a symptom validity measure. RESULTS: Sixty-three patients (32.1%) scored below the AKGT cutoff. The two groups did not differ significantly on self-report or observer ratings. Those who failed the AKGT had higher reaction time variability in selective, auditory, and visual divided attention, and more omission errors in sustained attention. CONCLUSION: We found no strong indicators of negative response bias in ADHD patients who failed a symptom validity test. New measures and approaches to detect feigned ADHD should be developed.
10. Parks AC, Gfeller J, Emmert N, Lammert H. Detecting feigned postconcussional and posttraumatic stress symptoms with the structured inventory of malingered symptomatology (SIMS). Appl Neuropsychol Adult 2016;24:429-438. PMID: 27284810. DOI: 10.1080/23279095.2016.1189426.
Abstract
The Structured Inventory of Malingered Symptomatology (SIMS) is a standalone symptom validity test (SVT) designed as a screening measure to detect a variety of exaggerated psychological symptoms. A number of studies have explored the accuracy of the SIMS in litigious and clinical populations, yet few have examined the validity of the SIMS in detecting feigned symptoms of postconcussional disorder (PCD) and posttraumatic stress disorder (PTSD). The present study examined the sensitivity of the SIMS in detecting undergraduate simulators (N = 78) feigning symptoms of PCD, PTSD, and the comorbid presentation of both PCD and PTSD symptomatologies. Overall, the SIMS Total score produced the highest sensitivities for the PCD symptoms and PCD+PTSD symptoms groups (.89 and .85, respectively), and to a lesser extent, the PTSD symptoms group (.69). The Affective Disorders (AF) subscale was most sensitive to the PTSD symptoms group compared to the PCD and PCD+PTSD symptoms groups. Additional sensitivity values are presented and examined at multiple scale cutoff scores. These findings support the use of the SIMS as an SVT screening measure for PCD and PTSD symptom exaggeration in neuropsychological assessment.
Affiliation(s)
- Adam C Parks
- Department of Psychiatry and Psychology, Mayo Clinic Florida, Jacksonville, Florida, USA
- Jeffrey Gfeller
- Department of Psychology, Saint Louis University, Saint Louis, Missouri, USA
- Natalie Emmert
- Department of Psychology, Saint Louis University, Saint Louis, Missouri, USA
- Hannah Lammert
- Department of Psychology, University of Minnesota Duluth, Duluth, Minnesota, USA
11. Green R, Kalina J, Ford R, Pandey K, Kister I. SymptoMScreen: A Tool for Rapid Assessment of Symptom Severity in MS Across Multiple Domains. Appl Neuropsychol Adult 2016;24:183-189. PMID: 27077687. DOI: 10.1080/23279095.2015.1125905.
Abstract
The objective of this study was to describe SymptoMScreen, an in-house developed tool for rapid assessment of MS symptom severity in routine clinical practice, and to validate SymptoMScreen against Performance Scales (PS). MS patients typically experience symptoms in many neurologic domains. A tool that enables MS patients to efficiently relay their symptom severity across multiple domains to healthcare providers could lead to improved symptom management. We developed "SymptoMScreen," a battery of 7-point Likert scales for 12 distinct domains commonly affected by MS: mobility, dexterity, body pain, sensation, bladder function, fatigue, vision, dizziness, cognition, depression, and anxiety. We administered SymptoMScreen and the PS scales to consecutive MS patients at a specialty MS Care Center. We assessed the criterion and construct validity of SymptoMScreen by calculating Spearman rank correlations between the SymptoMScreen composite score and the PS composite score, and between SymptoMScreen subscale scores and the respective PS subscale scores, where applicable. A total of 410 patients with MS (age 46.6 ± 12.9 years; 74% female; mean disease duration 12.2 ± 8.7 years) completed the SymptoMScreen and PS scales during their clinic visit. The composite SymptoMScreen score correlated strongly with the combined PS score (r = 0.88, p < 0.0001). SymptoMScreen subscale scores correlated strongly with the criterion measures of the respective PS (r = 0.69-0.87, p < 0.0001). Test-retest reliability of SymptoMScreen and its subscales was excellent (r = 0.71-0.94, p < .0001). SymptoMScreen is a single-page battery of Likert scales that assesses symptom impact in 12 domains commonly affected in MS. It has excellent criterion and construct validity, is patient and clinician friendly, takes approximately one minute to complete, and can help better document, understand, and manage patients' symptoms in routine clinical practice. SymptoMScreen is freely available to clinicians and researchers.
Affiliation(s)
- R Green
- New York University Langone Medical Center, Multiple Sclerosis Comprehensive Care Center, New York, New York, USA
- J Kalina
- New York University Langone Medical Center, Multiple Sclerosis Comprehensive Care Center, New York, New York, USA
- R Ford
- Barnabas Health Medical Group, Multiple Sclerosis Comprehensive Care Center, Livingston, New Jersey, USA
- K Pandey
- Barnabas Health Medical Group, Multiple Sclerosis Comprehensive Care Center, Livingston, New Jersey, USA
- I Kister
- New York University Langone Medical Center, Multiple Sclerosis Comprehensive Care Center, New York, New York, USA
- Barnabas Health Medical Group, Multiple Sclerosis Comprehensive Care Center, Livingston, New Jersey, USA
12.
Abstract
BACKGROUND: In clinical neuropsychological practice, assessment of response validity (e.g., effort, over-reporting, under-reporting) is an essential component of the assessment process. By contrast, other health care professionals, including those in neurorehabilitation settings, often omit this topic from their evaluations or rely only on subjective impressions. OBJECTIVE: To provide the first comprehensive review of response validity assessment in the neurorehabilitation literature, including why the topic is often avoided, what methods are commonly used, and how to decrease false positives. METHODS: A literature review, supplemented by the authors' experience and perspectives, was used to examine this topic. RESULTS: There is a well-established literature on the necessity and utility of assessing response validity, particularly in patients who have external incentives to embellish their presentation or to under-report symptoms. There are many reasons why non-neuropsychologists typically avoid assessing this topic. This poses a significant problem, particularly when patients exaggerate or malinger, because it can lead to misdiagnosis and risks increasing the cost of healthcare through unnecessary tests and treatments, unfair distribution of disability/compensation resources, and reduced access to these and other health resources for patients who genuinely need them. CONCLUSIONS: There is a significant need for non-neuropsychologists to develop and incorporate symptom and performance validity assessments into clinical evaluations, including those in neurorehabilitation settings.
13.
Abstract
Background: To understand the neurocognitive effects of brain injury, valid neuropsychological test findings are paramount. Review: This review examines the research on what has been referred to as symptom validity testing (SVT). Performance above a designated cut-score signifies a ‘passing’ SVT performance, which is likely the best indicator of valid neuropsychological test findings. Likewise, performance substantially below the cut-point, nearing or at chance, signifies invalid test performance; significantly below-chance performance is the sine qua non neuropsychological indicator of malingering. However, as this review points out, the interpretative problems with SVT performance below the cut-point yet far above chance are substantial. This intermediate, border-zone performance on SVT measures is where substantial interpretative challenges exist. Case studies are used to highlight the many areas where additional research is needed. Historical perspectives are reviewed along with the neurobiology of effort, as are reasons why the term performance validity testing (PVT) may be preferable to SVT. Conclusions: Advances in neuroimaging techniques may be key to better understanding the meaning of border-zone SVT failure. The review demonstrates the problems with rigid interpretation of established cut-scores. A better understanding of how certain neurological, neuropsychiatric, and/or test conditions may affect SVT performance is needed.
14. Provance AJ, Terhune EB, Cooley C, Carry PM, Connery AK, Engelman GH, Kirkwood MW. The relationship between initial physical examination findings and failure on objective validity testing during neuropsychological evaluation after pediatric mild traumatic brain injury. Sports Health 2014;6:410-5. PMID: 25177417. PMCID: PMC4137681. DOI: 10.1177/1941738114544444.
Abstract
Background: The symptomatology after mild traumatic brain injury (mTBI) is complex as symptoms are subjective and nonspecific. It is important to differentiate symptoms as neurologically based or caused by noninjury factors. Symptom exaggeration has been found to influence postinjury presentation, and objective validity tests are used to help differentiate these cases. This study examines how concussed patients seen for initial medical workup may present with noncredible effort during follow-up neuropsychological examination and identifies physical findings during evaluation that best predict noncredible performance. Hypothesis: A portion of pediatric patients will demonstrate noncredible effort during neuropsychological testing after mTBI, predicted by failure of certain vestibular and cognitive tests during initial examination. Study Design: Retrospective cohort. Level of Evidence: Level 4. Methods: Participants (n = 80) underwent evaluation by a sports medicine physician ≤3 months from injury, were subsequently seen for a neuropsychological examination, and completed the Medical Symptom Validity Test (MSVT). Variables included results of a mental status examination (orientation), serial 7s examination, Romberg test, and heel-to-toe walking test. The primary outcome variable of interest was pass/fail of the MSVT. Results: Of the participants, 51% were male and 49% were female. Eighteen of 80 (23%) failed the MSVT. Based on univariable logistic regression analysis, the outcomes of the Romberg test (P = 0.0037) and heel-to-toe walking test(P = 0.0066) were identified as significant independent predictors of MSVT failure. In a multivariable model, outcome of Romberg test was the only significant predictor of MSVT failure. The probability of MSVT failure was 66.7% (95% CI, 33.3% to 88.9%) when a subject failed the Romberg test. 
Conclusion: A meaningful percentage of pediatric subjects present evidence of noncredible performance during neuropsychological examination after mTBI. Initial examination findings in some cases may represent symptom exaggeration.
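The headline probability above is just the logistic function applied to the model's linear predictor. A minimal sketch of that conversion, using a hypothetical intercept and coefficient chosen to reproduce the reported ~66.7% figure (these are not the study's fitted values):

```python
import math

def logistic_probability(intercept: float, coef: float, x: float) -> float:
    """Map a logistic-regression linear predictor to a probability."""
    log_odds = intercept + coef * x
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical coefficients (not the study's): chosen so that failing the
# Romberg test (x = 1) gives roughly the 66.7% MSVT-failure probability
# reported above, while passing it (x = 0) gives a much lower probability.
p_fail_romberg = logistic_probability(intercept=-1.6, coef=2.293, x=1.0)  # ~0.67
p_pass_romberg = logistic_probability(intercept=-1.6, coef=2.293, x=0.0)  # ~0.17
```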
Collapse
Affiliation(s)
- Aaron J. Provance
- Sports Medicine Program for Young Athletes, Children’s Hospital Colorado and University of Colorado School of Medicine, Aurora, Colorado
- Aaron J. Provance, MD, Children’s Hospital Colorado, 13123 E 16th Ave, Aurora, CO 80045 (e-mail: )
- E. Bailey Terhune
- Sports Medicine Program for Young Athletes, Children’s Hospital Colorado and University of Colorado School of Medicine, Aurora, Colorado
- Christine Cooley
- Department of Pediatrics, Children’s Hospital Colorado and University of Colorado School of Medicine, Aurora, Colorado
- Patrick M. Carry
- Musculoskeletal Research Center, Children’s Hospital Colorado and University of Colorado School of Medicine, Aurora, Colorado
- Amy K. Connery
- Department of Physical Medicine & Rehabilitation, Children’s Hospital Colorado and University of Colorado School of Medicine, Aurora, Colorado
- Glenn H. Engelman
- Musculoskeletal Research Center, Children’s Hospital Colorado and University of Colorado School of Medicine, Aurora, Colorado
- Michael W. Kirkwood
- Department of Physical Medicine & Rehabilitation, Children’s Hospital Colorado and University of Colorado School of Medicine, Aurora, Colorado
Collapse
15
Abstract
The purpose of this research was to determine whether there is any need, per the Halstead-Reitan instructions, to test each hand uninterruptedly on the Finger Oscillation Test (FoT). To the authors' knowledge, there is no widely available research addressing this issue. Permitting administration of the FoT with alternating hands would theoretically make the assessment more efficient to administer. In this study, participants consisted of 49 graduate students. All were administered the FoT with standard instructions and using an alternating-hands method. The order of administration was counterbalanced to avoid practice effects, and subjects completed distractor tasks between administrations. Results indicated a significant difference between the two administration methods for both the dominant, t(47) = -4.09, p < .001, and nondominant, t(48) = -4.17, p < .001, hands. Surprisingly, mean T-scores were significantly higher for both hands under the alternating-hands administration than under the standard method (50 vs. 44 and 51 vs. 44, respectively). The standard deviations for both hands were also lower using the alternative method. This study highlights the need for neuropsychologists to be aware of established administration protocols and to carefully consider how deviations from these methods could affect test scores.
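The comparison reported above is a paired-samples t-test on the two administrations of each examinee. A minimal sketch of the statistic, using made-up T-scores rather than the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(x: list, y: list) -> float:
    """Paired-samples t statistic: mean of the pairwise differences
    divided by the standard error of those differences."""
    diffs = [a - b for a, b in zip(x, y)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Hypothetical paired T-scores (not the study's data): standard
# uninterrupted administration vs. alternating-hands administration.
standard = [44, 42, 46, 45, 43, 44, 47, 41]
alternating = [50, 49, 52, 51, 48, 50, 53, 47]
t_stat = paired_t(standard, alternating)  # negative: alternating scores higher
```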
Collapse
Affiliation(s)
- Kari Eng
- Department of Psychology, Forest Institute of Professional Psychology, Springfield, Missouri
- Summer Rolin
- Department of Psychology, Forest Institute of Professional Psychology, Springfield, Missouri
- Rachel Fazio
- Department of Psychology, Forest Institute of Professional Psychology, Springfield, Missouri
- Christine Biddle
- Department of Psychology, Forest Institute of Professional Psychology, Springfield, Missouri
- Megan O'Grady
- Department of Psychology, Forest Institute of Professional Psychology, Springfield, Missouri
- Robert Denney
- Department of Psychology, Forest Institute of Professional Psychology, Springfield, Missouri
Collapse
16
Moore RC, Davine T, Harmell AL, Cardenas V, Palmer BW, Mausbach BT. Using the repeatable battery for the assessment of neuropsychological status (RBANS) effort index to predict treatment group attendance in patients with schizophrenia. J Int Neuropsychol Soc 2013; 19:198-205. [PMID: 23234753 PMCID: PMC3568222 DOI: 10.1017/s1355617712001221] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
In a psychosocial treatment study, knowing which participants are likely to put forth adequate effort to maximize their treatment, such as attending group sessions and completing homework assignments, and which participants need additional motivation before engaging in treatment, is crucial to treatment success. This study examined the ability of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) Effort Index (EI), a newly developed measure of suboptimal effort embedded within the RBANS, to predict group attendance in a sample of 128 middle-aged and older adults with schizophrenia. This study was the first to evaluate the EI in a schizophrenia sample. While the EI literature recommends a cutoff score of >3 as indicative of poor effort, a cutoff of >4 was identified as optimal for this sample. Receiver operating characteristic curve analyses were conducted to determine whether the EI could identify participants with high versus low attendance. Results indicated that the EI successfully discriminated between high and low group attendance, and this measure of effort appears to be most valuable as a tool for identifying participants who will have high attendance. Of interest, overall cognitive functioning and symptoms of psychopathology were not predictive of group attendance.
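Cutoff selection of this kind scans candidate thresholds and scores each by its sensitivity and specificity. A minimal sketch with invented EI scores (not the study's data; the example is rigged so that a cutoff of >4 wins, mirroring the optimum reported above):

```python
def roc_points(scores_pos, scores_neg, cutoffs):
    """Sensitivity/specificity of the rule 'score > cutoff' at each cutoff.

    scores_pos: scores for the target group (here, low attenders);
    scores_neg: scores for the comparison group (high attenders).
    """
    points = {}
    for c in cutoffs:
        sensitivity = sum(s > c for s in scores_pos) / len(scores_pos)
        specificity = sum(s <= c for s in scores_neg) / len(scores_neg)
        points[c] = (sensitivity, specificity)
    return points

# Invented EI scores (higher = more suspect effort), not the study's data:
low_attenders = [5, 6, 5, 7, 5, 6]
high_attenders = [1, 0, 2, 4, 3, 0]
points = roc_points(low_attenders, high_attenders, cutoffs=range(8))
# Youden's J = sensitivity + specificity - 1 picks the optimal cutoff.
best_cutoff = max(points, key=lambda c: sum(points[c]) - 1)
```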
Collapse
Affiliation(s)
- Raeanne C. Moore
- Department of Psychiatry, University of California, San Diego, La Jolla, California
- Taylor Davine
- Department of Psychology, San Diego State University, San Diego, California
- Alexandrea L. Harmell
- Department of Psychiatry, University of California, San Diego, La Jolla, California
- Joint Doctoral Program in Clinical Psychology, University of California, San Diego/San Diego State University, San Diego, California
- Veronica Cardenas
- Department of Psychiatry, University of California, San Diego, La Jolla, California
- Barton W. Palmer
- Department of Psychiatry, University of California, San Diego, La Jolla, California
- Brent T. Mausbach
- Department of Psychiatry, University of California, San Diego, La Jolla, California
Collapse
17
Duff K, Spering CC, O'Bryant SE, Beglinger LJ, Moser DJ, Bayless JD, Culp KR, Mold JW, Adams RL, Scott JG. The RBANS Effort Index: base rates in geriatric samples. Appl Neuropsychol 2011; 18:11-7. [PMID: 21390895 PMCID: PMC3074382 DOI: 10.1080/09084282.2010.523354] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
The Effort Index (EI) of the RBANS was developed to assist clinicians in discriminating patients who demonstrate good effort from those with poor effort. However, there are concerns that older adults might be unfairly penalized by this index, which uses uncorrected raw scores. Using five independent samples of geriatric patients with a broad range of cognitive functioning (e.g., cognitively intact, nursing home residents, probable Alzheimer's disease), base rates of failure on the EI were calculated. In cognitively intact and mildly impaired samples, few older individuals were classified as demonstrating poor effort (e.g., 3% in cognitively intact). However, in the more severely impaired geriatric patients, over one third had EI scores that fell above suggested cutoff scores (e.g., 37% in nursing home residents, 33% in probable Alzheimer's disease). In the cognitively intact sample, older and less educated patients were more likely to have scores suggestive of poor effort. Education effects were observed in three of the four clinical samples. Overall cognitive functioning was significantly correlated with EI scores, with poorer cognition being associated with greater suspicion of low effort. The current results suggest that age, education, and level of cognitive functioning should be taken into consideration when interpreting EI results and that significant caution is warranted when examining EI scores in elders suspected of having dementia.
Collapse
Affiliation(s)
- Kevin Duff
- Center for Alzheimer's Care, Imaging and Research, Department of Neurology, University of Utah, Salt Lake City, Utah 84108, USA.
Collapse