1. Boone KB, Vane RP, Victor TL. Critical Review of Recently Published Studies Claiming Long-Term Neurocognitive Abnormalities in Mild Traumatic Brain Injury. Arch Clin Neuropsychol 2025; 40:272-288. [PMID: 39564962 DOI: 10.1093/arclin/acae079]
Abstract
Mild traumatic brain injury (mTBI) is the most common claimed personal injury condition for which neuropsychologists are retained as forensic experts in litigation. It is therefore critical that experts have accurate information when testifying about neurocognitive outcome from concussion. Systematic reviews and six meta-analyses from 1997 to 2011 regarding objective neurocognitive outcome from mTBI provide no evidence that concussed individuals fail to return to baseline by weeks to months post-injury. In the current manuscript, a critical review was conducted of 21 research studies published since the last meta-analysis in 2011 that have claimed to demonstrate long-term (i.e., ≥12 months post-injury) neurocognitive abnormalities in adults with mTBI. Judged against seven proposed methodological criteria for research investigating neurocognitive outcome from mTBI, none of the studies was found to be scientifically adequate. In particular, more than 50% of the 21 studies reporting cognitive dysfunction did not appropriately diagnose mTBI, employ prospective research designs, use standard neuropsychological tests, include appropriate control groups, provide information on motive to feign or use PVTs, or exclude, or adequately consider the impact of, comorbid conditions known to affect neurocognitive scores. We additionally analyzed 15 studies published during the same period that documented no longer-term mTBI-related cognitive abnormalities, and demonstrated that they were generally more methodologically robust than the studies purporting to document cognitive dysfunction. The original meta-analytic conclusions remain the most empirically sound evidence informing our current understanding of favorable outcomes following mTBI.
Affiliation(s)
- Kyle B Boone
- Private Practice, 24564 Hawthorne Blvd., Suite 208, Torrance, California 90505, USA
- Ryan P Vane
- Department of Psychology, California State University, Dominguez Hills, 1000 E. Victoria Street, Carson, California 90747, USA
- Tara L Victor
- Department of Psychology, California State University, Dominguez Hills, 1000 E. Victoria Street, Carson, California 90747, USA
2. Martin PK, Schroeder RW, Odland AP. Neuropsychological Validity Assessment Beliefs and Practices: A Survey of North American Neuropsychologists and Validity Assessment Experts. Arch Clin Neuropsychol 2025; 40:201-223. [PMID: 39564677 DOI: 10.1093/arclin/acae102]
Abstract
OBJECTIVE The present study sought to identify changes in neuropsychological validity assessment beliefs and practices relative to surveys of North American neuropsychologists conducted in 2015 and 2016, obtain a more nuanced understanding of such beliefs and practices, and examine salient validity assessment topics not addressed by previous surveys. METHODS Adult-focused neuropsychologists (n = 445) and neuropsychological validity assessment experts (n = 16) were surveyed regarding their perceptions and practices related to the following topics: (i) importance of validity testing; (ii) multiple performance validity test (PVT) administration and interpretation; (iii) suspected causes of invalidity; (iv) reporting on malingering; (v) assessment of examinees of diverse language, culture, and nation of origin; (vi) terminology; and (vii) most frequently utilized validity measures. RESULTS There was general agreement, if not consensus, across multiple survey topics. The vast majority of neuropsychologists and experts view validity testing as mandatory in clinical and forensic evaluations, administer multiple PVTs regardless of setting, believe validity assessment to be important in the evaluation of all individuals, including older adults and culturally diverse individuals, and view evaluations with few to no validity tests interspersed throughout the evaluation as being of lesser quality. Divergent opinions were also seen among respondents and between neuropsychologists and experts on some topics, including likely causes of invalidity and the assessment and formal communication of malingering. CONCLUSIONS Current results highlight the necessity of formal validity assessment within both clinical and forensic neuropsychological evaluations, and findings document current trends and reported practices within the field.
Affiliation(s)
- Phillip K Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine-Wichita, Wichita, KS, USA
- Ryan W Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine-Wichita, Wichita, KS, USA
- Department of Behavioral Health, Robert J. Dole VA Medical Center, Wichita, KS, USA
3. van Zwam-van der Wijk M, Roor JJ, Ponds R, de Vroege L. Performance validity and outcome of treatment in patients with somatic symptom and related disorders (SSRD). Appl Neuropsychol Adult 2025:1-8. [PMID: 39798121 DOI: 10.1080/23279095.2024.2445715]
Abstract
This study addresses the relationship between performance validity and treatment outcome in a sample of patients with somatic symptom and related disorders (SSRD). A retrospective analysis was performed in a sample of 337 patients with SSRD who received treatment. Interaction effects were determined between performance validity test (PVT) performance and raw change scores, reliable change index, and clinical change of depression, anxiety, and physical symptoms. Performance validity was measured using the Test of Memory Malingering (TOMM). There was no significant difference between the PVT pass and PVT fail groups in change in depression, anxiety, and physical symptoms after treatment. Both groups exhibited a comparable reduction in their symptoms of depression, anxiety, and physical symptoms after treatment. There was also no association between PVT performance and raw change scores, reliable clinical changes, and clinical changes on depression, anxiety, and physical symptoms. Performance validity was not related to treatment outcome in patients with SSRD, which is a clinically relevant finding. Further studies may want to examine other aspects relevant to determining the potential impact of performance (in)validity on treatment outcome in patients with SSRD, such as treatment drop-out or the number of missed/attended treatment sessions. Alternatively, as treatment outcome is usually determined based on patients' self-report, examining the impact that non-credible symptom reporting (i.e., symptom validity test failure) has on treatment outcomes is a logical next step for understanding the impact of response bias beyond the testing session.
Affiliation(s)
- Jeroen J Roor
- Department of Medical Psychology, VieCuri Medical Centre, Venlo, The Netherlands
- School of Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Rudolf Ponds
- School of Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Department of Medical Psychology, Amsterdam University Medical Centre, Amsterdam, The Netherlands
- Lars de Vroege
- Clinical Center of Excellence for Body, Mind, and Health, GGz Breburg, Tilburg, The Netherlands
- Department Tranzo, Tilburg School of Behavioral and Social Sciences, Tilburg University, Tilburg, The Netherlands
4. Finley JCA, Robinson AD, Soble JR, Rodriguez VJ. Using machine learning to detect noncredible cognitive test performance. Clin Neuropsychol 2024:1-18. [PMID: 39673209 DOI: 10.1080/13854046.2024.2440085]
Abstract
Objective: Advanced algorithmic methods may improve the assessment of performance validity during neuropsychological testing. This study investigated whether unsupervised machine learning (ML) could serve as one such method. Method: Participants were 359 adult outpatients who underwent a neuropsychological evaluation for various referral reasons. Data relating to participants' performance validity test scores, medical and psychiatric history, referral reason, litigation status, and disability status were examined in an unsupervised ML model. The model was programmed to synthesize the data into an unspecified number of clusters, which were then compared to predetermined ratings of whether patients had valid or invalid test performance. Ratings were established according to multiple empirical performance validity test scores. To further understand the model, we examined which data were most helpful in its clustering decision-making process. Results: Similar to the clinical determination of patients' performance on neuropsychological testing, the model identified a two-cluster profile consisting of valid and invalid data. The model demonstrated excellent predictive accuracy (area under the curve of .92 [95% CI .88, .97]) when referenced against participants' predetermined validity status. Performance validity test scores were the most influential in the differentiation of clusters, but medical history, referral reason, and disability status were also contributory. Conclusions: These findings serve as a proof of concept that unsupervised ML can accurately assess performance validity using various data obtained during a neuropsychological evaluation. The manner in which unsupervised ML evaluates such data may circumvent some of the limitations with traditional validity assessment approaches. Importantly, unsupervised ML is adaptable to emerging digital technologies within neuropsychology that can be used to further improve the assessment of performance validity.
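To make the workflow concrete, here is a minimal, hypothetical sketch of clustering-then-validation in Python; it is not the authors' model, and the features, sample values, and choice of k-means are stand-ins for illustration only (the abstract does not specify the algorithm or exact feature coding).

```python
# Illustrative sketch only (synthetic data, placeholder features): an unsupervised
# two-cluster solution scored against predetermined valid/invalid ratings,
# loosely mirroring the workflow described above.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 359
X = np.column_stack([
    rng.normal(45, 5, n),        # hypothetical PVT score
    rng.normal(90, 10, n),       # hypothetical embedded validity indicator
    rng.integers(0, 2, n),       # psychiatric history (0/1)
    rng.integers(0, 2, n),       # disability-seeking status (0/1)
])
y_valid = rng.integers(0, 2, n)  # predetermined ratings: 1 = valid, 0 = invalid

Xz = StandardScaler().fit_transform(X)
distances = KMeans(n_clusters=2, n_init=10, random_state=0).fit_transform(Xz)
score = distances[:, 0] - distances[:, 1]   # signed preference for one cluster

# Orient the score so AUC is reported above .50 (clusters carry no inherent labels).
# With real clinical data the clusters would be expected to track validity status.
auc = max(roc_auc_score(y_valid, score), roc_auc_score(y_valid, -score))
print(f"AUC of cluster solution against predetermined validity ratings: {auc:.2f}")
```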
Affiliation(s)
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Anthony D Robinson
- Department of Psychiatry, University of Illinois Chicago College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois Chicago College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois Chicago College of Medicine, Chicago, IL, USA
- Violeta J Rodriguez
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA
5. Crişan I, Erdodi L. Examining the cross-cultural validity of the test of memory malingering and the Rey 15-item test. Appl Neuropsychol Adult 2024; 31:721-731. [PMID: 35476611 DOI: 10.1080/23279095.2022.2064753]
Abstract
OBJECTIVE This study was designed to investigate the cross-cultural validity of two freestanding performance validity tests (PVTs), the Test of Memory Malingering - Trial 1 (TOMM-1) and the Rey Fifteen Item Test (Rey-15) in Romanian-speaking patients. METHODS The TOMM-1 and Rey-15 free recall (FR) and the combination score incorporating the recognition trial (COMB) were administered to a mixed clinical sample of 61 adults referred for cognitive evaluation, 24 of whom had external incentives to appear impaired. Average scores on PVTs were compared between the two groups. Classification accuracies were computed using one PVT against another. RESULTS Patients with identifiable external incentives to appear impaired produced significantly lower scores and more errors on validity indicators. The largest effect sizes emerged on TOMM-1 (Cohen's d = 1.00-1.19). TOMM-1 was a significant predictor of the Rey-15 COMB ≤20 (AUC = .80; .38 sensitivity; .89 specificity at a cutoff of ≤39). Similarly, both Rey-15 indicators were significant predictors of TOMM-1 at ≤39 as the criterion (AUCs = .73-.76; .33 sensitivity; .89-.90 specificity). CONCLUSION Results offer a proof of concept for the cross-cultural validity of the TOMM-1 and Rey-15 in a Romanian clinical sample.
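A brief, hedged sketch of the cross-classification logic reported above (one PVT scored against another as the criterion). The scores below are synthetic; only the TOMM-1 ≤39 and Rey-15 COMB ≤20 cutoffs are taken from the abstract.

```python
# Minimal sketch (not the authors' code): classification accuracy of a TOMM Trial 1
# cutoff (<=39) against a criterion defined by Rey-15 COMB <=20, using hypothetical scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
tomm1 = rng.integers(25, 51, 61)          # hypothetical TOMM-1 scores (maximum 50)
rey15_comb = rng.integers(10, 31, 61)     # hypothetical Rey-15 combination scores (maximum 30)

criterion_invalid = rey15_comb <= 20      # criterion group: Rey-15 COMB <= 20
flagged = tomm1 <= 39                     # predictor: TOMM-1 cutoff <= 39

sensitivity = (flagged & criterion_invalid).sum() / criterion_invalid.sum()
specificity = (~flagged & ~criterion_invalid).sum() / (~criterion_invalid).sum()
auc = roc_auc_score(criterion_invalid, -tomm1)   # lower scores indicate invalidity
print(f"AUC={auc:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```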
Affiliation(s)
- Iulia Crişan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Laszlo Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
6. O'Connor V, Shura R, Armistead-Jehle P, Cooper DB. Neuropsychological Evaluation in Traumatic Brain Injury. Phys Med Rehabil Clin N Am 2024; 35:593-605. [PMID: 38945653 DOI: 10.1016/j.pmr.2024.02.010]
Abstract
Neuropsychological evaluations can be helpful in the aftermath of traumatic brain injury. Cognitive functioning is assessed using standardized assessment tools and by comparing an individual's scores on testing to normative data. These evaluations examine objective cognitive functioning as well as other factors that have been shown to influence performance on cognitive tests (e.g., psychiatric conditions, sleep) in an attempt to answer a specific question from referring providers. Referral questions may focus on the extent of impairment, the trajectory of recovery, or the ability to return to work, sport, or other previous activities.
Affiliation(s)
- Victoria O'Connor
- Department of Veterans Affairs, W. G. (Bill) Hefner VA Healthcare System, 1601 Brenner Avenue (11M), Salisbury, NC 28144, USA; Veterans Integrated Service Networks (VISN)-6 Mid-Atlantic Mental Illness, Research Education and Clinical Center (MIRECC), Durham, NC, USA; Wake Forest School of Medicine, Winston-Salem, NC, USA
- Robert Shura
- Department of Veterans Affairs, W. G. (Bill) Hefner VA Healthcare System, 1601 Brenner Avenue (11M), Salisbury, NC 28144, USA; Veterans Integrated Service Networks (VISN)-6 Mid-Atlantic Mental Illness, Research Education and Clinical Center (MIRECC), Durham, NC, USA; Wake Forest School of Medicine, Winston-Salem, NC, USA; Via College of Osteopathic Medicine, Blacksburg, VA, USA
- Patrick Armistead-Jehle
- Department of Veterans Affairs, Concussion Clinic, Munson Army Health Center, 550 Pope Avenue, Fort Leavenworth, KS 66027, USA
- Douglas B Cooper
- Department of Psychiatry, University of Texas Health Science Center (UT-Health), South Texas VA Healthcare System, San Antonio Polytrauma Rehabilitation Center, 7400 Merton Minter Boulevard, San Antonio, TX 78229, USA; Department of Rehabilitation Medicine, University of Texas Health Science Center (UT-Health), South Texas VA Healthcare System, San Antonio Polytrauma Rehabilitation Center, 7400 Merton Minter Boulevard, San Antonio, TX 78229, USA
7. Ingram PB, Armistead-Jehle P, Childers LG, Herring TT. Cross validation of the response bias scale and the response bias scale-19 in active-duty personnel: use on the MMPI-2-RF and MMPI-3. J Clin Exp Neuropsychol 2024; 46:141-151. [PMID: 38493366 DOI: 10.1080/13803395.2024.2330727]
Abstract
The Response Bias Scale (RBS) is the central measure of cognitive over-reporting in the MMPI family of instruments. Relative to other clinical populations, research evaluating the detection of over-reporting is more limited in Veteran and Active-Duty personnel, which has produced some psychometric variability across studies. Some have suggested that the original scale construction methods resulted in items that negatively impact classification accuracy and, in response, crafted an abbreviated version of the RBS (RBS-19; Ratcliffe et al., 2022; Spencer et al., 2022). In addition, the most recent edition of the MMPI is based on new normative data, which impacts the ability to use existing literature to determine effective cut-scores for the RBS (despite all items having been retained across MMPI versions). To date, no published research exists for the MMPI-3 RBS. The current study examined the utility of the RBS and the RBS-19 in a sample of Active-Duty personnel (n = 186) referred for neuropsychological evaluation. Using performance validity tests as the study criterion, we found that the RBS-19 was generally equivalent to the RBS in classification. Correlations with other MMPI-2-RF over- and under-reporting symptom validity tests were slightly stronger for the RBS-19. Implications and directions for research and practice with the RBS/RBS-19 are discussed, along with implications for neuropsychological assessment and response validity theory.
Affiliation(s)
- Paul B Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Dwight D. Eisenhower Veteran Affairs Medical Center, Eastern Kansas Veteran Healthcare System, Leavenworth, KS, USA
- Lucas G Childers
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Tristan T Herring
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
8. Ingram PB, Armistead-Jehle P, Herring TT, Morris CS. Cross validation of the Personality Assessment Inventory (PAI) Cognitive Bias Scale of Scales (CB-SOS) over-reporting indicators in a military sample. Mil Psychol 2024; 36:192-202. [PMID: 37651693 PMCID: PMC10880507 DOI: 10.1080/08995605.2022.2160151]
Abstract
Following the development of the Cognitive Bias Scale (CBS), three other cognitive over-reporting indicators were created. This study cross-validates these new Cognitive Bias Scale of Scales (CB-SOS) measures in a military sample and contrasts their performance with the CBS. We analyzed data from 288 active-duty soldiers who underwent neuropsychological evaluation. Groups were established based on performance validity testing (PVT) failure. Medium effects (d = .71 to .74) were observed between those passing and failing PVTs. The CB-SOS scales have high specificity (≥.90) but low sensitivity at the suggested cut scores. While all CB-SOS scales were able to achieve .90 specificity, lower cut scores were typically needed. The CBS demonstrated incremental validity beyond CB-SOS-1 and CB-SOS-3; only CB-SOS-2 was incremental beyond the CBS. In a military sample, the CB-SOS scales had more limited sensitivity than in their original validation, indicating an area of limited utility despite easier calculation. The CBS performs comparably to, if not better than, the CB-SOS scales. The differences in CB-SOS-2's performance between this study and its initial validation suggest that its psychometric properties may be sample dependent. Given their ease of calculation and relatively high specificity, our study supports interpreting elevated CB-SOS scores as indicating examinees who are likely to fail concurrent PVTs.
Affiliation(s)
- Paul B. Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas, USA
- Dwight D. Eisenhower Veteran Affairs Medical Center, Eastern Kansas Veteran Healthcare System, Leavenworth, Kansas, USA
- Tristan T. Herring
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas, USA
- Cole S. Morris
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas, USA
9. Shura RD, Ingram PB, Miskey HM, Martindale SL, Rowland JA, Armistead-Jehle P. Validation of the personality assessment inventory (PAI) cognitive bias (CBS) and cognitive bias scale of scales (CB-SOS) in a post-deployment veteran sample. Clin Neuropsychol 2023; 37:1548-1565. [PMID: 36271822 DOI: 10.1080/13854046.2022.2131630]
Abstract
Objective: The present study evaluated the function of four cognitive symptom validity scales on the Personality Assessment Inventory (PAI), the Cognitive Bias Scale (CBS) and the Cognitive Bias Scale of Scales (CB-SOS) 1, 2, and 3, in a sample of Veterans who volunteered for a study of neurocognitive functioning. Method: 371 Veterans (88.1% male, 66.1% White) completed a battery including the Miller Forensic Assessment of Symptoms Test (M-FAST), the Word Memory Test (WMT), and the PAI. Independent samples t-tests compared mean differences on cognitive bias scales between valid and invalid groups on the M-FAST and WMT. Area under the curve (AUC), sensitivity, specificity, and hit rate across various scale point-estimates were used to evaluate classification accuracy of the CBS and CB-SOS scales. Results: Group differences were significant with moderate effect sizes for all cognitive bias scales between the WMT-classified groups (d = .52-.55), and large effect sizes between the M-FAST-classified groups (d = 1.27-1.45). AUC effect sizes were moderate across the WMT-classified groups (.650-.676) and large across M-FAST-classified groups (.816-.854). When specificity was set to .90, sensitivity was higher for M-FAST and the CBS performed the best (sensitivity = .42). Conclusion: The CBS and CB-SOS scales seem to better detect symptom invalidity than performance invalidity in Veterans using cutoff scores similar to those found in prior studies with non-Veterans.
Affiliation(s)
- Robert D Shura
- W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
- VA Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
- Paul B Ingram
- Texas Tech University, Lubbock, TX, USA
- Dwight D. Eisenhower Veteran Affairs Medical Center, Eastern Kansas Veteran Healthcare System, Leavenworth, KS, USA
- Holly M Miskey
- W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
- VA Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
- Sarah L Martindale
- W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
- VA Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
- Jared A Rowland
- W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
- VA Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
10. Davis JJ. Time is money: Examining the time cost and associated charges of common performance validity tests. Clin Neuropsychol 2023; 37:475-490. [PMID: 35414332 DOI: 10.1080/13854046.2022.2063190]
Abstract
Objective: This study presents data on the time cost and associated charges for common performance validity tests (PVTs). It also applies an approach from cost-effectiveness research to the comparison of tests that incorporates both cost and classification accuracy. Method: A recent test usage survey was used to identify PVTs in common use among adult neuropsychologists. Data on test administration and scoring time were aggregated. Charges per test were calculated. A cost-effectiveness approach was applied to compare pairs of tests from three studies using data on test administration time and classification accuracy, operationalized as improvement in posterior probability beyond base rate. Charges per unit increase in posterior probability over base rate were calculated for base rates of invalidity ranging from 10 to 40%. Results: Ten commonly used PVTs showed a wide range in test administration and scoring time, from 1 to 3 minutes to over 40 minutes, with associated charge estimates from $4 to $284. Cost-effectiveness comparisons illustrated the nuance in test selection and the benefit of considering cost in relation to outcome rather than prioritizing time (i.e., cost minimization) or classification accuracy alone. Conclusions: Findings extend recent research efforts to fill knowledge gaps related to the cost of neuropsychological evaluation. The cost-effectiveness approach warrants further study in other samples with different neuropsychological and outcome measures.
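The posterior-probability metric described above follows directly from Bayes' theorem, so a small sketch can show how charge per unit gain might be computed; the PVT names, charges, and accuracy values below are hypothetical, not figures from the study.

```python
# Illustrative sketch (hypothetical numbers): cost effectiveness operationalized as
# charge per unit improvement in posterior probability of invalidity beyond the base
# rate, computed from a PVT's sensitivity/specificity via Bayes' theorem.
def posterior_given_fail(base_rate: float, sensitivity: float, specificity: float) -> float:
    """P(invalid | PVT failure) for a given base rate of invalidity."""
    p_fail = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    return sensitivity * base_rate / p_fail

def charge_per_unit_gain(charge: float, base_rate: float, sensitivity: float, specificity: float) -> float:
    gain = posterior_given_fail(base_rate, sensitivity, specificity) - base_rate
    return charge / gain

# Two hypothetical PVTs: a brief, cheap one and a longer, more accurate one.
for name, charge, sens, spec in [("Brief PVT", 12.0, 0.55, 0.90), ("Long PVT", 120.0, 0.75, 0.95)]:
    for base_rate in (0.10, 0.20, 0.30, 0.40):
        cost = charge_per_unit_gain(charge, base_rate, sens, spec)
        print(f"{name}, base rate {base_rate:.0%}: ${cost:,.0f} per unit gain in posterior probability")
```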
Affiliation(s)
- Jeremy J Davis
- Department of Neurology, Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
11. Cutler L, Greenacre M, Abeare CA, Sirianni CD, Roth R, Erdodi LA. Multivariate models provide an effective psychometric solution to the variability in classification accuracy of D-KEFS Stroop performance validity cutoffs. Clin Neuropsychol 2023; 37:617-649. [PMID: 35946813 DOI: 10.1080/13854046.2022.2073914]
Abstract
Objective: The study was designed to expand on the results of previous investigations of the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions. Method: The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). Results: Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 on the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. Conclusions: A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the "invalid before impaired" paradox.
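A minimal sketch of the multivariate decision rule quoted above; the subtest keys are placeholders for the four D-KEFS Stroop conditions, and the function is an illustration rather than the authors' scoring code.

```python
# Minimal sketch (not the authors' code): flag an administration when >=3 of the four
# D-KEFS Stroop subtest age-corrected scaled scores (ACSS) are <=6, or >=2 are <=5.
from typing import Dict

def dkefs_stroop_multivariate_flag(acss: Dict[str, int]) -> bool:
    """Return True if the multivariate performance validity rule is triggered."""
    failures_at_6 = sum(score <= 6 for score in acss.values())
    failures_at_5 = sum(score <= 5 for score in acss.values())
    return failures_at_6 >= 3 or failures_at_5 >= 2

example = {"color_naming": 7, "word_reading": 7, "inhibition": 5, "inhibition_switching": 5}
print(dkefs_stroop_multivariate_flag(example))  # True: two scores at <=5
```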
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Matthew Greenacre
- Schulich School of Medicine, Western University, London, Ontario, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Robert Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
12. Horner MD, Denning JH, Cool DL. Self-reported disability-seeking predicts PVT failure in veterans undergoing clinical neuropsychological evaluation. Clin Neuropsychol 2023; 37:387-401. [PMID: 35387574 DOI: 10.1080/13854046.2022.2056923]
Abstract
Objective: This study examined disability-related factors as predictors of PVT performance in Veterans who underwent neuropsychological evaluation for clinical purposes, not for determination of disability benefits. Method: Participants were 1,438 Veterans who were seen for clinical evaluation in a VA Medical Center's Neuropsychology Clinic. All were administered the TOMM, MSVT, or both. Predictors of PVT performance included (1) whether Veterans were receiving VA disability benefits ("service connection") for psychiatric or neurological conditions at the time of evaluation, and (2) whether Veterans reported on clinical interview that they were in the process of applying for disability benefits. Data were analyzed using binary logistic regression, with PVT performance as the dependent variable in separate analyses for the TOMM and MSVT. Results: Veterans who were already receiving VA disability benefits for psychiatric or neurological conditions were significantly more likely to fail both the TOMM and the MSVT, compared to Veterans who were not receiving benefits for such conditions. Independently of receiving such benefits, Veterans who reported that they were applying for disability benefits were significantly more likely to fail the TOMM and MSVT than were Veterans who denied applying for benefits at the time of evaluation. Conclusions: These findings demonstrate that simply being in the process of applying for disability benefits increases the likelihood of noncredible performance. The presence of external incentives can predict the validity of neuropsychological performance even in clinical, non-forensic settings.
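The analytic approach (binary logistic regression with PVT failure as the outcome) can be sketched as follows; the data are simulated and the effect sizes arbitrary, so this illustrates the model specification only, not the study's results.

```python
# Illustrative sketch (synthetic data, not the study's): binary logistic regression with
# PVT failure as the outcome and two disability-related predictors, mirroring the
# analytic approach described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1438
df = pd.DataFrame({
    "service_connected": rng.integers(0, 2, n),   # receiving VA disability benefits (0/1)
    "applying": rng.integers(0, 2, n),            # currently applying for benefits (0/1)
})
# Simulate higher odds of PVT failure for both predictors (hypothetical effect sizes).
linpred = -1.5 + 0.8 * df["service_connected"] + 0.9 * df["applying"]
df["pvt_fail"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

model = smf.logit("pvt_fail ~ service_connected + applying", data=df).fit(disp=False)
print(np.exp(model.params))   # odds ratios for each predictor
```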
Affiliation(s)
- Michael David Horner
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- John H Denning
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Danielle L Cool
- Mental Health Service, Ralph H. Johnson Department of Veterans Affairs Medical Center, Charleston, SC, USA
13. Ingram PB, Herring TT, Armistead-Jehle P. Evaluating Personality Assessment Inventory Response Patterns in Active-Duty Personnel With Head Injury Using a Latent Class Approach. Arch Clin Neuropsychol 2023. [PMID: 36647732 DOI: 10.1093/arclin/acac113]
Abstract
OBJECTIVE Previous research has found that individuals with brain injury present with a variety of distinct symptom sets on the Personality Assessment Inventory (PAI), although the number of groups and the symptoms they reflect have varied across studies. METHOD In active-duty personnel with a remote history of mild traumatic brain injury (n = 384) who were evaluated at a neuropsychology clinic, we used a retrospective database to examine whether there are distinct groups of individuals with different symptom sets as measured on the PAI. We examined the potential for distinct groups of respondents by conducting a latent class analysis of the clinical scales. Post hoc testing of group structures was conducted on concurrently administered cognitive testing, performance validity tests, and the PAI subscales. RESULTS Findings indicate a pattern of broad symptom severity as the most probable explanation for multiple groups of respondents, suggesting that no distinct symptom sets were observed within this population. Pathology levels were most elevated on internalizing and thought disorder scales across the various class solutions. CONCLUSION Findings indicate that among active-duty service members with remote brain injury, there are no distinct groups of respondents with different symptom types, unlike what has been found in prior work with other neuropsychology samples. We conclude that the groups found are likely a function of general psychopathology present in the population/sample rather than bona fide differences.
Affiliation(s)
- Paul B Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Dwight D Eisenhower Veteran Affairs Medical Center, Eastern Kansas Veteran Healthcare System, Leavenworth, KS, USA
- Tristan T Herring
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
14. Denning JH. The TOMM1 discrepancy index (TDI): A new performance validity test (PVT) that differentiates between invalid cognitive testing and those diagnosed with dementia. Appl Neuropsychol Adult 2023; 30:83-90. [PMID: 33945362 DOI: 10.1080/23279095.2021.1910951]
Abstract
There is a need to develop performance validity tests (PVTs) that accurately identify those with severe cognitive decline but also remain sensitive to those suspected of invalid cognitive testing. The TOMM1 Discrepancy Index (TDI) attempts to address both of these issues. Veterans diagnosed with dementia (n = 251) were administered TOMM1 and the MSVT to develop the TDI (TOMM1 percent correct minus MSVT Free Recall percent correct). Cutoffs based on the dementia sample were then used to identify those in the non-dementia sample (n = 1,226) suspected of invalid test performance (n = 401). Combining TOMM1 and the TDI in the dementia sample greatly reduced the false positive rate (specificity = 0.97) at a cutoff of 28 points or less on the TDI. Those suspected of invalid testing were identified at much higher rates (sensitivity = 0.75) compared to the MSVT genuine memory impairment profile (GMIP, sensitivity = 0.49). By utilizing a neurologically plausible pattern of scores across two PVTs, the TDI correctly classified those with dementia and identified a large percentage of those with invalid test performance. PVTs utilizing a complex pattern of performance may help reduce one's ability to fabricate cognitive deficits.
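A hedged sketch of the TDI computation described above: the TDI formula and the ≤28 cutoff come from the abstract, but the TOMM1 cutoff used in the combined rule (≤40) and the exact way the two indicators are combined are assumptions made for illustration.

```python
# Minimal sketch (not the author's implementation): TDI = TOMM Trial 1 percent correct
# minus MSVT Free Recall percent correct; a flag is raised when TOMM1 is below an
# assumed cutoff AND the discrepancy is <=28 points (a neurologically implausible profile).
def tomm1_discrepancy_index(tomm1_correct: int, msvt_free_recall_pct: float) -> float:
    tomm1_pct = 100.0 * tomm1_correct / 50      # TOMM Trial 1 has 50 items
    return tomm1_pct - msvt_free_recall_pct

def flag_invalid(tomm1_correct: int, msvt_free_recall_pct: float, tomm1_cutoff: int = 40) -> bool:
    """Hypothetical combined rule: low TOMM1 plus a small TOMM1-minus-Free-Recall discrepancy."""
    tdi = tomm1_discrepancy_index(tomm1_correct, msvt_free_recall_pct)
    return tomm1_correct <= tomm1_cutoff and tdi <= 28

# Dementia-like profile: low TOMM1 but far lower free recall -> large discrepancy -> not flagged.
print(flag_invalid(tomm1_correct=35, msvt_free_recall_pct=30.0))   # TDI = 40 -> False
# Suspect profile: low TOMM1 with relatively preserved free recall -> small discrepancy -> flagged.
print(flag_invalid(tomm1_correct=35, msvt_free_recall_pct=55.0))   # TDI = 15 -> True
```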
Affiliation(s)
- John H Denning
- Department of Veteran Affairs, Mental Health Service, Ralph H. Johnson Veterans Affairs Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
15. The Relationship Between Cognitive Functioning and Symptoms of Depression, Anxiety, and Post-Traumatic Stress Disorder in Adults with a Traumatic Brain Injury: a Meta-Analysis. Neuropsychol Rev 2021; 32:758-806. [PMID: 34694543 DOI: 10.1007/s11065-021-09524-1]
Abstract
A thorough understanding of the relationship between cognitive test performance and symptoms of depression, anxiety, or post-traumatic stress disorder (PTSD) in people with traumatic brain injury (TBI) is important given the high prevalence of these emotional symptoms following injury. It is also important to understand whether these relationships are affected by TBI severity, and the validity of test performance and symptom report. This meta-analysis was conducted to investigate whether these symptoms are associated with cognitive test performance alterations in adults with a TBI. This meta-analysis was prospectively registered on the PROSPERO International Prospective Register of Systematic Reviews website (registration number: CRD42018089194). The electronic databases Medline, PsycINFO, and CINAHL were searched for journal articles published up until May 2020. In total, 61 studies were included, which enabled calculation of pooled effect sizes for the cognitive domains of immediate memory (verbal and visual), recent memory (verbal and visual), attention, executive function, processing speed, and language. Depression had a small, negative relationship with most cognitive domains. These relationships remained, for the most part, when samples with only mild TBI (mTBI) were analysed separately, but not for samples with only more severe TBI (sTBI). A similar pattern of results was found in the anxiety analysis. PTSD had a small, negative relationship with verbal memory, in samples with mTBI-only. No data were available for the PTSD analysis with sTBI samples. Moderator analyses indicated that the relationships between emotional symptoms and cognitive test performance may be impacted to some degree by exclusion of participants with atypical performance on performance validity tests (PVTs) or symptom validity tests (SVTs); however, study numbers were small and changes in effect size were not statistically significant. These findings are useful in synthesising what is currently known about the relationship between cognitive test performance and emotional symptoms in adults with TBI, demonstrating significant, albeit small, relationships between emotional symptoms and cognitive test performance in multiple domains, in non-military samples. Some of these relationships appeared to be mildly impacted by controlling for performance validity or symptom validity; however, this was based on the relatively few studies using validity tests. More research including PVTs and SVTs whilst examining the relationship between emotional symptoms and cognitive outcomes is needed.
16. Messa I, Holcomb M, Lichtenstein JD, Tyson BT, Roth RM, Erdodi LA. They are not destined to fail: a systematic examination of scores on embedded performance validity indicators in patients with intellectual disability. Aust J Forensic Sci 2021. [DOI: 10.1080/00450618.2020.1865457]
Affiliation(s)
- Isabelle Messa
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Brad T Tyson
- Neuropsychological Service, EvergreenHealth Medical Center, Kirkland, WA, USA
- Robert M Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
17. Modiano YA, Taiwo Z, Pastorek NJ, Webber TA. The Structured Inventory of Malingered Symptomatology Amnestic Disorders Scale (SIMS-AM) Is Insensitive to Cognitive Impairment While Accurately Identifying Invalid Cognitive Symptom Reporting. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09420-2]
18. Clausen AN, Bouchard HC, Welsh-Bohmer KA, Morey RA. Assessment of Neuropsychological Function in Veterans With Blast-Related Mild Traumatic Brain Injury and Subconcussive Blast Exposure. Front Psychol 2021; 12:686330. [PMID: 34262512 PMCID: PMC8273541 DOI: 10.3389/fpsyg.2021.686330]
Abstract
Objective: The majority of combat-related head injuries are associated with blast exposure. While Veterans with mild traumatic brain injury (mTBI) report cognitive complaints and exhibit poorer neuropsychological performance, there is little evidence examining the effects of subconcussive blast exposure, which does not meet clinical symptom criteria for mTBI during the acute period following exposure. We compared chronic effects of combat-related blast mTBI and combat-related subconcussive blast exposure on neuropsychological performance in Veterans. Methods: Post-9/11 Veterans with combat-related subconcussive blast exposure (n = 33), combat-related blast mTBI (n = 26), and controls (n = 33) without combat-related blast exposure completed neuropsychological assessments of intellectual and executive functioning, processing speed, and working memory via NIH Toolbox, assessment of clinical psychopathology, a retrospective account of blast exposures and non-blast-related head injuries, and self-reported current medication. Huber robust regressions were employed to compare neuropsychological performance across groups. Results: Veterans with combat-related blast mTBI and subconcussive blast exposure displayed significantly slower processing speed compared with controls. After adjusting for post-traumatic stress disorder and depressive symptoms, those with combat-related mTBI exhibited slower processing speed than controls. Conclusion: Veterans in the combat-related blast mTBI group exhibited slower processing speed relative to controls even when controlling for PTSD and depression. Cognition did not significantly differ between subconcussive and control groups or subconcussive and combat-related blast mTBI groups. Results suggest neurocognitive assessment may not be sensitive enough to detect long-term effects of subconcussive blast exposure, or that psychiatric symptoms may better account for cognitive sequelae following combat-related subconcussive blast exposure or combat-related blast mTBI.
Affiliation(s)
- Ashley N. Clausen
- Kansas City VA Medical Center, Kansas City, MO, United States
- Duke-University of North Carolina at Chapel Hill Brain Imaging and Analysis Center, Duke University, Durham, NC, United States
- VA Mid-Atlantic Mental Illness Research, Education and Clinical Center (MIRECC), Durham Veteran Affairs Healthcare System, Durham, NC, United States
- Heather C. Bouchard
- Duke-University of North Carolina at Chapel Hill Brain Imaging and Analysis Center, Duke University, Durham, NC, United States
- VA Mid-Atlantic Mental Illness Research, Education and Clinical Center (MIRECC), Durham Veteran Affairs Healthcare System, Durham, NC, United States
- Rajendra A. Morey
- Duke-University of North Carolina at Chapel Hill Brain Imaging and Analysis Center, Duke University, Durham, NC, United States
- VA Mid-Atlantic Mental Illness Research, Education and Clinical Center (MIRECC), Durham Veteran Affairs Healthcare System, Durham, NC, United States
- Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC, United States
- Center for Cognitive Neuroscience, Duke University, Durham, NC, United States
19. Relations Among Performance and Symptom Validity, Mild Traumatic Brain Injury, and Posttraumatic Stress Disorder Symptom Burden in Postdeployment Veterans. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09415-z]
20. Schneider JC, Hendrix-Bennett F, Beydoun HA, Johnstone B. A Retrospective Study of Demographic, Medical, and Psychological Predictors of Readiness in Service Members With Mild Traumatic Brain Injury. Mil Med 2021; 186:e401-e409. [PMID: 33175963 DOI: 10.1093/milmed/usaa274]
Abstract
INTRODUCTION Given the significant number of service members who have incurred mild traumatic brain injury (TBI) over the past two decades, this study was completed to determine the relative contribution of demographic, TBI-related, and psychological factors that predict the readiness of service members with primarily mild TBI. METHODS AND MATERIALS This retrospective study included 141 service members who were evaluated at an outpatient military TBI rehabilitation clinic. Information regarding demographics, TBI-related variables, and psychological factors was collected and entered into hierarchical multinomial logistic regressions to predict military work status. Demographic predictor variables included age, race, gender, rank, and service branch; TBI-specific variables included time since injury and neuropsychological variables (i.e., Wechsler Adult Intelligence Scale-IV (WAIS-IV) Full Scale Intelligence Quotient (FSIQ) and Processing Speed Indices; California Verbal Learning Test-IV total recall t-score); and psychiatric variables included concomitant psychiatric diagnoses and Personality Assessment Inventory indices. The outcome variable was the service member's military work status (i.e., return to duty (RTD); Medical Evaluation Board-disabled (MEB); retired) at time of discharge from the TBI clinic. RESULTS Statistical analyses indicated that the total model predicted 31% of the variance in work status, with demographics predicting 16% of the variance, concomitant psychiatric diagnoses and WAIS-IV FSIQ predicting an additional 12%, and subjective somatic/psychological distress (Personality Assessment Inventory indices) predicting an additional 3%. Regarding the primary groups of interest (i.e., RTD vs. MEB), stepwise regressions indicated that those who RTD had higher intelligence and reported less physical/psychological distress than the disabled group. CONCLUSIONS In general, those service members who were able to RTD versus those who were classified as disabled (MEB) were of higher IQ and reported less somatic/psychological distress. Of note, traditional indices of TBI severity did not predict the ability of the sample to RTD. The results suggest the importance of treating psychological conditions and identifying possible indicators of resilience (e.g., higher intelligence) to increase the readiness of service members with mild TBI.
Affiliation(s)
- Felicia Hendrix-Bennett
- Fort Belvoir Intrepid Spirit Center, Fort Belvoir, VA 22060, USA
- Defense and Veterans Brain Injury Center, Fort Belvoir Intrepid Spirit Center, Fort Belvoir, VA 22060, USA
- General Dynamics Information Technology, Falls Church, VA 22042, USA
- Hind A Beydoun
- Department of Research Programs, Fort Belvoir Community Hospital, Fort Belvoir, VA 22060, USA
- Brick Johnstone
- Fort Belvoir Intrepid Spirit Center, Fort Belvoir, VA 22060, USA
- Defense and Veterans Brain Injury Center, Fort Belvoir Intrepid Spirit Center, Fort Belvoir, VA 22060, USA
- Virginia Crawford Research Institute, Shepherd Center, Atlanta, GA 30309, USA
21. Ord AS, Shura RD, Sansone AR, Martindale SL, Taber KH, Rowland JA. Performance validity and symptom validity tests: Are they measuring different constructs? Neuropsychology 2021; 35:241-251. [PMID: 33829824 DOI: 10.1037/neu0000722]
Abstract
OBJECTIVE To evaluate the relationships among performance validity, symptom validity, symptom self-report, and objective cognitive testing. METHOD Combat Veterans (N = 338) completed a neurocognitive assessment battery and several self-report symptom measures assessing depression, posttraumatic stress disorder (PTSD) symptoms, sleep quality, pain interference, and neurobehavioral complaints. All participants also completed two performance validity tests (PVTs) and one stand-alone symptom validity test (SVT) along with two embedded SVTs. RESULTS Results of an exploratory factor analysis revealed a three-factor solution: performance validity, cognitive performance, and symptom report (SVTs loaded on the third factor). Results of t tests demonstrated that participants who failed PVTs displayed significantly more severe symptoms and significantly worse performance on most measures of neurocognitive functioning compared to those who passed. Participants who failed a stand-alone SVT also reported significantly more severe symptomatology on all symptom report measures, but the pattern of cognitive performance differed based on the selected SVT cutoff. Multiple linear regressions revealed that both SVT and PVT failure explained unique variance in symptom report, but only PVT failure significantly predicted cognitive performance. CONCLUSIONS Performance and symptom validity tests measure distinct but related constructs. SVTs and PVTs are significantly related to both cognitive performance and symptom report; however, the relationship between symptom validity and symptom report is strongest. SVTs are also differentially related to cognitive performance and symptom report based on the utilized cutoff score.
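As an illustration of the factor-analytic step described above, the sketch below fits a three-factor exploratory model to synthetic data with placeholder variable names (assuming scikit-learn ≥ 0.24 for the rotation option); it is not the authors' analysis.

```python
# Illustrative sketch only (synthetic data): an exploratory factor analysis of validity,
# cognitive, and symptom variables, extracting three factors as in the solution above.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n = 338
data = pd.DataFrame(
    rng.normal(size=(n, 8)),
    columns=["pvt1", "pvt2", "memory", "attention", "speed", "depression", "ptsd", "sleep"],
)

fa = FactorAnalysis(n_components=3, rotation="varimax").fit(data)
loadings = pd.DataFrame(fa.components_.T, index=data.columns,
                        columns=["factor1", "factor2", "factor3"])
print(loadings.round(2))   # with real data, loadings would group by validity/cognition/symptoms
```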
Affiliation(s)
- Anna S Ord
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center (MA-MIRECC)
22. Modiano YA, Webber T, Cerbone B, Haneef Z, Pastorek NJ. Predictive utility of the Minnesota Multiphasic Personality Inventory-2-RF (MMPI-2-RF) in differentiating psychogenic nonepileptic seizures and epileptic seizures in male veterans. Epilepsy Behav 2021; 116:107731. [PMID: 33517198 DOI: 10.1016/j.yebeh.2020.107731]
Abstract
OBJECTIVE While psychogenic nonepileptic seizures (PNES) and epileptic seizures (ES) often present similarly, they are etiologically distinct, and correct diagnosis is essential for ensuring appropriate treatment and improving outcomes. The Minnesota Multiphasic Personality Inventory-2-RF (MMPI-2-RF) may assist in differential diagnosis, but prior investigations have been limited by disproportionately female samples, inconsistent accounting for profile invalidity, and limited intra-scale variability from dichotomizing variables. The current investigation addressed these gaps by assessing diagnostic utility of the MMPI-2-RF in differentiating PNES and ES in a male sample of veterans while conservatively accounting for profile invalidity and using a statistical approach that allows for consideration of continuous independent variables to better appreciate intra-scale variance. METHOD One hundred and forty-four veterans completed the MMPI-2-RF and were diagnosed with PNES (57.6%) or ES (42.4%) by a board-certified neurologist following continuous video-EEG monitoring. Participants with validity scores falling in the definitely or likely invalid ranges were excluded to ensure construct validity among clinical/substantive scales. Independent samples t-tests assessed differences in MMPI-2-RF variables by diagnostic groups. Hierarchical stepwise logistic regressions assessed predictive utility of MMPI-2-RF indices. A clinical calculator was derived from regression findings to help with diagnostic prediction. RESULTS Males with PNES endorsed significantly higher scores on F-r, FBS-r, RBS, RC1, RC7, HPC, and NUC (medium to large effect sizes). The regression block that contained validity, restructured clinical (RC1), and substantive scales (GIC, SUI) had a hit rate of 75.69%, which was an improvement from the baseline model hit rate of 57.64%. Higher endorsement on RC1 and lower reporting on GIC significantly predicted PNES diagnosis for males. CONCLUSIONS The Minnesota Multiphasic Personality Inventory-2-RF improved diagnostic accuracy of PNES versus ES among male veterans, and RC1 (somatic complaints) emerged as a significant predictor for males with PNES, in line with hypotheses. Several clinical/substantive scales assisted with differential diagnosis after careful accounting for profile validity. Future studies can validate findings among males outside of veteran samples.
Affiliation(s)
- Yosefa A Modiano
- Michael E. DeBakey VA Medical Center, Mental Health Care Line, 2002 Holcombe Blvd., Houston, TX 77030, USA
- Troy Webber
- Michael E. DeBakey VA Medical Center, Mental Health Care Line, 2002 Holcombe Blvd., Houston, TX 77030, USA
- Brittany Cerbone
- Barrow Neurological Institute, 350 West Thomas Rd., Phoenix, AZ 85013, USA
- Zulfi Haneef
- Michael E. DeBakey VA Medical Center, Neurology Care Line, 2002 Holcombe Blvd., Houston, TX 77030, USA; Baylor College of Medicine, Department of Neurology, 1 Baylor Plaza, Houston, TX 77030, USA
- Nicholas J Pastorek
- Michael E. DeBakey VA Medical Center, Rehabilitation Care Line, 2002 Holcombe Blvd., Houston, TX 77030, USA
23. Gegner J, Erdodi LA, Giromini L, Viglione DJ, Bosi J, Brusadelli E. An Australian study on feigned mTBI using the Inventory of Problems - 29 (IOP-29), its Memory Module (IOP-M), and the Rey Fifteen Item Test (FIT). Appl Neuropsychol Adult 2021; 29:1221-1230. [PMID: 33403885 DOI: 10.1080/23279095.2020.1864375]
Abstract
We investigated the classification accuracy of the Inventory of Problems - 29 (IOP-29), its newly developed memory module (IOP-M), and the Fifteen Item Test (FIT) in an Australian community sample (N = 275). One third of the participants (n = 93) were asked to respond honestly; two thirds were instructed to feign mild TBI. Half of the feigners (n = 90) were coached to avoid detection by not exaggerating; half were not (n = 92). All measures successfully discriminated between honest responders and feigners, with large effect sizes (d ≥ 1.96). The effect size for the IOP-29 (d ≥ 4.90), however, was about two to three times larger than those produced by the IOP-M and FIT. Also noteworthy, the IOP-29 and IOP-M showed excellent sensitivity (>90% for the former, >80% for the latter) in both the coached and uncoached feigning conditions, at perfect specificity. In contrast, the sensitivity of the FIT was 71.7% within the uncoached simulator group and 53.3% within the coached simulator group, at a nearly perfect specificity of 98.9%. These findings suggest that the validity of the IOP-29 and IOP-M should generalize to Australian examinees and that the IOP-29 and IOP-M likely outperform the FIT in the detection of feigned mTBI.
Affiliation(s)
- Jennifer Gegner
- Department of Psychology, University of Wollongong, Wollongong, Australia
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
24. Young G. Thirty Complexities and Controversies in Mild Traumatic Brain Injury and Persistent Post-concussion Syndrome: a Roadmap for Research and Practice. Psychol Inj Law 2020. [DOI: 10.1007/s12207-020-09395-6]
25. Martindale SL, Shura RD, Ord AS, Williams AM, Brearly TW, Miskey HM, Rowland JA. Symptom burden, validity, and cognitive performance in Iraq and Afghanistan veterans. Appl Neuropsychol Adult 2020; 29:1068-1077. [PMID: 33202168 DOI: 10.1080/23279095.2020.1847111]
Abstract
INTRODUCTION The present study evaluates the complex relationships between symptom burden, validity, and cognition in a sample of Iraq and Afghanistan veterans to identify key characteristic symptoms and validity measures driving cognitive performance. We hypothesized that symptom and performance validity would account for poorer outcomes on cognitive performance beyond psychological symptoms. METHODS Veterans (n = 226) completed a cognitive test battery, Personality Assessment Inventory (PAI), Word Memory Test (WMT), and Miller Forensic Assessment Symptom Test (M-FAST). Partial least squares structural equation modeling (PLS-SEM) modeled the fully-adjusted relationships among PAI subscales, validity, and cognitive performance. RESULTS 23.45% of participants failed validity indices (19.9% WMT; 7.1% M-FAST). PLS-SEM indicated PAI subscales were not directly associated with performance or symptom validity measures, and there were no direct effects between validity performance and cognitive performance. Several PAI subscales were directly associated with measures of verbal abstraction, visual processing, and verbal learning and memory. CONCLUSION Contrary to hypotheses, symptom and performance validity did not account for poorer outcomes on cognitive performance beyond symptom burden in the PLS-SEM model. Results highlight the association between psychiatric symptoms and cognitive performance beyond validity status.
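One small computational point worth making explicit is how the overall 23.45% failure rate relates to the per-measure rates (19.9% WMT, 7.1% M-FAST): a participant counts as failing if either measure is failed, so the combined rate depends on how much the two sets of failures overlap. A minimal pandas sketch with invented flags, not the study data:

```python
import pandas as pd

# Invented pass/fail flags (1 = failed); illustrates an "any failure" rate when the
# measures only partly overlap in whom they flag.
df = pd.DataFrame({
    "wmt_fail":   [1, 0, 0, 1, 0, 0, 0, 1, 0, 0],
    "mfast_fail": [0, 0, 1, 0, 0, 0, 0, 1, 0, 0],
})
df["any_fail"] = ((df["wmt_fail"] == 1) | (df["mfast_fail"] == 1)).astype(int)
print(df[["wmt_fail", "mfast_fail", "any_fail"]].mean())  # per-measure and combined rates
```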
Collapse
Affiliation(s)
- Sarah L Martindale
- Research & Academic Affairs Service Line, W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA.,Physiology & Pharmacology Division, Wake Forest School of Medicine, Winston-Salem, NC, USA
| | - Robert D Shura
- Research & Academic Affairs Service Line, W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA.,Psychiatry & Behavioral Medicine, Wake Forest School of Medicine, Winston-Salem, NC, USA
| | - Anna S Ord
- Research & Academic Affairs Service Line, W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
| | - Ann M Williams
- Research & Academic Affairs Service Line, W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
| | - Timothy W Brearly
- Research & Academic Affairs Service Line, W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA.,Neuropsychology Assessment - Directorate of Behavioral Health (Consultation & Education), Walter Reed National Military Medical Center, Bethesda, MD, USA
| | - Holly M Miskey
- Research & Academic Affairs Service Line, W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA.,Psychiatry & Behavioral Medicine, Wake Forest School of Medicine, Winston-Salem, NC, USA
| | - Jared A Rowland
- Research & Academic Affairs Service Line, W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA.,Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
| |
Collapse
|
26
|
Relationship between intelligence and posttraumatic stress disorder in veterans. INTELLIGENCE 2020. [DOI: 10.1016/j.intell.2020.101472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
27
|
Polsinelli AJ, Cerhan JH. Early Cutoff Criteria for Strong Performance on the Test of Memory Malingering. Arch Clin Neuropsychol 2020; 35:429-433. [PMID: 31867600 DOI: 10.1093/arclin/acz079] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2019] [Revised: 11/19/2019] [Accepted: 11/26/2019] [Indexed: 11/13/2022] Open
Abstract
OBJECTIVE The Test of Memory Malingering (TOMM) is widely used to assess performance validity. To improve efficiency, we investigated whether abbreviated administration (i.e., only the first 25 items of Trial 1 [T1]) is possible when effort is very strong (≥49/50 on T1 or T2). METHOD We collected TOMM scores of 501 consecutive adult patients ranging in cognitive status who underwent standard neuropsychological evaluation at Mayo Clinic, Rochester, MN. RESULTS Receiver Operating Characteristic (ROC) analysis showed excellent area under the curve (AUC) (0.94; CI95% [0.92, 0.97]) and a cutoff of 25/25 had 100% specificity for identifying strong performance. Of the 224 patients who obtained a perfect score on the first 25 items, 197 (88%) obtained ≥49 on T1 and the remaining patients (n = 27) obtained ≥49 on T2. CONCLUSION A perfect score on the first 25 items of the TOMM predicted overall strong performance 100% of the time, supporting abbreviated administration in select cases in a general outpatient clinical setting.
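To make the reported statistics concrete, the sketch below mimics the analysis in spirit: an ROC AUC for the first-25-item score against overall strong performance, plus the specificity and predictive value of a perfect 25/25 early score. All data are simulated; only the 25/25 and ≥49/50 rules come from the abstract.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Simulated counts correct on the first 25 TOMM items (not patient data)
strong = rng.binomial(25, 0.995, 300)      # patients who go on to score >=49 on T1 or T2
not_strong = rng.binomial(25, 0.88, 200)   # patients who do not
scores = np.concatenate([strong, not_strong])
labels = np.concatenate([np.ones(300, dtype=int), np.zeros(200, dtype=int)])

print("AUC:", round(roc_auc_score(labels, scores), 2))

perfect_early = scores == 25
specificity = float(np.mean(~perfect_early[labels == 0]))   # non-strong performers not flagged
ppv = float(np.mean(labels[perfect_early] == 1))            # perfect early score -> strong overall
print("specificity:", round(specificity, 2), "PPV:", round(ppv, 2))
```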
Collapse
Affiliation(s)
| | - Jane H Cerhan
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN 55904, USA
| |
Collapse
|
28
|
Ord AS, Miskey HM, Lad S, Richter B, Nagy K, Shura RD. Examining embedded validity indicators in Conners continuous performance test-3 (CPT-3). Clin Neuropsychol 2020; 35:1426-1441. [DOI: 10.1080/13854046.2020.1751301] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Affiliation(s)
- Anna S. Ord
- W.G. Hefner VA Medical Center, Salisbury, NC, USA
- Mid-Atlantic Mental Illness Research Education and Clinical Center, Durham, NC, USA
| | - Holly M. Miskey
- W.G. Hefner VA Medical Center, Salisbury, NC, USA
- Mid-Atlantic Mental Illness Research Education and Clinical Center, Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
- Via College of Osteopathic Medicine, Blacksburg, VA, USA
| | - Sagar Lad
- W.G. Hefner VA Medical Center, Salisbury, NC, USA
- Mid-Atlantic Mental Illness Research Education and Clinical Center, Durham, NC, USA
| | - Beth Richter
- W.G. Hefner VA Medical Center, Salisbury, NC, USA
| | | | - Robert D. Shura
- W.G. Hefner VA Medical Center, Salisbury, NC, USA
- Mid-Atlantic Mental Illness Research Education and Clinical Center, Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
- Via College of Osteopathic Medicine, Blacksburg, VA, USA
| |
Collapse
|
29
|
Martin PK, Schroeder RW. Base Rates of Invalid Test Performance Across Clinical Non-forensic Contexts and Settings. Arch Clin Neuropsychol 2020; 35:717-725. [DOI: 10.1093/arclin/acaa017] [Citation(s) in RCA: 53] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2019] [Revised: 01/22/2020] [Accepted: 02/25/2020] [Indexed: 11/14/2022] Open
Abstract
Objective
Base rates of invalidity in forensic neuropsychological contexts are well explored and believed to approximate 40%, whereas base rates of invalidity across clinical non-forensic contexts are relatively less known.
Methods
Adult-focused neuropsychologists (n = 178) were surveyed regarding base rates of invalidity across various clinical non-forensic contexts and practice settings. Median values were calculated and compared across contexts and settings.
Results
The median estimated base rate of invalidity across clinical non-forensic evaluations was 15%. When examining specific clinical contexts and settings, base rate estimates varied from 5% to 50%. Patients with medically unexplained symptoms (50%), external incentives (25%–40%), and oppositional attitudes toward testing (37.5%) were reported to have the highest base rates of invalidity. Patients with psychiatric illness, patients evaluated for attention deficit hyperactivity disorder, and patients with a history of mild traumatic brain injury were also reported to invalidate testing at relatively high base rates (approximately 20%). Conversely, patients presenting for dementia evaluation and patients with none of the previously mentioned histories and for whom invalid testing was unanticipated were estimated to produce invalid testing in only 5% of cases. Regarding practice setting, Veterans Affairs providers reported base rates of invalidity to be nearly twice that of any other clinical setting.
Conclusions
Non-forensic clinical patients presenting with medically unexplained symptoms, external incentives, or oppositional attitudes are reported to invalidate testing at base rates similar to that of forensic examinees. The impact of context-specific base rates on the clinical evaluation of invalidity is discussed.
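The summary statistic behind these estimates is a median taken across surveyed clinicians within each context. A minimal sketch of that aggregation, using invented survey responses rather than the published data:

```python
import pandas as pd

# Invented clinician estimates (percent of patients expected to produce invalid data)
responses = pd.DataFrame({
    "context":  ["mTBI history", "mTBI history", "ADHD evaluation", "ADHD evaluation",
                 "dementia evaluation", "dementia evaluation", "external incentive", "external incentive"],
    "estimate": [20, 25, 15, 20, 5, 5, 30, 40],
})
print(responses.groupby("context")["estimate"].median())  # median base-rate estimate per context
```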
Collapse
Affiliation(s)
- Phillip K Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine, Wichita, KS, USA
| | - Ryan W Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine, Wichita, KS, USA
| |
Collapse
|
30
|
Psychological Symptoms and Rates of Performance Validity Improve Following Trauma-Focused Treatment in Veterans with PTSD and History of Mild-to-Moderate TBI. J Int Neuropsychol Soc 2020; 26:108-118. [PMID: 31658923 DOI: 10.1017/s1355617719000997] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE Iraq and Afghanistan Veterans with posttraumatic stress disorder (PTSD) and traumatic brain injury (TBI) history have high rates of performance validity test (PVT) failure. The study aimed to determine whether those with scores in the invalid versus valid range on PVTs show similar benefit from psychotherapy and if psychotherapy improves PVT performance. METHOD Veterans (N = 100) with PTSD, mild-to-moderate TBI history, and cognitive complaints underwent neuropsychological testing at baseline, post-treatment, and 3-month post-treatment. Veterans were randomly assigned to cognitive processing therapy (CPT) or a novel hybrid intervention integrating CPT with TBI psychoeducation and cognitive rehabilitation strategies from Cognitive Symptom Management and Rehabilitation Therapy (CogSMART). Performance below standard cutoffs on any PVT trial across three different PVT measures was considered invalid (PVT-Fail), whereas performance above cutoffs on all measures was considered valid (PVT-Pass). RESULTS Although both PVT groups exhibited clinically significant improvement in PTSD symptoms, the PVT-Pass group demonstrated greater symptom reduction than the PVT-Fail group. Measures of post-concussive and depressive symptoms improved to a similar degree across groups. Treatment condition did not moderate these results. Rate of valid test performance increased from baseline to follow-up across conditions, with a stronger effect in the SMART-CPT compared to CPT condition. CONCLUSION Both PVT groups experienced improved psychological symptoms following treatment. Veterans who failed PVTs at baseline demonstrated better test engagement following treatment, resulting in higher rates of valid PVTs at follow-up. Veterans with invalid PVTs should be enrolled in trauma-focused treatment and may benefit from neuropsychological assessment after, rather than before, treatment.
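The grouping rule described above (failure on any trial of any of the three PVTs places a veteran in the PVT-Fail group) is simple to express in code. The sketch below is illustrative only; the trial names and cutoff values are placeholders rather than the study's actual instruments and thresholds.

```python
def classify_pvt(trial_scores, cutoffs):
    """Return 'PVT-Fail' if any trial falls below its cutoff, else 'PVT-Pass'.

    trial_scores and cutoffs are dicts keyed by trial name; values here are placeholders.
    """
    failed_any = any(trial_scores[name] < cutoffs[name] for name in cutoffs)
    return "PVT-Fail" if failed_any else "PVT-Pass"

cutoffs = {"pvt_a_trial1": 45, "pvt_b_trial2": 90, "pvt_c_retention": 20}
print(classify_pvt({"pvt_a_trial1": 48, "pvt_b_trial2": 85, "pvt_c_retention": 22}, cutoffs))  # PVT-Fail
print(classify_pvt({"pvt_a_trial1": 49, "pvt_b_trial2": 95, "pvt_c_retention": 24}, cutoffs))  # PVT-Pass
```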
Collapse
|
31
|
Belanger HG, Wortzel HS, Vanderploeg RD, Cooper DB. A model for intervening with veterans and service members who are concerned about developing Chronic Traumatic Encephalopathy (CTE). Clin Neuropsychol 2019; 34:1105-1123. [DOI: 10.1080/13854046.2019.1699166] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Affiliation(s)
- Heather G. Belanger
- Defense and Veterans Brain Injury Center, Silver Spring, MD, USA
- Department of Psychiatry and Behavioral Neurosciences, University of South Florida, Tampa, FL, USA
- Department of Psychology, University of South Florida, Tampa, FL, USA
- James A. Haley Veterans Hospital, United States Special Operations Command, 9Line LLC, Tampa, FL, USA
| | - Hal S. Wortzel
- Rocky Mountain MIRECC, Rocky Mountain Regional Medical Center, Aurora, CO, USA
- Departments of Psychiatry, Neurology, and PM&R, University of Colorado, Aurora, CO, USA
| | - Rodney D. Vanderploeg
- Department of Psychiatry and Behavioral Neurosciences, University of South Florida, Tampa, FL, USA
- Department of Psychology, University of South Florida, Tampa, FL, USA
| | - Douglas B. Cooper
- Defense and Veterans Brain Injury Center, Silver Spring, MD, USA
- Polytrauma Rehabilitation Center, Audie Murphy Memorial VA Hospital, San Antonio, TX, USA
- Department of Psychiatry, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
| |
Collapse
|
32
|
Eggleston B, Dismuke-Greer CE, Pogoda TK, Denning JH, Eapen BC, Carlson KF, Bhatnagar S, Nakase-Richardson R, Troyanskaya M, Nolen T, Walker WC. A prediction model of military combat and training exposures on VA service-connected disability: a CENC study. Brain Inj 2019; 33:1602-1614. [DOI: 10.1080/02699052.2019.1655793] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- B Eggleston
- RTI International, Research Triangle Park, NC, USA
| | - CE Dismuke-Greer
- Health Economics Resource Center (HERC), VA Palo Alto Healthcare System, Palo Alto, California, USA
- Department of Medicine, Medical University of South Carolina, Charleston, SC, USA
| | - TK Pogoda
- Center for Healthcare Organization and Implementation Research, VA Boston Healthcare System, Boston, MA, USA
- School of Public Health, Boston University, Boston, MA, USA
| | - JH Denning
- Mental Health Care Line, Ralph H. Johnson VA Medical Center, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina
| | - BC Eapen
- Department of Physical Medicine and Rehabilitation, VA Greater Los Angeles Healthcare System, Los Angeles, CA, USA
| | - KF Carlson
- HSR&D Center to Improve Veteran Involvement in Care (CIVIC), VA Portland Health Care System (R&D 66)
- OHSU-PSU School of Public Health, Oregon Health and Science University
| | - S Bhatnagar
- Acting Assistant Deputy Undersecretary for Health in Quality, Safety & Value, Department of Veterans Affairs
| | - R Nakase-Richardson
- MHBS, James A. Haley Veterans Hospital, Tampa, FL, USA
- Division of Pulmonary and Sleep Medicine, Morsani College of Medicine, University of South Florida, Tampa, FL, USA
- Defense and Veterans Brain Injury Center, Tampa, FL, USA
| | - M Troyanskaya
- Department of Physical Medicine and Rehabilitation, Baylor College of Medicine, Houston, TX, USA
- Michael E. DeBakey VA Medical Center, Houston, TX, USA
| | - T Nolen
- RTI International, Research Triangle Park, NC, USA
| | - WC Walker
- Department Physical Medicine and Rehabilitation, Virginia Commonwealth University, Richmond, VA, USA
- Hunter Holmes McGuire Veterans Affairs Medical Center, Richmond, VA, USA
| |
Collapse
|
33
|
Shura RD, Martindale SL, Taber KH, Higgins AM, Rowland JA. Digit Span embedded validity indicators in neurologically-intact veterans. Clin Neuropsychol 2019; 34:1025-1037. [PMID: 31315519 DOI: 10.1080/13854046.2019.1635209] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Objective: Embedded validity measures are useful in neuropsychological evaluations but should be updated with new test versions and validated across various samples. This study evaluated Wechsler Adult Intelligence Scale, 4th edition (WAIS-IV) Digit Span validity indicators in post-deployment veterans. Method: Neurologically-intact veterans completed structured diagnostic interviews, the WAIS-IV, the Medical Symptom Validity Test (MSVT), and the b Test as part of a larger study. The Noncredible group included individuals who failed either the MSVT or the b Test. Of the total sample (N = 275), 21.09% failed the MSVT and/or b Test. Diagnostic accuracy was calculated predicting group status across cutoff scores on two Digit Span variables, four Reliable Digit Span (RDS) variables, and two Vocabulary minus Digit Span variables. Results: Digit Span age-corrected scaled score (ACSS) had the highest AUC (.648) of all measures assessed; however, sensitivity at the best cutoff of <7 was only 0.17. Of RDS measures, the Working Memory RDS resulted in the highest AUC (.629), but Enhanced RDS and Alternate RDS produced the highest sensitivities (0.22). Overall, cutoff scores were consistent with other studies, but sensitivities were lower. Vocabulary minus Digit Span measures were not significant. Conclusions: Digit Span ACSS was the strongest predictor of noncredible performance, and outperformed traditional RDS variants. Sensitivity across all validity indicators was low in this research sample, though cutoff scores were congruent with previous research. Although embedded Digit Span validity indicators may be useful, they are not sufficient to replace standalone performance validity tests.
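For readers unfamiliar with the embedded indicators named above, the traditional Reliable Digit Span (RDS) is the longest forward span with both trials correct plus the longest backward span with both trials correct. The sketch below assumes a simple trial-by-trial input format and a commonly cited cutoff; it is an illustration, not the study's scoring code.

```python
def longest_span_both_correct(trials):
    """trials: dict mapping span length -> (trial1_correct, trial2_correct)."""
    best = 0
    for length, results in trials.items():
        if all(results) and length > best:
            best = length
    return best

def reliable_digit_span(forward_trials, backward_trials):
    """Traditional RDS: longest forward + longest backward span with both trials correct."""
    return (longest_span_both_correct(forward_trials)
            + longest_span_both_correct(backward_trials))

# Hypothetical trial-by-trial results
forward = {3: (True, True), 4: (True, True), 5: (True, False), 6: (False, False)}
backward = {2: (True, True), 3: (True, False), 4: (False, False)}
rds = reliable_digit_span(forward, backward)
print(rds, "flagged" if rds <= 7 else "not flagged")  # RDS <= 7 is one commonly cited cutoff
```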
Collapse
Affiliation(s)
- Robert D Shura
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Salisbury, NC, USA.,Salisbury Veterans Affairs Medical Center, Salisbury, NC, USA.,Wake Forest School of Medicine, Winston-Salem, NC, USA
| | - Sarah L Martindale
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Salisbury, NC, USA.,Salisbury Veterans Affairs Medical Center, Salisbury, NC, USA.,Wake Forest School of Medicine, Winston-Salem, NC, USA
| | - Katherine H Taber
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Salisbury, NC, USA.,Salisbury Veterans Affairs Medical Center, Salisbury, NC, USA.,Via College of Osteopathic Medicine, Blacksburg, VA, USA.,Baylor College of Medicine, Houston, TX, USA
| | - Alana M Higgins
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Salisbury, NC, USA
| | - Jared A Rowland
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Salisbury, NC, USA.,Salisbury Veterans Affairs Medical Center, Salisbury, NC, USA.,Wake Forest School of Medicine, Winston-Salem, NC, USA
| |
Collapse
|
34
|
Reilly KJ, Kalat SS, Richardson AH, Armistead-Jehle P. Preliminary investigation of the Denver Attention Test (DAT) in a mixed clinical sample. APPLIED NEUROPSYCHOLOGY-ADULT 2019; 28:158-164. [PMID: 31091990 DOI: 10.1080/23279095.2019.1607736] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
In this pilot study, the clinical utility of a new computerized performance validity test (PVT) called the Denver Attention Test (DAT) was evaluated in a known-groups experimental design. Subjects consisted of 130 adults with mixed neurological conditions evaluated in an outpatient setting. Using the Word Memory Test (WMT) to categorize subjects into valid and invalid groups, the DAT was found to have adequate discrimination. Classification statistics for the DAT demonstrated low to moderate sensitivity and excellent specificity relative to the WMT. ROC analyses demonstrated AUCs of at least .78 for select DAT subtests. Overall, data from this pilot study suggest that the DAT has potential to serve as a useful PVT. Future research directions are discussed.
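A known-groups analysis of this kind scores the candidate PVT against groups defined by the criterion measure (here the WMT). The sketch below shows the basic mechanics with simulated scores and an arbitrary cutoff; none of the numbers are DAT or WMT values.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(2)
criterion_invalid = np.concatenate([np.ones(30, dtype=int), np.zeros(100, dtype=int)])  # 1 = invalid per criterion PVT
candidate_score = np.concatenate([rng.normal(60, 12, 30), rng.normal(85, 8, 100)])      # lower = more suspect

auc = roc_auc_score(criterion_invalid, -candidate_score)   # negate so higher values track the invalid class
flagged = (candidate_score <= 70).astype(int)              # arbitrary illustrative cutoff
tn, fp, fn, tp = confusion_matrix(criterion_invalid, flagged).ravel()
print(f"AUC={auc:.2f}  sensitivity={tp / (tp + fn):.2f}  specificity={tn / (tn + fp):.2f}")
```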
Collapse
Affiliation(s)
| | | | - Anne H Richardson
- Graduate School of Professional Psychology, University of Denver, Denver, Colorado, USA
| | | |
Collapse
|
35
|
Abstract
Objective: The purpose of this critical review was to evaluate the current state of research regarding the incremental value of neuropsychological assessment in clinical practice, above and beyond what can be accounted for on the basis of demographic, medical, and other diagnostic variables. The focus was on neurological and other medical conditions across the lifespan where there is known risk for presence or future development of cognitive impairment. Method: Eligible investigations were group studies that had been published after 01/01/2000 in English in peer-reviewed journals and that had used standardized neuropsychological measures and reported on objective outcome criterion variables. They were identified through PubMed and PsycINFO electronic databases on the basis of predefined specific selection criteria. Reference lists of identified articles were also reviewed to identify potential additional sources. The Grades of Recommendation, Assessment, Development and Evaluation Working Group's (GRADE) criteria were used to evaluate quality of studies. Results: Fifty-six studies met the final selection criteria, including 2 randomized controlled trials, 9 prospective cohort studies, 12 retrospective cohort studies, 21 inception cohort studies, 2 case control studies, and 10 case series studies. The preponderance of the evidence was strongly supportive with regard to the incremental value of neuropsychological assessment in the care of persons with mild cognitive impairment/dementia and traumatic brain injury. Evidence was moderately supportive with regard to stroke, epilepsy, multiple sclerosis, and attention-deficit/hyperactivity disorder. Participation in neuropsychological evaluations was also associated with cost savings. Conclusions: Neuropsychological assessment can improve both diagnostic classification and prediction of long-term daily-life outcomes in patients across the lifespan. Future high-quality prospective cohort studies and randomized controlled trials are necessary to demonstrate more definitively the incremental value of neuropsychological assessment in the management of patients with various neurological and other medical conditions.
Collapse
Affiliation(s)
- Jacobus Donders
- Department of Psychology, Mary Free Bed Rehabilitation Hospital, Grand Rapids, MI, USA
| |
Collapse
|
36
|
Denning JH. When 10 is enough: Errors on the first 10 items of the Test of Memory Malingering (TOMMe10) and administration time predict freestanding performance validity tests (PVTs) and underperformance on memory measures. APPLIED NEUROPSYCHOLOGY-ADULT 2019; 28:35-47. [PMID: 30950290 DOI: 10.1080/23279095.2019.1588122] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
It is critical that we develop more efficient performance validity tests (PVTs). A shorter version of the Test of Memory Malingering (TOMM) that utilizes errors on the first 10 items (TOMMe10) has shown promise as a freestanding PVT. Retrospective review included 397 consecutive veterans administered TOMM trial 1 (TOMM1), the Medical Symptom Validity Test (MSVT), and the Brief Visuospatial Memory Test-Revised (BVMT-R). TOMMe10 accuracy and administration time were used to predict performance on freestanding PVTs (TOMM1, MSVT). The impact of failing TOMMe10 (2 or more errors) on independent memory measures was also explored. TOMMe10 was a robust predictor of TOMM1 (area under the curve [AUC] = 0.97) and MSVT (AUC = 0.88) with sensitivities = 0.76 to 0.89 and specificities = 0.89 to 0.96. Administration time predicted PVT performance but did not improve accuracy compared to TOMMe10 alone. Failing TOMMe10 was associated with clinically and statistically significant declines on the BVMT-R and MSVT Paired Associates and Free Recall memory tests (d = -0.32 to -1.31). Consistent with prior research, TOMMe10 at 2 or more errors was highly accurate in predicting performance on other well-validated freestanding PVTs. Failing just 1 freestanding PVT (TOMMe10) significantly impacted memory measures and likely reflects invalid test performance.
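The TOMMe10 decision rule itself is small: count errors across the first 10 items and flag 2 or more. A minimal sketch under an assumed item-response format; only the 2-error cutoff comes from the abstract.

```python
def tomme10_flag(first10_correct, error_cutoff=2):
    """first10_correct: iterable of 10 booleans (True = item answered correctly).
    Returns True when the error count meets or exceeds the cutoff (2 by default)."""
    errors = sum(1 for ok in list(first10_correct)[:10] if not ok)
    return errors >= error_cutoff

print(tomme10_flag([True] * 10))                 # False: no errors
print(tomme10_flag([True] * 7 + [False] * 3))    # True: 3 errors meets the 2-error cutoff
```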
Collapse
Affiliation(s)
- John H Denning
- Department of Veteran Affairs, Mental Health Service, Ralph H. Johnson Veterans Affairs Medical Center, Charleston, South Carolina, USA.,Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, South Carolina, USA
| |
Collapse
|
37
|
Further Validation of the Test of Memory Malingering (TOMM) Trial 1 Performance Validity Index: Examination of False Positives and Convergent Validity. PSYCHOLOGICAL INJURY & LAW 2018. [DOI: 10.1007/s12207-018-9335-9] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
38
|
Olsen DH, Schroeder RW, Heinrichs RJ, Martin PK. Examination of optimal embedded PVTs within the BVMT-R in an outpatient clinical sample. Clin Neuropsychol 2018; 33:732-742. [DOI: 10.1080/13854046.2018.1501096] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
39
|
Dismuke-Greer CE, Nolen TL, Nowak K, Hirsch S, Pogoda TK, Agyemang AA, Carlson KF, Belanger HG, Kenney K, Troyanskaya M, Walker WC. Understanding the impact of mild traumatic brain injury on veteran service-connected disability: results from Chronic Effects of Neurotrauma Consortium. Brain Inj 2018; 32:1178-1187. [PMID: 29889561 DOI: 10.1080/02699052.2018.1482428] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Abstract
OBJECTIVES Disability evaluation is complex. The association between mild traumatic brain injury (mTBI) history and VA service-connected disability (SCD) ratings can have implications for disability processes in the civilian population. We examined the association of VA SCD ratings with lifetime mTBI exposure in three models: any mTBI, total mTBI number, and blast-related mTBI. METHODS Participants were 492 Operation Enduring Freedom/Operation Iraqi Freedom/Operation New Dawn veterans from four US VA Medical Centers enrolled in the Chronic Effects of Neurotrauma Consortium study between January 2015 and August 2016. Analyses entailed standard covariate-adjusted linear regression models, accounting for demographic, military, and health-related confounders and covariates. RESULTS Unadjusted and adjusted results indicated lifetime mTBI was significantly associated with increased SCD, with the largest effect observed for blast-related mTBI. Each additional mTBI was associated with an increase of 3.6 percentage points in SCD. However, hazardous alcohol use was associated with lower SCD. CONCLUSIONS mTBI, especially blast-related mTBI, is associated with higher VA SCD ratings, with each additional mTBI increasing percent SCD. The association of hazardous alcohol use with SCD should be investigated as it may impact veteran health services access and health outcomes. These findings have implications for civilian disability processes.
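A covariate-adjusted linear model of the kind described above can be sketched in a few lines with statsmodels. The data frame, covariates, and values below are invented for illustration; only the outcome-on-mTBI-count structure mirrors the analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented data: percent service-connected disability (SCD), lifetime mTBI count, covariates
df = pd.DataFrame({
    "scd_percent":       [10, 30, 40, 60, 20, 50, 70, 80, 30, 90, 40, 55],
    "mtbi_count":        [0, 1, 1, 2, 0, 2, 3, 3, 1, 4, 2, 2],
    "age":               [28, 34, 31, 40, 26, 38, 45, 42, 30, 47, 33, 36],
    "hazardous_alcohol": [0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0],
})
model = smf.ols("scd_percent ~ mtbi_count + age + hazardous_alcohol", data=df).fit()
print(model.params["mtbi_count"])   # adjusted change in percent SCD per additional mTBI
```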
Collapse
Affiliation(s)
- Clara Elizabeth Dismuke-Greer
- Health Equity and Rural Outreach Innovation Center, Ralph H. Johnson VA Medical Center, and Department of Medicine, Medical University of South Carolina, Charleston, SC, USA.,Research Service, Ralph H. Johnson VAMC, Charleston, SC, USA
| | | | | | | | - Terri K Pogoda
- Center for Healthcare Organization and Implementation Research, VA Boston Healthcare System, and Boston University School of Public Health, Boston, MA, USA
| | - Amma A Agyemang
- Department of Physical Medicine and Rehabilitation, Virginia Commonwealth University, Richmond, VA, USA
| | - Kathleen F Carlson
- HSR&D Center of Innovation, VA Portland Health Care System, and OHSU-PSU School of Public Health, Oregon Health and Science University, Portland, OR, USA
| | - Heather G Belanger
- HSR&D Center of Innovation on Disability and Rehabilitation Research (CINDRR), James A. Haley Veterans' Hospital, and Department of Psychiatry and Behavioral Neurosciences, University of South Florida, Tampa, FL, USA
| | - Kimbra Kenney
- Uniformed Services University of the Health Sciences, Bethesda, MD, USA
| | - Maya Troyanskaya
- Michael E. DeBakey VA Medical Center and Department of Physical Medicine and Rehabilitation, Baylor College of Medicine, Houston, TX, USA
| | - William C Walker
- Department of Physical Medicine and Rehabilitation, Virginia Commonwealth University, Richmond, VA, USA.,Hunter Holmes McGuire VA Medical Center
| |
Collapse
|
40
|
Juengst SB, Terhorst L, Dicianno BE, Niemeier JP, Wagner AK. Development and content validity of the behavioral assessment screening tool (BAST β). Disabil Rehabil 2018; 41:1200-1206. [PMID: 29303003 DOI: 10.1080/09638288.2017.1423403] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
PURPOSE Develop and establish the content validity of the Behavioral Assessment Screening Tool (BASTβ), a self-reported measure of behavioral and emotional symptoms after traumatic brain injury. METHODS This was an assessment development study, including two focus groups of individuals with traumatic brain injury (n = 11) and their family members (n = 10) and an expert panel evaluation of content validity by experts in traumatic brain injury rehabilitation (n = 7). We developed and assessed the Content Validity Index of the BASTβ. RESULTS The BASTβ initial items (n = 77) corresponded with an established conceptual model of behavioral dysregulation after traumatic brain injury. After expert panel evaluation and focus group feedback, the final BASTβ included 66 items (60 primary, 6 branching logic) rated on a three-level ordinal scale (Never, Sometimes, Always) with reference to the past two weeks, and an Environmental Context checklist including recent major life events (n = 23) and four open-ended questions about environmental factors. The BASTβ had a high Content Validity Index of 89.3%. CONCLUSION The BASTβ is a theoretically grounded, multidimensional self-reported assessment of behavioral dysregulation after traumatic brain injury, with good content validity. Future translation into mobile health modalities could improve effectiveness and efficiency of long-term symptom monitoring post-traumatic brain injury. Future work will establish and validate the factor structure, internal consistency reliabilities and other validities of the BAST. Implications for Rehabilitation Behavioral problems after traumatic brain injury are among the strongest contributing factors to poor mood and community integration outcomes after injury. Behavior is complex and multidimensional, making it a challenge to measure and to monitor long term. The Behavioral Assessment Screening Tool (BAST) is a patient-oriented outcome assessment developed in collaboration with individuals with traumatic brain injury, their care partners, and experts in the field of traumatic brain injury rehabilitation to be relevant and accessible for adults with traumatic brain injuries. The BAST is a long-term monitoring and screening tool for community-dwelling adults with traumatic brain injuries, to improve identification and management of behavioral and emotional sequelae.
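The Content Validity Index reported above is straightforward to compute: item-level CVI is the proportion of expert raters judging an item relevant, and the scale-level value averages across items. A sketch with invented ratings (the BAST panel comprised seven experts):

```python
import numpy as np

# Invented relevance ratings: rows = items, columns = 7 expert raters (1 = relevant, 0 = not)
ratings = np.array([
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1, 1, 1],
    [1, 0, 1, 1, 1, 1, 0],
])
item_cvi = ratings.mean(axis=1)     # item-level CVI
scale_cvi = item_cvi.mean()         # scale-level CVI (average approach)
print(item_cvi.round(2), round(scale_cvi * 100, 1))
```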
Collapse
Affiliation(s)
- Shannon B Juengst
- Department of Physical Medicine and Rehabilitation, University of Texas Southwestern, Dallas, TX, USA.,Department of Rehabilitation Counseling, University of Texas Southwestern, Dallas, TX, USA
| | - Lauren Terhorst
- Department of Occupational Therapy, University of Pittsburgh, Pittsburgh, PA, USA.,Clinical and Translational Science Institute, University of Pittsburgh, Pittsburgh, PA, USA
| | - Brad E Dicianno
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA.,Department of Rehabilitation Science and Technology, University of Pittsburgh, Pittsburgh, PA, USA
| | - Janet P Niemeier
- Department of Physical Medicine and Rehabilitation, Carolinas Medical Center, Charlotte, NC, USA
| | - Amy K Wagner
- Department of Rehabilitation Science and Technology, University of Pittsburgh, Pittsburgh, PA, USA.,Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA.,Safar Center for Resuscitation, University of Pittsburgh, Pittsburgh, PA, USA
| |
Collapse
|