1
Stocks JK, Shields AN, DeBoer AB, Cerny BM, Ogram Buckley CM, Ovsiew GP, Jennette KJ, Resch ZJ, Basurto KS, Song W, Pliskin NH, Soble JR. The impact of visual memory impairment on Victoria Symptom Validity Test performance: A known-groups analysis. Appl Neuropsychol Adult 2024; 31:329-338. [PMID: 34985401] [DOI: 10.1080/23279095.2021.2021911]
Abstract
OBJECTIVE We assessed the effect of visual learning and recall impairment on Victoria Symptom Validity Test (VSVT) accuracy and response latency for Easy, Difficult, and Total Items. METHOD A sample of 163 adult patients who were administered the VSVT and the Brief Visuospatial Memory Test-Revised was classified into valid (114/163) and invalid (49/163) groups via independent criterion performance validity tests (PVTs). Classification accuracies for all VSVT indices were examined for the overall sample and separately for subgroups based on visual memory functioning. RESULTS In the overall sample, all indices produced acceptable classification accuracy (areas under the curve [AUCs] ≥ 0.79). When stratified by visual learning/recall impairment, accuracy indices yielded acceptable classification for both the unimpaired (AUCs ≥ 0.79) and impaired subsamples (AUCs ≥ 0.75). Latency indices had acceptable classification accuracy for the unimpaired subsample (AUCs ≥ 0.74), but accuracy and sensitivity dropped for the impaired subsample (AUCs ≥ 0.67). CONCLUSIONS VSVT accuracy and response latency indices yielded acceptable classification accuracies in the overall sample, and the accuracy indices maintained this performance in patients both with and without visual learning/recall impairment. Findings indicate that the VSVT is a psychometrically robust PVT with largely invariant cut-scores, even in the presence of bona fide visual learning/recall impairment.
Affiliation(s)
- Jane K Stocks: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Allison N Shields: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Northwestern University, Evanston, IL, USA
- Adam B DeBoer: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Wheaton College, Wheaton, IL, USA
- Brian M Cerny: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Kyle J Jennette: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S Basurto: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Woojin Song: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Neil H Pliskin: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
2
Williamson ES, Arentsen TJ, Roper BL, Pedersen HA, Shultz LA, Crouse EM. The Importance of the Morel Emotional Numbing Test Instructions: A Diagnosis Threat Induction Study. Arch Clin Neuropsychol 2024; 39:35-50. [PMID: 37449530] [DOI: 10.1093/arclin/acad048]
Abstract
OBJECTIVE Marketed as a validity test that detects feigning of posttraumatic stress disorder (PTSD), the Morel Emotional Numbing Test for PTSD (MENT) instructs examinees that PTSD may negatively affect performance on the measure. This study explored whether MENT performance depends on the inclusion of "PTSD" in its instructions, as well as the nature of the MENT as a performance validity test versus a symptom validity test (PVT/SVT). METHOD A total of 358 participants completed the MENT as part of a clinical neuropsychological evaluation. Participants were administered either the standard instructions (SI) that referenced "PTSD" or revised instructions (RI) that did not; others were administered instructions that referenced "ADHD" rather than PTSD (AI). Comparisons were conducted among those who presented with concerns for potential traumatic stress-related symptoms (SI vs. RI-1) or attention deficit (AI vs. RI-2). RESULTS Participants in either the SI or AI condition produced more MENT errors than those in their respective RI conditions. The relationship between MENT errors and other S/PVTs was significantly stronger in the SI vs. RI-1 comparison, such that errors correlated with self-reported trauma-related symptoms in the SI but not the RI-1 condition. MENT failure also predicted PVT failure at nearly four times the rate of SVT failure. CONCLUSIONS Findings suggest that the MENT relies on overt reference to PTSD in its instructions, which links it to the growing body of literature on "diagnosis threat" effects. The MENT may be considered a measure of suggestibility. Ethical considerations are discussed, as are the construct(s) measured by PVTs versus SVTs.
Affiliation(s)
- Emily S Williamson: Department of Veterans Affairs, Lt. Col. Luke Weathers, Jr. VA Medical Center, Memphis, TN, USA
- Timothy J Arentsen: Department of Veterans Affairs, Lt. Col. Luke Weathers, Jr. VA Medical Center, Memphis, TN, USA; Department of Psychiatry, University of Tennessee Health Science Center, Memphis, TN, USA
- Brad L Roper: Department of Veterans Affairs, Lt. Col. Luke Weathers, Jr. VA Medical Center, Memphis, TN, USA; Department of Psychiatry, University of Tennessee Health Science Center, Memphis, TN, USA
- Heather A Pedersen: Department of Veterans Affairs, Lt. Col. Luke Weathers, Jr. VA Medical Center, Memphis, TN, USA
- Laura A Shultz: Department of Veterans Affairs, Lt. Col. Luke Weathers, Jr. VA Medical Center, Memphis, TN, USA
- Ellen M Crouse: Department of Veterans Affairs, Lt. Col. Luke Weathers, Jr. VA Medical Center, Memphis, TN, USA; Department of Psychiatry, University of Tennessee Health Science Center, Memphis, TN, USA
3
Deloria R, Kivisto AJ, Swier-Vosnos A, Elwood L. Optimal per test cutoff scores and combinations of failure on multiple embedded performance validity tests in detecting performance invalidity in a mixed clinical sample. Appl Neuropsychol Adult 2023; 30:716-726. [PMID: 34528833] [DOI: 10.1080/23279095.2021.1973005]
Abstract
We tested the usefulness of six embedded performance validity tests (EPVTs) in identifying performance invalidity in a mixed clinical sample. Using a retrospective design, 181 adults were classified as showing valid (n = 146) or invalid (n = 35) performance based on one of three standalone PVTs (Test of Memory Malingering, Victoria Symptom Validity Test, Dot Counting Test). Multiple cutoffs were identified corresponding to predetermined false positive rates of 0, 5, 10, and 15% for each of six EPVTs. EPVT cutoffs corresponding to the predetermined false positive benchmarks were generally more conservative than currently established scores. Sensitivity was low (0.0%-42.9%) for individual EPVTs across these cutoffs and was moderately improved by combining multiple EPVT failures. The optimal number of EPVT failures at the 10% false positive rate was ≥2. Although the overall classification accuracy of 80.7% and specificity of 89.0% were comparable to prior research, the sensitivity of 45.7% was more modest than previous estimates. Low sensitivities indicate that this combination of EPVTs failed to detect a majority of invalid performers.
Affiliation(s)
- Rebecca Deloria: Graduate Department of Clinical Psychology, University of Indianapolis, Indianapolis, IN, United States
- Aaron J Kivisto: Graduate Department of Clinical Psychology, University of Indianapolis, Indianapolis, IN, United States
- Lisa Elwood: Graduate Department of Clinical Psychology, University of Indianapolis, Indianapolis, IN, United States
4
Tyson BT, Pyne SR, Crisan I, Calamia M, Holcomb M, Giromini L, Erdodi LA. Logical memory, visual reproduction, and verbal paired associates are effective embedded validity indicators in patients with traumatic brain injury. Appl Neuropsychol Adult 2023:1-10. [PMID: 36881969] [DOI: 10.1080/23279095.2023.2179400]
Abstract
OBJECTIVE This study was designed to evaluate the potential of the recognition trials of the Logical Memory (LM), Visual Reproduction (VR), and Verbal Paired Associates (VPA) subtests of the Wechsler Memory Scale-Fourth Edition (WMS-IV) to serve as embedded performance validity tests (PVTs). METHOD The classification accuracy of the three WMS-IV subtests was computed against three different criterion PVTs in a sample of 103 adults with traumatic brain injury (TBI). RESULTS The optimal cutoffs (LM ≤ 20, VR ≤ 3, VPA ≤ 36) produced good combinations of sensitivity (.33-.87) and specificity (.92-.98). An age-corrected scaled score of ≤5 on either of the free recall trials of the VPA was specific (.91-.92) and relatively sensitive (.48-.57) to psychometrically defined invalid performance. A VR I ≤ 5 or VR II ≤ 4 had comparable specificity but lower sensitivity (.25-.42). There was no difference in failure rate as a function of TBI severity. CONCLUSIONS Like LM, the VR and VPA recognition trials can also function as embedded PVTs. Failing validity cutoffs on these subtests signals an increased risk of non-credible presentation and is robust to genuine neurocognitive impairment. However, these cutoffs should not be used in isolation to determine the validity of an overall neurocognitive profile.
Affiliation(s)
- Brad T Tyson: Evergreen Neuroscience Institute, Evergreen Health Medical Center, Kirkland, WA, USA
- Iulia Crisan: Department of Psychology, West University of Timisoara, Timisoara, Romania
- Matthew Calamia: Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Laszlo A Erdodi: Jefferson Neurobehavioral Group, New Orleans, LA, USA; Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
5
Hromas G, Rolin S, Davis JJ. Racial differences in positive findings on embedded performance validity tests. Appl Neuropsychol Adult 2022:1-9. [PMID: 36416227] [PMCID: PMC10203055] [DOI: 10.1080/23279095.2022.2146504]
Abstract
INTRODUCTION Embedded performance validity tests (PVTs) may show increased positive findings in racially diverse examinees. This study examined positive findings in an older adult sample of African American (AA) and European American (EA) individuals recruited as part of a study on aging and cognition. METHOD The project involved secondary analysis of deidentified National Alzheimer's Coordinating Center data (N = 22,688). Exclusion criteria included diagnosis of dementia (n = 5,550), mild cognitive impairment (MCI; n = 5,160), impaired but not MCI (n = 1,126), other race (n = 864), and abnormal Mini-Mental State Examination score (MMSE < 25; n = 135). The initial sample included 9,853 participants (16.4% AA). Propensity score matching was used to match AA and EA participants on age, education, sex, and MMSE score. The final sample included 3,024 individuals, with 50% of participants identifying as AA. Premorbid ability estimates were calculated based on demographics. Failure rates on five raw-score and six age-adjusted scaled-score PVTs were examined by race. RESULTS Age, education, sex, MMSE, and premorbid ability estimates did not differ significantly by race. Thirteen percent of AA and 3.8% of EA participants failed two or more raw-score PVTs (p < .0001). On age-adjusted PVTs, 20.6% of AA and 5.9% of EA participants failed two or more (p < .0001). CONCLUSIONS PVT failure rates were significantly higher among AA participants. Findings indicate a need for cautious interpretation of embedded PVTs with underrepresented groups. Adjustments to embedded PVT cutoffs may need to be considered to improve diagnostic accuracy.
Affiliation(s)
- Gabrielle Hromas: Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Summer Rolin: Department of Rehabilitation Medicine, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Jeremy J Davis: Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
6
Boress K, Gaasedelen OJ, Croghan A, Johnson MK, Caraher K, Basso MR, Whiteside DM. Replication and cross-validation of the Personality Assessment Inventory (PAI) Cognitive Bias Scale (CBS) in a mixed clinical sample. Clin Neuropsychol 2022; 36:1860-1877. [PMID: 33612093] [PMCID: PMC8454137] [DOI: 10.1080/13854046.2021.1889681]
Abstract
Objective: This study is a cross-validation of the Cognitive Bias Scale (CBS) from the Personality Assessment Inventory (PAI), a ten-item scale designed to assess symptom endorsement associated with performance validity test failure in neuropsychological samples. The study utilized a mixed neuropsychological sample of consecutively referred patients at a large academic medical center in the Midwest. Participants and Methods: Participants were 332 patients who completed embedded and free-standing performance validity tests (PVTs) and the PAI. Pass and fail groups were created based on PVT performance to evaluate the classification accuracy of the CBS. Results: The results were generally consistent with the initial study for overall classification accuracy, sensitivity, and cut-off score. Consistent with the validation study, the CBS had better classification accuracy than the original PAI validity scales and an effect size comparable to that obtained in the original validation publication; however, the Somatic Complaints scale (SOM) and the Conversion subscale (SOM-C) also demonstrated good classification accuracy. The CBS had incremental predictive ability compared to existing PAI scales. Conclusions: The results supported the CBS, but further research is needed in specific populations. Findings from the present study also suggest that the relationship between conversion tendencies and PVT failure may be stronger in some geographic locations or population types (forensic versus clinical patients).
Affiliation(s)
- Kaley Boress: Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
- Anna Croghan: Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
- Marcie King Johnson: Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA; Department of Psychological and Brain Sciences, University of Iowa, Iowa City, USA
- Kristen Caraher: Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
- Michael R. Basso: Department of Psychiatry and Psychology, Mayo Clinic, Rochester, USA
- Douglas M. Whiteside: Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, USA
7
Boress K, Gaasedelen OJ, Croghan A, Johnson MK, Caraher K, Basso MR, Whiteside DM. Validation of the Personality Assessment Inventory (PAI) scale of scales in a mixed clinical sample. Clin Neuropsychol 2022; 36:1844-1859. [PMID: 33730975] [PMCID: PMC8474121] [DOI: 10.1080/13854046.2021.1900400]
Abstract
Objective: This exploratory study examined the classification accuracy of three derived scales aimed at detecting cognitive response bias in neuropsychological samples. The derived scales are composed of existing scales from the Personality Assessment Inventory (PAI). A mixed clinical sample of consecutive outpatients referred for neuropsychological assessment at a large Midwestern academic medical center was utilized. Participants and Methods: Participants included 332 patients who completed the study's embedded and free-standing performance validity tests (PVTs) and the PAI. PASS and FAIL groups were created based on PVT performance to evaluate the classification accuracy of the derived scales. Three new scales, the Cognitive Bias Scale of Scales 1-3 (CB-SOS1-3), were derived by combining existing scales, either by summing the scales and dividing by the number of scales summed, or by logistically deriving a variable from the contributions of several scales. Results: All of the newly derived scales significantly differentiated between PASS and FAIL groups, and all demonstrated acceptable classification accuracy (CB-SOS1 AUC = 0.72; CB-SOS2 AUC = 0.73; CB-SOS3 AUC = 0.75). Conclusions: This exploratory study demonstrates that attending to scale-level PAI data may be a promising avenue for improving prediction of PVT failure.
Affiliation(s)
- Kaley Boress: Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Anna Croghan: Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Marcie King Johnson: Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA; Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Kristen Caraher: Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Michael R. Basso: Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
- Douglas M. Whiteside: Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
8
Profile of Embedded Validity Indicators in Criminal Defendants with Verified Valid Neuropsychological Test Performance. Arch Clin Neuropsychol 2022; 38:513-524. [DOI: 10.1093/arclin/acac073]
Abstract
Objective
Few studies have examined the use of embedded validity indicators (EVIs) in criminal-forensic practice settings, where judgments regarding performance validity can carry severe consequences for the individual and society. This study sought to examine how various EVIs perform in criminal defendant populations and to determine relationships between EVI scores and intrapersonal variables thought to influence performance validity.
Method
Performance on 16 empirically established EVI cutoffs was examined in a sample of 164 criminal defendants with valid performance who were referred for forensic neuropsychological evaluation. Subsequent analyses examined the relationship between EVI scores and intrapersonal variables in 83 of these defendants.
Results
Half of the EVIs (within the Wechsler Adult Intelligence Scale Digit Span Total, Conners’ Continuous Performance Test Commissions, Wechsler Memory Scale Logical Memory I and II, Controlled Oral Word Association Test, Trail Making Test Part B, and Stroop Word and Color) performed as intended in this sample. The EVIs that did not perform as intended were significantly influenced by relevant intrapersonal variables, including below-average intellectual functioning and history of moderate–severe traumatic brain injury and neurodevelopmental disorder.
Conclusions
This study identifies multiple EVIs appropriate for use in criminal-forensic settings. However, based on these findings, practitioners may wish to be selective in choosing and interpreting EVIs for forensic evaluations of criminal court defendants.
9
Erdodi LA. Multivariate Models of Performance Validity: The Erdodi Index Captures the Dual Nature of Non-Credible Responding (Continuous and Categorical). Assessment 2022:10731911221101910. [PMID: 35757996] [DOI: 10.1177/10731911221101910]
Abstract
This study was designed to examine the classification accuracy of the Erdodi Index (EI-5), a novel method for aggregating validity indicators that takes into account both the number and extent of performance validity test (PVT) failures. Archival data were collected from a mixed clinical/forensic sample of 452 adults referred for neuropsychological assessment. The classification accuracy of the EI-5 was evaluated against established free-standing PVTs. The EI-5 achieved a good combination of sensitivity (.65) and specificity (.97), correctly classifying 92% of the sample. Its classification accuracy was comparable with that of another free-standing PVT. An indeterminate range between Pass and Fail emerged as a legitimate third outcome of performance validity assessment, indicating that the underlying construct is an inherently continuous variable. Results support the use of the EI model as a practical and psychometrically sound method of aggregating multiple embedded PVTs into a single-number summary of performance validity. Combining free-standing PVTs with the EI-5 resulted in a better separation between credible and non-credible profiles, demonstrating incremental validity. Findings are consistent with recent endorsements of a three-way outcome for PVTs (Pass, Borderline, and Fail).
10
Hood ED, Boone KB, Miora DS, Cottingham ME, Victor TL, Zeigler EA, Zeller MA, Wright MJ. Are there differences in performance validity test scores between African American and White American neuropsychology clinic patients? J Clin Exp Neuropsychol 2022; 44:31-41. [PMID: 35670549] [DOI: 10.1080/13803395.2022.2069230]
Abstract
OBJECTIVE The purpose of the present study was to compare performance on a wide range of PVTs in a neuropsychology clinic sample of African Americans and White Americans to determine if there are differences in mean scores or cut-off failure rates between the two groups, and to identify factors that may account for false positive PVT results in African American patients. METHOD African American and White American non-compensation-seeking neuropsychology clinic patients were compared on a wide range of standalone and embedded PVTs: Dot Counting Test, b Test, Warrington Recognition Memory Test, Rey 15-item plus recognition, Rey Word Recognition Test, Digit Span (ACSS, RDS, 3-digit time, 4-digit time), WAIS-III Picture Completion (Most discrepant index), WAIS-III Digit Symbol/Coding (recognition equation), Rey Auditory Verbal Learning Test, Rey Complex figure, WMS-III Logical Memory, Comalli Stroop Test, Trails A, and Wisconsin Card Sorting Test. RESULTS When groups were equated for age and education, African Americans obtained mean performances significantly worse than White Americans on only four of 25 PVT scores across the 14 different measures (Stroop Word Reading and Color Naming, Trails A, Digit Span 3-digit time); however, FSIQ was also significantly higher in White American patients. When subjects with borderline IQ (FSIQ = 70 to 79) were excluded (resulting in 74 White Americans and 25 African Americans), groups no longer differed in IQ and only continued to differ on a single PVT cutoff (Trails A). Further, specificity rates in African Americans were comparable to those of White Americans with the exception of the b Test, the Dot Counting Test, and Stroop B. CONCLUSIONS PVT performance generally does not differ as a function of Black versus White race once the impact of intellectual level is controlled, and most PVT cutoffs appear appropriate for use in African Americans of low average IQ or higher.
Affiliation(s)
- Elexsia D Hood: California School of Forensic Studies, Alliant International University, Los Angeles, USA
- Kyle B Boone: California School of Forensic Studies, Alliant International University, Los Angeles, USA; Department of Psychiatry and Biobehavioral Sciences, UCLA, Los Angeles, USA
- Deborah S Miora: California School of Forensic Studies, Alliant International University, Los Angeles, USA
- Maria E Cottingham: Mental Health Care Line, Veterans Administration Tennessee Valley Healthcare System, Nashville, USA
- Tara L Victor: Department of Psychology, California State University, Dominguez Hills, Carson, USA
- Michelle A Zeller: West Los Angeles Veterans Administration Medical Center, Los Angeles, USA
- Matthew J Wright: Department of Psychiatry, Harbor-UCLA Medical Center, Torrance, USA
11
White DJ, Ovsiew GP, Rhoads T, Resch ZJ, Lee M, Oh AJ, Soble JR. The Divergent Roles of Symptom and Performance Validity in the Assessment of ADHD. J Atten Disord 2022; 26:101-108. [PMID: 33084457] [DOI: 10.1177/1087054720964575]
Abstract
OBJECTIVE This study examined concordance between symptom and performance validity among clinically-referred patients undergoing neuropsychological evaluation for Attention-Deficit/Hyperactivity Disorder (ADHD). METHOD Data from 203 patients who completed the WAIS-IV Working Memory Index, the Clinical Assessment of Attention Deficit-Adult (CAT-A), and ≥4 criterion performance validity tests (PVTs) were analyzed. RESULTS Symptom and performance validity were concordant in 76% of cases, with the majority being valid performance. Of the remaining 24% of cases with divergent validity findings, patients were more likely to exhibit symptom invalidity (15%) than performance invalidity (9%). Patients demonstrating symptom invalidity endorsed significantly more ADHD symptoms than those with credible symptom reporting (ηp2 = .06-.15), but comparable working memory test performance, whereas patients with performance invalidity had significantly worse working memory performance than those with valid PVT performance (ηp2 = .18). CONCLUSION Symptom and performance invalidity represent dissociable constructs in patients undergoing neuropsychological evaluation of ADHD and should be evaluated independently.
Affiliation(s)
- Daniel J White: University of Illinois College of Medicine, Chicago, IL, USA; Roosevelt University, Chicago, IL, USA
- Tasha Rhoads: University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch: University of Illinois College of Medicine, Chicago, IL, USA; Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Mary Lee: University of Illinois College of Medicine, Chicago, IL, USA
- Alison J Oh: University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble: University of Illinois College of Medicine, Chicago, IL, USA
12
OUP accepted manuscript. Arch Clin Neuropsychol 2022; 37:1214-1220. [DOI: 10.1093/arclin/acac022]
13
Dunn A, Pyne S, Tyson B, Roth R, Shahein A, Erdodi L. Critical Item Analysis Enhances the Classification Accuracy of the Logical Memory Recognition Trial as a Performance Validity Indicator. Dev Neuropsychol 2021; 46:327-346. [PMID: 34525856] [DOI: 10.1080/87565641.2021.1956499]
Abstract
OBJECTIVE: To replicate previous research on the Logical Memory Recognition trial (LMRecog) and perform a critical item analysis. METHOD: Performance validity was psychometrically operationalized in a mixed clinical sample of 213 adults. Classification accuracy of the LMRecog and nine critical items (CR-9) was computed. RESULTS: LMRecog ≤20 produced a good combination of sensitivity (.30-.35) and specificity (.89-.90). CR-9 ≥5 and ≥6 had comparable classification accuracy. CR-9 ≥5 increased sensitivity by 4% over LMRecog ≤20; CR-9 ≥6 increased specificity by 6-8% over LMRecog ≤20; CR-9 ≥7 increased specificity by 8-15%. CONCLUSIONS: Critical item analysis enhances the classification accuracy of the optimal LMRecog cutoff (≤20).
Affiliation(s)
- Alexa Dunn: Department of Psychology, University of Windsor, Windsor, Canada
- Sadie Pyne: Windsor Neuropsychology, Windsor, Canada
- Brad Tyson: Evergreen Neuroscience Institute, EvergreenHealth Medical Center, Kirkland, USA
- Robert Roth: Neuropsychology Services, Dartmouth-Hitchcock Medical Center, USA
- Ayman Shahein: Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
- Laszlo Erdodi: Department of Psychology, University of Windsor, Windsor, Canada
14
Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021; 49:179-213. [PMID: 34420986] [DOI: 10.3233/nre-218020]
Abstract
OBJECTIVE This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD Archival data were collected from 167 patients (52.4%male; MAge = 39.7) clinicially evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to individual components. Instrumentation artifacts are endemic to PVTs, and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical models. As the number/severity of embedded PVT failures accumulates, assessors must consider the possibility of non-credible presentation and its clinical implications to neurorehabilitation.
15
Messa I, Holcomb M, Lichtenstein JD, Tyson BT, Roth RM, Erdodi LA. They are not destined to fail: a systematic examination of scores on embedded performance validity indicators in patients with intellectual disability. AUST J FORENSIC SCI 2021. [DOI: 10.1080/00450618.2020.1865457] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Affiliation(s)
- Isabelle Messa
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Brad T Tyson
- Neuropsychological Service, EvergreenHealth Medical Center, Kirkland, WA, USA
- Robert M Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada

16
Cerny BM, Rhoads T, Leib SI, Jennette KJ, Basurto KS, Durkin NM, Ovsiew GP, Resch ZJ, Soble JR. Mean response latency indices on the Victoria Symptom Validity Test do not contribute meaningful predictive value over accuracy scores for detecting invalid performance. APPLIED NEUROPSYCHOLOGY-ADULT 2021; 29:1304-1311. [PMID: 33470869 DOI: 10.1080/23279095.2021.1872575] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
The utility of the Victoria Symptom Validity Test (VSVT) as a performance validity test (PVT) has been primarily established using response accuracy scores. However, the degree to which response latency may contribute to accurate classification of performance invalidity over and above accuracy scores remains understudied. Therefore, this study investigated whether combining VSVT accuracy and response latency scores would increase predictive utility beyond use of accuracy scores alone. Data from a mixed clinical sample of 163 patients, who were administered the VSVT as part of a larger neuropsychological battery, were analyzed. At least four independent criterion PVTs were used to establish validity groups (121 valid/42 invalid). Logistic regression models examining each difficulty level revealed that all VSVT measures were useful in classifying validity groups, both independently and when combined. Individual predictor classification accuracy ranged from 77.9 to 81.6%, indicating acceptable to excellent discriminability across the validity indices. The results of this study support the value of both accuracy and latency scores on the VSVT to identify performance invalidity, although the accuracy scores had superior classification statistics compared to response latency, and mean latency indices provided no unique benefit for classification accuracy beyond dimensional accuracy scores alone.
Affiliation(s)
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Sophie I Leib
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Karen S Basurto
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Nicole M Durkin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA

17
A Systematic Review and Meta-Analysis of the Diagnostic Accuracy of the Advanced Clinical Solutions Word Choice Test as a Performance Validity Test. Neuropsychol Rev 2021; 31:349-359. [PMID: 33447952 DOI: 10.1007/s11065-020-09468-y] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Accepted: 11/29/2020] [Indexed: 10/22/2022]
Abstract
Thorough assessment of performance validity has become an established standard of practice in neuropsychological assessment. While there has been a large focus on the development and cross-validation of embedded performance validity tests (PVTs) in recent years, new freestanding PVTs have also been developed, including the Word Choice Test (WCT) as part of the Advanced Clinical Solutions Effort System. Although the WCT's general utility for identifying invalid performance has been demonstrated in the decade since its initial publication, optimal cut-scores and associated psychometric properties have varied widely across studies. This study sought to synthesize the existing diagnostic accuracy literature regarding the WCT via a systematic review and to conduct a meta-analysis to determine the performance validity cut-score that best maximizes sensitivity while maintaining acceptable specificity. A systematic search of the literature resulted in 14 studies for synthesis, with eight of those available for meta-analysis. Meta-analytic results revealed an optimal cut-score of ≤ 42 with 54% sensitivity and 93% specificity for identifying invalid neuropsychological test performance. Collectively, the WCT demonstrated adequate diagnostic accuracy as a PVT across a variety of populations. Recommendations for future studies are also provided.
18
Victoria Symptom Validity Test: A Systematic Review and Cross-Validation Study. Neuropsychol Rev 2021; 31:331-348. [PMID: 33433828 DOI: 10.1007/s11065-021-09477-5] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2020] [Accepted: 01/03/2021] [Indexed: 12/12/2022]
Abstract
The Victoria Symptom Validity Test (VSVT) is a performance validity test (PVT) with over two decades of empirical backing, although methodological limitations within the extant literature restrict its clinical and research generalizability. Chief among these constraints is limited consensus on the most accurate index within the VSVT and the most appropriate cut-scores within each VSVT validity index. The current systematic review synthesizes existing VSVT validation studies and provides additional cross-validation in an independent sample using a known-groups design. We completed a systematic search of the literature, identifying 17 peer-reviewed studies for synthesis (7 simulation designs, 7 differential prevalence designs, and 3 known-groups designs). The independent cross-validation sample consisted of 200 mixed clinical neuropsychiatric patients referred for outpatient neuropsychological evaluation. Across all indices, Total item accuracy produced the strongest psychometric properties at an optimal cut-score of ≤ 40 (62% sensitivity/88% specificity). However, ROC curve analyses for all VSVT indices yielded statistically significant areas under the curve (AUCs = .73-.81), suggestive of moderate classification accuracy. Cut-scores derived using the independent cross-validation sample converged with some previous findings supporting cut-scores of ≤ 22 for Easy item accuracy and ≤ 40 for Total item accuracy, although divergent findings were noted for Difficult item accuracy. Overall, VSVT validity indicators have adequate diagnostic accuracy across populations, with the current study providing additional support for its use as a psychometrically sound PVT in clinical settings. However, caution is recommended among patients with certain verified clinical conditions (e.g., dementia) and those with pronounced working memory deficits due to concerns for increased risk of false positives.
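The cut-score evaluation described in the abstract above (sensitivity and specificity at Total item accuracy ≤ 40, plus AUC from ROC analysis) can be sketched as follows. All score distributions are invented for illustration; the resulting operating characteristics are not the review's data and will not match the reported 62%/88%.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical VSVT Total accuracy scores (0-48 items correct): valid performers
# cluster near ceiling; invalid performers score lower and more variably.
valid = np.clip(np.round(rng.normal(45, 3, 120)), 0, 48)
invalid = np.clip(np.round(rng.normal(36, 6, 80)), 0, 48)

cut = 40  # flag scores <= cut as invalid, mirroring the Total-accuracy cut-score
sensitivity = float(np.mean(invalid <= cut))  # invalid cases correctly flagged
specificity = float(np.mean(valid > cut))     # valid cases correctly passed

# AUC as P(invalid score < valid score), counting ties as half (Mann-Whitney form).
auc = float(np.mean([(i < v) + 0.5 * (i == v) for i in invalid for v in valid]))

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}, AUC = {auc:.2f}")
```

Sweeping `cut` over the score range and keeping the cut with the highest sensitivity at acceptable specificity is the usual way such optimal cut-scores are selected.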
19
Kosky KM, Lace JW, Austin TA, Seitz DJ, Clark B. The utility of the Wisconsin card sorting test, 64-card version to detect noncredible attention-deficit/hyperactivity disorder. APPLIED NEUROPSYCHOLOGY-ADULT 2020; 29:1231-1241. [DOI: 10.1080/23279095.2020.1864633] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Affiliation(s)
- Karen M. Kosky
- Department of Health Psychology, University of Missouri, Columbia, MO, USA
- John W. Lace
- Department of Neurology, Cleveland Clinic, Cleveland, OH, USA
- Tara A. Austin
- University of Texas at Austin Dell Medical School, Austin, TX, USA
- Dylan J. Seitz
- Department of Neurology, Indiana University School of Medicine, Indianapolis, IN, USA
- Brook Clark
- Department of Health Psychology, University of Missouri, Columbia, MO, USA

20
Attridge J, Zimmerman D, Rolin S, Davis JJ. Comparing Boston naming test short forms in a rehabilitation sample. APPLIED NEUROPSYCHOLOGY-ADULT 2020; 29:810-815. [PMID: 32841074 DOI: 10.1080/23279095.2020.1811984] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
The Boston Naming Test (BNT) has multiple short forms that omit the noose item; these forms have been examined primarily in dementia populations. This study compared BNT short forms with standard administration (BNT-S) in physical medicine and rehabilitation patients who underwent outpatient evaluation. The sample (N = 480) was 34% female and 91% white with average age of 46 years (SD = 15) and average education of 14 years (SD = 3). Five 15-item short forms were calculated: Consortium to Establish a Registry for Alzheimer's disease (CERAD-15); Lansing; and Mack 1, 2, and 4 (Mack-15.1, -15.2). Three 30-item short forms were calculated: Mack A, Saxon A, and BNT odd items. BNT-S and short forms were compared with Spearman correlations. Cronbach's alpha was calculated for all BNT forms. Impaired BNT scores were determined with norm-referenced scores (T < 36 and T < 40). Area under the curve (AUC) values were compared across short forms with impaired BNT as criterion. BNT-S showed strong correlations with 30-item (rho = 0.92-0.93) and 15-item short forms (rho = 0.80-0.87) except for CERAD-15 (rho = 0.69). Internal consistency was acceptable for all short forms (alpha = 0.72-0.86). BNT-S was impaired in 17% and 33% of participants at 35 T and 39 T cutoffs, respectively. BNT short forms showed excellent to outstanding classification accuracy predicting impairment using both cutoffs. BNT short forms warrant further study in rehabilitation settings.
Affiliation(s)
- J Attridge
- University of Utah School of Medicine, Salt Lake City, UT, USA
- Summer Rolin
- University of Utah School of Medicine, Salt Lake City, UT, USA
- Jeremy J Davis
- University of Utah School of Medicine, Salt Lake City, UT, USA

21
Loring DW, Meador KJ, Goldstein FC. Valid or not: A critique of Graver and Green. APPLIED NEUROPSYCHOLOGY. ADULT 2020; 29:639-642. [PMID: 32735139 DOI: 10.1080/23279095.2020.1798961] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Disagreements in science and medicine are not uncommon, and formal exchanges of disagreements serve a variety of valuable roles. As identified by a Nature Methods editorial entitled "The Power of Disagreement" (2016), disagreements bring attention to best practices so that differences in interpretation do not result from inferior data sets or confirmation bias, "prompting researchers to take a second look at evidence that is not in agreement with their hypothesis, rather than dismiss it as artifacts." Graver and Green published reasons why they disagree with a recent clinical case report and a decades-old randomized controlled trial characterizing the effect of an acute 2 mg dose of lorazepam on the Word Memory Test. In this article, we formally responded to their commentary to further clarify the reasons for our data interpretations. These two opposing views provide an excellent learning opportunity, particularly for students, demonstrating the importance of careful articulation of the rationale behind certain conclusions from different perspectives. We encourage careful review of the original articles being discussed so that neuropsychologists can read both positions and decide which interpretation of the findings they consider most sound.
Affiliation(s)
- David W Loring
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- Department of Pediatrics, Emory University School of Medicine, Atlanta, GA, USA
- Kimford J Meador
- Department of Neurology & Neurological Sciences, Stanford University School of Medicine, Stanford, CA, USA
- Felicia C Goldstein
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA

22
Ovsiew GP, Resch ZJ, Nayar K, Williams CP, Soble JR. Not so fast! Limitations of processing speed and working memory indices as embedded performance validity tests in a mixed neuropsychiatric sample. J Clin Exp Neuropsychol 2020; 42:473-484. [DOI: 10.1080/13803395.2020.1758635] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Affiliation(s)
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Kritika Nayar
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychiatry and Behavioral Sciences, Northwestern Feinberg School of Medicine, Chicago, IL, USA
- Christopher P. Williams
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA

23
White DJ, Korinek D, Bernstein MT, Ovsiew GP, Resch ZJ, Soble JR. Cross-validation of non-memory-based embedded performance validity tests for detecting invalid performance among patients with and without neurocognitive impairment. J Clin Exp Neuropsychol 2020; 42:459-472. [DOI: 10.1080/13803395.2020.1758634] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Affiliation(s)
- Daniel J. White
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Dale Korinek
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Matthew T. Bernstein
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J. Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA

24
Shwartz SK, Roper BL, Arentsen TJ, Crouse EM, Adler MC. The Behavior Rating Inventory of Executive Function®-Adult Version is Related to Emotional Distress, Not Executive Dysfunction, in a Veteran Sample. Arch Clin Neuropsychol 2020; 35:701-716. [DOI: 10.1093/arclin/acaa024] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023] Open
Abstract
Objective
In three studies, we explore the impact of response bias, symptom validity, and psychological factors on the self-report form of the Behavior Rating Inventory of Executive Function-Adult Version (BRIEF-A) and the relationship between self-reported executive functioning (EF) and objective performance.
Method
Each study pulled from a sample of 123 veterans who were administered a BRIEF-A and Minnesota Multiphasic Personality Inventory-2 (MMPI-2) during a neuropsychological evaluation. Participants were primarily middle-aged, and half carried a mood disorder diagnosis. Study 1 examined group differences in BRIEF-A ratings among valid, invalid, and indeterminate MMPI-2 responders. Analyses were conducted to determine the optimal cut-score for the BRIEF-A Negativity Validity scale. In Study 2, relationships were explored among MMPI-2-RF (restructured form) Restructured Clinical (RC) scales, somatic/cognitive scales, and the BRIEF-A Metacognition Index (MI); hierarchical analyses were performed to predict MI using MMPI-2-RF Demoralization (RCd) and specific RC scales. Study 3 correlated BRIEF-A clinical scales and indices with RCd and an EF composite score from neuropsychological testing. Hierarchical analyses were conducted to predict BRIEF-A clinical scales.
Results
Invalid performance on the MMPI-2 resulted in significantly elevated scores on the BRIEF-A compared to those with valid responding. A more stringent cut-score of ≥4 for the BRIEF-A Negativity scale is more effective at identifying invalid symptom reporting. The BRIEF-A MI is most strongly correlated with demoralization. BRIEF-A indices and scales are largely unrelated to objective EF performance.
Conclusions
In a veteran sample, responses on the BRIEF-A are most representative of generalized emotional distress and response bias, not actual EF abilities.
Affiliation(s)
- Susan K Shwartz
- Department of Veterans Affairs Medical Center, Memphis, TN, USA
- Brad L Roper
- Department of Veterans Affairs Medical Center, Memphis, TN, USA
- Departments of Psychiatry and Neurology, University of Tennessee College of Medicine, Memphis, TN, USA
- Ellen M Crouse
- Department of Veterans Affairs Medical Center, Memphis, TN, USA
- Marcy C Adler
- Department of Veterans Affairs Medical Center, Memphis, TN, USA

25
Resch ZJ, Pham AT, Abramson DA, White DJ, DeDios-Stern S, Ovsiew GP, Castillo LR, Soble JR. Examining independent and combined accuracy of embedded performance validity tests in the California Verbal Learning Test-II and Brief Visuospatial Memory Test-Revised for detecting invalid performance. APPLIED NEUROPSYCHOLOGY-ADULT 2020; 29:252-261. [DOI: 10.1080/23279095.2020.1742718] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Affiliation(s)
- Zachary J. Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Amber T. Pham
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, DePaul University, Chicago, IL, USA
- Dayna A. Abramson
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Daniel J. White
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Samantha DeDios-Stern
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P. Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Liliam R. Castillo
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA

26
Rolin SN, Mullen CM, Vaccariello E, Davis JJ. Examining the Cognitive Proficiency Index in rehabilitation patients. APPLIED NEUROPSYCHOLOGY-ADULT 2019; 28:573-582. [PMID: 31530025 DOI: 10.1080/23279095.2019.1666269] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
This study examined the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) Cognitive Proficiency Index (CPI) in relation to other WAIS-IV indices, overall test battery mean (OTBM), and impairment (IMP) in an outpatient rehabilitation setting. Participants (N = 329) were 35% female and 88% Caucasian with average age and education of 42.9 (SD = 13.5) and 13.6 (SD = 2.4) years, respectively. Participants were grouped by diagnosis and validity: traumatic brain injury (TBI; n = 176; 39% mild), cerebrovascular accident (CVA; n = 52), other neurologic and psychiatric conditions (OTH; n = 49), and questionable performance validity (QPV; n = 52). OTBM was calculated from non-WAIS-IV tests; IMP was dichotomously defined as four or more non-WAIS-IV scores below cutoff (≤35 T). Significant group differences were observed on CPI, WAIS-IV indices, OTBM, and IMP. CPI significantly contributed (β = .51) to a linear regression model predicting OTBM (R2 = .63) with education and GAI as covariates. A logistic regression model with IMP as the outcome and education, GAI, and CPI as predictors correctly classified 80% of cases with area under the curve of .86. A previously identified cutoff (CPI < 84) correctly classified 65-78% of clinical groups categorized by IMP. A novel cutoff (CPI ≤ 80) differentiated clinical participants with history of mild TBI from the QPV group with sensitivity of 44.2% and specificity of 89.7%. CPI showed incremental validity in predicting OTBM and IMP and warrants further study as a useful clinical addition to other WAIS-IV indices.
Affiliation(s)
- Summer N Rolin
- Division of Physical Medicine and Rehabilitation, University of Utah School of Medicine, Salt Lake City, UT, USA
- Christine M Mullen
- Division of Physical Medicine and Rehabilitation, University of Utah School of Medicine, Salt Lake City, UT, USA
- Jeremy J Davis
- Division of Physical Medicine and Rehabilitation, University of Utah School of Medicine, Salt Lake City, UT, USA

27
Executive (dys)function after traumatic brain injury: special considerations for behavioral pharmacology. Behav Pharmacol 2019; 29:617-637. [PMID: 30215621 PMCID: PMC6155367 DOI: 10.1097/fbp.0000000000000430] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Executive function is an umbrella term that includes cognitive processes such as decision-making, impulse control, attention, behavioral flexibility, and working memory. Each of these processes depends largely upon monoaminergic (dopaminergic, serotonergic, and noradrenergic) neurotransmission in the frontal cortex, striatum, and hippocampus, among other brain areas. Traumatic brain injury (TBI) induces disruptions in monoaminergic signaling along several steps in the neurotransmission process - synthesis, distribution, and breakdown - and in turn, produces long-lasting deficits in several executive function domains. Understanding how TBI alters monoaminergic neurotransmission and executive function will advance basic knowledge of the underlying principles that govern executive function and potentially further treatment of cognitive deficits following such injury. In this review, we examine the influence of TBI on the following measures of executive function - impulsivity, behavioral flexibility, and working memory. We also describe monoaminergic system changes following TBI. Given that TBI patients experience alterations in monoaminergic signaling following injury, they may represent a unique population with regard to pharmacotherapy. We conclude this review by discussing some considerations for pharmacotherapy in the field of TBI.
28
Loring DW, Goldstein FC. If Invalid PVT Scores Are Obtained, Can Valid Neuropsychological Profiles Be Believed? Arch Clin Neuropsychol 2019; 34:1192-1202. [DOI: 10.1093/arclin/acz028] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2019] [Revised: 03/07/2019] [Accepted: 06/05/2019] [Indexed: 11/14/2022] Open
Abstract
Background
Performance Validity Testing (PVT) decision-making rules may be indeterminate in patients with neurological disease in which PVT characteristics have not been adequately studied. We report a patient with multiple sclerosis (MS) who failed computerized PVT testing but had normal memory scores with a neuropsychological profile consistent with expected MS disease-related weaknesses.
Method
Neuropsychological testing was conducted on two occasions in a middle-aged woman with an established MS diagnosis to address concerns of possible memory decline. Testing was discontinued after PVT scores below recommended cut-points were obtained during the first evaluation. During the second assessment, subthreshold PVT scores on a different computerized PVT were obtained, but unlike the first assessment, the entire neuropsychological protocol was administered.
Results
Despite subthreshold computerized PVT scores, normal learning and memory performance was obtained providing objective data to answer the referral question. Other neuropsychological findings included decreased processing speed, poor working memory, and poor executive function consistent with her MS diagnosis. Embedded PVT scores were normal.
Conclusions
We speculate that poor computerized PVT scores resulted from the disease-related features of MS, although we also discuss approaches to reconcile apparently contradictory PVT versus neuropsychological results if the contributions of disease-related variables on PVTs scores are discounted. This case demonstrates the value of completing the assessment protocol despite obtaining PVT scores below publisher recommended cutoffs in clinical evaluations. If subthreshold PVT scores are considered evidence of performance invalidity, it is still necessary to have an approach for interpreting seemingly credible neuropsychological test results rather than simply dismissing them as invalid.
Affiliation(s)
- David W Loring
- Department of Neurology, Emory University School of Medicine, 12 Executive Park, Atlanta, GA 30329, USA
- Felicia C Goldstein
- Department of Neurology, Emory University School of Medicine, 12 Executive Park, Atlanta, GA 30329, USA

29

30
Soble JR, Resch ZJ, Schulze ET, Paxton JL, Cation B, Friedhoff C, Costin C, Fink JW, Lee RC, Pliskin NH. Examination of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) validity and substantive scales in patients with electrical injury. Clin Neuropsychol 2019; 33:1501-1515. [DOI: 10.1080/13854046.2019.1616114] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Jason R. Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J. Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Evan T. Schulze
- Department of Neurology, Saint Louis University, St. Louis, MO, USA
- Bailey Cation
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Claire Friedhoff
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Colleen Costin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Illinois School of Professional Psychology, Schaumburg, IL, USA
- Joseph W. Fink
- Department of Psychiatry and Behavioral Neuroscience, University of Chicago, Chicago, IL, USA
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA
- Raphael C. Lee
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA
- Departments of Surgery, Medicine and Organismal Biology, University of Chicago, Chicago, IL, USA
- Neil H. Pliskin
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA

31
Gaasedelen OJ, Whiteside DM, Altmaier E, Welch C, Basso MR. The construction and the initial validation of the Cognitive Bias Scale for the Personality Assessment Inventory. Clin Neuropsychol 2019; 33:1467-1484. [DOI: 10.1080/13854046.2019.1612947] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Owen J. Gaasedelen
- Department of Psychological and Quantitative Foundations, University of Iowa, Iowa City, IA, USA
- New Mexico VA Health Care System, Albuquerque, NM, USA
- Douglas M. Whiteside
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Elizabeth Altmaier
- Department of Psychological and Quantitative Foundations, University of Iowa, Iowa City, IA, USA
- Catherine Welch
- Department of Psychological and Quantitative Foundations, University of Iowa, Iowa City, IA, USA

32
Lichtenstein JD, Flaro L, Baldwin FS, Rai J, Erdodi LA. Further Evidence for Embedded Performance Validity Tests in Children within the Conners’ Continuous Performance Test – Second Edition. Dev Neuropsychol 2019; 44:159-171. [DOI: 10.1080/87565641.2019.1565535] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Affiliation(s)
- Jonathan D. Lichtenstein
- Department of Psychiatry, Pediatric Neuropsychology Program, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire
- Department of Pediatrics, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire
- The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire
- Fern S. Baldwin
- Department of Psychiatry, Pediatric Neuropsychology Program, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire
- Jaspreet Rai
- Department of Psychology, Neuropsychology Track, University of Windsor, Ontario, Canada
- Laszlo A. Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Ontario, Canada
33
Emhoff SM, Lynch JK, McCaffrey RJ. Performance and Symptom Validity Testing in Pediatric Assessment: A Review of the Literature. Dev Neuropsychol 2018; 43:671-707. [DOI: 10.1080/87565641.2018.1525612] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- Stephanie M. Emhoff
- Department of Psychology, University at Albany, State University of New York, Albany, New York, USA
- Julie K. Lynch
- Department of Psychology, University at Albany, State University of New York, Albany, New York, USA
- Albany Neuropsychological Associates, Albany, New York, USA
- Robert J. McCaffrey
- Department of Psychology, University at Albany, State University of New York, Albany, New York, USA
- Albany Neuropsychological Associates, Albany, New York, USA
34
Abstract
OBJECTIVES The aim of this study was to investigate the relationship of psychological variables to cognitive performance validity test (PVT) results in mixed forensic and nonforensic clinical samples. METHODS Participants included 183 adults who underwent comprehensive neuropsychological examination. Criterion groups (Credible and Noncredible) were formed based on performance on the Word Memory Test and other stand-alone and embedded PVT measures. RESULTS Multivariate logistic regression analysis identified three significant predictors of cognitive performance validity: two psychological constructs, Cogniphobia (the perception that cognitive effort will exacerbate neurological symptoms) and Symptom Identity (the perception that current symptoms are the result of illness or injury), and one contextual factor (forensic context). Although there was no interaction between these factors, elevated scores were most often observed in the forensic sample, suggesting that these independently contributing psychological factors are more likely to occur in a forensic setting. CONCLUSIONS Illness perceptions were significant predictors of cognitive performance validity, particularly when they reached very elevated levels. Extreme elevations were more common among participants in the forensic sample, and potential reasons for this pattern are explored. (JINS, 2018, 24, 735-745)
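The multivariate logistic regression described in this abstract models the log-odds of noncredible PVT performance as a linear function of the predictors. A minimal sketch of that kind of model, using fabricated toy data and a plain gradient-descent fit (not the study's actual variables, sample, or coefficients):

```python
import math

# Illustrative only: a two-predictor logistic regression fit by stochastic
# gradient descent. The data below are fabricated, not from the cited study.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Return (bias, weights) that maximize the Bernoulli log-likelihood."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(b + sum(wi * xi for wi, xi in zip(w, x)))
            err = y - p          # gradient of the log-likelihood w.r.t. the logit
            b += lr * err
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return b, w

# Fabricated predictors: [illness-perception score, forensic context (0/1)]
xs = [[0.2, 0], [0.4, 0], [0.5, 0], [0.6, 1], [0.8, 1], [0.9, 1]]
ys = [0, 0, 0, 1, 1, 1]          # 1 = noncredible PVT performance
b, w = fit_logistic(xs, ys)

def predict(x):
    """Predicted probability of noncredible performance."""
    return sigmoid(b + sum(wi * xi for wi, xi in zip(w, x)))
```

On this toy data the fitted model assigns low probability to the low-scoring nonforensic cases and high probability to the high-scoring forensic cases; the actual predictors and effect sizes in the study are those named in the abstract.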
35
An KY, Charles J, Ali S, Enache A, Dhuga J, Erdodi LA. Reexamining performance validity cutoffs within the Complex Ideational Material and the Boston Naming Test–Short Form using an experimental malingering paradigm. J Clin Exp Neuropsychol 2018; 41:15-25. [DOI: 10.1080/13803395.2018.1483488] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- Kelly Y. An
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jordan Charles
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Anca Enache
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jasmine Dhuga
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
36
Whiteside DM, Caraher K, Hahn-Ketter A, Gaasedelen O, Basso MR. Classification accuracy of individual and combined executive functioning embedded performance validity measures in mild traumatic brain injury. Appl Neuropsychol Adult 2018. [DOI: 10.1080/23279095.2018.1443935] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Affiliation(s)
- Kristen Caraher
- Department of Psychiatry, University of Iowa, Iowa City, Iowa, USA
- Amanda Hahn-Ketter
- Department of Rehabilitation Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Owen Gaasedelen
- Department of Psychiatry, University of Iowa, Iowa City, Iowa, USA
37
Persinger VC, Whiteside DM, Bobova L, Saigal SD, Vannucci MJ, Basso MR. Using the California Verbal Learning Test, Second Edition as an embedded performance validity measure among individuals with TBI and individuals with psychiatric disorders. Clin Neuropsychol 2017; 32:1039-1053. [DOI: 10.1080/13854046.2017.1419507] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Virginia C. Persinger
- Department of Neuropsychology, Methodist Rehabilitation Center, Jackson, MS, USA
- Department of Clinical Psychology, Adler University, Chicago, IL, USA
- Lyuba Bobova
- Department of Clinical Psychology, Adler University, Chicago, IL, USA
- Seema D. Saigal
- Department of Clinical Psychology, Adler University, Chicago, IL, USA
- Marla J. Vannucci
- Department of Clinical Psychology, Adler University, Chicago, IL, USA
38
Erdodi LA, Rai JK. A single error is one too many: Examining alternative cutoffs on Trial 2 of the TOMM. Brain Inj 2017; 31:1362-1368. [DOI: 10.1080/02699052.2017.1332386] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jaspreet K. Rai
- Department of Psychology, University of Windsor, Windsor, ON, Canada
39
Young G. PTSD in Court III: Malingering, assessment, and the law. Int J Law Psychiatry 2017; 52:81-102. [PMID: 28366496 DOI: 10.1016/j.ijlp.2017.03.001] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/24/2017] [Accepted: 03/02/2017] [Indexed: 06/07/2023]
Abstract
This journal's third article on PTSD in Court focuses especially on the topic's "court" component. It first considers malingering, including its definition, certainties, and uncertainties. As with other areas of psychological injury and law generally, and PTSD (posttraumatic stress disorder) specifically, malingering is contentious not only definitionally but also empirically, particularly with respect to establishing its base rate in the index populations assessed in the field. Both current research and re-analysis of past research indicate that the malingering prevalence rate at issue is closer to 15±15% than to 40±10%. As for psychological tests used to assess PTSD, some of the better ones include the TSI-2 (Trauma Symptom Inventory, Second Edition; Briere, 2011), the MMPI-2-RF (Minnesota Multiphasic Personality Inventory, Second Edition, Restructured Form; Ben-Porath & Tellegen, 2008/2011), and the CAPS-5 (Clinician-Administered PTSD Scale for DSM-5; Weathers, Blake, Schnurr, Kaloupek, Marx, & Keane, 2013b). Assessors need to know their own possible biases, the applicable law (e.g., the Daubert trilogy), and how to write court-admissible reports. The overall conclusions reflect a moderate approach that navigates between the extreme plaintiff and defense allegiances frequently encountered in this area of forensic practice.
40
Gaasedelen OJ, Whiteside DM, Basso M. Exploring the sensitivity of the Personality Assessment Inventory symptom validity tests in detecting response bias in a mixed neuropsychological outpatient sample. Clin Neuropsychol 2017; 31:844-856. [PMID: 28391774 DOI: 10.1080/13854046.2017.1312700] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
OBJECTIVE Few studies have evaluated the symptom validity tests (SVTs) within the Personality Assessment Inventory (PAI) in a neuropsychological assessment context. Accordingly, the present study explored the accuracy of PAI SVTs in identifying exaggerated cognitive dysfunction in a mixed sample of outpatients referred for neuropsychological assessment. METHOD Participants who failed two or more performance validity tests (PVTs) were classified as having exaggerated cognitive dysfunction (n = 49). Their responses on PAI SVTs were compared to those of examinees who did not fail PVTs (n = 257). RESULTS Multivariate analysis of variance indicated that the Negative Impression Management (NIM) scale most strongly discriminated those with exaggerated cognitive dysfunction from honest responders (Cohen's d = .58). Nonetheless, its classification accuracy was low (area under the curve [AUC] = .65). A k-means cluster analysis and a subsequent multinomial logistic regression provided evidence for two distinct groups of exaggerators: one group seemed to exaggerate symptoms, whereas the other presented in a defensive manner, implying that individuals with either positive or negative impression management biases on the PAI were apt to display invalid performance on PVTs. CONCLUSIONS Findings indicated that exaggerated cognitive dysfunction tends to be present when NIM is very high and that a defensive response style on the PAI can also occur in the context of PVT failure.
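Several abstracts in this listing report classification accuracy as the area under the ROC curve (AUC), e.g. AUC = .65 for the NIM scale here. As a rough illustration (with hypothetical scores, not data from any cited study), the AUC can be computed directly from the two criterion groups' scores as the probability that a randomly drawn noncredible-group score exceeds a randomly drawn credible-group score:

```python
# Illustrative only: hypothetical validity-scale scores, not study data.
# The Mann-Whitney formulation of AUC: the probability that a score from the
# "noncredible" group exceeds a score from the "credible" group (ties = 1/2).

def auc(noncredible_scores, credible_scores):
    """Area under the ROC curve, estimated over all cross-group pairs."""
    wins = 0.0
    for n in noncredible_scores:
        for c in credible_scores:
            if n > c:
                wins += 1.0
            elif n == c:
                wins += 0.5
    return wins / (len(noncredible_scores) * len(credible_scores))

# Hypothetical T-scores for the two criterion groups
noncredible = [78, 85, 69, 90, 73]
credible = [55, 62, 70, 48, 66]
print(round(auc(noncredible, credible), 2))  # -> 0.96
```

An AUC of .50 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why values in the .65-.79 range reported across these abstracts are read as low to acceptable classification accuracy.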
Affiliation(s)
- Owen J Gaasedelen
- Department of Psychological and Quantitative Foundations, Counseling Psychology, University of Iowa, Iowa City, IA, USA
- Michael Basso
- Department of Psychology, University of Tulsa, Tulsa, OK, USA
41
Rickards TA, Cranston CC, Touradji P, Bechtold KT. Embedded performance validity testing in neuropsychological assessment: Potential clinical tools. Appl Neuropsychol Adult 2017; 25:219-230. [DOI: 10.1080/23279095.2017.1278602] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Tyler A. Rickards
- Department of Physical Medicine & Rehabilitation, Division of Rehabilitation Psychology & Neuropsychology, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Christopher C. Cranston
- Department of Physical Medicine & Rehabilitation, Division of Rehabilitation Psychology & Neuropsychology, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Pegah Touradji
- Department of Physical Medicine & Rehabilitation, Division of Rehabilitation Psychology & Neuropsychology, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Kathleen T. Bechtold
- Department of Physical Medicine & Rehabilitation, Division of Rehabilitation Psychology & Neuropsychology, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
42
Lichtenstein JD, Erdodi LA, Rai JK, Mazur-Mosiewicz A, Flaro L. Wisconsin Card Sorting Test embedded validity indicators developed for adults can be extended to children. Child Neuropsychol 2016; 24:247-260. [DOI: 10.1080/09297049.2016.1259402] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Jonathan D. Lichtenstein
- Department of Psychiatry, Neuropsychology Services, Geisel School of Medicine at Dartmouth, Lebanon, NH, USA
- Laszlo A. Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, ON, Canada
- Jaspreet K. Rai
- Department of Psychology, Neuropsychology Track, University of Windsor, ON, Canada
- Anya Mazur-Mosiewicz
- Department of Psychology, Chicago School of Professional Psychology, IL, USA
- Department of Psychiatry and Behavioral Science, Oklahoma State University, Tulsa, OK, USA
43
Rai JK, Abecassis M, Casey JE, Flaro L, Erdodi LA, Roth RM. Parent rating of executive function in fetal alcohol spectrum disorder: A review of the literature and new data on Aboriginal Canadian children. Child Neuropsychol 2016; 23:713-732. [DOI: 10.1080/09297049.2016.1191628] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Affiliation(s)
- Jaspreet K. Rai
- Department of Psychology, University of Windsor, Ontario, Canada
- Maurissa Abecassis
- Neuropsychology Program, Department of Psychiatry, Geisel School of Medicine at Dartmouth, Lebanon, NH, USA
- Joseph E. Casey
- Department of Psychology, University of Windsor, Ontario, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Ontario, Canada
- Robert M. Roth
- Neuropsychology Program, Department of Psychiatry, Geisel School of Medicine at Dartmouth, Lebanon, NH, USA
44
Miskey HM, Gross PL. Neuropsychological assessment of a veteran with a large arachnoid cyst. Appl Neuropsychol Adult 2016; 23:464-70. [PMID: 26979132 DOI: 10.1080/23279095.2015.1088853] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Arachnoid cysts are benign, congenital masses that are believed to form when the arachnoid membrane splits or is duplicated and the resulting space fills with fluid. Despite their potentially alarming appearance on brain imaging, congenital cysts discovered in adulthood are usually silent and do not result in functional impairment. A left-handed male veteran with mild memory complaints was discovered to have a large (16.4 cm × 7.7 cm), left-sided arachnoid cyst. Magnetic resonance imaging (MRI) revealed significant displacement of brain structures including the hippocampus, Sylvian fissure, and splenium. Viewing the brain MRI in only one plane was misleading and could have erroneously suggested that some structures were absent; viewing multiple planes of section revealed significant structural displacement and provided a better three-dimensional conceptualization of an abnormal brain. A clinical interview indicated excellent premorbid functioning, and neuropsychological test results were within the normal range with the exception of mildly impaired scores on tests reliant on processing speed and lower-than-expected visual memory scores. Results were consistent with previous research noting retained verbal abilities and low-average visual skills. Low-average and mildly impaired scores were potentially secondary to microvascular changes, slowed visual scanning, psychiatric conditions, and testing base rates.
Affiliation(s)
- Holly M Miskey
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center (MA-MIRECC)
- W. G. "Bill" Hefner Veterans Affairs Medical Center, Mental Health and Behavioral Services, Salisbury, North Carolina
- Department of Psychiatry, Wake Forest School of Medicine, Winston-Salem, North Carolina
- Patricia L Gross
- W. G. "Bill" Hefner Veterans Affairs Medical Center, Mental Health and Behavioral Services, Salisbury, North Carolina
45
Whiteside DM, Gaasedelen OJ, Hahn-Ketter AE, Luu H, Miller ML, Persinger V, Rice L, Basso MR. Derivation of a Cross-Domain Embedded Performance Validity Measure in Traumatic Brain Injury. Clin Neuropsychol 2015; 29:788-803. [DOI: 10.1080/13854046.2015.1093660] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
46
47
Shura RD, Miskey HM, Rowland JA, Yoash-Gantz RE, Denning JH. Embedded Performance Validity Measures with Postdeployment Veterans: Cross-Validation and Efficiency with Multiple Measures. Appl Neuropsychol Adult 2015; 23:94-104. [DOI: 10.1080/23279095.2015.1014556] [Citation(s) in RCA: 49] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- Robert D. Shura
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center, and Mental Health and Behavioral Sciences Service Line, W. G. “Bill” Hefner Veterans Affairs Medical Center, Salisbury, and Department of Psychiatry and Behavioral Sciences, Wake Forest School of Medicine, Winston-Salem, North Carolina
- Holly M. Miskey
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center, and Mental Health and Behavioral Sciences Service Line, W. G. “Bill” Hefner Veterans Affairs Medical Center, Salisbury, and Department of Psychiatry and Behavioral Sciences, Wake Forest School of Medicine, Winston-Salem, North Carolina
- Jared A. Rowland
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center, and Research & Academic Affairs Service Line, W. G. “Bill” Hefner Veterans Affairs Medical Center, Salisbury, and Department of Psychiatry & Behavioral Sciences, Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, North Carolina
- Ruth E. Yoash-Gantz
- Mid-Atlantic Mental Illness Research, Education, and Clinical Center, and Mental Health and Behavioral Sciences Service Line, W. G. “Bill” Hefner Veterans Affairs Medical Center, Salisbury, and Department of Psychiatry and Behavioral Sciences, Wake Forest School of Medicine, Winston-Salem, North Carolina
- John H. Denning
- Tennessee Valley Healthcare System, Alvin C. York Veterans Affairs Medical Center, Mental Health Care Line, Murfreesboro, Tennessee
48
Lange RT, Brickell TA, French LM. Examination of the Mild Brain Injury Atypical Symptom Scale and the Validity-10 Scale to detect symptom exaggeration in US military service members. J Clin Exp Neuropsychol 2015; 37:325-37. [DOI: 10.1080/13803395.2015.1013021] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
49
Whiteside DM, Kogan J, Wardin L, Phillips D, Franzwa MG, Rice L, Basso M, Roper B. Language-based embedded performance validity measures in traumatic brain injury. J Clin Exp Neuropsychol 2015; 37:220-7. [DOI: 10.1080/13803395.2014.1002758] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
50
Davis JJ. Further Consideration of Advanced Clinical Solutions Word Choice: Comparison to the Recognition Memory Test-Words and Classification Accuracy in a Clinical Sample. Clin Neuropsychol 2014; 28:1278-94. [DOI: 10.1080/13854046.2014.975844] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Affiliation(s)
- Jeremy J. Davis
- Division of Physical Medicine and Rehabilitation, University of Utah School of Medicine, Salt Lake City, UT