1
Ingram PB, Armistead-Jehle P, Herring TT, Morris CS. Cross validation of the Personality Assessment Inventory (PAI) Cognitive Bias Scale of Scales (CB-SOS) over-reporting indicators in a military sample. Mil Psychol 2024; 36:192-202. PMID: 37651693; PMCID: PMC10880507; DOI: 10.1080/08995605.2022.2160151.
Abstract
Following the development of the Cognitive Bias Scale (CBS), three additional cognitive over-reporting indicators were created. This study cross-validates these Cognitive Bias Scale of Scales (CB-SOS) measures in a military sample and contrasts their performance with the CBS. We analyzed data from 288 active-duty soldiers who underwent neuropsychological evaluation. Groups were established based on performance validity test (PVT) failure. Medium effects (d = .71 to .74) were observed between those passing and failing PVTs. The CB-SOS scales showed high specificity (≥.90) but low sensitivity across the suggested cut scores. Although all CB-SOS scales could reach .90 specificity, lower cut scores than originally suggested were typically needed. The CBS demonstrated incremental validity beyond CB-SOS-1 and CB-SOS-3; only CB-SOS-2 was incremental beyond the CBS. In this military sample, the CB-SOS scales showed more limited sensitivity than in their original validation, indicating an area of limited utility despite their easier calculation. The CBS performs comparably to, if not better than, the CB-SOS scales. Differences in CB-SOS-2's performance between this study and its initial validation suggest that its psychometric properties may be sample dependent. Given their ease of calculation and relatively high specificity, our results support interpreting elevated CB-SOS scores as indicating examinees who are likely to fail concurrent PVTs.
Affiliation(s)
- Paul B. Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas, USA
- Dwight D. Eisenhower Veteran Affairs Medical Center, Eastern Kansas Veteran Healthcare System, Leavenworth, Kansas, USA
- Tristan T. Herring
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas, USA
- Cole S. Morris
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas, USA
2
Finley JCA, Brooks JM, Nili AN, Oh A, VanLandingham HB, Ovsiew GP, Ulrich DM, Resch ZJ, Soble JR. Multivariate examination of embedded indicators of performance validity for ADHD evaluations: A targeted approach. Appl Neuropsychol Adult 2023:1-14. PMID: 37703401; DOI: 10.1080/23279095.2023.2256440.
Abstract
This study investigated the individual and combined utility of 10 embedded validity indicators (EVIs) within executive functioning, attention/working memory, and processing speed measures in 585 adults referred for an attention-deficit/hyperactivity disorder (ADHD) evaluation. Participants were categorized into invalid and valid performance groups based on scores from empirical performance validity indicators. Analyses revealed that all 10 EVIs meaningfully discriminated invalid from valid performers (AUCs = .69-.78), with high specificity (≥90%) but low sensitivity (19%-51%). However, no single EVI explained more than 20% of the variance in validity status. Combining any of these 10 EVIs into a multivariate model significantly improved classification accuracy, explaining up to 36% of the variance in validity status. Integrating six EVIs from the Stroop Color and Word Test, Trail Making Test, Verbal Fluency Test, and Wechsler Adult Intelligence Scale-Fourth Edition was as efficacious (AUC = .86) as using all 10 EVIs together. Failing any two of these six EVIs, or any three of the 10, yielded clinically acceptable specificity (≥90%) with moderate sensitivity (60%). Findings support the use of multivariate models to improve the identification of performance invalidity in ADHD evaluations, although combining additional EVIs appears helpful only up to a point.
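The "fail any k of N" aggregation rule summarized above can be sketched in a few lines. This is an illustrative toy, not the study's scoring code: the indicator names and cutoffs are hypothetical placeholders, not published values.

```python
# Sketch of a multivariate "failure count" decision rule: an examinee is
# flagged as invalid when they fail at least k of the selected embedded
# validity indicators (EVIs). All names and cutoffs are hypothetical.

def evi_failures(scores, cutoffs):
    """Count how many EVIs fall at or below their failure cutoff."""
    return sum(1 for name, cut in cutoffs.items() if scores[name] <= cut)

def flag_invalid(scores, cutoffs, k=2):
    """Apply the 'fail any k of N' aggregation rule."""
    return evi_failures(scores, cutoffs) >= k

# Hypothetical examinee: T-scores on six EVIs with placeholder cutoffs
cutoffs = {"stroop_word": 35, "stroop_color": 35, "tmt_a": 34,
           "tmt_b": 34, "verbal_fluency": 33, "wais_ds": 6}
scores = {"stroop_word": 31, "stroop_color": 40, "tmt_a": 30,
          "tmt_b": 45, "verbal_fluency": 50, "wais_ds": 8}

print(evi_failures(scores, cutoffs))  # 2
print(flag_invalid(scores, cutoffs))  # True: meets the >=2-of-6 rule
```

The appeal of this rule in the abstract is that requiring multiple failures keeps specificity high while aggregation recovers sensitivity that individual EVIs lack.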
Affiliation(s)
- John-Christopher A Finley
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School, Chicago, IL, USA
- Julia M Brooks
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, University of Illinois at Chicago, Chicago, IL, USA
- Amanda N Nili
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Medical Social Sciences, Northwestern University Feinberg School, Chicago, IL, USA
- Alison Oh
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Hannah B VanLandingham
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Devin M Ulrich
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
3
Jennette KJ, Rhoads T, Resch ZJ, Cerny BM, Leib SI, Sharp DW, Ovsiew GP, Soble JR. Multivariable analysis of the relative utility and additive value of eight embedded performance validity tests for classifying invalid neuropsychological test performance. J Clin Exp Neuropsychol 2022; 44:451-460. PMID: 36197342; DOI: 10.1080/13803395.2022.2128067.
Abstract
INTRODUCTION This study investigated a combination of eight embedded performance validity tests (PVTs) derived from commonly administered neuropsychological tests to optimize sensitivity/specificity for detecting invalid neuropsychological test performance. The goal was to identify which combination of these common embedded PVTs has the most robust predictive power for detecting invalid performance in a single, diverse clinical sample. METHOD Eight previously validated memory- and nonmemory-based embedded PVTs were examined among 231 patients undergoing neuropsychological evaluation. Patients were classified into valid/invalid groups based on four independent criterion PVTs. Embedded PVT accuracy was assessed using standard and stepwise multiple logistic regression models. RESULTS Three PVTs, the Brief Visuospatial Memory Test-Revised Recognition Discrimination (BVMT-R RD), Rey Auditory Verbal Learning Test Forced Choice, and WAIS-IV Digit Span Age-Corrected Scaled Score, predicted 45.5% of the variance in validity group membership. BVMT-R RD independently accounted for 32% of the variance in predicting independent, criterion-defined validity group membership. CONCLUSIONS This study demonstrated the incremental predictive power of multiple embedded PVTs derived from common neuropsychological measures in detecting invalid test performance, and identified the measures accounting for the greatest portion of the variance. These results provide guidance on the most fruitful embedded PVTs and proof of concept to better guide selection of embedded validity indices. Further, they offer clinicians an efficient, empirically derived approach to assessing performance validity when time constraints limit the use of freestanding PVTs.
Affiliation(s)
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Sophie I Leib
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Dillon W Sharp
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
4
Boress K, Gaasedelen OJ, Croghan A, Johnson MK, Caraher K, Basso MR, Whiteside DM. Replication and cross-validation of the Personality Assessment Inventory (PAI) Cognitive Bias Scale (CBS) in a mixed clinical sample. Clin Neuropsychol 2022; 36:1860-1877. PMID: 33612093; PMCID: PMC8454137; DOI: 10.1080/13854046.2021.1889681.
Abstract
Objective: This study is a cross-validation of the Cognitive Bias Scale (CBS) from the Personality Assessment Inventory (PAI), a ten-item scale designed to assess symptom endorsement associated with performance validity test failure in neuropsychological samples. The study utilized a mixed neuropsychological sample of consecutively referred patients at a large academic medical center in the Midwest. Participants and Methods: Participants were 332 patients who completed embedded and free-standing performance validity tests (PVTs) and the PAI. Pass and fail groups were created based on PVT performance to evaluate the classification accuracy of the CBS. Results: The results were generally consistent with the initial study for overall classification accuracy, sensitivity, and cut-off score. Consistent with the validation study, the CBS had better classification accuracy than the original PAI validity scales and an effect size comparable to that obtained in the original validation publication; however, the Somatic Complaints scale (SOM) and the Conversion subscale (SOM-C) also demonstrated good classification accuracy. The CBS had incremental predictive ability relative to existing PAI scales. Conclusions: The results supported the CBS, but further research is needed in specific populations. Findings from the present study also suggest the relationship between conversion tendencies and PVT failure may be stronger in some geographic locations or population types (forensic versus clinical patients).
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Anna Croghan
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Marcie King Johnson
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Kristen Caraher
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Michael R. Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
- Douglas M. Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
5
Boress K, Gaasedelen OJ, Croghan A, Johnson MK, Caraher K, Basso MR, Whiteside DM. Validation of the Personality Assessment Inventory (PAI) scale of scales in a mixed clinical sample. Clin Neuropsychol 2022; 36:1844-1859. PMID: 33730975; PMCID: PMC8474121; DOI: 10.1080/13854046.2021.1900400.
Abstract
Objective: This exploratory study examined the classification accuracy of three derived scales aimed at detecting cognitive response bias in neuropsychological samples. The derived scales are composed of existing scales from the Personality Assessment Inventory (PAI). A mixed clinical sample of consecutive outpatients referred for neuropsychological assessment at a large Midwestern academic medical center was utilized. Participants and Methods: Participants included 332 patients who completed the study's embedded and free-standing performance validity tests (PVTs) and the PAI. PASS and FAIL groups were created based on PVT performance to evaluate the classification accuracy of the derived scales. Three new scales, Cognitive Bias Scale of Scales 1-3 (CB-SOS1-3), were derived by combining existing scales, either by summing the scales and dividing by the number of scales summed, or by logistically deriving a variable from the contributions of several scales. Results: All of the newly derived scales significantly differentiated between PASS and FAIL groups, and all demonstrated acceptable classification accuracy (CB-SOS1 AUC = 0.72; CB-SOS2 AUC = 0.73; CB-SOS3 AUC = 0.75). Conclusions: This exploratory study demonstrates that attending to scale-level PAI data may be a promising avenue for improving prediction of PVT failure.
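The two derivation strategies described above (a unit-weighted "scale of scales" versus a logistically derived composite) can be illustrated schematically. The scale names, weights, and intercept below are hypothetical placeholders, not the published CB-SOS parameters.

```python
# Minimal sketch of two ways to combine component scale T-scores into a
# composite: (1) sum and divide by the number of scales (unit weighting),
# (2) a logistic-regression-style weighted combination yielding a
# predicted probability of PVT failure. All numbers are hypothetical.
import math

def unit_weighted_sos(t_scores):
    """Average of component scale T-scores (sum / number of scales)."""
    return sum(t_scores.values()) / len(t_scores)

def logistic_sos(t_scores, weights, intercept):
    """Logistically derived composite: predicted probability of PVT failure."""
    z = intercept + sum(weights[k] * v for k, v in t_scores.items())
    return 1.0 / (1.0 + math.exp(-z))

t_scores = {"scale_a": 65, "scale_b": 70, "scale_c": 60}   # hypothetical T-scores
weights = {"scale_a": 0.04, "scale_b": 0.03, "scale_c": 0.02}  # hypothetical weights

print(unit_weighted_sos(t_scores))                      # 65.0
print(round(logistic_sos(t_scores, weights, -5.0), 3))  # 0.711
```

Unit weighting is trivially easy to compute by hand, which is the practical appeal the abstracts cite; the logistic variant trades that simplicity for data-driven weights.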
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Anna Croghan
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Marcie King Johnson
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Kristen Caraher
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Michael R. Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
- Douglas M. Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
6
Brantuo MA, An K, Biss RK, Ali S, Erdodi LA. Neurocognitive Profiles Associated With Limited English Proficiency in Cognitively Intact Adults. Arch Clin Neuropsychol 2022; 37:1579-1600. PMID: 35694764; DOI: 10.1093/arclin/acac019.
Abstract
OBJECTIVE The objective of the present study was to examine the neurocognitive profiles associated with limited English proficiency (LEP). METHOD A brief neuropsychological battery including measures with high verbal mediation (HVM) and low verbal mediation (LVM) was administered to 80 university students: 40 native speakers of English (NSEs) and 40 with LEP. RESULTS Consistent with previous research, individuals with LEP performed more poorly than NSEs on HVM measures and equivalently on LVM measures, with some notable exceptions. CONCLUSIONS Low scores on HVM tests should not be interpreted as evidence of acquired cognitive impairment in individuals with LEP, because these measures may systematically underestimate cognitive ability in this population. These findings have important clinical and educational implications.
Affiliation(s)
- Maame A Brantuo
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Kelly An
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Renee K Biss
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
7
Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing Performance Validity Cutoffs on the Boston Naming Test (BNT) Is Specific, but Insensitive to Non-Credible Responding. Dev Neuropsychol 2022; 47:17-31. PMID: 35157548; DOI: 10.1080/87565641.2022.2038602.
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
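The cutoff-scanning logic implied by phrases like "the first cutoff to achieve minimum specificity" can be sketched as follows. The scores below are fabricated toy data, not the study's; the point is only the mechanics of the scan.

```python
# Scan candidate cutoffs from liberal (high) to conservative (low) and keep
# the first whose specificity in the credible group meets a minimum
# (e.g., ~.90), then report the sensitivity it achieves in the
# non-credible group. A score at or below the cutoff counts as a failure.

def classification_stats(cutoff, credible, noncredible):
    specificity = sum(s > cutoff for s in credible) / len(credible)
    sensitivity = sum(s <= cutoff for s in noncredible) / len(noncredible)
    return specificity, sensitivity

def first_cutoff_meeting_specificity(credible, noncredible, min_spec=0.90):
    for cutoff in sorted(set(credible + noncredible), reverse=True):
        spec, sens = classification_stats(cutoff, credible, noncredible)
        if spec >= min_spec:
            return cutoff, spec, sens
    return None

credible = [50, 48, 45, 44, 42, 40, 39, 38, 36, 30]      # toy T-scores
noncredible = [36, 35, 34, 33, 32, 31, 30, 29, 28, 27]
cutoff, spec, sens = first_cutoff_meeting_specificity(credible, noncredible)
print(cutoff, spec, sens)  # 35 0.9 0.9
```

This mirrors the convention in the abstracts of fixing specificity first (to cap false positives in credible patients) and letting sensitivity fall where it may.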
Affiliation(s)
- Shayna Nussbaum
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson
- Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
8
Lace JW, Merz ZC, Galioto R. Nonmemory Composite Embedded Performance Validity Formulas in Patients with Multiple Sclerosis. Arch Clin Neuropsychol 2021; 37:309-321. PMID: 34467368; DOI: 10.1093/arclin/acab066.
Abstract
OBJECTIVE Research regarding performance validity tests (PVTs) in patients with multiple sclerosis (MS) is scant, and recommended neuropsychological batteries for this population lack suggestions to include PVTs. Moreover, limited work has examined embedded PVTs in this population. As previous investigations indicated that nonmemory-based embedded PVTs provide clinical utility in other populations, this study sought to determine whether a logistic regression-derived PVT formula could be identified from selected nonmemory variables in a sample of patients with MS. METHOD A total of 184 patients (M age = 48.45; 76.6% female) with MS were referred for neuropsychological assessment at a large, Midwestern academic medical center. Patients were placed into "credible" (n = 146) or "noncredible" (n = 38) groups according to performance on a standalone PVT. Missing data were imputed with HOTDECK. RESULTS Classification statistics for a variety of embedded PVTs were examined, with none appearing psychometrically appropriate in isolation (areas under the curve [AUCs] = .48-.64). Four exponentiated equations were created via logistic regression. The six-, five-, and three-predictor equations yielded acceptable discriminability (AUC = .71-.74) with modest sensitivity (.34-.39) while maintaining good specificity (≥.90). The two-predictor equation appeared unacceptable (AUC = .67). CONCLUSIONS Results suggest that multivariate combinations of embedded PVTs may provide some clinical utility while minimizing test burden in determining performance validity in patients with MS. Nonetheless, the authors recommend routine inclusion of several PVTs and comprehensive clinical judgment to maximize detection of noncredible performance and avoid incorrect conclusions. Clinical implications, limitations, and avenues for future research are discussed.
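A logistic-regression-derived composite of the kind described above is typically evaluated by the AUC of its predicted probabilities against validity-group labels. The sketch below uses fixed, fabricated coefficients and toy data (not the published equations) and computes AUC with the rank-based definition.

```python
# Score a fabricated two-predictor sample with hypothetical logistic
# coefficients, then compute AUC as the probability that a randomly chosen
# noncredible case receives a higher composite score than a credible one.
import math

def composite_probability(features, weights, intercept):
    z = intercept + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def auc(labels, scores):
    """Rank-based AUC; ties count as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Fabricated data: label 1 = noncredible, 0 = credible
data = [([30, 4], 1), ([32, 5], 1), ([28, 3], 1),
        ([45, 9], 0), ([50, 10], 0), ([40, 8], 0)]
weights, intercept = [-0.15, -0.5], 10.0   # hypothetical coefficients
scores = [composite_probability(f, weights, intercept) for f, _ in data]
labels = [l for _, l in data]
print(round(auc(labels, scores), 2))  # 1.0 for this cleanly separated toy set
```

Real clinical data overlap far more than this toy set, which is why the reported AUCs fall in the .67-.74 range rather than near 1.0.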
Affiliation(s)
- John W Lace
- Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA
- Zachary C Merz
- LeBauer Department of Neurology, The Moses H. Cone Memorial Hospital, Greensboro, NC, USA
- Rachel Galioto
- Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA
- Mellen Center for Multiple Sclerosis, Cleveland Clinic, Cleveland, OH, USA
9
Raffa G, Quattropani MC, Marzano G, Curcio A, Rizzo V, Sebestyén G, Tamás V, Büki A, Germanò A. Mapping and Preserving the Visuospatial Network by Repetitive nTMS and DTI Tractography in Patients With Right Parietal Lobe Tumors. Front Oncol 2021; 11:677172. PMID: 34249716; PMCID: PMC8268025; DOI: 10.3389/fonc.2021.677172.
Abstract
INTRODUCTION The goal of brain tumor surgery is maximal resection of neoplastic tissue while preserving the adjacent functional brain tissue. Identifying functional networks involved in complex brain functions, including visuospatial abilities (VSAs), is usually difficult. We report our preliminary experience with preoperative planning based on the combination of navigated transcranial magnetic stimulation (nTMS) and DTI tractography to provide a preoperative 3D reconstruction of the visuospatial (VS) cortico-subcortical network in patients with right parietal lobe tumors. MATERIALS AND METHODS Patients affected by right parietal lobe tumors underwent mapping of both hemispheres using an nTMS-implemented version of the Hooper Visual Organization Test (HVOT) to identify cortical areas involved in the VS network. DTI tractography was used to compute the subcortical component of the network, consisting of the three branches of the superior longitudinal fasciculus (SLF). The 3D reconstruction of the VS network was used to plan and guide the safest surgical approach for resecting the tumor while avoiding damage to the network. We retrospectively analyzed the cortical distribution of nTMS-induced errors and assessed the impact of the planning on surgery by analyzing the extent of tumor resection (EOR) and the occurrence of postoperative VSA deficits, in comparison with a matched historical control group of patients operated on without the nTMS-based preoperative reconstruction of the VS network. RESULTS Twenty patients were enrolled in the study (Group A). The error rate (ER) induced by nTMS was higher in the right than in the left hemisphere (p = 0.02). In the right hemisphere, the ER was highest in the anterior supramarginal gyrus (aSMG) (1.7%), angular gyrus (1.4%), superior parietal lobule (SPL) (1.3%), and dorsal lateral occipital gyrus (dLoG) (1.2%). The reconstruction of the cortico-subcortical VS network was successfully used to plan and guide tumor resection. Gross total resection (GTR) was achieved in 85% of cases. After surgery, no new VSA deficits were observed, and a slight but significant improvement of the HVOT score (p = 0.02) was documented. The historical control group (Group B) included 20 patients, matched on the main clinical characteristics with Group A, operated on without the support of nTMS-based planning. GTR was achieved in 90% of cases, but the postoperative HVOT score was worse than in the preoperative period (p = 0.03). The comparison between groups showed a significantly better postoperative HVOT score in Group A than in Group B (p = 0.03). CONCLUSIONS The nTMS-implemented HVOT is a feasible approach for mapping cortical areas involved in VSAs. Combined with DTI tractography, it provides a reconstruction of the VS network that can guide neurosurgeons in preserving the network during tumor resection, reducing the occurrence of postoperative VSA deficits compared with standard asleep surgery.
Affiliation(s)
- Giovanni Raffa
- Division of Neurosurgery, BIOMORF Department, University of Messina, Messina, Italy
- Giuseppina Marzano
- Department of Clinical and Experimental Medicine, University of Messina, Messina, Italy
- Antonello Curcio
- Division of Neurosurgery, BIOMORF Department, University of Messina, Messina, Italy
- Vincenzo Rizzo
- Division of Neurology, Department of Clinical and Experimental Medicine, University of Messina, Messina, Italy
- Gabriella Sebestyén
- Department of Neurosurgery, Medical School, University of Pécs, Pécs, Hungary
- Viktória Tamás
- Department of Neurosurgery, Medical School, University of Pécs, Pécs, Hungary
- András Büki
- Department of Neurosurgery, Medical School, University of Pécs, Pécs, Hungary
- Antonino Germanò
- Division of Neurosurgery, BIOMORF Department, University of Messina, Messina, Italy
10
Abeare K, Romero K, Cutler L, Sirianni CD, Erdodi LA. Flipping the Script: Measuring Both Performance Validity and Cognitive Ability with the Forced Choice Recognition Trial of the RCFT. Percept Mot Skills 2021; 128:1373-1408. PMID: 34024205; PMCID: PMC8267081; DOI: 10.1177/00315125211019704.
Abstract
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFTFCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 & ≤17), the RCFTFCR remained specific (.84-1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFTFCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFTFCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
Affiliation(s)
- Kaitlyn Abeare
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Kristoffer Romero
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
11
Identifying Novel Embedded Performance Validity Test Formulas Within the Repeatable Battery for the Assessment of Neuropsychological Status: a Simulation Study. Psychol Inj Law 2020. DOI: 10.1007/s12207-020-09382-x.
12
Domen CH, Greher MR, Hosokawa PW, Barnes SL, Hoyt BD, Wodushek TR. Are Established Embedded Performance Validity Test Cut-Offs Generalizable to Patients With Multiple Sclerosis? Arch Clin Neuropsychol 2020; 35:511-516. DOI: 10.1093/arclin/acaa016.
Abstract
Objective
Data for the use of embedded performance validity tests (ePVTs) with multiple sclerosis (MS) patients are limited. The purpose of the current study was to determine whether ePVTs previously validated in other neurological samples perform similarly in an MS sample.
Methods
In this retrospective study, the prevalence of below-criterion responding at different cut-off scores was calculated for each ePVT of interest among patients with MS who passed a stand-alone PVT.
Results
Previously established PVT cut-offs generally demonstrated acceptable specificity when applied to our sample. However, the overall cognitive burden of the sample was limited relative to that observed in prior large-scale MS studies.
Conclusion
The current study provides initial data regarding the performance of select ePVTs among an MS sample. Results indicate most previously validated cut-offs avoid excessive false positive errors in a predominantly relapsing remitting MS sample. Further validation among MS patients with more advanced disease is warranted.
Affiliation(s)
- Christopher H Domen
- Department of Neurosurgery, University of Colorado School of Medicine, Aurora, CO, USA
- Michael R Greher
- Department of Neurosurgery, University of Colorado School of Medicine, Aurora, CO, USA
- Sierra L Barnes
- Neurosciences, University of Colorado Health, Aurora, CO, USA
- Brian D Hoyt
- Department of Neurosurgery, University of Colorado School of Medicine, Aurora, CO, USA
- Thomas R Wodushek
- Department of Neurosurgery, University of Colorado School of Medicine, Aurora, CO, USA
14
Abeare C, Sabelli A, Taylor B, Holcomb M, Dumitrescu C, Kirsch N, Erdodi L. The Importance of Demographically Adjusted Cutoffs: Age and Education Bias in Raw Score Cutoffs Within the Trail Making Test. Psychol Inj Law 2019. DOI: 10.1007/s12207-019-09353-x.
15
Gaasedelen OJ, Whiteside DM, Altmaier E, Welch C, Basso MR. The construction and the initial validation of the Cognitive Bias Scale for the Personality Assessment Inventory. Clin Neuropsychol 2019; 33:1467-1484. DOI: 10.1080/13854046.2019.1612947.
Affiliation(s)
- Owen J. Gaasedelen
- Department of Psychological and Quantitative Foundations, University of Iowa, Iowa City, IA, USA
- New Mexico VA Health Care System, Albuquerque, NM, USA
- Douglas M. Whiteside
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Elizabeth Altmaier
- Department of Psychological and Quantitative Foundations, University of Iowa, Iowa City, IA, USA
- Catherine Welch
- Department of Psychological and Quantitative Foundations, University of Iowa, Iowa City, IA, USA
16
Poynter K, Boone KB, Ermshar A, Miora D, Cottingham M, Victor TL, Ziegler E, Zeller MA, Wright M. Wait, There’s a Baby in this Bath Water! Update on Quantitative and Qualitative Cut-Offs for Rey 15-Item Recall and Recognition. Arch Clin Neuropsychol 2018; 34:1367-1380. [DOI: 10.1093/arclin/acy087] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2018] [Revised: 10/10/2018] [Accepted: 10/17/2018] [Indexed: 11/14/2022] Open
Abstract
Objective
Evaluate the effectiveness of Rey 15-item plus recognition data in a large neuropsychological sample.
Method
Rey 15-item plus recognition scores were compared in credible (n = 138) and noncredible (n = 353) neuropsychology referrals.
Results
Noncredible patients scored significantly worse than credible patients on all Rey 15-item plus recognition scores. Cut-offs could be made more stringent while maintaining at least 89.9% specificity, with the highest sensitivity found for recognition correct (cut-off ≤11; 62.6% sensitivity) and the combination score (recall + recognition − false positives; cut-off ≤22; 60.6% sensitivity), followed by recall correct (cut-off ≤11; 49.3% sensitivity) and recognition false-positive errors (≥3; 17.9% sensitivity). A cut-off of ≥4 applied to a summed qualitative error score for the recall trial resulted in 19.4% sensitivity. Approximately 10% of credible subjects failed either recall correct or recognition correct, whereas two-thirds of noncredible patients (67.7%) showed this pattern. Thirteen percent of credible patients failed at least one of recall correct, recognition correct, or the recall qualitative error score, whereas nearly 70% of noncredible patients failed at least one of the three. Some individual qualitative recognition errors had low false-positive rates (<2%), indicating that their presence was virtually pathognomonic for noncredible performance. Older age (>50) and IQ < 80 were associated with increased false-positive rates in credible patients.
Conclusions
Data on a larger sample than that available in the 2002 validation study show that Rey 15-item plus recognition cut-offs can be made more stringent, and thereby detect up to 70% of noncredible test takers, but the test should be used cautiously in older individuals and in individuals with lowered IQ.
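The cut-off logic in the abstract above (flag a record as noncredible when a score falls at or below a cut-off, then report hit rates within each group) can be sketched as follows. This is a minimal illustration; the function names and score lists are invented for the example and are not study data.

```python
# Minimal sketch of sensitivity/specificity at a cut-off such as
# "recognition correct <= 11". Scores below are invented, not study data.
def classify(scores, cutoff):
    """Flag each score as noncredible when it falls at or below the cut-off."""
    return [s <= cutoff for s in scores]

def sensitivity(noncredible_scores, cutoff):
    # Proportion of the noncredible group correctly flagged.
    flags = classify(noncredible_scores, cutoff)
    return sum(flags) / len(flags)

def specificity(credible_scores, cutoff):
    # Proportion of the credible group correctly left unflagged.
    flags = classify(credible_scores, cutoff)
    return 1 - sum(flags) / len(flags)

credible = [14, 15, 13, 15, 12, 14, 15, 15, 13, 14]      # hypothetical
noncredible = [9, 11, 7, 12, 10, 8, 11, 13, 6, 10]       # hypothetical
print(sensitivity(noncredible, 11))
print(specificity(credible, 11))
```

Making a cut-off "more stringent" in this framework means lowering it, which trades sensitivity for specificity; the study's contribution is showing where that trade-off sits in a larger sample.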
Affiliation(s)
- Kellie Poynter
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Kyle Brauer Boone
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Annette Ermshar
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Deborah Miora
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Maria Cottingham
- Mental Health Care Line, Veterans Administration Tennessee Valley Healthcare System, Nashville, TN, USA
- Tara L Victor
- California State University, Dominguez Hills, Carson, CA, USA
- Michelle A Zeller
- West Los Angeles Veterans Administration Medical Center, Los Angeles, CA, USA
17
Whiteside DM, Caraher K, Hahn-Ketter A, Gaasedelen O, Basso MR. Classification accuracy of individual and combined executive functioning embedded performance validity measures in mild traumatic brain injury. APPLIED NEUROPSYCHOLOGY-ADULT 2018. [DOI: 10.1080/23279095.2018.1443935] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Affiliation(s)
- Kristen Caraher
- Department of Psychiatry, University of Iowa, Iowa City, Iowa, USA
- Amanda Hahn-Ketter
- Department of Rehabilitation Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Owen Gaasedelen
- Department of Psychiatry, University of Iowa, Iowa City, Iowa, USA
18
Persinger VC, Whiteside DM, Bobova L, Saigal SD, Vannucci MJ, Basso MR. Using the California Verbal Learning Test, Second Edition as an embedded performance validity measure among individuals with TBI and individuals with psychiatric disorders. Clin Neuropsychol 2017; 32:1039-1053. [DOI: 10.1080/13854046.2017.1419507] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Virginia C. Persinger
- Department of Neuropsychology, Methodist Rehabilitation Center, Jackson, MS, USA
- Department of Clinical Psychology, Adler University, Chicago, IL, USA
- Lyuba Bobova
- Department of Clinical Psychology, Adler University, Chicago, IL, USA
- Seema D. Saigal
- Department of Clinical Psychology, Adler University, Chicago, IL, USA
- Marla J. Vannucci
- Department of Clinical Psychology, Adler University, Chicago, IL, USA
19
Lau L, Basso MR, Estevis E, Miller A, Whiteside DM, Combs D, Arentsen TJ. Detecting coached neuropsychological dysfunction: a simulation experiment regarding mild traumatic brain injury. Clin Neuropsychol 2017; 31:1412-1431. [DOI: 10.1080/13854046.2017.1318954] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Lily Lau
- Department of Psychology, University of Tulsa, Tulsa, OK, USA
- Eduardo Estevis
- Department of Psychology, University of Tulsa, Tulsa, OK, USA
- Ashley Miller
- Department of Psychology, University of Tulsa, Tulsa, OK, USA
- Dennis Combs
- Department of Psychology, University of Texas at Tyler, Tyler, TX, USA
20
Gaasedelen OJ, Whiteside DM, Basso M. Exploring the sensitivity of the Personality Assessment Inventory symptom validity tests in detecting response bias in a mixed neuropsychological outpatient sample. Clin Neuropsychol 2017; 31:844-856. [PMID: 28391774 DOI: 10.1080/13854046.2017.1312700] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
OBJECTIVE Few studies have evaluated the symptom validity tests (SVTs) within the Personality Assessment Inventory (PAI) in a neuropsychological assessment context. Accordingly, the present study explored the accuracy of PAI SVTs in identifying exaggerated cognitive dysfunction in a mixed sample of outpatients referred for neuropsychological assessment. METHOD Participants who failed two or more performance validity tests (PVTs) were classified as having exaggerated cognitive dysfunction (n = 49). Their responses on PAI SVTs were compared to those of examinees who did not fail PVTs (n = 257). RESULTS Multivariate analysis of variance indicated that the Negative Impression Management (NIM) scale most strongly discriminated those with exaggerated cognitive dysfunction from honest responders (Cohen's d = .58). Nonetheless, its classification accuracy was low (area under the curve [AUC] = .65). A k-means cluster analysis and a subsequent multinomial logistic regression indicated evidence for two distinct groups of exaggerators: one group seemed to exaggerate symptoms, whereas another presented in a defensive manner, implying that individuals with either positive or negative impression management biases on the PAI were apt to display invalid performance on PVTs. CONCLUSIONS Findings indicated that exaggerated cognitive dysfunction tends to be present when NIM is very high and that evidence exists for a defensive response style on the PAI in the context of PVT failure.
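The AUC statistic reported above can be read as the probability that a randomly drawn member of the exaggerating group scores higher on the scale than a randomly drawn honest responder (ties counting one half). A minimal sketch of that computation, using invented T-scores rather than study data:

```python
# Sketch of the rank-based reading of AUC: P(score of a random "positive"
# case exceeds score of a random "negative" case), ties counted as 0.5.
# The score lists are invented for illustration, not study data.
def auc(group_pos, group_neg):
    pairs = [(p, n) for p in group_pos for n in group_neg]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

exaggerators = [78, 85, 90, 72, 95]   # hypothetical NIM T-scores
honest = [55, 60, 72, 50, 65]         # hypothetical NIM T-scores
print(auc(exaggerators, honest))
```

An AUC of .65, as reported for NIM, means the scale orders a random exaggerator above a random honest responder only about two times in three, which is why the abstract calls its classification accuracy low despite the significant group difference.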
Affiliation(s)
- Owen J Gaasedelen
- Department of Psychological and Quantitative Foundations, Counseling Psychology, The University of Iowa, Iowa City, IA, USA
- Michael Basso
- Department of Psychology, University of Tulsa, Tulsa, OK, USA
21
Whiteside DM, Gaasedelen OJ, Hahn-Ketter AE, Luu H, Miller ML, Persinger V, Rice L, Basso MR. Derivation of a Cross-Domain Embedded Performance Validity Measure in Traumatic Brain Injury. Clin Neuropsychol 2015; 29:788-803. [DOI: 10.1080/13854046.2015.1093660] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
22
Martin PK, Schroeder RW, Odland AP. Neuropsychologists’ Validity Testing Beliefs and Practices: A Survey of North American Professionals. Clin Neuropsychol 2015; 29:741-76. [DOI: 10.1080/13854046.2015.1087597] [Citation(s) in RCA: 189] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
23
24
Sugarman MA, Holcomb EM, Axelrod BN, Meyers JE, Liethen PC. Embedded Measures of Performance Validity in the Rey Complex Figure Test in a Clinical Sample of Veterans. APPLIED NEUROPSYCHOLOGY-ADULT 2015; 23:105-14. [DOI: 10.1080/23279095.2015.1014557] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- Michael A. Sugarman
- John D. Dingell Veterans Affairs Medical Center, Detroit, Michigan
- Department of Psychology, Wayne State University, Detroit, Michigan
- Erin M. Holcomb
- Department of Psychology, James A. Haley Veterans' Hospital, Tampa, Florida
- Philip C. Liethen
- Department of Psychology, Henry Ford Health System, Detroit, Michigan
25
Odland AP, Lammy AB, Martin PK, Grote CL, Mittenberg W. Advanced Administration and Interpretation of Multiple Validity Tests. PSYCHOLOGICAL INJURY & LAW 2015. [DOI: 10.1007/s12207-015-9216-4] [Citation(s) in RCA: 52] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
26
Whiteside DM, Kogan J, Wardin L, Phillips D, Franzwa MG, Rice L, Basso M, Roper B. Language-based embedded performance validity measures in traumatic brain injury. J Clin Exp Neuropsychol 2015; 37:220-7. [DOI: 10.1080/13803395.2014.1002758] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
27
Axelrod BN, Meyers JE, Davis JJ. Finger Tapping Test Performance as a Measure of Performance Validity. Clin Neuropsychol 2014; 28:876-88. [DOI: 10.1080/13854046.2014.907583] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
28
Denning JH. The Efficiency and Accuracy of The Test of Memory Malingering Trial 1, Errors on the First 10 Items of The Test of Memory Malingering, and Five Embedded Measures in Predicting Invalid Test Performance. Arch Clin Neuropsychol 2012; 27:417-32. [DOI: 10.1093/arclin/acs044] [Citation(s) in RCA: 129] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
29
Busse M, Whiteside D. Detecting suboptimal cognitive effort: classification accuracy of the Conner's Continuous Performance Test-II, Brief Test Of Attention, and Trail Making Test. Clin Neuropsychol 2012; 26:675-87. [PMID: 22533714 DOI: 10.1080/13854046.2012.679623] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
Many cognitive measures have been studied for their ability to detect suboptimal cognitive effort; however, attention measures have not been extensively researched. The current study evaluated the classification accuracy of commonly used attention/concentration measures: the Brief Test of Attention (BTA), the Trail Making Test (TMT), and the Conners' Continuous Performance Test (CPT-II). Participants included 413 consecutive patients who completed a comprehensive neuropsychological evaluation. Participants were separated into two groups, identified as either unbiased or biased responders as determined by performance on the TOMM. Based on Mann-Whitney U results, the two groups differed significantly on all attentional measures. Classification accuracy of the BTA (.83), CPT-II omission errors (OE; .76), and TMT B (.75) was acceptable; however, classification accuracy of CPT-II commission errors (CE; .64) and TMT A (.62) was poor. When variables were combined, sensitivity did not significantly increase. At optimal cut-off scores, sensitivity ranged from 48% to 64% when specificity was at least 85%. Given that sensitivity rates were not adequate, there remains a need to utilize highly sensitive measures in addition to these embedded measures. Results are discussed within the context of research promoting the need for multiple measures of cognitive effort.
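The multi-measure logic this literature discusses (flag a protocol that fails any one of several embedded cut-offs, which raises sensitivity at the cost of specificity) can be sketched as follows. The measure names and cut-off values here are illustrative assumptions, not the study's exact values.

```python
# Sketch of "fail any embedded cut-off" flagging. Cut-off directions differ
# by measure: low BTA scores are worse, but high TMT times and high CPT
# omission counts are worse. All cut-off values below are hypothetical.
CUTOFFS = {
    "BTA": ("<=", 13),            # fail when score <= 13
    "TMT_B_seconds": (">=", 110), # fail when completion time >= 110 s
    "CPT_omissions": (">=", 6),   # fail when omission errors >= 6
}

def fails(measure, score):
    op, cut = CUTOFFS[measure]
    return score <= cut if op == "<=" else score >= cut

def flagged(record):
    """True if the record fails at least one embedded cut-off."""
    return any(fails(m, s) for m, s in record.items())

print(flagged({"BTA": 17, "TMT_B_seconds": 85, "CPT_omissions": 2}))  # passes all
print(flagged({"BTA": 12, "TMT_B_seconds": 95, "CPT_omissions": 3}))  # fails BTA
```

Because each additional measure gives credible patients another chance to be misclassified, the study's finding that combinations did not meaningfully raise sensitivity is consistent with its recommendation to pair embedded indices with dedicated, highly sensitive PVTs rather than simply stacking cut-offs.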
Affiliation(s)
- Michelle Busse
- Washington School of Professional Psychology, Seattle, WA 98121, USA