1. Roor JJ, Peters MJV, Dandachi-FitzGerald B, Ponds RWHM. Performance Validity Test Failure in the Clinical Population: A Systematic Review and Meta-Analysis of Prevalence Rates. Neuropsychol Rev 2024; 34:299-319. [PMID: 36872398; PMCID: PMC10920461; DOI: 10.1007/s11065-023-09582-7]
Abstract
Performance validity tests (PVTs) are used to measure the validity of obtained neuropsychological test data. However, when an individual fails a PVT, the likelihood that the failure truly reflects invalid performance (i.e., the positive predictive value) depends on the base rate in the context in which the assessment takes place. Accurate base rate information is therefore needed to guide interpretation of PVT performance. This systematic review and meta-analysis examined the base rate of PVT failure in the clinical population (PROSPERO number: CRD42020164128). PubMed/MEDLINE, Web of Science, and PsycINFO were searched to identify articles published up to November 5, 2021. The main eligibility criteria were a clinical evaluation context and the use of stand-alone, well-validated PVTs. Of the 457 articles scrutinized for eligibility, 47 were selected for systematic review and meta-analysis. The pooled base rate of PVT failure across all included studies was 16%, 95% CI [14, 19]. Heterogeneity among these studies was high (Cochran's Q = 697.97, p < .001; I² = 91%; τ² = 0.08). Subgroup analysis indicated that pooled PVT failure rates varied with clinical context, presence of external incentives, clinical diagnosis, and the PVT used. Our findings can be used to calculate clinically applied statistics (i.e., positive and negative predictive values, and likelihood ratios) that increase the diagnostic accuracy of performance validity determination in clinical evaluation. Future research with more detailed recruitment procedures and sample descriptions is needed to further improve the accuracy of the base rate of PVT failure in clinical practice.
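The positive and negative predictive values that the abstract says can be derived from its base rate follow directly from Bayes' theorem. A minimal sketch using the pooled 16% base rate reported above; the sensitivity and specificity values plugged in are illustrative assumptions, not figures from the review:

```python
def predictive_values(base_rate, sensitivity, specificity):
    """Return (PPV, NPV) of a test given the base rate of invalid performance."""
    tp = base_rate * sensitivity              # true positives
    fp = (1 - base_rate) * (1 - specificity)  # false positives
    fn = base_rate * (1 - sensitivity)        # false negatives
    tn = (1 - base_rate) * specificity        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Pooled base rate of PVT failure from the review: 16%.
# Sensitivity/specificity below are assumed values for illustration only.
ppv, npv = predictive_values(base_rate=0.16, sensitivity=0.70, specificity=0.90)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```

In this illustration the PPV lands well below the test's specificity, which is the abstract's point: the same PVT failure carries very different evidential weight at different base rates.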
Affiliation(s)
- Jeroen J Roor: Department of Medical Psychology, VieCuri Medical Center, Venlo, The Netherlands; School for Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Maarten J V Peters: Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Brechje Dandachi-FitzGerald: Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Faculty of Psychology, Open University, Heerlen, The Netherlands
- Rudolf W H M Ponds: School for Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands; Department of Medical Psychology, Amsterdam University Medical Centres, location VU, Amsterdam, The Netherlands
2. Boress K, Gaasedelen O, Kim JH, Basso MR, Whiteside DM. Examination of the relationship between symptom and performance validity measures across referral subtypes. J Clin Exp Neuropsychol 2024; 46:162-171. [PMID: 37791494; DOI: 10.1080/13803395.2023.2261633]
Abstract
INTRODUCTION: The extent to which performance validity tests (PVTs) and symptom validity tests (SVTs) measure separate constructs is unclear. Prior research using the Minnesota Multiphasic Personality Inventory (MMPI-2 and MMPI-2-RF) suggested that PVTs and SVTs are separate but related constructs. However, the relationship between Personality Assessment Inventory (PAI) SVTs and PVTs has not been explored. This study aimed to replicate previous MMPI research using the PAI, exploring the relationship between PVTs and overreporting SVTs across three subsamples: neurodevelopmental (attention-deficit/hyperactivity disorder (ADHD)/learning disorder), psychiatric, and mild traumatic brain injury (mTBI).
METHODS: Participants included 561 consecutive referrals who completed the Test of Memory Malingering (TOMM) and the PAI. Three subgroups were created based on referral question. The relationship between PAI SVTs and the PVT was evaluated through multiple regression analysis.
RESULTS: The relationship between PAI symptom overreporting SVTs, including Negative Impression Management (NIM), the Malingering Index (MAL), and the Cognitive Bias Scale (CBS), and PVT performance varied by referral subgroup. Specifically, overreporting on the CBS, but not NIM or MAL, significantly predicted poorer PVT performance in the full sample and the mTBI sample. In contrast, none of the overreporting SVTs significantly predicted PVT performance in the ADHD/learning disorder sample, whereas all of them did in the psychiatric sample.
CONCLUSIONS: The results partially replicated prior research comparing SVTs and PVTs and suggest that the constructs measured by SVTs and PVTs vary by population. The results support the necessity of both PVTs and SVTs in clinical neuropsychological practice.
Affiliation(s)
- Kaley Boress: Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Jeong Hye Kim: Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Douglas M Whiteside: Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
3. Finley JCA, Brooks JM, Nili AN, Oh A, VanLandingham HB, Ovsiew GP, Ulrich DM, Resch ZJ, Soble JR. Multivariate examination of embedded indicators of performance validity for ADHD evaluations: A targeted approach. Appl Neuropsychol Adult 2023:1-14. [PMID: 37703401; DOI: 10.1080/23279095.2023.2256440]
Abstract
This study investigated the individual and combined utility of 10 embedded validity indicators (EVIs) within executive functioning, attention/working memory, and processing speed measures in 585 adults referred for an attention-deficit/hyperactivity disorder (ADHD) evaluation. Participants were categorized into invalid and valid performance groups as determined by scores from empirical performance validity indicators. Analyses revealed that all of the EVIs could meaningfully discriminate invalid from valid performers (AUCs = .69-.78), with high specificity (≥90%) but low sensitivity (19%-51%). However, none of them explained more than 20% of the variance in validity status. Combining any of these 10 EVIs into a multivariate model significantly improved classification accuracy, explaining up to 36% of the variance in validity status. Integrating six EVIs from the Stroop Color and Word Test, Trail Making Test, Verbal Fluency Test, and Wechsler Adult Intelligence Scale-Fourth Edition was as efficacious (AUC = .86) as using all 10 EVIs together. Failing any two of these six EVIs or any three of the 10 EVIs yielded clinically acceptable specificity (≥90%) with moderate sensitivity (60%). Findings support the use of multivariate models to improve the identification of performance invalidity in ADHD evaluations, but chaining multiple EVIs may only be helpful to an extent.
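The "fail any two of six EVIs" decision rule described above is a k-of-n aggregation over binary pass/fail indicators, scored against criterion group membership. A minimal sketch on synthetic data; the failure matrix, group labels, and function names are invented for illustration and are not the study's data:

```python
def k_of_n_rule(failure_rows, k):
    """Flag a case invalid when it fails at least k embedded validity indicators (EVIs)."""
    return [sum(row) >= k for row in failure_rows]

def sensitivity_specificity(flagged, invalid):
    """Hit rate among criterion-invalid cases and correct-rejection rate among valid cases."""
    hits = sum(f for f, inv in zip(flagged, invalid) if inv)
    rejections = sum(not f for f, inv in zip(flagged, invalid) if not inv)
    n_invalid = sum(invalid)
    return hits / n_invalid, rejections / (len(invalid) - n_invalid)

# Synthetic illustration: 6 EVIs, 6 cases, first three criterion-invalid. 1 = EVI failed.
failures = [
    [1, 1, 0, 1, 0, 0],  # invalid, fails 3 EVIs -> flagged
    [1, 1, 0, 0, 0, 0],  # invalid, fails 2 -> flagged
    [0, 1, 0, 0, 0, 0],  # invalid, fails only 1 -> missed by the rule
    [0, 0, 0, 0, 0, 0],  # valid
    [1, 0, 0, 0, 0, 0],  # valid; one isolated failure is tolerated
    [0, 0, 0, 0, 1, 0],  # valid
]
invalid = [True, True, True, False, False, False]

flagged = k_of_n_rule(failures, k=2)
sens, spec = sensitivity_specificity(flagged, invalid)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

Requiring at least two failures is what buys the high specificity reported above: a single isolated EVI failure, common among credible examinees, no longer triggers the flag.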
Affiliation(s)
- John-Christopher A Finley: Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Julia M Brooks: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, University of Illinois at Chicago, Chicago, IL, USA
- Amanda N Nili: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Medical Social Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Alison Oh: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Hannah B VanLandingham: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Gabriel P Ovsiew: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Devin M Ulrich: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Zachary J Resch: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble: Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
4. Bütz MR, English JV, Meyers JE, Cohen LJ. Threats to the integrity of psychological assessment: The misuse of test raw data and materials. Appl Neuropsychol Adult 2023:1-20. [PMID: 37573544; DOI: 10.1080/23279095.2023.2241094]
Abstract
In the practice of psychological assessment, the American Psychological Association (APA), the National Academy of Neuropsychology (NAN), other associations, and test vendors have warned for decades against the disclosure of test raw data and test materials. Psychological assessment occurs across several different practice environments, and test raw data are a particularly sensitive aspect of practice given what they implicitly represent about a client/patient, a concept further developed in this paper. Test materials are often intellectual property protected by copyrights and user agreements. It follows that improper management of the release of test raw data and test materials threatens the scientific integrity of psychological assessment. Here, the matters of test raw data, test materials, and different practice environments are addressed to highlight the challenges involved in improper releases and to offer guidance concerning good-faith efforts to preserve the integrity of psychological assessment and legal agreements. The unique demands of forensic practice are also discussed, including attorneys' needs for cross-examination and discovery, which may place psychologists (and other duly vetted evaluators) in conflict with their commitment to professional ethical codes and legal agreements. Important threats to the proper use of test raw data and test materials include uninformed professionals and compromised evaluators. The paper reviews the mishandling of test raw data and materials by both psychologists and other evaluators, provides representative case examples, including examples from the literature, discusses pertinent case law, and offers practical stepwise conflict resolutions.
Affiliation(s)
- Michael R Bütz: Aspen Practice, P.C. and Intermountain Healthcare, Billings, MT, USA
- John E Meyers: Meyers Neuropsychological Services, Clermont, FL, USA
5. Vizgaitis AL, Bottini S, Polizzi CP, Barden E, Krantweiss AR. Self-Reported Adult ADHD Symptoms: Evidence Supporting Cautious Use in an Assessment-Seeking Sample. J Atten Disord 2023:10870547231172764. [PMID: 37158158; DOI: 10.1177/10870547231172764]
Abstract
OBJECTIVE: Self-report symptom inventories are commonly used in adult ADHD assessment, and research indicates they should be interpreted with caution. This study investigated one self-report symptom inventory for adult ADHD in a clinical sample.
METHOD: Archival data were used to evaluate the diagnostic utility of the Conners Adult ADHD Rating Scale-Self-Report: Long Version (CAARS-S:L) in a sample of 122 adults seeking ADHD assessment.
RESULTS: Overall, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) estimates for the ADHD Index and other CAARS-S:L scales demonstrated weak accuracy. Anxiety and depression were the most common diagnoses present when a false positive on the ADHD Index was observed. PPV and specificity for the ADHD Index were higher in males than in females.
CONCLUSION: The CAARS-S:L may be useful for screening purposes in some cases but should not be the main method used for diagnostic purposes. Clinical implications of the findings are discussed.
Affiliation(s)
- Eileen Barden: Boston University School of Medicine, MA, USA; State University of New York at Binghamton, NY, USA
6. Robinson A, Reed C, Davis K, Divers R, Miller L, Erdodi LA, Calamia M. Settling the Score: Can CPT-3 Embedded Validity Indicators Distinguish Between Credible and Non-Credible Responders Referred for ADHD and/or SLD? J Atten Disord 2023; 27:80-88. [PMID: 36113024; DOI: 10.1177/10870547221121781]
Abstract
OBJECTIVE: The purpose of the present study was to further investigate the clinical utility of individual and composite indicators within the CPT-3 as embedded validity indicators (EVIs), given the discrepant findings of previous investigations.
METHODS: A total of 201 adults undergoing psychoeducational evaluation for ADHD and/or Specific Learning Disorder (SLD) were divided into credible (n = 159) and non-credible (n = 42) groups based on five criterion measures.
RESULTS: Receiver operating characteristic (ROC) curves revealed that 5/9 individual indicators and 2/4 composite indicators met the minimally acceptable classification accuracy of ≥0.70 (AUCs = 0.43-0.78). Individual indicators (0.16-0.45) and composite indicators (0.23-0.35) demonstrated low sensitivity at cutoffs that maintained specificity ≥90%.
CONCLUSION: Given the lack of stability across studies, further research is needed before any specific cutoff can be recommended for clinical practice with individuals seeking psychoeducational assessment.
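Cutoffs that "maintain specificity ≥90%", as in the results above, are typically found by scanning candidate thresholds and keeping the most sensitive one that still meets the specificity floor. A minimal sketch on synthetic scores, assuming higher scores indicate non-credible responding; all values and names are invented for illustration:

```python
def cutoff_for_specificity(credible, noncredible, min_spec=0.90):
    """Scan candidate cutoffs (score >= cutoff flags non-credible) and return the
    (cutoff, sensitivity, specificity) with the highest sensitivity among those
    keeping specificity at or above min_spec."""
    best = None
    for cut in sorted(set(credible + noncredible)):
        spec = sum(s < cut for s in credible) / len(credible)      # valid cases below cutoff
        sens = sum(s >= cut for s in noncredible) / len(noncredible)  # invalid cases flagged
        if spec >= min_spec and (best is None or sens > best[1]):
            best = (cut, sens, spec)
    return best

# Synthetic scores for illustration only.
credible = [50, 52, 55, 56, 58, 60, 61, 63, 65, 70]
noncredible = [62, 68, 72, 75, 80]

cut, sens, spec = cutoff_for_specificity(credible, noncredible)
print(f"cutoff >= {cut}: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

Fixing the specificity floor first and accepting whatever sensitivity remains mirrors the field convention the abstract reports: false positives (flagging credible examinees) are treated as costlier than misses.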
Affiliation(s)
- Ross Divers: Louisiana State University, Baton Rouge, USA
- Luke Miller: Louisiana State University, Baton Rouge, USA
7. Weitzner DS, Miller BI, Webber TA. Embedded cognitive and emotional/affective self-reported symptom validity indices on the Patient Competency Rating Scale. J Clin Exp Neuropsychol 2022; 44:533-549. [PMID: 36369702; DOI: 10.1080/13803395.2022.2138270]
Abstract
OBJECTIVE: Although there is an abundance of research on stand-alone and embedded performance validity tests and on stand-alone symptom validity tests (SVTs), less emphasis has been placed on embedded SVTs. The goal of the current study was to examine the ability of embedded indicators within the Patient Competency Rating Scale (PCRS) to separately detect invalid cognitive and/or emotional/affective symptom responding.
METHOD: Participants included 299 veterans assessed in a VA medical center epilepsy monitoring unit from 2013 to 2017 (mean age = 48.8 years, SD = 13.5 years). Two SVT composites were created: self-reported cognitive symptom validity (SVT-C) and self-reported emotional/affective symptom validity (SVT-E). Groups were compared on PCRS total and index scores (i.e., cognitive, activities of daily living, emotional, and interpersonal competencies) using ANOVAs. Receiver operating characteristic (ROC) curve analyses assessed the classification accuracy of the PCRS total and index scores for SVT-C and SVT-E.
RESULTS: In ANOVAs, SVT-C was significantly associated with all PCRS indices, whereas SVT-E was significantly associated only with the PCRS total, emotional, and interpersonal competency indices. Although PCRS-T ≤ 90 showed the strongest classification of SVT-C and SVT-E (specificities: .90; sensitivities: .44 to .50), the PCRS index scores showed suggestive evidence of domain specificity, with PCRS-ADL ≤ 22, PCRS-C ≤ 20, and PCRS-CADL ≤ 45 best classifying SVT-C (specificities: .92; sensitivities: .33) and PCRS-E ≤ 18 best classifying the SVT-E group (specificity: .93; sensitivity: .40).
CONCLUSION: Results suggest the PCRS can yield clinically useful information while including embedded indicators that assess cognitive and/or emotional/affective symptom invalidity.
Affiliation(s)
- Daniel S Weitzner: Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Brian I Miller: Neurology Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA; Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
- Troy A Webber: Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA; Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
8. College Students’ Access to Academic Accommodations Over Time: Evidence of a Matthew Effect in Higher Education. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09429-7]