1

Considine CM, Rossetti MA, Anderson K, Del Bene VA, Anderson SA, Celka AS, Edmondson MC, Sheese ALN, Piccolino A, Teixeira AL, Stout JC. Huntington Study Group's Neuropsychology Working Group position on best practice recommendations for the clinical neuropsychological evaluation of patients with Huntington disease. Clin Neuropsychol 2024;38:984-1006. PMID: 37849335. DOI: 10.1080/13854046.2023.2267789.
Abstract
Objective: Neuropsychological evaluation is critical to the detection and management of cognitive and neuropsychiatric changes associated with Huntington disease (HD). Accurate assessment of the non-motor complications of HD is critical given their prominent impact on functional disability, frequently commensurate with or exceeding that of motor symptoms. The increasing emphasis on developing disease-modifying therapies targeting cognitive decline in HD requires consensus on clinical neuropsychological assessment methods. The Neuropsychology Working Group (NPWG) of the Huntington Study Group (HSG) sought to provide evidence- and consensus-based, practical guidelines for the evaluation of cognitive and neuropsychiatric symptoms associated with HD. Method: The NPWG recruited a multi-disciplinary group of neuropsychologists, neurologists, and psychiatrists to inform best practices in assessing, diagnosing, and treating the non-motor symptoms of HD. A review was circulated among the NPWG, and in an iterative process informed by the reviewed literature, best practices in the neuropsychological evaluation of patients with HD were identified. Results: A brief review of the available literature and the rationale for a clinical consensus battery are offered. Conclusion: Clinical neuropsychologists are uniquely positioned both to detect and characterize the non-motor symptoms of HD and to provide neurologists and allied health professionals with clinically meaningful information that impacts functional outcomes and quality of life. The NPWG provides guidance on best practices to clinical neuropsychologists in this statement. A companion paper operationalizing the clinical application of previous research-based non-motor diagnostic criteria for HD is forthcoming, which also advises on non-motor symptom screening methods for the non-neuropsychologist working with HD.
Affiliation(s)
- Ciaran M Considine
  - Department of Neurology, Vanderbilt University School of Medicine, Nashville, TN, USA
- M Agustina Rossetti
  - Department of Neurology, University of Virginia School of Medicine, Charlottesville, VA, USA
- Kendra Anderson
  - Department of Neurology, McGovern Medical School UT Health, The University of Texas Health Science Center, Houston, TX, USA
- Victor A Del Bene
  - Department of Neurology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL, USA
- Sharlet A Anderson
  - Department of Neurological Sciences, Rush University Medical Center, Chicago, IL, USA
- Andrea S Celka
  - Department of Neurology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL, USA
- Amelia L Nelson Sheese
  - Department of Neurological Sciences, University of Nebraska Medical Center College of Medicine, Omaha, NE, USA
- Adam Piccolino
  - Psychology, Piccolino Psychological Services, Burnsville, MN, USA
- Antonio L Teixeira
  - Department of Neurology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL, USA
- Julie C Stout
  - Turner Institute for Brain and Mental Health, and School of Psychological Science, Monash University, Melbourne, Australia
2

van Vliet FIM, van Schothorst HP, Donker-Cools BHPM, Schaafsma FG, Ponds RWHM, Geurtsen GJ. Validity of the Groningen Effort Test in patients with suspected chronic solvent-induced encephalopathy. Arch Clin Neuropsychol 2024:acae025. PMID: 38572600. DOI: 10.1093/arclin/acae025.
Abstract
INTRODUCTION: The use of performance validity tests (PVTs) in a neuropsychological assessment to determine indications of invalid performance has been common practice for over a decade. Most PVTs are memory-based; the Groningen Effort Test (GET), a non-memory-based PVT, has therefore been developed. OBJECTIVES: This study aimed to validate the GET in patients with suspected chronic solvent-induced encephalopathy (CSE) against a criterion standard of two PVTs. A second goal was to determine the diagnostic accuracy of the GET. METHOD: Sixty patients with suspected CSE referred for neuropsychological assessment were included. The GET was compared to a criterion standard of two PVTs based on the Test of Memory Malingering and the Amsterdam Short-Term Memory Test. RESULTS: The frequency of invalid performance on the GET was significantly higher than on the two-PVT criterion (51.7% vs. 20.0%, respectively; p < 0.001). For the GET index, sensitivity was 75% and specificity was 54%, with a Youden's Index of 27. CONCLUSION: The GET showed significantly more invalid performance than the two-PVT criterion, suggesting a high number of false positives. The generally accepted minimum specificity for PVTs (>90%) was not met. The GET is therefore of limited use in clinical practice with patients with suspected CSE.
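The sensitivity/specificity trade-off reported in this abstract is summarized by Youden's Index (J = sensitivity + specificity − 1, often reported multiplied by 100). A minimal sketch of the formula only, not the paper's exact computation; the published value of 27 presumably derives from unrounded rates, whereas the rounded rates quoted above give 29:

```python
def youden_index(sensitivity, specificity):
    """Youden's J: summary of a binary test's discriminative ability.

    Ranges from 0 (chance-level) to 1 (perfect discrimination);
    often reported multiplied by 100.
    """
    return sensitivity + specificity - 1

# Rounded rates reported for the GET index: 75% sensitivity, 54% specificity.
j = youden_index(0.75, 0.54)
print(round(j * 100))  # 29 with these rounded inputs
```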
Affiliation(s)
- Fabienne I M van Vliet
  - Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam University Medical Centres, Amsterdam, The Netherlands
  - Department of Medical Psychology, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Henrita P van Schothorst
  - Department of Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Birgit H P M Donker-Cools
  - Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam University Medical Centres, Amsterdam, The Netherlands
  - Research Centre for Insurance Medicine, Amsterdam, The Netherlands
- Frederieke G Schaafsma
  - Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam University Medical Centres, Amsterdam, The Netherlands
  - Research Centre for Insurance Medicine, Amsterdam, The Netherlands
- Rudolf W H M Ponds
  - Department of Medical Psychology, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Gert J Geurtsen
  - Department of Medical Psychology, Amsterdam University Medical Centres, Amsterdam, The Netherlands
3

Roor JJ, Peters MJV, Dandachi-FitzGerald B, Ponds RWHM. Performance Validity Test Failure in the Clinical Population: A Systematic Review and Meta-Analysis of Prevalence Rates. Neuropsychol Rev 2024;34:299-319. PMID: 36872398. PMCID: PMC10920461. DOI: 10.1007/s11065-023-09582-7.
Abstract
Performance validity tests (PVTs) are used to measure the validity of the obtained neuropsychological test data. However, when an individual fails a PVT, the likelihood that failure truly reflects invalid performance (i.e., the positive predictive value) depends on the base rate in the context in which the assessment takes place. Therefore, accurate base rate information is needed to guide interpretation of PVT performance. This systematic review and meta-analysis examined the base rate of PVT failure in the clinical population (PROSPERO number: CRD42020164128). PubMed/MEDLINE, Web of Science, and PsycINFO were searched to identify articles published up to November 5, 2021. Main eligibility criteria were a clinical evaluation context and the use of stand-alone, well-validated PVTs. Of the 457 articles scrutinized for eligibility, 47 were selected for systematic review and meta-analyses. The pooled base rate of PVT failure across all included studies was 16%, 95% CI [14, 19]. High heterogeneity existed among these studies (Cochran's Q = 697.97, p < .001; I2 = 91%; τ2 = 0.08). Subgroup analysis indicated that pooled PVT failure rates varied across clinical context, presence of external incentives, clinical diagnosis, and utilized PVT. Our findings can be used for calculating clinically applied statistics (i.e., positive and negative predictive values, and likelihood ratios) to increase the diagnostic accuracy of performance validity determination in clinical evaluation. Future research with more detailed recruitment procedures and sample descriptions is necessary to further improve the accuracy of the base rate of PVT failure in clinical practice.
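The clinically applied statistics this abstract points to follow directly from Bayes' theorem. A minimal sketch using the pooled 16% base rate from the meta-analysis; the sensitivity and specificity values are illustrative assumptions for a hypothetical PVT, not figures from the paper:

```python
def predictive_values(base_rate, sensitivity, specificity):
    """Positive/negative predictive value of a PVT result via Bayes' theorem."""
    tp = sensitivity * base_rate              # true positives
    fp = (1 - specificity) * (1 - base_rate)  # false positives
    tn = specificity * (1 - base_rate)        # true negatives
    fn = (1 - sensitivity) * base_rate        # false negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Pooled base rate of PVT failure from the meta-analysis: 16%.
# Sensitivity/specificity below are assumed for illustration only.
ppv, npv = predictive_values(base_rate=0.16, sensitivity=0.65, specificity=0.90)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV = 0.55, NPV = 0.93
```

At this base rate, even a fairly specific PVT yields a PPV near chance, which is the abstract's point about needing context-specific base rates before interpreting a single failure.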
Affiliation(s)
- Jeroen J Roor
  - Department of Medical Psychology, VieCuri Medical Center, Venlo, The Netherlands
  - School for Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Maarten J V Peters
  - Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Brechje Dandachi-FitzGerald
  - Department of Clinical Psychological Science, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
  - Faculty of Psychology, Open University, Heerlen, The Netherlands
- Rudolf W H M Ponds
  - School for Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
  - Department of Medical Psychology, Amsterdam University Medical Centres, location VU, Amsterdam, The Netherlands
4

Tyson BT, Shahein A, Abeare CA, Baker SD, Kent K, Roth RM, Erdodi LA. Replicating a Meta-Analysis: The Search for the Optimal Word Choice Test Cutoff Continues. Assessment 2023;30:2476-2490. PMID: 36752050. DOI: 10.1177/10731911221147043.
Abstract
This study was designed to expand on a recent meta-analysis that identified ≤42 as the optimal cutoff on the Word Choice Test (WCT). We examined the base rate of failure and the classification accuracy of various WCT cutoffs in four independent clinical samples (N = 252) against various psychometrically defined criterion groups. WCT ≤ 47 achieved acceptable combinations of specificity (.86-.89) at .49 to .54 sensitivity. Lowering the cutoff to ≤45 improved specificity (.91-.98) at a reasonable cost to sensitivity (.39-.50). Making the cutoff even more conservative (≤42) disproportionately sacrificed sensitivity (.30-.38) for specificity (.98-1.00), while still classifying 26.7% of patients with genuine and severe deficits as non-credible. Critical item (.23-.45 sensitivity at .89-1.00 specificity) and time-to-completion cutoffs (.48-.71 sensitivity at .87-.96 specificity) were effective alternative/complementary detection methods. Although WCT ≤ 45 produced the best overall classification accuracy, scores in the 43 to 47 range provide comparable objective psychometric evidence of non-credible responding. Results question the need for designating a single cutoff as "optimal," given the heterogeneity of signal detection environments in which individual assessors operate. As meta-analyses often fail to replicate, ongoing research is needed on the classification accuracy of various WCT cutoffs.
Affiliation(s)
- Robert M Roth
  - Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
5

Aguilar C, Bailey C, Karyadi KA, Kinney DI, Nitch SR. The use of performance validity tests among inpatient forensic monolingual Spanish-speakers. Appl Neuropsychol Adult 2023;30:671-679. PMID: 34491851. DOI: 10.1080/23279095.2021.1970555.
Abstract
Performance validity tests (PVTs) are an integral part of neuropsychological assessments. Yet no studies have examined how Spanish-speaking forensic inpatients perform on PVTs, making it difficult to interpret these tests in this population. The present study examined archival data collected from monolingual Spanish-speaking forensic inpatients (n = 55; Mage = 49.6 years, SD = 12.0; 84.9% male; 93.5% diagnosed with a Psychotic Spectrum Disorder) to determine how this population performs on several PVTs. Most participants' scores on the Dot Counting Test (DCT; 82.2%; n = 45), Repeatable Battery for Assessment of Neuropsychological Status-Effort Index (RBANS EI; 84.4%; n = 33), and Test of Memory Malingering (TOMM; 79.1%; n = 43) were indicative of valid performance. Few participants, however, had Rey-15 Item Test (FIT) scores in the valid range (24.5% to 48.0%; Recall n = 50 and Combined n = 49, respectively); although FIT Recall specificity was improved when cutoff scores were lowered. Total years of education, but not other educational factors, were significantly associated with performance on PVTs (r = .33-.40, p = .01-.03). Study results suggest the DCT, TOMM, and RBANS EI may be more appropriate PVTs for Spanish-speaking forensic inpatients compared to the FIT.
Affiliation(s)
- Cynthia Aguilar
  - Department of Psychology, Patton State Hospital, Patton, CA, USA
- Cassandra Bailey
  - Department of Psychology, Patton State Hospital, Patton, CA, USA
- Kenny A Karyadi
  - Department of Psychology, Patton State Hospital, Patton, CA, USA
- Stephen R Nitch
  - Department of Psychology, Patton State Hospital, Patton, CA, USA
6

Denning JH. The TOMM1 discrepancy index (TDI): A new performance validity test (PVT) that differentiates between invalid cognitive testing and those diagnosed with dementia. Appl Neuropsychol Adult 2023;30:83-90. PMID: 33945362. DOI: 10.1080/23279095.2021.1910951.
Abstract
There is a need to develop performance validity tests (PVTs) that accurately identify those with severe cognitive decline while remaining sensitive to those suspected of invalid cognitive testing. The TOMM1 Discrepancy Index (TDI) attempts to address both issues. Veterans diagnosed with dementia (n = 251) were administered TOMM1 and the MSVT in order to develop the TDI (TOMM1 percent correct minus MSVT Free Recall percent correct). Cutoffs based on the dementia sample were then used to identify those in the non-dementia sample (n = 1,226) suspected of invalid test performance (n = 401). Combining TOMM1 and the TDI in the dementia sample greatly reduced the false-positive rate (specificity = 0.97) at a cutoff of 28 points or less on the TDI. Those suspected of invalid testing were identified at much higher rates (sensitivity = 0.75) than with the MSVT genuine memory impairment profile (GMIP; sensitivity = 0.49). By utilizing a neurologically plausible pattern of scores across two PVTs, the TDI correctly classified those with dementia and identified a large percentage with invalid test performance. PVTs utilizing a complex pattern of performance may help reduce one's ability to fabricate cognitive deficits.
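As described, the index itself is simple arithmetic: the discrepancy between an easy recognition score and a harder free-recall score, where a small discrepancy is the suspicious pattern. A minimal sketch of that logic, assuming percent-correct inputs; the cutoff of 28 is the value reported above, and the study's full decision rule also incorporates the TOMM1 score itself:

```python
def tomm1_discrepancy_index(tomm1_pct_correct, msvt_free_recall_pct_correct):
    """TDI: TOMM1 percent correct minus MSVT Free Recall percent correct.

    Genuine dementia typically yields high recognition (TOMM1) alongside
    low free recall (MSVT), i.e., a large positive discrepancy; invalid
    performance tends to suppress both scores, shrinking the discrepancy.
    """
    return tomm1_pct_correct - msvt_free_recall_pct_correct

def flag_suspect(tomm1_pct_correct, msvt_free_recall_pct_correct, cutoff=28):
    # A discrepancy at or below the cutoff is the pattern flagged as
    # suspect in the study (sketch only; see the paper for the full rule).
    return tomm1_discrepancy_index(
        tomm1_pct_correct, msvt_free_recall_pct_correct) <= cutoff

print(flag_suspect(96, 40))  # large discrepancy, consistent with dementia: False
print(flag_suspect(60, 52))  # small discrepancy, flagged as suspect: True
```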
Affiliation(s)
- John H Denning
  - Department of Veteran Affairs, Mental Health Service, Ralph H. Johnson Veterans Affairs Medical Center, Charleston, SC, USA
  - Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
7

Erdodi LA. Multivariate Models of Performance Validity: The Erdodi Index Captures the Dual Nature of Non-Credible Responding (Continuous and Categorical). Assessment 2022:10731911221101910. PMID: 35757996. DOI: 10.1177/10731911221101910.
Abstract
This study was designed to examine the classification accuracy of the Erdodi Index (EI-5), a novel method for aggregating validity indicators that takes into account both the number and extent of performance validity test (PVT) failures. Archival data were collected from a mixed clinical/forensic sample of 452 adults referred for neuropsychological assessment. The classification accuracy of the EI-5 was evaluated against established free-standing PVTs. The EI-5 achieved a good combination of sensitivity (.65) and specificity (.97), correctly classifying 92% of the sample. Its classification accuracy was comparable with that of another free-standing PVT. An indeterminate range between Pass and Fail emerged as a legitimate third outcome of performance validity assessment, indicating that the underlying construct is an inherently continuous variable. Results support the use of the EI model as a practical and psychometrically sound method of aggregating multiple embedded PVTs into a single-number summary of performance validity. Combining free-standing PVTs with the EI-5 resulted in a better separation between credible and non-credible profiles, demonstrating incremental validity. Findings are consistent with recent endorsements of a three-way outcome for PVTs (Pass, Borderline, and Fail).
8
Abstract
Are personality traits related to symptom overreporting and/or symptom underreporting? With this question in mind, we evaluated studies from 1979 to 2020 (k = 55) in which personality traits were linked to scores on stand-alone validity tests, including symptom validity tests (SVTs) and measures of socially desirable responding (SDR) and/or supernormality. As to symptom overreporting (k = 14), associations with depression, alexithymia, apathy, dissociation, and fantasy proneness varied widely from weak to strong (rs .27 to .79). For underreporting (k = 41), inconsistent links (rs −.43 to .63) were found with narcissism, whereas alexithymia and dissociation were often associated with lower SDR tendencies, although effect sizes were small. Taken together, the extant literature mainly consists of cross-sectional studies on single traits and contexts, mostly offering weak correlations that do not necessarily reflect causation. What this field lacks is an overarching theory relating traits to symptom reporting. Longitudinal studies involving a broad range of traits, samples, and incentives would be informative. Until such studies have been done, traits are best viewed as modest concomitants of symptom distortion.
9

Shura RD, Ord AS, Worthen MD. Structured Inventory of Malingered Symptomatology: a Psychometric Review. Psychol Inj Law 2021. DOI: 10.1007/s12207-021-09432-y.
10

Exploring the Structured Inventory of Malingered Symptomatology in Patients with Multiple Sclerosis. Psychol Inj Law 2021. DOI: 10.1007/s12207-021-09424-y.
11

Dunn A, Pyne S, Tyson B, Roth R, Shahein A, Erdodi L. Critical Item Analysis Enhances the Classification Accuracy of the Logical Memory Recognition Trial as a Performance Validity Indicator. Dev Neuropsychol 2021;46:327-346. PMID: 34525856. DOI: 10.1080/87565641.2021.1956499.
Abstract
OBJECTIVE: Replicate previous research on Logical Memory Recognition (LMRecog) and perform a critical item analysis. METHOD: Performance validity was psychometrically operationalized in a mixed clinical sample of 213 adults. Classification accuracy of the LMRecog and nine critical items (CR-9) was computed. RESULTS: LMRecog ≤20 produced a good combination of sensitivity (.30-.35) and specificity (.89-.90). CR-9 ≥5 and ≥6 had comparable classification accuracy. CR-9 ≥5 increased sensitivity by 4% over LMRecog ≤20; CR-9 ≥6 increased specificity by 6-8% over LMRecog ≤20; CR-9 ≥7 increased specificity by 8-15%. CONCLUSIONS: Critical item analysis enhances the classification accuracy of the optimal LMRecog cutoff (≤20).
Affiliation(s)
- Alexa Dunn
  - Department of Psychology, University of Windsor, Windsor, Canada
- Sadie Pyne
  - Windsor Neuropsychology, Windsor, Canada
- Brad Tyson
  - Neuroscience Institute, Evergreen Neuroscience Institute, EvergreenHealth Medical Center, Kirkland, USA
- Robert Roth
  - Neuropsychology Services, Dartmouth-Hitchcock Medical Center, USA
- Ayman Shahein
  - Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
- Laszlo Erdodi
  - Department of Psychology, University of Windsor, Windsor, Canada
|
12
|
Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021; 49:179-213. [PMID: 34420986 DOI: 10.3233/nre-218020] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
OBJECTIVE: This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD: Archival data were collected from 167 patients (52.4% male; MAge = 39.7) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS: MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS: Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to their individual components. Instrumentation artifacts are endemic to PVTs and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical, models. As the number and severity of embedded PVT failures accumulate, assessors must consider the possibility of non-credible presentation and its clinical implications for neurorehabilitation.
13

Messa I, Holcomb M, Lichtenstein JD, Tyson BT, Roth RM, Erdodi LA. They are not destined to fail: a systematic examination of scores on embedded performance validity indicators in patients with intellectual disability. Aust J Forensic Sci 2021. DOI: 10.1080/00450618.2020.1865457.
Affiliation(s)
- Isabelle Messa
  - Department of Psychology, University of Windsor, Windsor, ON, Canada
- Brad T Tyson
  - Neuropsychological Service, EvergreenHealth Medical Center, Kirkland, WA, USA
- Robert M Roth
  - Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Laszlo A Erdodi
  - Department of Psychology, University of Windsor, Windsor, ON, Canada
14

Varela JL, Ord AS, Phillips JI, Shura RD, Sautter SW. The Development and Validation of the Embedded Validity Indicator for the Neuropsychological Assessment Battery. Arch Clin Neuropsychol 2021;37:133-145. PMID: 33876179. DOI: 10.1093/arclin/acab025.
Abstract
OBJECTIVE: The purpose of this study was to develop and validate an embedded measure of performance validity within the Neuropsychological Assessment Battery (NAB). METHOD: This study involved a retrospective chart review at an outpatient neuropsychology clinic. Participants were 183 adults (ages 18-70) who completed the attention and memory modules of the NAB, as well as the Word Choice Test, Green's Medical Symptom Validity Test (MSVT), and Green's Non-Verbal MSVT, as part of a clinical neuropsychological assessment (n = 147) or a forensic neuropsychological evaluation (n = 36). Replicating the methodology used by Silverberg et al. (2007) to develop the Effort Index within the Repeatable Battery for the Assessment of Neuropsychological Status, an Embedded Validity Indicator (EVI) for the NAB was developed in the present study based on the Digits Forward and List Learning Long Delayed Forced-Choice Recognition (list recognition) subtests. RESULTS: Receiver operating characteristic curve analyses indicated the newly developed NAB EVI significantly differentiated between valid and invalid status on stand-alone performance validity tests, with area under the curve values ranging from 0.797 to 0.977. Optimal cutoffs for medical, forensic, and mixed samples were identified. CONCLUSIONS: The newly developed NAB EVI shows promise as an embedded performance validity measure; however, due to moderate sensitivity, it should be used in combination with stand-alone performance validity tests to detect invalid performance.
Affiliation(s)
- Jacob L Varela
  - School of Psychology and Counseling, Regent University, Virginia Beach, VA 23464, USA
- Anna S Ord
  - School of Psychology and Counseling, Regent University, Virginia Beach, VA 23464, USA
  - Research & Academic Affairs Service Line, W. G. Hefner VA Medical Center, Salisbury, NC 28144, USA
  - Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Durham, NC 27707, USA
  - Department of Neurology, Wake Forest School of Medicine, Winston-Salem, NC 27101, USA
- Jacob I Phillips
  - School of Psychology and Counseling, Regent University, Virginia Beach, VA 23464, USA
  - Independent Private Practice, Virginia Beach, VA 23451, USA
- Robert D Shura
  - Research & Academic Affairs Service Line, W. G. Hefner VA Medical Center, Salisbury, NC 28144, USA
  - Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Durham, NC 27707, USA
  - Department of Neurology, Wake Forest School of Medicine, Winston-Salem, NC 27101, USA
- Scott W Sautter
  - School of Psychology and Counseling, Regent University, Virginia Beach, VA 23464, USA
  - Independent Private Practice, Virginia Beach, VA 23451, USA
15

Abeare CA, Hurtubise JL, Cutler L, Sirianni C, Brantuo M, Makhzoum N, Erdodi LA. Introducing a forced choice recognition trial to the Hopkins Verbal Learning Test – Revised. Clin Neuropsychol 2020;35:1442-1470. DOI: 10.1080/13854046.2020.1779348.
Affiliation(s)
- Laura Cutler
  - Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maame Brantuo
  - Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nadeen Makhzoum
  - Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
  - Department of Psychology, University of Windsor, Windsor, ON, Canada