1. Arango-Lasprilla JC, Ayearst LE, Rivera D, Dini ME, Olabarrieta-Landa L, Ramos-Usuga D, Perrin PB, McCaffrey R. Test of Memory Malingering 2nd Edition: Normative data from cognitively intact adults living in Spain. Appl Neuropsychol Adult 2024:1-7. [PMID: 39499648] [DOI: 10.1080/23279095.2024.2421450]
Abstract
This study evaluated the universality of the TOMM 2 and provided a reference sample of cognitively intact adults living in Spain whose native language was Spanish. A total of 203 adults completed the TOMM 2 from June 2019 to January 2020. When using the original TOMM cutoff scores derived from English speakers, all participants scored in a range that would suggest that they passed the TOMM. When using a cut score less than 40 on Trial 1, only one participant in this study would be mistakenly classified as providing an invalid performance. Spanish-speaking adults in Spain from this study achieved a perfect score on Trial 1 at a rate more than double that of English-speaking individuals on the original TOMM. At the item level, all but one item met the minimum standard for performance validity; this item fell only marginally below the standard at 89%. This study found a very low failure rate for the TOMM 2, suggesting that the second edition has at least as high specificity as the original in Spanish adults.
Affiliation(s)
- Diego Rivera
- Department of Health Science, Public University of Navarre, Pamplona, Spain
- Instituto de Investigación Sanitaria de Navarra (IdiSNA), Pamplona, Spain
- Mia E Dini
- Department of Psychology, University of Virginia, Charlottesville, VA, USA
- Laiene Olabarrieta-Landa
- Department of Health Science, Public University of Navarre, Pamplona, Spain
- Instituto de Investigación Sanitaria de Navarra (IdiSNA), Pamplona, Spain
- Daniela Ramos-Usuga
- Biomedical Research Doctorate Program, University of the Basque Country (UPV/EHU), Leioa, Spain
- Paul B Perrin
- School of Data Science and Department of Psychology, University of Virginia, Charlottesville, VA, USA

2. Doddato FR, Forde J, Wang Y, Puente AE. An alternative approach to TOMM cutoff scores using a large sample of military personnel. Appl Neuropsychol Adult 2024; 31:1261-1269. [PMID: 36227693] [DOI: 10.1080/23279095.2022.2119391]
Abstract
The accuracy of neuropsychological assessments relies on participants exhibiting their true abilities during administration. The Test of Memory Malingering (TOMM) is a popular performance validity test used to determine whether an individual is providing honest answers. While the TOMM has proven to be highly sensitive to those who are deliberately exaggerating their symptoms, the rationale for using 45 as a cutoff score remains poorly explained. The present study further investigated this question by examining TOMM scores obtained in a large sample of active-duty military personnel (N = 859, M = 26 years, SD = 6.14, 97.31% male, 72.44% White). Results indicated no notable discrepancies between the frequency of participants who scored a 45 and those who scored slightly below a 45 on the TOMM. The sensitivity and specificity of the TOMM were derived using the forced-choice recognition (FCR) scores obtained by participants on the California Verbal Learning Test, Second Edition (CVLT-II). The sensitivity for the three trials of the TOMM was 0.84, 0.55, and 0.63, respectively; the specificity was 0.69, 0.93, and 0.92, respectively. Because sensitivity and specificity are both of importance in this study, balanced accuracy scores were also reported. Results suggested that several alternative cutoff scores produced more accurate classification than the traditional cutoff of 45. Further analyses using Fisher's exact test also indicated no significant performance differences on the FCR of the CVLT-II between individuals who received a 44 and individuals who received a 45 on the TOMM. The current study provides evidence that the traditional cutoff may not be the most effective score. Future research should consider employing alternative methods that do not rely on a single score.
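The balanced accuracy metric this abstract reports is simply the mean of sensitivity and specificity. A minimal sketch in Python; the confusion-table counts are hypothetical, chosen only so the result matches the Trial 2 rates quoted above (0.55, 0.93), and are not data from the study:

```python
def classification_stats(tp, fn, tn, fp):
    """Confusion-table statistics used when comparing PVT cutoffs."""
    sensitivity = tp / (tp + fn)        # true-positive rate
    specificity = tn / (tn + fp)        # true-negative rate
    balanced_accuracy = (sensitivity + specificity) / 2
    return sensitivity, specificity, balanced_accuracy

# Hypothetical counts reproducing the reported Trial 2 rates:
sens, spec, bacc = classification_stats(tp=55, fn=45, tn=93, fp=7)
print(round(sens, 2), round(spec, 2), round(bacc, 2))  # 0.55 0.93 0.74
```

Balanced accuracy is useful here because raw accuracy would be dominated by whichever group (valid or invalid performers) happens to be larger in the sample.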
Affiliation(s)
- Felicity R Doddato
- Department of Psychology, University of North Carolina Wilmington, Wilmington, NC, USA
- Jessica Forde
- Naval Hospital, Marine Corps Base Camp LeJeune, Hampstead, NC, USA
- Yishi Wang
- Department of Mathematics and Statistics, University of North Carolina Wilmington, Wilmington, NC, USA
- Antonio E Puente
- Department of Psychology, University of North Carolina Wilmington, Wilmington, NC, USA

3. Crişan I, Erdodi L. Examining the cross-cultural validity of the Test of Memory Malingering and the Rey 15-Item Test. Appl Neuropsychol Adult 2024; 31:721-731. [PMID: 35476611] [DOI: 10.1080/23279095.2022.2064753]
Abstract
OBJECTIVE This study was designed to investigate the cross-cultural validity of two freestanding performance validity tests (PVTs), the Test of Memory Malingering - Trial 1 (TOMM-1) and the Rey Fifteen Item Test (Rey-15) in Romanian-speaking patients. METHODS The TOMM-1 and Rey-15 free recall (FR) and the combination score incorporating the recognition trial (COMB) were administered to a mixed clinical sample of 61 adults referred for cognitive evaluation, 24 of whom had external incentives to appear impaired. Average scores on PVTs were compared between the two groups. Classification accuracies were computed using one PVT against another. RESULTS Patients with identifiable external incentives to appear impaired produced significantly lower scores and more errors on validity indicators. The largest effect sizes emerged on TOMM-1 (Cohen's d = 1.00-1.19). TOMM-1 was a significant predictor of the Rey-15 COMB ≤20 (AUC = .80; .38 sensitivity; .89 specificity at a cutoff of ≤39). Similarly, both Rey-15 indicators were significant predictors of TOMM-1 at ≤39 as the criterion (AUCs = .73-.76; .33 sensitivity; .89-.90 specificity). CONCLUSION Results offer a proof of concept for the cross-cultural validity of the TOMM-1 and Rey-15 in a Romanian clinical sample.
Affiliation(s)
- Iulia Crişan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Laszlo Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada

4. Brown CC, Stewart-Willis JJ. A preliminary investigation of the utility of the Word Memory Test Immediate Recognition trial as a screener for noncredible performance. Appl Neuropsychol Adult 2024:1-5. [PMID: 39099003] [DOI: 10.1080/23279095.2024.2387233]
Abstract
The assessment of performance validity is an important consideration in the interpretation of neuropsychological data. However, commonly used performance validity tests such as the Test of Memory Malingering (TOMM) and Word Memory Test (WMT) have lengthy administration times (20-30 minutes). Alternatively, utilizing a screener of performance validity (e.g., the TOMM T1 or TOMMe10) has proven to be an effective method of assessing performance validity while conserving time. The present study investigated the use of WMT Immediate Recognition (IR) Trial scores as a screening measure for performance validity using an archival mTBI polytrauma sample (n = 48). Results show that the WMT IR demonstrates a high degree of accuracy in predicting WMT Delayed Recognition (DR) Trial performance across a range of base rates, suggesting that the WMT IR is a useful screening measure for noncredible performance. Clinical implications and the selection of an optimal cutoff are discussed.
Affiliation(s)
- C C Brown
- Neuropsychology Department, Bay Pines Veterans' Affairs Health Care System, Bay Pines, FL, USA
- J J Stewart-Willis
- Neuropsychology Department, Bay Pines Veterans' Affairs Health Care System, Bay Pines, FL, USA

5. Ladowsky-Brooks RL. Recall and recognition of similarities items in neuropsychological assessment: Memory, validity, and meaning. Appl Neuropsychol Adult 2024:1-8. [PMID: 38557276] [DOI: 10.1080/23279095.2024.2334344]
Abstract
The current study examined whether the Memory Similarities Extended Test (M-SET), a memory test based on the Similarities subtest of the Wechsler Abbreviated Scale of Intelligence, Second Edition (WASI-II), has value in neuropsychological testing. The relationship of M-SET measures of cued recall (CR) and recognition memory (REC) to brain injury severity and memory scores from the Wechsler Memory Scale, Fourth Edition (WMS-IV) was analyzed in examinees with traumatic brain injuries ranging from mild to severe. Examinees who passed standard validity tests were divided into groups with intracranial injury (CT+ve, n = 18) and without intracranial injury (CT-ve, n = 50). In CT+ve only, CR was significantly correlated with Logical Memory I (LMI: rs = .62) and Logical Memory II (LMII: rs = .65). In both groups, there were smaller correlations with delayed visual memory (VRII: rs = .38; rs = .44) and psychomotor speed (Coding: rs = .29; rs = .29). The REC score was neither an indicator of memory ability nor an internal indicator of performance validity. There were no differences in M-SET or WMS-IV scores for CT-ve and CT+ve, and reasons for this are discussed. It is concluded that M-SET has utility as an incidental cued recall measure.

6. Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part II-Methodological Issues. Neuropsychol Rev 2023; 33:604-623. [PMID: 37594690] [DOI: 10.1007/s11065-023-09602-6]
Abstract
Forensic neuropsychological examinations to detect malingering in patients with neurocognitive, physical, and psychological dysfunction have tremendous social, legal, and economic importance. Thousands of studies have been published to develop and validate methods to forensically detect malingering, based largely on approximately 50 validity tests, including embedded and stand-alone performance and symptom validity tests. This is Part II of a two-part review of statistical and methodological issues in the forensic prediction of malingering based on validity tests. The Part I companion paper explored key statistical issues. Part II examines related methodological issues through conceptual analysis, statistical simulations, and reanalysis of findings from prior validity test validation studies. Methodological issues examined include the distinction between analog simulation and forensic studies, the effect of excluding too-close-to-call (TCTC) cases from analyses, the distinction between criterion-related and construct validation studies, and the application of the Revised Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) to all Test of Memory Malingering (TOMM) validation studies published within approximately the first 20 years following its initial publication, to assess risk of bias. Findings include that analog studies are commonly mistaken for forensic validation studies, and that construct validation studies are routinely presented as if they were criterion-referenced validation studies. After accounting for the exclusion of TCTC cases, actual classification accuracy was found to be well below claimed levels. QUADAS-2 results revealed that extant TOMM validation studies all had a high risk of bias; not a single one was at low risk. Recommendations include adoption of well-established guidelines from the biomedical diagnostics literature for good-quality criterion-referenced validation studies and examination of implications for malingering determination practices. Design of future studies may hinge on the availability of an incontrovertible reference standard of the malingering status of examinees.
Affiliation(s)
- Christoph Leonhard
- The Chicago School of Professional Psychology at Xavier University of Louisiana, 1 Drexel Dr, Box 200, New Orleans, LA 70125, USA

7. Erdodi LA. From "below chance" to "a single error is one too many": Evaluating various thresholds for invalid performance on two forced choice recognition tests. Behav Sci Law 2023; 41:445-462. [PMID: 36893020] [DOI: 10.1002/bsl.2609]
Abstract
This study was designed to empirically evaluate the classification accuracy of various definitions of invalid performance in two forced-choice recognition performance validity tests (PVTs; FCR-CVLT-II and Test of Memory Malingering [TOMM-2]). The proportion of at- and below-chance-level responding defined by the binomial theory, and of making any errors, was computed across two mixed clinical samples from the United States and Canada (N = 470) and two sets of criterion PVTs. There was virtually no overlap between the binomial and empirical distributions. Over 95% of patients who passed all PVTs obtained a perfect score. At-chance-level responding was limited to patients who failed ≥2 PVTs (91% of them failed 3 PVTs). No one scored below chance level on the FCR-CVLT-II or TOMM-2. All 40 patients with dementia scored above chance. Although at- or below-chance-level performance provides very strong evidence of non-credible responding, scores above chance level have no negative predictive value. Even at-chance-level scores on PVTs provide compelling evidence for non-credible presentation. A single error on the FCR-CVLT-II or TOMM-2 is highly specific (0.95) to psychometrically defined invalid performance. Defining non-credible responding as below-chance-level scores is an unnecessarily restrictive threshold that gives most examinees with invalid profiles a Pass.
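The binomial definition of "below chance" that this abstract evaluates can be made concrete. As a sketch of standard binomial theory (not code from the study): on a 50-item two-alternative forced-choice trial, pure guessing follows Binomial(50, .5), and the conventional p < .05 below-chance threshold works out to a raw score of 18 or fewer:

```python
from math import comb

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p): the chance-level guessing model."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# On a 50-item forced-choice trial, pure guessing centers on 25/50.
# "Significantly below chance" is the largest k with P(X <= k) < .05:
n = 50
cutoff = max(k for k in range(n + 1) if binom_cdf(k, n) < 0.05)
print(cutoff)  # scores at or below this value are below chance at p < .05
```

The abstract's point is that this threshold is so extreme that almost no examinee with an invalid profile ever reaches it, which is why a far stricter criterion (a single recognition error) carries most of the diagnostic signal.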
Affiliation(s)
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada

8. Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part I: Statistical Issues. Neuropsychol Rev 2023; 33:581-603. [PMID: 37612531] [DOI: 10.1007/s11065-023-09601-7]
Abstract
Forensic neuropsychological examinations with determination of malingering have tremendous social, legal, and economic consequences. Thousands of studies have been published aimed at developing and validating methods to diagnose malingering in forensic settings, based largely on approximately 50 validity tests, including embedded and stand-alone performance validity tests. This is the first part of a two-part review. Part I explores three statistical issues related to the validation of validity tests as predictors of malingering, including (a) the need to report a complete set of classification accuracy statistics, (b) how to detect and handle collinearity among validity tests, and (c) how to assess the classification accuracy of algorithms for aggregating information from multiple validity tests. In the Part II companion paper, three closely related research methodological issues will be examined. Statistical issues are explored through conceptual analysis, statistical simulations, and through reanalysis of findings from prior validation studies. Findings suggest extant neuropsychological validity tests are collinear and contribute redundant information to the prediction of malingering among forensic examinees. Findings further suggest that existing diagnostic algorithms may miss diagnostic accuracy targets under most realistic conditions. The review makes several recommendations to address these concerns, including (a) reporting of full confusion table statistics with 95% confidence intervals in diagnostic trials, (b) the use of logistic regression, and (c) adoption of the consensus model on the "transparent reporting of multivariate prediction models for individual prognosis or diagnosis" (TRIPOD) in the malingering literature.
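The recommendation to report full confusion-table statistics with 95% confidence intervals is straightforward to operationalize. A minimal sketch using the Wilson score interval, with hypothetical counts rather than data from the review:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a proportion (e.g., a sensitivity estimate)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical: 42 of 50 invalid performers flagged (sensitivity .84).
lo, hi = wilson_ci(42, 50)
print(f"sensitivity .84, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The interval's width at validation-study sample sizes is exactly why the review argues that point estimates of classification accuracy alone overstate what a validity test has demonstrated.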
Affiliation(s)
- Christoph Leonhard
- The Chicago School of Professional Psychology at Xavier University of Louisiana, Box 200, 1 Drexel Dr, New Orleans, LA 70125, USA

9. Cutler L, Greenacre M, Abeare CA, Sirianni CD, Roth R, Erdodi LA. Multivariate models provide an effective psychometric solution to the variability in classification accuracy of D-KEFS Stroop performance validity cutoffs. Clin Neuropsychol 2023; 37:617-649. [PMID: 35946813] [DOI: 10.1080/13854046.2022.2073914]
Abstract
Objective: The study was designed to expand on the results of previous investigations on the D-KEFS Stroop as a performance validity test (PVT), which produced diverging conclusions. Method: The classification accuracy of previously proposed validity cutoffs on the D-KEFS Stroop was computed against four different criterion PVTs in two independent samples: patients with uncomplicated mild TBI (n = 68) and disability benefit applicants (n = 49). Results: Age-corrected scaled scores (ACSSs) ≤6 on individual subtests often fell short of specificity standards. Making the cutoffs more conservative improved specificity, but at a significant cost to sensitivity. In contrast, multivariate models (≥3 failures at ACSS ≤6 or ≥2 failures at ACSS ≤5 on the four subtests) produced good combinations of sensitivity (.39-.79) and specificity (.85-1.00), correctly classifying 74.6-90.6% of the sample. A novel validity scale, the D-KEFS Stroop Index, correctly classified between 78.7% and 93.3% of the sample. Conclusions: A multivariate approach to performance validity assessment provides a methodological safeguard against sample- and instrument-specific fluctuations in classification accuracy, strikes a reasonable balance between sensitivity and specificity, and mitigates the "invalid before impaired" paradox.
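The multivariate decision rule quoted in the Results can be written directly as code. A sketch of that rule; the score profiles below are hypothetical illustrations, not study data:

```python
def dkefs_stroop_multivariate_flag(acss_scores):
    """Multivariate rule from the abstract: flag performance as invalid when
    >=3 of the four subtest ACSSs are <=6, or >=2 of them are <=5."""
    failures_at_6 = sum(s <= 6 for s in acss_scores)
    failures_at_5 = sum(s <= 5 for s in acss_scores)
    return failures_at_6 >= 3 or failures_at_5 >= 2

# Hypothetical profiles (four age-corrected scaled scores each):
print(dkefs_stroop_multivariate_flag([6, 6, 6, 9]))    # True  (three at <=6)
print(dkefs_stroop_multivariate_flag([5, 5, 10, 11]))  # True  (two at <=5)
print(dkefs_stroop_multivariate_flag([6, 6, 9, 10]))   # False (isolated lows)
```

Requiring multiple failures, at progressively more extreme cutoffs, is what buys the specificity that single-subtest cutoffs lacked in this study.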
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Matthew Greenacre
- Schulich School of Medicine, Western University, London, Ontario, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Robert Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, USA
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada

10. Becke M, Tucha L, Butzbach M, Aschenbrenner S, Weisbrod M, Tucha O, Fuermaier ABM. Feigning Adult ADHD on a Comprehensive Neuropsychological Test Battery: An Analogue Study. Int J Environ Res Public Health 2023; 20:4070. [PMID: 36901080] [PMCID: PMC10001580] [DOI: 10.3390/ijerph20054070]
Abstract
The evaluation of performance validity is an essential part of any neuropsychological evaluation. Validity indicators embedded in routine neuropsychological tests offer a time-efficient option for sampling performance validity throughout the assessment while reducing vulnerability to coaching. By administering a comprehensive neuropsychological test battery to 57 adults with ADHD, 60 neurotypical controls, and 151 instructed simulators, we examined each test's utility in detecting noncredible performance. Cut-off scores were derived for all available outcome variables. Although all ensured at least 90% specificity in the ADHD Group, sensitivity differed significantly between tests, ranging from 0% to 64.9%. Tests of selective attention, vigilance, and inhibition were most useful in detecting the instructed simulation of adult ADHD, whereas figural fluency and task switching lacked sensitivity. Five or more test variables demonstrating results in the second to fourth percentile were rare among cases of genuine adult ADHD but identified approximately 58% of instructed simulators.
Affiliation(s)
- Miriam Becke
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Lara Tucha
- Department of Psychiatry and Psychotherapy, University Medical Center Rostock, Gehlsheimer Str. 20, 18147 Rostock, Germany
- Marah Butzbach
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Steffen Aschenbrenner
- Department of Clinical Psychology and Neuropsychology, SRH Clinic Karlsbad-Langensteinbach, 76307 Karlsbad, Germany
- Matthias Weisbrod
- Department of Psychiatry and Psychotherapy, SRH Clinic Karlsbad-Langensteinbach, 76307 Karlsbad, Germany
- Department of General Psychiatry, Center of Psychosocial Medicine, University of Heidelberg, 69115 Heidelberg, Germany
- Oliver Tucha
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands
- Department of Psychiatry and Psychotherapy, University Medical Center Rostock, Gehlsheimer Str. 20, 18147 Rostock, Germany
- Department of Psychology, National University of Ireland, W23 F2K8 Maynooth, Ireland
- Anselm B. M. Fuermaier
- Department of Clinical and Developmental Neuropsychology, University of Groningen, 9712 TS Groningen, The Netherlands

11. Legemaat AM, Haagedoorn MAS, Burger H, Denys D, Bockting CL, Geurtsen GJ. Is suboptimal effort an issue? A systematic review on neuropsychological performance validity in major depressive disorder. J Affect Disord 2023; 323:731-740. [PMID: 36528136] [DOI: 10.1016/j.jad.2022.12.043]
Abstract
BACKGROUND In Major Depressive Disorder (MDD), emotion- and motivation-related symptoms may affect effort during neuropsychological testing. Performance Validity Tests (PVTs) are therefore essential, but are rarely mentioned in research on cognitive functioning in MDD. We aimed to assess the proportion of MDD patients with demonstrated valid performance and determine cognitive functioning in patients with valid performance. This is the first systematic review on neuropsychological performance validity in MDD. METHODS Databases PubMed, PsycINFO, Embase, and Cochrane Library were searched for studies reporting on PVT results of adult MDD patients. We meta-analyzed the proportion of MDD patients with PVT scores indicative of valid performance. RESULTS Seven studies with a total of 409 MDD patients fulfilled inclusion criteria. Six studies reported the exact proportion of patients with PVT scores indicative of valid performance, which ranged from 60% to 100%, with a pooled proportion estimate of 94%. Four studies reported on cognitive functioning in MDD patients with valid performance. Two of these studies found memory impairment in a minority of MDD patients, and two found no cognitive impairment. LIMITATIONS Small number of studies and small sample sizes. CONCLUSIONS A surprisingly small number of studies reported on PVTs in MDD. About 94% of MDD patients in studies using PVTs had valid neuropsychological test performance. Conclusive information regarding cognitive functioning in MDD patients with valid performance was lacking. Neuropsychological performance validity should be taken into account, since it may alter conclusions regarding cognitive functioning.
Affiliation(s)
- Amanda M Legemaat
- Department of Psychiatry, Amsterdam University Medical Centers, Location AMC, University of Amsterdam, Amsterdam Neuroscience & Amsterdam Public Health, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands
- Marcella A S Haagedoorn
- Department of Geriatric Psychiatry, Mental Health Care North-Holland North, Maelsonstraat 1, 1624 NP Hoorn, the Netherlands
- Huibert Burger
- Department of General Practice and Elderly Care Medicine, University Medical Center Groningen, University of Groningen, Antonius Deusinglaan 1, 9713 AV Groningen, the Netherlands
- Damiaan Denys
- Department of Psychiatry, Amsterdam University Medical Centers, Location AMC, University of Amsterdam, Amsterdam Neuroscience & Amsterdam Public Health, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands
- Claudi L Bockting
- Department of Psychiatry, Amsterdam University Medical Centers, Location AMC, University of Amsterdam, Amsterdam Neuroscience & Amsterdam Public Health, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands; Centre for Urban Mental Health, University of Amsterdam, Oude Turfmarkt 147, 1012 GC Amsterdam, the Netherlands
- Gert J Geurtsen
- Department of Medical Psychology, Amsterdam University Medical Centers, Location AMC, University of Amsterdam, Amsterdam Neuroscience & Amsterdam Public Health, Meibergdreef 9, 1105 AZ Amsterdam, the Netherlands

12. Martin BJ, Sober JD, Millis SR, Hanks RA, Reslan S, Waldron-Perrine B. CVLT-3 response bias as an indicator of performance validity in a litigating population. Clin Neuropsychol 2023; 37:81-90. [PMID: 34689724] [DOI: 10.1080/13854046.2021.1993347]
Abstract
This study examined the efficacy of CVLT-3 response bias (i.e., parametric and nonparametric response bias) indices in differentiating between a clinical sample with traumatic brain injury and a litigating sample with poor performance validity. Participants included 106 individuals, divided into two groups: a clinical group with TBI (n = 56) and a litigating group who demonstrated inadequate performance validity (n = 50), as measured by failure on at least two performance validity tests. Archival CVLT-II data were rescored utilizing the CVLT-3 scoring and normative data. Receiver operating characteristic (ROC) curve analysis was used to evaluate the diagnostic discriminability of the two response bias indices. Both parametric and nonparametric bias indices showed acceptable levels of diagnostic discrimination: AUC = .791 for parametric response bias and AUC = .753 for nonparametric response bias. The parametric response bias index's discrimination was statistically superior to that of the nonparametric index. The CVLT-3 response bias score demonstrated good sensitivity and specificity when differentiating between individuals in a clinical sample with TBI and individuals in litigation who demonstrated inadequate performance validity.
Affiliation(s)
- Bess J Martin
- Neuropsychology, Rehabilitation Institute of Michigan, Detroit, Michigan, USA
- Jonathan D Sober
- Neuropsychology, Rehabilitation Institute of Michigan, Detroit, Michigan, USA
- Scott R Millis
- Department of Physical Medicine and Rehabilitation, Rehabilitation Institute of Michigan, Wayne State University School of Medicine, Detroit, Michigan, USA
- Robin A Hanks
- Department of Physical Medicine and Rehabilitation, Rehabilitation Institute of Michigan, Wayne State University School of Medicine, Detroit, Michigan, USA
- Summar Reslan
- Department of Physical Medicine and Rehabilitation, Rehabilitation Institute of Michigan, Wayne State University School of Medicine, Detroit, Michigan, USA
- Brigid Waldron-Perrine
- Department of Physical Medicine and Rehabilitation, Rehabilitation Institute of Michigan, Wayne State University School of Medicine, Detroit, Michigan, USA

13. Weigard A, Spencer RJ. Benefits and challenges of using logistic regression to assess neuropsychological performance validity: Evidence from a simulation study. Clin Neuropsychol 2023; 37:34-59. [PMID: 35006042] [PMCID: PMC9273108] [DOI: 10.1080/13854046.2021.2023650]
Abstract
Logistic regression (LR) is recognized as a promising method for making decisions about neuropsychological performance validity by integrating information across multiple measures. However, this method has yet to be widely adopted in clinical practice, likely because several open questions remain about its utility relative to simpler methods, its effectiveness across different clinical contexts, and its feasibility at sample sizes common in the field. The current study addresses these questions by assessing classification performance of logistic regression and alternative methods across an array of simulated data sets. We simulated scores of valid and invalid performers on 6 tests designed to mimic the psychometric and distributional properties of real performance validity measures. Out-of-sample predictive performance of LR and a commonly used alternative ("vote counting") was assessed across different base rates, validity measure properties, and sample sizes. LR improved classification accuracy by 2%-12% across simulation conditions, primarily by improving sensitivity. False positives and negatives can be further reduced when LR predictions are interpreted as continuous, rather than binary. LR made robust predictions at sample sizes feasible for neuropsychology research (N = 307) and when as few as 2 tests with good psychometric properties were used. Although training and test data sets of at least several hundred individuals may be required to develop and evaluate LR models for use in clinical practice, LR promises to be an efficient and powerful tool for improving judgements about performance validity. We offer several recommendations for model development and LR interpretation in a clinical setting.
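The logistic-regression-versus-vote-counting comparison the abstract describes can be imitated on toy data. The sketch below is an assumption-laden illustration (synthetic scores, an arbitrary failure cutoff of 45, and in-sample rather than the out-of-sample accuracy the study measured), not the authors' simulation:

```python
import random
from math import exp

random.seed(0)

# Synthetic, illustrative data only (not real test norms): two PVT scores
# per examinee; valid performers cluster high, invalid performers lower.
valid = [[random.gauss(48, 2), random.gauss(48, 2)] for _ in range(150)]
invalid = [[random.gauss(42, 4), random.gauss(42, 4)] for _ in range(150)]
X = [[(s - 45) / 5 for s in row] for row in valid + invalid]  # standardize
y = [0] * 150 + [1] * 150                                     # 1 = invalid

# "Vote counting": flag as invalid when both raw scores fall below 45.
votes = [int(r[0] < 45 and r[1] < 45) for r in valid + invalid]
vote_acc = sum(v == t for v, t in zip(votes, y)) / len(y)

# Plain logistic regression fit by stochastic gradient descent.
w, b = [0.0, 0.0], 0.0
for _ in range(300):
    for xi, yi in zip(X, y):
        t = max(-30.0, min(30.0, w[0] * xi[0] + w[1] * xi[1] + b))
        err = 1 / (1 + exp(-t)) - yi          # predicted prob minus label
        w[0] -= 0.1 * err * xi[0]
        w[1] -= 0.1 * err * xi[1]
        b -= 0.1 * err

preds = [int(w[0] * xi[0] + w[1] * xi[1] + b > 0) for xi in X]
lr_acc = sum(p == t for p, t in zip(preds, y)) / len(y)
print(f"vote counting: {vote_acc:.2f}  logistic regression: {lr_acc:.2f}")
```

Interpreting the fitted probability continuously, rather than thresholding at .5, is what allows false positives and false negatives to be traded off, as the abstract notes.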
Affiliation(s)
- Robert J. Spencer
- Department of Psychiatry, University of Michigan
- VA Ann Arbor Healthcare System
14
Ali S, Crisan I, Abeare CA, Erdodi LA. Cross-Cultural Performance Validity Testing: Managing False Positives in Examinees with Limited English Proficiency. Dev Neuropsychol 2022; 47:273-294. [PMID: 35984309] [DOI: 10.1080/87565641.2022.2105847]
Abstract
Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation (even multivariate models of PVTs could not contain BRFail). In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.
Affiliation(s)
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Iulia Crisan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
15
Abeare K, Cutler L, An KY, Razvi P, Holcomb M, Erdodi LA. BNT-15: Revised Performance Validity Cutoffs and Proposed Clinical Classification Ranges. Cogn Behav Neurol 2022; 35:155-168. [PMID: 35507449] [DOI: 10.1097/wnn.0000000000000304]
Abstract
BACKGROUND Abbreviated neurocognitive tests offer a practical alternative to full-length versions but often lack clear interpretive guidelines, thereby limiting their clinical utility. OBJECTIVE To replicate validity cutoffs for the Boston Naming Test-Short Form (BNT-15) and to introduce a clinical classification system for the BNT-15 as a measure of object-naming skills. METHOD We collected data from 43 university students and 46 clinical patients. Classification accuracy was computed against psychometrically defined criterion groups. Clinical classification ranges were developed using a z-score transformation. RESULTS Previously suggested validity cutoffs (≤11 and ≤12) produced comparable classification accuracy among the university students. However, a more conservative cutoff (≤10) was needed with the clinical patients to contain the false-positive rate (0.20-0.38 sensitivity at 0.92-0.96 specificity). As a measure of cognitive ability, a perfect BNT-15 score suggests above-average performance; ≤11 suggests clinically significant deficits. Demographically adjusted prorated BNT-15 T-scores correlated strongly (0.86) with the newly developed z-scores. CONCLUSION Given its brevity (<5 minutes) and ease of administration and scoring, the BNT-15 can function as a useful and cost-effective screening measure for both object-naming/English proficiency and performance validity. The proposed clinical classification ranges provide useful guidelines for practitioners.
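The z-score transformation mentioned in the METHOD section is a two-step linear rescaling: a raw score is standardized against a reference sample, then re-expressed on the familiar T-score metric (mean 50, SD 10). The reference mean and SD below are invented placeholders, not the study's normative values.

```python
# Sketch of a z-score transformation for clinical classification ranges.
# Reference-sample mean/SD (13.2, 1.6) are invented for illustration.
def to_z(raw, ref_mean=13.2, ref_sd=1.6):
    """Standardize a raw score against a reference sample."""
    return (raw - ref_mean) / ref_sd

def to_t(z):
    """Re-express a z-score on the T-score metric (mean 50, SD 10)."""
    return 50 + 10 * z

for raw in (15, 12, 10):
    z = to_z(raw)
    print(f"raw {raw}: z = {z:+.2f}, T = {to_t(z):.0f}")
```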
Affiliation(s)
- Kelly Y An
- Private Practice, London, Ontario, Canada
- Parveen Razvi
- Faculty of Nursing, University of Windsor, Windsor, Ontario, Canada
16
Holcomb M, Pyne S, Cutler L, Oikle DA, Erdodi LA. Take Their Word for It: The Inventory of Problems Provides Valuable Information on Both Symptom and Performance Validity. J Pers Assess 2022:1-11. [PMID: 36041087] [DOI: 10.1080/00223891.2022.2114358]
Abstract
This study was designed to compare the validity of the Inventory of Problems (IOP-29) and its newly developed memory module (IOP-M) in 150 patients clinically referred for neuropsychological assessment. Criterion groups were psychometrically derived based on established performance and symptom validity tests (PVTs and SVTs). The criterion-related validity of the IOP-29 was compared to that of the Negative Impression Management scale of the Personality Assessment Inventory (NIMPAI), and the criterion-related validity of the IOP-M was compared to that of Trial 1 of the Test of Memory Malingering (TOMM-1). The IOP-29 correlated significantly more strongly (z = 2.50, p = .01) with criterion PVTs than the NIMPAI (rIOP-29 = .34; rNIMPAI = .06), generating similar overall correct classification values (OCCIOP-29: 79-81%; OCCNIMPAI: 71-79%). Similarly, the IOP-M correlated significantly more strongly (z = 2.26, p = .02) with criterion PVTs than the TOMM-1 (rIOP-M = .79; rTOMM-1 = .59), generating similar overall correct classification values (OCCIOP-M: 89-91%; OCCTOMM-1: 84-86%). Findings converge with the cumulative evidence that the IOP-29 and IOP-M are valuable additions to comprehensive neuropsychological batteries. Results also confirm that symptom and performance validity are distinct clinical constructs, and that domain specificity should be considered when calibrating instruments.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor
17
Abstract
OBJECTIVES To determine base rates of invalid performance on the Test of Memory Malingering (TOMM) in patients with traumatic brain injury (TBI) undertaking rehabilitation who were referred for clinical assessment, and the factors contributing to TOMM failure. METHODS Retrospective file review of consecutive TBI referrals for neuropsychological assessment over seven years. TOMM failure was conventionally defined as performance <45/50 on Trial 2 or the Retention Trial. Demographic, injury, financial compensation, occupational, and medical variables were collected. RESULTS Four hundred and ninety-one TBI cases (median age = 40 years [IQR = 26-52], 79% male, 82% severe TBI) were identified. Overall, 48 cases (9.78%) failed the TOMM. Logistic regression analyses revealed that use of an interpreter during the assessment (adjusted odds ratio [aOR] = 8.25, 95% CI = 3.96-17.18), outpatient setting (aOR = 4.80, 95% CI = 1.87-12.31), and post-injury psychological distress (aOR = 2.77, 95% CI = 1.35-5.70) were significant multivariate predictors of TOMM failure. The TOMM failure rate for interpreter cases was 49% (21/43) in the outpatient setting vs. 7% (2/30) in the inpatient setting. By comparison, 9% (21/230) of non-interpreter outpatient cases failed the TOMM vs. 2% (4/188) of inpatient cases. CONCLUSIONS TOMM failure very rarely occurs in clinical assessment of TBI patients in the inpatient rehabilitation setting. It is more common in the outpatient setting, particularly in non-English-speaking people requiring an interpreter. The findings reinforce the importance of routinely administering stand-alone performance validity tests in assessments of clinical TBI populations, particularly in outpatient settings, to ensure that neuropsychological test results can be interpreted with a high degree of confidence.
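The adjusted odds ratios quoted above are exponentiated logistic-regression coefficients: each aOR expresses a predictor's effect on the odds of TOMM failure with the other predictors held constant. The sketch below shows the mechanics on synthetic data; the predictor prevalences and effect sizes are invented and will not reproduce the study's estimates.

```python
# How adjusted odds ratios (aORs) come out of a multivariable logistic
# regression: exponentiate each fitted coefficient. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
interpreter = rng.random(n) < 0.15
outpatient = rng.random(n) < 0.55
distress = rng.random(n) < 0.40

# Synthetic outcome with built-in effects roughly mimicking the reported pattern.
logit = -3.5 + 2.1 * interpreter + 1.6 * outpatient + 1.0 * distress
p_fail = 1 / (1 + np.exp(-logit))
failed = rng.random(n) < p_fail

X = np.column_stack([interpreter, outpatient, distress]).astype(float)
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, failed)  # large C ~ unpenalized

for name, beta in zip(["interpreter", "outpatient", "distress"], model.coef_[0]):
    print(f"aOR({name}) = {np.exp(beta):.2f}")
```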
18
Erdodi LA. Multivariate Models of Performance Validity: The Erdodi Index Captures the Dual Nature of Non-Credible Responding (Continuous and Categorical). Assessment 2022:10731911221101910. [PMID: 35757996] [DOI: 10.1177/10731911221101910]
Abstract
This study was designed to examine the classification accuracy of the Erdodi Index (EI-5), a novel method for aggregating validity indicators that takes into account both the number and extent of performance validity test (PVT) failures. Archival data were collected from a mixed clinical/forensic sample of 452 adults referred for neuropsychological assessment. The classification accuracy of the EI-5 was evaluated against established free-standing PVTs. The EI-5 achieved a good combination of sensitivity (.65) and specificity (.97), correctly classifying 92% of the sample. Its classification accuracy was comparable with that of another free-standing PVT. An indeterminate range between Pass and Fail emerged as a legitimate third outcome of performance validity assessment, indicating that the underlying construct is an inherently continuous variable. Results support the use of the EI model as a practical and psychometrically sound method of aggregating multiple embedded PVTs into a single-number summary of performance validity. Combining free-standing PVTs with the EI-5 resulted in a better separation between credible and non-credible profiles, demonstrating incremental validity. Findings are consistent with recent endorsements of a three-way outcome for PVTs (Pass, Borderline, and Fail).
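Although the EI-5's exact scoring rules are not reproduced here, the general idea of an aggregation index that weights both the number and the extent of embedded PVT failures, and yields a three-way Pass/Borderline/Fail outcome, can be sketched as follows. All cutoffs, point values, and band boundaries below are invented for illustration, not the published EI-5.

```python
# Generic sketch of an aggregate validity index: each embedded PVT contributes
# 0 (pass), 1 (borderline failure), or 2 (clear failure); the sum maps onto a
# three-way outcome. Cutoffs and bands are invented, not the published EI-5.
def index_points(score, borderline_cut, fail_cut):
    if score <= fail_cut:
        return 2            # clear failure on this embedded PVT
    if score <= borderline_cut:
        return 1            # borderline performance
    return 0                # pass

def classify(total):
    if total <= 1:
        return "Pass"
    if total <= 3:
        return "Borderline"
    return "Fail"

# Five embedded PVT scores with their (borderline, fail) cutoffs -- all invented.
scores_and_cuts = [(37, (35, 31)), (12, (11, 9)), (44, (45, 40)),
                   (8, (7, 5)), (29, (30, 26))]
total = sum(index_points(s, b, f) for s, (b, f) in scores_and_cuts)
print(total, classify(total))
```

Capturing the extent of each failure (0/1/2 rather than pass/fail) is what makes the underlying construct behave as a continuous variable with a legitimate indeterminate range, as the abstract argues.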
19
Brantuo MA, An K, Biss RK, Ali S, Erdodi LA. Neurocognitive Profiles Associated With Limited English Proficiency in Cognitively Intact Adults. Arch Clin Neuropsychol 2022; 37:1579-1600. [PMID: 35694764] [DOI: 10.1093/arclin/acac019]
Abstract
OBJECTIVE The objective of the present study was to examine the neurocognitive profiles associated with limited English proficiency (LEP). METHOD A brief neuropsychological battery including measures with high (HVM) and low verbal mediation (LVM) was administered to 80 university students: 40 native speakers of English (NSEs) and 40 with LEP. RESULTS Consistent with previous research, individuals with LEP performed more poorly on HVM measures and equivalently to NSEs on LVM measures, with some notable exceptions. CONCLUSIONS Low scores on HVM tests should not be interpreted as evidence of acquired cognitive impairment in individuals with LEP, because these measures may systematically underestimate cognitive ability in this population. These findings have important clinical and educational implications.
Affiliation(s)
- Maame A Brantuo
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Kelly An
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Renee K Biss
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
20
Becke M, Tucha L, Weisbrod M, Aschenbrenner S, Tucha O, Fuermaier ABM. Joint Consideration of Validity Indicators Embedded in Conners' Adult ADHD Rating Scales (CAARS). Psychol Inj Law 2022. [DOI: 10.1007/s12207-022-09445-1]
Abstract
A decade of research has both illustrated the need for accurate clinical assessment of adult ADHD and brought forward a series of validity indicators assisting this diagnostic process. Several of these indicators have been embedded into Conners' Adult ADHD Rating Scales (CAARS). As their different theoretical underpinnings offer the opportunity of possible synergy effects, the present study sought to examine whether the item- or index-wise combination of multiple validity indicators benefits classification accuracy. A sample of controls (n = 856) and adults with ADHD (n = 72) answered the CAARS, including the ADHD Credibility Index (ACI), honestly, while a group of instructed simulators (n = 135) completed the instrument as though they had ADHD. First, original CAARS items that are part of the CAARS Infrequency Index (CII) and items drawn from the ACI were combined into a new CII-ACI-Compound Index. Second, existing validity indicators, including suspect T-score elevations and the CII, were considered in combination. Both approaches were evaluated in terms of sensitivity and specificity. The combination of four CII and five ACI items into the CII-ACI-Compound Index yielded a sensitivity between 41 and 51% and an estimated specificity above 87%. Suspect T-score elevations on all three DSM scales emerged as another potentially useful validity indicator, with a sensitivity of 45 to 46% and a specificity above 90%. Deeming examinees non-credible whenever two or more validity indicators showed suspect results ensured low false-positive rates (<10%) but substantially reduced sensitivity. Classifying respondents as non-credible as soon as any given indicator fell into the suspect range resulted in frequent false positives (>11% of adults with ADHD misclassified). Depending on whether high specificity or high sensitivity is prioritized, such combined considerations offer valuable additions to individual validity indicators. The high sensitivity provided by "either/or" combinations could prove useful in screening settings, whereas high-stakes settings could benefit from "and" combinations.
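For roughly independent indicators, the sensitivity/specificity arithmetic behind the "either/or" versus "and" combinations described above works out as follows; the operating points used here are invented placeholders, not the CAARS/ACI values.

```python
# Trade-off when combining two validity indicators: an "either/or" rule boosts
# sensitivity at the cost of specificity; an "and" rule does the reverse.
# Assumes the two indicators err independently within each group.
def combine(sens_a, spec_a, sens_b, spec_b):
    or_sens = 1 - (1 - sens_a) * (1 - sens_b)   # flag if either is suspect
    or_spec = spec_a * spec_b
    and_sens = sens_a * sens_b                  # flag only if both are suspect
    and_spec = 1 - (1 - spec_a) * (1 - spec_b)
    return (or_sens, or_spec), (and_sens, and_spec)

# Invented operating points for two indicators.
or_rule, and_rule = combine(0.45, 0.90, 0.50, 0.92)
print(f"either/or rule: sensitivity {or_rule[0]:.2f}, specificity {or_rule[1]:.2f}")
print(f"'and' rule:     sensitivity {and_rule[0]:.2f}, specificity {and_rule[1]:.2f}")
```

The independence assumption is the weak point in practice: correlated indicators shrink both the sensitivity gain of "either/or" rules and the specificity gain of "and" rules.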
21
Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing Performance Validity Cutoffs on the Boston Naming Test (BNT) Is Specific, but Insensitive to Non-Credible Responding. Dev Neuropsychol 2022; 47:17-31. [PMID: 35157548] [DOI: 10.1080/87565641.2022.2038602]
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
Affiliation(s)
- Shayna Nussbaum
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson
- Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
22
DiCarlo GM, Ernst WJ, Kneavel ME. An exploratory study of the convergent validity of the Test of Effort (TOE) in adults with acquired brain injury. Brain Inj 2022; 36:424-431. [PMID: 35113759] [DOI: 10.1080/02699052.2022.2034953]
Abstract
PRIMARY OBJECTIVE To examine the convergent validity of the Test of Effort (TOE), a performance validity test (PVT) currently under development that employs a two-subtest (one verbal, one visual), forced-choice recognition memory format. RESEARCH DESIGN A descriptive, correlational design was employed to describe performance on the TOE and examine the convergent validity between the TOE and comparison measures. METHODS AND PROCEDURES A sample of 53 individuals with chronic acquired brain injury (ABI) were administered the TOE and three well-validated PVTs (Reliable Digit Span [RDS], Test of Memory Malingering [TOMM] and Dot Counting Test [DCT]). MAIN OUTCOMES AND RESULTS The TOE appeared more difficult than it actually was, suggesting adequate face validity. Medium-to-large correlations were observed between the TOE and established PVTs, suggesting good convergent validity. Provisional cutoff scores are offered based on performance of a subgroup of participants with "sufficient effort." CONCLUSIONS Overall, the TOE shows promise as a PVT measure for clinical use. Future studies with larger and more diverse samples are needed to more fully determine the psychometric characteristics of the TOE.
Affiliation(s)
- William J Ernst
- Department of Professional Psychology, Chestnut Hill College, Philadelphia, Pennsylvania, USA
- Meredith E Kneavel
- School of Nursing and Health Sciences, La Salle University, Philadelphia, Pennsylvania, USA
23
Hall VL, Kalus AM. A Comparative Analysis of the Base Rate of Malingering Using Slick et al. (1999) and Sherman et al. (2020) Multidimensional Criteria for Malingering in a UK Litigant Population. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09438-6]
24
Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021; 49:179-213. [PMID: 34420986] [DOI: 10.3233/nre-218020]
Abstract
OBJECTIVE This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD Archival data were collected from 167 patients (52.4% male; mean age = 39.7 years) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False-positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to their individual components. Instrumentation artifacts are endemic to PVTs, and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical, models. As the number and severity of embedded PVT failures accumulate, assessors must consider the possibility of non-credible presentation and its clinical implications for neurorehabilitation.
25
Abeare CA, An K, Tyson B, Holcomb M, Cutler L, May N, Erdodi LA. The emotion word fluency test as an embedded performance validity indicator - Alone and in a multivariate validity composite. Appl Neuropsychol Child 2021; 11:713-724. [PMID: 34424798] [DOI: 10.1080/21622965.2021.1939027]
Abstract
OBJECTIVE This project was designed to cross-validate existing performance validity cutoffs embedded within measures of verbal fluency (FAS and animals) and develop new ones for the Emotion Word Fluency Test (EWFT), a novel measure of category fluency. METHOD The classification accuracy of the verbal fluency tests was examined in two samples (70 cognitively healthy university students and 52 clinical patients) against psychometrically defined criterion measures. RESULTS A demographically adjusted T-score of ≤31 on the FAS was specific (.88-.97) to noncredible responding in both samples. Animals T ≤ 29 achieved high specificity (.90-.93) among students at .27-.38 sensitivity. A more conservative cutoff (T ≤ 27) was needed in the patient sample for a similar combination of sensitivity (.24-.45) and specificity (.87-.93). An EWFT raw score ≤5 was highly specific (.94-.97) but insensitive (.10-.18) to invalid performance. Failing multiple cutoffs improved specificity (.90-1.00) at variable sensitivity (.19-.45). CONCLUSIONS Results help resolve the inconsistency in previous reports, and confirm the overall utility of existing verbal fluency tests as embedded validity indicators. Multivariate models of performance validity assessment are superior to single indicators. The clinical utility and limitations of the EWFT as a novel measure are discussed.
Affiliation(s)
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Kelly An
- Private Practice, London, Ontario, Canada
- Brad Tyson
- Evergreen Health Medical Center, Kirkland, Washington, USA
- Matthew Holcomb
- Jefferson Neurobehavioral Group, New Orleans, Louisiana, USA
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
26
Abeare K, Romero K, Cutler L, Sirianni CD, Erdodi LA. Flipping the Script: Measuring Both Performance Validity and Cognitive Ability with the Forced Choice Recognition Trial of the RCFT. Percept Mot Skills 2021; 128:1373-1408. [PMID: 34024205] [PMCID: PMC8267081] [DOI: 10.1177/00315125211019704]
Abstract
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFTFCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 & ≤17), the RCFTFCR remained specific (.84-1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFTFCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFTFCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
Affiliation(s)
- Kaitlyn Abeare
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Kristoffer Romero
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
27
Donders J, Lefebre N, Goldsworthy R. Patterns of Performance and Symptom Validity Test Findings After Mild Traumatic Brain Injury. Arch Clin Neuropsychol 2021; 36:394-402. [PMID: 31732733] [DOI: 10.1093/arclin/acz057]
Abstract
OBJECTIVE The purpose of this study was to evaluate the presence of demographic, injury and neuropsychological correlates of distinct patterns of performance validity test and symptom validity test results in persons with mild traumatic brain injury (mTBI). METHOD One hundred and seventy-eight persons with mTBI completed the Test of Memory Malingering (TOMM; performance validity) and the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; symptom validity) within 1-12 months postinjury. Four groups were compared: (a) pass both TOMM and MMPI-2-RF validity criteria, (b) pass TOMM and fail MMPI-2-RF, (c) fail TOMM and pass MMPI-2-RF, and (d) fail both TOMM and MMPI-2-RF. RESULTS Compared to Group a, participants in combined Groups b-d were more than twice as likely to be engaged in financial compensation-seeking and about four times less likely to have neuroimaging evidence of an intracranial lesion. The average performance of Group d on an independent test of verbal learning was more than 1.5 standard deviations below that of Group a. Participants in Group b were more likely to have intracranial lesions on neuroimaging than participants in Group c. CONCLUSION Performance and symptom validity tests provide complementary and non-redundant information in persons with mTBI. Whereas financial compensation-seeking is associated with increased risk of failure of either PVT or SVT, or both, the presence of intracranial findings on neuroimaging is associated with decreased risk of such.
Affiliation(s)
- Jacobus Donders
- Department of Psychology, Mary Free Bed Rehabilitation Hospital, Grand Rapids, MI, USA
- Nathan Lefebre
- Department of Psychology, Calvin College, Grand Rapids, MI, USA
- Rachael Goldsworthy
- Department of Psychology, Mary Free Bed Rehabilitation Hospital, Grand Rapids, MI, USA
28
Abeare K, Razvi P, Sirianni CD, Giromini L, Holcomb M, Cutler L, Kuzmenka P, Erdodi LA. Introducing Alternative Validity Cutoffs to Improve the Detection of Non-credible Symptom Report on the BRIEF. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09402-4]
29
Becke M, Tucha L, Weisbrod M, Aschenbrenner S, Tucha O, Fuermaier ABM. Non-credible symptom report in the clinical evaluation of adult ADHD: development and initial validation of a new validity index embedded in the Conners' adult ADHD rating scales. J Neural Transm (Vienna) 2021; 128:1045-1063. [PMID: 33651237] [PMCID: PMC8295107] [DOI: 10.1007/s00702-021-02318-y]
Abstract
As attention-deficit/hyperactivity disorder (ADHD) is a feasible target for individuals aiming to procure stimulant medication or accommodations, there is a high clinical need for accurate assessment of adult ADHD. Proven falsifiability of commonly used diagnostic instruments is therefore of concern. The present study aimed to develop a new, ADHD-specific infrequency index to aid the detection of non-credible self-report. Disorder-specific adaptations of four detection strategies were embedded into the Conners' Adult ADHD Rating Scales (CAARS) and tested for infrequency among credible neurotypical controls (n = 1001) and credible adults with ADHD (n = 100). The new index's ability to detect instructed simulators (n = 242) and non-credible adults with ADHD (n = 22) was subsequently examined using ROC analyses. Applying a conservative cut-off score, the new index identified 30% of participants instructed to simulate ADHD while retaining a specificity of 98%. Items assessing supposed symptoms of ADHD proved most useful in distinguishing genuine patients with ADHD from simulators, whereas inquiries into unusual symptom combinations produced a small effect. The CAARS Infrequency Index (CII) outperformed the new infrequency index in terms of sensitivity (46%), but not overall classification accuracy as determined in ROC analyses. Neither the new infrequency index nor the CII detected non-credible adults diagnosed with ADHD with adequate accuracy. In contrast, both infrequency indices showed high classification accuracy when used to detect symptom over-report. Findings support the new index's utility as an adjunct measure in uncovering feigned ADHD, while underscoring the need to differentiate general over-reporting from specific forms of feigning.
Collapse
Affiliation(s)
- Miriam Becke
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioural and Social Sciences, University of Groningen, Grote Kruisstraat 2/1, 9712 TS, Groningen, The Netherlands.
| | - Lara Tucha
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioural and Social Sciences, University of Groningen, Grote Kruisstraat 2/1, 9712 TS, Groningen, The Netherlands.,Department of Psychiatry and Psychotherapy, University Medical Center Rostock, Gehlsheimer Str. 20, 18147, Rostock, Germany
| | - Matthias Weisbrod
- Department of Psychiatry and Psychotherapy, SRH Clinic Karlsbad-Langensteinbach, 76307, Karlsbad, Germany.,Department of General Psychiatry, Center of Psychosocial Medicine, University of Heidelberg, 69115, Heidelberg, Germany
| | - Steffen Aschenbrenner
- Department of Clinical Psychology and Neuropsychology, SRH Clinic Karlsbad-Langensteinbach, 76307, Karlsbad, Germany
| | - Oliver Tucha
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioural and Social Sciences, University of Groningen, Grote Kruisstraat 2/1, 9712 TS, Groningen, The Netherlands.,Department of Psychiatry and Psychotherapy, University Medical Center Rostock, Gehlsheimer Str. 20, 18147, Rostock, Germany
| | - Anselm B M Fuermaier
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioural and Social Sciences, University of Groningen, Grote Kruisstraat 2/1, 9712 TS, Groningen, The Netherlands
|
30
|
Cutler L, Abeare CA, Messa I, Holcomb M, Erdodi LA. This will only take a minute: Time cutoffs are superior to accuracy cutoffs on the forced choice recognition trial of the Hopkins Verbal Learning Test - Revised. APPLIED NEUROPSYCHOLOGY-ADULT 2021; 29:1425-1439. [PMID: 33631077 DOI: 10.1080/23279095.2021.1884555] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of the forced-choice recognition trial recently added to the Hopkins Verbal Learning Test - Revised (FCRHVLT-R) as a performance validity test (PVT) in a clinical sample. Time-to-completion (T2C) for the FCRHVLT-R was also examined. METHOD Forty-three students were assigned to either the control or the experimental malingering (expMAL) condition. Archival data were collected from 52 adults clinically referred for neuropsychological assessment. Invalid performance was defined using expMAL status, two free-standing PVTs, and two validity composites. RESULTS Among students, FCRHVLT-R ≤11 or T2C ≥45 seconds was specific (0.86-0.93) to invalid performance. Among patients, FCRHVLT-R ≤11 was specific (0.94-1.00), but relatively insensitive (0.38-0.60), to non-credible responding. T2C ≥35 seconds produced notably higher sensitivity (0.71-0.89), but variable specificity (0.83-0.96). The T2C achieved superior overall correct classification (81-86%) compared to the accuracy score (68-77%). The FCRHVLT-R provided incremental utility in performance validity assessment compared to previously introduced validity cutoffs on Recognition Discrimination. CONCLUSIONS Combined with T2C, the FCRHVLT-R has the potential to function as a quick, inexpensive, and effective embedded PVT. The time cutoff effectively attenuated the low ceiling of the accuracy scores, increasing sensitivity by 19%. Replication in larger and more geographically and demographically diverse samples is needed before the FCRHVLT-R can be endorsed for routine clinical application.
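The decision rule evaluated above, flagging a record when either the accuracy score or the completion time crosses its cutoff, can be sketched as follows. The cutoffs mirror the values reported in the abstract (accuracy ≤11, time ≥35 s), but the example cases are invented:

```python
# Illustrative sketch of a combined accuracy/time validity rule.
# Cutoffs follow the abstract; the cases are hypothetical.

def flag_invalid(fcr_accuracy: int, t2c_seconds: float,
                 acc_cutoff: int = 11, time_cutoff: float = 35.0) -> bool:
    """Return True when performance should be flagged as non-credible."""
    return fcr_accuracy <= acc_cutoff or t2c_seconds >= time_cutoff

cases = [
    (12, 20.0),  # accurate and fast -> valid
    (11, 20.0),  # low accuracy -> flagged
    (12, 40.0),  # slow despite intact accuracy -> flagged
]
flags = [flag_invalid(acc, t2c) for acc, t2c in cases]
```

The time criterion catches the cases the accuracy criterion misses, which is how the combined rule attenuates the low ceiling of the accuracy score.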
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | - Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | - Isabelle Messa
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
| | | | - Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
|
31
|
O'Sullivan M, Fitzsimons S, Ramos SDS, Oddy M, Sterr A. Characteristics and neuropsychological impact of traumatic brain injury in female prisoners. Brain Inj 2020; 35:72-81. [PMID: 33307834 DOI: 10.1080/02699052.2020.1858344] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Objective: To investigate the characteristics of head injury (HI) and its association with offending behaviour, psychological and neurobehavioral functioning, and cognitive performance in female prisoners. Methods: Using a cross-sectional design, female prisoners in the UK reporting a HI with a loss of consciousness (LOC) over ten minutes (n = 10) were compared with a group without a HI with LOC over ten minutes (n = 41) across a range of measures, including scores on standardized clinical questionnaires and performance-based cognitive assessments. Semi-structured clinical interviews assessed HI and forensic history, with forensic history triangulated against the prison database. Results: Domestic abuse was the most frequently reported cause of HI. The HI with LOC group had been to prison a greater number of times and had committed a greater number of violent offences. No significant difference was found on self-reported psychological and neurobehavioral measures, or between the groups' cognitive functioning on neuropsychological tests. Conclusions: Psychosocial factors such as trauma may contribute to higher rates of violent offending and imprisonment in those with a HI with LOC. Domestic abuse is an important factor in HI amongst female prisoners. Forensic screening and interventions need to be designed, adapted and evaluated with consideration of trauma and HI.
Affiliation(s)
- Michelle O'Sullivan
- School of Psychology, University of Surrey, Guildford, Surrey, UK; Rail Safety & Standards Board, The Helicon, London, UK
| | | | - Sara da Silva Ramos
- The Disabilities Trust Foundation, Brain Injury Rehabilitation Trust, Kerwin Court, Horsham, West Sussex, UK
| | - Michael Oddy
- The Disabilities Trust Foundation, Brain Injury Rehabilitation Trust, Kerwin Court, Horsham, West Sussex, UK
| | - Annette Sterr
- School of Psychology, University of Surrey, Guildford, Surrey, UK
|
32
|
van Impelen A, Jelicic M, Otgaar H, Merckelbach H. Detecting Feigned Cognitive Impairment With Schretlen’s Malingering Scale Vocabulary and Abstraction Test. EUROPEAN JOURNAL OF PSYCHOLOGICAL ASSESSMENT 2019. [DOI: 10.1027/1015-5759/a000438] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
Schretlen’s Malingering Scale Vocabulary and Abstraction test (MSVA) differs from the majority of performance validity tests in that it focuses on the detection of feigned impairments in semantic knowledge and perceptual reasoning rather than feigned memory problems. We administered the MSVA to children (n = 41), forensic inpatients with intellectual disability (n = 25), forensic inpatients with psychiatric symptoms (n = 57), and three groups of undergraduate students (n = 30, n = 79, and n = 90), asking approximately half of each of these samples to feign impairment and the other half to respond genuinely. With cutpoints chosen so as to keep false-positive rates below 10%, detection rates of experimentally feigned cognitive impairment were high in children (90%) and inpatients with intellectual disability (100%), but low in adults without intellectual disability (46%). The rates of significantly below-chance performance were low (4%), except in children (47%) and intellectually disabled inpatients (50%). The reliability of the MSVA was excellent (Cronbach’s α = .93–.97) and the MSVA proved robust against coaching (i.e., informed attempts to evade detection while feigning). We conclude that the MSVA is not yet ready for clinical use, but that it shows sufficient promise to warrant further validation efforts.
Affiliation(s)
- Alfons van Impelen
- Forensic Psychology Section, Department of Clinical Psychological Science, Maastricht University, Maastricht, The Netherlands
| | - Marko Jelicic
- Forensic Psychology Section, Department of Clinical Psychological Science, Maastricht University, Maastricht, The Netherlands
| | - Henry Otgaar
- Forensic Psychology Section, Department of Clinical Psychological Science, Maastricht University, Maastricht, The Netherlands
| | - Harald Merckelbach
- Forensic Psychology Section, Department of Clinical Psychological Science, Maastricht University, Maastricht, The Netherlands
|
33
|
Martin PK, Schroeder RW, Olsen DH, Maloy H, Boettcher A, Ernst N, Okut H. A systematic review and meta-analysis of the Test of Memory Malingering in adults: Two decades of deception detection. Clin Neuropsychol 2019; 34:88-119. [DOI: 10.1080/13854046.2019.1637027] [Citation(s) in RCA: 68] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Phillip K. Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine –Wichita, Wichita, KS, USA
| | - Ryan W. Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine –Wichita, Wichita, KS, USA
| | - Daniel H. Olsen
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | - Halley Maloy
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | | | - Nathan Ernst
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | - Hayrettin Okut
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
|
34
|
Geographic Variation and Instrumentation Artifacts: in Search of Confounds in Performance Validity Assessment in Adults with Mild TBI. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09354-w] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
|
35
|
Rai JK, Erdodi LA. Impact of criterion measures on the classification accuracy of TOMM-1. APPLIED NEUROPSYCHOLOGY-ADULT 2019; 28:185-196. [PMID: 31187632 DOI: 10.1080/23279095.2019.1613994] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
This study was designed to examine the effect of various criterion measures on the classification accuracy of Trial 1 of the Test of Memory Malingering (TOMM-1), a free-standing performance validity test (PVT). Archival data were collected from a case sequence of 91 patients clinically referred for neuropsychological assessment (M age = 42.2 years; M education = 12.7 years). Trial 2 and the Retention trial of the TOMM, the Word Choice Test, and three validity composites were used as criterion PVTs. Classification accuracy varied systematically as a function of criterion PVT. TOMM-1 ≤ 43 emerged as the optimal cutoff, resulting in a wide range of sensitivity (.47-1.00) with perfect overall specificity. Failing the TOMM-1 was unrelated to age, education, or gender, but was associated with elevated self-reported depression. Results support the utility of TOMM-1 as an independent, free-standing, single-trial PVT. Consistent with previous reports, the choice of criterion measure influences parameter estimates of the PVT being calibrated. The methodological implications of modality specificity for PVT research and clinical/forensic practice should be considered when evaluating cutoffs or interpreting scores in the failing range.
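The calibration design described above can be mimicked in a few lines: the same TOMM-1 cutoff is scored against two different criterion PVTs, and sensitivity shifts with the criterion. The cutoff (≤43) follows the abstract; all scores and criterion labels below are invented, not the study's data:

```python
# Hedged sketch: sensitivity of a fixed TOMM-1 cutoff (<= 43) against two
# hypothetical criterion PVTs. Everything below is illustrative.

def sensitivity(tomm1_scores, criterion_invalid, cutoff=43):
    """Proportion of criterion-defined invalid cases flagged by TOMM-1."""
    flagged = [score <= cutoff for score in tomm1_scores]
    hits = [f for f, invalid in zip(flagged, criterion_invalid) if invalid]
    return sum(hits) / len(hits)

scores      = [50, 48, 44, 41, 38, 35, 49, 42]
criterion_a = [0,  0,  0,  1,  1,  1,  0,  1]  # e.g., failing Trial 2/Retention
criterion_b = [0,  0,  1,  1,  1,  1,  0,  0]  # e.g., failing a validity composite

sens_a = sensitivity(scores, criterion_a)
sens_b = sensitivity(scores, criterion_b)
```

The spread between `sens_a` and `sens_b` is the abstract's point: parameter estimates of the PVT being calibrated depend on which criterion measure defines "invalid."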
Affiliation(s)
- Jaspreet K Rai
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada; University of Windsor, Edmonton, Alberta, Canada
| | - Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
|
36
|
Erdodi LA, Taylor B, Sabelli AG, Malleck M, Kirsch NL, Abeare CA. Demographically Adjusted Validity Cutoffs on the Finger Tapping Test Are Superior to Raw Score Cutoffs in Adults with TBI. PSYCHOLOGICAL INJURY & LAW 2019. [DOI: 10.1007/s12207-019-09352-y] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
37
|
Maiman M, Del Bene VA, MacAllister WS, Sheldon S, Farrell E, Arce Rentería M, Slugh M, Nadkarni SS, Barr WB. Reliable Digit Span: Does it Adequately Measure Suboptimal Effort in an Adult Epilepsy Population? Arch Clin Neuropsychol 2019; 34:259-267. [PMID: 29659666 DOI: 10.1093/arclin/acy027] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2017] [Accepted: 03/21/2018] [Indexed: 01/19/2023] Open
Abstract
Objective Assessment of performance validity is a necessary component of any neuropsychological evaluation. Prior research has shown that cutoff scores of ≤6 or ≤7 on Reliable Digit Span (RDS) can detect suboptimal effort across numerous adult clinical populations; however, these scores have not been validated for that purpose in an adult epilepsy population. This investigation aims to determine whether these previously established RDS cutoff scores could detect suboptimal effort in adults with epilepsy. Method Sixty-three clinically referred adults with a diagnosis of epilepsy or suspected seizures were administered the Digit Span subtest of the Wechsler Adult Intelligence Scale (WAIS-III or WAIS-IV). Most participants (98%) passed Trial 2 of the Test of Memory Malingering (TOMM), achieving a score of ≥45. Results Previously established cutoff scores of ≤6 and ≤7 on RDS yielded specificity rates of 85% and 77%, respectively. Findings also revealed that RDS scores were positively related to attention and intellectual functioning. Given the less than ideal specificity rate associated with each of these cutoff scores, together with their strong association with cognitive factors, secondary analyses were conducted to identify more optimal cutoff scores. Preliminary results suggest that an RDS cutoff score of ≤4 may be more appropriate in a clinically referred adult epilepsy population with low average IQ or below. Conclusions Preliminary findings indicate that cutoff scores of ≤6 and ≤7 on RDS are not appropriate in adults with epilepsy, especially in individuals with low average IQ or below.
Affiliation(s)
- Moshe Maiman
- Department of Neurology, NYU-Langone Comprehensive Epilepsy Center, NYU-Langone Health, NYU School of Medicine, New York, NY, USA; Department of Psychology, Drexel University, Philadelphia, PA, USA
| | - Victor A Del Bene
- Department of Neurology, NYU-Langone Comprehensive Epilepsy Center, NYU-Langone Health, NYU School of Medicine, New York, NY, USA; Ferkauf Graduate School of Psychology, Clinical Health Psychology Program, Yeshiva University, Bronx, NY, USA
| | - William S MacAllister
- Department of Neurology, NYU-Langone Comprehensive Epilepsy Center, NYU-Langone Health, NYU School of Medicine, New York, NY, USA
| | - Sloane Sheldon
- Department of Neurology, NYU-Langone Comprehensive Epilepsy Center, NYU-Langone Health, NYU School of Medicine, New York, NY, USA; Ferkauf Graduate School of Psychology, Clinical Health Psychology Program, Yeshiva University, Bronx, NY, USA
| | - Eileen Farrell
- Institute of Neurology and Neurosurgery, Saint Barnabas, Livingston, NJ, USA
| | - Miguel Arce Rentería
- Department of Neurology, NYU-Langone Comprehensive Epilepsy Center, NYU-Langone Health, NYU School of Medicine, New York, NY, USA; Psychology Department, Fordham University, Bronx, NY, USA
| | - Mitchell Slugh
- Department of Neurology, NYU-Langone Comprehensive Epilepsy Center, NYU-Langone Health, NYU School of Medicine, New York, NY, USA; School of Psychology, Fairleigh Dickinson University, Teaneck, NJ, USA
| | - Siddhartha S Nadkarni
- Department of Neurology, NYU-Langone Comprehensive Epilepsy Center, NYU-Langone Health, NYU School of Medicine, New York, NY, USA
| | - William B Barr
- Department of Neurology, NYU-Langone Comprehensive Epilepsy Center, NYU-Langone Health, NYU School of Medicine, New York, NY, USA
|
38
|
Lippa SM, Lange RT, French LM, Iverson GL. Performance Validity, Neurocognitive Disorder, and Post-concussion Symptom Reporting in Service Members with a History of Mild Traumatic Brain Injury. Arch Clin Neuropsychol 2019; 33:606-618. [PMID: 29069278 DOI: 10.1093/arclin/acx098] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2017] [Accepted: 09/26/2017] [Indexed: 11/13/2022] Open
Abstract
Objective To examine the influence of different performance validity test (PVT) cutoffs on neuropsychological performance, post-concussion symptoms, and rates of neurocognitive disorder and postconcussional syndrome following mild traumatic brain injury (MTBI) in active duty service members. Method Participants were 164 service members (Age: M = 28.1 years [SD = 7.3]) evaluated on average 4.1 months (SD = 5.0) following injury. Participants were divided into three mutually exclusive groups using original and alternative cutoff scores on the Test of Memory Malingering (TOMM) and the Effort Index (EI) from the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS): (a) PVT-Pass, n = 85; (b) Alternative PVT-Fail, n = 53; and (c) Original PVT-Fail, n = 26. Participants also completed the Neurobehavioral Symptom Inventory. Results The PVT-Pass group performed better on cognitive testing and reported fewer symptoms than the two PVT-Fail groups. The Original PVT-Fail group performed more poorly on cognitive testing and reported more symptoms than the Alternative PVT-Fail group. Both PVT-Fail groups were more likely to meet DSM-5 Category A criteria for mild and major neurocognitive disorder and symptom reporting criteria for postconcussional syndrome than the PVT-Pass group. When alternative PVT cutoffs were used instead of original PVT cutoffs, the number of participants with valid data meeting cognitive testing criteria for neurocognitive disorder or postconcussional syndrome decreased dramatically. Conclusion PVT performance is significantly and meaningfully related to overall neuropsychological outcome. By using only original cutoffs, clinicians and researchers may miss people with invalid performances.
Affiliation(s)
- Sara M Lippa
- Defense and Veterans Brain Injury Center, Bethesda, MD, USA; National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, MD, USA
| | - Rael T Lange
- Defense and Veterans Brain Injury Center, Bethesda, MD, USA; National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, MD, USA; Department of Psychiatry, University of British Columbia, Vancouver, BC, Canada
| | - Louis M French
- Defense and Veterans Brain Injury Center, Bethesda, MD, USA; National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, MD, USA; Center for Neuroscience and Regenerative Medicine, Bethesda, MD, USA; Department of Physical Medicine and Rehabilitation, Center for Rehabilitation Sciences Research, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
| | - Grant L Iverson
- Defense and Veterans Brain Injury Center, Bethesda, MD, USA; Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, USA; Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital, Charlestown, MA, USA; Home Base, A Red Sox Foundation and Massachusetts General Hospital Program, Boston, MA, USA
|
39
|
Schroeder RW, Olsen DH, Martin PK. Classification accuracy rates of four TOMM validity indices when examined independently and jointly. Clin Neuropsychol 2019; 33:1373-1387. [DOI: 10.1080/13854046.2019.1619839] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Ryan W. Schroeder
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | - Daniel H. Olsen
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
| | - Phillip K. Martin
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
|
40
|
Waldron-Perrine B, Gabel NM, Seagly K, Kraal AZ, Pangilinan P, Spencer RJ, Bieliauskas L. Montreal Cognitive Assessment as a screening tool: Influence of performance and symptom validity. Neurol Clin Pract 2019; 9:101-108. [PMID: 31041123 PMCID: PMC6461423 DOI: 10.1212/cpj.0000000000000604] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2018] [Accepted: 12/03/2018] [Indexed: 11/15/2022]
Abstract
BACKGROUND We evaluated Montreal Cognitive Assessment (MoCA) performance in a veteran traumatic brain injury (TBI) population, considering performance validity test (PVT) and symptom validity test (SVT) data, and explored associations of MoCA performance with neuropsychological test performance and self-reported distress. METHODS Of 198 consecutively referred veterans to a Veterans Administration TBI/Polytrauma Clinic, 117 were included in the final sample. The MoCA was administered as part of the evaluation. Commonly used measures of neuropsychological functioning and performance and symptom validity were also administered to aid in diagnosis. RESULTS Successively worse MoCA performances were associated with a greater number of PVT failures (ps < 0.05). Failure of both the SVT and at least 1 PVT yielded the lowest MoCA scores. Self-reported distress (both posttraumatic stress disorder symptoms and neurobehavioral cognitive symptoms) was also related to MoCA performance. CONCLUSIONS Performance on the MoCA is influenced by task engagement and symptom validity. Causal inferences about neurologic and neurocognitive impairment, particularly in the context of mild TBI, wherein the natural course of recovery is well known, should therefore be made cautiously when such inferences are based heavily on MoCA scores. Neuropsychologists are well versed in the assessment of performance and symptom validity and thus may be well suited to explore the influences of abnormal performances on cognitive screening.
Affiliation(s)
- Brigid Waldron-Perrine
- Department of Physical Medicine and Rehabilitation (BW-P), Wayne State University School of Medicine, Detroit; Department of Physical Medicine and Rehabilitation (NMG, KS, PP), Michigan Medicine, University of Michigan; Department of Psychology (AZK), University of Michigan; VA Health System (PP); Mental health Service (116b) (RJS), VA Ann Arbor Healthcare; Neuropsychology Section of Psychiatry (RJS, LB), Michigan Medicine; and Ann Arbor, MI
| | - Nicolette M Gabel
- Department of Physical Medicine and Rehabilitation (BW-P), Wayne State University School of Medicine, Detroit; Department of Physical Medicine and Rehabilitation (NMG, KS, PP), Michigan Medicine, University of Michigan; Department of Psychology (AZK), University of Michigan; VA Health System (PP); Mental health Service (116b) (RJS), VA Ann Arbor Healthcare; Neuropsychology Section of Psychiatry (RJS, LB), Michigan Medicine; and Ann Arbor, MI
| | - Katharine Seagly
- Department of Physical Medicine and Rehabilitation (BW-P), Wayne State University School of Medicine, Detroit; Department of Physical Medicine and Rehabilitation (NMG, KS, PP), Michigan Medicine, University of Michigan; Department of Psychology (AZK), University of Michigan; VA Health System (PP); Mental health Service (116b) (RJS), VA Ann Arbor Healthcare; Neuropsychology Section of Psychiatry (RJS, LB), Michigan Medicine; and Ann Arbor, MI
| | - A Zarina Kraal
- Department of Physical Medicine and Rehabilitation (BW-P), Wayne State University School of Medicine, Detroit; Department of Physical Medicine and Rehabilitation (NMG, KS, PP), Michigan Medicine, University of Michigan; Department of Psychology (AZK), University of Michigan; VA Health System (PP); Mental health Service (116b) (RJS), VA Ann Arbor Healthcare; Neuropsychology Section of Psychiatry (RJS, LB), Michigan Medicine; and Ann Arbor, MI
| | - Percival Pangilinan
- Department of Physical Medicine and Rehabilitation (BW-P), Wayne State University School of Medicine, Detroit; Department of Physical Medicine and Rehabilitation (NMG, KS, PP), Michigan Medicine, University of Michigan; Department of Psychology (AZK), University of Michigan; VA Health System (PP); Mental health Service (116b) (RJS), VA Ann Arbor Healthcare; Neuropsychology Section of Psychiatry (RJS, LB), Michigan Medicine; and Ann Arbor, MI
| | - Robert J Spencer
- Department of Physical Medicine and Rehabilitation (BW-P), Wayne State University School of Medicine, Detroit; Department of Physical Medicine and Rehabilitation (NMG, KS, PP), Michigan Medicine, University of Michigan; Department of Psychology (AZK), University of Michigan; VA Health System (PP); Mental health Service (116b) (RJS), VA Ann Arbor Healthcare; Neuropsychology Section of Psychiatry (RJS, LB), Michigan Medicine; and Ann Arbor, MI
| | - Linas Bieliauskas
- Department of Physical Medicine and Rehabilitation (BW-P), Wayne State University School of Medicine, Detroit; Department of Physical Medicine and Rehabilitation (NMG, KS, PP), Michigan Medicine, University of Michigan; Department of Psychology (AZK), University of Michigan; VA Health System (PP); Mental health Service (116b) (RJS), VA Ann Arbor Healthcare; Neuropsychology Section of Psychiatry (RJS, LB), Michigan Medicine; and Ann Arbor, MI
|
41
|
Critchfield E, Soble JR, Marceaux JC, Bain KM, Chase Bailey K, Webber TA, Alex Alverson W, Messerly J, Andrés González D, O’Rourke JJF. Cognitive impairment does not cause invalid performance: analyzing performance patterns among cognitively unimpaired, impaired, and noncredible participants across six performance validity tests. Clin Neuropsychol 2018; 33:1083-1101. [DOI: 10.1080/13854046.2018.1508615] [Citation(s) in RCA: 62] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Edan Critchfield
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
| | - Jason R. Soble
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
| | - Janice C. Marceaux
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
| | - Kathleen M. Bain
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
| | - K. Chase Bailey
- Division of Psychology, UT Southwestern Medical Center, Dallas, TX, USA
| | - Troy A. Webber
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
| | - W. Alex Alverson
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
| | - Johanna Messerly
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
| | - David Andrés González
- Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
|
42
|
One-Minute PVT: Further Evidence for the Utility of the California Verbal Learning Test—Children’s Version Forced Choice Recognition Trial. JOURNAL OF PEDIATRIC NEUROPSYCHOLOGY 2018. [DOI: 10.1007/s40817-018-0057-4] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
43
|
Reyes A, LaBode-Richman V, Salinas L, Barr WB. WHO-AVLT recognition trial: Initial validation for a new malingering index for Spanish-speaking patients. APPLIED NEUROPSYCHOLOGY-ADULT 2018; 26:564-572. [PMID: 30183353 DOI: 10.1080/23279095.2018.1470974] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
Several methods for identifying suboptimal effort on Spanish neuropsychological assessment have been established. The purpose of this retrospective study was to determine whether recognition data from the WHO-AVLT could be employed for the determination of malingering in a Spanish-speaking sample. Sixteen subjects in litigation, 25 neurological patients, and 14 healthy controls completed neuropsychological testing. All subjects completed the Test of Memory Malingering (TOMM). Inclusion criteria for neurological patients and controls included performance above the standard TOMM cutoff. Subjects in litigation were classified as probable malingerers based on below-cutoff performance on the TOMM and at least one other performance validity measure. Cutoff scores for the classification of malingering were determined based on the number of recognition hits on the WHO-AVLT. The probable malingering group performed significantly worse than both other groups on recognition hits. A score <10 was determined to be the optimal group cutoff, with 56.25% sensitivity and specificity greater than 92%. A combination cutoff score of 14 increased sensitivity to 68.75%. These findings provide initial validation of a new malingering index based on the number of hits on the WHO-AVLT recognition trial. This index will provide valuable information to neuropsychologists conducting forensic or clinical evaluations of Spanish-speaking individuals.
Affiliation(s)
- Anny Reyes
- Neuropsychology Division, Department of Neurology, NYU School of Medicine, New York, New York, USA
| | - Vanessa LaBode-Richman
- Neuropsychology Division, Department of Neurology, NYU School of Medicine, New York, New York, USA; Long Island University: Brooklyn Campus, Brooklyn, New York, USA
| | - Lilian Salinas
- Neuropsychology Division, Department of Neurology, NYU School of Medicine, New York, New York, USA
| | - William B Barr
- Neuropsychology Division, Department of Neurology, NYU School of Medicine, New York, New York, USA
|
44
|
Riordan P, Lahr G. Classification accuracy of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) Effort Index (EI) and Effort Scale (ES) relative to the Test Of Memory Malingering (TOMM) in a mixed clinical sample. APPLIED NEUROPSYCHOLOGY-ADULT 2018; 27:82-86. [DOI: 10.1080/23279095.2018.1485678] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- Patrick Riordan
- Neuropsychology Department, Loyola University Medical Center, Maywood, Illinois, USA
| | - Genessa Lahr
- Neuropsychology Department, Loyola University Medical Center, Maywood, Illinois, USA
|
45
|
Kanser RJ, Rapport LJ, Bashem JR, Hanks RA. Detecting malingering in traumatic brain injury: Combining response time with performance validity test accuracy. Clin Neuropsychol 2018; 33:90-107. [DOI: 10.1080/13854046.2018.1440006] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Robert J. Kanser
- Department of Psychology, Wayne State University, Detroit, MI, USA
| | - Lisa J. Rapport
- Department of Psychology, Wayne State University, Detroit, MI, USA
| | - Jesse R. Bashem
- Department of Psychology, Wayne State University, Detroit, MI, USA
| | - Robin A. Hanks
- Department of Physical Medicine and Rehabilitation, Wayne State University, Detroit, MI, USA
|
46
|
Lippa SM. Performance validity testing in neuropsychology: a clinical guide, critical review, and update on a rapidly evolving literature. Clin Neuropsychol 2017; 32:391-421. [DOI: 10.1080/13854046.2017.1406146] [Citation(s) in RCA: 103] [Impact Index Per Article: 12.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Sara M. Lippa
- Defense and Veterans Brain Injury Center, Silver Spring, MD, USA
- Walter Reed National Military Medical Center, Bethesda, MD, USA
- National Intrepid Center of Excellence, Bethesda, MD, USA
47
Erdodi LA, Rai JK. A single error is one too many: Examining alternative cutoffs on Trial 2 of the TOMM. Brain Inj 2017;31:1362-1368. [DOI: 10.1080/02699052.2017.1332386]
Affiliation(s)
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jaspreet K. Rai
- Department of Psychology, University of Windsor, Windsor, ON, Canada
48
Binder LM, Chafetz MD. Determination of the smoking gun of intent: significance testing of forced choice results in social security claimants. Clin Neuropsychol 2017;32:132-144. [PMID: 28617092] [DOI: 10.1080/13854046.2017.1337931]
Abstract
OBJECTIVE Significantly below-chance findings on forced-choice tests have been described as revealing "the smoking gun of intent" that proves malingering. A previous study addressed the issues of probability levels, one-tailed vs. two-tailed tests, and the combining of PVT scores when evaluating significantly below-chance findings, and recommended a probability level of .20 for testing the significance of below-chance results. The purpose of the present study was to determine the rate of below-chance findings in a Social Security Disability claimant sample using those recommendations. METHOD We compared the frequency of below-chance results on forced-choice performance validity tests (PVTs) at two significance levels, .05 and .20, and when applying significance testing to individual PVT subtests versus total scores, in claimants for Social Security Disability, in order to determine the rate of the expected increase. RESULTS The frequency of significant results increased at the higher significance level for each PVT subtest and when individual test sections were combined to increase the number of test items; up to 20% of claimants showed significantly below-chance results at the higher p-value. CONCLUSIONS These findings are discussed in light of Social Security Administration policy, showing an impact on policy issues concerning child abuse and neglect, and the importance of using these techniques in evaluations for Social Security Disability.
49
Dorociak KE, Schulze ET, Piper LE, Molokie RE, Janecek JK. Performance validity testing in a clinical sample of adults with sickle cell disease. Clin Neuropsychol 2017. [PMID: 28632024] [DOI: 10.1080/13854046.2017.1339830]
Abstract
OBJECTIVE Neuropsychologists use performance validity tests (PVTs) as objective means of drawing inferences about performance validity. The Test of Memory Malingering (TOMM) is a well-validated stand-alone PVT, and the Reliable Digit Span (RDS) and Reliable Digit Span-Revised (RDS-R), derived from the WAIS-IV Digit Span subtest, are commonly employed embedded PVTs. While research has demonstrated the utility of these PVTs in various clinical samples, no research has investigated their use in adults with sickle cell disease (SCD), a condition associated with multiple neurological, physical, and psychiatric symptoms. The purpose of this study was therefore to explore PVT performance in adults with SCD. METHOD Fifty-four adults with SCD (mean age = 40.61, SD = 12.35) were consecutively referred by their hematologist for a routine outpatient clinical neuropsychological evaluation. During the evaluation, participants were administered the TOMM (Trials 1 and 2), neuropsychological measures including the WAIS-IV Digit Span subtest, and mood and behavioral questionnaires. RESULTS The average TOMM score was 47.70 (SD = 3.47, range = 34-50) for Trial 1 and 49.69 (SD = 1.66, range = 38-50) for Trial 2. Only one participant failed Trial 2 of the TOMM, yielding a 98.1% pass rate for the sample. Pass rates at various RDS and RDS-R values were calculated using TOMM Trial 2 performance as the external criterion. CONCLUSIONS Results support the use of the TOMM as a measure of performance validity in individuals with SCD, while RDS and RDS-R should be interpreted with caution in this population.
Affiliation(s)
- Katherine E Dorociak
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
- Evan T Schulze
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
- Lauren E Piper
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
- Robert E Molokie
- Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Julie K Janecek
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
50
Psychometric Markers of Genuine and Feigned Neurodevelopmental Disorders in the Context of Applying for Academic Accommodations. Psychol Inj Law 2017. [DOI: 10.1007/s12207-017-9287-5]