1
Boress K, Gaasedelen O, Kim JH, Basso MR, Whiteside DM. Examination of the relationship between symptom and performance validity measures across referral subtypes. J Clin Exp Neuropsychol 2024; 46:162-171. [PMID: 37791494] [DOI: 10.1080/13803395.2023.2261633]
Abstract
INTRODUCTION The extent to which performance validity tests (PVTs) and symptom validity tests (SVTs) measure separate constructs is unclear. Prior research using the Minnesota Multiphasic Personality Inventory (MMPI-2 and MMPI-2-RF) suggested that PVTs and SVTs are separate but related constructs. However, the relationship between Personality Assessment Inventory (PAI) SVTs and PVTs has not been explored. This study aimed to replicate previous MMPI research using the PAI, examining the relationship between PVTs and overreporting SVTs across three subsamples: neurodevelopmental (attention-deficit/hyperactivity disorder (ADHD)/learning disorder), psychiatric, and mild traumatic brain injury (mTBI). METHODS Participants were 561 consecutive referrals who completed the Test of Memory Malingering (TOMM) and the PAI. Three subgroups were created based on referral question. The relationship between PAI SVTs and the PVT was evaluated with multiple regression analysis. RESULTS The relationship between PAI symptom-overreporting SVTs, including Negative Impression Management (NIM), Malingering Index (MAL), and Cognitive Bias Scale (CBS), and PVT performance varied by referral subgroup. Specifically, overreporting on CBS, but not NIM or MAL, significantly predicted poorer PVT performance in the full sample and the mTBI sample. In contrast, none of the overreporting SVTs significantly predicted PVT performance in the ADHD/learning disorder sample, whereas all of them did in the psychiatric sample. CONCLUSIONS The results partially replicated prior research comparing SVTs and PVTs and suggest that the constructs measured by SVTs and PVTs vary by population. The results support the necessity of both PVTs and SVTs in clinical neuropsychological practice.
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Jeong Hye Kim
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Douglas M Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
2
Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part II-Methodological Issues. Neuropsychol Rev 2023; 33:604-623. [PMID: 37594690] [DOI: 10.1007/s11065-023-09602-6]
Abstract
Forensic neuropsychological examinations to detect malingering in patients with neurocognitive, physical, and psychological dysfunction have tremendous social, legal, and economic importance. Thousands of studies have been published to develop and validate methods to forensically detect malingering, based largely on approximately 50 validity tests, including embedded and stand-alone performance and symptom validity tests. This is Part II of a two-part review of statistical and methodological issues in the forensic prediction of malingering based on validity tests. The Part I companion paper explored key statistical issues. Part II examines related methodological issues through conceptual analysis, statistical simulations, and reanalysis of findings from prior validity test validation studies. Methodological issues examined include the distinction between analog simulation and forensic studies, the effect of excluding too-close-to-call (TCTC) cases from analyses, the distinction between criterion-related and construct validation studies, and the application of the Revised Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) to all Test of Memory Malingering (TOMM) validation studies published within approximately the first 20 years after its initial publication, to assess risk of bias. Findings include that analog studies are commonly mistaken for forensic validation studies and that construct validation studies are routinely presented as if they were criterion-referenced validation studies. After accounting for the exclusion of TCTC cases, actual classification accuracy was found to be well below claimed levels. QUADAS-2 results revealed that every extant TOMM validation study had a high risk of bias; not a single one had a low risk of bias.
Recommendations include adoption of well-established guidelines from the biomedical diagnostics literature for good quality criterion-referenced validation studies and examination of implications for malingering determination practices. Design of future studies may hinge on the availability of an incontrovertible reference standard of the malingering status of examinees.
Affiliation(s)
- Christoph Leonhard
- The Chicago School of Professional Psychology at Xavier University of Louisiana, 1 Drexel Dr, Box 200, New Orleans, LA, 70125, USA.
3
Erdodi LA. From "below chance" to "a single error is one too many": Evaluating various thresholds for invalid performance on two forced choice recognition tests. Behav Sci Law 2023; 41:445-462. [PMID: 36893020] [DOI: 10.1002/bsl.2609]
Abstract
This study was designed to empirically evaluate the classification accuracy of various definitions of invalid performance on two forced-choice recognition performance validity tests (PVTs): the Forced Choice Recognition trial of the California Verbal Learning Test - Second Edition (CVLT-II FCR) and the Test of Memory Malingering (TOMM-2). The proportions of at-chance and below-chance responding (defined by the binomial distribution) and of making any errors were computed across two mixed clinical samples from the United States and Canada (N = 470) and two sets of criterion PVTs. There was virtually no overlap between the binomial and empirical distributions. Over 95% of patients who passed all PVTs obtained a perfect score. At-chance responding was limited to patients who failed ≥2 PVTs (91% of them failed 3 PVTs). No one scored below chance on the CVLT-II FCR or the TOMM-2. All 40 patients with dementia scored above chance. Although at- or below-chance performance provides very strong evidence of non-credible responding, scores above chance level have no negative predictive value. Even at-chance scores on PVTs provide compelling evidence of non-credible presentation. A single error on the CVLT-II FCR or TOMM-2 is highly specific (0.95) to psychometrically defined invalid performance. Defining non-credible responding as below-chance scores is an unnecessarily restrictive threshold that gives most examinees with invalid profiles a Pass.
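The binomial definition of chance-level responding invoked above can be made concrete with a short sketch. This is an illustration, not code from the study: the function name is ours, and the 16-item, two-alternative trial length is chosen only as an example of a forced-choice format.

```python
from math import comb

def below_chance_cutoff(n_items: int, p: float = 0.5, alpha: float = 0.05) -> int:
    """Largest raw score k such that P(X <= k) < alpha when X ~ Binomial(n_items, p),
    i.e., when the examinee guesses at random on every item.  Scores at or below
    this cutoff are statistically below chance.  Returns -1 if no score qualifies."""
    cdf = 0.0
    cutoff = -1
    for k in range(n_items + 1):
        cdf += comb(n_items, k) * p**k * (1 - p) ** (n_items - k)
        if cdf < alpha:
            cutoff = k
        else:
            break
    return cutoff

# A 16-item two-alternative forced-choice trial (chance performance = 50%):
print(below_chance_cutoff(16))  # -> 4
```

That is, on a 16-item trial a raw score of 4 or fewer correct is below chance at the .05 level, which helps explain why below-chance thresholds are so rarely met in practice.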
Affiliation(s)
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
4
Boress K, Gaasedelen OJ, Croghan A, Johnson MK, Caraher K, Basso MR, Whiteside DM. Replication and cross-validation of the Personality Assessment Inventory (PAI) Cognitive Bias Scale (CBS) in a mixed clinical sample. Clin Neuropsychol 2022; 36:1860-1877. [PMID: 33612093] [PMCID: PMC8454137] [DOI: 10.1080/13854046.2021.1889681]
Abstract
Objective: This study is a cross-validation of the Cognitive Bias Scale (CBS) from the Personality Assessment Inventory (PAI), a ten-item scale designed to assess symptom endorsement associated with performance validity test failure in neuropsychological samples. The study utilized a mixed neuropsychological sample of consecutively referred patients at a large academic medical center in the Midwest. Participants and Methods: Participants were 332 patients who completed embedded and free-standing performance validity tests (PVTs) and the PAI. Pass and fail groups were created based on PVT performance to evaluate the classification accuracy of the CBS. Results: The results were generally consistent with the initial study for overall classification accuracy, sensitivity, and cut-off score. Consistent with the validation study, the CBS had better classification accuracy than the original PAI validity scales and an effect size comparable to that obtained in the original validation publication; however, the Somatic Complaints scale (SOM) and the Conversion subscale (SOM-C) also demonstrated good classification accuracy. The CBS had incremental predictive ability compared to existing PAI scales. Conclusions: The results supported the CBS, but further research is needed on specific populations. Findings from the present study also suggest that the relationship between conversion tendencies and PVT failure may be stronger in some geographic locations or population types (forensic versus clinical patients).
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
- Anna Croghan
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
- Marcie King Johnson
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, USA
- Kristen Caraher
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
- Michael R. Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, USA
- Douglas M. Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, USA
5
Boress K, Gaasedelen OJ, Croghan A, Johnson MK, Caraher K, Basso MR, Whiteside DM. Validation of the Personality Assessment Inventory (PAI) scale of scales in a mixed clinical sample. Clin Neuropsychol 2022; 36:1844-1859. [PMID: 33730975] [PMCID: PMC8474121] [DOI: 10.1080/13854046.2021.1900400]
Abstract
Objective: This exploratory study examined the classification accuracy of three derived scales aimed at detecting cognitive response bias in neuropsychological samples. The derived scales are composed of existing scales from the Personality Assessment Inventory (PAI). A mixed clinical sample of consecutive outpatients referred for neuropsychological assessment at a large Midwestern academic medical center was utilized. Participants and Methods: Participants included 332 patients who completed the study's embedded and free-standing performance validity tests (PVTs) and the PAI. PASS and FAIL groups were created based on PVT performance to evaluate the classification accuracy of the derived scales. Three new scales, Cognitive Bias Scale of Scales 1-3 (CB-SOS1-3), were derived by combining existing scales, either by summing the scales and dividing by the number of scales summed or by logistically deriving a variable from the contributions of several scales. Results: All of the newly derived scales significantly differentiated between PASS and FAIL groups. All of the derived SOS scales demonstrated acceptable classification accuracy (i.e., CB-SOS1 AUC = 0.72; CB-SOS2 AUC = 0.73; CB-SOS3 AUC = 0.75). Conclusions: This exploratory study demonstrates that attending to scale-level PAI data may be a promising avenue for improving prediction of PVT failure.
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Anna Croghan
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Marcie King Johnson
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Kristen Caraher
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Michael R. Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
- Douglas M. Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
6
Ali S, Crisan I, Abeare CA, Erdodi LA. Cross-Cultural Performance Validity Testing: Managing False Positives in Examinees with Limited English Proficiency. Dev Neuropsychol 2022; 47:273-294. [PMID: 35984309] [DOI: 10.1080/87565641.2022.2105847]
Abstract
Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation; even multivariate models of PVTs could not contain BRFail. In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.
Affiliation(s)
- Sami Ali
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Iulia Crisan
- Department of Psychology, West University of Timişoara, Timişoara, Romania
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
7
Ali S, Elliott L, Biss RK, Abumeeiz M, Brantuo M, Kuzmenka P, Odenigbo P, Erdodi LA. The BNT-15 provides an accurate measure of English proficiency in cognitively intact bilinguals - a study in cross-cultural assessment. Appl Neuropsychol Adult 2022; 29:351-363. [PMID: 32449371] [DOI: 10.1080/23279095.2020.1760277]
Abstract
This study was designed to replicate earlier reports of the utility of the Boston Naming Test - Short Form (BNT-15) as an index of limited English proficiency (LEP). Twenty-eight English-Arabic bilingual student volunteers were administered the BNT-15 as part of a brief battery of cognitive tests. The majority (23) were women, and half had LEP. Mean age was 21.1 years. The BNT-15 was an excellent psychometric marker of LEP status (area under the curve: .990-.995). Participants with LEP underperformed on several cognitive measures (verbal comprehension, visuomotor processing speed, single word reading, and performance validity tests). Although no participant with LEP failed the accuracy cutoff on the Word Choice Test, 35.7% of them failed the time cutoff. Overall, LEP was associated with an increased risk of failing performance validity tests. Previously published BNT-15 validity cutoffs had unacceptably low specificity (.33-.52) among participants with LEP. The BNT-15 has the potential to serve as a quick and effective objective measure of LEP. Students with LEP may need academic accommodations to compensate for slower test completion times. Likewise, examiners should consider exempting examinees with LEP from failed performance validity tests to protect against false positive errors.
Affiliation(s)
- Sami Ali
- Department of Psychology, University of Windsor, Windsor, Canada
- Lauren Elliott
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Renee K Biss
- Department of Psychology, University of Windsor, Windsor, Canada
- Mustafa Abumeeiz
- Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, Canada
- Paula Odenigbo
- Department of Psychology, University of Windsor, Windsor, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
8
Messa I, Holcomb M, Lichtenstein JD, Tyson BT, Roth RM, Erdodi LA. They are not destined to fail: a systematic examination of scores on embedded performance validity indicators in patients with intellectual disability. Aust J Forensic Sci 2021. [DOI: 10.1080/00450618.2020.1865457]
Affiliation(s)
- Isabelle Messa
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Brad T Tyson
- Neuropsychological Service, EvergreenHealth Medical Center, Kirkland, WA, USA
- Robert M Roth
- Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
9
Abeare CA, An K, Tyson B, Holcomb M, Cutler L, May N, Erdodi LA. The Emotion Word Fluency Test as an embedded performance validity indicator - alone and in a multivariate validity composite. Appl Neuropsychol Child 2021; 11:713-724. [PMID: 34424798] [DOI: 10.1080/21622965.2021.1939027]
Abstract
OBJECTIVE This project was designed to cross-validate existing performance validity cutoffs embedded within measures of verbal fluency (FAS and animals) and to develop new ones for the Emotion Word Fluency Test (EWFT), a novel measure of category fluency. METHOD The classification accuracy of the verbal fluency tests was examined in two samples (70 cognitively healthy university students and 52 clinical patients) against psychometrically defined criterion measures. RESULTS A demographically adjusted T-score of ≤31 on the FAS was specific (.88-.97) to noncredible responding in both samples. Animals T ≤ 29 achieved high specificity (.90-.93) among students at .27-.38 sensitivity. A more conservative cutoff (T ≤ 27) was needed in the patient sample for a similar combination of sensitivity (.24-.45) and specificity (.87-.93). An EWFT raw score ≤5 was highly specific (.94-.97) but insensitive (.10-.18) to invalid performance. Failing multiple cutoffs improved specificity (.90-1.00) at variable sensitivity (.19-.45). CONCLUSIONS Results help resolve the inconsistency in previous reports and confirm the overall utility of existing verbal fluency tests as embedded validity indicators. Multivariate models of performance validity assessment are superior to single indicators. The clinical utility and limitations of the EWFT as a novel measure are discussed.
Affiliation(s)
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Kelly An
- Private Practice, London, Ontario, Canada
- Brad Tyson
- Evergreen Health Medical Center, Kirkland, Washington, USA
- Matthew Holcomb
- Jefferson Neurobehavioral Group, New Orleans, Louisiana, USA
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
10
Abeare K, Romero K, Cutler L, Sirianni CD, Erdodi LA. Flipping the Script: Measuring Both Performance Validity and Cognitive Ability with the Forced Choice Recognition Trial of the RCFT. Percept Mot Skills 2021; 128:1373-1408. [PMID: 34024205] [PMCID: PMC8267081] [DOI: 10.1177/00315125211019704]
Abstract
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFT FCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery, and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 and ≤17), the RCFT FCR remained specific (.84-1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFT FCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFT FCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
Affiliation(s)
- Kaitlyn Abeare
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Kristoffer Romero
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
11
Abstract
OBJECTIVES A number of commonly used performance validity tests (PVTs) may be prone to high failure rates when used with individuals who have severe neurocognitive deficits. This study investigated the validity of 10 PVT scores in justice-involved adults with fetal alcohol spectrum disorder (FASD), a neurodevelopmental disability stemming from prenatal alcohol exposure and linked with severe neurocognitive deficits. METHOD The sample comprised 80 justice-involved adults (ages 19-40), including 25 with confirmed or possible FASD and 55 in whom FASD was ruled out. Ten PVT scores were calculated, derived from the Word Memory Test, Genuine Memory Impairment Profile, Advanced Clinical Solutions (Word Choice), the Wechsler Adult Intelligence Scale - Fourth Edition (Reliable Digit Span and age-corrected scaled scores (ACSS) from Digit Span, Coding, Symbol Search, Coding - Symbol Search, Vocabulary - Digit Span), and the Wechsler Memory Scale - Fourth Edition (Logical Memory II Recognition). RESULTS Participants with diagnosed/possible FASD were more likely to fail any single PVT, and failed a greater number of PVTs overall, compared to those without FASD. They were also more likely to fail based on the Word Memory Test, Digit Span ACSS, Coding ACSS, Symbol Search ACSS, and Logical Memory II Recognition, compared to controls (35-76%). Across both groups, substantially more participants with IQ <70 failed two or more PVTs (90%), compared to those with an IQ ≥70 (44%). CONCLUSIONS Results highlight the need for additional research examining the use of PVTs in justice-involved populations with FASD.
12
Ryan JJ, Yamaguchi T, Kreiner DS. Preliminary Validation of the Rey 15-Item Test and Reliable Digit Span in Native Japanese Samples. Psychol Rep 2019; 122:1925-1945. [DOI: 10.1177/0033294118792697]
Abstract
The Rey 15-Item Test and Reliable Digit Span were evaluated in Japan. Participants were controls (n = 15), healthy volunteers instructed to simulate memory impairment (n = 12; 5 of 17 volunteers did not comply with instructions and were dropped), healthy elderly (n = 12), and cognitively disabled nursing home residents (n = 8). On the 15-Item Test, controls and elderly performed similarly and were combined. Nursing home residents could not cope with the 15-Item Test and were dropped. Total score was a fair predictor of dissimulation using a cutoff of ≤8. Rows were a fair predictor using a cutoff of ≤2. Sensitivities were low and specificities were excellent. Reliable Digit Span contrasts between simulators and each of the other groups demonstrated that Reliable Digit Span discriminated controls and elderly from simulators (cutoffs of ≤6 and ≤5, respectively). Sensitivities were moderate and specificities were excellent. Reliable Digit Span did not differentiate simulators from nursing home residents.
Affiliation(s)
- Joseph J. Ryan
- School of Nutrition, Kinesiology, and Psychological Science, University of Central Missouri, MO, USA
- Takahiro Yamaguchi
- Department of Psychology and Counseling, Northeastern State University, OK, USA
- David S. Kreiner
- School of Nutrition, Kinesiology, and Psychological Science, University of Central Missouri, MO, USA
13
Martin PK, Schroeder RW, Olsen DH, Maloy H, Boettcher A, Ernst N, Okut H. A systematic review and meta-analysis of the Test of Memory Malingering in adults: Two decades of deception detection. Clin Neuropsychol 2019; 34:88-119. [DOI: 10.1080/13854046.2019.1637027]
Affiliation(s)
- Phillip K. Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Ryan W. Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Daniel H. Olsen
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Halley Maloy
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Nathan Ernst
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Hayrettin Okut
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
14
Poynter K, Boone KB, Ermshar A, Miora D, Cottingham M, Victor TL, Ziegler E, Zeller MA, Wright M. Wait, There's a Baby in this Bath Water! Update on Quantitative and Qualitative Cut-Offs for Rey 15-Item Recall and Recognition. Arch Clin Neuropsychol 2018; 34:1367-1380. [DOI: 10.1093/arclin/acy087]
Abstract
Objective
Evaluate the effectiveness of Rey 15-item plus recognition data in a large neuropsychological sample.
Method
Rey 15-item plus recognition scores were compared in credible (n = 138) and noncredible (n = 353) neuropsychology referrals.
Results
Noncredible patients scored significantly worse than credible patients on all Rey 15-item plus recognition scores. When cut-offs were selected to maintain at least 89.9% specificity, cut-offs could be made more stringent, with the highest sensitivity found for recognition correct (cut-off ≤11; 62.6% sensitivity) and the combination score (recall + recognition – false positives; cut-off ≤22; 60.6% sensitivity), followed by recall correct (cut-off ≤11; 49.3% sensitivity), and recognition false positive errors (≥3; 17.9% sensitivity). A cut-off of ≥4 applied to a summed qualitative error score for the recall trial resulted in 19.4% sensitivity. Approximately 10% of credible subjects failed either recall correct or recognition correct, whereas two-thirds of noncredible patients (67.7%) showed this pattern. Thirteen percent of credible patients failed either recall correct, recognition correct, or the recall qualitative error score, whereas nearly 70% of noncredible patients failed at least one of the three. Some individual qualitative recognition errors had low false positive rates (<2%) indicating that their presence was virtually pathognomonic for noncredible performance. Older age (>50) and IQ < 80 were associated with increased false positive rates in credible patients.
Conclusions
Data on a larger sample than that available in the 2002 validation study show that Rey 15-item plus recognition cut-offs can be made more stringent, and thereby detect up to 70% of noncredible test takers, but the test should be used cautiously in older individuals and in individuals with lowered IQ.
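The sensitivity and specificity figures reported above reduce to simple confusion-matrix ratios. The sketch below is illustrative only: the function name is ours, and the counts are hypothetical values back-computed from the sample sizes (353 noncredible, 138 credible) and the reported rates, not the study's raw data.

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    Here 'positive' = noncredible performance flagged by the cutoff."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with ~62.6% sensitivity and ~89.9% specificity
# for the recognition-correct cutoff (<=11) in 353 noncredible / 138 credible cases:
sens, spec = sens_spec(tp=221, fn=132, tn=124, fp=14)
print(round(sens, 3), round(spec, 3))  # -> 0.626 0.899
```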
Affiliation(s)
- Kellie Poynter
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Kyle Brauer Boone
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Annette Ermshar
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Deborah Miora
- California School of Forensic Studies, Alliant International University, Los Angeles, CA, USA
- Maria Cottingham
- Mental Health Care Line, Veterans Administration Tennessee Valley Healthcare System, Nashville, TN, USA
- Tara L Victor
- California State University, Dominguez Hills, Carson, CA, USA
- Michelle A Zeller
- West Los Angeles Veterans Administration Medical Center, Los Angeles, CA, USA
15
Goworowski L, Vagt D, Salazar C, Mulligan K, Webbe F. Normative values of the Rey Word Recognition Test in college athletes. Appl Neuropsychol Adult 2018; 27:94-100. [PMID: 30265571] [DOI: 10.1080/23279095.2018.1488716]
Abstract
Detection of sport-related concussions can be enhanced by performance validity tests such as the Rey Word Recognition Test (WRT). The WRT is brief and in the public domain, but no norms exist for healthy college athletes. The present study identified such normative values in a large college-athlete sample. Participants included 1,147 college athletes, and four measures were collected: total words correct, words correct of the first 8, total number of intrusions, and a combination score. The WRT was administered individually during baseline evaluations. Means and standard deviations were as follows: total correct words recognized, 10.47 (SD = 2.12); number of correct words out of the first eight words presented, 6.01 (SD = 1.41); number of intrusions, 0.89 (SD = 1.09); combination score, 15.59 (SD = 3.55). Females scored significantly higher than males in total words correct, number correct of the first eight, and combination score, and significantly lower in intrusions. The WRT proved to be a quick, easily administered test in the baseline testing setting. Only 22 athletes recognized all 15 words, and a close to normal distribution of scores was obtained, suggesting that an expectation of optimum performance by college athletes as an inference of effortful performance would be misplaced.
Affiliation(s)
- Lauren Goworowski
- Department of Psychology, Florida Institute of Technology, Melbourne, Florida, USA
- Denise Vagt
- Department of Psychology, Florida Institute of Technology, Melbourne, Florida, USA
- Carlos Salazar
- Department of Psychology, Florida Institute of Technology, Melbourne, Florida, USA
- Kevin Mulligan
- Department of Psychology, Florida Institute of Technology, Melbourne, Florida, USA
- Frank Webbe
- Department of Psychology, Florida Institute of Technology, Melbourne, Florida, USA
16
Tracy DK. Evaluating malingering in cognitive and memory examinations: a guide for clinicians. BJPsych Advances 2018. [DOI: 10.1192/apt.bp.114.012906]
Abstract
Cognitive and memory testing is a common part of clinical practice, but professional concerns are sometimes raised that the individual being tested might be feigning deficits. Most clinicians have limited experience and training in detecting malingering in such cognitive testing, and the very issue raises considerable ethical dilemmas. Nevertheless, psychiatric work faces ever greater potential for legal scrutiny, and failure to appropriately evaluate potential malingering risks professional embarrassment and distress. There is a need for clinicians to make themselves aware of the ways in which malingered behaviour might be evaluated through the clinical history, the use of routine psychometric testing and, particularly, the use of symptom validity (‘malingering’) tests. This article describes these factors and gives guidance on the appropriate reporting of findings.
Learning objectives:
- Better understand the complexities in cognitive assessment where malingering is suspected.
- Understand the types and limitations of the major symptom validity tests.
- Be better prepared to produce documentation and reports stating test findings.
17
Lippa SM. Performance validity testing in neuropsychology: a clinical guide, critical review, and update on a rapidly evolving literature. Clin Neuropsychol 2017; 32:391-421. [DOI: 10.1080/13854046.2017.1406146]
Affiliation(s)
- Sara M. Lippa
- Defense and Veterans Brain Injury Center, Silver Spring, MD, USA
- Walter Reed National Military Medical Center, Bethesda, MD, USA
- National Intrepid Center of Excellence, Bethesda, MD, USA
18
Erdodi LA, Rai JK. A single error is one too many: Examining alternative cutoffs on Trial 2 of the TOMM. Brain Inj 2017; 31:1362-1368. [DOI: 10.1080/02699052.2017.1332386]
Affiliation(s)
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jaspreet K. Rai
- Department of Psychology, University of Windsor, Windsor, ON, Canada
19
Bailey KC, Soble JR, O’Rourke JJF. Clinical utility of the Rey 15-Item Test, recognition trial, and error scores for detecting noncredible neuropsychological performance in a mixed clinical sample of veterans. Clin Neuropsychol 2017; 32:119-131. [DOI: 10.1080/13854046.2017.1333151]
Affiliation(s)
- K. Chase Bailey
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Jason R. Soble
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Justin J. F. O’Rourke
- Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Polytrauma Rehabilitation Center, South Texas Veterans Health Care System, San Antonio, TX, USA
20
Liu Z, Dong J, Zhao X, Chen X, Lippa SM, Caroselli JS, Fang X. Assessment of feigned cognitive impairment in severe traumatic brain injury patients with the Forced-choice Graphics Memory Test. Brain Behav 2016; 6:e00593. [PMID: 28032009 PMCID: PMC5166992 DOI: 10.1002/brb3.593]
Abstract
INTRODUCTION The Forced-choice Graphics Memory Test (FGMT) is a newly developed measure to assess feigned cognitive impairment. This study investigated the ability and reliability of the FGMT for identification of malingering in patients with traumatic brain injury (TBI). METHODS The FGMT was administered to 40 healthy volunteers instructed to respond validly (Healthy Control, H-C), 40 healthy volunteers instructed to feign cognitive impairment (Healthy Malingering, H-M), 40 severe TBI patients who responded validly (TBI control, TBI-C), and 30 severe TBI patients who evidenced invalid performance (TBI malingering, TBI-M). RESULTS Both malingering groups (H-M and TBI-M) performed much more poorly than the nonmalingering groups (H-C and TBI-C). The FGMT overall total score, score on easy items, and score on hard items differed significantly across the four groups. The total score showed the highest classification accuracy in differentiating malingering from nonmalingering. A cutoff of less than 18 (total items) successfully identified 95% of TBI-C and 93.3% of TBI-M participants. The FGMT also demonstrated high test-retest reliability and internal consistency. FGMT scores were not affected by TBI patients' education, gender, age, or intelligence. CONCLUSION Our results suggest that the FGMT can be used as a fast and reliable tool for identification of feigned cognitive impairment in patients with TBI.
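The cutoff logic described above (scores below 18 flag possible feigning; 95% of valid performers and 93.3% of feigners correctly classified) can be sketched as a specificity/sensitivity calculation. A minimal illustration; the scores and function names below are hypothetical, not the study data:

```python
FGMT_CUTOFF = 18  # total scores below this value are flagged, per the reported rule

def flags_feigning(total_score):
    """Apply the <18 total-score cutoff reported for the FGMT."""
    return total_score < FGMT_CUTOFF

def sensitivity_specificity(feigning_scores, valid_scores):
    """Sensitivity: fraction of known feigners flagged.
    Specificity: fraction of valid performers NOT flagged."""
    sens = sum(flags_feigning(s) for s in feigning_scores) / len(feigning_scores)
    spec = sum(not flags_feigning(s) for s in valid_scores) / len(valid_scores)
    return sens, spec

# Hypothetical score lists for a feigning group and a valid-effort group
print(sensitivity_specificity([12, 15, 17, 19], [20, 22, 18, 16]))
```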
Affiliation(s)
- Zilong Liu
- Department of Forensic Medicine, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Juan Dong
- Department of Forensic Medicine, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xiaohong Zhao
- Department of Forensic Medicine, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xiaorui Chen
- Department of Forensic Medicine, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Sara M Lippa
- Defense and Veterans Brain Injury Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Jerome S Caroselli
- Department of Psychology/Neuropsychology, TIRR Memorial Hermann Hospital, Houston, TX, USA
- Xiang Fang
- Department of Neurology, University of Texas Medical Branch, Galveston, TX, USA
21
Gottfried E, Glassmire D. The Relationship Between Psychiatric and Cognitive Symptom Feigning Among Forensic Inpatients Adjudicated Incompetent to Stand Trial. Assessment 2016; 23:672-682. [DOI: 10.1177/1073191115599640]
Abstract
The accurate assessment of feigning is an important component of forensic assessment. Two potential strategies of feigning include the fabrication/exaggeration of psychiatric impairments and the fabrication/exaggeration of cognitive deficits. The current study examined the relationship between psychiatric and cognitive feigning strategies using the Structured Interview of Reported Symptoms and Test of Memory Malingering among 150 forensic psychiatric inpatients adjudicated incompetent to stand trial. A greater number of participants scored within the feigning range on the Structured Interview of Reported Symptoms than on the Test of Memory Malingering. Relative risk ratios indicated that individuals shown to be feigning cognitive deficits were 1.68 times more likely to feign psychiatric symptoms than those not shown to be feigning cognitive deficits. Likewise, individuals shown to be feigning psychiatric symptoms were 1.86 times more likely to feign cognitive deficits than those not shown to be feigning psychiatric symptoms. Overall, findings suggest that psychiatric feigning and cognitive feigning are related, but can be employed separately as feigning strategies. Therefore, clinicians should consider evaluating for both feigning strategies in forensic assessments where cognitive and psychiatric symptoms are being assessed.
22
Ashendorf L, Sugarman MA. Evaluation of performance validity using a Rey Auditory Verbal Learning Test forced-choice trial. Clin Neuropsychol 2016; 30:599-609. [DOI: 10.1080/13854046.2016.1172668]
Affiliation(s)
- Lee Ashendorf
- Edith Nourse Rogers Memorial Veterans Hospital, Bedford, MA, USA
- Boston University School of Medicine, Boston, MA, USA
- Michael A. Sugarman
- Edith Nourse Rogers Memorial Veterans Hospital, Bedford, MA, USA
- Wayne State University, Detroit, MI, USA
23
Glassmire DM, Toofanian Ross P, Kinney DI, Nitch SR. Derivation and Cross-Validation of Cutoff Scores for Patients With Schizophrenia Spectrum Disorders on WAIS-IV Digit Span–Based Performance Validity Measures. Assessment 2015; 23:292-306. [DOI: 10.1177/1073191115587551]
Abstract
Two studies were conducted to identify and cross-validate cutoff scores on the Wechsler Adult Intelligence Scale–Fourth Edition Digit Span–based embedded performance validity (PV) measures for individuals with schizophrenia spectrum disorders. In Study 1, normative scores were identified on Digit Span–embedded PV measures among a sample of patients (n = 84) with schizophrenia spectrum diagnoses who had no known incentive to perform poorly and who put forth valid effort on external PV tests. Previously identified cutoff scores resulted in unacceptable false positive rates and lower cutoff scores were adopted to maintain specificity levels ≥90%. In Study 2, the revised cutoff scores were cross-validated within a sample of schizophrenia spectrum patients (n = 96) committed as incompetent to stand trial. Performance on Digit Span PV measures was significantly related to Full Scale IQ in both studies, indicating the need to consider the intellectual functioning of examinees with psychotic spectrum disorders when interpreting scores on Digit Span PV measures.
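The cutoff-lowering procedure described above (adopting lower cutoffs until specificity in a no-incentive, valid-effort sample reaches ≥90%) can be sketched as a search over candidate cutoffs. A minimal illustration under stated assumptions; the scores and function name below are hypothetical, not the study data:

```python
def cutoff_for_specificity(valid_scores, target_specificity=0.90):
    """Return the largest cutoff c (flagging scores < c) that keeps
    specificity, i.e. the fraction of valid-effort cases NOT flagged,
    at or above the target."""
    best = None
    for c in sorted(set(valid_scores)):
        specificity = sum(s >= c for s in valid_scores) / len(valid_scores)
        if specificity >= target_specificity:
            best = c  # specificity falls as c rises, so the last pass is largest
    return best

# Hypothetical Digit Span-based PV scores from a valid-effort sample
print(cutoff_for_specificity([4, 5, 5, 6, 6, 6, 7, 7, 8, 9]))
```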