1. Ramos Usuga D, Ayearst LE, Rivera D, Krch D, Perrin PB, Carrión CI, Morel Valdés GM, Loro D, Rodriguez MJ, Munoz G, Drago CI, García P, Rivera PM, Arango-Lasprilla JC. A preliminary examination of the TOMM2 in a sample of Spanish speakers in the United States. NeuroRehabilitation 2024;55:235-242. PMID: 39240592. DOI: 10.3233/nre-240085.
Abstract
BACKGROUND The Test of Memory Malingering (TOMM) is a widely used performance validity measure available in both English and Spanish. The Spanish version, however, has historically lacked normative data from samples representative of the U.S. Hispanic/Spanish-speaking population. OBJECTIVE The aim of the current study was to collect normative data on the updated TOMM 2 for Hispanic individuals residing in the U.S. METHODS Normative data on the TOMM 2 were collected across 9 sites in different regions of the U.S. The total sample consisted of n = 188 cognitively healthy adults aged 18 and over with no current or prior history of neurological or psychiatric disorder. Descriptive analyses were performed on total raw scores. RESULTS Participants obtained a mean score of 48.15 (SD = 2.81) on Trial 1 of the TOMM 2, 49.86 (SD = 0.487) on Trial 2, and 49.84 (SD = 0.509) on the Recognition trial. Scores are reported against the traditional cutoffs as well as other popular cutoffs from the literature. Item-level analyses were conducted, as was an evaluation of performance across a variety of demographic variables. CONCLUSION When compared to the English-speaking normative sample used for the original TOMM, this sample demonstrated better performance on the TOMM 2, indicating better cultural appropriateness of the items. This is the first study to provide culturally appropriate descriptive norms for use with Spanish speakers living in the U.S.
Affiliation(s)
- Daniela Ramos Usuga
- Biomedical Research Doctorate Program, University of the Basque Country (UPV/EHU), Leioa, Spain
- Diego Rivera
- Department of Health Science, Public University of Navarre, Pamplona, Spain
- Instituto de Investigación Sanitaria de Navarra (IdiSNA), Pamplona, Spain
- Denise Krch
- Center for Traumatic Brain Injury Research, Kessler Foundation, East Hanover, NJ, USA
- Department of Physical Medicine & Rehabilitation, Rutgers New Jersey Medical School, Newark, NJ, USA
- Paul B Perrin
- Department of Psychology, University of Virginia, Charlottesville, VA, USA
- School of Data Science, University of Virginia, Charlottesville, VA, USA
- Carmen I Carrión
- Department of Neurology, Yale School of Medicine, New Haven, CT, USA
- Gloria M Morel Valdés
- Department of Neurology, School of Medicine and Public Health, University of Wisconsin, Madison, WI, USA
- Delly Loro
- The Chicago School, Los Angeles, CA, USA
- Miriam J Rodriguez
- Clinical Psychology Program, Carlos Albizu University, Miami Campus, Miami, FL, USA
- Department of Health and Wellness Design, School of Public Health, Indiana University, Bloomington, IN, USA
- Geovani Munoz
- Department of Psychology, Virginia Commonwealth University, Richmond, VA, USA
- Patricia García
- Department of Neurology, School of Medicine, Indiana University, Indianapolis, IN, USA
- Department of Physical Medicine and Rehabilitation, School of Medicine, Indiana University, Indianapolis, IN, USA
- Patricia M Rivera
- Mental Health Department - Neuropsychology, Kaiser Permanente Northwest, Portland, OR, USA
2. Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part II-Methodological Issues. Neuropsychol Rev 2023;33:604-623. PMID: 37594690. DOI: 10.1007/s11065-023-09602-6.
Abstract
Forensic neuropsychological examinations to detect malingering in patients with neurocognitive, physical, and psychological dysfunction have tremendous social, legal, and economic importance. Thousands of studies have been published to develop and validate methods for forensically detecting malingering, based largely on approximately 50 validity tests, including embedded and stand-alone performance and symptom validity tests. This is Part II of a two-part review of statistical and methodological issues in the forensic prediction of malingering from validity tests. The Part I companion paper explored key statistical issues. Part II examines related methodological issues through conceptual analysis, statistical simulations, and reanalysis of findings from prior validity test validation studies. Methodological issues examined include the distinction between analog simulation and forensic studies, the effect of excluding too-close-to-call (TCTC) cases from analyses, the distinction between criterion-related and construct validation studies, and the application of the Revised Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) to all Test of Memory Malingering (TOMM) validation studies published within approximately the first 20 years following its initial publication, to assess risk of bias. Findings include that analog studies are commonly mistaken for forensic validation studies, and that construct validation studies are routinely presented as if they were criterion-referenced validation studies. After accounting for the exclusion of TCTC cases, actual classification accuracy was found to be well below claimed levels. QUADAS-2 results revealed that all extant TOMM validation studies had a high risk of bias; not a single TOMM validation study had a low risk of bias. Recommendations include adoption of well-established guidelines from the biomedical diagnostics literature for good-quality criterion-referenced validation studies and examination of the implications for malingering determination practices. Design of future studies may hinge on the availability of an incontrovertible reference standard for the malingering status of examinees.
Affiliation(s)
- Christoph Leonhard
- The Chicago School of Professional Psychology at Xavier University of Louisiana, 1 Drexel Dr, Box 200, New Orleans, LA, 70125, USA.
3. Leonhard C. Review of Statistical and Methodological Issues in the Forensic Prediction of Malingering from Validity Tests: Part I: Statistical Issues. Neuropsychol Rev 2023;33:581-603. PMID: 37612531. DOI: 10.1007/s11065-023-09601-7.
Abstract
Forensic neuropsychological examinations with determination of malingering have tremendous social, legal, and economic consequences. Thousands of studies have been published aimed at developing and validating methods to diagnose malingering in forensic settings, based largely on approximately 50 validity tests, including embedded and stand-alone performance validity tests. This is the first part of a two-part review. Part I explores three statistical issues related to the validation of validity tests as predictors of malingering: (a) the need to report a complete set of classification accuracy statistics, (b) how to detect and handle collinearity among validity tests, and (c) how to assess the classification accuracy of algorithms for aggregating information from multiple validity tests. The Part II companion paper examines three closely related research methodological issues. Statistical issues are explored through conceptual analysis, statistical simulations, and reanalysis of findings from prior validation studies. Findings suggest extant neuropsychological validity tests are collinear and contribute redundant information to the prediction of malingering among forensic examinees. Findings further suggest that existing diagnostic algorithms may miss diagnostic accuracy targets under most realistic conditions. The review makes several recommendations to address these concerns, including (a) reporting of full confusion table statistics with 95% confidence intervals in diagnostic trials, (b) the use of logistic regression, and (c) adoption of the consensus model on the "Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis" (TRIPOD) in the malingering literature.
Affiliation(s)
- Christoph Leonhard
- The Chicago School of Professional Psychology at Xavier University of Louisiana, Box 200, 1 Drexel Dr, New Orleans, LA, 70125, USA.
4. Uiterwijk D, Stargatt R, Crowe SF. Objective Cognitive Outcomes and Subjective Emotional Sequelae in Litigating Adults with a Traumatic Brain Injury: The Impact of Performance and Symptom Validity Measures. Arch Clin Neuropsychol 2022;37:1662-1687. PMID: 35704852. DOI: 10.1093/arclin/acac039.
Abstract
OBJECTIVE This study examined the relative contributions of performance and symptom validity in litigating adults with traumatic brain injury (TBI) as a function of TBI severity, and examined the relationship between self-reported emotional symptoms and cognitive test scores while controlling for validity test performance. METHOD Participants underwent neuropsychological assessment between January 2012 and June 2021 in the context of compensation-seeking claims related to a TBI. All participants completed a cognitive test battery, the Personality Assessment Inventory (including symptom validity tests; SVTs), and multiple performance validity tests (PVTs). Data analyses included independent t-tests, one-way ANOVAs, correlation analyses, and hierarchical multiple regression. RESULTS A total of 370 participants were included. Atypical PVT and SVT performance was associated with poorer cognitive test performance and higher emotional symptom report, irrespective of TBI severity. PVTs and SVTs had an additive effect on cognitive test performance for uncomplicated mTBI, but less so for more severe TBI. The relationship between emotional symptoms and cognitive test performance diminished substantially when validity test performance was controlled, and validity test performance had a substantially larger impact than emotional symptoms on cognitive test performance. CONCLUSION Validity test performance has a significant impact on the neuropsychological profiles of people with TBI, irrespective of TBI severity, and plays a significant role in the relationship between emotional symptoms and cognitive test performance. Adequate validity testing should be incorporated into every neuropsychological assessment, and associations between emotional symptoms and cognitive outcomes that do not consider validity testing should be interpreted with extreme caution.
Affiliation(s)
- Daniel Uiterwijk
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Robyn Stargatt
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
- Simon F Crowe
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
5. Rhoads T, Neale AC, Resch ZJ, Cohen CD, Keezer RD, Cerny BM, Jennette KJ, Ovsiew GP, Soble JR. Psychometric implications of failure on one performance validity test: a cross-validation study to inform criterion group definition. J Clin Exp Neuropsychol 2021;43:437-448. PMID: 34233580. DOI: 10.1080/13803395.2021.1945540.
Abstract
Introduction: Research to date has supported the use of multiple performance validity tests (PVTs) for determining validity status in clinical settings. However, the implications of including versus excluding patients who fail one PVT remain a source of debate, and methodological guidelines for PVT research are lacking. This study evaluated three validity classification approaches (i.e., 0 vs. ≥2, 0-1 vs. ≥2, and 0 vs. ≥1 PVT failures) using three reference standards (i.e., criterion PVT groupings) to recommend approaches best suited to establishing validity groups in PVT research methodology. Method: A mixed clinical sample of 157 patients was administered freestanding PVTs (Medical Symptom Validity Test, Dot Counting Test, Test of Memory Malingering, Word Choice Test) and embedded PVTs (Reliable Digit Span, RAVLT Effort Score, Stroop Word Reading, BVMT-R Recognition Discrimination) during outpatient neuropsychological evaluation. Three reference standards (i.e., two freestanding and three embedded PVTs from the above list) were created. The Rey 15-Item Test and RAVLT Forced Choice were used solely as outcome measures, in addition to the two freestanding PVTs not employed in the reference standard. Receiver operating characteristic curve analyses evaluated classification accuracy using the three validity classification approaches for each reference standard. Results: When patients failing only one PVT were excluded or classified as valid, classification accuracy ranged from acceptable to excellent. However, classification accuracy was poor to acceptable when patients failing one PVT were classified as invalid. Sensitivity/specificity across two of the validity classification approaches (0 vs. ≥2; 0-1 vs. ≥2) remained reasonably stable. Conclusions: These results indicate that both inclusion and exclusion of patients failing one PVT are acceptable approaches to PVT research methodology, and the choice of method likely depends on the study rationale. However, including such patients in the invalid group yields unacceptably poor classification accuracy across a number of psychometrically robust outcome measures and therefore is not recommended.
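The sensitivity of ROC results to the criterion-grouping choice can be illustrated with a toy sketch. All scores below are invented, and `auc` is simply the Mann-Whitney formulation of the area under the ROC curve, not the software used in the study:

```python
def auc(invalid_scores, valid_scores):
    """Probability that a random invalid case scores below a random valid
    case (ties count half): the Mann-Whitney formulation of ROC AUC."""
    wins = 0.0
    for iv in invalid_scores:
        for va in valid_scores:
            if iv < va:
                wins += 1.0
            elif iv == va:
                wins += 0.5
    return wins / (len(invalid_scores) * len(valid_scores))

# Hypothetical outcome-PVT scores under two criterion groupings.
# Grouping A: invalid group = failed >= 2 criterion PVTs (clearer cases)
invalid_a = [28, 31, 33, 38, 45]
valid_a = [43, 46, 47, 48, 49, 50]
# Grouping B: cases failing exactly one PVT are also labeled invalid,
# pulling borderline scores into the invalid group
invalid_b = invalid_a + [43, 46, 48]
valid_b = valid_a

print(f"AUC, 0 vs >=2 failures: {auc(invalid_a, valid_a):.2f}")
print(f"AUC, 0 vs >=1 failure:  {auc(invalid_b, valid_b):.2f}")
```

In this contrived example, relabeling the one-failure cases as invalid drags borderline scores into the invalid group and lowers the apparent AUC, mirroring the pattern the abstract describes.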
Affiliation(s)
- Tasha Rhoads
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Alec C Neale
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Zachary J Resch
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Cari D Cohen
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Richard D Keezer
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Wheaton College, Wheaton, IL, USA
- Brian M Cerny
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Psychology, Illinois Institute of Technology, Chicago, IL, USA
- Kyle J Jennette
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Gabriel P Ovsiew
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA
6. Martinez KA, Sayers C, Hayes C, Martin PK, Clark CB, Schroeder RW. Normal cognitive test scores cannot be interpreted as accurate measures of ability in the context of failed performance validity testing: A symptom- and detection-coached simulation study. J Clin Exp Neuropsychol 2021;43:301-309. PMID: 33998369. DOI: 10.1080/13803395.2021.1926435.
Abstract
Introduction: While the use of performance validity tests (PVTs) has become a standard of practice in neuropsychology, there are differing opinions regarding whether to interpret cognitive test data when standard scores fall within normal limits despite failed PVTs. This study is the first to empirically determine whether normal cognitive test scores underrepresent functioning when PVTs are failed. Method: Participants, randomly assigned to either a simulated malingering group (n = 50) instructed to mildly suppress test performance or a best-effort/control group (n = 50), completed neuropsychological tests that included the North American Adult Reading Test (NAART), California Verbal Learning Test - 2nd Edition (CVLT-II), and Test of Memory Malingering (TOMM). Results: The groups did not differ significantly in age, sex, education, or NAART-predicted intellectual ability, but simulators performed significantly worse than controls on the TOMM, CVLT-II Forced Choice Recognition, and CVLT-II Short Delay Free Recall. The groups did not differ significantly on the other examined CVLT-II measures. Of simulators who failed validity testing, 36% scored no worse than average, and 73% scored no worse than low average, on every examined CVLT-II index. Conclusions: Of simulated malingerers who failed validity testing, nearly three-fourths were able to produce cognitive test scores within normal limits, which indicates that normal cognitive performances cannot be interpreted as accurately reflecting an individual's capabilities when obtained in the presence of validity test failure. At the same time, only 2 of 50 simulators passed validity testing while scoring within an impaired range on cognitive testing. This latter finding indicates that successfully feigning cognitive deficits is difficult when PVTs are utilized within the examination.
Affiliation(s)
- Karen A Martinez
- Department of Psychology, Wichita State University, Wichita, KS, USA
- Courtney Sayers
- Department of Psychology, Wichita State University, Wichita, KS, USA
- Charles Hayes
- Department of Psychology, Wichita State University, Wichita, KS, USA
- Phillip K Martin
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
- C Brendan Clark
- Department of Psychology, Wichita State University, Wichita, KS, USA
- Ryan W Schroeder
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
7. Clark HA, Martin PK, Okut H, Schroeder RW. A Systematic Review and Meta-Analysis of the Utility of the Test of Memory Malingering in Pediatric Examinees. Arch Clin Neuropsychol 2020;35:1312-1322. DOI: 10.1093/arclin/acaa075.
Abstract
Objective
This is the first systematic review and meta-analysis of the Test of Memory Malingering (TOMM) in pediatric examinees. It adheres to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines.
Method
A systematic literature search was conducted using PsycINFO and PubMed, reviewing articles from January 1997 to July 2019. Books providing data on pediatric validity testing were also reviewed for references to relevant articles. Eligibility criteria included publication in a peer-reviewed journal, utilizing a pediatric sample, providing sufficient data to calculate specificity and/or sensitivity, and providing a means for evaluating validity status external to the TOMM. After selection criteria were applied, 9 articles remained for meta-analysis. Samples included clinical patients and healthy children recruited for research purposes; ages ranged from 5 to 18. Fixed and random effects models were used to calculate classification accuracy statistics.
Results
Traditional adult-derived cutoffs for Trial 2 and Retention were highly specific (0.96–0.99) in pediatric examinees for both clinical and research samples. Sensitivity was relatively strong (0.68–0.70), although only two studies reported sensitivity rates. A supplemental review of the literature corroborated these findings, revealing that traditional adult-based TOMM cutoffs are supported in most pediatric settings. However, limited research exists on the impact of very young age, extremely low cognitive functioning, and varying clinical diagnoses.
Conclusions
The TOMM, at traditional adult cutoffs, has strong specificity as a performance validity test in pediatric neuropsychological evaluations. This meta-analysis found that specificity values in children are comparable to those of adults. Areas for further research are discussed.
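The pooling step behind specificity estimates like those above can be sketched with a fixed-effect inverse-variance model on the logit scale. The per-study counts below are hypothetical, and this simple model is one common choice rather than necessarily the exact method used in the meta-analysis:

```python
import math

def pool_specificity(studies):
    """Fixed-effect inverse-variance pooling of specificity on the logit
    scale, with a 0.5 continuity correction for boundary counts."""
    num = den = 0.0
    for tn, fp in studies:
        tn_c, fp_c = tn + 0.5, fp + 0.5        # continuity correction
        logit = math.log(tn_c / fp_c)          # log-odds of specificity
        var = 1.0 / tn_c + 1.0 / fp_c          # variance of the logit
        weight = 1.0 / var                     # inverse-variance weight
        num += weight * logit
        den += weight
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))  # back to a proportion

# Hypothetical (true negative, false positive) counts per study;
# illustration only, not the counts from the cited meta-analysis
studies = [(96, 4), (58, 2), (120, 3), (45, 1)]
print(f"pooled specificity: {pool_specificity(studies):.3f}")
```

A random-effects model would add a between-study variance term to the weights; the fixed-effect version above is the simpler of the two approaches the abstract mentions.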
Affiliation(s)
- Hilary A Clark
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS 67226, USA
- Phillip K Martin
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS 67226, USA
- Hayrettin Okut
- Office of Research, University of Kansas School of Medicine – Wichita, Wichita, KS 67214, USA
- Ryan W Schroeder
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS 67226, USA
8. Sherman EMS, Slick DJ, Iverson GL. Multidimensional Malingering Criteria for Neuropsychological Assessment: A 20-Year Update of the Malingered Neuropsychological Dysfunction Criteria. Arch Clin Neuropsychol 2020;35:735-764. PMID: 32377667. PMCID: PMC7452950. DOI: 10.1093/arclin/acaa019.
Abstract
OBJECTIVES Empirically informed neuropsychological opinion is critical for determining whether cognitive deficits and symptoms are legitimate, particularly in settings where there are significant external incentives for successful malingering. The Slick, Sherman, and Iverson (1999) criteria for malingered neurocognitive dysfunction (MND) are considered a major milestone in the field's operationalization of neurocognitive malingering and have strongly influenced the development of malingering detection methods, including serving as the criterion for malingering in the validation of several performance validity tests (PVTs) and symptom validity tests (SVTs) (Slick, D. J., Sherman, E. M. S., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13(4), 545-561). However, the MND criteria are long overdue for revision to address advances in malingering research and limitations identified by experts in the field. METHOD The MND criteria were critically reviewed, updated with reference to research on malingering, and expanded to address other forms of malingering pertinent to neuropsychological evaluation, such as exaggeration of self-reported somatic and psychiatric symptoms. RESULTS The new proposed criteria simplify diagnostic categories, expand and clarify external incentives, more clearly define the role of compelling inconsistencies, address issues concerning PVTs and SVTs (i.e., number administered, false positives, and redundancy), better define the role of SVTs and of marked discrepancies indicative of malingering, and, most importantly, clearly define exclusionary criteria based on the last two decades of research on malingering in neuropsychology. Lastly, the new criteria provide specifiers to better describe clinical presentations for use in neuropsychological assessment. CONCLUSIONS The proposed multidimensional malingering criteria, which define cognitive, somatic, and psychiatric malingering for use in neuropsychological assessment, are presented.
Affiliation(s)
- Grant L Iverson
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, USA
- Spaulding Rehabilitation Hospital and Spaulding Research Institute, Charlestown, MA, USA
- Home Base, A Red Sox Foundation and Massachusetts General Hospital Program, Charlestown, MA, USA
9. Olsen DH, Schroeder RW, Martin PK. Cross-validation of the Invalid Forgetting Frequency Index (IFFI) from the Test of Memory Malingering. Arch Clin Neuropsychol 2019;36:437-441. DOI: 10.1093/arclin/acz064.
Abstract
Objective
To increase sensitivity of the Test of Memory Malingering (TOMM), adjustments have been proposed, including adding consistency indices. The Invalid Forgetting Frequency Index (IFFI) is the most recently developed consistency index. While strong classification accuracy rates were originally reported, it currently lacks cross-validation.
Method
A sample of 184 outpatients was utilized. Valid performers passed all criterion performance validity tests (PVTs) and invalid performers failed two or more PVTs. Classification accuracy statistics were calculated.
Results
AUC for the IFFI was 0.80, demonstrating adequate discrimination between valid and invalid groups. A score of 3 or more inconsistent responses resulted in sensitivity and specificity rates of 63% and 92%, respectively.
Conclusions
This is the first article to cross-validate the IFFI. In both the original IFFI study and the current study, the same cut-off was found to maintain at least 90% specificity while producing higher sensitivity rates than those achieved by traditional TOMM indices.
Affiliation(s)
- Daniel H Olsen
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, Kansas, United States
- Ryan W Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, Kansas, United States
- Phillip K Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, Kansas, United States
10. Martin PK, Schroeder RW, Olsen DH, Maloy H, Boettcher A, Ernst N, Okut H. A systematic review and meta-analysis of the Test of Memory Malingering in adults: Two decades of deception detection. Clin Neuropsychol 2019;34:88-119. DOI: 10.1080/13854046.2019.1637027.
Affiliation(s)
- Phillip K. Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Ryan W. Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Daniel H. Olsen
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Halley Maloy
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Nathan Ernst
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Hayrettin Okut
- University of Kansas School of Medicine – Wichita, Wichita, KS, USA
11. Schroeder RW, Olsen DH, Martin PK. Classification accuracy rates of four TOMM validity indices when examined independently and jointly. Clin Neuropsychol 2019;33:1373-1387. DOI: 10.1080/13854046.2019.1619839.
Affiliation(s)
- Ryan W. Schroeder
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Daniel H. Olsen
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
- Phillip K. Martin
- Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine – Wichita, Wichita, KS, USA
12. Schroeder RW, Martin PK, Heinrichs RJ, Baade LE. Research methods in performance validity testing studies: Criterion grouping approach impacts study outcomes. Clin Neuropsychol 2018;33:466-477. PMID: 29884112. DOI: 10.1080/13854046.2018.1484517.
Abstract
OBJECTIVE Performance validity test (PVT) research studies commonly utilize a known-groups design, but the criterion grouping approaches within that design vary greatly from one study to another. It is currently unclear to what degree different criterion grouping approaches impact PVT classification accuracy statistics. METHOD To analyze this, the authors used three different criterion grouping approaches to examine how the classification accuracy statistics of a PVT (Word Choice Test; WCT) would differ. The three criterion grouping approaches were: (1) failure of 2+ PVTs versus failure of 0 PVTs, (2) failure of 2+ PVTs versus failure of 0-1 PVTs, and (3) failure versus passing of a stand-alone PVT (Test of Memory Malingering). RESULTS When setting specificity at ≥.90, WCT cutoff scores ranged from 41 to 44, and associated sensitivity values ranged from .64 to .88, depending on the criterion grouping approach utilized. CONCLUSIONS When a stand-alone PVT was used to define criterion group status, classification accuracy rates of the WCT were higher than expected, likely due to strong correlations between the reference PVT and the WCT. This held true even when considering evidence that this grouping approach results in higher rates of criterion group misclassification. Conversely, when criterion grouping approaches based on failure of 2+ PVTs were used, accuracy rates were more consistent with expectations. These findings demonstrate that criterion grouping approaches can impact PVT classification accuracy rates and resultant cutoff scores. Strengths, weaknesses, and practical implications of each criterion grouping approach are discussed.
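The cutoff-selection logic the study describes, scanning candidate cutoffs and keeping only those that hold specificity at or above .90, can be sketched as follows. The scores and the "score at or below the cutoff fails" rule are invented for illustration, not the study's data:

```python
def cutoff_table(valid_scores, invalid_scores, candidate_cutoffs):
    """For each 'score <= cutoff fails' rule, compute sensitivity and
    specificity, keeping only cutoffs with specificity >= .90."""
    rows = []
    for c in candidate_cutoffs:
        # sensitivity: proportion of invalid cases caught by the cutoff
        sens = sum(s <= c for s in invalid_scores) / len(invalid_scores)
        # specificity: proportion of valid cases passing the cutoff
        spec = sum(s > c for s in valid_scores) / len(valid_scores)
        if spec >= 0.90:
            rows.append((c, sens, spec))
    return rows

# Hypothetical WCT-style scores for a valid and an invalid criterion group
valid = [50, 49, 48, 47, 46, 45, 45, 44, 43, 42]
invalid = [44, 43, 42, 41, 40, 38, 36, 33, 30, 28]

for c, sens, spec in cutoff_table(valid, invalid, range(40, 46)):
    print(f"cutoff <= {c}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Because the admissible cutoff (and therefore the reported sensitivity) depends entirely on the score distributions of the two criterion groups, changing how those groups are defined changes the resulting cutoff table, which is the study's central point.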
Affiliation(s)
- Ryan W Schroeder
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
- Phillip K Martin
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
- Robin J Heinrichs
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
- Lyle E Baade
- Department of Psychiatry and Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
Collapse
|
13
|
Dorociak KE, Schulze ET, Piper LE, Molokie RE, Janecek JK. Performance validity testing in a clinical sample of adults with sickle cell disease. Clin Neuropsychol 2017. [PMID: 28632024 DOI: 10.1080/13854046.2017.1339830] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
OBJECTIVE Neuropsychologists utilize performance validity tests (PVTs) as objective means for drawing inferences about performance validity. The Test of Memory Malingering (TOMM) is a well-validated, stand-alone PVT, and the Reliable Digit Span (RDS) and Reliable Digit Span-Revised (RDS-R) from the Digit Span subtest of the WAIS-IV are commonly employed embedded PVTs. While research has demonstrated the utility of these PVTs with various clinical samples, no research has investigated their use in adults with sickle cell disease (SCD), a condition associated with multiple neurological, physical, and psychiatric symptoms. Thus, the purpose of this study was to explore PVT performance in adults with SCD. METHOD Fifty-four adults with SCD (mean age = 40.61, SD = 12.35) were consecutively referred by their hematologist for a routine clinical outpatient neuropsychological evaluation. During the evaluation, participants were administered the TOMM (Trials 1 and 2), neuropsychological measures including the WAIS-IV Digit Span subtest, and mood and behavioral questionnaires. RESULTS The average score on the TOMM was 47.70 (SD = 3.47, range = 34-50) for Trial 1 and 49.69 (SD = 1.66, range = 38-50) for Trial 2. Only one participant failed Trial 2 of the TOMM, yielding a 98.1% pass rate for the sample. Pass rates at various RDS and RDS-R values were calculated with TOMM Trial 2 performance as an external criterion. CONCLUSIONS Results support the use of the TOMM as a measure of performance validity for individuals with SCD, while RDS and RDS-R should be interpreted with caution in this population.
Affiliation(s)
- Katherine E Dorociak, Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
- Evan T Schulze, Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
- Lauren E Piper, Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
- Robert E Molokie, Department of Medicine, University of Illinois at Chicago, Chicago, IL, USA
- Julie K Janecek, Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA

14
Young G. PTSD in Court III: Malingering, assessment, and the law. International Journal of Law and Psychiatry 2017; 52:81-102. [PMID: 28366496 DOI: 10.1016/j.ijlp.2017.03.001] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/24/2017] [Accepted: 03/02/2017] [Indexed: 06/07/2023]
Abstract
This journal's third article on PTSD in Court focuses especially on the topic's "court" component. It first considers the topic of malingering, including in terms of its definition, certainties, and uncertainties. As with other areas of the study of psychological injury and law, generally, and PTSD (posttraumatic stress disorder), specifically, malingering is a contentious area not only definitionally but also empirically, in terms of establishing its base rate in the index populations assessed in the field. Both current research and re-analysis of past research indicate that the malingering prevalence rate at issue is more like 15±15% as opposed to 40±10%. As for psychological tests used to assess PTSD, some of the better ones include the TSI-2 (Trauma Symptom Inventory, Second Edition; Briere, 2011), the MMPI-2-RF (Minnesota Multiphasic Personality Inventory, Second Edition, Restructured Form; Ben-Porath & Tellegen, 2008/2011), and the CAPS-5 (The Clinician-Administered PTSD Scale for DSM-5; Weathers, Blake, Schnurr, Kaloupek, Marx, & Keane, 2013b). Assessors need to know their own possible biases, the applicable laws (e.g., the Daubert trilogy), and how to write court-admissible reports. Overall conclusions reflect a moderate approach that navigates the territory between the extreme plaintiff or defense allegiances one frequently encounters in this area of forensic practice.
15
Fazio RL, Denning JH, Denney RL. TOMM Trial 1 as a performance validity indicator in a criminal forensic sample. Clin Neuropsychol 2016; 31:251-267. [DOI: 10.1080/13854046.2016.1213316] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Affiliation(s)
- John H. Denning, Ralph H. Johnson VA Medical Center, Charleston, SC, USA; Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Robert L. Denney, Neuropsychological Associates of Southwest Missouri, Springfield, MO, USA

16
Martin PK, Schroeder RW, Wyman-Chick KA, Hunter BP, Heinrichs RJ, Baade LE. Rates of Abnormally Low TOPF Word Reading Scores in Individuals Failing Versus Passing Performance Validity Testing. Assessment 2016; 25:640-652. [DOI: 10.1177/1073191116656796] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The present study examined the impact of performance validity test (PVT) failure on the Test of Premorbid Functioning (TOPF) in a sample of 252 neuropsychological patients. Word reading performance differed significantly according to PVT failure status, and number of PVTs failed accounted for 7.4% of the variance in word reading performance, even after controlling for education. Furthermore, individuals failing ≥2 PVTs were twice as likely as individuals passing all PVTs (33% vs. 16%) to have abnormally low obtained word reading scores relative to demographically predicted scores when using a normative base rate of 10% to define abnormality. When compared with standardization study clinical groups, those failing ≥2 PVTs were twice as likely as patients with moderate to severe traumatic brain injury and as likely as patients with Alzheimer’s dementia to obtain abnormally low TOPF word reading scores. Findings indicate that TOPF word reading-based estimates of premorbid functioning should not be interpreted in individuals invalidating cognitive testing.
Affiliation(s)
- Kathryn A. Wyman-Chick, University of Kansas School of Medicine, Wichita, KS, USA; University of Virginia School of Medicine, Charlottesville, VA, USA
- Ben P. Hunter, University of Kansas School of Medicine, Wichita, KS, USA
- Lyle E. Baade, University of Kansas School of Medicine, Wichita, KS, USA

17
Young G. Towards Balanced VA and SSA Policies in Psychological Injury Disability Assessment. Psychological Injury & Law 2015. [DOI: 10.1007/s12207-015-9230-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
18
Martin PK, Schroeder RW, Odland AP. Neuropsychologists’ Validity Testing Beliefs and Practices: A Survey of North American Professionals. Clin Neuropsychol 2015; 29:741-76. [DOI: 10.1080/13854046.2015.1087597] [Citation(s) in RCA: 189] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
19

20
Odland AP, Lammy AB, Martin PK, Grote CL, Mittenberg W. Advanced Administration and Interpretation of Multiple Validity Tests. Psychological Injury & Law 2015. [DOI: 10.1007/s12207-015-9216-4] [Citation(s) in RCA: 52] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]