1. Puente-López E, Pina D, Rambaud-Quiñones P, Ruiz-Hernández JA, Nieto-Cañaveras MD, Shura RD, Alcazar-Crevillén A, Martinez-Jarreta B. Classification accuracy and resistance to coaching of the Spanish version of the Inventory of Problems-29 and the Inventory of Problems-Memory: A simulation study with mTBI patients. Clin Neuropsychol 2024;38:738-762. [PMID: 37615421] [DOI: 10.1080/13854046.2023.2249171]
Abstract
Objective: The present study aims to evaluate the classification accuracy and resistance to coaching of the Inventory of Problems-29 (IOP-29) and the IOP-Memory (IOP-M) in a Spanish sample of patients diagnosed with mild traumatic brain injury (mTBI) and healthy participants instructed to feign. Method: Using a simulation design, 37 outpatients with mTBI (clinical control group) and 213 non-clinical instructed feigners under several coaching conditions completed the Spanish versions of the IOP-29, IOP-M, Structured Inventory of Malingered Symptomatology (SIMS), and Rivermead Post Concussion Symptoms Questionnaire. Results: The IOP-29 discriminated well between clinical patients and instructed feigners, with excellent classification accuracy at the recommended cutoff score (FDS ≥ .50; sensitivity = 87.10% for the coached group and 89.09% for the uncoached; specificity = 95.12%). The IOP-M also showed excellent classification accuracy (cutoff ≤ 29; sensitivity = 87.27% for the coached group and 93.55% for the uncoached; specificity = 97.56%). Both instruments proved resistant to symptom-information coaching and performance warnings. Conclusions: The results confirm that both IOP measures offer a valid perspective on the credibility of mTBI symptom reports that is distinct from that of the SIMS. These encouraging findings indicate that both tests are a valuable addition to the symptom validity practices of forensic professionals. Additional research in multiple contexts and with diverse conditions is warranted.
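The cutoff-based statistics reported in this abstract (sensitivity among instructed feigners, specificity among genuine patients) follow directly from the definition of a validity-test cutoff. A minimal Python sketch of the computation, using hypothetical score arrays rather than the study's actual data:

```python
def classification_accuracy(feigner_scores, patient_scores, cutoff):
    """Sensitivity/specificity for a symptom validity cutoff.

    A score at or above `cutoff` is flagged as non-credible (e.g.,
    IOP-29 FDS >= .50). Sensitivity is the flagged proportion of
    instructed feigners; specificity is the unflagged proportion
    of genuine patients.
    """
    sensitivity = sum(s >= cutoff for s in feigner_scores) / len(feigner_scores)
    specificity = sum(s < cutoff for s in patient_scores) / len(patient_scores)
    return sensitivity, specificity

# Hypothetical illustrative FDS-like scores, not data from the study:
feigners = [0.92, 0.81, 0.55, 0.47, 0.88, 0.73, 0.61, 0.95]
patients = [0.12, 0.33, 0.08, 0.41, 0.52, 0.19, 0.27, 0.05]
sens, spec = classification_accuracy(feigners, patients, cutoff=0.50)
# With these made-up arrays, one feigner scores below and one patient
# scores above the cutoff, so both statistics come out to 7/8 = .875.
```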
Affiliation(s)
- David Pina
- Applied Psychology Service, Universidad de Murcia, Murcia, Spain
- Robert D Shura
- Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Salisbury VA Medical Center, Salisbury, NC, USA
- Begoña Martinez-Jarreta
- Mutua MAZ, Zaragoza, Spain
- Department of Pathological Anatomy, Forensic and Legal Medicine and Toxicology, Universidad de Zaragoza, Zaragoza, Spain
2. Giromini L, Pignolo C, Zennaro A, Sellbom M. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT in the In-Person and Remote Administration Formats: A Simulation Study on Feigned mTBI. Assessment 2024:10731911241235465. [PMID: 38468147] [DOI: 10.1177/10731911241235465]
Abstract
Our study compared the impact of administering Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs) in in-person versus remote formats and assessed different approaches to combining validity test results. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT, we assessed 164 adults, half instructed to feign mild traumatic brain injury (mTBI) and half to respond honestly. Within each subgroup, half completed the tests in person, and the other half completed them online via videoconferencing. Results from 2 × 2 analyses of variance showed no significant effects of administration format on SVT and PVT scores. When comparing feigners to controls, the MMPI-2-RF RBS exhibited the largest effect size (d = 3.05) among all examined measures. Accordingly, we conducted a series of two-step hierarchical logistic regression models, entering the MMPI-2-RF RBS first, followed by each of the other SVTs and PVTs individually. We found that the IOP-29 and IOP-M were the only measures that yielded incremental validity beyond the effects of the MMPI-2-RF RBS in predicting group membership. Taken together, these findings suggest that administering these SVTs and PVTs in person or remotely yields similar results, and that the combination of MMPI and IOP indexes may be particularly effective in identifying feigned mTBI.
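The two-step hierarchical logistic regression described here tests whether a second validity index improves prediction of group membership beyond the RBS alone, via the gain in log-likelihood from the nested model. A minimal pure-Python sketch on synthetic data (the variable names and effect sizes are illustrative assumptions, not the study's):

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression by gradient ascent on the log-likelihood;
    returns weights (intercept first) and the final log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(y) for wj, g in zip(w, grad)]
    ll = 0.0
    for xi, yi in zip(X, y):
        z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
        p = min(max(1.0 / (1.0 + math.exp(-z)), 1e-12), 1 - 1e-12)
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return w, ll

# Synthetic stand-ins for the study's variables: y = group (1 = feigner),
# x1 = an "RBS"-like score, x2 = an "IOP-29"-like score carrying some
# independent signal about group membership.
random.seed(0)
y = [i % 2 for i in range(80)]
x1 = [2.0 * yi + random.gauss(0, 1) for yi in y]
x2 = [1.5 * yi + random.gauss(0, 1) for yi in y]

_, ll_step1 = fit_logistic([[a] for a in x1], y)              # step 1: RBS alone
_, ll_step2 = fit_logistic([list(t) for t in zip(x1, x2)], y)  # step 2: RBS + IOP
lr_chi2 = 2 * (ll_step2 - ll_step1)  # likelihood-ratio statistic, df = 1
```

A positive, significant `lr_chi2` is the "incremental validity" claim: the second predictor adds information beyond the first.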
3. Ingram PB, Armistead-Jehle P, Childers LG, Herring TT. Cross validation of the response bias scale and the response bias scale-19 in active-duty personnel: use on the MMPI-2-RF and MMPI-3. J Clin Exp Neuropsychol 2024;46:141-151. [PMID: 38493366] [DOI: 10.1080/13803395.2024.2330727]
Abstract
The Response Bias Scale (RBS) is the central measure of cognitive over-reporting in the MMPI family of instruments. Relative to other clinical populations, research evaluating the detection of over-reporting is more limited in Veteran and Active-Duty personnel, which has produced some psychometric variability across studies. Some have suggested that the original scale-construction methods resulted in items that negatively impact classification accuracy, and in response crafted an abbreviated version of the RBS (RBS-19; Ratcliffe et al., 2022; Spencer et al., 2022). In addition, the most recent edition of the MMPI is based on new normative data, which limits the ability to use existing literature to determine effective cut scores for the RBS (despite all items having been retained across MMPI versions). To date, no published research exists for the MMPI-3 RBS. The current study examined the utility of the RBS and the RBS-19 in a sample of Active-Duty personnel (n = 186) referred for neuropsychological evaluation. Using performance validity tests as the study criterion, we found that the RBS-19 was generally equivalent to the RBS in classification. Correlations with other MMPI-2-RF over- and under-reporting symptom validity tests were slightly stronger for the RBS-19. Implications and directions for research and practice with the RBS/RBS-19 are discussed, along with implications for neuropsychological assessment and response validity theory.
Affiliation(s)
- Paul B Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Dwight D. Eisenhower Veteran Affairs Medical Center, Eastern Kansas Veteran Healthcare System, Leavenworth, KS, USA
- Lucas G Childers
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Tristan T Herring
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
4. Ingram PB, Keen MA, Greene TE, Morris C, Armistead-Jehle PJ. Development and initial validation of the Scale of Scales (SOS) overreporting scores for the MMPI family of instruments. J Clin Exp Neuropsychol 2024;46:95-110. [PMID: 38726688] [DOI: 10.1080/13803395.2024.2320453]
Abstract
Overreporting is a common problem that complicates psychological evaluations. A challenge facing the effective detection of overreporting is that many of the identified strategies (e.g., symptom severity approaches; see Rogers & Bender, 2020) are not incorporated into broadband measures of personality and psychopathology (e.g., the Minnesota Multiphasic Personality Inventory family of instruments). While recent efforts have worked to incorporate some of these newer strategies, no such work has been conducted on the MMPI-3. For instance, recent symptom severity approaches have been used to identify patterns of multivariate base rate "skyline" elevations on the BASC, and similar strategies have been adopted into the PAI to measure psychopathology (Multi-Feigning Index; Gaines et al., 2013) and cognitive symptoms (Cognitive Bias Scale of Scales; Boress et al., 2022b). This study used data from a simulation study (n = 318) and an Active-Duty (AD) clinical sample (n = 290) to develop and cross-validate such a scale on the MMPI-2-RF and MMPI-3. Results suggest that the MMPI SOS (Scale of Scales) scores perform comparably to existing measures of overreporting on the MMPI-2-RF and MMPI-3 and incrementally predict a PVT-classified "known group" of Active-Duty service members. Effects were generally large in magnitude. Classification accuracy achieved the desired specificity (.90) and approximated the expected sensitivity (.30). Implications of these findings are discussed, with emphasis on how alternative overreporting detection strategies may be useful to consider for the MMPI. These alternative strategies have room for expansion and refinement.
Affiliation(s)
- Paul B Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas
- Eastern Kansas Veterans Affairs Healthcare System, Leavenworth, Kansas
- Megan A Keen
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas
- Tina E Greene
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas
- Cole Morris
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas
5. Shura RD, Sapp A, Ingram PB, Brearly TW. Evaluation of telehealth administration of MMPI symptom validity scales. J Clin Exp Neuropsychol 2024;46:86-94. [PMID: 38375629] [DOI: 10.1080/13803395.2024.2314734]
Abstract
INTRODUCTION Telehealth assessment (TA) is a quickly emerging practice, offered with increasing frequency across many different clinical contexts. TA is also well received by most patients, and numerous guidelines and training opportunities support effective telehealth practice. Although recommended practices are extensive, these guidelines have rarely been evaluated empirically, particularly for personality measures. While existing research is limited, it generally supports the idea that TA and in-person assessment (IA) produce broadly comparable test scores. The MMPI-3, a recently released and widely used personality and psychopathology measure, has been the subject of several such experimental or student-based (non-client) studies; however, no study to date has evaluated these trends within a clinical sample. This study empirically tests for differences between TA and IA scores on the MMPI-3 validity scales when recommended administration procedures are followed. METHOD Data were drawn from a retrospective chart review. Veterans (n = 550) who underwent psychological assessment in a Veterans Affairs Medical Center ADHD evaluation clinic were contrasted across in-person and telehealth assessment modalities on the MMPI-2-RF and MMPI-3. Groups were compared using t tests, chi-square tests, and base rates. RESULTS There were minimal differences in elevation rates or mean scores across modality, supporting the use of TA. CONCLUSIONS This study's findings support the use of the MMPI via TA in ADHD evaluations, with Veterans, and in neuropsychological and psychological evaluation settings more generally. Observed elevation rates and mean scores differed notably from those seen in other VA service clinics sampled nationally, which is an area for future investigation.
Affiliation(s)
- Robert D Shura
- Research & Academic Affairs Service Line, Salisbury VA Healthcare System, Salisbury, NC, USA
- Neurocognition Research Lab, VA Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Durham, NC, USA
- Department of Neurology, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Alison Sapp
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Paul B Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Department of Veterans Affairs Eastern Kansas Healthcare, Leavenworth VAMC, Leavenworth, KS, USA
- Timothy W Brearly
- Department of Neurology, Penn State Milton S. Hershey Medical Center, Hershey, PA, USA
- Penn State College of Medicine, Department of Neurology, Hershey, PA, USA
6. Whitman MR, Gervais RO, Ben-Porath YS. Virtuous victims: Disability claimants who over- and under-report. Clin Neuropsychol 2023;37:1584-1607. [PMID: 36883429] [DOI: 10.1080/13854046.2023.2185686]
Abstract
Objective: The present study was the first to investigate the test performance and symptom reports of individuals who engage in both over-reporting (i.e., exaggerating or fabricating symptoms) and under-reporting (i.e., exaggerating positive qualities or denying shortcomings) in the context of a forensic evaluation. We focused on comparing individuals who over- and under-reported (OR + UR) with those who only over-reported (OR-only) on the MMPI-3. Method: Using a disability claimant sample referred for comprehensive psychological evaluations (n = 848), the present study first determined the rates of possible over-reporting (MMPI-3 F ≥ 75 T, Fp ≥ 70 T, Fs ≥ 100 T, or FBS or RBS ≥ 90 T) with (n = 42) and without (n = 332) under-reporting (L ≥ 65 T). Next, we examined group mean differences on MMPI-3 substantive scale scores and scores on several additional measures completed by the disability claimant sample during their evaluation. Results: The small group of individuals identified as both over-reporting and under-reporting (OR + UR) scored meaningfully higher than the OR-only group on several over- and under-reporting symptom validity tests, as well as on measures of emotional and cognitive/somatic complaints, but lower on externalizing measures. The OR + UR group also performed significantly worse than the OR-only group on several performance validity tests and measures of cognitive ability. Conclusions: The present study indicated that disability claimants who engage in simultaneous over- and under-reporting portray themselves as having greater levels of dysfunction but fewer externalizing tendencies relative to claimants who only over-report; however, these portrayals are likely less accurate reflections of their true functioning.
Affiliation(s)
- Megan R Whitman
- Department of Psychological Sciences, Kent State University, Kent, OH, USA
| | - Roger O Gervais
- Neurobehavioural Associates, Edmonton, AB, Canada
- Department of Educational Psychology, University of Alberta, Edmonton, AB, Canada
7. Shura RD, Ingram PB, Miskey HM, Martindale SL, Rowland JA, Armistead-Jehle P. Validation of the Personality Assessment Inventory (PAI) Cognitive Bias Scale (CBS) and Cognitive Bias Scale of Scales (CB-SOS) in a post-deployment veteran sample. Clin Neuropsychol 2023;37:1548-1565. [PMID: 36271822] [DOI: 10.1080/13854046.2022.2131630]
Abstract
Objective: The present study evaluated the function of four cognitive symptom validity scales on the Personality Assessment Inventory (PAI): the Cognitive Bias Scale (CBS) and the Cognitive Bias Scale of Scales (CB-SOS) 1, 2, and 3, in a sample of Veterans who volunteered for a study of neurocognitive functioning. Method: 371 Veterans (88.1% male, 66.1% White) completed a battery including the Miller Forensic Assessment of Symptoms Test (M-FAST), the Word Memory Test (WMT), and the PAI. Independent samples t-tests compared mean differences on cognitive bias scales between valid and invalid groups on the M-FAST and WMT. Area under the curve (AUC), sensitivity, specificity, and hit rate across various scale point-estimates were used to evaluate classification accuracy of the CBS and CB-SOS scales. Results: Group differences were significant, with moderate effect sizes for all cognitive bias scales between the WMT-classified groups (d = .52-.55) and large effect sizes between the M-FAST-classified groups (d = 1.27-1.45). AUC effect sizes were moderate across the WMT-classified groups (.650-.676) and large across M-FAST-classified groups (.816-.854). When specificity was set to .90, sensitivity was higher for the M-FAST-classified groups, and the CBS performed best (sensitivity = .42). Conclusion: The CBS and CB-SOS scales appear to detect symptom invalidity better than performance invalidity in Veterans, using cutoff scores similar to those found in prior studies with non-Veterans.
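The AUC and sensitivity-at-fixed-specificity statistics reported above can be computed directly from raw scale scores. A small Python sketch using hypothetical CBS-like scores (not the study's data): the rank-based AUC equals the probability that a randomly chosen invalid-group score exceeds a randomly chosen valid-group score, and the cutoff search mirrors the "specificity set to .90" procedure.

```python
def auc(invalid_scores, valid_scores):
    """Rank-based AUC: probability that a randomly chosen invalid-group
    score exceeds a randomly chosen valid-group score (ties count 0.5)."""
    wins = 0.0
    for a in invalid_scores:
        for b in valid_scores:
            wins += 1.0 if a > b else 0.5 if a == b else 0.0
    return wins / (len(invalid_scores) * len(valid_scores))

def sensitivity_at_specificity(invalid_scores, valid_scores, target=0.90):
    """Scan candidate cutoffs (score >= cutoff flags invalidity) and
    return the best sensitivity whose specificity is at least `target`."""
    best = 0.0
    for c in sorted(set(invalid_scores + valid_scores)):
        spec = sum(s < c for s in valid_scores) / len(valid_scores)
        sens = sum(s >= c for s in invalid_scores) / len(invalid_scores)
        if spec >= target:
            best = max(best, sens)
    return best

# Hypothetical illustrative scores, not data from the study:
invalid = [18, 22, 25, 14, 30, 27, 12, 24]
valid = [10, 12, 9, 15, 11, 14, 8, 13, 16, 7]
# auc(invalid, valid) -> 0.9125; at specificity >= .90 the best
# attainable sensitivity for these made-up arrays is 0.75.
```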
Affiliation(s)
- Robert D Shura
- W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
- VA Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
| | - Paul B Ingram
- Texas Tech University, Lubbock, TX, USA
- Dwight D. Eisenhower Veteran Affairs Medical Center, Eastern Kansas Veteran Healthcare System, Leavenworth, KS, USA
| | - Holly M Miskey
- W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
- VA Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
| | - Sarah L Martindale
- W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
- VA Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
| | - Jared A Rowland
- W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
- VA Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
8. Comparative Data for the Morel Emotional Numbing Test: High False-Positive Rate in Older Bona-Fide Neurological Patients. Psychol Inj Law 2023. [DOI: 10.1007/s12207-023-09470-8]
9. Obolsky MA, Resch ZJ, Fellin TJ, Cerny BM, Khan H, Bing-Canar H, McCollum K, Lee RC, Fink JW, Pliskin NH, Soble JR. Concordance of Performance and Symptom Validity Tests Within an Electrical Injury Sample. Psychol Inj Law 2022. [DOI: 10.1007/s12207-022-09469-7]
10. Weitzner DS, Miller BI, Webber TA. Embedded cognitive and emotional/affective self-reported symptom validity indices on the patient competency rating scale. J Clin Exp Neuropsychol 2022;44:533-549. [PMID: 36369702] [DOI: 10.1080/13803395.2022.2138270]
Abstract
OBJECTIVE Although there is an abundance of research on stand-alone and embedded performance validity tests and stand-alone symptom validity tests (SVTs), less emphasis has been placed on embedded SVTs. The goal of the current study was to examine the ability of embedded indicators within the Patient Competency Rating Scale (PCRS) to separately detect invalid cognitive and/or emotional/affective symptom responding. METHOD Participants included 299 veterans assessed in a VA medical center epilepsy monitoring unit from 2013-2017 (mean age = 48.8 years, SD = 13.5 years). Two SVT composites were created: self-reported cognitive symptom validity (SVT-C) and self-reported emotional/affective symptom validity (SVT-E). Groups were compared on PCRS total and index scores (i.e., cognitive, activities of daily living, emotional, and interpersonal competencies) using ANOVAs. Receiver operating characteristic (ROC) curve analyses assessed the classification accuracy of the PCRS total and index scores for SVT-C and SVT-E. RESULTS In ANOVAs, SVT-C was significantly associated with all PCRS indices, while SVT-E was significantly associated only with the PCRS total, emotional, and interpersonal competency indices. Although PCRS-T ≤ 90 had the strongest classification of SVT-C and SVT-E (specificities: .90; sensitivities: .44 to .50), PCRS index scores showed suggestive evidence of domain specificity, with PCRS-ADL ≤ 22, PCRS-C ≤ 20, and PCRS-CADL ≤ 45 best classifying SVT-C (specificities: .92; sensitivities: .33) and PCRS-E ≤ 18 best classifying the SVT-E group (specificity: .93; sensitivity: .40). CONCLUSION Results suggest the PCRS may be used to obtain clinically useful information while including embedded indicators that can assess cognitive and/or emotional/affective symptom invalidity.
Affiliation(s)
- Daniel S Weitzner
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Brian I Miller
- Neurology Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
- Troy A Webber
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
11. Giromini L, Young G, Sellbom M. Assessing Negative Response Bias Using Self-Report Measures: New Articles, New Issues. Psychol Inj Law 2022. [DOI: 10.1007/s12207-022-09444-2]
Abstract
In psychological injury and related forensic evaluations, two types of tests are commonly used to assess Negative Response Bias (NRB): Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs). SVTs assess the credibility of self-reported symptoms, whereas PVTs assess the credibility of observed performance on cognitive tasks. Compared to the large and ever-growing number of published PVTs, there are still relatively few validated self-report SVTs available to professionals for assessing symptom validity. In addition, while several studies have examined how to combine and integrate the results of multiple independent PVTs, few studies to date have addressed the combination and integration of information obtained from multiple self-report SVTs. The Special Issue of Psychological Injury and Law introduced in this article aims to help fill these gaps in the literature by providing readers with detailed information about the convergent and incremental validity, strengths and weaknesses, and applicability of a number of selected measures of NRB under different conditions and in different assessment contexts. Each of the articles in this Special Issue focuses on a particular self-report SVT or set of SVTs and summarizes their conditions of use, strengths, weaknesses, and possible cut scores and relative hit rates. Here, we review the psychometric properties of the 19 selected SVTs and discuss their advantages and disadvantages. In addition, we make tentative proposals for the field to consider regarding the number of SVTs to be used in an assessment, the number of SVT failures required to invalidate test results, and the issue of redundancy when selecting multiple SVTs for an assessment.