1. Puente-López E, Pina D, Rambaud-Quiñones P, Ruiz-Hernández JA, Nieto-Cañaveras MD, Shura RD, Alcazar-Crevillén A, Martinez-Jarreta B. Classification accuracy and resistance to coaching of the Spanish version of the Inventory of Problems-29 and the Inventory of Problems-Memory: A simulation study with mTBI patients. Clin Neuropsychol 2024;38:738-762. [PMID: 37615421] [DOI: 10.1080/13854046.2023.2249171]
Abstract
Objective: This study evaluated the classification accuracy and resistance to coaching of the Inventory of Problems-29 (IOP-29) and the IOP-Memory (IOP-M) in a Spanish sample of patients diagnosed with mild traumatic brain injury (mTBI) and healthy participants instructed to feign. Method: Using a simulation design, 37 outpatients with mTBI (clinical control group) and 213 non-clinical instructed feigners under several coaching conditions completed the Spanish versions of the IOP-29, IOP-M, Structured Inventory of Malingered Symptomatology (SIMS), and Rivermead Post Concussion Symptoms Questionnaire. Results: The IOP-29 discriminated well between clinical patients and instructed feigners, with excellent classification accuracy at the recommended cutoff (FDS ≥ .50; sensitivity = 87.10% for the coached group and 89.09% for the uncoached group; specificity = 95.12%). The IOP-M also showed excellent classification accuracy (cutoff ≤ 29; sensitivity = 87.27% for the coached group and 93.55% for the uncoached group; specificity = 97.56%). Both instruments proved resistant to symptom-information coaching and performance warnings. Conclusions: The results confirm that both IOP measures offer a similarly valid but distinct perspective relative to the SIMS when assessing the credibility of mTBI symptoms. These encouraging findings indicate that both tests are a valuable addition to the symptom validity practices of forensic professionals. Additional research in multiple contexts and with diverse conditions is warranted.
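As a reading aid for the statistics reported in this and the following abstracts, here is a minimal, hypothetical sketch of how cutoff-based sensitivity and specificity are computed. The function, the toy scores, and the group labels are invented for illustration; they are not data or code from the study.

```python
# Illustrative sketch (not the authors' code): deriving sensitivity and
# specificity from a validity-scale cutoff such as FDS >= .50.

def classification_accuracy(scores, is_feigner, cutoff):
    """Flag score >= cutoff as 'feigning'; return (sensitivity, specificity)."""
    tp = sum(1 for s, f in zip(scores, is_feigner) if f and s >= cutoff)
    fn = sum(1 for s, f in zip(scores, is_feigner) if f and s < cutoff)
    tn = sum(1 for s, f in zip(scores, is_feigner) if not f and s < cutoff)
    fp = sum(1 for s, f in zip(scores, is_feigner) if not f and s >= cutoff)
    sensitivity = tp / (tp + fn)  # proportion of feigners correctly flagged
    specificity = tn / (tn + fp)  # proportion of genuine patients correctly passed
    return sensitivity, specificity

# Toy data: 4 instructed feigners, 4 honest patients, cutoff .50
scores = [0.72, 0.55, 0.48, 0.91, 0.10, 0.33, 0.52, 0.20]
feigner = [True, True, True, True, False, False, False, False]
sens, spec = classification_accuracy(scores, feigner, 0.50)
print(sens, spec)  # 0.75 0.75
```

Raising the cutoff trades sensitivity for specificity, which is why the studies below report both values at each recommended cutoff.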
Affiliation(s)
- David Pina: Applied Psychology Service, Universidad de Murcia, Murcia, Spain
- Robert D Shura: Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Salisbury VA Medical Center, Salisbury, NC, USA
- Begoña Martinez-Jarreta: Mutua MAZ, Zaragoza, Spain; Department of Pathological Anatomy, Forensic and Legal Medicine and Toxicology, Universidad de Zaragoza, Zaragoza, Spain
2. Pignolo C, Giromini L, Ales F, Zennaro A. Detection of Feigning of Different Symptom Presentations With the PAI and IOP-29. Assessment 2023;30:565-579. [PMID: 34872384] [DOI: 10.1177/10731911211061282]
Abstract
This study examined the effectiveness of the negative distortion measures from the Personality Assessment Inventory (PAI) and the Inventory of Problems-29 (IOP-29) by investigating data from a community and a forensic sample across three symptom presentations (i.e., feigned depression, posttraumatic stress disorder [PTSD], and schizophrenia). The final sample consisted of 513 community-based individuals and 288 inmates (total N = 801); all were administered the PAI and the IOP-29 in either an honest or a feigning condition. Statistical analyses compared the average scores of each measure by symptom presentation and data source (i.e., community vs. forensic sample) and evaluated diagnostic efficiency statistics. Results suggest that the PAI Negative Impression Management scale and the IOP-29 are the most effective measures across all symptom presentations, whereas the PAI Malingering Index and Rogers Discriminant Function generated less optimal results, especially for feigned PTSD. Practical implications are discussed.
3. On the Use of Eye Movements in Symptom Validity Assessment of Feigned Schizophrenia. Psychol Inj Law 2022. [DOI: 10.1007/s12207-022-09462-0]
Abstract
Assessing the credibility of reported mental health problems is critical in a variety of assessment situations, particularly in forensic contexts. Previous research has examined how the assessment of performance validity can be improved through the use of bio-behavioral measures (e.g., eye movements). To date, however, there is a paucity of literature on the use of eye-tracking technology in assessing the validity of presented symptoms of schizophrenia, a disorder known to be associated with oculomotor abnormalities. Thus, we collected eye-tracking data from 83 healthy individuals during completion of the Inventory of Problems-29 and investigated whether the oculomotor behavior of participants instructed to feign schizophrenia would differ from that of control participants asked to respond honestly. Results showed that feigners had a longer dwell time and a greater number of fixations on the feigning-keyed response options, regardless of whether they eventually endorsed those options (d > 0.80). Implications for how eye-tracking technology can deepen understanding of simulation strategies are discussed, as is the potential of investigating eye movements to advance the field of symptom validity assessment.
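Several abstracts in this list report pooled-SD Cohen's d effect sizes (e.g., the "d > 0.80" dwell-time finding above). A minimal sketch of that statistic follows; the dwell-time numbers are invented for illustration and are not data from the study.

```python
# Hedged sketch: Cohen's d with the pooled standard deviation, the
# effect-size statistic behind findings such as "d > 0.80".
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference between two independent groups."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Toy dwell times (ms): feigners vs. honest responders
d = cohens_d([620, 700, 680, 650], [500, 540, 520, 560])
```

By the common convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 large, so values above 0.80 indicate clearly separated group distributions.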
4. Holcomb M, Pyne S, Cutler L, Oikle DA, Erdodi LA. Take Their Word for It: The Inventory of Problems Provides Valuable Information on Both Symptom and Performance Validity. J Pers Assess 2022:1-11. [PMID: 36041087] [DOI: 10.1080/00223891.2022.2114358]
Abstract
This study compared the validity of the Inventory of Problems (IOP-29) and its newly developed memory module (IOP-M) in 150 patients clinically referred for neuropsychological assessment. Criterion groups were psychometrically derived from established performance and symptom validity tests (PVTs and SVTs). The criterion-related validity of the IOP-29 was compared with that of the Negative Impression Management scale of the Personality Assessment Inventory (NIM-PAI), and that of the IOP-M was compared with Trial 1 of the Test of Memory Malingering (TOMM-1). The IOP-29 correlated significantly more strongly with criterion PVTs than the NIM-PAI did (r = .34 vs. r = .06; z = 2.50, p = .01), while generating similar overall correct classification values (OCC: 79-81% vs. 71-79%). Similarly, the IOP-M correlated significantly more strongly with criterion PVTs than the TOMM-1 did (r = .79 vs. r = .59; z = 2.26, p = .02), again with similar overall correct classification values (OCC: 89-91% vs. 84-86%). Findings converge with the cumulative evidence that the IOP-29 and IOP-M are valuable additions to comprehensive neuropsychological batteries. Results also confirm that symptom and performance validity are distinct clinical constructs and that domain specificity should be considered when calibrating instruments.
Affiliation(s)
- Laura Cutler: Department of Psychology, Neuropsychology Track, University of Windsor
- Laszlo A Erdodi: Department of Psychology, Neuropsychology Track, University of Windsor
5. Bosi J, Minassian L, Ales F, Akca AYE, Winters C, Viglione DJ, Zennaro A, Giromini L. The sensitivity of the IOP-29 and IOP-M to coached feigning of depression and mTBI: An online simulation study in a community sample from the United Kingdom. Appl Neuropsychol Adult 2022:1-13. [PMID: 36027614] [DOI: 10.1080/23279095.2022.2115910]
Abstract
Assessing the credibility of symptoms is critical to neuropsychological assessment in both clinical and forensic settings. To this end, the Inventory of Problems-29 (IOP-29) and its recently added memory module (Inventory of Problems-Memory; IOP-M) appear particularly useful, as they provide a rapid and cost-effective measure of both symptom and performance validity. While numerous studies have supported the effectiveness of the IOP-29, research on its newly developed module, the IOP-M, is much sparser. To address this gap, we conducted a simulation study with a community sample (N = 307) from the United Kingdom. Participants were asked to (a) respond honestly, (b) pretend to suffer from mTBI, or (c) pretend to suffer from depression. Within each feigning group, half of the participants received a description of the symptoms of the disorder to be feigned; the other half received both that description and a warning that over-exaggerating would make their presentation non-credible. Overall, the results confirmed the effectiveness of the two IOP components, both individually and in combination.
Collapse
Affiliation(s)
- Jessica Bosi: Department of Psychology, University of Surrey, Guildford, UK
- Laure Minassian: Department of Psychology, University of Surrey, Guildford, UK
- Francesca Ales: Department of Psychology, University of Turin, Turin, Italy
- Christina Winters: Tilburg Institute for Law, Technology, and Society (TLS), Tilburg University, Tilburg, The Netherlands
6. Giromini L, Viglione DJ. Assessing Negative Response Bias with the Inventory of Problems-29 (IOP-29): A Quantitative Literature Review. Psychol Inj Law 2022. [DOI: 10.1007/s12207-021-09437-7]
7. Šömen MM, Lesjak S, Majaron T, Lavopa L, Giromini L, Viglione D, Podlesek A. Using the Inventory of Problems-29 (IOP-29) with the Inventory of Problems Memory (IOP-M) in Malingering-Related Assessments: A Study with a Slovenian Sample of Experimental Feigners. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09412-2]
8. Giromini L, Pignolo C, Young G, Drogin EY, Zennaro A, Viglione DJ. Comparability and Validity of the Online and In-Person Administrations of the Inventory of Problems-29. Psychol Inj Law 2021;14:77-88. [PMID: 33841609] [PMCID: PMC8019979] [DOI: 10.1007/s12207-021-09406-0]
Abstract
While the psychometric equivalence of computerized and paper-and-pencil administration formats has been documented for some tests, very few studies have examined the comparability and validity of test scores obtained via in-person versus remote administration, and none has examined a symptom validity test (SVT). To help fill this gap in the literature, we investigated Inventory of Problems-29 (IOP-29) scores generated by various administration formats. Study 1 evaluated the equivalence of scores from nonclinical individuals administered the IOP-29 remotely (n = 146), in-person via computer (n = 140), and in-person via paper-and-pencil (n = 140). Study 2 reviewed published IOP-29 studies conducted with remote/online versus in-person, paper-and-pencil administrations to determine whether remote testing could adversely influence the validity of IOP-29 results. Taken together, our findings suggest that the effectiveness of the IOP-29 is preserved when alternating between face-to-face and online/remote formats.
Collapse
Affiliation(s)
- Luciano Giromini: Department of Psychology, University of Turin, Via Verdi 10, 10123 Torino, Italy
- Claudia Pignolo: Department of Psychology, University of Turin, Via Verdi 10, 10123 Torino, Italy
- Gerald Young: Glendon College, York University, Toronto, Canada
- Eric Y Drogin: Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- Alessandro Zennaro: Department of Psychology, University of Turin, Via Verdi 10, 10123 Torino, Italy
9. Abeare K, Razvi P, Sirianni CD, Giromini L, Holcomb M, Cutler L, Kuzmenka P, Erdodi LA. Introducing Alternative Validity Cutoffs to Improve the Detection of Non-credible Symptom Report on the BRIEF. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09402-4]
10. Gegner J, Erdodi LA, Giromini L, Viglione DJ, Bosi J, Brusadelli E. An Australian study on feigned mTBI using the Inventory of Problems-29 (IOP-29), its Memory Module (IOP-M), and the Rey Fifteen Item Test (FIT). Appl Neuropsychol Adult 2021;29:1221-1230. [PMID: 33403885] [DOI: 10.1080/23279095.2020.1864375]
Abstract
We investigated the classification accuracy of the Inventory of Problems-29 (IOP-29), its newly developed memory module (IOP-M), and the Fifteen Item Test (FIT) in an Australian community sample (N = 275). One third of the participants (n = 93) were asked to respond honestly, and two thirds were instructed to feign mild TBI. Half of the feigners were coached to avoid detection by not exaggerating (n = 90) and half were not (n = 92). All measures successfully discriminated between honest responders and feigners, with large effect sizes (d ≥ 1.96). The effect size for the IOP-29 (d ≥ 4.90), however, was roughly two to three times larger than those produced by the IOP-M and the FIT. Also noteworthy, the IOP-29 and IOP-M showed excellent sensitivity (>90% and >80%, respectively) in both the coached and uncoached feigning conditions, at perfect specificity. By contrast, the sensitivity of the FIT was 71.7% in the uncoached simulator group and 53.3% in the coached simulator group, at a nearly perfect specificity of 98.9%. These findings suggest that the validity of the IOP-29 and IOP-M should generalize to Australian examinees and that both likely outperform the FIT in detecting feigned mTBI.
Affiliation(s)
- Jennifer Gegner: Department of Psychology, University of Wollongong, Wollongong, Australia
- Laszlo A Erdodi: Department of Psychology, University of Windsor, Windsor, Canada
11. Giromini L, Viglione DJ, Zennaro A, Maffei A, Erdodi LA. SVT Meets PVT: Development and Initial Validation of the Inventory of Problems-Memory (IOP-M). Psychol Inj Law 2020. [DOI: 10.1007/s12207-020-09385-8]
12. Winters CL, Giromini L, Crawford TJ, Ales F, Viglione DJ, Warmelink L. An Inventory of Problems-29 (IOP-29) study investigating feigned schizophrenia and random responding in a British community sample. Psychiatry Psychol Law 2020;28:235-254. [PMID: 34712094] [PMCID: PMC8547855] [DOI: 10.1080/13218719.2020.1767720]
Abstract
Compared with other Western countries, malingering research is still relatively scarce in the United Kingdom, partly because only a few brief, easy-to-use symptom validity tests (SVTs) have been validated for British test-takers. This online study examined the validity of the Inventory of Problems-29 (IOP-29) in detecting feigned schizophrenia and random responding in 151 British volunteers. Each participant completed three IOP-29 administrations: (a) responding honestly, (b) pretending to suffer from schizophrenia, and (c) responding at random. They also completed a schizotypy measure (O-LIFE) under standard instructions. The IOP-29's feigning scale (FDS) showed excellent validity in discriminating honest responding from feigned schizophrenia (AUC = .99), and its classification accuracy was not significantly affected by the presence of schizotypal traits. A recently introduced IOP-29 scale aimed at detecting random responding (RRS) also demonstrated very promising results.
Affiliation(s)
- Francesca Ales: Department of Psychology, University of Turin, Torino, Italy
- Donald J Viglione: California School of Professional Psychology, Alliant International University, San Diego, CA, USA
- Lara Warmelink: Department of Psychology, Lancaster University, Lancaster, UK