1. Bosi J, Minassian L, Ales F, Akca AYE, Winters C, Viglione DJ, Zennaro A, Giromini L. The sensitivity of the IOP-29 and IOP-M to coached feigning of depression and mTBI: An online simulation study in a community sample from the United Kingdom. Appl Neuropsychol Adult 2024; 31:1234-1246. [PMID: 36027614] [DOI: 10.1080/23279095.2022.2115910]
Abstract
Assessing the credibility of symptoms is critical to neuropsychological assessment in both clinical and forensic settings. To this end, the Inventory of Problems-29 (IOP-29) and its recently added memory module (Inventory of Problems-Memory; IOP-M) appear to be particularly useful, as they provide a rapid and cost-effective measure of both symptom and performance validity. While numerous studies have already supported the effectiveness of the IOP-29, research on its newly developed module, the IOP-M, is much sparser. To address this gap, we conducted a simulation study with a community sample (N = 307) from the United Kingdom. Participants were asked to (a) respond honestly, (b) pretend to suffer from mTBI, or (c) pretend to suffer from depression. Within each feigning group, half of the participants received a description of the symptoms of the disorder to be feigned; the other half received that description plus a warning not to over-exaggerate their responses, lest their presentation not be credible. Overall, the results confirmed the effectiveness of the two IOP components, both individually and in combination.
Affiliation(s)
- Jessica Bosi
- Department of Psychology, University of Surrey, Guildford, UK
- Laure Minassian
- Department of Psychology, University of Surrey, Guildford, UK
- Francesca Ales
- Department of Psychology, University of Turin, Turin, Italy
- Christina Winters
- Tilburg Institute for Law, Technology, and Society (TLS), Tilburg University, Tilburg, The Netherlands
2. Giromini L, Pignolo C, Zennaro A, Sellbom M. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT in the In-Person and Remote Administration Formats: A Simulation Study on Feigned mTBI. Assessment 2024:10731911241235465. [PMID: 38468147] [DOI: 10.1177/10731911241235465]
Abstract
Our study compared the impact of administering Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs) in in-person versus remote formats and assessed different approaches to combining validity test results. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT, we assessed 164 adults, with half instructed to feign mild traumatic brain injury (mTBI) and half to respond honestly. Within each subgroup, half completed the tests in person, and the other half completed them online via videoconferencing. Results from 2 × 2 analyses of variance showed no significant effects of administration format on SVT and PVT scores. When comparing feigners to controls, the MMPI-2-RF RBS exhibited the largest effect size (d = 3.05) among all examined measures. Accordingly, we conducted a series of two-step hierarchical logistic regression models, entering the MMPI-2-RF RBS first, followed by each of the other SVTs and PVTs individually. We found that the IOP-29 and IOP-M were the only measures that yielded incremental validity beyond the effects of the MMPI-2-RF RBS in predicting group membership. Taken together, these findings suggest that administering these SVTs and PVTs in person or remotely yields similar results, and that the combination of MMPI and IOP indexes may be particularly effective in identifying feigned mTBI.
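The two-step hierarchical logistic regression described above can be sketched with simulated data (hypothetical scores, not the study's dataset; the variable names `rbs` and `iop` are illustrative stand-ins): the incremental contribution of a second validity index is the likelihood-ratio chi-square between the one-predictor and two-predictor models.

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Fit logistic regression by Newton-Raphson; return coefficients and log-likelihood."""
    X = np.column_stack([np.ones(len(y)), X])   # prepend an intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                       # IRLS weights
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return beta, np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Simulated data: 100 honest controls (0) and 100 feigners (1), with two
# noisy indicators of group membership standing in for RBS and IOP-29 scores.
rng = np.random.default_rng(0)
group = np.repeat([0, 1], 100)
rbs = 2.0 * group + rng.normal(size=200)
iop = 1.0 * group + rng.normal(size=200)

_, ll_step1 = fit_logit(rbs[:, None], group)                  # Step 1: RBS alone
_, ll_step2 = fit_logit(np.column_stack([rbs, iop]), group)   # Step 2: RBS + IOP
lr_chi2 = 2.0 * (ll_step2 - ll_step1)                         # df = 1
print(f"LR chi-square for the added predictor: {lr_chi2:.1f}")
```

A chi-square above 3.84 (df = 1, alpha = .05) would indicate incremental validity of the second measure; dedicated packages (e.g., statsmodels) provide the same test with standard errors.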
3. Pignolo C, Giromini L, Ales F, Zennaro A. Detection of Feigning of Different Symptom Presentations With the PAI and IOP-29. Assessment 2023; 30:565-579. [PMID: 34872384] [DOI: 10.1177/10731911211061282]
Abstract
This study examined the effectiveness of the negative distortion measures from the Personality Assessment Inventory (PAI) and Inventory of Problems-29 (IOP-29) by investigating data from a community and a forensic sample across three different symptom presentations (i.e., feigned depression, posttraumatic stress disorder [PTSD], and schizophrenia). The final sample consisted of 513 community-based individuals and 288 inmates (total N = 801); all were administered the PAI and the IOP-29 under either an honest or a feigning condition. Statistical analyses compared the average scores of each measure by symptom presentation and data source (i.e., community vs. forensic sample) and evaluated diagnostic efficiency statistics. Results suggest that the PAI Negative Impression Management scale and the IOP-29 are the most effective measures across all symptom presentations, whereas the PAI Malingering Index and Rogers Discriminant Function generated less optimal results, especially for feigned PTSD. Practical implications are discussed.
4. Bajjaleh C, Braw YC, Elkana O. Adaptation and initial validation of the Arabic version of the Word Memory Test (WMT-ARB). Appl Neuropsychol Adult 2023; 30:204-213. [PMID: 34043924] [DOI: 10.1080/23279095.2021.1923495]
Abstract
BACKGROUND The feigning of cognitive impairment is common in neuropsychological assessments, especially in medicolegal settings. The Word Memory Test (WMT) is a forced-choice recognition memory performance validity test (PVT) that is widely used to detect noncredible performance. Though translated into several languages, it had not been adapted for one of the most common languages, Arabic. The aim of the current study was to evaluate the convergent validity of the Arabic adaptation of the WMT (WMT-ARB) among Israeli Arabic speakers. METHODS We adapted the WMT to Arabic using the back-translation method and in accordance with relevant guidelines. We then randomly assigned healthy Arabic-speaking adults (N = 63) to either a simulation or an honest control condition. The participants then performed neuropsychological tests that included the WMT-ARB and the Test of Memory Malingering (TOMM), a well-validated nonverbal PVT. RESULTS The WMT-ARB had high split-half reliability, and its measures were significantly correlated with those of the TOMM (p < .001). High concordance was found in the classification of participants using the WMT-ARB and TOMM (specificity = 94.29% and sensitivity = 100% using the conventional TOMM Trial 2 cutoff as the gold standard). As expected, simulators' accuracy on the WMT-ARB was significantly lower than that of honest controls. None of the demographic variables significantly correlated with WMT-ARB measures. CONCLUSION The WMT-ARB shows initial evidence of reliability and validity, underscoring its potential use in the large population of Arabic speakers and its universality in detecting noncredible performance. The findings, however, are preliminary and mandate validation in clinical settings.
Affiliation(s)
- Christine Bajjaleh
- Department of Psychology, the Academic College of Tel Aviv-Yaffo, Tel Aviv-Yaffo, Israel
- Yoram C Braw
- Department of Psychology, Ariel University, Ariel, Israel
- Odelia Elkana
- Department of Psychology, the Academic College of Tel Aviv-Yaffo, Tel Aviv-Yaffo, Israel
5. On the Use of Eye Movements in Symptom Validity Assessment of Feigned Schizophrenia. Psychol Inj Law 2022. [DOI: 10.1007/s12207-022-09462-0]
Abstract
Assessing the credibility of reported mental health problems is critical in a variety of assessment situations, particularly in forensic contexts. Previous research has examined how the assessment of performance validity can be improved through the use of bio-behavioral measures (e.g., eye movements). To date, however, there is a paucity of literature on the use of eye tracking technology in assessing the validity of presented symptoms of schizophrenia, a disorder that is known to be associated with oculomotor abnormalities. Thus, we collected eye tracking data from 83 healthy individuals during completion of the Inventory of Problems - 29 and investigated whether the oculomotor behavior of participants instructed to feign schizophrenia would differ from that of control participants asked to respond honestly. Results showed that feigners had a longer dwell time and a greater number of fixations on the feigning-keyed response options, regardless of whether they eventually endorsed those options (d > 0.80). Implications for how eye tracking technology can deepen our understanding of simulation strategies are discussed, as is the potential of investigating eye movements to advance the field of symptom validity assessment.
6. Holcomb M, Pyne S, Cutler L, Oikle DA, Erdodi LA. Take Their Word for It: The Inventory of Problems Provides Valuable Information on Both Symptom and Performance Validity. J Pers Assess 2022:1-11. [PMID: 36041087] [DOI: 10.1080/00223891.2022.2114358]
Abstract
This study was designed to compare the validity of the Inventory of Problems (IOP-29) and its newly developed memory module (IOP-M) in 150 patients clinically referred for neuropsychological assessment. Criterion groups were psychometrically derived based on established performance and symptom validity tests (PVTs and SVTs). The criterion-related validity of the IOP-29 was compared to that of the Negative Impression Management scale of the Personality Assessment Inventory (NIM-PAI), and the criterion-related validity of the IOP-M was compared to that of Trial 1 of the Test of Memory Malingering (TOMM-1). The IOP-29 correlated significantly more strongly with criterion PVTs than the NIM-PAI did (r = .34 vs. .06; z = 2.50, p = .01), while generating similar overall correct classification values (79-81% vs. 71-79%). Similarly, the IOP-M correlated significantly more strongly with criterion PVTs than the TOMM-1 did (r = .79 vs. .59; z = 2.26, p = .02), again generating similar overall correct classification values (89-91% vs. 84-86%). Findings converge with the cumulative evidence that the IOP-29 and IOP-M are valuable additions to comprehensive neuropsychological batteries. Results also confirm that symptom and performance validity are distinct clinical constructs, and that domain specificity should be considered when calibrating instruments.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor
7. Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing Performance Validity Cutoffs on the Boston Naming Test (BNT) Is Specific, but Insensitive to Non-Credible Responding. Dev Neuropsychol 2022; 47:17-31. [PMID: 35157548] [DOI: 10.1080/87565641.2022.2038602]
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
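Sensitivity and specificity at a fixed validity cutoff, as reported above, reduce to simple proportions once each case has a criterion classification. A minimal sketch with simulated T-scores (the distributions and group sizes are illustrative, not the study's data):

```python
import numpy as np

# Simulated BNT-style T-scores: criterion-negative (credible) and
# criterion-positive (non-credible) cases; distributions are invented.
rng = np.random.default_rng(42)
credible = rng.normal(50, 10, 150)
noncredible = rng.normal(35, 10, 56)

CUTOFF = 35  # flag scores at or below this T-score as invalid

sensitivity = np.mean(noncredible <= CUTOFF)  # flagged positives / all positives
specificity = np.mean(credible > CUTOFF)      # passed negatives / all negatives
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

Lowering the cutoff (e.g., T ≤ 33) trades sensitivity for specificity, which is the pattern the study reports.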
Affiliation(s)
- Shayna Nussbaum
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson
- Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
8. Choca JP, Pignolo C. Assessing Negative Response Bias with the Millon Clinical Multiaxial Inventory-IV (MCMI-IV): a Review of the Literature. Psychol Inj Law 2022. [DOI: 10.1007/s12207-022-09442-4]
9. Giromini L, Viglione DJ. Assessing Negative Response Bias with the Inventory of Problems-29 (IOP-29): a Quantitative Literature Review. Psychol Inj Law 2022. [DOI: 10.1007/s12207-021-09437-7]
10. Abeare K, Romero K, Cutler L, Sirianni CD, Erdodi LA. Flipping the Script: Measuring Both Performance Validity and Cognitive Ability with the Forced Choice Recognition Trial of the RCFT. Percept Mot Skills 2021; 128:1373-1408. [PMID: 34024205] [PMCID: PMC8267081] [DOI: 10.1177/00315125211019704]
Abstract
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFT-FCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery, and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 and ≤17), the RCFT-FCR remained specific (.84-1.00) to psychometrically defined non-credible responding. At the same time, the RCFT-FCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFT-FCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
Affiliation(s)
- Kaitlyn Abeare
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Kristoffer Romero
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
11. Šömen MM, Lesjak S, Majaron T, Lavopa L, Giromini L, Viglione D, Podlesek A. Using the Inventory of Problems-29 (IOP-29) with the Inventory of Problems Memory (IOP-M) in Malingering-Related Assessments: a Study with a Slovenian Sample of Experimental Feigners. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09412-2]
12. Giromini L, Pignolo C, Young G, Drogin EY, Zennaro A, Viglione DJ. Comparability and Validity of the Online and In-Person Administrations of the Inventory of Problems-29. Psychol Inj Law 2021; 14:77-88. [PMID: 33841609] [PMCID: PMC8019979] [DOI: 10.1007/s12207-021-09406-0]
Abstract
While the psychometric equivalence of computerized versus paper-and-pencil administration formats has been documented for some tests, very few studies have focused on the comparability and validity of test scores obtained via in-person versus remote administrations, and none has examined a symptom validity test (SVT). To help fill this gap in the literature, we investigated Inventory of Problems-29 (IOP-29) scores generated by various administration formats. More specifically, Study 1 evaluated the equivalence of scores from nonclinical individuals administered the IOP-29 remotely (n = 146) versus in-person via computer (n = 140) versus in-person via paper-and-pencil format (n = 140). Study 2 reviewed published IOP-29 studies conducted using remote/online versus in-person, paper-and-pencil test administrations to determine whether remote testing could adversely influence the validity of IOP-29 test results. Taken together, our findings suggest that the effectiveness of the IOP-29 is preserved when alternating between face-to-face and online/remote formats.
Affiliation(s)
- Luciano Giromini
- Department of Psychology, University of Turin, Via Verdi 10, 10123 Torino, TO Italy
- Claudia Pignolo
- Department of Psychology, University of Turin, Via Verdi 10, 10123 Torino, TO Italy
- Gerald Young
- Glendon College, York University, Toronto, Canada
- Eric Y Drogin
- Department of Psychiatry, Harvard Medical School, Boston, MA USA
- Alessandro Zennaro
- Department of Psychology, University of Turin, Via Verdi 10, 10123 Torino, TO Italy
13. Abeare K, Razvi P, Sirianni CD, Giromini L, Holcomb M, Cutler L, Kuzmenka P, Erdodi LA. Introducing Alternative Validity Cutoffs to Improve the Detection of Non-credible Symptom Report on the BRIEF. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09402-4]
14. Sabelli AG, Messa I, Giromini L, Lichtenstein JD, May N, Erdodi LA. Symptom Versus Performance Validity in Patients with Mild TBI: Independent Sources of Non-credible Responding. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09400-6]
15. Carvalho LDF, Reis A, Colombarolli MS, Pasian SR, Miguel FK, Erdodi LA, Viglione DJ, Giromini L. Discriminating Feigned from Credible PTSD Symptoms: a Validation of a Brazilian Version of the Inventory of Problems-29 (IOP-29). Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09403-3]
16. Cutler L, Abeare CA, Messa I, Holcomb M, Erdodi LA. This will only take a minute: Time cutoffs are superior to accuracy cutoffs on the forced choice recognition trial of the Hopkins Verbal Learning Test - Revised. Appl Neuropsychol Adult 2021; 29:1425-1439. [PMID: 33631077] [DOI: 10.1080/23279095.2021.1884555]
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of the recently introduced forced-choice recognition trial of the Hopkins Verbal Learning Test - Revised (HVLT-R FCR) as a performance validity test (PVT) in a clinical sample. Time-to-completion (T2C) for the HVLT-R FCR was also examined. METHOD Forty-three students were assigned to either a control or an experimental malingering (expMAL) condition. Archival data were collected from 52 adults clinically referred for neuropsychological assessment. Invalid performance was defined using expMAL status, two free-standing PVTs, and two validity composites. RESULTS Among students, an HVLT-R FCR score ≤ 11 or a T2C ≥ 45 seconds was specific (.86-.93) to invalid performance. Among patients, an HVLT-R FCR score ≤ 11 was specific (.94-1.00) but relatively insensitive (.38-.60) to non-credible responding. A T2C ≥ 35 seconds produced notably higher sensitivity (.71-.89) but variable specificity (.83-.96). The T2C achieved superior overall correct classification (81-86%) compared to the accuracy score (68-77%). The HVLT-R FCR provided incremental utility in performance validity assessment compared to previously introduced validity cutoffs on Recognition Discrimination. CONCLUSIONS Combined with T2C, the HVLT-R FCR has the potential to function as a quick, inexpensive, and effective embedded PVT. The time cutoff effectively attenuated the low ceiling of the accuracy score, increasing sensitivity by 19%. Replication in larger and more geographically and demographically diverse samples is needed before the HVLT-R FCR can be endorsed for routine clinical application.
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Isabelle Messa
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
17. Gegner J, Erdodi LA, Giromini L, Viglione DJ, Bosi J, Brusadelli E. An Australian study on feigned mTBI using the Inventory of Problems - 29 (IOP-29), its Memory Module (IOP-M), and the Rey Fifteen Item Test (FIT). Appl Neuropsychol Adult 2021; 29:1221-1230. [PMID: 33403885] [DOI: 10.1080/23279095.2020.1864375]
Abstract
We investigated the classification accuracy of the Inventory of Problems - 29 (IOP-29), its newly developed memory module (IOP-M), and the Fifteen Item Test (FIT) in an Australian community sample (N = 275). One third of the participants (n = 93) were asked to respond honestly; two thirds were instructed to feign mild TBI. Half of the feigners (n = 90) were coached to avoid detection by not exaggerating; half were not (n = 92). All measures successfully discriminated between honest responders and feigners, with large effect sizes (d ≥ 1.96). The effect size for the IOP-29 (d ≥ 4.90), however, was about two to three times larger than those produced by the IOP-M and FIT. Also noteworthy, the IOP-29 and IOP-M showed excellent sensitivity (> 90% for the former, > 80% for the latter) in both the coached and uncoached feigning conditions, at perfect specificity. By contrast, the sensitivity of the FIT was 71.7% in the uncoached simulator group and 53.3% in the coached simulator group, at a nearly perfect specificity of 98.9%. These findings suggest that the validity of the IOP-29 and IOP-M should generalize to Australian examinees, and that the IOP-29 and IOP-M likely outperform the FIT in detecting feigned mTBI.
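The effect sizes quoted above are Cohen's d values: the mean group difference scaled by the pooled standard deviation. A sketch with simulated scores (the group means, SDs, and sample sizes are invented for illustration, not taken from the study):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# Simulated validity-scale scores for feigners and honest responders.
rng = np.random.default_rng(1)
feigners = rng.normal(24, 3, size=90)
honest = rng.normal(10, 3, size=93)

d_val = cohens_d(feigners, honest)
print(f"d = {d_val:.2f}")
```

By convention d = 0.8 already counts as a large effect, so values like 1.96 or 4.90 imply nearly (or completely) non-overlapping score distributions.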
Affiliation(s)
- Jennifer Gegner
- Department of Psychology, University of Wollongong, Wollongong, Australia
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
18. Abeare CA, Hurtubise JL, Cutler L, Sirianni C, Brantuo M, Makhzoum N, Erdodi LA. Introducing a forced choice recognition trial to the Hopkins Verbal Learning Test - Revised. Clin Neuropsychol 2020; 35:1442-1470. [DOI: 10.1080/13854046.2020.1779348]
Affiliation(s)
- Laura Cutler
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maame Brantuo
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nadeen Makhzoum
- Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi
- Department of Psychology, University of Windsor, Windsor, ON, Canada
19. Giromini L, Viglione DJ, Zennaro A, Maffei A, Erdodi LA. SVT Meets PVT: Development and Initial Validation of the Inventory of Problems - Memory (IOP-M). Psychol Inj Law 2020. [DOI: 10.1007/s12207-020-09385-8]
20. Winters CL, Giromini L, Crawford TJ, Ales F, Viglione DJ, Warmelink L. An Inventory of Problems-29 (IOP-29) study investigating feigned schizophrenia and random responding in a British community sample. Psychiatry Psychol Law 2020; 28:235-254. [PMID: 34712094] [PMCID: PMC8547855] [DOI: 10.1080/13218719.2020.1767720]
Abstract
Compared to other Western countries, malingering research is still relatively scarce in the United Kingdom, partly because only a few brief and easy-to-use symptom validity tests (SVTs) have been validated for use with British test-takers. This online study examined the validity of the Inventory of Problems-29 (IOP-29) in detecting feigned schizophrenia and random responding in 151 British volunteers. Each participant completed three IOP-29 administrations: (a) responding honestly; (b) pretending to suffer from schizophrenia; and (c) responding at random. They also completed a schizotypy measure (O-LIFE) under standard instructions. The IOP-29's feigning scale (FDS) showed excellent validity in discriminating honest responding from feigned schizophrenia (AUC = .99), and its classification accuracy was not significantly affected by the presence of schizotypal traits. Additionally, a recently introduced IOP-29 scale aimed at detecting random responding (RRS) demonstrated very promising results.
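The AUC reported above is equivalent to the probability that a randomly chosen feigner scores higher on the scale than a randomly chosen honest responder (the Mann-Whitney formulation of the area under the ROC curve). A sketch with simulated scale scores (distributions invented for illustration, not the study's data):

```python
import numpy as np

def auc(pos, neg):
    """AUC via the Mann-Whitney formulation: P(pos > neg), ties counted as 0.5."""
    pos, neg = np.asarray(pos), np.asarray(neg)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Simulated feigning-scale scores for the two instruction conditions.
rng = np.random.default_rng(7)
feigned = rng.normal(80, 10, size=151)
honest = rng.normal(40, 10, size=151)

auc_val = auc(feigned, honest)
print(f"AUC = {auc_val:.3f}")
```

An AUC of .50 is chance-level discrimination; values near 1.0, like the .99 reported for the FDS, indicate almost perfect separation of the two conditions.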
Affiliation(s)
- Francesca Ales
- Department of Psychology, University of Turin, Torino, Italy
- Donald J Viglione
- California School of Professional Psychology, Alliant International University, San Diego, CA, USA
- Lara Warmelink
- Department of Psychology, Lancaster University, Lancaster, UK
21. Ilgunaite G, Giromini L, Bosi J, Viglione DJ, Zennaro A. A clinical comparison simulation study using the Inventory of Problems-29 (IOP-29) with the Center for Epidemiologic Studies Depression Scale (CES-D) in Lithuania. Appl Neuropsychol Adult 2020; 29:155-162. [DOI: 10.1080/23279095.2020.1725518]
Affiliation(s)
- Guste Ilgunaite
- Department of Psychology, Mykolas Romeris University, Vilnius, Lithuania
- Jessica Bosi
- Department of Psychology, University of Surrey, Guildford, UK
- Donald J. Viglione
- California School of Professional Psychology, Alliant International University, San Diego, CA, USA
22. Mazza C, Orrù G, Burla F, Monaro M, Ferracuti S, Colasanti M, Roma P. Indicators to distinguish symptom accentuators from symptom producers in individuals with a diagnosed adjustment disorder: A pilot study on inconsistency subtypes using SIMS and MMPI-2-RF. PLoS One 2019; 14:e0227113. [PMID: 31887214] [PMCID: PMC6936836] [DOI: 10.1371/journal.pone.0227113]
Abstract
In the context of legal damage evaluations, evaluees may exaggerate or simulate symptoms in an attempt to obtain greater economic compensation. To date, practitioners and researchers have focused on detecting malingering as an exclusively unitary construct. However, we argue that there are two types of inconsistent behavior that speak to possible malingering, each with its own unique attributes: accentuating (i.e., exaggerating symptoms that are actually experienced) and simulating (i.e., fabricating symptoms entirely); thus, it is necessary to distinguish between them. The aim of the present study was to identify objective indicators to differentiate symptom accentuators from symptom producers and consistent participants. We analyzed the Structured Inventory of Malingered Symptomatology (SIMS) scales and the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) validity scales of 132 individuals with a diagnosed adjustment disorder with mixed anxiety and depressed mood who had undergone assessment for psychiatric/psychological damage. The results indicated that the SIMS Total Score, Neurologic Impairment, and Low Intelligence scales and the MMPI-2-RF Infrequent Responses (F-r) and Response Bias (RBS) scales successfully discriminated among symptom accentuators, symptom producers, and consistent participants. Machine learning analysis was used to identify the most efficient parameter for classifying these three groups, and it identified the SIMS Total Score as the best indicator.
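The three-way distinction above (consistent vs. accentuator vs. producer) ultimately depends on where a case falls on the discriminating indicator; with a single total score, that amounts to two decision thresholds. A toy sketch (the score distributions and cutoffs are entirely invented for illustration; the study itself used machine learning across several scales, not fixed cutoffs):

```python
import numpy as np

# Simulated total scores for three hypothetical groups.
rng = np.random.default_rng(3)
consistent = rng.normal(10, 4, size=50)
accentuators = rng.normal(22, 4, size=50)
producers = rng.normal(34, 4, size=50)

def classify(score, low=16, high=28):
    """Two illustrative cutoffs on one total score yield three classes."""
    if score < low:
        return "consistent"
    return "accentuator" if score < high else "producer"

scores = np.concatenate([consistent, accentuators, producers])
truth = ["consistent"] * 50 + ["accentuator"] * 50 + ["producer"] * 50
acc = np.mean([classify(s) == t for s, t in zip(scores, truth)])
print(f"three-way classification accuracy = {acc:.2f}")
```

The middle band is what makes the accentuator category hard: it overlaps both neighbors, which is why a single unitary "malingering" cutoff cannot recover the three groups.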
Affiliation(s)
- Cristina Mazza
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Graziella Orrù
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Franco Burla
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Merylin Monaro
- Department of General Psychology, University of Padova, Padova, Italy
- Stefano Ferracuti
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Marco Colasanti
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Paolo Roma
- Department of Human Neuroscience, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
23. Ecological Validity of the Inventory of Problems-29 (IOP-29): an Italian Study of Court-Ordered, Psychological Injury Evaluations Using the Structured Inventory of Malingered Symptomatology (SIMS) as Criterion Variable. Psychol Inj Law 2019. [DOI: 10.1007/s12207-019-09368-4]
24. Beyond Rare-Symptoms Endorsement: a Clinical Comparison Simulation Study Using the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) with the Inventory of Problems-29 (IOP-29). Psychol Inj Law 2019. [DOI: 10.1007/s12207-019-09357-7]