1. Giromini L, Pignolo C, Zennaro A, Sellbom M. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT in the In-Person and Remote Administration Formats: A Simulation Study on Feigned mTBI. Assessment 2024:10731911241235465. PMID: 38468147. DOI: 10.1177/10731911241235465.
Abstract
Our study compared the impact of administering Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs) in in-person versus remote formats and assessed different approaches to combining validity test results. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT, we assessed 164 adults, with half instructed to feign mild traumatic brain injury (mTBI) and half to respond honestly. Within each subgroup, half completed the tests in person, and the other half completed them online via videoconferencing. Results from 2 × 2 analyses of variance showed no significant effects of administration format on SVT and PVT scores. When comparing feigners to controls, the MMPI-2-RF RBS exhibited the largest effect size (d = 3.05) among all examined measures. Accordingly, we conducted a series of two-step hierarchical logistic regression models by entering the MMPI-2-RF RBS first, followed by each other SVT and PVT individually. We found that the IOP-29 and IOP-M were the only measures that yielded incremental validity beyond the effects of the MMPI-2-RF RBS in predicting group membership. Taken together, these findings suggest that administering these SVTs and PVTs in person or remotely yields similar results, and that the combination of MMPI and IOP indexes might be particularly effective in identifying feigned mTBI.
2. Shura RD, Sapp A, Ingram PB, Brearly TW. Evaluation of telehealth administration of MMPI symptom validity scales. J Clin Exp Neuropsychol 2024; 46:86-94. PMID: 38375629. DOI: 10.1080/13803395.2024.2314734.
Abstract
INTRODUCTION Telehealth assessment (TA) is a quickly emerging practice, offered with increasing frequency across many different clinical contexts. TA is also well received by most patients, and numerous guidelines and training opportunities support effective telehealth practice. Although recommended practices are extensive, these guidelines have rarely been evaluated empirically, particularly for personality measures. While existing research is limited, it generally supports the idea that TA and in-person assessment (IA) produce broadly equivalent test scores. The MMPI-3, a recently released and highly popular personality and psychopathology measure, has been the subject of several such experimental or student (non-client) studies; however, no study to date has evaluated these trends within a clinical sample. This study empirically tests for differences between TA and IA scores on the MMPI-3 validity scales when recommended administration procedures are followed. METHOD Data were drawn from a retrospective chart review. Veterans (n = 550) who underwent psychological assessment in a Veterans Affairs Medical Center ADHD evaluation clinic were compared across in-person and telehealth assessment modalities on the MMPI-2-RF and MMPI-3. Groups were compared using t tests, chi-square tests, and base rates. RESULTS There were minimal differences in elevation rates or mean scores across modality, supporting the use of TA. CONCLUSIONS This study's findings support the use of the MMPI via TA in ADHD evaluations, with Veterans, and in neuropsychological evaluation settings more generally. Observed elevation rates and mean scores in this study differed notably from those seen in other VA service clinics sampled nationally, which is an area for future investigation.
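A modality comparison of elevation rates like the one described can be run as a chi-square test of independence on a 2 × 2 contingency table. The counts below are illustrative placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: assessment modality (in person, telehealth);
# columns: validity-scale outcome (elevated, not elevated).
# Counts are hypothetical, chosen only to illustrate the procedure.
table = np.array([[40, 210],
                  [45, 255]])

chi2, p, dof, expected = chi2_contingency(table)
```

A non-significant p (with similar row-wise elevation rates) would correspond to the "minimal differences across modality" finding reported in the abstract.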
Affiliation(s)
- Robert D Shura
- Research & Academic Affairs Service Line, Salisbury VA Healthcare System, Salisbury, NC, USA
- Neurocognition Research Lab, VA Mid-Atlantic Mental Illness Research, Education, and Clinical Center, Durham, NC, USA
- Department of Neurology, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Alison Sapp
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Paul B Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Department of Veterans Affairs Eastern Kansas Healthcare, Leavenworth VAMC, Leavenworth, KS, USA
- Timothy W Brearly
- Department of Neurology, Penn State Milton S. Hershey Medical Center, Hershey, PA, USA
- Penn State College of Medicine, Department of Neurology, Hershey, PA, USA
3. Batastini AB, Guyton MR, Bernhard PA, Folk JB, Knuth SB, Kohutis EA, Lugo A, Stanfill ML, Tussey CM. Recommendations for the Use of Telepsychology in Psychology-Law Practice and Research: A Statement by American Psychology-Law Society (APA Division 41). Psychology, Public Policy, and Law 2023; 29:255-271. PMID: 38389918. PMCID: PMC10880951. DOI: 10.1037/law0000394.
Abstract
In response to the COVID-19 pandemic and subsequent impact on psychological work, Division 41 of the American Psychological Association convened a taskforce to provide guidance to its membership regarding the use of technology for practice and research at the intersection of psychology and law. Drawing from existing research in psychology-law and beyond, as well as the first-hand experience of taskforce members, this document outlines foundational guidance to apply technology to forensic and correctional work while acknowledging these settings provide unique challenges to ethical practice. The recommendations provide support for psychologists involved in assessment, treatment, training, and research. However, these recommendations may not exhaustively apply to all areas of psycholegal practice or all forms of technology. Further, these recommendations are intended to be consulted in conjunction with other professional practice guidelines, emerging research, and policy changes that impact the integration of technologies into this work.
Affiliation(s)
- Ashley B. Batastini
- Department of Counseling, Educational Psychology & Research, University of Memphis
4. Pignolo C, Giromini L, Ales F, Zennaro A. Detection of Feigning of Different Symptom Presentations With the PAI and IOP-29. Assessment 2023; 30:565-579. PMID: 34872384. DOI: 10.1177/10731911211061282.
Abstract
This study examined the effectiveness of the negative distortion measures from the Personality Assessment Inventory (PAI) and Inventory of Problems-29 (IOP-29) by investigating data from a community and a forensic sample across three symptom presentations (i.e., feigned depression, posttraumatic stress disorder [PTSD], and schizophrenia). The final sample consisted of 513 community-based individuals and 288 inmates (total N = 801); all were administered the PAI and the IOP-29 in either an honest or a feigning condition. Statistical analyses compared the average scores of each measure by symptom presentation and data source (i.e., community vs. forensic sample) and evaluated diagnostic efficiency statistics. Results suggest that the PAI Negative Impression Management scale and the IOP-29 were the most effective measures across all symptom presentations, whereas the PAI Malingering Index and Rogers Discriminant Function produced less optimal results, especially for feigned PTSD. Practical implications are discussed.
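The "diagnostic efficiency statistics" mentioned here typically include sensitivity, specificity, and overall correct classification at a given cutoff. A minimal sketch with hypothetical scores and cutoff (none of these values come from the published study):

```python
import numpy as np

def diagnostic_efficiency(scores, is_feigner, cutoff):
    """Sensitivity, specificity, and overall correct classification (OCC)
    for a validity scale, treating scores >= cutoff as positive (feigning).
    All inputs are illustrative, not values from the published study."""
    scores = np.asarray(scores, dtype=float)
    is_feigner = np.asarray(is_feigner, dtype=bool)
    positive = scores >= cutoff
    sensitivity = positive[is_feigner].mean()       # hit rate among feigners
    specificity = (~positive[~is_feigner]).mean()   # correct rejections among honest
    occ = (positive == is_feigner).mean()           # overall correct classification
    return sensitivity, specificity, occ

# Hypothetical example: six respondents, cutoff of 10.
sens, spec, occ = diagnostic_efficiency(
    [2, 4, 11, 9, 12, 15], [0, 0, 0, 1, 1, 1], cutoff=10)
```

In practice these statistics are computed across a range of cutoffs and base rates, which is how the less optimal performance of some scales for feigned PTSD would surface.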
5. Holcomb M, Pyne S, Cutler L, Oikle DA, Erdodi LA. Take Their Word for It: The Inventory of Problems Provides Valuable Information on Both Symptom and Performance Validity. J Pers Assess 2022:1-11. PMID: 36041087. DOI: 10.1080/00223891.2022.2114358.
Abstract
This study was designed to compare the validity of the Inventory of Problems-29 (IOP-29) and its newly developed memory module (IOP-M) in 150 patients clinically referred for neuropsychological assessment. Criterion groups were psychometrically derived from established performance and symptom validity tests (PVTs and SVTs). The criterion-related validity of the IOP-29 was compared with that of the Negative Impression Management scale of the Personality Assessment Inventory (NIM-PAI), and the criterion-related validity of the IOP-M was compared with that of Trial 1 of the Test of Memory Malingering (TOMM-1). The IOP-29 correlated significantly more strongly (z = 2.50, p = .01) with criterion PVTs than the NIM-PAI did (IOP-29: r = .34; NIM-PAI: r = .06), while generating similar overall correct classification values (IOP-29: 79-81%; NIM-PAI: 71-79%). Similarly, the IOP-M correlated significantly more strongly (z = 2.26, p = .02) with criterion PVTs than the TOMM-1 (IOP-M: r = .79; TOMM-1: r = .59), generating similar overall correct classification values (IOP-M: 89-91%; TOMM-1: 84-86%). Findings converge with the cumulative evidence that the IOP-29 and IOP-M are valuable additions to comprehensive neuropsychological batteries. Results also confirm that symptom and performance validity are distinct clinical constructs and that domain specificity should be considered when calibrating instruments.
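The z tests comparing two dependent correlations that share a criterion (as in the IOP-29 vs. NIM-PAI comparison above) can be computed with Meng, Rosenthal, and Rubin's (1992) method. This is a sketch of that general technique, not necessarily the authors' exact procedure, and the predictor intercorrelation used below is an assumed placeholder:

```python
import math
from statistics import NormalDist

def meng_z(r1, r2, r12, n):
    """Meng, Rosenthal, & Rubin (1992) test for two dependent correlations.

    r1, r2: each predictor's correlation with the shared criterion.
    r12:    correlation between the two predictors.
    n:      sample size. Returns (z, two-tailed p)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher r-to-z
    rbar2 = (r1 ** 2 + r2 ** 2) / 2                  # mean squared correlation
    f = min((1 - r12) / (2 * (1 - rbar2)), 1.0)      # f is capped at 1
    h = (1 - f * rbar2) / (1 - rbar2)
    z = (z1 - z2) * math.sqrt((n - 3) / (2 * (1 - r12) * h))
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Correlations with the criterion come from the abstract (r = .34 vs. .06,
# n = 150); the predictor intercorrelation r12 = .30 is a made-up placeholder,
# so the resulting z will not exactly match the published z = 2.50.
z, p = meng_z(0.34, 0.06, 0.30, 150)
```

The test statistic is sensitive to r12: the more strongly the two predictors correlate with each other, the more powerful the comparison becomes for a fixed difference in r1 and r2.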
Affiliation(s)
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor
6. Bosi J, Minassian L, Ales F, Akca AYE, Winters C, Viglione DJ, Zennaro A, Giromini L. The sensitivity of the IOP-29 and IOP-M to coached feigning of depression and mTBI: An online simulation study in a community sample from the United Kingdom. Applied Neuropsychology: Adult 2022:1-13. PMID: 36027614. DOI: 10.1080/23279095.2022.2115910.
Abstract
Assessing the credibility of symptoms is critical to neuropsychological assessment in both clinical and forensic settings. To this end, the Inventory of Problems-29 (IOP-29) and its recently added memory module (Inventory of Problems-Memory; IOP-M) appear to be particularly useful, as they provide a rapid and cost-effective measure of both symptom and performance validity. While numerous studies have already supported the effectiveness of the IOP-29, research on its newly developed module, the IOP-M, is much sparser. To address this gap, we conducted a simulation study with a community sample (N = 307) from the United Kingdom. Participants were asked either to (a) respond honestly, (b) pretend to suffer from mTBI, or (c) pretend to suffer from depression. Within each feigning group, half of the participants received a description of the symptoms of the disorder to be feigned, and the other half received both that description and a warning not to over-exaggerate their responses, or their presentation would not be credible. Overall, the results confirmed the effectiveness of the two IOP components, both individually and in combination.
Affiliation(s)
- Jessica Bosi
- Department of Psychology, University of Surrey, Guildford, UK
- Laure Minassian
- Department of Psychology, University of Surrey, Guildford, UK
- Francesca Ales
- Department of Psychology, University of Turin, Turin, Italy
- Christina Winters
- Tilburg Institute for Law, Technology, and Society (TLS), Tilburg University, Tilburg, The Netherlands
7. Giromini L, Pasqualini S, Corgiat Loia A, Pignolo C, Di Girolamo M, Zennaro A. A Survey of Practices and Beliefs of Italian Psychologists Regarding Malingering and Symptom Validity Assessment. Psychological Injury and Law 2022. DOI: 10.1007/s12207-022-09452-2.
Abstract
A few years ago, an article describing the status of Symptom Validity Assessment (SVA) practices and beliefs in European countries reported that there was little research activity in Italy (Merten et al., 2013). The same article highlighted that Italian practitioners were less inclined to use Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs) in their assessments than their colleagues from other major European countries. Considering that several articles on malingering and SVA have been published by Italian authors in recent years, we concluded that an update on the practices and beliefs of Italian professionals regarding malingering and SVA would be beneficial. Accordingly, from a larger survey that examined the general psychological assessment practices and beliefs of Italian professionals, we extracted a subset of items specifically related to malingering and SVA and analyzed the responses of a sample of Italian psychologists with some experience of malingering-related assessments. Taken together, the results of our analyses indicated that even though our respondents use SVTs and PVTs relatively often in their evaluations, at this time they likely place more trust in their own personal observations, impressions, and overall clinical judgment in their SVA practice. Additionally, our results indicated that Italian practitioners with some familiarity with malingering-related evaluations consider malingering to occur in about one-third of psychological assessments in which the evaluee might have an interest in overreporting.
8. Ales F, Meyer GJ, Mihura JL, Loia AC, Pasqualini S, Zennaro A, Giromini L. Can the Rorschach be Administered Remotely? A Review of Options and a Pilot Study Using a Newly Developed R-PAS App. Psychological Injury and Law 2022; 16:1-17. PMID: 35308458. PMCID: PMC8923744. DOI: 10.1007/s12207-022-09447-z.
Abstract
The ongoing COVID-19 pandemic has required psychologists to adopt safety measures such as physical distancing and mask wearing, while other procedures such as travel restrictions or prohibitions on in-person practice and research have fostered the use of telehealth tools. In this article, we review options for administering the Rorschach task via videoconference and provide preliminary data from a new electronic app for remote R-PAS administration, to determine whether remote administration in an electronic form yields different information than in-person administration with the cards in hand. As a pilot study, our focus is on the "first factor" of all Rorschach scores, i.e., Complexity. Data were collected from 60 adult Italian community volunteers, and statistical analyses evaluated the extent to which the average Complexity score departed significantly from R-PAS normative expectations (SS = 100), accompanied by Bayesian likelihoods for supporting the null hypothesis. Results suggest that the general level of complexity shown by test-takers administered the Rorschach remotely with the new R-PAS app closely resembles that previously observed using standard in-person procedures. Tentative analyses of other R-PAS scores suggested normative departures that could be due to effects of the app, testing at home, or responses to the pandemic. We offer recommendations for future research and discuss practical implications.
Affiliation(s)
- Francesca Ales
- Department of Psychology, University of Turin, Via Verdi 10, 10123 Turin, TO, Italy
- Joni L. Mihura
- Department of Psychology, University of Toledo, Toledo, OH, USA
- Andrea Corgiat Loia
- Department of Psychology, University of Turin, Via Verdi 10, 10123 Turin, TO, Italy
- Sara Pasqualini
- Department of Psychology, University of Turin, Via Verdi 10, 10123 Turin, TO, Italy
- Alessandro Zennaro
- Department of Psychology, University of Turin, Via Verdi 10, 10123 Turin, TO, Italy
- Luciano Giromini
- Department of Psychology, University of Turin, Via Verdi 10, 10123 Turin, TO, Italy
9. Giromini L, Viglione DJ. Assessing Negative Response Bias with the Inventory of Problems-29 (IOP-29): a Quantitative Literature Review. Psychological Injury and Law 2022. DOI: 10.1007/s12207-021-09437-7.
10. Symptom and Performance Validity Assessment in European Countries: an Update. Psychological Injury and Law 2021; 15:116-127. PMID: 34849185. PMCID: PMC8612718. DOI: 10.1007/s12207-021-09436-8.
Abstract
In 2013, a special issue of the Spanish journal Clínica y Salud published a review on symptom and performance validity assessment in European countries (Merten et al. in Clínica y Salud, 24(3), 129–138, 2013). At that time, developments were judged to be in their infancy in many countries, with major publication activity stemming from only four countries: Spain, The Netherlands, Great Britain, and Germany. As an introduction to a special issue of Psychological Injury and Law, this is an updated report of developments during the last 10 years. In that period, research activity has reached a level where it is difficult to follow all developments: some validity measures have been newly developed, others have been adapted for European languages, and validity assessment has found a much stronger place in real-world evaluation contexts. In addition to updates from the four nations mentioned above, reports are now also given from Austria, Italy, and Switzerland.