1
Ingram PB, Armistead-Jehle P, Childers LG, Herring TT. Cross validation of the response bias scale and the response bias scale-19 in active-duty personnel: use on the MMPI-2-RF and MMPI-3. J Clin Exp Neuropsychol 2024; 46:141-151. [PMID: 38493366 DOI: 10.1080/13803395.2024.2330727]
Abstract
The Response Bias Scale (RBS) is the central measure of cognitive over-reporting in the MMPI family of instruments. Relative to other clinical populations, research evaluating the detection of over-reporting is more limited in Veteran and Active-Duty personnel, which has produced some psychometric variability across studies. Some have suggested that the original scale construction methods resulted in items that negatively impact classification accuracy and, in response, crafted an abbreviated version of the RBS (RBS-19; Ratcliffe et al., 2022; Spencer et al., 2022). In addition, the most recent edition of the MMPI is based on new normative data, which limits the ability to use existing literature to determine effective cut-scores for the RBS (despite all items having been retained across MMPI versions). To date, no published research exists for the MMPI-3 RBS. The current study examined the utility of the RBS and the RBS-19 in a sample of Active-Duty personnel (n = 186) referred for neuropsychological evaluation. Using performance validity tests as the study criterion, we found that the RBS-19 was generally equivalent to the RBS in classification. Correlations with other MMPI-2-RF over- and under-reporting symptom validity tests were slightly stronger for the RBS-19. Implications and directions for research and practice with the RBS/RBS-19 are discussed, along with implications for neuropsychological assessment and response validity theory.
Affiliation(s)
- Paul B Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Dwight D. Eisenhower Veteran Affairs Medical Center, Eastern Kansas Veteran Healthcare System, Leavenworth, KS, USA
- Lucas G Childers
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Tristan T Herring
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
2
Ingram PB, Keen MA, Greene TE, Morris C, Armistead-Jehle PJ. Development and initial validation of the Scale of Scales (SOS) overreporting scores for the MMPI family of instruments. J Clin Exp Neuropsychol 2024; 46:95-110. [PMID: 38726688 DOI: 10.1080/13803395.2024.2320453]
Abstract
Overreporting is a common problem that complicates psychological evaluations. A challenge facing the effective detection of overreporting is that many of the identified strategies (e.g., symptom severity approaches; see Rogers & Bender, 2020) are not incorporated into broadband measures of personality and psychopathology (e.g., the Minnesota Multiphasic Personality Inventory family of instruments). While recent efforts have worked to incorporate some of these newer strategies, no such work has been conducted on the MMPI-3. For instance, recent symptom severity approaches have been used to identify patterns of multivariate base rate "skyline" elevations on the BASC, and similar strategies have been adopted into the PAI to measure psychopathology (Multi-Feigning Index; Gaines et al., 2013) and cognitive symptoms (Cognitive Bias Scale of Scales; Boress et al., 2022b). This study used data from a simulation study (n = 318) and an Active-Duty (AD) clinical sample (n = 290) to develop and cross-validate such a scale on the MMPI-2-RF and MMPI-3. Results suggest that the MMPI Scale of Scales (SOS) scores perform comparably to existing measures of overreporting on the MMPI-2-RF and MMPI-3 and incrementally predict a PVT-classified "known-group" of Active-Duty service members. Effects were generally large in magnitude. Classification accuracy achieved the desired specificity (.90) and approximated the expected sensitivity (.30). Implications of these findings are discussed, emphasizing how alternative overreporting detection strategies may be useful to consider for the MMPI; these strategies have room for expansion and refinement.
Affiliation(s)
- Paul B Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas
- Eastern Kansas Veterans Affairs Healthcare System, Leavenworth, Kansas
- Megan A Keen
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas
- Tina E Greene
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas
- Cole Morris
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas
3
Ingram PB, Armistead-Jehle P, Herring TT, Morris CS. Cross validation of the Personality Assessment Inventory (PAI) Cognitive Bias Scale of Scales (CB-SOS) over-reporting indicators in a military sample. Mil Psychol 2024; 36:192-202. [PMID: 37651693 PMCID: PMC10880507 DOI: 10.1080/08995605.2022.2160151]
Abstract
Following the development of the Cognitive Bias Scale (CBS), three other cognitive over-reporting indicators were created. This study cross-validates these new Cognitive Bias Scale of Scales (CB-SOS) measures in a military sample and contrasts their performance with the CBS. We analyzed data from 288 active-duty soldiers who underwent neuropsychological evaluation. Groups were established based on performance validity test (PVT) failure. Medium effects (d = .71 to .74) were observed between those passing and failing PVTs. The CB-SOS scales have high specificity (≥.90) but low sensitivity across the suggested cut scores. While all CB-SOS scales were able to achieve .90 specificity, lower cut scores were typically needed. The CBS demonstrated incremental validity beyond CB-SOS-1 and CB-SOS-3; only CB-SOS-2 was incremental beyond the CBS. In a military sample, the CB-SOS scales have more limited sensitivity than in their original validation, indicating an area of limited utility despite easier calculation. The CBS performs comparably, if not better, than the CB-SOS scales. Differences in CB-SOS-2's performance between this study and its initial validation suggest that its psychometric properties may be sample dependent. Given their ease of calculation and relatively high specificity, our study supports interpreting elevated CB-SOS scores as indicating examinees who are likely to fail concurrent PVTs.
Affiliation(s)
- Paul B. Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas, USA
- Dwight D. Eisenhower Veteran Affairs Medical Center, Eastern Kansas Veteran Healthcare System, Leavenworth, Kansas, USA
- Tristan T. Herring
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas, USA
- Cole S. Morris
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas, USA
4
Boress K, Gaasedelen O, Kim JH, Basso MR, Whiteside DM. Examination of the relationship between symptom and performance validity measures across referral subtypes. J Clin Exp Neuropsychol 2024; 46:162-171. [PMID: 37791494 DOI: 10.1080/13803395.2023.2261633]
Abstract
INTRODUCTION The extent to which performance validity tests (PVTs) and symptom validity tests (SVTs) measure separate constructs is unclear. Prior research using the Minnesota Multiphasic Personality Inventory (MMPI-2 & MMPI-2-RF) suggested that PVTs and SVTs are separate but related constructs. However, the relationship between Personality Assessment Inventory (PAI) SVTs and PVTs has not been explored. This study aimed to replicate previous MMPI research using the PAI, exploring the relationship between PVTs and overreporting SVTs across three subsamples: neurodevelopmental (attention-deficit/hyperactivity disorder (ADHD)/learning disorder), psychiatric, and mild traumatic brain injury (mTBI). METHODS Participants included 561 consecutive referrals who completed the Test of Memory Malingering (TOMM) and the PAI. Three subgroups were created based on referral question. The relationship between PAI SVTs and the PVT was evaluated through multiple regression analysis. RESULTS The relationship between PAI symptom overreporting SVTs, including Negative Impression Management (NIM), Malingering Index (MAL), and Cognitive Bias Scale (CBS), and PVTs varied by referral subgroup. Specifically, overreporting on the CBS, but not NIM or MAL, significantly predicted poorer PVT performance in the full sample and the mTBI sample. In contrast, none of the overreporting SVTs significantly predicted PVT performance in the ADHD/learning disorder sample, whereas all SVTs predicted PVT performance in the psychiatric sample. CONCLUSIONS The results partially replicated prior research comparing SVTs and PVTs and suggested that the constructs measured by SVTs and PVTs vary depending upon population. The results support the necessity of both PVTs and SVTs in clinical neuropsychological practice.
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Jeong Hye Kim
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Douglas M Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
5
Whiteside DM, Basso MR. Innovations in performance and symptom validity testing: introduction to symptom validity section of the special issue. J Clin Exp Neuropsychol 2024; 46:81-85. [PMID: 38654620 DOI: 10.1080/13803395.2024.2346022]
Affiliation(s)
- Michael R Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
6
Dong H, Koerts J, Pijnenborg GHM, Scherbaum N, Müller BW, Fuermaier ABM. Cognitive Underperformance in a Mixed Neuropsychiatric Sample at Diagnostic Evaluation of Adult ADHD. J Clin Med 2023; 12:6926. [PMID: 37959391 PMCID: PMC10647211 DOI: 10.3390/jcm12216926]
Abstract
(1) Background: The clinical assessment of attention-deficit/hyperactivity disorder (ADHD) in adulthood is known to show non-trivial base rates of noncredible performance and requires thorough validity assessment. (2) Objectives: The present study estimated base rates of noncredible performance in clinical evaluations of adult ADHD on one or more of 17 embedded validity indicators (EVIs). This study further examines the effect of the order of test administration on EVI failure rates, the association between cognitive underperformance and symptom overreporting, and the prediction of cognitive underperformance by clinical information. (3) Methods: A mixed neuropsychiatric sample (N = 464, ADHD = 227) completed a comprehensive neuropsychological assessment battery on the Vienna Test System (VTS; CFADHD). Test performance allows the computation of 17 embedded performance validity indicators (PVTs) derived from eight different neuropsychological tests. Further, all participants completed several self- and other-report symptom rating scales assessing depressive symptoms and cognitive functioning. The Conners' Adult ADHD Rating Scale and the Beck Depression Inventory-II were administered to derive embedded symptom validity measures (SVTs). (4) Results and conclusion: Noncredible performance occurs in a sizeable proportion, about 10% to 30% of individuals, throughout the entire battery. Tests of attention and concentration appear to be the most adequate and sensitive for detecting underperformance. Cognitive underperformance represents a coherent construct and seems dissociable from symptom overreporting. These results emphasize the importance of administering multiple PVTs at different time points, and they promote more accurate calculation of the positive and negative predictive values of a given validity measure for noncredible performance during clinical assessments. Future studies should further examine whether and how the present results hold in other clinical populations by implementing rigorous reference standards of noncredible performance, characterizing those failing PVT assessments, and differentiating between underlying motivations.
Affiliation(s)
- Hui Dong
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, University of Groningen, 9712 TS Groningen, The Netherlands
- Janneke Koerts
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, University of Groningen, 9712 TS Groningen, The Netherlands
- Gerdina H. M. Pijnenborg
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, University of Groningen, 9712 TS Groningen, The Netherlands
- Norbert Scherbaum
- LVR University Hospital, Department of Psychiatry and Psychotherapy, Faculty of Medicine, University of Duisburg-Essen, 45147 Essen, Germany
- Bernhard W. Müller
- LVR University Hospital, Department of Psychiatry and Psychotherapy, Faculty of Medicine, University of Duisburg-Essen, 45147 Essen, Germany
- Department of Psychology, University of Wuppertal, 42119 Wuppertal, Germany
- Anselm B. M. Fuermaier
- Department of Clinical and Developmental Neuropsychology, Faculty of Behavioral and Social Sciences, University of Groningen, 9712 TS Groningen, The Netherlands
7
Shura RD, Ingram PB, Miskey HM, Martindale SL, Rowland JA, Armistead-Jehle P. Validation of the personality assessment inventory (PAI) cognitive bias (CBS) and cognitive bias scale of scales (CB-SOS) in a post-deployment veteran sample. Clin Neuropsychol 2023; 37:1548-1565. [PMID: 36271822 DOI: 10.1080/13854046.2022.2131630]
Abstract
Objective: The present study evaluated the function of four cognitive symptom validity scales on the Personality Assessment Inventory (PAI), the Cognitive Bias Scale (CBS) and the Cognitive Bias Scale of Scales (CB-SOS) 1, 2, and 3, in a sample of Veterans who volunteered for a study of neurocognitive functioning. Method: 371 Veterans (88.1% male, 66.1% White) completed a battery including the Miller Forensic Assessment of Symptoms Test (M-FAST), the Word Memory Test (WMT), and the PAI. Independent-samples t-tests compared mean differences on the cognitive bias scales between valid and invalid groups on the M-FAST and WMT. Area under the curve (AUC), sensitivity, specificity, and hit rate across various scale point-estimates were used to evaluate the classification accuracy of the CBS and CB-SOS scales. Results: Group differences were significant, with moderate effect sizes for all cognitive bias scales between the WMT-classified groups (d = .52-.55) and large effect sizes between the M-FAST-classified groups (d = 1.27-1.45). AUC effect sizes were moderate across the WMT-classified groups (.650-.676) and large across the M-FAST-classified groups (.816-.854). When specificity was set to .90, sensitivity was higher for the M-FAST and the CBS performed the best (sensitivity = .42). Conclusion: The CBS and CB-SOS scales appear to detect symptom invalidity better than performance invalidity in Veterans, using cutoff scores similar to those found in prior studies with non-Veterans.
Affiliation(s)
- Robert D Shura
- W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
- VA Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
- Paul B Ingram
- Texas Tech University, Lubbock, TX, USA
- Dwight D. Eisenhower Veteran Affairs Medical Center, Eastern Kansas Veteran Healthcare System, Leavenworth, KS, USA
- Holly M Miskey
- W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
- VA Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
- Sarah L Martindale
- W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
- VA Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
- Jared A Rowland
- W. G. (Bill) Hefner VA Healthcare System, Salisbury, NC, USA
- VA Mid-Atlantic (VISN 6) Mental Illness Research, Education, and Clinical Center (MIRECC), Durham, NC, USA
- Wake Forest School of Medicine, Winston-Salem, NC, USA
8
Jinkerson JD, Lu LH, Kennedy J, Armistead-Jehle P, Nelson JT, Seegmiller RA. Grooved Pegboard adds incremental value over memory-apparent performance validity tests in predicting psychiatric symptom report. Appl Neuropsychol Adult 2023:1-9. [PMID: 37094095 DOI: 10.1080/23279095.2023.2192409]
Abstract
The present study evaluated whether the Grooved Pegboard (GPB), when used as a performance validity test (PVT), can incrementally predict psychiatric symptom report elevations beyond memory-apparent PVTs. Participants (N = 111) were military personnel, predominantly White (84%) and male (76%), with a mean age of 43 (SD = 12) and an average of 16 years of education (SD = 2). Individuals with disorders potentially compromising motor dexterity were excluded. Participants were administered the GPB, three memory-apparent PVTs (Medical Symptom Validity Test, Non-Verbal Medical Symptom Validity Test, Reliable Digit Span), and a symptom validity test (Personality Assessment Inventory Negative Impression Management [NIM]). Results from the three memory-apparent PVTs were entered into a model for predicting NIM, where failure of two or more PVTs was categorized as evidence of non-credible responding. Hierarchical regression revealed that non-dominant hand GPB T-score incrementally predicted NIM beyond memory-apparent PVTs (F(2,108) = 16.30, p < .001; R2 change = .05, β = -0.24, p < .01). In a second hierarchical regression, GPB performance was dichotomized into pass or fail using T-score cutoffs (≤29 for either hand, ≤31 for both). Non-dominant hand GPB again predicted NIM beyond memory-apparent PVTs (F(2,108) = 18.75, p < .001; R2 change = .08, β = -0.28, p < .001). Results indicated that noncredible/failing GPB performance adds incremental value over memory-apparent PVTs in predicting psychiatric symptom report.
Affiliation(s)
- Lisa H Lu
- Brooke Army Medical Center, JBSA - Ft Sam Houston, San Antonio, TX, USA
- TBI Center of Excellence (TBICoE), Arlington, VA, USA
- General Dynamics Information Technology, Falls Church, VA, USA
- Jan Kennedy
- Brooke Army Medical Center, JBSA - Ft Sam Houston, San Antonio, TX, USA
- TBI Center of Excellence (TBICoE), Arlington, VA, USA
- General Dynamics Information Technology, Falls Church, VA, USA
9
Aparcero M, Picard EH, Nijdam-Jones A, Rosenfeld B. Comparing the Ability of MMPI-2 and MMPI-2-RF Validity Scales to Detect Feigning: A Meta-Analysis. Assessment 2023; 30:744-760. [PMID: 34991350 DOI: 10.1177/10731911211067535]
Abstract
Several meta-analyses of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) have examined these instruments' ability to detect symptom exaggeration or feigning. However, limited research has directly compared the effectiveness of corresponding scales across the two instruments. This study used a moderated meta-analysis to compare 109 MMPI-2 and 41 MMPI-2-RF feigning studies, 83 (56.46%) of which were not included in previous meta-analyses. Although there were differences between the two test versions, with most MMPI-2 validity scales generating larger effect sizes than the corresponding MMPI-2-RF scales, these differences were not significant after controlling for study design and type of symptoms being feigned. Additional analyses showed that the F and Fp-r scales generated the largest effect sizes in identifying feigned psychiatric symptoms, while the FBS and RBS were better at detecting exaggerated medical symptoms. The findings indicate that the MMPI-2 validity scales and their MMPI-2-RF counterparts were similarly effective in differentiating genuine responders from those exaggerating or feigning psychiatric and medical symptoms. These results provide reassurance for the use of both the MMPI-2 and MMPI-2-RF in settings where symptom exaggeration or feigning is likely. Findings are discussed in the context of the recently released MMPI-3.
Affiliation(s)
- Emilie H Picard
- University of Virginia Health System, Charlottesville, VA, USA
10
Comprehensive Analysis of MMPI-2-RF Symptom Validity Scales and Performance Validity Test Relationships in a Diverse Mixed Neuropsychiatric Setting. Psychol Inj Law 2023; 16:61-72. [PMID: 36348958 PMCID: PMC9633118 DOI: 10.1007/s12207-022-09467-9]
Abstract
The utility of symptom (SVT) and performance (PVT) validity tests has been independently established in neuropsychological evaluations, yet research on the relationship between these two types of validity indices is limited to circumscribed populations and measures. This study examined the relationship between SVTs on the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) and PVTs in a mixed neuropsychiatric setting. This cross-sectional study included data from 181 diagnostically and demographically diverse patients with neuropsychiatric conditions referred for outpatient clinical neuropsychological evaluation at an academic medical center. All patients were administered a uniform neuropsychological battery, including the MMPI-2-RF and five PVTs (i.e., Dot Counting Test; Medical Symptom Validity Test; Reliable Digit Span; Test of Memory Malingering-Trial 1; Word Choice Test). Nonsignificant associations emerged between SVT and PVT performance. Although the Response Bias Scale was most predictive of PVT performance, MMPI-2-RF SVTs generally had low classification accuracy for predicting PVT performance. Neuropsychological test performance was related to MMPI-2-RF SVT status only when overreporting elevations were at extreme scores. The current study further supports that SVTs and PVTs measure unique and dissociable constructs among diverse patients with neuropsychiatric conditions, consistent with literature from other clinical contexts. Therefore, objective evidence of symptom overreporting on MMPI-2-RF SVTs cannot be interpreted as definitively indicating invalid performance on tests of neurocognitive abilities. As such, clinicians should include both SVTs and PVTs as part of a comprehensive neuropsychological evaluation as they provide unique information regarding performance and symptom validity.
11
Obolsky MA, Resch ZJ, Fellin TJ, Cerny BM, Khan H, Bing-Canar H, McCollum K, Lee RC, Fink JW, Pliskin NH, Soble JR. Concordance of Performance and Symptom Validity Tests Within an Electrical Injury Sample. Psychol Inj Law 2022. [DOI: 10.1007/s12207-022-09469-7]
12
Spencer RJ, Hale AC, Campbell EB, Ratcliffe LN. Examining the item composition of the RBS in veterans undergoing neuropsychological evaluation. Appl Neuropsychol Adult 2022:1-5. [PMID: 36369757 DOI: 10.1080/23279095.2022.2142123]
Abstract
The Response Bias Scale (RBS) is a measure of protocol validity composed of items from the Minnesota Multiphasic Personality Inventory-2. The RBS has been successfully cross-validated as a whole, but the composition of the scale was not reexamined until recently, when three types of items were identified. In this study we sought to examine the reliability of the scale as a whole, as well as of the items that are (a) empirically supported and conceptually similar (ES/CS), (b) empirically supported but not conceptually similar (ES/NS), and (c) not empirically supported (NES). Participants included 56 veterans undergoing neuropsychological evaluation for suspected traumatic brain injury. Results generally replicated Ratcliffe et al.'s finding that removing key NES items improved the internal consistency of the RBS from 0.706 to 0.747. Examined separately, the ES/CS and ES/NS items had internal consistencies of 0.629 and 0.605, respectively. One of the nine NES items had a strong corrected item-total correlation, but none of the remaining eight exceeded 0.194; the NES items as a set had an internal consistency of 0.177. Although the RBS is well validated in detecting non-credible cognitive presentations, it may prove even more valuable after further item refinement whereby items detracting from its reliability and validity are excised.
Affiliation(s)
- Robert J Spencer
- Mental Health, VA Ann Arbor Healthcare System, Ann Arbor, MI, USA
- Andrew C Hale
- Neuropsychology, Ann Arbor VA Medical Center, Ann Arbor, MI, USA
13
Using machine-learning strategies to solve psychometric problems. Sci Rep 2022; 12:18922. [PMID: 36344737 PMCID: PMC9640572 DOI: 10.1038/s41598-022-23678-9]
Abstract
Validating scales for clinical use is a common procedure in medicine and psychology. Through the application of computational methods, we present a new strategy for estimating construct validity and criterion validity. XGBoost, Random Forest, and Support-Vector machine learning algorithms were employed to make predictions based on the pattern of participants' responses, with computational experiments systematically benchmarked against artificial experiments whose results are known in advance. According to these findings, these approaches are capable of assessing construct and criterion validity and therefore could provide an additional layer of evidence alongside traditional validation approaches. In particular, this study examined the extent to which measured items are inferable from theoretically related items, and the extent to which the information carried by a given construct can be translated into other theoretically compatible normative scales based on other constructs (thereby providing information about construct validity), as well as the replicability of clinical decision rules across several partitions (thereby providing information about criterion validity).
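The core idea in the abstract above, that items measuring a shared construct should be predictable from theoretically related items, can be sketched with off-the-shelf tools. The following is a minimal illustration on simulated data, not the authors' actual pipeline: the number of items, sample size, and Random Forest settings are all assumptions made for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Simulate 5 items driven by one shared latent construct (illustrative assumption)
rng = np.random.default_rng(0)
n = 300
latent = rng.normal(size=n)
items = latent[:, None] + rng.normal(scale=0.5, size=(n, 5))

# Construct-validity check: hold one item out and predict it from its siblings.
# A coherent scale should yield cross-validated R^2 well above zero.
target, predictors = items[:, 0], items[:, 1:]
model = RandomForestRegressor(n_estimators=200, random_state=0)
r2 = cross_val_score(model, predictors, target, cv=5, scoring="r2").mean()
print(round(r2, 2))
```

Repeating the same check with construct-unrelated (e.g., shuffled) items should drive R² toward zero, which is the role the "artificial experiments whose results are known in advance" play in the strategy the abstract describes.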
14
Boress K, Gaasedelen OJ, Croghan A, Johnson MK, Caraher K, Basso MR, Whiteside DM. Replication and cross-validation of the personality assessment inventory (PAI) cognitive bias scale (CBS) in a mixed clinical sample. Clin Neuropsychol 2022; 36:1860-1877. [PMID: 33612093 PMCID: PMC8454137 DOI: 10.1080/13854046.2021.1889681]
Abstract
Objective: This study is a cross-validation of the Cognitive Bias Scale (CBS) from the Personality Assessment Inventory (PAI), a ten-item scale designed to assess symptom endorsement associated with performance validity test failure in neuropsychological samples. The study utilized a mixed neuropsychological sample of consecutively referred patients at a large academic medical center in the Midwest. Participants and Methods: Participants were 332 patients who completed embedded and free-standing performance validity tests (PVTs) and the PAI. Pass and fail groups were created based on PVT performance to evaluate classification accuracy of the CBS. Results: The results were generally consistent with the initial study for overall classification accuracy, sensitivity, and cut-off score. Consistent with the validation study, CBS had better classification accuracy than the original PAI validity scales and a comparable effect size to that obtained in the original validation publication; however, the Somatic Complaints scale (SOM) and the Conversion subscale (SOM-C) also demonstrated good classification accuracy. The CBS had incremental predictive ability compared to existing PAI scales. Conclusions: The results supported the CBS, but further research is needed on specific populations. Findings from this present study also suggest the relationship between conversion tendencies and PVT failure may be stronger in some geographic locations or population types (forensic versus clinical patients).
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
| | | | - Anna Croghan
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
| | - Marcie King Johnson
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, USA
| | - Kristen Caraher
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, USA
| | - Michael R. Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, USA
- Douglas M. Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, USA
15
Boress K, Gaasedelen OJ, Croghan A, Johnson MK, Caraher K, Basso MR, Whiteside DM. Validation of the Personality Assessment Inventory (PAI) scale of scales in a mixed clinical sample. Clin Neuropsychol 2022; 36:1844-1859. [PMID: 33730975 PMCID: PMC8474121 DOI: 10.1080/13854046.2021.1900400] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Objective: This exploratory study examined the classification accuracy of three derived scales aimed at detecting cognitive response bias in neuropsychological samples. The derived scales are composed of existing scales from the Personality Assessment Inventory (PAI). A mixed clinical sample of consecutive outpatients referred for neuropsychological assessment at a large Midwestern academic medical center was utilized. Participants and Methods: Participants included 332 patients who completed the study's embedded and free-standing performance validity tests (PVTs) and the PAI. PASS and FAIL groups were created based on PVT performance to evaluate the classification accuracy of the derived scales. Three new scales, the Cognitive Bias Scale of Scales 1-3 (CB-SOS1-3), were derived by combining existing scales, either by summing the scales and dividing by the number of scales summed, or by logistically deriving a variable from the contributions of several scales. Results: All of the newly derived scales significantly differentiated between PASS and FAIL groups, and all demonstrated acceptable classification accuracy (i.e., CB-SOS1 AUC = 0.72; CB-SOS2 AUC = 0.73; CB-SOS3 AUC = 0.75). Conclusions: This exploratory study demonstrates that attending to scale-level PAI data may be a promising avenue for improving prediction of PVT failure.
Affiliation(s)
- Kaley Boress
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Anna Croghan
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Marcie King Johnson
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
- Kristen Caraher
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Michael R. Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
- Douglas M. Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
16
Weitzner DS, Miller BI, Webber TA. Embedded cognitive and emotional/affective self-reported symptom validity indices on the patient competency rating scale. J Clin Exp Neuropsychol 2022; 44:533-549. [PMID: 36369702 DOI: 10.1080/13803395.2022.2138270] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
OBJECTIVE Although there is an abundance of research on stand-alone and embedded performance validity tests and stand-alone symptom validity tests (SVTs), less emphasis has been placed on embedded SVTs. The goal of the current study was to examine the ability of embedded indicators within the Patient Competency Rating Scale (PCRS) to separately detect invalid cognitive and/or emotional/affective symptom responding. METHOD Participants included 299 veterans assessed in a VA medical center epilepsy monitoring unit from 2013-2017 (mean age = 48.8 years, SD = 13.5 years). Two SVT composites were created: self-reported cognitive symptom validity (SVT-C) and self-reported emotional/affective symptom validity (SVT-E). Groups were compared on PCRS total and index scores (i.e., cognitive, activities of daily living, emotional, and interpersonal competencies) using ANOVAs. Receiver operating characteristic (ROC) curve analyses assessed the classification accuracy of the PCRS total and index scores for SVT-C and SVT-E. RESULTS In ANOVAs, SVT-C was significantly associated with all PCRS indices, while SVT-E was only significantly associated with the PCRS total, emotional, and interpersonal competency indices. Although PCRS-T ≤ 90 had the strongest classification of SVT-C and SVT-E (specificities: .90; sensitivities: .44 to .50), PCRS index scores showed suggestive evidence of domain specificity, with PCRS-ADL ≤ 22, PCRS-C ≤ 20, and PCRS-CADL ≤ 45 best classifying SVT-C (specificities: .92; sensitivities: .33) and PCRS-E ≤ 18 best classifying the SVT-E group (specificity: .93; sensitivity: .40). CONCLUSION Results suggest the PCRS may be used to obtain clinically useful information while including embedded indicators that can assess cognitive and/or emotional/affective symptom invalidity.
Affiliation(s)
- Daniel S Weitzner
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Brian I Miller
- Neurology Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
- Troy A Webber
- Mental Health Care Line, Michael E. DeBakey VA Medical Center, Houston, TX, USA
- Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, USA
17
Tylicki JL, Gervais RO, Ben-Porath YS. Examination of the MMPI-3 over-reporting scales in a forensic disability sample. Clin Neuropsychol 2022; 36:1878-1901. [PMID: 33319631 DOI: 10.1080/13854046.2020.1856414] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Objective: The aim of this investigation was to provide information about the utility of the newly revised and renormed Minnesota Multiphasic Personality Inventory-3 (MMPI-3) over-reporting scales in a forensic disability sample. Method: Participants consisted of 550 non-head-injury disability-related referrals (95.6% for workers' compensation) who were primarily diagnosed with an adjustment disorder, depressive disorder, or posttraumatic stress disorder. Criterion measures included performance validity indicators and non-MMPI symptom validity indicators. Results: Correlation analyses showed that validity scale F was most strongly associated with non-MMPI symptom validity indicators, whereas F, Fs, FBS, and RBS were comparable to each other in their associations with performance validity indicators. Group mean comparisons between Pass and Fail PVT groups showed that RBS consistently yielded the largest effect sizes. Using established structured criteria for Malingered Neurocognitive Dysfunction (MND), additional group mean comparisons showed that RBS, followed by Fs, F, and FBS, performed well in differentiating genuine responders from MND examinees. Classification accuracy estimates indicated that the MMPI-3 over-reporting scales performed well in the prediction of Probable/Definite MND and, as expected, less well in the prediction of Possible MND. Conclusions: Practical applications, study limitations, and directions for future research are discussed. The overall findings from this study provide empirical support for the utility of the MMPI-3 over-reporting scales in detecting negative response bias in forensic disability evaluations.
Affiliation(s)
- Jessica L Tylicki
- Department of Psychological Sciences, Kent State University, Kent, OH, USA
- Roger O Gervais
- Neurobehavioural Associates, Edmonton, AB, Canada
- Department of Educational Psychology, University of Alberta, Edmonton, AB, Canada
18
Ratcliffe LN, Hale AC, Gradwohl BD, Spencer RJ. Preliminary findings from reevaluating the MMPI Response Bias Scale items in veterans undergoing neuropsychological evaluation. Appl Neuropsychol Adult 2022:1-8. [PMID: 35917583 DOI: 10.1080/23279095.2022.2106571] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The Response Bias Scale (RBS) was developed to predict non-credible cognitive presentations among disability claimants without head injury. Developers used empirical keying, which is independent of apparent content, to select items from the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) item pool that distinguished between individuals passing or failing performance validity tests (PVTs). No study has examined which of these items would have psychometric value when used in clinical neuropsychological evaluations. This study reexamined items comprising the RBS with reference to manifest item content, internal consistency, PVTs, and a symptom validity test (SVT) in a sample of 173 predominantly White male veterans (mean age = 50.70 years, mean education = 13.73 years) in a VA outpatient neuropsychology clinic. Participants completed the MMPI-2 Restructured Form (MMPI-2-RF), PVTs, and an SVT. The 28-item RBS appears to contain three types of items: those that manifestly address cognitive functioning, those that are supported but do not appear to address cognitive functioning, and nine items that were unrelated to cognition and not statistically supported. The 19 empirically supported items, or RBS-19, predicted PVT and SVT failures marginally better than the RBS. Both the RBS and RBS-19 had stronger relationships with SVTs than with PVTs. Although the removal of the nine problematic items improved the diagnostic accuracy of the scale, it still did not reach the level that is generally considered clinically optimal. The RBS-19 offers a measure with improved internal consistency and predictive validity compared to the RBS and warrants additional research.
Affiliation(s)
- Lauren N Ratcliffe
- Mental Health Service, VA Ann Arbor Healthcare System, Ann Arbor, MI, USA
- Department of Psychiatry, Michigan Medicine, Ann Arbor, MI, USA
- Department of Clinical Psychology, Mercer University College of Health Professions, Atlanta, GA, USA
- Andrew C Hale
- Mental Health Service, VA Ann Arbor Healthcare System, Ann Arbor, MI, USA
- Department of Psychiatry, Michigan Medicine, Ann Arbor, MI, USA
- Brian D Gradwohl
- Mental Health Service, VA Ann Arbor Healthcare System, Ann Arbor, MI, USA
- Department of Psychiatry, Michigan Medicine, Ann Arbor, MI, USA
- Robert J Spencer
- Mental Health Service, VA Ann Arbor Healthcare System, Ann Arbor, MI, USA
- Department of Psychiatry, Michigan Medicine, Ann Arbor, MI, USA
19
Morris NM, Ingram PB, Armistead-Jehle P. Relationship of personality assessment inventory (PAI) over-reporting scales to performance validity testing in a military neuropsychological sample. Mil Psychol 2022. [DOI: 10.1080/08995605.2021.2013059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Nicole M. Morris
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas, USA
- Paul B. Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, Texas, USA
- Dwight D. Eisenhower Veteran Affairs Medical Center, Eastern Kansas Veteran Healthcare System, Leavenworth, Kansas, USA
20
Nussbaum S, May N, Cutler L, Abeare CA, Watson M, Erdodi LA. Failing Performance Validity Cutoffs on the Boston Naming Test (BNT) Is Specific, but Insensitive to Non-Credible Responding. Dev Neuropsychol 2022; 47:17-31. [PMID: 35157548 DOI: 10.1080/87565641.2022.2038602] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
This study was designed to examine alternative validity cutoffs on the Boston Naming Test (BNT). Archival data were collected from 206 adults assessed in a medicolegal setting following a motor vehicle collision. Classification accuracy was evaluated against three criterion PVTs. The first cutoff to achieve minimum specificity (.87-.88) was T ≤ 35, at .33-.45 sensitivity. T ≤ 33 improved specificity (.92-.93) at .24-.34 sensitivity. BNT validity cutoffs correctly classified 67-85% of the sample. Failing the BNT was unrelated to self-reported emotional distress. Although constrained by its low sensitivity, the BNT remains a useful embedded PVT.
Affiliation(s)
- Shayna Nussbaum
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Natalie May
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laura Cutler
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Mark Watson
- Mark S. Watson Psychology Professional Corporation, Mississauga, ON, Canada
- Laszlo A Erdodi
- Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
21
Choca JP, Pignolo C. Assessing Negative Response Bias with the Millon Clinical Multiaxial Inventory-IV (MCMI-IV): a Review of the Literature. Psychol Inj Law 2022. [DOI: 10.1007/s12207-022-09442-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
22
Wygant DB, Tylicki JL, Disney LF, Connelly AI. Structured Interview of Reported Symptoms-2nd Edition (SIRS-2): Use and Admissibility in Forensic Mental Health Assessment. J Pers Assess 2021; 104:265-280. [PMID: 34871131 DOI: 10.1080/00223891.2021.2006673] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
Assessment of symptom feigning is paramount in forensic psychological and psychiatric assessment. The Structured Interview of Reported Symptoms, 2nd Edition (SIRS-2; Rogers et al., 2010) is a revision of the original SIRS (Rogers et al., 1992) and was developed to assess feigned psychiatric symptoms. The current paper reviews use of the SIRS-2 in forensic assessment, specifically addressing topics such as translations of the instrument, its use in assessing psychiatric and cognitive feigning, and its use in special populations. The empirical foundation and psychometric properties of the SIRS-2 are also covered. The SIRS-2 was revised in part to reduce false-positive classifications of feigning. Research suggests that this goal was largely accomplished, albeit at the expense of reduced sensitivity. The paper also provides a review of federal and state appellate cases that mention the SIRS-2. Notably, most cases that cite the SIRS-2 do not actually center on it, and the test's admissibility has never been directly challenged. The paper concludes with a discussion of expert testimony concerning the SIRS-2.
23
Assessing Negative Response Bias: a Review of the Noncredible Overreporting Scales of the MMPI-2-RF and MMPI-3. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09435-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
24
25
Future Directions in Performance Validity Assessment to Optimize Detection of Invalid Neuropsychological Test Performance: Special Issue Introduction. Psychol Inj Law 2021; 14:227-231. [PMID: 34567346 PMCID: PMC8455301 DOI: 10.1007/s12207-021-09425-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2021] [Accepted: 09/13/2021] [Indexed: 11/27/2022]
26
Modiano YA, Taiwo Z, Pastorek NJ, Webber TA. The Structured Inventory of Malingered Symptomatology Amnestic Disorders Scale (SIMS-AM) Is Insensitive to Cognitive Impairment While Accurately Identifying Invalid Cognitive Symptom Reporting. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09420-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
27
Relations Among Performance and Symptom Validity, Mild Traumatic Brain Injury, and Posttraumatic Stress Disorder Symptom Burden in Postdeployment Veterans. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09415-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
28
Resch ZJ, Paxton JL, Obolsky MA, Lapitan F, Cation B, Schulze ET, Calderone V, Fink JW, Lee RC, Pliskin NH, Soble JR. Establishing the base rate of performance invalidity in a clinical electrical injury sample: Implications for neuropsychological test performance. J Clin Exp Neuropsychol 2021; 43:213-223. [PMID: 33858295 DOI: 10.1080/13803395.2021.1914002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Objective: The base rate of neuropsychological performance invalidity in electrical injury, a clinically distinct and frequently compensation-seeking population, is not well established. This study determined the base rate of performance invalidity in a large electrical injury sample and examined patient characteristics, injury parameters, and neuropsychological test performance based on validity status. Method: This cross-sectional study included data from 101 patients with electrical injury consecutively referred for post-acute neuropsychological evaluation. Eighty-five percent of the sample was compensation-seeking. Multiple performance validity tests (PVTs) were administered as part of standard clinical evaluation. For patients with four or more PVTs, valid performance was operationalized as one or fewer PVT failures and invalid performance as two or more failures. Results: Frequency analysis revealed 66% (n = 67) had valid performance while 29% (n = 29) demonstrated probable invalid performance; the remaining 5% (n = 5) had indeterminate validity. No significant differences in demographics or injury parameters emerged between validity groups (0 vs. 1 vs. ≥2 PVT failures). In contrast, the electrical injury group with invalid performance performed significantly worse across tests of processing speed and executive abilities than those with valid performance (ps < .05, ηp² = .19-.25). Conclusions: The current study is the first to establish the base rate of neuropsychological performance invalidity in electrical injury survivors using empirical methods and current practice standards. Patient and clinical variables, including compensation-seeking status, did not differ between validity groups; however, neuropsychological test performance did, supporting the need for multi-method, objective performance validity assessment.
Affiliation(s)
- Zachary J Resch
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA
- Jessica L Paxton
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Maximillian A Obolsky
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Franchezka Lapitan
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Bailey Cation
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Department of Psychology, Roosevelt University, Chicago, IL, USA
- Evan T Schulze
- Department of Neurology, Saint Louis University, St. Louis, MO, USA
- Veroly Calderone
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA
- Joseph W Fink
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA
- Department of Psychiatry and Behavioral Neuroscience, University of Chicago, Chicago, IL, USA
- Raphael C Lee
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA
- Departments of Surgery, Medicine and Organismal Biology, University of Chicago, Chicago, IL, USA
- Neil H Pliskin
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- The Chicago Electrical Trauma Rehabilitation Institute (CETRI), Chicago, IL, USA
- Department of Neurology, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Jason R Soble
- Department of Psychiatry, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
- Department of Neurology, University of Illinois at Chicago College of Medicine, Chicago, IL, USA
29
Abstract
This is the first controlled study of personality and psychopathology in adults with Noonan syndrome (NS). Anxiety, depression, alexithymia, and symptoms of Attention-Deficit/Hyperactivity Disorder and Autism Spectrum Disorder have been previously described in NS. More information regarding personality and psychopathology in NS could improve mental health care for this population. Therefore, scores on the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF), a widely used self-report questionnaire of personality and psychopathology, were compared between patients with NS (n = 18) and matched, healthy controls (n = 18). Furthermore, correlations between MMPI-2-RF scores and alexithymia, measured by the Toronto Alexithymia Scale-20, were investigated. Patients with NS showed significantly higher scores, with medium effect sizes, on MMPI-2-RF scales reflecting infrequent responses (F-r), somatic and cognitive complaints (FBS-r and RBS-r), internalizing problems (EID), demoralization (RCd), and introversion (INTR-r), although the overall profile in both groups was within the non-clinical range. Alexithymia correlated with internalizing problems and negative emotionality in the patient group. In conclusion, patients with NS showed higher levels of introversion, which may predispose them to internalizing problems. These problems were indeed more frequent in patients with NS, especially higher levels of demoralization. Patients may benefit from psychological interventions aimed at decreasing internalizing problems, introversion, and alexithymia.
30
Abeare K, Razvi P, Sirianni CD, Giromini L, Holcomb M, Cutler L, Kuzmenka P, Erdodi LA. Introducing Alternative Validity Cutoffs to Improve the Detection of Non-credible Symptom Report on the BRIEF. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09402-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
31
Sabelli AG, Messa I, Giromini L, Lichtenstein JD, May N, Erdodi LA. Symptom Versus Performance Validity in Patients with Mild TBI: Independent Sources of Non-credible Responding. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09400-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
32
Gegner J, Erdodi LA, Giromini L, Viglione DJ, Bosi J, Brusadelli E. An Australian study on feigned mTBI using the Inventory of Problems - 29 (IOP-29), its Memory Module (IOP-M), and the Rey Fifteen Item Test (FIT). Appl Neuropsychol Adult 2021; 29:1221-1230. [PMID: 33403885 DOI: 10.1080/23279095.2020.1864375] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
We investigated the classification accuracy of the Inventory of Problems - 29 (IOP-29), its newly developed memory module (IOP-M), and the Fifteen Item Test (FIT) in an Australian community sample (N = 275). One third of the participants (n = 93) were asked to respond honestly; two thirds were instructed to feign mild TBI. Half of the feigners (n = 90) were coached to avoid detection by not exaggerating; half were not (n = 92). All measures successfully discriminated between honest responders and feigners, with large effect sizes (d ≥ 1.96). The effect size for the IOP-29 (d ≥ 4.90), however, was two to three times larger than those produced by the IOP-M and FIT. Also noteworthy, the IOP-29 and IOP-M showed excellent sensitivity (>90% for the former, >80% for the latter) in both the coached and uncoached feigning conditions, at perfect specificity. In contrast, the sensitivity of the FIT was 71.7% within the uncoached simulator group and 53.3% within the coached simulator group, at a nearly perfect specificity of 98.9%. These findings suggest that the validity of the IOP-29 and IOP-M should generalize to Australian examinees and that the IOP-29 and IOP-M likely outperform the FIT in the detection of feigned mTBI.
Affiliation(s)
- Jennifer Gegner
- Department of Psychology, University of Wollongong, Wollongong, Australia
- Laszlo A Erdodi
- Department of Psychology, University of Windsor, Windsor, Canada
33
Serrano Burneo DC, Bowden SC, Simpson LC. Incremental Validity of the Minnesota Multiphasic Personality Inventory, Second Edition (MMPI-2) Relative to the Beck Depression Inventory-Second Edition (BDI-II) in the Detection of Depressive Symptoms. Aust Psychol 2020. [DOI: 10.1111/ap.12231] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Affiliation(s)
- Daniela C. Serrano Burneo
- Centre for Clinical Neurosciences & Neurological Research, St Vincent's Hospital Melbourne, Fitzroy, Australia
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Australia
- Stephen C. Bowden
- Centre for Clinical Neurosciences & Neurological Research, St Vincent's Hospital Melbourne, Fitzroy, Australia
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Australia
- Leonie C. Simpson
- Centre for Clinical Neurosciences & Neurological Research, St Vincent's Hospital Melbourne, Fitzroy, Australia
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Australia
34
Armistead-Jehle P, Ingram PB, Morris CS. Personality Assessment Inventory Cognitive Bias Scale: Validation in a Military Sample. Arch Clin Neuropsychol 2020; 35:1154–1161. [PMID: 32738043 DOI: 10.1093/arclin/acaa049] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2020] [Revised: 04/13/2020] [Accepted: 06/23/2020] [Indexed: 11/12/2022] Open
Abstract
OBJECTIVE Recently, in a mixed neuropsychological outpatient sample, a measure of cognitive response bias called the Cognitive Bias Scale (CBS) was developed for the Personality Assessment Inventory (PAI). This study sought to cross-validate this measure in a military sample. METHOD Retrospective review of 197 active duty soldiers referred to an Army outpatient clinic for neuropsychological evaluation. Groups were created based on the number of failed performance validity tests (0, 1, or 2-3 performance validity test [PVT] failures). RESULTS The magnitude of effect for the 10-item CBS was medium-to-large when comparing those with one PVT failure to those with two to three (d = .98) and those with no failures (d = 1.21); however, effects between the 1 and 2-3 PVT failure groups were less pronounced. In the 1 and 2-3 PVT failure groups, a score of ≥16 had high specificity (.92 and .95, respectively) and low to moderate sensitivity (.20 and .55, respectively). CONCLUSIONS In a military sample, the CBS demonstrated high specificity with relatively low sensitivity. The measure operated similarly to the original study, and the current data support using the CBS to rule in, but not rule out, over-reported cognitive symptoms on the PAI.
Affiliation(s)
- Paul B Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Cole S Morris
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
35
Sherman EMS, Slick DJ, Iverson GL. Multidimensional Malingering Criteria for Neuropsychological Assessment: A 20-Year Update of the Malingered Neuropsychological Dysfunction Criteria. Arch Clin Neuropsychol 2020; 35:735-764. [PMID: 32377667 PMCID: PMC7452950 DOI: 10.1093/arclin/acaa019] [Citation(s) in RCA: 137] [Impact Index Per Article: 34.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2019] [Accepted: 03/12/2020] [Indexed: 11/17/2022] Open
Abstract
OBJECTIVES Empirically informed neuropsychological opinion is critical for determining whether cognitive deficits and symptoms are legitimate, particularly in settings where there are significant external incentives for successful malingering. The Slick, Sherman, and Iverson (1999) criteria for malingered neurocognitive dysfunction (MND) are considered a major milestone in the field's operationalization of neurocognitive malingering and have strongly influenced the development of malingering detection methods, including serving as the criterion of malingering in the validation of several performance validity tests (PVTs) and symptom validity tests (SVTs) (Slick, D.J., Sherman, E.M.S., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13(4), 545-561). However, the MND criteria are long overdue for revision to address advances in malingering research and to address limitations identified by experts in the field. METHOD The MND criteria were critically reviewed, updated with reference to research on malingering, and expanded to address other forms of malingering pertinent to neuropsychological evaluation, such as exaggeration of self-reported somatic and psychiatric symptoms. RESULTS The new proposed criteria simplify diagnostic categories, expand and clarify external incentives, more clearly define the role of compelling inconsistencies, address issues concerning PVTs and SVTs (i.e., number administered, false positives, and redundancy), better define the role of SVTs and of marked discrepancies indicative of malingering, and most importantly, clearly define exclusionary criteria based on the last two decades of research on malingering in neuropsychology. Lastly, the new criteria provide specifiers to better describe clinical presentations for use in neuropsychological assessment.
CONCLUSIONS The proposed multidimensional malingering criteria that define cognitive, somatic, and psychiatric malingering for use in neuropsychological assessment are presented.
Affiliation(s)
- Grant L Iverson
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, USA
- Spaulding Rehabilitation Hospital and Spaulding Research Institute, Charlestown, MA, USA
- Home Base, A Red Sox Foundation and Massachusetts General Hospital Program, Charlestown, MA, USA
36
Giromini L, Viglione DJ, Zennaro A, Maffei A, Erdodi LA. SVT Meets PVT: Development and Initial Validation of the Inventory of Problems – Memory (IOP-M). Psychol Inj Law 2020. [DOI: 10.1007/s12207-020-09385-8] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Indexed: 11/30/2022]
37
Huber BN, Jones RG, Capps SC, Buchanan EM. Memory complaints inventory profiles: Differentiating neurocognitive impairment, depression, and non-credible performance. Appl Neuropsychol Adult 2020; 29:234-243. [PMID: 32186416 DOI: 10.1080/23279095.2020.1735388] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Indexed: 10/24/2022]
Abstract
The Memory Complaints Inventory (MCI) is a symptom validity measure designed to assess exaggerated memory complaints. The aim of the current study was to develop memory complaint profiles on the MCI to distinguish between various neurocognitive disorders, depression, and non-credible performance. This study utilized MCI scores (N = 244) from a neuropsychology clinic to determine the presence of, and differences in, subjective memory complaints across a depression group, a non-credible group, and subgroups of cognitive impairment (Alzheimer's Dementia, Vascular Dementia, and Mild Cognitive Impairment). Significant differences in MCI endorsement were found between the cognitive impairment, depression, and non-credible groups. This pattern indicated fewer memory complaints for the cognitive impairment groups when compared to the depression and non-credible groups; the non-credible group had the highest MCI scores overall. ROC analyses revealed recommended clinical cutoff values with high specificity for distinguishing between the non-credible group and the other groups. The findings provided further evidence for the MCI as a symptom validity measure, given its ability to differentiate a non-credible group from clinical groups. Replication of the study's findings would yield reliable genuine subjective memory complaint profiles that provide additional diagnostic and prognostic specificity in neuropsychological practice.
Affiliation(s)
- Becca N Huber
- Psychology, Idaho State University, Pocatello, ID, USA
- Psychology, Missouri State University, Springfield, MO, USA
- Ryan G Jones
- Neuropsychology, CoxHealth, Springfield, MO, USA
- Steven C Capps
- Psychology, Missouri State University, Springfield, MO, USA
- Erin M Buchanan
- Psychology, Missouri State University, Springfield, MO, USA
- Cognitive Analytics, Harrisburg University of Science and Technology, Harrisburg, PA, USA
38
Ingram PB, Golden BL, Armistead-Jehle PJ. Evaluating the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) over-reporting scales in a military neuropsychology clinic. J Clin Exp Neuropsychol 2020; 42:263-273. [PMID: 31900041 DOI: 10.1080/13803395.2019.1708271] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Indexed: 10/25/2022]
Abstract
Introduction: This study examines the utility of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) validity scales to detect invalid responding within a sample of active duty United States Army soldiers referred for neuropsychological evaluations. Method: This study examines the relationship between performance validity testing and performance on the MMPI-2-RF over-reporting scales. Specifically, mean differences between those who passed (n = 152; 75.6%) or failed (n = 49; 24.4%) performance validity testing were compared. Receiver operating characteristic analyses were also conducted to expand available information on MMPI-2-RF over-reporting sensitivity and specificity in an Army sample. Results: This study has two distinct findings. First, effect size differences between those passing and failing performance validity testing were small to medium in magnitude (ranging from d = .30/g = .32 on F-r to d = .66/g = .73 on RBS). Second, the over-reporting scales showed higher specificity and poorer sensitivity. Likewise, performance of the over-reporting scales suggests that those exceeding recommended cut scores are likely to have failed extra-test performance validity measures. Conclusion: These findings suggest that many who fail external performance measures may go undetected on the MMPI-2-RF over-reporting scales and that those exceeding recommended cut scores are likely to have failed extra-test performance validity testing. Implications for research on, and practice with, the MMPI-2-RF in military populations are discussed.
Affiliation(s)
- Paul B Ingram
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
- Dwight D. Eisenhower VAMC, Eastern Kansas Veteran Healthcare System, Leavenworth, KS, USA
- Brittney L Golden
- Department of Psychological Sciences, Texas Tech University, Lubbock, TX, USA
39
Whiteside DM, Hunt I, Choate A, Caraher K, Basso MR. Stratified performance on the Test of Memory Malingering (TOMM) is associated with differential responding on the Personality Assessment Inventory (PAI). J Clin Exp Neuropsychol 2019; 42:131-141. [DOI: 10.1080/13803395.2019.1695749] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Indexed: 10/25/2022]
Affiliation(s)
- Douglas M. Whiteside
- Department of Rehabilitation Medicine, Neuropsychology Laboratory, University of Minnesota, Minneapolis, MN, USA
- Isaac Hunt
- Department of Neurology, St. Mary’s Medical Center, Essentia Health, Duluth, MN, USA
- Alyssa Choate
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Kristen Caraher
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Michael R. Basso
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, USA
40
Gaasedelen OJ, Whiteside DM, Altmaier E, Welch C, Basso MR. The construction and the initial validation of the Cognitive Bias Scale for the Personality Assessment Inventory. Clin Neuropsychol 2019; 33:1467-1484. [DOI: 10.1080/13854046.2019.1612947] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Indexed: 10/26/2022]
Affiliation(s)
- Owen J. Gaasedelen
- Department of Psychological and Quantitative Foundations, University of Iowa, Iowa City, IA, USA
- New Mexico VA Health Care System, Albuquerque, NM, USA
- Douglas M. Whiteside
- Department of Psychiatry, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Elizabeth Altmaier
- Department of Psychological and Quantitative Foundations, University of Iowa, Iowa City, IA, USA
- Catherine Welch
- Department of Psychological and Quantitative Foundations, University of Iowa, Iowa City, IA, USA
41
Giger P, Merten T. Equivalence of the German and the French Versions of the Self-Report Symptom Inventory. Swiss J Psychol 2019. [DOI: 10.1024/1421-0185/a000218] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Indexed: 11/19/2022]
Abstract
Against the background of the growing importance of symptom validity assessment both in forensic and clinical or rehabilitation contexts, a new instrument for identifying overreporting was developed. In order to study the equivalence of the German and the French versions, we divided the item pool of the Self-Report Symptom Inventory (SRSI) into two presumably equivalent half-forms. A sample of 40 adult bilingual Swiss nationals with a mean age of 39.9 years responded honestly to one of the half-forms in German and to the other in French. In a subsequent experimental malingering condition, they were asked to simulate sequelae of a whiplash injury and to respond to the SRSI again. In both conditions, they also filled out the Structured Inventory of Malingered Symptomatology (SIMS). The results showed no differences between the two language versions in either condition. Classification accuracy was very high (100% specificity, 90% sensitivity for the standard cutoff score). Reliability estimates were 0.93 for endorsement of genuine symptoms and 0.97 for pseudosymptom endorsement. In the malingering condition, the correlation between the number of reported pseudosymptoms and the SIMS scores was 0.69. The current results add to the database available for the SRSI and support the appropriateness of the French version.
Affiliation(s)
- Peter Giger
- formerly Department of Defence, Civil Protection and Sport, Bern, Switzerland
- Thomas Merten
- Department of Neurology, Vivantes Klinikum im Friedrichshain, Berlin, Germany
42
Ashendorf L. Neurobehavioral symptom validity in U.S. Department of Veterans Affairs (VA) mild traumatic brain injury evaluations. J Clin Exp Neuropsychol 2019; 41:432-441. [DOI: 10.1080/13803395.2019.1567693] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Indexed: 10/27/2022]
Affiliation(s)
- Lee Ashendorf
- Department of Psychiatry, University of Massachusetts Medical School, Worcester, MA, USA
- Mental Health Service Line, VA Central Western Massachusetts, Worcester, MA, USA
43
Abstract
OBJECTIVES The aim of this study was to investigate the relationship of psychological variables to cognitive performance validity test (PVT) results in mixed forensic and nonforensic clinical samples. METHODS Participants included 183 adults who underwent comprehensive neuropsychological examination. Criterion groups (Credible and Noncredible) were formed based upon performance on the Word Memory Test and other stand-alone and embedded PVT measures. RESULTS Multivariate logistic regression analysis identified three significant predictors of cognitive performance validity: two psychological constructs, Cogniphobia (the perception that cognitive effort will exacerbate neurological symptoms) and Symptom Identity (the perception that current symptoms are the result of illness or injury), and one contextual factor (forensic status). While there was no interaction between these factors, elevated scores were most often observed in the forensic sample, suggesting that these independently contributing intrinsic psychological factors are more likely to occur in a forensic context. CONCLUSIONS Illness perceptions were significant predictors of cognitive performance validity, particularly when they reached very elevated levels. Extreme elevations were more common among participants in the forensic sample, and potential reasons for this pattern are explored. (JINS, 2018, 24, 735-745).
44
Morey LC. Examining a novel performance validity task for the detection of feigned attentional problems. Appl Neuropsychol Adult 2017; 26:255-267. [DOI: 10.1080/23279095.2017.1409749] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Indexed: 10/18/2022]
Affiliation(s)
- Leslie C. Morey
- Department of Psychology, Texas A&M University, College Station, Texas, USA
45
Nichols DS. Fake bad scale: the case of the missing construct, a response to Larrabee, Bianchini, Boone, and Rohling (2017). Clin Neuropsychol 2017; 31:1396-1400. [PMID: 28866953 DOI: 10.1080/13854046.2017.1365934] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Indexed: 10/18/2022]
Abstract
The 'Commentary' of Drs. Larrabee, Bianchini, Boone, and Rohling (2017) attributes to us a view of the Fake Bad/Symptom Validity Scale (FBS/FBS-r) that is wholly erroneous, a view we do not hold and have never taken. In doing so, the authors have confused the thrust of our article with the assertions made in an earlier article that preceded publication of the FBS. This earlier article held that many physical and cognitive symptoms/complaints observed in personal injury plaintiffs are most parsimoniously understood as manifestations of the stresses that may routinely accompany plaintiffs' involvement in such litigation. In this response, I therefore wish to clarify this misunderstanding and to elaborate upon several of the issues raised in our article.
46
Morin RT, Axelrod BN. Use of Latent Class Analysis to define groups based on validity, cognition, and emotional functioning. Clin Neuropsychol 2017. [PMID: 28632025 DOI: 10.1080/13854046.2017.1341550] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Indexed: 10/19/2022]
Abstract
OBJECTIVE Latent Class Analysis (LCA) was used to classify a heterogeneous sample of neuropsychological data. In particular, we used measures of performance validity, symptom validity, cognition, and emotional functioning to assess and describe latent groups of functioning in these areas. METHOD A dataset of 680 neuropsychological evaluation protocols was analyzed using LCA. Data were collected from evaluations performed for clinical purposes at an urban medical center. RESULTS A four-class model emerged as the best-fitting model of latent classes. The resulting classes were distinct on measures of performance validity and symptom validity. Class A performed poorly on both performance and symptom validity measures. Class B had intact performance validity and heightened symptom reporting. The remaining two classes performed adequately on both performance and symptom validity measures, differing only in cognitive and emotional functioning. In general, performance invalidity was associated with worse cognitive performance, while symptom invalidity was associated with elevated emotional distress. CONCLUSIONS LCA appears useful in identifying groups within a heterogeneous sample with distinct performance patterns. Further, the orthogonal nature of performance and symptom validity is supported.
Affiliation(s)
- Ruth T Morin
- Department of Counseling and Clinical Psychology, Teachers College, Columbia University, New York, NY, USA
- John D. Dingell VA Medical Center, Detroit, MI, USA
47
Detecting Feigned Attention-Deficit/Hyperactivity Disorder (ADHD): Current Methods and Future Directions. Psychol Inj Law 2017. [DOI: 10.1007/s12207-017-9286-6] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Indexed: 01/18/2023]
48
Young G. PTSD in Court III: Malingering, assessment, and the law. Int J Law Psychiatry 2017; 52:81-102. [PMID: 28366496 DOI: 10.1016/j.ijlp.2017.03.001] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Received: 02/24/2017] [Accepted: 03/02/2017] [Indexed: 06/07/2023]
Abstract
This journal's third article on PTSD in Court focuses especially on the topic's "court" component. It first considers the topic of malingering, including in terms of its definition, certainties, and uncertainties. As with other areas of the study of psychological injury and law, generally, and PTSD (posttraumatic stress disorder), specifically, malingering is a contentious area not only definitionally but also empirically, in terms of establishing its base rate in the index populations assessed in the field. Both current research and re-analysis of past research indicate that the malingering prevalence rate at issue is more like 15±15% as opposed to 40±10%. As for psychological tests used to assess PTSD, some of the better ones include the TSI-2 (Trauma Symptom Inventory, Second Edition; Briere, 2011), the MMPI-2-RF (Minnesota Multiphasic Personality Inventory, Second Edition, Restructured Form; Ben-Porath & Tellegen, 2008/2011), and the CAPS-5 (The Clinician-Administered PTSD Scale for DSM-5; Weathers, Blake, Schnurr, Kaloupek, Marx, & Keane, 2013b). Assessors need to know their own possible biases, the applicable laws (e.g., the Daubert trilogy), and how to write court-admissible reports. Overall conclusions reflect a moderate approach that navigates the territory between the extreme plaintiff or defense allegiances one frequently encounters in this area of forensic practice.
49
Armistead-Jehle P, Cole WR, Stegman RL. Performance and Symptom Validity Testing as a Function of Medical Board Evaluation in U.S. Military Service Members with a History of Mild Traumatic Brain Injury. Arch Clin Neuropsychol 2017; 33:120-124. [DOI: 10.1093/arclin/acx031] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Received: 01/16/2017] [Accepted: 03/21/2017] [Indexed: 11/13/2022]
50
Lau L, Basso MR, Estevis E, Miller A, Whiteside DM, Combs D, Arentsen TJ. Detecting coached neuropsychological dysfunction: a simulation experiment regarding mild traumatic brain injury. Clin Neuropsychol 2017; 31:1412-1431. [DOI: 10.1080/13854046.2017.1318954] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Indexed: 10/19/2022]
Affiliation(s)
- Lily Lau
- Department of Psychology, University of Tulsa, Tulsa, OK, USA
- Eduardo Estevis
- Department of Psychology, University of Tulsa, Tulsa, OK, USA
- Ashley Miller
- Department of Psychology, University of Tulsa, Tulsa, OK, USA
- Dennis Combs
- Department of Psychology, University of Texas at Tyler, Tyler, TX, USA