1. Bosi J, Minassian L, Ales F, Akca AYE, Winters C, Viglione DJ, Zennaro A, Giromini L. The sensitivity of the IOP-29 and IOP-M to coached feigning of depression and mTBI: An online simulation study in a community sample from the United Kingdom. Appl Neuropsychol Adult 2024;31:1234-1246. PMID: 36027614; DOI: 10.1080/23279095.2022.2115910.
Abstract
Assessing the credibility of symptoms is critical to neuropsychological assessment in both clinical and forensic settings. To this end, the Inventory of Problems-29 (IOP-29) and its recently added memory module (Inventory of Problems-Memory; IOP-M) appear to be particularly useful, as they provide a rapid and cost-effective measure of both symptom and performance validity. While numerous studies have already supported the effectiveness of the IOP-29, research on its newly developed module, the IOP-M, is much sparser. To address this gap, we conducted a simulation study with a community sample (N = 307) from the United Kingdom. Participants were asked to (a) respond honestly, (b) feign mTBI, or (c) feign depression. Within each feigning group, half of the participants received a description of the symptoms of the disorder to be feigned; the other half received the same description plus a warning that exaggerating too much would make their presentation non-credible. Overall, the results confirmed the effectiveness of the two IOP components, both individually and in combination.
Affiliation(s)
- Jessica Bosi: Department of Psychology, University of Surrey, Guildford, UK
- Laure Minassian: Department of Psychology, University of Surrey, Guildford, UK
- Francesca Ales: Department of Psychology, University of Turin, Turin, Italy
- Christina Winters: Tilburg Institute for Law, Technology, and Society (TILT), Tilburg University, Tilburg, The Netherlands
2. Buelow MT, Barnhart WR, Crook T, Suhr JA. Are correlations among behavioral decision making tasks moderated by simulated cognitive impairment? Appl Neuropsychol Adult 2024;31:901-916. PMID: 35737425; DOI: 10.1080/23279095.2022.2088289.
Abstract
Behavioral decision making tasks are common in research settings, with only the Iowa Gambling Task available for clinical assessments. However, correlations among these tasks are low, indicating each may assess a distinct component of decision making. In addition, it is unclear whether these tasks are sensitive to invalid performance or even simulated impairment. The present study examined relationships among decision making tasks and whether simulated impairment moderates the relationships among them. Across two studies (Study 1: n = 166, Study 2: n = 130), undergraduate student participants were asked to try their best or to simulate a specific diagnosis (Attention-Deficit/Hyperactivity Disorder; Study 1), decision making impairment (Study 2), or general cognitive impairment (Study 2). They then completed a battery of tests including embedded and standalone performance validity tests (PVTs) and three behavioral decision making tasks. Across studies, participants simulating impairment were not distinguishable from controls on any of the behavioral tasks. Few significant correlations emerged among tasks across studies and the pattern of relationships between tasks did not differ on the basis of simulator or PVT failure status. Collectively, our findings suggest that these tasks may not be vulnerable to simulated cognitive impairment, and that the tasks measure largely non-overlapping aspects of decision making.
Affiliation(s)
- Melissa T Buelow: Department of Psychology, The Ohio State University, Columbus, OH, USA
- Wesley R Barnhart: Department of Psychology, Bowling Green State University, Bowling Green, OH, USA
- Thomas Crook: Department of Psychology, The Ohio State University, Columbus, OH, USA
- Julie A Suhr: Department of Psychology, Ohio University, Athens, OH, USA
3. Crișan I, Ali S, Cutler L, Matei A, Avram L, Erdodi LA. Geographic variability in limited English proficiency: A cross-cultural study of cognitive profiles. J Int Neuropsychol Soc 2023;29:972-983. PMID: 37246143; DOI: 10.1017/s1355617723000280.
Abstract
OBJECTIVE This study was designed to evaluate the effect of limited English proficiency (LEP) on neurocognitive profiles. METHOD Romanian (LEP-RO; n = 59) and Arabic (LEP-AR; n = 30) native speakers were compared to Canadian native speakers of English (NSE; n = 24) on a strategically selected battery of neuropsychological tests. RESULTS As predicted, participants with LEP demonstrated significantly lower performance on tests with high verbal mediation relative to US norms and the NSE sample (large effects). In contrast, several tests with low verbal mediation were robust to LEP. However, clinically relevant deviations from this general pattern were observed. The level of English proficiency varied significantly within the LEP-RO sample and was associated with a predictable performance pattern on tests with high verbal mediation. CONCLUSIONS The heterogeneity in cognitive profiles among individuals with LEP challenges the notion that LEP status is a unitary construct. The level of verbal mediation is an imperfect predictor of the performance of LEP examinees during neuropsychological testing. Several commonly used measures were identified that are robust to the deleterious effects of LEP. Administering tests in the examinee's native language may not be the optimal solution to contain the confounding effect of LEP in cognitive evaluations.
Affiliation(s)
- Iulia Crișan: Department of Psychology, West University of Timișoara, Timișoara, Romania
- Sami Ali: Department of Psychology, University of Windsor, Windsor, Canada
- Laura Cutler: Department of Psychology, University of Windsor, Windsor, Canada
- Alina Matei: Department of Psychology, West University of Timișoara, Timișoara, Romania
- Luisa Avram: Department of Psychology, West University of Timișoara, Timișoara, Romania
- Laszlo A Erdodi: Department of Psychology, University of Windsor, Windsor, Canada
4. Mascarenhas MA, Cocunato JL, Armstrong IT, Harrison AG, Zakzanis KK. Base rates of non-credible performance in a post-secondary student sample seeking accessibility accommodations. Clin Neuropsychol 2023;37:1608-1628. PMID: 36646463; DOI: 10.1080/13854046.2023.2167737.
Abstract
Objective: Performance Validity Tests (PVTs) have been used to identify non-credible performance in clinical, medicolegal, forensic, and, more recently, academic settings. The inclusion of PVTs in psychoeducational assessments is essential given that specific accommodations, such as flexible deadlines and increased writing time, can provide an external incentive for students without disabilities to feign symptoms. Method: The present study used archival data to establish base rates of non-credible performance in a sample of post-secondary students (n = 1045) who underwent a comprehensive psychoeducational evaluation for the purposes of obtaining academic accommodations. In accordance with current guidelines, non-credible performance was defined as failure on two or more freestanding or embedded PVTs. Results: 9.4% of participants failed at least two of the PVTs they were administered; 8.5% failed exactly two PVTs and approximately 1% failed three. Base rates of failure for specific PVTs ranged from 11.2% (TOVA) to 25% (b Test). Conclusions: The present study found a lower base rate of non-credible performance than previously observed in comparable populations. This likely reflects the use of conservative criteria for detecting non-credible performance to avoid false positives. By contrast, the inconsistent base rates previously reported in the literature may reflect inconsistent methodologies. These results further emphasize the importance of administering multiple PVTs during psychoeducational assessments. These findings can inform clinicians administering assessments in academic settings and aid in the appropriate use of PVTs in psychoeducational evaluations to determine accessibility accommodations.
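To make the failure-count criterion concrete, here is a minimal sketch in Python of how such base rates are tabulated from a per-examinee failure matrix (the data, failure probability, and four-PVT battery below are simulated placeholders, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: rows = examinees, columns = PVTs administered;
# True means the examinee scored beyond that PVT's published cutoff.
failures = rng.random((1045, 4)) < 0.06

n_failed = failures.sum(axis=1)      # number of PVT failures per examinee
base_rate = (n_failed >= 2).mean()   # >=2 failures = non-credible criterion
exactly_two = (n_failed == 2).mean()
per_pvt = failures.mean(axis=0)      # failure rate of each individual PVT

print(f"non-credible (>=2 PVT failures): {base_rate:.1%}")
print(f"exactly two failures: {exactly_two:.1%}")
print("per-PVT failure rates:", np.round(per_pvt, 3))
```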
Affiliation(s)
- Melanie A Mascarenhas: Graduate Department of Psychological Clinical Science, University of Toronto Scarborough, Toronto, Canada
- Jessica L Cocunato: Department of Psychology, University of Toronto Scarborough, Toronto, Canada
- Irene T Armstrong: Regional Assessment and Resource Centre, Queen's University, Kingston, Canada
- Allyson G Harrison: Regional Assessment and Resource Centre, Queen's University, Kingston, Canada
- Konstantine K Zakzanis: Graduate Department of Psychological Clinical Science, University of Toronto Scarborough, Toronto, Canada; Department of Psychology, University of Toronto Scarborough, Toronto, Canada
5. Scott JC, Moore TM, Roalf DR, Satterthwaite TD, Wolf DH, Port AM, Butler ER, Ruparel K, Nievergelt CM, Risbrough VB, Baker DG, Gur RE, Gur RC. Development and application of novel performance validity metrics for computerized neurocognitive batteries. J Int Neuropsychol Soc 2023;29:789-797. PMID: 36503573; PMCID: PMC10258222; DOI: 10.1017/s1355617722000893.
Abstract
OBJECTIVES Data from neurocognitive assessments may not be accurate in the context of factors impacting validity, such as disengagement, unmotivated responding, or intentional underperformance. Performance validity tests (PVTs) were developed to address these phenomena and assess underperformance on neurocognitive tests. However, PVTs can be burdensome, rely on cutoff scores that reduce information, do not examine potential variations in task engagement across a battery, and are typically not well-suited to acquisition of large cognitive datasets. Here we describe the development of novel performance validity measures that could address some of these limitations by leveraging psychometric concepts using data embedded within the Penn Computerized Neurocognitive Battery (PennCNB). METHODS We first developed these validity measures using simulations of invalid response patterns with parameters drawn from real data. Next, we examined their application in two large, independent samples: 1) children and adolescents from the Philadelphia Neurodevelopmental Cohort (n = 9498); and 2) adult servicemembers from the Marine Resiliency Study-II (n = 1444). RESULTS Our performance validity metrics detected patterns of invalid responding in simulated data, even at subtle levels. Furthermore, a combination of these metrics significantly predicted previously established validity rules for these tests in both developmental and adult datasets. Moreover, most clinical diagnostic groups did not show reduced validity estimates. CONCLUSIONS These results provide proof-of-concept evidence for multivariate, data-driven performance validity metrics. These metrics offer a novel method for determining the performance validity for individual neurocognitive tests that is scalable, applicable across different tests, less burdensome, and dimensional. However, more research is needed into their application.
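The PennCNB metrics themselves are specific to that battery, but the general idea of flagging invalid responding from the response data can be illustrated with a much simpler, classical stand-in: on a two-alternative forced-choice test, accuracy credibly below chance is quantifiable with the binomial distribution. A minimal sketch (the item count and score thresholds are arbitrary illustrations, not PennCNB parameters):

```python
from math import comb

def p_at_or_below(k: int, n: int, p: float = 0.5) -> float:
    """P(X <= k) for X ~ Binomial(n, p): the probability that pure
    guessing on an n-item, two-choice test yields k or fewer correct."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n_items = 48
for k in (16, 20, 24):
    print(f"P(score <= {k} of {n_items} by guessing) = {p_at_or_below(k, n_items):.4f}")
```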
Affiliation(s)
- J. Cobb Scott: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; VISN4 Mental Illness Research, Education, and Clinical Center at the Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA
- Tyler M. Moore: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- David R. Roalf: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Theodore D. Satterthwaite: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Daniel H. Wolf: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Allison M. Port: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ellyn R. Butler: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Kosha Ruparel: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Caroline M. Nievergelt: Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Victoria B. Risbrough: Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Dewleen G. Baker: Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Raquel E. Gur: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Lifespan Brain Institute, Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Ruben C. Gur: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; VISN4 Mental Illness Research, Education, and Clinical Center at the Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA; Lifespan Brain Institute, Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children's Hospital of Philadelphia, Philadelphia, PA, USA
6. Low rate of performance validity failures among individuals with bipolar disorder. J Int Neuropsychol Soc 2023;29:298-305. PMID: 35403599; DOI: 10.1017/s1355617722000145.
Abstract
OBJECTIVE Assessing performance validity is imperative in both clinical and research contexts, as data interpretation presupposes adequate participation from examinees. Performance validity tests (PVTs) are utilized to identify instances in which results cannot be interpreted at face value. This study explored the hit rates for two frequently used PVTs in a research sample of individuals with and without histories of bipolar disorder (BD). METHOD As part of an ongoing longitudinal study of individuals with BD, we examined the performance of 736 individuals with BD and 255 individuals with no history of mental health disorder on the Test of Memory Malingering (TOMM) and the California Verbal Learning Test forced choice trial (CVLT-FC) at three time points. RESULTS Undiagnosed individuals demonstrated a 100% pass rate on PVTs, and individuals with BD passed over 98% of the time. A mixed effects model adjusting for relevant demographic variables revealed no significant difference in TOMM scores between the groups, a = .07, SE = .07, p = .31. On the CVLT-FC, group differences were statistically reliable but not clinically significant (ps < .001). CONCLUSIONS The majority of individuals obtained perfect PVT scores, with no differences in failure rates between groups. The tests have specificity above 98% in BD and 100% among non-diagnosed individuals. Further, nearly 90% of individuals with BD obtained perfect scores on both measures, a trend observed at each time point.
7. Robinson A, Huber M, Breaux E, Pugh E, Calamia M. Failing The b Test: The influence of cutoff scores and criterion group approaches in a sample of adults referred for psychoeducational evaluation. J Clin Exp Neuropsychol 2022;44:619-626. PMID: 36727266; DOI: 10.1080/13803395.2022.2153805.
Abstract
OBJECTIVE Previous research has shown that both criterion grouping approaches and cutoff scores can impact PVT classification accuracy statistics. This study aimed to examine the influence of cutoff scores and criterion grouping approaches on The b Test, a measure designed to identify feigned impairment in visual scanning, processing speed, and letter identification. METHOD Two hundred ninety-seven adults referred for psychoeducational testing were included, the majority of whom were seeking academic accommodations (n = 215). Cutoff scores of ≥82, ≥90, and ≥120 were used along with two criterion group approaches: 0 PVT failures vs. ≥2 PVT failures, and 0 PVT failures vs. ≥1 PVT failure. RESULTS Failure rates for The b Test in the overall sample ranged from 12.5% to 16.2%. Subgroup analyses of those referred specifically for ADHD revealed failure rates ranging from 10.5% to 14.2%. ROC curves in the full sample and the ADHD subsample yielded significant AUCs under both criterion group approaches (AUCs = .66-.78). Sensitivity and specificity varied as a function of criterion group approach and cutoff score, with 0 vs. ≥2 PVT failures resulting in the greatest sensitivity while maintaining specificity at ≥.90 in both the full sample and the ADHD subsample. CONCLUSIONS The results demonstrate that criterion approaches and cutoff scores impact the classification accuracy of The b Test, with the 0 vs. ≥2 PVT failures approach demonstrating the greatest classification accuracy. Special consideration should be given to clinical decision making in psychoeducational evaluations, given that a large portion of individuals seeking accommodations fail only one PVT. Limitations of this study are also discussed.
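The sensitivity/specificity trade-off across cutoffs and criterion groups is easy to make concrete. A minimal sketch in Python (the score distributions and group sizes are simulated placeholders, not the study data; on The b Test, higher E-scores indicate less credible performance, so a score at or above the cutoff counts as a failure):

```python
import numpy as np

def classification_accuracy(scores, noncredible, cutoff):
    """Sensitivity/specificity of flagging `scores >= cutoff`, where
    `noncredible` marks the criterion group (e.g., >=2 other PVT failures)."""
    flagged = scores >= cutoff
    sensitivity = flagged[noncredible].mean()      # true positive rate
    specificity = (~flagged[~noncredible]).mean()  # true negative rate
    return sensitivity, specificity

rng = np.random.default_rng(1)
credible = rng.normal(70, 12, 250)   # hypothetical credible E-scores
feigners = rng.normal(115, 30, 47)   # hypothetical non-credible E-scores
scores = np.concatenate([credible, feigners])
noncredible = np.repeat([False, True], [250, 47])

for cutoff in (82, 90, 120):
    sens, spec = classification_accuracy(scores, noncredible, cutoff)
    print(f"cutoff >= {cutoff:>3}: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```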
Affiliation(s)
- Anthony Robinson: Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Marissa Huber: Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Eathan Breaux: Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Erika Pugh: Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Matthew Calamia: Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
8. Ali S, Crisan I, Abeare CA, Erdodi LA. Cross-Cultural Performance Validity Testing: Managing False Positives in Examinees with Limited English Proficiency. Dev Neuropsychol 2022;47:273-294. PMID: 35984309; DOI: 10.1080/87565641.2022.2105847.
Abstract
Base rates of failure (BRFail) on performance validity tests (PVTs) were examined in university students with limited English proficiency (LEP). BRFail was calculated for several free-standing and embedded PVTs. All free-standing PVTs and certain embedded indicators were robust to LEP. However, LEP was associated with unacceptably high BRFail (20-50%) on several embedded PVTs with high levels of verbal mediation; even multivariate PVT models could not contain BRFail. In conclusion, failing free-standing/dedicated PVTs cannot be attributed to LEP. However, the elevated BRFail on several verbally mediated embedded PVTs in university students suggests an unacceptably high overall risk of false positives associated with LEP.
Affiliation(s)
- Sami Ali: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Iulia Crisan: Department of Psychology, West University of Timişoara, Timişoara, Romania
- Christopher A Abeare: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
9. Erdodi LA. Multivariate Models of Performance Validity: The Erdodi Index Captures the Dual Nature of Non-Credible Responding (Continuous and Categorical). Assessment 2022:10731911221101910. PMID: 35757996; DOI: 10.1177/10731911221101910.
Abstract
This study was designed to examine the classification accuracy of the Erdodi Index (EI-5), a novel method for aggregating validity indicators that takes into account both the number and extent of performance validity test (PVT) failures. Archival data were collected from a mixed clinical/forensic sample of 452 adults referred for neuropsychological assessment. The classification accuracy of the EI-5 was evaluated against established free-standing PVTs. The EI-5 achieved a good combination of sensitivity (.65) and specificity (.97), correctly classifying 92% of the sample. Its classification accuracy was comparable with that of another free-standing PVT. An indeterminate range between Pass and Fail emerged as a legitimate third outcome of performance validity assessment, indicating that the underlying construct is an inherently continuous variable. Results support the use of the EI model as a practical and psychometrically sound method of aggregating multiple embedded PVTs into a single-number summary of performance validity. Combining free-standing PVTs with the EI-5 resulted in a better separation between credible and non-credible profiles, demonstrating incremental validity. Findings are consistent with recent endorsements of a three-way outcome for PVTs (Pass, Borderline, and Fail).
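The aggregation logic of an EI-style composite (each embedded PVT contributes an ordinal rating that is summed into a single index with a three-way outcome) can be sketched as follows; the bands and cutoffs here are schematic assumptions, not the published EI-5 parameters:

```python
from typing import Sequence

def component_rating(score: float, borderline: float, fail: float) -> int:
    """Ordinal validity rating for one embedded PVT (lower score = worse):
    0 = pass, 1 = borderline failure, 2 = clear failure."""
    if score <= fail:
        return 2
    if score <= borderline:
        return 1
    return 0

def ei_outcome(ratings: Sequence[int]) -> str:
    """Sum the ordinal ratings and map the total to the three-way
    Pass/Borderline/Fail outcome described in the abstract."""
    total = sum(ratings)
    if total <= 1:
        return "Pass"
    if total <= 3:
        return "Borderline"
    return "Fail"

# One examinee's scores on five embedded PVTs, each with its own
# (borderline, fail) cutoff pair; all values are made up.
observations = [(5, 6, 4), (33, 35, 30), (7, 6, 5), (41, 42, 38), (12, 11, 9)]
ratings = [component_rating(s, b, f) for s, b, f in observations]
print(ratings, "->", ei_outcome(ratings))  # [1, 1, 0, 1, 0] -> Borderline
```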
10. Brantuo MA, An K, Biss RK, Ali S, Erdodi LA. Neurocognitive Profiles Associated With Limited English Proficiency in Cognitively Intact Adults. Arch Clin Neuropsychol 2022;37:1579-1600. PMID: 35694764; DOI: 10.1093/arclin/acac019.
Abstract
OBJECTIVE The objective of the present study was to examine the neurocognitive profiles associated with limited English proficiency (LEP). METHOD A brief neuropsychological battery including measures with high (HVM) and low verbal mediation (LVM) was administered to 80 university students: 40 native speakers of English (NSEs) and 40 with LEP. RESULTS Consistent with previous research, individuals with LEP performed more poorly on HVM measures and equivalently to NSEs on LVM measures, with some notable exceptions. CONCLUSIONS Low scores on HVM tests should not be interpreted as evidence of acquired cognitive impairment in individuals with LEP, because these measures may systematically underestimate cognitive ability in this population. These findings have important clinical and educational implications.
Affiliation(s)
- Maame A Brantuo: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Kelly An: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Renee K Biss: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Sami Ali: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
11. Ali S, Elliott L, Biss RK, Abumeeiz M, Brantuo M, Kuzmenka P, Odenigbo P, Erdodi LA. The BNT-15 provides an accurate measure of English proficiency in cognitively intact bilinguals - a study in cross-cultural assessment. Appl Neuropsychol Adult 2022;29:351-363. PMID: 32449371; DOI: 10.1080/23279095.2020.1760277.
Abstract
This study was designed to replicate earlier reports on the utility of the Boston Naming Test - Short Form (BNT-15) as an index of limited English proficiency (LEP). Twenty-eight English-Arabic bilingual student volunteers were administered the BNT-15 as part of a brief battery of cognitive tests. The majority (23) were women, and half had LEP. Mean age was 21.1 years. The BNT-15 was an excellent psychometric marker of LEP status (area under the curve: .990-.995). Participants with LEP underperformed on several cognitive measures (verbal comprehension, visuomotor processing speed, single word reading, and performance validity tests). Although no participant with LEP failed the accuracy cutoff on the Word Choice Test, 35.7% of them failed the time cutoff. Overall, LEP was associated with an increased risk of failing performance validity tests. Previously published BNT-15 validity cutoffs had unacceptably low specificity (.33-.52) among participants with LEP. The BNT-15 has the potential to serve as a quick and effective objective measure of LEP. Students with LEP may need academic accommodations to compensate for slower test completion times. Likewise, LEP status should be considered grounds for discounting performance validity test failures, to protect against false positive errors.
Affiliation(s)
- Sami Ali: Department of Psychology, University of Windsor, Windsor, Canada
- Lauren Elliott: Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Renee K Biss: Department of Psychology, University of Windsor, Windsor, Canada
- Mustafa Abumeeiz: Behaviour-Cognition-Neuroscience Program, University of Windsor, Windsor, Canada
- Maame Brantuo: Department of Psychology, University of Windsor, Windsor, Canada
- Paula Odenigbo: Department of Psychology, University of Windsor, Windsor, Canada
- Laszlo A Erdodi: Department of Psychology, University of Windsor, Windsor, Canada
12. Ali S, Brantuo MA, Cutler L, Kennedy A, Erdodi LA. Limited English proficiency inhibits auditory verbal learning in cognitively healthy young adults - exploring culturally responsive diagnostic and educational safeguards. Appl Neuropsychol Child 2022;12:97-103. PMID: 35148226; DOI: 10.1080/21622965.2022.2034628.
Abstract
This study was designed to examine the effect of limited English proficiency (LEP) on the Hopkins Verbal Learning Test-Revised (HVLT-R). The HVLT-R was administered to 28 undergraduate student volunteers. Half were native speakers of English (NSE), half had LEP. The LEP sample performed significantly below NSE on individual acquisition trials and delayed free recall (large effects). In addition, participants with LEP scored 1.5-2 SDs below the normative mean. There was no difference in performance during recognition testing. LEP status was associated with a clinically significant deficit on the HVLT-R in a sample of cognitively healthy university students. Results suggest that low scores on auditory verbal learning tests in individuals with LEP should not be automatically interpreted as evidence of memory impairment or learning disability. LEP should be considered as grounds for academic accommodations. The generalizability of the findings is constrained by the small sample size.
Affiliation(s)
- Sami Ali: Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maame A Brantuo: Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laura Cutler: Department of Psychology, University of Windsor, Windsor, ON, Canada
- Arianna Kennedy: School of Social Work, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi: Department of Psychology, University of Windsor, Windsor, ON, Canada
13. Dunn A, Pyne S, Tyson B, Roth R, Shahein A, Erdodi L. Critical Item Analysis Enhances the Classification Accuracy of the Logical Memory Recognition Trial as a Performance Validity Indicator. Dev Neuropsychol 2021;46:327-346. PMID: 34525856; DOI: 10.1080/87565641.2021.1956499.
Abstract
OBJECTIVE: To replicate previous research on the Logical Memory Recognition trial (LMRecog) and perform a critical item analysis. METHOD: Performance validity was psychometrically operationalized in a mixed clinical sample of 213 adults. Classification accuracy was computed for the LMRecog and for nine critical items (CR-9). RESULTS: LMRecog ≤20 produced a good combination of sensitivity (.30-.35) and specificity (.89-.90). CR-9 ≥5 and ≥6 had comparable classification accuracy. CR-9 ≥5 increased sensitivity by 4% over LMRecog ≤20; CR-9 ≥6 increased specificity by 6-8% over LMRecog ≤20; and CR-9 ≥7 increased specificity by 8-15%. CONCLUSIONS: Critical item analysis enhances the classification accuracy of the optimal LMRecog cutoff (≤20).
Affiliation(s)
- Alexa Dunn: Department of Psychology, University of Windsor, Windsor, Canada
- Sadie Pyne: Windsor Neuropsychology, Windsor, Canada
- Brad Tyson: Neuroscience Institute, EvergreenHealth Medical Center, Kirkland, WA, USA
- Robert Roth: Neuropsychology Services, Dartmouth-Hitchcock Medical Center, USA
- Ayman Shahein: Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
- Laszlo Erdodi: Department of Psychology, University of Windsor, Windsor, Canada
14. Erdodi LA. Five shades of gray: Conceptual and methodological issues around multivariate models of performance validity. NeuroRehabilitation 2021;49:179-213. PMID: 34420986; DOI: 10.3233/nre-218020.
Abstract
OBJECTIVE This study was designed to empirically investigate the signal detection profile of various multivariate models of performance validity tests (MV-PVTs) and explore several contested assumptions underlying validity assessment in general and MV-PVTs specifically. METHOD Archival data were collected from 167 patients (52.4% male; mean age = 39.7 years) clinically evaluated subsequent to a TBI. Performance validity was psychometrically defined using two free-standing PVTs and five composite measures, each based on five embedded PVTs. RESULTS MV-PVTs had superior classification accuracy compared to univariate cutoffs. The similarity between predictor and criterion PVTs influenced signal detection profiles. False positive rates (FPR) in MV-PVTs can be effectively controlled using more stringent multivariate cutoffs. In addition to Pass and Fail, Borderline is a legitimate third outcome of performance validity assessment. Failing memory-based PVTs was associated with elevated self-reported psychiatric symptoms. CONCLUSIONS Concerns about elevated FPR in MV-PVTs are unsubstantiated. In fact, MV-PVTs are psychometrically superior to their individual components. Instrumentation artifacts are endemic to PVTs, and represent both a threat and an opportunity during the interpretation of a given neurocognitive profile. There is no such thing as too much information in performance validity assessment. Psychometric issues should be evaluated based on empirical, not theoretical, models. As the number and severity of embedded PVT failures accumulate, assessors must consider the possibility of non-credible presentation and its clinical implications for neurorehabilitation.
15. Messa I, Holcomb M, Lichtenstein JD, Tyson BT, Roth RM, Erdodi LA. They are not destined to fail: a systematic examination of scores on embedded performance validity indicators in patients with intellectual disability. Aust J Forensic Sci 2021. DOI: 10.1080/00450618.2020.1865457.
Affiliation(s)
- Isabelle Messa: Department of Psychology, University of Windsor, Windsor, ON, Canada
- Brad T Tyson: Neuropsychological Service, EvergreenHealth Medical Center, Kirkland, WA, USA
- Robert M Roth: Department of Psychiatry, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Laszlo A Erdodi: Department of Psychology, University of Windsor, Windsor, ON, Canada
16. Abeare CA, An K, Tyson B, Holcomb M, Cutler L, May N, Erdodi LA. The emotion word fluency test as an embedded performance validity indicator - Alone and in a multivariate validity composite. Appl Neuropsychol Child 2021;11:713-724. PMID: 34424798; DOI: 10.1080/21622965.2021.1939027.
Abstract
OBJECTIVE This project was designed to cross-validate existing performance validity cutoffs embedded within measures of verbal fluency (FAS and animals) and develop new ones for the Emotion Word Fluency Test (EWFT), a novel measure of category fluency. METHOD The classification accuracy of the verbal fluency tests was examined in two samples (70 cognitively healthy university students and 52 clinical patients) against psychometrically defined criterion measures. RESULTS A demographically adjusted T-score of ≤31 on the FAS was specific (.88-.97) to noncredible responding in both samples. Animals T ≤ 29 achieved high specificity (.90-.93) among students at .27-.38 sensitivity. A more conservative cutoff (T ≤ 27) was needed in the patient sample for a similar combination of sensitivity (.24-.45) and specificity (.87-.93). An EWFT raw score ≤5 was highly specific (.94-.97) but insensitive (.10-.18) to invalid performance. Failing multiple cutoffs improved specificity (.90-1.00) at variable sensitivity (.19-.45). CONCLUSIONS Results help resolve the inconsistency in previous reports, and confirm the overall utility of existing verbal fluency tests as embedded validity indicators. Multivariate models of performance validity assessment are superior to single indicators. The clinical utility and limitations of the EWFT as a novel measure are discussed.
Affiliation(s)
- Christopher A Abeare: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Kelly An: Private Practice, London, Ontario, Canada
- Brad Tyson: EvergreenHealth Medical Center, Kirkland, Washington, USA
- Matthew Holcomb: Jefferson Neurobehavioral Group, New Orleans, Louisiana, USA
- Laura Cutler: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Natalie May: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, Ontario, Canada
17. Weitzner DS, Calamia M, Parsons TD. Test-retest reliability and practice effects of the virtual environment grocery store (VEGS). J Clin Exp Neuropsychol 2021;43:547-557. PMID: 34376099; DOI: 10.1080/13803395.2021.1960277.
Abstract
INTRODUCTION The use of virtual reality (VR) technology has been suggested as a method to increase the ecological validity of neuropsychological assessments. Although validity has been a focus of VR research, little attention has been paid to other psychometric properties such as test-retest reliability and practice effects. Practice effects are common on traditional neuropsychological tests and can be influenced by novelty. Because VR is not yet widely used, participants were expected to demonstrate larger practice effects on the VR task than on paper-and-pencil testing. METHOD To compare test-retest reliability and practice effects in VR and traditional paper-and-pencil testing, the Virtual Environment Grocery Store (VEGS) and the California Verbal Learning Test - Second Edition (CVLT-II) were administered to healthy adults (n = 44). Participants completed follow-up testing approximately 2 weeks after the initial visit. RESULTS Significant practice effects of similar magnitude were observed on memory scores (i.e., total learning, long-delay free recall, and long-delay cued recall) for both the VEGS and the CVLT-II. The VEGS and CVLT-II memory scores also demonstrated strong test-retest reliability (rs > .71). Lastly, total learning (d = .32) and long-delay cued recall (d = .70) scores were significantly higher on the CVLT-II than on the VEGS (ps < .01). CONCLUSIONS Results suggest similar test-retest reliability and practice effects for the VEGS and the CVLT-II, although the VEGS has the benefit of being an immersive technology that simulates an everyday activity. The study replicated past findings that the VEGS is more difficult than the CVLT-II, which may be a useful property for clinical assessment.
Affiliation(s)
- Daniel S Weitzner: Psychology Department, Louisiana State University, Baton Rouge, LA, USA
- Matthew Calamia: Psychology Department, Louisiana State University, Baton Rouge, LA, USA
- Thomas D Parsons: Computational Neuropsychology & Simulation, University of North Texas, Denton, TX, USA; College of Information, University of North Texas, Denton, TX, USA; iCenter for Affective Neurotechnologies (iCAN), University of North Texas, Denton, TX, USA
18. Abeare K, Romero K, Cutler L, Sirianni CD, Erdodi LA. Flipping the Script: Measuring Both Performance Validity and Cognitive Ability with the Forced Choice Recognition Trial of the RCFT. Percept Mot Skills 2021;128:1373-1408. PMID: 34024205; PMCID: PMC8267081; DOI: 10.1177/00315125211019704.
Abstract
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFT FCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery, and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 and ≤17), the RCFT FCR remained specific (.84-1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFT FCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFT FCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
Affiliation(s)
- Kaitlyn Abeare: Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Kristoffer Romero: Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laura Cutler: Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi: Department of Psychology, University of Windsor, Windsor, Ontario, Canada
19. Carvalho LDF, Reis A, Colombarolli MS, Pasian SR, Miguel FK, Erdodi LA, Viglione DJ, Giromini L. Discriminating Feigned from Credible PTSD Symptoms: a Validation of a Brazilian Version of the Inventory of Problems-29 (IOP-29). Psychol Inj Law 2021. DOI: 10.1007/s12207-021-09403-3.
20. Suchy Y, Mullen CM, Brothers S, Niermeyer MA. Interpreting executive and lower-order error scores on the timed subtests of the Delis-Kaplan Executive Function System (D-KEFS) battery: Error analysis across the adult lifespan. J Clin Exp Neuropsychol 2020;42:982-997. PMID: 33267731; DOI: 10.1080/13803395.2020.1832203.
Abstract
OBJECTIVE The Delis-Kaplan Executive Function System (D-KEFS) is a battery of tests designed to measure executive functions (EF). Additionally, the D-KEFS contains lower-order tasks, designed to control for speed of visual scanning, sequencing, and verbal and graphomotor output. The construct and criterion validities of D-KEFS scores that are time-based are well established. However, the constructs measured by the D-KEFS error scores are poorly understood, making clinical interpretations of such scores difficult. This study examined the construct validity of D-KEFS errors committed on EF tasks and tasks designed to measure lower-order processes (i.e., non-EF tasks), across the adult lifespan. METHOD Participants were 427 adults (18-93 years) who completed the timed subtests of the D-KEFS. Four hundred two participants also completed the Push-Turn-Taptap (PTT; a separate measure of EF) to allow cross-validation. RESULTS General linear regressions showed that D-KEFS errors committed on the EF tests were associated with EF timed performance (assessed using the D-KEFS time-based scores and the PTT), but only among older adults. Importantly, errors committed on the D-KEFS tasks of lower-order processes were also associated with D-KEFS time-based EF performance, and this relationship held across the adult lifespan. CONCLUSIONS These findings suggest that among older adults EF errors on the D-KEFS can be interpreted as indices of EF, but such interpretations are not automatically warranted for younger adults. Additionally, errors committed on non-EF tasks contained within the D-KEFS battery can be interpreted as reflecting EF weaknesses across the adult lifespan.
Affiliation(s)
- Yana Suchy: Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Christine M Mullen: Department of Physical Medicine & Rehabilitation, University of Utah, Salt Lake City, UT, USA
- Stacey Brothers: Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Madison A Niermeyer: Department of Physical Medicine & Rehabilitation, University of Utah, Salt Lake City, UT, USA
21. Erdodi LA, Abeare CA. Stronger Together: The Wechsler Adult Intelligence Scale-Fourth Edition as a Multivariate Performance Validity Test in Patients with Traumatic Brain Injury. Arch Clin Neuropsychol 2020;35:188-204. PMID: 31696203; DOI: 10.1093/arclin/acz032.
Abstract
OBJECTIVE This study was designed to evaluate the classification accuracy of a multivariate model of performance validity assessment using embedded validity indicators (EVIs) within the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). METHOD Archival data were collected from 100 adults with traumatic brain injury (TBI) consecutively referred for neuropsychological assessment in a clinical setting. The classification accuracy of previously published individual EVIs nested within the WAIS-IV and a composite measure based on six independent EVIs were evaluated against psychometrically defined non-credible performance. RESULTS Univariate validity cutoffs based on age-corrected scaled scores on Coding, Symbol Search, Digit Span, Letter-Number Sequencing, Vocabulary minus Digit Span, and Coding minus Symbol Search were strong predictors of psychometrically defined non-credible responding. Failing ≥3 of these six EVIs at the liberal cutoff improved specificity (.91-.95) over univariate cutoffs (.78-.93). Conversely, failing ≥2 EVIs at the more conservative cutoff increased and stabilized sensitivity (.43-.67) compared to univariate cutoffs (.11-.63) while maintaining consistently high specificity (.93-.95). CONCLUSIONS In addition to being a widely used test of cognitive functioning, the WAIS-IV can also function as a measure of performance validity. Consistent with previous research, combining information from multiple EVIs enhanced the classification accuracy of individual cutoffs and provided more stable parameter estimates. If the current findings are replicated in larger, diagnostically and demographically heterogeneous samples, the WAIS-IV has the potential to become a powerful multivariate model of performance validity assessment. BRIEF SUMMARY Using a combination of multiple performance validity indicators embedded within the subtests of the Wechsler Adult Intelligence Scale, the credibility of the response set can be established with a high level of confidence. Multivariate models improve classification accuracy over individual tests. Relying on existing test data is a cost-effective approach to performance validity assessment.
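The two-tier decision rule reported here reduces to simple counting over the six EVIs. A minimal sketch (the cutoff values below are illustrative stand-ins, not the published WAIS-IV cutoffs):

```python
def flag_noncredible(scores, liberal, conservative):
    """Two-tier multivariate rule: non-credible if >=3 EVIs miss the
    liberal cutoff or >=2 miss the conservative cutoff. All arguments
    are dicts keyed by EVI name."""
    liberal_fails = sum(scores[k] <= liberal[k] for k in scores)
    conservative_fails = sum(scores[k] <= conservative[k] for k in scores)
    return liberal_fails >= 3 or conservative_fails >= 2

# Hypothetical age-corrected scaled scores for the six EVIs named above
# (the two difference scores are entered as plain values here).
scores = {"CD": 5, "SS": 6, "DS": 7, "LNS": 6, "VOC-DS": 4, "CD-SS": 8}
liberal = {k: 6 for k in scores}       # illustrative cutoff, not published
conservative = {k: 4 for k in scores}  # illustrative cutoff, not published

print(flag_noncredible(scores, liberal, conservative))  # True in this example
```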
Affiliation(s)
- Laszlo A Erdodi: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare: Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
22. McCaffrey RJ, Lynch JK. Base Rates of Performance Validity Test Failure Among Children and Young Adult Litigants with Histories of Elevated Blood Lead Levels. J Pediatr Neuropsychol 2020. DOI: 10.1007/s40817-020-00091-6.
23. Abeare CA, Hurtubise JL, Cutler L, Sirianni C, Brantuo M, Makhzoum N, Erdodi LA. Introducing a forced choice recognition trial to the Hopkins Verbal Learning Test – Revised. Clin Neuropsychol 2020;35:1442-1470. DOI: 10.1080/13854046.2020.1779348.
Affiliation(s)
- Laura Cutler: Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maame Brantuo: Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nadeen Makhzoum: Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi: Department of Psychology, University of Windsor, Windsor, ON, Canada
24. Hurtubise J, Baher T, Messa I, Cutler L, Shahein A, Hastings M, Carignan-Querqui M, Erdodi LA. Verbal fluency and digit span variables as performance validity indicators in experimentally induced malingering and real world patients with TBI. Appl Neuropsychol Child 2020;9:337-354. DOI: 10.1080/21622965.2020.1719409.
Affiliation(s)
- Tabarak Baher: Department of Psychology, University of Windsor, Windsor, Canada
- Isabelle Messa: Department of Psychology, University of Windsor, Windsor, Canada
- Laura Cutler: Department of Psychology, University of Windsor, Windsor, Canada
- Ayman Shahein: Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
- Laszlo A. Erdodi: Department of Psychology, University of Windsor, Windsor, Canada
25. Olla P, Rykulski N, Hurtubise JL, Bartol S, Foote R, Cutler L, Abeare K, McVinnie N, Sabelli AG, Hastings M, Erdodi LA. Short-term effects of cannabis consumption on cognitive performance in medical cannabis patients. Appl Neuropsychol Adult 2019;28:647-657. PMID: 31790276; DOI: 10.1080/23279095.2019.1681424.
Abstract
This observational study examined the acute cognitive effects of cannabis. We hypothesized that cognitive performance would be negatively affected by acute cannabis intoxication. Twenty-two medical cannabis patients from Southwestern Ontario completed the study. The majority (n = 13) were male. Mean age was 36.0 years, and mean level of education was 13.7 years. Participants were administered the same brief neurocognitive battery three times during a six-hour period: at baseline ("Baseline"), once after they consumed a 20% THC cannabis product ("THC"), and once again several hours later ("Recovery"). The average self-reported level of cannabis intoxication prior to the second assessment (i.e., during THC) was 5.1 out of 10. Contrary to expectations, performance on neuropsychological tests remained stable or even improved during the acute intoxication stage (THC; d: .49-.65, medium effect), and continued to increase during Recovery (d: .45-.77, medium-large effect). Interestingly, the failure rate on performance validity indicators increased during THC. Contrary to our hypothesis, there was no psychometric evidence for a decline in cognitive ability following THC intoxication. There are several possible explanations for this finding but, in the absence of a control group, no definitive conclusion can be reached at this time.
Affiliation(s)
- Nicholas Rykulski: College of Human Medicine, Michigan State University, Lansing, MI, USA
- Stephen Bartol: School of Medicine, Wayne State University, Detroit, MI, USA
- Laura Cutler: Department of Psychology, University of Windsor, Windsor, ON, Canada
- Kaitlyn Abeare: Department of Psychology, University of Windsor, Windsor, ON, Canada
- Nora McVinnie: Brain-Cognition-Neuroscience Program, University of Windsor, Windsor, ON, Canada
- Alana G Sabelli: Department of Psychology, University of Windsor, Windsor, ON, Canada
- Maurissa Hastings: Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A Erdodi: Department of Psychology, University of Windsor, Windsor, ON, Canada
26. Bernstein JPK, Roye S, Weitzner D, Calamia M. Evaluating the construct validity of the King-Devick test in a psychological outpatient clinical sample. Appl Neuropsychol Adult 2019;28:627-632. PMID: 31612728; DOI: 10.1080/23279095.2019.1678159.
Abstract
The King-Devick test (K-D) has demonstrated sensitivity as a screening measure of ocular motor and cognitive problems. Despite its empirical support in the assessment of patients with certain injuries and disorders (e.g., concussion, reading disorders), less is known about the construct validity of the K-D. This study examined this topic in an outpatient, diagnostically heterogeneous clinical sample. A total of 70 individuals seen for an outpatient psychoeducational evaluation completed the K-D in addition to measures of intellectual abilities, speeded reading ability, simple and sustained attention, and executive functioning. Pearson correlation coefficients revealed that poorer K-D performance was associated with poorer processing speed, speeded reading ability, and response time to target stimuli (rs = .26-.31, ps < .05). K-D performance was unrelated to other intellectual abilities, other aspects of attention, and executive functioning (all ps > .05). Results suggest that the K-D demonstrates good convergent and discriminant validity in a heterogeneous outpatient clinical sample including individuals with attention-deficit hyperactivity disorder, specific learning disorders, and a number of different depressive and anxiety disorders. Findings support its wider use as a measure of reading ability and processing speed in clinical contexts.
Affiliation(s)
- John P K Bernstein: Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Scott Roye: Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Daniel Weitzner: Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
- Matthew Calamia: Department of Psychology, Louisiana State University, Baton Rouge, LA, USA
27. Geographic Variation and Instrumentation Artifacts: in Search of Confounds in Performance Validity Assessment in Adults with Mild TBI. Psychol Inj Law 2019. DOI: 10.1007/s12207-019-09354-w.