1. Tierney SM, Matchanova A, Miller BI, Troyanskaya M, Romesser J, Sim A, Pastorek NJ. Cognitive "success" in the setting of performance validity test failure. J Clin Exp Neuropsychol 2024; 46:46-54. [PMID: 37555316] [DOI: 10.1080/13803395.2023.2244161]
Abstract
BACKGROUND Although studies have shown unique variance contributions from performance invalidity, it is difficult to interpret the meaning of cognitive data in the setting of performance validity test (PVT) failure. The current study aimed to examine cognitive outcomes in this context. METHOD Two hundred and twenty-two veterans with a history of mild traumatic brain injury referred for clinical evaluation completed cognitive and performance validity measures. Standardized scores were characterized as Within Normal Limits (≥16th normative percentile) or Below Normal Limits (<16th percentile). Cognitive outcomes were examined across four commonly used PVTs. Self-reported employment and student status were used as indicators of "productivity" to assess potential functional differences related to lower cognitive performance. RESULTS Among participants who performed in the invalid range on Test of Memory Malingering trial 1, the Word Memory Test, the Wechsler Adult Intelligence Scale-Fourth Edition Digit Span age-corrected scaled score, or the California Verbal Learning Test-Second Edition Forced Choice index, 16-88% earned scores broadly within normal limits across cognitive testing. Depending on which PVT measure was applied, the average number of cognitive performances below the 16th percentile ranged from 5 to 7 of 14 tasks. There were no differences in the total number of below normal limits performances on cognitive measures between "productive" and "non-productive" participants (T = 1.65, p = 1.00). CONCLUSIONS Results of the current study suggest that the range of within normal limits cognitive performance in the context of failed PVTs varies greatly. Importantly, our findings indicate that neurocognitive data may still provide important practical information regarding cognitive abilities, despite poor PVT outcomes.
Further, given that rates of below normal limits cognitive performance did not differ among "productivity" groups, results have important implications for functional abilities and recommendations in a clinical setting.
Affiliation(s)
- Savanna M Tierney
  - Rehabilitation and Extended Care Line, Michael E DeBakey VA Medical Center, Houston, TX, USA
- Anastasia Matchanova
  - Rehabilitation and Extended Care Line, Michael E DeBakey VA Medical Center, Houston, TX, USA
- Brian I Miller
  - Rehabilitation and Extended Care Line, Michael E DeBakey VA Medical Center, Houston, TX, USA
  - H. Ben Taub Department of Physical Medicine and Rehabilitation, Baylor College of Medicine, Houston, TX, USA
- Maya Troyanskaya
  - Rehabilitation and Extended Care Line, Michael E DeBakey VA Medical Center, Houston, TX, USA
  - H. Ben Taub Department of Physical Medicine and Rehabilitation, Baylor College of Medicine, Houston, TX, USA
- Jennifer Romesser
  - Department of Psychology, VA Salt Lake City Health Care System, Salt Lake City, UT, USA
- Anita Sim
  - Physical Medicine & Rehabilitation, Minneapolis VA Health Care System, Minneapolis, MN, USA
- Nicholas J Pastorek
  - Rehabilitation and Extended Care Line, Michael E DeBakey VA Medical Center, Houston, TX, USA
  - H. Ben Taub Department of Physical Medicine and Rehabilitation, Baylor College of Medicine, Houston, TX, USA
2. Scott JC, Moore TM, Roalf DR, Satterthwaite TD, Wolf DH, Port AM, Butler ER, Ruparel K, Nievergelt CM, Risbrough VB, Baker DG, Gur RE, Gur RC. Development and application of novel performance validity metrics for computerized neurocognitive batteries. J Int Neuropsychol Soc 2023; 29:789-797. [PMID: 36503573] [PMCID: PMC10258222] [DOI: 10.1017/s1355617722000893]
Abstract
OBJECTIVES Data from neurocognitive assessments may not be accurate in the context of factors impacting validity, such as disengagement, unmotivated responding, or intentional underperformance. Performance validity tests (PVTs) were developed to address these phenomena and assess underperformance on neurocognitive tests. However, PVTs can be burdensome, rely on cutoff scores that reduce information, do not examine potential variations in task engagement across a battery, and are typically not well suited to the acquisition of large cognitive datasets. Here we describe the development of novel performance validity measures that could address some of these limitations by leveraging psychometric concepts using data embedded within the Penn Computerized Neurocognitive Battery (PennCNB). METHODS We first developed these validity measures using simulations of invalid response patterns with parameters drawn from real data. Next, we examined their application in two large, independent samples: (1) children and adolescents from the Philadelphia Neurodevelopmental Cohort (n = 9498); and (2) adult servicemembers from the Marine Resiliency Study-II (n = 1444). RESULTS Our performance validity metrics detected patterns of invalid responding in simulated data, even at subtle levels. Furthermore, a combination of these metrics significantly predicted previously established validity rules for these tests in both the developmental and adult datasets. Moreover, most clinical diagnostic groups did not show reduced validity estimates. CONCLUSIONS These results provide proof-of-concept evidence for multivariate, data-driven performance validity metrics. These metrics offer a novel method for determining performance validity on individual neurocognitive tests that is scalable, applicable across different tests, less burdensome, and dimensional. However, further research on their application is needed.
Affiliation(s)
- J. Cobb Scott
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  - VISN4 Mental Illness Research, Education, and Clinical Center at the Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA
- Tyler M. Moore
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- David R. Roalf
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Theodore D. Satterthwaite
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Daniel H. Wolf
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Allison M. Port
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ellyn R. Butler
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Kosha Ruparel
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Caroline M. Nievergelt
  - Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA
  - Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Victoria B. Risbrough
  - Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA
  - Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Dewleen G. Baker
  - Center of Excellence for Stress and Mental Health, VA San Diego Healthcare System, San Diego, CA, USA
  - Department of Psychiatry, University of California San Diego (UCSD), San Diego, CA, USA
- Raquel E. Gur
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  - Lifespan Brain Institute, Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Ruben C. Gur
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  - VISN4 Mental Illness Research, Education, and Clinical Center at the Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA
  - Lifespan Brain Institute, Department of Child and Adolescent Psychiatry and Behavioral Sciences, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
3. Bajjaleh C, Braw YC, Elkana O. Adaptation and initial validation of the Arabic version of the Word Memory Test (WMT ARB). Appl Neuropsychol Adult 2023; 30:204-213. [PMID: 34043924] [DOI: 10.1080/23279095.2021.1923495]
Abstract
BACKGROUND The feigning of cognitive impairment is common in neuropsychological assessments, especially in medicolegal settings. The Word Memory Test (WMT) is a forced-choice recognition memory performance validity test (PVT) that is widely used to detect noncredible performance. Though translated into several languages, the WMT has not been adapted for one of the most widely spoken languages, Arabic. The aim of the current study was to evaluate the convergent validity of the Arabic adaptation of the WMT (WMTARB) among Israeli Arabic speakers. METHODS We adapted the WMT to Arabic using the back-translation method and in accordance with relevant guidelines. We then randomly assigned healthy Arabic-speaking adults (N = 63) to either a simulation or an honest control condition. The participants then performed neuropsychological tests which included the WMTARB and the Test of Memory Malingering (TOMM), a well-validated nonverbal PVT. RESULTS The WMTARB had high split-half reliability, and its measures were significantly correlated with those of the TOMM (p < .001). High concordance was found in the classification of participants using the WMTARB and TOMM (specificity = 94.29% and sensitivity = 100% using the conventional TOMM trial 2 cutoff as the gold standard). As expected, simulators' accuracy on the WMTARB was significantly lower than that of honest controls. None of the demographic variables significantly correlated with WMTARB measures. CONCLUSION The WMTARB shows initial evidence of reliability and validity, emphasizing its potential use in the large population of Arabic speakers and its universality in detecting noncredible performance. The findings, however, are preliminary and mandate validation in clinical settings.
Affiliation(s)
- Christine Bajjaleh
  - Department of Psychology, The Academic College of Tel Aviv-Yaffo, Tel Aviv-Yaffo, Israel
- Yoram C Braw
  - Department of Psychology, Ariel University, Ariel, Israel
- Odelia Elkana
  - Department of Psychology, The Academic College of Tel Aviv-Yaffo, Tel Aviv-Yaffo, Israel
4. Exploring the Structured Inventory of Malingered Symptomatology in Patients with Multiple Sclerosis. Psychol Inj Law 2021. [DOI: 10.1007/s12207-021-09424-y]
5. Lace JW, Merz ZC, Galioto R. Examining the Clinical Utility of Selected Memory-Based Embedded Performance Validity Tests in Neuropsychological Assessment of Patients with Multiple Sclerosis. Neurol Int 2021; 13:477-486. [PMID: 34698256] [PMCID: PMC8544445] [DOI: 10.3390/neurolint13040047]
Abstract
Within neuropsychological assessment, clinicians are responsible for ensuring the validity of obtained cognitive data. As such, increased attention is being paid to performance validity in patients with multiple sclerosis (pwMS). Experts have proposed batteries of neuropsychological tests for use in this population, though none contain recommendations for standalone performance validity tests (PVTs). The California Verbal Learning Test, Second Edition (CVLT-II) and Brief Visuospatial Memory Test, Revised (BVMT-R)—both of which are included in the aforementioned recommended neuropsychological batteries—include previously validated embedded PVTs (which offer some advantages, including expedience and reduced costs), but no prior work has explored their utility in pwMS. The purpose of the present study was to determine the potential clinical utility of embedded PVTs to detect the signal of non-credibility, operationally defined as below-criterion standalone PVT performance. One hundred thirty-three patients (M age = 48.28; 76.7% women; 85.0% White) with MS were referred for neuropsychological assessment at a large, Midwestern academic medical center. Patients were placed into "credible" (n = 100) or "noncredible" (n = 33) groups based on a standalone PVT criterion. Classification statistics for four CVLT-II and BVMT-R PVTs of interest in isolation were poor (AUCs = 0.58-0.62). Several arithmetic and logistic regression-derived multivariate formulas were calculated, all of which similarly demonstrated poor discriminability (AUCs = 0.61-0.64). Although embedded PVTs may arguably maximize efficiency and minimize test burden in pwMS, common ones in the CVLT-II and BVMT-R may not be psychometrically appropriate, sufficiently sensitive, or substitutable for standalone PVTs in this population.
Clinical neuropsychologists who evaluate such patients are encouraged to include standalone PVTs in their assessment batteries to ensure that clinical care conclusions drawn from neuropsychological data are valid.
Affiliation(s)
- John W. Lace
  - Neurological Institute, Section of Neuropsychology, Cleveland Clinic Foundation, Cleveland, OH 44195, USA
- Zachary C. Merz
  - LeBauer Department of Neurology, The Moses H. Cone Memorial Hospital, Greensboro, NC 27401, USA
- Rachel Galioto
  - Neurological Institute, Section of Neuropsychology, Cleveland Clinic Foundation, Cleveland, OH 44195, USA
  - Mellen Center for Multiple Sclerosis, Cleveland Clinic Foundation, Cleveland, OH 44195, USA
6. Zhang X, Gao R, Zhang C, Chen H, Wang R, Zhao Q, Zhu T, Chen C. Evidence for Cognitive Decline in Chronic Pain: A Systematic Review and Meta-Analysis. Front Neurosci 2021; 15:737874. [PMID: 34630023] [PMCID: PMC8492915] [DOI: 10.3389/fnins.2021.737874]
Abstract
Background: People with chronic pain (CP) sometimes report impaired cognitive function, including deficits of attention, memory, executive planning, and information processing. However, the association between CP and cognitive decline remains unclear. Our study aimed to assess the association of CP, as a risk factor, with cognitive decline among adults. Methods: We included data from clinical studies. Publications were identified using a systematic search strategy in the PubMed, Embase, and Cochrane Library databases from inception to October 10, 2020. We used the mean cognitive outcome data and the standard deviations from each group. The standardized mean difference (SMD) or odds ratio (OR) and 95% confidence intervals (CI) were calculated for each cognitive decline outcome. I²-values were assessed to quantify the heterogeneities. Results: We included 37 studies with a total of 52,373 patients with CP and 80,434 healthy control participants. Because these studies used different evaluative methods, we analyzed them by instrument. CP was associated with cognitive decline when the short-form 36 health survey questionnaire (SF-36) mental component summary (SMD = -1.50, 95% CI = -2.19 to -0.81), the Montreal Cognitive Assessment (SMD = -1.11, 95% CI = -1.60 to -0.61), performance validity testing (SMD = 3.05, 95% CI = 1.74 to 4.37), or operation span (SMD = -1.83, 95% CI = -2.98 to -0.68) was used. However, no significant association emerged in studies using the International Classification of Diseases and Related Health Problems classification (OR = 1.58, 95% CI = 0.97 to 2.56), the Mini-Mental State Examination (SMD = -0.42, 95% CI = -0.94 to 0.10; OR = 1.14, 95% CI = 0.91 to 1.42), or the Repeatable Battery for the Assessment of Neuropsychological Status memory component (SMD = -0.06, 95% CI = -0.37 to 0.25).
Conclusion: There may be an association between CP and the incidence of cognitive decline when certain evaluative methods are used, such as the short-form 36 health survey questionnaire, Montreal Cognitive Assessment, performance validity testing, and operation span.
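The SMDs and CIs reported in this abstract are pooled from group means and standard deviations. As a minimal sketch of that computation (Cohen's d with a pooled SD and a normal-approximation 95% CI; the numbers below are invented for illustration and are not data from any included study):

```python
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    """Standardized mean difference (Cohen's d, pooled SD) between two
    groups, with a normal-approximation 95% confidence interval."""
    # Pooled standard deviation across both groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Approximate standard error of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, d - z * se, d + z * se

# Hypothetical example: a pain group scoring lower than controls
d, lo, hi = smd_with_ci(m1=24.0, sd1=3.0, n1=50, m2=26.0, sd2=3.0, n2=50)
# d ≈ -0.67, 95% CI ≈ (-1.07, -0.26): the pain group scores below controls
```

A negative d (CI excluding zero) corresponds to the "cognitive decline" direction for most instruments above; the PVT result is reported with the opposite sign because higher values there indicate more failures.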
Affiliation(s)
- Xueying Zhang
  - Department of Anesthesiology and Translational Neuroscience Center, West China Hospital, Sichuan University, Chengdu, China
- Rui Gao
  - Department of Anesthesiology and Translational Neuroscience Center, West China Hospital, Sichuan University, Chengdu, China
- Changteng Zhang
  - Department of Anesthesiology and Translational Neuroscience Center, West China Hospital, Sichuan University, Chengdu, China
- Hai Chen
  - Precision Medicine Research Center, West China Hospital, Sichuan University, Chengdu, China
- Ruiqun Wang
  - Department of Anesthesiology and Translational Neuroscience Center, West China Hospital, Sichuan University, Chengdu, China
- Qi Zhao
  - Department of Anesthesiology and Translational Neuroscience Center, West China Hospital, Sichuan University, Chengdu, China
- Tao Zhu
  - Department of Anesthesiology and Translational Neuroscience Center, West China Hospital, Sichuan University, Chengdu, China
- Chan Chen
  - Department of Anesthesiology and Translational Neuroscience Center, West China Hospital, Sichuan University, Chengdu, China
7. Sanborn V, Lace J, Gunstad J, Galioto R. Considerations regarding noncredible performance in the neuropsychological assessment of patients with multiple sclerosis: A case series. Appl Neuropsychol Adult 2021; 30:458-467. [PMID: 34514920] [DOI: 10.1080/23279095.2021.1971229]
Abstract
Determining the validity of data during clinical neuropsychological assessment is crucial for proper interpretation, and extensive literature has emphasized myriad methods of doing so in diverse samples. However, little research has considered noncredible presentation in persons with multiple sclerosis (pwMS). PwMS often experience one or more factors known to impact the validity of data, including major neurocognitive impairment, psychological distress/psychogenic interference, and secondary gain. This case series aimed to illustrate the potential relationships between these factors and performance validity testing in pwMS. Six cases involving at least one of the above-stated factors were identified from an IRB-approved database of pwMS referred for neuropsychological assessment at a large, academic medical center. Backgrounds, neuropsychological test data, and clinical considerations for each were reviewed. Interestingly, no pwMS diagnosed with major neurocognitive impairment was found to have noncredible performance, nor did any patient show noncredible performance in the absence of notable psychological distress. Given the variability of noncredible performance and the multiplicity of factors affecting performance validity in pwMS, clinicians are strongly encouraged to consider psychometrically appropriate methods for evaluating the validity of cognitive data in pwMS. Additional research aiming to elucidate base rates of, mechanisms begetting, and methods for assessing noncredible performance in pwMS is imperative.
Affiliation(s)
- John Lace
  - Cleveland Clinic, Neurological Institute, Section of Neuropsychology, Cleveland, OH, USA
- John Gunstad
  - Psychological Sciences, Kent State University, Kent, OH, USA
  - Brain Health Research Institute, Kent State University, Kent, OH, USA
- Rachel Galioto
  - Cleveland Clinic, Neurological Institute, Section of Neuropsychology, Cleveland, OH, USA
  - Cleveland Clinic, Mellen Center for Multiple Sclerosis, Cleveland, OH, USA
8. Lace JW, Merz ZC, Galioto R. Nonmemory Composite Embedded Performance Validity Formulas in Patients with Multiple Sclerosis. Arch Clin Neuropsychol 2021; 37:309-321. [PMID: 34467368] [DOI: 10.1093/arclin/acab066]
Abstract
OBJECTIVE Research regarding performance validity tests (PVTs) in patients with multiple sclerosis (MS) is scant, and recommended batteries for neuropsychological evaluations in this population lack suggestions to include PVTs. Moreover, limited work has examined embedded PVTs in this population. As previous investigations indicated that nonmemory-based embedded PVTs provide clinical utility in other populations, this study sought to determine whether a logistic regression-derived PVT formula could be identified from selected nonmemory variables in a sample of patients with MS. METHOD A total of 184 patients (M age = 48.45; 76.6% female) with MS were referred for neuropsychological assessment at a large, Midwestern academic medical center. Patients were placed into "credible" (n = 146) or "noncredible" (n = 38) groups according to performance on a standalone PVT. Missing data were imputed with HOTDECK. RESULTS Classification statistics for a variety of embedded PVTs were examined, with none appearing psychometrically appropriate in isolation (areas under the curve [AUCs] = .48-.64). Four exponentiated equations were created via logistic regression. The six-, five-, and three-predictor equations yielded acceptable discriminability (AUCs = .71-.74) with modest sensitivity (.34-.39) while maintaining good specificity (≥.90). The two-predictor equation appeared unacceptable (AUC = .67). CONCLUSIONS Results suggest that multivariate combinations of embedded PVTs may provide some clinical utility while minimizing test burden in determining performance validity in patients with MS. Nonetheless, the authors recommend routine inclusion of several PVTs and utilization of comprehensive clinical judgment to maximize signal detection of noncredible performance and avoid incorrect conclusions. Clinical implications, limitations, and avenues for future research are discussed.
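The evaluation workflow reported in this and similar abstracts (score each case with a multivariate index, summarize discriminability as AUC, then set a cutoff holding specificity at or above .90 and read off sensitivity) can be sketched as follows. The risk scores, function names, and thresholds below are invented for illustration; they are not the study's actual formulas or data:

```python
def auc(noncredible, credible):
    """Nonparametric AUC (Mann-Whitney form): the probability that a
    randomly chosen noncredible case has a higher risk score than a
    randomly chosen credible case, with ties counted as one half."""
    wins = 0.0
    for x in noncredible:
        for y in credible:
            wins += 1.0 if x > y else (0.5 if x == y else 0.0)
    return wins / (len(noncredible) * len(credible))

def cutoff_for_specificity(credible, min_spec=0.90):
    """Smallest cutoff (flag as noncredible when score >= cutoff) whose
    specificity among credible cases is at least min_spec."""
    for c in sorted(set(credible)):
        if sum(s < c for s in credible) / len(credible) >= min_spec:
            return c
    return max(credible) + 1.0  # no score flags anyone

# Hypothetical risk scores, e.g. probabilities from a fitted logistic model
noncred = [0.55, 0.62, 0.48, 0.71, 0.35]
cred = [0.20, 0.31, 0.28, 0.15, 0.44, 0.33, 0.25, 0.19, 0.36, 0.22]

discriminability = auc(noncred, cred)  # overall separation of the groups
cut = cutoff_for_specificity(cred)     # cutoff keeping specificity >= .90
sensitivity = sum(s >= cut for s in noncred) / len(noncred)
```

Anchoring the cutoff to specificity first, then reporting the resulting sensitivity, mirrors the convention in this literature of tolerating modest sensitivity to keep false-positive rates low.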
Affiliation(s)
- John W Lace
  - Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA
- Zachary C Merz
  - LeBauer Department of Neurology, The Moses H. Cone Memorial Hospital, Greensboro, NC, USA
- Rachel Galioto
  - Section of Neuropsychology, P57, Cleveland Clinic, Cleveland, OH, USA
  - Mellen Center for Multiple Sclerosis, Cleveland Clinic, Cleveland, OH, USA
9. Martinez KA, Sayers C, Hayes C, Martin PK, Clark CB, Schroeder RW. Normal cognitive test scores cannot be interpreted as accurate measures of ability in the context of failed performance validity testing: A symptom- and detection-coached simulation study. J Clin Exp Neuropsychol 2021; 43:301-309. [PMID: 33998369] [DOI: 10.1080/13803395.2021.1926435]
Abstract
Introduction: While use of performance validity tests (PVTs) has become a standard of practice in neuropsychology, there are differing opinions regarding whether to interpret cognitive test data when standard scores fall within normal limits despite PVTs being failed. This study is the first to empirically determine whether normal cognitive test scores underrepresent functioning when PVTs are failed. Method: Participants, randomly assigned to either a simulated malingering group (n = 50) instructed to mildly suppress test performances or a best-effort/control group (n = 50), completed neuropsychological tests which included the North American Adult Reading Test (NAART), California Verbal Learning Test - 2nd Edition (CVLT-II), and Test of Memory Malingering (TOMM). Results: Groups were not significantly different in age, sex, education, or NAART-predicted intellectual ability, but simulators performed significantly worse than controls on the TOMM, CVLT-II Forced Choice Recognition, and CVLT-II Short Delay Free Recall. The groups did not significantly differ on other examined CVLT-II measures. Of simulators who failed validity testing, 36% scored no worse than average and 73% scored no worse than low average on any of the examined CVLT-II indices. Conclusions: Of simulated malingerers who failed validity testing, nearly three-fourths were able to produce cognitive test scores that were within normal limits, which indicates that normal cognitive performances cannot be interpreted as accurately reflecting an individual's capabilities when obtained in the presence of validity test failure. At the same time, only 2 of 50 simulators were successful in passing validity testing while scoring within an impaired range on cognitive testing. This latter finding indicates that successfully feigning cognitive deficits is difficult when PVTs are utilized within the examination.
Affiliation(s)
- Karen A Martinez
  - Department of Psychology, Wichita State University, Wichita, KS, USA
- Courtney Sayers
  - Department of Psychology, Wichita State University, Wichita, KS, USA
- Charles Hayes
  - Department of Psychology, Wichita State University, Wichita, KS, USA
- Phillip K Martin
  - Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
- C Brendan Clark
  - Department of Psychology, Wichita State University, Wichita, KS, USA
- Ryan W Schroeder
  - Department of Psychiatry & Behavioral Sciences, University of Kansas School of Medicine - Wichita, Wichita, KS, USA
10. Loring DW, Meador KJ, Goldstein FC. Valid or not: A critique of Graver and Green. Appl Neuropsychol Adult 2020; 29:639-642. [PMID: 32735139] [DOI: 10.1080/23279095.2020.1798961]
Abstract
Disagreements in science and medicine are not uncommon, and formal exchanges of disagreements serve a variety of valuable roles. As identified by a Nature Methods editorial entitled "The Power of Disagreement" (2016), disagreements bring attention to best practices so that differences in interpretation do not result from inferior data sets or confirmation bias, "prompting researchers to take a second look at evidence that is not in agreement with their hypothesis, rather than dismiss it as artifacts." Graver and Green published reasons why they disagree with a recent clinical case report and a decades-old randomized controlled trial characterizing the effect of an acute 2-mg dose of lorazepam on the Word Memory Test. In this article, we formally responded to their commentary to further clarify the reasons for our data interpretations. These two opposing views provide an excellent learning opportunity, particularly for students, demonstrating the importance of carefully articulating the rationale behind certain conclusions from different perspectives. We encourage careful review of the original articles being discussed so that neuropsychologists can read both positions and decide which interpretation of the findings they consider most sound.
Affiliation(s)
- David W Loring
  - Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
  - Department of Pediatrics, Emory University School of Medicine, Atlanta, GA, USA
- Kimford J Meador
  - Department of Neurology & Neurological Sciences, Stanford University School of Medicine, Stanford, CA, USA
- Felicia C Goldstein
  - Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
11. Graver C, Green P. Misleading conclusions about word memory test results in multiple sclerosis (MS) by Loring and Goldstein (2019). Appl Neuropsychol Adult 2020; 29:315-323. [DOI: 10.1080/23279095.2020.1748035]