1
Huston CA, Poreh AM. Preliminary validation of the computerized N-Tri - A Tri-Choice naming and response bias test. Appl Neuropsychol Adult 2022:1-7. [PMID: 35995131] [DOI: 10.1080/23279095.2022.2110872]
Abstract
The study describes the validation of a computerized adaptation of the novel Tri-Choice Naming and Response Bias Measure (N-Tri), developed to detect untruthful responding while being less susceptible to coaching than existing measures. We hypothesized that the N-Tri would have sensitivity and specificity comparable to traditional tests but improved accuracy for detecting coached simulators. Four hundred volunteers were randomly assigned to one of three groups: an uncoached simulator group (n = 118), a coached simulator group (n = 136), or a control group (n = 146). Both simulator groups were asked to feign concussion symptoms, but the coached group also received a test-taking strategy and a description of concussion symptoms. Participants were administered the computerized version of the new measure in conjunction with computerized adaptations of two well-validated response bias tests commonly used to detect cognitive malingering, the Reliable Digit Span (RDS) and the Portland Digit Recognition Test (PDRT). Our data show that the new measure correlated highly with the other established measures. However, classification accuracy did not increase significantly when compared to the traditional tests. Our findings indicate that the N-Tri performs at a level comparable to existing forced-choice measures of response bias. Nevertheless, the N-Tri could potentially improve the detection of response bias as existing tests become more recognizable to the public.
Affiliation(s)
- Chloe A Huston
- Department of Psychology, Cleveland State University, Cleveland, OH, USA
- Amir M Poreh
- Department of Psychology, Cleveland State University, Cleveland, OH, USA
- Department of Psychiatry, Case Western Reserve University School of Medicine, Cleveland, OH, USA
2
Koenitzer JC, Herron JE, Whitlow JW, Barbuscak CM, Patel NR, Pletcher R, Christensen J. Development and Initial Validation of the Perceptual Assessment of Memory (PASSOM): A Simulator Study. Arch Clin Neuropsychol 2021; 36:1326-1340. [PMID: 33388765] [DOI: 10.1093/arclin/acaa126]
Abstract
OBJECTIVE Performance validity tests (PVTs) are an integral component of neuropsychological assessment. There is a need for the development of more PVTs, especially those employing covert determinations. The aim of the present study was to provide initial validation of a new computerized PVT, the Perceptual Assessment of Memory (PASSOM). METHOD Participants were 58 undergraduate students randomly assigned to a simulator (SIM) or control (CON) group. All participants were provided written instructions for their role prior to testing and were administered the PASSOM as part of a brief battery of neurocognitive tests. Indices of interest included response accuracy for Trials 1 and 2, and total errors across trials, as well as response time (RT) for Trials 1 and 2, and total RT for both trials. RESULTS The SIM group produced significantly more errors than the CON group for Trials 1 and 2, and committed more total errors across trials. Significantly longer response latencies were found for the SIM group compared to the CON group for all RT indices examined. Logistic regression modeling indicated excellent group classification for all indices studied, with areas under the curve ranging from 0.92 to 0.95. Sensitivity and specificity rates were good for several cut scores across all of the accuracy and RT indices, and sensitivity improved greatly by combining RT cut scores with the more traditional accuracy cut scores. CONCLUSION Findings demonstrate the ability of the PASSOM to distinguish individuals instructed to feign cognitive impairment from those told to perform to the best of their ability.
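The key result above — sensitivity improves when an RT cut score is combined with a traditional accuracy cut score — can be sketched numerically. All scores, cut points, and group labels below are invented for illustration; they are not PASSOM data or published cutoffs.

```python
# Sketch: sensitivity/specificity of a PVT cut score, and the effect of
# combining an accuracy cut with an RT cut (flag if EITHER cut is failed).

def sens_spec(flags, labels):
    """flags/labels: lists of bools (True = flagged / True = simulator)."""
    tp = sum(f and l for f, l in zip(flags, labels))
    fn = sum((not f) and l for f, l in zip(flags, labels))
    tn = sum((not f) and (not l) for f, l in zip(flags, labels))
    fp = sum(f and (not l) for f, l in zip(flags, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cases: (errors, mean RT in ms, is_simulator)
cases = [(2, 900, False), (1, 850, False), (3, 1000, False), (0, 800, False),
         (9, 1600, True), (7, 1100, True), (4, 2100, True), (12, 1900, True)]

err_flag = [e >= 5 for e, rt, sim in cases]      # accuracy cut: >= 5 errors
rt_flag = [rt >= 1500 for e, rt, sim in cases]   # RT cut: >= 1500 ms
combo = [a or b for a, b in zip(err_flag, rt_flag)]
labels = [sim for e, rt, sim in cases]

print(sens_spec(err_flag, labels))  # accuracy cut alone
print(sens_spec(combo, labels))     # accuracy OR RT cut
```

With these toy numbers, a simulator who keeps his error count below the accuracy cut is still caught by his slow responding, so the combined rule raises sensitivity without costing specificity here; in practice, combining cuts can also lower specificity, which is why validation samples are needed.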
Affiliation(s)
- Justin C Koenitzer
- Neuropsychology Department, Orlando VA Medical Center, Orlando, FL 32827, USA
- Janice E Herron
- Neuropsychology Department, Orlando VA Medical Center, Orlando, FL 32827, USA
- Jesse W Whitlow
- Psychology Department, Rutgers University, Camden, NJ 08102, USA
- Nitin R Patel
- Department of Veterans Affairs, VHA Office of Community Care, Washington, DC 20420, USA
- Ryan Pletcher
- Psychology Department, Rutgers University, Camden, NJ 08102, USA
3
Patrick SD, Rapport LJ, Kanser RJ, Hanks RA, Bashem JR. Performance validity assessment using response time on the Warrington Recognition Memory Test. Clin Neuropsychol 2021; 35:1154-1173. [PMID: 32068486] [DOI: 10.1080/13854046.2020.1716997]
Abstract
OBJECTIVE The present study tested the incremental utility of response time (RT) on the Warrington Recognition Memory Test - Words (RMT-W) in classifying bona fide versus feigned TBI. METHOD Participants were 173 adults: 55 with moderate to severe TBI, 69 healthy comparisons (HC) instructed to perform their best, and 49 healthy adults coached to simulate TBI (SIM). Participants completed a computerized version of the RMT-W in the context of a comprehensive neuropsychological battery. Groups were compared on RT indices including mean RT (overall, correct trials, incorrect trials) and variability, as well as the traditional RMT-W accuracy score. RESULTS Several RT indices differed significantly across groups, although RMT-W accuracy predicted group membership more strongly than any individual RT index. SIM showed longer average RT than both TBI and HC. RT variability and RT for incorrect trials distinguished SIM-HC but not SIM-TBI comparisons. In general, results for SIM-TBI comparisons were weaker than SIM-HC results. For SIM-HC comparisons, classification accuracy was excellent for all multivariable models incorporating RMT-W accuracy with one of the RT indices. For SIM-TBI comparisons, multivariable models showed acceptable to excellent discriminability. In addition to mean RT and RT on correct trials, the ratio of RT on correct items to RT on incorrect items showed incremental predictive value beyond accuracy. CONCLUSION Findings add to the growing body of research supporting the value of combining RT with PVTs in discriminating between verified and feigned TBI. The diagnostic accuracy of the RMT-W can be improved by incorporating RT.
Affiliation(s)
- Sarah D Patrick
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Lisa J Rapport
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Robert J Kanser
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Robin A Hanks
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Department of Physical Medicine and Rehabilitation, Wayne State University School of Medicine, Detroit, MI, USA
- Jesse R Bashem
- Department of Psychology, Wayne State University, Detroit, MI, USA
4
Braw Y. Response Time Measures as Supplementary Validity Indicators in Forced-Choice Recognition Memory Performance Validity Tests: A Systematic Review. Neuropsychol Rev 2021; 32:71-98. [PMID: 33821424] [DOI: 10.1007/s11065-021-09499-z]
Abstract
Performance validity tests (PVTs) based on the forced-choice recognition memory (FCRM) paradigm are commonly used for the detection of noncredible performance. Examinees' response times (RTs) are affected by cognitive processes associated with deception and can also be gathered without lengthening the duration of the assessment. Consequently, interest in the utility of these measures as supplementary validity indicators in FCRM-PVTs has grown over the years. The current systematic review summarizes both clinical and simulation (i.e., healthy participants simulating cognitive impairment) studies of RTs in FCRM-PVTs. The findings of 25 peer-reviewed articles (n = 26 empirical studies) indicate that noncredible performance in FCRM-PVTs is associated with longer RTs. Additionally, there are indications that noncredible performance is associated with larger variability in RTs. RT measures, however, have lower discrimination capacity than conventional accuracy measures. Their utility may therefore lie in reaching decisions regarding cases with border zone accuracy scores, as well as aiding in the detection of more sophisticated examinees who are aware of the use of accuracy-based validity indicators in FCRM-PVTs. More research, however, is required before these measures are incorporated in daily practice and clinical decision-making processes.
Affiliation(s)
- Yoram Braw
- Department of Psychology, Ariel University, Ariel, Israel
5
Neal J, Strothkamp S, Bedingar E, Cordero P, Wagner B, Vagnini V, Jiang Y. Discriminating Fake From True Brain Injury Using Latency of Left Frontal Neural Responses During Old/New Memory Recognition. Front Neurosci 2019; 13:988. [PMID: 31611760] [PMCID: PMC6777439] [DOI: 10.3389/fnins.2019.00988]
Abstract
Traumatic brain injury (TBI) is a major public health concern that affects 69 million individuals each year worldwide. Neuropsychologists report that up to 40% of individuals undergoing evaluations for TBI may be malingering neurocognitive deficits for a compensatory reward. Memory recognition tests of malingering detection are effective but can be coached behaviorally, so there is great need for a novel neural-based method for discriminating fake from true brain injury. Here we test the hypothesis that the decision making involved in faking memory deficits prolongs frontal neural responses. We applied an advanced method measuring decision latency in milliseconds to discriminate true TBI from malingerers who fake brain injury. To test this hypothesis, latencies of memory-related brain potentials were compared among true patients with moderate or severe TBI and healthy age-matched individuals assigned either to respond honestly or to fake memory deficits. Scalp electroencephalography (EEG) signals were recorded with a 32-channel cap during an Old/New memory recognition task in three age- and education-matched groups: honest (n = 12), malingering (n = 15), and brain-injured (n = 14) individuals. Bilateral fractional latencies of the late positive ERP at frontal sites were compared among the three groups under both studied (Old) and non-studied (New) memory recognition conditions. Results show a significant difference at frontal sites between the fractional latencies of the late positive component during recognition of studied items in malingerers (mean latency = 396 ms) and in the true brain-injured subjects (mean = 312 ms). Only malingerers showed asymmetrical frontal activity compared to the other two groups. These new findings support the hypothesis that the additional frontal processing of malingering individuals is measurably different from that of actual patients with brain injury.
In contrast to our previously reported method using difference waves of amplitudes at frontal versus posterior midline sites during new-item recognition (Vagnini et al., 2008), there was no significant latency difference among groups during recognition of New items. The current method, using delayed left frontal neural responses during studied items, reached a sensitivity of 80% and a specificity of 79% in detecting malingerers versus true brain injury.
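A fractional latency of the kind compared above can be computed as the first time point at which a component reaches a fixed fraction of its peak within an analysis window. The waveform, sampling rate, window, and 50% fraction below are all invented for illustration; this is not the authors' analysis pipeline.

```python
import numpy as np

def fractional_latency(times_ms, wave, t0, t1, frac=0.5):
    """First time in [t0, t1] at which the waveform reaches frac * its peak."""
    mask = (times_ms >= t0) & (times_ms <= t1)
    t, w = times_ms[mask], wave[mask]
    thresh = frac * w.max()
    idx = np.argmax(w >= thresh)  # index of first sample crossing the threshold
    return t[idx]

times = np.arange(0, 1000, 4.0)               # 250 Hz sampling grid, in ms
wave = np.exp(-((times - 450) / 80.0) ** 2)   # toy late positive component
print(fractional_latency(times, wave, 300, 600))
```

Fractional measures like this are less sensitive to the exact peak sample than simple peak latency, which is one reason they are used for comparing component timing across groups.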
Affiliation(s)
- Jennifer Neal
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States
- Stephanie Strothkamp
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States
- Esias Bedingar
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States; Harvard T.H. Chan School of Public Health, Boston, MA, United States
- Patrick Cordero
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States
- Benjamin Wagner
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States
- Victoria Vagnini
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States; Louisville VA Medical Center, Louisville, KY, United States
- Yang Jiang
- Department of Behavioral Science, University of Kentucky College of Medicine, Lexington, KY, United States
6
Kanser RJ, Rapport LJ, Bashem JR, Hanks RA. Detecting malingering in traumatic brain injury: Combining response time with performance validity test accuracy. Clin Neuropsychol 2018; 33:90-107. [DOI: 10.1080/13854046.2018.1440006]
Affiliation(s)
- Robert J. Kanser
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Lisa J. Rapport
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Jesse R. Bashem
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Robin A. Hanks
- Department of Physical Medicine and Rehabilitation, Wayne State University, Detroit, MI, USA
7
Reaction time as an indicator of insufficient effort: Development and validation of an embedded performance validity parameter. Psychiatry Res 2016; 245:74-82. [PMID: 27529665] [DOI: 10.1016/j.psychres.2016.08.022]
Abstract
Subnormal performance in attention tasks may result from various sources, including lack of effort. This report describes the derivation and validation of a performance validity parameter for reaction time, using a set of malingering indices (the "Slick criteria") and three independent samples of participants (total n = 893). The Slick criteria yield an estimate of the probability of malingering based on the presence of an external incentive and on evidence from neuropsychological testing, self-report, and clinical data. In study 1, a validity parameter is derived using reaction time data from a sample composed of inpatients with recent severe brain lesions who were not involved in litigation and of litigants with and without brain lesions. In study 2, the validity parameter is tested in an independent sample of litigants. In study 3, the parameter is applied to an independent sample comprising cooperative and non-cooperative examinees. Logistic regression analysis yielded a validity parameter based on median reaction time and its standard deviation. It performed satisfactorily in studies 2 and 3 (study 2: sensitivity = 0.94, specificity = 1.00; study 3: sensitivity = 0.79, specificity = 0.87). The findings suggest that the median and standard deviation of reaction time may be used as indicators of negative response bias.
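A logistic-regression validity parameter of this form maps an examinee's median RT and RT standard deviation to a probability of non-credible performance. The coefficients below are invented for illustration only; the published parameter's coefficients are not reproduced here.

```python
import math

def validity_score(median_rt_ms, sd_rt_ms,
                   b0=-12.0, b_med=0.008, b_sd=0.01):
    """Probability of non-credible performance under assumed coefficients."""
    z = b0 + b_med * median_rt_ms + b_sd * sd_rt_ms
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link

# A fast, consistent responder vs. a slow, erratic one (made-up values):
print(round(validity_score(600, 150), 3))
print(round(validity_score(1400, 500), 3))
```

The design choice worth noting is that both slowness (high median RT) and inconsistency (high RT standard deviation) push the probability upward, matching the finding that non-credible performance is associated with longer and more variable RTs.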
8
Roebuck-Spencer TM, Vincent AS, Gilliland K, Johnson DR, Cooper DB. Initial clinical validation of an embedded performance validity measure within the automated neuropsychological metrics (ANAM). Arch Clin Neuropsychol 2013; 28:700-10. [PMID: 23887185] [DOI: 10.1093/arclin/act055]
Abstract
The measurement of effort and performance validity is essential for computerized testing, where less direct supervision is needed. The clinical validation of an Automated Neuropsychological Metrics-Performance Validity Index (ANAM-PVI) was examined by converting ANAM test scores into a common metric based on their relative infrequency in an outpatient clinic sample with presumed good effort. Optimal ANAM-PVI cut-points were determined using receiver operating characteristic (ROC) curve analyses and an a priori specificity of 90%. Sensitivity/specificity was examined in available validation samples (controls, simulators, and neurorehabilitation patients). ANAM-PVI scores differed between groups, with simulators scoring the highest. ROC curve analysis indicated excellent discriminability of ANAM-PVI scores ≥5 to detect simulators versus controls (area under the curve = 0.858; odds ratio for detecting suboptimal performance = 15.6), but resulted in a 27% false-positive rate in the clinical sample. When specificity in the clinical sample was set at 90%, sensitivity decreased (68%) but was consistent with other embedded effort measures. Results support the ANAM-PVI as an embedded effort measure and demonstrate the value of sample-specific cut-points in groups with cognitive impairment. Examination of different cut-points indicates that clinicians should choose sample-specific cut-points based on sensitivity and specificity rates that are most appropriate for their patient population, with higher cut-points for those expected to have severe cognitive impairment (e.g., dementia or severe acquired brain injury).
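The cut-point selection described above (fix an a priori specificity floor, then take the most sensitive cut that satisfies it) can be sketched as a small search over candidate cut scores. The scores below are invented for illustration; higher scores are treated as more suspect, and a "failed" score is one at or above the cut.

```python
# Sketch: pick the most sensitive cut score subject to a specificity floor
# (here 90%), in the spirit of the derivation described above.

def cut_at_specificity(clinic_scores, simulator_scores, min_spec=0.90):
    best = None
    for cut in sorted(set(clinic_scores + simulator_scores)):
        # Specificity: fraction of presumed-good-effort patients below the cut.
        spec = sum(s < cut for s in clinic_scores) / len(clinic_scores)
        # Sensitivity: fraction of simulators at or above the cut.
        sens = sum(s >= cut for s in simulator_scores) / len(simulator_scores)
        if spec >= min_spec and (best is None or sens > best[1]):
            best = (cut, sens, spec)
    return best

clinic = [0, 1, 1, 2, 2, 3, 3, 4, 4, 6]        # hypothetical clinic sample
simulators = [4, 5, 5, 6, 7, 8, 9, 9, 10, 12]  # hypothetical simulators
print(cut_at_specificity(clinic, simulators))
```

Anchoring specificity in the clinical sample rather than in healthy controls is the key point: a cut that looks excellent against controls can still misclassify many genuinely impaired patients, which is the 27% false-positive problem the abstract reports.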
10
Vilar-López R, Gómez-Río M, Caracuel-Romero A, Llamas-Elvira J, Pérez-García M. Use of specific malingering measures in a Spanish sample. J Clin Exp Neuropsychol 2008; 30:710-22. [DOI: 10.1080/13803390701684562]
Affiliation(s)
- Raquel Vilar-López
- Departamento de Personalidad, Evaluación y Tratamiento Psicológico, Universidad de Granada, Granada, Spain
- Manuel Gómez-Río
- Medicina Nuclear, Hospital Universitario Virgen de las Nieves, Granada, Spain
- Alfonso Caracuel-Romero
- Departamento de Personalidad, Evaluación y Tratamiento Psicológico, Universidad de Granada, Granada, Spain
- Jose Llamas-Elvira
- Medicina Nuclear, Hospital Universitario Virgen de las Nieves, Granada, Spain
- Miguel Pérez-García
- Departamento de Personalidad, Evaluación y Tratamiento Psicológico, Universidad de Granada, Granada, Spain
11
Vagnini VL, Berry DTR, Clark JA, Jiang Y. New measures to detect malingered neurocognitive deficit: applying reaction time and event-related potentials. J Clin Exp Neuropsychol 2008; 30:766-76. [PMID: 18608662] [DOI: 10.1080/13803390701754746]
Abstract
The ability of the Test of Memory Malingering (TOMM), reaction times (RTs), and event-related potentials (ERPs) to detect malingered neurocognitive deficit (MNCD) was examined in 32 normal individuals answering under honest (HON; n = 16) or malingering (MAL; n = 16) instructions as well as in 15 patients with traumatic brain injury (TBI) who answered under honest instructions. Overall, the TOMM was the most effective at classifying groups. However, new accuracy, RT, and ERP measures reached promising hit rates in the range of 71-88%. In particular, the difference in frontal versus posterior ERP obtained during an old-new task was effective at classifying MAL versus TBI (hit rate = 87%).
12
Abstract
Malingering of mental illness has been studied extensively; malingered medical illness has been examined much less thoroughly. While in theory any ailment can be fabricated or self-induced, pain (including lower back pain, cervical pain, and fibromyalgia) and cognitive deficits associated with mild head trauma or toxic exposure are feigned most frequently, especially in situations where there are financial incentives to malinger. Structured assessments have been developed to help detect both types of malingering; in daily practice, however, the physician should generally suspect malingering when there are tangible incentives and when reported symptoms do not match the physical examination or no organic basis for the physical complaints is found.
13
DenBoer JW, Hall S. Neuropsychological Test Performance of Successful Brain Injury Simulators. Clin Neuropsychol 2007; 21:943-55. [PMID: 17886152] [DOI: 10.1080/13854040601020783]
Abstract
This study examined the performance characteristics of successful brain injury simulators (SBIS). Coached (n = 56) and uncoached (n = 35) brain injury simulators received instructions to fake cognitive impairment; controls were asked to do their best. The Test of Memory Malingering (TOMM) was administered along with standard neuropsychological measures (e.g., the Wisconsin Card Sorting Test). The TOMM identified 80% of uncoached and 60% of coached brain injury simulators. SBIS were participants from the brain injury simulation groups whose TOMM performance indicated adequate effort. A total of 32% of all brain injury simulators scored above the TOMM cutoff scores for adequate effort (the SBIS group), and significantly more coached than uncoached participants composed this group (76% vs. 24%, respectively). SBIS performed significantly worse than controls and significantly better than unsuccessful brain injury simulators on select standard neuropsychological measures. SBIS scores were lower than those of controls; in some instances this lowered performance reached a clinically relevant level.
15
Use of the Abbreviated Portland Digit Recognition Test in Simulated Malingering and Neurological Groups. 2004. [DOI: 10.1300/j151v04n01_02]
16
Victor TL, Abeles N. Coaching Clients to Take Psychological and Neuropsychological Tests: A Clash of Ethical Obligations. 2004. [DOI: 10.1037/0735-7028.35.4.373]
19
Tardif HP, Barry RJ, Fox AM, Johnstone SJ. Detection of feigned recognition memory impairment using the old/new effect of the event-related potential. Int J Psychophysiol 2000; 36:1-9. [PMID: 10700618] [DOI: 10.1016/s0167-8760(00)00083-0]
Abstract
Twenty-four undergraduate university students with no known neurological disorders completed the Recognition Memory Test (Warrington, E. K., 1984. Recognition Memory Test manual. Windsor, Berkshire: NFER-Nelson) while event-related potentials (ERPs) were recorded. Twelve subjects were instructed to feign a recognition memory deficit (malingering group), while the remainder served as controls. The malingerers performed poorly on the test compared to the control group. The 'old/new effect', an ERP measure thought to reflect recognition memory processes, did not differ between the groups, indicating recognition of previously learned material in the malingering group despite poor test performance. The study also revealed a second, earlier old/new effect, maximal at left frontal sites in the malingering group relative to the control group, suggesting task-related processing differences between the two groups. These effects appear to be of potential value in the detection of malingered cognitive impairment in the clinical situation.
Affiliation(s)
- H P Tardif
- Department of Psychology, University of Wollongong, Northfields Avenue, Wollongong, Australia
20
Heubrock D, Petermann F. Neuropsychological Assessment of Suspected Malingering: Research Results, Evaluation Techniques, and Further Directions of Research and Application. Eur J Psychol Assess 1998. [DOI: 10.1027/1015-5759.14.3.211]
Abstract
Possibilities for the neuropsychological assessment of suspected malingering are addressed. First, various forms of deception (malingering, factitious disorders, hysteria) and their implications for clinical neuropsychology are discussed. Then, threshold models for the detection of malingering as well as specially designed assessment techniques (e.g., individual tests, deficit testing, tests designed specifically for malingerers, and symptom validity testing) are described. Finally, the current status of clinical methods and research strategies is summarized, and recent and further developments in assessment and research are reported.
22
Slick DJ, Hopp G, Strauss E, Spellacy FJ. Victoria Symptom Validity Test: efficiency for detecting feigned memory impairment and relationship to neuropsychological tests and MMPI-2 validity scales. J Clin Exp Neuropsychol 1996; 18:911-22. [PMID: 9157115] [DOI: 10.1080/01688639608408313]
Abstract
Error scores and response times from a computer-administered, forced-choice recognition test of symptom validity were evaluated for efficiency in detecting feigned memory deficits. Participants included controls (n = 95), experimental malingerers (n = 43), compensation-seeking patients (n = 206), and patients not seeking financial compensation (n = 32). Adopting a three-level cut-score system that classified participant performance as malingered, questionable, or valid greatly improved sensitivity with relatively little impact on specificity. For error scores, convergent validity was found to be adequate and divergent validity was found to be excellent. Although response times showed promise for assisting in the detection of feigned impairment, divergent and convergent validity were weaker, suggesting somewhat less utility than error scores.
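The three-level cut-score system described above can be sketched as a simple banded classifier on a forced-choice error score. The two cut points below are invented for illustration and are not the published VSVT cutoffs; the general logic is that scores far enough below chance become binomially improbable and fall in the "malingered" band.

```python
# Sketch: three-level classification (valid / questionable / malingered)
# from a forced-choice error score, using hypothetical cut points.

def classify(errors, questionable_cut=8, malingered_cut=12):
    if errors >= malingered_cut:
        return "malingered"
    if errors >= questionable_cut:
        return "questionable"
    return "valid"

print([classify(e) for e in (2, 9, 15)])  # → ['valid', 'questionable', 'malingered']
```

The middle band is what buys the sensitivity gain the abstract reports: borderline performances are flagged for further scrutiny instead of being forced into a pass/fail decision, so the extreme "malingered" cut can stay conservative and preserve specificity.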
Affiliation(s)
- D J Slick
- Department of Psychology, University of Victoria, B.C., Canada
23
Abstract
There has recently been a dramatic increase in empirical studies investigating methods for detecting the malingering of cognitive deficits. The present review focuses on a comparison of simulated and suspected malingerers in the malingering literature, and critiques the numerous approaches to the detection of malingering. The approaches reviewed include detection of floor effects, discrepancies of information, response bias, neuropsychological tests and batteries, symptom validity testing, and anomalous performance on memory tests. The latter approach has only recently been proposed by researchers and may show the most promise.
Affiliation(s)
- M E Haines
- Department of Psychology, Texas A & M University, College Station 77843-4235, USA