1. Miles G, Smith M, Zook N, Zhang W. EM-COGLOAD: An investigation into age and cognitive load detection using eye tracking and deep learning. Comput Struct Biotechnol J 2024;24:264-280. PMID: 38638116; PMCID: PMC11024913; DOI: 10.1016/j.csbj.2024.03.014.
Abstract
Alzheimer's Disease is the most prevalent neurodegenerative disease and a leading cause of disability among the elderly. Eye movement behaviour shows potential as a non-invasive biomarker for Alzheimer's Disease, with changes detectable at an early stage after initial onset. This paper introduces a new publicly available dataset, EM-COGLOAD (available at https://osf.io/zjtdq/, DOI: 10.17605/OSF.IO/ZJTDQ). A dual-task paradigm was used to create effects of declined cognitive performance in 75 healthy adults as they carried out visual tracking tasks. Their eye movements were recorded, and time series classification of the extracted eye movement traces was explored using a range of deep learning techniques. The results showed that convolutional neural networks achieved an accuracy of 87.5% when distinguishing between eye movement under low and high cognitive load, and 76% when distinguishing between the oldest and youngest age groups.
Affiliation(s)
- Gabriella Miles, Centre for Machine Vision, Bristol Robotics Laboratory, University of the West of England, T Block, Frenchay Campus, Coldharbour Lane, Bristol BS16 1QY, UK
- Melvyn Smith, Centre for Machine Vision, Bristol Robotics Laboratory, University of the West of England, T Block, Frenchay Campus, Coldharbour Lane, Bristol BS16 1QY, UK
- Nancy Zook, Faculty of Health and Applied Sciences, University of the West of England, Bristol BS16 1QY, UK
- Wenhao Zhang, Centre for Machine Vision, Bristol Robotics Laboratory, University of the West of England, T Block, Frenchay Campus, Coldharbour Lane, Bristol BS16 1QY, UK
2. Finley JCA. Performance validity testing: the need for digital technology and where to go from here. Front Psychol 2024;15:1452462. PMID: 39193033; PMCID: PMC11347285; DOI: 10.3389/fpsyg.2024.1452462.
Affiliation(s)
- John-Christopher A. Finley, Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, United States
3. Omer E, Braw Y. The Effects of Cognitive Load on Strategy Utilization in a Forced-Choice Recognition Memory Performance Validity Test. Eur J Psychol Assess 2022. DOI: 10.1027/1015-5759/a000636.
Abstract
Despite the importance of detecting feigned cognitive impairment, we have a limited understanding of the theoretical foundation of the phenomenon and the factors that affect it. Studies regarding the formation and implementation of feigning strategies during neuropsychological assessments are few in number, though there are indications that such strategies tax cognitive resources. The current study assessed the effect of a cognitive load manipulation on feigning strategies. To achieve this aim, we utilized a 2 × 2 experimental design; condition (simulators/honest responders) and cognitive load (load/no load) were manipulated while participants (N = 154) performed a well-established performance validity test (PVT). The cognitive load manipulation reduced the quantity of feigning strategies while also affecting their composition (i.e., strategies tended to be more intuitive). This suggests that reduced cognitive resources among those feigning cognitive impairment may affect the use of in-vivo feigning strategies. These findings, though preliminary, will hopefully encourage further research that uncovers the cognitive factors involved in the utilization of feigning strategies in neuropsychological assessments.
Affiliation(s)
- Elad Omer, Department of Psychology, Ariel University, Israel
- Yoram Braw, Department of Psychology, Ariel University, Israel
4. Abeare K, Romero K, Cutler L, Sirianni CD, Erdodi LA. Flipping the Script: Measuring Both Performance Validity and Cognitive Ability with the Forced Choice Recognition Trial of the RCFT. Percept Mot Skills 2021;128:1373-1408. PMID: 34024205; PMCID: PMC8267081; DOI: 10.1177/00315125211019704.
Abstract
In this study we attempted to replicate the classification accuracy of the newly introduced Forced Choice Recognition trial (FCR) of the Rey Complex Figure Test (RCFT) in a clinical sample. We administered the RCFT FCR and the earlier Yes/No Recognition trial from the RCFT to 52 clinically referred patients as part of a comprehensive neuropsychological test battery, and incentivized a separate control group of 83 university students to perform well on these measures. We then computed the classification accuracies of both measures against criterion performance validity tests (PVTs) and compared results between the two samples. At previously published validity cutoffs (≤16 and ≤17), the RCFT FCR remained specific (.84-1.00) to psychometrically defined non-credible responding. Simultaneously, the RCFT FCR was more sensitive to examinees' natural variability in visual-perceptual and verbal memory skills than the Yes/No Recognition trial. Even after being reduced to a seven-point scale (18-24) by the validity cutoffs, both RCFT recognition scores continued to provide clinically useful information on visual memory. This is the first study to validate the RCFT FCR as a PVT in a clinical sample. Our data also support its use for measuring cognitive ability. Replication studies with more diverse samples and different criterion measures are still needed before large-scale clinical application of this scale.
Affiliation(s)
- Kaitlyn Abeare, Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Kristoffer Romero, Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laura Cutler, Department of Psychology, University of Windsor, Windsor, Ontario, Canada
- Laszlo A Erdodi, Department of Psychology, University of Windsor, Windsor, Ontario, Canada
5. Strategies to detect invalid performance in cognitive testing: An updated and extended meta-analysis. Curr Psychol 2021. DOI: 10.1007/s12144-021-01659-x.
6. Omer E, Elbaum T, Braw Y. Identifying Feigned Cognitive Impairment: Investigating the Utility of Diffusion Model Analyses. Assessment 2020;29:198-208. PMID: 32988242; DOI: 10.1177/1073191120962317.
Abstract
Forced-choice performance validity tests are routinely used for the detection of feigned cognitive impairment. The drift diffusion model deconstructs performance into distinct cognitive processes using accuracy and response time measures. It thereby offers a unique approach for gaining insight into examinees' speed-accuracy trade-offs and the cognitive processes that underlie their performance. The current study is the first to perform such analyses using a well-established forced-choice performance validity test. To achieve this aim, archival data of healthy participants, either simulating cognitive impairment in the Word Memory Test or performing it to the best of their ability, were analyzed using the EZ-diffusion model (N = 198). The groups differed in the three model parameters, with drift rate emerging as the best predictor of group membership. These findings provide initial evidence for the usefulness of the drift diffusion model in clarifying the cognitive processes underlying feigned cognitive impairment and encourage further research.
7. Erdodi LA, Abeare CA. Stronger Together: The Wechsler Adult Intelligence Scale-Fourth Edition as a Multivariate Performance Validity Test in Patients with Traumatic Brain Injury. Arch Clin Neuropsychol 2020;35:188-204. PMID: 31696203; DOI: 10.1093/arclin/acz032.
Abstract
OBJECTIVE: This study was designed to evaluate the classification accuracy of a multivariate model of performance validity assessment using embedded validity indicators (EVIs) within the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV).
METHOD: Archival data were collected from 100 adults with traumatic brain injury (TBI) consecutively referred for neuropsychological assessment in a clinical setting. The classification accuracy of previously published individual EVIs nested within the WAIS-IV and a composite measure based on six independent EVIs were evaluated against psychometrically defined non-credible performance.
RESULTS: Univariate validity cutoffs based on age-corrected scaled scores on Coding, Symbol Search, Digit Span, Letter-Number-Sequencing, Vocabulary minus Digit Span, and Coding minus Symbol Search were strong predictors of psychometrically defined non-credible responding. Failing ≥3 of these six EVIs at the liberal cutoff improved specificity (.91-.95) over univariate cutoffs (.78-.93). Conversely, failing ≥2 EVIs at the more conservative cutoff increased and stabilized sensitivity (.43-.67) compared to univariate cutoffs (.11-.63) while maintaining consistently high specificity (.93-.95).
CONCLUSIONS: In addition to being a widely used test of cognitive functioning, the WAIS-IV can also function as a measure of performance validity. Consistent with previous research, combining information from multiple EVIs enhanced the classification accuracy of individual cutoffs and provided more stable parameter estimates. If the current findings are replicated in larger, diagnostically and demographically heterogeneous samples, the WAIS-IV has the potential to become a powerful multivariate model of performance validity assessment.
BRIEF SUMMARY: Using a combination of multiple performance validity indicators embedded within the subtests of the Wechsler Adult Intelligence Scale, the credibility of the response set can be established with a high level of confidence. Multivariate models improve classification accuracy over individual tests. Relying on existing test data is a cost-effective approach to performance validity assessment.
Affiliation(s)
- Laszlo A Erdodi, Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
- Christopher A Abeare, Department of Psychology, Neuropsychology Track, University of Windsor, Windsor, ON, Canada
8. Hurtubise J, Baher T, Messa I, Cutler L, Shahein A, Hastings M, Carignan-Querqui M, Erdodi LA. Verbal fluency and digit span variables as performance validity indicators in experimentally induced malingering and real world patients with TBI. Appl Neuropsychol Child 2020;9:337-354. DOI: 10.1080/21622965.2020.1719409.
Affiliation(s)
- Tabarak Baher, Department of Psychology, University of Windsor, Windsor, Canada
- Isabelle Messa, Department of Psychology, University of Windsor, Windsor, Canada
- Laura Cutler, Department of Psychology, University of Windsor, Windsor, Canada
- Ayman Shahein, Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
- Laszlo A. Erdodi, Department of Psychology, University of Windsor, Windsor, Canada
9. Elbaum T, Golan L, Lupu T, Wagner M, Braw Y. Establishing supplementary response time validity indicators in the Word Memory Test (WMT) and directions for future research. Appl Neuropsychol Adult 2019;27:403-413. DOI: 10.1080/23279095.2018.1555161.
Affiliation(s)
- Tomer Elbaum, Department of Psychology, Ariel University, Ariel, Israel; Department of Industrial Engineering & Management, Ariel University, Ariel, Israel
- Lior Golan, Department of Psychology, Ariel University, Ariel, Israel
- Tamar Lupu, Department of Psychology, Ariel University, Ariel, Israel
- Michael Wagner, Department of Industrial Engineering & Management, Ariel University, Ariel, Israel
- Yoram Braw, Department of Psychology, Ariel University, Ariel, Israel
10. The Grooved Pegboard Test as a Validity Indicator: a Study on Psychogenic Interference as a Confound in Performance Validity Research. Psychol Inj Law 2018. DOI: 10.1007/s12207-018-9337-7.
11. An KY, Charles J, Ali S, Enache A, Dhuga J, Erdodi LA. Reexamining performance validity cutoffs within the Complex Ideational Material and the Boston Naming Test–Short Form using an experimental malingering paradigm. J Clin Exp Neuropsychol 2018;41:15-25. DOI: 10.1080/13803395.2018.1483488.
Affiliation(s)
- Kelly Y. An, Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jordan Charles, Department of Psychology, University of Windsor, Windsor, ON, Canada
- Sami Ali, Department of Psychology, University of Windsor, Windsor, ON, Canada
- Anca Enache, Department of Psychology, University of Windsor, Windsor, ON, Canada
- Jasmine Dhuga, Department of Psychology, University of Windsor, Windsor, ON, Canada
- Laszlo A. Erdodi, Department of Psychology, University of Windsor, Windsor, ON, Canada
12. Erdodi LA, Abeare CA, Medoff B, Seke KR, Sagar S, Kirsch NL. A Single Error Is One Too Many: The Forced Choice Recognition Trial of the CVLT-II as a Measure of Performance Validity in Adults with TBI. Arch Clin Neuropsychol 2017;33:845-860. DOI: 10.1093/acn/acx110.
Affiliation(s)
- Laszlo A Erdodi, Department of Psychology, University of Windsor, 168 Chrysler Hall South, Windsor, ON, Canada
- Christopher A Abeare, Department of Psychology, University of Windsor, 170 Chrysler Hall South, Windsor, ON, Canada
- Brent Medoff, The Commonwealth Medical College, 525 Pine St, Scranton, PA, USA
- Kristian R Seke, Brain-Cognition-Neuroscience Program, University of Windsor, G105 Chrysler Hall North, Windsor, ON, Canada
- Sanya Sagar, Department of Psychology, University of Windsor, 109 Chrysler Hall North, Windsor, ON, Canada
- Ned L Kirsch, Department of Physical Medicine and Rehabilitation, University of Michigan, Briarwood Circle #4, Ann Arbor, MI, USA
13. Erdodi LA. Aggregating validity indicators: The salience of domain specificity and the indeterminate range in multivariate models of performance validity assessment. Appl Neuropsychol Adult 2017;26:155-172. PMID: 29111772; DOI: 10.1080/23279095.2017.1384925.
Abstract
This study was designed to examine the "domain specificity" hypothesis in performance validity tests (PVTs) and the epistemological status of an "indeterminate range" when evaluating the credibility of a neuropsychological profile using a multivariate model of performance validity assessment. While previous research suggests that aggregating PVTs produces superior classification accuracy compared to individual instruments, the effect of the congruence between the criterion and predictor variable on signal detection and the issue of classifying borderline cases remain understudied. Data from a mixed clinical sample of 234 adults referred for cognitive evaluation (mean age = 46.6; mean education = 13.5 years) were collected. Two validity composites were created, one based on five verbal PVTs (EI-5 VER) and one based on five nonverbal PVTs (EI-5 NV), and compared against several other PVTs. Overall, language-based tests of cognitive ability were more sensitive to elevations on the EI-5 VER compared to visual-perceptual tests, whereas the opposite was observed with the EI-5 NV. However, the match between predictor and criterion variable had a more complex relationship with classification accuracy, suggesting the confluence of multiple factors (sensory modality, cognitive domain, testing paradigm). An "indeterminate range" of performance validity emerged that was distinctly different from both the Pass and the Fail group. Trichotomized criterion PVTs (Pass-Borderline-Fail) had a negative linear relationship with performance on tests of cognitive ability, providing further support for an "in-between" category separating the unequivocal Pass and unequivocal Fail classification ranges. The choice of criterion variable can influence classification accuracy in PVT research. Establishing a Borderline range between Pass and Fail more accurately reflected the distribution of scores on multiple PVTs. The traditional binary classification system imposes an artificial dichotomy on PVTs that was not fully supported by the data. Accepting "indeterminate" as a legitimate third outcome of performance validity assessment has the potential to improve the clinical utility of PVTs and defuse debates regarding "near-Passes" and "soft Fails."
Affiliation(s)
- Laszlo A Erdodi, Department of Psychology, University of Windsor, Windsor, Canada
14. Soble JR, Santos OA, Bain KM, Kirton JW, Bailey KC, Critchfield EA, O'Rourke JJF, Highsmith JM, González DA. The Dot Counting Test adds up: Validation and response pattern analysis in a mixed clinical veteran sample. J Clin Exp Neuropsychol 2017;40:317-325. DOI: 10.1080/13803395.2017.1342773.
Affiliation(s)
- Jason R. Soble, Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Octavio A. Santos, Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Kathleen M. Bain, Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Joshua W. Kirton, Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- K. Chase Bailey, Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- Edan A. Critchfield, Psychology Service, South Texas Veterans Health Care System, San Antonio, TX, USA
- David Andrés González, Department of Neurology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
15. Psychometric Markers of Genuine and Feigned Neurodevelopmental Disorders in the Context of Applying for Academic Accommodations. Psychol Inj Law 2017. DOI: 10.1007/s12207-017-9287-5.
16. Erdodi LA, Nussbaum S, Sagar S, Abeare CA, Schwartz ES. Limited English Proficiency Increases Failure Rates on Performance Validity Tests with High Verbal Mediation. Psychol Inj Law 2017. DOI: 10.1007/s12207-017-9282-x.
17. An KY, Kaploun K, Erdodi LA, Abeare CA. Performance validity in undergraduate research participants: a comparison of failure rates across tests and cutoffs. Clin Neuropsychol 2016;31:193-206. DOI: 10.1080/13854046.2016.1217046.
Affiliation(s)
- Kelly Y. An, Department of Psychology, University of Windsor, Windsor, Canada
- Kristen Kaploun, Department of Psychology, University of Windsor, Windsor, Canada
- Laszlo A. Erdodi, Department of Psychology, University of Windsor, Windsor, Canada
18. Eglit GML, Lynch JK, McCaffrey RJ. Not all performance validity tests are created equal: The role of recollection and familiarity in the Test of Memory Malingering and Word Memory Test. J Clin Exp Neuropsychol 2016;39:173-189. DOI: 10.1080/13803395.2016.1210573.
19. Grills CE, Armistead-Jehle P. Performance validity test and neuropsychological assessment battery screening module performances in an active-duty sample with a history of concussion. Appl Neuropsychol Adult 2016;23:295-301. DOI: 10.1080/23279095.2015.1079713.
20. Ross TP, Poston AM, Rein PA, Salvatore AN, Wills NL, York TM. Performance Invalidity Base Rates Among Healthy Undergraduate Research Participants. Arch Clin Neuropsychol 2015;31:97-104. PMID: 26490230; DOI: 10.1093/arclin/acv062.
Abstract
Few studies have examined base rates of suboptimal effort among healthy, undergraduate students recruited for neuropsychological research. An and colleagues (2012, Conducting research with non-clinical healthy undergraduates: Does effort play a role in neuropsychological test performance? Archives of Clinical Neuropsychology, 27, 849-857) reported high rates of performance invalidity (30.8%-55.6%), calling into question the validity of findings generated from samples of college students. In contrast, subsequent studies have reported much lower base rates, ranging from 2.6% to 12%. The present study replicated and extended previous work by examining the performance of 108 healthy undergraduates on the Dot Counting Test, Victoria Symptom Validity Test, Word Memory Test, and a brief battery of neuropsychological measures. During initial testing, 8.3% of the sample scored below cutoffs on at least one performance validity test, while 3.7% were classified as invalid at Time 2 (mean interval = 34.4 days). The present findings add to a growing number of studies suggesting that performance invalidity base rates in samples of non-clinical, healthy college students are much lower than An and colleagues' initial findings. Although suboptimal effort is much less problematic than An and colleagues suggested, recent reports as high as 12% indicate that including measures of effort may be of value when using college students as participants. Methodological issues and recommendations for future research are presented.
21. Bigler ED. Neuroimaging as a biomarker in symptom validity and performance validity testing. Brain Imaging Behav 2015;9:421-444. DOI: 10.1007/s11682-015-9409-1.