1. Woods D, Pebler P, Johnson DK, Herron T, Hall K, Blank M, Geraci K, Williams G, Chok J, Lwi S, Curran B, Schendel K, Spinelli M, Baldo J. The California Cognitive Assessment Battery (CCAB). Front Hum Neurosci 2024; 17:1305529. PMID: 38273881; PMCID: PMC10809797; DOI: 10.3389/fnhum.2023.1305529.
Abstract
Introduction: We are developing the California Cognitive Assessment Battery (CCAB) to provide neuropsychological assessments to patients who lack test access because of cost, capacity, mobility, and transportation barriers.
Methods: The CCAB consists of 15 non-verbal and 17 verbal subtests normed for telemedical assessment. It runs on calibrated tablet computers over cellular or Wi-Fi connections, either in the laboratory or in participants' homes. Spoken instructions and verbal stimuli are delivered through headphones using naturalistic text-to-speech voices. Verbal responses are scored in real time, then recorded and transcribed offline using consensus automatic speech recognition (ASR), which combines the transcripts of seven commercial ASR engines to produce timestamped transcripts more accurate than those of any single engine. The CCAB is designed for supervised self-administration using a web-browser application, the Examiner, which lets examiners record observations, view subtest performance in real time, initiate video chats, and correct potential error conditions (e.g., training and performance failures) for multiple participants concurrently.
Results: Here we describe (1) CCAB usability with older participants (ages 50 to 89); (2) CCAB psychometric properties based on normative data from 415 older participants; (3) comparisons of at-home vs. in-lab CCAB testing; (4) preliminary analyses of the effects of COVID-19 infection on performance; and (5) the influence of scoring models on classification of Mild Cognitive Impairment (MCI). Mean z-scores averaged over CCAB subtests showed impaired performance of COVID+ relative to COVID- participants after factoring out the contributions of age, education, and gender (AEG). However, inter-cohort differences were no longer significant when performance was analyzed with a comprehensive model that also factored out pre-existing demographic factors that distinguished the COVID+ and COVID- cohorts (e.g., vocabulary, depression, and race). In contrast to AEG scores, comprehensive scores correlated significantly with the severity of COVID-19 infection. Finally, scoring models influenced the classification of individual participants with MCI (z-scores < -1.50): the comprehensive model accounted for more than twice as much variance as the AEG model and reduced racial bias in MCI classification.
Discussion: The CCAB holds the promise of providing scalable, laboratory-quality neurodiagnostic assessments to underserved urban, exurban, and rural populations.
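The demographic adjustment described above (AEG vs. comprehensive scoring models, with MCI flagged at z < -1.50) follows a standard regress-and-standardize scheme. A minimal sketch in Python, assuming a normative table with illustrative column names (not the authors' code or variables):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def demographic_z_scores(norms: pd.DataFrame, formula: str, score_col: str = "score"):
    """Fit a normative regression and return residual-based z-scores.

    formula: covariates to factor out, e.g. "score ~ age + education + C(gender)"
    for an AEG-style model, or a longer formula for a comprehensive model that
    also includes vocabulary, depression, race, etc.
    """
    model = smf.ols(formula, data=norms).fit()
    residuals = norms[score_col] - model.predict(norms)
    sd = residuals.std(ddof=1)
    return model, sd, residuals / sd

# Illustrative use with simulated normative data (415 rows, as in the abstract).
rng = np.random.default_rng(0)
norms = pd.DataFrame({
    "score": rng.normal(50, 10, 415),
    "age": rng.integers(50, 90, 415),
    "education": rng.integers(8, 21, 415),
    "gender": rng.choice(["F", "M"], 415),
})
model, sd, z = demographic_z_scores(norms, "score ~ age + education + C(gender)")
norms["z_aeg"] = z
norms["mci_flag"] = norms["z_aeg"] < -1.50   # MCI-style cutoff used in the abstract
```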
Affiliation(s)
- David Woods
- NeuroBehavioral Systems Inc., Berkeley, CA, United States
- Peter Pebler
- NeuroBehavioral Systems Inc., Berkeley, CA, United States
- David K. Johnson
- Department of Neurology, University of California, Davis, Davis, CA, United States
- Timothy Herron
- NeuroBehavioral Systems Inc., Berkeley, CA, United States
- VA Northern California Health Care System, Martinez, CA, United States
- Kat Hall
- NeuroBehavioral Systems Inc., Berkeley, CA, United States
- Mike Blank
- NeuroBehavioral Systems Inc., Berkeley, CA, United States
- Kristi Geraci
- NeuroBehavioral Systems Inc., Berkeley, CA, United States
- Jas Chok
- VA Northern California Health Care System, Martinez, CA, United States
- Sandy Lwi
- VA Northern California Health Care System, Martinez, CA, United States
- Brian Curran
- VA Northern California Health Care System, Martinez, CA, United States
- Krista Schendel
- VA Northern California Health Care System, Martinez, CA, United States
- Maria Spinelli
- VA Northern California Health Care System, Martinez, CA, United States
- Juliana Baldo
- VA Northern California Health Care System, Martinez, CA, United States
2. Linnhoff S, Haghikia A, Zaehle T. Effects of repetitive twice-weekly transcranial direct current stimulations on fatigue and fatigability in people with multiple sclerosis. Sci Rep 2023; 13:5878. PMID: 37041183; PMCID: PMC10090173; DOI: 10.1038/s41598-023-32779-y.
Abstract
Fatigue is associated with a dramatically decreased quality of life in people with multiple sclerosis (pwMS). It encompasses both a constant subjective feeling of exhaustion and an objective decline in performance, known as fatigability. However, inconsistency and heterogeneity in defining and assessing fatigue have limited advances in understanding and treating MS-associated fatigue. Transcranial direct current stimulation (tDCS) has emerged as a promising, non-pharmaceutical treatment strategy for subjective fatigue. However, whether repetitive tDCS also has long-term effects on time-on-task performance has not yet been investigated. This pseudorandomized, single-blinded, sham-controlled study investigated tDCS effects on behavioral and electrophysiological parameters. Eighteen pwMS received eight twice-weekly 30-minute stimulations over the left dorsolateral prefrontal cortex. Fatigability was operationalized as time-on-task-related changes in reaction time variability and P300 amplitude. Additionally, subjective trait and state fatigue ratings were assessed. The results revealed an overall decrease in subjective trait fatigue ratings that lasted at least four weeks after the stimulations; however, the ratings declined after both anodal and sham tDCS. No effects were found on subjective state fatigue or objective fatigability parameters. Linear mixed models and Bayesian regression models likewise favored the absence of a tDCS effect on fatigability parameters. The results confirm the complex relationship between MS-associated fatigue and fatigability. Reliable and clinically relevant parameters need to be established to extend the potential of tDCS for treating fatigability. Furthermore, our results indicate that consecutive stimulations, rather than twice-weekly stimulations, should be the preferred stimulation scheme in future studies.
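Objective fatigability in this study was operationalized as time-on-task changes in reaction-time variability (and P300 amplitude). One common way to quantify the reaction-time part is to split the task into consecutive blocks and track the intraindividual coefficient of variation; the sketch below is a generic illustration with assumed block counts and units, not the authors' pipeline:

```python
import numpy as np
import pandas as pd

def rt_variability_by_block(rts_ms: np.ndarray, n_blocks: int = 4) -> pd.DataFrame:
    """Split a time-on-task RT series into consecutive blocks and summarize each.

    Returns per-block mean RT, SD, and coefficient of variation (SD / mean),
    so a rising CV across blocks can index increasing fatigability.
    """
    blocks = np.array_split(np.asarray(rts_ms, dtype=float), n_blocks)
    rows = []
    for i, block in enumerate(blocks, start=1):
        mean_rt = block.mean()
        sd_rt = block.std(ddof=1)
        rows.append({"block": i, "mean_rt": mean_rt, "sd_rt": sd_rt,
                     "cv": sd_rt / mean_rt})
    return pd.DataFrame(rows)

# Example: simulated RTs that slow and become more variable over time on task.
rng = np.random.default_rng(0)
rts = np.concatenate([rng.normal(450 + 20 * b, 40 + 15 * b, 100) for b in range(4)])
print(rt_variability_by_block(rts, n_blocks=4))
```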
Affiliation(s)
- Stefanie Linnhoff
- Department of Neurology, Otto-von-Guericke-University Magdeburg, Leipziger Street 44, 39120, Magdeburg, Germany
- Aiden Haghikia
- Department of Neurology, Otto-von-Guericke-University Magdeburg, Leipziger Street 44, 39120, Magdeburg, Germany
- Center for Behavioral Brain Sciences (CBBS), 39106, Magdeburg, Germany
- German Center for Neurodegenerative Diseases (DZNE), 39120, Magdeburg, Germany
- Tino Zaehle
- Department of Neurology, Otto-von-Guericke-University Magdeburg, Leipziger Street 44, 39120, Magdeburg, Germany
- Center for Behavioral Brain Sciences (CBBS), 39106, Magdeburg, Germany
3. Fuermaier ABM, Dandachi-Fitzgerald B, Lehrner J. Attention Performance as an Embedded Validity Indicator in the Cognitive Assessment of Early Retirement Claimants. Psychological Injury and Law 2022. DOI: 10.1007/s12207-022-09468-8.
Abstract
The assessment of performance validity is essential in any neuropsychological evaluation. However, relatively few validity measures are based on attention performance embedded within routine cognitive tasks. The present study explores the potential value of a computerized attention test, the Cognitrone, as an embedded validity indicator in the neuropsychological assessment of early retirement claimants. Two hundred sixty-five early retirement claimants were assessed with the Word Memory Test (WMT) and the Cognitrone. WMT scores were used as the independent criterion to determine performance validity. Speed and accuracy measures of the Cognitrone were analyzed with receiver operating characteristic (ROC) analyses to classify group membership. The Cognitrone was sensitive in revealing attention deficits in early retirement claimants. Of the sample, 54% (n = 143) showed noncredible cognitive performance, whereas 46% (n = 122) showed credible cognitive performance. Individuals failing the performance validity assessment showed slower (AUC = 79.1%) and less accurate (AUC = 79.5%) attention performance than those passing it. A compound score integrating speed and accuracy showed incremental value (AUC = 87.9%). Various cut scores are suggested, yielding either equal sensitivity and specificity of 80% (cut score = 1.297) or 69% sensitivity at 90% specificity (cut score = 0.734). The present study supports the sensitivity of the Cognitrone to attention deficits in early retirement claimants and its potential value as an embedded validity indicator. Further research with different samples and with multidimensional criteria for determining invalid performance is required before clinical application can be recommended.
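The validity-indicator analysis above rests on ROC curves: each candidate score is compared against the WMT pass/fail criterion, its AUC is reported, and cut scores are chosen to hit target sensitivity/specificity trade-offs. A hedged sketch of that workflow (simulated data and variable names are illustrative; the published cut scores are not reproduced):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

def auc_and_cut(scores: np.ndarray, invalid: np.ndarray, min_specificity: float = 0.90):
    """AUC of one indicator plus the cut score meeting a requested specificity floor.

    `invalid` is 1 for WMT-fail (noncredible) cases, 0 for WMT-pass cases;
    higher `scores` are assumed to indicate invalid performance.
    """
    auc = roc_auc_score(invalid, scores)
    fpr, tpr, thresholds = roc_curve(invalid, scores)
    ok = fpr <= (1 - min_specificity)        # operating points meeting the specificity floor
    best = np.argmax(tpr[ok])                # most sensitive among those
    return auc, thresholds[ok][best], tpr[ok][best], 1 - fpr[ok][best]

# A compound score can combine speed and accuracy, e.g. via logistic regression,
# and then be evaluated with the same ROC machinery.
rng = np.random.default_rng(1)
n = 265
invalid = (rng.random(n) < 0.54).astype(int)
speed = rng.normal(0, 1, n) + 1.2 * invalid    # slower when invalid (illustrative)
errors = rng.normal(0, 1, n) + 1.2 * invalid   # more errors when invalid (illustrative)
X = np.column_stack([speed, errors])
compound = LogisticRegression().fit(X, invalid).decision_function(X)
print(auc_and_cut(compound, invalid))
```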
4. Motor Reaction Times as an Embedded Measure of Performance Validity: A Study with a Sample of Austrian Early Retirement Claimants. Psychological Injury and Law 2021. DOI: 10.1007/s12207-021-09431-z.
Abstract
Among embedded measures of performance validity, reaction time parameters appear to be relatively uncommon; however, their potential may be underestimated. In German-speaking countries, reaction time is often examined using the Alertness subtest of the Test of Attention Performance (TAP), and several previous studies have examined its suitability for validity assessment. The current study examined a variety of reaction time parameters from the TAP Alertness subtest in a sample of 266 Austrian civil forensic patients. Classification results from the Word Memory Test (WMT) were used as an external indicator to distinguish between valid and invalid symptom presentations. The WMT fail group showed slower reaction times and greater intraindividual variation across trials than the WMT pass group. Receiver operating characteristic analyses revealed areas under the curve of .775 to .804. Logistic regression models identified the intraindividual variation of motor reaction time with a warning sound as the best predictor of invalid test performance. Suggested cut scores yielded a sensitivity of .62 at a specificity of .90, or a sensitivity of .45 at a specificity of .95 when the accepted false-positive rate was set lower. The results encourage the use of the Alertness subtest as an embedded measure of performance validity.
5. Woods DL, Wyma JM, Herron TJ, Yund EW, Reed B. The Dyad-Adaptive Paced Auditory Serial Addition Test (DA-PASAT): Normative data and the effects of repeated testing, simulated malingering, and traumatic brain injury. PLoS One 2018; 13:e0178148. PMID: 29677192; PMCID: PMC5909896; DOI: 10.1371/journal.pone.0178148.
Abstract
The Paced Auditory Serial Addition Test (PASAT) is widely used to evaluate processing speed and executive function in patients with multiple sclerosis, traumatic brain injury, and other neurological disorders. In the PASAT, subjects listen to sequences of digits while continuously reporting the sum of the last two digits presented. Four different stimulus onset asynchronies (SOAs) are usually tested, with difficulty increasing as SOAs are reduced. Ceiling effects are common at long SOAs, while the digit delivery rate often exceeds the subject’s processing capacity at short SOAs, causing some subjects to stop performing altogether. In addition, subjects may adopt an “alternate answer” strategy at short SOAs, which reduces the test’s demands on working memory and processing speed. Consequently, studies have shown that the number of dyads (consecutive correct answers) is a more sensitive measure of PASAT performance than the overall number of correct sums. Here, we describe a 2.5-minute computerized test, the Dyad-Adaptive PASAT (DA-PASAT), in which SOAs are adjusted with a 2:1 staircase, decreasing after each pair of correct responses and increasing after misses. Processing capacity is reflected in the minimum SOA (minSOA) achieved in 54 trials. Experiment 1 gathered normative data in two large populations: 1617 subjects in New Zealand ranging in age from 18 to 65 years, and 214 Californians ranging in age from 18 to 82 years. Minimum SOAs were influenced by age, education, and daily hours of computer use. Minimum SOA z-scores, calculated after factoring out the influence of these factors, were virtually identical in the two control groups, as were response times (RTs) and dyad ratios (the proportion of hits occurring in dyads). Experiment 2 measured the test-retest reliability of the DA-PASAT in 44 young subjects who underwent three test sessions at weekly intervals. High intraclass correlation coefficients (ICCs) were found for minSOAs (0.87), response times (0.76), and dyad ratios (0.87). Performance improved across test sessions for all measures. Experiment 3 investigated the effects of simulated malingering in 50 subjects: 42% of simulated malingerers produced abnormal (p < 0.05) minSOA z-scores. Simulated malingerers with abnormal scores were distinguished from control subjects with abnormal scores with 87% sensitivity and 69% specificity on the basis of excessive differences between training and test performance. Experiment 4 investigated patients with traumatic brain injury (TBI): patients with mild TBI performed within the normal range, while patients with severe TBI showed deficits. The DA-PASAT reduces the time and stress of PASAT assessment while gathering sensitive measures of dyad processing that reveal the effects of aging, malingering, and traumatic brain injury on performance.
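The adaptive core of the DA-PASAT is the 2:1 staircase: the SOA shortens after every pair of correct sums and lengthens after a miss, and the minimum SOA reached over 54 trials indexes processing capacity. A simulation sketch (step sizes, starting SOA, and the responder model are illustrative assumptions, not the published parameters):

```python
import numpy as np

def run_da_pasat_staircase(respond_correct, n_trials: int = 54,
                           start_soa: float = 3.0, step: float = 0.2,
                           floor: float = 0.5) -> float:
    """Simulate a 2:1 staircase over digit-addition trials and return the minimum SOA reached.

    respond_correct(soa) -> bool models whether the participant answers the current
    sum correctly at the given stimulus onset asynchrony (seconds). The SOA decreases
    after every two consecutive correct responses and increases after every miss.
    """
    soa = start_soa
    min_soa = soa
    correct_streak = 0
    for _ in range(n_trials):
        if respond_correct(soa):
            correct_streak += 1
            if correct_streak == 2:        # 2:1 rule: harder after a correct pair
                soa = max(floor, soa - step)
                correct_streak = 0
        else:
            soa += step                    # easier after a miss
            correct_streak = 0
        min_soa = min(min_soa, soa)
    return min_soa

# Illustrative responder whose accuracy falls off below a 1.5 s capacity limit.
rng = np.random.default_rng(2)
capacity = 1.5
print(run_da_pasat_staircase(lambda soa: rng.random() < 1 / (1 + np.exp(-(soa - capacity) * 4))))
```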
Affiliation(s)
- David L. Woods
- Human Cognitive Neurophysiology Laboratory, VANCHCS, Martinez, California, United States of America
- UC Davis Department of Neurology, Sacramento, California, United States of America
- Center for Neurosciences, UC Davis, Davis, California, United States of America
- UC Davis Center for Mind and Brain, Davis, California, United States of America
- NeuroBehavioral Systems, Inc., Berkeley, California, United States of America
- John M. Wyma
- Human Cognitive Neurophysiology Laboratory, VANCHCS, Martinez, California, United States of America
- Timothy J. Herron
- Human Cognitive Neurophysiology Laboratory, VANCHCS, Martinez, California, United States of America
- E. William Yund
- Human Cognitive Neurophysiology Laboratory, VANCHCS, Martinez, California, United States of America
- Bruce Reed
- UC Davis Department of Neurology, Sacramento, California, United States of America
- Alzheimer’s Disease Center, Davis, California, United States of America
6. Woods DL, Wyma JM, Herron TJ, Yund EW. The Bay Area Verbal Learning Test (BAVLT): Normative Data and the Effects of Repeated Testing, Simulated Malingering, and Traumatic Brain Injury. Front Hum Neurosci 2017; 10:654. PMID: 28127280; PMCID: PMC5226952; DOI: 10.3389/fnhum.2016.00654.
Abstract
Verbal learning tests (VLTs) are widely used to evaluate memory deficits in neuropsychiatric and developmental disorders. However, their validity has been called into question by studies showing significant differences in VLT scores obtained by different examiners. Here we describe the computerized Bay Area Verbal Learning Test (BAVLT), which minimizes inter-examiner differences by incorporating digital list presentation and automated scoring. In the 10-min BAVLT, a 12-word list is presented on three acquisition trials, followed by a distractor list, immediate recall of the first list, and, after a 30-min delay, delayed recall and recognition. In Experiment 1, we analyzed the performance of 195 participants ranging in age from 18 to 82 years. Acquisition trials showed strong primacy and recency effects, with scores improving over repetitions, particularly for mid-list words. Inter-word intervals (IWIs) increased with successive words recalled. Omnibus scores (summed over all trials except recognition) were influenced by age, education, and sex (women outperformed men). In Experiment 2, we examined BAVLT test-retest reliability in 29 participants tested with different word lists at weekly intervals. High intraclass correlation coefficients were seen for omnibus and acquisition scores, IWIs, and a categorization index reflecting semantic reorganization. Experiment 3 examined the performance of Experiment 2 participants when feigning symptoms of traumatic brain injury. Although 37% of simulated malingerers showed abnormal (p < 0.05) omnibus z-scores, z-score cutoffs were ineffective in discriminating abnormal malingerers from control participants with abnormal scores. In contrast, four malingering indices (recognition scores, primacy/recency effects, learning rate across acquisition trials, and IWIs) discriminated the two groups with 80% sensitivity and 80% specificity. Experiment 4 examined the performance of a small group of patients with mild or severe TBI. Overall, both patient groups performed within the normal range, although significant performance deficits were seen in some patients. The BAVLT improves the speed and replicability of verbal learning assessments while providing comprehensive measures of retrieval timing, semantic organization, and primacy/recency effects that clarify the nature of performance.
Affiliation(s)
- David L Woods
- Human Cognitive Neurophysiology Laboratory, Veterans Affairs Northern California Health Care System, Martinez, CA, USA
- University of California Davis Department of Neurology, Sacramento, CA, USA
- Center for Neurosciences, University of California Davis, Davis, CA, USA
- University of California Davis Center for Mind and Brain, Davis, CA, USA
- NeuroBehavioral Systems, Inc., Berkeley, CA, USA
- John M Wyma
- Human Cognitive Neurophysiology Laboratory, Veterans Affairs Northern California Health Care System, Martinez, CA, USA
- Timothy J Herron
- Human Cognitive Neurophysiology Laboratory, Veterans Affairs Northern California Health Care System, Martinez, CA, USA
- E William Yund
- Human Cognitive Neurophysiology Laboratory, Veterans Affairs Northern California Health Care System, Martinez, CA, USA
7. Woods DL, Wyma JM, Herron TJ, Yund EW. Computerized Analysis of Verbal Fluency: Normative Data and the Effects of Repeated Testing, Simulated Malingering, and Traumatic Brain Injury. PLoS One 2016; 11:e0166439. PMID: 27936001; PMCID: PMC5147824; DOI: 10.1371/journal.pone.0166439.
Abstract
In verbal fluency (VF) tests, subjects articulate words in a specified category during a short test period (typically 60 s). Verbal fluency tests are widely used to study language development and to evaluate memory retrieval in neuropsychiatric disorders. Performance is usually measured as the total number of correct words retrieved. Here, we describe the properties of a computerized VF (C-VF) test that tallies correct words and repetitions while providing additional lexical measures of word frequency, syllable count, and typicality. In addition, the C-VF permits (1) the analysis of the rate of responding over time, and (2) the analysis of the semantic relationships between words using a new method, Explicit Semantic Analysis (ESA), as well as the established semantic clustering and switching measures developed by Troyer et al. (1997). In Experiment 1, we gathered normative data from 180 subjects ranging in age from 18 to 82 years in semantic ("animals") and phonemic (letter "F") conditions. The number of words retrieved in 90 s correlated with education and daily hours of computer use. The rate of word production declined sharply over time during both tests. In semantic conditions, correct-word scores correlated strongly with the number of ESA and Troyer-defined semantic switches as well as with an ESA-defined semantic organization index (SOI). In phonemic conditions, ESA revealed significant semantic influences in the sequence of words retrieved. In Experiment 2, we examined the test-retest reliability of different measures across three weekly tests in 40 young subjects. Different categories were used for each semantic ("animals", "parts of the body", and "foods") and phonemic (letters "F", "A", and "S") condition. After regressing out the influences of education and computer use, we found that correct-word z-scores in the first session did not differ from those of the subjects in Experiment 1. Word production was uniformly greater in semantic than phonemic conditions. Intraclass correlation coefficients (ICCs) of correct-word z-scores were higher for phonemic (0.91) than semantic (0.77) tests. In semantic conditions, good reliability was also seen for the SOI (ICC = 0.68) and ESA-defined switches in semantic categories (ICC = 0.62). In Experiment 3, we examined the performance of subjects from Experiment 2 when instructed to malinger: 38% showed abnormal (p < 0.05) performance in semantic conditions. Simulated malingerers with abnormal scores could be distinguished from subjects with abnormal scores in Experiment 1 with 80% sensitivity and 89% specificity using lexical, temporal, and semantic measures. In Experiment 4, we tested patients with mild and severe traumatic brain injury (mTBI and sTBI). Patients with mTBI performed within the normal range, while patients with sTBI showed significant impairments in correct-word z-scores and category shifts. The lexical, temporal, and semantic measures of the C-VF provide an automated and comprehensive description of verbal fluency performance.
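The clustering and switching measures mentioned above reduce to simple operations on the retrieval sequence once each word has been assigned to a semantic cluster (whether via Troyer et al.'s subcategories or ESA similarity). A minimal sketch assuming the word-to-cluster mapping has already been done (the labels here are made up for illustration):

```python
from typing import Sequence

def count_switches(cluster_labels: Sequence[str]) -> int:
    """Number of transitions between adjacent words that belong to different clusters."""
    return sum(1 for prev, curr in zip(cluster_labels, cluster_labels[1:]) if prev != curr)

def mean_run_length(cluster_labels: Sequence[str]) -> float:
    """Average run length of same-cluster words (a simple clustering index)."""
    runs, current = [], 1
    for prev, curr in zip(cluster_labels, cluster_labels[1:]):
        if curr == prev:
            current += 1
        else:
            runs.append(current)
            current = 1
    runs.append(current)
    return sum(runs) / len(runs)

# Example: an "animals" retrieval sequence tagged with illustrative subcategories.
labels = ["pets", "pets", "farm", "farm", "farm", "zoo", "pets", "zoo"]
print(count_switches(labels), mean_run_length(labels))   # -> 4 switches, mean run length 1.6
```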
Affiliation(s)
- David L. Woods
- Human Cognitive Neurophysiology Laboratory, VANCHCS, Martinez, CA, United States of America
- UC Davis Department of Neurology, Sacramento, CA, United States of America
- Center for Neurosciences, UC Davis, Davis, CA, United States of America
- UC Davis Center for Mind and Brain, Davis, CA, United States of America
- NeuroBehavioral Systems, Inc., Berkeley, CA, United States of America
- John M. Wyma
- Human Cognitive Neurophysiology Laboratory, VANCHCS, Martinez, CA, United States of America
- NeuroBehavioral Systems, Inc., Berkeley, CA, United States of America
- Timothy J. Herron
- Human Cognitive Neurophysiology Laboratory, VANCHCS, Martinez, CA, United States of America
- E. William Yund
- Human Cognitive Neurophysiology Laboratory, VANCHCS, Martinez, CA, United States of America
8.
Abstract
Tests of design fluency (DF) assess a participant’s ability to generate geometric patterns and are thought to measure executive functions involving the non-dominant frontal lobe. Here, we describe the properties of a rapidly administered computerized design-fluency (C-DF) test that measures response times, and is automatically scored. In Experiment 1, we found that the number of unique patterns produced over 90 s by 180 control participants (ages 18 to 82 years) correlated with age, education, and daily computer-use. Each line in the continuous 4-line patterns required approximately 1.0 s to draw. The rate of pattern production and the incidence of repeated patterns both increased over the 90 s test. Unique pattern z-scores (corrected for age and computer-use) correlated with the results of other neuropsychological tests performed on the same day. Experiment 2 analyzed C-DF test-retest reliability in 55 participants in three test sessions at weekly intervals and found high z-score intraclass correlation coefficients (ICC = 0.79). Z-scores in the first session did not differ significantly from those of Experiment 1, but performance improved significantly over repeated tests. Experiment 3 investigated the performance of Experiment 2 participants when instructed to simulate malingering. Z-scores were significantly reduced and pattern repetitions increased, but there was considerable overlap with the performance of the control population. Experiment 4 examined performance in veteran patients tested more than one year after traumatic brain injury (TBI). Patients with mild TBI performed within the normal range, but patients with severe TBI showed reduced z-scores. The C-DF test reliably measures visuospatial pattern generation ability and reveals performance deficits in patients with severe TBI.
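Automatic scoring of design fluency mainly requires deciding when two drawn patterns count as the same design. A hedged sketch that encodes each 4-line pattern as an unordered set of dot-to-dot segments and tallies unique productions and repetitions (the dot-grid encoding is an assumption for illustration, not the C-DF test's internal format):

```python
from typing import Iterable, List, Tuple

Segment = Tuple[int, int]          # a line drawn between two numbered dots

def canonical(pattern: Iterable[Segment]) -> frozenset:
    """Order-free representation: each segment stored as an unordered dot pair."""
    return frozenset(tuple(sorted(seg)) for seg in pattern)

def score_design_fluency(patterns: List[List[Segment]]):
    """Count unique patterns and repetitions in the order they were produced."""
    seen = set()
    unique = repeats = 0
    for pattern in patterns:
        key = canonical(pattern)
        if key in seen:
            repeats += 1
        else:
            seen.add(key)
            unique += 1
    return unique, repeats

# Two distinct patterns followed by a repeat of the first (dots numbered 1-5).
produced = [
    [(1, 2), (2, 3), (3, 4), (4, 5)],
    [(1, 3), (3, 5), (5, 2), (2, 4)],
    [(2, 1), (3, 2), (4, 3), (5, 4)],   # same segments as the first, drawn in reverse
]
print(score_design_fluency(produced))   # -> (2, 1)
```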
9. Woods DL, Wyma JM, Herron TJ, Yund EW. The Effects of Repeat Testing, Malingering, and Traumatic Brain Injury on Computerized Measures of Visuospatial Memory Span. Front Hum Neurosci 2016; 9:690. PMID: 26779001; PMCID: PMC4700270; DOI: 10.3389/fnhum.2015.00690.
Abstract
Spatial span tests (SSTs) such as the Corsi Block Test (CBT) and the SST of the Wechsler Memory Scale are widely used to assess deficits in spatial working memory. We conducted three experiments to evaluate the test-retest reliability and clinical sensitivity of a new computerized spatial span test (C-SST) that incorporates psychophysical methods to improve the precision of spatial span measurement. In Experiment 1, we analyzed C-SST test-retest reliability in 49 participants who underwent three test sessions at weekly intervals. Intraclass correlation coefficients (ICCs) were higher for a psychophysically derived mean span (MnS) metric (0.83) than for the maximal span and total correct metrics used in traditional spatial-span tests. Response times (ReTs) also showed high ICCs (0.93); ReTs correlated negatively with MnS scores and positively with response-time latencies from other tests of processing speed. Learning effects were not significant. Experiment 2 examined the performance of Experiment 1 participants when instructed to feign symptoms of traumatic brain injury (TBI): 57% showed abnormal MnS z-scores. An MnS z-score cutoff of 3.0 correctly classified 36% of simulated malingerers and 91% of the subgroup of 11 control participants with abnormal spans. Malingerers also made more substitution errors than control participants with abnormal spans (sensitivity = 43%, specificity = 91%). In addition, malingerers showed no evidence of ReT slowing, in contrast to significant abnormalities seen on other malingered tests of processing speed. As a result, differences between ReT z-scores and z-scores on other processing speed tests showed very high sensitivity and specificity in distinguishing malingering and control participants with either normal or abnormal spans. Experiment 3 examined C-SST performance in a group of patients with predominantly mild TBI: neither MnS nor ReT z-scores showed significant group-level abnormalities. The C-SST improves the reliability and sensitivity of spatial span testing, can accurately detect malingering, and shows that visuospatial working memory is largely preserved in patients with predominantly mild TBI.
Affiliation(s)
- David L Woods
- Human Cognitive Neurophysiology Laboratory, Veterans Affairs Northern California Health Care System, Martinez, CA, USA
- Department of Neurology, University of California, Davis, Sacramento, CA, USA
- Center for Neuroscience, University of California, Davis, Davis, CA, USA
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- John M Wyma
- Human Cognitive Neurophysiology Laboratory, Veterans Affairs Northern California Health Care System, Martinez, CA, USA
- Timothy J Herron
- Human Cognitive Neurophysiology Laboratory, Veterans Affairs Northern California Health Care System, Martinez, CA, USA
- E W Yund
- Human Cognitive Neurophysiology Laboratory, Veterans Affairs Northern California Health Care System, Martinez, CA, USA
10. Woods DL, Wyma JM, Yund EW, Herron TJ. The Effects of Repeated Testing, Simulated Malingering, and Traumatic Brain Injury on Visual Choice Reaction Time. Front Hum Neurosci 2015; 9:595. PMID: 26635569; PMCID: PMC4656817; DOI: 10.3389/fnhum.2015.00595.
Abstract
Choice reaction time (CRT), the time required to discriminate and respond appropriately to different stimuli, is a basic measure of attention and processing speed. Here, we describe the reliability and clinical sensitivity of a new CRT test that presents lateralized visual stimuli and adaptively adjusts stimulus onset asynchronies using a staircase procedure. Experiment 1 investigated the test–retest reliability in three test sessions performed at weekly intervals. Performance in the first test session was accurately predicted from age and computer-use regression functions obtained in a previously studied normative cohort. Central processing time (CentPT), the difference between the CRTs and simple reaction time latencies measured in a separate experiment, accounted for 55% of CRT latency and more than 85% of CRT latency variance. Performance improved significantly across the three test sessions. High intraclass correlation coefficients were seen for CRTs (0.90), CentPTs (0.87), and an omnibus performance measure (0.81) that combined CRT and minimal SOA z-scores. Experiment 2 investigated performance in the same participants when instructed to feign symptoms of traumatic brain injury (TBI): 87% produced abnormal omnibus z-scores. Simulated malingerers showed greater elevations in simple reaction times than CRTs, and hence reduced CentPTs. Latency-consistency z-scores, based on the difference between the CRTs obtained and those predicted based on CentPT latencies, discriminated malingering participants from controls with high sensitivity and specificity. Experiment 3 investigated CRT test performance in military veterans who had suffered combat-related TBI and symptoms of post-traumatic stress disorder, and revealed small but significant deficits in performance in the TBI population. The results indicate that the new CRT test shows high test–retest reliability, can assist in detecting participants performing with suboptimal effort, and is sensitive to the effects of TBI on the speed and accuracy of visual processing.
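The latency-consistency score described above compares each obtained CRT with the CRT predicted from central processing time (CentPT = CRT minus simple RT). A sketch of one way such a score could be computed from normative data (the regression approach and variable names are assumptions for illustration, not the published scoring):

```python
import numpy as np

def latency_consistency_z(crt_norm: np.ndarray, centpt_norm: np.ndarray,
                          crt_obs: float, srt_obs: float) -> float:
    """Standardized difference between an observed CRT and the CRT predicted from CentPT.

    CentPT is the choice RT minus the simple RT. The normative arrays supply the
    CRT-on-CentPT regression and its residual spread; a large positive z (obtained
    CRT slower than CentPT predicts) matches the pattern described for simulated
    malingerers, whose simple RTs are inflated more than their CRTs.
    """
    slope, intercept = np.polyfit(centpt_norm, crt_norm, 1)
    residual_sd = np.std(crt_norm - (slope * centpt_norm + intercept), ddof=1)
    centpt_obs = crt_obs - srt_obs
    predicted_crt = slope * centpt_obs + intercept
    return (crt_obs - predicted_crt) / residual_sd

# Illustrative normative sample plus one profile with a disproportionately slow simple RT.
rng = np.random.default_rng(3)
centpt_norm = rng.normal(250, 40, 200)
crt_norm = centpt_norm + rng.normal(200, 25, 200)   # CRT = CentPT + simple/motor component
print(latency_consistency_z(crt_norm, centpt_norm, crt_obs=700.0, srt_obs=600.0))
```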
Affiliation(s)
- David L Woods
- Human Cognitive Neurophysiology Laboratory, Veterans Affairs Northern California Health Care System, Martinez, CA, USA
- UC Davis Department of Neurology, Sacramento, CA, USA
- Center for Neurosciences, University of California, Davis, Davis, CA, USA
- UC Davis Center for Mind and Brain, Davis, CA, USA
- John M Wyma
- Human Cognitive Neurophysiology Laboratory, Veterans Affairs Northern California Health Care System, Martinez, CA, USA
- E W Yund
- Human Cognitive Neurophysiology Laboratory, Veterans Affairs Northern California Health Care System, Martinez, CA, USA
- Timothy J Herron
- Human Cognitive Neurophysiology Laboratory, Veterans Affairs Northern California Health Care System, Martinez, CA, USA