1. Woods D, Pebler P, Johnson DK, Herron T, Hall K, Blank M, Geraci K, Williams G, Chok J, Lwi S, Curran B, Schendel K, Spinelli M, Baldo J. The California Cognitive Assessment Battery (CCAB). Front Hum Neurosci 2024;17:1305529. PMID: 38273881; PMCID: PMC10809797; DOI: 10.3389/fnhum.2023.1305529.
Abstract
Introduction: We are developing the California Cognitive Assessment Battery (CCAB) to provide neuropsychological assessments to patients who lack access to testing because of cost, capacity, mobility, and transportation barriers.
Methods: The CCAB consists of 15 non-verbal and 17 verbal subtests normed for telemedical assessment. It runs on calibrated tablet computers over cellular or Wi-Fi connections, either in a laboratory or in participants' homes. Spoken instructions and verbal stimuli are delivered through headphones using naturalistic text-to-speech voices. Verbal responses are scored in real time, then recorded and transcribed offline using consensus automatic speech recognition (ASR), which combines the transcripts of seven commercial ASR engines to produce timestamped transcripts more accurate than those of any single engine. The CCAB is designed for supervised self-administration through a web-browser application, the Examiner, which lets examiners record observations, view subtest performance in real time, initiate video chats, and correct potential error conditions (e.g., training and performance failures) for multiple participants concurrently.
Results: Here we describe (1) CCAB usability with older participants (ages 50 to 89); (2) CCAB psychometric properties based on normative data from 415 older participants; (3) comparisons of at-home vs. in-lab CCAB testing; and (4) preliminary analyses of the effects of COVID-19 infection on performance. Mean z-scores averaged over CCAB subtests showed impaired performance of COVID+ relative to COVID- participants after factoring out the contributions of Age, Education, and Gender (AEG). However, inter-cohort differences were no longer significant when performance was analyzed with a comprehensive model that also factored out pre-existing demographic factors that distinguished the COVID+ and COVID- cohorts (e.g., vocabulary, depression, and race). In contrast, unlike AEG scores, comprehensive scores correlated significantly with the severity of COVID infection. (5) Finally, the scoring model influenced the classification of individual participants with Mild Cognitive Impairment (MCI, z-scores < -1.50): the comprehensive model accounted for more than twice as much variance as the AEG model and reduced racial bias in MCI classification.
Discussion: The CCAB holds the promise of providing scalable, laboratory-quality neurodiagnostic assessments to underserved urban, exurban, and rural populations.
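The consensus-ASR idea above (merging the outputs of several engines into a transcript more accurate than any single engine's) can be sketched as a word-level majority vote. This is a simplified illustration, not the authors' algorithm: it assumes the engine transcripts have already been aligned word-for-word, which a real system must do first (e.g., ROVER-style alignment of timestamped hypotheses).

```python
from collections import Counter

def consensus_transcript(transcripts):
    """Word-level majority vote over pre-aligned transcripts.

    Each transcript is a list of words; all lists are assumed to have
    the same length (real systems align them before voting).
    """
    consensus = []
    for words_at_position in zip(*transcripts):
        most_common_word, _count = Counter(words_at_position).most_common(1)[0]
        consensus.append(most_common_word)
    return consensus

# Three hypothetical engine outputs for the same utterance
engines = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "cat", "sat", "in", "the", "mat"],
    ["a",   "cat", "sat", "on", "the", "mat"],
]
print(consensus_transcript(engines))  # → ['the', 'cat', 'sat', 'on', 'the', 'mat']
```

With an odd number of engines, each position's errors are outvoted as long as a majority of engines get that word right, which is why the combined transcript can beat every individual engine.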
Affiliation(s)
- David Woods: NeuroBehavioral Systems Inc., Berkeley, CA, United States
- Peter Pebler: NeuroBehavioral Systems Inc., Berkeley, CA, United States
- David K Johnson: Department of Neurology, University of California, Davis, Davis, CA, United States
- Timothy Herron: NeuroBehavioral Systems Inc., Berkeley, CA, United States; VA Northern California Health Care System, Martinez, CA, United States
- Kat Hall: NeuroBehavioral Systems Inc., Berkeley, CA, United States
- Mike Blank: NeuroBehavioral Systems Inc., Berkeley, CA, United States
- Kristi Geraci: NeuroBehavioral Systems Inc., Berkeley, CA, United States
- Jas Chok: VA Northern California Health Care System, Martinez, CA, United States
- Sandy Lwi: VA Northern California Health Care System, Martinez, CA, United States
- Brian Curran: VA Northern California Health Care System, Martinez, CA, United States
- Krista Schendel: VA Northern California Health Care System, Martinez, CA, United States
- Maria Spinelli: VA Northern California Health Care System, Martinez, CA, United States
- Juliana Baldo: VA Northern California Health Care System, Martinez, CA, United States
2. Skirrow C, Meszaros M, Meepegama U, Lenain R, Papp KV, Weston J, Fristed E. Validation of a Remote and Fully Automated Story Recall Task to Assess for Early Cognitive Impairment in Older Adults: Longitudinal Case-Control Observational Study. JMIR Aging 2022;5:e37090. PMID: 36178715; PMCID: PMC9568813; DOI: 10.2196/37090.
Abstract
Background: Story recall is a simple and sensitive cognitive test commonly used to measure changes in episodic memory function in early Alzheimer disease (AD). Recent advances in digital technology and natural language processing make this test a candidate for automated administration and scoring, but higher-frequency disease monitoring requires multiple parallel test stimuli.
Objective: This study aims to develop and validate a remote and fully automated story recall task, suitable for longitudinal assessment, in a population of older adults with and without mild cognitive impairment (MCI) or mild AD.
Methods: The "Amyloid Prediction in Early Stage Alzheimer's disease" (AMYPRED) studies recruited participants in the United Kingdom (AMYPRED-UK: NCT04828122) and the United States (AMYPRED-US: NCT04928976). Participants were asked to complete optional daily self-administered assessments remotely on their smart devices over 7 to 8 days. Assessments included immediate and delayed recall of 3 stories from the Automatic Story Recall Task (ASRT), a test with multiple parallel stimuli (18 short and 18 long stories) balanced for key linguistic and discourse metrics. Verbal responses were recorded, securely transferred from participants' personal devices, and automatically transcribed and scored using text similarity metrics between the source text and the retelling to derive a generalized match score. Group differences in adherence and task performance were examined using logistic and linear mixed models, respectively. Correlational analyses examined parallel-forms reliability of ASRTs and convergent validity with established cognitive tests (Logical Memory Test and Preclinical Alzheimer's Cognitive Composite with semantic processing). Acceptability and usability data were obtained using a remotely administered questionnaire.
Results: Of the 200 participants recruited in the AMYPRED studies, 151 (75.5%; 78 cognitively unimpaired [CU] and 73 with MCI or mild AD) engaged in the optional remote assessments. Adherence to daily assessment was moderate and did not decline over time, but was higher in CU participants: ASRTs were completed each day by 73/106 (68.9%) participants with MCI or mild AD and 78/94 (83.0%) CU participants. Participants reported favorable task usability: infrequent technical problems, easy use of the app, and broad interest in the tasks. Task performance improved modestly across the week and was better for immediate recall. Generalized match scores were lower in participants with MCI or mild AD (Cohen d=1.54). Parallel-forms reliability of ASRT stories was moderate to strong for immediate recall (mean rho=0.73, range 0.56-0.88) and delayed recall (mean rho=0.73, range 0.54-0.86). The ASRTs showed moderate convergent validity with established cognitive tests.
Conclusions: The unsupervised, self-administered ASRT is sensitive to cognitive impairment in MCI and mild AD. The task showed good usability, high parallel-forms reliability, and high convergent validity with established cognitive tests. Remote, low-cost, low-burden, automatically scored speech assessments could support diagnostic screening, health care, and treatment monitoring.
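The abstract derives a "generalized match score" from text similarity between the source story and the retelling, without specifying the metrics. A minimal stand-in, assuming a simple content-token-overlap measure (the story and retelling below are invented examples, not ASRT stimuli):

```python
def match_score(source, retelling):
    """Proportion of source-story tokens that reappear in the retelling.

    A crude illustration of scoring recall by text similarity; the
    study's actual metrics are more sophisticated (and not given in
    the abstract).
    """
    source_tokens = set(source.lower().split())
    retold_tokens = set(retelling.lower().split())
    if not source_tokens:
        return 0.0
    return len(source_tokens & retold_tokens) / len(source_tokens)

story = "the farmer drove his old red truck to the market"
retelling = "a farmer took his red truck to a market"
print(round(match_score(story, retelling), 2))  # → 0.67
```

Because scoring is a deterministic function of the two texts, the whole pipeline (record, transcribe, score) can run unsupervised on the participant's own device, which is what makes daily parallel-form testing practical.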
Affiliation(s)
- Kathryn V Papp: Center for Alzheimer Research and Treatment, Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States; Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
3. Oliva I, Losa J. Validation of the Computerized Cognitive Assessment Test: NNCT. Int J Environ Res Public Health 2022;19:10495. PMID: 36078210; PMCID: PMC9518179; DOI: 10.3390/ijerph191710495.
Abstract
Population aging brings with it a growing burden of cognitive impairment, and one of the challenges of the coming years is its early and accessible detection. This study therefore aims to validate the NAIHA Neuro Cognitive Test (NNCT), a self-administered, software-based neuropsychological screening test designed for elderly people with and without cognitive impairment. The test digitizes cognitive assessment in order to offer greater accessibility than classic tests, present results in real time, and reduce costs. To this end, it was compared with the MMSE, the Clock Drawing Test (CDT), and CAMCOG, using correlations, ROC curves, and three ANOVAs. The NNCT evaluates seven cognitive areas and shows significant positive correlations with the other tests at both the total and subarea levels. Cutoff scores are established for the detection of both mild cognitive impairment and dementia, with optimal sensitivity and specificity. It is concluded that the NNCT is a valid method for detecting cognitive impairment.
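Deriving a screening cutoff with "optimal sensitivity and specificity" from a ROC analysis, as this abstract describes, is commonly done by maximizing Youden's J = sensitivity + specificity - 1. A sketch under stated assumptions: the scores are invented, and lower scores are taken to indicate impairment (score <= cutoff counts as a positive screen); the study's actual cutoffs come from its own data.

```python
def best_cutoff(scores_impaired, scores_healthy):
    """Return (cutoff, J, sensitivity, specificity) maximizing Youden's J.

    Assumes lower scores indicate impairment: a score <= cutoff is a
    positive screen. Every observed score is tried as a candidate cutoff.
    """
    candidates = sorted(set(scores_impaired) | set(scores_healthy))
    best = None
    for c in candidates:
        sens = sum(s <= c for s in scores_impaired) / len(scores_impaired)
        spec = sum(s > c for s in scores_healthy) / len(scores_healthy)
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (c, j, sens, spec)
    return best

# Hypothetical screening scores for two small groups
impaired = [12, 14, 15, 17, 18]
healthy = [19, 20, 22, 23, 25]
cutoff, j, sens, spec = best_cutoff(impaired, healthy)
print(cutoff, sens, spec)  # → 18 1.0 1.0
```

With overlapping real-world distributions J is below 1, and the chosen cutoff trades sensitivity against specificity; libraries such as scikit-learn's `roc_curve` give the full curve rather than a single point.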
4. Stricker NH, Stricker JL, Karstens AJ, Geske JR, Fields JA, Hassenstab J, Schwarz CG, Tosakulwong N, Wiste HJ, Jack CR, Kantarci K, Mielke MM. A novel computer adaptive word list memory test optimized for remote assessment: Psychometric properties and associations with neurodegenerative biomarkers in older women without dementia. Alzheimers Dement (Amst) 2022;14:e12299. PMID: 35280963; PMCID: PMC8905660; DOI: 10.1002/dad2.12299.
Abstract
Introduction: This study established the psychometric properties and preliminary validity of the Stricker Learning Span (SLS), a novel computer adaptive word list memory test designed for remote assessment and optimized for smartphone use.
Methods: Women enrolled in the Mayo Clinic Specialized Center of Research Excellence (SCORE) were recruited via e-mail or phone to complete two remote cognitive testing sessions. Convergent validity was assessed through correlation with previously administered in-person neuropsychological tests (n=96, ages 55-79), and criterion validity through associations with magnetic resonance imaging measures of neurodegeneration sensitive to Alzheimer's disease (n=47).
Results: SLS performance correlated significantly with the Auditory Verbal Learning Test and with measures of neurodegeneration (temporal meta-regions of interest and entorhinal cortical thickness, adjusting for age and education). Test-retest reliabilities across the two sessions were 0.71-0.76 (two-way mixed intraclass correlation coefficients).
Discussion: The SLS is a valid and reliable self-administered memory test that shows promise for remote assessment of aging and neurodegenerative disorders.
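The test-retest reliabilities reported above are two-way mixed intraclass correlation coefficients. For two sessions, the single-measures consistency form, ICC(3,1), can be computed directly from the two-way ANOVA mean squares. A from-scratch sketch with made-up scores (the formula is standard; the data are not from the study):

```python
def icc_consistency(session1, session2):
    """Two-way mixed, single-measures, consistency ICC: ICC(3,1).

    session1/session2 are paired scores from the same subjects at two
    test sessions. ICC(3,1) = (MSR - MSE) / (MSR + (k-1)*MSE), where
    MSR is the between-subjects mean square and MSE the residual.
    """
    n, k = len(session1), 2
    rows = list(zip(session1, session2))
    grand = sum(session1 + session2) / (n * k)
    subj_means = [sum(r) / k for r in rows]
    sess_means = [sum(session1) / n, sum(session2) / n]
    ss_rows = k * sum((m - grand) ** 2 for m in subj_means)
    ss_cols = n * sum((m - grand) ** 2 for m in sess_means)
    ss_total = sum((x - grand) ** 2 for r in rows for x in r)
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Hypothetical paired session scores for four subjects
t1 = [10, 12, 14, 16]
t2 = [11, 14, 13, 18]
print(round(icc_consistency(t1, t2), 2))  # → 0.87
```

Because the consistency form removes the session (column) effect, a uniform practice gain at retest does not lower the coefficient; only reordering of subjects between sessions does.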
Affiliation(s)
- Nikki H. Stricker: Department of Psychiatry and Psychology, Mayo Clinic, Rochester, Minnesota, USA
- John L. Stricker: Department of Psychiatry and Psychology, Mayo Clinic, Rochester, Minnesota, USA; Department of Information Technology, Mayo Clinic, Rochester, Minnesota, USA
- Aimee J. Karstens: Department of Psychiatry and Psychology, Mayo Clinic, Rochester, Minnesota, USA
- Jennifer R. Geske: Department of Quantitative Health Sciences, Mayo Clinic, Rochester, Minnesota, USA
- Julie A. Fields: Department of Psychiatry and Psychology, Mayo Clinic, Rochester, Minnesota, USA
- Jason Hassenstab: Department of Neurology and Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, Missouri, USA
- Heather J. Wiste: Department of Quantitative Health Sciences, Mayo Clinic, Rochester, Minnesota, USA
- Michelle M. Mielke: Department of Quantitative Health Sciences, Mayo Clinic, Rochester, Minnesota, USA; Department of Neurology, Mayo Clinic, Rochester, Minnesota, USA