Attarha M, Mahncke H, Merzenich M. The Real-World Usability, Feasibility, and Performance Distributions of Deploying a Digital Toolbox of Computerized Assessments to Remotely Evaluate Brain Health: Development and Usability Study. JMIR Form Res 2024;8:e53623. PMID: 38739916; PMCID: PMC11130778; DOI: 10.2196/53623.
Received: October 12, 2023; Revised: March 15, 2024; Accepted: April 11, 2024; Indexed: May 16, 2024. Open access.
Abstract
BACKGROUND
Managing brain health, and understanding how cognitive performance changes across the lifespan, is an ongoing global challenge.
OBJECTIVE
We developed and deployed a set of self-administrable, computerized assessments designed to measure key indexes of brain health across the visual and auditory sensory modalities. In this pilot study, we evaluated the usability, feasibility, and performance distributions of the assessments in a home-based, real-world setting without supervision.
METHODS
Potential participants were untrained users who self-registered on an existing brain training app called BrainHQ. Participants were contacted via a recruitment email and registered remotely to complete a demographics questionnaire and 29 unique assessments on their personal devices. We examined participant engagement, descriptive and psychometric properties of the assessments, associations between performance and self-reported demographic variables, cognitive profiles, and factor loadings.
RESULTS
Of the 365,782 potential participants contacted via a recruitment email, 414 (0.11%) registered, of whom 367 (88.6%) completed at least one assessment and 104 (25.1%) completed all 29 assessments. Registered participants were, on average, aged 63.6 (SD 14.8; range 13-107) years, mostly female (265/414, 64%), educated (329/414, 79.5% with a degree), and White (349/414, 84.3% White and 48/414, 11.6% people of color). A total of 72% (21/29) of the assessments showed no ceiling or floor effects or had easily modifiable score bounds to eliminate these effects. When correlating performance with self-reported demographic variables, 72% (21/29) of the assessments were sensitive to age, 72% (21/29) of the assessments were insensitive to gender, 93% (27/29) of the assessments were insensitive to race and ethnicity, and 93% (27/29) of the assessments were insensitive to education-based differences. Assessments were brief, with a mean duration of 3 (SD 1.0) minutes per task. The pattern of performance across the assessments revealed distinctive cognitive profiles and loaded onto 4 independent factors.
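The ceiling- and floor-effect screening described above can be sketched as a simple distributional check. The abstract does not state the study's exact criterion, so the 15% cutoff and the function below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def ceiling_floor_flags(scores, lo, hi, threshold=0.15):
    """Flag ceiling/floor effects in a task's score distribution.

    A task is flagged if more than `threshold` of participants score
    at a scale bound (`lo` or `hi`). The 15% cutoff is a commonly used
    convention assumed here for illustration.
    """
    scores = np.asarray(scores, dtype=float)
    floor = np.mean(scores <= lo) > threshold   # fraction at the bottom bound
    ceiling = np.mean(scores >= hi) > threshold  # fraction at the top bound
    return {"floor": bool(floor), "ceiling": bool(ceiling)}

# Example: scores on a hypothetical 0-100 task clustered at the top bound
sample = [100, 100, 98, 100, 87, 100, 92, 100, 76, 100]
print(ceiling_floor_flags(sample, lo=0, hi=100))  # → {'floor': False, 'ceiling': True}
```

A task flagged this way could then have its score bounds widened, which is what the abstract means by "easily modifiable score bounds to eliminate these effects."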
CONCLUSIONS
The assessments were both usable and feasible and warrant a full normative study. A digital toolbox of scalable and self-administrable assessments that can evaluate brain health at a glance (and longitudinally) may lead to novel future applications across clinical trials, diagnostics, and performance optimization.