1.
Rystedt E, Morén J, Lindbäck J, Tedim Cruz V, Ingelsson M, Kilander L, Lunet N, Pais J, Ruano L, Westman G. Validation of a web-based self-administered test for cognitive assessment in a Swedish geriatric setting. PLoS One 2024; 19:e0297575. PMID: 38300935; PMCID: PMC10833583; DOI: 10.1371/journal.pone.0297575.
Abstract
Computerized cognitive tests have the potential to cost-effectively detect and monitor cognitive impairment and thereby facilitate treatment. However, relatively few of these tests have been validated across a variety of populations. Brain on Track, a self-administered web-based test, has previously been shown to differentiate well between healthy individuals and patients with cognitive impairment in Portuguese populations. The objective of this study was to validate the differential ability and evaluate the usability of Brain on Track in a Swedish memory clinic setting. Brain on Track was administered to 30 patients with mild cognitive impairment/mild dementia and 30 healthy controls, all scheduled to perform the test from home after one week and after three months. To evaluate usability, the patient group was interviewed after completion of the testing phase. Patients scored lower than healthy controls on both the first (median score 42.4 vs 54.1, p<0.001) and the second test (median score 42.3 vs 55.0, p<0.001). The test-retest intra-class correlation was 0.87. A multiple logistic regression model accounting for effects of age, gender and education showed that Brain on Track could differentiate between the groups with an area under the receiver operating characteristic curve of 0.90 for the first and 0.88 for the second test. In the subjective evaluation, nine patients left positive comments, nine left negative comments, and five left mixed comments regarding the test experience. Sixty percent of the patients had received help from relatives to log on to the platform. In conclusion, Brain on Track performed well in differentiating healthy controls from patients with cognitive impairment and showed high test-retest reliability, on par with results from previous studies. However, the substantial proportion of patients needing help to log in may limit independent use of the platform.
Affiliation(s)
- Einar Rystedt
- Department of Public Health and Caring Sciences, Geriatrics, Uppsala University, Uppsala, Sweden
- Jakob Morén
- Department of Medical Sciences, Infection Medicine, Uppsala University, Uppsala, Sweden
- Johan Lindbäck
- Uppsala Clinical Research Center, Uppsala University, Uppsala, Sweden
- Vitor Tedim Cruz
- Serviço de Neurologia, Unidade Local de Saúde de Matosinhos, Matosinhos, Portugal
- EPIUnit–Instituto de Saúde Pública, Universidade do Porto, Porto, Portugal
- Laboratório para a Investigação Integrativa e Translacional em Saúde Populacional (ITR), Porto, Portugal
- Martin Ingelsson
- Department of Public Health and Caring Sciences, Geriatrics, Uppsala University, Uppsala, Sweden
- Krembil Brain Institute, University Health Network, Toronto, Ontario, Canada
- Departments of Medicine and Laboratory Medicine & Pathobiology, Tanz Centre for Research in Neurodegenerative Diseases, University of Toronto, Toronto, Ontario, Canada
- Lena Kilander
- Department of Public Health and Caring Sciences, Geriatrics, Uppsala University, Uppsala, Sweden
- Nuno Lunet
- EPIUnit–Instituto de Saúde Pública, Universidade do Porto, Porto, Portugal
- Laboratório para a Investigação Integrativa e Translacional em Saúde Populacional (ITR), Porto, Portugal
- Departamento de Ciências da Saúde Pública e Forenses e Educação Médica, Faculdade de Medicina da Universidade do Porto, Porto, Portugal
- Joana Pais
- EPIUnit–Instituto de Saúde Pública, Universidade do Porto, Porto, Portugal
- Laboratório para a Investigação Integrativa e Translacional em Saúde Populacional (ITR), Porto, Portugal
- Luis Ruano
- EPIUnit–Instituto de Saúde Pública, Universidade do Porto, Porto, Portugal
- Laboratório para a Investigação Integrativa e Translacional em Saúde Populacional (ITR), Porto, Portugal
- Departamento de Ciências da Saúde Pública e Forenses e Educação Médica, Faculdade de Medicina da Universidade do Porto, Porto, Portugal
- Serviço de Neurologia, Centro Hospitalar Entre Douro e Vouga, Santa Maria da Feira, Portugal
- Gabriel Westman
- Department of Medical Sciences, Infection Medicine, Uppsala University, Uppsala, Sweden
2.
Launes J, Uurainen H, Virta M, Hokkanen L. Self-administered online test of memory functions. Nordic Psychology 2022. DOI: 10.1080/19012276.2022.2074525.
Affiliation(s)
- Jyrki Launes
- Faculty of Medicine, Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Hanna Uurainen
- Faculty of Medicine, Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Maarit Virta
- Faculty of Medicine, Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Laura Hokkanen
- Faculty of Medicine, Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
3.
Wang CSM, Wu JY, Hsu WT, Chien PF, Chen PL, Huang YC, Cheng KS. Using Self-Administered Game-Based Cognitive Assessment to Screen for Degenerative Dementia: A Pilot Study. J Alzheimers Dis 2022; 86:877-890. PMID: 35147533; DOI: 10.3233/JAD-215142.
Abstract
BACKGROUND Earlier detection of dementia is needed as cases increase yearly in the aging populations of Taiwan and the world. In recent years, the global internet usage rate has gradually increased among older people. To expand dementia screening and provide timely medical intervention, a simple self-administered assessment tool that can easily screen for dementia is needed. OBJECTIVE The two-part goal of this pilot study was, first, to develop a Game-Based Cognitive Assessment (GBCA) tool and, second, to evaluate its validity for early screening of patients with cognitive impairment. METHODS The researchers recruited 67 patients with neurocognitive disorders (NCDs) and 57 healthy controls (HCs). Each participant underwent the GBCA and other clinical cognitive assessments (CDR, CASI, and MMSE), and filled out a questionnaire evaluating their experience of using the GBCA. Statistical analyses were used to measure the validity of the GBCA for screening for degenerative dementia. RESULTS The average GBCA scores of the HC and NCD groups were 87 (SD = 7.9) and 52 (SD = 21.7), respectively. The GBCA correlated well with the CASI (r2 = 0.90, p < 0.001) and with the MMSE (r2 = 0.92, p < 0.001), indicating concurrent validity. A GBCA cut-off of 75/76 yielded a sensitivity of 85.1%, a specificity of 91.5%, and an area under the curve of 0.978. The positive predictive value was 91.9%, and the negative predictive value was 84.4%. The results of the user-experience questionnaire for the HC and NCD groups were good and acceptable, respectively. CONCLUSION The GBCA is an effective and acceptable tool for screening for degenerative dementia.
Affiliation(s)
- Carol Sheei-Meei Wang
- Department of BioMedical Engineering, National Cheng Kung University, Tainan City, Taiwan; Department of Psychiatry, Tainan Hospital, Ministry of Health and Welfare, Tainan City, Taiwan; Department of Psychiatry, National Cheng Kung University Hospital, Tainan City, Taiwan
- Jia-Yun Wu
- Department of BioMedical Engineering, National Cheng Kung University, Tainan City, Taiwan
- Wen-Tzu Hsu
- Department of BioMedical Engineering, National Cheng Kung University, Tainan City, Taiwan
- Pei-Fang Chien
- Department of Psychiatry, Tainan Hospital, Ministry of Health and Welfare, Tainan City, Taiwan
- Ying-Che Huang
- Department of Neurology, Tainan Hospital, Ministry of Health and Welfare, Tainan City, Taiwan
- Kuo-Sheng Cheng
- Department of BioMedical Engineering, National Cheng Kung University, Tainan City, Taiwan
4.
Diagnostic performance of digital cognitive tests for the identification of MCI and dementia: A systematic review. Ageing Res Rev 2021; 72:101506. PMID: 34744026; DOI: 10.1016/j.arr.2021.101506.
Abstract
BACKGROUND The use of digital cognitive tests is becoming increasingly common. Older adults or their family members may use online tests for self-screening of dementia. However, the diagnostic performance of the various digital tests remains to be clarified. The objective of this study was to evaluate the diagnostic performance of digital cognitive tests for MCI and dementia in older adults. METHODS Literature searches were systematically performed in the OVID databases. Validation studies that reported the diagnostic performance of a digital cognitive test for MCI or dementia were included. The main outcome was the diagnostic performance of the digital test for the detection of MCI or dementia. RESULTS A total of 56 studies covering 46 digital cognitive tests were included. Most of the digital cognitive tests showed diagnostic performance comparable with that of paper-and-pencil tests. Twenty-two digital cognitive tests showed good diagnostic performance for dementia, with a sensitivity and a specificity over 0.80, such as the Computerized Visuo-Spatial Memory test and Self-Administered Tasks Uncovering Risk of Neurodegeneration. Eleven digital cognitive tests showed good diagnostic performance for MCI, such as the Brain Health Assessment. However, each digital test had only a few validation studies verifying its performance. CONCLUSIONS Digital cognitive tests showed good performance for detecting MCI and dementia. Digital tests can also collect data far beyond what traditional cognitive testing captures. Future research on these new forms of cognitive data is suggested for the early detection of MCI and dementia.
5.
Tsoy E, Zygouris S, Possin KL. Current State of Self-Administered Brief Computerized Cognitive Assessments for Detection of Cognitive Disorders in Older Adults: A Systematic Review. J Prev Alzheimers Dis 2021; 8:267-276. PMID: 34101783; PMCID: PMC7987552; DOI: 10.14283/jpad.2021.11.
Abstract
Early diagnosis of cognitive disorders in older adults is a major healthcare priority with benefits to patients, families, and health systems. Rapid advances in digital technology offer potential for developing innovative diagnostic pathways to support early diagnosis. Brief self-administered computerized cognitive tools in particular hold promise for clinical implementation by minimizing demands on staff time. In this study, we conducted a systematic review of self-administered computerized cognitive assessment measures designed for the detection of cognitive impairment in older adults. Studies were identified via a systematic search of published peer-reviewed literature across major scientific databases. All studies reporting on psychometric validation of brief (≤30 minutes) self-administered computerized measures for detection of MCI and all-cause dementia in older adults were included. Seventeen studies reporting on 10 cognitive tools met inclusion criteria and were subjected to systematic review. There was substantial variability in characteristics of validation samples and reliability and validity estimates. Only 2 measures evaluated feasibility and usability in the intended clinical settings. Similar to past reviews, we found variability across measures with regard to psychometric rigor and potential for widescale applicability in clinical settings. Despite the promise that self-administered cognitive tests hold for clinical implementation, important gaps in scientific rigor in development, validation, and feasibility studies of these measures remain. Developments in technology and biomarker studies provide potential avenues for future directions on the use of digital technology in clinical care.
Affiliation(s)
- E Tsoy
- Department of Neurology, Memory and Aging Center, University of California San Francisco, San Francisco, CA, USA
6.
Bissig D, Kaye J, Erten-Lyons D. Validation of SATURN, a free, electronic, self-administered cognitive screening test. Alzheimers Dement (N Y) 2020; 6:e12116. PMID: 33392382; PMCID: PMC7771179; DOI: 10.1002/trc2.12116.
Abstract
BACKGROUND Cognitive screening is limited by clinician time and by variability in administration and scoring. We therefore developed Self-Administered Tasks Uncovering Risk of Neurodegeneration (SATURN), a free, public-domain, self-administered, and automatically scored cognitive screening test, and validated it on inexpensive (<$100) computer tablets. METHODS SATURN is a 30-point test including orientation, word recall, and math items adapted from the Saint Louis University Mental Status test, modified versions of the Stroop and Trails tasks, and other assessments of visuospatial function and memory. English-speaking neurology clinic patients and their partners 50 to 89 years of age were given SATURN, the Montreal Cognitive Assessment (MoCA), and a brief survey about test preferences. For patients recruited from dementia clinics (n = 23), clinical status was quantified with the Clinical Dementia Rating (CDR) scale. Care partners (n = 37) were assigned CDR = 0. RESULTS SATURN and MoCA scores were highly correlated (P < .00001; r = 0.90). CDR sum-of-boxes scores were well-correlated with both tests (P < .00001; r = -0.83 and -0.86, respectively). Statistically, neither test was superior. Most participants (83%) reported that SATURN was easy to use, and most either preferred SATURN over the MoCA (47%) or had no preference (32%). DISCUSSION Performance on SATURN, a fully self-administered and freely available (https://doi.org/10.5061/dryad.02v6wwpzr) cognitive screening test, is well-correlated with MoCA and CDR scores.
Affiliation(s)
- David Bissig
- Department of Neurology, University of California–Davis, Sacramento, California, USA
- Jeffrey Kaye
- Department of Neurology, Oregon Health and Science University, Portland, Oregon, USA
- Deniz Erten-Lyons
- Department of Neurology, Veterans Affairs Medical Center, Portland, Oregon, USA
7.
Brief cognitive screening instruments for early detection of Alzheimer's disease: a systematic review. Alzheimers Res Ther 2019; 11:21. PMID: 30819244; PMCID: PMC6396539; DOI: 10.1186/s13195-019-0474-3.
Abstract
OBJECTIVES The objectives of this systematic review were (1) to give an overview of the available short screening instruments for the early detection of Alzheimer's disease (AD) and (2) to review the psychometric properties of these instruments. METHODS First, a systematic search of titles and abstracts in PubMed and Web of Science was conducted between February and July 2015 and updated in April 2016 and May 2018. Only papers written in English or Dutch were considered. All full-text papers about cognitive screening instruments for the early detection of AD were included, resulting in the identification of 38 pencil-and-paper tests and 12 computer tests. In a second step, the psychometric quality of these instruments was evaluated. To this end, the same databases were searched again to identify papers that described the psychometric properties of the instruments while applying diagnostic criteria for the diagnostic groups included. RESULTS Out of 1454 papers, 96 clearly discussed the psychometric properties of the instruments. Eighty-nine papers discussed pencil-and-paper tests, of which 80 were validated in a memory clinic setting. Based on the number of studies (31 articles) and the sensitivity (84%) and specificity (74%) values, the Montreal Cognitive Assessment (MoCA) seems to be a promising (pencil-and-paper) screening test for memory clinic testing as well as for population screening. Regarding computer tests, validation studies were available for only 7 out of 12 tests. CONCLUSIONS A large number of screening tests for AD are available. However, most tests have only been validated in a memory clinic setting, and description of the psychometric properties of the instruments is limited. Computer tests in particular require further research. The MoCA is a promising instrument, but its specificity for detecting early AD is rather low.
8.
Aslam RW, Bates V, Dundar Y, Hounsome J, Richardson M, Krishan A, Dickson R, Boland A, Fisher J, Robinson L, Sikdar S. A systematic review of the diagnostic accuracy of automated tests for cognitive impairment. Int J Geriatr Psychiatry 2018; 33:561-575. PMID: 29356098; PMCID: PMC5887872; DOI: 10.1002/gps.4852.
Abstract
OBJECTIVE The aim of this review was to determine whether automated computerised tests accurately identify patients with progressive cognitive impairment and, if so, to investigate their role in monitoring disease progression and/or response to treatment. METHODS Six electronic databases (Medline, Embase, Cochrane, Institute for Scientific Information, PsycINFO, and ProQuest) were searched from January 2005 to August 2015 to identify papers for inclusion. Studies assessing the diagnostic accuracy of automated computerised tests for mild cognitive impairment (MCI) and early dementia against a reference standard were included. Where possible, sensitivity, specificity, positive predictive value, negative predictive value, and likelihood ratios were calculated. The Quality Assessment of Diagnostic Accuracy Studies tool was used to assess risk of bias. RESULTS Sixteen studies assessing 11 diagnostic tools for MCI and early dementia were included. No studies were eligible for inclusion in the review of tools for monitoring progressive disease and response to treatment. The overall quality of the studies was good. However, the wide range of tests assessed and the non-standardised reporting of diagnostic accuracy outcomes meant that statistical analysis was not possible. CONCLUSION Some tests have shown promising results for identifying MCI and early dementia. However, concerns over small sample sizes, lack of replicability, and the limited evidence available make it difficult to make recommendations on the clinical use of computerised tests for diagnosing MCI and early dementia and for monitoring progression and treatment response. Research is required to establish stable cut-off points for automated computerised tests used to diagnose patients with MCI or early dementia.
Affiliation(s)
- Vickie Bates
- Health Services, University of Liverpool, Liverpool, UK
- Louise Robinson
- Institute of Health and Society, Newcastle University, Newcastle upon Tyne, UK
9.
Aslam RW, Bates V, Dundar Y, Hounsome J, Richardson M, Krishan A, Dickson R, Boland A, Kotas E, Fisher J, Sikdar S, Robinson L. Automated tests for diagnosing and monitoring cognitive impairment: a diagnostic accuracy review. Health Technol Assess 2018; 20:1-74. PMID: 27767932; DOI: 10.3310/hta20770.
Abstract
BACKGROUND Cognitive impairment is a growing public health concern and is one of the most distinctive characteristics of all dementias. The timely recognition of dementia syndromes can be beneficial, as some causes of dementia are treatable and are fully or partially reversible. Several automated cognitive assessment tools for assessing mild cognitive impairment (MCI) and early dementia are now available. Proponents of these tests cite as benefits the tests' repeatability and robustness and the saving of clinicians' time. However, the use of these tools to diagnose and/or monitor progressive cognitive impairment or response to treatment has not yet been evaluated. OBJECTIVES The aim of this review was to determine whether automated computerised tests could accurately identify patients with progressive cognitive impairment in MCI and dementia and, if so, to investigate their role in monitoring disease progression and/or response to treatment. DATA SOURCES Five electronic databases (MEDLINE, EMBASE, The Cochrane Library, ISI Web of Science and PsycINFO), plus ProQuest, were searched from 2005 to August 2015. The bibliographies of retrieved citations were also examined. Trial and research registers were searched for ongoing studies and reviews. A second search was run to identify individual test costs and acquisition costs for the various tools identified in the review. REVIEW METHODS Two reviewers independently screened all titles and abstracts to identify potentially relevant studies for inclusion in the review. Full-text copies were assessed independently by two reviewers. Data were extracted and assessed for risk of bias by one reviewer and independently checked for accuracy by a second. The results of the data extraction and quality assessment for each study are presented in structured tables and as a narrative summary. RESULTS The electronic searching of databases, including ProQuest, resulted in 13,542 unique citations. The titles and abstracts of these were screened and 399 articles were shortlisted for full-text assessment. Sixteen studies were included in the diagnostic accuracy review. No studies were eligible for inclusion in the review of tools for monitoring progressive disease. Eleven automated computerised tests were assessed in the 16 included studies. The overall quality of the studies was good; however, the wide range of tests assessed and the non-standardised reporting of diagnostic accuracy outcomes meant that meaningful synthesis or statistical analysis was not possible. LIMITATIONS The main limitation of this review is the substantial heterogeneity of the tests assessed in the included studies. As a result, no meta-analyses could be undertaken. CONCLUSION The quantity of information available is insufficient to support recommendations on the clinical use of computerised tests for diagnosing and monitoring MCI and early dementia progression. The value of these tests also depends on the costs of acquisition, training, administration and scoring. FUTURE WORK Research is required to establish stable cut-off points for automated computerised tests that are used to diagnose patients with MCI or early dementia. Additionally, the costs associated with acquiring and using these tests in clinical practice should be estimated. STUDY REGISTRATION The study is registered as PROSPERO CRD42015025410. FUNDING The National Institute for Health Research Health Technology Assessment programme.
Affiliation(s)
- Rabeea'h W Aslam
- Liverpool Review and Implementation Group (LRiG), University of Liverpool, Liverpool, UK
- Vickie Bates
- Liverpool Review and Implementation Group (LRiG), University of Liverpool, Liverpool, UK
- Yenal Dundar
- Liverpool Review and Implementation Group (LRiG), University of Liverpool, Liverpool, UK; Community Mental Health Team, Mersey Care NHS Foundation Trust, Southport, UK
- Juliet Hounsome
- Liverpool Review and Implementation Group (LRiG), University of Liverpool, Liverpool, UK
- Marty Richardson
- Liverpool Review and Implementation Group (LRiG), University of Liverpool, Liverpool, UK
- Ashma Krishan
- Liverpool Review and Implementation Group (LRiG), University of Liverpool, Liverpool, UK
- Rumona Dickson
- Liverpool Review and Implementation Group (LRiG), University of Liverpool, Liverpool, UK
- Angela Boland
- Liverpool Review and Implementation Group (LRiG), University of Liverpool, Liverpool, UK
- Eleanor Kotas
- Liverpool Review and Implementation Group (LRiG), University of Liverpool, Liverpool, UK
- Joanne Fisher
- Liverpool Review and Implementation Group (LRiG), University of Liverpool, Liverpool, UK
- Sudip Sikdar
- Older Adults Mental Health Team, Mersey Care NHS Foundation Trust, Waterloo, Liverpool, UK; Department of Psychological Sciences, University of Liverpool, Liverpool, UK
- Louise Robinson
- Newcastle University Institute for Ageing, Newcastle University, Newcastle upon Tyne, UK; Institute for Health and Society, Newcastle University, Newcastle upon Tyne, UK
10.
Abstract
OBJECTIVE This article reviews computerized tests and batteries used in the cognitive assessment of older adults. METHOD A literature search on Medline followed by cross-referencing yielded a total of 76 citations. RESULTS Seventeen test batteries were identified and categorized according to their scope. Computerized adaptive testing (CAT) and the Cambridge Cognitive Examination CAT battery, as well as 3 experimental batteries and an experimental test, are discussed in separate sections. All batteries exhibit strengths associated with computerized testing, such as standardization of administration, accurate measurement of many variables, automated record keeping, and savings of time and costs. Discriminant validity and test-retest reliability were well documented for most batteries, while documentation of other psychometric properties varied. CONCLUSION The large number of available batteries can be beneficial to the clinician or researcher; however, care should be taken to choose the correct battery for each application.
Affiliation(s)
- Stelios Zygouris
- 3rd Department of Neurology, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Magda Tsolaki
- 3rd Department of Neurology, Aristotle University of Thessaloniki, Thessaloniki, Greece