1
Izgi B, Moore TM, Yalcinay-Inan M, Port AM, Kuscu K, Gur RC, Yapici Eser H. Test-retest reliability of the Turkish translation of the Penn Computerized Neurocognitive Battery. Appl Neuropsychol Adult 2021; 29:1258-1267. PMID: 33492171. DOI: 10.1080/23279095.2020.1866572.
Abstract
Psychiatric disorders are associated with cognitive dysfunction (CD), and reliable screening and follow-up of CD are essential for both research and clinical practice globally; yet most assessments are in Western languages. We aimed to evaluate the test-retest reliability of the Turkish version of the Penn Computerized Neurocognitive Battery (PennCNB) to guide confident interpretation of results. Fifty-eight healthy individuals completed the Turkish PennCNB in two sessions. After quality control, reliability was analyzed using intraclass correlation coefficients (ICCs), corrected for practice effects. Most measures did not differ significantly between sessions and had acceptable ICC values, with several exceptions. Scores improved considerably for some memory measures, including immediate Facial Memory and Spatial Memory, and for incorrect responses in abstraction and mental flexibility, with correspondingly acceptable ICCs. Test-retest assessment of the Turkish PennCNB shows that it can serve as a reliable measure of cognitive function in both cross-sectional and longitudinal assessments. Preliminary validity assessment in this normative sample showed the expected positive correlations with education level and negative correlations with age. Thus, the Turkish version of the PennCNB can be considered a reliable neuropsychological testing tool for research and clinical practice. Practice effects should be considered, especially at short retest intervals. Significantly better retest performance, beyond the practice effect, likely reflects nonlinear improvement in some participants who "learned how to learn" the memory tests or gained insight into solving the abstraction and mental flexibility test.
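The consistency-type ICC commonly used in two-session test-retest designs like this one can be computed directly from its two-way ANOVA definition. The sketch below (function name and interface are ours; the study's own analysis pipeline is not published here) implements ICC(3,1), one standard variant that removes the session-mean (practice) component:

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1): two-way mixed model, consistency, single measures.

    scores: (n_subjects, k_sessions) array of test scores.
    Assumes at least two subjects with non-identical scores.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    sess_means = scores.mean(axis=0)
    # Partition total sum of squares into subject, session, and error parts
    ss_subj = k * np.sum((subj_means - grand) ** 2)
    ss_sess = n * np.sum((sess_means - grand) ** 2)
    ss_total = np.sum((scores - grand) ** 2)
    ss_err = ss_total - ss_subj - ss_sess  # session effect is removed here
    ms_subj = ss_subj / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)
```

Because the session effect is partialed out, a uniform practice-related shift between sessions does not lower this coefficient, which is one way of "correcting for practice effects" as described in the abstract.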
Affiliation(s)
- Busra Izgi
- Graduate School of Health Sciences, Neuroscience Ph.D. Program, Koç University, Istanbul, Turkey; Research Center for Translational Medicine (KUTTAM), Koç University, Istanbul, Turkey
- Tyler M Moore
- Brain Behavior Laboratory, Neurodevelopment and Psychosis Section, Department of Psychiatry, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA
- Allison M Port
- Research Center for Translational Medicine (KUTTAM), Koç University, Istanbul, Turkey
- Kemal Kuscu
- School of Medicine, Department of Psychiatry, Koç University, Istanbul, Turkey
- Ruben C Gur
- Brain Behavior Laboratory, Neurodevelopment and Psychosis Section, Department of Psychiatry, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA
- Hale Yapici Eser
- Research Center for Translational Medicine (KUTTAM), Koç University, Istanbul, Turkey; School of Medicine, Department of Psychiatry, Koç University, Istanbul, Turkey
2
Naifeh JA, Mash HBH, Stein MB, Fullerton CS, Kessler RC, Ursano RJ. The Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS): progress toward understanding suicide among soldiers. Mol Psychiatry 2019; 24:34-48. PMID: 30104726. PMCID: PMC6756108. DOI: 10.1038/s41380-018-0197-z.
Abstract
Responding to an unprecedented increase in the suicide rate among soldiers, in 2008 the US Army and US National Institute of Mental Health funded the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS), a multicomponent epidemiological and neurobiological study of risk and resilience factors for suicidal thoughts and behaviors, and their psychopathological correlates among Army personnel. Using a combination of administrative records, representative surveys, computerized neurocognitive tests, and blood samples, Army STARRS and its longitudinal follow-up study (STARRS-LS) are designed to identify potentially actionable findings to inform the Army's suicide prevention efforts. The current report presents a broad overview of Army STARRS and its findings to date on suicide deaths, attempts, and ideation, as well as other important outcomes that may increase suicide risk (e.g., mental disorders, sexual assault victimization). The findings highlight the complexity of environmental and genetic risk and protective factors in different settings and contexts, and the importance of life and career history in understanding suicidal thoughts and behaviors.
Affiliation(s)
- James A. Naifeh
- Department of Psychiatry, Center for the Study of Traumatic Stress, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Holly B. Herberman Mash
- Department of Psychiatry, Center for the Study of Traumatic Stress, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Murray B. Stein
- Department of Psychiatry and Department of Family Medicine and Public Health, University of California San Diego, La Jolla, CA, USA; VA San Diego Healthcare System, San Diego, CA, USA
- Carol S. Fullerton
- Department of Psychiatry, Center for the Study of Traumatic Stress, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Ronald C. Kessler
- Department of Health Care Policy, Harvard Medical School, Boston, MA, USA
- Robert J. Ursano
- Department of Psychiatry, Center for the Study of Traumatic Stress, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
3
Thomas ML, Brown GG, Gur RC, Moore TM, Patt VM, Risbrough VB, Baker DG. A signal detection-item response theory model for evaluating neuropsychological measures. J Clin Exp Neuropsychol 2018; 40:745-760. PMID: 29402152. PMCID: PMC6050112. DOI: 10.1080/13803395.2018.1427699.
Abstract
Introduction: Models from signal detection theory are commonly used to score neuropsychological test data, especially tests of recognition memory. Here we show that certain item response theory models can be formulated as signal detection theory models, thus linking two complementary but distinct methodologies. We then use the approach to evaluate the validity (construct representation) of commonly used research measures, demonstrate the impact of conditional error on neuropsychological outcomes, and evaluate measurement bias. Method: Signal detection-item response theory (SD-IRT) models were fitted to recognition memory data for words, faces, and objects. The sample consisted of U.S. Infantry Marines and Navy Corpsmen participating in the Marine Resiliency Study. Data comprised item responses to the Penn Face Memory Test (PFMT; N = 1,338), Penn Word Memory Test (PWMT; N = 1,331), and Visual Object Learning Test (VOLT; N = 1,249), and self-report of past head injury with loss of consciousness. Results: SD-IRT models adequately fitted recognition memory item data across all modalities. Error varied systematically with ability estimates, and distributions of residuals from the regression of memory discrimination onto self-report of past head injury were positively skewed towards regions of larger measurement error. Analyses of differential item functioning revealed little evidence of systematic bias by level of education. Conclusions: SD-IRT models benefit from the measurement rigor of item response theory, which permits the modeling of item difficulty and examinee ability, and from signal detection theory, which provides an interpretive framework encompassing the experimentally validated constructs of memory discrimination and response bias. We used this approach to validate the construct representation of commonly used research measures and to demonstrate how nonoptimized item parameters can lead to erroneous conclusions when interpreting neuropsychological test data. Future work might include the development of computerized adaptive tests and integration with mixture and random-effects models.
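The equal-variance Gaussian signal detection model underlying the SD-IRT approach estimates memory discrimination (d′) and response bias from a recognition-memory response table. The following is a minimal sketch of those classical estimators (the function name and the log-linear correction constant are our choices, not details from the paper):

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian signal detection estimates from counts.

    Adds 0.5 to each cell (log-linear correction) so perfect hit or
    false-alarm rates do not produce infinite z-scores.
    """
    z = NormalDist().inv_cdf  # standard normal quantile function
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)             # memory discrimination
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias (c)
    return d_prime, criterion
```

In the IRT reformulation described by the abstract, these person-level constructs are estimated jointly with item parameters rather than from pooled rates, which is what lets error be modeled conditionally on ability.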
Affiliation(s)
- Michael L. Thomas
- Department of Psychiatry, University of California San Diego, La Jolla, CA
- VA Center of Excellence for Stress and Mental Health (CESAMH), San Diego, CA
- Gregory G. Brown
- Department of Psychiatry, University of California San Diego, La Jolla, CA
- VISN-22 Mental Illness Research, Education and Clinical Center (MIRECC), VA San Diego Healthcare System, San Diego, CA
- Ruben C. Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Tyler M. Moore
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA
- Virginie M. Patt
- Department of Psychiatry, University of California San Diego, La Jolla, CA
- Joint Doctoral Program in Clinical Psychology, San Diego State University/University of California, San Diego, CA
- Victoria B. Risbrough
- Department of Psychiatry, University of California San Diego, La Jolla, CA
- VA Center of Excellence for Stress and Mental Health (CESAMH), San Diego, CA
- Dewleen G. Baker
- Department of Psychiatry, University of California San Diego, La Jolla, CA
- VA Center of Excellence for Stress and Mental Health (CESAMH), San Diego, CA
4
Karr JE, Areshenkoff CN, Rast P, Hofer SM, Iverson GL, Garcia-Barrera MA. The unity and diversity of executive functions: A systematic review and re-analysis of latent variable studies. Psychol Bull 2018; 144:1147-1185. PMID: 30080055. DOI: 10.1037/bul0000160.
Abstract
Confirmatory factor analysis (CFA) has been frequently applied to executive function measurement since first used to identify a three-factor model of inhibition, updating, and shifting; however, subsequent CFAs have supported inconsistent models across the life span, ranging from unidimensional to nested-factor models (i.e., bifactor without inhibition). This systematic review summarized CFAs on performance-based tests of executive functions and reanalyzed summary data to identify best-fitting models. Eligible CFAs involved 46 samples (N = 9,756). The most frequently accepted models varied by age (i.e., preschool = one/two-factor; school-age = three-factor; adolescent/adult = three/nested-factor; older adult = two/three-factor), and most often included updating/working memory, inhibition, and shifting factors. A bootstrap reanalysis simulated 5,000 samples from 21 correlation matrices (11 child/adolescent; 10 adult) from studies including the three most common factors, fitting seven competing models. Model results were summarized as the mean percent accepted (i.e., average rate at which models converged and met fit thresholds: CFI ≥ .90/RMSEA ≤ .08) and mean percent selected (i.e., average rate at which a model showed superior fit to other models: ΔCFI ≥ .005/.010/ΔRMSEA ≤ -.010/-.015). No model consistently converged and met fit criteria in all samples. Among adult samples, the nested-factor was accepted (41-42%) and selected (8-30%) most often. Among child/adolescent samples, the unidimensional model was accepted (32-36%) and selected (21-53%) most often, with some support for two-factor models without a differentiated shifting factor. Results show some evidence for greater unidimensionality of executive function among child/adolescent samples and both unity and diversity among adult samples. However, low rates of model acceptance/selection suggest possible bias toward the publication of well-fitting but potentially nonreplicable models with underpowered samples. 
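The acceptance and selection rules applied to each bootstrap sample reduce to simple threshold checks on CFI and RMSEA. A sketch of those rules as stated in the abstract, using the stricter of the two reported Δ-criteria (function names and the tuple interface are ours, and the study's full procedure compared seven models per sample, not just two):

```python
def accept(cfi, rmsea):
    """Acceptance rule from the reanalysis: CFI >= .90 and RMSEA <= .08."""
    return cfi >= 0.90 and rmsea <= 0.08

def superior(candidate, reference, d_cfi=0.010, d_rmsea=-0.015):
    """Does `candidate` show superior fit to `reference`?

    candidate / reference: (cfi, rmsea) pairs.  Uses the stricter
    thresholds reported in the abstract: delta-CFI >= .010 AND
    delta-RMSEA <= -.015 (i.e., higher CFI and lower RMSEA).
    """
    delta_cfi = candidate[0] - reference[0]
    delta_rmsea = candidate[1] - reference[1]
    return delta_cfi >= d_cfi and delta_rmsea <= d_rmsea
```

Counting how often each competing model passes `accept`, and how often it is `superior` to every rival, across the 5,000 simulated samples yields the "percent accepted" and "percent selected" summaries described above.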
5
Brown GG, Thomas ML, Patt V. Parametric model measurement: reframing traditional measurement ideas in neuropsychological practice and research. Clin Neuropsychol 2017; 31:1047-1072. PMID: 28617067. DOI: 10.1080/13854046.2017.1334829.
Abstract
Objective: Neuropsychology is an applied measurement field with its psychometric work primarily built upon classical test theory (CTT). We describe a series of psychometric models to supplement the use of CTT in neuropsychological research and test development. Method: We introduce increasingly complex psychometric models as measurement algebras, which include model parameters that represent abilities and item properties. Within this framework of parametric model measurement (PMM), neuropsychological assessment involves the estimation of model parameters, with ability parameter values assuming the role of test 'scores'. Moreover, the traditional notion of measurement error is replaced by the notion of parameter estimation error, and the definition of reliability becomes linked to notions of item and test information. The more complex PMM approaches incorporate into the assessment of neuropsychological performance formal parametric models of behavior validated in the experimental psychology literature, along with item parameters. These PMM approaches endorse the use of experimental manipulations of model parameters to assess a test's construct representation. Strengths and weaknesses of these models are evaluated by their implications for measurement error conditional upon ability level, sensitivity to sample characteristics, computational challenges to parameter estimation, and construct validity. Conclusion: A family of parametric psychometric models can be used to assess latent processes of interest to neuropsychologists. By modeling latent abilities at the item level, psychometric studies in neuropsychology can investigate construct validity and measurement precision within a single framework and contribute to a unification of statistical methods within the framework of generalized latent variable modeling.
Affiliation(s)
- Gregory G Brown
- Psychology Service (116B), VA San Diego Healthcare System, San Diego, CA, USA
- Michael L Thomas
- Department of Psychiatry, University of California, San Diego, CA, USA
- Virginie Patt
- San Diego State University/University of California San Diego Joint Doctoral Program in Clinical Psychology, San Diego, CA, USA
6
Moore TM, Gur RC, Thomas ML, Brown GG, Nock MK, Savitt AP, Keilp JG, Heeringa S, Ursano RJ, Stein MB. Development, Administration, and Structural Validity of a Brief, Computerized Neurocognitive Battery: Results From the Army Study to Assess Risk and Resilience in Servicemembers. Assessment 2017; 26:125-143. PMID: 28135828. DOI: 10.1177/1073191116689820.
Abstract
The Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS) is a research project aimed at identifying risk and protective factors for suicide and related mental health outcomes among Army Soldiers. The New Soldier Study component of Army STARRS included the assessment of a range of cognitive- and emotion-processing domains linked to brain systems related to suicidal behavior including posttraumatic stress disorder, mood disorders, substance use disorders, and impulsivity. We describe the design and application of the Army STARRS neurocognitive test battery to a sample of 56,824 soldiers. We investigate its structural and concurrent validity through factor analysis and correlation of scores with demographics. We conclude that, in addition to being composed of previously well-validated measures, the Army STARRS neurocognitive battery as a whole demonstrates good psychometric properties. Correlations of scores with age and sex differences mostly replicate previously published findings, highlighting moderate to large effect sizes even within this restricted age range. Factor structures of scores conform to theoretical expectations. This neurocognitive battery provides a brief, valid measurement of neurocognition that may be helpful in predicting mental health and military performance. These measures can be integrated with neuroimaging to offer a powerful tool for assessing neurocognition in Servicemembers.
Affiliation(s)
- Ruben C Gur
- University of Pennsylvania, Philadelphia, PA, USA; Philadelphia Veterans Administration Medical Center, Philadelphia, PA, USA
- Gregory G Brown
- University of California, San Diego, La Jolla, CA, USA; VA San Diego Healthcare System, San Diego, CA, USA
- John G Keilp
- New York State Psychiatric Institute, New York, NY, USA; Columbia University, New York, NY, USA
- Robert J Ursano
- Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Murray B Stein
- University of California, San Diego, La Jolla, CA, USA