1. Lopez KL, Monachino AD, Vincent KM, Peck FC, Gabard-Durnam LJ. Stability, change, and reliable individual differences in electroencephalography measures: a lifespan perspective on progress and opportunities. Neuroimage 2023; 275:120116. [PMID: 37169118] [DOI: 10.1016/j.neuroimage.2023.120116]
Abstract
Electroencephalographic (EEG) methods have great potential to serve both basic and clinical science approaches to understand individual differences in human neural function. Importantly, the psychometric properties of EEG data, such as internal consistency and test-retest reliability, constrain their ability to differentiate individuals successfully. Rapid and recent technological and computational advancements in EEG research make it timely to revisit the topic of psychometric reliability in the context of individual difference analyses. Moreover, pediatric and clinical samples provide some of the most salient and urgent opportunities to apply individual difference approaches, but the changes these populations experience over time also provide unique challenges from a psychometric perspective. Here we take a developmental neuroscience perspective to consider progress and new opportunities for parsing the reliability and stability of individual differences in EEG measurements across the lifespan. We first conceptually map the different profiles of measurement reliability expected for different types of individual difference analyses over the lifespan. Next, we summarize and evaluate the state of the field's empirical knowledge and need for testing measurement reliability, both internal consistency and test-retest reliability, across EEG measures of power, event-related potentials, nonlinearity, and functional connectivity across ages. Finally, we highlight how standardized pre-processing software for EEG denoising and empirical metrics of individual data quality may be used to further improve EEG-based individual differences research moving forward. We also include recommendations and resources throughout that individual researchers can implement to improve the utility and reproducibility of individual differences analyses with EEG across the lifespan.
Affiliation(s)
- K L Lopez, Northeastern University, 360 Huntington Ave, Boston, MA, United States
- A D Monachino, Northeastern University, 360 Huntington Ave, Boston, MA, United States
- K M Vincent, Northeastern University, 360 Huntington Ave, Boston, MA, United States
- F C Peck, University of California, Los Angeles, Los Angeles, CA, United States
- L J Gabard-Durnam, Northeastern University, 360 Huntington Ave, Boston, MA, United States
2. Trafimow D. Some Implications of Distinguishing Between Unexplained Variance That Is Systematic or Random. Educational and Psychological Measurement 2018; 78:482-503. [PMID: 30140103] [PMCID: PMC6096465] [DOI: 10.1177/0013164417691573]
Abstract
Because error variance alternatively can be considered to be the sum of systematic variance associated with unknown variables and randomness, a tripartite assumption is proposed that total variance in the dependent variable can be partitioned into three variance components. These are variance in the dependent variable that is explained by the independent variable, variance in the dependent variable that is unexplained but systematic (associated with variance in unknown variables), and random variance. Based on the tripartite assumption, classical measurement theory, and simple mathematics, it is shown that these components can be estimated using observable data. Mathematical and computer simulations illustrate some of the important issues and implications.
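The tripartite partition is straightforward to demonstrate in simulation. The sketch below is hypothetical code, not from the article, and the coefficients are arbitrary: it generates a dependent variable from a measured predictor, an "unknown" systematic variable, and pure noise, then recovers the three variance components.

```python
import numpy as np

# Illustrative simulation of the tripartite variance partition:
# total variance in y = variance explained by the measured IV (x)
#                     + systematic variance tied to an unknown variable (u)
#                     + purely random variance (e).
rng = np.random.default_rng(42)
n = 200_000

x = rng.normal(size=n)  # measured independent variable
u = rng.normal(size=n)  # unknown but systematic influence
e = rng.normal(size=n)  # random noise

y = 1.0 * x + 0.8 * u + 0.6 * e

var_total = y.var()
var_explained = (1.0 ** 2) * x.var()   # recoverable from x alone
var_systematic = (0.8 ** 2) * u.var()  # unexplained but systematic
var_random = (0.6 ** 2) * e.var()      # irreducible randomness

# Because x, u, and e are independent, the three components account
# for (essentially) all of the variance in y.
print(var_total, var_explained + var_systematic + var_random)

# An analyst observing only x and y recovers the first share as R^2,
# close to 1.0 / (1.0 + 0.64 + 0.36) = 0.5 here:
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(r2)
```

Distinguishing the second and third components is the hard part in practice, since both show up as "error" in an ordinary regression; that is exactly the estimation problem the article addresses.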
3. Trafimow D, Rice S. Is consistency a domain-general individual differences characteristic? The Journal of General Psychology 2014; 142:1-22. [PMID: 25539183] [DOI: 10.1080/00221309.2014.961999]
Abstract
We explored randomness in responding in two ways across six experiments. First, we predicted that people would differ from each other in randomness in a stable way when tested in the same domain across two sessions; people who responded more randomly in a particular domain in one session also should respond more randomly in a second session whereas people who responded less randomly in one session also should respond less randomly in a second session. Second, we predicted that there would be some domain general randomness; people's randomness in one domain should predict their randomness in another domain. We used consistency coefficients across blocks of a session as an inverse measure of randomness and found (a) consistency coefficients correlated across sessions within the same domain and (b) consistency coefficients in one domain correlated with consistency coefficients in other domains.
4. Marks MJ, Trafimow D, Rice SC. Attachment-related individual differences in the consistency of relationship behavior interpretation. J Pers 2013; 82:237-49. [PMID: 23750636] [DOI: 10.1111/jopy.12048]
Abstract
The consistency with which people interpret relationship-based information has important implications for attachment theory and research. Our objective is to determine whether there are attachment-related individual differences in the manner and the consistency with which individuals interpret hypothetical relationship behaviors. In two studies (N = 629, 79% female, 63% American, M(age) = 29; N = 820, 78% female, 65% American, M(age) = 29), we assessed participants' ability and consistency in relationship behavior interpretation across two blocks and estimated how they would have performed had they interpreted information perfectly consistently. Secure participants were generally more consistent in their interpretations relative to insecure participants. Estimates of perfectly consistent interpretation revealed that improvements to both systematic factors related to behavior interpretation (e.g., working models) and consistency would have led to a more secure interpretation style for participants of all attachment styles. Results imply that both secure and insecure individuals process relationship-based information according to secure scripts, but insecure individuals do so inconsistently. Our results imply that, due to the inconsistent behavioral responses that may occur as a result of inconsistent information processing, the consistency with which people process relationship-related information will be related to relationship satisfaction. Further directions for future research are discussed.
5. Hunt G, Rice S, Trafimow D, Sandry J. Using potential performance theory to analyze systematic and random factors in enumeration tasks. American Journal of Psychology 2013; 126:23-32. [PMID: 23505956] [DOI: 10.5406/amerjpsyc.126.1.0023]
Abstract
Prior research has shown that as the number of items being enumerated increases, performance decreases, especially when the amount of time is limited. Researchers studying nonverbal enumeration have found that random noise increases as a function of the number of items presented. Over a series of 2 experiments, the authors used potential performance theory to expand these findings and discover precisely how much random noise actually influences observed performance and what performance might look like in the absence of random factors. Participants briefly viewed a visual stimulus comprising a set of 4 to 9 dots presented horizontally (Experiment 1) or randomly (Experiment 2) on a computer monitor. Findings from both experiments indicate that the decrease in performance for larger set sizes resulted almost entirely from a reduction in consistency (or an increase in random noise), whereas potential performance remained fairly constant until the maximum set size.
Affiliation(s)
- Gayle Hunt, Department of Psychology, New Mexico State University, Las Cruces, NM 88003, USA
6.

7. Rice S, Trafimow D. Time pressure heuristics can improve performance due to increased consistency. The Journal of General Psychology 2012; 139:273-88. [PMID: 24837178] [DOI: 10.1080/00221309.2012.705187]
Abstract
Our goal is to demonstrate that potential performance theory (PPT) provides a unique type of methodology for studying the use of heuristics under time pressure. While most theories tend to focus on different types of strategies, PPT distinguishes between random and nonrandom effects on performance. We argue that the use of a heuristic under time pressure actually can increase performance by decreasing randomness in responding. We conducted an experiment where participants performed a task under time pressure or not. In turn, PPT equations make it possible to parse the observed change in performance from the unspeeded to the speeded condition into that which is due to a change in the participant's randomness in responding versus that which is due to a change in systematic factors. We found that the change in randomness was slightly more important than the change in systematic factors.
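The parsing that PPT performs can be illustrated with a deliberately simplified two-alternative model. This is an illustrative sketch, not PPT's actual equations: a strategy that is correct with probability s is executed only with consistency c, responses are otherwise coin flips, and the resulting mixture can be inverted once c is known.

```python
import numpy as np

# Toy model in the spirit of PPT (not the theory's actual equations):
# on each trial the respondent follows the strategy with probability c
# (the strategy itself is correct with probability s); otherwise the
# response is a 50/50 guess.
rng = np.random.default_rng(7)

def observed_accuracy(s, c, trials=200_000):
    follows = rng.random(trials) < c           # trials where strategy is used
    strategy_correct = rng.random(trials) < s  # strategy's verdict
    guess_correct = rng.random(trials) < 0.5   # random responding
    correct = np.where(follows, strategy_correct, guess_correct)
    return correct.mean()

s_true, c = 0.90, 0.70
o = observed_accuracy(s_true, c)  # attenuated toward chance: 0.5 + c*(s - 0.5)

# If consistency c can be estimated (e.g., from agreement across blocks),
# the strategy's latent accuracy is recovered by inverting the mixture:
s_hat = (o - (1 - c) / 2) / c
print(o, s_hat)  # o near 0.78, s_hat near 0.90
```

The mixture also shows why a time-pressure heuristic can *raise* observed performance: a slightly worse s executed with a much higher c can yield a larger o than a better strategy applied inconsistently.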
8.
Abstract
According to many theories of decision making, of which signal detection theory is the most prominent, randomness is the main factor responsible for imperfect performance. These theories imply that correcting for attenuation due to randomness should result in perfect scores as long as the participants use nonextreme decision criteria. On the basis of a recent advance termed potential performance theory (Trafimow & Rice, Psychological Review 115:447-462, 2008), we performed auditory and visual detection experiments and corrected the scores for attenuation. Most participants in both experiments tended to perform at a less-than-perfect level, even after their scores were corrected. The findings demonstrate that at least one systematic factor influences detection that is not included in signal detection theory.
9. Trafimow D. The role of mechanisms, integration, and unification in science and psychology. Theory & Psychology 2012. [DOI: 10.1177/0959354311433929]
Abstract
I assume the importance of unifying theories in science but suggest that psychologists do not understand the concept. To clarify, I distinguish unification from both mechanisms and integration. The psychology literature is replete with mechanisms and integration, but there is insufficient unification. Finally, I use potential performance theory as an example of unification in a small way in psychology. Given that this theory provides small-scale unification, there is reason to hope for large-scale unification in the future—a goal to which psychologists can and should aspire.
10. Nimon K, Zientek LR, Henson RK. The assumption of a reliable instrument and other pitfalls to avoid when considering the reliability of data. Front Psychol 2012; 3:102. [PMID: 22518107] [PMCID: PMC3324779] [DOI: 10.3389/fpsyg.2012.00102]
Abstract
The purpose of this article is to help researchers avoid common pitfalls associated with reliability including incorrectly assuming that (a) measurement error always attenuates observed score correlations, (b) different sources of measurement error originate from the same source, and (c) reliability is a function of instrumentation. To accomplish our purpose, we first describe what reliability is and why researchers should care about it with focus on its impact on effect sizes. Second, we review how reliability is assessed with comment on the consequences of cumulative measurement error. Third, we consider how researchers can use reliability generalization as a prescriptive method when designing their research studies to form hypotheses about whether or not reliability estimates will be acceptable given their sample and testing conditions. Finally, we discuss options that researchers may consider when faced with analyzing unreliable data.
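Pitfall (a) concerns the classical attenuation result that the article qualifies. In the classical case where it does hold, error in both measures shrinks the observed correlation by the square root of the product of the two reliabilities, and the Spearman correction inverts that. A minimal simulation of that classical case, with illustrative values not taken from the article:

```python
import numpy as np

# Classical attenuation: r_observed = r_true * sqrt(r_xx * r_yy),
# and the Spearman correction divides it back out.
rng = np.random.default_rng(1)
n = 300_000

true_x = rng.normal(size=n)
true_y = 0.6 * true_x + np.sqrt(1 - 0.6**2) * rng.normal(size=n)  # true r = .6

# Add measurement error so reliabilities are r_xx = .8 and r_yy = .7
obs_x = np.sqrt(0.8) * true_x + np.sqrt(0.2) * rng.normal(size=n)
obs_y = np.sqrt(0.7) * true_y + np.sqrt(0.3) * rng.normal(size=n)

r_obs = np.corrcoef(obs_x, obs_y)[0, 1]   # about .6 * sqrt(.8 * .7) = .449
r_corrected = r_obs / np.sqrt(0.8 * 0.7)  # Spearman correction, about .6
print(r_obs, r_corrected)
```

The article's point is that this tidy picture depends on the classical assumptions (uncorrelated errors, error from a single source, reliability as a property of scores rather than instruments); when those fail, measurement error need not attenuate, and the correction can mislead.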
Affiliation(s)
- Kim Nimon, Department of Learning Technologies, University of North Texas, Denton, TX, USA
- Robin K. Henson, Department of Educational Psychology, University of North Texas, Denton, TX, USA
11. Trafimow D. The Concept of Unit Coherence and Its Application to Psychology Theories. Journal for the Theory of Social Behaviour 2012. [DOI: 10.1111/j.1468-5914.2011.00483.x]
12. Rice S, Geels K, Hackett HR, Trafimow D, McCarley JS, Schwark J, Hunt G. The Harder the Task, the More Inconsistent the Performance: A PPT Analysis on Task Difficulty. The Journal of General Psychology 2012; 139:1-18. [DOI: 10.1080/00221309.2011.619223]
13. Trafimow D, Rice S. Using a sharp instrument to parse apart strategy and consistency: an evaluation of PPT and its assumptions. The Journal of General Psychology 2011; 138:169-84. [PMID: 21842621] [DOI: 10.1080/00221309.2011.574173]
Abstract
Potential Performance Theory (PPT) is a general theory for parsing observed performance into the underlying strategy and the consistency with which it is used. Although empirical research supports PPT's usefulness, more information is desirable about the bias and standard errors of PPT findings, as well as about the effects of violating PPT's assumptions. The authors present computer simulations that evaluate bias and standard errors at varying levels of strategy, consistency, and number of trials per participant. The simulations show that, when the assumptions are true, there is very little bias and the standard errors are low when there are moderate or large numbers of trials per participant (e.g., N=50 or N=100). But when the independence assumption is violated, PPT provides biased findings, although the bias is quite small unless the violations are large.
Affiliation(s)
- David Trafimow, Department of Psychology, MSC 3452, New Mexico State University, P.O. Box 30001, Las Cruces, NM 88003-8001, USA
14. Trafimow D, Hunt G, Rice S, Geels K. Using Potential Performance Theory to Test Five Hypotheses About Meta-Attribution. The Journal of General Psychology 2011; 138:81-93. [DOI: 10.1080/00221309.2010.540591]
15. Rice S, Trafimow D, Keller D, Hunt G, Geels K. Using PPT to Correct for Inconsistency in a Speeded Task. The Journal of General Psychology 2010; 138:12-34. [DOI: 10.1080/00221309.2010.531791]