1
Abstract
Although the name-letter task (NLT) has become an increasingly popular technique for measuring implicit self-esteem (ISE), researchers have relied on different algorithms to compute NLT scores, and the psychometric properties of these differently computed scores have never been thoroughly investigated. Based on 18 independent samples including 2,690 participants, the current research examined the optimality of five scoring algorithms against the following criteria: reliability; variability in reliability estimates across samples; types of systematic error variance controlled for; systematic production of outliers; and shape of the distribution of scores. Overall, an ipsatized version of the original algorithm exhibited the best psychometric properties and is recommended for future research using the NLT. Copyright © 2009 John Wiley & Sons, Ltd.
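The ipsatized variant described above double-corrects raw letter ratings: each participant's ratings are first centered on his or her own mean rating of non-initial letters, and each letter's normative baseline (computed from participants for whom that letter is not an initial) is then subtracted. A minimal sketch of that two-step correction, assuming ratings are stored as a participants × 26 array; the function name, data layout, and exact correction order are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def ipsatized_nlt(ratings, initials):
    """Illustrative ipsatized name-letter scores.

    ratings  : (n_participants, 26) array of letter-liking ratings
    initials : one set of initial letters per participant, e.g. [{"E", "L"}, ...]
    Returns one score per participant: the mean double-corrected
    rating of that participant's own initials.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    is_initial = np.array([[LETTERS[j] in initials[i] for j in range(k)]
                           for i in range(n)])

    # Step 1 (ipsatization): center each participant's ratings on his or
    # her own mean rating of NON-initial letters.
    own_baseline = np.where(is_initial, np.nan, ratings)
    centered = ratings - np.nanmean(own_baseline, axis=1, keepdims=True)

    # Step 2 (normative correction): subtract each letter's mean centered
    # rating among participants for whom it is NOT an initial.
    norm_baseline = np.nanmean(np.where(is_initial, np.nan, centered), axis=0)
    corrected = centered - norm_baseline

    # Score = mean corrected rating of the participant's own initials.
    return np.nanmean(np.where(is_initial, corrected, np.nan), axis=1)
```

In this sketch a participant who rates her initials above her own non-initial baseline, and above what others give those letters, receives a positive score; both corrections remove a source of systematic error variance (response styles and normative letter attractiveness, respectively).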
2
LeBel EP, McCarthy RJ, Earp BD, Elson M, Vanpaemel W. A Unified Framework to Quantify the Credibility of Scientific Findings. Advances in Methods and Practices in Psychological Science 2018. [DOI: 10.1177/2515245918787489]
Abstract
Societies invest in scientific studies to better understand the world and attempt to harness such improved understanding to address pressing societal problems. Published research, however, can be useful for theory or application only if it is credible. In science, a credible finding is one that has repeatedly survived risky falsification attempts. However, state-of-the-art meta-analytic approaches cannot determine the credibility of an effect because they do not account for the extent to which each included study has survived such attempted falsification. To overcome this problem, we outline a unified framework for estimating the credibility of published research by examining four fundamental falsifiability-related dimensions: (a) transparency of the methods and data, (b) reproducibility of the results when the same data-processing and analytic decisions are reapplied, (c) robustness of the results to different data-processing and analytic decisions, and (d) replicability of the effect. This framework includes a standardized workflow in which the degree to which a finding has survived scrutiny is quantified along these four facets of credibility. The framework is demonstrated by applying it to published replications in the psychology literature. Finally, we outline a Web implementation of the framework and conclude by encouraging the community of researchers to contribute to the development and crowdsourcing of this platform.
Affiliation(s)
- Randy J. McCarthy
- Center for the Study of Family Violence and Sexual Assault, Northern Illinois University
- Brian D. Earp
- Department of Philosophy, Yale University
- Department of Psychology, Yale University
- Malte Elson
- Psychology of Human Technology Interaction Unit, Ruhr University Bochum
- Wolf Vanpaemel
- Quantitative Psychology and Individual Differences Unit, University of Leuven (KU Leuven)
3
4
5
Heino MTJ, Fried EI, LeBel EP. Commentary: Reproducibility in Psychological Science: When Do Psychological Phenomena Exist? Front Psychol 2017; 8:1004. [PMID: 28690559] [PMCID: PMC5479906] [DOI: 10.3389/fpsyg.2017.01004]
Affiliation(s)
- Matti T. J. Heino
- Department of Social Sciences, University of Tampere, Tampere, Finland
- Department of Social Research, University of Helsinki, Helsinki, Finland
- Eiko I. Fried
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
6
Abstract
Experimental paradigms designed to assess “implicit” representations are currently very popular in many areas of psychology. The present article addresses the validity of three widespread assumptions in research using these paradigms: that (a) implicit measures reflect unconscious or introspectively inaccessible representations; (b) the major difference between implicit measures and self-reports is that implicit measures are resistant or less susceptible to social desirability; and (c) implicit measures reflect highly stable, older representations that have their roots in long-term socialization experiences. Drawing on a review of the available evidence, we conclude that the validity of all three assumptions is equivocal and that theoretical interpretations should be adjusted accordingly. We discuss an alternative conceptualization that distinguishes between activation and validation processes.
7
8
Abstract
There is currently an unprecedented level of doubt regarding the reliability of research findings in psychology. Many recommendations have been made to improve the current situation. In this article, we report results from PsychDisclosure.org, a novel open-science initiative that provides a platform for authors of recently published articles to disclose four methodological design specification details that are not required to be disclosed under current reporting standards but that are critical for accurate interpretation and evaluation of reported findings. Grassroots sentiment—as manifested in the positive and appreciative response to our initiative—indicates that psychologists want to see changes made at the systemic level regarding disclosure of such methodological details. Almost 50% of contacted researchers disclosed the requested design specifications for the four methodological categories (excluded subjects, nonreported conditions and measures, and sample size determination). Disclosed information provided by participating authors also revealed several instances of questionable editorial practices, which need to be thoroughly examined and redressed. On the basis of these results, we argue that the time is now for mandatory methods disclosure statements for all psychology journals, which would be an important step forward in improving the reliability of findings in psychology.
Affiliation(s)
- Fred Hasselman
- Behavioural Science Institute, Radboud University Nijmegen
9
LeBel EP, Campbell L. The Interactive Role of Implicit and Explicit Partner Evaluations on Ongoing Affective and Behavioral Romantic Realities. Social Psychological and Personality Science 2012. [DOI: 10.1177/1948550612448196]
Abstract
Past research on close relationships has increasingly focused on the assessment of implicit constructs to shed new light on relationship processes. However, virtually nothing is known about the role of such constructs in understanding ongoing affective and behavioral romantic realities and how implicit and explicit relationship constructs interact in the context of daily relationship outcomes. Using a 21-day diary approach, the present research examined the unique and interactive role of implicit partner evaluations and explicit partner perceptions on relationship outcomes (daily relationship quality and positive relationship behaviors enacted toward partner). Results showed that more positive implicit partner evaluations uniquely predicted more positive relationship outcomes during the 21-day diary period, but that this was especially pronounced in individuals who did not explicitly perceive their partner’s attributes in an overly positive manner. Implications for the close relationship literature are discussed.
Affiliation(s)
- Etienne P. LeBel
- Department of Psychology, The University of Western Ontario, Social Science Centre, London, Ontario, Canada
- Lorne Campbell
- Department of Psychology, The University of Western Ontario, Social Science Centre, London, Ontario, Canada
10
Abstract
Past studies of socially desirable self-reports on the items of personality measures have found inconsistent effects of the response bias on the measures' predictive validities, with some studies reporting small effects and other studies reporting large effects. Using Monte Carlo methods, we evaluated various models of socially desirable responding by systematically adding predetermined amounts of the bias to the simulated personality trait scores of hypothetical test respondents before computing test-criterion validity correlations. Our study generally supported previous findings that have reported relatively minor decrements in criterion prediction, even with personality scores that were massively infused with desirability bias. Furthermore, the response bias failed to reveal itself as a statistical moderator of test validity or as a suppressor of validity. Large differences between some respondents' obtained test scores and their true trait scores, however, meant that the personality measure's construct validity would be severely compromised and, more specifically, that estimates of those individuals' criterion performance would be grossly in error. Our discussion focuses on reasons for the discrepant results reported in the literature pertaining to the effect of socially desirable responding on criterion validity. More important, we explain why the lack of effects of desirability bias on the usual indicators of validity, moderation, and suppression should not be surprising.
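The logic of such a simulation can be sketched as follows; the effect size, bias magnitude, and sample size are illustrative assumptions, not the values used in the study. True trait scores are generated, a criterion is made to depend on them, and desirability bias is then added to the observed test scores before the test-criterion correlation is computed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # hypothetical test respondents

true_trait = rng.normal(size=n)                    # latent personality score
criterion = 0.5 * true_trait + rng.normal(size=n)  # criterion performance
desirability = rng.normal(size=n)                  # per-person bias tendency

# Massively infuse the observed test scores with desirability bias
# (bias variance here equals true-trait variance).
observed = true_trait + 1.0 * desirability

r_true = np.corrcoef(true_trait, criterion)[0, 1]
r_observed = np.corrcoef(observed, criterion)[0, 1]
print(f"validity without bias: {r_true:.2f}; with bias: {r_observed:.2f}")
```

In this toy setup the criterion validity coefficient shrinks only modestly even though many individuals' observed scores sit far from their true trait scores, mirroring the abstract's distinction between largely intact criterion prediction and severely compromised construct validity.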
Affiliation(s)
- Sampo V Paunonen
- Department of Psychology, University of Western Ontario, London, Ontario N6A 5C2, Canada.
11
Abstract
Although the name-letter task is one of the most frequently used measures of implicit self-esteem, no research has examined whether the name-letter effect emerges for new last name initials and abandoned birth name initials in the context of marriage. Additionally, no systematic investigation has examined the robustness of the name-letter effect across age cohorts. In a large heterogeneous sample (N = 1,380), reliable letter preferences were found for new last name initials and for abandoned birth name initials, even after 20 years of marriage. In addition, robust name-letter effects emerged across all assessed age cohorts. Implications for the implicit self-esteem literature regarding the robustness of the name-letter task for married and nonmarried individuals of all post-pubescent ages are discussed.
Affiliation(s)
- Stefan Stieger
- Department of Basic Psychological Research, School of Psychology, University of Vienna, Austria
- Etienne P. LeBel
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada
12
LeBel EP, Peters KR. Fearing the Future of Empirical Psychology: Bem's (2011) Evidence of Psi as a Case Study of Deficiencies in Modal Research Practice. Review of General Psychology 2011. [DOI: 10.1037/a0025172]
Abstract
In this methodological commentary, we use Bem's (2011) recent article reporting experimental evidence for psi as a case study for discussing important deficiencies in modal research practice in empirical psychology. We focus on (a) overemphasis on conceptual rather than close replication, (b) insufficient attention to verifying the soundness of measurement and experimental procedures, and (c) flawed implementation of null hypothesis significance testing. We argue that these deficiencies contribute to weak method-relevant beliefs that, in conjunction with overly strong theory-relevant beliefs, lead to a systemic and pernicious bias in the interpretation of data that favors a researcher's theory. Ultimately, this interpretation bias increases the risk of drawing incorrect conclusions about human psychology. Our analysis points to concrete recommendations for improving research practice in empirical psychology. We recommend (a) a stronger emphasis on close replication, (b) routinely verifying the integrity of measurement instruments and experimental procedures, and (c) using stronger, more diagnostic forms of null hypothesis testing.
Affiliation(s)
- Etienne P. LeBel
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada
13
LeBel EP, Paunonen SV. Sexy But Often Unreliable: The Impact of Unreliability on the Replicability of Experimental Findings With Implicit Measures. Pers Soc Psychol Bull 2011; 37:570-83. [DOI: 10.1177/0146167211400619]
Abstract
Implicit measures have contributed to important insights in almost every area of psychology. However, various issues and challenges remain concerning their use, one of which is their considerable variation in reliability, with many implicit measures having questionable reliability. The goal of the present investigation was to examine an overlooked consequence of this liability with respect to replication, when such implicit measures are used as dependent variables in experimental studies. Using a Monte Carlo simulation, the authors demonstrate that a higher level of unreliability in such dependent variables is associated with substantially lower levels of replicability. The results imply that this overlooked consequence can have far-reaching repercussions for the development of a cumulative science. The authors recommend the routine assessment and reporting of the reliability of implicit measures and also urge the improvement of implicit measures with low reliability.
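The core of such a Monte Carlo demonstration can be sketched like this, with assumed parameter values (true effect, group size, significance threshold) that are illustrative rather than those of the original simulation. The dependent variable's reliability is manipulated by adding measurement-error variance, and the proportion of simulated experiments reaching significance serves as the replication rate:

```python
import numpy as np
from scipy import stats

def replication_rate(reliability, d=0.5, n=30, n_sims=2000, seed=1):
    """Proportion of simulated two-group experiments with true effect d
    that reach p < .05 when the DV has the given reliability."""
    rng = np.random.default_rng(seed)
    # With true-score variance 1, reliability = 1 / (1 + error variance).
    error_sd = np.sqrt(1 / reliability - 1)
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n) + rng.normal(0.0, error_sd, n)
        treatment = rng.normal(d, 1.0, n) + rng.normal(0.0, error_sd, n)
        if stats.ttest_ind(treatment, control).pvalue < 0.05:
            hits += 1
    return hits / n_sims

for rel in (0.9, 0.7, 0.5):
    print(f"reliability {rel:.1f}: replication rate {replication_rate(rel):.2f}")
```

Because measurement error attenuates the observed standardized effect by roughly the square root of the reliability, the simulated replication rate drops steadily as the DV becomes less reliable, which is the pattern the abstract describes.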
14
Gawronski B, Cunningham WA, LeBel EP, Deutsch R. Attentional influences on affective priming: Does categorisation influence spontaneous evaluations of multiply categorisable objects? Cogn Emot 2010. [DOI: 10.1080/02699930903112712]
15
16
Gawronski B, LeBel EP. Understanding patterns of attitude change: When implicit measures show change, but explicit measures do not. Journal of Experimental Social Psychology 2008. [DOI: 10.1016/j.jesp.2008.04.005]
17
Gawronski B, Peters KR, LeBel EP. What Makes Mental Associations Personal or Extra-Personal? Conceptual Issues in the Methodological Debate about Implicit Attitude Measures. Social and Personality Psychology Compass 2008. [DOI: 10.1111/j.1751-9004.2008.00085.x]
18
Abstract
Over the last decade, implicit measures of mental associations (e.g., Implicit Association Test, sequential priming) have become increasingly popular in many areas of psychological research. Even though successful applications provide preliminary support for the validity of these measures, their underlying mechanisms are still controversial. The present article addresses the role of a particular mechanism that is hypothesized to mediate the influence of activated associations on task performance in many implicit measures: response interference (RI). Based on a review of relevant evidence, we argue that RI effects in implicit measures depend on participants' attention to association-relevant stimulus features, which in turn can influence the reliability and the construct validity of these measures. Drawing on a moderated-mediation model (MMM) of task performance in RI paradigms, we provide several suggestions on how to address these problems in research using implicit measures.