1. Auspurg K, Brüderl J. Toward a more credible assessment of the credibility of science by many-analyst studies. Proc Natl Acad Sci U S A 2024;121:e2404035121. PMID: 39236231. DOI: 10.1073/pnas.2404035121.

Abstract
We discuss a relatively new meta-scientific research design: many-analyst studies that attempt to assess the replicability and credibility of research based on large-scale observational data. In these studies, a large number of analysts try to answer the same research question using the same data. The key idea is that the greater the variation in results, the greater the uncertainty in answering the research question and, accordingly, the lower the credibility of any individual research finding. Compared to individual replications, the large crowd of analysts allows for a more systematic investigation of uncertainty and its sources. However, many-analyst studies are also resource-intensive, and there are some doubts about their potential to provide credible assessments. We identify three issues that any many-analyst study must address: 1) identifying the source of variation in the results; 2) providing an incentive structure similar to that of standard research; and 3) conducting a proper meta-analysis of the results. We argue that some recent many-analyst studies have failed to address these issues satisfactorily and have therefore provided an overly pessimistic assessment of the credibility of science. We also provide some concrete guidance on how future many-analyst studies could provide a more constructive assessment.

Affiliations
- Katrin Auspurg: Department of Sociology, Ludwig-Maximilians-Universität (LMU) Munich, Munich 80801, Germany
- Josef Brüderl: Department of Sociology, Ludwig-Maximilians-Universität (LMU) Munich, Munich 80801, Germany
2. Shrier I, Impellizzeri FM, Stovitz SD. Identifying and Minimizing Incentives for Competing Interests in Sports Medicine Publications. Sports Med 2024;54:1991-2000. PMID: 38714641. DOI: 10.1007/s40279-024-02037-w.

Abstract
Academics in sports medicine, as in other medical fields, are generally expected to publish research and opinions in peer-reviewed journals. The peer-review process is intended to protect against the publication of flawed research and unsubstantiated claims. However, both financial and non-financial competing interests may lead to suboptimal outcomes by affecting investigators, editors, peer reviewers, academic institutions, and publishers. In this article, we focus on the non-financial competing interests created in our current academic system. Because these competing interests are embedded in our current scholastic framework, the potential biases are difficult to quantify. To minimize the effect of these competing interests, we review and highlight some underlying incentives for each stakeholder and some potential solutions to mitigate their effects.

Affiliations
- Ian Shrier: Centre for Clinical Epidemiology, Lady Davis Institute, Jewish General Hospital, McGill University, 3755 Cote Sainte Catherine Road, Montreal, QC H3T 1E2, Canada
- Franco M Impellizzeri: School of Sport, Exercise and Rehabilitation, Faculty of Health, University of Technology Sydney, Sydney, NSW, Australia
- Steven D Stovitz: Department of Family Medicine and Community Health, University of Minnesota, Minnesota, USA
3. Protzko J, Krosnick J, Nelson L, Nosek BA, Axt J, Berent M, Buttrick N, DeBell M, Ebersole CR, Lundmark S, MacInnis B, O'Donnell M, Perfecto H, Pustejovsky JE, Roeder SS, Walleczek J, Schooler JW. High replicability of newly discovered social-behavioural findings is achievable. Nat Hum Behav 2024;8:311-319. PMID: 37945809. PMCID: PMC10896719. DOI: 10.1038/s41562-023-01749-9.

Abstract
Failures to replicate evidence of new discoveries have forced scientists to ask whether this unreliability is due to suboptimal implementation of methods or whether presumptively optimal methods are not, in fact, optimal. This paper reports an investigation by four coordinated laboratories of the prospective replicability of 16 novel experimental findings using rigour-enhancing practices: confirmatory tests, large sample sizes, preregistration and methodological transparency. In contrast to past systematic replication efforts that reported replication rates averaging 50%, replication attempts here produced the expected effects with significance testing (P < 0.05) in 86% of attempts, slightly exceeding the maximum expected replicability based on observed effect sizes and sample sizes. When one lab attempted to replicate an effect discovered by another lab, the effect size in the replications was 97% of that in the original study. This high replication rate justifies confidence in rigour-enhancing methods to increase the replicability of new discoveries.
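The "maximum expected replicability based on observed effect sizes and sample sizes" that this abstract benchmarks against is, in essence, a statistical-power calculation. A minimal sketch of that idea (my own toy illustration under a normal approximation, not the authors' computation):

```python
import math

def phi(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_replication_rate(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test to detect a true
    standardized mean difference of Cohen's d with n_per_group per arm.

    Uses the large-sample normal approximation and ignores the negligible
    probability of rejecting in the wrong direction.
    """
    se = math.sqrt(2.0 / n_per_group)  # approximate standard error of d
    z_crit = 1.96                      # two-sided 5% critical value
    return phi(d / se - z_crit)
```

For example, a true effect of d = 0.5 studied with 64 participants per group yields roughly 81% power, so even flawless replications of true effects occasionally "fail" significance testing; this is why observed effect and sample sizes put a ceiling on attainable replication rates.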
Affiliations
- John Protzko: Department of Psychological & Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, USA; Department of Psychological Science, Central Connecticut State University, New Britain, CT, USA
- Jon Krosnick: Institute for Research in the Social Sciences, Stanford University, Stanford, CA, USA
- Leif Nelson: Haas School of Business, University of California, Berkeley, Berkeley, CA, USA
- Brian A Nosek: Center for Open Science, Charlottesville, VA, USA; Department of Psychology, University of Virginia, Charlottesville, VA, USA
- Jordan Axt: Department of Psychology, McGill University, Montreal, Quebec, Canada
- Nicholas Buttrick: Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Matthew DeBell: Institute for Research in the Social Sciences, Stanford University, Stanford, CA, USA
- Charles R Ebersole: Department of Psychology, University of Virginia, Charlottesville, VA, USA
- Bo MacInnis: Institute for Research in the Social Sciences, Stanford University, Stanford, CA, USA
- Michael O'Donnell: McDonough School of Business, Georgetown University, Washington, DC, USA
- Hannah Perfecto: Olin School of Business, Washington University in St. Louis, St. Louis, MO, USA
- James E Pustejovsky: Educational Psychology Department, University of Wisconsin-Madison, Madison, WI, USA
- Scott S Roeder: Darla Moore School of Business, University of South Carolina, Columbia, SC, USA
- Jonathan W Schooler: Department of Psychological & Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, USA
4. Luijken K, Lohmann A, Alter U, Claramunt Gonzalez J, Clouth FJ, Fossum JL, Hesen L, Huizing AHJ, Ketelaar J, Montoya AK, Nab L, Nijman RCC, Penning de Vries BBL, Tibbe TD, Wang YA, Groenwold RHH. Replicability of simulation studies for the investigation of statistical methods: the RepliSims project. R Soc Open Sci 2024;11:231003. PMID: 38234442. PMCID: PMC10791519. DOI: 10.1098/rsos.231003.

Abstract
Results of simulation studies evaluating the performance of statistical methods can have a major impact on the way empirical research is implemented. However, so far there is limited evidence of the replicability of simulation studies. Eight highly cited statistical simulation studies were selected, and their replicability was assessed by teams of replicators with formal training in quantitative methodology. The teams used information in the original publications to write simulation code with the aim of replicating the results. The primary outcome was to determine the feasibility of replicability based on reported information in the original publications and supplementary materials. Replicability varied greatly: some original studies provided detailed information leading to almost perfect replication of results, whereas other studies did not provide enough information to implement any of the reported simulations. Factors facilitating replication included availability of code, detailed reporting or visualization of data-generating procedures and methods, and replicator expertise. Replicability of statistical simulation studies was mainly impeded by lack of information and sustainability of information sources. We encourage researchers publishing simulation studies to transparently report all relevant implementation details either in the research paper itself or in easily accessible supplementary material and to make their simulation code publicly available using permanent links.
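The reporting practices this abstract recommends, explicit data-generating procedures and publicly available code, can be illustrated with a fully specified toy simulation (my own sketch, not code from the RepliSims project):

```python
import numpy as np

def simulate_ci_coverage(n=50, n_reps=2000, mu=0.0, sigma=1.0, seed=20240101):
    """Toy simulation study: empirical coverage of the known-sigma 95%
    z-interval for a normal mean.

    Every data-generating parameter (n, mu, sigma), the number of
    replications, and the RNG seed are stated explicitly, so any reader
    can re-run the study and reproduce the result exactly.
    """
    rng = np.random.default_rng(seed)
    half_width = 1.96 * sigma / np.sqrt(n)
    covered = 0
    for _ in range(n_reps):
        xbar = rng.normal(mu, sigma, size=n).mean()
        covered += (xbar - half_width <= mu <= xbar + half_width)
    return covered / n_reps
```

A study reported only as "we simulated normal data and computed coverage" could not be replicated; the seeded, parameterized version above can be, bit for bit.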
Affiliations
- K. Luijken: Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands; Department of Epidemiology, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- A. Lohmann: Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands
- U. Alter: Department of Psychology, York University, Toronto, Ontario, Canada
- J. Claramunt Gonzalez: Methodology and Statistics Unit, Institute of Psychology, Leiden University, Leiden, The Netherlands
- F. J. Clouth: Department of Methodology and Statistics, Tilburg University, Tilburg, The Netherlands; Netherlands Comprehensive Cancer Organisation (IKNL), Utrecht, The Netherlands
- J. L. Fossum: Department of Psychology, University of California, Los Angeles, CA, USA; Department of Psychology, Seattle Pacific University, Seattle, WA, USA
- L. Hesen: Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands
- A. H. J. Huizing: TNO (Netherlands Organization for Applied Scientific Research), Expertise Group Child Health, Leiden, The Netherlands
- J. Ketelaar: Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands
- A. K. Montoya: Department of Psychology, University of California, Los Angeles, CA, USA
- L. Nab: Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands
- R. C. C. Nijman: Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, The Netherlands
- B. B. L. Penning de Vries: Department of Clinical Epidemiology and Department of Biomedical Data Sciences, Leiden University Medical Centre, Leiden, The Netherlands
- T. D. Tibbe: Department of Psychology, University of California, Los Angeles, CA, USA
- Y. A. Wang: Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- R. H. H. Groenwold: Department of Clinical Epidemiology and Department of Biomedical Data Sciences, Leiden University Medical Centre, Leiden, The Netherlands
5. Yang Q, Zhang W, Liu S, Gong W, Han Y, Lu J, Jiang D, Nie J, Lyu X, Liu R, Jiao M, Qu C, Zhang M, Sun Y, Zhou X, Zhang Q. Unraveling controversies over civic honesty measurement: An extended field replication in China. Proc Natl Acad Sci U S A 2023;120:e2213824120. PMID: 37428923. PMCID: PMC10629568. DOI: 10.1073/pnas.2213824120.

Abstract
Cohn et al. (2019) conducted a wallet drop experiment in 40 countries to measure "civic honesty around the globe," which has received worldwide attention but also sparked controversies over using the email response rate as the sole metric of civic honesty. Relying on the lone measurement may overlook cultural differences in behaviors that demonstrate civic honesty. To investigate this issue, we conducted an extended replication study in China, utilizing email response and wallet recovery to assess civic honesty. We found a significantly higher level of civic honesty in China, as measured by the wallet recovery rate, than reported in the original study, while email response rates remained similar. To resolve the divergent results, we introduce a cultural dimension, individualism versus collectivism, to study civic honesty across diverse cultures. We hypothesize that cultural differences in individualism and collectivism could influence how individuals prioritize actions when handling a lost wallet, such as contacting the wallet owner or safeguarding the wallet. In reanalyzing Cohn et al.'s data, we found that email response rates were inversely related to collectivism indices at the country level. However, our replication study in China demonstrated that the likelihood of wallet recovery was positively correlated with collectivism indicators at the provincial level. Consequently, relying solely on email response rates to gauge civic honesty in cross-country comparisons may neglect the vital individualism versus collectivism dimension. Our study not only helps reconcile the controversy surrounding Cohn et al.'s influential field experiment but also furnishes a fresh cultural perspective to evaluate civic honesty.

Affiliations
- Qian Yang: School of Public Health, and the Department of Geriatrics of the Fourth Affiliated Hospital, Zhejiang University School of Medicine, Zhejiang University, Hangzhou 310058, China
- Weiwei Zhang: Research Institute of Economics and Management, Southwestern University of Finance and Economics, Chengdu 610074, China
- Shiyong Liu: Institute of Advanced Studies in Humanities and Social Sciences, Beijing Normal University at Zhuhai, Zhuhai, Guangdong 519087, China
- Wenjin Gong: School of Public Health and Management, Guangzhou University of Chinese Medicine, Guangzhou, Guangdong 510006, China
- Youli Han: School of Public Health, Capital Medical University, Fengtai District, Beijing 100069, China
- Jun Lu: School of Public Health, China Research Center on Disability, Fudan University, Xuhui District, Shanghai 200032, China
- Donghong Jiang: College of Psychology, Shenzhen University, Shenzhen, Guangdong 518060, China
- Jingchun Nie: Center for Experimental Economics in Education, Shaanxi Normal University, Xi'an, Shaanxi 710119, China
- Xiaokang Lyu: Department of Social Psychology, Zhou Enlai School of Government, Nankai University, Jinnan District, Tianjin 300071, China
- Rugang Liu: School of Health Policy & Management, Nanjing Medical University, Nanjing, Jiangsu 211166, China
- Mingli Jiao: Department of Health Policy and Hospital Management, School of Public Health, Harbin Medical University, Harbin, Heilongjiang 150081, China
- Chen Qu: Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou 510631, China
- Mingji Zhang: School of Public Health, Shanghai Jiao Tong University, Huangpu District, Shanghai 200025, China
- Yacheng Sun: Department of Marketing, School of Economics and Management, Tsinghua University, Haidian District, Beijing 100084, China
- Xinyue Zhou: School of Management, Zhejiang University, Hangzhou 310058, China
- Qi Zhang: School of Community and Environmental Health, Old Dominion University, Norfolk, VA 23529, USA; China Research Center on Disability, Fudan University, Xuhui District, Shanghai 200032, China
6. Hecht CA, Bryan CJ, Yeager DS. A values-aligned intervention fosters growth mindset-supportive teaching and reduces inequality in educational outcomes. Proc Natl Acad Sci U S A 2023;120:e2210704120. PMID: 37307478. PMCID: PMC10288618. DOI: 10.1073/pnas.2210704120.

Abstract
Group-based educational disparities are smaller in classrooms where teachers express a belief that students can improve their abilities. However, a scalable method for motivating teachers to adopt such growth mindset-supportive teaching practices has remained elusive. In part, this is because teachers often already face overwhelming demands on their time and attention and have reason to be skeptical of the professional development advice they receive from researchers and other experts. We designed an intervention that overcame these obstacles and successfully motivated high-school teachers to adopt specific practices that support students' growth mindsets. The intervention used the values-alignment approach. This approach motivates behavioral change by framing a desired behavior as aligned with a core value: one that is an important criterion for status and admiration in the relevant social reference group. First, using qualitative interviews and a nationally representative survey of teachers, we identified a relevant core value: inspiring students' enthusiastic engagement with learning. Next, we designed a ~45-min, self-administered, online intervention that persuaded teachers to view growth mindset-supportive practices as a way to foster such student engagement and thus live up to that value. We randomly assigned 155 teachers (5,393 students) to receive the intervention and 164 teachers (6,167 students) to receive a control module. The growth mindset-supportive teaching intervention successfully promoted teachers' adoption of the suggested practices, overcoming major barriers to changing teachers' classroom practices that other scalable approaches have failed to surmount. The intervention also substantially improved student achievement in socioeconomically disadvantaged classes, reducing inequality in educational outcomes.

Affiliations
- Cameron A. Hecht: Department of Psychology and Population Research Center, The University of Texas at Austin, Austin, TX 78712, USA
- Christopher J. Bryan: Department of Business, Government, and Society, The University of Texas at Austin, Austin, TX 78712, USA
- David S. Yeager: Department of Psychology and Population Research Center, The University of Texas at Austin, Austin, TX 78712, USA
7. Zher-Wen, Yu R. Unconscious integration: Current evidence for integrative processing under subliminal conditions. Br J Psychol 2023;114:430-456. PMID: 36689339. DOI: 10.1111/bjop.12631.

Abstract
Integrative processing is traditionally believed to be dependent on consciousness. While earlier studies within the last decade reported many types of integration under subliminal conditions (i.e. without perceptual awareness), these findings have been widely challenged in recent years. This review evaluates the current evidence for 10 widely studied types of subliminal integration: arithmetic processing, object-context integration, multi-word processing, same-different processing, multisensory integration, and five different types of associative learning. Potential methodological issues concerning awareness measures are also taken into account. It is concluded that while there is currently no reliable evidence for subliminal integration, this does not necessarily refute 'unconscious' integration defined through non-subliminal (e.g. implicit) approaches.

Affiliations
- Zher-Wen: Department of Management, Hong Kong Baptist University, Hong Kong, China; Department of Psychology, National University of Singapore, Singapore
- Rongjun Yu: Department of Management, Hong Kong Baptist University, Hong Kong, China
8. Efficiently exploring the causal role of contextual moderators in behavioral science. Proc Natl Acad Sci U S A 2023;120:e2216315120. PMID: 36577065. PMCID: PMC9910482. DOI: 10.1073/pnas.2216315120.

Abstract
Behavioral science interventions have the potential to address longstanding policy problems, but their effects are typically heterogeneous across contexts (e.g., teachers, schools, and geographic regions). This contextual heterogeneity is poorly understood, however, which reduces the field's impact and its understanding of mechanisms. Here, we present an efficient way to interrogate heterogeneity and address these gaps in knowledge. This method a) presents scenarios that vividly represent different moderating contexts, b) measures a short-term behavioral outcome (e.g., an academic choice) that is known to relate to typical intervention outcomes (e.g., academic achievement), and c) assesses the causal effect of the moderating context on the link between the psychological variable typically targeted by interventions and this short-term outcome. We illustrated the utility of this approach across four experiments (total n = 3,235) that directly tested contextual moderators of the links between growth mindset, which is the belief that ability can be developed, and students' academic choices. The present results showed that teachers' growth mindset-supportive messages and the structural opportunities they provide moderated the link between students' mindsets and their choices (studies 1 to 3). This pattern was replicated in a nationally representative sample of adolescents and did not vary across demographic subgroups (study 2), nor was this pattern the result of several possible confounds (studies 3 to 4). Discussion centers on how this method of interrogating contextual heterogeneity can be applied to other behavioral science interventions and broaden their impact in other policy domains.
9. Hecht CA, Latham AG, Buskirk RE, Hansen DR, Yeager DS. Peer-Modeled Mindsets: An Approach to Customizing Life Sciences Studying Interventions. CBE Life Sci Educ 2022;21:ar82. PMID: 36282273. PMCID: PMC9727603. DOI: 10.1187/cbe.22-07-0143.

Abstract
Mindset interventions, which shift students' beliefs about classroom experiences, have shown promise for promoting diversity in science, technology, engineering, and mathematics (STEM). Psychologists have emphasized the importance of customizing these interventions to specific courses, but there is not yet a protocol for doing so. We developed a protocol for creating customized "peer-modeled" mindset interventions that elicit advice from former students in videotaped interviews. In intervention activities, clips from these interviews, in which the former students' stories model the changes in thinking about challenge and struggle that helped them succeed in a specific course, are provided to incoming life sciences students. Using this protocol, we developed a customized intervention for three sections of Introductory Biology I at a large university and tested it in a randomized controlled trial (N = 917). The intervention shifted students' attributions for struggle in the class away from a lack of potential to succeed and toward the need to develop a better approach to studying. The intervention also improved students' approaches to studying and sense of belonging and had promising effects on performance and persistence in biology. Effects were pronounced among first-generation college students and underrepresented racial/ethnic minority students, who have been historically underrepresented in the STEM fields.

Affiliations
- Cameron A. Hecht: Department of Psychology and Population Research Center, University of Texas at Austin, Austin, TX 78712, USA
- Anita G. Latham: Biology Instructional Office, University of Texas at Austin, Austin, TX 78712, USA
- Ruth E. Buskirk: Biology Instructional Office, University of Texas at Austin, Austin, TX 78712, USA
- Debra R. Hansen: Biology Instructional Office, University of Texas at Austin, Austin, TX 78712, USA
- David S. Yeager: Department of Psychology and Population Research Center, University of Texas at Austin, Austin, TX 78712, USA
10. Semken C, Rossell D. Specification analysis for technology use and teenager well-being: Statistical validity and a Bayesian proposal. J R Stat Soc Ser C Appl Stat 2022. DOI: 10.1111/rssc.12578.

Affiliations
- Christoph Semken: Universitat Pompeu Fabra, Barcelona, Spain; Barcelona School of Economics, Barcelona, Spain
- David Rossell: Universitat Pompeu Fabra, Barcelona, Spain; Barcelona School of Economics, Barcelona, Spain
11. Gervais SJ, Baildon AE, Lorenz TK. On Methods and Marshmallows: A Roadmap for Science That Is Openly Feminist and Radically Open. Psychol Women Q 2021. DOI: 10.1177/03616843211032632.

Abstract
In this commentary, we argue that feminist science and open science can benefit from each other's wisdom and critiques in service of creating systems that produce the highest quality science with the maximum potential for improving the lives of women. To do this, we offer a constructive analysis, focusing on common methods used in open science, including open materials and data, preregistration, and large sample sizes, and illuminate potential benefits and costs from a feminist science perspective. We also offer some solutions and deeper questions both for individual researchers and the feminist psychology and open science communities. By broadening our focus from a myopic prioritization of certain methodological and analytic approaches in open science, we hope to give a balanced perspective of science that emerges from each movement's strengths and is openly feminist and radically open.
12. Behavioural science is unlikely to change the world without a heterogeneity revolution. Nat Hum Behav 2021;5:980-989. PMID: 34294901. PMCID: PMC8928154. DOI: 10.1038/s41562-021-01143-3.

Abstract
In the past decade, behavioural science has gained influence in policymaking but suffered a crisis of confidence in the replicability of its findings. Here, we describe a nascent heterogeneity revolution that we believe these twin historical trends have triggered. This revolution will be defined by the recognition that most treatment effects are heterogeneous, so the variation in effect estimates across studies that defines the replication crisis is to be expected as long as heterogeneous effects are studied without a systematic approach to sampling and moderation. When studied systematically, heterogeneity can be leveraged to build more complete theories of causal mechanism that could inform nuanced and dependable guidance to policymakers. We recommend investment in shared research infrastructure to make it feasible to study behavioural interventions in heterogeneous and generalizable samples, and suggest low-cost steps researchers can take immediately to avoid being misled by heterogeneity and begin to learn from it instead.
13. Ferraro PJ, Agrawal A. Synthesizing evidence in sustainability science through harmonized experiments: Community monitoring in common pool resources. Proc Natl Acad Sci U S A 2021;118:e2106489118. PMID: 34257156. PMCID: PMC8307536. DOI: 10.1073/pnas.2106489118.

Affiliations
- Paul J Ferraro: Carey Business School, The Johns Hopkins University, Baltimore, MD 21202, USA; Department of Environmental Health and Engineering, a joint department of the Bloomberg School of Public Health and the Whiting School of Engineering, The Johns Hopkins University, Baltimore, MD 21212, USA
- Arun Agrawal: School for Environment and Sustainability, Gerald R. Ford School of Public Policy, University of Michigan, Ann Arbor, MI 48109, USA
14. Lantian A. Les pratiques de recherche ouvertes en psychologie [Open research practices in psychology]. Psychol Fr 2021. PMCID: PMC7540208. DOI: 10.1016/j.psfr.2020.09.001.

Abstract
This article offers an overview of recent developments in research practices in psychology. A review of the various symptoms of the replicability (and confidence) crisis that has affected psychology is followed by a thorough and nuanced discussion of the factors responsible for this situation. Drawing on illustrations and resources, the article then demonstrates the crucial role of open research practices as a means of resolving these difficulties. Knowledge and adoption of these practices, popularized by the open science movement, are essential in order to contribute, through transparency and openness, to the collective effort to improve the reliability and replicability of findings in psychology.
15. Infection threat shapes our social instincts. Behav Ecol Sociobiol 2021;75:47. PMID: 33583997. PMCID: PMC7873116. DOI: 10.1007/s00265-021-02975-9.

Abstract
We social animals must balance the need to avoid infections with the need to interact with conspecifics. To that end we have evolved, alongside our physiological immune system, a suite of behaviors devised to deal with potentially contagious individuals. Focusing mostly on humans, the current review describes the design and biological innards of this behavioral immune system, laying out how infection threat shapes sociality and sociality shapes infection threat. The paper shows how the danger of contagion is detected and posted to the brain; how it affects individuals' mate choice and sex life; why it strengthens ties within groups but severs those between them, leading to hostility toward anyone who looks, smells, or behaves unusually; and how it permeates the foundation of our moral and political views. This system was already in place when agriculture and animal domestication set off a massive increase in our population density, personal connections, and interaction with other species, amplifying enormously the spread of disease. Alas, pandemics such as COVID-19 not only are a disaster for public health, but, by rousing millions of behavioral immune systems, could prove a threat to harmonious cohabitation too.
16
|
Abstract
Some leaders display high levels of narcissism. Does the link between narcissism levels and leadership exist in childhood? We conducted, to our knowledge, the first study of the relationship between narcissism levels and various aspects of leadership in children (N = 332, ages 7-14 years). We assessed narcissism levels using the Childhood Narcissism Scale and assessed leadership emergence in classrooms using peer nominations. Children then performed a group task in which one child was randomly assigned as leader. We assessed perceived and actual leadership functioning. Children with higher narcissism levels more often emerged as leaders in classrooms. When given a leadership role in the task, children with higher narcissism levels perceived themselves as better leaders, but their actual leadership functioning did not differ significantly from that of other leaders. Specification-curve analyses corroborated these findings. Thus, children with relatively high narcissism levels tend to emerge as leaders, even though they may not excel as leaders.
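The specification-curve analyses mentioned above estimate the same effect under every defensible combination of analysis choices and inspect the sorted estimates. A minimal sketch of the idea, using simulated data and invented numbers (the variable names, true slope of 0.3, and the two analysis choices are illustrative assumptions, not the study's actual specifications):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n = 300
narcissism = rng.normal(size=n)
age = rng.normal(size=n)
# Simulated outcome: leadership nominations increase with narcissism (true slope 0.3).
leadership = 0.3 * narcissism + 0.1 * age + rng.normal(size=n)

def slope(y, X):
    """OLS via least squares; return the coefficient on the first regressor."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# One estimate per combination of defensible analysis choices.
estimates = []
for control_age, drop_outliers in product([False, True], repeat=2):
    keep = np.abs(leadership) < 2.5 if drop_outliers else np.ones(n, dtype=bool)
    X = np.column_stack([narcissism[keep], age[keep]]) if control_age else narcissism[keep, None]
    estimates.append(slope(leadership[keep], X))

curve = np.sort(estimates)  # the sorted estimates form the specification curve
print(f"effect estimates across specifications: {curve.round(2)}")
```

When the estimates cluster on one side of zero across all specifications, as the abstract reports for leadership emergence, the conclusion does not hinge on any single analysis choice.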
Collapse
Affiliation(s)
- Eddie Brummelman
- Research Institute of Child Development and Education, University of Amsterdam
| | - Barbara Nevicka
- Department of Work and Organizational Psychology, University of Amsterdam
| | | |
Collapse
|
17
|
Abstract
The COVID-19 pandemic points to the need for scientists to pool their efforts in order to understand this disease and respond to the ensuing crisis. Other global challenges also require such scientific cooperation. Yet in academic institutions, reward structures and incentives are based on systems that primarily fuel the competition between (groups of) scientific researchers. Competition between individual researchers, research groups, research approaches, and scientific disciplines is seen as an important selection mechanism and driver of academic excellence. These expected benefits of competition have come to define the organizational culture in academia. There are clear indications that the overreliance on competitive models undermines cooperative exchanges that might lead to higher quality insights. This damages the well-being and productivity of individual researchers and impedes efforts towards collaborative knowledge generation. Insights from social and organizational psychology on the side effects of relying on performance targets, prioritizing the achievement of success over the avoidance of failure, and emphasizing self-interest and efficiency, clarify implicit mechanisms that may spoil valid attempts at transformation. The analysis presented here elucidates that a broader change in the academic culture is needed to truly benefit from current attempts to create more open and collaborative practices for cumulative knowledge generation.
Collapse
|
18
|
Röseler L, Schütz A, Blank PA, Dück M, Fels S, Kupfer J, Scheelje L, Seida C. Evidence against subliminal anchoring: Two close, highly powered, preregistered, and failed replication attempts. Journal of Experimental Social Psychology 2020; 92:104066. [PMID: 33100377 PMCID: PMC7567524 DOI: 10.1016/j.jesp.2020.104066] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2020] [Revised: 09/29/2020] [Accepted: 09/30/2020] [Indexed: 10/28/2022]
Abstract
• The only two published studies on subliminal anchoring report contradictory results.
• In two replications of these studies, we found no evidence for subliminal anchoring.
• The replications were as close to the originals as possible, and high data quality was ensured, for example by introducing new manipulation checks.
• Both studies feature entirely open materials and preregistration of all materials, including the analysis scripts.
Collapse
Affiliation(s)
- Lukas Röseler
- University of Bamberg, Germany.,Harz University of Applied Sciences, Germany
| | | | | | | | | | | | | | | |
Collapse
|
19
|
Iso-Ahola SE. Replication and the Establishment of Scientific Truth. Front Psychol 2020; 11:2183. [PMID: 33041887 PMCID: PMC7525033 DOI: 10.3389/fpsyg.2020.02183] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2020] [Accepted: 08/04/2020] [Indexed: 11/13/2022] Open
Abstract
The idea of replication is based on the premise that there are empirical regularities or universal laws to be replicated and verified, and that the scientific method is adequate for doing so. Scientific truth, however, is not absolute but relative to time, context, and the method used. Time and context are inextricably intertwined in that time (e.g., Christmas Day vs. New Year's Day) creates different contexts for behaviors and contexts create different experiences of time, rendering psychological phenomena inherently variable. This means that internal and external conditions fluctuate and differ between a replication study and the original. Thus, a replication experiment is just another empirical investigation in an ongoing effort to establish scientific truth. Neither the original nor a replication is the final arbiter of whether or not something exists. Discovered patterns need not be permanent laws of human behavior proven by pinpoint statistical verification through replication. To move forward, phenomenon replications are needed to investigate phenomena in different ways, forms, contexts, and times. Such investigations look at phenomena not just in terms of the magnitude of their effects but also their frequency, duration, and intensity in labs and real life. They will also shed light on the extent to which lab manipulations may turn many phenomena into subjectively conscious events and effects (e.g., causal attributions) when they are nonconsciously experienced in real life, or vice versa. As scientific knowledge in physics is temporary and incomplete, should it be any surprise that science can only provide "temporary winners" for psychological knowledge of human behavior?
Collapse
Affiliation(s)
- Seppo E. Iso-Ahola
- Department of Kinesiology, School of Public Health, University of Maryland, College Park, College Park, MD, United States
| |
Collapse
|
20
|
Wilson BM, Harris CR, Wixted JT. Science is not a signal detection problem. Proc Natl Acad Sci U S A 2020; 117:5559-5567. [PMID: 32127477 PMCID: PMC7084063 DOI: 10.1073/pnas.1914237117] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023] Open
Abstract
The perceived replication crisis and the reforms designed to address it are grounded in the notion that science is a binary signal detection problem. However, contrary to null hypothesis significance testing (NHST) logic, the magnitude of the underlying effect size for a given experiment is best conceptualized as a random draw from a continuous distribution, not as a random draw from a dichotomous distribution (null vs. alternative). Moreover, because continuously distributed effects selected using a P < 0.05 filter must be inflated, the fact that they are smaller when replicated (reflecting regression to the mean) is no reason to sound the alarm. Considered from this perspective, recent replication efforts suggest that most published P < 0.05 scientific findings are "true" (i.e., in the correct direction), with observed effect sizes that are inflated to varying degrees. We propose that original science is a screening process, one that adopts NHST logic as a useful fiction for selecting true effects that are potentially large enough to be of interest to other scientists. Unlike original science, replication science seeks to precisely measure the underlying effect size associated with an experimental protocol via large-N direct replication, without regard for statistical significance. Registered reports are well suited to (often resource-intensive) direct replications, which should focus on influential findings and be published regardless of outcome. Conceptual replications play an important but separate role in validating theories. However, because they are part of NHST-based original science, conceptual replications cannot serve as the field's self-correction mechanism. Only direct replications can do that.
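The core statistical claim here, that effects passing a P < 0.05 filter must be inflated yet mostly point in the correct direction, is easy to check by simulation. A minimal sketch under assumed, illustrative parameters (true effects drawn from N(0.2, 0.2), two groups of n = 50, a normal approximation to the test statistic):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_experiments = 50, 10_000

# True effects drawn from a continuous distribution rather than a point null.
true_d = rng.normal(0.2, 0.2, size=n_experiments)

# Observed effect in an original study: true effect plus sampling noise.
se = np.sqrt(2 / n)                 # approximate SE of Cohen's d, two groups of n
observed = true_d + rng.normal(0, se, size=n_experiments)

# The P < 0.05 publication filter keeps only "significant" originals.
selected = np.abs(observed / se) > 1.96

# Published effects overstate the truth they estimate (regression to the mean)...
inflation = observed[selected].mean() - true_d[selected].mean()

# ...yet most point in the correct direction, i.e., most are "true" findings.
correct_sign = np.mean(np.sign(observed[selected]) == np.sign(true_d[selected]))
print(f"mean inflation: {inflation:.2f}; correct direction: {correct_sign:.0%}")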
Collapse
Affiliation(s)
- Brent M Wilson
- Department of Psychology, University of California San Diego, La Jolla, CA 92093
| | - Christine R Harris
- Department of Psychology, University of California San Diego, La Jolla, CA 92093
| | - John T Wixted
- Department of Psychology, University of California San Diego, La Jolla, CA 92093
| |
Collapse
|
21
|
Chen X, Latham GP, Piccolo RF, Itzchakov G. An Enumerative Review and a Meta-Analysis of Primed Goal Effects on Organizational Behavior. Applied Psychology: An International Review 2020. [DOI: 10.1111/apps.12239] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Affiliation(s)
- Xiao Chen
- University of Prince Edward Island Canada
| | | | | | - Guy Itzchakov
- University of Toronto Canada
- University of Haifa Israel
| |
Collapse
|
22
|
Orrù G, Monaro M, Conversano C, Gemignani A, Sartori G. Machine Learning in Psychometrics and Psychological Research. Front Psychol 2020; 10:2970. [PMID: 31998200 PMCID: PMC6966768 DOI: 10.3389/fpsyg.2019.02970] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2019] [Accepted: 12/16/2019] [Indexed: 11/28/2022] Open
Abstract
Recent controversies about the replicability of behavioral research analyzed with statistical inference have spurred interest in developing more efficient techniques for analyzing the results of psychological experiments. Here we claim that complementing the analytical workflow of psychological experiments with machine-learning-based analysis will both maximize accuracy and minimize replicability issues. Compared with statistical inference, ML analysis of experimental data is model agnostic and focused primarily on prediction rather than inference. We also highlight some potential pitfalls of adopting machine-learning-based experiment analysis: if not used properly, it can lead to over-optimistic accuracy estimates similar to those observed with statistical inference. Remedies to such pitfalls are also presented, such as building models based on cross-validation and using ensemble models. Finally, because ML models are typically regarded as black boxes, we discuss strategies aimed at rendering their predictions more transparent.
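The over-optimism pitfall and its cross-validation remedy can be demonstrated in a few lines. A minimal sketch, not the authors' own analysis, using pure-noise data (where true accuracy is 50%) and a hand-rolled 1-nearest-neighbor classifier; all sizes and names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))      # pure-noise features
y = rng.integers(0, 2, size=200)    # labels carry no signal at all

def one_nn_predict(X_train, y_train, X_test):
    """Predict each test point's label from its nearest training neighbor."""
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[d.argmin(axis=1)]

# Pitfall: scoring on the training data itself; 1-NN memorizes, so accuracy is 100%.
train_acc = (one_nn_predict(X, y, X) == y).mean()

# Remedy: 5-fold cross-validation estimates out-of-sample accuracy (~chance here).
folds = np.array_split(rng.permutation(200), 5)
cv_acc = np.mean([
    (one_nn_predict(np.delete(X, f, axis=0), np.delete(y, f), X[f]) == y[f]).mean()
    for f in folds
])
print(f"training accuracy: {train_acc:.2f}; cross-validated accuracy: {cv_acc:.2f}")
```

The gap between the two numbers is exactly the over-optimism the abstract warns about: only the held-out estimate generalizes.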
Collapse
Affiliation(s)
- Graziella Orrù
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
| | - Merylin Monaro
- Department of General Psychology, University of Padua, Padua, Italy
| | - Ciro Conversano
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
| | - Angelo Gemignani
- Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, Pisa, Italy
| | - Giuseppe Sartori
- Department of General Psychology, University of Padua, Padua, Italy
| |
Collapse
|
23
|
Replicator degrees of freedom allow publication of misleading failures to replicate. Proc Natl Acad Sci U S A 2019; 116:25535-25545. [PMID: 31767750 PMCID: PMC6925985 DOI: 10.1073/pnas.1910951116] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023] Open
Abstract
We show that commonly exercised flexibility at the experimental design and data analysis stages of replication testing makes it easy to publish false-negative replication results while maintaining the impression of methodological rigor. These findings have important implications for how the many ostensible nonreplications already in the literature should be interpreted and for how future replication tests should be conducted. In recent years, the field of psychology has begun to conduct replication tests on a large scale. Here, we show that “replicator degrees of freedom” make it far too easy to obtain and publish false-negative replication results, even while appearing to adhere to strict methodological standards. Specifically, using data from an ongoing debate, we show that commonly exercised flexibility at the experimental design and data analysis stages of replication testing can make it appear that a finding was not replicated when, in fact, it was. The debate that we focus on is representative, on key dimensions, of a large number of other replication tests in psychology that have been published in recent years, suggesting that the lessons of this analysis may be far reaching. The problems with current practice in replication science that we uncover here are particularly worrisome because they are not adequately addressed by the field’s standard remedies, including preregistration. Implications for how the field could develop more effective methodological standards for replication are discussed.
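The mechanism described above, that flexibility across defensible analysis specifications makes false-negative replications easy to produce, can be illustrated with a crude simulation. This is a sketch under strong simplifying assumptions (each "spec" is modeled as an independent look at data with a real effect of d = 0.5, n = 60; real specifications would be correlated, which weakens but does not eliminate the effect):

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_reps, n_specs = 60, 2000, 8
any_nonsig = fixed_nonsig = 0

for _ in range(n_reps):
    # A real effect (mean 0.5, SD 1) that a well-powered replication should detect.
    x = rng.normal(0.5, 1.0, size=(n_specs, n))
    # One-sample t statistic for each of the 8 defensible analysis variants.
    t = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))
    nonsig = np.abs(t) < 2.0       # spec fails to reach significance
    fixed_nonsig += nonsig[0]      # honest replicator: one preregistered spec
    any_nonsig += nonsig.any()     # flexible replicator: free to report any spec

rate_fixed, rate_flexible = fixed_nonsig / n_reps, any_nonsig / n_reps
print(f"fixed spec 'fails': {rate_fixed:.0%}; flexible spec 'fails': {rate_flexible:.0%}")
```

Even with high per-spec power, the freedom to choose among several specifications multiplies the chance of finding, and reporting, a nonsignificant result for a real effect.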
Collapse
|