1
Robinson MM, Williams JR, Wixted JT, Brady TF. Zooming in on what counts as core and auxiliary: A case study on recognition models of visual working memory. Psychon Bull Rev 2024. PMID: 39289241. DOI: 10.3758/s13423-024-02562-9.
Abstract
Research on best practices in theory assessment highlights that testing theories is challenging because they inherit a new set of assumptions as soon as they are linked to a specific methodology. In this article, we integrate and build on this work by demonstrating the breadth of these challenges. We show that tracking auxiliary assumptions is difficult because they are made at different stages of theory testing and at multiple levels of a theory. We focus on these issues in a reanalysis of a seminal study and its replications, both of which use a simple working-memory paradigm and a mainstream computational modeling approach. These studies provide the main evidence for "all-or-none" recognition models of visual working memory and are still used as the basis for how to measure performance in popular visual working-memory tasks. In our reanalysis, we find that core practical auxiliary assumptions were unchecked and violated; the original model comparison metrics and data were not diagnostic in several experiments. Furthermore, we find that models were not matched on "theory general" auxiliary assumptions, meaning that the set of tested models was restricted, and not matched in theoretical scope. After testing these auxiliary assumptions and identifying diagnostic testing conditions, we find evidence for the opposite conclusion. That is, continuous resource models outperform all-or-none models. Together, our work demonstrates why tracking and testing auxiliary assumptions remains a fundamental challenge, even in prominent studies led by careful, computationally minded researchers. Our work also serves as a conceptual guide on how to identify and test the gamut of auxiliary assumptions in theory assessment, and we discuss these ideas in the context of contemporary approaches to scientific discovery.
Affiliation(s)
- John T Wixted
- Department of Psychology, University of California, San Diego, CA, USA
- Timothy F Brady
- Department of Psychology, University of California, San Diego, CA, USA
2
Wegener DT, Pek J, Fabrigar LR. Accumulating evidence across studies: Consistent methods protect against false findings produced by p-hacking. PLoS One 2024; 19:e0307999. PMID: 39208346. PMCID: PMC11361653. DOI: 10.1371/journal.pone.0307999.
Abstract
Much empirical science involves evaluating alternative explanations for the obtained data. For example, given certain assumptions underlying a statistical test, a "significant" result generally refers to implausibility of a null (zero) effect in the population producing the obtained study data. However, methodological work on various versions of p-hacking (i.e., using different analysis strategies until a "significant" result is produced) questions whether significant p-values might often reflect false findings. Indeed, initial simulations of single studies showed that the potential for finding "significant" but false findings might be much higher than the nominal .05 value when various analysis flexibilities are undertaken. In many settings, however, research articles report multiple studies using consistent methods across the studies, where those consistent methods would constrain the flexibilities used to produce high false-finding rates in simulations of single studies. Thus, we conducted simulations of study sets. These simulations show that consistent methods across studies (i.e., consistent in terms of which measures are analyzed, which conditions are included, and whether and how covariates are included) dramatically reduce the potential for flexible research practices (p-hacking) to produce consistent sets of significant results across studies. For p-hacking to produce even a modest probability of a consistent set of significant studies, it would require (a) a large amount of selectivity in study reporting and (b) severe (and quite intentional) versions of p-hacking. With no more than modest selective reporting and with consistent methods across studies, p-hacking does not provide a plausible explanation for consistent empirical results across studies, especially as the size of the reported study set increases. In addition, the simulations show that p-hacking can produce high rates of false findings for single studies with very large samples. In contrast, a series of methodologically consistent studies (even with much smaller samples) is much less vulnerable to the forms of p-hacking examined in the simulations.
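To make the constraint concrete, here is a toy Monte Carlo in the spirit of, though far simpler than, the simulations described above. The only flexibility modeled is choosing the best of several correlated outcome measures, one of the forms the paper examines; all parameter values below are illustrative assumptions, not the paper's design.

```python
# Toy Monte Carlo: p-hacking one study vs. a study set that must use
# consistent methods. Illustrative parameters only; not the paper's design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n = 50        # participants per group
m = 5         # candidate outcome measures (the p-hacking flexibility)
r = 0.5       # correlation among the outcome measures
k = 3         # number of studies in the reported study set
reps = 2000   # simulation replications

# Compound-symmetric covariance for the m correlated outcomes
cov = r * np.ones((m, m)) + (1 - r) * np.eye(m)

def study_pvalues():
    """One two-group study under a true null: p-value for each of m outcomes."""
    g1 = rng.multivariate_normal(np.zeros(m), cov, size=n)
    g2 = rng.multivariate_normal(np.zeros(m), cov, size=n)
    return stats.ttest_ind(g1, g2, axis=0).pvalue

single_hit, set_hit = 0, 0
for _ in range(reps):
    p = np.array([study_pvalues() for _ in range(k)])  # shape (k, m)
    # Single study, hacked: report if ANY of the m outcomes is significant.
    single_hit += (p[0] < 0.05).any()
    # Study set with consistent methods: the SAME outcome must be
    # significant in all k studies (the best case for the p-hacker).
    set_hit += (p < 0.05).all(axis=0).any()

print(f"False-finding rate, single hacked study: {single_hit / reps:.3f}")
print(f"False-finding rate, consistent {k}-study set: {set_hit / reps:.3f}")
```

Under these assumptions the hacked single-study rate lands well above .05, while the requirement that the same measure be significant in all k studies pushes the set-level rate back toward negligible levels.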
Affiliation(s)
- Duane T. Wegener
- Department of Psychology, Ohio State University, Columbus, Ohio, United States of America
- Jolynn Pek
- Department of Psychology, Ohio State University, Columbus, Ohio, United States of America
3
Schneider JW, Allum N, Andersen JP, Petersen MB, Madsen EB, Mejlgaard N, Zachariae R. Is something rotten in the state of Denmark? Cross-national evidence for widespread involvement but not systematic use of questionable research practices across all fields of research. PLoS One 2024; 19:e0304342. PMID: 39133711. PMCID: PMC11318862. DOI: 10.1371/journal.pone.0304342.
Abstract
Questionable research practices (QRP) are believed to be widespread, but empirical assessments are generally restricted to a few types of practices. Furthermore, conceptual confusion is rife, with the use and the prevalence of QRPs often conflated as if they were the same quantity. We present the hitherto most comprehensive study examining QRPs across scholarly fields and knowledge production modes. We survey perception, use, prevalence and predictors of QRPs among 3,402 researchers in Denmark and 1,307 in the UK, USA, Croatia and Austria. Results reveal remarkably similar response patterns among Danish and international respondents (τ = 0.85). Self-reported use indicates whether respondents have used a QRP in recent publications: nine out of ten respondents admitted using at least one QRP, and median use is three out of nine QRP items. Self-reported prevalence reflects the frequency of use; on average, prevalence rates were roughly one third of self-reported use rates. Findings indicated that the perceived social acceptability of QRPs influenced self-report patterns. Results suggest that most researchers use different types of QRPs within a restricted time period. The prevalence estimates, however, do not suggest outright systematic use of specific QRPs. Perceived pressure was the strongest systemic predictor of prevalence. Conversely, more local attention to research cultures and academic age were negatively related to prevalence. Finally, the personality traits conscientiousness and, to a lesser degree, agreeableness were also inversely associated with self-reported prevalence. Findings suggest that engagement with QRPs is not attributable to systemic factors alone, as hitherto suggested, but to a complicated mixture of experience, systemic and individual factors, and motivated reasoning.
Affiliation(s)
- Jesper W. Schneider
- Danish Centre for Studies in Research and Research Policy, Aarhus University, Aarhus, Denmark
- Nick Allum
- Department of Sociology, University of Essex, Essex, United Kingdom
- Jens Peter Andersen
- Danish Centre for Studies in Research and Research Policy, Aarhus University, Aarhus, Denmark
- Emil B. Madsen
- Danish Centre for Studies in Research and Research Policy, Aarhus University, Aarhus, Denmark
- Niels Mejlgaard
- Danish Centre for Studies in Research and Research Policy, Aarhus University, Aarhus, Denmark
- Robert Zachariae
- Unit for Psychooncology and Health Psychology (EPoS), Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Department of Psychology and Behavioral Science, Aarhus University, Aarhus, Denmark
4
Jahanshad N, Lenzini P, Bijsterbosch J. Current best practices and future opportunities for reproducible findings using large-scale neuroimaging in psychiatry. Neuropsychopharmacology 2024. PMID: 39117903. DOI: 10.1038/s41386-024-01938-8.
Abstract
Research into the brain basis of psychopathology is challenging due to the heterogeneity of psychiatric disorders, extensive comorbidities, underdiagnosis or overdiagnosis, multifaceted interactions with genetics and life experiences, and the highly multivariate nature of neural correlates. Therefore, increasingly larger datasets that measure more variables in larger cohorts are needed to gain insights. In this review, we present current "best practice" approaches for using existing databases, collecting and sharing new repositories for big data analyses, and future directions for big data in neuroimaging and psychiatry with an emphasis on contributing to collaborative efforts and the challenges of multi-study data analysis.
Affiliation(s)
- Neda Jahanshad
- Mark and Mary Stevens Neuroimaging & Informatics Institute, Keck School of Medicine of USC, Marina del Rey, CA 90292, USA
- Petra Lenzini
- Department of Radiology, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Janine Bijsterbosch
- Department of Radiology, Washington University School of Medicine, Saint Louis, MO 63110, USA
5
Papaefthimiou M, Kontou PI, Bagos PG, Braliou GG. Integration of Antioxidant Activity Assays Data of Stevia Leaf Extracts: A Systematic Review and Meta-Analysis. Antioxidants (Basel) 2024; 13:692. PMID: 38929131. PMCID: PMC11201069. DOI: 10.3390/antiox13060692.
Abstract
Stevia rebaudiana Bertoni, a no-calorie natural sweetener, contains a plethora of polyphenols that exert antioxidant properties with potential medicinal significance. Due to the variety of functional groups, polyphenols exhibit varying solubility depending on the nature of the extraction solvents (water, organic, or their mixtures, referred to below as hydroalcoholic extracts). In the present study, we performed a systematic review, following PRISMA guidelines, and a meta-analysis, synthesizing all available data from 45 articles encompassing 250 different studies. Our results showed that the total phenolic content (TPC) of hydroalcoholic and aqueous extracts presents higher values (64.77 and 63.73 mg GAE/g) than organic extracts (33.39 mg GAE/g). Total flavonoid content (TFC) was also higher in aqueous and hydroalcoholic extracts; meta-regression analysis revealed that outcomes in different measuring units (mg QE/g, mg CE/g, and mg RUE/g) do not present statistically significant differences and can be synthesized in meta-analysis. Using meta-regression analysis, we showed that outcomes from the chemical-based ABTS, FRAP, and ORAC antioxidant assays for the same extract type can be combined in meta-analysis because they do not differ statistically significantly. Meta-analysis of ABTS, FRAP, and ORAC assay outcomes revealed that the antioxidant activity profile of the various extract types follows that of their phenolic and flavonoid content. Using regression meta-analysis, we also showed that outcomes from SOD, CAT, and POX enzymatic antioxidant assays are independent of the assay type (p-value = 0.905) and can be combined. Our study constitutes the first effort to quantitatively and statistically synthesize the research results of individual studies using all methods measuring the antioxidant activity of stevia leaf extracts. Our results, in light of evidence-based practice, uncover the need for a broadly accepted, unified methodological strategy for performing antioxidant tests, and offer documentation that ethanol:water 1:1 mixtures or pure water can more efficiently extract stevia antioxidant compounds.
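As background on the pooling step that such a synthesis relies on, the sketch below shows a minimal DerSimonian-Laird random-effects meta-analysis. The input numbers are invented for illustration; the review's actual models, including the meta-regressions, are more elaborate.

```python
# Minimal DerSimonian-Laird random-effects pooling; made-up example data,
# not values from the review.
import numpy as np

y = np.array([64.8, 58.2, 70.1, 61.5])   # per-study outcomes (e.g., mg GAE/g)
se = np.array([4.0, 5.5, 6.2, 3.8])      # per-study standard errors
v = se**2

w = 1 / v                                  # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fe) ** 2)            # heterogeneity statistic
df = len(y) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)              # between-study variance estimate

w_re = 1 / (v + tau2)                      # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

print(f"tau^2 = {tau2:.2f}")
print(f"pooled estimate = {y_re:.2f} +/- {1.96 * se_re:.2f} (95% CI)")
```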
Affiliation(s)
- Maria Papaefthimiou
- Department of Computer Science and Biomedical Informatics, University of Thessaly, 35131 Lamia, Greece
- Pantelis G. Bagos
- Department of Computer Science and Biomedical Informatics, University of Thessaly, 35131 Lamia, Greece
- Georgia G. Braliou
- Department of Computer Science and Biomedical Informatics, University of Thessaly, 35131 Lamia, Greece
6
Brown RE. Measuring the replicability of our own research. J Neurosci Methods 2024; 406:110111. PMID: 38521128. DOI: 10.1016/j.jneumeth.2024.110111.
Abstract
In the study of transgenic mouse models of neurodevelopmental and neurodegenerative disorders, we use batteries of tests to measure deficits in behaviour, and from the results of these tests we make inferences about the mental states of the mice, which we interpret as deficits in "learning", "memory", "anxiety", "depression", etc. This paper discusses the problems of determining whether a particular transgenic mouse is a valid mouse model of disease X, the problem of background strains, and the question of whether our behavioural tests are measuring what we say they are. The problem of the reliability of results is then discussed: are they replicable between labs, and can we replicate our results in our own lab? This involves the study of intra- and inter-experimenter reliability. The variables that influence replicability are examined, along with the importance of conducting a complete behavioural phenotype: sensory, motor, cognitive, and social-emotional behaviour. Then the thorny question of failure to replicate is addressed: is it a curse or a blessing? Finally, the role of failure in research and what it tells us about our research paradigms is examined.
Affiliation(s)
- Richard E Brown
- Department of Psychology and Neuroscience, Dalhousie University, Halifax, NS B3H 4R2, Canada
7
Jain S, Vo VA, Wehbe L, Huth AG. Computational Language Modeling and the Promise of In Silico Experimentation. Neurobiology of Language 2024; 5:80-106. PMID: 38645624. PMCID: PMC11025654. DOI: 10.1162/nol_a_00101.
Abstract
Language neuroscience currently relies on two major experimental paradigms: controlled experiments using carefully hand-designed stimuli, and natural stimulus experiments. These approaches have complementary advantages that allow them to address distinct aspects of the neurobiology of language, but each approach also comes with drawbacks. Here we discuss a third paradigm, in silico experimentation using deep learning-based encoding models, which has been enabled by recent advances in cognitive computational neuroscience. This paradigm promises to combine the interpretability of controlled experiments with the generalizability and broad scope of natural stimulus experiments. We show four examples of simulating language neuroscience experiments in silico and then discuss both the advantages and caveats of this approach.
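To make the third paradigm concrete, the following is a bare-bones sketch of the underlying machinery: a ridge-regression encoding model mapping stimulus embeddings to voxel responses, which can then be queried with novel stimuli in silico. Everything here, from the dimensions to the random "data", is an illustrative stand-in for the pipelines the authors review.

```python
# Bare-bones encoding model: ridge regression from stimulus embeddings to
# voxel responses, then "in silico" prediction for novel stimuli.
# All data here are random stand-ins for real embeddings and fMRI responses.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_features, n_voxels = 500, 64, 100

X = rng.standard_normal((n_train, n_features))      # embeddings of presented stimuli
W_true = rng.standard_normal((n_features, n_voxels)) * 0.3
Y = X @ W_true + rng.standard_normal((n_train, n_voxels))  # noisy voxel responses

lam = 10.0  # ridge penalty; in practice tuned per voxel by cross-validation
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# "In silico experiment": predict responses to stimuli never shown to the
# participant, e.g., embeddings of a controlled contrast between conditions.
X_new = rng.standard_normal((10, n_features))
Y_pred = X_new @ W_hat
print(Y_pred.shape)  # (10, 100): predicted response of each voxel to each stimulus
```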
Affiliation(s)
- Shailee Jain
- Department of Computer Science, University of Texas at Austin, Austin, TX, USA
- Vy A. Vo
- Brain-Inspired Computing Lab, Intel Labs, Hillsboro, OR, USA
- Leila Wehbe
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Alexander G. Huth
- Department of Computer Science, University of Texas at Austin, Austin, TX, USA
- Department of Neuroscience, University of Texas at Austin, Austin, TX, USA
8
Flournoy JC, Bryce NV, Dennison MJ, Rodman AM, McNeilly EA, Lurie LA, Bitran D, Reid-Russell A, Vidal Bustamante CM, Madhyastha T, McLaughlin KA. A precision neuroscience approach to estimating reliability of neural responses during emotion processing: Implications for task-fMRI. Neuroimage 2024; 285:120503. PMID: 38141745. PMCID: PMC10872443. DOI: 10.1016/j.neuroimage.2023.120503.
Abstract
Recent work demonstrating low test-retest reliability of neural activation during fMRI tasks raises questions about the utility of task-based fMRI for the study of individual variation in brain function. Two possible sources of the instability in task-based BOLD signal over time are noise or measurement error in the instrument, and meaningful within-individual variation across time in the construct itself: brain activation elicited during fMRI tasks. Examining the contribution of these two sources of test-retest unreliability in task-evoked brain activity has far-reaching implications for cognitive neuroscience. If test-retest reliability largely reflects measurement error, it suggests that task-based fMRI has little utility in the study of either inter- or intra-individual differences. On the other hand, if task-evoked BOLD signal varies meaningfully over time, this tool may yet be well suited to studying intraindividual variation. We parse these sources of variance in BOLD signal in response to emotional cues over time and within individuals in a longitudinal sample with 10 monthly fMRI scans. Test-retest reliability was low, reflecting a lack of stability in between-person differences across scans. In contrast, within-person, within-session internal consistency of the BOLD signal was higher, and within-person fluctuations across sessions explained almost half the variance in voxel-level neural responses. Additionally, monthly fluctuations in neural response to emotional cues were associated with intraindividual variation in mood, sleep, and exposure to stressors. Rather than reflecting trait-like differences across people, neural responses to emotional cues may be more reflective of intraindividual variation over time. These patterns suggest that task-based fMRI may yet contribute to the study of individual variation in brain function if more attention is given to within-individual variation approaches and to psychometrics: beginning with improving reliability beyond the modest estimates observed here, and with establishing the validity of task fMRI beyond the suggestive associations reported here.
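For readers who want the variance-partitioning logic in concrete form, the sketch below computes a one-way ICC(1) from simulated subject-by-session data. It is a generic illustration of separating between-person from within-person variance, not the authors' mixed-model pipeline, and the variance parameters are assumptions chosen to mimic the pattern described above.

```python
# Generic one-way ICC(1): share of total variance due to stable
# between-person differences across repeated sessions. Simulated data,
# not the study's mixed-model analysis.
import numpy as np

rng = np.random.default_rng(7)
n_sub, n_ses = 30, 10

between_sd, within_sd = 0.5, 1.0   # assumed: within-person noise dominates
person = rng.normal(0, between_sd, size=(n_sub, 1))
data = person + rng.normal(0, within_sd, size=(n_sub, n_ses))

grand = data.mean()
ms_between = n_ses * np.sum((data.mean(axis=1) - grand) ** 2) / (n_sub - 1)
ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n_sub * (n_ses - 1))

icc1 = (ms_between - ms_within) / (ms_between + (n_ses - 1) * ms_within)
print(f"ICC(1) = {icc1:.2f}")  # low when within-person variance dominates
```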
Affiliation(s)
- Meg J Dennison
- Phoenix Australia-Centre for Posttraumatic Mental Health, Department of Psychiatry, The University of Melbourne, Melbourne, VIC, Australia
- Lucy A Lurie
- Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill
- Tara Madhyastha
- Rescale; Integrated Brain Imaging Center, University of Washington
9
Li W, Germine LT, Mehr SA, Srinivasan M, Hartshorne J. Developmental psychologists should adopt citizen science to improve generalization and reproducibility. Infant and Child Development 2024; 33:e2348. PMID: 38515737. PMCID: PMC10957098. DOI: 10.1002/icd.2348.
Abstract
Widespread failures of replication and generalization are, ironically, a scientific triumph, in that they confirm the fundamental metascientific theory that underlies our field. Generalizable and replicable findings require testing large numbers of subjects from a wide range of demographics with a large, randomly-sampled stimulus set, and using a variety of experimental parameters. Because few studies accomplish any of this, meta-scientists predict that findings will frequently fail to replicate or generalize. We argue that to be more robust and replicable, developmental psychology needs to find a mechanism for collecting data at greater scale and from more diverse populations. Luckily, this mechanism already exists: Citizen science, in which large numbers of uncompensated volunteers provide data. While best-known for its contributions to astronomy and ecology, citizen science has also produced major findings in neuroscience and psychology, and increasingly in developmental psychology. We provide examples, address practical challenges, discuss limitations, and compare to other methods of obtaining large datasets. Ultimately, we argue that the range of studies where it makes sense *not* to use citizen science is steadily dwindling.
Affiliation(s)
- Wei Li
- Department of Psychology and Neuroscience, Boston College, Chestnut Hill, MA, USA
- Laura Thi Germine
- McLean Hospital, Belmont, MA, USA
- Department of Psychiatry, Harvard Medical School, Cambridge, MA
- Samuel A. Mehr
- Data Science Initiative, Harvard University, Cambridge, MA
- School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Joshua Hartshorne
- Department of Psychology and Neuroscience, Boston College, Chestnut Hill, MA, USA
10
Laurent V, Kosinski T, Rusinek S. I know it's false, but I keep thinking as if it were true: A replication study of Johnson and Seifert's (1994) continued influence effect. Acta Psychol (Amst) 2023; 241:104094. PMID: 38000364. DOI: 10.1016/j.actpsy.2023.104094.
Abstract
False Information (FI) is a critical societal issue, made even more pressing by our inability to mitigate its influence through correction. Johnson and Seifert (1994, Experiment 1A) penned a seminal paper on this "Continued Influence Effect" (CIE), which they observed in English-speaking participants. In their experiments, one group read a text containing FI that was later retracted (FI group), while another read the same text without FI (control group). Interestingly, even after receiving corrections, participants who read the FI were more likely than their peers to form FI-related inferences about the text. To the best of our knowledge, this finding has never been successfully directly replicated. Given the current replicability crisis in the human sciences, the influence of culture on the CIE, and the importance of Johnson and Seifert's paradigm in this literature, reassessing their findings within a non-English-speaking population appears crucial. The present research investigated the direct replicability of their study with a French-speaking sample, comparing the inferences drawn by an FI group (n = 21) to those made by a control group (n = 23). The results confirm those of the original study, supporting the validity of Johnson and Seifert's (1994) paradigm and extending its applicability to a French-speaking population.
Affiliation(s)
- Victor Laurent
- Univ. Lille, ULR 4072 - PSITEC - Psychologie: Interactions, Temps, Emotions, Cognition, F-59000 Lille, France
- Thierry Kosinski
- Univ. Lille, ULR 4072 - PSITEC - Psychologie: Interactions, Temps, Emotions, Cognition, F-59000 Lille, France
- Stéphane Rusinek
- Univ. Lille, ULR 4072 - PSITEC - Psychologie: Interactions, Temps, Emotions, Cognition, F-59000 Lille, France
11
Korcsok B, Korondi P. How do you do the things that you do? Ethological approach to the description of robot behaviour. Biol Futur 2023; 74:253-279. PMID: 37812380. DOI: 10.1007/s42977-023-00178-z.
Abstract
The detailed description of the behaviour of the interacting parties is becoming more and more important in human-robot interaction (HRI), especially in social robotics (SR). With the rise in the number of publications, there is a substantial need for objective and comprehensive descriptions of implemented robot behaviours to ensure comparability and reproducibility of the studies. Ethograms and the meticulous analysis of behaviour were introduced long ago in animal behaviour research (cf. ethology). The adoption of this method in SR and HRI can ensure the desired clarity over robot behaviours, while also providing added benefits during robot development, behaviour modelling, and the analysis of HRI experiments. We provide an overview of the possible uses and advantages of ethograms in HRI, and propose a general framework for describing behaviour which can be adapted to the requirements of specific studies.
Affiliation(s)
- Beáta Korcsok
- ELKH-ELTE Comparative Ethology Research Group, Budapest, Hungary
- Department of Mechatronics, Optics and Mechanical Engineering Informatics, Faculty of Mechanical Engineering, Budapest University of Technology and Economics, Budapest, Hungary
- Péter Korondi
- Department of Mechatronics, Faculty of Engineering, University of Debrecen, Debrecen, Hungary
12
Allard A, Clavien C. Nudging accurate scientific communication. PLoS One 2023; 18:e0290225. PMID: 37651386. PMCID: PMC10470889. DOI: 10.1371/journal.pone.0290225.
Abstract
The recent replicability crisis in the social and biomedical sciences has highlighted the need for improvement in the honest transmission of scientific content. We present the results of two studies investigating whether nudges and soft social incentives enhance participants' readiness to transmit high-quality scientific news. In two online randomized experiments (total N = 2,425), participants had to imagine that they were science journalists who had to select scientific studies to report in their next article. They had to choose between studies reporting opposite results (for instance, confirming versus not confirming the effect of a treatment) and varying in traditional signs of research credibility (large versus small sample sizes, randomized versus non-randomized designs). In order to steer participants' choices towards or against the trustworthy transmission of science, we used several soft framing nudges and social incentives. Overall, we find that, although participants show a strong preference for studies with large sample sizes and randomized designs, they are biased towards positive results and express a preference for results in line with previous intuitions (evincing confirmation bias). Our soft framing nudges and social incentives did not help to counteract these biases. On the contrary, the social incentives against honest transmission of scientific content mildly exacerbated the expression of these biases.
Affiliation(s)
- Aurélien Allard
- iEH2-Institute for Ethics History Humanities, University of Geneva, Geneva, Switzerland
- Christine Clavien
- iEH2-Institute for Ethics History Humanities, University of Geneva, Geneva, Switzerland
13
Clark CJ, Connor P, Isch C. Failing to replicate predicts citation declines in psychology. Proc Natl Acad Sci U S A 2023; 120:e2304862120. PMID: 37428904. PMCID: PMC10629524. DOI: 10.1073/pnas.2304862120.
Abstract
With a sample of 228 psychology papers that failed to replicate, we tested whether the trajectory of citation patterns changes following the publication of a failure to replicate. Across models, we found consistent evidence that failing to replicate predicted lower future citations and that the size of this reduction increased over time. Over a 14-year post-publication period, we estimated that the publication of a failed replication was associated with an average citation decline of 14% for original papers. These findings suggest that the publication of failed replications may contribute to a self-correcting science by decreasing scholars' reliance on unreplicable original findings.
Affiliation(s)
- Cory J. Clark
- The Wharton School, University of Pennsylvania, Philadelphia, PA 19104
- School of Arts and Sciences, University of Pennsylvania, Philadelphia, PA 19104
- Paul Connor
- School of Arts and Sciences, University of Pennsylvania, Philadelphia, PA 19104
- Calvin Isch
- The Wharton School, University of Pennsylvania, Philadelphia, PA 19104
14
Horton CB, Adam H, Galinsky AD. Evaluating the Evidence for Enclothed Cognition: Z-Curve and Meta-Analyses. Personality and Social Psychology Bulletin 2023. PMID: 37458322. DOI: 10.1177/01461672231182478.
Abstract
Enclothed cognition refers to the systematic influence that clothes can have on the wearer's feelings, thoughts, and behaviors through their symbolic meaning. It has attracted considerable academic and nonacademic interest, with the 2012 article that coined the phrase cited more than 600 times and covered in more than 160 news outlets. However, a recent high-powered replication failed to replicate one of the original effects. To determine whether the larger body of research on enclothed cognition possesses evidential value and replicable effects, we performed z-curve and meta-analyses using 105 effects from 40 studies across 24 articles (N = 3,789). Underscoring the marked improvement of psychological research practices in the mid-2010s, our results raise concerns about the replicability of early enclothed cognition studies but affirm the evidential value of effects published after 2015. These later studies support the core principle of enclothed cognition: what we wear influences how we think, feel, and act.
15
Knopps AG, Wissman KT. Collaborative Retrieval Practice Reduces Mind-Wandering During Learning. Exp Psychol 2023; 70:241-248. PMID: 37830756. DOI: 10.1027/1618-3169/a000589.
Abstract
Research has shown that engaging in retrieval practice can reduce the frequency of mind-wandering. However, no prior research has examined how engaging in collaborative (as compared to individual) retrieval practice impacts mind-wandering during learning. In the current experiment, participants were asked to study a list of words, followed by retrieval practice that occurred either collaboratively (as a dyad) or individually. During retrieval practice, participants provided self-reports as to whether they were on task or off task. Following retrieval practice, all participants completed an individual final test. Of greatest interest, the results showed that engaging in collaborative retrieval practice decreased the frequency of mind-wandering during learning. In addition, and consistent with prior collaborative learning research, collaborative inhibition during practice and postcollaborative benefits on the final test were observed. The current results provide the first demonstration of an additional benefit of collaborative retrieval practice: this technique reduces the frequency of mind-wandering.
Affiliation(s)
- Kathryn T Wissman
- Department of Psychology, North Dakota State University, Fargo, ND, USA
16
Bailey D, Rose G, Stewart A. A change to Experimental Physiology's statistics policy. Exp Physiol 2023; 108:795-796. PMID: 37079429. PMCID: PMC10988475. DOI: 10.1113/ep091248.
Affiliation(s)
- Damian Bailey
- Neurovascular Research Laboratory, University of South Wales, Pontypridd, United Kingdom
- George Rose
- Neurovascular Research Laboratory, University of South Wales, South Wales, United Kingdom
- Alex Stewart
- The Physiological Society, London, United Kingdom
17
Schauer JM. On the Accuracy of Replication Failure Rates. Multivariate Behavioral Research 2023; 58:598-615. PMID: 37339430. DOI: 10.1080/00273171.2022.2066500.
Abstract
A prominent approach to studying the replication crisis has been to conduct replications of several different scientific findings as part of the same research effort. The proportions of findings that these programs report as having failed to replicate have become important statistics in the replication crisis. However, these "failure rates" are based on decisions about whether individual studies replicated, which are themselves subject to statistical uncertainty. In this article, we examine how that uncertainty impacts the accuracy of reported failure rates and find that reported failure rates can be substantially biased and highly variable. Indeed, very high or very low failure rates could arise from chance alone.
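The role of chance is easy to demonstrate with a sketch. In the simulation below, every original finding is real, yet underpowered replications combined with a significance-based "did it replicate?" rule still produce a high and variable reported failure rate. The effect size and sample sizes are arbitrary illustrative choices, not values from the article.

```python
# How a significance-based replication criterion plus finite samples can
# produce noisy, biased failure rates even when every effect is real.
# Effect and sample sizes are arbitrary illustrations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_findings = 100   # findings examined per replication program
d = 0.3            # true standardized effect for EVERY finding (none are null)
n_rep = 50         # per-group n in each replication study
programs = 1000    # simulated replication programs

failure_rates = []
for _ in range(programs):
    fails = 0
    for _ in range(n_findings):
        g1 = rng.normal(d, 1, n_rep)
        g2 = rng.normal(0, 1, n_rep)
        p = stats.ttest_ind(g1, g2).pvalue
        fails += p >= 0.05  # "failed to replicate" by the significance rule
    failure_rates.append(fails / n_findings)

failure_rates = np.array(failure_rates)
print(f"mean reported failure rate: {failure_rates.mean():.2f}")
print(f"5th-95th percentile across programs: "
      f"{np.percentile(failure_rates, 5):.2f}-{np.percentile(failure_rates, 95):.2f}")
```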
18
Karidas S, Hinckley JJ, Brekher I. Replication in Evidence-Based Aphasia Treatments. Journal of Speech, Language, and Hearing Research 2023:1-9. PMID: 36596266. DOI: 10.1044/2022_jslhr-21-00688.
Abstract
PURPOSE A critical review was completed to evaluate replication of aphasia treatments that have been vetted and accepted on the American Speech-Language-Hearing Association Practice Portal. METHOD The electronic databases Academic Search Premier, ProQuest Central, CINAHL Complete, and ERIC were searched for relevant articles using treatment names as keywords. Coders compared stimuli, materials, design, and statistical analysis to the pilot treatment approaches, and each study was coded as a direct, conceptual, failed, or no replication. RESULTS Eighteen treatment approaches were selected for this review, and a total of 188 articles met the inclusion criteria. Results revealed that 14 of the 18 treatment approaches were replicated in some form. Direct replications, the most valuable replication type for affirming previously found results, were represented in only 1.5% of all studies (3/188); failed direct replications accounted for 2% (4/188). Conceptual replications were more common across treatment approaches but were also represented at a low level (22.8%; 43/188). The majority of studies (73.4%) were coded as no replication attempt. DISCUSSION A critical factor in developing an evidence-based practice is the existence of replicated results for a treatment. Replication evaluates the reproducibility of authors' research methodology and resulting outcomes, and helps to ensure that observed treatment effects are reproducible. For an evidence-based treatment to be implemented or used in any clinical setting, it must be one that can be replicated. Direct and conceptual replications of aphasia treatment approaches were found to be alarmingly rare considering the importance of replication in our field. It is recommended that replication become more valued and mainstreamed in aphasia research. A replication database that compiles and maintains treatment manuals for replication purposes could increase the accessibility and acceptability of replications for researchers.
Affiliation(s)
- Stephanie Karidas
- Department of Speech-Language Pathology, Gannon University, Ruskin, FL
- Jacqueline J Hinckley
- Department of Speech-Language Pathology, Nova Southeastern University, Fort Lauderdale, FL
- Izabelle Brekher
- Department of Speech-Language Pathology, Gannon University, Ruskin, FL
19
Albanese F, Bloem BR, Kalia LV. Addressing the "Replication Crisis" in the Field of Parkinson's Disease. Journal of Parkinson's Disease 2023; 13:849-850. PMID: 37694309. PMCID: PMC10578279. DOI: 10.3233/jpd-239002.
Affiliation(s)
- Federica Albanese
- Krembil Research Institute, Toronto Western Hospital, University Health Network, Toronto, Canada
- Laboratory of Neurogenetics, National Institute on Aging, National Institutes of Health, Bethesda, MD, USA
- Bastiaan R. Bloem
- Radboud University Nijmegen Medical Centre, Donders Institute for Brain, Cognition and Behaviour, Department of Neurology, Nijmegen, The Netherlands
- Lorraine V. Kalia
- Krembil Research Institute, Toronto Western Hospital, University Health Network, Toronto, Canada
- Division of Neurology, Department of Medicine, University of Toronto, Toronto, Canada
20
Kelter R. How to Choose between Different Bayesian Posterior Indices for Hypothesis Testing in Practice. Multivariate Behavioral Research 2023; 58:160-188. PMID: 34582284. DOI: 10.1080/00273171.2021.1967716.
Abstract
Hypothesis testing is an essential statistical method in experimental psychology and the cognitive sciences. The problems of traditional null hypothesis significance testing (NHST) have been discussed widely, and among the proposed solutions to the replication problems caused by the inappropriate use of significance tests and p-values is a shift toward Bayesian data analysis. However, Bayesian hypothesis testing offers various posterior indices for the significance and size of an effect. This complicates Bayesian hypothesis testing in practice, as the availability of multiple Bayesian alternatives to the traditional p-value causes confusion about which one to select and why. In this paper, various Bayesian posterior indices which have been proposed in the literature are compared, and their benefits and limitations are discussed. The comparison shows that, conceptually, not all proposed Bayesian alternatives to NHST and p-values are beneficial, and the usefulness of some indices strongly depends on the study design and research goal. However, the comparison also reveals that there exist at least two candidates among the available Bayesian posterior indices which have appealing theoretical properties and are widely underused in the cognitive sciences.
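As a concrete illustration of how different posterior indices can be computed for the same data, the sketch below evaluates three commonly discussed ones for a binomial test of θ = 0.5 under a Beta(1, 1) prior. This is a generic textbook example, not one of the paper's case studies, and the counts are invented.

```python
# Three Bayesian posterior indices for the same data: Savage-Dickey Bayes
# factor, ROPE posterior mass, and probability of direction (pd).
# Generic binomial example with a Beta(1, 1) prior; illustrative only.
from scipy import stats

k, n = 61, 100            # observed successes out of n trials
a, b = 1 + k, 1 + n - k   # Beta posterior under a Beta(1, 1) prior
posterior = stats.beta(a, b)

# 1) Savage-Dickey Bayes factor for H0: theta = 0.5
#    BF01 = posterior density at 0.5 / prior density at 0.5 (which is 1 here)
bf01 = posterior.pdf(0.5) / stats.beta(1, 1).pdf(0.5)

# 2) ROPE: posterior mass inside a region of practical equivalence to 0.5
rope = posterior.cdf(0.55) - posterior.cdf(0.45)

# 3) Probability of direction: posterior mass on the more probable side of 0.5
pd = max(posterior.sf(0.5), posterior.cdf(0.5))

print(f"BF01 = {bf01:.3f} (BF10 = {1 / bf01:.2f})")
print(f"P(0.45 < theta < 0.55 | data) = {rope:.3f}")
print(f"pd = {pd:.3f}")
```

The three indices answer subtly different questions (relative evidence, practical equivalence, and direction), which is exactly why the choice among them depends on the research goal.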
Affiliation(s)
- Riko Kelter
- Department of Mathematics, University of Siegen
21
Newton T. Psychology: Where history, culture, and biology meet. Theory & Psychology 2022. DOI: 10.1177/09593543221131782.
Abstract
This article argues that the same epistemological assumptions cannot be confidently applied in the transition from the biological to the social arenas of psychology, as a consequence of the sociocultural instability resulting from human linguistic and technological flair. To illustrate this contention, reference is made to historicist theses within critical and sociocultural psychology, the work of Ian Hacking and Norbert Elias, the centrality of language and technology to sociocultural instability, and the illustrative issues raised by cultural neuroscience and replication studies.
22
Kelly DP, Beltz AM. Spatial Skills and Self-Perceived Masculinity: Considering College Major STEM-ness, Directionality, and Gender. Sex Roles 2022. DOI: 10.1007/s11199-022-01308-y.
23
Derksen M, Morawski J. Kinds of Replication: Examining the Meanings of "Conceptual Replication" and "Direct Replication". Perspectives on Psychological Science 2022; 17:1490-1505. PMID: 35245130. PMCID: PMC9442273. DOI: 10.1177/17456916211041116.
Abstract
Although psychology's recent crisis has been attributed to various scientific practices, it has come to be called a "replication crisis," prompting extensive appraisals of this putatively crucial scientific practice. These have yielded disagreements over what kind of replication is to be preferred and what phenomena are being explored, yet the proposals are all grounded in a conventional philosophy of science. This article proposes another avenue that invites moving beyond a discovery metaphor of science to rethink research as enabling realities and to consider how empirical findings enact or perform a reality. An enactment perspective appreciates multiple, dynamic realities and science as producing different entities, enactments that ever encounter differences, uncertainties, and precariousness. The axioms of an enactment perspective are described and employed to more fully understand the two kinds of replication that predominate in the crisis disputes. Although the enactment perspective described here is a relatively recent development in philosophy of science and science studies, some of its core axioms are not new to psychology, and the article concludes by revisiting psychologists' previous calls to apprehend the dynamism of psychological reality to appreciate how scientific practices actively and unavoidably participate in performativity of reality.
Affiliation(s)
- Maarten Derksen
- Department of Theory & History of Psychology, Faculty of Behavioural and Social Sciences, University of Groningen
24
Freese J, Rauf T, Voelkel JG. Advances in transparency and reproducibility in the social sciences. Social Science Research 2022; 107:102770. PMID: 36058608. DOI: 10.1016/j.ssresearch.2022.102770.
Abstract
Worries about a "credibility crisis" besieging science have ignited interest in research transparency and reproducibility as ways of restoring trust in published research. For quantitative social science, advances in transparency and reproducibility can be seen as a set of developments whose trajectory predates the recent alarm. We discuss several of these developments, including preregistration, data-sharing, formal infrastructure in the form of resources and policies, open access to research, and specificity regarding research contributions. We also discuss the spillovers of this predominantly quantitative effort towards transparency for qualitative research. We conclude by emphasizing the importance of mutual accountability for effective science, the essential role of openness for this accountability, and the importance of scholarly inclusiveness in figuring out the best ways for openness to be accomplished in practice.
25
Kelter R. fbst: An R package for the Full Bayesian Significance Test for testing a sharp null hypothesis against its alternative via the e value. Behav Res Methods 2022; 54:1114-1130. PMID: 34471963. PMCID: PMC9170675. DOI: 10.3758/s13428-021-01613-6.
Abstract
Hypothesis testing is a central statistical method in psychology and the cognitive sciences. However, the problems of null hypothesis significance testing (NHST) and p values have been debated widely, and few attractive alternatives exist. This article introduces the fbst R package, which implements the Full Bayesian Significance Test (FBST) to test a sharp null hypothesis against its alternative via the e value. The statistical theory of the FBST was introduced more than two decades ago, and since then the FBST has been shown to be a Bayesian alternative to NHST and p values with highly appealing theoretical and practical properties. The algorithm provided in the fbst package is applicable to any Bayesian model as long as the posterior distribution can be obtained at least numerically. The core function of the package provides the Bayesian evidence against the null hypothesis, the e value. Additionally, p values based on asymptotic arguments can be computed, and rich visualizations for communication and interpretation of the results can be produced. Three examples of frequently used statistical procedures in the cognitive sciences are given in this paper, which demonstrate how to apply the FBST in practice using the fbst package. Based on the success of the FBST in statistical science, the fbst package should be of interest to a broad range of researchers and will hopefully encourage researchers to consider the FBST as a possible alternative when conducting hypothesis tests of a sharp null hypothesis.
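To convey what such an e value is, here is a hedged numerical sketch of the FBST for a binomial example, following Pereira and Stern's construction with a flat reference function: the evidence against H0: θ = 0.5 is the posterior mass of the tangential set, the set of all θ whose posterior density exceeds the density at the null. This mirrors the reported quantity conceptually but is not the fbst package's implementation.

```python
# Numerical FBST sketch (Pereira-Stern) for H0: theta = 0.5 in a binomial
# model with a Beta(1, 1) prior and flat reference function. This mirrors
# the e value conceptually; it is not the fbst package's implementation.
import numpy as np
from scipy import stats

k, n = 61, 100
posterior = stats.beta(1 + k, 1 + n - k)

s_star = posterior.pdf(0.5)  # posterior density (surprise) at the null

# Tangential set T = {theta : p(theta | data) > s_star}; its posterior
# mass is the evidence AGAINST H0 (Monte Carlo estimate).
draws = posterior.rvs(size=200_000, random_state=4)
ev_against = np.mean(posterior.pdf(draws) > s_star)

print(f"evidence against H0 (posterior mass of tangential set): {ev_against:.3f}")
print(f"e value in favor of H0: {1 - ev_against:.3f}")
```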
Affiliation(s)
- Riko Kelter
- Department of Mathematics, University of Siegen, Walter-Flex-Street 3, 57072 Siegen, Germany
26
Miller AL, Wissman KT, Peterson DJ. The continued influence effect: Examining how age, retraction, and delay impact inferential reasoning. Applied Cognitive Psychology 2022. DOI: 10.1002/acp.3939.
Affiliation(s)
- Alyssa L. Miller
- Department of Psychology, North Dakota State University, Fargo, ND, USA
27
Regenwetter M, Robinson MM, Wang C. Four Internal Inconsistencies in Tversky and Kahneman's (1992) Cumulative Prospect Theory Article: A Case Study in Ambiguous Theoretical Scope and Ambiguous Parsimony. Advances in Methods and Practices in Psychological Science 2022. DOI: 10.1177/25152459221074653.
Abstract
Scholars heavily rely on theoretical scope as a tool to challenge existing theory. We advocate that scientific discovery could be accelerated if far more effort were invested into also overtly specifying and painstakingly delineating the intended purview of any proposed new theory at the time of its inception. As a case study, we consider Tversky and Kahneman (1992). They motivated their Nobel-Prize-winning cumulative prospect theory with evidence that, in each of two studies, roughly half of the participants violated independence, a property required by expected utility theory (EUT). Yet even at the time of inception, new theories may reveal signs of their own limited scope. For example, we show that Tversky and Kahneman's findings in their own test of loss aversion provide evidence that, in that study, at least half of their participants violated their theory in turn. We highlight a combination of conflicting findings in the original article that make it ambiguous to evaluate both cumulative prospect theory's scope and its parsimony on the authors' own evidence. The Tversky and Kahneman article is illustrative of a social and behavioral research culture in which theoretical scope plays an extremely asymmetric role: to call existing theory into question and motivate surrogate proposals.
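For reference, the functional forms at issue, cumulative prospect theory's value and probability-weighting functions as standardly stated from Tversky and Kahneman (1992), are:

```latex
% CPT value function and probability-weighting functions (Tversky & Kahneman, 1992)
v(x) =
\begin{cases}
x^{\alpha}, & x \ge 0,\\
-\lambda\,(-x)^{\beta}, & x < 0,
\end{cases}
\qquad
w^{+}(p) = \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}},
\qquad
w^{-}(p) = \frac{p^{\delta}}{\bigl(p^{\delta} + (1-p)^{\delta}\bigr)^{1/\delta}}
```

with reported median parameter estimates α = β = 0.88, λ = 2.25 (loss aversion), γ = 0.61 for gains, and δ = 0.69 for losses.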
Affiliation(s)
- Michel Regenwetter
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, Illinois
- Department of Political Science, University of Illinois at Urbana-Champaign, Urbana, Illinois
- Department of Electrical & Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois
- Maria M. Robinson
- Department of Psychology, University of California San Diego, La Jolla, California
- Cihang Wang
- Department of Economics, University of Illinois at Urbana-Champaign, Urbana, Illinois
28
Korem N, Cohen LD, Rubinsten O. The link between math anxiety and performance does not depend on working memory: A network analysis study. Conscious Cogn 2022; 100:103298. PMID: 35217396. DOI: 10.1016/j.concog.2022.103298.
Abstract
Math anxiety (MA) and working memory (WM) influence math performance, yet the interplay between them is not fully understood; inconsistent results possibly stem from the multicomponent structure of math performance and WM. Using a network analysis approach, we investigated the drivers of the edges between MA, WM, and math performance, and the contribution of each node to the network. First, 116 women completed a battery of tests and questionnaires. Second, we explored the generalizability of our model by applying it to a new dataset (Skagerlund et al., 2019; conceptual replication). The results revealed that: (1) the links between MA and WM depend on specific task properties, specifically WM tasks that require the manipulation of numbers; (2) WM and MA are independently linked to math performance; and (3) each WM task is associated with different math abilities. The study provides a strong and reliable model showing the direct effects of math anxiety on math performance.
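The edge-estimation logic of such a network model can be sketched in a few lines: partial correlations, obtained from the inverse of the covariance matrix, give the unique association between each pair of variables after conditioning on all the others. The sketch below uses random placeholder data and omits the regularization (e.g., graphical lasso) and stability checks a real analysis would require.

```python
# Partial-correlation network edges from the precision (inverse covariance)
# matrix. Random placeholder data; real analyses add regularization and
# stability checks.
import numpy as np

rng = np.random.default_rng(11)
n_obs, n_vars = 116, 5                      # e.g., MA, WM tasks, math scores
data = rng.standard_normal((n_obs, n_vars))

precision = np.linalg.inv(np.cov(data, rowvar=False))
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)  # off-diagonal: edge weights
np.fill_diagonal(partial_corr, 1.0)

print(np.round(partial_corr, 2))
```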
Affiliation(s)
- Nachshon Korem
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA; Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, Israel
- Lital Daches Cohen
- Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, Israel; Department of Learning Disabilities, University of Haifa, Israel
- Orly Rubinsten
- Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, Israel; Department of Learning Disabilities, University of Haifa, Israel
29
Sarafoglou A, van der Heijden A, Draws T, Cornelisse J, Wagenmakers EJ, Marsman M. Combine Statistical Thinking With Open Scientific Practice: A Protocol of a Bayesian Research Project. Psychology Learning and Teaching 2022. DOI: 10.1177/14757257221077307.
Abstract
Current developments in the statistics community suggest that modern statistics education should be structured holistically, that is, by allowing students to work with real data and to answer concrete statistical questions, but also by educating them about alternative frameworks, such as Bayesian inference. In this article, we describe how we incorporated such a holistic structure in a Bayesian research project on ordered binomial probabilities. The project was conducted with a group of three undergraduate psychology students who had basic knowledge of Bayesian statistics and programming but lacked formal mathematical training. The research project aimed to (1) convey the basic mathematical concepts of Bayesian inference; (2) have students experience the entire empirical cycle, including the collection, analysis, and interpretation of data; and (3) teach students open science practices.
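One way to make the project's statistical target concrete: with conjugate Beta posteriors, the posterior probability of an order constraint on two binomial rates can be estimated directly from posterior draws. The counts below are invented for illustration and are not from the project.

```python
# Posterior probability of an order constraint theta1 < theta2 for two
# binomial rates with Beta(1, 1) priors. Invented counts, for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
k1, n1 = 12, 40   # successes / trials in condition 1
k2, n2 = 22, 40   # successes / trials in condition 2

draws1 = stats.beta(1 + k1, 1 + n1 - k1).rvs(100_000, random_state=rng)
draws2 = stats.beta(1 + k2, 1 + n2 - k2).rvs(100_000, random_state=rng)

print(f"P(theta1 < theta2 | data) = {np.mean(draws1 < draws2):.3f}")
```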
Affiliation(s)
- Anna van der Heijden
- Department of Psychology, University of Amsterdam, The Netherlands
- Currently at De Groene Lobby, The Netherlands
- Tim Draws
- Faculty of Engineering, Mathematics and Computer Science, Delft University of Technology, The Netherlands
- Joran Cornelisse
- Department of Psychology, University of Amsterdam, The Netherlands
- Currently at SocialDatabase, The Netherlands
- Maarten Marsman
- Department of Psychology, University of Amsterdam, The Netherlands
30
Tullett AM. The Limitations of Social Science as the Arbiter of Blame: An Argument for Abandoning Retribution. Perspectives on Psychological Science 2022; 17:995-1007. PMID: 35050799. DOI: 10.1177/17456916211033284.
Abstract
The U.S. criminal-justice system has consequentialist and retributivist goals: it considers what is best for society as well as how to punish people in a manner proportional to their crimes. In deciding on the degree of retribution that is called for, the system attempts to determine the blameworthiness, or culpability, of the people involved, weighing factors such as their ability to think rationally, their freedom from coercion, and whether their actions were out of character. These determinations hinge on social-scientific research that is not strong enough to justify such decisions. In this article, I challenge the social-scientific basis for determining culpability on three grounds: replicability, generalizability, and inferential strength. In light of the limitations of this research, I argue that the criminal-justice system should abandon its retributive goals and pursue a more consequentialist, and more reparative, form of justice.
31
Schöllhorn WI, Rizzi N, Slapšinskaitė-Dackevičienė A, Leite N. Always Pay Attention to Which Model of Motor Learning You Are Using. International Journal of Environmental Research and Public Health 2022; 19:711. PMID: 35055533. PMCID: PMC8776195. DOI: 10.3390/ijerph19020711.
Abstract
This critical review considers the epistemological and historical background of the theoretical construct of motor learning for a more differentiated understanding. More than simply reflecting critically on the models that are used to solve problems, whether they are applied in therapy, physical education, or training practice, this review seeks to respond constructively to the recent discussion caused by the replication crisis in the life sciences. To this end, an in-depth review of contemporary motor learning approaches is provided, with a pragmatism-oriented clarification of the researcher's intentions on fundamentals (what?), subjects (for whom?), time intervals (when?), and purpose (for what?). The complexity in which the processes of movement acquisition, learning, and refinement take place removes their predictable and linear character and therefore, from an applied point of view, invites a great deal of caution when trying to make generalization claims. Particularly when we attempt to understand and study these phenomena in unpredictable and dynamic contexts, it is recommended that scientists and practitioners seek to better understand the central role that the individual and their situatedness play in the system. In this way, we will be closer to making a meaningful and authentic contribution to the advancement of knowledge, and not merely for the sake of renaming inventions.
Affiliation(s)
- Wolfgang I. Schöllhorn
- Department of Training and Movement Science, Institute of Sport Science, Johannes Gutenberg-University Mainz, 55099 Mainz, Germany
- Nikolas Rizzi
- Department of Training and Movement Science, Institute of Sport Science, Johannes Gutenberg-University Mainz, 55099 Mainz, Germany
- Agnė Slapšinskaitė-Dackevičienė
- Department of Sports Medicine, Faculty of Nursing, Medical Academy, Lithuanian University of Health Sciences, Tilžės g. 18, 47181 Kaunas, Lithuania
- Nuno Leite
- Research Center in Sports Sciences, Health Sciences and Human Development (CIDESD), Department of Sport Sciences, Exercise and Health, University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal
32
Hagger MS. Developing an open science 'mindset'. Health Psychol Behav Med 2021; 10:1-21. [PMID: 34993003 PMCID: PMC8725925 DOI: 10.1080/21642850.2021.2012474] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Accepted: 11/24/2021] [Indexed: 11/21/2022] Open
Abstract
BACKGROUND Identification of widespread biases present in reported research findings in many scientific disciplines, including psychology, such as failures to replicate and the likely extensive application of questionable research practices, has raised serious concerns over the reliability and trustworthiness of scientific research. This has led to the development of, and advocacy for, 'open science' practices, including data, materials, analysis, and output sharing, pre-registration of study predictions and analysis plans, and increased access to published research findings. Implementation of such practices has been enthusiastic in some quarters, but literacy in, and adoption of, these practices have lagged among many researchers in the scientific community. ADVANCES In the current article, I propose that researchers adopt an open science 'mindset', a comprehensive approach to open science predicated on researchers' operating under the basic assumption that, wherever possible, open science practices will be a central component of all steps of their research projects. The primary, defining feature of the mindset is a commitment to open science principles in all research projects from inception to dissemination. Other features of the mindset include the assumption that all components of research projects (e.g. pre-registered hypotheses, protocols, materials, analysis plans, data, and output) will be accessible broadly; pro-active selection of open fora to disseminate research components and findings; open and transparent dissemination of reports of the research findings in advance of, and after, formal publication; and active promotion of open science practices through education, modeling, and advocacy. CONCLUSION The open science mindset is a 'farm to fork' approach to open science aimed at promoting comprehensive quality in application of open science, and widening participation in open science practices so that they become the norm in research in health psychology and behavioral medicine going forward.
Affiliation(s)
- Martin S. Hagger
- Department of Psychological Sciences and Health Sciences Research Institute, University of California, Merced, CA, USA
- Faculty of Sport and Health Sciences, University of Jyväskylä, Jyväskylä, Finland
- School of Applied Psychology, Griffith University, Mt. Gravatt, Australia
33
Thompson WB, Radell ML. Acceptance of anomalous research findings: explaining treatment implausibility reduces belief in far-fetched results. PeerJ 2021; 9:e12532. [PMID: 34900438 PMCID: PMC8621712 DOI: 10.7717/peerj.12532] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Accepted: 11/02/2021] [Indexed: 11/30/2022] Open
Abstract
Research findings are best understood by considering contextual factors such as treatment plausibility: how likely it is that a studied treatment or manipulation is effective, based on theory and data. If a treatment is implausible, then more evidence should be required before believing it has an effect. The current study assessed the extent to which the interpretation of a research finding is affected by treatment plausibility. Participant age varied from 18 to 82 (M = 27.4, SD = 9.4), and about half of the participants (53%) were college students. A total of 600 participants read a brief news article about an experiment with a new type of psychotherapy for weight loss. The current study used a 2 (treatment plausibility) × 3 (results type) between-subjects factorial design. Treatment plausibility had two levels: (1) a plausible cognitive behavioral therapy and (2) an implausible "psychic reinforcement therapy" that was described as employing psychic messages to promote weight loss. The three levels of the results type factor varied how the study results were presented in the article: (1) standard results with no mention of treatment plausibility, (2) standard results followed by interpretive statements focused on treatment plausibility, and (3) no results, with the study described as still in progress. Participants rated their belief in the effectiveness of the therapy on a scale of 0 to 100% in 10% increments. When treatment plausibility was not discussed in the article, average ratings for the implausible therapy were relatively high (M = 63.1%, SD = 25.0, 95% CI [58.2–68.1]) and similar to those for the plausible therapy (M = 69.2%, SD = 21.5, 95% CI [65.0–73.5]). Ratings for the implausible treatment were moderately lower when the article explained why the results supporting it were questionable (M = 48.5%, SD = 26.6, 95% CI [43.2–53.8]). The findings of the current study suggest that students and other members of the public may draw incorrect inferences from research partly because they do not appreciate the importance of treatment plausibility. This could be remedied, though not completely, by explicitly discussing the plausibility of the treatment based on theory and prior data.
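The plausibility argument has a simple Bayesian reading: the same evidence should move belief far less when the treatment's prior probability is low. A small illustration of our own (the numbers are invented, not the study's):

# Illustrative only: how prior treatment plausibility should temper belief
# in a positive result. Posterior odds = prior odds * Bayes factor.
def posterior_prob(prior_prob: float, bayes_factor: float) -> float:
    """Probability the treatment works, given evidence with this Bayes factor."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

bf = 3.0  # hypothetical modest evidence from one positive study
for label, prior in [("plausible CBT", 0.50), ("psychic reinforcement", 0.01)]:
    print(f"{label}: prior {prior:.2f} -> posterior {posterior_prob(prior, bf):.2f}")
# plausible CBT: prior 0.50 -> posterior 0.75
# psychic reinforcement: prior 0.01 -> posterior 0.03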
Affiliation(s)
- W Burt Thompson
- Department of Psychology, Niagara University, Lewiston, NY, United States of America
- Milen L Radell
- Department of Psychology, Niagara University, Lewiston, NY, United States of America
34
Psychologists update their beliefs about effect sizes after replication studies. Nat Hum Behav 2021; 5:1663-1673. [PMID: 34811490 DOI: 10.1038/s41562-021-01220-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 09/16/2021] [Indexed: 11/08/2022]
Abstract
Self-correction, a key feature distinguishing science from pseudoscience, requires that scientists update their beliefs in light of new evidence. However, people are often reluctant to change their beliefs. We examined belief updating in action by tracking research psychologists' beliefs in psychological effects before and after the completion of four large-scale replication projects. We found that psychologists did update their beliefs; they updated as much as they predicted they would, but not as much as our Bayesian model suggests they should if they trust the results. We found no evidence that psychologists became more critical of replications when it would have preserved their pre-existing beliefs. We also found no evidence that personal investment or lack of expertise discouraged belief updating, but people higher on intellectual humility updated their beliefs slightly more. Overall, our results suggest that replication studies can contribute to self-correction within psychology, but psychologists may underweight their evidentiary value.
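A toy version of the kind of Bayesian benchmark described (our sketch, not the authors' model): belief that an effect is real is updated by how likely the replication outcome is if the effect is real versus not.

# Toy belief-updating benchmark (not the authors' model): update the
# probability that an effect is real after a replication outcome.
def update(p_real: float, replicated: bool,
           power: float = 0.9, alpha: float = 0.05) -> float:
    # Likelihood of the outcome if the effect is real vs. not real.
    like_real = power if replicated else 1 - power
    like_null = alpha if replicated else 1 - alpha
    numer = p_real * like_real
    return numer / (numer + (1 - p_real) * like_null)

belief = 0.80                                   # hypothetical prior belief
belief = update(belief, replicated=False)
print(f"belief after a failed replication: {belief:.2f}")  # ~0.30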
35
Colizoli O, de Gee JW, van der Zwaag W, Donner TH. Functional magnetic resonance imaging responses during perceptual decision-making at 3 and 7 T in human cortex, striatum, and brainstem. Hum Brain Mapp 2021; 43:1265-1279. [PMID: 34816533 PMCID: PMC8837598 DOI: 10.1002/hbm.25719] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Revised: 11/03/2021] [Accepted: 11/07/2021] [Indexed: 11/29/2022] Open
Abstract
While functional magnetic resonance imaging (fMRI) at ultra‐high field (7 T) promises a general increase in sensitivity compared to lower field strengths, the benefits may be most pronounced for specific applications. The current study aimed to evaluate the relative benefit of 7 over 3 T fMRI for the assessment of responses evoked in different brain regions by a well‐controlled cognitive task. At 3 and 7 T, the same participants made challenging perceptual decisions about visual motion combined with monetary rewards for correct choices. Previous work on this task has extensively characterized the underlying cognitive computations and single‐cell responses in cortical and subcortical structures. We quantified the evoked fMRI responses in extrastriate visual cortical areas, the striatum, and the brainstem during the decision interval and the post‐feedback interval of the task. The dependence of response amplitudes on field strength during the decision interval differed between cortical, striatal, and brainstem regions, with a generally bigger 7 versus 3 T benefit in subcortical structures. We also found stronger responses during relatively easier than harder decisions at 7 T for dopaminergic midbrain nuclei, in line with reward expectation. Our results demonstrate the potential of 7 T fMRI for illuminating the contribution of small brainstem nuclei to the orchestration of cognitive computations in the human brain.
Collapse
Affiliation(s)
- Olympia Colizoli
- Section Computational Cognitive Neuroscience, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Jan Willem de Gee
- Section Computational Cognitive Neuroscience, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Tobias H Donner
- Section Computational Cognitive Neuroscience, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands; Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
36
Reporting in Experimental Philosophy: Current Standards and Recommendations for Future Practice. REVIEW OF PHILOSOPHY AND PSYCHOLOGY 2021; 12:49-73. [PMID: 34721740 PMCID: PMC8550012 DOI: 10.1007/s13164-018-0414-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
Abstract
Recent replication crises in psychology and other fields have led to intense reflection about the validity of common research practices. Much of this reflection has focussed on reporting standards, and how they may be related to the questionable research practices that could underlie a high proportion of irreproducible findings in the published record. As a developing field, it is particularly important for Experimental Philosophy to avoid some of the pitfalls that have beset other disciplines. To this end, here we provide a detailed, comprehensive assessment of current reporting practices in Experimental Philosophy. We focus on the quality of statistical reporting and the disclosure of information about study methodology. We assess all the articles using quantitative methods (n = 134) that were published over the years 2013–2016 in 29 leading philosophy journals. We find that null hypothesis significance testing is the prevalent statistical practice in Experimental Philosophy, although relying solely on this approach has been criticised in the psychological literature. To augment this approach, various additional measures have become commonplace in other fields, but we find that Experimental Philosophy has adopted these only partially: 53% of the papers report an effect size, 28% confidence intervals, 1% examined prospective statistical power and 5% report observed statistical power. Importantly, we find no direct relation between an article’s reporting quality and its impact (numbers of citations). We conclude with recommendations for authors, reviewers and editors in Experimental Philosophy, to facilitate making research statistically-transparent and reproducible.
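The measures being tallied are straightforward to compute; here is a sketch of ours for a simulated two-group study (observed power is included only because 5% of the surveyed papers reported it; as a post hoc statistic it is generally discouraged):

# Sketch: the reporting measures tallied above, for simulated two-group
# data -- Cohen's d, a 95% CI for the mean difference, and observed power.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)
a = rng.normal(0.4, 1.0, 50)   # hypothetical condition A
b = rng.normal(0.0, 1.0, 50)   # hypothetical condition B

diff = a.mean() - b.mean()
sp = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
             / (len(a) + len(b) - 2))            # pooled SD
d = diff / sp                                     # Cohen's d

se = sp * np.sqrt(1 / len(a) + 1 / len(b))
df = len(a) + len(b) - 2
ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, df) * se

obs_power = TTestIndPower().power(effect_size=d, nobs1=len(a),
                                  alpha=0.05, ratio=1.0)
print(f"d = {d:.2f}, 95% CI for diff = [{ci[0]:.2f}, {ci[1]:.2f}], "
      f"observed power = {obs_power:.2f}")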
37
Three Suggestions to Support Ongoing Replication Efforts: A Checklist, Visual Comparison, and Rating Scale. METHODS IN PSYCHOLOGY 2021. [DOI: 10.1016/j.metip.2021.100045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
38
Miller J, Ulrich R. Optimizing Research Output: How Can Psychological Research Methods Be Improved? Annu Rev Psychol 2021; 73:691-718. [PMID: 34614371 DOI: 10.1146/annurev-psych-020821-094927] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Recent evidence suggests that research practices in psychology and many other disciplines are far less effective than previously assumed, which has led to what has been called a "crisis of confidence" in psychological research (e.g., Pashler & Wagenmakers 2012). In response to the perceived crisis, standard research practices have come under intense scrutiny, and various changes have been suggested to improve them. The burgeoning field of metascience seeks to use standard quantitative data-gathering and modeling techniques to understand the reasons for inefficiency, to assess the likely effects of suggested changes, and ultimately to tell psychologists how to do better science. We review the pros and cons of suggested changes, highlighting the many complex research trade-offs that must be addressed to identify better methods.
Affiliation(s)
- Jeff Miller
- Department of Psychology, University of Otago, Dunedin 9016, New Zealand
- Rolf Ulrich
- Department of Psychology, University of Tübingen, 72074 Tübingen, Germany
39
Abstract
Psychology's current crisis attends most visibly to perceived problems with statistical models, methods, publication practices, and career incentives. Rarely is close attention given to the objects of inquiry, that is, to ontological matters, yet the crisis-related literature does feature statements about the nature of psychology's objects. Close analysis of these ontological claims reveals discrepant understandings: some researchers assume objects to be stable and singular while others posit them to be dynamic and complex. Nevertheless, both views presume the objects under scrutiny to be real. The analysis also finds each of these ontological claims to be associated not only with particular method prescriptions but also with distinct notions of the scientific self. Though both take the scientific self to be objective, one figures the scientist as not always a rational actor and, therefore, requiring some behavior regulation, while the other sees the scientist as largely capable of self-governance, sustained through painstakingly acquired expertise and self-control. The fate of these prevalent assemblages of object, method, and scientific self remains to be determined, yet as conditions of possibility they portend quite different futures. Following description of the assemblages, the article ventures a futuristic portrayal of the scientific practices they each might engender.
Affiliation(s)
- Jill Morawski
- Department of Psychology, Wesleyan University, Middletown, CT, USA
40
Fu J, Zhao Y, Feng X, Wang Y, Yu Z, Hua L, Wang S, Li J. How is fatalistic determinism linked to depression? The mediating role of self-control and resilience. PERSONALITY AND INDIVIDUAL DIFFERENCES 2021. [DOI: 10.1016/j.paid.2021.110992] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
41
Hudson R. Explicating Exact versus Conceptual Replication. ERKENNTNIS 2021; 88:2493-2514. [PMID: 37388139 PMCID: PMC10300171 DOI: 10.1007/s10670-021-00464-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Accepted: 09/10/2021] [Indexed: 07/01/2023]
Abstract
What does it mean to replicate an experiment? A distinction is often drawn between 'exact' (or 'direct') and 'conceptual' replication. However, in recent work, Uljana Feest argues that the notion of replication in itself, whether exact or conceptual, is flawed due to the problem of systematic error, and Edouard Machery argues that, although the notion of replication is not flawed, we should nevertheless dispense with the distinction between exact and conceptual replication. My plan in this paper is to defend the value of replication, along with the distinction between exact and conceptual replication, from the critiques of Feest and Machery. To that end, I provide an explication of conceptual replication, and distinguish it from what I call 'experimental' replication. On the basis, then, of a tripartite distinction between exact, experimental and conceptual replication, I argue in response to Feest that replication is still informative despite the prospect of systematic error. I also rebut Machery's claim that conceptual replication is fundamentally confused and wrongly conflates replication and extension, and in turn raise some objections to his own Resampling Account of replication.
Affiliation(s)
- Robert Hudson
- Department of Philosophy, University of Saskatchewan, 9 Campus Drive, Saskatoon, SK S7N 5A5 Canada
42
Wegener DT, Fabrigar LR, Pek J, Hoisington-Shaw K. Evaluating Research in Personality and Social Psychology: Considerations of Statistical Power and Concerns About False Findings. PERSONALITY AND SOCIAL PSYCHOLOGY BULLETIN 2021; 48:1105-1117. [PMID: 34308722 DOI: 10.1177/01461672211030811] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Traditionally, statistical power was viewed as relevant to research planning but not evaluation of completed research. However, following discussions of high false finding rates (FFRs) associated with low statistical power, the assumed level of statistical power has become a key criterion for research acceptability. Yet, the links between power and false findings are not as straightforward as described. Assumptions underlying FFR calculations do not reflect research realities in personality and social psychology. Even granting the assumptions, the FFR calculations identify important limitations to any general influences of statistical power. Limits for statistical power in inflating false findings can also be illustrated through the use of FFR calculations to (a) update beliefs about the null or alternative hypothesis and (b) assess the relative support for the null versus alternative hypothesis when evaluating a set of studies. Taken together, statistical power should be de-emphasized in comparison to current uses in research evaluation.
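The FFR calculations referred to here follow a standard form; a sketch (with hypothetical inputs) of both the single-study rate and the belief updating across a set of significant studies that the abstract describes:

# Sketch of the standard false-finding-rate (FFR) calculation and of
# updating belief across a set of studies. Inputs are hypothetical.
def ffr(prior_h1: float, power: float, alpha: float = 0.05) -> float:
    """P(H0 true | significant result): significant nulls over all significants."""
    sig_null = alpha * (1 - prior_h1)
    sig_alt = power * prior_h1
    return sig_null / (sig_null + sig_alt)

print(f"FFR at 35% power: {ffr(prior_h1=0.5, power=0.35):.3f}")  # 0.125
print(f"FFR at 80% power: {ffr(prior_h1=0.5, power=0.80):.3f}")  # 0.059

# Updating across k independent significant studies: each multiplies the
# odds for H1 by power/alpha, so even low power compounds quickly.
prior_odds, power, alpha, k = 1.0, 0.35, 0.05, 3
post_odds = prior_odds * (power / alpha) ** k
print(f"P(H1 | {k} significant studies): {post_odds / (1 + post_odds):.3f}")  # ~0.997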
Affiliation(s)
- Jolynn Pek
- The Ohio State University, Columbus, USA
43
Roisman GI. Editorial: A Vision of a Fair and Efficient, Diverse and Inclusive, Cumulative Science of Child Development in the Best and Worst of Times. Child Dev 2021; 92:451-465. [PMID: 33783829 DOI: 10.1111/cdev.13538] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
If you have come here in search of the submission requirements at Child Development, this is perhaps not the editorial you are looking for. Consider visiting instead our revised instructions to authors. Nor does this essay simply detail the priorities of the incoming board and the initiatives we will be implementing over the next 6 years, though these are summarized in Table 1. Rather, this editorial was written to articulate clearly the scientific values underlying current plans and policies at the journal in support of publishing the highest quality and highest impact research on child development. I emphasize two interrelated themes: (a) our plans for continuing to emphasize and enhance diversity and inclusion in research on child development and (b) our policies that remove impediments to cumulative developmental science. Discussion focuses primarily on how we are incentivizing efforts to achieve these widely held yet too often neglected goals, taking as its point of departure emerging challenges to a fair and efficient editorial process at the journal. In so doing, I mean to highlight the essential work of continuously cultivating editorial structures that firmly embed in developmental science fundamental scientific values, principles that make it possible for research on child development to flourish in both the best and worst of times.
44
Tiokhin L, Yan M, Morgan TJH. Competition for priority harms the reliability of science, but reforms can help. Nat Hum Behav 2021; 5:857-867. [PMID: 33510392 DOI: 10.1038/s41562-020-01040-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2020] [Accepted: 12/18/2020] [Indexed: 01/30/2023]
Abstract
Incentives for priority of discovery are hypothesized to harm scientific reliability. Here, we evaluate this hypothesis by developing an evolutionary agent-based model of a competitive scientific process. We find that rewarding priority of discovery causes populations to culturally evolve towards conducting research with smaller samples. This reduces research reliability and the information value of the average study. Increased start-up costs for setting up single studies and increased payoffs for secondary results (also known as scoop protection) attenuate the negative effects of competition. Furthermore, large rewards for negative results promote the evolution of smaller sample sizes. Our results confirm the logical coherence of scoop protection reforms at several journals. Our results also imply that reforms to increase scientific efficiency, such as rapid journal turnaround times, may produce collateral damage by incentivizing lower-quality research; in contrast, reforms that increase start-up costs, such as pre-registration and registered reports, may generate incentives for higher-quality research.
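A toy re-implementation of the central dynamic (not the authors' model; all parameters are invented): labs racing on the same question evolve their sample sizes by payoff-biased copying, and a reward for scooped (second-place) results weakens the race to be first.

# Toy evolutionary model of competition for priority (not the authors'
# implementation; all parameters are invented). In each generation labs
# race pairwise; the lab with the smaller sample finishes first and takes
# the priority reward, then sample sizes evolve by payoff-biased copying.
import numpy as np

rng = np.random.default_rng(1)

def evolve(scoop_share: float, n_labs: int = 100, gens: int = 300) -> float:
    n = rng.uniform(10, 200, n_labs)              # initial sample sizes
    for _ in range(gens):
        rivals = rng.permutation(n_labs)          # random pairwise races
        first = n <= n[rivals]                    # smaller n publishes first
        payoff = np.where(first, 1.0, scoop_share) + 0.01
        # Payoff-biased copying of sample sizes, with mutation.
        models = rng.choice(n_labs, size=n_labs, p=payoff / payoff.sum())
        n = np.clip(n[models] + rng.normal(0, 2, n_labs), 2, 400)
    return float(n.mean())

for share in (0.0, 0.5):
    print(f"scoop share {share:.1f}: mean evolved sample size ~ {evolve(share):.0f}")
# Typically, with no reward for scooped labs, sample sizes collapse toward
# the minimum; scoop protection (share > 0) weakens the race and sustains
# larger samples, echoing the attenuation effect reported above.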
Affiliation(s)
- Leonid Tiokhin
- Human-Technology Interaction Group, Eindhoven University of Technology, Eindhoven, the Netherlands
- Minhua Yan
- School of Human Evolution and Social Change, Arizona State University, Tempe, AZ, USA; Institute of Human Origins, Arizona State University, Tempe, AZ, USA
- Thomas J H Morgan
- School of Human Evolution and Social Change, Arizona State University, Tempe, AZ, USA; Institute of Human Origins, Arizona State University, Tempe, AZ, USA
45
Eden D. The science of leadership: A journey from survey research to field experimentation. THE LEADERSHIP QUARTERLY 2021. [DOI: 10.1016/j.leaqua.2020.101472] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
46
Graf-Vlachy L. Is the readability of abstracts decreasing in management research? REVIEW OF MANAGERIAL SCIENCE 2021. [DOI: 10.1007/s11846-021-00468-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
The readability of scientific texts is critical for the successful distribution of research findings. I replicate a recent study which found that the abstracts of scientific articles in the life sciences became less readable over time. Specifically, I sample 28,345 abstracts from 17 of the leading journals in the field of management and organization over 3 decades, and study two established indicators of readability over time, namely the Flesch Reading Ease and the New Dale–Chall Readability Formula. I find a modest trend towards less readable abstracts, which leads to an increase in articles that are extremely hard to read from 12% in the first decade of the sample to 16% in the final decade of the sample. I further find that an increasing number of authors partially explains this trend, as do the use of scientific jargon and corresponding author affiliations with institutions in English-speaking countries. I discuss implications for authors, reviewers, and editors in the field of management.
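Both indicators have standard implementations; a sketch using the textstat package (our tooling choice, not necessarily the author's):

# Sketch: computing the two readability indicators used in the study with
# the textstat package (our tooling choice, not necessarily the author's).
import textstat

abstract = ("We replicate a recent study on the readability of scientific "
            "abstracts and find a modest decline over three decades.")

fre = textstat.flesch_reading_ease(abstract)          # higher = easier
dc = textstat.dale_chall_readability_score(abstract)  # Dale-Chall score
print(f"Flesch Reading Ease: {fre:.1f}")
print(f"Dale-Chall:          {dc:.1f}")
# Flesch scores below 30 fall in the conventional "very difficult" band,
# the kind of abstract the study counts as extremely hard to read.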
47
Yu Y, Hua L, Feng X, Wang Y, Yu Z, Zi T, Zhao Y, Li J. True Grit in Learning Math: The Math Anxiety-Achievement Link Is Mediated by Math-Specific Grit. Front Psychol 2021; 12:645793. [PMID: 33889116 PMCID: PMC8055855 DOI: 10.3389/fpsyg.2021.645793] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2020] [Accepted: 03/09/2021] [Indexed: 11/21/2022] Open
Abstract
In this study, we tested a possible mechanism of the association between math anxiety and math achievement: the mediating role of math-specific grit (i.e., sustaining effort in the face of adversity when learning math). In Study 1, a sample of 10th grade students (N = 222) completed a battery of personality and attitude questionnaires, and math achievement was indexed by curriculum-based examination scores. Mediation analyses indicated that math-specific grit, but not domain-general grit, mediated the relationship between math anxiety and math achievement. In Study 2, we replicated and extended the above findings with another sample of 11th grade students (N = 465). Mediation analyses indicated that math-specific grit and math-specific procrastination played sequential mediating roles in the relationship between math anxiety and math achievement. That is, individuals with higher math anxiety were less gritty in math learning, which possibly led them to procrastinate more on math work, which in turn may result in worse math achievement. In summary, the current study provides the first evidence that math-specific grit may mediate the relationship between math anxiety and math achievement. Furthermore, it also demonstrates the value of math-specific grit over domain-general grit in predicting math success, which invites a broader investigation of subject-specific grit.
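The simple-mediation step in Study 1 can be sketched generically; the data below are simulated and the path coefficients invented, but the indirect-effect logic (product of paths, percentile bootstrap CI) is standard:

# Generic sketch of simple mediation (simulated data, not the study's):
# indirect effect a*b of anxiety on achievement through grit.
import numpy as np

rng = np.random.default_rng(42)
n = 222
anxiety = rng.normal(size=n)
grit = -0.4 * anxiety + rng.normal(size=n)            # hypothetical path a
achievement = 0.5 * grit - 0.1 * anxiety + rng.normal(size=n)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # slope of m ~ x
    X = np.column_stack([m, x, np.ones_like(x)])      # y ~ m + x + intercept
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]       # slope of y ~ m, given x
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                       # resample cases
    boot.append(indirect(anxiety[idx], grit[idx], achievement[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(anxiety, grit, achievement):.3f} "
      f"(true value -0.20), 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")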
Affiliation(s)
- Liyun Hua
- College of Teacher Education, Dali University, Dali, China
- Xingwang Feng
- College of Teacher Education, Dali University, Dali, China
- Yueru Wang
- College of Teacher Education, Dali University, Dali, China
- Zongren Yu
- College of Teacher Education, Dali University, Dali, China
- Tong Zi
- College of Teacher Education, Dali University, Dali, China; No. 4 Middle School of Binchuan, Dali, China
- Yajun Zhao
- School of Education and Psychology, Southwest Minzu University, Chengdu, China
- Jingguang Li
- College of Teacher Education, Dali University, Dali, China
48
Hudson R. Should We Strive to Make Science Bias-Free? A Philosophical Assessment of the Reproducibility Crisis. JOURNAL FOR GENERAL PHILOSOPHY OF SCIENCE = ZEITSCHRIFT FUR ALLGEMEINE WISSENSCHAFTSTHEORIE 2021; 52:389-405. [PMID: 34720421 PMCID: PMC8550477 DOI: 10.1007/s10838-020-09548-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 12/16/2020] [Indexed: 06/13/2023]
Abstract
Recently, many scientists have become concerned about an excessive number of failures to reproduce statistically significant effects. The situation has become dire enough that it has been named the 'reproducibility crisis'. After reviewing the relevant literature to confirm the observation that scientists do indeed view replication as currently problematic, I explain in philosophical terms why the replication of empirical phenomena, such as statistically significant effects, is important for scientific progress. Following that explanation, I examine various diagnoses of the reproducibility crisis, and argue that for the majority of scientists the crisis is due, at least in part, to a form of publication bias. This conclusion sets the stage for an assessment of the view that evidential relations in science are inherently value-laden, a view championed by Heather Douglas and Kevin Elliott. I argue, in response to Douglas and Elliott, and as motivated by the meta-scientific resistance scientists harbour to a publication bias, that if we advocate the value-ladenness of science the result would be a deepening of the reproducibility crisis.
Affiliation(s)
- Robert Hudson
- Department of Philosophy, University of Saskatchewan, 9 Campus Drive, Saskatoon, SK S7N 5A5 Canada
49
Gordon M, Viganola D, Dreber A, Johannesson M, Pfeiffer T. Predicting replicability-Analysis of survey and prediction market data from large-scale forecasting projects. PLoS One 2021; 16:e0248780. [PMID: 33852589 PMCID: PMC8046229 DOI: 10.1371/journal.pone.0248780] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 03/05/2021] [Indexed: 12/21/2022] Open
Abstract
The reproducibility of published research has become an important topic in science policy. A number of large-scale replication projects have been conducted to gauge the overall reproducibility in specific academic fields. Here, we present an analysis of data from four studies which sought to forecast the outcomes of replication projects in the social and behavioural sciences, using human experts who participated in prediction markets and answered surveys. Because the number of findings replicated and predicted in each individual study was small, pooling the data offers an opportunity to evaluate hypotheses regarding the performance of prediction markets and surveys at a higher power. In total, peer beliefs were elicited for the replication outcomes of 103 published findings. We find there is information within the scientific community about the replicability of scientific findings, and that both surveys and prediction markets can be used to elicit and aggregate this information. Our results show prediction markets can determine the outcomes of direct replications with 73% accuracy (n = 103). Both the prediction market prices and the average survey responses are correlated with outcomes (0.581 and 0.564, respectively; both p < .001). We also found a significant relationship between p-values of the original findings and replication outcomes. The dataset is made available through the R package "pooledmaRket" and can be used to further study community beliefs toward replication outcomes as elicited in the surveys and prediction markets.
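The headline numbers are simple to recompute on any such dataset; a sketch on invented data (the real data are distributed in the authors' R package "pooledmaRket"; this Python version is ours):

# Sketch of the headline metrics on hypothetical data (the real data are
# in the authors' R package "pooledmaRket"; this Python version is ours).
import numpy as np

rng = np.random.default_rng(7)
n = 103
replicated = rng.random(n) < 0.55                 # hypothetical outcomes
# Hypothetical market prices, noisily informative about the outcome.
price = np.clip(0.5 + 0.25 * (replicated - 0.5) * 2
                + rng.normal(0, 0.2, n), 0.01, 0.99)

pred = price > 0.5                                # market's binary call
accuracy = (pred == replicated).mean()
corr = np.corrcoef(price, replicated)[0, 1]       # point-biserial correlation
print(f"accuracy: {accuracy:.2f}, price-outcome correlation: {corr:.2f}")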
Affiliation(s)
- Anna Dreber
- Stockholm School of Economics, Stockholm, Sweden
50
Rowe LI, Hattie J, Hester R. g versus c: comparing individual and collective intelligence across two meta-analyses. Cogn Res Princ Implic 2021; 6:26. [PMID: 33813669 PMCID: PMC8019454 DOI: 10.1186/s41235-021-00285-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2019] [Accepted: 03/03/2021] [Indexed: 11/17/2022] Open
Abstract
Collective intelligence (CI) is said to manifest in a group's domain-general mental ability. It can be measured across a battery of group IQ tests and statistically reduced to a latent factor called the "c-factor." Advocates have found the c-factor predicts group performance better than individual IQ. We test this claim by meta-analyzing correlations between the c-factor and nine group performance criterion tasks generated by eight independent samples (N = 857 groups). Results indicated a moderate correlation, r = .26 (95% CI [.10, .40]). All but four studies comprising five independent samples (N = 366 groups) failed to control for the intelligence of individual members using individual IQ scores or their statistically reduced equivalent (i.e., the g-factor). A meta-analysis of this subset of studies found the average IQ of the groups' members had little to no correlation with group performance (r = .06, 95% CI [-.08, .20]). Around 80% of studies did not have enough statistical power to reliably detect correlations between the primary predictor variables and the criterion tasks. Though some of our findings are consistent with claims that a general factor of group performance may exist and relate positively to group performance, limitations suggest alternative explanations cannot be dismissed. We caution against prematurely embracing notions of the c-factor unless it can be independently and robustly replicated and demonstrated to be incrementally valid beyond the g-factor in group performance contexts.
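The pooling step is a standard Fisher-z meta-analysis of correlations; a sketch with invented per-sample values (fixed-effect weights of n - 3, not the paper's data):

# Sketch of pooling correlations via Fisher's z (fixed-effect weights);
# the r values and group counts below are illustrative, not the paper's.
import numpy as np

r = np.array([0.35, 0.18, 0.30, 0.12, 0.40])      # per-sample c-factor rs
n = np.array([60, 120, 80, 260, 90])              # groups per sample

z = np.arctanh(r)                                 # Fisher transform
w = n - 3                                         # inverse-variance weights
z_bar = (w * z).sum() / w.sum()
se = 1 / np.sqrt(w.sum())
lo, hi = np.tanh(z_bar + np.array([-1.96, 1.96]) * se)
print(f"pooled r = {np.tanh(z_bar):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")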
Affiliation(s)
- Luke I Rowe
- National School of Education, Australian Catholic University, East Melbourne, VIC, Australia.
- John Hattie
- Science of Learning Research Centre, The University of Melbourne, Parkville, VIC, Australia
- Robert Hester
- School of Psychological Sciences, The University of Melbourne, Parkville, VIC, Australia