1. Ophir Y, Walter D, Jamieson PE, Jamieson KH. Factors Assessing Science's Self-Presentation model and their effect on conservatives' and liberals' support for funding science. Proc Natl Acad Sci U S A 2023; 120:e2213838120. [PMID: 37695894; PMCID: PMC10515153; DOI: 10.1073/pnas.2213838120]
Abstract
A confirmatory factor analysis (CFA) of responses to 13 questions from a 2022 national probability sample of 1,154 US adults supported the existence of five factors that we argue assess perceptions of Factors Assessing Science's Self-Presentation (FASS). These factors also predict support for increasing federal funding of science and, separately, support for federal funding of basic research. Each of the factors reflects perceptions of a key facet of scientists' self-presentation, science/scientists' adherence to professed norms, or science's benefits: specifically, that scientists are Credible, Prudent, and Unbiased and that science is Self-Correcting and Beneficial. The FASS model explained 40.6% of the variance in support for increasing federal funding for science and 33.7% in support for basic research. For both dependent variables, conservatives were less likely to be supportive when they perceived that science/scientists fail to overcome biases. The interactions between political ideology and both Prudence and Beneficial, however, were significant only when predicting Basic Research support. In that case, there were no differences between conservatives and liberals when perceptions of benefit were low, but when they were high, liberals' perception of benefit had a stronger association with support for funding than conservatives'. Among those perceiving that scientists lack prudence, liberals were more likely than conservatives to support funding basic research, but the difference disappeared when perceptions of prudence were very high. The factors could serve as across-time indicators of the public's assessment of the state of science.
Affiliation(s)
- Yotam Ophir: Department of Communication, University at Buffalo, State University of New York, Buffalo, NY 14228
- Dror Walter: Department of Communication, Georgia State University, Atlanta, GA 30303
- Patrick E. Jamieson: Annenberg Public Policy Center, University of Pennsylvania, Philadelphia, PA 19104
2. Are most published research findings false in a continuous universe? PLoS One 2022; 17:e0277935. [PMID: 36538521; PMCID: PMC9767354; DOI: 10.1371/journal.pone.0277935]
Abstract
Diagnostic screening models for the interpretation of null hypothesis significance test (NHST) results have been influential in highlighting the effect of selective publication on the reproducibility of the published literature, leading to John Ioannidis' much-cited claim that most published research findings are false. These models, however, are typically based on the assumption that hypotheses are dichotomously true or false, without considering that effect sizes differ across hypotheses. To address this limitation, we develop a simulation model that represents effect sizes explicitly, using different continuous distributions, while retaining other aspects of previous models such as publication bias and the pursuit of statistical significance. Our results show that the combination of selective publication, bias, low statistical power, and unlikely hypotheses consistently leads to high proportions of false positives, irrespective of the effect-size distribution assumed. Using continuous effect sizes also allows us to evaluate the degree of effect-size overestimation and the prevalence of estimates with the wrong sign in the literature, showing that the same factors that drive false-positive results also lead to errors in estimating effect-size direction and magnitude. Nevertheless, the relative influence of these factors on different metrics varies depending on the distribution assumed for effect sizes. The model is made available as an R Shiny app, allowing readers to explore features of the literature under various scenarios.
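The kind of screening simulation this abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' published model: the 30% prior probability of a real effect, the effect-size SD of 0.3, the per-group sample size of 30, and the 90% publication-bias rate are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_hyp=20_000, p_true=0.3, effect_sd=0.3, n=30, pub_bias=0.9):
    """Crude screening-model simulation with continuous effect sizes.

    A fraction p_true of hypotheses carry a real standardized effect drawn
    from a normal distribution; the rest are exact nulls. Each hypothesis is
    tested once with n observations per group, and most non-significant
    results are discarded to mimic selective publication.
    """
    has_effect = rng.random(n_hyp) < p_true
    true_effect = np.where(has_effect, rng.normal(0.0, effect_sd, n_hyp), 0.0)
    se = np.sqrt(2.0 / n)                       # SE of a two-group mean difference (unit variance)
    observed = rng.normal(true_effect, se)      # one noisy estimate per hypothesis
    significant = np.abs(observed / se) > 1.96  # two-sided test at alpha = 0.05
    published = significant | (rng.random(n_hyp) > pub_bias)
    sig_pub = significant & published

    # Share of published significant findings whose true effect is exactly zero
    false_pos = (sig_pub & ~has_effect).sum() / max(sig_pub.sum(), 1)
    # Among published significant real effects: wrong-sign rate and median inflation
    sig_real = sig_pub & has_effect
    sign_error = (np.sign(observed[sig_real]) != np.sign(true_effect[sig_real])).mean()
    inflation = np.median(np.abs(observed[sig_real]) / np.abs(true_effect[sig_real]))
    return false_pos, sign_error, inflation
```

Because effects are continuous rather than dichotomously true or false, the same run yields not only a false-positive proportion but also the sign-error and overestimation metrics the abstract mentions, which a binary true/false model cannot express.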
3.
Abstract
Concern over social scientists' inability to reproduce empirical research has spawned a vast and rapidly growing literature. The size and growth of this literature make it difficult for newly interested academics to come up to speed. Here, we provide a formal text modeling approach to characterize the entirety of the field, which allows us to summarize the breadth of this literature and identify core themes. We construct and analyze text networks built from 1,947 articles to reveal differences across social science disciplines within the body of reproducibility publications and to discuss the diversity of subtopics addressed in the literature. This field-wide view suggests that reproducibility is a heterogeneous problem with multiple sources for errors and strategies for solutions, a finding that is somewhat at odds with calls for largely passive remedies reliant on open science. We propose an alternative rigor and reproducibility model that takes an active approach to rigor prior to publication, which may overcome some of the shortfalls of the postpublication model.
Affiliation(s)
- James W Moody: Department of Sociology, Duke University, Durham, North Carolina, USA; Duke Network Analysis Center, Duke University, Durham, North Carolina, USA
- Lisa A Keister: Department of Sociology, Duke University, Durham, North Carolina, USA; Duke Network Analysis Center, Duke University, Durham, North Carolina, USA; Sanford School of Public Policy, Duke University, Durham, North Carolina, USA
- Maria C Ramos: Interdisciplinary Social Science Program, Florida State University, Tallahassee, Florida, USA
4. Kamali N, Rahimi F, Talebi Bezmin Abadi A. Learning from Retracted Papers Authored by the Highly Cited Iran-affiliated Researchers: Revisiting Research Policies and a Key Message to Clarivate Analytics. Sci Eng Ethics 2022; 28:18. [PMID: 35362834; DOI: 10.1007/s11948-022-00368-3]
Abstract
Reasons underlying retractions of papers authored by Iran-affiliated highly cited researchers (HCRs) have not been documented. Here, we report that 229 Iran-affiliated researchers were listed by Clarivate Analytics as HCRs. We searched the Retraction Watch Database and found that, in total, 51 papers authored by Iran-affiliated HCRs were retracted between 2006 and 2019. Twenty-three of the 229 HCRs (10%) had at least one paper retracted. One of the listed HCRs had 22 papers retracted; 14 of the 23 (60.8%) had only one paper retracted. Among the 51 retracted papers, three had been authored by two female authors. Eight (16.8%) retracted papers had international co-authorships. The shortest and longest times from publication to retraction were 20 and 2610 (mean ± SD, 857 ± 616) days, respectively. Of the 51 papers, 43 (84%) had a single reason for retraction, whereas eight had multiple reasons. Among the 43 papers, 23 (53%) were retracted due to fake peer review, eight (19%) were duplications, six (14%) had errors, four (9%) had plagiarism, and two (5%) were labelled as "limited or no information." Duplication of data, which is easily preventable, amounted to 27%. Any publishing oversight committed by an HCR should not be tolerated, because HCRs represent the stakeholders of the scientific literature and stand as role models for other peer researchers. Future policies supporting Iranian academia should change radically by implementing educational and awareness programs on publishing ethics to reduce the rate of retractions in Iran.
Affiliation(s)
- Negin Kamali: Department of Bacteriology, Faculty of Medical Sciences, Tarbiat Modares University, Tehran, Iran
- Farid Rahimi: Research School of Biology, The Australian National University, Ngunnawal and Ngambri Country, Canberra, ACT, Australia
- Amin Talebi Bezmin Abadi: Department of Bacteriology, Faculty of Medical Sciences, Tarbiat Modares University, Tehran, Iran
5. Ophir Y, Jamieson KH. The effects of media narratives about failures and discoveries in science on beliefs about and support for science. Public Underst Sci 2021; 30:1008-1023. [PMID: 34000907; DOI: 10.1177/09636625211012630]
Abstract
This study examines the effects of exposure to media narratives about science on perceptions pertaining to the reliability of science, including trust, beliefs, and support for science. In an experiment (n = 4497), participants were randomly assigned to read stories representing ecologically valid media narratives: the honorable quest, the counterfeit quest, crisis or broken, and problem explored. Exposure to stories highlighting problems reduced trust in scientists and induced negative beliefs about scientists, with more extensive effects among those exposed to the "crisis/broken" accounts and fewer among those exposed to the "counterfeit" and "problem explored" stories. In the "crisis/broken" and "problem explored" conditions, we identified a three-way interaction: those with higher trust who considered the problem-focused stories representative of science were more likely to believe science is self-correcting, whereas those with lower trust who perceived the stories as representative were less likely to report that belief. Support for funding science was not affected by the stories. This study demonstrates the detrimental consequences of media failure to accurately communicate the scientific process, and it points to ways scientists and journalists can improve science communication, while acknowledging that changes in structural incentives are needed to attain that goal.
Affiliation(s)
- Yotam Ophir: University at Buffalo, The State University of New York, USA
6. Nelson NC, Ichikawa K, Chung J, Malik MM. Mapping the discursive dimensions of the reproducibility crisis: A mixed methods analysis. PLoS One 2021; 16:e0254090. [PMID: 34242331; PMCID: PMC8270481; DOI: 10.1371/journal.pone.0254090]
Abstract
To those involved in discussions about rigor, reproducibility, and replication in science, conversations about the "reproducibility crisis" appear ill-structured. Seemingly very different issues, concerning the purity of reagents, the accessibility of computational code, or misaligned incentives in academic research writ large, are all collected under this label. Prior work has attempted to address this problem by creating analytical definitions of reproducibility. We take a novel empirical, mixed-methods approach to understanding variation in reproducibility discussions, using a combination of grounded theory and correspondence analysis to examine how a variety of authors narrate the story of the reproducibility crisis. Contrary to expectations, this analysis demonstrates that there is a clear thematic core to reproducibility discussions, centered on the incentive structure of science, the transparency of methods and data, and the need to reform academic publishing. However, we also identify three clusters of discussion that are distinct from the main body of articles: one focused on reagents, another on statistical methods, and a final cluster focused on the heterogeneity of the natural world. Although there are discursive differences between scientific and popular articles, we find no strong differences in how scientists and journalists write about the reproducibility crisis. Our findings demonstrate the value of using qualitative methods to identify the bounds and features of reproducibility discourse, and they identify distinct vocabularies and constituencies that reformers should engage with to promote change.
Affiliation(s)
- Nicole C. Nelson: Department of Medical History and Bioethics, University of Wisconsin, Madison, Wisconsin, United States of America
- Kelsey Ichikawa: Radcliffe Institute for Advanced Study, Harvard University, Cambridge, Massachusetts, United States of America
- Julie Chung: Radcliffe Institute for Advanced Study, Harvard University, Cambridge, Massachusetts, United States of America
- Momin M. Malik: Berkman Klein Center for Internet & Society, Harvard University, Cambridge, Massachusetts, United States of America
7. Samuel S, König-Ries B. Understanding experiments and research practices for reproducibility: an exploratory study. PeerJ 2021; 9:e11140. [PMID: 33976964; PMCID: PMC8067906; DOI: 10.7717/peerj.11140]
Abstract
Scientific experiments and research practices vary across disciplines. The research practices followed by scientists in each domain play an essential role in the understandability and reproducibility of results. Several disciplines currently face a "reproducibility crisis," in which researchers find it difficult to reproduce published results. To understand the underlying problem, it is important to first know the research practices followed in each domain and the factors that hinder reproducibility. We performed an exploratory study, conducting a survey addressed to researchers representing a range of disciplines, to understand scientific experiments and research practices for reproducibility. The survey findings point to a reproducibility crisis and a strong need for sharing data, code, methods, steps, and both negative and positive results. Insufficient metadata, lack of publicly available data, and incomplete information in study methods are considered the main reasons for poor reproducibility. The survey results also address a wide range of research questions on the reproducibility of scientific results. Based on the results of our exploratory study, and supported by the existing published literature, we offer general recommendations that could help the scientific community to understand, reproduce, and reuse experimental data and results across the research data lifecycle.
Affiliation(s)
- Sheeba Samuel: Heinz Nixdorf Chair for Distributed Information Systems, Friedrich Schiller University Jena, Jena, Thuringia, Germany; Michael Stifel Center Jena, Jena, Thuringia, Germany
- Birgitta König-Ries: Heinz Nixdorf Chair for Distributed Information Systems, Friedrich Schiller University Jena, Jena, Thuringia, Germany; Michael Stifel Center Jena, Jena, Thuringia, Germany
8. Mede NG, Schäfer MS, Ziegler R, Weißkopf M. The "replication crisis" in the public eye: Germans' awareness and perceptions of the (ir)reproducibility of scientific research. Public Underst Sci 2021; 30:91-102. [PMID: 32924865; DOI: 10.1177/0963662520954370]
Abstract
Several meta-analytical attempts to reproduce results of empirical research have failed in recent years, prompting scholars and news media to diagnose a "replication crisis" and voice concerns about science losing public credibility. Others, in contrast, hoped replication efforts could improve public confidence in science. Yet nationally representative evidence backing these concerns or hopes is scarce. We provide such evidence, conducting a secondary analysis of the German "Science Barometer" ("Wissenschaftsbarometer") survey. We find that most Germans are not aware of the "replication crisis." In addition, most interpret replication efforts as indicative of scientific quality control and science's self-correcting nature. However, supporters of the populist right-wing party AfD tend to believe that the "crisis" shows one cannot trust science, perhaps using it as an argument to discredit science. But for the majority of Germans, hopes about reputational benefits of the "replication crisis" for science seem more justified than concerns about detrimental effects.
9. Shifting medical guidelines: Compliance and spillover effects for revised antibiotic recommendations. Soc Sci Med 2020; 255:112943. [DOI: 10.1016/j.socscimed.2020.112943]
10. Sarathchandra D, Haltinner K. Trust/distrust judgments and perceptions of climate science: A research note on skeptics' rationalizations. Public Underst Sci 2020; 29:53-60. [PMID: 31691642; DOI: 10.1177/0963662519886089]
Abstract
Using interviews with residents of Idaho (a rural northwest US state) who identify as skeptical of climate change, we examine how skeptics rationalize their doubts about climate science. Skeptics tend to question the reality and human causes of climate change by (1) raising concerns about incentive structures in science that could bias climatology, (2) doubting the accuracy of data and models used by climate scientists, and (3) perceiving some practices of climate science and scientists as exclusionary. Despite these concerns, skeptics exhibit deference to scientific authority when using scientific assessments to make policy decisions, including environmental policy. Understanding skeptics' concerns about climate science, and the areas where they support science-based policy, will lead to better dialogue between scientists, interest groups, policy makers, and the skeptical public, potentially clarifying avenues to communicate climate information and enact climate policy.
11. Maree DJ. Burning the straw man: What exactly is psychological science? SA J Ind Psychol 2019. [DOI: 10.4102/sajip.v45i0.1731]
Abstract
Problemification: Efendic and Van Zyl (2019) argue for following open-access principles in IO psychology in light of the recent crises in psychological research. Among these is the failure to replicate empirical studies, which casts doubt on the trustworthiness of what we believe to be psychological knowledge. However, saving knowledge is not the issue at stake: focusing on transparency and compliance with standards might solve some problems, but not all.

Implications: The crisis focuses our attention on what science is, particularly in psychology and its related disciplines. Both the scientist-practitioner model of training psychologists and the quantitative-qualitative methods polarity reveal the influence of the received, or positivistic, view of science as characterised by quantification and measurement. Postmodern resistance to positivism feeds these polarities and conceals the true nature of psychological science.

Purpose: This article argues for a realist conception of science that sustains a variety of methods, from interpretative and constructionist approaches to measurement. In this view, however, measurement is not a defining characteristic of science but a way to find things out, and the latter supports a critical process.

Recommendations: Revising our understanding of science, moving beyond the received view to a realist one, is crucial for managing misconceptions about what counts as knowledge, and as appropriate measures, when our discipline is in the crossfire. Thus, Efendic and Van Zyl's (2019) proposals make sense and can be taken on board where measurement, as one of the ways to find things out, is appropriate. However, realism supports a broader enterprise that can be called scientific because it involves a critical movement of claim and counter-claim while executing its taxonomical and explanatory tasks. Thus, the psychosocial researcher, when analysing discourse, for example, can also be regarded as a scientist.
12. How the public, and scientists, perceive advancement of knowledge from conflicting study results. Judgment and Decision Making 2019. [DOI: 10.1017/s1930297500005398]
Abstract
Science often advances through disagreement among scientists and the studies they produce. For members of the public, however, conflicting results from scientific studies may trigger a sense of uncertainty that in turn leads to a feeling that nothing new has been learned from those studies. In several scenario studies, participants read about pairs of highly similar scientific studies with results that either agreed or disagreed, and were asked, "When we take the results of these two studies together, do we now know more, less, or the same as we did before about (the study topic)?" We find that over half of participants do not feel that "we know more" as the result of the two new studies when the second study fails to replicate the first. When the two study results strongly conflict (e.g., one finds a positive and the other a negative association between two variables), a non-trivial proportion of participants actually say that "we know less" than we did before. Such a sentiment arguably violates normative principles of statistical and scientific inference positing that new study findings can never reduce our level of knowledge (and that only completely uninformative studies can leave our level of knowledge unchanged). Drawing attention to possible moderating variables, or to sample size considerations, did not influence people's perceptions of knowledge advancement. Scientist members of the American Academy of Arts and Sciences, when presented with the same scenarios, were less inclined to say that nothing new is learned from conflicting study results.
13. Strengthening the Medical Error "Meme Pool". J Gen Intern Med 2019; 34:2264-2267. [PMID: 31292902; PMCID: PMC6816797; DOI: 10.1007/s11606-019-05156-7]
Abstract
The exact number of patients in the USA who die from preventable medical errors each year is highly debated. Despite uncertainty in the underlying science, two very large estimates have spread rapidly through both the academic and popular media. We utilize Richard Dawkins' concept of the "meme" to explore why these imprecise estimates remain so compelling, and examine what potential harms can occur from their dissemination. We conclude by suggesting that instead of simply providing more precise estimates, physicians should encourage nuance in public medical error discussions, and strive to provide narrative context about the reality of the complex biological and social systems in which we practice medicine.
14. Liskauskas S, Ribeiro MD, Vasconcelos SMR. Changing times for science and the public: Science journalists' roles for the responsible communication of science. EMBO Rep 2019; 20:e47906. [PMID: 30850383; PMCID: PMC6446191; DOI: 10.15252/embr.201947906]
Abstract
The increasing number of corrections in the scientific record and the debate about reproducibility affect journalists’ reporting about science and thereby public opinion on scientists and research.
Affiliation(s)
- Suzana Liskauskas: Professional Masters Program, Institute of Medical Biochemistry Leopoldo Meis (IBqM), Federal University of Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
- Sonia MR Vasconcelos: Science Education Program and Professional Masters Program, IBqM, UFRJ, Rio de Janeiro, Brazil
15. Amaral OB, Neves K, Wasilewska-Sampaio AP, Carneiro CFD. The Brazilian Reproducibility Initiative. eLife 2019; 8:e41602. [PMID: 30720433; PMCID: PMC6374071; DOI: 10.7554/elife.41602]
Abstract
Most efforts to estimate the reproducibility of published findings have focused on specific areas of research, even though science is usually assessed and funded on a regional or national basis. Here we describe a project to assess the reproducibility of findings in biomedical science published by researchers based in Brazil. The Brazilian Reproducibility Initiative is a systematic, multicenter effort to repeat between 60 and 100 experiments: the project will focus on a set of common methods, repeating each experiment in three different laboratories from a countrywide network. The results, due in 2021, will allow us to estimate the level of reproducibility of biomedical science in Brazil, and to investigate what aspects of the published literature might help to predict whether a finding is reproducible.
Affiliation(s)
- Olavo B Amaral: Institute of Medical Biochemistry Leopoldo de Meis, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Kleber Neves: Institute of Medical Biochemistry Leopoldo de Meis, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Ana P Wasilewska-Sampaio: Institute of Medical Biochemistry Leopoldo de Meis, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Clarissa FD Carneiro: Institute of Medical Biochemistry Leopoldo de Meis, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
16. Yosten GLC, Adams JC, Bennett CN, Bunnett NW, Scheman R, Sigmund CD, Yates BJ, Zucker IH, Samson WK. Revised guidelines to enhance the rigor and reproducibility of research published in American Physiological Society journals. Am J Physiol Regul Integr Comp Physiol 2018; 315:R1251-R1253. [PMID: 30332303; DOI: 10.1152/ajpregu.00274.2018]
Affiliation(s)
- Gina L C Yosten: Department of Pharmacology and Physiology, Saint Louis University, St. Louis, Missouri
- Josephine C Adams: Department of Biochemistry, University of Bristol, Bristol, United Kingdom
- Christina N Bennett: Publications Department, The American Physiological Society, Rockville, Maryland
- Nigel W Bunnett: Departments of Surgery and Pharmacology, Columbia University, New York, New York
- Rita Scheman: Publications Department, The American Physiological Society, Rockville, Maryland
- Curt D Sigmund: Department of Pharmacology, University of Iowa, Iowa City, Iowa
- Bill J Yates: Department of Otolaryngology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Irving H Zucker: Department of Cellular and Integrative Physiology, University of Nebraska Medical Center, Omaha, Nebraska
- Willis K Samson: Department of Pharmacology and Physiology, Saint Louis University, St. Louis, Missouri
17. Shiffrin RM, Börner K, Stigler SM. Scientific progress despite irreproducibility: A seeming paradox. Proc Natl Acad Sci U S A 2018; 115:2632-2639. [PMID: 29531095; PMCID: PMC5856513; DOI: 10.1073/pnas.1711786114]
Abstract
It appears paradoxical that science is producing outstanding new results and theories at a rapid rate at the same time that researchers are identifying serious problems in the practice of science that cause many reports to be irreproducible and invalid. Certainly, the practice of science needs to be improved, and scientists are now pursuing this goal. However, in this perspective, we argue that this seeming paradox is not new, has always been part of the way science works, and likely will remain so. We first introduce the paradox. We then review a wide range of challenges that appear to make scientific success difficult. Next, we describe the factors that make science work, in the past, the present, and presumably also the future. We then suggest that remedies for the present practice of science need to be applied selectively so as not to slow progress, and we illustrate this with a few examples. We conclude by arguing that communication of science needs to emphasize not just problems but the enormous successes and benefits that science has brought and is now bringing to all elements of modern society.
Affiliation(s)
- Richard M Shiffrin: Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN 47405
- Katy Börner: Department of Intelligent Systems Engineering, Indiana University Bloomington, Bloomington, IN 47405