26. Murphy G, Maher J, Ballantyne L, Barrett E, Cowman CS, Dawson CA, Huston C, Ryan KM, Greene CM. How do participants feel about the ethics of rich false memory studies? Memory. 2023;31:474-481. PMID: 36689341. DOI: 10.1080/09658211.2023.2170417.
Abstract
Deception is often a necessity in rich false memory studies, but is this deception acceptable to participants? In the current study, we followed up with 175 participants who had taken part in a replication of the Lost in the Mall childhood false memory study (Loftus & Pickrell, 1995), as either a research subject or a familial informant. We found that both participants and informants were generally very positive about their experience, did not regret taking part, and found the deceptive methods acceptable. Importantly, the vast majority reported that they would still have taken part had they known the true objectives from the beginning. Participants also reported learning something interesting about memory and enjoying the nostalgia and family discussions that were prompted by the study. We would encourage other researchers to assess the ethical implications of false memory research paradigms and to incorporate the valuable feedback from participants and informants.
27. Gainsburg I, Pauer S, Abboub N, Aloyo ET, Mourrat JC, Cristia A. How Effective Altruism Can Help Psychologists Maximize Their Impact. Perspectives on Psychological Science. 2023;18:239-253. PMID: 35981321. DOI: 10.1177/17456916221079596.
Abstract
Although many psychologists are interested in making the world a better place through their work, they are often unable to have the impact that they would like. Here, we suggest that both individuals and psychology as a field can better improve human welfare by incorporating ideas from effective altruism, a growing movement whose members aim to do the most good by using science and reason to inform their efforts. In this article, we first briefly introduce effective altruism and review important principles that can be applied to how psychologists approach their work, such as the importance, tractability, and neglectedness framework. We then review how effective altruism can inform individual psychologists' choices. Finally, we close with a discussion of ideas for how psychology, as a field, can increase its positive impact. By applying insights from effective altruism to psychological science, we aim to integrate a new theoretical framework into psychological science, stimulate new areas of research, start a discussion on how psychology can maximize its impact, and inspire the psychology community to do the most good.
28. Breznau N, Rinke EM, Wuttke A, Nguyen HHV, Adem M, Adriaans J, Alvarez-Benjumea A, Andersen HK, Auer D, Azevedo F, Bahnsen O, Balzer D, Bauer G, Bauer PC, Baumann M, Baute S, Benoit V, Bernauer J, Berning C, Berthold A, Bethke FS, Biegert T, Blinzler K, Blumenberg JN, Bobzien L, Bohman A, Bol T, Bostic A, Brzozowska Z, Burgdorf K, Burger K, Busch KB, Carlos-Castillo J, Chan N, Christmann P, Connelly R, Czymara CS, Damian E, Ecker A, Edelmann A, Eger MA, Ellerbrock S, Forke A, Forster A, Gaasendam C, Gavras K, Gayle V, Gessler T, Gnambs T, Godefroidt A, Grömping M, Groß M, Gruber S, Gummer T, Hadjar A, Heisig JP, Hellmeier S, Heyne S, Hirsch M, Hjerm M, Hochman O, Hövermann A, Hunger S, Hunkler C, Huth N, Ignácz ZS, Jacobs L, Jacobsen J, Jaeger B, Jungkunz S, Jungmann N, Kauff M, Kleinert M, Klinger J, Kolb JP, Kołczyńska M, Kuk J, Kunißen K, Kurti Sinatra D, Langenkamp A, Lersch PM, Löbel LM, Lutscher P, Mader M, Madia JE, Malancu N, Maldonado L, Marahrens H, Martin N, Martinez P, Mayerl J, Mayorga OJ, McManus P, McWagner K, Meeusen C, Meierrieks D, Mellon J, Merhout F, Merk S, Meyer D, Micheli L, Mijs J, Moya C, Neunhoeffer M, Nüst D, Nygård O, Ochsenfeld F, Otte G, Pechenkina AO, Prosser C, Raes L, Ralston K, Ramos MR, Roets A, Rogers J, Ropers G, Samuel R, Sand G, Schachter A, Schaeffer M, Schieferdecker D, Schlueter E, Schmidt R, Schmidt KM, Schmidt-Catran A, Schmiedeberg C, Schneider J, Schoonvelde M, Schulte-Cloos J, Schumann S, Schunck R, Schupp J, Seuring J, Silber H, Sleegers W, Sonntag N, Staudt A, Steiber N, Steiner N, Sternberg S, Stiers D, Stojmenovska D, Storz N, Striessnig E, Stroppe AK, Teltemann J, Tibajev A, Tung B, Vagni G, Van Assche J, van der Linden M, van der Noll J, Van Hootegem A, Vogtenhuber S, Voicu B, Wagemans F, Wehl N, Werner H, Wiernik BM, Winter F, Wolf C, Yamada Y, Zhang N, Ziller C, Zins S, Żółtak T. Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty. Proc Natl Acad Sci U S A. 2022;119:e2203150119. PMID: 36306328. PMCID: PMC9636921. DOI: 10.1073/pnas.2203150119.
Abstract
This study explores how researchers' analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to emphasize the idiosyncrasy of conscious and unconscious decisions that researchers make during data analysis. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of social science research, research teams reported both widely diverging numerical findings and substantive conclusions despite identical start conditions. Researchers' expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 95% of the total variance in numerical results remains unexplained even after qualitative coding of all identifiable decisions in each team's workflow. This reveals a universe of uncertainty that remains hidden when considering a single study in isolation. The idiosyncratic nature of how researchers' results and conclusions varied is a previously underappreciated explanation for why many scientific hypotheses remain contested. These results call for greater epistemic humility and clarity in reporting scientific findings.
29. Forscher PS, Wagenmakers EJ, Coles NA, Silan MA, Dutra N, Basnight-Brown D, IJzerman H. The Benefits, Barriers, and Risks of Big-Team Science. Perspectives on Psychological Science. 2022;18:607-623. PMID: 36190899. DOI: 10.1177/17456916221082970.
Abstract
Progress in psychology has been frustrated by challenges concerning replicability, generalizability, strategy selection, inferential reproducibility, and computational reproducibility. Although often discussed separately, these five challenges may share a common cause: insufficient investment of intellectual and nonintellectual resources into the typical psychology study. We suggest that the emerging emphasis on big-team science can help address these challenges by allowing researchers to pool their resources together to increase the amount available for a single study. However, the current incentives, infrastructure, and institutions in academic science have all developed under the assumption that science is conducted by solo principal investigators and their dependent trainees, an assumption that creates barriers to sustainable big-team science. We also anticipate that big-team science carries unique risks, such as the potential for big-team-science organizations to be co-opted by unaccountable leaders, become overly conservative, and make mistakes at a grand scale. Big-team-science organizations must also acquire personnel who are properly compensated and have clear roles. Not doing so raises risks related to mismanagement and a lack of financial sustainability. If researchers can manage its unique barriers and risks, big-team science has the potential to spur great progress in psychology and beyond.
30. Mazor M, Brown S, Ciaunica A, Demertzi A, Fahrenfort J, Faivre N, Francken JC, Lamy D, Lenggenhager B, Moutoussis M, Nizzi MC, Salomon R, Soto D, Stein T, Lubianiker N. The Scientific Study of Consciousness Cannot and Should Not Be Morally Neutral. Perspectives on Psychological Science. 2022;18:535-543. PMID: 36170496. DOI: 10.1177/17456916221110222.
Abstract
A target question for the scientific study of consciousness is how dimensions of consciousness, such as the ability to feel pain and pleasure or reflect on one's own experience, vary in different states and animal species. Considering the tight link between consciousness and moral status, answers to these questions have implications for law and ethics. Here we point out that given this link, the scientific community studying consciousness may face implicit pressure to carry out certain research programs or interpret results in ways that justify current norms rather than challenge them. We show that because consciousness largely determines moral status, the use of nonhuman animals in the scientific study of consciousness introduces a direct conflict between scientific relevance and ethics: the more scientifically valuable an animal model is for studying consciousness, the more difficult it becomes to ethically justify compromises to its well-being for consciousness research. Finally, in light of these considerations, we call for a discussion of the immediate ethical corollaries of the body of knowledge that has accumulated and for a more explicit consideration of the role of ideology and ethics in the scientific study of consciousness.
31. Roberts SO, Mortenson E. Challenging the White = Neutral Framework in Psychology. Perspectives on Psychological Science. 2022;18:597-606. PMID: 35981299. DOI: 10.1177/17456916221077117.
Abstract
In the United States, White samples are often portrayed as if their racial identities were inconsequential to their thoughts, feelings, and behaviors, and research findings derived from White samples are often portrayed as if they were generalizable to all humans. We argue that these and other practices are rooted in a "White = neutral" framework (i.e., the conceptualization of White samples as nonracial). First, we review existing data and present some new data to highlight the scope of the White = neutral framework. Second, we integrate research from across psychological science to argue that the continued use of the White = neutral framework will prevent psychology from becoming a truly objective and inclusive science for at least three reasons: (a) Research with White samples will be valued over research with samples of color, (b) norms that maintain White neutrality will remain unchallenged, and (c) the role of White identity in psychological processes will remain underspecified and underexamined. Third, we provide recommendations for how to move beyond the White = neutral framework in hopes of encouraging all psychological scientists to move toward a White ≠ neutral framework in which all samples are identified for the unique and diverse perspectives that they bring to the world.
32. Sosa DN, Altman RB. Contexts and contradictions: a roadmap for computational drug repurposing with knowledge inference. Brief Bioinform. 2022;23:6640007. PMID: 35817308. PMCID: PMC9294417. DOI: 10.1093/bib/bbac268.
Abstract
The cost of drug development continues to rise and may be prohibitive in cases of unmet clinical need, particularly for rare diseases. Artificial intelligence-based methods are promising in their potential to discover new treatment options. The task of drug repurposing hypothesis generation is well-posed as a link prediction problem in a knowledge graph (KG) of interacting drugs, proteins, genes, and disease phenotypes. KGs derived from biomedical literature are semantically rich and up-to-date representations of scientific knowledge. Inference methods on scientific KGs can be confounded by unspecified contexts and contradictions. Extracting context enables incorporation of relevant pharmacokinetic and pharmacodynamic detail, such as tissue specificity of interactions. Contradictions in biomedical KGs may arise when contexts are omitted or due to contradicting research claims. In this review, we describe challenges to creating literature-scale representations of pharmacological knowledge and survey current approaches toward incorporating context and resolving contradictions.
33. Questionable Research Practices, Low Statistical Power, and Other Obstacles to Replicability: Why Preclinical Neuroscience Research Would Benefit from Registered Reports. eNeuro. 2022;9(4):ENEURO.0017-22.2022. PMID: 35922130. PMCID: PMC9351632. DOI: 10.1523/eneuro.0017-22.2022.
Abstract
Replicability, the degree to which a previous scientific finding can be repeated in a distinct set of data, has been considered an integral component of institutionalized scientific practice since its inception several hundred years ago. In the past decade, large-scale replication studies have demonstrated that replicability is far from favorable across multiple scientific fields. Here, I evaluate this literature and describe contributing factors, including the prevalence of questionable research practices (QRPs), misunderstanding of p-values, and low statistical power. I subsequently discuss how these issues manifest specifically in preclinical neuroscience research. I conclude that these problems are multifaceted and difficult to solve, relying on the actions of early and late career researchers, funding sources, academic publishers, and others. I assert that any viable solution to the problem of substandard replicability must include changing academic incentives, with adoption of registered reports being the most immediately impactful and pragmatic strategy. For animal research in particular, comprehensive reporting guidelines that document potential sources of sensitivity for experimental outcomes are an essential addition.
34. Roche DG, Berberi I, Dhane F, Lauzon F, Soeharjono S, Dakin R, Binning SA. Slow improvement to the archiving quality of open datasets shared by researchers in ecology and evolution. Proc Biol Sci. 2022;289:20212780. PMID: 35582791. PMCID: PMC9114975. DOI: 10.1098/rspb.2021.2780.
Abstract
Many leading journals in ecology and evolution now mandate open data upon publication. Yet, there is very little oversight to ensure the completeness and reusability of archived datasets, and we currently have a poor understanding of the factors associated with high-quality data sharing. We assessed 362 open datasets linked to first- or senior-authored papers published by 100 principal investigators (PIs) in the fields of ecology and evolution over a period of 7 years to identify predictors of data completeness and reusability (data archiving quality). Datasets scored low on these metrics: 56.4% were complete and 45.9% were reusable. Data reusability, but not completeness, was slightly higher for more recently archived datasets and PIs with less seniority. Journal open data policy, PI gender and PI corresponding author status were unrelated to data archiving quality. However, PI identity explained a large proportion of the variance in data completeness (27.8%) and reusability (22.0%), indicating consistent inter-individual differences in data sharing practices by PIs across time and contexts. Several PIs consistently shared data of either high or low archiving quality, but most PIs were inconsistent in how well they shared. One explanation for the high intra-individual variation we observed is that PIs often conduct research through students and postdoctoral researchers, who may be responsible for the data collection, curation and archiving. Levels of data literacy vary among trainees and PIs may not regularly perform quality control over archived files. Our findings suggest that research data management training and culture within a PI's group are likely to be more important determinants of data archiving quality than other factors such as a journal's open data policy. Greater incentives and training for individual researchers at all career stages could improve data sharing practices and enhance data transparency and reusability.
35. Paret C, Unverhau N, Feingold F, Poldrack RA, Stirner M, Schmahl C, Sicorello M. Survey on Open Science Practices in Functional Neuroimaging. Neuroimage. 2022;257:119306. PMID: 35595201. DOI: 10.1016/j.neuroimage.2022.119306.
Abstract
Replicability and reproducibility of scientific findings is paramount for sustainable progress in neuroscience. Preregistration of the hypotheses and methods of an empirical study before analysis, the sharing of primary research data, and compliance with data standards such as the Brain Imaging Data Structure (BIDS) are considered effective practices to secure progress and to substantiate quality of research. We investigated the current level of adoption of open science practices in neuroimaging and the difficulties that prevent researchers from using them. Email invitations to participate in the survey were sent to addresses retrieved through a PubMed search of human functional magnetic resonance imaging studies that were published between 2010 and 2020. In total, 283 researchers completed the questionnaire. Although half of the participants were experienced with preregistration, the willingness to preregister studies in the future was modest. The majority of participants had experience with the sharing of primary neuroimaging data. Most of the participants were interested in implementing a standardized data structure such as BIDS in their labs. Based on demographic variables, we compared participants on seven subscales, which had been generated through factor analysis. Exploratory analyses found that experienced researchers at lower career levels had a higher fear of being transparent and that researchers residing in the EU had a higher need for data governance. Additionally, researchers at medical faculties, as compared to other university faculties, reported a more unsupportive supervisor with regard to open science practices and a higher need for data governance. The results suggest growing adoption of open science practices but also highlight a number of important impediments.
36. Cowan N. The Adversarial Collaboration Within Each of Us. Journal of Applied Research in Memory and Cognition. 2022;11:19-22. PMID: 37346985. PMCID: PMC10284568. DOI: 10.1037/mac0000001.
37. Donegan KR, Gillan CM. New principles and new paths needed for online research in mental health: Commentary on Burnette et al. (2021). Int J Eat Disord. 2022;55:278-281. PMID: 35005784. PMCID: PMC9306503. DOI: 10.1002/eat.23670.
Abstract
Online methods have become a powerful research tool, allowing us to conduct well-powered studies, to explore and replicate effects, and to recruit often rare and diverse samples. However, concerns about the validity and reliability of the data collected from some platforms have reached a crescendo. In this issue, Burnette et al. (2021) describe how commonly employed protective measures such as captchas, response consistency requirements, and attention checks may no longer be sufficient to ensure high-quality data in survey-based studies on Amazon's Mechanical Turk. We echo and elaborate on these concerns, but believe that although imperfect, online research will continue to be incredibly important in driving progress in mental health science. Not all platforms or populations are well suited to every research question, and so we posit that the future of online research will be much more varied, and in no small part supported by citizen scientists and those with lived experience. Whatever the medium, researchers cannot stand still; we must continuously reflect and adapt to technological advances, demographics, and motivational shifts of our participants. Online research is difficult but worthwhile.
38. Lamers WS, Boyack K, Larivière V, Sugimoto CR, van Eck NJ, Waltman L, Murray D. Investigating disagreement in the scientific literature. eLife. 2021;10:e72737. PMID: 34951588. PMCID: PMC8709576. DOI: 10.7554/elife.72737.
Abstract
Disagreement is essential to scientific progress, but the extent of disagreement in science, its evolution over time, and the fields in which it happens remain poorly understood. Here we report the development of an approach based on cue phrases that can identify instances of disagreement in scientific articles. These instances are sentences in an article that cite other articles. Applying this approach to a collection of more than four million English-language articles published between 2000 and 2015, we determine the level of disagreement in five broad fields within the scientific literature (biomedical and health sciences; life and earth sciences; mathematics and computer science; physical sciences and engineering; and social sciences and humanities) and 817 meso-level fields. Overall, the level of disagreement is highest in the social sciences and humanities, and lowest in mathematics and computer science. However, there is considerable heterogeneity across the meso-level fields, revealing the importance of local disciplinary cultures and the epistemic characteristics of disagreement. Analysis at the level of individual articles reveals notable episodes of disagreement in science, and illustrates how methodological artifacts can confound analyses of scientific texts.
39. Aczel B, Szaszi B, Nilsonne G, van den Akker OR, Albers CJ, van Assen MALM, Bastiaansen JA, Benjamin D, Boehm U, Botvinik-Nezer R, Bringmann LF, Busch NA, Caruyer E, Cataldo AM, Cowan N, Delios A, van Dongen NNN, Donkin C, van Doorn JB, Dreber A, Dutilh G, Egan GF, Gernsbacher MA, Hoekstra R, Hoffmann S, Holzmeister F, Huber J, Johannesson M, Jonas KJ, Kindel AT, Kirchler M, Kunkels YK, Lindsay DS, Mangin JF, Matzke D, Munafò MR, Newell BR, Nosek BA, Poldrack RA, van Ravenzwaaij D, Rieskamp J, Salganik MJ, Sarafoglou A, Schonberg T, Schweinsberg M, Shanks D, Silberzahn R, Simons DJ, Spellman BA, St-Jean S, Starns JJ, Uhlmann EL, Wicherts J, Wagenmakers EJ. Consensus-based guidance for conducting and reporting multi-analyst studies. eLife. 2021;10:e72185. PMID: 34751133. PMCID: PMC8626083. DOI: 10.7554/elife.72185.
Abstract
Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.
40. Rosenfeld DL, Balcetis E, Bastian B, Berkman ET, Bosson JK, Brannon TN, Burrow AL, Cameron CD, Chen S, Cook JE, Crandall C, Davidai S, Dhont K, Eastwick PW, Gaither SE, Gangestad SW, Gilovich T, Gray K, Haines EL, Haselton MG, Haslam N, Hodson G, Hogg MA, Hornsey MJ, Huo YJ, Joel S, Kachanoff FJ, Kraft-Todd G, Leary MR, Ledgerwood A, Lee RT, Loughnan S, MacInnis CC, Mann T, Murray DR, Parkinson C, Pérez EO, Pyszczynski T, Ratner K, Rothgerber H, Rounds JD, Schaller M, Silver RC, Spellman BA, Strohminger N, Swim JK, Thoemmes F, Urganci B, Vandello JA, Volz S, Zayas V, Tomiyama AJ. Psychological Science in the Wake of COVID-19: Social, Methodological, and Metascientific Considerations. Perspectives on Psychological Science. 2021;17:311-333. PMID: 34597198. DOI: 10.1177/1745691621999374.
Abstract
The COVID-19 pandemic has extensively changed the state of psychological science, from what research questions psychologists can ask to which methodologies psychologists can use to investigate them. In this article, we offer a perspective on how to optimize new research in the pandemic's wake. Because this pandemic is inherently a social phenomenon, an event that hinges on human-to-human contact, we focus on socially relevant subfields of psychology. We highlight specific psychological phenomena that have likely shifted as a result of the pandemic and discuss theoretical, methodological, and practical considerations of conducting research on these phenomena. After this discussion, we evaluate metascientific issues that have been amplified by the pandemic. We aim to demonstrate how theoretically grounded views on the COVID-19 pandemic can help make psychological science stronger, not weaker, in its wake.
41. Kominsky JF, Begus K, Bass I, Colantonio J, Leonard JA, Mackey AP, Bonawitz E. Organizing the Methodological Toolbox: Lessons Learned From Implementing Developmental Methods Online. Front Psychol. 2021;12:702710. PMID: 34589023. PMCID: PMC8473607. DOI: 10.3389/fpsyg.2021.702710.
Abstract
Adapting studies typically run in the lab, preschool, or museum to online data collection presents a variety of challenges. The solutions to those challenges depend heavily on the specific questions pursued, the methods used, and the constraints imposed by available technology. We present a partial sample of solutions, discussing approaches we have developed for adapting studies targeting a range of different developmental populations, from infants to school-aged children, and utilizing various online methods such as high-framerate video presentation, having participants interact with a display on their own computer, having the experimenter interact with both the participant and an actor, recording free-play with physical objects, recording infant looking times both offline and live, and more. We also raise issues and solutions regarding recruitment and representativeness in online samples. By identifying the concrete needs of a given approach, tools that meet each of those individual needs, and interfaces between those tools, we have been able to implement many (but not all) of our studies using online data collection during the COVID-19 pandemic. This systematic review aligning available tools and approaches with different methods can inform the design of future studies, in and outside of the lab.
42. Peterson D, Panofsky A. Self-correction in science: The diagnostic and integrative motives for replication. Social Studies of Science. 2021;51:583-605. PMID: 33764246. DOI: 10.1177/03063127211005551.
Abstract
A series of failed replications and frauds have raised questions regarding self-correction in science. Metascientific activists have advocated policies that incentivize replications and make them more diagnostically potent. We argue that current debates, as well as research in science and technology studies, have paid little heed to a key dimension of replication practice. Although it sometimes serves a diagnostic function, replication is commonly motivated by a practical desire to extend research interests. The resulting replication, which we label 'integrative', is characterized by a pragmatic flexibility toward protocols. The goal is to appropriate what is useful, not test for truth. Within many experimental cultures, however, integrative replications can produce replications of ambiguous diagnostic power. Based on interviews with 60 members of the Board of Reviewing Editors for the journal Science, we show how the interplay between the diagnostic and integrative motives for replication differs between fields and produces different cultures of replication. We offer six theses that aim to put science and technology studies and science activism into dialog to show why effective reforms will need to confront issues of disciplinary difference.
43. Brick C, Hood B, Ekroll V, de-Wit L. Illusory Essences: A Bias Holding Back Theorizing in Psychological Science. Perspectives on Psychological Science. 2021;17:491-506. PMID: 34283676. PMCID: PMC8902028. DOI: 10.1177/1745691621991838.
Abstract
The reliance in psychology on verbal definitions means that psychological research is unusually moored to how humans think and communicate about categories. Psychological concepts (e.g., intelligence, attention) are easily assumed to represent objective, definable categories with an underlying essence. Like the “vital forces” previously thought to animate life, these assumed essences can create an illusion of understanding. By synthesizing a wide range of research lines from cognitive, clinical, and biological psychology and neuroscience, we describe a pervasive tendency across psychological science to assume that essences explain phenomena. Labeling a complex phenomenon can appear as theoretical progress before there is sufficient evidence that the described category has a definable essence or known boundary conditions. Category labels can further undermine progress by masking contingent and contextual relationships and obscuring the need to specify mechanisms. Finally, we highlight examples of promising methods that circumvent the lure of essences and suggest four concrete strategies for identifying and avoiding essentialist intuitions in theory development.
Collapse
|
44
|
Abstract
In the face of unreplicable results, statistical anomalies, and outright fraud, introspection and changes in the psychological sciences have taken root. Vibrant reform and metascience movements have emerged. These are exciting developments and may point toward practical improvements in the future. Yet there is nothing so practical as good theory. This article outlines aspects of reform and metascience in psychology that are ripe for an injection of theory, including a lot of excellent and overlooked theoretical work from different disciplines. I review established frameworks that model the process of scientific discovery, the types of scientific networks that we ought to aspire to, and the processes by which problematic norms and institutions might evolve, focusing especially on modeling from the philosophy of science and cultural evolution. We have unwittingly evolved a toxic scientific ecosystem; existing interdisciplinary theory may help us intelligently design a better one.
Collapse
|
45
|
Latinjak AT, Hatzigeorgiadis A. The Knowledge Map of Sport and Exercise Psychology: An Integrative Perspective. Front Psychol 2021; 12:661824. [PMID: 34220635 PMCID: PMC8242169 DOI: 10.3389/fpsyg.2021.661824] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Accepted: 05/21/2021] [Indexed: 11/13/2022] Open
Abstract
The present work contains a personal perspective on what sport and exercise psychology (SEP) is today. It is a global synthesis of research about psychological aspects related to the context and practice of sport and exercise. The intended impact was to positively influence teaching SEP to students, to promote interdisciplinary research and practice, and to assist the development of SEP as an applied science by helping experts develop a more holistic view of the field. Over 650 theoretical and review articles about psychological concepts in connection to sport and exercise were read in the process of creating a conceptual model that reflects the essence of SEP and leads to a conceptualization of SEP based on research topics. The result was a knowledge map of SEP made up of four main research clusters: biopsychological descriptors, external variables, psychological skills, and applied SEP practice. In terms of interdisciplinarity, the present perspective on SEP suggests that sport and exercise can be used as a research paradigm or natural laboratory to study psychological aspects relevant to various scientific fields, and that sport and exercise can be used as a therapeutic framework in response to challenges that researchers and practitioners in these fields are typically addressing.
Collapse
|
46
|
Klau S, Hoffmann S, Patel CJ, Ioannidis JP, Boulesteix AL. Examining the robustness of observational associations to model, measurement and sampling uncertainty with the vibration of effects framework. Int J Epidemiol 2021; 50:266-278. [PMID: 33147614 PMCID: PMC7938511 DOI: 10.1093/ije/dyaa164] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/03/2020] [Indexed: 02/05/2023] Open
Abstract
BACKGROUND The results of studies on observational associations may vary depending on the study design and analysis choices as well as due to measurement error. It is important to understand the relative contribution of different factors towards generating variable results, including low sample sizes, researchers' flexibility in model choices, and measurement error in variables of interest and adjustment variables. METHODS We define sampling, model and measurement uncertainty, and extend the concept of vibration of effects in order to study these three types of uncertainty in a common framework. In a practical application, we examine these types of uncertainty in a Cox model using data from the National Health and Nutrition Examination Survey. In addition, we analyse the behaviour of sampling, model and measurement uncertainty for varying sample sizes in a simulation study. RESULTS All types of uncertainty are associated with a potentially large variability in effect estimates. Measurement error in the variable of interest attenuates the true effect in most cases, but can occasionally lead to overestimation. When we consider measurement error in both the variable of interest and adjustment variables, the vibration of effects is even less predictable, as both systematic under- and over-estimation of the true effect can be observed. The results on simulated data show that measurement and model vibration remain non-negligible even for large sample sizes. CONCLUSION Sampling, model and measurement uncertainty can have important consequences for the stability of observational associations. We recommend systematically studying and reporting these types of uncertainty, and comparing them in a common framework.
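The model-uncertainty component of the vibration-of-effects idea can be illustrated with a minimal sketch: refit the same exposure-outcome model under every subset of candidate adjustment variables and summarize how much the exposure coefficient moves. The simulated data, variable names, and the plain linear-model setup here are illustrative assumptions, not the Cox-model analysis used in the paper.

```python
# Minimal sketch of "model vibration": the spread of an exposure's
# estimated effect across all possible adjustment-variable subsets.
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data: exposure x, two candidate adjusters z1 and z2,
# outcome y. z1 confounds the x-y relationship; the true effect is 0.3.
z1 = rng.normal(size=n)
z2 = rng.normal(size=n)
x = 0.5 * z1 + rng.normal(size=n)
y = 0.3 * x + 0.8 * z1 - 0.4 * z2 + rng.normal(size=n)

adjusters = {"z1": z1, "z2": z2}
estimates = {}

# Fit ordinary least squares for every subset of adjustment variables
# (the "model space" explored by the vibration-of-effects framework).
for k in range(len(adjusters) + 1):
    for subset in combinations(adjusters, k):
        cols = [np.ones(n), x] + [adjusters[name] for name in subset]
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        estimates[subset or ("none",)] = beta[1]  # coefficient on x

effects = np.array(list(estimates.values()))
print(f"exposure effect ranges from {effects.min():.3f} to {effects.max():.3f}")
```

With confounding built into the simulation, the unadjusted and fully adjusted specifications give visibly different effect estimates, which is the "vibration" the framework quantifies; the paper extends this to sampling and measurement uncertainty as well.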
Collapse
|
47
|
Edlund JE, Cuccolo K, Irgens MS, Wagge JR, Zlokovich MS. Saving Science Through Replication Studies. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2021; 17:216-225. [PMID: 33682522 DOI: 10.1177/1745691620984385] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The scientific enterprise has long been based on the presumption of replication, although scientists have recently become aware of various corruptions of the enterprise that have hurt replicability. In this article, we begin by considering three illustrations of research paradigms that have all been subject to intense scrutiny through replications and theoretical concerns. The three paradigms are one for which the corpus of research points to a real finding, one for which the corpus of research points to a significantly attenuated effect, and one for which the debate is ongoing. We then discuss what scientists can learn, and how science can be improved, through replications more generally. From there, we discuss what we believe needs to be done to improve scientific inquiry with regard to replication moving forward. Finally, we conclude by providing readers with several different approaches to replication and how these approaches progress science. The approaches discussed include multilab replications of many effects, multilab replications of specific effects, adversarial collaborations, and stand-alone applications.
Collapse
|
48
|
Rohrer JM, Tierney W, Uhlmann EL, DeBruine LM, Heyman T, Jones B, Schmukle SC, Silberzahn R, Willén RM, Carlsson R, Lucas RE, Strand J, Vazire S, Witt JK, Zentall TR, Chabris CF, Yarkoni T. Putting the Self in Self-Correction: Findings From the Loss-of-Confidence Project. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2021; 16:1255-1269. [PMID: 33645334 PMCID: PMC8564260 DOI: 10.1177/1745691620964106] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
Science is often perceived to be a self-correcting enterprise. In principle, the assessment of scientific claims is supposed to proceed in a cumulative fashion, with the reigning theories of the day progressively approximating truth more accurately over time. In practice, however, cumulative self-correction tends to proceed less efficiently than one might naively suppose. Far from evaluating new evidence dispassionately and infallibly, individual scientists often cling stubbornly to prior findings. Here we explore the dynamics of scientific self-correction at an individual rather than collective level. In 13 written statements, researchers from diverse branches of psychology share why and how they have lost confidence in one of their own published findings. We qualitatively characterize these disclosures and explore their implications. A cross-disciplinary survey suggests that such loss-of-confidence sentiments are surprisingly common among members of the broader scientific population yet rarely become part of the public record. We argue that removing barriers to self-correction at the individual level is imperative if the scientific community as a whole is to achieve the ideal of efficient self-correction.
Collapse
|
49
|
Premachandra B, Lewis NA. Do We Report the Information That Is Necessary to Give Psychology Away? A Scoping Review of the Psychological Intervention Literature 2000-2018. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2021; 17:226-238. [PMID: 33651952 DOI: 10.1177/1745691620974774] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Psychologists are spending a considerable amount of time researching and developing interventions in hopes that their efforts can help to tackle some of society's pressing problems. Unfortunately, those hopes are often not realized: many interventions are developed and reported in journals but do not make their way into the broader world they were designed to change. One potential reason for this is that there may be a gap between the information reported in articles and the information others, such as practitioners, need to implement the findings. We explored this possibility in the current article. We conducted a scoping review to assess the extent to which the information needed for implementation is reported in psychological intervention articles. Results suggest psychological intervention articles report, at most, 64% of the information needed to implement interventions. We discuss the implications of this for both psychological theories and applying them in the world.
Collapse
|
50
|
Lewis NA, Wai J. Communicating What We Know and What Isn't So: Science Communication in Psychology. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2021; 16:1242-1254. [PMID: 33615912 DOI: 10.1177/1745691620964062] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The field of psychology has a long history of encouraging researchers to disseminate their findings to the broader public. This trend has continued in recent decades in part because of professional psychology organizations reissuing calls to "give psychology away." This recent wave of calls to give psychology away is different because it has been occurring alongside another movement in the field: the credibility revolution, in which psychology has been reckoning with metascientific questions about what exactly psychologists know. This creates a dilemma for the modern psychologist: How is one to "give psychology away" if one is unsure about what is known or what one has to give? In the current article, we discuss strategies for navigating this tension by drawing on insights from the interdisciplinary fields of science communication and persuasion and social influence.
Collapse
|