1
Truong BT, Lou X, Flammini A, Menczer F. Quantifying the vulnerabilities of the online public square to adversarial manipulation tactics. PNAS Nexus 2024; 3:pgae258. [PMID: 38994499] [PMCID: PMC11238850] [DOI: 10.1093/pnasnexus/pgae258]
Abstract
Social media, seen by some as the modern public square, is vulnerable to manipulation. By controlling inauthentic accounts impersonating humans, malicious actors can amplify disinformation within target communities. The consequences of such operations are difficult to evaluate due to the challenges posed by collecting data and carrying out ethical experiments that would influence online communities. Here we use a social media model that simulates information diffusion in an empirical network to quantify the impacts of adversarial manipulation tactics on the quality of content. We find that the presence of hub accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation. Among the explored tactics that bad actors can employ, infiltrating a community is the most likely to make low-quality content go viral. Such harm can be further compounded by inauthentic agents flooding the network with low-quality, yet appealing content, but is mitigated when bad actors focus on specific targets, such as influential or vulnerable individuals. These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
Affiliation(s)
- Bao Tran Truong
- Observatory on Social Media, Indiana University, 1015 E 11th St, Bloomington, IN 47408, USA
- Xiaodan Lou
- Observatory on Social Media, Indiana University, 1015 E 11th St, Bloomington, IN 47408, USA
- Alessandro Flammini
- Observatory on Social Media, Indiana University, 1015 E 11th St, Bloomington, IN 47408, USA
- Filippo Menczer
- Observatory on Social Media, Indiana University, 1015 E 11th St, Bloomington, IN 47408, USA
2
Gollwitzer A, Bao E, Oettingen G. Intellectual humility as a tool to combat false beliefs: An individual-based approach to belief revision. British Journal of Social Psychology 2024; 63:1450-1478. [PMID: 38421055] [DOI: 10.1111/bjso.12732]
Abstract
False beliefs pose significant societal threats, including health risks, political polarization and even violence. In two studies (N = 884) we explored the efficacy of an individual-based approach to correcting false beliefs. We examined whether the character virtue of intellectual humility (IH), an appreciation of one's intellectual boundaries, encourages revising one's false beliefs in response to counter-information. Our research produced encouraging but mixed findings. Among participants who held false beliefs about the risks of vaccines (Study 1) and the 2020 US Election being rigged (Study 2), those with higher IH explored more information opposing these false beliefs. This exploration of opposing information, in turn, predicted updating away from these inaccurate health and political beliefs. IH did not directly predict updating away from false beliefs, however, suggesting that this effect, if it exists, may not be particularly powerful. Taken together, these results provide moderate support for IH as a character trait that can foster belief revision but simultaneously suggest that alternative pathways to combating false beliefs and misinformation may be preferable.
Affiliation(s)
- Anton Gollwitzer
- Department of Leadership and Organizational Behaviour, BI Norwegian Business School, Oslo, Norway
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Evelina Bao
- New York University, New York City, New York, USA
- Gabriele Oettingen
- New York University, New York City, New York, USA
- Department of Political and Social Sciences, Zeppelin University Friedrichshafen, Friedrichshafen, Germany
3
Dougherty MR, Illingworth DA, Nguyen R. A memory-theoretic account of citation propagation. Royal Society Open Science 2024; 11:231521. [PMID: 39076797] [PMCID: PMC11286183] [DOI: 10.1098/rsos.231521]
Abstract
Despite the common assumption that citations are indicative of an article's scientific merit, increasing evidence indicates that citation counts are largely driven by variables unrelated to quality. In this article, we treat people's decisions of what to cite as an instance of memory retrieval and show that observed citation patterns are well accounted for by a model of memory. The proposed exposure model anticipates that small alterations in factors that affect people's ability to retrieve to-be-cited articles from memory early in their life cycle are magnified over time and can lead to the emergence of highly cited papers. This effect occurs even when there is no variation in the starting point exposure probabilities (i.e. when assuming a level playing field where all articles are treated equally and of equal 'quality'), and is exacerbated by natural variation in retrievability of articles due to encoding. We discuss the implications of the model within the context of research evaluation and hiring, tenure and promotion decisions.
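The exposure model's core dynamic, in which small early differences in retrievability are magnified over time, can be sketched as a simple reinforcement process. The function name, parameter values, and linear boost rule below are illustrative assumptions, not the authors' implementation:

```python
import random

def exposure_simulation(n_articles, n_citations, boost, seed=0):
    """Illustrative sketch (hypothetical parameters): every article starts
    with an equal exposure weight (a level playing field), each citation is
    drawn with probability proportional to current weight, and being cited
    boosts an article's future retrievability."""
    rng = random.Random(seed)
    weights = [1.0] * n_articles  # equal starting-point exposure
    counts = [0] * n_articles
    for _ in range(n_citations):
        i = rng.choices(range(n_articles), weights=weights, k=1)[0]
        counts[i] += 1
        weights[i] += boost  # retrieval strengthens future retrieval
    return counts
```

Even with identical starting weights, a nonzero boost concentrates citations on a few early-favored articles, mirroring the emergence of highly cited papers described above.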
Affiliation(s)
- Rosalind Nguyen
- Department of Psychology, University of Maryland, College Park, MD, USA
4
Goldstone RL, Dubova M, Aiyappa R, Edinger A. The Spread of Beliefs in Partially Modularized Communities. Perspectives on Psychological Science 2024; 19:404-417. [PMID: 38019565] [DOI: 10.1177/17456916231198238]
Abstract
Many life-influencing social networks are characterized by considerable informational isolation. People within a community are far more likely to share beliefs than people who are part of different communities. The spread of useful information across communities is impeded by echo chambers (far greater connectivity within than between communities) and filter bubbles (more influence of beliefs by connected neighbors within than between communities). We apply the tools of network analysis to organize our understanding of the spread of beliefs across modularized communities and to predict the effect of individual and group parameters on the dynamics and distribution of beliefs. In our Spread of Beliefs in Modularized Communities (SBMC) framework, a stochastic block model generates social networks with variable degrees of modularity, beliefs have different observable utilities, individuals change their beliefs on the basis of summed or average evidence (or intermediate decision rules), and parameterized stochasticity introduces randomness into decisions. SBMC simulations show surprising patterns; for example, increasing out-group connectivity does not always improve group performance, adding randomness to decisions can promote performance, and decision rules that sum rather than average evidence can improve group performance, as measured by the average utility of beliefs that the agents adopt. Overall, the results suggest that intermediate degrees of belief exploration are beneficial for the spread of useful beliefs in a community, and so parameters that pull in opposite directions on an explore-exploit continuum are usefully paired.
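As a rough sketch of the kind of simulation the SBMC framework describes, the following generates a stochastic block model network and runs a summed-evidence belief update with random exploration. All names, parameter values, and the specific update rule are illustrative assumptions rather than the authors' code:

```python
import random

def sbm_graph(sizes, p_in, p_out, rng):
    """Build an undirected stochastic block model graph as an adjacency
    dict: nodes in the same block connect with probability p_in, nodes in
    different blocks with probability p_out."""
    block = [b for b, s in enumerate(sizes) for _ in range(s)]
    n = len(block)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < (p_in if block[i] == block[j] else p_out):
                adj[i].add(j)
                adj[j].add(i)
    return adj

def simulate_beliefs(sizes, p_in, p_out, utilities, steps, noise, seed=0):
    """Toy SBMC-style dynamic (illustrative assumptions): agents adopt the
    belief with the highest summed observed utility among their neighbors
    ("sum" rule), with occasional random exploration."""
    rng = random.Random(seed)
    adj = sbm_graph(sizes, p_in, p_out, rng)
    beliefs = {i: rng.randrange(len(utilities)) for i in adj}
    for _ in range(steps):
        i = rng.randrange(len(adj))
        if rng.random() < noise or not adj[i]:
            beliefs[i] = rng.randrange(len(utilities))  # explore
            continue
        scores = [0.0] * len(utilities)
        for nb in adj[i]:
            scores[beliefs[nb]] += utilities[beliefs[nb]]  # sum evidence
        beliefs[i] = max(range(len(scores)), key=scores.__getitem__)
    # group performance: average utility of the beliefs agents hold
    return sum(utilities[b] for b in beliefs.values()) / len(beliefs)
```

Varying `p_out` (out-group connectivity) and `noise` (decision stochasticity) in a sketch like this is what makes the framework's surprising interactions visible: more randomness can raise, not lower, average utility.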
Affiliation(s)
- Robert L Goldstone
- Department of Psychological and Brain Sciences, Indiana University
- Program in Cognitive Science, Indiana University
- Rachith Aiyappa
- Center for Complex Networks and Systems, Luddy School of Informatics, Computing, and Engineering, Indiana University
- Andy Edinger
- Program in Cognitive Science, Indiana University
- Center for Complex Networks and Systems, Luddy School of Informatics, Computing, and Engineering, Indiana University
5
Geers M, Swire-Thompson B, Lorenz-Spreen P, Herzog SM, Kozyreva A, Hertwig R. The Online Misinformation Engagement Framework. Curr Opin Psychol 2024; 55:101739. [PMID: 38091666] [DOI: 10.1016/j.copsyc.2023.101739]
Abstract
Research on online misinformation has evolved rapidly, but organizing its results and identifying open research questions is difficult without a systematic approach. We present the Online Misinformation Engagement Framework, which classifies people's engagement with online misinformation into four stages: selecting information sources, choosing what information to consume or ignore, evaluating the accuracy of the information and/or the credibility of the source, and judging whether and how to react to the information (e.g., liking or sharing). We outline entry points for interventions at each stage and pinpoint the two early stages-source and information selection-as relatively neglected processes that should be addressed to further improve people's ability to contend with misinformation.
Affiliation(s)
- Michael Geers
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Department of Psychology, Humboldt University of Berlin, Rudower Ch. 18, 12489 Berlin, Germany
- Briony Swire-Thompson
- Network Science Institute, Northeastern University, 177 Huntington Ave., Boston, MA, 02115, USA
- Institute for Quantitative Social Science, Harvard University, 1737 Cambridge St., Cambridge, MA, 02138, USA
- Philipp Lorenz-Spreen
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Stefan M Herzog
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Anastasia Kozyreva
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Ralph Hertwig
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
6
Hahn U, Merdes C, von Sydow M. Knowledge through social networks: Accuracy, error, and polarisation. PLoS One 2024; 19:e0294815. [PMID: 38170696] [PMCID: PMC10763946] [DOI: 10.1371/journal.pone.0294815]
Abstract
This paper examines the fundamental problem of testimony. Much of what we believe we know, we know in good part, or even entirely, through the testimony of others. The problem with testimony is that we often have very little on which to base estimates of the accuracy of our sources. Simulations with otherwise optimal agents examine the impact of this on the accuracy of our beliefs about the world. It is demonstrated both where social networks of information dissemination help and where they hinder. Most importantly, it is shown that both social networks and a common strategy for gauging the accuracy of our sources give rise to polarisation even for entirely accuracy-motivated agents. Crucially, these two factors interact, amplifying one another's negative consequences, and this side effect of communication in a social network increases with network size. This suggests a new causal mechanism by which social media may have fostered the increase in polarisation currently observed in many parts of the world.
Affiliation(s)
- Ulrike Hahn
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- MCMP, Ludwig-Maximilians-Universitaet, Munich, Germany
- Christoph Merdes
- MCMP, Ludwig-Maximilians-Universitaet, Munich, Germany
- Interdisciplinary Centre for Ethics, Jagiellonian University, Cracow, Poland
7
Li Y, Breithaupt F, Hills T, Lin Z, Chen Y, Siew CSW, Hertwig R. How cognitive selection affects language change. Proc Natl Acad Sci U S A 2024; 121:e2220898120. [PMID: 38150495] [PMCID: PMC10769849] [DOI: 10.1073/pnas.2220898120]
Abstract
Like biological species, words in language must compete to survive. Previously, it has been shown that language changes in response to cognitive constraints and over time becomes more learnable. Here, we use two complementary research paradigms to demonstrate how the survival of existing word forms can be predicted by psycholinguistic properties that impact language production. In the first study, we analyzed the survival of words in the context of interpersonal communication. We analyzed data from a large-scale serial-reproduction experiment in which stories were passed down along a transmission chain over multiple participants. The results show that words that are acquired earlier in life, more concrete, more arousing, and more emotional are more likely to survive retellings. We reason that the same trend might scale up to language evolution over multiple generations of natural language users. If that is the case, the same set of psycholinguistic properties should also account for the change of word frequency in natural language corpora over historical time. That is what we found in two large historical-language corpora (Study 2): Early acquisition, concreteness, and high arousal all predict increasing word frequency over the past 200 y. However, the two studies diverge with respect to the impact of word valence and word length, which we take up in the discussion. By bridging micro-level behavioral preferences and macro-level language patterns, our investigation sheds light on the cognitive mechanisms underlying word competition.
Affiliation(s)
- Ying Li
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin 14195, Germany
- Fritz Breithaupt
- Department of Germanic Studies, Indiana University, Bloomington, IN
- Program of Cognitive Science, Indiana University, Bloomington, IN
- Thomas Hills
- Department of Psychology, University of Warwick, Coventry CV4 7AL, United Kingdom
- Ziyong Lin
- Center for Life Span Psychology, Max Planck Institute for Human Development, Berlin 14195, Germany
- Yanyan Chen
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Cynthia S. W. Siew
- Department of Psychology, National University of Singapore, Singapore 119077, Singapore
- Ralph Hertwig
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin 14195, Germany
8
Martire KA, Robson SG, Drew M, Nicholls K, Faasse K. Thinking false and slow: Implausible beliefs and the Cognitive Reflection Test. Psychon Bull Rev 2023; 30:2387-2396. [PMID: 37369977] [PMCID: PMC10728225] [DOI: 10.3758/s13423-023-02321-2]
Abstract
Why do people believe implausible claims like conspiracy theories, pseudoscience, and fake news? Past studies using the Cognitive Reflection Test (CRT) suggest that implausible beliefs may result from an unwillingness to effortfully process information (i.e., cognitive miserliness). Our analysis (N = 664) tests this account by comparing CRT performance (total score, number and proportion of incorrect intuitive responses, and completion time) for endorsers and non-endorsers of implausible claims. Our results show that endorsers performed worse than non-endorsers on the CRT, but they took significantly longer to answer the questions and did not make proportionally more intuitive mistakes. Endorsers therefore appear to process information effortfully but nonetheless score lower on the CRT. Poorer overall CRT performance may not necessarily indicate that those who endorse implausible beliefs have a more reflexive, intuitive, or non-analytical cognitive style than non-endorsers.
Affiliation(s)
- Kristy A Martire
- The University of New South Wales, Kensington, NSW, 2052, Australia
- Samuel G Robson
- The University of New South Wales, Kensington, NSW, 2052, Australia
- Manisara Drew
- The University of New South Wales, Kensington, NSW, 2052, Australia
- Kate Nicholls
- The University of New South Wales, Kensington, NSW, 2052, Australia
- Kate Faasse
- The University of New South Wales, Kensington, NSW, 2052, Australia
9
Yan HY, Yang KC, Shanahan J, Menczer F. Exposure to social bots amplifies perceptual biases and regulation propensity. Sci Rep 2023; 13:20707. [PMID: 38001150] [PMCID: PMC10673860] [DOI: 10.1038/s41598-023-46630-x]
Abstract
Automated accounts on social media that impersonate real users, often called "social bots," have received a great deal of attention from academia and the public. Here we present experiments designed to investigate public perceptions and policy preferences about social bots, in particular how they are affected by exposure to bots. We find that before exposure, participants have some biases: they tend to overestimate the prevalence of bots and see others as more vulnerable to bot influence than themselves. These biases are amplified after bot exposure. Furthermore, exposure tends to impair judgment of bot-recognition self-efficacy and increase propensity toward stricter bot-regulation policies among participants. Decreased self-efficacy and increased perceptions of bot influence on others are significantly associated with these policy preference changes. We discuss the relationship between perceptions about social bots and growing dissatisfaction with the polluted social media environment.
Affiliation(s)
- Harry Yaojun Yan
- The Media School, Indiana University, Bloomington, IN, 47405, USA
- Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
- Observatory on Social Media, Indiana University, Bloomington, IN, 47408, USA
- Kai-Cheng Yang
- Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
- Observatory on Social Media, Indiana University, Bloomington, IN, 47408, USA
- James Shanahan
- The Media School, Indiana University, Bloomington, IN, 47405, USA
- Observatory on Social Media, Indiana University, Bloomington, IN, 47408, USA
- Filippo Menczer
- Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
- Observatory on Social Media, Indiana University, Bloomington, IN, 47408, USA
10
Adams Z, Osman M, Bechlivanidis C, Meder B. (Why) Is Misinformation a Problem? Perspectives on Psychological Science 2023; 18:1436-1463. [PMID: 36795592] [PMCID: PMC10623619] [DOI: 10.1177/17456916221141344]
Abstract
In the last decade there has been a proliferation of research on misinformation. One important aspect of this work that receives less attention than it should is exactly why misinformation is a problem. To adequately address this question, we must first look to its speculated causes and effects. We examined different disciplines (computer science, economics, history, information science, journalism, law, media, politics, philosophy, psychology, sociology) that investigate misinformation. The consensus view points to advancements in information technology (e.g., the Internet, social media) as a main cause of the proliferation and increasing impact of misinformation, with a variety of illustrations of the effects. We critically analyzed both issues. As to the effects, misbehaviors have not yet been reliably demonstrated empirically to be the outcome of misinformation; a tendency to mistake correlation for causation may have a hand in that perception. As to the cause, advancements in information technologies enable, as well as reveal, multitudes of interactions that represent significant deviations from ground truths through people's new way of knowing (intersubjectivity). This, we argue, is illusory when understood in light of historical epistemology. Both doubts we raise are used to consider the cost to established norms of liberal democracy that comes from efforts to target the problem of misinformation.
Affiliation(s)
- Zoë Adams
- Department of Linguistics, School of Languages, Linguistics and Film, Queen Mary University of London
- Magda Osman
- Centre for Science and Policy, University of Cambridge
- Judge Business School, University of Cambridge
- Leeds Business School, University of Leeds
- Björn Meder
- Department of Psychology, Health and Medical University, Potsdam, Germany
- Max Planck Research Group iSearch, Max Planck Institute for Human Development, Berlin, Germany
11
Buder J, Zimmermann A, Buttliere B, Rabl L, Vogel M, Huff M. Online Interaction Turns the Congeniality Bias Into an Uncongeniality Bias. Psychol Sci 2023; 34:1055-1068. [PMID: 37722137] [DOI: 10.1177/09567976231194590]
Abstract
Online phenomena like echo chambers and polarization are believed to be driven by humans' penchant for selectively exposing themselves to attitudinally congenial content. However, if like-minded content were the only predictor of online behavior, heated debate and flaming on the Internet would hardly occur. Research has overlooked how online behavior changes when people are given an opportunity to reply to dissenters. Three experiments (total N = 320; convenience student samples from Germany) and an internal meta-analysis show that in a discussion-forum setting where participants can reply to earlier comments, larger cognitive conflict between participant attitude and comment attitude predicts a higher likelihood of responding (uncongeniality bias). When the discussion climate was friendly (vs. oppositional) to the views of participants, the uncongeniality bias was more pronounced and was also associated with attitude polarization. These results suggest that belief polarization on social media may be driven not only by congeniality but also by conflict.
Affiliation(s)
- Jürgen Buder
- Perception and Action Lab, Leibniz-Institut für Wissensmedien, Tübingen
- Anja Zimmermann
- Perception and Action Lab, Leibniz-Institut für Wissensmedien, Tübingen
- Department of Research and Transfer, Technical University of Darmstadt
- Brett Buttliere
- Perception and Action Lab, Leibniz-Institut für Wissensmedien, Tübingen
- Faculty of Humanities, Nicolaus Copernicus University
- Lisa Rabl
- Perception and Action Lab, Leibniz-Institut für Wissensmedien, Tübingen
- Moritz Vogel
- Perception and Action Lab, Leibniz-Institut für Wissensmedien, Tübingen
- Markus Huff
- Perception and Action Lab, Leibniz-Institut für Wissensmedien, Tübingen
- Department of Psychology, University of Tübingen
12
He T, Breithaupt F, Kübler S, Hills TT. Quantifying the retention of emotions across story retellings. Sci Rep 2023; 13:2448. [PMID: 36774370] [PMCID: PMC9922315] [DOI: 10.1038/s41598-023-29178-8]
Abstract
Story retelling is a fundamental medium for the transmission of information between individuals and among social groups. Besides conveying factual information, stories also contain affective information. Though natural language processing techniques have advanced considerably in recent years, the extent to which machines can be trained to identify and track emotions across retellings is unknown. This study leverages the powerful RoBERTa model, based on a transformer architecture, to derive emotion-rich story embeddings from a unique dataset of 25,728 story retellings. The initial stories were centered around five emotional events (joy, sadness, embarrassment, risk, and disgust-though the stories did not contain these emotion words) and three intensities (high, medium, and low). Our results indicate (1) that RoBERTa can identify emotions in stories it was not trained on, (2) that the five emotions and their intensities are preserved when they are transmitted in the form of retellings, (3) that the emotions in stories are increasingly well-preserved as they experience additional retellings, and (4) that among the five emotions, risk and disgust are least well-preserved, compared with joy, sadness, and embarrassment. This work is a first step toward quantifying situation-driven emotions with machines.
Affiliation(s)
- Tianyou He
- Department of Psychology, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL, UK
- Fritz Breithaupt
- Department of Germanic Studies, Indiana University, Bloomington, USA
- Cognitive Science Program, Indiana University, Bloomington, USA
- Sandra Kübler
- Cognitive Science Program, Indiana University, Bloomington, USA
- Department of Psychology, University of Warwick, Gibbet Hill Road, Coventry, UK
- Thomas T Hills
- Department of Psychology, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL, UK
13
Lin H, Pennycook G, Rand DG. Thinking more or thinking differently? Using drift-diffusion modeling to illuminate why accuracy prompts decrease misinformation sharing. Cognition 2023; 230:105312. [PMID: 36334467] [DOI: 10.1016/j.cognition.2022.105312]
Abstract
Recent experiments have found that prompting people to think about accuracy reduces misinformation sharing intentions. The process by which this effect operates, however, remains unclear. Do accuracy prompts cause people to "stop and think," increasing deliberation? Or do they change what people think about, drawing attention to accuracy? Since these two accounts predict the same behavioral outcomes (i.e., increased sharing discernment following a prompt), we used computational modeling of sharing decisions with response time data, as well as out-of-sample ratings of headline perceived accuracy, to test the accounts' divergent predictions across six studies (N = 5633). The results suggest that accuracy prompts do not increase the amount of deliberation people engage in. Instead, they increase the weight participants put on accuracy while deliberating. By showing that prompting people makes them think better even without thinking more, our results challenge common dual-process interpretations of the accuracy-prompt effect. Our findings also highlight the importance of understanding how social media distracts people from considering accuracy, and provide evidence for scalable interventions that redirect people's attention.
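In drift-diffusion terms, "thinking more" corresponds to a higher decision bound (more evidence gathered before responding), whereas "thinking differently" corresponds to a changed drift rate (how strongly accuracy pulls the accumulating evidence). A minimal single-trial simulation, with illustrative parameter values rather than the paper's fitted model:

```python
import random

def ddm_trial(drift, bound, dt=0.01, noise=1.0, max_time=10.0, rng=None):
    """Simulate one drift-diffusion trial: noisy evidence accumulates until
    it crosses +bound (e.g., "share") or -bound ("don't share").
    Returns (choice, response_time). Parameter values are illustrative."""
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5  # scale noise with the time step
    while abs(x) < bound and t < max_time:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return (1 if x > 0 else 0), t
```

Both a higher bound and a stronger accuracy-driven drift can raise sharing discernment, but they make different response-time predictions, which is the divergence that modeling choices together with response times can exploit.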
Affiliation(s)
- Hause Lin
- Hill/Levene Schools of Business, University of Regina, Canada
- Sloan School, Massachusetts Institute of Technology, USA
- Gordon Pennycook
- Hill/Levene Schools of Business, University of Regina, Canada
- Department of Psychology, University of Regina, Canada
- David G Rand
- Sloan School, Massachusetts Institute of Technology, USA
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA
14
Sultan M, Tump AN, Geers M, Lorenz-Spreen P, Herzog SM, Kurvers RHJM. Time pressure reduces misinformation discrimination ability but does not alter response bias. Sci Rep 2022; 12:22416. [PMID: 36575232] [PMCID: PMC9794823] [DOI: 10.1038/s41598-022-26209-8]
Abstract
Many parts of our social lives are speeding up, a process known as social acceleration. How social acceleration impacts people's ability to judge the veracity of online news, and ultimately the spread of misinformation, is largely unknown. We examined the effects of accelerated online dynamics, operationalised as time pressure, on online misinformation evaluation. Participants judged the veracity of true and false news headlines with or without time pressure. We used signal detection theory to disentangle the effects of time pressure on discrimination ability and response bias, as well as on four key determinants of misinformation susceptibility: analytical thinking, ideological congruency, motivated reflection, and familiarity. Time pressure reduced participants' ability to accurately distinguish true from false news (discrimination ability) but did not alter their tendency to classify an item as true or false (response bias). Key drivers of misinformation susceptibility, such as ideological congruency and familiarity, remained influential under time pressure. Our results highlight the dangers of social acceleration online: People are less able to accurately judge the veracity of news online, while prominent drivers of misinformation susceptibility remain present. Interventions aimed at increasing deliberation may thus be fruitful avenues to combat online misinformation.
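The two signal detection quantities the study separates can be computed directly from a participant's classification counts. This is the standard textbook decomposition (with a common log-linear correction), not the authors' analysis code:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Decompose true/false-news judgments into signal detection measures:
    d' (discrimination ability: separation between the evidence
    distributions for true and false news) and criterion c (response bias:
    overall tendency to answer "true"). Adding 0.5 to each cell (a
    log-linear correction) keeps z-scores finite at rates of 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

In these terms, the reported result is a smaller d' under time pressure while c stays roughly unchanged.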
Affiliation(s)
- Mubashir Sultan
- Max Planck Institute for Human Development, Center for Adaptive Rationality, Berlin 14195, Germany
- Department of Psychology, Humboldt University of Berlin, 12489 Berlin, Germany
- Alan N Tump
- Max Planck Institute for Human Development, Center for Adaptive Rationality, Berlin 14195, Germany
- Technical University of Berlin, Exzellenzcluster Science of Intelligence, Berlin 10587, Germany
- Michael Geers
- Max Planck Institute for Human Development, Center for Adaptive Rationality, Berlin 14195, Germany
- Department of Psychology, Humboldt University of Berlin, 12489 Berlin, Germany
- Philipp Lorenz-Spreen
- Max Planck Institute for Human Development, Center for Adaptive Rationality, Berlin 14195, Germany
- Stefan M Herzog
- Max Planck Institute for Human Development, Center for Adaptive Rationality, Berlin 14195, Germany
- Ralf H J M Kurvers
- Max Planck Institute for Human Development, Center for Adaptive Rationality, Berlin 14195, Germany
- Technical University of Berlin, Exzellenzcluster Science of Intelligence, Berlin 10587, Germany
Collapse
|
15
|
Salvi C, Barr N, Dunsmoor JE, Grafman J. Insight Problem Solving Ability Predicts Reduced Susceptibility to Fake News, Bullshit, and Overclaiming. THINKING & REASONING 2022; 29:760-784. [PMID: 37982007 PMCID: PMC10655953 DOI: 10.1080/13546783.2022.2146191] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2021] [Accepted: 10/28/2022] [Indexed: 11/27/2022]
Abstract
The information humans are exposed to has grown exponentially, placing increased demands on our information selection strategies and reducing the time available for fact-checking and critical thinking. Prior research shows that problem-solving ability (traditionally measured using the Cognitive Reflection Test, CRT) negatively correlates with believing false information. We argue that this result is specifically related to insight problem solving. Solutions via insight are the result of parallel processing, characterized by the filtering of external noise, and, unlike cognitively controlled thinking, do not suffer from the cognitive overload associated with processing multiple sources of information. We administered the Compound Remote Associate Test (problems used to investigate insight problem solving), the CRT, 20 fake and real news headlines, and the bullshit and overclaiming scales to a sample of 61 participants. Results show that insight problem solving predicts better identification of fake news and bullshit (over and above traditional measures, i.e., the CRT) and is associated with reduced overclaiming. These results have implications for understanding individual differences in susceptibility to believing false information.
Collapse
Affiliation(s)
- Carola Salvi
- Department of Psychiatry and Behavioral Sciences, University of Texas at Austin, Austin, TX, USA
- Department of Psychology and Social Sciences, John Cabot University, Rome, Italy
| | - Nathaniel Barr
- School of Humanities and Creativity, Sheridan College, ON, Canada
| | - Joseph E. Dunsmoor
- Department of Psychiatry and Behavioral Sciences, University of Texas at Austin, Austin, TX, USA
| | - Jordan Grafman
- Shirley Ryan Ability Lab, Chicago, IL, USA
- Department of Physical Medicine and Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
| |
Collapse
|
16
|
Malthouse E. Confirmation bias and vaccine-related beliefs in the time of COVID-19. J Public Health (Oxf) 2022:6833492. [DOI: 10.1093/pubmed/fdac128] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 06/01/2022] [Accepted: 10/11/2022] [Indexed: 11/21/2022] Open
Abstract
Background
In recent history mass vaccination has proved essential to dealing with pandemics. However, the effectiveness of a vaccine depends on the number of people willing to take it. One approach to encouraging uptake is to publish information about safety and effectiveness. But confirmation bias research in other domains suggests that people may evaluate this information through the lens of their existing beliefs.
Methods
This study used a simple 2 × 2 design to investigate whether people’s (n = 3899) existing beliefs influenced their ability to correctly evaluate data from a fictional trial presented in a frequency table. Treatment groups saw different trial outcomes (intervention effective versus ineffective and trial related versus unrelated to vaccines).
Results
Results provided robust evidence for confirmation bias in the domain of vaccines: people made systematic errors (P < 0.01) when evaluating evidence that was inconsistent with their prior beliefs. This pattern emerged among people with both pro-vaccination and anti-vaccination attitudes. Errors were attributed to confirmation bias because no such differences were detected when participants evaluated data unrelated to vaccines.
Conclusions
People are prone to misinterpreting evidence about vaccines in ways that reflect their underlying beliefs. Confirmation bias is an important consideration for vaccine communication.
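The kind of 2 × 2 frequency-table judgment used in the study can be illustrated with invented counts: correct evaluation means comparing outcome rates across conditions, not raw counts, and tables where a larger absolute count coexists with a lower rate are exactly the cases where prior beliefs drive errors. This is a hypothetical sketch, not the study's materials.

```python
# Hypothetical sketch of a 2x2 covariation judgment: an intervention is
# effective when the improvement *rate* is higher with it than without.
# All counts below are invented, not the study's materials.

def is_effective(improved_tx, not_improved_tx, improved_ctrl, not_improved_ctrl):
    rate_tx = improved_tx / (improved_tx + not_improved_tx)
    rate_ctrl = improved_ctrl / (improved_ctrl + not_improved_ctrl)
    return rate_tx > rate_ctrl

# A table where raw counts mislead: the treatment row has more
# improvers (200 vs 50), yet the improvement rate favors control.
effective = is_effective(200, 100, 50, 20)   # 0.67 vs 0.71 -> False
```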
Collapse
Affiliation(s)
- Eugene Malthouse
- Department of Psychology, University of Warwick, Coventry CV4 7AL, UK
| |
Collapse
|
17
|
Moore R, Hills T. The evolution of imagination and the adaptive value of imaginary worlds. Behav Brain Sci 2022; 45:e288. [PMID: 36396390 DOI: 10.1017/s0140525x2100217x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
Characterizing the cultural evolution of imaginary worlds as a hedonic but non-adaptive exaptation from evolved exploratory tendencies, Dubourg and Baumard defend too narrow a conception of the adaptive evolution of imaginary worlds. Imagination and its imaginary worlds are ancient and adaptive, allowing deliberation over actions, consequences, and futures worth aspiring to, often engendering the world we see around us.
Collapse
Affiliation(s)
- Richard Moore
- Department of Philosophy, Social Sciences Building, University of Warwick, Coventry, CV4 7AL, UK; https://warwick.ac.uk/fac/soc/philosophy/people/summaries/moore/
| | - Thomas Hills
- Department of Psychology, University of Warwick, Coventry, CV4 7AL, UK; https://warwick.ac.uk/fac/sci/psych/people/thills/
| |
Collapse
|
18
|
Loss aversion, the endowment effect, and gain-loss framing shape preferences for noninstrumental information. Proc Natl Acad Sci U S A 2022; 119:e2202700119. [PMID: 35972966 PMCID: PMC9407664 DOI: 10.1073/pnas.2202700119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
We often talk about interacting with information as we would with a physical good (e.g., "consuming content") and describe our attachment to personal beliefs in the same way as our attachment to personal belongings (e.g., "holding on to" or "letting go of" our beliefs). But do we in fact value information the way we do objects? The valuation of money and material goods has been extensively researched, but surprisingly few insights from this literature have been applied to the study of information valuation. This paper demonstrates that two fundamental features of how we value money and material goods embodied in Prospect Theory, namely loss aversion and different risk preferences for gains versus losses, also hold true for information, even when it has no material value. Study 1 establishes loss aversion for noninstrumental information by showing that people are less likely to choose a gamble when the same outcome is framed as a loss (rather than a gain) of information. Study 2 shows that people exhibit the endowment effect for noninstrumental information, and so value information more simply by virtue of "owning" it. Study 3 provides a conceptual replication of the classic "Asian Disease" gain-loss pattern of risk preferences, but with facts instead of human lives, thereby also documenting a gain-loss framing effect for noninstrumental information. These findings represent a critical step in building a theoretical analogy between information and objects, and provide a useful perspective on why we often resist changing (or losing) our beliefs.
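The two Prospect Theory features the paper tests, loss aversion and the gain/loss asymmetry of risk preferences, are captured by the Kahneman-Tversky value function; a minimal sketch with the conventional textbook parameter estimates (α = β = 0.88, λ = 2.25, not parameters fitted in this paper) follows.

```python
# Sketch of the Kahneman-Tversky Prospect Theory value function with
# conventional parameter estimates (alpha = beta = 0.88, lambda = 2.25);
# these are textbook values, not parameters fitted in the paper.

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Concave for gains, convex and steeper for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Loss aversion: losing one unit outweighs gaining one unit.
gain = prospect_value(1.0)    # 1.0
loss = prospect_value(-1.0)   # -2.25
```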
Collapse
|
19
|
Samuel J, Kashyap R, Samuel Y, Pelaez A. Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations. INTERNATIONAL JOURNAL OF INFORMATION MANAGEMENT 2022. [DOI: 10.1016/j.ijinfomgt.2022.102505] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
|
20
|
Cognitive Networks Extract Insights on COVID-19 Vaccines from English and Italian Popular Tweets: Anticipation, Logistics, Conspiracy and Loss of Trust. BIG DATA AND COGNITIVE COMPUTING 2022. [DOI: 10.3390/bdcc6020052] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Monitoring social discourse about COVID-19 vaccines is key to understanding how large populations perceive vaccination campaigns. This work reconstructs how popular and trending posts framed COVID-19 vaccines semantically and emotionally on Twitter. We achieve this by merging natural language processing, cognitive network science and AI-based image analysis. We focus on 4765 unique popular tweets in English or Italian about COVID-19 vaccines between December 2020 and March 2021. One popular English tweet in our data set was liked around 495,000 times, highlighting how popular tweets can cognitively affect large parts of the population. We investigate both text and multimedia content in tweets and build a cognitive network of syntactic/semantic associations in messages, including emotional cues and pictures. This network representation indicates how online users linked ideas in social discourse and framed vaccines along specific semantic/emotional content. The English semantic frame of “vaccine” was highly polarised between trust/anticipation (towards the vaccine as a scientific asset saving lives) and anger/sadness (mentioning critical issues with dose administering). Semantic associations between “vaccine,” “hoax” and conspiratorial jargon indicated the persistence of conspiracy theories about vaccines in extremely popular English posts; interestingly, these were absent in Italian messages. Popular tweets with images of people wearing face masks used language that lacked the trust and joy found in tweets showing people without masks, a difference that indicates a negative effect attributed to face-covering in social discourse. Behavioural analysis revealed a tendency for users to share content eliciting joy, sadness and disgust, and to like sad messages less; both patterns indicate an interplay between emotions and content diffusion beyond sentiment. After its suspension in mid-March 2021, “AstraZeneca” was associated with trustful language driven by experts. After the deaths of a small number of vaccinated people in mid-March, popular Italian tweets framed “vaccine” by replacing earlier levels of trust with deep sadness. Our results stress how cognitive networks and innovative multimedia processing open new ways for reconstructing online perceptions about vaccines and trust.
Collapse
|
21
|
Wells JS, Scheibein F. Global pandemics, conflict and networks - the dynamics of international instability, infodemics and health care in the 21st century. J Res Nurs 2022; 27:291-300. [PMID: 35730049 PMCID: PMC9204121 DOI: 10.1177/17449871221090778] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Background
The COVID-19 global pandemic is a harbinger of a future destabilised world driven by climate change, rapid mass migration, food insecurity, state failures and epidemics. A significant feature fuelling this destabilised world is networked misinformation and disinformation (referred to as an infodemic), particularly in the area of health.
Aims
To describe the interactive dynamics of climate change, mass population movement, famine, state failure and epidemic disease; to analyse developments over 2020–2021; and to discuss their relationship to an infodemic about disease and public health responses, and how this should be addressed in the future.
Methods
A narrative review guided by the concept of ‘the Five Horsemen’ of epochal change and by network theory.
Results
Concepts of epidemiology are reflected in how misinformation spreads around the world. As a result, health care services and personnel face threats that make it more difficult to manage pan-global health risks effectively.
Conclusions
Health care professionals, at both an individual and an organisational level, need to counter infodemic networks. Health care professionals who consistently spread misinformation should have their licence to practise withdrawn.
Collapse
Affiliation(s)
- John Sg Wells
- Fellow, School of Health Sciences, Waterford Institute of Technology, Waterford, Ireland
| | - Florian Scheibein
- Research Assistant, School of Health Sciences, Waterford Institute of Technology, Waterford, Ireland
| |
Collapse
|
22
|
Lewandowsky S, Pomerantsev P. Technology and democracy: a paradox wrapped in a contradiction inside an irony. MEMORY, MIND & MEDIA 2021; 1:mem.2021.7. [PMID: 36415623 PMCID: PMC7613775 DOI: 10.1017/mem.2021.7] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Democracy is in retreat around the globe. Many commentators have blamed the Internet for this development, whereas others have celebrated the Internet as a tool for liberation, with each opinion being buttressed by supporting evidence. We try to resolve this paradox by reviewing some of the pressure points that arise between human cognition and the online information architecture, and their fallout for the well-being of democracy. We focus on the role of the attention economy, which has monetised dwell time on platforms, and the role of algorithms that satisfy users' presumed preferences. We further note the inherent asymmetry in power between platforms and users that arises from these pressure points, and we conclude by sketching out the principles of a new Internet with democratic credentials.
Collapse
|
23
|
Shrestha A, Spezzano F. Characterizing and predicting fake news spreaders in social networks. INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS 2021. [DOI: 10.1007/s41060-021-00291-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
24
|
Abstract
The spread of online conspiracy theories represents a serious threat to society. To understand the content of conspiracies, here we present the language of conspiracy (LOCO) corpus. LOCO is an 88-million-token corpus composed of topic-matched conspiracy (N = 23,937) and mainstream (N = 72,806) documents harvested from 150 websites. Mimicking internet user behavior, documents were identified using Google by crossing a set of seed phrases with a set of websites. LOCO is hierarchically structured, meaning that each document is cross-nested within websites (N = 150) and topics (N = 600, on three different resolutions). A rich set of linguistic features (N = 287) and metadata includes upload date, measures of social media engagement, measures of website popularity, size, and traffic, as well as political bias and factual reporting annotations. We explored LOCO's features from different perspectives showing that documents track important societal events through time (e.g., Princess Diana's death, Sandy Hook school shooting, coronavirus outbreaks), while patterns of lexical features (e.g., deception, power, dominance) overlap with those extracted from online social media communities dedicated to conspiracy theories. By computing within-subcorpus cosine similarity, we derived a subset of the most representative conspiracy documents (N = 4,227), which, compared to other conspiracy documents, display prototypical and exaggerated conspiratorial language and are more frequently shared on Facebook. We also show that conspiracy website users navigate to websites via more direct means than mainstream users, suggesting confirmation bias. LOCO and related datasets are freely available at https://osf.io/snpcg/ .
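The within-subcorpus cosine-similarity step used to extract representative documents can be illustrated on toy documents with plain bag-of-words vectors. The corpus and features below are invented, and the paper's pipeline is far richer; this only sketches the idea that the most representative document is the one most similar to the rest of its subcorpus.

```python
# Toy illustration of within-subcorpus cosine similarity with plain
# bag-of-words vectors; documents and features here are invented.
import math
from collections import Counter

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["secret plan hidden by elites",
        "elites hide the secret plan from us",
        "weather today is mild and sunny"]
vecs = [Counter(d.split()) for d in docs]

# Mean similarity of each document to the rest of the (sub)corpus; the
# highest-scoring document is taken as the most representative.
scores = [sum(cosine(v, w) for j, w in enumerate(vecs) if j != i) / (len(vecs) - 1)
          for i, v in enumerate(vecs)]
most_representative = docs[max(range(len(docs)), key=scores.__getitem__)]
```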
Collapse
|
25
|
Wang P, Shi H, Wu X, Jiao L. Sentiment Analysis of Rumor Spread Amid COVID-19: Based on Weibo Text. Healthcare (Basel) 2021; 9:healthcare9101275. [PMID: 34682955 PMCID: PMC8535399 DOI: 10.3390/healthcare9101275] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 09/22/2021] [Accepted: 09/22/2021] [Indexed: 11/30/2022] Open
Abstract
(1) Background: in early 2020, COVID-19 broke out. Driven by people’s psychology of conformity, panic, group polarization, etc., various rumors appeared and spread wildly, and the Internet became a hotbed of rumors. (2) Methods: the study selected Weibo as the research medium, using topic models, time series analysis, sentiment analysis, and Granger causality testing to analyze social media texts related to COVID-19 rumors. (3) Results: in study 1, we obtained 21 topics related to “COVID-19 rumors” and “outbreak rumors” after conducting topic model analysis on Weibo texts; in study 2, we explored the emotional changes of netizens before and after rumor-dispelling information was released and found that people’s positive emotions first declined and then rose; in study 3, we explored the emotional changes of netizens before and after the “Wuhan lockdown” event and found that the positive sentiment of people in non-Wuhan areas increased, while the negative sentiment of people in Wuhan increased; in study 4, we studied the relationship between rumor spread and emotional polarity and found that negative sentiment and rumor spread were causally interrelated. (4) Conclusion: these findings could help us to intuitively understand the impact of rumor spread on people’s emotions during the COVID-19 pandemic and help the government take measures to reduce panic.
Collapse
|
26
|
Abstract
Social media platforms attempting to curb abuse and misinformation have been accused of political bias. We deploy neutral social bots who start following different news sources on Twitter, and track them to probe distinct biases emerging from platform mechanisms versus user interactions. We find no strong or consistent evidence of political bias in the news feed. Despite this, the news and information to which U.S. Twitter users are exposed depend strongly on the political leaning of their early connections. The interactions of conservative accounts are skewed toward the right, whereas liberal accounts are exposed to moderate content shifting their experience toward the political center. Partisan accounts, especially conservative ones, tend to receive more followers and follow more automated accounts. Conservative accounts also find themselves in denser communities and are exposed to more low-credibility content.
Collapse
|
27
|
Stella M. Cognitive Network Science for Understanding Online Social Cognitions: A Brief Review. Top Cogn Sci 2021; 14:143-162. [PMID: 34118113 DOI: 10.1111/tops.12551] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 05/24/2021] [Accepted: 05/24/2021] [Indexed: 11/29/2022]
Abstract
Social media are digitalizing massive amounts of users' cognitions in terms of timelines and emotional content. Such Big Data opens unprecedented opportunities for investigating cognitive phenomena like perception, personality, and information diffusion but requires suitable interpretable frameworks. Since social media data come from users' minds, worthy candidates for this challenge are cognitive networks, models of cognition giving structure to mental conceptual associations. This work outlines how cognitive network science can open new, quantitative ways for understanding cognition through online media like: (i) reconstructing how users semantically and emotionally frame events with contextual knowledge unavailable to machine learning, (ii) investigating conceptual salience/prominence through knowledge structure in social discourse; (iii) studying users' personality traits like openness-to-experience, curiosity, and creativity through language in posts; (iv) bridging cognitive/emotional content and social dynamics via multilayer networks comparing the mindsets of influencers and followers. These advancements combine cognitive-, network- and computer science to understand cognitive mechanisms in both digital and real-world settings but come with limitations concerning representativeness, individual variability, and data integration. Such aspects are discussed along with the ethical implications of manipulating sociocognitive data. In the future, reading cognitions through networks and social media can expose cognitive biases amplified by online platforms and relevantly inform policy-making, education, and markets about complex cognitive trends.
Collapse
Affiliation(s)
- Massimo Stella
- CogNosco Lab, Department of Computer Science, University of Exeter; Institute for Data Science and Artificial Intelligence, University of Exeter, UK
| |
Collapse
|
28
|
Caled D, Silva MJ. Digital media and misinformation: An outlook on multidisciplinary strategies against manipulation. JOURNAL OF COMPUTATIONAL SOCIAL SCIENCE 2021; 5:123-159. [PMID: 34075349 PMCID: PMC8156576 DOI: 10.1007/s42001-021-00118-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Accepted: 03/31/2021] [Indexed: 06/12/2023]
Abstract
This review discusses the dynamic mechanisms of misinformation creation and spreading used in social networks. It includes: (1) a conceptualization of misinformation and related terms, such as rumors and disinformation; (2) an analysis of the cognitive vulnerabilities that hinder the correction of the effects of an inaccurate narrative already assimilated; and (3) an interdisciplinary discussion on different strategies for coping with misinformation. The discussion encompasses journalistic, educational, governmental and computational viewpoints on the topic. The review also surveys how digital platforms handle misinformation and gives an outlook on opportunities to address it in light of the presented viewpoints.
Collapse
Affiliation(s)
- Danielle Caled
- INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, R. Alves Redol 9, 1000-029 Lisbon, Portugal
| | - Mário J. Silva
- INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, R. Alves Redol 9, 1000-029 Lisbon, Portugal
| |
Collapse
|
29
|
Houdek P, Koblovský P, Vranka M. The Challenge of Human Psychology to Effective Management of the COVID-19 Pandemic. SOCIETY 2021; 58:131-134. [PMID: 33967360 PMCID: PMC8090512 DOI: 10.1007/s12115-021-00575-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
When it comes to the nature of the COVID-19 pandemic and the effectiveness of measures against the disease, many citizens worldwide do not trust their governments or health authorities. This brief essay discusses several psychological mechanisms which, under certain conditions, lead people to ignore important sources of information and hinder effective management of the epidemic. The paper shows that understanding psychological mechanisms, such as information neglect, cognitive dissonance, psychological reactance, and, in general, the diversity of people's thinking styles, may help leaders design more effective government communications.
Collapse
Affiliation(s)
- Petr Houdek
- Faculty of Business Administration, Prague University of Economics and Business, Prague, Czechia
- Faculty of Social Sciences, Charles University, Prague, Czechia
| | - Petr Koblovský
- Faculty of Social Sciences, Charles University, Prague, Czechia
- Center of Behavioral Experiments (CEBEX), Prague, Czechia
- Institute for Behavioral and Economic Studies (INBES), Prague, Czechia
| | - Marek Vranka
- Faculty of Social Sciences, Charles University, Prague, Czechia
| |
Collapse
|
30
|
Asatani K, Yamano H, Sakaki T, Sakata I. Dense and influential core promotion of daily viral information spread in political echo chambers. Sci Rep 2021; 11:7491. [PMID: 33820918 PMCID: PMC8021571 DOI: 10.1038/s41598-021-86750-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 03/05/2021] [Indexed: 02/01/2023] Open
Abstract
Despite the intensive study of the viral spread of fake news in political echo chambers (ECs) on social networking services (SNSs), little is known regarding the underlying structure of the daily information spread in these ECs. Moreover, the effect of SNSs on opinion polarisation is still unclear in terms of pluralistic information access or selective exposure to opinions in an SNS. In this study, we confirmed the steady, highly independent nature of left- and right-leaning ECs, both of which are composed of approximately 250,000 users, from a year-long reply/retweet network of 42 million Japanese Twitter users. We found that both communities have similarly efficient information spreading networks with densely connected and core-periphery structures. Core nodes resonate in the early stages of information cascades, and unilaterally transmit information to peripheral nodes. Each EC has resonant core users who amplify and steadily spread information to a quarter of a million users. In addition, we confirmed the existence of extremely aggressive users of ECs who co-reply/retweet each other. The connection between these users and top influencers suggests that the extreme opinions of the former group affect the entire community through the top influencers.
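The core-periphery structure described above is commonly operationalised via k-core decomposition: nodes peeled only at high k form the densely connected core, while nodes peeled early form the periphery. Below is a minimal stdlib sketch on a toy graph, not the paper's 250,000-user networks or its exact method.

```python
# Toy k-core sketch of a core-periphery structure (stdlib only; the
# paper's networks hold ~250k users each and its method is richer).
from collections import defaultdict

def core_numbers(edges):
    """Standard peeling: repeatedly remove the lowest-degree nodes; a
    node's core number is the level k at which it gets peeled."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    core, nodes = {}, set(adj)
    while nodes:
        k = min(len(adj[n] & nodes) for n in nodes)
        changed = True
        while changed:  # peel everything that falls to degree <= k
            changed = False
            for n in list(nodes):
                if len(adj[n] & nodes) <= k:
                    core[n] = k
                    nodes.discard(n)
                    changed = True
    return core

# Triangle core (a, b, c) with a pendant periphery node d:
cores = core_numbers([("a", "b"), ("b", "c"), ("c", "a"), ("a", "d")])
```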
Collapse
Affiliation(s)
- Kimitaka Asatani
- Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan.
| | - Hiroko Yamano
- Institute for Future Initiatives, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
| | - Takeshi Sakaki
- Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
| | - Ichiro Sakata
- Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
| |
Collapse
|
31
|
Does negatively toned language use on social media lead to attitude polarization? COMPUTERS IN HUMAN BEHAVIOR 2021. [DOI: 10.1016/j.chb.2020.106663] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
32
|
Malecki KMC, Keating JA, Safdar N. Crisis Communication and Public Perception of COVID-19 Risk in the Era of Social Media. Clin Infect Dis 2021; 72:697-702. [PMID: 32544242 PMCID: PMC7337650 DOI: 10.1093/cid/ciaa758] [Citation(s) in RCA: 153] [Impact Index Per Article: 51.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2020] [Accepted: 06/11/2020] [Indexed: 11/14/2022] Open
Abstract
A number of important principles of effective risk communication established in the late 20th century can provide important scientific insight into patient response to the risks posed by coronavirus disease 2019 (COVID-19). Early risk communication scholars found that the acceptability of risk was shaped by 2 key components: hazard and outrage. The number of people who are exposed, infected, and fall ill can be considered the hazard. How the public and patients respond to messages regarding risk mitigation relates to outrage. Social and cultural factors, immediacy, uncertainty, familiarity, personal control, scientific uncertainty, and trust in institutions and media all shape perception and response to risk messaging. Outrage factors influence the ever-changing public understanding of COVID-19 risk. In concert, hazard and outrage, along with cultural and economic context, shape adherence to, and overall acceptance of, personal mitigation strategies including wearing facemasks and social distancing among the general public. The spread of misinformation on social media also provides both challenges and opportunities for clinicians. Social media offers an opportunity for experts to quickly convey true information about hazards, but offers others the opportunity to counter this with the spread of misinformation and exacerbate outrage. We propose strategies for infectious diseases clinicians to apply risk communication principles and frameworks to improve patient care and public message development in response to COVID-19.
Collapse
Affiliation(s)
- Kristen M C Malecki
- Department of Population Health Sciences, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
| | - Julie A Keating
- William S. Middleton Memorial Veterans Hospital, Madison, Wisconsin, USA
| | - Nasia Safdar
- William S. Middleton Memorial Veterans Hospital, Madison, Wisconsin, USA; Division of Infectious Disease, Department of Medicine, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
| |
Collapse
|
33
|
Pothos EM, Lewandowsky S, Basieva I, Barque-Duran A, Tapper K, Khrennikov A. Information overload for (bounded) rational agents. Proc Biol Sci 2021; 288:20202957. [PMID: 33529555 DOI: 10.1098/rspb.2020.2957] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Bayesian inference offers an optimal means of processing environmental information and so an advantage in natural selection. We consider the apparent, recent trend in increasing dysfunctional disagreement in, for example, political debate. This is puzzling because Bayesian inference benefits from powerful convergence theorems, precluding dysfunctional disagreement. Information overload is a plausible factor limiting the applicability of full Bayesian inference, but what is the link with dysfunctional disagreement? Individuals striving to be Bayesian-rational, but challenged by information overload, might simplify by using Bayesian networks or the separation of questions into knowledge partitions, the latter formalized with quantum probability theory. We demonstrate the massive simplification afforded by either approach, but also show how they contribute to dysfunctional disagreement.
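The convergence theorems the authors invoke can be illustrated with a Beta-Bernoulli sketch (illustrative priors and data, not from the paper): agents starting from opposed priors end up close after updating on the same shared evidence, which is exactly why persistent dysfunctional disagreement is puzzling for full Bayesian agents.

```python
# Beta-Bernoulli sketch of Bayesian belief convergence (illustrative
# priors and data, not taken from the paper).

def posterior_mean(prior_a, prior_b, heads, tails):
    """Posterior mean of a Beta(prior_a, prior_b) belief about a
    Bernoulli rate after observing the given counts."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

data = (70, 30)                           # shared evidence: 70 of 100
optimist = posterior_mean(8, 2, *data)    # prior mean 0.8
skeptic = posterior_mean(2, 8, *data)     # prior mean 0.2
disagreement = abs(optimist - skeptic)    # shrinks from 0.6 to ~0.05
```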
Collapse
Affiliation(s)
- Emmanuel M Pothos
- Department of Psychology, City, University of London, London EC1V 0HB, UK
| | | | - Irina Basieva
- Department of Psychology, City, University of London, London EC1V 0HB, UK
| | - Albert Barque-Duran
- Department of Computer Science, Universitat de Lleida, Carrer de Jaume II, 67, 25001 Lleida, Spain
| | - Katy Tapper
- Department of Psychology, City, University of London, London EC1V 0HB, UK
| | - Andrei Khrennikov
- International Center for Mathematical Modeling in Physics and Cognitive Science, Linnaeus University, Universitetplatsen 1, 351 95 Växjö, Sweden
| |
Collapse
|
34
|
Martire KA, Growns B, Bali AS, Montgomery-Farrer B, Summersby S, Younan M. Limited not lazy: a quasi-experimental secondary analysis of evidence quality evaluations by those who hold implausible beliefs. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2020; 5:65. [PMID: 33306157 PMCID: PMC7729693 DOI: 10.1186/s41235-020-00264-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/11/2020] [Accepted: 11/17/2020] [Indexed: 11/30/2022]
Abstract
Past research suggests that an uncritical or ‘lazy’ style of evaluating evidence may play a role in the development and maintenance of implausible beliefs. We examine this possibility by using a quasi-experimental design to compare how low- and high-quality evidence is evaluated by those who do and do not endorse implausible claims. Seven studies conducted during 2019–2020 provided the data for this analysis (N = 746). Each of the seven primary studies presented participants with high- and/or low-quality evidence and measured implausible claim endorsement and evaluations of evidence persuasiveness (via credibility, value, and/or weight). A linear mixed-effect model was used to predict persuasiveness from the interaction between implausible claim endorsement and evidence quality. Our results showed that endorsers were significantly more persuaded by the evidence than non-endorsers, but both groups were significantly more persuaded by high-quality than low-quality evidence. The interaction between endorsement and evidence quality was not significant. These results suggest that the formation and maintenance of implausible beliefs by endorsers may result from less critical evidence evaluations rather than a failure to analyse. This is consistent with a limited rather than a lazy approach and suggests that interventions to develop analytical skill may be useful for minimising the effects of implausible claims.
Collapse
Affiliation(s)
- Kristy A Martire
- School of Psychology, The University of New South Wales, Sydney, NSW, 2052, Australia.
| | - Bethany Growns
- School of Social and Behavioral Sciences, Arizona State University, 4701 W Thunderbird Rd, Glendale, AZ, 85069, USA
| | - Agnes S Bali
- School of Psychology, The University of New South Wales, Sydney, NSW, 2052, Australia
| | | | - Stephanie Summersby
- School of Psychology, The University of New South Wales, Sydney, NSW, 2052, Australia
| | - Mariam Younan
- School of Psychology, The University of New South Wales, Sydney, NSW, 2052, Australia
| |
Collapse
|
35
|
Schulz L, Rollwage M, Dolan RJ, Fleming SM. Dogmatism manifests in lowered information search under uncertainty. Proc Natl Acad Sci U S A 2020; 117:31527-31534. [PMID: 33214149 PMCID: PMC7733856 DOI: 10.1073/pnas.2009641117] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
When knowledge is scarce, it is adaptive to seek further information to resolve uncertainty and obtain a more accurate worldview. Biases in such information-seeking behavior can contribute to the maintenance of inaccurate views. Here, we investigate whether predispositions for uncertainty-guided information seeking relate to individual differences in dogmatism, a phenomenon linked to entrenched beliefs in political, scientific, and religious discourse. We addressed this question in a perceptual decision-making task, allowing us to rule out motivational factors and isolate the role of uncertainty. In two independent general population samples (n = 370 and n = 364), we show that more dogmatic participants are less likely to seek out new information to refine an initial perceptual decision, leading to a reduction in overall belief accuracy despite similar initial decision performance. Trial-by-trial modeling revealed that dogmatic participants placed less reliance on internal signals of uncertainty (confidence) to guide information search, rendering them less likely to seek additional information to update beliefs derived from weak or uncertain initial evidence. Together, our results highlight a cognitive mechanism that may contribute to the formation of dogmatic worldviews.
Collapse
Affiliation(s)
- Lion Schulz
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3BG, United Kingdom;
- Department of Experimental Psychology, University College London, London WC1H 0AP, United Kingdom
- Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany
| | - Max Rollwage
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3BG, United Kingdom
- Max Planck University College London Centre for Computational Psychiatry and Ageing Research, Institute of Neurology, University College London, London WC1B 5EH, United Kingdom
| | - Raymond J Dolan
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3BG, United Kingdom
- Max Planck University College London Centre for Computational Psychiatry and Ageing Research, Institute of Neurology, University College London, London WC1B 5EH, United Kingdom
| | - Stephen M Fleming
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3BG, United Kingdom;
- Department of Experimental Psychology, University College London, London WC1H 0AP, United Kingdom
- Max Planck University College London Centre for Computational Psychiatry and Ageing Research, Institute of Neurology, University College London, London WC1B 5EH, United Kingdom
| |
Collapse
|
36
|
|
37
|
Abstract
Despite increasing life expectancy and high levels of welfare, health care, and public safety in most post-industrial countries, the public discourse often revolves around perceived threats. Terrorism, global pandemics, and environmental catastrophes are just a few of the risks that dominate media coverage. Is this public discourse on risk disconnected from reality? To examine this issue, we analyzed the dynamics of the risk discourse in two natural language text corpora. Specifically, we tracked latent semantic patterns over a period of 150 years to address four questions: First, we examined how the frequency of the word risk has changed over historical time. Is the construct of risk playing an ever-increasing role in the public discourse, as the sociological notion of a 'risk society' suggests? Second, we investigated how the sentiments for the words co-occurring with risk have changed. Are the connotations of risk becoming increasingly ominous? Third, how has the meaning of risk changed relative to close associates such as danger and hazard? Is risk more subject to semantic change? Finally, we decompose the construct of risk into the specific topics with which it has been associated and track those topics over historical time. This brief history of the semantics of risk reveals new and surprising insights: a fourfold increase in frequency, increasingly negative sentiment, a semantic drift toward forecasting and prevention, and a shift away from war toward chronic disease, reflecting the conceptual evolution of risk in the archeological records of public discourse.
Collapse
Affiliation(s)
- Ying Li
- Center for Adaptive Rationality, Max Planck Institute for Human Development, 14195 Berlin, Germany.
| | - Thomas Hills
- Department of Psychology, University of Warwick, University Road, Coventry CV4 7AL, United Kingdom
| | - Ralph Hertwig
- Center for Adaptive Rationality, Max Planck Institute for Human Development, 14195 Berlin, Germany
| |
Collapse
|
38
|
Badman RP, Hills TT, Akaishi R. Multiscale Computation and Dynamic Attention in Biological and Artificial Intelligence. Brain Sci 2020; 10:E396. [PMID: 32575758 PMCID: PMC7348831 DOI: 10.3390/brainsci10060396] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2020] [Revised: 05/23/2020] [Accepted: 06/17/2020] [Indexed: 11/16/2022] Open
Abstract
Biological and artificial intelligence (AI) are often defined by their capacity to achieve a hierarchy of short-term and long-term goals that require incorporating information over time and space at both local and global scales. More advanced forms of this capacity involve the adaptive modulation of integration across scales, which resolves computational inefficiency and explore-exploit dilemmas at the same time. Research in neuroscience and AI has made progress toward understanding architectures that achieve this. Insight into biological computations comes from phenomena such as decision inertia, habit formation, information search, risky choices and foraging. Across these domains, the brain is equipped with mechanisms (such as the dorsal anterior cingulate and dorsolateral prefrontal cortex) that can represent and modulate across scales, both with top-down control processes and by local-to-global consolidation as information progresses from sensory to prefrontal areas. Paralleling these biological architectures, progress in AI is marked by innovations in dynamic multiscale modulation, moving from recurrent and convolutional neural networks (with fixed scalings) to attention, transformers, dynamic convolutions, and consciousness priors (which modulate scale to input and increase scale breadth). The use and development of these multiscale innovations in robotic agents, game AI, and natural language processing (NLP) are pushing the boundaries of AI achievements. By juxtaposing biological and artificial intelligence, the present work underscores the critical importance of multiscale processing to general intelligence and highlights innovations and differences between the future of biological and artificial intelligence.
Collapse
Affiliation(s)
| | | | - Rei Akaishi
- Center for Brain Science, RIKEN, Saitama 351-0198, Japan
| |
Collapse
|
39
|
Lorenz-Spreen P, Lewandowsky S, Sunstein CR, Hertwig R. How behavioural sciences can promote truth, autonomy and democratic discourse online. Nat Hum Behav 2020; 4:1102-1109. [PMID: 32541771 DOI: 10.1038/s41562-020-0889-7] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2019] [Accepted: 04/23/2020] [Indexed: 12/28/2022]
Abstract
Public opinion is shaped in significant part by online content, spread via social media and curated algorithmically. The current online ecosystem has been designed predominantly to capture user attention rather than to promote deliberate cognition and autonomous choice; information overload, finely tuned personalization and distorted social cues, in turn, pave the way for manipulation and the spread of false information. How can transparency and autonomy be promoted instead, thus fostering the positive potential of the web? Effective web governance informed by behavioural research is critically needed to empower individuals online. We identify technologically available yet largely untapped cues that can be harnessed to indicate the epistemic quality of online content, the factors underlying algorithmic decisions and the degree of consensus in online debates. We then map out two classes of behavioural interventions, nudging and boosting, that enlist these cues to redesign online environments for informed and autonomous choice.
Collapse
Affiliation(s)
- Philipp Lorenz-Spreen
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany.
| | - Stephan Lewandowsky
- School of Psychological Science and Cabot Institute, University of Bristol, Bristol, UK.,School of Psychological Science, University of Western Australia, Perth, Australia
| | | | - Ralph Hertwig
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
| |
Collapse
|
40
|
Abstract
The target article "Thinking Through Other Minds" (TTOM) offered an account of the distinctively human capacity to acquire cultural knowledge, norms, and practices. To this end, we leveraged recent ideas from theoretical neurobiology to understand the human mind in social and cultural contexts. Our aim was both synthetic, building an integrative model adequate to account for key features of cultural learning and adaptation, and prescriptive, showing how the tools developed to explain brain dynamics can be applied to the emergence of social and cultural ecologies of mind. In this reply to commentators, we address key issues, including: (1) refining the concept of culture to show how TTOM and the free-energy principle (FEP) can capture essential elements of human adaptation and functioning; (2) addressing cognition as an embodied, enactive, affective process involving cultural affordances; (3) clarifying the significance of the FEP formalism related to entropy minimization, Bayesian inference, Markov blankets, and enactivist views; (4) developing empirical tests and applications of the TTOM model; (5) incorporating cultural diversity and context at the level of intra-cultural variation, individual differences, and the transition to digital niches; and (6) considering some implications for psychiatry. The commentators' critiques and suggestions point to useful refinements and applications of the model. In ongoing collaborations, we are exploring how to augment the theory with affective valence, take into account individual differences and historicity, and apply the model to specific domains including epistemic bias.
Collapse
|
41
|
Historical analysis of national subjective wellbeing using millions of digitized books. Nat Hum Behav 2019; 3:1271-1275. [PMID: 31611658 DOI: 10.1038/s41562-019-0750-z] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2018] [Accepted: 08/30/2019] [Indexed: 11/08/2022]
Abstract
In addition to improving quality of life, higher subjective wellbeing leads to fewer health problems and higher productivity, making subjective wellbeing a focal issue among researchers and governments. However, it is difficult to estimate how happy people were during previous centuries. Here we show that a method based on the quantitative analysis of natural language published over the past 200 years captures reliable patterns in historical subjective wellbeing. Using sentiment analysis on the basis of psychological valence norms, we compute a national valence index for the United Kingdom, the United States, Germany and Italy, indicating relative happiness in response to national and international wars and in comparison to historical trends in longevity and gross domestic product. We validate our method using Eurobarometer survey data from the 1970s and demonstrate robustness using words with stable historical meanings, diverse corpora (newspapers, magazines and books) and additional word norms. By providing a window on quantitative historical psychology, this approach could inform policy and economic history.
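The valence-norm method this abstract describes can be sketched roughly as follows: score a text as the mean valence of those words that appear in a rated lexicon. The word list and scores below are invented placeholders for illustration, not the actual psychological valence norms used in the study.

```python
# Hypothetical valence norms (1 = very negative, 9 = very positive);
# the study uses published psychological norm sets, not this toy lexicon.
VALENCE = {"war": 2.0, "death": 1.6, "peace": 7.7, "happy": 8.2, "trade": 5.5}

def valence_index(text):
    # Mean valence over tokens found in the norm lexicon;
    # unrated tokens are ignored, and None is returned if no token is rated.
    scores = [VALENCE[w] for w in text.lower().split() if w in VALENCE]
    return sum(scores) / len(scores) if scores else None

wartime = valence_index("war death and more war")
peacetime = valence_index("peace trade and happy news")
print(wartime < peacetime)  # True: the wartime sample scores lower
```

Applied per year of a corpus, a series of such scores yields a national valence index whose dips and recoveries can be compared against historical events.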
Collapse
|
42
|
France CR, France JL. Estimating the risk of blood donation fainting for self versus others: the moderating effect of fear. Transfusion 2019; 59:2039-2045. [PMID: 30828820 DOI: 10.1111/trf.15225] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2018] [Revised: 01/31/2019] [Accepted: 01/31/2019] [Indexed: 11/26/2022]
Abstract
BACKGROUND People overestimate the likelihood that blood donors will experience faint and prefaint reactions, particularly if they are themselves fearful of having blood drawn. This study investigated the influence of making a risk assessment for oneself versus others, and examined whether providing information about the low base rates of reactions affected donation attitude, anxiety, and intention. STUDY DESIGN AND METHODS An anonymous online survey was completed by 3702 individuals (53.9% female; mean age, 36.9 years; standard deviation, 12.1 years; range, 18-84) that included questions about fear of having blood drawn and estimation of the risk of donation-related faint and prefaint symptoms for themselves and others. A random one-half of the respondents received accurate information about the proportion of whole blood donors who experience such reactions before all participants completed questions concerning their donation attitude, anxiety, and intention. RESULTS Estimates of both faint and prefaint reactions revealed a pattern of lower perceived risk for self versus others among individuals with low fear, and higher perceived risk for self versus others among those with high fear. Provision of accurate information about risk of reactions was associated with more positive donation attitudes (F[1, 3692] = 8.182, p = 0.004) and intentions (F[1, 3692] = 10.137, p = 0.001) but did not significantly affect anxiety. CONCLUSION Donation-related fear is associated with inflated expectancies of adverse events, especially regarding oneself. Such fear may be particularly important to address, as it has both a direct and indirect negative effect on donor retention.
Collapse
|