1. Howe PDL, Perfors A, Ransom KJ, Walker B, Fay N, Kashima Y, Saletta M, Dong S. Self-certification: A novel method for increasing sharing discernment on social media. PLoS One 2024; 19:e0303025. [PMID: 38861506 PMCID: PMC11166272 DOI: 10.1371/journal.pone.0303025]
Abstract
The proliferation of misinformation on social media platforms has given rise to growing demands for effective intervention strategies that increase sharing discernment (i.e. increase the difference in the probability of sharing true posts relative to the probability of sharing false posts). One suggested method is to encourage users to deliberate on the veracity of the information prior to sharing. However, this strategy is undermined by individuals' propensity to share posts they acknowledge as false. In our study, across three experiments, in a simulated social media environment, participants were shown social media posts and asked whether they wished to share them and, sometimes, whether they believed the posts to be truthful. We observe that requiring users to verify their belief in a news post's truthfulness before sharing it markedly curtails the dissemination of false information. Thus, requiring self-certification increased sharing discernment. Importantly, requiring self-certification didn't hinder users from sharing content they genuinely believed to be true because participants were allowed to share any posts that they indicated were true. We propose self-certification as a method that substantially curbs the spread of misleading content on social media without infringing upon the principle of free speech.
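The abstract's definition of sharing discernment (the difference between the probability of sharing true posts and the probability of sharing false posts) can be made concrete with a short calculation. This is a sketch only: the share counts below are invented for illustration and are not taken from the paper.

```python
# Sharing discernment, as defined in the abstract:
# P(share | true post) - P(share | false post).
# All counts below are hypothetical, for illustration only.

def sharing_discernment(shared_true, n_true, shared_false, n_false):
    """Difference between the sharing rates for true and false posts."""
    return shared_true / n_true - shared_false / n_false

# A participant shown 20 true and 20 false posts:
baseline = sharing_discernment(12, 20, 8, 20)   # shares 8 of 20 false posts
certified = sharing_discernment(12, 20, 3, 20)  # certification curbs false sharing

print(round(baseline, 2), round(certified, 2))  # 0.2 0.45
```

Note that discernment rises when false-post sharing drops while true-post sharing is untouched, which is the pattern the self-certification intervention aims for.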
Affiliation(s)
- Andrew Perfors: School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia
- Keith J. Ransom: School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia
- Bradley Walker: School of Psychological Science, University of Western Australia, Perth, WA, Australia; School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University, Perth, WA, Australia
- Nicolas Fay: School of Psychological Science, University of Western Australia, Perth, WA, Australia
- Yoshi Kashima: School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia
- Morgan Saletta: Hunt Laboratory, University of Melbourne, Melbourne, VIC, Australia
- Sihan Dong: School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia
2. Brashier NM. Fighting misinformation among the most vulnerable users. Curr Opin Psychol 2024; 57:101813. [PMID: 38670015 DOI: 10.1016/j.copsyc.2024.101813]
Abstract
Misinformation undermines trust in the integrity of democratic elections, the safety of vaccines, and the authenticity of footage from war zones. Social scientists have proposed many solutions to reduce individuals' demand for fake news, but it is unclear how to evaluate them. Efficacy can mean that an intervention increases discernment (the ability to distinguish true from false content), works over a delay, scales up, and engages users. I argue that experts should also consider differences in exposure prevalence before declaring success. Misleading content makes up a small fraction of the average person's news diet, but some groups are at increased risk - conservatives and older adults see and share the most fake news. Targeting the whole population (universal prevention) could concentrate benefits among the users who already see the least misinformation to begin with. In complement to these approaches, we should design interventions for the people who need them most - conservatives and older adults (selective prevention), as well as users who have already shared low-quality news (indicated prevention).
Affiliation(s)
- Nadia M Brashier: Department of Psychology, University of California, San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
3. Offer-Westort M, Rosenzweig LR, Athey S. Battling the coronavirus 'infodemic' among social media users in Kenya and Nigeria. Nat Hum Behav 2024; 8:823-834. [PMID: 38499773 DOI: 10.1038/s41562-023-01810-7]
Abstract
How can we induce social media users to be discerning when sharing information during a pandemic? An experiment on Facebook Messenger with users from Kenya (n = 7,498) and Nigeria (n = 7,794) tested interventions designed to decrease intentions to share COVID-19 misinformation without decreasing intentions to share factual posts. The initial stage of the study incorporated: (1) a factorial design with 40 intervention combinations; and (2) a contextual adaptive design, increasing the probability of assignment to treatments that worked better for previous subjects with similar characteristics. The second stage evaluated the best-performing treatments and a targeted treatment assignment policy estimated from the data. We precisely estimate null effects from warning flags and related article suggestions, tactics used by social media platforms. However, nudges to consider the accuracy of information reduced misinformation sharing relative to control by 4.9% (estimate = -2.3 percentage points, 95% CI = [-4.2, -0.35]). Such low-cost scalable interventions may improve the quality of information circulating online.
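The two effect sizes reported above are mutually consistent: a 2.3 percentage-point absolute reduction that equals a 4.9% relative reduction implies a control-group misinformation-sharing rate of roughly 47%. The quick check below back-derives that rate from the abstract's numbers; the control rate itself is an inference, not a figure quoted in the paper.

```python
# Relate the abstract's absolute and relative effect sizes.
# Absolute estimate: -2.3 percentage points; relative reduction: 4.9%.
absolute_pp = 2.3
relative = 0.049

# Implied control-group sharing rate (in percent), back-derived for illustration:
control_rate = absolute_pp / relative
print(round(control_rate, 1))  # 46.9
```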
Affiliation(s)
- Susan Athey: Stanford Graduate School of Business, Stanford University, Stanford, CA, USA
4. Ramsey AT, Liu Y, Trueblood JS. Can Invalid Information Be Ignored When It Is Detected? Psychol Sci 2024; 35:328-344. [PMID: 38483515 DOI: 10.1177/09567976241231571]
Abstract
With the rapid spread of information via social media, individuals are prone to misinformation exposure that they may utilize when forming beliefs. Across five experiments (total N = 815 adults, recruited through Amazon Mechanical Turk in the United States), we investigated whether people could ignore quantitative information when they judged for themselves that it was misreported. Participants viewed sets of values sampled from Gaussian distributions and estimated the underlying means, attempting to ignore invalid information: outlier values inserted into the value sequences. Results indicated that participants were able to detect the outliers. Nevertheless, their estimates were still biased in the direction of the outlier, even when they were most certain that they had detected invalid information. The addition of visual warning cues and different task scenarios did not fully eliminate this systematic over- and underestimation. These findings suggest that individuals may incorporate invalid information they meant to ignore when forming beliefs.
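The structure of the estimation task can be sketched in a few lines: values drawn from a Gaussian, one inserted outlier, and the bias that results if the outlier is not fully excluded from the mean. All parameters here are invented for illustration; they are not the paper's stimuli.

```python
import random
import statistics

# A toy sketch of the task structure (invented parameters, not the paper's):
# participants see Gaussian-distributed values plus one inserted outlier
# and try to estimate the underlying mean while ignoring the outlier.
random.seed(1)
true_mean, sd, n = 50.0, 5.0, 8
values = [random.gauss(true_mean, sd) for _ in range(n)]
outlier = true_mean + 6 * sd  # invalid value inserted into the sequence

clean_estimate = statistics.mean(values)            # outlier fully ignored
biased_estimate = statistics.mean(values + [outlier])  # outlier incorporated

# Even a detected outlier, if not fully excluded, pulls the estimate
# in its own direction:
print(biased_estimate > clean_estimate)
```

This mirrors the abstract's key finding: detection alone does not guarantee exclusion, so estimates drift toward the invalid value.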
Affiliation(s)
- Yanjun Liu: School of Psychology, University of New South Wales; Department of Psychological and Brain Sciences, Indiana University Bloomington
5. Porter E, Wood TJ. Factual corrections: Concerns and current evidence. Curr Opin Psychol 2024; 55:101715. [PMID: 37988954 DOI: 10.1016/j.copsyc.2023.101715]
Abstract
Factual corrections that target misinformation improve belief accuracy. They do so across a wide variety of countries, political beliefs, and demographic characteristics. Instances of backfire, wherein exposure to corrections reduces accuracy, are exceedingly rare and may be an artifact of research design. The evidence regarding other common concerns is mixed. While the effects of corrections on belief are not permanent, they are not entirely ephemeral either. With some exceptions, corrections mostly affect only belief accuracy, with minor to nonexistent influence on downstream attitudes and behaviors. And while corrections are not unpopular among the public, the limited available evidence suggests that those who see misinformation are exceedingly unlikely to see relevant corrections.
6. Riesthuis P, Woods J. "That's just like, your opinion, man": the illusory truth effect on opinions. Psychol Res 2024; 88:284-306. [PMID: 37300704 PMCID: PMC10257371 DOI: 10.1007/s00426-023-01845-5]
Abstract
With the expansion of technology, people are constantly exposed to an abundance of information, so it is vital to understand how they assess the truthfulness of that information. One indicator of perceived truthfulness seems to be repetition: people tend to perceive repeated information, regardless of its veracity, as more truthful than new information, a phenomenon known as the illusory truth effect. In the present study, we examined whether such an effect is also observed for opinions and whether the manner in which the information is encoded influences the illusory truth effect. Across three experiments, participants (n = 552) were presented with a list of true information, misinformation, general opinion, and/or social-political opinion statements. First, participants were either instructed to indicate whether each presented statement was a fact or an opinion based on its syntactic structure (Exp. 1 & 2) or to assign each statement to a topic category (Exp. 3). Subsequently, participants rated the truthfulness of various new and repeated statements. Results showed that repeated information, regardless of its type, received higher subjective truth ratings when participants had simply encoded statements by assigning each to a topic. However, when general and social-political opinions were encoded as opinions, we found no evidence of such an effect. Moreover, we found a reversed illusory truth effect for general opinion statements when considering only information that had been encoded as an opinion. These findings suggest that how information is encoded plays a crucial role in evaluating truth.
Affiliation(s)
- Paul Riesthuis: Leuven Institute of Criminology, KU Leuven, Leuven, Belgium; Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Josh Woods: Faculty of Psychology, Grand View University, Des Moines, IA, USA
7. Blair RA, Gottlieb J, Nyhan B, Paler L, Argote P, Stainfield CJ. Interventions to counter misinformation: Lessons from the Global North and applications to the Global South. Curr Opin Psychol 2024; 55:101732. [PMID: 38070207 DOI: 10.1016/j.copsyc.2023.101732]
Abstract
We synthesize evidence from 176 experimental estimates of 11 interventions intended to combat misinformation in the Global North and Global South, which we classify as informational, educational, sociopsychological, or institutional. Among these, we find the most consistent positive evidence for two informational interventions in both Global North and Global South contexts: inoculation/prebunking and debunking. In a complementary survey of 138 misinformation scholars and practitioners, we find that experts tend to be most optimistic about interventions that have been least widely studied or that have been shown to be mostly ineffective. We provide a searchable database of misinformation randomized controlled trials and suggest avenues for future research to close the gap between expert opinion and academic research.
Affiliation(s)
- Robert A Blair: Department of Political Science and Watson Institute for International and Public Affairs, Brown University, United States
- Jessica Gottlieb: Hobby School of Public Affairs, University of Houston, United States
- Brendan Nyhan: Department of Government, Dartmouth College, United States
- Laura Paler: Department of Government, School of Public Affairs, American University, United States
- Pablo Argote: Department of Political Science and International Relations, University of Southern California, United States
8. Moazen B, Ismail N, Agbaria N, Mazzilli S, Petri D, Amaya A, D'Arcy J, Plugge E, Tavoschi L, Stöver H. Vaccination against emerging and reemerging infectious diseases in places of detention: a global multistage scoping review. Front Public Health 2024; 12:1323195. [PMID: 38347924 PMCID: PMC10859397 DOI: 10.3389/fpubh.2024.1323195]
Abstract
Background: Despite elevated risks of infection transmission, people in prisons in many countries frequently encounter significant barriers to accessing essential healthcare services. The present scoping review aimed to evaluate the availability and delivery models of vaccination services within correctional facilities across the globe.
Methods: Following the methodological framework for scoping reviews and adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for scoping reviews, we conducted a systematic search across five peer-reviewed literature databases (Medline via PubMed, Web of Science, the Cochrane Library, Science Direct, and EBSCO), as well as 14 sources of grey literature. Two researchers independently examined the identified papers to extract pertinent data published between 2012 and 2022. The quality of the selected publications was assessed using established quality assessment tools.
Results: Of the 11,281 identified papers, 52 met the inclusion criteria. With one exception, all included publications presented data from high-income countries, predominantly the United States. Across the world, the vaccines most commonly available in prison settings were COVID-19 and HBV vaccines, typically distributed in response to health crises such as pandemics, epidemics, and local outbreaks. Vaccine coverage and uptake rates within correctional facilities displayed noteworthy disparities among countries and regions. In addition, individual and organizational barriers to, and facilitators of, vaccination in prison settings are identified and discussed.
Discussion: The lack of vaccination services, combined with low rates of vaccination coverage and uptake among people living and working in correctional facilities, is a cause for concern. Prisons are not isolated from the broader community; efforts to increase vaccine uptake among people who live and work in prisons will therefore yield broader public health benefits.
Affiliation(s)
- Babak Moazen: Heidelberg Institute of Global Health, Heidelberg University, Heidelberg, Germany; Department of Health and Social Work, Institute of Addiction Research (ISFF), Frankfurt University of Applied Sciences, Frankfurt, Germany
- Nasrul Ismail: School for Policy Studies, University of Bristol, Bristol, United Kingdom
- Nisreen Agbaria: Heidelberg Institute of Global Health, Heidelberg University, Heidelberg, Germany; Department of Health and Social Work, Institute of Addiction Research (ISFF), Frankfurt University of Applied Sciences, Frankfurt, Germany
- Sara Mazzilli: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Davide Petri: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Arianna Amaya: Heidelberg Institute of Global Health, Heidelberg University, Heidelberg, Germany
- Emma Plugge: UK Health Security Agency, London, United Kingdom; Primary Care, Population Sciences and Medical Education, Faculty of Medicine, University of Southampton, Southampton, United Kingdom
- Lara Tavoschi: Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Heino Stöver: Department of Health and Social Work, Institute of Addiction Research (ISFF), Frankfurt University of Applied Sciences, Frankfurt, Germany
9. Buczel KA, Siwiak A, Szpitalak M, Polczyk R. How do forewarnings and post-warnings affect misinformation reliance? The impact of warnings on the continued influence effect and belief regression. Mem Cognit 2024. [PMID: 38261249 DOI: 10.3758/s13421-024-01520-z]
Abstract
People often continue to rely on information in their reasoning even after it has been retracted, a phenomenon known as the continued influence effect (CIE) of misinformation. One technique for reducing this effect is to explicitly warn people that they might have been misled. The present study investigated the effectiveness of such warnings depending on when they were given (before or after the misinformation). In two experiments (N = 337), we found that while a forewarning did reduce reliance on misinformation, retrospectively warned participants (whether the warning was placed between the misinformation and the retraction or just before testing) relied on the misinformation to a similar degree as unwarned participants. However, the protective effect of the forewarning was not durable: reliance on the misinformation increased over the 7 days following the first test, despite continued memory of the retraction.
Affiliation(s)
- Klara Austeja Buczel: Institute of Psychology, Jagiellonian University, Kraków, Poland; Doctoral School in the Social Sciences, Jagiellonian University, Kraków, Poland
- Adam Siwiak: Institute of Psychology, Jagiellonian University, Kraków, Poland; Doctoral School in the Social Sciences, Jagiellonian University, Kraków, Poland
- Romuald Polczyk: Institute of Psychology, Jagiellonian University, Kraków, Poland
10. Martel C, Rand DG. Misinformation warning labels are widely effective: A review of warning effects and their moderating features. Curr Opin Psychol 2023; 54:101710. [PMID: 37972523 DOI: 10.1016/j.copsyc.2023.101710]
Abstract
There is growing concern over the spread of misinformation online. One widely adopted intervention by platforms for addressing falsehoods is applying "warning labels" to posts deemed inaccurate by fact-checkers. Despite a rich literature on correcting misinformation after exposure, much less work has examined the effectiveness of warning labels presented concurrent with exposure. Promisingly, existing research suggests that warning labels effectively reduce belief and spread of misinformation. The size of these beneficial effects depends on how the labels are implemented and the characteristics of the content being labeled. Despite some individual differences, recent evidence indicates that warning labels are generally effective across party lines and other demographic characteristics. We discuss potential implications and limitations of labeling policies for addressing online misinformation.
Affiliation(s)
- Cameron Martel: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02142, USA
- David G Rand: Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02142, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
11. Prike T, Ecker UKH. Effective correction of misinformation. Curr Opin Psychol 2023; 54:101712. [PMID: 37944323 DOI: 10.1016/j.copsyc.2023.101712]
Abstract
This paper reviews correction effectiveness, highlighting which factors matter, which do not, and where further research is needed. To boost effectiveness, we recommend using detailed corrections and providing an alternative explanation wherever possible. We also recommend providing a reminder of the initial misinformation and repeating the correction. Presenting corrections pre-emptively (i.e., prebunking) or after misinformation exposure is unlikely to greatly impact correction effectiveness. There is also limited risk of repeating misinformation within a correction or that a correction will inadvertently spread misinformation to new audiences. Further research is needed into which correction formats are most effective, whether boosting correction memorability can enhance effectiveness, the effectiveness of discrediting a misinformation source, and whether distrusted correction sources can contribute to corrections backfiring.
Affiliation(s)
- Toby Prike: School of Psychological Science, University of Western Australia, Perth, Australia
- Ullrich K H Ecker: School of Psychological Science, University of Western Australia, Perth, Australia
12. Scales D, Hurth L, Xi W, Gorman S, Radhakrishnan M, Windham S, Akunne A, Florman J, Leininger L, Gorman J. Addressing Antivaccine Sentiment on Public Social Media Forums Through Web-Based Conversations Based on Motivational Interviewing Techniques: Observational Study. JMIR Infodemiology 2023; 3:e50138. [PMID: 37962940 PMCID: PMC10685291 DOI: 10.2196/50138]
Abstract
BACKGROUND: Health misinformation shared on social media can have negative health consequences, yet there is a dearth of field research testing interventions that address health misinformation in real time, digitally, and in situ on social media.
OBJECTIVE: We describe a field study of a pilot program of "infodemiologists" trained in evidence-informed intervention techniques heavily influenced by principles of motivational interviewing. We provide a detailed description of infodemiologists' interventions on posts sharing misinformation about COVID-19 vaccines, present an initial evaluation framework for such field research, and use available engagement metrics to quantify the impact of these in-group messengers on the web-based threads in which they intervene.
METHODS: We monitored Facebook (Meta Platforms, Inc) profiles of news organizations marketing to 3 geographic regions (Newark, New Jersey; Chicago, Illinois; and central Texas). Between December 2020 and April 2021, infodemiologists intervened in 145 Facebook news posts that generated comments containing either false or misleading information about vaccines or overt antivaccine sentiment. Engagement (emojis plus replies) data were collected on Facebook news posts, the initial comment containing misinformation (level 1 comment), and the infodemiologist's reply (level 2 reply comment). A comparison-group evaluation design was used: the numbers of replies, emoji reactions, and engagements for level 1 comments were compared with the median metrics of matched comments using the Wilcoxon signed rank test. Level 2 reply comments (intervention) were likewise benchmarked against the corresponding metrics of matched reply comments (control) using the Wilcoxon signed rank test (paired at the level of the level 1 comment). Infodemiologists' level 2 reply comments (intervention) and matched reply comments (control) were further compared using 3 Poisson regression models.
RESULTS: In total, 145 interventions were conducted on 132 Facebook news posts. The level 1 comments received a median of 3 replies, 3 reactions, and 7 engagements; the matched comments received a median of 1.5 (median of IQRs 3.75) engagements. Infodemiologists made 322 level 2 reply comments, precipitating 189 emoji reactions and a median of 0.5 (median of IQRs 0) engagements; the matched reply comments received a median of 1 (median of IQRs 2.5) engagement. Compared to matched comments, level 1 comments received more replies, emoji reactions, and engagements. Compared to matched reply comments, level 2 reply comments received fewer and narrower ranges of replies, reactions, and engagements, except for the median comparison for replies.
CONCLUSIONS: Overall, empathy-first communication strategies based on motivational interviewing garnered less engagement relative to matched controls. One possible explanation is that our interventions quieted contentious, misinformation-laden threads about vaccines on social media. This work reinforces research on accuracy nudges and cyberbullying interventions, which also reduce engagement. More research leveraging field studies of real-time interventions is needed, yet data transparency from technology platforms will be essential to facilitate such experiments.
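The paired comparison described above (each intervention reply benchmarked against a matched control reply) uses the Wilcoxon signed rank test. A minimal sketch with invented engagement counts, using `scipy.stats.wilcoxon`, looks like this; the data are fabricated solely to show the test's shape, not to reproduce the study's results.

```python
from scipy.stats import wilcoxon

# Paired engagement counts, invented for illustration: each intervention
# reply is matched to a control reply on the same level 1 comment.
intervention = [0, 1, 0, 2, 0, 1, 0, 0, 1, 0]
control      = [1, 2, 1, 3, 0, 2, 1, 0, 3, 1]

# Wilcoxon signed rank test on the paired differences; pairs with a
# zero difference are dropped under the default zero_method.
stat, p = wilcoxon(intervention, control)
print(stat, p)
```

A nonparametric paired test is a sensible choice here because engagement counts are small, skewed, and matched by design rather than independent.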
Affiliation(s)
- David Scales: Weill Cornell Medicine, New York City, NY, United States; Critica, Bronx, NY, United States
- Wenna Xi: Weill Cornell Medicine, New York City, NY, United States
- Lindsey Leininger: Tuck School of Business, Dartmouth College, Hanover, NH, United States
13. Abel M, Bäuml KHT. Item-method directed forgetting and perceived truth of news headlines. Memory 2023; 31:1371-1386. [PMID: 37819019 DOI: 10.1080/09658211.2023.2267191]
Abstract
Research on item-method directed forgetting (IMDF) suggests that memories can be voluntarily forgotten. IMDF is however usually examined with relatively simple study materials, such as single words or pictures. In the present study, we examined voluntary forgetting of news headlines from (presumably) untrustworthy sources. Experiment 1 found intact IMDF when to-be-forgotten headlines were characterised as untrustworthy and to-be-remembered headlines were characterised as trustworthy. Experiment 2 separated remember/forget cues and trustworthiness prompts. Forget cues alone had a large effect on memory, but only a small reducing effect on perceived truth. In contrast, trustworthiness prompts alone had essentially no effect on memory, but a large effect on perceived truth. Finally, Experiment 3 fully crossed forget/remember cues and trustworthiness prompts, revealing that forget cues can reduce memory irrespective of whether headlines are characterised as trustworthy or untrustworthy. Moreover, forget cues may bias source attributions, which can explain their small reducing effect on perceived truth. Overall, this work suggests that news headlines can be voluntarily forgotten. At least when people are motivated to forget information from untrustworthy sources, such forgetting may be helpful for curtailing the spread of false information.
Affiliation(s)
- Magdalena Abel: Department of Experimental Psychology, Regensburg University, Regensburg, Germany
- Karl-Heinz T Bäuml: Department of Experimental Psychology, Regensburg University, Regensburg, Germany
14. Meng Y, Broom M, Li A. Impact of misinformation in the evolution of collective cooperation on networks. J R Soc Interface 2023; 20:20230295. [PMID: 37751874 PMCID: PMC10522409 DOI: 10.1098/rsif.2023.0295]
Abstract
Human societies are organized and developed through collective cooperative behaviours. Based on the information in their environment, individuals can form collective cooperation by strategically changing unfavourable surroundings and imitating superior behaviours. However, facing the rampant proliferation and spreading of misinformation, we still lack systematic investigations into the impact of misinformation on the evolution of collective cooperation. Here, we study this problem by classical evolutionary game theory. We find that the existence of misinformation generally impedes the emergence of collective cooperation on networks, although the level of cooperation is slightly higher for weak social cooperative dilemma below a proven threshold. We further show that this possible advantage diminishes as social connections become denser, suggesting that the detrimental effect of misinformation further increases when 'social viscosity' is low. Our results uncover the quantitative effect of misinformation on suppressing collective cooperation, and pave the way for designing possible mechanisms to improve collective cooperation.
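The modeling idea (imitation-driven strategy updating on a network, perturbed by misinformation) can be illustrated with a toy simulation. This is emphatically not the paper's model: the ring network, payoff values, and noise process below are all invented, and the sketch only shows where distorted information enters the update rule.

```python
import random

# Toy sketch (not the paper's model): agents on a ring cooperate or defect,
# imitate apparently superior neighbours, and with probability p_misinfo
# receive a distorted (misinformed) view of a neighbour's payoff.
random.seed(0)
N, rounds = 50, 200
b, c = 3.0, 1.0      # benefit received from a cooperating neighbour; cost per act
p_misinfo = 0.3      # chance a perceived payoff is distorted (invented value)

strategies = [random.random() < 0.5 for _ in range(N)]  # True = cooperate

def payoff(i):
    """Payoff of agent i against its two ring neighbours."""
    left, right = strategies[(i - 1) % N], strategies[(i + 1) % N]
    gain = b * (left + right)
    cost = 2 * c if strategies[i] else 0.0
    return gain - cost

for _ in range(rounds):
    i = random.randrange(N)
    j = (i + random.choice([-1, 1])) % N
    perceived = payoff(j)
    if random.random() < p_misinfo:
        perceived += random.gauss(0, b)  # misinformation distorts j's payoff
    if perceived > payoff(i):
        strategies[i] = strategies[j]    # imitate the (apparently) better neighbour

coop_fraction = sum(strategies) / N
print(round(coop_fraction, 2))
```

Sweeping `p_misinfo` in such a sketch is one crude way to probe how noisy information feeds through imitation dynamics; the paper's analysis is far more systematic.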
Affiliation(s)
- Yao Meng: Center for Systems and Control, College of Engineering, Peking University, Beijing 100871, People's Republic of China
- Mark Broom: Department of Mathematics, City, University of London, Northampton Square, London EC1V 0HB, UK
- Aming Li: Center for Systems and Control, College of Engineering, Peking University, Beijing 100871, People's Republic of China; Center for Multi-Agent Research, Institute for Artificial Intelligence, Peking University, Beijing 100871, People's Republic of China
15. Kozyreva A, Smillie L, Lewandowsky S. Incorporating Psychological Science Into Policy Making: The Case of Misinformation. Eur Psychol 2023; 28:a000493. [PMID: 37994309 PMCID: PMC7615323 DOI: 10.1027/1016-9040/a000493]
Abstract
The spread of false and misleading information in online social networks is a global problem in need of urgent solutions. It is also a policy problem because misinformation can harm both the public and democracies. To address the spread of misinformation, policymakers require a successful interface between science and policy, as well as a range of evidence-based solutions that respect fundamental rights while efficiently mitigating the harms of misinformation online. In this article, we discuss how regulatory and nonregulatory instruments can be informed by scientific research and used to reach EU policy objectives. First, we consider what it means to approach misinformation as a policy problem. We then outline four building blocks for cooperation between scientists and policymakers who wish to address the problem of misinformation: understanding the misinformation problem, understanding the psychological drivers and public perceptions of misinformation, finding evidence-based solutions, and co-developing appropriate policy measures. Finally, through the lens of psychological science, we examine policy instruments that have been proposed in the EU, focusing on the strengthened Code of Practice on Disinformation 2022.
Affiliation(s)
- Anastasia Kozyreva
- Center for Adaptive Rationality, Max Planck Institute for Human Development, Berlin, Germany
- Laura Smillie
- Joint Research Center, European Commission, Brussels, Belgium
- Stephan Lewandowsky
- School of Psychological Science, University of Bristol, UK
- School of Psychological Sciences, University of Western Australia, Australia
- Department of Psychology, University of Potsdam, Germany
16
Globig LK, Holtz N, Sharot T. Changing the incentive structure of social media platforms to halt the spread of misinformation. eLife 2023; 12:85767. [PMID: 37278047 DOI: 10.7554/elife.85767] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 04/21/2023] [Indexed: 06/07/2023] Open
Abstract
The powerful allure of social media platforms has been attributed to the human need for social rewards. Here, we demonstrate that the spread of misinformation on such platforms is facilitated by existing social 'carrots' (e.g., 'likes') and 'sticks' (e.g., 'dislikes') that are dissociated from the veracity of the information shared. Testing 951 participants over six experiments, we show that a slight change to the incentive structure of social media platforms, such that social rewards and punishments are contingent on information veracity, produces a considerable increase in the discernment of shared information. Namely, an increase in the proportion of true information shared relative to the proportion of false information shared. Computational modeling (i.e., drift-diffusion models) revealed the underlying mechanism of this effect is associated with an increase in the weight participants assign to evidence consistent with discerning behavior. The results offer evidence for an intervention that could be adopted to reduce misinformation spread, which in turn could reduce violence, vaccine hesitancy and political polarization, without reducing engagement.
Affiliation(s)
- Laura K Globig
- Affective Brain Lab, Department of Experimental Psychology, University College London, London, United Kingdom
- The Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Nora Holtz
- Affective Brain Lab, Department of Experimental Psychology, University College London, London, United Kingdom
- The Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom
- Tali Sharot
- Affective Brain Lab, Department of Experimental Psychology, University College London, London, United Kingdom
- The Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
17
Zhong B. Going beyond fact-checking to fight health misinformation: A multi-level analysis of the Twitter response to health news stories. INTERNATIONAL JOURNAL OF INFORMATION MANAGEMENT 2023. [DOI: 10.1016/j.ijinfomgt.2023.102626] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
18
Shao X, Li A, Chen C, Loftus EF, Zhu B. Cross-stage neural pattern similarity in the hippocampus predicts false memory derived from post-event inaccurate information. Nat Commun 2023; 14:2299. [PMID: 37085518 PMCID: PMC10121656 DOI: 10.1038/s41467-023-38046-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 04/11/2023] [Indexed: 04/23/2023] Open
Abstract
The misinformation effect occurs when people's memory of an event is altered by subsequent inaccurate information. No study has systematically tested theories about the dynamics of human hippocampal representations during the three stages of misinformation-induced false memory. This study replicates behavioral results of the misinformation effect, and investigates the cross-stage pattern similarity in the hippocampus and cortex using functional magnetic resonance imaging. Results show item-specific hippocampal pattern similarity between original-event and post-event stages. During the memory-test stage, hippocampal representations of original information are weakened for true memory, whereas hippocampal representations of misinformation compete with original information to create false memory. When false memory occurs, this conflict is resolved by the lateral prefrontal cortex. Individuals' memory traces of post-event information in the hippocampus predict false memory, whereas original information in the lateral parietal cortex predicts true memory. These findings support the multiple-trace model, and emphasize the reconstructive nature of human memory.
Affiliation(s)
- Xuhao Shao
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, 100875, Beijing, China
- Institute of Developmental Psychology, Beijing Normal University, 100875, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, 100875, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, 100875, Beijing, China
- Ao Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, 100875, Beijing, China
- Chuansheng Chen
- Department of Psychological Science, University of California, Irvine, CA, 92697, USA
- Elizabeth F Loftus
- Department of Psychological Science, University of California, Irvine, CA, 92697, USA
- Bi Zhu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, 100875, Beijing, China
- Institute of Developmental Psychology, Beijing Normal University, 100875, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, 100875, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, 100875, Beijing, China
19
Greene CM, Murphy G. Debriefing works: Successful retraction of misinformation following a fake news study. PLoS One 2023; 18:e0280295. [PMID: 36662686 PMCID: PMC9858761 DOI: 10.1371/journal.pone.0280295] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Accepted: 12/24/2022] [Indexed: 01/21/2023] Open
Abstract
In recent years there has been an explosion of research on misinformation, often involving experiments where participants are presented with fake news stories and subsequently debriefed. In order to avoid potential harm to participants or society, it is imperative that we establish whether debriefing procedures remove any lasting influence of misinformation. In the current study, we followed up with 1547 participants one week after they had been exposed to fake news stories about COVID-19 and then provided with a detailed debriefing. False memories and beliefs for previously-seen fake stories declined from the original study, suggesting that the debrief was effective. Moreover, the debriefing resulted in reduced false memories and beliefs for novel fake stories, suggesting a broader impact on participants' willingness to accept misinformation. Small effects of misinformation on planned health behaviours observed in the original study were also eliminated at follow-up. Our findings suggest that when a careful and thorough debriefing procedure is followed, researchers can safely and ethically conduct misinformation research on sensitive topics.
Affiliation(s)
- Ciara M. Greene
- School of Psychology, University College Dublin, Dublin, Ireland
- Gillian Murphy
- School of Applied Psychology, University College Cork, Cork, Ireland
20
Gwiaździński P, Gundersen AB, Piksa M, Krysińska I, Kunst JR, Noworyta K, Olejniuk A, Morzy M, Rygula R, Wójtowicz T, Piasecki J. Psychological interventions countering misinformation in social media: A scoping review. Front Psychiatry 2023; 13:974782. [PMID: 36684016 PMCID: PMC9849948 DOI: 10.3389/fpsyt.2022.974782] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Accepted: 11/30/2022] [Indexed: 01/07/2023] Open
Abstract
Introduction: The rise of social media users and the explosive growth in misinformation shared across social media platforms have become a serious threat to democratic discourse and public health. These implications have increased the demand for misinformation detection and intervention. To contribute to this challenge, we present a systematic scoping review of psychological interventions countering misinformation in social media. The review was conducted to (i) identify and map evidence on psychological interventions countering misinformation, (ii) compare the viability of the interventions on social media, and (iii) provide guidelines for the development of effective interventions. Methods: A systematic search in three bibliographic databases (PubMed, Embase, and Scopus) and additional searches in Google Scholar and reference lists were conducted. Results: 3,561 records were identified, 75 of which met the eligibility criteria for inclusion in the final review. The psychological interventions identified during the review can be classified into three categories distinguished by Kozyreva et al. (Boosting, Technocognition, and Nudging) and then into 15 types within these. Most of the studied interventions were not implemented and tested in a real social media environment but under strictly controlled settings or on online crowdsourcing platforms. The presented feasibility assessment of implementation insights, expressed qualitatively and with numerical scoring, could guide the development of future interventions that can be successfully implemented on social media platforms. Discussion: The review provides the basis for further research on psychological interventions counteracting misinformation. Future research on interventions should aim to combine effective Technocognition and Nudging in the user experience of online services. Systematic review registration: [https://figshare.com/], identifier [https://doi.org/10.6084/m9.figshare.14649432.v2].
Affiliation(s)
- Paweł Gwiaździński
- Department of Philosophy and Bioethics, Faculty of Health Sciences, Jagiellonian University Medical College, Kraków, Poland
- Consciousness Lab, Institute of Psychology, Jagiellonian University, Kraków, Poland
- Michal Piksa
- Affective Cognitive Neuroscience Laboratory, Department of Pharmacology, Maj Institute of Pharmacology of the Polish Academy of Sciences, Kraków, Poland
- Jonas R. Kunst
- Department of Psychology, University of Oslo, Oslo, Norway
- Karolina Noworyta
- Affective Cognitive Neuroscience Laboratory, Department of Pharmacology, Maj Institute of Pharmacology of the Polish Academy of Sciences, Kraków, Poland
- Rafal Rygula
- Affective Cognitive Neuroscience Laboratory, Department of Pharmacology, Maj Institute of Pharmacology of the Polish Academy of Sciences, Kraków, Poland
- Jan Piasecki
- Department of Philosophy and Bioethics, Faculty of Health Sciences, Jagiellonian University Medical College, Kraków, Poland
21
Aïmeur E, Amri S, Brassard G. Fake news, disinformation and misinformation in social media: a review. SOCIAL NETWORK ANALYSIS AND MINING 2023; 13:30. [PMID: 36789378 PMCID: PMC9910783 DOI: 10.1007/s13278-023-01028-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Revised: 01/07/2023] [Accepted: 01/12/2023] [Indexed: 02/12/2023]
Abstract
Online social networks (OSNs) are rapidly growing and have become a huge source of all kinds of global and local news for millions of users. However, OSNs are a double-edged sword. Despite the great advantages they offer, such as unlimited easy communication and instant news and information, they can also have many disadvantages and issues. One of their major challenges is the spread of fake news. Fake news identification is still a complex, unresolved issue. Furthermore, fake news detection on OSNs presents unique characteristics and challenges that make finding a solution anything but trivial. Meanwhile, artificial intelligence (AI) approaches are still incapable of overcoming this challenging problem. To make matters worse, AI techniques such as machine learning and deep learning are leveraged to deceive people by creating and disseminating fake content. Consequently, automatic fake news detection remains a huge challenge, primarily because the content is designed to closely resemble the truth, and it is often hard to determine its veracity by AI alone without additional information from third parties. This work aims to provide a comprehensive and systematic review of fake news research as well as a fundamental review of existing approaches used to detect and prevent fake news from spreading via OSNs. We present the research problem and the existing challenges, discuss the state of the art in existing approaches for fake news detection, and point out future research directions in tackling the challenges.
Affiliation(s)
- Esma Aïmeur
- Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada
- Sabrine Amri
- Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada
- Gilles Brassard
- Department of Computer Science and Operations Research (DIRO), University of Montreal, Montreal, Canada
22
Salvador Casara BG, Filippi S, Suitner C, Dollani E, Maass A. Tax the élites! The role of economic inequality and conspiracy beliefs on attitudes towards taxes and redistribution intentions. BRITISH JOURNAL OF SOCIAL PSYCHOLOGY 2023; 62:104-118. [PMID: 35758713 PMCID: PMC10084418 DOI: 10.1111/bjso.12555] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Accepted: 06/02/2022] [Indexed: 01/10/2023]
Abstract
Taxation is one of the most widely acknowledged strategies to reduce inequality, particularly if based on progressivity. In a high-powered sample study (N = 2119) we investigated economic inequality and conspiracy beliefs as two key predictors of tax attitude and support for progressive taxation. We found that participants in the high economic inequality condition had lower levels of tax compliance and higher levels of conspiracy beliefs and support for progressive taxation. Furthermore, the effect of the experimental condition on tax compliance was mediated by conspiracy beliefs. Finally, conspiracy belief scores were positively associated with support for progressive taxation. Our results provide evidence that attitudes towards taxation are not monolithic but change considering the aims and targets of specific taxes. Indeed, while the perception of economic inequality prompts the desire for equal redistribution, it also fosters conspiracy narratives that undermine compliance with taxes.
23
Eggleston A, Cook R, Over H. The influence of fake news on face-trait learning. PLoS One 2022; 17:e0278671. [PMID: 36542558 PMCID: PMC9770340 DOI: 10.1371/journal.pone.0278671] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Accepted: 11/21/2022] [Indexed: 12/24/2022] Open
Abstract
Humans spontaneously attribute a wide range of traits to conspecifics based on their facial appearance. Unsurprisingly, previous findings indicate that this 'person evaluation' is affected by information provided about the target's past actions and behaviours. Strikingly, many news items shared on social media sites (e.g., Twitter) describe the actions of individuals who are often shown in accompanying images. This kind of material closely resembles that encountered by participants in previous studies of face-trait learning. We therefore sought to determine whether Twitter posts that pair facial images with favourable and unfavourable biographical information also modulate subsequent trait evaluation of the people depicted. We also assessed whether the effects of this information-valence manipulation were attenuated by the presence of the "disputed tag", introduced by Twitter as a means to combat the influence of fake news. Across two preregistered experiments, we found that fictional tweets that paired facial images with details of the person's positive or negative actions affected the extent to which readers subsequently judged the faces depicted to be trustworthy. When the rating phase followed immediately after the study phase, the presence of the disputed tag attenuated the effect of the behavioural information (Experiment 1: N = 128; Mage = 34.06; 89 female, 36 male, 3 non-binary; 116 White British). However, when the rating phase was conducted after a 10-minute delay, the presence of the disputed tag had no significant effect (Experiment 2: N = 128; Mage = 29.12; 78 female, 44 male, 4 non-binary, 2 prefer not to say; 110 White British). Our findings suggest that disputed tags may have relatively little impact on the long-term face-trait learning that occurs via social media. As such, fake news stories may have considerable potential to shape users' person evaluation.
Affiliation(s)
- Adam Eggleston
- Department of Psychology, University of York, York, United Kingdom
- Richard Cook
- Department of Psychology, University of York, York, United Kingdom
- The School of Psychology, University of Leeds, Leeds, United Kingdom
- Harriet Over
- Department of Psychology, University of York, York, United Kingdom
24
Kemp PL, Loaiza VM, Wahlheim CN. Fake news reminders and veracity labels differentially benefit memory and belief accuracy for news headlines. Sci Rep 2022; 12:21829. [PMID: 36528666 PMCID: PMC9758464 DOI: 10.1038/s41598-022-25649-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Accepted: 12/02/2022] [Indexed: 12/23/2022] Open
Abstract
Fake news exposure can negatively affect memory and beliefs, thus sparking debate about whether to repeat misinformation during corrections. The once-prevailing view was that repeating misinformation increases its believability and should thus be avoided. However, misinformation reminders have more recently been shown to enhance memory and belief accuracy. We replicated such reminder benefits in two experiments using news headlines and compared those benefits against the effects of veracity labeling. Specifically, we examined the effects of labeling real news corrections to enhance conflict salience (Experiment 1) and labeling fake news on its debut to encourage intentional forgetting (Experiment 2). Participants first viewed real and fake news headlines with some fake news labeled as false. Participants then saw labeled and unlabeled real news corrections; labeled corrections appeared alone or after fake news reminders. Reminders promoted the best memory and belief accuracy, whereas veracity labels had selective effects. Correction labels led to intermediate memory and belief accuracy, whereas fake news labels improved accuracy for beliefs more than memory. The extent that real and fake news details were recalled together correlated with overall memory and belief differences across conditions, implicating a critical role for integrative encoding that was promoted most by fake news reminders.
Affiliation(s)
- Paige L. Kemp
- Department of Psychology, University of North Carolina at Greensboro, 296 Eberhart Building, P. O. Box 26170, Greensboro, NC 27402-6170, USA
- Vanessa M. Loaiza
- Department of Psychology, University of Essex, Colchester, UK
- Christopher N. Wahlheim
- Department of Psychology, University of North Carolina at Greensboro, 296 Eberhart Building, P. O. Box 26170, Greensboro, NC 27402-6170, USA
25
Baumgaertner B, Justwan F. The preference for belief, issue polarization, and echo chambers. SYNTHESE 2022; 200:412. [PMID: 36274926 PMCID: PMC9583733 DOI: 10.1007/s11229-022-03880-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Accepted: 08/31/2022] [Indexed: 06/16/2023]
Abstract
Some common explanations of issue polarization and echo chambers rely on social or cognitive mechanisms of exclusion. Accordingly, suggested interventions like "be more open-minded" target these mechanisms: avoid epistemic bubbles and don't discount contrary information. Contrary to such explanations, we show how a much weaker mechanism, the preference for belief, can produce issue polarization in epistemic communities with little to no mechanisms of exclusion. We present a network model (with an empirically validated structure) that demonstrates how a dynamic interaction between the preference for belief and common structures of epistemic communities can turn very small unequal distributions of initial beliefs into full-blown polarization. This points to a different class of explanations, one that emphasizes the importance of the initial spread of information. We also show how our model complements extant explanations by including a version of biased assimilation and motivated reasoning, both cognitive mechanisms of exclusion. We find that mechanisms of exclusion can exacerbate issue polarization, but may not be the ultimate root of it. Hence, the interventions recommended by the extant literature are expected to be of limited effect, and the problem of issue polarization to be even more intractable.
Affiliation(s)
- Bert Baumgaertner
- Department of Politics and Philosophy, University of Idaho, Moscow, USA
- Florian Justwan
- Department of Politics and Philosophy, University of Idaho, Moscow, USA
26
Kemp PL, Alexander TR, Wahlheim CN. Recalling fake news during real news corrections can impair or enhance memory updating: the role of recollection-based retrieval. Cogn Res Princ Implic 2022; 7:85. [PMID: 36114359 PMCID: PMC9481799 DOI: 10.1186/s41235-022-00434-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 08/25/2022] [Indexed: 11/17/2022] Open
Abstract
Fake news can impair memory, contributing to societal controversies such as those over COVID-19 vaccine efficacy. The pernicious influence of fake news is clear when ineffective corrections leave memories outdated. A key theoretical issue is whether people should recall fake news while reading corrections with contradictory details. The familiarity backfire view proposes that recalling fake news increases its familiarity, leading to interference. However, the integrative encoding view proposes that recalling fake news promotes co-activation and binding of contradictory details, leading to facilitation. Two experiments examined if one theory better accounts for memory updating after participants recalled actual fake news details when reading headlines that corrected misinformation. In Phase 1, participants read real and fake news headlines of unclear veracity taken from various internet sources. In Phase 2, participants read real news headlines that reaffirmed real news and corrected fake news from Phase 1. When they detected that Phase 2 real news corrected fake news, they attempted to recall Phase 1 fake news. In Phase 3, participants first recalled real news details. When they remembered that those details were corrections from Phase 2, they attempted to recall fake news from Phase 1. Recalling fake news when noticing corrections in Phase 2 led to better memory for real news in Phase 3 when fake news was recalled again and worse memory for real news in Phase 3 when fake news was not recalled again. Both views explain part of the memory differences associated with recalling fake news during corrections, but only when considering whether people recollected that fake news had been corrected.
27
Bowes SM, Tasimi A. Clarifying the relations between intellectual humility and pseudoscience beliefs, conspiratorial ideation, and susceptibility to fake news. JOURNAL OF RESEARCH IN PERSONALITY 2022. [DOI: 10.1016/j.jrp.2022.104220] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
28
Buczel KA, Szyszka PD, Siwiak A, Szpitalak M, Polczyk R. Vaccination against misinformation: The inoculation technique reduces the continued influence effect. PLoS One 2022; 17:e0267463. [PMID: 35482715 PMCID: PMC9049321 DOI: 10.1371/journal.pone.0267463] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Accepted: 04/08/2022] [Indexed: 11/26/2022] Open
Abstract
The continued influence effect of misinformation (CIE) is a phenomenon in which certain information, although retracted and corrected, still has an impact on event reporting, reasoning, inference, and decisions. The main goal of this paper is to investigate to what extent this effect can be reduced using the procedure of inoculation and how it can be moderated by the reliability of corrections' sources. The results show that the reliability of corrections' sources did not affect their processing when participants were not inoculated. However, inoculated participants relied on misinformation less when the correction came from a highly credible source. For this source condition, as a result of inoculation, a significant increase in belief in retraction, as well as a decrease in belief in misinformation, was also found. Contrary to previous reports, belief in misinformation rather than belief in retraction predicted reliance on misinformation. These findings are of great practical importance, as certain boundary conditions for inoculation efficiency have been discovered that reduce the impact of the continued influence of misinformation, and of theoretical importance, as they provide insight into the mechanisms behind CIE. The results were interpreted in terms of existing CIE theories as well as within the remembering framework, which describes the conversion from memory traces to behavioral manifestations of memory.
Affiliation(s)
- Adam Siwiak
- Institute of Psychology, Jagiellonian University, Kraków, Poland
- Romuald Polczyk
- Institute of Psychology, Jagiellonian University, Kraków, Poland
29
How do we raise media bias awareness effectively? Effects of visualizations to communicate bias. PLoS One 2022; 17:e0266204. [PMID: 35417491 PMCID: PMC9007373 DOI: 10.1371/journal.pone.0266204] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Accepted: 03/16/2022] [Indexed: 11/27/2022] Open
Abstract
Media bias has a substantial impact on individual and collective perception of news. Effective communication that may counteract its potential negative effects still needs to be developed. In this article, we analyze how to facilitate the detection of media bias with visual and textual aids in the form of (a) a forewarning message, (b) text annotations, and (c) political classifiers. In an online experiment, we randomized 985 participants to receive a biased liberal or conservative news article in any combination of the three aids. We also assessed their subjective perception of media bias in the article, attitude change, and political ideology. Both the forewarning message and the annotations increased media bias awareness, whereas the political classification showed no effect. Incongruence between an article's political position and individual political orientation also increased media bias awareness. Visual aids did not mitigate this effect. Likewise, attitudes remained unaltered.
30
Panizza F, Ronzani P, Martini C, Mattavelli S, Morisseau T, Motterlini M. Lateral reading and monetary incentives to spot disinformation about science. Sci Rep 2022; 12:5678. [PMID: 35383208 PMCID: PMC8981191 DOI: 10.1038/s41598-022-09168-y] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Accepted: 03/15/2022] [Indexed: 11/21/2022] Open
Abstract
Disinformation about science can impose enormous economic and public health burdens. A recently proposed strategy to help online users recognise false content is to follow the techniques of professional fact checkers, such as looking for information on other websites (lateral reading) and looking beyond the first results suggested by search engines (click restraint). In two preregistered online experiments (N = 5387), we simulated a social media environment and tested two interventions, one in the form of a pop-up meant to advise participants to follow such techniques, the other based on monetary incentives. We measured participants’ ability to identify whether information was scientifically valid or invalid. Analysis of participants’ search style reveals that both the monetary incentives and the pop-up increased the use of fact-checking strategies. Monetary incentives were overall effective in increasing accuracy, whereas the pop-up worked when the source of information was unknown. The pop-up and incentives, when used together, produced a cumulative effect on accuracy. We suggest that monetary incentives enhance content relevance, and could be combined with fact-checking techniques to counteract disinformation.
Affiliation(s)
- Folco Panizza
- Molecular Mind Laboratory, IMT School for Advanced Studies Lucca, Lucca, Italy; Centre for Applied and Experimental Epistemology, Vita-Salute San Raffaele University, Cesano Maderno, Italy
- Piero Ronzani
- Centre for Applied and Experimental Epistemology, Vita-Salute San Raffaele University, Cesano Maderno, Italy
- Carlo Martini
- Centre for Applied and Experimental Epistemology, Vita-Salute San Raffaele University, Cesano Maderno, Italy; TINT - Centre for Philosophy of Social Science, Department of Political and Economic Studies, University of Helsinki, Helsinki, Finland
- Tiffany Morisseau
- Université de Paris and Université Gustave Eiffel, LaPEA, Boulogne-Billancourt, France; Strane Innovation, Gif-sur-Yvette, France
- Matteo Motterlini
- Centre for Applied and Experimental Epistemology, Vita-Salute San Raffaele University, Cesano Maderno, Italy
31
Johnson SS. Knowing Well, Being Well: well-being born of understanding: The Urgent Need for Coordinated and Comprehensive Efforts to Combat Misinformation. Am J Health Promot 2022; 36:559-581. [PMID: 35164544 PMCID: PMC8851052 DOI: 10.1177/08901171211070957] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
32
Roozenbeek J, van der Linden S. How to Combat Health Misinformation: A Psychological Approach. Am J Health Promot 2022; 36:569-575. [DOI: 10.1177/08901171211070958] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Affiliation(s)
- Jon Roozenbeek
- Department of Psychology, School of the Biological Sciences, University of Cambridge, Cambridge, UK
- Sander van der Linden
- Department of Psychology, School of the Biological Sciences, University of Cambridge, Cambridge, UK
33
Disinformation detox: teaching and learning about mis- and disinformation using socio-technical systems research perspectives. INFORMATION AND LEARNING SCIENCES 2022. [DOI: 10.1108/ils-09-2021-0083] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Abstract
Purpose
This paper aims to address some limitations in existing approaches to the study of mis- and dis-information and offers what the authors propose as a more comprehensive approach to framing and studying these issues, geared toward undergraduate learners. In doing so, the authors prioritize social shaping of technology and critical informatics perspectives as lenses for explicating and understanding complex mis- and dis-information phenomena. The purpose is to offer readers an understanding of the mis- and dis-information studies landscape and to advocate for the merits of the approach the authors outline.
Design/methodology/approach
The paper builds upon design-based research (DBR) methods. The authors present the curriculum that will be empirically researched in 2022 and beyond in a program of iterative DBR.
Findings
Findings of this conceptual paper comprise a fully articulated undergraduate syllabus for a course the authors entitle “Disinformation Detox.” The authors will iterate upon this curriculum in ongoing situated studies conducted in undergraduate classrooms.
Originality/value
The value and originality of this article is in its contribution of the ontological “innovation” of a way of framing the mis- and dis-information knowledge domain in terms of social shaping and critical informatics theories. The authors argue that the proposed approach offers students the opportunity to cultivate a complex form of what Milner and Phillips describe as “ecological literacy” that is in keeping with the mis- and dis-information problem domain.
34
Tay LQ, Hurlstone MJ, Kurz T, Ecker UKH. A comparison of prebunking and debunking interventions for implied versus explicit misinformation. Br J Psychol 2021; 113:591-607. [PMID: 34967004 DOI: 10.1111/bjop.12551] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 12/15/2021] [Indexed: 01/14/2023]
Abstract
Psychological research has offered valuable insights into how to combat misinformation. The studies conducted to date, however, have three limitations. First, pre-emptive ("prebunking") and retroactive ("debunking") interventions have mostly been examined in parallel, and thus it is unclear which of these two predominant approaches is more effective. Second, there has been a focus on misinformation that is explicitly false, but implied misinformation that uses literally true information to mislead is common in the real world. Finally, studies have relied mainly on questionnaire measures of reasoning, neglecting behavioural impacts of misinformation and interventions. To offer incremental progress towards addressing these three issues, we conducted an experiment (N = 735) involving misinformation on fair trade. We contrasted the effectiveness of prebunking versus debunking and the impacts of implied versus explicit misinformation, and incorporated novel measures assessing consumer behaviours (i.e., willingness-to-pay; information seeking; online misinformation promotion) in addition to standard questionnaire measures. In general, both prebunking and debunking reduced misinformation reliance. We also found that individuals tended to rely more on explicit than implied misinformation both with and without interventions.
Affiliation(s)
- Li Qian Tay
- School of Psychological Science, University of Western Australia, Perth, Australia
- Tim Kurz
- School of Psychological Science, University of Western Australia, Perth, Australia
- Ullrich K H Ecker
- School of Psychological Science, University of Western Australia, Perth, Australia
35
A brief forewarning intervention overcomes negative effects of salient changes in COVID-19 guidance. JUDGMENT AND DECISION MAKING 2021. [DOI: 10.1017/s1930297500008548] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
During the COVID-19 pandemic, public health guidance (e.g., regarding the use of non-medical masks) changed over time. Although many revisions were a result of gains in scientific understanding, we nonetheless hypothesized that making changes in guidance salient would negatively affect evaluations of experts and health-protective intentions. In Study 1 (N = 300), we demonstrate that describing COVID-19 guidance in terms of inconsistency (versus consistency) leads people to perceive scientists and public health authorities less favorably (e.g., as less expert). For participants in Canada (n = 190), though not the U.S. (n = 110), making guidance change salient also reduced intentions to download a contact tracing app. In Study 2 (N = 1399), we show that a brief forewarning intervention mitigates detrimental effects of changes in guidance. In the absence of forewarning, emphasizing inconsistency harmed judgments of public health authorities and reduced health-protective intentions, but forewarning eliminated this effect.
36
Lewandowsky S, Yesilada M. Inoculating against the spread of Islamophobic and radical-Islamist disinformation. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2021; 6:57. [PMID: 34410513 PMCID: PMC8374109 DOI: 10.1186/s41235-021-00323-z] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Accepted: 08/09/2021] [Indexed: 12/04/2022]
Abstract
We report the results of a preregistered study that tested the effectiveness of inoculating participants against Islamophobic and radical-Islamist disinformation. Participants in the experimental (inoculation) condition watched a video that explained common rhetorical markers of radical-Islamist and Islamophobic disinformation that had been identified in an analysis of YouTube content. The information was presented in a neutral context not involving Islam and focused on analysis of the misleading argumentation. The control group watched a video about an unrelated topic. Participants were then exposed to target videos with “gateway” content that constituted an entry point to potential Islamist or Islamophobic radicalization. Both videos contained numerous items of disinformation. Participants then answered a variety of questions such as how likely they were to share the video, their level of agreement, and their perceived accuracy of the video. Participants who had received the inoculation displayed less agreement with the video content, perceived the video as less reliable, and were less likely to share it in comparison with participants in the control group. The study provides support for the use of argument-based inoculation in combatting extremist messages.
Affiliation(s)
- Stephan Lewandowsky
- School of Psychological Science, University of Bristol, 12A Priory Road, Bristol, BS8 1TU, UK; University of Western Australia, Perth, Australia
- Muhsin Yesilada
- School of Psychological Science, University of Bristol, 12A Priory Road, Bristol, BS8 1TU, UK
37
Pennycook G, Rand DG. The Psychology of Fake News. Trends Cogn Sci 2021; 25:388-402. [PMID: 33736957 DOI: 10.1016/j.tics.2021.02.007] [Citation(s) in RCA: 206] [Impact Index Per Article: 68.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2020] [Revised: 02/13/2021] [Accepted: 02/17/2021] [Indexed: 12/25/2022]
Abstract
We synthesize a burgeoning literature investigating why people believe and share false or highly misleading news online. Contrary to a common narrative whereby politics drives susceptibility to fake news, people are 'better' at discerning truth from falsehood (despite greater overall belief) when evaluating politically concordant news. Instead, poor truth discernment is associated with lack of careful reasoning and relevant knowledge, and the use of heuristics such as familiarity. Furthermore, there is a substantial disconnect between what people believe and what they share on social media. This dissociation is largely driven by inattention, more so than by purposeful sharing of misinformation. Thus, interventions can successfully nudge social media users to focus more on accuracy. Crowdsourced veracity ratings can also be leveraged to improve social media ranking algorithms.
Affiliation(s)
- Gordon Pennycook
- Hill/Levene Schools of Business, University of Regina, Regina, SK S4S 0A2, Canada; Department of Psychology, University of Regina, Regina, SK S4S 0A2, Canada.
- David G Rand
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02142, USA; Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA 02142, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02142, USA.