1. Buczel KA, Siwiak A, Szpitalak M, Polczyk R. How do forewarnings and post-warnings affect misinformation reliance? The impact of warnings on the continued influence effect and belief regression. Mem Cognit 2024. PMID: 38261249. DOI: 10.3758/s13421-024-01520-z.
Abstract
People often continue to rely on certain information in their reasoning even after this information has been retracted; this is called the continued influence effect (CIE) of misinformation. One technique for reducing this effect involves explicitly warning people that they might have been misled. The present study investigated the effectiveness of such warnings depending on when they were given (either before or after the misinformation). In two experiments (N = 337), we found that while a forewarning did reduce reliance on misinformation, retrospectively warned participants (whether the warning was placed between the misinformation and the retraction or just before testing) relied on the misinformation to a similar degree as unwarned participants. However, the protective effect of the forewarning was not durable: reliance on the misinformation increased over the 7 days following the first test, despite continued memory of the retraction.
Affiliation(s)
- Klara Austeja Buczel
- Institute of Psychology, Jagiellonian University, Kraków, Poland
- Doctoral School in the Social Sciences, Jagiellonian University, Kraków, Poland
- Adam Siwiak
- Institute of Psychology, Jagiellonian University, Kraków, Poland
- Doctoral School in the Social Sciences, Jagiellonian University, Kraków, Poland
- Romuald Polczyk
- Institute of Psychology, Jagiellonian University, Kraków, Poland
2. Chan MPS, Albarracín D. A meta-analysis of correction effects in science-relevant misinformation. Nat Hum Behav 2023; 7:1514-1525. PMID: 37322236. DOI: 10.1038/s41562-023-01623-8.
Abstract
Science-relevant misinformation, defined as false claims concerning a scientific measurement procedure or scientific evidence, regardless of the author's intent, is illustrated by the fiction that the coronavirus disease 2019 vaccine contained microchips to track citizens. Updating science-relevant misinformation after a correction can be challenging, and little is known about which theoretical factors influence the success of a correction. This meta-analysis examined 205 effect sizes (k = 205, obtained from 74 reports; N = 60,861), which showed that attempts to debunk science-relevant misinformation were, on average, not successful (d = 0.19, P = 0.131, 95% confidence interval -0.06 to 0.43). However, corrections were more successful when the initial science-relevant belief concerned negative topics and domains other than health. Corrections also fared better when they were detailed, when recipients were likely familiar with both sides of the issue ahead of the study, and when the issue was not politically polarized.
Affiliation(s)
- Man-Pui Sally Chan
- Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA, USA
- Dolores Albarracín
- Annenberg School for Communication, Annenberg Public Policy Center, School of Arts and Sciences, School of Nursing, Wharton School, University of Pennsylvania, Philadelphia, PA, USA
3. Prike T, Blackley P, Swire-Thompson B, Ecker UKH. Examining the replicability of backfire effects after standalone corrections. Cogn Res Princ Implic 2023; 8:39. PMID: 37395864. DOI: 10.1186/s41235-023-00492-z. Open Access.
Abstract
Corrections are a frequently used and effective tool for countering misinformation. However, concerns have been raised that corrections may introduce false claims to new audiences when the misinformation is novel. This is because boosting the familiarity of a claim can increase belief in that claim, and thus exposing new audiences to novel misinformation-even as part of a correction-may inadvertently increase misinformation belief. Such an outcome could be conceptualized as a familiarity backfire effect, whereby a familiarity boost increases false-claim endorsement above a control-condition or pre-correction baseline. Here, we examined whether standalone corrections-that is, corrections presented without initial misinformation exposure-can backfire and increase participants' reliance on the misinformation in their subsequent inferential reasoning, relative to a no-misinformation, no-correction control condition. Across three experiments (total N = 1156) we found that standalone corrections did not backfire immediately (Experiment 1) or after a one-week delay (Experiment 2). However, there was some mixed evidence suggesting corrections may backfire when there is skepticism regarding the correction (Experiment 3). Specifically, in Experiment 3, we found the standalone correction to backfire in open-ended responses, but only when there was skepticism towards the correction. However, this did not replicate with the rating scales measure. Future research should further examine whether skepticism towards the correction is the first replicable mechanism for backfire effects to occur.
Affiliation(s)
- Toby Prike
- School of Psychological Science, University of Western Australia, Perth, Australia
- Phoebe Blackley
- School of Psychological Science, University of Western Australia, Perth, Australia
- Briony Swire-Thompson
- Network Science Institute, Northeastern University, Boston, USA
- Institute of Quantitative Social Science, Harvard University, Cambridge, USA
- Ullrich K H Ecker
- School of Psychological Science, University of Western Australia, Perth, Australia
- Public Policy Institute, University of Western Australia, Perth, Australia
4. Westbrook V, Wegener DT, Susmann MW. Mechanisms in continued influence: The impact of misinformation corrections on source perceptions. Mem Cognit 2023. PMID: 36988856. DOI: 10.3758/s13421-023-01402-w.
Abstract
Research on the continued influence effect (CIE) of misinformation has demonstrated that misinformation continues to influence people's beliefs and judgments even after it has been corrected. Although most theorizing about the CIE attempts to explain why corrections do not eliminate belief in and the influence of the misinformation, the present research takes a different approach and focuses instead on why corrections do reduce belief in misinformation (even if not entirely). We examined how a correction can change perceptions of the original source of the misinformation and how these changes in perceptions can mediate continued influence effects. We also examined causal evidence linking manipulations of misinformation source perceptions to continued belief and misinformation-relevant inferential reasoning. Study 1 demonstrated that an external correction (i.e., a new source labeling misinformation as false) influences perceptions of the misinformation source, and these perceptions of the misinformation source then correlated with belief in the misinformation. Study 2 replicated the results of Study 1 and used source derogation to manipulate misinformation source perceptions and further lessen continued belief. Study 3 was a preregistered replication of previous results using new methodology. These studies suggest that perception of the misinformation source is one mechanism that can cause changes in belief in misinformation, and that changes in the perception of a source can be achieved simply by correcting the source or through other means. This approach can be used to identify other mechanisms responsible for reducing belief in misinformation.
Affiliation(s)
- Mark W Susmann
- Vanderbilt University, Psychological Sciences, Nashville, TN, USA
5. Sharma PR, Wade KA, Jobson L. A systematic review of the relationship between emotion and susceptibility to misinformation. Memory 2023; 31:1-21. PMID: 36093958. DOI: 10.1080/09658211.2022.2120623.
Abstract
Inaccurate memory reports can have serious consequences in forensic and clinical settings, where emotion and misinformation are two common sources of memory distortion. Many studies have investigated how these factors are related: does emotion protect memory, or does it leave memory more vulnerable to the distorting effects of misinformation? The findings remain mixed. The present review therefore aimed to clarify the relationship between emotion and susceptibility to misinformation. Thirty-nine eligible studies were reviewed. Results varied according to the type and dimension of emotion measured. Level of arousal may be unrelated to susceptibility to misinformation when retrieval occurs without delay; studies including delayed retrieval were limited. Stimulus valence may be associated with increased susceptibility to peripheral misinformation but unrelated to other misinformation. The following results were reported by a limited number of studies: short-term distress and moderate levels of stress may decrease susceptibility, while anger and a greater cortisol response to stress may increase susceptibility to misinformation. Source memory may also be unaffected by emotion. The results have important potential implications for forensic and clinical practice, for example by highlighting the value of enquiring about witnesses' source memory. Methodological recommendations for future studies are made.
Affiliation(s)
- Prerika R Sharma
- Turner Institute for Brain and Mental Health and School of Psychological Sciences, Monash University, Melbourne, Australia
- Kimberley A Wade
- Department of Psychology, University of Warwick, Warwick, United Kingdom
- Laura Jobson
- Turner Institute for Brain and Mental Health and School of Psychological Sciences, Monash University, Melbourne, Australia
6. Does explaining the origins of misinformation improve the effectiveness of a given correction? Mem Cognit 2023; 51:422-436. PMID: 36125658. PMCID: PMC9487849. DOI: 10.3758/s13421-022-01354-7.
Abstract
Misinformation often has a continuing influence on event-related reasoning even when it is clearly and credibly corrected; this is referred to as the continued influence effect. The present work investigated whether a correction's effectiveness can be improved by explaining the origins of the misinformation. In two experiments, we examined whether a correction that explained the misinformation as originating either from intentional deception or from an unintentional error was more effective than a correction that only identified the misinformation as false. Experiment 1 found no evidence that corrections explaining the reason the misinformation was presented were more effective than a correction not accompanied by an explanation, and no evidence of a difference in effectiveness between a correction that explained the misinformation as intentional deception and one that explained it as unintentional error. We replicated these findings in Experiment 2, which also found substantial attenuation of the continued influence effect in a novel scenario with the same underlying structure. Overall, the results suggest that informing people of the cause of the misinformation's presentation, whether deliberate or accidental, may not be an effective correction strategy over and above stating that the misinformation is false.
7. The independent effects of source expertise and trustworthiness on retraction believability: The moderating role of vested interest. Mem Cognit 2022; 51:845-861. PMID: 36460863. PMCID: PMC9718466. DOI: 10.3758/s13421-022-01374-3.
Abstract
Past research suggests that the trustworthiness of a source issuing a retraction of misinformation impacts retraction effectiveness, whereas source expertise does not. However, this prior research largely used expert sources who had a vested interest in issuing the retraction, which might have reduced the impact of those expert sources. We predicted that source expertise can impact a retraction's believability independent of trustworthiness, but that this is most likely when the source does not have a vested interest in issuing a retraction. Study 1 demonstrated that retractions from an expert source are believed more and lead to less continued belief in misinformation than retractions from an inexpert source while controlling for perceptions of trustworthiness. Additionally, Study 1 demonstrated that this only occurs when the source had no vested interest in issuing the retraction. Study 2 found similar effects using a design containing manipulations of both expertise and trustworthiness. These results suggest that source expertise can impact retraction effectiveness and that vested interest is a variable that is critical to consider when determining when this will occur.
8. Ecker U, Sanderson JA, McIlhiney P, Rowsell JJ, Quekett HL, Brown G, Lewandowsky S. Combining refutations and social norms increases belief change. Q J Exp Psychol (Hove) 2022; 76:1275-1297. PMID: 35748514. DOI: 10.1177/17470218221111750.
Abstract
Misinformed beliefs are difficult to change. Refutations that target false claims typically reduce false beliefs, but tend to be only partially effective. In this study, a social norming approach was explored to test whether providing peer norms could offer an alternative or complementary approach to refutation. Three experiments investigated whether a descriptive norm, by itself or in combination with a refutation, could reduce the endorsement of worldview-congruent claims. Experiment 1 found that using a single point estimate to communicate a norm affected belief but had less impact than a refutation. Experiment 2 used a verbally presented distribution of four values to communicate a norm, which was largely ineffective. Experiment 3 used a graphically presented social norm with 25 values, which was found to be as effective at reducing claim belief as a refutation, with the combination of both interventions being most impactful. These results provide a proof of concept that normative information can aid in the debunking of false or equivocal claims, and suggest that theories of misinformation processing should take social factors into account.
Affiliation(s)
- Ullrich Ecker
- School of Psychological Science, University of Western Australia, 35 Stirling Hwy, Perth 6009, Australia
- Jasmyne A Sanderson
- School of Psychological Science, University of Western Australia, 35 Stirling Hwy, Perth 6009, Australia
- Paul McIlhiney
- School of Psychological Science, University of Western Australia, 35 Stirling Hwy, Perth 6009, Australia
- Jessica J Rowsell
- School of Psychological Science, University of Western Australia, 35 Stirling Hwy, Perth 6009, Australia
- Hayley L Quekett
- School of Psychological Science, University of Western Australia, 35 Stirling Hwy, Perth 6009, Australia
- Gordon Brown
- Department of Psychology, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL, United Kingdom
- Stephan Lewandowsky
- School of Psychological Science, University of Bristol, 12a Priory Road, Bristol BS8 1TU, United Kingdom
- School of Psychological Science, University of Western Australia, 35 Stirling Hwy, Perth 6009, Australia
9. Zhang Y, Guo B, Ding Y, Liu J, Qiu C, Liu S, Yu Z. Investigation of the determinants for misinformation correction effectiveness on social media during COVID-19 pandemic. Inf Process Manag 2022; 59:102935. PMID: 35400028. PMCID: PMC8979789. DOI: 10.1016/j.ipm.2022.102935.
Abstract
The rapid dissemination of misinformation on social media during the COVID-19 pandemic triggers panic and threatens pandemic preparedness and control. Correction is a crucial countermeasure for debunking misperceptions. However, the mechanisms that make corrections effective on social media have not been fully verified. Previous work has focused on psychological theories and experimental studies, and it is unclear whether those conclusions apply to actual social media. This study explores the determinants of misinformation-correction effectiveness on social media by combining a data-driven approach with theories from psychology and communication. Specifically, drawing on the backfire effect, source credibility, and theories of the audience's role in dissemination, we propose five hypotheses involving seven potential factors (concerning correction content and publishers' influence), e.g., the proportion of original misinformation repeated and warnings of misinformation. We then obtained 1,487 significant COVID-19-related corrections posted on Microblog between January 1st, 2020 and April 30th, 2020, and annotated each piece of correction according to the aforementioned factors. A comprehensive analysis of the dataset yields several promising conclusions. For example, mentioning a large proportion of the original misinformation in a correction does not undermine readers' belief in the correction within a short period after reading; warnings of misinformation delivered in a demanding tone make corrections less effective; and the determinants of correction effectiveness vary across misinformation topics. Finally, we build a regression model to predict correction effectiveness. These results provide practical suggestions for correcting misinformation on social media and a tool that guides practitioners in revising corrections before publishing, improving their efficacy.
Affiliation(s)
- Yuqi Zhang
- Northwestern Polytechnical University, Xi'an 710129, China
- Bin Guo
- Northwestern Polytechnical University, Xi'an 710129, China
- Yasan Ding
- Northwestern Polytechnical University, Xi'an 710129, China
- Jiaqi Liu
- Northwestern Polytechnical University, Xi'an 710129, China
- Chen Qiu
- Northwestern Polytechnical University, Xi'an 710129, China
- Sicong Liu
- Northwestern Polytechnical University, Xi'an 710129, China
- Zhiwen Yu
- Northwestern Polytechnical University, Xi'an 710129, China
10. Buczel KA, Szyszka PD, Siwiak A, Szpitalak M, Polczyk R. Vaccination against misinformation: The inoculation technique reduces the continued influence effect. PLoS One 2022; 17:e0267463. PMID: 35482715. PMCID: PMC9049321. DOI: 10.1371/journal.pone.0267463. Open Access.
Abstract
The continued influence effect of misinformation (CIE) is a phenomenon in which certain information, although retracted and corrected, still has an impact on event reporting, reasoning, inference, and decisions. The main goal of this paper is to investigate to what extent this effect can be reduced using an inoculation procedure, and how the effect is moderated by the reliability of the corrections' sources. The results show that the reliability of the corrections' sources did not affect their processing when participants were not inoculated. However, inoculated participants relied on misinformation less when the correction came from a highly credible source. For this source condition, inoculation also produced a significant increase in belief in the retraction and a decrease in belief in the misinformation. Contrary to previous reports, belief in misinformation, rather than belief in the retraction, predicted reliance on misinformation. These findings are of practical importance, as they identify boundary conditions under which inoculation reduces the continued influence of misinformation, and of theoretical importance, as they provide insight into the mechanisms behind the CIE. The results are interpreted in terms of existing CIE theories as well as the remembering framework, which describes the conversion of memory traces into behavioral manifestations of memory.
Affiliation(s)
- Adam Siwiak
- Institute of Psychology, Jagiellonian University, Kraków, Poland
- Romuald Polczyk
- Institute of Psychology, Jagiellonian University, Kraków, Poland
11. Newman D, Lewandowsky S, Mayo R. Believing in nothing and believing in everything: The underlying cognitive paradox of anti-COVID-19 vaccine attitudes. Pers Individ Dif 2022; 189:111522. PMID: 35068637. PMCID: PMC8761558. DOI: 10.1016/j.paid.2022.111522.
Affiliation(s)
- Devora Newman
- Department of Psychology, The Hebrew University of Jerusalem, 9190501, Israel
- Stephan Lewandowsky
- School of Psychological Science, University of Bristol, Bristol BS8 1TH, UK
- School of Psychological Science, University of Western Australia, Perth 6009, Australia
- Ruth Mayo
- Department of Psychology, The Hebrew University of Jerusalem, 9190501, Israel
12. Miller AL, Wissman KT, Peterson DJ. The continued influence effect: Examining how age, retraction, and delay impact inferential reasoning. Appl Cogn Psychol 2022. DOI: 10.1002/acp.3939.
Affiliation(s)
- Alyssa L. Miller
- Department of Psychology, North Dakota State University, Fargo, ND, USA
13. King KK, Wang B, Escobari D, Oraby T. Dynamic effects of falsehoods and corrections on social media: A theoretical modeling and empirical evidence. J Manage Inform Syst 2022. DOI: 10.1080/07421222.2021.1990611.
Affiliation(s)
- Kelvin K. King
- School of Information Studies, Syracuse University, 343 Hinds Hall, Syracuse, NY 13244
- Bin Wang
- University of Texas Rio Grande Valley, 1201 W University Dr, Edinburg, TX 78539
- Diego Escobari
- University of Texas Rio Grande Valley, 1201 W University Dr, Edinburg, TX 78539
- Tamer Oraby
- University of Texas Rio Grande Valley, 1201 W University Dr, Edinburg, TX 78539
14. Kan IP, Pizzonia KL, Drummey AB, Mikkelsen EJV. Exploring factors that mitigate the continued influence of misinformation. Cogn Res Princ Implic 2021; 6:76. PMID: 34837587. PMCID: PMC8627545. DOI: 10.1186/s41235-021-00335-9. Open Access.
Abstract
Background: The term "continued influence effect" (CIE) refers to the phenomenon that discredited and obsolete information continues to affect behavior and beliefs. The practical relevance of this work is particularly apparent as we confront fake news every day. An important question therefore becomes: how can we mitigate the continued influence of misinformation? Decades of research have identified several factors that contribute to reducing the CIE, but few studies have reported successful elimination. Across three studies, we evaluated the relative contribution of three factors (targeting the misinformation, providing an alternative explanation, and the relative importance of the misinformation content) to reducing the CIE.
Results: Across three studies and two different CIE measures, we found that providing an alternative consistently reduced the CIE. Furthermore, under certain conditions, combining an alternative with direct targeting of the misinformation in the correction statement eliminated the CIE, such that individuals who encountered that type of correction behaved similarly to baseline participants who never encountered the (mis)information. In contrast, under one CIE measure, participants who received correction statements lacking those elements referenced the (mis)information as frequently as baseline participants who never encountered a correction. Finally, we delineated several component processes involved in outdating misinformation and found that the extent of outdating success varied as a function of the causality of the misinformation.
Conclusions: The damaging effects of fake news are undeniable, and its negative consequences are exacerbated in the digital age. Our results contribute to our understanding of how fake news persists and how we may begin to mitigate its effects.
15. Winters M, Oppenheim B, Sengeh P, Jalloh MB, Webber N, Pratt SA, Leigh B, Molsted-Alvesson H, Zeebari Z, Sundberg CJ, Jalloh MF, Nordenstedt H. Debunking highly prevalent health misinformation using audio dramas delivered by WhatsApp: evidence from a randomised controlled trial in Sierra Leone. BMJ Glob Health 2021; 6:bmjgh-2021-006954. PMID: 34758970. PMCID: PMC8578963. DOI: 10.1136/bmjgh-2021-006954. Open Access.
Abstract
Introduction: Infectious disease misinformation is widespread and poses challenges to disease control. There is limited evidence on how to effectively counter health misinformation in a community setting, particularly in low-income regions, and unsettled scientific debate about whether misinformation should be directly discussed and debunked, or implicitly countered by providing scientifically correct information.
Methods: The Contagious Misinformation Trial developed and tested interventions designed to counter highly prevalent infectious disease misinformation in Sierra Leone, namely the beliefs that (1) mosquitoes cause typhoid and (2) typhoid co-occurs with malaria. The information intervention for group A (n=246) explicitly discussed misinformation and explained why it was incorrect and then provided the scientifically correct information. The intervention for group B (n=245) only focused on providing correct information, without directly discussing related misinformation. Both interventions were delivered via audio dramas on WhatsApp that incorporated local cultural understandings of typhoid. Participants were randomised 1:1:1 to the intervention groups or the control group (n=245), who received two episodes about breast feeding.
Results: At baseline, 51% believed that typhoid is caused by mosquitoes and 59% believed that typhoid and malaria always co-occur. The endline survey was completed by 91% of participants. Results from the intention-to-treat, per-protocol, and as-treated analyses show that both interventions substantially reduced belief in misinformation compared with the control group. Estimates from these analyses, as well as an exploratory dose–response analysis, suggest that direct debunking may be more effective at countering misinformation. Both interventions improved people's knowledge and self-reported behaviour around typhoid risk reduction, and yielded self-reported increases in an important preventive method, drinking treated water.
Conclusion: These results from a field experiment in a community setting show that highly prevalent health misinformation can be countered, and that direct, detailed debunking may be most effective. Trial registration number: NCT04112680.
Affiliation(s)
- Maike Winters
- Department of Global Public Health, Karolinska Institutet, Stockholm, Sweden
- Ben Oppenheim
- Center on International Cooperation, New York University, New York, New York, USA
- Metabiota, San Francisco, California, USA
- Bailah Leigh
- College of Medicine and Allied Health Sciences, Freetown, Sierra Leone
- Zangin Zeebari
- Department of Economics, Finance and Statistics, Jönköping International Business School, Jönköping, Sweden
- Carl Johan Sundberg
- Department of Physiology and Pharmacology, Karolinska Institutet, Stockholm, Sweden
- Mohamed F Jalloh
- Department of Global Public Health, Karolinska Institutet, Stockholm, Sweden
- Helena Nordenstedt
- Department of Global Public Health, Karolinska Institutet, Stockholm, Sweden
16. McIlhiney P, Gignac GE, Weinborn M, Ecker UK. Sensitivity to misinformation retractions in the continued influence paradigm: Evidence for stability. Q J Exp Psychol (Hove) 2021; 75:1259-1271. PMID: 34541938. DOI: 10.1177/17470218211048986.
Abstract
Research has consistently shown that misinformation can continue to affect inferential reasoning after a correction. This phenomenon is known as the continued influence effect (CIE). Recent studies have demonstrated that CIE susceptibility can be predicted by individual differences in stable cognitive abilities. Based on this, it was reasoned that CIE susceptibility ought to have some degree of stability itself; however, this has never been tested. The current study aimed to investigate the temporal stability of retraction sensitivity, arguably a major determinant of CIE susceptibility. Participants were given parallel forms of a standard CIE task 4 weeks apart, and the association between testing points was assessed with an intra-class correlation coefficient and confirmatory factor analysis. Results suggested that retraction sensitivity is relatively stable and can be predicted as an individual-differences variable. These results encourage continued individual-differences research on the CIE and have implications for real-world CIE intervention.
Affiliations
- Paul McIlhiney: School of Psychological Science, The University of Western Australia, Perth, WA, Australia
- Gilles E Gignac: School of Psychological Science, The University of Western Australia, Perth, WA, Australia
- Michael Weinborn: School of Psychological Science, The University of Western Australia, Perth, WA, Australia
- Ullrich K H Ecker: School of Psychological Science, The University of Western Australia, Perth, WA, Australia
17.
Abstract
Research examining the continued influence effect (CIE) of misinformation has reliably found that belief in misinformation persists even after the misinformation has been retracted. However, much remains to be learned about the psychological mechanisms responsible for this phenomenon. Most theorizing in this domain has focused on cognitive mechanisms. Yet some proposed cognitive explanations provide reason to believe that motivational mechanisms might also play a role. The present research tested the prediction that retractions of misinformation produce feelings of psychological discomfort that motivate one to disregard the retraction to reduce this discomfort. Studies 1 and 2 found that retractions of misinformation elicit psychological discomfort, and this discomfort predicts continued belief in and use of misinformation. Study 3 showed that the relations between discomfort and continued belief in and use of misinformation are causal in nature by manipulating how participants appraised the meaning of discomfort. These findings suggest that discomfort could play a key mechanistic role in the CIE, and that changing how people interpret this discomfort can make retractions more effective at reducing continued belief in misinformation.
18. Reasoning strategies determine the effect of disconfirmation on belief in false claims. Mem Cognit 2021; 49:1528-1536. [PMID: 34050493] [DOI: 10.3758/s13421-021-01190-1]
Abstract
The dual-strategy model of reasoning proposes that people tend to use one of two reasoning strategies: either a statistical or a counterexample strategy, with the latter being more sensitive to potential counterexamples to a given conclusion. Previous studies have examined the effects of reasoning strategy in a variety of contexts. In the present study, we looked at the effects of gist repetition and disconfirmation on belief in an unknown claim. This is particularly interesting since there is no single normative analysis of this situation. We examine the hypotheses that (a) increasing gist repetition will result in higher levels of belief with both counterexample and statistical reasoners, and (b) that counterexample reasoners will have lower belief levels following a single disconfirming instance than will statistical reasoners. In a large-scale online study, over 2,000 adult participants received a False Claim procedure along with a Strategy Diagnostic. Results are consistent with the hypotheses. This provides strong evidence that the dual-strategy model captures a clear difference in information processing that is not captured by any normative/non-normative distinction.
19. Trevors G, Bohn-Gettler C, Kendeou P. The effects of experimentally induced emotions on revising common vaccine misconceptions. Q J Exp Psychol (Hove) 2021; 74:1966-1980. [PMID: 33926324] [DOI: 10.1177/17470218211017840]
Abstract
Knowledge revision is the process of updating incorrect prior knowledge in light of new, correct information. Although theoretical and empirical knowledge has advanced regarding the cognitive processes involved in revision, less is known about the role of emotions, which have shown inconsistent relations with key revision processes. This study examined the effects of experimentally induced emotions on online and offline knowledge revision of vaccination misconceptions. Before reading refutation and non-refutation texts, 96 individuals received a positive, negative, or no emotion induction. Findings showed that negative emotions, more than positive emotions, resulted in enhanced knowledge revision as indicated by greater ease of integrating correct information during reading and higher comprehension test scores after reading. Findings are discussed with respect to contemporary frameworks of knowledge revision and emotion in reading comprehension and implications for educational practice.
20. Miller AL, Wissman KT, Peterson DJ. The continued influence effect: Examining how age, retraction, and delay impact inferential reasoning. Appl Cogn Psychol 2021. [DOI: 10.1002/acp.3818]
Affiliations
- Alyssa L. Miller: Department of Psychology, North Dakota State University, Fargo, North Dakota, USA
- Kathryn T. Wissman: Department of Psychology, North Dakota State University, Fargo, North Dakota, USA
21. Autry KS, Duarte SE. Correcting the unknown: Negated corrections may increase belief in misinformation. Appl Cogn Psychol 2021. [DOI: 10.1002/acp.3823]
Affiliations
- Kevin S. Autry: Psychology Department, California State Polytechnic University, Pomona, California, USA
- Shea E. Duarte: Psychology Department, California State Polytechnic University, Pomona, California, USA
22. Misinformation and public opinion of science and health: Approaches, findings, and future directions. Proc Natl Acad Sci U S A 2021; 118:e1912437117. [PMID: 33837143] [DOI: 10.1073/pnas.1912437117]
Abstract
A summary of the public opinion research on misinformation in the realm of science and health reveals inconsistencies in how the term has been defined and operationalized. A diverse set of methodologies has been employed to study the phenomenon, with virtually all such work identifying misinformation as a cause for concern. While studies in which misinformation's impact on public opinion is completely eliminated are rare, choices around the packaging and delivery of correcting information have shown promise for lessening misinformation effects. Despite a growing number of studies on the topic, there remain many gaps in the literature and opportunities for future studies.
23. Can you believe it? An investigation into the impact of retraction source credibility on the continued influence effect. Mem Cognit 2021; 49:631-644. [PMID: 33452666] [PMCID: PMC7810102] [DOI: 10.3758/s13421-020-01129-y]
Abstract
The continued influence effect refers to the finding that people often continue to rely on misinformation in their reasoning even if the information has been retracted. The present study aimed to investigate the extent to which the effectiveness of a retraction is determined by its credibility. In particular, we aimed to scrutinize previous findings suggesting that perceived trustworthiness but not perceived expertise of the retraction source determines a retraction's effectiveness, and that continued influence arises only if a retraction is not believed. In two experiments, we found that source trustworthiness but not source expertise indeed influences retraction effectiveness, with retractions from low-trustworthiness sources being entirely ineffective. We also found that retraction belief is indeed a predictor of continued reliance on misinformation, but that substantial continued influence effects can still occur with retractions designed to be and rated as highly credible.
24. MacFarlane D, Tay LQ, Hurlstone MJ, Ecker UKH. Refuting spurious COVID-19 treatment claims reduces demand and misinformation sharing. J Appl Res Mem Cogn 2021; 10:248-258. [PMID: 33391983] [PMCID: PMC7771267] [DOI: 10.1016/j.jarmac.2020.12.005]
Abstract
The COVID-19 pandemic has seen a surge of health misinformation, which has had serious consequences including direct harm and opportunity costs. We investigated (N = 678) the impact of such misinformation on hypothetical demand (i.e., willingness-to-pay) for an unproven treatment, and propensity to promote (i.e., like or share) misinformation online. This is a novel approach, as previous research has used mainly questionnaire-based measures of reasoning. We also tested two interventions to counteract the misinformation, contrasting a tentative refutation based on materials used by health authorities with an enhanced refutation based on best-practice recommendations. We found prior exposure to misinformation increased misinformation promotion (by 18%). Both tentative and enhanced refutations reduced demand (by 18% and 25%, respectively) as well as misinformation promotion (by 29% and 55%). The fact that enhanced refutations were more effective at curbing promotion of misinformation highlights the need for debunking interventions to follow current best-practice guidelines.
Affiliations
- Douglas MacFarlane: School of Psychological Science, University of Western Australia, Australia
- Li Qian Tay: School of Psychological Science, University of Western Australia, Australia
- Ullrich K H Ecker: School of Psychological Science, University of Western Australia, Australia
25. Braun BE, Zaragoza MS, Chrobak QM, Ithisuphalap J. Correcting eyewitness suggestibility: does explanatory role predict resistance to correction? Memory 2020; 29:59-77. [PMID: 33290185] [DOI: 10.1080/09658211.2020.1854788]
Abstract
Many studies have documented that exposure to post-event misinformation can lead eyewitnesses to misremember witnessing events they did not see, and to do so with high confidence. The goal of the present study was to investigate whether reporting of suggested misinformation can be reversed following a correction, and if so, whether misinformation is more resistant to correction when it serves an explanatory function than when it does not. In two experiments, participants witnessed an event, were exposed to one or more blatantly false suggestions, and one week later received a correction followed by a test of their memory for the witnessed event. We found evidence for both the persistence of misinformation following a correction (Experiment 1) and the complete reversibility of misinformation effects following a highly salient correction (Experiment 2). Although false reporting of the misinformation doubled when it served an explanatory function relative to when it did not (Experiments 1 and 2), in both experiments we found no evidence that resistance to correction varied as a function of the misinformation's explanatory role. Our findings suggest that, with a salient correction provided by a credible source, people are capable of updating their knowledge with new information that reverses what they previously thought.
Affiliations
- Blair E Braun: Department of Psychological Sciences, Kent State University, Kent, OH, USA
- Maria S Zaragoza: Department of Psychological Sciences, Kent State University, Kent, OH, USA
- Quin M Chrobak: Department of Psychology, University of Wisconsin Oshkosh, Oshkosh, WI, USA
26. Do false allegations persist? Retracted misinformation does not continue to influence explicit person impressions. J Appl Res Mem Cogn 2020. [DOI: 10.1016/j.jarmac.2020.08.003]
27.
Abstract
Previous research has shown that when information about a narrative event is retracted, people continue to use that information even though it has been explicitly identified as incorrect. Not only can this occur for implicitly inferred information, but also when the change is stated explicitly. The current study explored whether this effect reflects, at least in part, an unwillingness of some readers to accept changes to their understanding. Experiment 1 assessed this using a continued influence effect paradigm with an additional probe asking whether participants believed the explicitly stated change. Most did not. Those that did accept it showed evidence of a reduced use of the incorrect information, while those that did not accept it performed similarly to those who received no correction (control). Experiment 2 included an additional explicit instruction that participants could say "don't know" if they were unsure of how to respond. The pattern of results was largely the same as for Experiment 1. Experiment 3 modified the alternative account to increase plausibility, and added two additional stories/question sets to ensure effects were not limited to one set of materials. A greater number of participants found the retractions believable than in Experiments 1 and 2. Nonetheless, a similar pattern of results was found. Overall, these findings suggest that at least some of the evidence for the continued use of retracted information may be due to some people not accepting the retraction, even in the absence of external motivation to disregard it.
28. Connor Desai SA, Pilditch TD, Madsen JK. The rational continued influence of misinformation. Cognition 2020; 205:104453. [PMID: 33011527] [DOI: 10.1016/j.cognition.2020.104453]
Abstract
Misinformation has become an increasingly topical field of research. Studies on the 'Continued Influence Effect' (CIE) show that misinformation continues to influence reasoning despite subsequent retraction. Current explanatory theories of the CIE tacitly assume continued reliance on misinformation is the consequence of a biased process. In the present work, we show why this perspective may be erroneous. Using a Bayesian formalism, we conceptualize the CIE as a scenario involving contradictory testimonies and incorporate the previously overlooked factors of the temporal dependence (misinformation precedes its retraction) between, and the perceived reliability of, misinforming and retracting sources. When considering such factors, we show the CIE to have normative backing. We demonstrate that, on aggregate, lay reasoners (N = 101) intuitively endorse the necessary assumptions that demarcate CIE as a rational process, still exhibit the standard effect, and appropriately penalize the reliability of contradicting sources. Individual-level analyses revealed that although many participants endorsed assumptions for a rational CIE, very few were able to execute the complex model update that the Bayesian model entails. In sum, we provide a novel illustration of the pervasive influence of misinformation as the consequence of a rational process.
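The Bayesian treatment described above can be illustrated with a toy model (a sketch under assumed parameter values and a deliberately simplified likelihood, not the authors' actual formalism): a claim is asserted by a misinforming source and later denied by a retracting source, each of which reports truthfully with some probability, and belief in the claim is obtained by conditioning on both contradictory reports.

```python
def posterior_claim(prior, r_mis, r_ret):
    """Posterior probability that a claim is true after one source asserts it
    and a second source retracts (denies) it.

    prior -- prior probability that the claim is true
    r_mis -- probability the misinforming source reports truthfully
    r_ret -- probability the retracting source reports truthfully
    """
    # Claim true: the assertion was truthful and the denial was an error.
    p_true = prior * r_mis * (1.0 - r_ret)
    # Claim false: the assertion was an error and the denial was truthful.
    p_false = (1.0 - prior) * (1.0 - r_mis) * r_ret
    return p_true / (p_true + p_false)

# With a less-than-fully-reliable retracting source, some belief in the
# retracted claim survives -- a "rational" continued influence effect.
print(posterior_claim(0.5, 0.7, 0.8))   # ~0.368
# A perfectly reliable retraction eliminates belief entirely.
print(posterior_claim(0.5, 0.7, 1.0))   # 0.0
```

The point of the sketch is that nonzero residual belief after a retraction can fall out of coherent probabilistic updating whenever the retracting source is not perceived as fully reliable.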
Affiliations
- Toby D Pilditch: University of Oxford, UK; University College London, UK
- Jens K Madsen: London School of Economics, UK
29. Rich PR, Zaragoza MS. Correcting misinformation in news stories: An investigation of correction timing and correction durability. J Appl Res Mem Cogn 2020. [DOI: 10.1016/j.jarmac.2020.04.001]
30. Is it smart to read on your phone? The impact of reading format and culture on the continued influence of misinformation. Mem Cognit 2020; 48:1112-1127. [PMID: 32430888] [DOI: 10.3758/s13421-020-01046-0]
Abstract
Despite advances in digital technology that have resulted in more people accessing information via mobile devices, little is known about reading comprehension on mobile phones. This research investigated the impact of reading format by comparing sensitivity to misinformation presented either in printed texts or in digital format on mobile phones to readers of English versus Chinese. Participants read pairs of short newspaper-style articles containing a critical piece of information that was either retracted or not retracted, and were later assessed on their memory for critical and general details, as well as inferential judgements related to the retracted information. The average results replicated previous evidence that repeating the original misinformation at the time of retraction enhanced memory updating. However, reading on a mobile phone reduced the likelihood that readers noticed the retraction and updated their memory with alternative information in both language groups and reduced the extent to which inferences were modified by the alternative information in readers of Chinese but not English. Chinese readers showed significantly better general memory, but were more affected by the continued influence of the misinformation. These differences between Chinese and English-speaking participants may reflect cultural influences on the tendency to apply a dialectical rather than an analytic reasoning strategy and incorporate contradictory information into the memory representation of a discourse or event.
31. Ithisuphalap J, Rich PR, Zaragoza MS. Does evaluating belief prior to its retraction influence the efficacy of later corrections? Memory 2020; 28:617-631. [PMID: 32302243] [DOI: 10.1080/09658211.2020.1752731]
Abstract
News stories unfold over time, with initial reports sometimes containing mistaken accounts of the newsworthy outcome that are ultimately revised or corrected. Because facts associated with newsworthy events are accumulated in this piecemeal fashion, readers often have repeated opportunities to reflect upon, discuss, and evaluate their belief in these accounts before they learn that initial news reports have been revised or retracted. The primary goal of the present study was to assess whether rating the strength of one's belief in the initially reported, mistaken cause might influence the efficacy of a later correction. In the current study, participants evaluated their belief in the target cause by either rating how much they believed it caused the outcome (Experiment 1) or rating the probability that the target caused the outcome (Experiment 2). The results showed that evaluating belief in a target cause prior to its retraction (relative to not doing so) rendered the correction more effective. This enhanced correction effect was not observed when participants generated the target information prior to its retraction (Experiment 3). Collectively, the results suggest that it is not how much people believe something, but whether they have thought about why they do or do not believe it, that affects their later willingness to revise their mistaken beliefs.
Affiliations
- Patrick R Rich: Department of Psychology, Connecticut College, New London, CT, USA
- Maria S Zaragoza: Department of Psychological Sciences, Kent State University, Kent, OH, USA
32. Lyons BA, Akin H, Stroud NJ. Proximity (mis)perception: Public awareness of nuclear, refinery, and fracking sites. Risk Anal 2020; 40:385-398. [PMID: 31454092] [PMCID: PMC7027911] [DOI: 10.1111/risa.13387]
Abstract
Whether on grounds of perceived safety, aesthetics, or overall quality of life, residents may wish to be aware of nearby energy sites such as nuclear reactors, refineries, and fracking wells. Yet people are not always accurate in their impressions of proximity. Indeed, our data show that only 54% of Americans living within 25 miles of a nuclear site say they do, and even fewer fracking-proximal (30%) and refinery-proximal (24%) residents respond accurately. In this article, we analyze factors that could either help people form more accurate perceptions or distort their impressions of proximity. We evaluate these hypotheses using a large national survey sample and corresponding geographic information system (GIS) data. Results show that among those living in close proximity to energy sites, those who perceive greater risk are less likely to report living nearby. Conversely, social contact with employees of these industries increases perceived proximity regardless of actual distance. These relationships are consistent across each site type we examine. Other potential factors, such as local news use, may play a role in proximity perception on a case-by-case basis. Our findings are an important step toward a more generalizable understanding of how the public forms perceptions of proximity to risk sites, showing multiple potential mechanisms of bias.
Affiliations
- Benjamin A. Lyons: Department of Communication, University of Utah, Salt Lake City, UT, USA; Department of Politics, University of Exeter, Exeter, UK
- Heather Akin: Missouri School of Journalism, College of Agriculture, Food and Natural Resources, University of Missouri, Columbia, MO, USA
33. Comparing the use of open and closed questions for Web-based measures of the continued-influence effect. Behav Res Methods 2019; 51:1426-1440. [PMID: 29943224] [PMCID: PMC6538818] [DOI: 10.3758/s13428-018-1066-z]
Abstract
Open-ended questions, in which participants write or type their responses, are used in many areas of the behavioral sciences. Although effective in the lab, they are relatively untested in online experiments, and the quality of responses is largely unexplored. Closed-ended questions are easier to use online because they generally require only single key- or mouse-press responses and are less cognitively demanding, but they can bias the responses. We compared the data quality obtained using open and closed response formats using the continued-influence effect (CIE), in which participants read a series of statements about an unfolding event, one of which is unambiguously corrected later. Participants typically continue to refer to the corrected misinformation when making inferential statements about the event. We implemented this basic procedure online (Exp. 1A, n = 78), comparing standard open-ended responses to an alternative procedure using closed-ended responses (Exp. 1B, n = 75). Finally, we replicated these findings in a larger preregistered study (Exps. 2A and 2B, n = 323). We observed the CIE in all conditions: Participants continued to refer to the misinformation following a correction, and their references to the target misinformation were broadly similar in number across open- and closed-ended questions. We found that participants’ open-ended responses were relatively detailed (including an average of 75 characters for inference questions), and almost all responses attempted to address the question. The responses were faster, however, for closed-ended questions. Overall, we suggest that with caution it may be possible to use either method for gathering CIE data.
34. Gordon A, Quadflieg S, Brooks JCW, Ecker UKH, Lewandowsky S. Keeping track of 'alternative facts': The neural correlates of processing misinformation corrections. Neuroimage 2019; 193:46-56. [PMID: 30872047] [DOI: 10.1016/j.neuroimage.2019.03.014]
Abstract
Upon receiving a correction, initially presented misinformation often continues to influence people's judgment and reasoning. Whereas some researchers believe that this so-called continued influence effect of misinformation (CIEM) simply arises from the insufficient encoding and integration of corrective claims, others assume that it arises from a competition between the correct information and the initial misinformation in memory. To examine these possibilities, we conducted two functional magnetic resonance imaging (fMRI) studies. In each study, participants were asked to (a) read a series of brief news reports that contained confirmations or corrections of prior information and (b) evaluate whether subsequently presented memory probes matched the reports' correct facts rather than the initial misinformation. Both studies revealed that following correction-containing news reports, participants struggled to refute mismatching memory probes, especially when they referred to initial misinformation (as opposed to mismatching probes with novel information). We found little evidence, however, that the encoding of confirmations and corrections produced systematic neural processing differences indicative of distinct encoding strategies. Instead, we discovered that following corrections, participants exhibited increased activity in the left angular gyrus and the bilateral precuneus in response to mismatching memory probes that contained prior misinformation, compared to novel mismatch probes. These findings favour the notion that people's susceptibility to the CIEM arises from the concurrent retention of both correct and incorrect information in memory.
Affiliations
- Andrew Gordon: School of Psychological Science, University of Bristol, Bristol, UK; MIND Institute, University of California, Davis, Sacramento, USA
- Jonathan C W Brooks: School of Psychological Science, University of Bristol, Bristol, UK; Clinical Research and Imaging Centre, University of Bristol, Bristol, UK
- Ullrich K H Ecker: School of Psychological Science, University of Western Australia, Perth, Australia
- Stephan Lewandowsky: School of Psychological Science, University of Bristol, Bristol, UK; School of Psychological Science, University of Western Australia, Perth, Australia
35. Metcalfe J, Eich TS. Memory and truth: correcting errors with true feedback versus overwriting correct answers with errors. Cogn Res Princ Implic 2019; 4:4. [PMID: 30758685] [PMCID: PMC6374496] [DOI: 10.1186/s41235-019-0153-8]
Abstract
In five experiments, we examined the conditions under which participants remembered true and false information given as feedback. Participants answered general information questions, expressed their confidence in the correctness of their answers, and were given true or false feedback. In all five experiments, participants hypercorrected when they had made a mistake; that is, they remembered better the correct feedback to errors made with high compared to low confidence. However, in none of the experiments did participants hyper'correct' when false feedback followed an initially correct response. Telling people whether the feedback was right or wrong made little difference, suggesting that people already knew whether the feedback was true or false and differentially encoded the true feedback compared to the false feedback. An exception occurred when false feedback followed an error: participants hyper'corrected' to this false feedback, suggesting that when people are wrong initially, they are susceptible to further incorrect information. These results indicate that people have some kind of privileged access to whether their answers are right or wrong, above and beyond their confidence ratings, and that they behave differently when trying to remember new “corrective” information depending upon whether they, themselves, were right or wrong initially. The likely source of this additional information is knowledge about the truth of the feedback, which they rapidly process and use to modulate memory encoding.
Affiliations
- Janet Metcalfe: Department of Psychology, Columbia University, New York, USA
- Teal S Eich: Department of Neurology, Columbia University, New York, USA
36. Wojdynski BW, Binford MT, Jefferson BN. Looks real, or really fake? Warnings, visual attention and detection of false news articles. Open Inf Sci 2019. [DOI: 10.1515/opis-2019-0012]
Abstract
In recent years, online misinformation designed to resemble news by adopting news design conventions has proven to be a powerful vehicle for deception and persuasion. In a 2 (prior warning: present/absent) x 2 (article type: false/true) eye-tracking experiment, news consumers (N=49) viewed four science news articles from unfamiliar sources, then rated each article for credibility before being asked to classify each as true news or as false information presented as news. Results show that reminding participants about the existence of fake news significantly improved correct classification of false news articles, but did not lead to a significant increase in misclassification of true news articles as false. Analysis of eye-tracking data showed that the duration of visual attention to news-identifier elements, such as the headline, byline, and timestamp on a page, predicted correct article classification. Implications for consumer education and information design are discussed.
|
37
|
Aird MJ, Ecker UKH, Swire B, Berinsky AJ, Lewandowsky S. Does truth matter to voters? The effects of correcting political misinformation in an Australian sample. ROYAL SOCIETY OPEN SCIENCE 2018; 5:180593. [PMID: 30662715 PMCID: PMC6304148 DOI: 10.1098/rsos.180593] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/18/2018] [Accepted: 11/09/2018] [Indexed: 06/09/2023]
Abstract
In the 'post-truth era', political fact-checking has become an issue of considerable significance. A recent study in the context of the 2016 US election found that fact-checks of statements by Donald Trump changed participants' beliefs about those statements (regardless of whether participants supported Trump) but not their feelings towards Trump or voting intentions. However, the study balanced corrections of inaccurate statements with an equal number of affirmations of accurate statements. Therefore, the null effect of fact-checks on participants' voting intentions and feelings may have arisen because of this artificially created balance. Moreover, Trump's statements were not contrasted with statements from an opposing politician, and Trump's perceived veracity was not measured. The present study (N = 370) examined the issue further, manipulating the ratio of corrections to affirmations, and using Australian politicians (and Australian participants) from both sides of the political spectrum. We hypothesized that fact-checks would correct beliefs and that fact-checks would affect voters' support (i.e. voting intentions, feelings and perceptions of veracity), but only when corrections outnumbered affirmations. Both hypotheses were supported, suggesting that a politician's veracity does sometimes matter to voters. The effects of fact-checking were similar on both sides of the political spectrum, suggesting little motivated reasoning in the processing of fact-checks.
Affiliation(s)
- Michael J. Aird, School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia
- Ullrich K. H. Ecker, School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia
- Briony Swire, School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia; Department of Political Science, Northeastern University, Boston, MA, USA; Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Adam J. Berinsky, Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Stephan Lewandowsky, School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia; School of Experimental Psychology and Cabot Institute, University of Bristol, Bristol, UK
|
38
|
Callcut RA, Moore S, Wakam G, Hubbard AE, Cohen MJ. Finding the signal in the noise: Could social media be utilized for early hospital notification of multiple casualty events? PLoS One 2017; 12:e0186118. [PMID: 28982201 PMCID: PMC5628942 DOI: 10.1371/journal.pone.0186118] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2017] [Accepted: 09/25/2017] [Indexed: 11/19/2022] Open
Abstract
Introduction: Delayed notification and a lack of early information hinder timely hospital-based activations in large-scale multiple casualty events. We hypothesized that Twitter real-time data would produce a unique and reproducible signal within minutes of multiple casualty events, and we investigated the timing of that signal compared with other hospital disaster notification mechanisms. Methods: Using disaster-specific search terms, all relevant tweets from the event to 7 days post-event were analyzed for 5 recent US-based multiple casualty events (Boston Bombing [BB], SF Plane Crash [SF], Napa Earthquake [NE], Sandy Hook [SH], and Marysville Shooting [MV]). Quantitative and qualitative analyses of tweet utilization were compared across events. Results: Over 3.8 million tweets were analyzed (SH 1.8m, BB 1.1m, SF 430k, MV 250k, NE 205k). Peak tweets per minute ranged from 209 to 3326. The mean number of followers per tweeter ranged from 3382 to 9992 across events, and retweets were tweeted a mean of 82 to 564 times per event. Tweets occurred very rapidly for all events (<2 min) and represented 1% of the total event-specific tweets within a median of 13 minutes of the first 911 calls. A 200 tweets/min threshold was reached fastest with NE (2 min), BB (7 min), and SF (18 min). Had this threshold been used as a signaling mechanism to place local hospitals on standby for possible large-scale events, the signal would have preceded patient arrival in all case studies. Importantly, this threshold signal would also have preceded traditional disaster notification mechanisms in SF and NE, and would have been simultaneous with them in BB and MV. Conclusions: These social media data demonstrate that this mechanism is a powerful, predictable, and potentially important resource for optimizing disaster response. Further investigation is warranted to assess the utility of prospective signaling thresholds for hospital-based activation.
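The 200 tweets/min standby signal described in this abstract amounts to a sliding-window rate check over tweet timestamps. A minimal sketch of such a monitor is shown below; this is an illustration only, not the authors' implementation, and the class and method names are hypothetical (only the 200 tweets/min figure comes from the abstract).

```python
from collections import deque

class TweetRateMonitor:
    """Sliding-window tweets-per-minute monitor (illustrative sketch)."""

    def __init__(self, threshold_per_min=200, window_secs=60):
        self.threshold = threshold_per_min
        self.window = window_secs
        self.times = deque()  # timestamps (seconds) of recent event-related tweets

    def add_tweet(self, t):
        """Record one event-related tweet at time t (seconds since event start);
        return True once the windowed rate reaches the alert threshold."""
        self.times.append(t)
        # Evict tweets that have fallen out of the 60-second window.
        while self.times and self.times[0] <= t - self.window:
            self.times.popleft()
        return len(self.times) >= self.threshold
```

In use, the first `True` returned would correspond to the standby signal the authors describe; in their case studies that moment preceded patient arrival at hospitals.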
Affiliation(s)
- Rachael A. Callcut, Department of Surgery, University of California San Francisco, San Francisco, California, United States of America
- Sara Moore, Department of Biostatistics, University of California, Berkeley, California, United States of America
- Glenn Wakam, Department of Surgery, University of Michigan, Ann Arbor, Michigan, United States of America
- Alan E. Hubbard, Department of Biostatistics, University of California, Berkeley, California, United States of America
- Mitchell J. Cohen, Department of Surgery, University of Colorado & Denver Health, Denver, Colorado, United States of America
|
39
|
Exploring the neural substrates of misinformation processing. Neuropsychologia 2017; 106:216-224. [PMID: 28987910 DOI: 10.1016/j.neuropsychologia.2017.10.003] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2017] [Revised: 09/18/2017] [Accepted: 10/02/2017] [Indexed: 11/21/2022]
Abstract
It is well known that information that is initially thought to be correct but is later revealed to be false often continues to influence human judgement and decision making, despite people being aware of the retraction. Yet little research has examined the underlying neural substrates of this phenomenon, which is known as the 'continued influence effect of misinformation' (CIEM). It remains unclear how the human brain processes critical information that retracts prior claims. To address this question in further detail, 26 healthy adults underwent functional magnetic resonance imaging (fMRI) while listening to brief narratives that either did or did not involve a retraction of prior information. Following each narrative, participants' comprehension of the narrative, including their inclination to rely on retracted information, was probed. As expected, retracted information continued to affect participants' narrative-related reasoning. In addition, the fMRI data indicated that the continued influence of retracted information may be due to a breakdown of narrative-level integration and coherence-building mechanisms implemented by the precuneus and posterior cingulate gyrus.
|
40
|
Chan MPS, Jones CR, Hall Jamieson K, Albarracín D. Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation. Psychol Sci 2017; 28:1531-1546. [PMID: 28895452 DOI: 10.1177/0956797617714579] [Citation(s) in RCA: 187] [Impact Index Per Article: 26.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
This meta-analysis investigated the factors underlying effective messages to counter attitudes and beliefs based on misinformation. Because misinformation can lead to poor decisions about consequential matters and is persistent and difficult to correct, debunking it is an important scientific and public-policy goal. This meta-analysis (k = 52, N = 6,878) revealed large effects for presenting misinformation (ds = 2.41-3.08), debunking (ds = 1.14-1.33), and the persistence of misinformation in the face of debunking (ds = 0.75-1.06). Persistence was stronger and the debunking effect was weaker when audiences generated reasons in support of the initial misinformation. A detailed debunking message correlated positively with the debunking effect. Surprisingly, however, a detailed debunking message also correlated positively with the misinformation-persistence effect.
|
41
|
Reminders and Repetition of Misinformation: Helping or Hindering Its Retraction? JOURNAL OF APPLIED RESEARCH IN MEMORY AND COGNITION 2017. [DOI: 10.1016/j.jarmac.2017.01.014] [Citation(s) in RCA: 112] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
42
|
Guillory JJ, Geraci L. The Persistence of Erroneous Information in Memory: The Effect of Valence on the Acceptance of Corrected Information. APPLIED COGNITIVE PSYCHOLOGY 2015. [DOI: 10.1002/acp.3183] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- Lisa Geraci, Department of Psychology, Texas A&M University, College Station, USA
|
43
|
Lewandowsky S, Ecker UKH, Seifert CM, Schwarz N, Cook J. Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychol Sci Public Interest 2015; 13:106-31. [PMID: 26173286 DOI: 10.1177/1529100612451018] [Citation(s) in RCA: 782] [Impact Index Per Article: 86.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation. We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread. We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people's memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing. We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. 
Grounded in cognitive psychological theory, these recommendations may help practitioners, including journalists, health professionals, educators, and science communicators, design effective misinformation retractions, educational tools, and public-information campaigns.
Affiliation(s)
- John Cook, University of Western Australia; University of Queensland
|
44
|
Do people keep believing because they want to? Preexisting attitudes and the continued influence of misinformation. Mem Cognit 2015; 42:292-304. [PMID: 24005789 DOI: 10.3758/s13421-013-0358-x] [Citation(s) in RCA: 57] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Misinformation, defined as information that is initially assumed to be valid but is later corrected or retracted, often has an ongoing effect on people's memory and reasoning. We tested the hypotheses that (a) reliance on misinformation is affected by people's preexisting attitudes and (b) attitudes determine the effectiveness of retractions. In two experiments, participants scoring higher and lower on a racial prejudice scale read a news report regarding a robbery. In one scenario, the suspects were initially presented as being Australian Aboriginals, whereas in a second scenario, a hero preventing the robbery was introduced as an Aboriginal person. Later, these critical, race-related pieces of information were or were not retracted. We measured participants' reliance on misinformation in response to inferential reasoning questions. The results showed that preexisting attitudes influence people's use of attitude-related information but not the way in which a retraction of that information is processed.
|
45
|
Mirenda P. Comments and a personal reflection on the persistence of facilitated communication. ACTA ACUST UNITED AC 2015. [DOI: 10.1080/17489539.2014.997427] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
46
|
Guillory JJ, Geraci L. Correcting erroneous inferences in memory: The role of source credibility. JOURNAL OF APPLIED RESEARCH IN MEMORY AND COGNITION 2013. [DOI: 10.1016/j.jarmac.2013.10.001] [Citation(s) in RCA: 85] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
48
|
Correcting false information in memory: manipulating the strength of misinformation encoding and its retraction. Psychon Bull Rev 2011; 18:570-8. [PMID: 21359617 DOI: 10.3758/s13423-011-0065-1] [Citation(s) in RCA: 97] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
Information that is presumed to be true at encoding but later turns out to be false (i.e., misinformation) often continues to influence memory and reasoning. In the present study, we investigated how the strength of encoding and the strength of a later retraction of the misinformation affect this continued influence effect. Participants read an event report containing misinformation and a subsequent correction. Encoding strength of the misinformation and correction were orthogonally manipulated, either via repetition (Experiment 1) or by imposing a cognitive load during reading (Experiment 2). Results suggest that stronger retractions are effective in reducing the continued influence effects associated with strong misinformation encoding, but that even strong retractions fail to eliminate continued influence effects associated with relatively weak encoding. We present a simple computational model based on random sampling that captures this effect pattern, and conclude that the continued influence effect seems to defy most attempts to eliminate it.
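The effect pattern above can be illustrated with a toy random-sampling account, a hedged sketch rather than the authors' actual model: memory is assumed to hold competing misinformation and retraction traces, and a reasoning query relies on the misinformation whenever it happens to sample a misinformation trace. The function name and trace counts below are assumptions for illustration only.

```python
import random

def reliance_probability(n_misinfo, n_retraction, n_samples=100_000, seed=1):
    """Monte-Carlo estimate of misinformation reliance under a toy
    random-sampling account: memory holds n_misinfo misinformation traces
    and n_retraction retraction traces; each query samples one trace at
    random and relies on the misinformation if a misinformation trace is drawn."""
    rng = random.Random(seed)
    traces = ["misinfo"] * n_misinfo + ["retraction"] * n_retraction
    hits = sum(rng.choice(traces) == "misinfo" for _ in range(n_samples))
    return hits / n_samples
```

Under this toy account, strengthening the retraction (adding retraction traces) lowers reliance but never drives it to zero, which mirrors the abstract's conclusion that even strong retractions fail to eliminate the effect.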
|
49
|
Explicit warnings reduce but do not eliminate the continued influence of misinformation. Mem Cognit 2011; 38:1087-100. [PMID: 21156872 DOI: 10.3758/mc.38.8.1087] [Citation(s) in RCA: 142] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Information that initially is presumed to be correct, but that is later retracted or corrected, often continues to influence memory and reasoning. This occurs even if the retraction itself is well remembered. The present study investigated whether the continued influence of misinformation can be reduced by explicitly warning people at the outset that they may be misled. A specific warning, giving detailed information about the continued influence effect (CIE), succeeded in reducing the continued reliance on outdated information but did not eliminate it. A more general warning, reminding people that facts are not always properly checked before information is disseminated, was even less effective. In an additional experiment, a specific warning was combined with the provision of a plausible alternative explanation for the retracted information. This combined manipulation further reduced the CIE but still failed to eliminate it altogether.
|