1
Truong BT, Lou X, Flammini A, Menczer F. Quantifying the vulnerabilities of the online public square to adversarial manipulation tactics. PNAS Nexus 2024; 3:pgae258. [PMID: 38994499; PMCID: PMC11238850; DOI: 10.1093/pnasnexus/pgae258]
Abstract
Social media, seen by some as the modern public square, is vulnerable to manipulation. By controlling inauthentic accounts impersonating humans, malicious actors can amplify disinformation within target communities. The consequences of such operations are difficult to evaluate because of the challenges of collecting data and the ethical constraints on experiments that would influence online communities. Here we use a social media model that simulates information diffusion in an empirical network to quantify the impacts of adversarial manipulation tactics on the quality of content. We find that the presence of hub accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation. Among the explored tactics that bad actors can employ, infiltrating a community is the most likely to make low-quality content go viral. Such harm can be further compounded by inauthentic agents flooding the network with low-quality yet appealing content, but is mitigated when bad actors focus on specific targets, such as influential or vulnerable individuals. These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
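The agent-based setup sketched in this abstract can be illustrated with a small simulation. The following is a toy sketch under assumed parameters, not the authors' model: a hub-heavy follower network, bot accounts that post low-quality but maximally appealing memes, an infiltration parameter controlling how many authentic accounts each bot reaches, and the average quality of shared content as the outcome.

```python
# Toy sketch, not the authors' model: agents re-share the most appealing meme
# in their feed, while bot accounts inject low-quality but appealing memes.
# The network, parameters, and quality/appeal values are all illustrative.
import random
import networkx as nx

def simulate(n_humans=200, n_bots=20, steps=5000, infiltration=0.1, seed=0):
    random.seed(seed)
    g = nx.barabasi_albert_graph(n_humans + n_bots, m=3)     # hub-heavy network
    bots = set(random.sample(list(g.nodes), n_bots))
    humans = [v for v in g.nodes if v not in bots]
    for b in bots:                                           # infiltration: extra links to humans
        for target in random.sample(humans, int(infiltration * n_humans)):
            g.add_edge(b, target)
    feeds = {v: [] for v in g.nodes}
    shared_quality = []
    for _ in range(steps):
        v = random.choice(list(g.nodes))
        if v in bots:
            meme = {"quality": 0.0, "appeal": 1.0}           # low quality, high appeal
        elif feeds[v]:
            meme = max(feeds[v], key=lambda m: m["appeal"])  # re-share most appealing
        else:
            q = random.random()
            meme = {"quality": q, "appeal": q}               # original post
        for u in g.neighbors(v):
            feeds[u] = (feeds[u] + [meme])[-10:]             # bounded feed
        shared_quality.append(meme["quality"])
    return sum(shared_quality) / len(shared_quality)

print(round(simulate(), 3))   # average quality of shared content
```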
Affiliation(s)
- Bao Tran Truong: Observatory on Social Media, Indiana University, 1015 E 11th St, Bloomington, IN 47408, USA
- Xiaodan Lou: Observatory on Social Media, Indiana University, 1015 E 11th St, Bloomington, IN 47408, USA
- Alessandro Flammini: Observatory on Social Media, Indiana University, 1015 E 11th St, Bloomington, IN 47408, USA
- Filippo Menczer: Observatory on Social Media, Indiana University, 1015 E 11th St, Bloomington, IN 47408, USA
2
Porter E, Wood TJ. Factual corrections: Concerns and current evidence. Curr Opin Psychol 2024; 55:101715. [PMID: 37988954; DOI: 10.1016/j.copsyc.2023.101715]
Abstract
Factual corrections that target misinformation improve belief accuracy. They do so across a wide variety of countries, political beliefs, and demographic characteristics. Instances of backfire, wherein exposure to corrections reduces accuracy, are exceedingly rare and may be an artifact of research design. The evidence regarding other common concerns is mixed. While the effects of corrections on belief are not permanent, they are not entirely ephemeral, either. With some exceptions, corrections mostly affect belief accuracy alone, with minor to nonexistent influence on downstream attitudes and behaviors. And while corrections are not unpopular among the public, limited available evidence suggests that those who see misinformation are exceedingly unlikely to see relevant corrections.
3
Jain NR, Zachary I, Boren SA. COVID-19 vaccine equity and health equity conversations on Twitter. AMIA Annu Symp Proc 2024; 2023:997-1006. [PMID: 38222403; PMCID: PMC10785937]
Abstract
This study used social network analysis and trending hashtags on Twitter to identify trends related to health and vaccine equity during the Omicron wave. The analysis was conducted using consumer-friendly platforms/tools such as the Healthcare Hashtag Project and NodeXL. The study found that during the Omicron wave, there was a higher volume of tweets related to the more specific hashtag #VaccineEquity, as compared to the more general topic of #HealthEquity. The study also identified the top influencers for these hashtags and how they changed over time. The study proposes a combination of existing tools and approaches, including ontological surveillance and social network analysis, to develop proactive strategies that respond to public opinion in a timely manner. Social network analysis tools could also be useful for healthcare organizations and providers in training their staff involved in social media management to develop better social media communication strategies.
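The study itself relied on the Healthcare Hashtag Project and NodeXL; as a rough Python analogue of the underlying idea, the sketch below (with hypothetical tweet records and handles) builds a mention network for a hashtag and ranks accounts by in-degree as a crude influence proxy.

```python
# Illustrative sketch only (the study used NodeXL and the Healthcare Hashtag
# Project): build a directed mention network from tweets containing a hashtag
# and rank accounts by in-degree. Tweet records and handles are hypothetical.
import networkx as nx

tweets = [
    {"user": "clinic_a", "text": "Pop-up vaccination sites this weekend @city_health #VaccineEquity"},
    {"user": "reporter_b", "text": "New data on #HealthEquity gaps @who"},
    {"user": "advocate_c", "text": "@city_health thank you for prioritizing #VaccineEquity"},
]

def mention_network(tweets, hashtag):
    g = nx.DiGraph()
    for t in tweets:
        if hashtag.lower() not in t["text"].lower():
            continue
        mentions = [w.lstrip("@").strip(".,!") for w in t["text"].split() if w.startswith("@")]
        for m in mentions:
            g.add_edge(t["user"], m)          # edge: author -> mentioned account
    return g

g = mention_network(tweets, "#VaccineEquity")
top = sorted(g.in_degree(), key=lambda kv: kv[1], reverse=True)
print(top[:5])                                 # crude "top influencer" ranking
```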
Affiliation(s)
- Nishant R Jain: MU Institute for Data Science and Informatics, University of Missouri, Columbia, Missouri; Department of Health Management and Informatics, University of Missouri, Columbia, Missouri; Missouri Cancer Registry and Research Center, Columbia, Missouri
- Iris Zachary: MU Institute for Data Science and Informatics, University of Missouri, Columbia, Missouri; Department of Health Management and Informatics, University of Missouri, Columbia, Missouri; Missouri Cancer Registry and Research Center, Columbia, Missouri
- Suzanne A Boren: MU Institute for Data Science and Informatics, University of Missouri, Columbia, Missouri; Department of Health Management and Informatics, University of Missouri, Columbia, Missouri
4
González-Bailón S, Lazer D, Barberá P, Zhang M, Allcott H, Brown T, Crespo-Tenorio A, Freelon D, Gentzkow M, Guess AM, Iyengar S, Kim YM, Malhotra N, Moehler D, Nyhan B, Pan J, Rivera CV, Settle J, Thorson E, Tromble R, Wilkins A, Wojcieszak M, de Jonge CK, Franco A, Mason W, Stroud NJ, Tucker JA. Asymmetric ideological segregation in exposure to political news on Facebook. Science 2023; 381:392-398. [PMID: 37499003; DOI: 10.1126/science.ade7138]
Abstract
Does Facebook enable ideological segregation in political news consumption? We analyzed exposure to news during the US 2020 election using aggregated data for 208 million US Facebook users. We compared the inventory of all political news that users could have seen in their feeds with the information that they saw (after algorithmic curation) and the information with which they engaged. We show that (i) ideological segregation is high and increases as we shift from potential exposure to actual exposure to engagement; (ii) there is an asymmetry between conservative and liberal audiences, with a substantial corner of the news ecosystem consumed exclusively by conservatives; and (iii) most misinformation, as identified by Meta's Third-Party Fact-Checking Program, exists within this homogeneously conservative corner, which has no equivalent on the liberal side. Sources favored by conservative audiences were more prevalent in Facebook's news ecosystem than those favored by liberals.
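One common way to quantify ideological segregation of this kind is an isolation-style index. The sketch below uses toy exposure counts and is not the paper's exact metric: it compares how conservative the average conservative's news exposure is with the same quantity for liberals.

```python
# Illustrative isolation-style segregation index (toy counts, not the paper's
# data or exact measure): the average conservative share of the sources seen by
# conservatives minus the same quantity for liberals.
domains = {
    # domain: (exposures among conservatives, exposures among liberals)
    "right_outlet.example": (900, 100),
    "center_outlet.example": (500, 500),
    "left_outlet.example": (120, 880),
}

def isolation_index(domains):
    total_c = sum(c for c, _ in domains.values())
    total_l = sum(l for _, l in domains.values())
    # conservative share of each domain's audience, weighted by each group's exposure
    cons_view = sum((c / total_c) * (c / (c + l)) for c, l in domains.values())
    lib_view = sum((l / total_l) * (c / (c + l)) for c, l in domains.values())
    return cons_view - lib_view     # 0 = no segregation, 1 = complete segregation

print(round(isolation_index(domains), 3))
```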
Affiliation(s)
- David Lazer: Network Science Institute, Northeastern University, Boston, MA, USA
- Hunt Allcott: Stanford Doerr School of Sustainability, Stanford University, Stanford, CA, USA
- Deen Freelon: Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA, USA
- Andrew M Guess: Department of Politics and School of Public and International Affairs, Princeton University, Princeton, NJ, USA
- Shanto Iyengar: Department of Political Science, Stanford University, Stanford, CA, USA
- Young Mie Kim: School of Journalism and Mass Communication, University of Wisconsin-Madison, Madison, WI, USA
- Neil Malhotra: Graduate School of Business, Stanford University, Stanford, CA, USA
- Brendan Nyhan: Department of Government, Dartmouth College, Hanover, NH, USA
- Jennifer Pan: Department of Communication, Stanford University, Stanford, CA, USA
- Jaime Settle: Department of Government, William & Mary, Williamsburg, VA, USA
- Emily Thorson: Department of Political Science, Syracuse University, Syracuse, NY, USA
- Rebekah Tromble: School of Media and Public Affairs and Institute for Data, Democracy, and Politics, The George Washington University, Washington, DC, USA
- Magdalena Wojcieszak: Department of Communication, University of California, Davis, Davis, CA, USA; Amsterdam School of Communication Research, University of Amsterdam, Amsterdam, Netherlands
- Natalie Jomini Stroud: Moody College of Communication, University of Texas at Austin, Austin, TX, USA; Center for Media Engagement, University of Texas at Austin, Austin, TX, USA
- Joshua A Tucker: Wilf Family Department of Politics, New York University, New York, NY, USA; Center for Social Media and Politics, New York University, New York, NY, USA
5
Robertson RE, Green J, Ruck DJ, Ognyanova K, Wilson C, Lazer D. Users choose to engage with more partisan news than they are exposed to on Google Search. Nature 2023. [PMID: 37225979; DOI: 10.1038/s41586-023-06078-5]
Abstract
If popular online platforms systematically expose their users to partisan and unreliable news, they could potentially contribute to societal issues such as rising political polarization [1,2]. This concern is central to the 'echo chamber' [3-5] and 'filter bubble' [6,7] debates, which critique the roles that user choice and algorithmic curation play in guiding users to different online information sources [8-10]. These roles can be measured as exposure, defined as the URLs shown to users by online platforms, and engagement, defined as the URLs selected by users. However, owing to the challenges of obtaining ecologically valid exposure data (what real users were shown during their typical platform use), research in this vein typically relies on engagement data [4,8,11-16] or estimates of hypothetical exposure [17-23]. Studies involving ecological exposure have therefore been rare, and largely limited to social media platforms [7,24], leaving open questions about web search engines. To address these gaps, we conducted a two-wave study pairing surveys with ecologically valid measures of both exposure and engagement on Google Search during the 2018 and 2020 US elections. In both waves, we found more identity-congruent and unreliable news sources in participants' engagement choices, both within Google Search and overall, than they were exposed to in their Google Search results. These results indicate that exposure to and engagement with partisan or unreliable news on Google Search are driven not primarily by algorithmic curation but by users' own choices.
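At its core, the exposure-versus-engagement comparison applies the same source-level ratings to two URL lists per participant. The sketch below uses hypothetical ratings and URLs, not the study's panel data, to show that aggregation step.

```python
# Illustrative sketch of the exposure-vs-engagement comparison (toy data, not
# the study's panel): average a source-level partisanship/unreliability rating
# over the URLs a participant was shown versus the URLs they clicked.
source_ratings = {
    # domain: (partisanship on a -1..+1 scale, unreliable?) -- hypothetical ratings
    "wire.example": (0.0, False),
    "partisan.example": (0.8, False),
    "junk.example": (0.9, True),
}

exposed = ["wire.example", "wire.example", "partisan.example", "junk.example"]
engaged = ["partisan.example", "junk.example"]

def summarize(domains):
    rated = [source_ratings[d] for d in domains]
    lean = sum(p for p, _ in rated) / len(rated)
    unreliable_share = sum(u for _, u in rated) / len(rated)
    return round(lean, 2), round(unreliable_share, 2)

print("exposure  :", summarize(exposed))
print("engagement:", summarize(engaged))
```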
Affiliation(s)
- Ronald E Robertson: Stanford University, Stanford Internet Observatory, Stanford, CA, USA; Northeastern University, Network Science Institute, Boston, MA, USA
- Jon Green: Northeastern University, Network Science Institute, Boston, MA, USA
- Damian J Ruck: Northeastern University, Network Science Institute, Boston, MA, USA
- Katherine Ognyanova: Rutgers University, School of Communication & Information, New Brunswick, NJ, USA
- Christo Wilson: Northeastern University, Network Science Institute, Boston, MA, USA; Northeastern University, Khoury College of Computer Sciences, Boston, MA, USA
- David Lazer: Northeastern University, Network Science Institute, Boston, MA, USA
6
Hou L, Pan X, Liu K, Yang Z, Liu J, Zhou T. Information cocoons in online navigation. iScience 2022; 26:105893. [PMID: 36654864; PMCID: PMC9840977; DOI: 10.1016/j.isci.2022.105893]
Abstract
Social media and online navigation bring us enjoyable experiences in accessing information, and simultaneously create information cocoons (ICs) in which we are unconsciously trapped with limited and biased information. We provide a formal definition of IC in the scenario of online navigation. Subsequently, by analyzing real recommendation networks extracted from the Science, PNAS, and Amazon websites, and testing mainstream algorithms in disparate recommender systems, we demonstrate that similarity-based recommendation techniques give rise to ICs, suppressing system navigability by factors in the hundreds. We further propose a flexible recommendation strategy that addresses the IC-induced problem and improves retrieval accuracy in navigation, as demonstrated by simulations on real data and online experiments on the largest video website in China. This paper quantifies the challenge of ICs in recommender systems and presents a viable solution, offering insights into the industrial design of algorithms, future scientific studies, and policy making.
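A minimal way to see the navigability effect is to build a top-k similarity-based recommendation network and measure what fraction of the catalogue is reachable by following recommendation links. The sketch below is illustrative only, with random item vectors and simplified measures that are not the paper's definitions.

```python
# Illustrative sketch (not the paper's exact measures): build a top-k
# recommendation network from item feature similarity and measure navigability
# as the average fraction of the catalogue reachable by following
# recommendation links. Similarity-based lists tend to keep this fraction low.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
items = rng.normal(size=(100, 8))                      # toy item feature vectors

def recommendation_graph(items, k=3):
    sims = items @ items.T
    np.fill_diagonal(sims, -np.inf)                    # never recommend an item to itself
    g = nx.DiGraph()
    for i in range(len(items)):
        for j in np.argsort(sims[i])[-k:]:             # link each item to its k most similar
            g.add_edge(i, int(j))
    return g

def navigability(g):
    n = g.number_of_nodes()
    reachable = [len(nx.descendants(g, v)) / (n - 1) for v in g.nodes]
    return sum(reachable) / n                          # mean reachable share of catalogue

print(round(navigability(recommendation_graph(items)), 3))
```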
Affiliation(s)
- Lei Hou: School of Management Science and Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China; Informatics Research Centre, University of Reading, Reading RG6 6UD, UK
- Xue Pan: School of Management Science and Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China; Informatics Research Centre, University of Reading, Reading RG6 6UD, UK
- Kecheng Liu: Informatics Research Centre, University of Reading, Reading RG6 6UD, UK; Institute of Accounting and Finance, Shanghai University of Finance and Economics, Shanghai 200433, China
- Zimo Yang: Beijing AiQiYi Science & Technology Co. Ltd., Beijing 100080, China
- Jianguo Liu (corresponding author): Institute of Accounting and Finance, Shanghai University of Finance and Economics, Shanghai 200433, China; Research Group of Computational and AI Communication at Institute for Global Communications and Integrated Media, Fudan University, Shanghai 200433, China
- Tao Zhou (corresponding author): Big Data Research Center, University of Electronic Science and Technology of China, Chengdu 611731, China
7
Pastor-Galindo J, Gómez Mármol F, Martínez Pérez G. Profiling users and bots in Twitter through social media analysis. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.09.046]
8
Castioni P, Andrighetto G, Gallotti R, Polizzi E, De Domenico M. The voice of few, the opinions of many: evidence of social biases in Twitter COVID-19 fake news sharing. R Soc Open Sci 2022; 9:220716. [PMID: 36303937; PMCID: PMC9597167; DOI: 10.1098/rsos.220716]
Abstract
Online platforms play a relevant role in the creation and diffusion of false or misleading news. Concerningly, the COVID-19 pandemic is shaping a communication network which reflects the emergence of collective attention towards a topic that rapidly gained universal interest. Here, we characterize the dynamics of this network on Twitter, analysing how unreliable content distributes among its users. We find that a minority of accounts is responsible for the majority of the misinformation circulating online, and identify two categories of users: a few active ones, playing the role of 'creators', and a majority playing the role of 'consumers'. The relative proportion of these groups (approx. 14% creators-86% consumers) appears stable over time: consumers are mostly exposed to the opinions of a vocal minority of creators (which are the origin of 82% of fake content in our data), that could be mistakenly understood as representative of the majority of users. The corresponding pressure from a perceived majority is identified as a potential driver of the ongoing COVID-19 infodemic.
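The creator/consumer distinction can be approximated with a simple rule. The sketch below uses hypothetical records, not the paper's classification: accounts that author original unreliable posts are labeled creators, other accounts sharing such content are consumers, and the share of unreliable posts originating from creators is tallied.

```python
# Toy sketch, not the paper's pipeline: each record is
# (poster, original_author, links_to_unreliable_source). "Creators" author
# original unreliable posts; other accounts sharing such content are
# "consumers". All records are hypothetical.
posts = [
    ("u1", "u1", True),   # u1 creates unreliable content
    ("u2", "u1", True),   # u2..u6 mostly re-share it
    ("u3", "u1", True),
    ("u4", "u1", True),
    ("u5", "u5", False),  # reliable original post
    ("u6", "u1", True),
    ("u7", "u7", False),
]

users = {poster for poster, _, _ in posts}
creators = {origin for poster, origin, unreliable in posts
            if unreliable and poster == origin}
sharers = {poster for poster, _, unreliable in posts if unreliable}
consumers = sharers - creators

unreliable = [origin for _, origin, unr in posts if unr]
from_creators = sum(1 for origin in unreliable if origin in creators)

print(f"creators: {len(creators) / len(users):.0%} of users, "
      f"consumers: {len(consumers) / len(users):.0%}")
print(f"unreliable posts originating from creators: {from_creators / len(unreliable):.0%}")
```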
Affiliation(s)
- Piergiorgio Castioni: Istituto di Scienze e Tecnologie della Cognizione, Via Palestro 32, Roma, Lazio 00185, Italy; Departament d’Enginyeria Informàtica i Matemàtiques, Universitat Rovira i Virgili, Tarragona 43007, Spain
- Giulia Andrighetto: Istituto di Scienze e Tecnologie della Cognizione, Via Palestro 32, Roma, Lazio 00185, Italy
- Riccardo Gallotti: CoMuNe Lab, Fondazione Bruno Kessler, Via Sommarive 18, Povo, Trento 38123, Italy
- Eugenia Polizzi: Istituto di Scienze e Tecnologie della Cognizione, Via Palestro 32, Roma, Lazio 00185, Italy
- Manlio De Domenico: CoMuNe Lab, Fondazione Bruno Kessler, Via Sommarive 18, Povo, Trento 38123, Italy; Department of Physics and Astronomy ‘Galileo Galilei’, University of Padova, Padova, Italy
9
Wojcieszak M, Casas A, Yu X, Nagler J, Tucker JA. Most users do not follow political elites on Twitter; those who do show overwhelming preferences for ideological congruity. Sci Adv 2022; 8:eabn9418. [PMID: 36179029; PMCID: PMC9524832; DOI: 10.1126/sciadv.abn9418]
Abstract
We offer comprehensive evidence of preferences for ideological congruity when people engage with politicians, pundits, and news organizations on social media. Using 4 years of data (2016-2019) from a random sample of 1.5 million Twitter users, we examine three behaviors studied separately to date: (i) following of in-group versus out-group elites, (ii) sharing in-group versus out-group information (retweeting), and (iii) commenting on the shared information (quote tweeting). We find that the majority of users (60%) do not follow any political elites. Those who do follow in-group elite accounts at much higher rates than out-group accounts (90 versus 10%), share information from in-group elites 13 times more frequently than from out-group elites, and often add negative comments to the shared out-group information. Conservatives are twice as likely as liberals to share in-group versus out-group content. These patterns are robust, emerge across issues and political elites, and exist regardless of users' ideological extremity.
Affiliation(s)
- Magdalena Wojcieszak: University of California, Davis, Davis, CA, USA; University of Amsterdam, Amsterdam, Netherlands
- Andreu Casas: Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Xudong Yu: University of Amsterdam, Amsterdam, Netherlands
- Jonathan Nagler: Center for Social Media and Politics, New York University, New York, NY, USA
- Joshua A. Tucker: Center for Social Media and Politics, New York University, New York, NY, USA
10
Abstract
Online misinformation is believed to have contributed to vaccine hesitancy during the COVID-19 pandemic, highlighting concerns about social media's destabilizing role in public life. Previous research identified a link between political conservatism and sharing misinformation; however, it is not clear how partisanship affects how much misinformation people see online. As a result, we do not know whether partisanship drives exposure to misinformation or whether people selectively share misinformation despite being exposed to factual content. To address this question, we study Twitter discussions about the COVID-19 pandemic, classifying users along the political and factual spectrum based on the information sources they share. In addition, we quantify exposure through retweet interactions. We uncover partisan asymmetries in the exposure to misinformation: conservatives are more likely to see and share misinformation, and while users' connections expose them to ideologically congruent content, the interactions between political and factual dimensions create conditions for the highly polarized users, hardline conservatives and liberals, to amplify misinformation. Overall, however, misinformation receives less attention than factual content, and political moderates, the bulk of users in our sample, help filter out misinformation. Identifying the extent of polarization and how political ideology exacerbates misinformation can help public health experts and policy makers improve their messaging.
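The classification step described here can be approximated by averaging source-level ratings over the domains each user shares. The sketch below uses hypothetical domain ratings and share lists, not the study's source lists.

```python
# Illustrative sketch of classifying users along political and factual
# dimensions from the sources they share (hypothetical ratings and data).
domain_scores = {
    # domain: (political score -1..+1, factual score 0..1)
    "leftnews.example": (-0.7, 0.8),
    "rightnews.example": (0.8, 0.7),
    "conspiracy.example": (0.9, 0.1),
}

shares = {
    "user_1": ["leftnews.example", "leftnews.example"],
    "user_2": ["rightnews.example", "conspiracy.example"],
}

def classify(domains):
    rated = [domain_scores[d] for d in domains if d in domain_scores]
    if not rated:
        return None
    political = sum(p for p, _ in rated) / len(rated)
    factual = sum(f for _, f in rated) / len(rated)
    return round(political, 2), round(factual, 2)

for user, domains in shares.items():
    print(user, classify(domains))   # (political position, factual quality)
```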
11
Yang KC, Ferrara E, Menczer F. Botometer 101: social bot practicum for computational social scientists. J Comput Soc Sci 2022; 5:1511-1528. [PMID: 36035522; PMCID: PMC9391657; DOI: 10.1007/s42001-022-00177-5]
Abstract
Social bots have become an important component of online social media. Deceptive bots, in particular, can manipulate online discussions of important issues ranging from elections to public health, threatening the constructive exchange of information. Their ubiquity makes them an interesting research subject and requires researchers to properly handle them when conducting studies using social media data. Therefore, it is important for researchers to gain access to bot detection tools that are reliable and easy to use. This paper aims to provide an introductory tutorial of Botometer, a public tool for bot detection on Twitter, for readers who are new to this topic and may not be familiar with programming and machine learning. We introduce how Botometer works, the different ways users can access it, and present a case study as a demonstration. Readers can use the case study code as a template for their own research. We also discuss recommended practice for using Botometer.
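A minimal usage sketch following the botometer Python package's documented interface is shown below; the credentials and handles are placeholders, and availability depends on the Twitter/X API and the Botometer RapidAPI endpoint at the time of use.

```python
# Minimal usage sketch based on the botometer Python package's documented
# interface. Credentials and screen names are placeholders; endpoint
# availability may change with the Twitter/X API.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_KEY",
    "consumer_secret": "YOUR_SECRET",
    "access_token": "YOUR_TOKEN",
    "access_token_secret": "YOUR_TOKEN_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

# Score a single account by screen name
result = bom.check_account("@OSoMe_IU")
print(result.get("cap"))           # complete automation probability, if present

# Score several accounts in sequence
for screen_name, result in bom.check_accounts_in(["@OSoMe_IU", "@Botometer"]):
    print(screen_name, result.get("cap"))
```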
Affiliation(s)
- Kai-Cheng Yang: Observatory on Social Media, Indiana University Bloomington, Bloomington, IN 47408, USA
- Emilio Ferrara: Information Sciences Institute, University of Southern California, Marina del Rey, CA 90292, USA
- Filippo Menczer: Observatory on Social Media, Indiana University Bloomington, Bloomington, IN 47408, USA
12
Yang KC, Hui PM, Menczer F. How Twitter data sampling biases U.S. voter behavior characterizations. PeerJ Comput Sci 2022; 8:e1025. [PMID: 35875635; PMCID: PMC9299280; DOI: 10.7717/peerj-cs.1025]
Abstract
Online social media are key platforms for the public to discuss political issues. As a result, researchers have used data from these platforms to analyze public opinions and forecast election results. The literature has shown that due to inauthentic actors such as malicious social bots and trolls, not every message is a genuine expression from a legitimate user. However, the prevalence of inauthentic activities in social data streams is still unclear, making it difficult to gauge biases of analyses based on such data. In this article, we aim to close this gap using Twitter data from the 2018 U.S. midterm elections. We propose an efficient and low-cost method to identify voters on Twitter and systematically compare their behaviors with different random samples of accounts. We find that some accounts flood the public data stream with political content, drowning the voice of the majority of voters. As a result, these hyperactive accounts are over-represented in volume samples. Hyperactive accounts are more likely to exhibit various suspicious behaviors and to share low-credibility information compared to likely voters. Our work provides insights into biased voter characterizations when using social media data to analyze political issues.
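The sampling bias at issue can be illustrated with toy activity counts: accounts posting far above the median contribute a share of tweet volume far larger than their share of accounts, so volume-based samples over-represent them. The numbers below are hypothetical.

```python
# Toy illustration of the sampling bias described above (hypothetical counts):
# hyperactive accounts are a small minority of users but dominate tweet volume.
import statistics

tweets_per_user = {
    "voter_a": 3, "voter_b": 5, "voter_c": 2, "voter_d": 4, "voter_e": 1,
    "voter_f": 6, "voter_g": 2, "voter_h": 3, "hyper_x": 180, "hyper_y": 120,
}

median = statistics.median(tweets_per_user.values())
hyperactive = {u for u, n in tweets_per_user.items() if n > 10 * median}

user_share = len(hyperactive) / len(tweets_per_user)
volume_share = (sum(tweets_per_user[u] for u in hyperactive)
                / sum(tweets_per_user.values()))

print(f"hyperactive accounts: {user_share:.0%} of accounts, "
      f"{volume_share:.0%} of tweet volume")
```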
13
A general framework to link theory and empirics in opinion formation models. Sci Rep 2022; 12:5543. [PMID: 35365685; PMCID: PMC8976081; DOI: 10.1038/s41598-022-09468-3]
Abstract
We introduce a minimal opinion formation model that is quite flexible and can reproduce a wide variety of existing micro-influence assumptions and models. The model can be easily calibrated on real data, upon which it imposes only a few requirements. From this perspective, our model can be considered a bridge connecting theoretical studies of opinion formation models and empirical research on social dynamics. We investigate the model analytically using a mean-field approximation and numerically via Monte Carlo simulations. Our analysis is exemplified by recently reported empirical data drawn from an online social network. We demonstrate that the model calibrated on these data may reproduce fragmented and polarizing social systems. Furthermore, we manage to generate an artificial society that features properties quantitatively and qualitatively similar to those observed empirically at the macro scale. This became possible after we extended the model with two important communication features: selectivity and personalization algorithms.
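As a flavor of the Monte Carlo side of such models, the sketch below implements a bounded-confidence-style pairwise update with a simple selectivity rule (agents only interact with sufficiently similar peers); it is a generic toy model, not the authors' framework.

```python
# Generic bounded-confidence-style opinion dynamics, not the authors' model:
# random pairs of agents interact and move their opinions closer only when
# they are already within a tolerance of each other ("selectivity"), which
# can leave the population fragmented into clusters.
import random

def simulate(n=200, steps=50_000, tolerance=0.2, mu=0.3, seed=1):
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and abs(opinions[i] - opinions[j]) < tolerance:  # selectivity
            shift = mu * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
    return opinions

ops = simulate()
clusters = {round(o, 1) for o in ops}
print(f"{len(clusters)} occupied opinion bins after the run")
```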
14
Bhadani S, Yamaya S, Flammini A, Menczer F, Ciampaglia GL, Nyhan B. Political audience diversity and news reliability in algorithmic ranking. Nat Hum Behav 2022; 6:495-505. [PMID: 35115677; DOI: 10.1038/s41562-021-01276-5]
Abstract
Newsfeed algorithms frequently amplify misinformation and other low-quality content. How can social media platforms more effectively promote reliable information? Existing approaches are difficult to scale and vulnerable to manipulation. In this paper, we propose using the political diversity of a website's audience as a quality signal. Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 US residents, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards. We then incorporate audience diversity into a standard collaborative filtering framework and show that our improved algorithm increases the trustworthiness of websites suggested to users, especially those who most frequently consume misinformation, while keeping recommendations relevant. These findings suggest that partisan audience diversity is a valuable signal of higher journalistic standards that should be incorporated into algorithmic ranking decisions.
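The core signal can be computed with very little machinery: score each news domain by the dispersion of its visitors' partisanship and rank domains with more diverse audiences higher. The sketch below uses hypothetical visitor scores and omits the paper's full collaborative-filtering pipeline.

```python
# Illustrative sketch of the audience-diversity signal (toy data, not the
# paper's pipeline): domains whose visitors span the partisanship scale get a
# higher diversity score than domains with narrow, extreme audiences.
import statistics

# visitor partisanship per domain, on a -1 (left) .. +1 (right) scale -- hypothetical
audiences = {
    "broadsheet.example": [-0.6, -0.2, 0.1, 0.4, 0.7],
    "hyperpartisan.example": [0.8, 0.9, 0.85, 0.95, 0.9],
}

def diversity_score(scores):
    # higher variance = more politically diverse audience
    return statistics.pvariance(scores)

ranked = sorted(audiences, key=lambda d: diversity_score(audiences[d]), reverse=True)
print(ranked)   # domains with more diverse audiences ranked first
```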
Affiliation(s)
- Saumya Bhadani: Department of Computer Science and Engineering, University of South Florida, Tampa, FL, USA
- Shun Yamaya: Department of Political Science, Stanford University, Stanford, CA, USA
- Filippo Menczer: Observatory on Social Media, Indiana University, Bloomington, IN, USA
- Brendan Nyhan: Department of Government, Dartmouth College, Hanover, NH, USA