1. Allem JP. The Need for Research on the Wellness Industry's Impact on Health Decisions. Am J Prev Med 2024;67:627-630. PMID: 38777206. DOI: 10.1016/j.amepre.2024.05.010.
Affiliations
- Jon-Patrick Allem, Department of Health Behavior, Society and Policy, Rutgers School of Public Health, The State University of New Jersey, New Brunswick, New Jersey
2. Epstein R, Li J. Can biased search results change people's opinions about anything at all? A close replication of the Search Engine Manipulation Effect (SEME). PLoS One 2024;19:e0300727. PMID: 38530851. DOI: 10.1371/journal.pone.0300727.
Abstract
In previous experiments we have conducted on the Search Engine Manipulation Effect (SEME), we have focused on the ability of biased search results to shift voting preferences. In three new experiments with a total of 1,137 US residents (mean age = 33.2), we sought to determine whether biased search rankings could shift people's opinions on topics that do not involve candidates or elections. Each of the new experiments looked at a different topic, and participants were pre-screened to make sure they didn't have strong opinions about these topics. The topics were: Is artificial intelligence useful or dangerous? Is fracking helpful or dangerous? And: Are people born gay or do they choose to be gay? All participants were first asked various demographic questions, then shown brief summaries of the "pro" and "anti" views on each topic, and then asked their opinions about each topic. Next, participants were allowed to conduct an online search using our mock search engine (Kadoodle) lasting up to 15 minutes. In each experiment, one-third of the participants saw biased search results favoring one perspective; one-third saw biased search results favoring the opposing perspective; and one-third (the control group) saw mixed search results. After completing their search, participants were again asked for their opinions about the topic. Our primary dependent variable was Manipulation Power (MP), the percentage increase in the number of participants favoring one viewpoint after having viewed search rankings favoring that viewpoint. The MPs in the three experiments were 25.0%, 30.9%, and 17.8%, respectively. Corresponding shifts were also found for how persuasive participants found each viewpoint to be and for how much they trusted each viewpoint. We conclude that search rankings favoring one viewpoint on a wide range of topics might be able to cause people who have not yet formulated a strong opinion on such topics to adopt the favored perspective. 
If our findings prove to be robust, we are exposing what might be considered an unforeseen consequence of the creation of search engines, namely that even without human interference, search algorithms will inevitably alter the thinking and behavior of billions of people worldwide on perhaps any topic for which they have not yet formed strong opinions.
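The Manipulation Power metric defined in this abstract is a simple percentage change in counts before and after exposure; a minimal sketch of that computation (the function name and example counts are illustrative, not taken from the study materials):

```python
def manipulation_power(pre_count: int, post_count: int) -> float:
    """Percentage increase in the number of participants favoring a
    viewpoint after viewing search rankings favoring that viewpoint."""
    if pre_count <= 0:
        raise ValueError("pre-exposure count must be positive")
    return 100.0 * (post_count - pre_count) / pre_count

# e.g., 100 participants favoring a viewpoint before searching and
# 125 after yields an MP of 25.0, on the same scale as the 25.0%,
# 30.9%, and 17.8% values reported for the three experiments
```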
Affiliations
- Robert Epstein, American Institute for Behavioral Research and Technology, Vista, California, United States of America
- Ji Li, American Institute for Behavioral Research and Technology, Vista, California, United States of America
3. Agudo U, Liberal KG, Arrese M, Matute H. The impact of AI errors in a human-in-the-loop process. Cogn Res Princ Implic 2024;9:1. PMID: 38185767. PMCID: PMC10772030. DOI: 10.1186/s41235-023-00529-3.
Abstract
Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human-computer interaction may influence the final decision. In two experiments, we simulate an automated decision-making process in which participants judge multiple defendants in relation to various crimes, and we manipulate the time at which participants receive support from a supposed automated system with Artificial Intelligence (before or after they make their judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/b6p4z/. Experiment 2 was preregistered.
Affiliations
- Ujué Agudo, Bikolabs/Biko, Pamplona, Spain; Departamento de Psicología, Universidad de Deusto, Avda. Universidad 24, 48007, Bilbao, Spain
- Helena Matute, Departamento de Psicología, Universidad de Deusto, Avda. Universidad 24, 48007, Bilbao, Spain
4. Vicente L, Matute H. Humans inherit artificial intelligence biases. Sci Rep 2023;13:15737. PMID: 37789032. PMCID: PMC10547752. DOI: 10.1038/s41598-023-42384-8.
Abstract
Artificial intelligence recommendations are sometimes erroneous and biased. In our research, we hypothesized that people who perform a (simulated) medical diagnostic task assisted by a biased AI system will reproduce the model's bias in their own decisions, even when they move to a context without AI support. In three experiments, participants completed a medical-themed classification task with or without the help of a biased AI system. The biased recommendations by the AI influenced participants' decisions. Moreover, when those participants, assisted by the AI, moved on to perform the task without assistance, they made the same errors as the AI had made during the previous phase. Thus, participants' responses mimicked AI bias even when the AI was no longer making suggestions. These results provide evidence of human inheritance of AI bias.
Affiliations
- Lucía Vicente, Department of Psychology, Deusto University, Avenida Universidades 24, 48007, Bilbao, Spain
- Helena Matute, Department of Psychology, Deusto University, Avenida Universidades 24, 48007, Bilbao, Spain
5. Felmlee DH, Julien C, Francisco SC. Debating stereotypes: Online reactions to the vice-presidential debate of 2020. PLoS One 2023;18:e0280828. PMID: 36696430. PMCID: PMC9876360. DOI: 10.1371/journal.pone.0280828.
Abstract
The 2020 Vice-Presidential debate afforded the opportunity to examine online reactions toward a woman of color, Kamala Harris, and a white man, Mike Pence, as they vied for the same position. We collected tweets from the Twitter API related to Harris and Pence, mainly using neutral hashtags. We examined keywords for gender and race slurs and conducted a multivariate analysis of tweet sentiment. Gender and racial slurs surface in both Harris and Pence datasets, showcasing the insidious nature of sexist and racist stereotypes that seep into online conversations regarding a high-status job debate. As anticipated, tweets regarding Harris contained a higher proportion of racist and sexist curse words, highlighting greater levels of harassment and "intersectional," multi-ethnic/gender attacks. Racial insults targeting Blacks or Asians were more negative than those associated with Whites. Unexpectedly, tweets related to Harris were more positive in average sentiment than those regarding Pence. Yet, there were significantly more retweets, and more negativity of retweets, relating to Harris than to Pence, underscoring the relatively widespread broadcasting of derogatory messages about Harris. Overall, we found that harassing messages toward the candidates reinforced traditional race and gender stereotypes and bolstered the status of those who posted negative content by attaining more retweets. Harassers routinely invoked well-worn, stereotypical insults in their attacks, especially when targeting a multiracial woman.
Affiliations
- Diane H. Felmlee, Department of Sociology and Criminology, Pennsylvania State University, University Park, Pennsylvania, United States of America; Population Research Institute, Pennsylvania State University, University Park, Pennsylvania, United States of America
- Chris Julien, Department of Sociology and Criminology, Pennsylvania State University, University Park, Pennsylvania, United States of America
- Sara C. Francisco, Department of Sociology, Grinnell College, Grinnell, Iowa, United States of America
6. Curtis C, Gillespie N, Lockey S. AI-deploying organizations are key to addressing 'perfect storm' of AI risks. AI Ethics 2023;3:145-153. PMID: 35634256. PMCID: PMC9127285. DOI: 10.1007/s43681-022-00163-7.
Abstract
We argue that a perfect storm of five conditions heightens the risk of harm to society from artificial intelligence: (1) the powerful, invisible nature of AI, (2) low public awareness and AI literacy, (3) rapid scaled deployment of AI, (4) insufficient regulation, and (5) the gap between trustworthy AI principles and practices. To prevent harm, fit-for-purpose regulation and public AI literacy programs have been recommended, but education and government regulation will not be sufficient: AI-deploying organizations need to play a central role in creating and deploying AI in line with trustworthy AI principles, and to take accountability for mitigating the risks.
Affiliations
- Caitlin Curtis, School of Business, The University of Queensland, Brisbane, QLD 4072, Australia; Centre for Policy Futures, The University of Queensland, Brisbane, QLD 4072, Australia
- Nicole Gillespie, School of Business, The University of Queensland, Brisbane, QLD 4072, Australia; Centre for Corporate Reputation, University of Oxford, Oxford, UK
- Steven Lockey, School of Business, The University of Queensland, Brisbane, QLD 4072, Australia
7
|
Russell AM, Bergman BG, Colditz JB, Massey PM. Algorithmic accountability on social media platforms in the context of alcohol-related health behavior change. Addiction 2023; 118:189-190. [PMID: 36065822 PMCID: PMC9722554 DOI: 10.1111/add.16042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/20/2022] [Accepted: 08/23/2022] [Indexed: 11/30/2022]
Affiliations
- Alex M. Russell, Department of Health, Human Performance and Recreation, Center for Public Health and Technology, University of Arkansas, Fayetteville, AR, USA
- Brandon G. Bergman, Recovery Research Institute, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Jason B. Colditz, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Philip M. Massey, Department of Health, Human Performance and Recreation, Center for Public Health and Technology, University of Arkansas, Fayetteville, AR, USA
8. Epstein R, Lee V, Mohr R, Zankich VR. The Answer Bot Effect (ABE): A powerful new form of influence made possible by intelligent personal assistants and search engines. PLoS One 2022;17:e0268081. PMID: 35648736. PMCID: PMC9159602. DOI: 10.1371/journal.pone.0268081.
Abstract
We introduce and quantify a relatively new form of influence: the Answer Bot Effect (ABE). In a 2015 report in PNAS, researchers demonstrated the power that biased search results have to shift opinions and voting preferences without people's knowledge, by up to 80% in some demographic groups. They labeled this phenomenon the Search Engine Manipulation Effect (SEME), speculating that its power derives from the high level of trust people have in algorithmically generated content. We now describe three experiments with a total of 1,736 US participants, conducted to determine to what extent giving users "the answer", either via an answer box at the top of a page of search results or via a vocal reply to a question posed to an intelligent personal assistant (IPA), might also impact opinions and votes. Participants were first given basic information about two candidates running for prime minister of Australia (in order to assure that participants were "undecided"), then asked questions about their voting preferences, then given answers to questions they posed about the candidates, either with answer boxes or with vocal answers on an Alexa simulator, and then asked again about their voting preferences. The experiments were controlled, randomized, double-blind, and counterbalanced. Experiments 1 and 2 demonstrated that answer boxes can shift voting preferences by as much as 38.6% and that the appearance of an answer box can reduce search times and clicks on search results. Experiment 3 demonstrated that even a single question-and-answer interaction on an IPA can shift voting preferences by more than 40%. Multiple questions posed to an IPA leading to answers that all have the same bias can shift voting preferences by more than 65%. Simple masking procedures still produced large opinion shifts while reducing awareness of bias to close to zero.
ABE poses a serious threat to both democracy and human autonomy because (a) it produces large shifts in opinions and voting preferences with little or no user awareness, (b) it is an ephemeral form of influence that leaves no paper trail, and (c) worldwide, it is controlled almost exclusively by just four American tech companies. ABE will become a greater threat as people increasingly rely on IPAs for answers.
Affiliations
- Robert Epstein, American Institute for Behavioral Research and Technology, Vista, California, United States of America
- Vivian Lee, American Institute for Behavioral Research and Technology, Vista, California, United States of America
- Roger Mohr, American Institute for Behavioral Research and Technology, Vista, California, United States of America
- Vanessa R. Zankich, American Institute for Behavioral Research and Technology, Vista, California, United States of America
9. Agudo U, Arrese M, Liberal KG, Matute H. Assessing Emotion and Sensitivity of AI Artwork. Front Psychol 2022;13:879088. PMID: 35478752. PMCID: PMC9037325. DOI: 10.3389/fpsyg.2022.879088.
Abstract
Artificial Intelligence (AI) is currently present in areas that were, until recently, reserved for humans, such as, for instance, art. However, to the best of our knowledge, there is not much empirical evidence on how people perceive the skills of AI in these domains. In Experiment 1, participants were exposed to AI-generated audiovisual artwork and were asked to evaluate it. We told half of the participants that the artist was a human and we confessed to the other half that it was an AI. Although all of them were exposed to the same artwork, the results showed that people attributed lower sensitivity, lower ability to evoke their emotions, and lower quality to the artwork when they thought the artist was AI as compared to when they believed the artist was human. Experiment 2 reproduced these results and extended them to a slightly different setting, a different piece of (exclusively auditory) artwork, and added some additional measures. The results show that the evaluation of art seems to be modulated, at least in part, by prior stereotypes and biases about the creative skills of AI. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/3r7xg/. Experiment 2 was preregistered at AsPredicted: https://aspredicted.org/fh2u2.pdf.
Affiliations
- Ujué Agudo, Departamento de Psicología, Universidad de Deusto, Bilbao, Spain; Laboratorio de intervención, Bikolabs/Biko, Pamplona, Spain
- Miren Arrese, Laboratorio de intervención, Bikolabs/Biko, Pamplona, Spain
- Helena Matute, Departamento de Psicología, Universidad de Deusto, Bilbao, Spain
10. Cho E, Sundar SS. How do we like our online dates—customized or personalized? The differential effects of user vs. system tailoring on date preferences. Comput Human Behav 2022. DOI: 10.1016/j.chb.2021.107037.
11. Martínez N, Vinas A, Matute H. Examining potential gender bias in automated-job alerts in the Spanish market. PLoS One 2021;16:e0260409. PMID: 34890392. PMCID: PMC8664211. DOI: 10.1371/journal.pone.0260409.
Abstract
Numerous field experiments based on the correspondence testing procedure have documented that gender bias influences personnel selection processes. Nowadays, algorithms and job platforms are used for personnel selection processes because of their supposed neutrality, efficiency, and cost savings. However, previous research has shown that algorithms can exhibit and even amplify gender bias. The present research aimed to explore a possible gender bias in automated-job alerts generated in InfoJobs, a popular job platform in Spain. Based on the correspondence testing procedure, we designed eight matched resumes in which we manipulated the gender of the candidate for two different professional sectors (female-dominated vs. male-dominated) and two different levels of age (24 vs. 38). We examined the 3,438 offers received. No significant differences were observed in the automated-job alerts received by female and male candidates as a function of occupation category, salary, and the number of long-term contracts included in the alerts. However, we found significant differences between the female-dominated and the male-dominated sectors in all the mentioned variables. Some limitations and implications of the study are discussed. The data and materials for this research are available at the Open Science Framework, https://osf.io/kptca/.
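The eight matched resumes described in this abstract form a 2 × 2 × 2 factorial design (gender × sector × age); a minimal sketch of enumerating those conditions (all names are hypothetical, not taken from the study materials):

```python
from itertools import product

# Factors manipulated across otherwise-identical ("matched") resumes,
# following the correspondence-testing design described above.
GENDERS = ("female", "male")
SECTORS = ("female-dominated", "male-dominated")
AGES = (24, 38)

def matched_resume_conditions():
    """Enumerate the eight resume conditions (gender x sector x age)."""
    return [
        {"gender": g, "sector": s, "age": a}
        for g, s, a in product(GENDERS, SECTORS, AGES)
    ]
```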
Affiliations
- Naroa Martínez, Departamento de Psicología, University of Deusto, Bilbao, Spain
- Aranzazu Vinas, Departamento de Psicología, University of Deusto, Bilbao, Spain
- Helena Matute, Departamento de Psicología, University of Deusto, Bilbao, Spain