1. Ueshima A, Jones MI, Christakis NA. Simple autonomous agents can enhance creative semantic discovery by human groups. Nat Commun 2024; 15:5212. PMID: 38890368; PMCID: PMC11189566; DOI: 10.1038/s41467-024-49528-y
Abstract
Innovation is challenging, and theory and experiments indicate that groups may be better able to identify and preserve innovations than individuals. But innovation within groups faces its own challenges, including groupthink and truncated diffusion. We performed experiments involving a game in which people search for ideas in various conditions: alone, in networked social groups, or in networked groups featuring autonomous agents (bots). The objective was to search a semantic space of 20,000 nouns with defined similarities for an arbitrary noun with the highest point value. Participants (N = 1875) were embedded in networks (n = 125) of 15 nodes to which we sometimes added 2 bots. The bots had 3 possible strategies: they shared a random noun generated by their immediate neighbors, the most similar noun from among those identified, or the least similar. We first confirm that groups are better able to explore a semantic space than isolated individuals. Then we show that when bots that share the most similar noun operate in groups facing a semantic space that is relatively easy to navigate, group performance is superior. Simple autonomous agents with interpretable behavior can affect the capacity for creative discovery of human groups.
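The "most similar" and "least similar" bot strategies can be illustrated with a minimal sketch. The nouns, vectors, and cosine-similarity measure below are illustrative assumptions; the study used a defined similarity structure over 20,000 nouns, not these toy embeddings.

```python
import math

# Hypothetical embedding table: noun -> vector (illustrative values only).
EMBEDDINGS = {
    "dog":   (0.9, 0.1, 0.0),
    "wolf":  (0.8, 0.2, 0.1),
    "piano": (0.1, 0.9, 0.3),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_similar_bot(neighbor_nouns, reference):
    """Share the neighbor-generated noun closest to a reference noun."""
    return max(neighbor_nouns,
               key=lambda w: cosine(EMBEDDINGS[w], EMBEDDINGS[reference]))

def least_similar_bot(neighbor_nouns, reference):
    """Share the neighbor-generated noun farthest from a reference noun."""
    return min(neighbor_nouns,
               key=lambda w: cosine(EMBEDDINGS[w], EMBEDDINGS[reference]))
```

Given neighbors who produced "wolf" and "piano" and a reference of "dog", the first bot shares "wolf" and the second "piano"; the random-noun strategy would simply sample uniformly from the neighbors' nouns.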
Affiliation(s)
- Atsushi Ueshima
- Yale Institute for Network Science, Yale University, New Haven, CT, USA
- Department of Sociology, Yale University, New Haven, CT, USA
- Japan Society for the Promotion of Science, Tokyo, Japan
- Department of Human Sciences, Faculty of Letters, Keio University, Tokyo, Japan
- Matthew I Jones
- Yale Institute for Network Science, Yale University, New Haven, CT, USA
- Department of Sociology, Yale University, New Haven, CT, USA
- Sunwater Institute, North Bethesda, MD, USA
- Nicholas A Christakis
- Yale Institute for Network Science, Yale University, New Haven, CT, USA
- Department of Sociology, Yale University, New Haven, CT, USA
- Department of Statistics and Data Science, Yale University, New Haven, CT, USA

2. Shi L, He Z, Shen C, Tanimoto J. Enhancing social cohesion with cooperative bots in societies of greedy, mobile individuals. PNAS Nexus 2024; 3:pgae223. PMID: 38881842; PMCID: PMC11179109; DOI: 10.1093/pnasnexus/pgae223
Abstract
Addressing collective issues in social development requires a high level of social cohesion, characterized by cooperation and close social connections. However, social cohesion is challenged by selfish, greedy individuals. With the advancement of artificial intelligence (AI), the dynamics of human-machine hybrid interactions introduce new complexities in fostering social cohesion. This study explores the impact of simple bots on social cohesion from the perspective of human-machine hybrid populations within networks. Investigating collective self-organizing movement during migration, we find that cooperative bots can promote cooperation, facilitate individual aggregation, and thereby enhance social cohesion. The random exploration movement of bots can break the frozen state of a greedy population, help separate defectors from cooperative clusters, and promote the establishment of cooperative clusters. However, the presence of defective bots can weaken social cohesion, underscoring the importance of carefully designing bot behavior. Our research reveals the potential of bots in guiding social self-organization and provides insights for enhancing social cohesion in the era of human-machine interaction within social networks.
Affiliation(s)
- Lei Shi
- School of Statistics and Mathematics, Yunnan University of Finance and Economics, Kunming 650221, China
- Interdisciplinary Research Institute of data science, Shanghai Lixin University of Accounting and Finance, Shanghai 201209, China
- Zhixue He
- School of Statistics and Mathematics, Yunnan University of Finance and Economics, Kunming 650221, China
- Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, Fukuoka 816-8580, Japan
- Chen Shen
- Faculty of Engineering Sciences, Kyushu University, Fukuoka 816-8580, Japan
- Jun Tanimoto
- Faculty of Engineering Sciences, Kyushu University, Fukuoka 816-8580, Japan
- Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, Fukuoka 816-8580, Japan

3. Shirado H, Kasahara S, Christakis NA. Emergence and collapse of reciprocity in semiautomatic driving coordination experiments with humans. Proc Natl Acad Sci U S A 2023; 120:e2307804120. PMID: 38079552; PMCID: PMC10743379; DOI: 10.1073/pnas.2307804120
Abstract
Forms of both simple and complex machine intelligence are increasingly acting within human groups in order to affect collective outcomes. Considering the nature of collective action problems, however, such involvement could paradoxically and unintentionally suppress existing beneficial social norms in humans, such as those involving cooperation. Here, we test theoretical predictions about such an effect using a unique cyber-physical lab experiment where online participants (N = 300 in 150 dyads) drive robotic vehicles remotely in a coordination game. We show that autobraking assistance increases human altruism, such as giving way to others, and that communication helps people to make mutual concessions. On the other hand, autosteering assistance completely inhibits the emergence of reciprocity between people in favor of self-interest maximization. The negative social repercussions persist even after the assistance system is deactivated. Furthermore, adding communication capabilities does not relieve this inhibition of reciprocity because people rarely communicate in the presence of autosteering assistance. Our findings suggest that active safety assistance (a form of simple AI support) can alter the dynamics of social coordination between people, including by affecting the trade-off between individual safety and social reciprocity. The difference between autobraking and autosteering assistance appears to relate to whether the assistive technology supports or replaces human agency in social coordination dilemmas. Humans have developed norms of reciprocity to address collective challenges, but such tacit understandings could break down in situations where machine intelligence is involved in human decision-making without having any normative commitments.
Affiliation(s)
- Hirokazu Shirado
- Human-Computer Interaction Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15206
- Shunichi Kasahara
- Sony Computer Science Laboratories, Inc., Tokyo 141-0022, Japan
- Okinawa Institute of Science and Technology Graduate University, Onna son, Okinawa 904-0412, Japan
- Nicholas A Christakis
- Yale Institute for Network Science, Yale University, New Haven, CT 06520
- Department of Sociology, Yale University, New Haven, CT 06520
- Department of Statistics and Data Science, Yale University, New Haven, CT 06520

4. Guo H, Shen C, Hu S, Xing J, Tao P, Shi Y, Wang Z. Facilitating cooperation in human-agent hybrid populations through autonomous agents. iScience 2023; 26:108179. PMID: 37920671; PMCID: PMC10618689; DOI: 10.1016/j.isci.2023.108179
Abstract
Cooperative AI has shown its effectiveness in solving the conundrum of cooperation. Understanding how cooperation emerges in human-agent hybrid populations is a topic of significant interest, particularly in the realm of evolutionary game theory. In this article, we scrutinize how cooperative and defective Autonomous Agents (AAs) influence human cooperation in social dilemma games with a one-shot setting. Focusing on well-mixed populations, we find that cooperative AAs have a limited impact in the prisoner's dilemma games but facilitate cooperation in the stag hunt games. Surprisingly, defective AAs can promote complete dominance of cooperation in the snowdrift games. As the proportion of AAs increases, both cooperative and defective AAs have the potential to cause human cooperation to disappear. We then extend our investigation to consider the pairwise comparison rule and complex networks, elucidating that imitation strength and population structure are critical for the emergence of human cooperation in human-agent hybrid populations.
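As a sketch of how a fixed share of cooperative autonomous agents can change the well-mixed dynamics described above, the following replicator-dynamics toy model embeds always-cooperate agents in a stag hunt. The payoff values, bot fraction, and step sizes are illustrative assumptions, not parameters from the article.

```python
def simulate(rho_bots=0.2, x0=0.2, R=4, S=1, T=3, P=2, dt=0.01, steps=20000):
    """Replicator dynamics for the fraction x of human cooperators when a
    fraction rho_bots of the population consists of always-cooperate agents.
    The payoff ordering R > T > P > S makes this a stag hunt."""
    x = x0
    for _ in range(steps):
        p = rho_bots + (1 - rho_bots) * x   # chance a random partner cooperates
        f_c = p * R + (1 - p) * S           # expected payoff of cooperating
        f_d = p * T + (1 - p) * P           # expected payoff of defecting
        x += dt * x * (1 - x) * (f_c - f_d) # replicator update (Euler step)
        x = min(max(x, 0.0), 1.0)
    return x
```

With these payoffs the sign of `f_c - f_d` flips at `p = 0.5`, so a modest bot fraction can move a human population from the basin of all-defect into the basin of all-cooperate, consistent with the stag hunt result summarized above.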
Affiliation(s)
- Hao Guo
- School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi’an 710072, China
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Chen Shen
- Faculty of Engineering Sciences, Kyushu University, Kasuga-koen, Kasuga-shi, Fukuoka 816-8580, Japan
- Shuyue Hu
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Junliang Xing
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Pin Tao
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Yuanchun Shi
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Zhen Wang
- School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi’an 710072, China

5. Tchernichovski O, Frey S, Jacoby N, Conley D. Incentivizing free riders improves collective intelligence in social dilemmas. Proc Natl Acad Sci U S A 2023; 120:e2311497120. PMID: 37931106; PMCID: PMC10655583; DOI: 10.1073/pnas.2311497120
Abstract
Collective intelligence challenges are often entangled with collective action problems. For example, voting, rating, and social innovation are collective intelligence tasks that require costly individual contributions. As a result, members of a group often free ride on the information contributed by intrinsically motivated people. Are intrinsically motivated agents the best participants in collective decisions? We embedded a collective intelligence task in a large-scale, virtual world public good game and found that participants who joined the information system but were reluctant to contribute to the public good (free riders) provided more accurate evaluations, whereas participants who rated frequently underperformed. Testing the underlying mechanism revealed that a negative rating bias in free riders is associated with higher accuracy. Importantly, incentivizing evaluations amplifies the relative influence of participants who tend to free ride without altering the (higher) quality of their evaluations, thereby improving collective intelligence. These results suggest that many of the currently available information systems, which strongly select for intrinsically motivated participants, underperform and that collective intelligence can benefit from incentivizing free riding members to engage. More generally, enhancing the diversity of contributor motivations can improve collective intelligence in settings that are entangled with collective action problems.
Affiliation(s)
- Ofer Tchernichovski
- Department of Psychology, Hunter College, The City University of New York, New York, NY 10065
- Seth Frey
- Department of Communication, University of California, Davis, CA 95616
- Ostrom Workshop, Indiana University Bloomington, Bloomington, IN 47408
- Nori Jacoby
- Computational Auditory Perception Group, Max Planck Institute for Empirical Aesthetics, Frankfurt 60322, Germany
- Dalton Conley
- Department of Sociology and Office of Population Research, Princeton University, Princeton, NJ 08544
- National Bureau of Economic Research

6. Shirado H, Hou YTY, Jung MF. Stingy bots can improve human welfare in experimental sharing networks. Sci Rep 2023; 13:17957. PMID: 37864003; PMCID: PMC10589225; DOI: 10.1038/s41598-023-44883-0
Abstract
Machines powered by artificial intelligence increasingly permeate social networks with control over resources. However, machine allocation behavior might offer little benefit to human welfare over networks when it ignores the specific network mechanism of social exchange. Here, we perform an online experiment involving simple networks of humans (496 participants in 120 networks) playing a resource-sharing game to which we sometimes add artificial agents (bots). The experiment examines two opposite policies of machine allocation behavior: reciprocal bots, which share all resources reciprocally; and stingy bots, which share no resources at all. We also manipulate the bot's network position. We show that reciprocal bots make little change to unequal resource distribution among people. On the other hand, stingy bots balance structural power and improve collective welfare in human groups when placed in a specific network position, although they bestow no wealth on people. Our findings highlight the need to incorporate the human nature of reciprocity and relational interdependence in designing machine behavior in sharing networks. Conscientious machines do not always improve human welfare; their effect depends on the network structure in which they interact.
Affiliation(s)
- Hirokazu Shirado
- School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- Yoyo Tsung-Yu Hou
- Department of Information Science, Cornell University, Ithaca, NY, 14853, USA
- Malte F Jung
- Department of Information Science, Cornell University, Ithaca, NY, 14853, USA

7. AI learns to encourage group cooperation by making new connections. Nat Hum Behav 2023; 7:1618-1619. PMID: 37679442; DOI: 10.1038/s41562-023-01699-2

8. McKee KR, Tacchetti A, Bakker MA, Balaguer J, Campbell-Gillingham L, Everett R, Botvinick M. Scaffolding cooperation in human groups with deep reinforcement learning. Nat Hum Behav 2023; 7:1787-1796. PMID: 37679439; PMCID: PMC10593606; DOI: 10.1038/s41562-023-01686-7
Abstract
Effective approaches to encouraging group cooperation are still an open challenge. Here we apply recent advances in deep learning to structure networks of human participants playing a group cooperation game. We leverage deep reinforcement learning and simulation methods to train a 'social planner' capable of making recommendations to create or break connections between group members. The strategy that it develops succeeds at encouraging pro-sociality in networks of human participants (N = 208 participants in 13 groups) playing for real monetary stakes. Under the social planner, groups finished the game with an average cooperation rate of 77.7%, compared with 42.8% in static networks (N = 176 in 11 groups). In contrast to prior strategies that separate defectors from cooperators (tested here with N = 384 in 24 groups), the social planner learns to take a conciliatory approach to defectors, encouraging them to act pro-socially by moving them to small highly cooperative neighbourhoods.
Affiliation(s)
- Matthew Botvinick
- Google DeepMind, London, UK
- Gatsby Computational Neuroscience Unit, University College London, London, UK

9. Santos FP. On consensus and cooperation: Comment on "Reputation and reciprocity" by Xia et al. Phys Life Rev 2023; 46:187-189. PMID: 37480728; DOI: 10.1016/j.plrev.2023.07.005
Affiliation(s)
- Fernando P Santos
- Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands

10. Sharma G, Guo H, Shen C, Tanimoto J. Small bots, big impact: solving the conundrum of cooperation in optional Prisoner's Dilemma game through simple strategies. J R Soc Interface 2023; 20:20230301. PMID: 37464799; PMCID: PMC10354466; DOI: 10.1098/rsif.2023.0301
Abstract
Cooperation plays a crucial role in both nature and human society, and the conundrum of cooperation attracts attention across disciplines. In this study, we investigated the evolution of cooperation in optional Prisoner's Dilemma games by introducing simple bots. We focused on one-shot and anonymous games, where the bots could be programmed to always cooperate, always defect, never participate, or choose each action with equal probability. Our results show that cooperative bots facilitate the emergence of cooperation among ordinary players in both well-mixed populations and on a regular lattice under weak imitation scenarios. Introducing loner bots has no impact on the emergence of cooperation in well-mixed populations, but it facilitates the dominance of cooperation on regular lattices under strong imitation scenarios. However, too many loner bots on a regular lattice inhibit the spread of cooperation and can eventually result in a breakdown of cooperation. Our findings emphasize the significance of bot design in promoting cooperation and offer useful insights for encouraging cooperation in real-world scenarios.
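A minimal Monte Carlo sketch of this kind of setup, assuming a well-mixed population, Fermi-rule imitation, and non-updating bots; the payoff normalization, loner payoff, and all parameters are illustrative assumptions, not values from the paper.

```python
import math
import random

# Illustrative optional-PD payoffs: cooperation yields 1, temptation 1.3,
# sucker -0.3, mutual defection 0; loners ('L') opt out for sigma = 0.2.
PAYOFF = {('C', 'C'): 1.0, ('C', 'D'): -0.3,
          ('D', 'C'): 1.3, ('D', 'D'): 0.0}

def payoff(s1, s2, sigma=0.2):
    """Payoff to s1 against s2; any pairing involving a loner pays sigma."""
    if 'L' in (s1, s2):
        return sigma
    return PAYOFF[(s1, s2)]

def simulate(n_humans=200, n_bots=50, bot_strategy='C',
             beta=0.1, rounds=20000, seed=1):
    """Imitation dynamics in a well-mixed human-bot population. Humans update
    via the Fermi rule with imitation strength beta; bots never update."""
    rng = random.Random(seed)
    humans = [rng.choice('CDL') for _ in range(n_humans)]
    for _ in range(rounds):
        pool = humans + [bot_strategy] * n_bots
        i = rng.randrange(n_humans)            # focal human
        model = rng.choice(pool)               # role model (human or bot)
        pi_focal = payoff(humans[i], rng.choice(pool))
        pi_model = payoff(model, rng.choice(pool))
        # Fermi rule: adopt the model's strategy with probability
        # increasing in the payoff difference, scaled by beta.
        if rng.random() < 1.0 / (1.0 + math.exp(-beta * (pi_model - pi_focal))):
            humans[i] = model
    return humans.count('C') / n_humans
```

A small `beta` corresponds to the weak-imitation regime discussed above; varying `bot_strategy` between 'C', 'D', and 'L' reproduces the kind of comparison the paper makes, though the quantitative outcomes here depend entirely on the assumed parameters.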
Affiliation(s)
- Gopal Sharma
- Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, Fukuoka, 816-8580, Japan
- Hao Guo
- School of Mechanical Engineering, Northwestern Polytechnical University, Xi’an 710072, China
- School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi’an 710072, China
- Chen Shen
- Faculty of Engineering Sciences, Kyushu University, Kasuga-koen, Kasuga-shi, Fukuoka 816-8580, Japan
- Jun Tanimoto
- Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, Fukuoka 816-8580, Japan
- Faculty of Engineering Sciences, Kyushu University, Kasuga-koen, Kasuga-shi, Fukuoka 816-8580, Japan

11. Battu B, Rahwan T. Cooperation without punishment. Sci Rep 2023; 13:1213. PMID: 36681708; PMCID: PMC9867775; DOI: 10.1038/s41598-023-28372-y
Abstract
A fundamental question in social and biological sciences is whether self-governance is possible when individual and collective interests are in conflict. Free riding poses a major challenge to self-governance, and a prominent solution to this challenge has been altruistic punishment. However, this solution is ineffective when counter-punishments are possible and when social interactions are noisy. We set out to address these shortcomings, motivated by the fact that most people behave like conditional cooperators: individuals willing to cooperate if a critical number of others do so. In our evolutionary model, the population contains heterogeneous conditional cooperators whose decisions depend on past cooperation levels. The population plays a repeated public goods game in a moderately noisy environment where individuals can occasionally commit mistakes in their cooperative decisions and in their imitation of the role models' strategies. We show that, under moderate levels of noise, injecting a few altruists into the population triggers positive reciprocity among conditional cooperators, thereby providing a novel mechanism to establish stable cooperation. More broadly, our findings indicate that self-governance is possible while avoiding the detrimental effects of punishment, and suggest that society should focus on creating a critical amount of trust to harness the conditional nature of its members.
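The core mechanism, a few unconditional altruists triggering a cascade among threshold-based conditional cooperators, can be sketched deterministically. This is a Granovetter-style threshold model, far simpler than the article's noisy evolutionary dynamics, and the threshold values below are illustrative.

```python
def cooperation_cascade(thresholds, n_rounds=50):
    """Deterministic threshold dynamics for conditional cooperators: agent i
    cooperates in a round iff at least thresholds[i] agents cooperated in the
    previous round. Injected altruists are agents with threshold 0.
    Returns the number of cooperators after n_rounds."""
    coop = sum(t == 0 for t in thresholds)   # round 0: only altruists cooperate
    for _ in range(n_rounds):
        coop = sum(t <= coop for t in thresholds)
    return coop
```

With thresholds `[0, 0, 1, 2, 3, 4]` the two altruists seed a cascade that recruits every conditional cooperator, whereas the same population without altruists, `[1, 2, 3, 4]`, stays at zero cooperation, mirroring the paper's point that a few injected altruists can tip a conditional population into stable cooperation.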
Affiliation(s)
- Balaraju Battu
- Science Division, New York University Abu Dhabi, Abu Dhabi, UAE
- Talal Rahwan
- Science Division, New York University Abu Dhabi, Abu Dhabi, UAE

12. Shirado H. Individual and collective learning in groups facing danger. Sci Rep 2022; 12:6210. PMID: 35418611; PMCID: PMC9007963; DOI: 10.1038/s41598-022-10255-3
Abstract
While social networks jeopardize people’s well-being by working as diffusion pathways of falsehood, they may also help people overcome the challenge of misinformation with time and experience. Here I examine how social networks facilitate learning, using an experiment involving an iterated decision-making game simulating an unpredictable situation faced by a group (2786 subjects in 120 groups). This study shows that, while social networks initially spread false information and suppress necessary actions, they facilitate improvement in people's decision-making over time when tie rewiring is possible. It also shows that the network's learning facilitation results from the integration of individual experiences into structural changes. In sum, social networks can support collective learning when they are built through people's experiences and accumulated relationships.
Affiliation(s)
- Hirokazu Shirado
- School of Computer Science, Carnegie Mellon University, 5000 Forbes Ave, Newell-Simon Hall 3607, Pittsburgh, PA, 15213, USA

13. Jones MI, Pauls SD, Fu F. The dual problems of coordination and anti-coordination on random bipartite graphs. New J Phys 2021; 23:113018. PMID: 35663516; PMCID: PMC9165663; DOI: 10.1088/1367-2630/ac3319
Abstract
In some scenarios ("anti-coordination games"), individuals are better off choosing different actions than their neighbors while in other scenarios ("coordination games"), it is beneficial for individuals to choose the same strategy as their neighbors. Despite having different incentives and resulting population dynamics, it is largely unknown which collective outcome, anti-coordination or coordination, is easier to achieve. To address this issue, we focus on the distributed graph coloring problem on bipartite graphs. We show that with only two strategies, anti-coordination games (2-colorings) and coordination games (uniform colorings) are dual problems that are equally difficult to solve. To prove this, we construct an isomorphism between the Markov chains arising from the corresponding anti-coordination and coordination games under certain specific individual stochastic decision-making rules. Our results provide novel insights into solving collective action problems on networks.
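The duality can be illustrated on the colorings themselves: flipping the colors of every node on one side of a bipartite graph maps a proper 2-coloring (anti-coordination solved) to a uniform coloring (coordination solved), and back. The paper's actual result is an isomorphism between the Markov chains of the two games; this sketch shows only the underlying state correspondence.

```python
def flip_left(coloring, left_nodes):
    """Flip the binary color (0 <-> 1) of every node on the left side of a
    bipartite graph. Because every edge crosses sides, this maps a proper
    2-coloring to a uniform coloring and vice versa."""
    return {v: (1 - c if v in left_nodes else c) for v, c in coloring.items()}
```

For a complete bipartite graph on sides {0, 1} and {2, 3}, the proper 2-coloring {0: 0, 1: 0, 2: 1, 3: 1} maps to the all-ones uniform coloring, and applying the flip twice recovers the original, so the map is an involution on colorings.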
Affiliation(s)
- Matthew I. Jones
- Department of Mathematics, Dartmouth College, Hanover, NH 03755, USA
- Scott D. Pauls
- Department of Mathematics, Dartmouth College, Hanover, NH 03755, USA
- Feng Fu
- Department of Mathematics, Dartmouth College, Hanover, NH 03755, USA
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Lebanon, NH 03756, USA

14. Sun W, Liu L, Chen X, Szolnoki A, Vasconcelos VV. Combination of institutional incentives for cooperative governance of risky commons. iScience 2021; 24:102844. PMID: 34381969; PMCID: PMC8334382; DOI: 10.1016/j.isci.2021.102844
Abstract
Finding appropriate incentives to enforce collaborative efforts for governing the commons in risky situations is a long-lasting challenge. Previous works have demonstrated that both punishing free-riders and rewarding cooperators could be potential tools to reach this goal. Despite weak theoretical foundations, policy makers frequently impose a punishment-reward combination. Here, we consider the emergence of positive and negative incentives and analyze their simultaneous impact on sustaining risky commons. Importantly, we consider institutions with fixed and flexible incentives. We find that a local sanctioning scheme with pure reward is the optimal incentive strategy. It can drive the entire population toward a highly cooperative state in a broad range of parameters, independently of the type of institutions. We show that our finding is also valid for flexible incentives in the global sanctioning scheme, although the local arrangement works more effectively.
Highlights:
- Pure reward in a local scheme is more effective for both fixed and flexible incentives
- It can drive the entire population toward a highly cooperative state
- Increasing the efficiency of the institution can induce the success of pure reward
- A local scheme promotes group success more effectively than a global scheme
Affiliation(s)
- Weiwei Sun
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Linjie Liu
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Xiaojie Chen
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Attila Szolnoki
- Institute of Technical Physics and Materials Science, Centre for Energy Research, P.O. Box 49, Budapest 1525, Hungary
- Vítor V Vasconcelos
- Informatics Institute, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands
- Institute for Advanced Study, University of Amsterdam, 1012 GC Amsterdam, The Netherlands

15. Algorithm exploitation: Humans are keen to exploit benevolent AI. iScience 2021; 24:102679. PMID: 34189440; PMCID: PMC8219775; DOI: 10.1016/j.isci.2021.102679
Abstract
We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that people mistrust algorithms, participants trusted their AI partners to be as cooperative as humans. However, they did not return AI's benevolence as much and exploited the AI more than humans. These findings warn that future self-driving cars or co-working robots, whose success depends on humans' returning their cooperativeness, run the risk of being exploited. This vulnerability calls not just for smarter machines but also better human-centered policies.
Highlights:
- People predict that AI agents will be as benevolent (cooperative) as humans
- People cooperate less with benevolent AI agents than with benevolent humans
- Reduced cooperation only occurs if it serves people's selfish interests
- People feel guilty when they exploit humans but not when they exploit AI agents

16. Random choices facilitate solutions to collective network coloring problems by artificial agents. iScience 2021; 24:102340. PMID: 33870136; PMCID: PMC8047171; DOI: 10.1016/j.isci.2021.102340
Abstract
Global coordination is required to solve a wide variety of challenging collective action problems, from network colorings to the tragedy of the commons. A recent empirical study shows that the presence of a few noisy autonomous agents can greatly improve the collective performance of humans in solving networked color coordination games. To provide analytical insights into the role of behavioral randomness, here we study myopic artificial agents attempting to solve similar network coloring problems using decision update rules that are based only on local information but allow random choices at various stages of their heuristic reasoning. We show that the resulting efficacy of resolving color conflicts depends on the implementation of the agents' random behavior and on specific population characteristics. Our work demonstrates that distributed greedy optimization algorithms exploiting local information should be deployed in combination with occasional exploration via random choices in order to overcome local minima and achieve global coordination.
Highlights:
- Local information makes solving distributed network coloring problems difficult
- Greedy agents can become gridlocked, making it difficult to find a global solution
- Agents making random choices can facilitate the finding of a global coloring
- Randomness can be finely tuned to a specific underlying population structure
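A hedged sketch of the idea, assuming a simple min-conflict heuristic with occasional uniformly random moves; the paper's update rules and exploration schedule differ in detail, and the parameters here are illustrative.

```python
import random

def color_network(adj, n_colors, p_random=0.1, max_steps=10000, seed=0):
    """Distributed coloring where each agent sees only its neighbors' colors.
    Updates are greedy (min-conflict), but with probability p_random an
    updating agent explores a random color, helping escape gridlocked states.
    Returns (colors, steps); steps == max_steps means no proper coloring
    was reached within the budget."""
    rng = random.Random(seed)
    colors = {v: rng.randrange(n_colors) for v in adj}
    for step in range(max_steps):
        conflicted = [v for v in adj
                      if any(colors[v] == colors[u] for u in adj[v])]
        if not conflicted:
            return colors, step                   # proper coloring reached
        v = rng.choice(conflicted)
        if rng.random() < p_random:
            colors[v] = rng.randrange(n_colors)   # exploratory random move
        else:
            conflicts = [sum(colors[u] == c for u in adj[v])
                         for c in range(n_colors)]
            colors[v] = conflicts.index(min(conflicts))  # greedy min-conflict
    return colors, max_steps
```

Setting `p_random = 0` recovers the purely greedy agents that can become gridlocked, while a small positive `p_random` supplies the occasional exploration the highlights describe.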