1
Michel-Mata S, Kawakatsu M, Sartini J, Kessinger TA, Plotkin JB, Tarnita CE. The evolution of private reputations in information-abundant landscapes. Nature 2024; 634:883-889. PMID: 39322674. DOI: 10.1038/s41586-024-07977-x.
Abstract
Reputations are critical to human societies, as individuals are treated differently based on their social standing [1,2]. For instance, those who garner a good reputation by helping others are more likely to be rewarded by third parties [3-5]. Achieving widespread cooperation in this way requires that reputations accurately reflect behaviour [6] and that individuals agree about each other's standings [7]. With few exceptions [8-10], theoretical work has assumed that information is limited, which hinders consensus [7,11] unless there are mechanisms to enforce agreement, such as empathy [12], gossip [13-15] or public institutions [16]. Such mechanisms face challenges in a world where empathy, effective communication and institutional trust are compromised [17-19]. However, information about others is now abundant and readily available, particularly through social media. Here we demonstrate that assigning private reputations by aggregating several observations of an individual can accurately capture behaviour, foster emergent agreement without enforcement mechanisms and maintain cooperation, provided individuals exhibit some tolerance for bad actions. This finding holds for both first- and second-order norms of judgement and is robust even when norms vary within a population. When the aggregation rule itself can evolve, selection indeed favours the use of several observations and tolerant judgements. Nonetheless, even when information is freely accessible, individuals do not typically evolve to use all of it. This method of assessing reputations ('look twice, forgive once', in a nutshell) is simple enough to have arisen early in human culture and powerful enough to persist as a fundamental component of social heuristics.
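The aggregation rule this abstract describes is simple enough to sketch in code. The following is a minimal illustration, not the paper's model: the function names, the q = 2 observation count, and the error and cooperation rates are our own assumptions. An observer pools several noisy observations of a donor and assigns a good reputation whenever at most `tolerance` of them look bad.

```python
import random

def assess(observations, tolerance=1):
    """Aggregate several observed actions into one private reputation.

    `observations` is a list of booleans (True = the action was judged
    good). The observer assigns a good reputation as long as at most
    `tolerance` observations looked bad; two observations with
    tolerance = 1 is the 'look twice, forgive once' heuristic.
    """
    bad = sum(1 for ok in observations if not ok)
    return bad <= tolerance

def observe(coop_rate, q, error=0.05, rng=None):
    """Draw q noisy observations of a donor who acts well with
    probability `coop_rate`; each judgement flips with probability
    `error` (assessment error)."""
    rng = rng or random.Random(0)
    obs = []
    for _ in range(q):
        acted_well = rng.random() < coop_rate
        if rng.random() < error:
            acted_well = not acted_well
        obs.append(acted_well)
    return obs

# A reliable cooperator is almost always judged good under q=2,
# tolerance=1, so independent private observers largely agree
# without any enforcement mechanism:
rng = random.Random(42)
reps = [assess(observe(0.95, q=2, rng=rng), tolerance=1) for _ in range(2000)]
share_good = sum(reps) / len(reps)
```

Raising `q` while keeping `tolerance` fixed makes the judgement stricter; raising both together keeps it forgiving, which is the tolerance the abstract says cooperation requires.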
Affiliation(s)
- Sebastián Michel-Mata
  - Department of Ecology and Evolutionary Biology, Princeton University, Princeton, NJ, USA
- Mari Kawakatsu
  - Department of Biology, University of Pennsylvania, Philadelphia, PA, USA
  - Center for Mathematical Biology, University of Pennsylvania, Philadelphia, PA, USA
- Joseph Sartini
  - Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ, USA
  - Department of Biostatistics, Johns Hopkins University, Baltimore, MD, USA
- Taylor A. Kessinger
  - Department of Biology, University of Pennsylvania, Philadelphia, PA, USA
- Joshua B. Plotkin
  - Department of Biology, University of Pennsylvania, Philadelphia, PA, USA
  - Center for Mathematical Biology, University of Pennsylvania, Philadelphia, PA, USA
- Corina E. Tarnita
  - Department of Ecology and Evolutionary Biology, Princeton University, Princeton, NJ, USA
2
Murase Y, Hilbe C. Computational evolution of social norms in well-mixed and group-structured populations. Proc Natl Acad Sci U S A 2024; 121:e2406885121. PMID: 39116135. PMCID: PMC11331111. DOI: 10.1073/pnas.2406885121.
Abstract
Models of indirect reciprocity study how social norms promote cooperation. In these models, cooperative individuals build up a positive reputation, which in turn helps them in their future interactions. The exact reputational benefits of cooperation depend on the norm in place, which may change over time. Previous research focused on the stability of social norms. Much less is known about how social norms initially evolve when competing with many others. A comprehensive evolutionary analysis, however, has been difficult. Even among the comparably simple space of so-called third-order norms, there are thousands of possibilities, each one inducing its own reputation dynamics. To address this challenge, we use large-scale computer simulations. We study the reputation dynamics of each third-order norm and all evolutionary transitions between them. In contrast to established work with only a handful of norms, we find that cooperation is hard to maintain in well-mixed populations. However, within group-structured populations, cooperation can emerge. The most successful norm in our simulations is particularly simple. It regards cooperation as universally positive, and defection as usually negative, unless defection takes the form of justified punishment. This research sheds light on the complex interplay of social norms, their induced reputation dynamics, and population structure.
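The size of the norm space swept by these simulations can be seen with a small counting sketch. The encoding below is our own illustrative assumption: a third-order assessment rule maps each of the eight (donor action, donor reputation, recipient reputation) contexts to a binary reputation, and the winning norm described above is written as one such rule, reading "justified punishment" as a good donor defecting against a bad recipient.

```python
from itertools import product

# Eight binary contexts: (donor cooperated?, donor good?, recipient good?).
CONTEXTS = list(product([True, False], repeat=3))
n_assessment_rules = 2 ** len(CONTEXTS)        # 256 third-order assessment rules
n_action_rules = 2 ** 4                        # 16 maps (own rep, partner rep) -> act
n_norms = n_assessment_rules * n_action_rules  # thousands of candidate norms

def successful_norm(cooperated, donor_good, recipient_good):
    """Sketch of the winning norm: cooperation is always assessed as
    good; defection is bad unless it is justified punishment (assumed
    here to mean a good donor defecting against a bad recipient)."""
    if cooperated:
        return True
    return donor_good and not recipient_good
```

Enumerating all 4,096 combinations and simulating each one's reputation dynamics is exactly the kind of sweep that makes large-scale computation necessary.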
Affiliation(s)
- Yohsuke Murase
  - RIKEN Center for Computational Science, Kobe 650-0047, Japan
- Christian Hilbe
  - Max Planck Research Group Dynamics of Social Behavior, Max Planck Institute for Evolutionary Biology, Plön 24306, Germany
3
Kawakatsu M, Kessinger TA, Plotkin JB. A mechanistic model of gossip, reputations, and cooperation. Proc Natl Acad Sci U S A 2024; 121:e2400689121. PMID: 38717858. PMCID: PMC11098103. DOI: 10.1073/pnas.2400689121.
Abstract
Social reputations facilitate cooperation: those who help others gain a good reputation, making them more likely to receive help themselves. But when people hold private views of one another, this cycle of indirect reciprocity breaks down, as disagreements lead to the perception of unjustified behavior that ultimately undermines cooperation. Theoretical studies often assume population-wide agreement about reputations, invoking rapid gossip as an endogenous mechanism for reaching consensus. However, the theory of indirect reciprocity lacks a mechanistic description of how gossip actually generates consensus. Here, we develop a mechanistic model of gossip-based indirect reciprocity that incorporates two alternative forms of gossip: exchanging information with randomly selected peers or consulting a single gossip source. We show that these two forms of gossip are mathematically equivalent under an appropriate transformation of parameters. We derive an analytical expression for the minimum amount of gossip required to reach sufficient consensus and stabilize cooperation. We analyze how the amount of gossip necessary for cooperation depends on the benefits and costs of cooperation, the assessment rule (social norm), and errors in reputation assessment, strategy execution, and gossip transmission. Finally, we show that biased gossip can either facilitate or hinder cooperation, depending on the direction and magnitude of the bias. Our results contribute to the growing literature on cooperation facilitated by communication, and they highlight the need to study strategic interactions coupled with the spread of social information.
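A toy version of the random-peer form of gossip shows how private opinions drift toward consensus. Everything here is an illustrative assumption rather than the paper's model: binary opinions, the population size, the number of gossip events, and the copy-a-peer update rule.

```python
import random

def gossip_event(views, rng):
    """One gossip event: a random observer adopts a random peer's
    opinion of a random target. views[i][j] is observer i's binary
    view of individual j."""
    n = len(views)
    i, peer, target = rng.randrange(n), rng.randrange(n), rng.randrange(n)
    views[i][target] = views[peer][target]

def agreement(views):
    """Fraction of targets about whom all observers hold the same view."""
    return sum(len(set(col)) == 1 for col in zip(*views)) / len(views)

rng = random.Random(7)
n = 8
views = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
before = agreement(views)
for _ in range(5000):
    gossip_event(views, rng)
after = agreement(views)  # repeated gossip drives private views to consensus
```

Each target's column behaves like a voter model, so sufficient gossip fixates opinions; the paper's analytical result is precisely the minimum amount of gossip needed for enough of this consensus to stabilize cooperation.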
Affiliation(s)
- Mari Kawakatsu
  - Department of Biology, University of Pennsylvania, Philadelphia, PA 19104
  - Center for Mathematical Biology, University of Pennsylvania, Philadelphia, PA 19104
- Taylor A. Kessinger
  - Department of Biology, University of Pennsylvania, Philadelphia, PA 19104
- Joshua B. Plotkin
  - Department of Biology, University of Pennsylvania, Philadelphia, PA 19104
  - Center for Mathematical Biology, University of Pennsylvania, Philadelphia, PA 19104
4
Morsky B, Plotkin JB, Akçay E. Indirect reciprocity with Bayesian reasoning and biases. PLoS Comput Biol 2024; 20:e1011979. PMID: 38662682. PMCID: PMC11045068. DOI: 10.1371/journal.pcbi.1011979.
Abstract
Reputations can foster cooperation by indirect reciprocity: if I am good to you then others will be good to me. But this mechanism for cooperation in one-shot interactions only works when people agree on who is good and who is bad. Errors in actions or assessments can produce disagreements about reputations, which can unravel the positive feedback loop between social standing and pro-social behaviour. Cooperators can end up punished and defectors rewarded. Public reputation systems and empathy are two possible mechanisms to promote agreement about reputations. Here we suggest an alternative: Bayesian reasoning by observers. By taking into account the probabilities of errors in action and observation and their prior beliefs about the prevalence of good people in the population, observers can use Bayesian reasoning to determine whether or not someone is good. To study this scenario, we develop an evolutionary game theoretical model in which players use Bayesian reasoning to assess reputations, either publicly or privately. We explore this model analytically and numerically for five social norms (Scoring, Shunning, Simple Standing, Staying, and Stern Judging). We systematically compare results to the case when agents do not use reasoning in determining reputations. We find that Bayesian reasoning reduces cooperation relative to non-reasoning, except in the case of the Scoring norm. Under Scoring, Bayesian reasoning can promote coexistence of three strategic types. Additionally, we study the effects of optimistic or pessimistic biases in individual beliefs about the degree of cooperation in the population. We find that optimism generally undermines cooperation whereas pessimism can, in some cases, promote cooperation.
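The core of the Bayesian-observer idea is a single application of Bayes' rule. The error rates and prior below are illustrative assumptions, not the paper's parameters: an observer who knows the action and observation error rates can invert one noisy observation into a posterior belief that the donor is a good type.

```python
def posterior_good(prior_good, observed_coop,
                   p_coop_if_good=0.9, p_coop_if_bad=0.1, obs_error=0.05):
    """Posterior probability that the donor is a good type, given one
    noisy observed action, via Bayes' rule."""
    def p_see_coop(p_coop):
        # the true action flips into the observation with prob. obs_error
        return p_coop * (1 - obs_error) + (1 - p_coop) * obs_error

    like_good = p_see_coop(p_coop_if_good)
    like_bad = p_see_coop(p_coop_if_bad)
    if not observed_coop:
        like_good, like_bad = 1 - like_good, 1 - like_bad
    evidence = prior_good * like_good + (1 - prior_good) * like_bad
    return prior_good * like_good / evidence

# With an optimistic prior, a single observed defection lowers, but does
# not destroy, the donor's standing:
p_after_defect = posterior_good(0.8, observed_coop=False)
```

Shifting `prior_good` up or down is the optimism or pessimism bias the abstract studies: the same observation yields a different verdict under different priors.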
Affiliation(s)
- Bryce Morsky
  - Department of Biology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
  - Department of Mathematics, Florida State University, Tallahassee, Florida, United States of America
- Joshua B. Plotkin
  - Department of Biology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Erol Akçay
  - Department of Biology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
5
Chiba-Okabe H, Plotkin JB. Can institutions foster cooperation by wealth redistribution? J R Soc Interface 2024; 21:20230698. PMID: 38471530. PMCID: PMC10932717. DOI: 10.1098/rsif.2023.0698.
Abstract
Theoretical models prescribe how institutions can promote cooperation in a population by imposing appropriate punishments or rewards on individuals. However, many real-world institutions are not sophisticated or responsive enough to ensure cooperation by calibrating their policies. Or, worse yet, an institution might selfishly exploit the population it governs for its own benefit. Here, we study the evolution of cooperation in the presence of an institution that is autonomous, in the sense that it has its own interests that may or may not align with those of the population. The institution imposes a tax on the population and redistributes a portion of the tax revenue to cooperators, withholding the remaining revenue for itself. The institution adjusts its rates of taxation and redistribution to optimize its own long-term, discounted utility. We consider three types of institutions with different goals, embodied in their utility functions. We show that a prosocial institution, whose goal is to maximize the average payoff of the population, can indeed promote cooperation, but only if it is sufficiently forward-looking. On the other hand, an institution that seeks to maximize welfare among cooperators alone will successfully promote collective cooperation even if it is myopic. Remarkably, even a selfish institution, which seeks to maximize the revenue it withholds for itself, can nonetheless promote cooperation. The average payoff of the population increases when a selfish institution is more forward-looking, so that a population under a selfish regime can sometimes fare better than under anarchy. Our analysis highlights the potential benefits of institutional wealth redistribution, even when an institution does not share the interests of the population it governs.
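The tax-and-redistribute setup can be sketched as a donation game with an institutional layer on top. The functional forms and all parameter values below are our own assumptions, not taken from the paper; the sketch only shows how redistribution to cooperators narrows the cooperator-defector payoff gap while leaving the institution a cut.

```python
def payoffs(x, b=3.0, c=1.0, tax=0.2, share_to_coops=0.5):
    """Per-capita payoffs at cooperator frequency x in a donation game,
    under an institution that taxes pre-tax payoffs at rate `tax` and
    redistributes a fraction `share_to_coops` of the revenue equally
    among cooperators, keeping the rest for itself."""
    coop_pre = b * x - c          # cooperators pay the cost c
    defect_pre = b * x            # defectors free-ride
    revenue = tax * (x * coop_pre + (1 - x) * defect_pre)
    bonus = share_to_coops * revenue / x if x > 0 else 0.0
    coop_net = (1 - tax) * coop_pre + bonus
    defect_net = (1 - tax) * defect_pre
    institution_take = (1 - share_to_coops) * revenue
    return coop_net, defect_net, institution_take

c_net, d_net, inst = payoffs(0.5)
```

A forward-looking institution in the paper tunes `tax` and `share_to_coops` over time to maximize its discounted utility; here they are static for illustration.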
Affiliation(s)
- Hiroaki Chiba-Okabe
  - Program in Applied Mathematics and Computational Science, University of Pennsylvania, Philadelphia, PA, USA
  - Center for Mathematical Biology, University of Pennsylvania, Philadelphia, PA, USA
- Joshua B. Plotkin
  - Program in Applied Mathematics and Computational Science, University of Pennsylvania, Philadelphia, PA, USA
  - Center for Mathematical Biology, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Biology, University of Pennsylvania, Philadelphia, PA, USA
6
Kawakatsu M, Michel-Mata S, Kessinger TA, Tarnita CE, Plotkin JB. When do stereotypes undermine indirect reciprocity? PLoS Comput Biol 2024; 20:e1011862. PMID: 38427626. PMCID: PMC10906830. DOI: 10.1371/journal.pcbi.1011862.
Abstract
Social reputations provide a powerful mechanism to stimulate human cooperation, but observing individual reputations can be cognitively costly. To ease this burden, people may rely on proxies such as stereotypes, or generalized reputations assigned to groups. Such stereotypes are less accurate than individual reputations, and so they could disrupt the positive feedback between altruistic behavior and social standing, undermining cooperation. How do stereotypes impact cooperation by indirect reciprocity? We develop a theoretical model of group-structured populations in which individuals are assigned either individual reputations based on their own actions or stereotyped reputations based on their groups' behavior. We find that using stereotypes can produce either more or less cooperation than using individual reputations, depending on how widely reputations are shared. Deleterious outcomes can arise when individuals adapt their propensity to stereotype. Stereotyping behavior can spread and can be difficult to displace, even when it compromises collective cooperation and even though it makes a population vulnerable to invasion by defectors. We discuss the implications of our results for the prevalence of stereotyping and for reputation-based cooperation in structured populations.
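The contrast between individual and stereotyped reputations can be illustrated with a toy aggregation. Majority rule is our own assumption here, not necessarily the paper's assignment rule; the point is only that a group-level label mislabels every member who deviates from their group's majority.

```python
def stereotype(group_actions):
    """Assign every member of a group the group's majority reputation,
    instead of tracking each member individually. Cheaper to maintain,
    but less accurate."""
    majority = sum(group_actions) * 2 >= len(group_actions)
    return [majority] * len(group_actions)

# Individual reputations track each member's own action; the stereotype
# mislabels every within-group deviant.
groups = [[True, True, False], [False, False, True]]
individual = [list(g) for g in groups]
stereotyped = [stereotype(g) for g in groups]
mislabeled = sum(i != s
                 for gi, gs in zip(individual, stereotyped)
                 for i, s in zip(gi, gs))
```

Those mislabelings are exactly where the positive feedback between behavior and standing can break: a defector inside a mostly-cooperative group keeps a good stereotype, and vice versa.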
Affiliation(s)
- Mari Kawakatsu
  - Department of Biology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
  - Center for Mathematical Biology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Sebastián Michel-Mata
  - Department of Ecology and Evolutionary Biology, Princeton University, Princeton, New Jersey, United States of America
- Taylor A. Kessinger
  - Department of Biology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Corina E. Tarnita
  - Department of Ecology and Evolutionary Biology, Princeton University, Princeton, New Jersey, United States of America
- Joshua B. Plotkin
  - Department of Biology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
  - Center for Mathematical Biology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
7
Wang X, Zhou L, McAvoy A, Li A. Imitation dynamics on networks with incomplete information. Nat Commun 2023; 14:7453. PMID: 37978181. PMCID: PMC10656501. DOI: 10.1038/s41467-023-43048-x.
Abstract
Imitation is an important learning heuristic in animal and human societies. Previous explorations report that the fate of individuals with cooperative strategies is sensitive to the protocol of imitation, leading to a conundrum about how different styles of imitation quantitatively impact the evolution of cooperation. Here, we take a different perspective on the personal and external social information required by imitation. We develop a general model of imitation dynamics with incomplete information in networked systems, which unifies classical update rules, including the death-birth and pairwise-comparison rules, on complex networks. Under pairwise interactions, we find that collective cooperation is most promoted if individuals neglect personal information. If personal information is considered, cooperators evolve more readily with more external information. Intriguingly, when interactions take place in groups on networks with low degrees of clustering, using more personal and less external information better facilitates cooperation. Our unifying perspective offers intuition by examining the rate and range of competition induced by different information conditions.
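One of the classical update rules this model unifies, the pairwise-comparison rule, is compactly expressed as a Fermi function of the payoff difference. The code is a generic textbook sketch of that rule, not the paper's exact formulation.

```python
import math

def imitation_prob(payoff_self, payoff_peer, beta=1.0):
    """Pairwise-comparison (Fermi) rule: the probability of copying a
    peer's strategy increases smoothly with the payoff advantage of the
    peer; beta is the selection intensity."""
    return 1.0 / (1.0 + math.exp(-beta * (payoff_peer - payoff_self)))
```

At `beta -> 0` imitation is random; at large `beta` the focal individual almost surely copies any better-paid peer, which is one end of the information-use spectrum the abstract compares.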
Affiliation(s)
- Xiaochen Wang
  - Center for Systems and Control, College of Engineering, Peking University, Beijing 100871, China
- Lei Zhou
  - School of Automation, Beijing Institute of Technology, Beijing 100081, China
- Alex McAvoy
  - School of Data Science and Society, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
  - Department of Mathematics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Aming Li
  - Center for Systems and Control, College of Engineering, Peking University, Beijing 100871, China
  - Center for Multi-Agent Research, Institute for Artificial Intelligence, Peking University, Beijing 100871, China
8
Murase Y, Hilbe C. Indirect reciprocity with stochastic and dual reputation updates. PLoS Comput Biol 2023; 19:e1011271. PMID: 37471286. PMCID: PMC10359017. DOI: 10.1371/journal.pcbi.1011271.
Abstract
Cooperation is a crucial aspect of social life, yet understanding the nature of cooperation and how it can be promoted is an ongoing challenge. One mechanism for cooperation is indirect reciprocity. According to this mechanism, individuals cooperate to maintain a good reputation. This idea is embodied in a set of social norms called the "leading eight". When all information is publicly available, these norms have two major properties. Populations that employ these norms are fully cooperative, and they are stable against invasion by alternative norms. In this paper, we extend the framework of the leading eight in two directions. First, we include norms with 'dual' reputation updates. These norms not only assign new reputations to an acting donor; they also allow the reputation of the passive recipient to be updated. Second, we allow social norms to be stochastic. Such norms allow individuals to evaluate others with certain probabilities. Using this framework, we characterize all evolutionarily stable norms that lead to full cooperation in the public information regime. When only the donor's reputation is updated, and all updates are deterministic, we recover the conventional model. In that case, we find two classes of stable norms: the leading eight and the 'secondary sixteen'. Stochasticity can further help to stabilize cooperation when the benefit of cooperation is comparatively small. Moreover, updating the recipients' reputations can help populations to recover more quickly from errors. Overall, our study highlights a remarkable trade-off between the evolutionary stability of a norm and its robustness with respect to errors. Norms that correct errors quickly require higher benefits of cooperation to be stable.
Affiliation(s)
- Yohsuke Murase
  - RIKEN Center for Computational Science, Kobe, Japan
  - Max Planck Research Group 'Dynamics of Social Behavior', Max Planck Institute for Evolutionary Biology, Plön, Germany
- Christian Hilbe
  - Max Planck Research Group 'Dynamics of Social Behavior', Max Planck Institute for Evolutionary Biology, Plön, Germany
9
Kessinger TA, Tarnita CE, Plotkin JB. Evolution of norms for judging social behavior. Proc Natl Acad Sci U S A 2023; 120:e2219480120. PMID: 37276388. PMCID: PMC10268218. DOI: 10.1073/pnas.2219480120.
Abstract
Reputations provide a powerful mechanism to sustain cooperation, as individuals cooperate with those of good social standing. But how should someone's reputation be updated as we observe their social behavior, and when will a population converge on a shared norm for judging behavior? Here, we develop a mathematical model of cooperation conditioned on reputations, for a population that is stratified into groups. Each group may subscribe to a different social norm for assessing reputations and so norms compete as individuals choose to move from one group to another. We show that a group initially comprising a minority of the population may nonetheless overtake the entire population, especially if it adopts the Stern Judging norm, which assigns a bad reputation to individuals who cooperate with those of bad standing. When individuals do not change group membership, stratifying reputation information into groups tends to destabilize cooperation, unless individuals are strongly insular and favor in-group social interactions. We discuss the implications of our results for the structure of information flow in a population and for the evolution of social norms of judgment.
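Stern Judging, the norm singled out above, is a second-order rule small enough to state in one line. This is the standard textbook formulation, not code from the paper.

```python
def stern_judging(donated, recipient_good):
    """Stern Judging: cooperating with a good recipient or defecting
    against a bad one earns a good reputation; cooperating with the
    bad, or defecting against the good, earns a bad one."""
    return donated == recipient_good

# The norm punishes indiscriminate helpers: cooperating with someone
# of bad standing is itself judged bad.
```

Its strictness, judging helpers of the bad as bad themselves, is what makes Stern Judging both powerful at enforcing the norm and prone to disagreement under private, noisy assessment.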
Affiliation(s)
- Taylor A. Kessinger
  - Department of Biology, University of Pennsylvania, Philadelphia, PA 19104
- Corina E. Tarnita
  - Department of Ecology and Evolutionary Biology, Princeton University, Princeton, NJ 08544
- Joshua B. Plotkin
  - Department of Biology, University of Pennsylvania, Philadelphia, PA 19104
  - Center for Mathematical Biology, University of Pennsylvania, Philadelphia, PA 19104
10
Li Q, Li S, Zhang Y, Chen X, Yang S. Social norms of fairness with reputation-based role assignment in the dictator game. Chaos 2022; 32:113117. PMID: 36456315. DOI: 10.1063/5.0109451.
Abstract
A vast body of experiments share the view that social norms are major factors in the emergence of fairness in a population of individuals playing the dictator game (DG). Recently, the question of which social norms are conducive to sustaining cooperation has attracted considerable attention. Thus far, however, few studies have investigated how social norms influence the evolution of fairness by means of indirect reciprocity. In this study, we propose an indirect reciprocity model of the DG in which an individual can be assigned the dictator role on account of its good reputation. We investigate the "leading eight" norms and all second-order social norms by a two-timescale theoretical analysis. We show that when role assignment is based on reputation, four of the "leading eight" norms, including Stern Judging and Simple Standing, lead to a high level of fairness, which increases with the selection intensity. Our work also reveals that elevating the level of fairness requires not only making fair splits with good recipients but also distinguishing justified from unjustified unfair splits.
Affiliation(s)
- Qing Li
  - Key Laboratory of Knowledge Automation for Industrial Processes of Ministry of Education, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Songtao Li
  - Key Laboratory of Knowledge Automation for Industrial Processes of Ministry of Education, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Yanling Zhang
  - Key Laboratory of Knowledge Automation for Industrial Processes of Ministry of Education, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Xiaojie Chen
  - School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Shuo Yang
  - Key Laboratory of Knowledge Automation for Industrial Processes of Ministry of Education, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
11
Qian J, Sun X, Zhang T, Chai Y. Authority or autonomy? Exploring interactions between central and peer punishments in risk-resistant scenarios. Entropy (Basel) 2022; 24:1289. PMID: 36141176. PMCID: PMC9497953. DOI: 10.3390/e24091289.
Abstract
Game theory provides a powerful means to study human cooperation and better understand cooperation-facilitating mechanisms in general. In classical game-theoretic models, an increase in group cooperation steadily increases individuals' gains, implying that individual gains are a continuously varying function of the cooperation rate. However, this is inconsistent with the increasing number of risk-resistant scenarios in reality. In a risk-resistant scenario, if a group fails to resist the risk, all individuals lose their resources, as when a community copes with COVID-19 or a village resists a flood. In other words, individuals' gains are a segmented function of the collaboration rate. This paper builds a risk-resistant model to explore whether punishment still promotes collaboration when people resist risk together. The results show that central and peer punishment can both encourage collaboration, but with different characteristics under different risk-resistant scenarios. Specifically, central punishment constrains the collaboration motivated by peer punishment regardless of risk, while peer punishment limits the collaboration induced by central punishment only when the risk is high. Our findings provide insights into the balance between peer punishment arising from public autonomy and central punishment arising from central governance, and the proposed model paves the way for the development of richer risk-resistant models.
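The segmented payoffs that distinguish risk-resistant scenarios can be sketched as a threshold game. The threshold, endowment, and cost values below are illustrative assumptions, not the paper's parameters.

```python
def risk_resistant_payoffs(contributes, threshold, endowment=1.0, cost=0.4):
    """If fewer than `threshold` members contribute, the group fails to
    resist the risk and everyone loses everything; otherwise each member
    keeps the endowment minus their contribution cost. Payoffs are thus
    a segmented, not continuous, function of the cooperation rate."""
    if sum(contributes) < threshold:
        return [0.0] * len(contributes)
    return [endowment - (cost if c else 0.0) for c in contributes]

group = [True, True, False, False]
failed = risk_resistant_payoffs(group, threshold=3)    # risk not resisted
resisted = risk_resistant_payoffs(group, threshold=2)  # risk resisted
```

The discontinuity at the threshold is exactly what breaks the classical assumption of continuously varying gains: a single extra defector can flip everyone's payoff to zero.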
Affiliation(s)
- Jun Qian
  - National Engineering Laboratory for E-Commerce Technologies, Department of Automation, Tsinghua University, Beijing 100084, China
- Xiao Sun
  - National Engineering Laboratory for E-Commerce Technologies, Department of Automation, Tsinghua University, Beijing 100084, China
- Tongda Zhang
  - Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Yueting Chai
  - National Engineering Laboratory for E-Commerce Technologies, Department of Automation, Tsinghua University, Beijing 100084, China
12
Lee S, Murase Y, Baek SK. A second-order stability analysis for the continuous model of indirect reciprocity. J Theor Biol 2022; 548:111202. PMID: 35752284. DOI: 10.1016/j.jtbi.2022.111202.
Abstract
Reputation is one of the key mechanisms that maintain human cooperation, but its analysis becomes complicated once we consider the possibility that reputations fail to reach consensus because of erroneous assessment. The difficulty is alleviated if we assume that reputation and cooperation do not take binary values but have continuous spectra, so that disagreement over reputation can be analysed in a perturbative way. In this work, we carry out the analysis by expanding the dynamics of reputation to the second order of perturbation, under the assumption that everyone initially cooperates with good reputation. The second-order theory clarifies the difference between Image Scoring and Simple Standing, in that punishment for defection against a well-reputed player should be regarded as good for maintaining cooperation. Moreover, comparison among the leading eight shows that the stabilizing effect of justified punishment weakens if cooperation between two ill-reputed players is regarded as bad. Our analysis thus explains how Simple Standing achieves a high level of stability by permitting justified punishment and by disregarding irrelevant information in assessing cooperation. This observation suggests which factors affect the stability of a social norm when reputation can be perturbed by noise.
Affiliation(s)
- Sanghun Lee
  - Department of Physics, Pukyong National University, Busan 48513, Republic of Korea
- Yohsuke Murase
  - RIKEN Center for Computational Science, Kobe, Hyogo 650-0047, Japan
- Seung Ki Baek
  - Department of Scientific Computing, Pukyong National University, Busan 48513, Republic of Korea
13
Perret C, Krellner M, Han TA. The evolution of moral rules in a model of indirect reciprocity with private assessment. Sci Rep 2021; 11:23581. PMID: 34880264. PMCID: PMC8654852. DOI: 10.1038/s41598-021-02677-2.
Abstract
Moral rules allow humans to cooperate by indirect reciprocity. Yet it is not clear which moral rules best implement indirect reciprocity and are favoured by natural selection. Previous studies either considered only public assessment, where individuals are deemed good or bad by all others, or compared only a subset of possible strategies. Here we fill this gap by identifying which rules are evolutionarily stable strategies (ESSs) among all possible moral rules under private assessment. We develop an analytical model describing the frequency of long-term cooperation, determining when one strategy can be invaded by another. We show that there are numerous ESSs in the absence of errors, which however cease to exist when errors are present. We identify the underlying properties of cooperative ESSs. Overall, this paper provides a first exhaustive evolutionary invasion analysis of moral rules under private assessment. Moreover, the model is extendable to incorporate higher-order rules and other processes.
Affiliation(s)
- Cedric Perret
  - Teesside University, Southfield Rd, Middlesbrough, TS1 3BX, UK
- Marcus Krellner
  - Teesside University, Southfield Rd, Middlesbrough, TS1 3BX, UK
- The Anh Han
  - Teesside University, Southfield Rd, Middlesbrough, TS1 3BX, UK
14
Santos FP, Pacheco JM, Santos FC. The complexity of human cooperation under indirect reciprocity. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200291. PMID: 34601904. DOI: 10.1098/rstb.2020.0291.
Abstract
Indirect reciprocity (IR) is a key mechanism to understand cooperation among unrelated individuals. It involves reputations and complex information processing, arising from social interactions. By helping someone, individuals may improve their reputation, which may be shared in a population and change the predisposition of others to reciprocate in the future. The reputation of individuals depends, in turn, on social norms that define a good or bad action, offering a computationally and mathematically appealing way of studying the evolution of moral systems. Over the years, theoretical and empirical research has unveiled many features of cooperation under IR, exploring norms with varying degrees of complexity and information requirements. Recent results suggest that costly reputation spread, interaction observability and empathy are determinants of cooperation under IR. Importantly, such characteristics probably impact the level of complexity and information requirements for IR to sustain cooperation. In this review, we present and discuss those recent results. We provide a synthesis of theoretical models and discuss previous conclusions through the lens of evolutionary game theory and cognitive complexity. We highlight open questions and suggest future research in this domain. This article is part of the theme issue 'The language of cooperation: reputation and honest signalling'.
Affiliation(s)
- Fernando P. Santos
  - Informatics Institute, University of Amsterdam, Science Park 904, Amsterdam 1098XH, The Netherlands
  - Department of Ecology and Evolutionary Biology, Princeton University, Princeton, USA
  - ATP-Group, Porto Salvo P-2744-016, Portugal
- Jorge M. Pacheco
  - Centro de Biologia Molecular e Ambiental and Departamento de Matemática, Universidade do Minho, Braga 4710-057, Portugal
  - ATP-Group, Porto Salvo P-2744-016, Portugal
- Francisco C. Santos
  - INESC-ID and Instituto Superior Técnico, Universidade de Lisboa, IST-Taguspark, Porto Salvo 2744-016, Portugal
  - ATP-Group, Porto Salvo P-2744-016, Portugal