# Evolving cooperation in multichannel games

### Multichannel games

We provide a full account of the applied methods and the proofs of our mathematical results in the Supplementary Information. Here we provide a summary of the considered setup and the respective findings.

In a multichannel game, a group of individuals repeatedly interacts in several independent (elementary) games, as depicted in Fig. 1. Here, we discuss the special case that the group consists of two individuals who interact in m games, where each game takes the form of a social dilemma. In the main text we describe our results for m = 2 games. Generalizations are presented in the Supplementary Information.

In each round, players decide whether to cooperate (C) or to defect (D) in each of the m games. Games are independent in the sense that a player’s one-round payoff in each game depends only on the player’s and the co-player’s action in that game, irrespective of the outcome of the other games. For each game k, we denote the possible one-round payoffs by Rk, Sk, Tk, and Pk. Here, Rk is the reward when both players cooperate, Sk is the sucker’s payoff a cooperator obtains when the co-player defects, Tk is the temptation to defect when the co-player cooperates, and Pk is the punishment payoff for mutual defection. For the game to be a social dilemma52,53, we assume that Rk > Pk (such that mutual cooperation is favored over mutual defection), and that either Tk > Rk or Pk > Sk. The prisoner’s dilemma corresponds to the case where all three inequalities are satisfied. Throughout the main text, we focus on a special case of the prisoner’s dilemma, called the donation game7. In the donation game, cooperation means paying a cost ck > 0 to transfer a benefit bk > ck to the co-player. It follows that Rk = bk − ck, Sk = −ck, Tk = bk, and Pk = 0. However, the general framework can capture arbitrary kinds of social dilemmas (Supplementary Figs. 9 and 10).
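As a minimal sketch of the payoff structure above (the helper function names are ours, not part of the model), the donation-game payoffs and the social-dilemma conditions can be written down directly:

```python
def donation_game(b, c):
    """One-round payoffs (R, S, T, P) of a donation game with benefit b > cost c > 0."""
    assert b > c > 0
    return (b - c, -c, b, 0.0)

def is_social_dilemma(R, S, T, P):
    """Social dilemma: R > P, plus at least one of T > R or P > S."""
    return R > P and (T > R or P > S)

R, S, T, P = donation_game(b=3.0, c=1.0)
assert (R, S, T, P) == (2.0, -1.0, 3.0, 0.0)
# The donation game satisfies all three inequalities, i.e. it is a full prisoner's dilemma.
assert is_social_dilemma(R, S, T, P) and T > R and P > S
```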

The players’ decisions in each round depend on the previous history of play and on the players’ strategies. To quantify the effects of strategic spillovers between different games, we distinguish two versions of multichannel games. The unlinked case (Fig. 1b) serves as a control scenario. Here, any spillovers are excluded. Each player’s action in game k may only depend on the previous history of game k. In contrast, in the linked case (Fig. 1c), a player’s action in game k may depend on the outcome of other games as well.

To make a computational analysis feasible, we suppose players are restricted to strategies of some given complexity. Throughout most of the main text, we assume players use reactive strategies. That is, their actions in any given round may depend on their co-player’s actions in the previous round, but they are independent of all other aspects. In the unlinked case, we define reactive strategies as the elements of the set

$$\mathcal{R}_U=\left\{\mathbf{p}=\left(p_{a_1}^{1},p_{a_2}^{2},\ldots,p_{a_m}^{m}\right)_{a_k\in\{C,D\},\,k\in\{1,\ldots,m\}}\ \middle|\ p_{a_k}^{k}\in[0,1]\ \text{for all }k\right\}.$$

(6)

Here, $p_{a_k}^{k}$ is the player’s cooperation probability in game k, which depends on which action $a_k\in\{C,D\}$ the co-player has chosen in the previous round of that game. For m = 2, the elements of $\mathcal{R}_U$ take the form of the four-dimensional vector represented in Eq. (1). In the linked case, reactive strategies are the elements of the set

$$\mathcal{R}_L=\left\{\mathbf{p}=\left(p_{\mathbf{a}}^{k}\right)_{\mathbf{a}\in\{C,D\}^m,\,k\in\{1,\ldots,m\}}\ \middle|\ p_{\mathbf{a}}^{k}\in[0,1]\ \text{for all }k\ \text{and }\mathbf{a}\right\}.$$

(7)

Here, $p_{\mathbf{a}}^{k}$ is again the player’s conditional cooperation probability in game k. However, this time, this probability depends on the co-player’s last actions in all m games, represented by the vector $\mathbf{a}=(a_1,\ldots,a_m)\in\{C,D\}^m$. For m = 2, reactive strategies take the form of eight-dimensional vectors, as represented in Eq. (2). For the simulations, we assume that players can choose any strategy in either $\mathcal{R}_U$ (in the unlinked case) or $\mathcal{R}_L$ (in the linked case).
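To make the two strategy spaces concrete, the following sketch (our own encoding, for m = 2) stores an unlinked strategy as four conditional cooperation probabilities and a linked strategy as eight, and shows which part of the co-player’s last action profile each case may condition on:

```python
from itertools import product

# Unlinked (R_U): one probability per game and per co-player action in that game -> 4 numbers.
unlinked = {(k, a): 0.5 for k in (1, 2) for a in "CD"}
# Linked (R_L): one probability per game and per full profile a in {C,D}^2 -> 8 numbers.
linked = {(k, a): 0.5 for k in (1, 2) for a in product("CD", repeat=2)}
assert len(unlinked) == 4 and len(linked) == 8

def coop_prob_unlinked(p, k, profile):
    """Game k may only see the co-player's last action in game k itself."""
    return p[(k, profile[k - 1])]

def coop_prob_linked(p, k, profile):
    """Game k may condition on the co-player's whole last profile (a1, a2)."""
    return p[(k, tuple(profile))]
```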

In addition to reactive strategies, we have also run simulations in which players can choose among all memory-1 strategies (Fig. 4 and Supplementary Fig. 7). Here the players’ actions depend on their co-player’s previous decisions and on their own previous decisions. We formally define the respective strategy spaces for the unlinked and the linked case in Supplementary Note 4. As with reactive strategies, simulations suggest that when players are able to link their games, they achieve more cooperation in both games (Fig. 4 and Supplementary Fig. 7).

We consider infinitely many rounds in the limit of no discounting. For each game k, we define the associated repeated-game payoff as the limit of the player’s average payoff per round (for the cases we consider, the existence of this limit is guaranteed). A player’s payoff in the multichannel game is defined as the sum over all her m repeated-game payoffs.

We further assume that players sometimes misimplement their intended actions. Specifically, with probability ε, a player who intends to cooperate instead defects, and conversely a player who intends to defect cooperates. In addition to making the model more realistic, implementation errors ensure that payoffs are well-defined, independent of the outcome of the very first round of the game7,23. Our simulation results are robust with respect to the exact magnitude of this error rate, provided that errors are sufficiently rare for the players’ strategies to have an impact (Supplementary Fig. 6d). For further details, see Supplementary Note 2.
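The limit-of-means payoff and the implementation error rate can be sketched together for a single donation game between two reactive players. The code below is our own illustration (not the authors’ implementation): it builds the Markov chain over last-round outcomes and averages each player’s per-round payoff over its stationary distribution.

```python
import numpy as np

def repeated_game_payoffs(p, q, b, c, eps=0.001, n_iter=5000):
    """p = (p_C, p_D): player 1's cooperation prob. after the co-player's C / D; same for q."""
    noisy = lambda x: (1 - eps) * x + eps * (1 - x)  # implementation errors flip intentions
    states = [(a1, a2) for a1 in "CD" for a2 in "CD"]  # last round's action pair
    M = np.zeros((4, 4))
    for i, (a1, a2) in enumerate(states):
        x = noisy(p[0] if a2 == "C" else p[1])  # player 1 reacts to player 2's last action
        y = noisy(q[0] if a1 == "C" else q[1])  # player 2 reacts to player 1's last action
        for j, (b1, b2) in enumerate(states):
            M[i, j] = (x if b1 == "C" else 1 - x) * (y if b2 == "C" else 1 - y)
    v = np.full(4, 0.25)
    for _ in range(n_iter):  # power iteration toward the stationary distribution
        v = v @ M
    pay1 = sum(vi * ((b if a2 == "C" else 0) - (c if a1 == "C" else 0))
               for vi, (a1, a2) in zip(v, states))
    pay2 = sum(vi * ((b if a1 == "C" else 0) - (c if a2 == "C" else 0))
               for vi, (a1, a2) in zip(v, states))
    return pay1, pay2

# Two mutual cooperators earn b - c per round; two defectors earn 0.
assert abs(repeated_game_payoffs((1, 1), (1, 1), b=2.0, c=1.0, eps=0.0)[0] - 1.0) < 1e-9
assert abs(repeated_game_payoffs((0, 0), (0, 0), b=2.0, c=1.0, eps=0.0)[0]) < 1e-9
```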

### Evolutionary dynamics

To model the evolution of strategies over time, we consider a pairwise comparison process41,42 in a population of size N. Each player interacts with every other population member in the respective multichannel game. A player’s payoff in the population game is defined as her average payoff across all multichannel games she participates in.

To consider the most stringent case for the evolution of cooperation, initially each player adopts the strategy ALLD. That is, for any outcome of the previous round, each player’s conditional cooperation probability is zero. Then, in each time step of the simulation, one population member is chosen at random to update her strategy. There are two different updating methods. With probability μ (referred to as the mutation rate), the chosen player engages in random strategy exploration. In that case, the player randomly picks a new strategy from the set of all available strategies (for reactive strategies, this set is $\mathcal{R}_U$ in the unlinked case and $\mathcal{R}_L$ in the linked case; for memory-1 strategies, the respective sets are defined analogously).

Alternatively, with probability 1 − μ, the chosen player picks a random role model from the population. If the focal player’s payoff is πF and the role model’s payoff is πR, the focal player adopts the role model’s strategy with probability54

$$\rho=\frac{1}{1+\exp[-s(\pi_R-\pi_F)]}.$$

(8)

The parameter s ≥ 0 is called the strength of selection55. It reflects the extent to which the focal player aims to achieve higher payoffs when updating her strategy. If s = 0, payoffs are irrelevant and imitation occurs at random. In the other limit, s → ∞, a player always imitates a role model with a higher payoff.

Over time, the interaction of random strategy exploration and imitation yields an ergodic process on the space of all possible population compositions. For our simulations, we implement this process in the limit of rare mutations, μ → 0, which allows for an easier computation of the dynamics43,44,45,46. The respective code is provided in Supplementary Note 5. As illustrated in Supplementary Fig. 6c, we obtain similar results for larger mutation rates, provided mutations are not too common compared to imitation events.
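In the rare-mutation limit, the population is homogeneous whenever a new mutant arises, so the dynamics reduce to a sequence of fixation events. As a sketch (our own code, not the authors’, assuming the standard birth-death closed form for pairwise comparison processes), the fixation probability of a single mutant can be computed as:

```python
import math

def fixation_probability(pi_M, pi_R, N, s):
    """pi_M(j), pi_R(j): payoffs of mutant and resident when j mutants are present.

    Uses the standard closed form rho = 1 / (1 + sum_k prod_{j<=k} gamma_j), where
    gamma_j = exp(-s * (pi_M(j) - pi_R(j))) is the ratio of down/up transition rates.
    """
    total, prod = 1.0, 1.0
    for j in range(1, N):
        prod *= math.exp(-s * (pi_M(j) - pi_R(j)))
        total += prod
    return 1.0 / total

# Neutral mutants fix with probability 1/N; advantageous mutants fix more often.
assert abs(fixation_probability(lambda j: 1.0, lambda j: 1.0, N=10, s=1.0) - 0.1) < 1e-12
assert fixation_probability(lambda j: 2.0, lambda j: 1.0, N=10, s=1.0) > 0.1
```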

### Analytical results for reactive strategies

To complement our numerical simulations, we have mathematically characterized three different classes of Nash equilibria when each game k is a donation game. A strategy p is a Nash equilibrium if no player has an incentive to deviate when every other player adopts p. We note that deviations need to be interpreted broadly: for a strategy to be a Nash equilibrium, no other strategy may yield a higher payoff, not even a strategy of higher complexity than p. We call a strategy self-cooperative in game k if its cooperation rate against itself in game k approaches one in the limit of no errors. Similarly, the strategy is self-defective in game k if the respective cooperation rate approaches zero. Based on these notions, we define partners, semi-partners, and defectors as follows. A strategy is a partner if it is a Nash equilibrium and self-cooperative in all games k. Similarly, a strategy is a defector if it is a Nash equilibrium and self-defective in every game. Finally, a strategy is a game k semi-partner if it is a Nash equilibrium and self-cooperative in game k but self-defective in all other games.

Within the space of reactive strategies, we can characterize the partners, semi-partners, and defectors in the linked case as follows. To simplify notation, we introduce an indicator variable $e_{\mathbf{a}}^{k}$. Its value is one if the k-th entry of the co-player’s action profile $\mathbf{a}=(a_1,\ldots,a_m)$ is C, and it is zero otherwise. Using this notation, we obtain (for details, see Supplementary Note 3, Propositions 1–3):

1. A strategy $\mathbf{p}\in\mathcal{R}_L$ that is self-cooperative in each game k is a partner if and only if $\sum_{k=1}^{m} b_k\,(1-p_{\mathbf{a}}^{k})\ \ge\ \sum_{k=1}^{m} c_k\,(1-e_{\mathbf{a}}^{k})$ for all co-player action profiles $\mathbf{a}\in\{C,D\}^m$.

2. A strategy $\mathbf{p}\in\mathcal{R}_L$ that is self-defective in each game k is a defector if and only if $\sum_{k=1}^{m} b_k\,p_{\mathbf{a}}^{k}\ \le\ \sum_{k=1}^{m} c_k\,e_{\mathbf{a}}^{k}$ for all co-player action profiles $\mathbf{a}\in\{C,D\}^m$.

3. A strategy $\mathbf{p}\in\mathcal{R}_L$ that is self-cooperative in game k but self-defective in all other games is a game k semi-partner if and only if $b_k\,(1-p_{\mathbf{a}}^{k})-c_k\,(1-e_{\mathbf{a}}^{k})\ \ge\ \sum_{l\ne k} b_l\,p_{\mathbf{a}}^{l}-\sum_{l\ne k} c_l\,e_{\mathbf{a}}^{l}$ for all co-player action profiles $\mathbf{a}\in\{C,D\}^m$.

In the case of m = 2, the condition for partners simplifies to condition (5) in the main text. The above results are also illustrated in Supplementary Fig. 2.
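The partner condition for the linked case (Proposition 1 above) is easy to check mechanically. The sketch below is our own code, with a hypothetical strategy encoding (game indices are 0-based, profiles are tuples of 'C'/'D'):

```python
from itertools import product

def is_partner_linked(p, b, c):
    """Check: sum_k b_k*(1 - p_a^k) >= sum_k c_k*(1 - e_a^k) for every profile a."""
    m = len(b)
    for a in product("CD", repeat=m):
        punishment = sum(b[k] * (1 - p[(k, a)]) for k in range(m))
        gain = sum(c[k] for k in range(m) if a[k] == "D")  # c_k * (1 - e_a^k)
        if punishment < gain:
            return False
    return True

# For m = 1 this recovers Generous Tit-for-Tat: p_D = 1 - c/b is exactly the boundary.
gtft = {(0, ("C",)): 1.0, (0, ("D",)): 0.5}
assert is_partner_linked(gtft, b=[2.0], c=[1.0])
too_generous = {(0, ("C",)): 1.0, (0, ("D",)): 0.6}
assert not is_partner_linked(too_generous, b=[2.0], c=[1.0])
```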

Similarly, we can characterize partners, semi-partners, and defectors among the reactive strategies for the unlinked case (for details, see Supplementary Note 3, Proposition 4):

1. A strategy $\mathbf{p}\in\mathcal{R}_U$ that is self-cooperative in each game k is a partner if and only if $p_{D}^{k}\le 1-c_k/b_k$ for all games k.

2. A strategy $\mathbf{p}\in\mathcal{R}_U$ that is self-defective in each game k is a defector if and only if $p_{C}^{k}\le c_k/b_k$ for all games k.

3. A strategy $\mathbf{p}\in\mathcal{R}_U$ that is self-cooperative in game k and self-defective in all other games is a game k semi-partner if and only if $p_{D}^{k}\le 1-c_k/b_k$ and $p_{C}^{l}\le c_l/b_l$ for all l ≠ k.

For the special case of m = 2 games, the respective condition for partners yields condition (4) in the main text. Supplementary Fig. 1 provides a graphical illustration. As one may expect, when there is only m = 1 game, the respective conditions in the linked case coincide with the respective conditions for the unlinked case. In particular, the condition for partner strategies yields a maximum cooperation rate after defection of $p_{D}^{k}=1-c_k/b_k$, which recovers the value of the classical Generous Tit-for-Tat strategy3,4. We can also use the above conditions for partners, semi-partners, and defectors to calculate how abundant the respective strategies are among all reactive strategies. This calculation confirms that for most parameter values, partners are more abundant when games are linked (see Supplementary Fig. 3 and Supplementary Note 3 for details).
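The unlinked conditions are simple per-game threshold checks; the sketch below (our own code and example parameters) also shows that Generous Tit-for-Tat sits exactly on the partner boundary:

```python
def is_partner_unlinked(pC, pD, b, c):
    """Partner in R_U: p_D^k <= 1 - c_k/b_k in every game k (given self-cooperation)."""
    return all(pD[k] <= 1 - c[k] / b[k] for k in range(len(b)))

def is_defector_unlinked(pC, pD, b, c):
    """Defector in R_U: p_C^k <= c_k/b_k in every game k (given self-defection)."""
    return all(pC[k] <= c[k] / b[k] for k in range(len(b)))

b, c = [3.0, 2.0], [1.0, 1.0]
# Generous Tit-for-Tat per game: maximum generosity p_D^k = 1 - c_k/b_k.
gtft_pD = [1 - ck / bk for bk, ck in zip(b, c)]  # [2/3, 1/2]
assert is_partner_unlinked([1.0, 1.0], gtft_pD, b, c)
assert not is_partner_unlinked([1.0, 1.0], [0.7, 0.6], b, c)  # too generous after D
assert is_defector_unlinked([0.2, 0.3], [0.0, 0.0], b, c)     # 0.2 <= 1/3, 0.3 <= 1/2
```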

### Analytical results for memory-1 strategies

The simulations for memory-1 players in Fig. 4 suggest that in the unlinked case, players establish little cooperation when bk < 2ck. In contrast, in games with bk > 2ck, cooperation seems to be maintained with the strategy Win-Stay Lose-Shift (WSLS). A player with that strategy cooperates if and only if either both players cooperated in the previous round of the respective game, or neither did. In the linked case, evolving strategies resemble a different strategy, which we term CIC. A player with this strategy uses the same action in all games within a given round. That action is cooperation if and only if, in every game, both players chose the same action in the previous round; otherwise the player defects.

We can characterize for which parameter values bk and ck these two strategies are subgame perfect equilibria. A subgame perfect equilibrium is a refinement of the Nash equilibrium: players are required not to have an incentive to deviate after any previous history of play56. We obtain the following conditions (Supplementary Note 4, Proposition 5).

1. WSLS is a subgame perfect equilibrium if and only if bk ≥ 2ck for all k.

2. CIC is a subgame perfect equilibrium if and only if ∑kbk ≥ 2∑kck.

The two conditions again reflect one reason why full cooperation is easier to sustain in the linked case. Unlinked strategies like WSLS require that the benefit satisfies bk ≥ 2ck in every single game. In contrast, in the linked case, CIC only requires that this condition is met on average, across all games. In particular, players may use cooperation in high-benefit games (with bk > 2ck) as a means to achieve cooperation in low-benefit games (with bk < 2ck).
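The contrast between the per-game and the on-average condition can be made explicit with a short check (our own sketch; the example parameters are hypothetical):

```python
def wsls_is_spe(b, c):
    """Unlinked WSLS: needs b_k >= 2*c_k in every single game."""
    return all(bk >= 2 * ck for bk, ck in zip(b, c))

def cic_is_spe(b, c):
    """Linked CIC: only needs sum(b) >= 2*sum(c), i.e. the condition on average."""
    return sum(b) >= 2 * sum(c)

# A high-benefit game can compensate a low-benefit one only in the linked case:
b, c = [3.0, 1.5], [1.0, 1.0]  # game 2 has b < 2c
assert not wsls_is_spe(b, c)
assert cic_is_spe(b, c)        # 4.5 >= 4.0
```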

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.