Computer models of game theory have been used to investigate the evolution of moral principles and precepts. A famous example is Axelrod’s computer tournament, which modelled the ‘prisoner’s dilemma’ situation: at each point in a series of interactions with another player, the player has to decide whether to ‘cooperate’ or ‘defect.’ Contestants submitted various algorithms, which were played against each other; the scores were then totalled up, and the strategy which scored highest in the long run was declared the winner. One outcome of this modelling process was a set of four precepts generally followed by the strategies that performed best in the tournament (Barlow 1991, 132):
- Don’t be envious.
- Don’t be the first to defect.
- Reciprocate both cooperation and defection.
- Don’t be too clever.
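The mechanics of a single iterated match can be sketched in a few lines of Python. The payoff values below are the standard prisoner’s dilemma matrix, and the two strategies are illustrative stand-ins rather than Axelrod’s actual entries; note how tit-for-tat, the strategy that in fact won the original tournament, embodies all four precepts:

```python
# Standard prisoner's dilemma payoffs: (my move, their move) -> my score.
# 'C' = cooperate, 'D' = defect. Values are the conventional T=5, R=3, P=1, S=0.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's last move:
    never defects first, reciprocates both cooperation and defection,
    and attempts nothing cleverer than mirroring."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    """A maximally 'envious' strategy: defect unconditionally."""
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    """Play an iterated match and return the two cumulative scores."""
    hist_a, hist_b = [], []   # each strategy sees only the opponent's history
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Against a fellow cooperator, tit-for-tat settles into mutual cooperation; against an unconditional defector it loses only the first round, then matches defection with defection for the rest of the match.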
These resemble some familiar religious precepts, although the third contradicts the Christian injunction to ‘turn the other cheek’ and Chapter 49 of the Tao Te Ching:
To the good I act with goodness;
To the bad I also act with goodness:
Thus goodness is attained.
To the faithful I act with faith;
To the faithless I also act with faith:
Thus faith is attained.
— tr. Ch‘u Ta-Kao
But then these guidance systems would probably define ‘winning’ differently from the rules of Axelrod’s tournament. Besides, subsequent tournaments made it clear that which strategy wins depends on what other strategies are in the game, which ones are dominant and which marginal. The implication is that reducing your strategy to an algorithm is not in itself viable, though algorithms can be useful for modelling how strategies evolve.
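The population-dependence of ‘winning’ can be illustrated with a minimal round-robin sketch, again assuming the standard payoff matrix and illustrative strategies: the same pair of algorithms changes rank depending on which of them dominates the field.

```python
# Sketch of a round-robin tournament over a population of strategies,
# using the conventional prisoner's dilemma payoffs. The strategies and
# population mixes are illustrative assumptions, not Axelrod's entries.
P = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
     ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tft(opp):       # tit-for-tat: cooperate first, then mirror
    return opp[-1] if opp else 'C'

def defector(opp):  # always defect
    return 'D'

def match(a, b, rounds=100):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(hb), b(ha)
        da, db = P[(ma, mb)]
        sa, sb = sa + da, sb + db
        ha.append(ma)
        hb.append(mb)
    return sa, sb

def tournament(population, rounds=100):
    """Every strategy instance plays every other once; return totals."""
    totals = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            si, sj = match(population[i], population[j], rounds)
            totals[i] += si
            totals[j] += sj
    return totals
```

In a field where cooperators predominate, tit-for-tat comes out on top; surrounded by defectors, it scores slightly worse than they score against each other, and the defectors win.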