Despite the limitations of computers as models of brain dynamics, models in the form of software that can be run on a computer have advanced the understanding of how the human mind works; see Holland (2003), for example. The ‘Copycat’ model was developed by Douglas Hofstadter and Melanie Mitchell to explore the kind of computations that can come up with creative analogies (Hofstadter and FARG 1995). This program did not try to simulate the human mind or brain as a whole, but instead explored one aspect of human mentality (analogy-making) in a very limited domain, a ‘microworld.’ Copycat comes up with solutions to analogy-making problems, such as this one: ‘I change efg into efw. Can you change ghi in a similar way?’ The program tackles the problem not once but hundreds of times, and the solution it arrives at cannot be predicted for any specific run. When statistics are compiled on the range of its answers, they often closely match the statistics on the range of answers that a sample of human subjects arrives at when presented with the same problem. This is empirical evidence that the way Copycat makes analogies is analogous to the way humans make them. But what makes the discussion of this working model so fascinating is that it provides a fresh context for familiar psychological concepts (such as ‘reifying’).
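The repeated-run procedure described above can be sketched in a few lines. The following is a minimal illustration, not Copycat itself: the candidate answers and their weights are hypothetical stand-ins for the program's stochastic dynamics, chosen only to show how an answer distribution emerges from many nondeterministic runs.

```python
import random
from collections import Counter

# Hypothetical candidate answers to "efg -> efw; ghi -> ?" with
# illustrative weights. The real Copycat derives its answer
# probabilities from its internal dynamics, not a fixed table.
CANDIDATES = {
    "ghw": 0.80,  # replace the rightmost letter with w
    "efw": 0.15,  # replace the whole string with efw
    "ghi": 0.05,  # leave the string unchanged
}

def one_run(rng):
    """A single nondeterministic 'run': sample one answer."""
    answers = list(CANDIDATES)
    weights = list(CANDIDATES.values())
    return rng.choices(answers, weights=weights, k=1)[0]

def answer_distribution(n_runs=1000, seed=0):
    """Tackle the problem many times and compile answer statistics."""
    rng = random.Random(seed)
    return Counter(one_run(rng) for _ in range(n_runs))

print(answer_distribution())
```

No single run is predictable, but the compiled frequencies are stable, which is what licenses the statistical comparison with human subjects.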
For example, after a thorough explanation of the design principles involved in the model, the authors interpret their own work as follows:
The moral of all this is that in a complex world (even one with the limited complexity of Copycat’s microworld), one never knows in advance what concepts may turn out to be relevant in a given situation. The dilemma underscores the point made earlier: it is important not only to avoid dogmatically open-minded search strategies, which entertain all possibilities equally seriously, but also to avoid dogmatically closed-minded search strategies, which in an ironclad way rule out certain possibilities a priori. Copycat opts for a middle way, in which it quite literally takes calculated risks all the time—but the degree of risk-taking is carefully controlled. Of course, taking risks by definition opens up the potential for disaster … But this is the price that must be paid for flexibility and the potential for creativity. — Hofstadter and FARG (1995, 256)
The implications for human creativity and decision-making (i.e. guidance systems) should be obvious. Principles along these lines could be applied in the realms of communication (see e.g. Sperber and Wilson 1995) and interpretation of texts (see Eco 1990), as well as the ‘economy of research’ (Peirce). Of course, our guidance systems have to guide us through a macroworld, not a microworld, and therefore any models incorporated into them cannot easily be tested in isolation from the other components of the system. But such models can certainly simplify the challenge of living and thus reduce the risk of information overload.