Real economy


Here’s one iconic symbol we ought to be turning to. You can read all about it in Kate Raworth’s blog and book on Doughnut Economics. The doughnut is an icon well suited to “accounting for what really counts” (the slogan of gnusystems).

Economists and politicians, including our Prime Minister, are still chanting the old mantra of economic “growth” as if it were the panacea which would solve all our problems and improve all our lives. But as soon as you ask what the purpose of an economic system is, as Kate Raworth did, you see that growth does not always serve that purpose, and sometimes works against it. And what’s more, the politicians who have relied on this mantra to manufacture consent for their programs have used it mainly to increase the gap between rich and poor.

Kate’s latest blog post presents the choice between economic “paradigms” in its simplest terms. The old one is based on the belief that people are greedy, insatiable and competitive. The new one is based on the belief that “people are greedy and generous, competitive and collaborative – and it’s possible to nurture human nature.” You’re invited to decide which belief you want to live by.

Perceptipation

[diagram: the meaning cycle]

… we perceive what we are adjusted for interpreting …

— Peirce, EP2:229, CP 5.185

Perception is an act of imagination based upon the available information.

— Frank H. Durgin (2002, 88)

The neural patterns and the corresponding mental images of the objects and events outside the brain are creations of the brain related to the reality that prompts their creation rather than passive mirror images reflecting that reality.

— Damasio (2003, 198-9)

The world appears to us to contain objects and events. This way of looking at the world is so basic as to seem to be a consequence of the way the individual human central nervous system develops in its very early stages. Yet our stimulus world is not partitioned in this way, and certainly not uniquely partitioned in this way.

— Mark Turner (1991, 60)

Whatever we call reality, it is revealed to us only through the active construction in which we participate.

The simple fact is that no measurement, no experiment or observation is possible without a relevant theoretical framework.

— D.S. Kothari, cited in Prigogine and Stengers 1984, 293

Modeling morality

Computer models of game theory have been used to investigate the evolution of moral principles and precepts. A famous example is Axelrod’s computer tournament which modelled the ‘prisoner’s dilemma’ situation, in which the player has to decide (at each point in a series of interactions with another player) whether to ‘cooperate’ or ‘defect.’ Various algorithms were submitted and interacted with each other; then the scores were totalled up, and the strategy which scored highest in the long run was declared the winner. One outcome of this modeling process was a set of four precepts generally followed by the strategies that performed best in the tournament (Barlow 1991, 132):

  • Don’t be envious.
  • Don’t be the first to defect.
  • Reciprocate both cooperation and defection.
  • Don’t be too clever.

These resemble some familiar religious precepts, although the third contradicts the Christian injunction to ‘turn the other cheek’ and Chapter 49 of the Tao Te Ching:

To the good I act with goodness;
To the bad I also act with goodness:
Thus goodness is attained.
To the faithful I act with faith;
To the faithless I also act with faith:
Thus faith is attained.

— tr. Ch‘u Ta-Kao

But then these guidance systems would probably define ‘winning’ differently from the rules of Axelrod’s tournament. Besides, subsequent tournaments made it clear that which strategy wins depends on what other strategies are in the game, which ones are dominant and which marginal. The implication is that reducing your strategy to an algorithm is not in itself a viable strategy, though it might be useful for modeling how strategies evolve.
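
To make the mechanics concrete, here is a minimal round-robin sketch in Python. It is not Axelrod's tournament code: the strategies, round count, and payoff bookkeeping (using the standard 3/0/5/1 values) are chosen only for illustration. Running it over different pools of strategies shows how the 'winner' depends on who else is playing.

```python
# Toy iterated prisoner's dilemma tournament (illustrative sketch only,
# not Axelrod's code). Payoffs: mutual cooperation 3, mutual defection 1,
# lone defector 5, lone cooperator 0.
from itertools import combinations

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return 'C' if not their_hist else their_hist[-1]   # reciprocate the last move

def grudger(my_hist, their_hist):
    return 'D' if 'D' in their_hist else 'C'           # never forgive a defection

def always_defect(my_hist, their_hist):
    return 'D'

def always_cooperate(my_hist, their_hist):
    return 'C'

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

def tournament(strategies):
    totals = {s.__name__: 0 for s in strategies}
    for s1, s2 in combinations(strategies, 2):   # every pairing, once
        sc1, sc2 = play(s1, s2)
        totals[s1.__name__] += sc1
        totals[s2.__name__] += sc2
    return totals

# With retaliators in the pool, unconditional defection finishes last ...
print(tournament([tit_for_tat, grudger, always_defect]))
# ... but given an unconditional cooperator to exploit, it finishes first.
print(tournament([tit_for_tat, always_cooperate, always_defect]))
```

In this toy setup the same defecting strategy comes last in one pool and first in another, which is the point: a strategy's score is relative to the field it plays in.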

Loops within loops

What really matters is the complex reciprocal dance in which the brain tailors its activity to a technological and sociocultural environment, which—in concert with other brains—it simultaneously alters and amends. Human intelligence owes just about everything to this looping process of mutual accommodation.

— Clark (2003, 87)

Within the self-world or system/environment loop is the brain-body loop, and within that, other loops:

The proprioceptive and interoceptive loops are closed outside the brain but inside the body. The preafferent loops are within the brain, updating the sensory cortices to expect the consequences of incipient actions.

— diagram in Freeman 2000, 222 (see also Clark 2003, 106)

These provide a preconscious form of anticipation which works faster than sensorimotor loops. The nervous system guides its body by implicitly comparing these expectations with the real consequences of action as they are sensed. In dreaming, the sensing of external reality is cut off from this loop, so the sensation of (for instance) flying is experienced as actual (Hobson 2002, 27-8). When reality checks are temporarily left out of the loop, fantasy and discovery alike find their way into the cultural universe, for some newly imagined forms later turn out to inform the external world.
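
The comparison of expected with sensed consequences can be sketched as a toy loop in code. This is a cartoon, not a model of Freeman's neurodynamics; the forward_model and world functions and their toy arithmetic are invented purely to show where the 'reality check' sits and what happens when it is cut out of the loop, as in dreaming.

```python
# Cartoon of a predictive ("preafferent") loop: before the consequences of an
# action arrive through the senses, an internal forward model has already told
# the sensory side what to expect. All names and dynamics here are invented
# for illustration.

def forward_model(state, action):
    """Predict the sensory consequence of an incipient action."""
    return state + action            # toy expectation: position changes by the action

def world(state, action):
    """What actually happens in the environment (with a little friction)."""
    return state + 0.8 * action

def step(state, action, dreaming=False):
    expected = forward_model(state, action)          # preafferent expectation
    sensed = expected if dreaming else world(state, action)
    error = sensed - expected                        # the reality check
    return sensed, error

state, action = 0.0, 1.0
awake = step(state, action, dreaming=False)   # (0.8, -0.2): expectation corrected by sense
asleep = step(state, action, dreaming=True)   # (1.0,  0.0): the imagined outcome is 'experienced'
print(awake, asleep)
```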

Inductive analogizing

Despite the limitations of computers as models of brain dynamics, models in the form of software that can be run on a computer have advanced the understanding of how the human mind works; see Holland (2003), for example. The ‘Copycat’ model was developed by Douglas Hofstadter and Melanie Mitchell to explore the kind of computations that can come up with creative analogies (Hofstadter and FARG 1995). This program did not try to simulate the human mind or brain as a whole, but instead explored one aspect of human mentality (analogy-making) in a very limited domain, a ‘microworld.’ Copycat comes up with solutions to analogy-making problems, such as this one: ‘I change efg into efw. Can you change ghi in a similar way?’ The program tackles the problem not once but hundreds of times, and the solution it arrives at is not predictable for any specific run. When statistics are compiled on the range of its answers, they often match very closely the statistics on the range of answers that a sample of human subjects arrive at when presented with the same problem. This is the empirical evidence that the way Copycat makes analogies is analogous to the way humans do it. But what makes the discussion of this working model so fascinating is that it provides a fresh context for familiar psychological concepts (such as ‘reifying’).
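
A toy sketch, which is emphatically not Copycat itself, may help to show the two computational points at stake: each run takes a weighted chance among candidate interpretations (the 'calculated risks' discussed in the passage quoted below), and it is the statistics over many runs that get compared with human answers. The candidate rules and their weights here are invented for illustration.

```python
# Not Copycat, just a toy: on each run the program takes a weighted chance on
# one reading of 'efg -> efw', and statistics are compiled over many runs.
import random
from collections import Counter

def candidate_answers(target='ghi'):
    # Each candidate: (answer for the target string, plausibility weight).
    # Rules and weights are invented for this sketch.
    return [
        ('ghw', 6.0),   # 'replace the last letter with w'
        ('whi', 2.0),   # 'replace every g with w'
        ('efw', 1.0),   # 'just answer efw literally'
    ]

def one_run(temperature=1.0):
    answers = candidate_answers()
    # Temperature-controlled risk-taking: a high temperature flattens the
    # weights (more exploration), a low one sharpens them (less risk).
    weights = [w ** (1.0 / temperature) for _, w in answers]
    return random.choices([a for a, _ in answers], weights=weights, k=1)[0]

stats = Counter(one_run(temperature=1.2) for _ in range(1000))
print(stats)   # frequency of each answer over 1000 runs
```

The temperature parameter stands in, very loosely, for the 'carefully controlled' degree of risk-taking described below, and the answer frequencies are the kind of statistic that gets compared with the answers of human subjects.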

For example, after a thorough explanation of the design principles involved in the model, the authors interpret their own work as follows:

The moral of all this is that in a complex world (even one with the limited complexity of Copycat’s microworld), one never knows in advance what concepts may turn out to be relevant in a given situation. The dilemma underscores the point made earlier: it is important not only to avoid dogmatically open-minded search strategies, which entertain all possibilities equally seriously, but also to avoid dogmatically close-minded search strategies, which in an ironclad way rule out certain possibilities a priori. Copycat opts for a middle way, in which it quite literally takes calculated risks all the time—but the degree of risk-taking is carefully controlled. Of course, taking risks by definition opens up the potential for disaster … But this is the price that must be paid for flexibility and the potential for creativity.

— Hofstadter and FARG (1995, 256)

The implications for human creativity and decision-making (i.e. guidance systems) should be obvious. Principles along these lines could be applied in the realms of communication (see e.g. Sperber and Wilson 1995) and interpretation of texts (see Eco 1990), as well as the ‘economy of research’ (Peirce). Of course, our guidance systems have to guide us through a macroworld, not a microworld, and therefore any models incorporated into them can’t be easily tested in isolation from other components of the system. But such models can certainly simplify the challenge of living and thus reduce the risk of information overload.

Comparadigms

We can never compare a model with reality (Chapter 9). In a crux, though, we may compare models, and switch from one to another. Thomas Kuhn emphasizes this point with reference to the history of science:

… once it has achieved the status of a paradigm, a scientific theory is declared invalid only if an alternate candidate is available to take its place. No process yet disclosed by the historical study of scientific development at all resembles the methodological stereotype of falsification by direct comparison with nature.… The decision to reject one paradigm is always simultaneously the decision to accept another, and the judgment leading to that decision involves the comparison of both paradigms with nature and with each other.

— Kuhn (1969, 77)

A paradigm or theory can be ‘compared with nature’ only in the sense that the success of its applications can be repeatedly assessed by inductive reasoning based on many observations. Such a ‘comparison’ is indirect, while the comparison of paradigms with each other can be made directly when they are represented iconically.

Likewise, on the individual level, we cannot compare the memory of an event with the event’s occurrence in real time.

… memory is a system property reflecting the effects of context and the associations of the various degenerate circuits capable of yielding a similar output. Thus, each event of memory is dynamic and context-sensitive—it yields a repetition of a mental or physical act that is similar but not identical to previous acts. It is recategorical: it does not replicate an original experience exactly. There is no reason to assume that such a memory is representational in the sense that it stores a static registered code for some act. Instead, it is more fruitfully looked on as a property of degenerate nonlinear interactions in a multidimensional network of neuronal groups. Such interactions allow a non-identical ‘reliving’ of a set of prior acts and events, yet there is often the illusion that one is recalling an event exactly as it happened.

— Edelman (2004, 52)

But such a memory is representational in the semiotic sense; it is even a paradigm of semiosis, as Peirce said: ‘The type of a sign is memory, which takes up the deliverance of past memory and delivers a portion of it to future memory.’

God knows, we learn

Neither science nor evolution progresses in the sense that we approach omniscience or a state of perfection, but only in the sense that we learn from our mistakes. We progress when we make new mistakes, which we later recognize as such when we learn from them. This – and not any measurable decrease of distance between where we are and some final destination – defines the direction in which we are heading. Learning and evolving would not be possible for an omnipotent and omniscient God, whose acts cannot have unintended consequences.

For humans, the attributes of God can only be idealized human attributes. For instance, we take the human experience of knowing, make it absolute and all-encompassing, and call it omniscience. If we didn’t start from human experience, we would have no idea what these attributes could refer to; but with it, we can imagine a kind of knowing that we know to be far beyond human capacity. We arrive at the concept of omnipotence in a similar way.

We make our God in our own image, then idealize the image by saying that God made us in His image. Our theories about the realm of the divine are likewise maps of our mystical journeys in that realm. Moshe Idel (1988, 29) makes this observation about ‘the theoretical element in Kabbalistic literature’:

Being for the most part a topography of the divine realm, this theoretical literature served more as a map than as speculative description. Maps, as we know, are intended to enable a person to fulfill a journey; for the Kabbalists, the mystical experience was such a journey. Though I cannot assert that every ‘theoretical’ work indeed served such a use, this seems to have been the main purpose of the greatest part of this literature.

Testing?

The method of trial and error is applied not only by Einstein but, in a more dogmatic fashion, by the amoeba also.

— Popper (1968, 68)

For Popper (1968), ‘the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability’ (48) by means of observations. ‘Thus science must begin with myths, and with the criticism of myths’ (66); ‘we may point out that every statement involves interpretation in the light of theories, and that it is therefore uncertain’ (55n.). ‘To put it more concisely, similarity-for-us is the product of a response involving interpretations (which may be inadequate) and anticipations or expectations (which may never be fulfilled)’ (59). Thus ‘repetition-for-us’ is ‘the result of our propensity to expect regularities and to search for them’ (60).

Observation is always selective. It needs a chosen object, a definite task, an interest, a point of view, a problem. And its description presupposes a descriptive language, with property words; it presupposes similarity and classification, which in their turn presuppose interests, points of view, and problems.

There is no measurement without a theory and no operation which can be satisfactorily described in non-theoretical terms.

— Popper 1968 (61, 82)

Most of the beliefs which actually guide practice, or determine one’s path, are not falsifiable in the way that would qualify them for ‘scientific status.’ Indeed it is doubtful whether any theory outside of the special sciences is falsifiable in that way. For Popper, such enterprises as Freudian psychoanalysis or Marxist dialectical materialism were not sciences but quasi-religions. Peirce had much the same attitude toward the kind of ‘psychical research’ current in his day; yet he considered philosophy a science, at least potentially. To do otherwise would block the road of inquiry, which would be even worse than being too credulous.

How to be conscious of consciousness

Antonio Damasio gives an account of the arising of ‘core consciousness’ which maps easily onto the gnoxic meaning-cycle diagram, with emphasis on the automatic, instantaneous and unobservable nature of the process:

Core consciousness is generated in pulselike fashion, for each content of which we are to be conscious. It is the knowledge that materializes when you confront an object [W], construct a neural pattern for it [ception], and discover automatically that the now-salient image of the object is formed in your perspective [M], belongs to you, and that you can even act on it [practice]. You come by this knowledge, this discovery as I prefer to call it, instantly: there is no noticeable process of inference, no out-in-the-daylight logical process that leads you there, and no words at all—there is the image of the thing and, right next to it, is the sensing of its possession by you.

— Damasio (1999, 126)

By the time you are conscious of a phenomenon, its Firstness (quality), Secondness (actuality) and Thirdness (mediation by your ‘perspective’) are already intrinsic to the experience, and only a later abstractive process can distinguish among them as elements of it. Damasio goes on to explain that the time scale of brain events makes them invisible to us. If it takes half a second for the brain to generate a ‘pulse’ of consciousness, then we can’t be immediately conscious of events happening faster than that; we can only model the process and then analyze it as a train of events. The same is true of processes – such as evolution – going on at higher time scales than the human focal level; we can be conscious of them only by theoretical means.

Diagrammatic experimentation

Our models of the world develop through a recursive trial-and-error process, so naturally no step in the process starts “from scratch,” although the whole process must have had a beginning. Each step in an evolutionary, developmental or growth process is represented in our modeling by an experiment on a diagram. Any such process requires continuity with variation: variation without continuity is inconceivable, and continuity without variation is inertia.

Continuity is also … the basis for Peirce’s ‘medieval’ realism with regard to the existence of real universals which refer to natural habits and the continuity of their possible instantiations. But diagrams are intimately connected to symbols, as we have seen, in the diagrammatic reasoning process. Concepts are ‘the living influence upon us of a diagram’ – this should be compared with Peirce’s basic pragmatist meaning maxim, according to which the meaning of a concept is equal to its behavioral consequences in conceivable settings. This implies that signification of a symbol is defined conditionally: ‘Something is x, if that thing behaves in such and such a way under such and such conditions’ – ‘Something is hard, if it is not scratched by a diamond.’ But this maxim, developed on the basis of a conception of scientific experimenting, is formally equal to the idea of diagrammatic experiments: the signification of the concept is the diagram of the experiment. The aim of science is to try to make such conditional definitions as diagrammatic as possible. This is the diagrammatic component in Peirce’s laconic enlightenment maxim, ‘symbols grow’: new symbols arise through diagrammatic experimentation.

— Stjernfelt 2007, 115
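
A conditional definition of this kind has the shape of a runnable test. The following sketch is only a cartoon of the point that 'the signification of the concept is the diagram of the experiment'; the scratch test and its criterion are hypothetical stand-ins, not materials science.

```python
# Peirce's conditional definition rendered as an executable test (a toy;
# the scratch_with_diamond criterion is a hypothetical stand-in).

def scratch_with_diamond(thing):
    """Hypothetical experiment: does a diamond leave a scratch on the thing?"""
    return thing.get('mohs', 0) < 10   # toy criterion standing in for the real experiment

def is_hard(thing):
    # 'Something is hard, if it is not scratched by a diamond.'
    return not scratch_with_diamond(thing)

print(is_hard({'name': 'diamond', 'mohs': 10}))   # True: not scratched under the toy criterion
print(is_hard({'name': 'talc', 'mohs': 1}))       # False: it is scratched
```

On this reading the meaning of is_hard just is the experiment it calls; to make the concept more exact is to make that experiment more diagrammatic.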