Systemic governance

Does a dictionary “govern” the meanings of words?

Its “authority” depends not on the ability of its authors to govern the community of speakers of the language, or to prescribe rules of word usage, but on their ability to accurately describe the standard usage prevailing in that community. In other words, the compilers ‘speak for’ the community as a whole, not so much to guide it as to describe the verbal aspect of its guidance system. Part of this task, however, is to recognize that some usage habits are better than others for the coherence of the guidance system and for its communicative function. Since any instance of such recognition can only be based on the cumulative experience of one language user, and is as fallible as any judgment, the ‘experts’ may disagree on which observable usages are standard and which are not.

The authors of a dictionary, by making implicit communal standards explicit, and by declaring some actual usage habits to be nonstandard (‘slang,’ ‘archaic,’ ‘rare’ etc.), are in effect prescribing usage habits for those who accept their descriptive authority. But that authority is based on the participation of the authors in the linguistic life of the whole community, not on their taking up a privileged position above it. If the dictionary is influential, the language tends to become what the authors describe – just as any cybernetic system (one self-governed by recursive or ‘feedback’ processes) develops self-control. Self-control (as opposed to remote control) is characteristic of living, semiotic and mental systems. As Gregory Bateson pointed out, ‘no part of such an internally interactive system can have unilateral control over the remainder or over any other part.’

Even in very simple self-corrective systems, this holistic character is evident. In the steam engine with a “governor,” the very word “governor” is a misnomer if it is taken to mean that this part of the system has unilateral control. The governor is, essentially, a sense organ or transducer which receives a transform of the difference between the actual running speed of the engine and some ideal or preferred speed. This sense organ transforms these differences into differences in some efferent message, for example, to fuel supply or to a brake. The behavior of the governor is determined, in other words, by the behavior of the other parts of the system, and indirectly by its behavior at a previous time.

The holistic and mental character of the system is most clearly demonstrated by this last fact, that the behavior of the governor (and, indeed, of every part of the causal circuit) is partially determined by its own previous behavior. Message material (i.e., successive transforms of difference) must pass around the total circuit, and the time required for the message material to return to the place from which it started is a basic characteristic of the total system. This behavior of the governor (or any other part of the circuit) is thus in some degree determined not only by its immediate past, but by what it did at a time which precedes the present by the interval necessary for the message to complete the circuit. Thus there is a sort of determinative memory in even the simplest cybernetic circuit.

The stability of the system (i.e. whether it will act self-correctively or oscillate or go into runaway) depends upon the relation between the operational product of all the transformations of difference around the circuit and upon this characteristic time. The “governor” has no control over these factors. Even a human governor in a social system is bound by the same limitations. He is controlled by information from the system and must adapt his own actions to its time characteristics and to the effects of his own past action.

Thus, in no system which shows mental characteristics can any part have unilateral control over the whole. In other words, the mental characteristics of the system are immanent, not in some part, but in the system as a whole.

— Gregory Bateson (1972, 315-16, his emphasis)
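Bateson’s account of the governor as one arc of a causal circuit can be sketched numerically. The following toy simulation is my own illustration, not from Bateson, and all names and parameter values are hypothetical; it shows how the same feedback circuit either self-corrects or goes into oscillating runaway, depending on the operational product of the transformations around the loop (the gain) and the circuit’s characteristic time (the delay):

```python
from collections import deque

def run_engine(gain, delay, steps=200, target=100.0, drive=5.0):
    """Simulate a governed engine. The governor senses the difference
    between actual and preferred speed, but its corrective message only
    takes effect after completing the circuit (the delay)."""
    speed = 0.0
    # Messages in transit around the circuit; initially no correction.
    pipeline = deque([0.0] * delay, maxlen=delay)
    history = []
    for _ in range(steps):
        correction = pipeline.popleft()   # message sent 'delay' steps ago
        speed += drive + correction       # engine drives on; correction applies
        error = speed - target            # the 'transform of difference'
        pipeline.append(-gain * error)    # efferent message: fuel or brake
        history.append(speed)
    return history

# Modest gain, short circuit time: self-corrective behavior.
stable = run_engine(gain=0.5, delay=1)
# Higher gain with a longer circuit time: oscillation and runaway.
unstable = run_engine(gain=1.5, delay=3)
```

With a modest gain and a one-step circuit time, the speed settles to a steady value slightly above the target (the steady-state offset of purely proportional correction); raise the gain and lengthen the delay, and the very same circuit overshoots each correction and oscillates with growing amplitude. Nothing about the governor alone decides which outcome occurs; it is a property of the whole loop.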

A beehive, for instance, is not ruled by a central authority; to imagine how it works, ‘a better image is the orderly growth of an individual body, brought about by communication between neighbouring cells. Work in the colony is organized by local communication between individuals.… global order can result from local rules’ (Maynard Smith and Szathmáry 1999, 133). Such ‘local rules’ are legisigns, general enough to govern a recurring series of interactions, and ‘local’ in the sense that their form is determined by the ‘global order’ which is the whole system’s self-control. Similarly, social order results from the implicit ‘rules’ governing local interactions among people, whether they have been legislated or not.

Self-control and the middle voice

Being a manipulative species, humans have mixed some notions of domination into the concept of control.

Heraclitus is quoted as writing, around 500 B.C., that

The wise is one, knowing the plan by which it steers all things through all.
ἓν τὸ σοφόν· ἐπίστασθαι γνώμην ὅκη κυϐερνῆσαι πάντα διὰ πάντων.

— Kahn 1979, 54

The ‘knowing’ verb, ἐπίστασθαι, usually connotes mastery (literally ‘standing over’). But since the exact form of the ‘steering’ verb κυβερνῆσαι is uncertain (see Kahn 1979, 170), we might also translate ‘the thought by which all things steer themselves through all things.’ ‘Steer themselves’ would translate κυβερνᾶται, one of the most plausible readings (according to Kahn): being in the Greek middle voice, neither active nor passive, this reading is compatible with the concepts of autopoiesis, self-organization and enaction.

Various forms of the verb κυβερνάω (Greek root of the English govern) were commonly used in a ‘standard metaphor of cosmic steering’ (Kahn 1979, 272) even before Heraclitus. In the mid-20th century, the same Greek word was used to name the new field of cybernetics.

If Heraclitus did use the middle voice, this would open up a curious connection with another discipline developing in the early 20th century, generally called phenomenology. According to Henry Corbin:

The etymological meaning of the word ‘phenomenon,’ taken in the precise technical sense of phenomenology, is very much the original meaning of the Greek word phainomenon. This is the present participle of a verb in the middle voice; i.e., the subject is manifesting, appearing, and being shown to itself and for itself. It is the middle, the medium, the medial voice of the verb.

— Corbin 1948 (1998, 24)

There could be a more-than-verbal connection between the Greek middle voice and the semiotic concept of mediation (Peirce’s Thirdness), or the ‘middle way’ of Nagarjuna and Mahayana Buddhism. But English and its close linguistic relatives lack a middle voice, so our notion of ‘appearing’ tends to split into ‘subject’ and ‘object,’ while ‘control’ tends to split into a controller (active) and a controlled (passive). Cybernetics was defined by Norbert Wiener as ‘the science of control and communication, in the animal and the machine’ (Ashby 1956, 1). Some outsiders suspected that it was more about control of the animal (and human) by mechanical means, which was not the intention of the early cyberneticists.

Three centuries earlier, Descartes had used the same nautical metaphor, but without the sense that the living body steers itself; for him the body was merely a passive mechanism, and guidance came from a separate mind or soul, the spiritual captain of the physical ship. This particular dualism has infected our thinking ever since. Intentionally or not, the word control tends to conjure up the dualistic vision of a controlling agency – captain, director, governor, dictator, boss – and a relatively passive subject (subject in the political sense of one who is governed). Once institutionalized, this becomes a domination system (Borg 2001) rather than a self-guidance system. Jean-Pierre Dupuy criticizes ‘the unfortunate choice of the very name “cybernetics”’ as ‘implying a theory of command, governance, and mastery’ (Petitot et al. 1999, 558). We can avoid some of these misconceptions by using the word guidance rather than control, since it is less suggestive of domination.

Even the notion of self-control may seem to split the whole self into two parts, with one imagined as controlling the other – although in Peirce’s usage, the continuity of semiosis involved in self-control is never in doubt.

Setting aside word choice, though, it seems that cybernetics (at least in its early days) was working with the same central ideas of closure and circularity that later found their way into autopoiesis theory and into this book.

Cybernetics might, in fact, be defined as the study of systems that are open to energy but closed to information and control – systems that are ‘information-tight’ (Ashby 1956, 4).

– or to put it another way, systems that are self-informed, ‘autonomous agents’ who are not directly controlled by external agencies.

Introducing an article about her father (Gregory Bateson) and mother (Margaret Mead), Mary Catherine Bateson provides this concise retrospective view of cybernetics:

Both my parents played important roles in the early development of cybernetics, participating for over a decade in the search for ways of thinking about the behavior of systems, their formal similarities and interactions, that could connect biology and the social sciences and inform various kinds of engineering and design … The way an organism adjusts to circumstances has similarities to the way a ‘smart’ missile stays on course, so by the time of my parents’ deaths the term had largely been usurped by engineering and computer science and had become associated in popular usage with mechanical, inhuman constructions.

— M.C. Bateson (2004, 44)

Polanyi (1962) was already associating the term with such mechanistic models. Likewise Robert Rosen (2000, Chapter 19) lumps cybernetics with information theory, ‘bionics’ and ‘artificial intelligence’ as developments of the organism-as-machine metaphor which goes back to Descartes. According to Rosen, ‘mechanical constructions’ were of the essence of these disciplines right from the beginning, because they never treated systems as complex in Rosen’s sense of the word. They were all simple because they were mechanical, whereas for Rosen ‘organism and machine are different in kind’ (2000, 295).

The difference between Rosen’s perspective and Bateson’s on this episode in history can serve to remind us that understanding what is meant by ‘complex’ in any given context is anything but simple. Rosen himself says that his own usage of the term ‘is completely different from that employed heretofore. This is unfortunate, but there were no other terms that could be used. In von Neumann’s terminology, every system is simple in my sense; what he calls complex I would merely call complicated’ (2000, 292). It was von Neumann who developed methods of quantifying ‘complexity,’ says Rosen (2000, 289), ‘and complexity in this sense became an explanatory principle for the characteristics of life’ – all of which kept ‘life’ firmly within the mechanical domain. But Rosen (2000, 303) also observes that whatever controls a system’s response to its environment amounts to a model of that environment – which brings us back to the meaning cycle.

These changing conceptions of ‘control’ and ‘complexity’ have taken yet another turn with the advent of infodynamics, linking information and thermodynamics. Salthe defines infodynamics as

the science of information changes in systems, especially in systems that are informed primarily from within. A combination of nonequilibrium thermodynamics and information theory based on the idea that, just as energy transformations lead to an increase in entropy, so do they, at least when viewed from within a system, lead to increases in information.

— Salthe (1993, 315)

The term (first used by David Depew and Bruce Weber in the late 1980s) reflects the twin sources of the concept. It differs from Shannon’s original information theory by focusing on nonequilibrium thermodynamics, a field not yet developed in Shannon’s time. While it remains a mathematical model, the ‘view from within’ or ‘internalist perspective’ embodied in infodynamics (Salthe 1993, 2004) aims to model dimensions of meaning not reflected in Shannon’s information theory.

On control

This is an era of unprecedented human impact on the environment. You could say, as do Brian Swimme and Thomas Berry (1992, 4), that ‘the human has taken over such extensive control of the life systems of the Earth that the future will be dependent on human decision to an extent never dreamed of in previous times.’ But this is a strange kind of ‘control’: we have repeatedly failed to anticipate the real consequences of our collective activity. Maybe the delusion of control, or the lust for it, is the whole problem; maybe control of one’s environment, in the absence of self-control, is a self-contradictory concept.

As individuals, we all feel the need for some degree of control. It is a component of ‘flow,’ and works as a ‘coping’ mechanism even after real control has ceased to exist. This is part of our biological heritage; a 2006 study conducted by the National Institute of Mental Health identified a ‘circuitry of resilience’ in the rat brain which functions so that ‘experiencing control over a stressor immunizes a rat from developing a depression-like syndrome when it later encounters stressors that it can’t control,’ according to the NIMH news release.

In any case, we have to live with the consequences of our decisions, and with the unpredictability of those consequences, and with the fact that – due to circumstances beyond our control – we have no choice but to make choices.

Fast fools

Unlike the genetic type of text, the external symbolic type can be reproduced, modified and shared almost at will. The symbolic species can thus develop external guidance systems. Such a system amounts to a common heritage which remains external to the individuals guided by it. Each of them can read and internalize its symbolic ‘maps,’ incorporating them as habits and expressing them in real time as actual behavior. If the maps or texts fall out of synch with current experience, they can be changed promptly and purposefully – no more waiting for natural selection to guide the development of the guidance system. In this way cultural evolution makes a jump to warp speed, so to speak, compared to its biological predecessor.

Evolution has speeded itself up before, for instance with the advent of sexual reproduction. This innovation enlarged the space for variation of the genetic code: now each new individual represented a remix of the genotype, consisting of parts drawn from two genetic texts. Space for variation (or polyversity) is a prerequisite for evolution to be guided by selection; the advent of cultural variation, mediated by symbolic coding, entails a leap into hyperspace. But it also entails the challenge of learning to navigate this greatly expanded space.

Navigation, as before, is guided from within the organic system, so an external guidance system has to be partially internalized in order to do its job. To inhabit a cultural universe, or to adapt one’s habits to it, takes time. As the technology of producing, transferring and retrieving texts improves, they proliferate far faster than they can be incorporated into our behavior. No wonder we humans are so much more bewildered than our wild cousins, who aren’t distracted by symbolic media or inundated by floods of information. But they do suffer, to the point of extinction, from the effects of human bewilderment and our proliferation.

We are bewildered because we are still wild at the biological core of our being, and the core process of all learning – including evolution itself – works by trial and error. Cultural evolution through the proliferation of external guidance systems has enormously amplified the possibilities of trial, the polyversity of success, and the effects of error. The question now is whether we can learn enough from our trials to avoid being overtaken by the consequences of our errors.

One early attempt to map the urgency of this complex situation onto a simple graphic device was the Doomsday Clock, introduced by the Bulletin of the Atomic Scientists in 1947. On this clock, ‘midnight’ stood for a nuclear holocaust, and the imminence of the danger of such a catastrophe was represented by the position of the minute hand. Starting at 7 minutes to midnight, the Clock (i.e. the minute hand) was moved forward or back every few years to indicate changes in the global situation, as seen by conscientious members of the scientific community. In January 2007 a new dimension was added: the minute hand was moved forward to 5 minutes to midnight (closer to ‘Doomsday’ than it had been since 1988), this time taking into account the threat of a gradual global-warming holocaust along with renewed dangers of a sudden nuclear catastrophe. The irony in all this is that the faculty which enables us to reduce such a complex situation to a simple symbol is the same faculty which enables us to make such a mess of the situation in the first place. By learning to map the implicit intricacy of life onto simple explicit symbols, we set the stage for artificial intervention into complex natural processes. Now we are learning how lethal such intervention can be.

How to design a guidance system

The genome is the body’s internal instruction manual for becoming what it needs to be in order to pass on the instructions. The subject of this instructional text has been ‘designed’ by the billion-year dialogue between the organism’s ancestors and their changing circumstances. But developmental and evolutionary processes are unlike expert human designers in one crucial respect: they do not look for short cuts that would reach the intended product without going through the infinitely patient dialogue process. Rather than specifying the structure of their devices to suit their intended function, they incorporate a measure of vagueness and indeterminacy, so that the intentions develop along with the organism, the ends along with the means. If the purpose (or ‘meaning’) of a life were already fully determined before it begins, nothing new could happen among the living, except maybe novel styles of failure.

Where to?

Before the beginning of guidance, you are here now, there is nowhere else. Guidance begins when a difference appears between where you are and where you are heading: then you have forward and backward, front and back, start and finish. Guidance develops as paths proliferate.

So there you are, trying to imagine a story in which you might be a character who makes a difference – or at least, even if you’re only an extra, a story with a plot, one that goes somewhere.

Molecular coupling

The structural coupling between an organism and its ecological niche is mediated by various kinds of signs. The most immediate or purely physical form of this coupling occurs at the molecular level when allosteric proteins fold into one of two possible three-dimensional structures depending on the presence of a molecular ‘partner.’ These proteins couple with complementary shapes, and this allows them to act as ‘switches’ to facilitate chemical reactions within living cells. Some of these reactions act as signals for other actions, contributing to the guidance system at the molecular level. At this level, ‘everything that gets done in an organism or by an organism is done by proteins’ (Loewenstein 1999, 72). But the system guiding the behavior of the whole organism is irreducibly semiotic.
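The two-conformation ‘switch’ logic described above can be caricatured in a few lines of code. This is a bare illustration of the idea, not a biochemical model; the class, molecule names and ‘active/inactive’ labels are all hypothetical:

```python
class AllostericProtein:
    """Toy caricature of an allosteric protein: it folds into one of two
    conformations depending on whether its molecular partner is present."""

    def __init__(self, partner):
        self.partner = partner  # the complementary shape it binds

    def conformation(self, ligands):
        # Binding the complementary partner flips the fold; in the active
        # conformation the protein can facilitate a downstream reaction.
        return "active" if self.partner in ligands else "inactive"

# The protein 'reads' its molecular context: presence or absence of the
# partner makes a difference to its shape, and hence to what it enables.
switch = AllostericProtein(partner="cyclic-AMP")
print(switch.conformation({"ATP", "cyclic-AMP"}))  # active
print(switch.conformation({"ATP"}))                # inactive
```

The point of the caricature is only that a single difference in context (partner present or absent) is transformed into a difference in structure, and thence into a difference in what happens next – the minimal form of a sign at the molecular level.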

Bottleneck

What does it take to be well guided? One principle is this:

A good guidance system must be simple enough to be decisive, and complex enough to be careful.

Simplicity is required because attention is limited. The fewer decisions you have to make consciously, and the less time it takes to make them, the better marked your path. Conscious thinking slows down your response to your situation: its one advantage is that it allows you in the long run to improve your set of habits. Your investment of time and effort – in considering possible courses of action, and turning some of them into habits through actual or anticipated practice, to the point where they become ‘second nature’ – is repaid when your body can handle now-familiar situations on its own, leaving your conscious attention free for more significant things.

Consciousness is the narrow neck of the Klein bottle of mind. Passing through this bottleneck, intention becomes the experience of conscious will, perception becomes the experience of conscious awareness of the world, and the implicit model of the world becomes an explicit description. It is a bottleneck because working memory is so limited, attention so narrowly focused and conscious decision-making so slow that very little “content” flows through it – but its emergy is high in transformity.

Evolving consciousness

Living systems are self-organizing; inquiring systems are also self-critical. All are texts which revise themselves in dialogue with their contexts. Over generations of interpretant symbols, the types of these texts evolve.

Let us inquire into the role of consciousness in this process. Thomas Metzinger begins here:

First, let’s not forget that evolution is driven by chance, does not pursue a goal, and achieved what we now consider the continuous optimization of nervous systems in a blind process of hereditary variation and selection.

— Metzinger 2009, 55

But if evolution has achieved ‘what we now consider the continuous optimization of nervous systems,’ why can’t we say that this was (and is) an intrinsic ‘goal’ of evolution, a final cause, before anyone considered it? Surely a real tendency (or intention) does not need to be consciously chosen in order to guide a process in a general direction. Why not say that a ‘goal’ of evolution is the development of guidance systems, of what Peirce calls self-control? Wouldn’t any real guidance system, no matter how primitive, have a tendency to optimize itself? After all, no process can be driven by ‘chance,’ although chance may contribute to the variation which is necessary in order for selection to operate. Nothing can be driven unless in some direction, and that directedness may itself evolve, from vague tendency to preconscious intention to conscious purpose, from natural selection to ethical inquiry.