
14· Communicoding

  1. Cosmic spaces
  2. Meaning in formation
  3. Feeling and forming
  4. Perceptual coding
  5. Implicity
  6. Social and biological codes
  7. Conventions, conversations and conversions



On earth, information has the highest transformities of the energy hierarchy. Here information is defined as the parts and relationships of something that take less resources to copy than to generate anew. Examples are the thoughts on a subject, the text of a book, the DNA code of living organisms, a computer program, a roadmap, the conditioned responses of an animal, and the set of species developed in ecological organization. Each of these takes emergy to make and maintain.
— H. T. Odum (2007, 87)

Cosmic spaces

Symbols grow; and in Peircean terms, the growth of a semiotic system involves ‘increase of information’ proceeding toward ‘the ideal state of complete information’ (EP1:54) – ‘ideal’ meaning ‘the limit which the possible cannot attain’ (EP1:52). In this respect, the life of a general idea (or sign) and the life of a person (individual or corporate) are analogous to one another, as we saw in Chapter 12. Peirce returned to this idea in his 1892 paper on ‘Man's Glassy Essence’ (EP1:350):

The consciousness of a general idea has a certain ‘unity of the ego’ in it, which is identical when it passes from one mind to another. It is, therefore, quite analogous to a person; and, indeed, a person is only a particular kind of general idea.… every general idea has the unified living feeling of a person.
‘All that is necessary’ (wrote Peirce in the next paragraph) ‘to the existence of a person is that the feelings out of which he is constructed should be in close enough connection to influence one another’; and the same applies to the self-construction of communities out of relations between individual persons. Ideally, religion is a quest for true community – first the social community within the religion itself, and ultimately with the whole universe, perhaps through faith in the unity of its Creator. Scientific inquiry, on the other hand, is ideally a communal quest for the whole Truth embodied in the universe, the cosmos in all its connectedness.

Both science and religion use signs to approach ‘the ideal state of complete information’ and the ideal state of universal community, respectively. Any symbolic system that could represent the whole Truth about a universe of discourse must first be capable of representing all general types possible in that universe. According to Giovanni Manetti, the earliest ‘foregrounded use of signs’ in human history is found in Mesopotamian divinatory tablets (Cobley 2010, 13). Any systematic divination practice is a guidance system giving us direction as to what should be done in a given type of situation. Its symbol system must therefore be capable of representing every possible type of situation in the abstract – not in all of its complex detail, but in a highly simplified form.

Consider the hexagrammatic symbol system of the ancient Chinese I Ching (or Book of Changes): its value as a guidance system depends on its mapping of meaning space being complete (closed, as the syntax of a language must be closed). It carves up the whole universe of possible situations into 64 types, each capable of variation that can be vaguely indicated by the ‘changing lines’ in the hexagram. By obtaining a hexagram, the diviner can tell the inquirer which kind of situation he is dealing with. But in order to comprehend the completeness of the symbol system, it is necessary to study diagrams which embody various arrangements of the trigrams and hexagrams, which clarify by juxtaposition the relations among the types of situation. (See for instance Thomas Cleary (1989), I Ching Mandalas.)
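The closure of this meaning space follows from simple combinatorics: a hexagram is a stack of six lines, each either broken (yin) or unbroken (yang), so the types of situation number exactly 2⁶ = 64. A minimal sketch in Python (the 0/1 encoding of yin/yang is an illustrative convention, not a traditional one):

```python
from itertools import product

# A hexagram is a stack of six lines, each yin (0, broken) or
# yang (1, unbroken), read from the bottom up.
hexagrams = list(product((0, 1), repeat=6))
assert len(hexagrams) == 64  # the map of situation-types is closed

def change(hexagram, i):
    """A 'changing line' flips line i (0 = bottom), moving the
    situation to an adjacent type within the same closed space."""
    lines = list(hexagram)
    lines[i] = 1 - lines[i]
    return tuple(lines)

# Changing the bottom line of the all-yin hexagram yields another
# of the same 64 types, never something outside the system.
assert change((0, 0, 0, 0, 0, 0), 0) in set(hexagrams)
```

The completeness the diviner relies on is just this: every possible configuration of six binary lines is already a named type, so no situation can fall outside the map.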

Another depiction of a cosmic ‘meaning space’ is the ‘jewel net of Indra,’ a symbol frequently used by the Hua-yen school of Buddhism as formulated in the seventh century C.E. According to Francis Cook's description, this ‘wonderful net’ is hung ‘in the heavenly abode of the great god Indra,’ with

a single glittering jewel in each ‘eye’ of the net, and since the net itself is infinite in dimension, the jewels are infinite in number.… If we now arbitrarily select one of the jewels for inspection and look closely at it, we will discover that in its polished surface there are reflected all the other jewels in the net, infinite in number. Not only that, but each of the jewels reflected in this jewel is also reflecting all the other jewels, so that there is an infinite reflecting process occurring. The Hua-yen school has been fond of this image, mentioned many times in its literature, because it symbolizes a cosmos in which there is an infinitely repeated interrelationship among all the members of the cosmos. This relationship is said to be one of simultaneous mutual identity and mutual intercausality.
— Cook (1977, 2)
The mutual recognition/reflection which is typical of a social structure is here extended to the whole of the universe: in this play there are no dead or inert ‘props,’ only mutually defining roles, in which each performance implies the whole drama. As ‘a vast body made up of an infinity of individuals all sustaining each other and defining each other’ (Cook 1977, 3), the Net of Indra is both a communion of subjects and a community of signs. Its form is both social and semiotic, and manifests the creative tension between individual and community that animates the arts.
An artist's individuality is manifest not only in the creation of new, unique symbols (i.e. in a symbolic reading of the non-symbolic), but also in the actualization of symbolic images which are sometimes extremely archaic. But it is the system of relationships which the poet establishes between the fundamental image-symbols which is the crucial thing. Symbols are always polysemic, and only when they form themselves into the crystal grid of mutual connections do they create that ‘poetic world’ which marks the individuality of each artist.
— Lotman (1990, 86-7)
What is this ‘crystal grid’ but the Jewel Net of Indra turned inside out?

In physics, David Bohm's theory of the implicate order (and the super-implicate order) could be read as cognate with the Hua-yen view, especially in terms of the part/whole relationship: ‘a total order is contained, in some implicit sense, in each region of space and time.’ In keeping with the etymology of ‘implicate,’ this order is ‘enfolded’ within the region (Bohm/Nichol 2003, 129). Likewise, a whole meaning space is implicit in any statement. Yet since no symbol, however complex, can occupy more than a part of that space, no sign can attain the scientific ideal of making the Whole Truth explicit. Semiosis itself involves partiality. Likewise, to be alive, and to be sentient, is to imply more than you know, more than you are; or as Deacon (2011) puts it, to be incomplete. Eihei Dogen expressed a Buddhist understanding of this in his ‘Genjokoan’:

When dharma does not fill your whole body and mind, you think it is already sufficient. When dharma fills your body and mind, you understand that something is missing. For example, when you sail out in a boat to the middle of an ocean where no land is in sight, and view the four directions, the ocean looks circular, and does not look any other way. But the ocean is neither round nor square; its features are infinite in variety. It is like a palace. It is like a jewel. It only looks circular as far as you can see at that time. All things are like this.
Though there are many features in the dusty world and the world beyond conditions, you see and understand only what your eye of practice can reach. In order to learn the nature of the myriad things, you must know that although they may look round or square, the other features of oceans and mountains are infinite in variety; whole worlds are there. It is so not only around you, but also directly beneath your feet, or in a drop of water.
(Tanahashi 2010, 31)

Meaning in formation

For an inquiring mind at least, this feeling of incompleteness expresses itself as a quest for meaning. It's as if conceptual spaces were reaching out for their content, just as all living systems reach out for consumable emergy. We take or make from ‘the world,’ the physical/cultural surround which is ready to hand or mind, whatever will suffice to fill the niches in meaning space most essential to our integrity.

Likewise in communication, people will use available words, if possible, to fill the essential niches in cultural meaning space. But symbols develop habitual attachments to specific niches which greatly constrain what they can mean. In historical time, each word that is widely used will develop a branching network of meanings; and as the various contexts in which those meanings operated are left behind, current meanings of a single word may diverge to the point where two separate meanings have nothing in common except a forgotten history. You can open the Oxford English Dictionary almost anywhere to find examples. Frans de Waal (1996, 35) mentions the case of ethology: it ‘comes from the Greek ethos, which means character, both in the sense of a person or animal and in the sense of moral qualities. Thus, in seventeenth-century English an ethologist was an actor who portrayed human characters on stage, and in the nineteenth century ethology referred to the science of building character.’ It took another hundred years to settle into its current meaning, given in Webster's as ‘the scientific study of the characteristic behavior patterns of animals.’ (In this sense, cultural anthropologists would be ethologists who specialize in human behavior patterns.)

The term ‘symbol’ itself is subject to divergent usages, as Terrence Deacon explains:

Despite superficial agreement on most points, there are significant differences in the ways that symbols and non-symbols are defined in the literature. Symbolic reference is often negatively defined with respect to other forms of referential relationships. Whereas iconic reference depends on form similarity between sign vehicle and what it represents, and indexical reference depends on contiguity, correlation, or causal connection, symbolic reference is often only described as being independent of any likeness or physical linkage between sign vehicle and referent. This negative characterization of symbolic reference—often caricatured as mere arbitrary reference—gives the false impression that symbolic reference is nothing but simple unmediated correspondence.
Consequently, the term ‘symbol’ is used in two quite dichotomous ways. In the realm of mathematics, logic, computation, cognitive science, and many syntactic theories the term ‘symbol’ refers to a mark that is arbitrarily mapped to some referent and can be combined with other marks according to an arbitrarily specified set of rules. This effectively treats a symbol as an element of a code, and language acquisition as decryption. In contrast, in the humanities, social sciences, theology, and mythology the term ‘symbol’ is often reserved for complex, esoteric relationships such as the meanings implicit in totems or objects incorporated into religious ritual performances. In such cases, layers of meaning and reference may be impossible to fully plumb without extensive cultural experience and exegesis.
This multiplicity of meanings muddies the distinction between symbolic forms of reference and other forms and also contributes to confusion about the relationship between linguistic and non-linguistic communication. Within linguistics itself, ambiguity about the precise nature of symbolic reference contributes to deep disagreements concerning the sources of language structure, the basis of language competence, the requirements for its acquisition, and the evolutionary origin of language. Thus, the problem of unambiguously describing the distinctive properties of symbolic reference as compared to other forms of reference is foundational in linguistic theory.
— Deacon (2012, 394)

The term information, which we have been using in this book since Chapter 7, is another case in point. The word is derived from a Latin root, and its earliest use in English referred to the process of forming someone's mind or character (McArthur 1992). The Peircean concept of information introduced in previous chapters further articulates this as the process of habit-formation. But the word's history took a new turn in the aftermath of World War II with the advent of information theory – a mathematical model of communication, first developed by Claude Shannon, which defines information in terms of reduction of uncertainty, and quantifies it in relation to the total number of distinct symbols in a system or elements of a code.
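Shannon's measure is compact enough to state in a few lines. A sketch of the definition, with illustrative distributions (the example probabilities are invented for demonstration):

```python
import math

def shannon_entropy(probabilities):
    """Average uncertainty, in bits, of a source emitting symbols
    with the given probabilities: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries one bit per toss; a certain outcome carries none.
assert shannon_entropy([0.5, 0.5]) == 1.0
assert shannon_entropy([1.0]) == 0.0

# Uncertainty is maximal when all symbols are equally likely:
# 8 equiprobable symbols -> log2(8) = 3 bits per symbol.
assert shannon_entropy([1/8] * 8) == 3.0
```

Note how thoroughly the measure abstracts from meaning: it depends only on the number of distinct symbols and their relative frequencies, never on what any symbol stands for.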

This theory, according to a colleague of Shannon's, ‘came as a bomb, and something of a delayed-action bomb’ (Campbell 1982, 20) – a bit like the Nag Hammadi library. As mentioned in Chapter 3, the immediate postwar period also saw the advent of cybernetics, a discipline which overlapped in some respects with information theory. Originally, both disciplines grew out of the quest for simplicity in modeling – simplicity in the sense that one abstract model can represent many different phenomena, especially different forms of communication. The early cyberneticists were looking for principles that would build useful bridges between the ‘hard’ and ‘soft’ sciences. Shannon, for his part, found a link between communication and physics with his discovery that the mathematical equations defining entropy could equally well be used to quantify information. This proved useful both for code-breaking and for engineering more efficient communication channels. But the pragmatic usefulness of information theory depends on the assumption that the coded messages sent through such channels are meaningful.

This concept of information withdraws attention from the act of meaning and its context by taking them for granted. Gregory Bateson regarded this as a misguided attempt to simplify the engineering task. ‘By confining their attention to the internal structure of the message material, the engineers believe that they can avoid the complexities and difficulties introduced into communication theory by the concept of “meaning”’ (Bateson 1972, 414). Bateson (1979) bridged the gap between the mathematical/engineering sense of information and the logical/semiotic sense by defining it as ‘any difference that makes a difference’ (1979, 250) – a definition that covers perception and learning as well as communication. ‘Making a difference’ (to a system, its behavior or its habits) is virtually synonymous with meaning something to that system – or as Peirce would put it, determining that mind or quasi-mind to an interpretant. Thus a text, as part of an external guidance system, can inform you in the old sense of forming character, because reading it can make a difference to your habits. Any perceptual judgment can ‘make a difference’ to us subjectively, or become intuitively meaningful to us, when it changes our state of bodymind or feeling.

The closure of the self-organizing process guarantees that meaning as feeling cannot be observed or measured, but we can devise measures of observable differences that make that kind of difference, and thus model how meaning arises from the encounter between sign and interpreter, or text and reader. For this we need to grasp the sense in which a text is a ‘difference.’ The simplest way to develop this sense is to start small, say with a single letter, or even a punctuation mark, in a written or printed text. It is visible because it stands out, by contrast, from the blankness of the page. Likewise, everything we perceive or conceive emerges from its background by differing from it in some perceptible way. Theoretically this ‘difference’ can be modeled with the same mathematical techniques used for modeling order and disorder (entropy) in energetics, or signal and noise in messages. Once measured, information can be thought of as a quantity or even a substance (rather than a process). But we might better call this potential information: it doesn't actually make a difference until somebody reads the sign that conveys it. Just as energy is only potential work until it is harnessed to drive a process, a symbol can do no actual semiotic work until some replica of it manifests itself as a functional part of an embodied system.

Feeling and forming

When we use the word meaning in relation to a word or symbol, it can refer to (at least) three different things:

  1. a relationship to other symbols. (When you look up the ‘meaning’ of a word in a dictionary, what you find is other words.) These relationships form a lexical space occupied by word meanings.
  2. a guidance function: the reading of a sign in context makes a difference in behavior and habit, either directly or via changes in the model at the heart of the guidance system. These changes take place in time and constitute practical or pragmatic meanings.
  3. an immediate experience of a sign as significant – the feeling of the guidance function. This occurs when a felt sense (Gendlin) recognizes or finds its formulation.
Meaning is formed in the interaction between felt experiencing and something that functions symbolically. Feeling without symbolization is blind; symbolization without feeling is empty.
— Gendlin (1962/1997, 5)

Felt experiencing (or in Damasio's phrase, ‘the feeling of what happens’) is the first-person feel for what an observer would see as the dynamics of the system. To ‘function symbolically’ is to inhabit a niche in a systemic meaning space; when this habit-space is shared, like a common language, communication is possible. It is not quite true that ‘the letter killeth but the spirit giveth life’: the two must collude, for the spirit of this very saying can express itself only by being ‘spelled out’ in a fixed set of ‘letters’ or lexemes. Otherwise it vanishes without a trace in the formless flux of endless variation, in a stream of unconsciousness. Yet that stream is the water of life itself, for it makes the difference between a process and a thing.

When we say that the meaning of a word ‘is’ a concept, we are talking about what has been formed and not about the process of forming. As noted in Chapter 4, we may experience a niche or gap in meaning space as the absence of a word or symbol which can fill it. This ‘felt sense’ has no explicit form, but its interaction with symbols is what makes them meaningful and makes pragmatic meaning explicit enough to serve as a guide. We might describe the felt sense as a niche in meaning space which is currently unnamed, or unoccupied by a long-term tenant, but is nevertheless felt to be the crux of the current situation, a powerful attractor of meaning. In the act (or event) of meaning, what was implicit becomes explicit, yet implies even more than before. When this does not happen, feeling remains formless (‘blind’) and symbols remain meaningless (‘empty’).

We learn to use language, and to mean it, by interaction with others. In order to communicate, we conform to conventions in naming things, events and acts that we can point to in consensual domains. The responses of our partners in this dance guide us in selecting and refining our descriptions of the world. But how do we name those things we can't point to? How do we learn what we are supposed to be talking about when we use words like love, conscience, mystery, faith, freedom, nature, world, presence? And how do we choose general, public names for private, individual experiences?

If you and I agree that a statement is true, we are tacitly assuming that we share a common meaning for it. But we have no way of verifying this, except to carry on the dance of conversation in a manner that we both feel to be relevant and consistent with what we've said already. Missteps in the dance can occur when we differ in our conceptual models or in our language habits, or both; sorting out these differences can be difficult. On the other hand, even if we do manage to avoid collisions, this may be due to skillful negotiation, or mere politeness, or even laziness, rather than a genuine meeting of minds.

In any case, we suppose that there is a reliable connection between our felt senses of our common situation and the meaning spaces intrinsic to our common language. This is a reasonable assumption because our linguistic habits have co-developed and co-evolved with our bodyminds. Terrence Deacon argues that ‘semiotic constraints have acted as selection pressures on the evolution of both language and brain structures.’ These constraints are neither biological nor social in themselves, but result from the way symbols work. In particular, the systemic nature of symbols determines the shape of linguistic meaning space.

Symbols implicitly indicate other symbols. This is reflected in the implicit word-word networks captured differently by a dictionary, a thesaurus, or an encyclopedia. Their relationships with one another constitute a system. This systematicity determines their possibilities of concatenation, substitution, alternation, and so forth, which constrains their useful combinations, and creates a structured space of relationships in which each becomes a marker of semantic position. But because there is also a conventional correspondence between words and things in the world, the topologies of these two ‘spaces’ (i.e. the system of word-word valence relationships and some systematization of the regularities linking certain physical objects) can potentially be mapped one to the other. The result is that ‘positional’ relationships within semantic space can be taken as corresponding to physical relationships. Symbolic reference is thus reference mediated by reference to a system, and by that system's relationship to a perceived systematicity in the world. This system embeddedness of symbols is reflected in the way linguistic utterances can still refer to objects of reference in the complete absence or nonexistence of these objects; a feature that is often called ‘displacement.’ So the combination of systematicity and indirectness of reference allows words without simple reference, and with reference that has no real-world counterpart. Symbolic reference is thus irreducibly systemic.
— Deacon (2003, 99)
Deacon goes on to explain that the ‘implicit abstract infrastructure’ of symbolic meaning spaces ‘can have real physical effects because it makes special demands on learning.’ Thus learning to use a language is learning a code: ‘symbolic reference is the archetypal encryption relationship; a fact that is evident in any attempt to interpret a foreign language or ancient script’ (Deacon 2003, 99-100). Reading is decoding, writing is encoding, and translation (or interpretation) is recoding. This broad sense of -coding was employed by Robert Rosen in his explanation of the meaning cycle, and by Bateson in formulating his principle that ‘All messages are coded’ (Bateson 1979, 235); a more Peircean equivalent might be ‘All semiosis is mediation.’ This applies even to ‘messages’ that are not intentionally ‘sent,’ such as those we receive from the environment in perception. The physical impact on our senses of light and sound waves (for instance) is transformed into neural activation patterns which can inform our guidance systems. ‘In mental process, the effects of difference are to be regarded as transforms (i.e. coded versions) of the differences which preceded them’ (Bateson 1979, 121).
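Deacon's point that each symbol becomes ‘a marker of semantic position’ in a word-word network can be sketched as a toy graph. The entries below are invented for illustration, not drawn from any real lexicon:

```python
# A toy 'dictionary': each word is defined only in terms of other
# words, so its meaning is a position in the word-word network,
# not a pointer outside it. (Illustrative entries, not a real lexicon.)
lexicon = {
    "water":  ["liquid", "clear"],
    "liquid": ["water", "flow"],
    "flow":   ["liquid", "move"],
    "move":   ["flow"],
    "clear":  ["water"],
}

# Closure: every word used in a definition has its own entry,
# as every word in a dictionary is defined by other words in it.
assert all(w in lexicon for defs in lexicon.values() for w in defs)

def semantic_position(word):
    """A word's 'position': the set of words reachable in two steps."""
    one_step = set(lexicon[word])
    two_step = {w for n in one_step for w in lexicon[n]}
    return one_step | two_step
```

Two words differ in meaning here just insofar as they occupy different positions: `semantic_position("water")` and `semantic_position("liquid")` pick out different neighborhoods of the same closed system.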

Perceptual coding

All messages are coded, but not all messages are linguistic, and not all are intentionally sent. We could say, for instance, that the retina sends coded messages to the visual processing areas of the brain. The influence of the ‘sender’ or ‘source’ on the ‘receiver’ of the message is always one of prompting:

The neural patterns and the corresponding mental images of the objects and events outside the brain are creations of the brain related to the reality that prompts their creation rather than passive mirror images reflecting that reality.
— Damasio (2003, 198-9)

And why does this code seem so transparent that we do not see it as a code at all, but rather seem to see things through it? How is it that we so effortlessly attain consensus about the nature of the reality around us? Damasio continues (2003, 200):

There is a set of correspondences, which has been achieved in the long history of evolution, between the physical characteristics of objects independent of us and the menu of possible responses of the organism.… The neural pattern attributed to a certain object is constructed according to the menu of correspondences by selecting and assembling the appropriate tokens. We are so biologically similar among ourselves, however, that we construct similar neural patterns of the same thing. It should not be surprising that similar images arise out of those similar neural patterns. That is why we can accept, without protest, the conventional idea that each of us has formed in our minds the reflected picture of some particular thing. In reality we did not.
The ‘correspondences’ here embody the structural coupling of autopoiesis theory. But given the limitations on direct observation of ‘neural patterns,’ how do we know that similar images arise out of similar patterns? This is highly plausible because we can usually point to the ‘particular thing’ in the environment to which the pattern corresponds – to its location in space, to its parts, and/or to the part it plays in the current scene. Any two people will generally do this in roughly the same way. Thus the correspondence is established by consensus, i.e. by structural coupling between ourselves, from which we infer a coupling between some external object (thing or event) and our ‘idea’ or experience of it. Language users can make this inference routinely by giving things common names; but the logical inferences we can make using language (and indeed language itself) are grounded in the deeper logic through which all sentient beings make perceptual inferences.

When the current activity of the brain is ‘perturbed’ by an event in its environment, this event can trigger a shift in the state of the whole system. The dynamic coupling of events with brain states has a physical effect which leaves the traces we call memory (see LeDoux 2002 for the microscopic details). Neither the physical form of these traces, nor the process of ‘reading’ them which we experience as remembering, bears any resemblance to the external event which the observer could describe as triggering or ‘causing’ these changes in the state of the brain.

Memories are records of how we have experienced events, not replicas of the events themselves. Experiences are encoded by brain networks whose connections have already been shaped by previous encounters with the world. This preexisting knowledge powerfully influences how we encode and store new memories, thus contributing to the nature, texture, and quality of what we will recall of the moment.
— Daniel Schacter (1996, 6)
So the internal sign here is not iconic in itself; it is the whole sign-system constituting the Innenwelt which bears an iconic or ‘modeling’ relation to the external world.

A closer look at any memory or perceptual event reveals a ‘chain’ of signs. For instance, a pattern of light triggers certain cells in the retina, which in turn send trains of impulses to the lateral geniculate nucleus of the thalamus, which ‘interprets’ that pattern and sends its own messages to the primary visual cortex, and so on – see Koch (2004) for details, which are exceedingly complex in the case of vision. Each sign triggers and informs the next; or, reading in the other direction, each is an interpretant of the preceding sign in the chain. But even if we ignore the subloops and branchings of this ‘chain,’ it is part of a guidance system, and therefore part of a loop, a meaning cycle. Thus it can be misleading to think of the ‘message’ propagated along this chain as something carried into the brain from outside. ‘There is no picture of the object being transferred from the object to the retina and from the retina to the brain’ (Damasio 1999, 321).

We commonly use the term ‘information’ as if the ‘contents’ of sense experience were ‘given’ (in Latin, data) prior to, and independent of, the process of perception. Indeed our feel for its independence, its externality, is what makes it real for us, as we have seen in Chapter 12. But however appropriate that model is for reality monitoring or consensus-building, it is not biologically realistic.

The point here is that the brain does not process ‘information’ in the commonly used sense of the word. It processes meaning. When we scan a photograph or an abstract, we take in its import, not its number of pixels or bits. The regularities that we should seek and find in patterns of central neural activity have no immediate or direct relations to the patterns of sensory stimuli that induce the cortical activity but instead to the perceptions and goals of the subjects.
— Walter Freeman (1995)

In Freeman's ‘circular causality’ model, ‘the patterns of neural activity are self-organized by chaotic dynamics’ (Freeman 1995). The process is rooted in the fact that neurons have to ‘fire’ every now and then (perhaps once a second or so), regardless of input from other neurons. If they don't act, they die; so they act spontaneously. The result is a continuous background noise even when the neural neighborhood is ‘at rest.’ This noise is called chaotic because an observer (reading it via electroencephalogram) perceives no pattern in it. But when perturbed by some ‘input,’ the local system becomes more active, and through the mutual influence of the excitatory and inhibitory processes thus triggered, patterns begin to appear and propagate themselves. Many such patterns may be going on at once in the brain, in parallel, and interact among themselves, so that the brain as a whole passes through a constantly shifting succession of complex ‘states.’ This process is the physical aspect of what William James called ‘the stream of thought’ – the semiotic medium through which all objects appear to us.
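The logistic map, a standard toy model of chaotic dynamics (not Freeman's actual neural equations), shows how one simple rule can produce either patternless ‘noise’ or a self-sustaining pattern, depending on a single parameter:

```python
def logistic_trajectory(r, x0=0.4, warmup=500, n=8):
    """Iterate x -> r*x*(1-x), discard transients, return n states."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(round(x, 6))
    return out

# At r = 4.0 the dynamics are chaotic: no state recurs, and an
# observer of the trajectory perceives no pattern in it.
chaotic = logistic_trajectory(4.0)
assert len(set(chaotic)) > 2

# At r = 3.2 the very same rule settles onto a period-2 attractor:
# a pattern that organizes and propagates itself.
patterned = logistic_trajectory(3.2)
assert len(set(patterned)) == 2
```

The point of the analogy is only this: pattern and ‘noise’ here are two regimes of one self-organizing process, not two different kinds of stuff.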

James noted that ‘however complex the object may be, the thought of it is one undivided state of consciousness’ (James 1890, I.276). Peirce's view is even more holistic: ‘the entire consciousness at any one instant is nothing but a feeling,’ and ‘a feeling is absolutely simple and without parts’ (CP 1.310, 1907). True, we may be able to analyze the structure of a static object, or the memory of a dynamic event, up to a point, by paying closer attention to it, so that it appears quite complex; but the phaneron is not even divided between subject and object, let alone divided into a number of objects. A ‘state of mind,’ like a ‘state’ of the brain (or of any living system), is an abstraction – unlike the flow of time, which we experience directly as continuous.

The dynamic flow of the thought process probably explains why the ‘stream’ strikes most people as a natural metaphor for experience or consciousness. It also explains why remembering is more like recreation than retrieval. Since the self-organizing patterns tend to entrench the physical relationships among the neurons involved, they can recur the next time a similar ‘input’ appears. However, the patterns are not ‘stored’ explicitly in the way binary data can be stored in computer memory or storage media; they are stored rather as attractors in meaning space, tendencies which recur as ‘families’ of patterns, any one of which could organize itself in response to a given triggering event. Encoding in the brain is much more analog than digital. A memorable experience probably correlates best with an attractor which organizes the ongoing interaction of a neural population into a pattern persisting long enough to be ‘felt’ as distinctive. The persistence of this pattern restructures the connectivity of the neurons involved, so that a similar pattern is more likely to recur when brain states are perturbed in some manner related to the original circumstances which laid down the ‘memory trace.’ The form of this attractor (the information ‘stored’ in it) is determined by the intrinsic and extrinsic constraints on brain dynamics, which are imposed simultaneously by ‘codes’ (internalized social and semiotic systems) and by dynamic objects.
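This attractor picture of memory can be sketched with a Hopfield-style network, a classic toy model (not a claim about actual brain anatomy): stored patterns become attractors of the dynamics, and a perturbed cue relaxes back toward the nearest one rather than being ‘retrieved’ from an address:

```python
import random

random.seed(0)  # deterministic toy run
N = 64
# Three random +/-1 'memories' to be laid down as attractors.
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(3)]

# Hebbian weights: units active together in a memory are coupled,
# entrenching the relationships among the 'neurons' involved.
W = [[sum(p[i] * p[j] for p in patterns) if i != j else 0
      for j in range(N)] for i in range(N)]

def recall(cue, steps=10):
    """Relax a perturbed cue under the network dynamics."""
    state = list(cue)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(N)) >= 0
                 else -1 for i in range(N)]
    return state

# Corrupt 5 of 64 units (a 'similar input') ...
noisy = list(patterns[0])
for i in range(5):
    noisy[i] = -noisy[i]
# ... and the whole pattern recurs: recreation, not retrieval.
assert recall(noisy) == patterns[0]
```

Note that no single weight ‘contains’ the memory; like the remembered ‘fact’ distributed over brain regions, each pattern is spread across the whole connectivity matrix.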

Implicity

The computer model of data storage is inappropriate in several ways when applied to brain dynamics. Unlike a ‘bit’ of data stored at a particular location, the physical basis of a remembered ‘fact’ is distributed over multiple regions of the brain. It is also misleading to speak of neural ‘programs,’ because the distinction between ‘data’ and ‘software’ is of little or no use in describing brain dynamics. But in practice, the computer has been very useful in the process of developing better theoretical models. Its computational speed has enabled us to better understand how complex order emerges from chaos, not only in brains but in the macrocosm as well. For instance, we can model the evolutionary process of natural selection in terms of attractors in a ‘fitness landscape’ (Gell-Mann 1994, 249). And perhaps along this path we can gather some clues as to how or why that process got started in the first place.
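The idea of attractors in a 'fitness landscape' can be sketched in a few lines. This is a toy illustration, not Gell-Mann's actual model; the target string, population size and mutation rate are arbitrary assumptions. Random mutation plus selection pulls the population toward the peak of the landscape as toward an attractor.

```python
import random

# A toy single-peak 'fitness landscape': mutation plus selection draws
# the population toward the peak. All parameters are arbitrary choices
# for illustration, not Gell-Mann's model.
random.seed(1)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]        # the peak of this toy landscape

def fitness(genome):
    """Higher when closer to the peak; this defines the landscape's shape."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    # keep the fitter half unchanged, refill with mutated copies of it
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

pop.sort(key=fitness, reverse=True)
print(fitness(pop[0]))    # the best genome has climbed to or near the peak
```

The peak behaves as an attractor in exactly the sense used above: many different starting populations converge on the same region of the space.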

Natural languages (and other semiotic systems) share the autopoietic mode of organization with other living organisms. Before ‘autopoiesis’ was chosen to name this mode, English words descended from the Greek verb ποιέω were used only in reference to the art of poetry. But the ‘making’ of a poem is only a special case of poiesis, just as conscious ‘making’ is a special case of the ‘being and doing’ of an organic system. As we will see in other cases, the name of an artifice, or an art that takes conscious effort, stands in for a whole range of behavior, most of which is spontaneous, unconscious, ‘natural.’ We do this kind of cross-naming by association constantly, whether or not we know the technical term for it (metonymy) – so this in itself demonstrates the way natural languages self-organize. Our knowledge of their ‘rules’ is mostly implicit, as is most of our cognition generally. This implicit knowledge works transparently (recall Chapter 2); if we try to make it explicit – that is, to make an external map of it – we often end up with something that looks exceedingly complicated. What's easy to simply do is often hard to explain simply: in other words the ‘simplicity’ of internal guidance is very different from the ‘simplicity’ of external guidance.

A sign is transparent (or diaphanous) when the interpretation of it is not deliberate or conscious. When interpretation does become deliberate, we become conscious that the message is coded. Polyversity is invisible when signs are transparent; the sign simply means what it says. When you look at the sign as such, and make it the object of another sign representing the form of the first, you lose sight of its original object, but you gain awareness of semiosis, or the dynamics of the representing process. In the terms used by the Latin logicians, you regard concepts as ‘second intentions,’ which are the ‘objects of the understanding considered as representations, and the first intentions to which they apply are the objects of those representations’ (Peirce, EP1:7).

It should be clear by now that perceptual coding as well as conceptual coding is constrained from the top down, with the whole bodymind determining what its parts are doing. But semiosis, including perception, is also constrained from the bottom up, by the nature of its material causes, even down to the molecular scale. A transformational process at that level can also be regarded as a transfer of information.

When we speak about the transmission of information from one molecule to another, we mean a transfer of information inherent in the molecular configuration—in the linear sequence of the unit structure or in the three-dimensional disposition of the atoms. Since molecules cannot talk or engage in other human forms of communication, their method of transmitting information is straightforward: the emitter molecule makes the atoms of the receiver deploy themselves in an analogue spatial pattern.
— Werner R. Loewenstein (1999, 31, italics his)
This can only happen within the electrostatic field around the molecules, which is
effectively limited to distances of 3 to 4 × 10⁻⁸ cm in organic molecules. Thus, to transfer information from one organic molecule to another, the participating atomic components of the two molecules have to be situated within that distance. In other words, the two molecules must fit together like glove and hand or like nut and bolt, and this is the sine qua non for all molecular transfers of information.
— Loewenstein (1999, 32)
This distance limitation, and the ‘fit’ between the two molecules, are constraints intrinsic to the molecular level of interaction. Such constraints determine how physical and semiosic processes will play out at every level. As Deacon (2011, 317-18) says, ‘whether it is embodied in specific information-bearing molecules (as in DNA) or merely in the molecular interaction constraints of a simple autogenic process, information is ultimately constituted by preserved constraints.’

If you select the impact of light on the retina as the beginning of the visual process, then the whole process as described above is pre-conscious and not subject to deliberate control. Actually, though, what you see depends on where you look, which is often affected by deliberate decisions; and your decisions, whether conscious or not, obviously originate in your intent, not in your visual field. So the impact of light on your retina is better described as a ‘perturbation’ of your ongoing brain dynamics, rather than incoming information, even though its pattern (or its difference from prior patterns) does indeed inform the next cycle of visual processing. As Hoffmeyer (2001) has suggested, brain dynamics (and biological dynamics generally) can be semiotic too, but they work by analog rather than digital coding – that is, the messages are not made up of discrete and relatively static elements, as they are in an alphabetic or genetic code. The distinction between the dynamic/experiential and the ‘symbolic,’ introduced above, now appears as a distinction between analog and digital coding. Evolution requires digital coding, which is symbolic in the sense that genomes and natural languages are symbolic: they are systems involving dynamic recreation and recombination of static elements.

Nature had her own systems and rules long before humans came along to formulate them; in fact, the coming-along of humans was guided entirely by those rules, and they still function implicitly in our guidance systems. We might say that the key role of the human has been to co-author guidance systems with nature. In the genetic code, and in other codes such as the syntax of a natural language, the rules are followed automatically (unconsciously). Even rules that have to be learned, such as linguistic and social conventions, are for the most part learned by paying attention to what people say and do, and to objects of joint attention – not by paying attention to explicit rules.

The genome, like all information that makes a biological difference, has to be interpreted. Concerning that process, Hoffmeyer (1997) reminds us of ‘a simple but crucial fact: DNA does not contain the key to its own interpretation.’ Nor does any other symbol; interpretation is an interactive process. In this kind of symbolization, the interpretant is presumably not ‘mental,’ as we have no reason to believe that the interpreting system is conscious of it. But that judgment may reflect nothing more than the human bias toward our own scale of mentality. The replication process takes place at a physical scale far finer than we can relate to directly, and the developmental process which grows a bodily interpretant of the whole genome takes far longer than our thinking process. The differences in scale are even greater for the evolutionary process, but even that is a mental process by some definitions (such as Gregory Bateson's). If we call this meaning of mind ‘metaphorical,’ or ‘figurative,’ we are only saying that it differs from our habitual usage. There is no definite boundary between ‘metaphorical’ and ‘literal’ meanings.

The genetic code functions symbolically in the sense that each replica of the genotype will be read in a regular way by the next generation of the phenotype, although (as Peirce would say) the habit is natural, not conventional: these rules are not legislated but evolved. What molecular geneticists call a ‘gene’ is a location on a chromosome (Dawkins 2004, 44). The chromosome is a one-dimensional (linear) meaning space, with a definite (chemical) structure which allows for various occupants (called ‘alleles’) at specific niches. The realization (embodiment, development, ‘meaning’) of the ‘message’ carried by the chromosome varies with the alleles that actually occupy the niches, but the genotype only changes when the structure of the chromosome changes. This is the molecular basis of the ‘degeneracy’ which makes variation (and therefore evolution) possible.

Social and biological codes

Both encoding and decoding are semiotic transformations, just as all physical processes are transformations of energy (Odum). In the process of work, the energy ‘lost’ to useless forms is called entropy; in the process of communication through a given channel, the useful part of the transference is called the signal while the useless part is noise. The transmission of a message always includes both. The signal is the meaningful or decodable part; the noise is (ideally) filtered out so that the receiver can process the message. The filter which embodies the distinction between signal and noise is actually part of the code; what is signal for one code can be noise for another, and vice versa. A code in this sense is primarily a systemic legisign, a complex of relational habits, of rules which function implicitly, whether they can be explicated or not. The grammar of the language you are now using, for instance, is a set of rules for production and interpretation of symbols. So one corollary of Bateson's principle is that every language has a grammar.
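The point that the signal/noise filter belongs to the code itself can be made concrete with an invented example (not Bateson's): two 'receivers' scan the same transmission, and each one's code determines what counts as signal and what as noise.

```python
# 'What is signal for one code can be noise for another': two decoders
# scan the same transmission, each with a filter built into its code.
transmission = 'a7x3 9bq 14c'

def decode_numeric(stream):
    """For this code, digits are signal; everything else is noise."""
    return ''.join(ch for ch in stream if ch.isdigit())

def decode_alphabetic(stream):
    """For this code, letters are signal; digits are noise."""
    return ''.join(ch for ch in stream if ch.isalpha())

print(decode_numeric(transmission))     # 73914
print(decode_alphabetic(transmission))  # axbqc
```

Neither decoder is 'right': the distinction between signal and noise is not in the transmission but in the code each receiver brings to it.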

The word ‘grammar’ comes down to us from the Greek gramma, which in fact was the word translated as ‘letter’ in the King James version of Paul's famous remark (2 Corinthians 3.6) quoted above; the RSV translation says ‘the written code kills, but the Spirit gives life.’ This might remind us that the oldest meaning for code given in the OED is ‘a digest or systematization of rules’; another is ‘a collection of sacred writings.’ In laws and scriptures we find a code of conduct that can be consciously obeyed (or disobeyed) – an external guidance system. Providing codes of conduct is obviously an important function of religions and other social institutions. But as these codes are themselves coded (like all messages), their guidance can be no better than the interpretive process by which they are decoded.

In the 20th century, the concept of coding found applications in information theory, cybernetics, genetics, and computer science. Programmers refer to ‘machine code’ (the binary ‘language’ which directly controls the computer's behavior or output) and ‘source code’ (the higher-level language in which they write their instructions to the computer); ‘compiler’ software is supposed to translate source code unambiguously into machine code so that the computer does what the programmer wants. In this respect, the code which maps source-code input onto machine-code output is a degenerate kind of code called a cipher, which systematically maps one set of symbol-elements onto another (as when a phonetic alphabet maps letters onto sounds). Morse code, for instance, translates back and forth between letters of the alphabet and sets of ‘dots’ and ‘dashes’ (short or long bits of signal sent over telegraph wires). Ciphers are ‘degenerate’ codes because they do not function implicitly without first being made explicit (i.e. ‘programmed’) by an external user. A deciphered message is not decoded in the full sense of the word until it has been understood. Cipher systems can also be used to encrypt messages so that only someone in possession of the ‘key’ can decode and read them.
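A cipher in this sense is just a reversible lookup table. The sketch below shows Morse values for a handful of letters (international Morse defines the full alphabet); note that deciphering the signal back into letters is still not 'decoding' in the full sense, since the result must also be understood.

```python
# A cipher maps one set of symbol-elements onto another, element by
# element. Morse code is the classic case. (Table truncated to a few
# letters for illustration.)
MORSE = {
    'E': '.',  'T': '-',  'A': '.-',  'N': '-.',
    'S': '...', 'O': '---',
}
DECODE = {v: k for k, v in MORSE.items()}   # a cipher is reversible by design

def encipher(text):
    return ' '.join(MORSE[ch] for ch in text)

def decipher(signal):
    return ''.join(DECODE[group] for group in signal.split(' '))

msg = encipher('SOS')
print(msg)               # ... --- ...
print(decipher(msg))     # SOS -- deciphered, but not thereby understood
```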

Bateson's dictum that ‘all messages are coded’ makes two main points about any process of transmission. First, what is received is a transform of what is sent, and thus cannot be identical to it. It can't be the same event because any process takes time, and strictly speaking, no event can happen twice. It should be obvious that if a message such as any sentence in this book ‘encodes’ the author's experience, the reader's ‘decoding’ cannot produce an experience which is exactly the same as the author's. However, if the process deserves the name of communication (and the sentence deserves the name of message), then there must be some regular relationship between experience and sign-system. That regular relationship can be called a ‘rule,’ a system of rules, or a code, whether it can be explicated or not. That is Bateson's second point. Taken together, the two points imply the simplexity of all semiotic systems.

As we saw in Chapter 11, the need to simplify is entangled with the semiotic processes which are characteristic of cognition and life itself. What we call the ‘genetic code,’ for instance, is greatly simplified compared to the lives of the organisms who reproduce themselves by its means. We saw in Chapter 3 that the human genome does not contain a complete description of a human being; rather it decodes into the basic transformations necessary to begin the process of growing a human being within the appropriate matrix (the mother's womb). A gene can specify a protein to be constructed by spelling out a linear chain of amino acids, but it doesn't tell that protein how to fold into the 3D shape it must assume in order to play its role in the developing system. That folding is an orthograde process that ‘just happens’ – no order needs to be imposed on it from without; the constraints on the process are intrinsic.
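The 'spelling out' of a protein as a linear chain can be sketched as a simple lookup over codons. Only a few standard codons are shown below; real translation involves far more machinery (tRNA, ribosomes), and, as noted above, the folding of the resulting chain is nowhere in the table.

```python
# The genetic code read as a lookup table: each 3-letter codon specifies
# one amino acid. Truncated to a few standard codons for illustration;
# the 3D folding of the protein is not spelled out anywhere in the code.
CODON_TABLE = {
    'ATG': 'Met', 'TGG': 'Trp', 'TTT': 'Phe',
    'AAA': 'Lys', 'GGG': 'Gly', 'TAA': 'STOP',
}

def translate(dna):
    """Read the linear sequence three letters at a time, up to a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE[dna[i:i+3]]
        if amino == 'STOP':
            break
        protein.append(amino)
    return protein

print(translate('ATGTTTGGGTAA'))   # ['Met', 'Phe', 'Gly']
```

The table gives only the one-dimensional 'message'; everything else about the developing organism is supplied by the matrix in which the message is read.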

Loewenstein (1999, 114) identifies ‘two realms’ in the living cell, genome and soma, ‘one maximizing conservation and the other maximizing the transfer’ of information. The first is ‘shielded from the ordinary hustle and bustle in the world (in higher organisms the DNA is secluded in the cell nucleus).’ The linear structure of DNA is ideal for conservation, while the 3D structure of a protein is essential to its bodily functioning.

The molecules in both biological realms carry information—an RNA, a protein, or a sugar is as much an informational molecule as DNA is. The quantities they carry individually are different, to be sure, but if we could weigh the total amounts of core information in the molecules of the two realms, they would about balance—the two realms are but the flip sides of the same information.
— Loewenstein (1999, 115)

Semiosis inside the cell is protected from degradation not by channeling but by encryption – ‘sending information in a particular form that only the intended receiver has the key to’ (Loewenstein 1999, 143). Of course no conscious intention (or attention) is needed, for the information is ‘protected’ simply by the fact that only the ‘receiver’ has the right molecular shape for interacting with it.

In cellular communication, the codescript is in the form of a special molecular configuration, a spatial arrangement of atoms that is singular enough so as not to be confused with other arrangements of atoms that might occur elsewhere in the system. That three-dimensional singularity is the nub of biological coding.
— Loewenstein (1999, 142)

Conventions, conversations and conversions

In language use, explicit or conventional codes must come to function implicitly (habitually, post-consciously) just as natural codes do in order to serve the guidance system well. How does this happen? We can begin to answer this question with one of Wittgenstein's thought-experiments about ‘language-games’:

Suppose I had agreed on a code with someone; “tower” means bank. I tell him “Now go to the tower”—he understands me and acts accordingly, but he feels the word “tower” to be strange in this use, it has not yet ‘taken on’ the meaning.
— Wittgenstein (PI, IIxi, 214)

Here we have a kind of cipher: the agreement is that the word ‘tower’ will be substituted for the word ‘bank’ – in other words, the niche in meaning space usually filled by ‘bank’ will now be filled by ‘tower.’ Eventually, if the agreement is consistently adhered to, this new usage would become ‘usual’ (habitual). But how did the usage of ‘bank’ in that niche become usual in the first place? It must have been learned by listening to (interacting with) other language users. There is no ‘natural’ connection between a bank and the word ‘bank,’ any more than there is between a bank and the word ‘tower.’ Since the connection is made by consensus, and not inferred from nonlinguistic experience with banks, we call it conventional or sometimes arbitrary. Wittgenstein's point here is that the connection comes to feel ‘natural.’ But in the case of a cipher, where one symbol is substituted for another that is already in place, it takes a while for the word to ‘take on’ the meaning (i.e. for the usage to feel natural).
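Wittgenstein's agreement can be written down as a one-entry substitution cipher (a made-up sketch): until the new usage becomes habitual, the mapping has to be consulted explicitly, while every other word in the utterance goes on functioning implicitly.

```python
# Wittgenstein's agreed code as a word-substitution cipher: the niche
# usually filled by 'bank' is filled by 'tower'. The mapping must be
# consulted explicitly until the new usage 'takes on' the meaning.
AGREEMENT = {'tower': 'bank'}    # the explicitly agreed substitution

def interpret(utterance):
    """Map each word through the agreement; all other words pass through
    unchanged, i.e. they continue to function implicitly."""
    return ' '.join(AGREEMENT.get(word, word) for word in utterance.split())

print(interpret('Now go to the tower'))   # Now go to the bank
```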

When a child is learning her first language, on the other hand, that ‘natural’ feeling is there from the beginning, and the realization that word meanings are conventional comes much later (if at all!) in the development process. As Annie Dillard (1974, 104) tells it:

When I was quite young I fondly imagined that all foreign languages were codes for English. I thought that “hat,” say, was the real and actual name of the thing, but that people in other countries, who obstinately persisted in speaking the code of their forefathers, might use the word “ibu,” say, to designate not merely the concept hat, but the English word “hat.”
That's because the first, ‘natural’ consensus about the meaning of a word is tacit. In Wittgenstein's scenario, the ‘code’ is a cipher because the usage agreement is made explicitly. When a word has ‘taken on its meaning,’ or (more generally) when the use of a code or symbol is habitual, it functions implicitly.

When we say that the meaning of a symbol is ‘conventional’ or ‘arbitrary,’ as we often do to distinguish that kind of sign from others, we do not necessarily mean that the meaning was ever explicitly assigned. (It could just as well have been formed by tacit consensus or frozen accident.) We only mean that some other symbol could have filled the same niche. This does not imply that we could easily plug another symbol into that niche in the current meaning space: it would take time for the new symbol to ‘take on’ its meaning – and in the meantime, the other symbols implicated with that niche would have to go on functioning naturally (i.e. implicitly). For instance, in the sentence ‘Now go to the tower,’ as long as the usage of ‘tower’ remains ‘strange,’ all the other words in the sentence have to function implicitly.

To make anything explicit requires an entire code or symbol system to be functioning implicitly.

People use words (and other symbols) to indicate distinctions, articulations and combinations in the body of experience. How we use one term will motivate specific uses of others, in order to maintain the organization (the integrity) of meaning space: its parts, once we divide it into parts, are related to one another so that definitions (identities, meanings, …) are interdependent. When a term is habitually attached to a given set of these functions and relationships, we naturally think of the term as having a definite meaning (whether we can specify it or not). These habit-sets can be broken, or at least loosened up, with respect to a specific term, or a few terms, so that ‘their meaning’ can be questioned – but only if the bulk of the terms used in the questioning process maintain their habitual meanings. We can question anything, but not everything at once, for the articulation of questions depends on the currently unquestioned –

(since in this scherzarade of one's thousand one nightinesses that sword of certainty which would identifide the body never falls) …
— Joyce, Finnegans Wake (51)

Every act of meaning is part of a semiosic process: occupation of a meaning space also occupies time. Thelen and Smith (1994, 140) summarize research which provides ‘compelling support for the dynamic and self-organizing nature of mental activity – that categories of perception and action are assembled from multiple brain sites and interconnections on the basis primarily of temporal and not spatial codes.’

Although the world contains information for the organism, the information is always in relation to the organism's past and current functioning in the world. The problem for the developing nervous system, then, is to make sense of the world with sufficient specificity to know how to correctly act within an information-rich environment, and at the same time, be able to generalize broadly to recognize novel objects, even from very few instances of that category.
— Thelen and Smith (1994, 144)
Perception, cognition, information and communication are all semiosic processes involving bodyminds which actualize the generic meaning cycle, and thus carry forward the semiotic spiral, in various ways. An intentional communication process can be mapped onto our meaning-cycle diagram as follows: First let's say that my ‘idea,’ or the thought i intend to transmit to you, is W. I encode it for transmission, you receive it through perception, then you decode it to produce a formulation or ‘model’ (M) of my idea. But any information you get from this message must be some modification of your internal guidance system, which in turn will determine what you do (or say) next, your current practice. Now, if we are in a conversation loop, it's your turn to encode and transmit your idea (presumably informed by mine) as your next utterance.

When communication is intentional, the ‘encoding’ stage of this recursive process is a loop in itself. W is the experience informing my intent, and M is the symbol system common (more or less) to utterer and interpreter. Parts of M are prompted by this experience to generate the coded message. This loop is a mirror image of the main meaning cycle: here W (the ‘felt sense’) is internal or ‘private’ while M (the language) is external or ‘public.’ Your ‘decoding’ of the message is of course another loop, where your experience of my message is your W. Your interpretation of it cycles through your conceptual and linguistic systems (M) to confirm or correct our pragmatic feeling for our shared situation. These loops within loops ensure that our respective symbol and guidance systems are thoroughly entangled. Meanwhile, to the extent that we are visible to each other at the moment, much of our meaning is transmitted and synchronized by ‘body language.’

To perfect the communicative loop – to understand each other perfectly without ambiguity – we would have to use exactly the same rules in the encoding and decoding (modeling) processes, or the decoder must ‘run’ the encoder's very ‘program’ in reverse. But real time is irreversible, and the necessary difference that makes each of us Second to the other guarantees that our models could never match perfectly, even if we had any direct way to compare them. Thus our ‘tendency to analyze reading in terms of disembodied decoding of inherent meanings’ (Boyarin 1993, 3) can be quite misleading. Actually, translating a text from one symbolic system into another is not decoding but recoding – which may involve a fresh observation, a current experience, of the object of the sign, coupled with a further exploration of meaning space. Then it can be a discovery, a revelation.

Next chapter: Context and Content →

This work is licensed under a Creative Commons Attribution 3.0 Unported License.
