On earth, information has the highest transformities of the energy hierarchy. Here information is defined as the parts and relationships of something that take less resources to copy than to generate anew. Examples are the thoughts on a subject, the text of a book, the DNA code of living organisms, a computer program, a roadmap, the conditioned responses of an animal, and the set of species developed in ecological organization. Each of these takes emergy to make and maintain.

Transformity is a measure of the quality of energy, although it is calculated quantitatively. If we think of information as the product of a series of transformations, it rises to the top of the energy hierarchy because it uses a small amount of energy to direct or harness much larger amounts of energy toward serving systemic purposes. It does this by means of feedback into the processes that generate emergy. For example, the semiotic processes within a bodymind form a guidance system which gets its small charge of energy from metabolism, but uses that energy to direct and control the whole organism's physical work, which is where most of its metabolic energy goes.

— H. T. Odum (2007, 87)
Perhaps it is time to recognize the energy hierarchy as a fundamental property of the universe in which each kind of energy has its place. To put it more simply, large quantities of low-quality energy are the basis for but controlled by small quantities of high-quality energy.

— Odum 2007, 73
Information is a product of semiosis, and the ‘copying cycle’ on which it depends is what we call communication. But Odum's definition above gives some examples of information that are deeper and older than human communication media, including language itself. Just as ‘the conditioned responses of an animal’ constitute the guidance system which informs its behavior, we might say that ‘the set of species developed’ in an ecological system informs its ongoing self-organization – on an evolutionary time scale.
The recent development of human consciousness and language, and the even more recent development of “social media” for communication, along with a 200-year pulse of fossil fuel energy harnessed to human purposes, have transformed the whole planetary ecosystem. In the Anthropocene, the quality of information sharing has degenerated in a way that cuts off human consciousness from the deepest roots, and from the consequences, of human behavior. The meaning spaces of the whole earth have been polluted along with its air, water and soil. Chapter 11 has given one historical explanation of how this situation evolved. This chapter will focus on the holarchy of systems (from the whole biosphere down to the biochemical level) and the semiotic laws governing communication and information transfer at levels up to and including that of human consciousness. This may help to explain why it was possible that things could turn out this way. It might even support Gregory Bateson's argument that human conscious purposes are responsible for the planetary predicament.
All of us earthlings are anticipatory systems, but humans made the first attempts to consciously predict the future by using symbols. According to Giovanni Manetti, the earliest ‘foregrounded use of signs’ in human history is found in Mesopotamian divinatory tablets (Cobley 2010, 13). Any systematic divination practice is a guidance system giving us direction as to what should be done in a given type of situation. Its symbol system must be capable of representing every possible type of situation in the abstract, so that any actual situation can be recognized in enough simplexity to guide its actors appropriately.
Consider the hexagrammatic symbol system of the ancient Chinese I Ching (or Book of Changes): its value as a guidance system depends on its mapping of meaning space being complete (closed, as the syntax of a language must be closed). It carves up the whole universe of possible situations into 64 types, each capable of variation that can be vaguely indicated by the ‘changing lines’ in the hexagram. By obtaining a hexagram, the diviner can tell the inquirer which kind of situation he is dealing with. But in order to comprehend the completeness of the symbol system, it is necessary to study diagrams showing various arrangements of the trigrams and hexagrams, which clarify by juxtaposition the relations among the types of situation. (See for instance Thomas Cleary (1989), I Ching Mandalas.)
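The closure of this symbol space is easy to verify combinatorially: six lines, each either yin or yang, yield exactly 2⁶ = 64 hexagrams, and a set of ‘changing lines’ maps one hexagram to another within the same closed space. A minimal sketch (the encoding and the `transform` helper are illustrative conventions, not drawn from any traditional source):

```python
from itertools import product

# Each hexagram is six lines, each yin (0) or yang (1):
# a closed space of 2**6 = 64 situation-types.
hexagrams = list(product((0, 1), repeat=6))
print(len(hexagrams))  # 64

def transform(hexagram, changing):
    """Apply a set of 'changing lines': each changing line flips
    to its opposite, yielding a second hexagram in the same space."""
    return tuple(line ^ 1 if i in changing else line
                 for i, line in enumerate(hexagram))

# Example: the all-yang hexagram with its first line changing
qian = (1, 1, 1, 1, 1, 1)
print(transform(qian, {0}))  # (0, 1, 1, 1, 1, 1)
```

Because flipping lines never leaves the set of 64, every possible ‘change’ is already accounted for in the system, which is what makes it complete as a map of meaning space.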
Other models of a universal meaning space embrace an infinite rather than a limited number of interrelated elements. One is the ‘jewel net of Indra,’ a symbol frequently used by the Hua-yen school of Buddhism as formulated in the seventh century C.E. According to Francis Cook's description, this ‘wonderful net’ is hung ‘in the heavenly abode of the great god Indra,’ with
a single glittering jewel in each ‘eye’ of the net, and since the net itself is infinite in dimension, the jewels are infinite in number.… If we now arbitrarily select one of the jewels for inspection and look closely at it, we will discover that in its polished surface there are reflected all the other jewels in the net, infinite in number. Not only that, but each of the jewels reflected in this jewel is also reflecting all the other jewels, so that there is an infinite reflecting process occurring. The Hua-yen school has been fond of this image, mentioned many times in its literature, because it symbolizes a cosmos in which there is an infinitely repeated interrelationship among all the members of the cosmos. This relationship is said to be one of simultaneous mutual identity and mutual intercausality.

— Cook (1977, 2)

The mutual recognition/reflection which is typical of a social structure is here extended to the whole of the universe: in this play there are no dead or inert “props,” only mutually defining roles, in which each performance implies the whole drama. As ‘a vast body made up of an infinity of individuals all sustaining each other and defining each other’ (Cook 1977, 3), the Net of Indra is both a communion of subjects and a community of signs. Its form is both social and semiotic, and manifests the creative tension between individual and community that animates the arts.
An artist's individuality is manifest not only in the creation of new, unique symbols (i.e. in a symbolic reading of the non-symbolic), but also in the actualization of symbolic images which are sometimes extremely archaic. But it is the system of relationships which the poet establishes between the fundamental image-symbols which is the crucial thing. Symbols are always polysemic, and only when they form themselves into the crystal grid of mutual connections do they create that ‘poetic world’ which marks the individuality of each artist.

— Yuri Lotman (1990, 86-7)

What is this ‘crystal grid’ but the Jewel Net of Indra turned outside in?
In physics, David Bohm's theory of the implicate order (and the super-implicate order) could be read as cognate with the Hua-yen view, especially in terms of the part/whole relationship: ‘a total order is contained, in some implicit sense, in each region of space and time.’ In keeping with the etymology of ‘implicate,’ this order is ‘enfolded’ within the region (Bohm/Nichol 2003, 129). Likewise, a whole meaning space is implicit in any statement. Yet since no particular symbol, however complex, can occupy more than a part of that space, no sign can actually attain the ideal of making the Whole Truth explicit. Semiosis itself involves partiality as well as continuity. Likewise, to be alive, and to be sentient, is to imply more than you know, more than you are; or as Deacon (2011) puts it, to be incomplete. Eihei Dogen expressed a Buddhist understanding of this in his ‘Genjokoan’:
When dharma does not fill your whole body and mind, you think it is already sufficient. When dharma fills your body and mind, you understand that something is missing. For example, when you sail out in a boat to the middle of an ocean where no land is in sight, and view the four directions, the ocean looks circular, and does not look any other way. But the ocean is neither round nor square; its features are infinite in variety. It is like a palace. It is like a jewel. It only looks circular as far as you can see at that time. All things are like this.

Though there are many features in the dusty world and the world beyond conditions, you see and understand only what your eye of practice can reach. In order to learn the nature of the myriad things, you must know that although they may look round or square, the other features of oceans and mountains are infinite in variety; whole worlds are there. It is so not only around you, but also directly beneath your feet, or in a drop of water.

— Tanahashi (2010, 31; alternate translation in Meaning Time)
We might call this the conscious experience of the limitations of consciousness. It's a phenomenological parallel to the global workspace theory in neuroscience: the information made explicit in the workspace is a single simplified selection from the vast breadth of possibilities suggested by the ‘massively parallel computation’ going on in the rest of the brain – which is already limited to what it can glean from the sensorium of the individual. The constraints of embodiment exclude ‘whole worlds,’ as Dogen says. Yet the dharma which fills your bodymind also implies that the worlds missing from the conscious workspace are implicated in the universe of reality. We sense that the contents of consciousness in that moment are determined by the actual context of that moment, which is quite literally inconceivable.
This experience of sensing how little we know, or can know consciously, should be supremely humbling. Yet it also awakens whatever will to learn is built into the bodymind. It's as if conceptual spaces were reaching out for their content, just as all living systems reach out for consumable emergy. We take or make from “the world,” the physical/cultural surround which is ready to hand or mind, whatever will suffice to fill the hungry niches in meaning space.
Likewise in communication, people will use available words, if possible, to fill the niches in cultural meaning space. But symbols develop habitual attachments to specific niches which limit their likely interpretants. In historical time, for instance, each word that is widely used will develop a branching network of meanings; and as the various contexts in which those meanings operated are left behind, current meanings of a single word may diverge to the point where two separate meanings have nothing in common except a forgotten history. You can open the Oxford English Dictionary almost anywhere to find examples.
Frans de Waal (1996, 35) mentions the case of ethology, which
comes from the Greek ethos, which means character, both in the sense of a person or animal and in the sense of moral qualities. Thus, in seventeenth-century English an ethologist was an actor who portrayed human characters on stage, and in the nineteenth century ethology referred to the science of building character.

The word took another hundred years to settle into its current meaning, given in Webster's as ‘the scientific study of the characteristic behavior patterns of animals.’ (Robert Sapolsky (2017, 81) calls ethology ‘the science of interviewing an animal in its own language.’) In this sense, cultural anthropologists would be ethologists who specialize in human behavior patterns.
The term “symbol” itself is subject to divergent usages, as Terrence Deacon explains:
Despite superficial agreement on most points, there are significant differences in the ways that symbols and non-symbols are defined in the literature. Symbolic reference is often negatively defined with respect to other forms of referential relationships. Whereas iconic reference depends on form similarity between sign vehicle and what it represents, and indexical reference depends on contiguity, correlation, or causal connection, symbolic reference is often only described as being independent of any likeness or physical linkage between sign vehicle and referent. This negative characterization of symbolic reference—often caricatured as mere arbitrary reference—gives the false impression that symbolic reference is nothing but simple unmediated correspondence.

Consequently, the term ‘symbol’ is used in two quite dichotomous ways. In the realm of mathematics, logic, computation, cognitive science, and many syntactic theories the term ‘symbol’ refers to a mark that is arbitrarily mapped to some referent and can be combined with other marks according to an arbitrarily specified set of rules. This effectively treats a symbol as an element of a code, and language acquisition as decryption. In contrast, in the humanities, social sciences, theology, and mythology the term ‘symbol’ is often reserved for complex, esoteric relationships such as the meanings implicit in totems or objects incorporated into religious ritual performances. In such cases, layers of meaning and reference may be impossible to fully plumb without extensive cultural experience and exegesis.

This multiplicity of meanings muddies the distinction between symbolic forms of reference and other forms and also contributes to confusion about the relationship between linguistic and non-linguistic communication.
Within linguistics itself, ambiguity about the precise nature of symbolic reference contributes to deep disagreements concerning the sources of language structure, the basis of language competence, the requirements for its acquisition, and the evolutionary origin of language. Thus, the problem of unambiguously describing the distinctive properties of symbolic reference as compared to other forms of reference is foundational in linguistic theory.

— Deacon (2012, 394)
The term information, which we have been using since the beginning of this book, is another case in point. The word is derived from a Latin root, and its earliest use in English referred to the process of forming someone's mind or character (McArthur 1992). The Peircean concept of information introduced in previous chapters further articulates this as the process of habit-formation. But the word's history took a new turn in the aftermath of World War II with the advent of information theory – a mathematical model of communication, first developed by Claude Shannon, which defines information in terms of reduction of uncertainty, and quantifies it in relation to the total number of distinct symbols in a system or elements of a code.
This theory, according to a colleague of Shannon's, ‘came as a bomb, and something of a delayed-action bomb’ (Campbell 1982, 20) – a bit like the Nag Hammadi library. As mentioned in Chapter 3, the immediate postwar period also saw the advent of cybernetics, a discipline which overlapped in some respects with information theory. Originally, both disciplines grew out of the quest for simplicity in modeling – simplicity in the sense that one abstract model can represent many different phenomena, especially different forms of communication. The early cyberneticists were looking for principles that would build useful bridges between the “hard” and “soft” sciences. Shannon, for his part, found a link between communication and physics with his discovery that the mathematical equations defining entropy could equally well be used to quantify information. This proved useful both for code-breaking and for engineering more efficient communication channels. But the pragmatic usefulness of Shannon's information theory depends on the tacit assumption that the coded messages sent through such channels are intended to be meaningful.
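Shannon's quantitative definition can be sketched in a few lines: the information carried per symbol is the average uncertainty its symbols resolve, computed from symbol frequencies with the same logarithmic formula that defines entropy in physics. (A toy illustration of the idea, not Shannon's own notation:)

```python
from collections import Counter
from math import log2

def shannon_entropy(message):
    """Average information per symbol, in bits:
    H = -sum(p * log2(p)) over the relative frequency p
    of each distinct symbol in the message."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Four equally likely symbols carry log2(4) = 2 bits per symbol:
print(shannon_entropy("abcd"))   # 2.0
# Repetition reduces uncertainty, and with it measured information:
print(shannon_entropy("aaab"))   # ≈ 0.811
print(shannon_entropy("aaaa"))   # 0.0 (no uncertainty at all)
```

Note what the measure ignores: `"abcd"` and `"dcba"` score identically, because the quantity depends only on the distribution of distinct symbols, not on what any arrangement of them means to a reader.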
This concept of information withdraws attention from the act of meaning and its context by taking them for granted. Gregory Bateson regarded this as a misguided attempt to simplify the engineering task. ‘By confining their attention to the internal structure of the message material, the engineers believe that they can avoid the complexities and difficulties introduced into communication theory by the concept of “meaning”’ (Bateson 1972, 414).
Bateson bridged the gap between the mathematical/engineering sense of information and the logical/semiotic sense by defining it as ‘any difference that makes a difference’ (1979, 250) – a definition that covers perception and learning as well as communication. ‘Making a difference’ (to a system, its behavior or its habits) is virtually synonymous with meaning something to that system – or as Peirce would put it, determining that mind or quasi-mind to an interpretant. Thus a text, as part of an external guidance system, can inform you in the old sense of forming character, because reading it can make a difference to your habits. Any perceptual judgment can ‘make a difference’ to us subjectively, or become intuitively meaningful to us, when it changes our state of bodymind or feeling.
When we use the word meaning in relation to a symbol, it can refer to (at least) three different things:
Meaning is formed in the interaction between felt experiencing and something that functions symbolically. Feeling without symbolization is blind; symbolization without feeling is empty.

— Gendlin (1962/1997, 5)
The closure of the self-organizing process guarantees that meaning as feeling cannot be observed or measured, but we can devise measures of observable differences that make that kind of difference, and thus model how meaning arises from the encounter between sign and interpreter, or text and reader. For this we need to grasp the sense in which a text is a ‘difference.’ The simplest way to develop this sense is to start small, say with a single letter, or even a punctuation mark, in a written or printed text. It is visible because it stands out, by contrast, from the blankness of the page. Likewise, everything we perceive or conceive emerges from its background by differing from it in some perceptible way.
Theoretically this ‘difference’ can be modeled with the same mathematical techniques used for modeling order and disorder (entropy) in energetics, or signal and noise in messages. Once measured, information can be thought of as a quantity or even a substance (rather than a process). But we might better call this potential information: it doesn't actually make a difference until somebody reads the sign that conveys it. Just as energy is only potential work until it is harnessed to drive a process, a symbol can do no actual semiotic work until some replica of it manifests itself as a functional part of an embodied system.
Felt experiencing (or in Damasio's phrase, ‘the feeling of what happens’) is the inner feel for what an outside observer would see as the dynamics of the feeling system. (In Peircean terms it would correspond roughly to the Firstness of Secondness.) To ‘function symbolically’ is to inhabit a niche in a systemic meaning space; when this habit-space is shared, as a common language is shared, communication is possible. It is not quite true that ‘the letter killeth but the spirit giveth life’ (2 Corinthians 3:6): the two must collude, for the spirit of this very saying can express itself only by being “spelled out” in the standard set of letters of the alphabet, phonemes in the spoken language or lexemes in the lexicon. Otherwise it vanishes without a trace in the formless flux of endless variation, in a stream of unconsciousness. Yet that stream is the water of life itself, for it makes the difference between a process and a static thing.
When we say that the meaning of a word “is” a concept, we are talking about what has been formed and not about the process of forming. As noted in Chapter 4, we may experience a niche or gap in meaning space as the absence of a word or symbol which can fill it. This ‘felt sense’ has no explicit form, but its interaction with symbols is what makes them meaningful and makes pragmatic meaning explicit enough to serve as a guide. We might describe the felt sense as a niche in meaning space which is currently unnamed, or unoccupied by a long-term tenant, but is nevertheless felt to be the crux of the current situation, a powerful attractor of meaning. In the act (or event) of meaning, what was implicit becomes explicit, yet implies even more than before. When this does not happen, feeling remains formless (‘blind’) and symbols remain meaningless (‘empty’).
We learn to use language, and to mean it, by interacting with others. In order to communicate, we conform to conventions in naming things, events and acts that we can point to in consensual domains. The responses of our partners in this dance guide us in selecting and refining our descriptions of the world. But how do we name those things we can't point to? How do we learn what we are supposed to be talking about when we use words like love, conscience, mystery, faith, freedom, nature, world, presence? And how do we choose general, public names to denote private, individual experiences?
If you and I agree that a statement is true, we are tacitly assuming that we share a common meaning for it. But we have no way of verifying this, except to carry on the dance of conversation in a manner that we both feel to be relevant and consistent with what we've said already. Missteps in the dance can occur when we differ in our conceptual models or in our language habits, or both; sorting out these differences can be difficult. On the other hand, even if we do manage to avoid collisions, this may be due to skillful negotiation, or mere politeness, or even laziness, rather than a genuine meeting of minds.
In any case, we suppose that there is a reliable connection or similarity between our felt senses of our common situation and the meaning spaces intrinsic to our common language. This is a reasonable assumption because our linguistic habits have co-developed and co-evolved with our bodyminds and our communities. Terrence Deacon argues that ‘semiotic constraints have acted as selection pressures on the evolution of both language and brain structures’ (Deacon 2003, 98). These constraints are neither biological nor social in themselves, but result from the way symbols work. In particular, the systemic nature of symbols determines the shape of linguistic meaning space.
Symbols implicitly indicate other symbols. This is reflected in the implicit word-word networks captured differently by a dictionary, a thesaurus, or an encyclopedia. Their relationships with one another constitute a system. This systematicity determines their possibilities of concatenation, substitution, alternation, and so forth, which constrains their useful combinations, and creates a structured space of relationships in which each becomes a marker of semantic position. But because there is also a conventional correspondence between words and things in the world, the topologies of these two ‘spaces’ (i.e. the system of word-word valence relationships and some systematization of the regularities linking certain physical objects) can potentially be mapped one to the other. The result is that ‘positional’ relationships within semantic space can be taken as corresponding to physical relationships. Symbolic reference is thus reference mediated by reference to a system, and by that system's relationship to a perceived systematicity in the world. This system embeddedness of symbols is reflected in the way linguistic utterances can still refer to objects of reference in the complete absence or nonexistence of these objects; a feature that is often called ‘displacement.’ So the combination of systematicity and indirectness of reference allows words without simple reference, and with reference that has no real-world counterpart. Symbolic reference is thus irreducibly systemic.

— Deacon (2003, 99)

Deacon goes on to explain that the ‘implicit abstract infrastructure’ of symbolic meaning spaces ‘can have real physical effects because it makes special demands on learning.’ Thus learning to use a language is learning a code: ‘symbolic reference is the archetypal encryption relationship; a fact that is evident in any attempt to interpret a foreign language or ancient script’ (Deacon 2003, 99-100). Reading is decoding, writing is encoding and translation (or interpretation) is recoding.

This broad sense of -coding was employed by Robert Rosen in his explanation of the meaning cycle, and by Bateson in formulating his principle that ‘All messages are coded’ (Bateson 1979, 235); a more Peircean equivalent might be ‘All semiosis is mediation.’ This applies even to “messages” that are not intentionally sent, such as those we receive from the environment in perception. The physical impact on our senses of light and sound waves (for instance) is transformed into neural activation patterns which can inform our guidance systems. ‘In mental process, the effects of difference are to be regarded as transforms (i.e. coded versions) of the differences which preceded them’ (Bateson 1979, 121).
All messages are coded, but not all messages are linguistic, and not all are intentionally sent. We could say, for instance, that the retina sends coded messages to the visual processing areas of the brain. In the semiotic terms deployed in Chapter 13, the message is a sign whose interpretant is codetermined by the sign and the ‘current state of information of the bodymind.’
The neural patterns and the corresponding mental images of the objects and events outside the brain are creations of the brain related to the reality that prompts their creation rather than passive mirror images reflecting that reality.

— Damasio 2003, 198-9

Semiotically speaking, the ‘prompting’ constitutes an indexical relation between the external reality or dynamic object and the resulting transformation of neural activity. The index is ‘a reactional sign, which is such by virtue of a real connection with its object’ in the external world (Peirce, EP2:163). The index is informative insofar as it involves an iconic sign of correspondence between mental images and external events. But the apparent “resemblance” between image and event is really an association between the current transformation of the sensory activity of the brain and the kind of difference it makes to the organism's pragmatic response, which is grounded in its habitual relation to its environment.
Damasio (2003, 200) explains how it is that different brains can effortlessly attain consensus about the nature of the reality that “prompts” creation of these neural patterns and mental images:
There is a set of correspondences, which has been achieved in the long history of evolution, between the physical characteristics of objects independent of us and the menu of possible responses of the organism.… The neural pattern attributed to a certain object is constructed according to the menu of correspondences by selecting and assembling the appropriate tokens. We are so biologically similar among ourselves, however, that we construct similar neural patterns of the same thing. It should not be surprising that similar images arise out of those similar neural patterns. That is why we can accept, without protest, the conventional idea that each of us has formed in our minds the reflected picture of some particular thing. In reality we did not.

These ‘correspondences’ embody the structural coupling of autopoiesis theory. But given the limitations on direct observation of ‘neural patterns,’ how do we know that similar images arise out of similar patterns? This is highly plausible because we can often point to the ‘particular thing’ in the environment corresponding to the pattern – to its location in space, to its parts, and/or to the part it plays in the current scene. (This pointing is another indexical sign, one that is both intentional and attentional.) Any two people will generally do this in roughly the same way. Thus the correspondence is established by consensus, i.e. by structural coupling between ourselves, from which we infer a coupling between some external object (thing or event) and our “idea” or experience of it. Language users can make this inference routinely by giving things common names; but the logical inferences we can make using language (and indeed language itself) are grounded in the deeper logic of semiosis, through which all sentient beings make perceptual inferences and learn thereby.
When the current activity of the brain is “perturbed” by an event in its environment, this event can trigger a shift in the state of the whole system. The dynamic coupling of events with conscious brain activity has a physical effect which leaves the traces we call memory (LeDoux 2002 gives the microscopic details). Neither the physical form of these traces, nor the process of reading them which we experience as remembering, bears any resemblance to the external event which the observer could describe as prompting, triggering or causing these changes in the state of the brain. The process is one of codetermination; all memories, like all messages, are coded.
Memories are records of how we have experienced events, not replicas of the events themselves. Experiences are encoded by brain networks whose connections have already been shaped by previous encounters with the world. This preexisting knowledge powerfully influences how we encode and store new memories, thus contributing to the nature, texture, and quality of what we will recall of the moment.

— Daniel Schacter (1996, 6)

In semiotic terms, the memory of an event is a symbol of the event, not an image of it; it is the whole sign-system constituting the Innenwelt which bears an iconic or ‘modeling’ relation to the whole Umwelt.
A more microscopic look at any memory or perceptual event reveals a “chain” or “train” of indexical reactions. For instance, a pattern of light triggers certain cells in the retina, which in turn send trains of impulses to the lateral geniculate nucleus of the thalamus, which ‘interprets’ that pattern and sends its own messages to the primary visual cortex, and so on – see Koch (2004) for details, which are exceedingly complex in the case of primate vision. Each sign triggers and informs the next; or, reading in the other direction, each is an interpretant of the preceding sign in the chain. But the “train” of thought is nonlinear, because most of the processors in this chain also receive feedback from later or “higher” stages of the process. When the object external to all these signs is consciously perceived, the whole circuit is sustained by feedback from the prefrontal cortex. But the process can make a difference to the whole ‘state of information’ of the bodymind without becoming conscious.
The information we “receive” via perception may appear to be “given” (in Latin, data) prior to the interpretive process which generates our thoughts or mental images. Indeed our feel for the independence of a perceived object from our minds, its externality or Secondness to us, is what makes it real for us, as we have seen in Chapter 12. But our reality monitoring and consensus-building have to work this way because the biological coding or mediation grounding our consciousness is ‘opaque to the mind's eye.’ Conscious meaning is informed by it but uninformed about it.
The point here is that the brain does not process ‘information’ in the commonly used sense of the word. It processes meaning. When we scan a photograph or an abstract, we take in its import, not its number of pixels or bits. The regularities that we should seek and find in patterns of central neural activity have no immediate or direct relations to the patterns of sensory stimuli that induce the cortical activity but instead to the perceptions and goals of the subjects.

— Walter Freeman (1995)
In Freeman's ‘circular causality’ model, ‘the patterns of neural activity are self-organized by chaotic dynamics’ (Freeman 1995). The process is rooted in the fact that neurons have to “fire” every now and then (perhaps once a second or so), regardless of input from other neurons. If they don't act, they die; so they act spontaneously. The result is a continuous background noise even when the neural neighborhood is “at rest.” This noise is called chaotic because an observer (reading it via electroencephalogram) perceives no pattern in it. Hence the need for it to be inhibited in order for patterns to propagate themselves in the brain enough to appear to a conscious subject.
This process is the physical aspect of what William James called ‘the stream of thought’ – the semiotic medium through which all objects appear to us. James noted that ‘however complex the object may be, the thought of it is one undivided state of consciousness’ (James 1890, I.276), a point confirmed by researchers like Dehaene. Peirce's view is even more holistic: ‘the entire consciousness at any one instant is nothing but a feeling,’ and ‘a feeling is absolutely simple and without parts’ (CP 1.310, 1907). The phaneron in its Firstness is not even divided between subject and object, let alone divided into a number of objects. But an ‘instant’ is an abstraction from the flow of time, and a ‘state of mind,’ or a ‘state’ of the brain, is an abstraction from the continuity of experience. Analysis of a process depends on such abstractions.
The dynamic flow of the thought process probably explains why the ‘stream’ strikes most people as a natural metaphor for experience or consciousness. It also explains why remembering is more like recreation than retrieval. Since the self-organizing patterns tend to entrench the physical relationships among the neurons involved, they can recur the next time a similar “input” appears. However, the patterns are not “stored” explicitly in the way binary data can be stored in computer memory or storage media; they are stored rather as attractors in meaning space, tendencies which recur as “families” of patterns, any one of which could organize itself in response to a triggering event.
Encoding in the brain is much more analog than digital. A memorable experience probably correlates best with an attractor which organizes the ongoing interaction of a neural population into a pattern persisting long enough to be ‘felt’ as distinctive. The persistence of this pattern restructures the connectivity of the neurons involved, so that a similar pattern is more likely to recur when brain states are perturbed in some manner related to the original circumstances which laid down the memory trace. The form of this attractor (the information “stored” in it) is determined by the intrinsic and extrinsic constraints on brain dynamics, which are imposed simultaneously by “codes” (internalized social and semiotic systems) and by dynamic objects. Biological memory is very different from the ‘random access memory’ of a computer. Nevertheless, computer-based simulations of neural networks have been useful for developing and testing theories such as the global workspace model (Dehaene 2014, 181 ff.).
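The idea of memory as recreation from an attractor, rather than retrieval from an address, can be loosely illustrated with the simplest attractor-memory model in computational work, the Hopfield network. This is a hedged sketch, not a model of real cortex: the patterns and sizes are hypothetical, and the Hebbian rule used here only stands in for the ‘entrenchment’ of neural connectivity described above.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: entrench the correlations among 'neurons'.
    Each stored pattern becomes an attractor of the dynamics,
    not a record at an address."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def recall(W, state, steps=10):
    """Let the network settle: a perturbed or partial pattern
    is recreated by the dynamics, not looked up."""
    for _ in range(steps):
        h = W @ state
        # keep the previous value where the input is exactly zero
        state = np.where(h == 0, state, np.sign(h)).astype(int)
    return state

# two hypothetical 'memories' as +/-1 activity patterns
memories = np.array([[1, 1, 1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1]])
W = train(memories)

# a perturbed version of the first memory (one unit flipped)
cue = np.array([1, 1, -1, -1, -1, -1])
print(recall(W, cue))   # settles back into the first memory
```

The point of the sketch is that nothing resembling the memory is stored explicitly in `W`; the pattern recurs because the perturbed state falls back into its basin of attraction.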
Natural languages (and other semiotic systems) seem to share the ‘autopoietic’ mode of organization with living organisms. Before that word was introduced as a technical term, English words descended from the Greek verb ποιέω were used mainly in reference to the art of poetry. But the making of a poem is only a special case of poiesis, just as conscious making is a special case of the being and doing of an organic system.
In cases like this, the name of an art that takes conscious effort stands in for a whole range of behavior, most of which is spontaneous, unconscious, “natural.” We do this kind of cross-naming by association constantly, whether or not we know the technical term for it (metonymy) – so this in itself demonstrates the way natural languages self-organize. The “rules” governing our use of language function implicitly when we utter or interpret linguistic signs, enabling us to focus explicitly on the signs and sense their meanings. The mediation process is opaque to us, but in communication the sign is transparent, in the sense that we see through the sign to cognize its meaning.
A sign is transparent to the extent that its interpretant immediately replaces it in focal consciousness. When the act of interpretation becomes a deliberate or laborious translation, we become conscious that the message is coded. Polyversity is invisible when signs are transparent; the sign simply means what it says. But when you look at the sign as such, and make it the object of another sign representing the form of the first, you lose sight of its original object in order to gain awareness of semiosis, or the dynamics of the mediation process. In the terms used by the Latin logicians, you regard concepts as ‘second intentions,’ which are the ‘objects of the understanding considered as representations, and the first intentions to which they apply are the objects of those representations’ (Peirce, EP1:7).
It should be clear by now that perceptual coding as well as conceptual coding is constrained from the top down, with the whole bodymind (or its final cause) determining what its parts are doing. But semiosis, including perception, is also constrained from the bottom up, by the nature of its material causes, even down to the molecular scale. A transformational process at that level can also be regarded as a transfer of information.
When we speak about the transmission of information from one molecule to another, we mean a transfer of information inherent in the molecular configuration—in the linear sequence of the unit structure or in the three-dimensional disposition of the atoms. Since molecules cannot talk or engage in other human forms of communication, their method of transmitting information is straightforward: the emitter molecule makes the atoms of the receiver deploy themselves in an analogue spatial pattern. This can only happen within the electrostatic field around the molecules, which is effectively limited to distances of 3 to 4 × 10⁻⁸ cm in organic molecules. Thus, to transfer information from one organic molecule to another, the participating atomic components of the two molecules have to be situated within that distance. In other words, the two molecules must fit together like glove and hand or like nut and bolt, and this is the sine qua non for all molecular transfers of information.— Werner R. Loewenstein (1999, 31-32, italics his)
This distance limitation, and the ‘fit’ between the two molecules, are constraints intrinsic to the molecular level of interaction. Each level in the holarchy of semiosis has its own constraints which determine how the information process will play out at that level. As Deacon (2011, 317-18) says, ‘whether it is embodied in specific information-bearing molecules (as in DNA) or merely in the molecular interaction constraints of a simple autogenic process, information is ultimately constituted by preserved constraints.’
If you select the impact of light on the retina as the beginning of the visual process, then the whole process as described above is pre-conscious and not subject to deliberate control. Actually, though, what you see depends on where you look, which is often affected by deliberate decisions; and your decisions, whether conscious or not, obviously originate in your intent, not in your visual field. So the impact of light on your retina is better described as a ‘perturbation’ of your ongoing brain dynamics, rather than incoming information, even though its pattern (or its difference from prior patterns) does indeed inform the next cycle of visual processing.
As Hoffmeyer (2001) has suggested, brain dynamics (and biological dynamics generally) can be semiotic too, but they work by analog rather than digital coding – that is, the messages are not made up of discrete and relatively static elements, as they are in an alphabetic or genetic code. The distinction between the dynamic/experiential and the ‘symbolic,’ introduced above, now appears as a distinction between analog and digital coding. Evolution requires digital coding, which is symbolic in the sense that genomes and natural languages are symbolic: they are systems involving dynamic recreation and recombination of relatively static elements.
Nature had her own systems and rules long before humans came along to formulate them; in fact, the coming-along of humans was guided entirely by those rules, and they still function implicitly in our guidance systems. We might say that the key role of the human has been to co-author guidance systems with nature. In the genetic code, and in other codes such as the syntax of a natural language, the rules are followed automatically (unconsciously). Even rules that have to be learned, such as linguistic and social conventions, are for the most part learned by paying attention to what people say and do, and to objects of joint attention – not by paying attention to explicit rules. Moral codes, as guidance systems regulating the behavior of individuals, are probably ‘uniquely human’ (Boehm 2012, 89, 165) because they are both explicitly taught and tacitly learned. But all signs, whether implicit or explicit (or both), have to be interpreted in order to make a difference.
Even the genome has to be interpreted. Concerning that process, Hoffmeyer (1997) reminds us of ‘a simple but crucial fact: DNA does not contain the key to its own interpretation.’ Nor does any other symbol; interpretation is an interactive process. In this kind of symbolization, the interpretant is presumably not “mental” in the usual sense of that word. But that sense may reflect the human bias toward our own style of mentality. The replication process takes place at a physical scale far finer than we can relate to directly, and the developmental process which grows a bodily interpretant of the whole genome takes far longer than our thinking process. The differences in scale are even greater for the evolutionary process, but even that is a “mental” process by some definitions (such as Gregory Bateson's). If we call this meaning of mind “metaphorical,” or “figurative,” we are only saying that it differs from our habitual usage. There is no definite boundary between “metaphorical” and “literal” meanings.
The genetic code functions symbolically in the sense that each replica of the genotype will be read in a regular way by the next generation of the phenotype, although (as Peirce would say) the habit is natural, not conventional: these rules are not legislated but evolved. What molecular geneticists call a “gene” is a location on a chromosome (Dawkins 2004, 44). The chromosome is a one-dimensional (linear) meaning space, with a definite (chemical) structure which allows for various occupants (called “alleles”) at specific niches. The realization (embodiment, development, “meaning”) of the “message” carried by the chromosome varies with the alleles which actually occupy the niches, but the genotype only changes when the structure of the chromosome changes. This is the molecular basis of the ‘degeneracy’ which makes variation (and therefore evolution) possible.
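The ‘degeneracy’ of the genetic code shows up plainly in the standard codon table: several distinct triplets are read as the same amino acid, so some substitutions at a niche leave the realized ‘meaning’ unchanged. A minimal sketch, using a small fragment of the real RNA codon table (the assignments below are standard; everything else is simplified away):

```python
# a fragment of the standard RNA codon table (real assignments):
# all four GC* codons are read as alanine -- 'degenerate' coding
CODONS = {
    "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "AAA": "Lys", "AAG": "Lys",   # two codons, one amino acid
    "UGG": "Trp",                 # tryptophan's single codon
}

def translate(mrna):
    """Read an mRNA string three 'letters' at a time."""
    return [CODONS[mrna[i:i+3]] for i in range(0, len(mrna), 3)]

# two different 'messages' with the same realized 'meaning':
print(translate("GCUAAA"))   # ['Ala', 'Lys']
print(translate("GCGAAG"))   # ['Ala', 'Lys']
```

Two distinct occupants of the same niche decode to the same product, which is why such variation can accumulate without immediately changing the phenotype.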
Both encoding and decoding are semiotic transformations, just as all physical processes are transformations of energy (Odum). In the process of work, the energy “lost” to useless forms is called entropy; in the process of communication through a given channel, the useful part of the transference is called the signal while the useless part is noise. The transmission of a message always includes both. The signal is the meaningful or decodable part; the noise is (ideally) filtered out so that the receiver can process the message.
The filter that makes the distinction between signal and noise is actually part of the code: a sensation or phenomenon that is signal for one code can be noise for another, and vice versa. A code in this sense is primarily a systemic legisign, a complex of relational habits, of rules which function implicitly, whether they are explicable or not. The grammar of the language you are now using, for instance, is a set of rules for production and interpretation of symbols. All of them are coded, and every language has a grammar which is part of the code.
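The claim that the signal/noise filter belongs to the code can be sketched with two hypothetical ‘codes’ reading one and the same character stream: each treats as signal exactly what the other filters out as noise. (The stream and both codes are invented for illustration.)

```python
stream = "a7b2c9"   # one transmission, two possible codes

# code 1: letters are signal, digits are noise
signal_1 = "".join(ch for ch in stream if ch.isalpha())

# code 2: digits are signal, letters are noise
signal_2 = "".join(ch for ch in stream if ch.isdigit())

print(signal_1)   # abc
print(signal_2)   # 729
```

Nothing in the stream itself marks any character as noise; the distinction is made entirely by the filter, i.e. by the code doing the reading.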
The word grammar comes down to us from the Greek gramma, which in fact was the word translated as ‘letter’ in the King James version of Paul's famous remark that ‘the letter killeth but the spirit giveth life’ (2 Corinthians 3.6); the RSV translation says ‘the written code kills, but the Spirit gives life.’ This might remind us that the oldest meaning for code given in the OED is ‘a digest or systematization of rules’; another is ‘a collection of sacred writings.’ In laws and scriptures we find a code of conduct that can be consciously obeyed (or disobeyed) – an external guidance system. Providing codes of conduct is obviously an important function of religions and other social institutions. But as these codes are themselves coded (like all messages), their guidance can be no better than the interpretive process by which they are decoded into actual conduct.
In the 20th century, the concept of coding found applications in information theory, cybernetics, genetics, and computer science. Programmers refer to ‘machine code’ (the binary “language” which directly controls the computer's behavior or output) and ‘source code’ (the higher-level language in which they write their instructions to the computer). ‘Compiler’ software is supposed to translate source code unambiguously into machine code so that the computer does what the programmer wants. In this respect, the code which interprets source-code input as machine-code output is a degenerate kind of code called a cipher, which systematically maps one set of symbol-elements onto another, as a phonetic alphabet maps sounds (phonemes) onto letters. Morse code is a cipher system that translates back and forth between letters of the alphabet and sets of ‘dots’ and ‘dashes’ (short or long bits of signal sent over telegraph wires).
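As a cipher, Morse code is nothing more than an explicit, reversible mapping between two sets of symbol-elements. A minimal sketch using a few real Morse assignments (only a handful of letters are included here):

```python
# a fragment of the real International Morse Code table
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-", "A": ".-", "N": "-."}
DECODE = {v: k for k, v in MORSE.items()}   # a cipher maps both ways

def encipher(text):
    return " ".join(MORSE[c] for c in text)

def decipher(signal):
    return "".join(DECODE[s] for s in signal.split(" "))

msg = encipher("SOS")
print(msg)             # ... --- ...
print(decipher(msg))   # SOS
```

Note that deciphering here is purely mechanical substitution; in the terms of the next paragraph, the result is still not decoded ‘in the full sense of the word’ until someone understands it.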
Ciphers, unlike perceptual or linguistic codes, cannot function implicitly until they have first been made explicit, or “programmed,” by an external user. Cipher systems can be used to encrypt messages so that only someone in possession of the “key” can decode and read them. But a deciphered message is not decoded in the full sense of the word until it has been understood.
Bateson's assertion that ‘all messages are coded’ makes two main points about any process of transmission. First, what is received is a transform of what is sent, and thus cannot be identical to it: the sending and the receiving are different events, separated in time and joined by mediation. It should be obvious that if a message such as any sentence in this book ‘encodes’ the author's experience, the reader's ‘decoding’ cannot produce an experience which is exactly the same as the author's. However, if the process deserves to be called communication, there must be some regular relationship between experience and sign-system. The regulating factor can be called a code, and that is Bateson's second point. Taken together, the two points imply the simplexity of all semiotic systems.
As explained in Chapter 11, the need to simplify is entangled with the semiotic processes which are characteristic of cognition and life itself. What we call the ‘genetic code,’ for instance, is greatly simplified compared to the lives of the organisms who reproduce themselves by means of it. We saw in Chapter 3 that the human genome does not contain a complete description of a human being; rather it decodes into the basic transformations necessary to begin the process of growing a human being within the appropriate matrix (the mother's womb). A gene can specify a protein to be constructed by spelling out a linear chain of amino acids, but it doesn't tell that protein how to fold into the 3D shape it must assume in order to play its role in the developing system. That folding is an orthograde process that “just happens” – no order needs to be imposed on it from without; the constraints on the process are intrinsic.
Loewenstein (1999, 114) identifies ‘two realms’ in the living cell, genome and soma, ‘one maximizing conservation and the other maximizing the transfer’ of information. The first is ‘shielded from the ordinary hustle and bustle in the world (in higher organisms the DNA is secluded in the cell nucleus).’ The linear structure of DNA is ideal for conservation, while the 3D structure of a protein is essential to the somatic functioning which effects the transfer.
The molecules in both biological realms carry information—an RNA, a protein, or a sugar is as much an informational molecule as DNA is. The quantities they carry individually are different, to be sure, but if we could weigh the total amounts of core information in the molecules of the two realms, they would about balance—the two realms are but the flip sides of the same information.— Loewenstein (1999, 115)
Semiosis inside the cell is protected from degradation not by channeling but by encryption – ‘sending information in a particular form that only the intended receiver has the key to’ (Loewenstein 1999, 143). Of course no conscious intention (or attention) is needed, for the information is “protected” simply by the fact that only the “receiver” has the right molecular shape for interacting with it.
In cellular communication, the codescript is in the form of a special molecular configuration, a spatial arrangement of atoms that is singular enough so as not to be confused with other arrangements of atoms that might occur elsewhere in the system. That three-dimensional singularity is the nub of biological coding.— Loewenstein (1999, 142)
In language use, explicit or conventional codes must come to function implicitly (habitually, post-consciously) just as natural codes do in order to serve the guidance system well. How does this happen? We can begin to answer this question with one of Wittgenstein's thought-experiments about ‘language-games’:
Suppose I had agreed on a code with someone; “tower” means bank. I tell him “Now go to the tower”—he understands me and acts accordingly, but he feels the word “tower” to be strange in this use, it has not yet ‘taken on’ the meaning.— Wittgenstein (PI, IIxi, 214)
Here we have a kind of cipher: the agreement is that the word ‘tower’ will be substituted for the word ‘bank’ – in other words, the niche in meaning space usually filled by ‘bank’ will now be filled by ‘tower.’ Eventually, if the agreement is consistently adhered to, this new usage would become “usual” (habitual). But how did the usage of ‘bank’ in that niche become usual in the first place? It must have been learned by listening to (interacting with) other language users. There is no “natural” connection between a bank and the word ‘bank,’ any more than there is between a bank and the word ‘tower.’ Since the connection is made by consensus, and not inferred from nonlinguistic experience with banks, we call it conventional or sometimes arbitrary. Wittgenstein's point here is that the connection comes to feel “natural.” But in the case of a cipher, where one symbol is substituted for another that is already in place, it takes a while for the word to ‘take on’ the meaning (i.e. for the usage to feel natural).
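Wittgenstein's agreement amounts to a one-word cipher: an explicit substitution at a single niche in meaning space, while the rest of the lexicon goes on functioning as usual. A sketch (the dictionary and function names are of course hypothetical):

```python
# the explicit agreement: 'tower' is to be read as 'bank'
AGREEMENT = {"tower": "bank"}

def interpret(utterance):
    """Decipher the agreed substitution; every other word
    must go on functioning implicitly (i.e. pass through unchanged)."""
    return " ".join(AGREEMENT.get(w, w) for w in utterance.split())

print(interpret("Now go to the tower"))   # Now go to the bank
```

The `get(w, w)` fallback is the telling detail: only the one agreed-upon niche is handled explicitly, and everything else relies on habits already in place.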
When a child is learning her first language, on the other hand, that “natural” feeling is there from the beginning, and the realization that word meanings are conventional comes much later (if at all!) in the development process. As Annie Dillard (1974, 104) tells it:
When I was quite young I fondly imagined that all foreign languages were codes for English. I thought that “hat,” say, was the real and actual name of the thing, but that people in other countries, who obstinately persisted in speaking the code of their forefathers, might use the word “ibu,” say, to designate not merely the concept hat, but the English word “hat.”
That's because the first, ‘natural’ consensus about the meaning of a word is tacit. In Wittgenstein's scenario, the “code” is a cipher because the usage agreement is made explicitly. When a word has ‘taken on its meaning,’ or (more generally) when the use of a code or symbol is habitual, it functions implicitly.
When we say that the meaning of a symbol is “conventional” or “arbitrary,” as we often do to distinguish that kind of sign from others, we do not necessarily mean that the meaning was ever explicitly assigned. (It could just as well have been formed by tacit consensus or frozen accident.) We only mean that some other symbol could have filled the same niche. This does not imply that we could easily plug another symbol into that niche in the current meaning space: it would take time for the new symbol to ‘take on’ its meaning – and in the meantime, the other symbols implicated with that niche would have to go on functioning naturally (i.e. implicitly). For instance, in the actual interpretation of the sentence ‘Now go to the tower,’ as long as the usage of ‘tower’ remains ‘strange,’ all the other words in the sentence have to function implicitly.
To make anything explicit requires an entire code or symbol system to be functioning implicitly.
People use words (and other symbols) to indicate distinctions, articulations and combinations in the body of experience. How we use one term will motivate specific uses of others, in order to maintain the organization (the integrity) of meaning space: its parts, once we divide it into parts, are related to one another so that definitions (identities, meanings, …) are interdependent. When a term is habitually attached to a given set of these functions and relationships, we naturally think of the term as having a definite meaning (whether we can specify it or not). These habit-sets can be broken, or at least loosened up, with respect to a specific term, or a few terms, so that “their meaning” can be questioned – but only if the bulk of the terms used in the questioning process maintain their habitual meanings. We can question anything, but not everything at once, for the articulation of questions depends on the currently unquestioned.
Every act of meaning is part of a semiosic process: occupation of a meaning space also occupies time and involves temporal coding. Thelen and Smith (1994, 140) summarize research which provides ‘compelling support for the dynamic and self-organizing nature of mental activity – that categories of perception and action are assembled from multiple brain sites and interconnections on the basis primarily of temporal and not spatial codes.’
Although the world contains information for the organism, the information is always in relation to the organism's past and current functioning in the world. The problem for the developing nervous system, then, is to make sense of the world with sufficient specificity to know how to correctly act within an information-rich environment, and at the same time, be able to generalize broadly to recognize novel objects, even from very few instances of that category.— Thelen and Smith (1994, 144)
Perception, cognition, information and communication are all semiosic processes involving bodyminds which actualize the generic meaning cycle, and thus carry forward the semiotic spiral, in various ways.
An intentional communication process can be mapped onto our meaning-cycle diagram as follows: First let's say that my “idea,” or the thought I intend to transmit to you, is W. I encode it for transmission, you receive it through perception, then you decode it to produce a formulation or ‘model’ (M) of my idea. But any information you get from this message must be some modification of your internal guidance system, which in turn will determine what you do or say next, your current action or recoding. Now, if we are in a conversation loop, it's your turn to encode and transmit your idea (presumably informed by mine) as your next utterance. But all of this is going on behind the scenes as we both attend to the content or “matter” of the conversation.
When communication is intentional, the ‘encoding’ stage of this recursive process is a loop in itself. W is the experience informing my intent, and M is the symbol system common (more or less) to utterer and interpreter. Parts of M are prompted by this experience to generate the coded message. This loop is a mirror image of the main meaning cycle: here W (the ‘felt sense’) is internal or “private” while M (the language) is external or “public.” Your ‘decoding’ of the message is of course another loop, where your experience of my message is your W. Your interpretation of it cycles through your conceptual and linguistic systems (M) to confirm or correct our pragmatic feeling for our shared situation. These loops within loops ensure that our respective symbol and guidance systems are thoroughly entangled. Meanwhile, to the extent that we are visible to each other at the moment, much of our meaning is transmitted and synchronized by “body language.”
To perfect the communicative loop – to understand each other perfectly without ambiguity – we would have to use exactly the same rules in the encoding and decoding (modeling) processes, or the decoder must “run” the encoder's very “program” in reverse. But real time is irreversible, and the necessary difference that makes each of us Second to the other guarantees that our models could never match perfectly, even if we had any direct way to compare them. Thus our ‘tendency to analyze reading in terms of disembodied decoding of inherent meanings’ (Boyarin 1993, 3) can be quite misleading. Meanings are always swimming in a continuum of indeterminacy.
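That the receiver's model can never match the sender's ‘idea’ exactly can be sketched as a lossy encode/decode loop: any shared symbol system is coarser than experience, so decoding recovers only an approximation. Everything here (the quantization scheme, the ‘vocabulary size,’ the numbers) is a hypothetical illustration, not a model of actual communication.

```python
def encode(idea, vocab_size=8):
    """Sender: map a continuous 'felt sense' onto a small
    shared symbol system (coarse quantization)."""
    return round(idea * vocab_size)

def decode(symbol, vocab_size=8):
    """Receiver: reconstruct a model of the idea from the symbol."""
    return symbol / vocab_size

W = 0.637            # the sender's 'idea' (hypothetical)
M = decode(encode(W))
print(M)             # 0.625 -- close to W, but never identical
```

However fine the shared vocabulary is made, the model M only approaches W; the residual difference is the ‘continuum of indeterminacy’ in which meanings swim.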
Decoding or recoding any symbolic message requires not only background knowledge of the code(s), but also background knowledge of the context – which is also both natural and conventional, both implicit and explicit, as the next chapter will show.
Next chapter: Context and Content →
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.