Spinning

In Joyce’s Ulysses, Stephen Dedalus (metaphorical son of the archetypal ‘artificer’) addresses himself as ‘weaver of the wind.’ Weaving and spinning (as threadmaking) are both handy metaphors for the construction of meaning. But there is another sense of “spin” (based on a different metaphor) that we need in any good account of how language works.

The metaphor of spin or bias refers vaguely to a speaker’s more or less subtle (and often unconscious) attempts to manipulate the emotional interpretant while maintaining some semblance of truth in the sign-object relation. When a word is used frequently with a special emotional overtone, the spin tends to stick to the word as an undertone persisting in other uses. Some examples:

“Progress,” as a noun, puts a positive spin on the idea of a progression, i.e. a forward motion: we generally use it in reference to a sequence in which later points (or states) are improvements over earlier points in the sequence. The same happens with “success”: it generally refers to a positive outcome of a succession of acts.

We see the same pattern in the evolution of “happy” in English. Things happen; if the outcome is positive for us, if our luck is good, then the events are “happy” or “lucky”; and we describe our own resulting state in the same terms. (In English, the usage of “happy” as referring to events rather than emotional states has almost disappeared, but we still use “lucky” both ways.) Similarly “fortune” can be kind or unkind, and someone who “tells your fortune” may bring good news or bad news, but if you are “fortunate,” that means the news is good.

Other words have gone in the opposite direction. “Fate” usually has ominous overtones, probably because it is beyond our control, and a “fatal” event, in current usage, is about as negative as anything can be.

A successful translation from one language into another would translate the spin of each phrase as well as its more “objective” reference – but the shifting relationships between sense and spin are rarely parallel across languages, and differ even from one speaker to another. This is yet another reason why a perfectly “successful” translation is an ideal that can hardly be realized.

Do we understand one another?

Anything worth saying is worth saying in more than one way. There is no ‘best’ way of saying it, since the reader is going to interpret the utterance in ways that are not fully predictable.

Consider for instance Ray Jackendoff’s (1992) essay on ‘The Problem of Reality,’ which explains why ‘constructivist’ psychology is a more viable model of our relationship to external reality than analytical philosophy, which tries to base meaning on ‘truth-conditions.’ (He does not consider the Peircean alternative that the sense of reality is grounded in a ‘dyadic consciousness.’) Toward the end of the essay he takes up an objection to the constructivist view, stated as a more or less rhetorical question:

If reality is observer-relative, how is it that we manage to understand one another? (This objection is essentially Quine’s (1960) ‘indeterminacy of radical translation,’ now applied at the level of the single individual.)

— Jackendoff (1992, 173)

Jackendoff gives two answers to this. Translating the first into my own terms: assuming that ‘we’ are both human, our common biological nature makes it a good bet that our meaning spaces are similar. (Jackendoff uses ‘combinatorial space’ where i use ‘meaning space.’)

The second answer to this objection is that we don’t always understand each other, even when we think we do. This is particularly evident in the case of abstract concepts. Quine’s indeterminacy of radical translation in a sense does apply when we are dealing with world views in areas like politics, religion, aesthetics, science, and, I guess, semantics. These are domains of discourse in which the construction of a combinatorial space of concepts is underdetermined by linguistic and sensory evidence, and innateness does not rush in to the rescue.

Since we are now at work in one of those domains of discourse, it is possible that Jackendoff’s ‘combinatorial space’ is more different from my ‘meaning space’ than i think it is. Investigating that possibility would involve exploring the context of each usage more fully, always guided by Peirce’s ‘maxim of pragmatism.’

Energetics of information

Use it or lose it, they say. Another side of this maxim is that if you try to keep too much “information,” you degrade its usefulness.

Howard Odum describes a version of the meaning cycle operating on a global level:

Because information has to be carried by structures, it is lost when the carriers disperse (second energy law). Therefore, emergy is required to maintain information.… So in the long run, maintaining information requires a population operating an information copy and selection circle …. The information copies must be tested for their utility. Variation occurs in application and use because of local differences and errors. Then the alternatives that perform best are selected, and the information of the selected systems is extracted again. Many copies are made so that the information is broadly shared and used again, completing the loop. In the process, errors are eliminated, and improvements may be added in response to the adaptation to local variations.

— H.T. Odum (2007, 88)

The information process, which is the ‘application and use’ of “stored” or potential information to actually inform the guidance system, is the crucial part of this copy and selection circle; ‘the ability to retrieve and use information is rapidly diluted as the number of stored information items increases. To accumulate information without selection is to lose its use’ (Odum 2007, 241).

Idiomatics

While there is no private language (Wittgenstein), every natural language abounds in idioms (from the Greek word meaning private) – expressions whose specific usage is not derivable from the logos of the language. In fact any particular usage of an expression may occupy any position along a spectrum running from fully public to fully private. Idioms, jargon and slang are perhaps signs of the propensity of cultures to articulate themselves as subcultures, and of people to identify themselves as members of a group by adopting the group’s articulation habits.

Looking at language itself is like looking at a mirror rather than at the reflection in a mirror, or looking at a window rather than through it: the phenomenon of language ceases to be transparent and we focus on its dynamics rather than those of “the world.” This can raise its automatic functions into consciousness and restore the immediacy of the habits we have taken for granted.

For instance, we can look at familiar idioms which we normally use automatically, and try to trace their peculiar logic back to the root. Consider the English expression “back and forth”: why is it not “forth and back”? Do we normally imagine that kind of motion as beginning with the return? Maybe we do: perhaps an irreversible “going forth” is the default kind of motion, as it were, and only when this is interrupted by a “coming back” do we notice a distinctive pattern – which we then call “back-and-forth.”

But then why do we say that someone tumbles “head over heels”? Taken “literally,” this expression would be equivalent to “upside up”; “upside down” (or “downside up”) would be better expressed as “heels over head.” Well, maybe the idiom is dominated by the dynamic (rather than the static positional) sense of over – as in “fall over,” “turn over” etc. – and we simply ignore the order of the nouns.

There’s an element of chance in any evolutionary process, including historical changes of word meaning; idiomatic and conventional expressions are full of ‘frozen accidents.’ Some accidents are more likely, more ‘motivated,’ than others (Lakoff 1987, Sweetser 1990), but all are unruly to some degree, as is life itself.

Transparency

When we say the meaning of the text is ‘clear,’ we are testifying to an experience described metaphorically as ‘seeing the meaning through’ the text. By this metaphor the text is transparent. This occurs when the act of reading is effortless, so that we are not conscious of the text as ‘coded’ or of interpretation as such. When we are conscious of the text as coded – usually because we are unable to decode it through an unconscious meaning process – we say that the text is opaque.

If someone points to a text that is transparent for you and asks you ‘What does this mean?,’ your first impulse may be to say that it means exactly what it says. But then you realize that such a response is not helpful for someone to whom the text is opaque; and only someone in such a predicament would ask such a question. In order to deal with this predicament you have to raise the decoding (meaning) process into consciousness somehow. And in doing this, you sacrifice the transparency of the text. This sacrifice is motivated by compassion. (And so is any genuine question about the meaning of the text – for the one asking the question is motivated by trust that the text is meaningful although it is still opaque to him.)

Even a text which has been transparent may lose its transparency if the reader notices an ambiguity in it. Perceiving an ambiguity entails having to make a conscious choice, and thus makes us conscious of the text as coded. If you manage to recover the transparency of the text without losing its ambiguity, the text has gained for you an added dimension of meaning. Thus the ‘fall’ from transparency into opacity is ‘redeemed’ by a deeper, richer transparency.

Use and mention

To make anything explicit requires an entire code or symbol system to be functioning implicitly.

While a sign is functioning symbolically within your act of meaning – i.e. while it is in actual use – you can’t pay attention to, or even mention, its function. As Douglas Hofstadter put it (modeling his epigram after a familiar saying), you can’t have your use and mention it too. Likewise Michael Polanyi: ‘we cannot look at our standards in the process of using them, for we cannot attend focally to elements that are used subsidiarily for the purpose of shaping the present focus of attention’ (Polanyi 1962, 183). In scientific practice, you can’t make your measurement (observation) and describe your measuring device at the same time:

even though any constraint like a measuring device, M, can in principle be described by more detailed universal laws, the fact is that if you choose to do so you will lose the function of M as a measuring device. This demonstrates that laws cannot describe the pragmatic function of measurement even if they can correctly and completely describe the detailed dynamics of the measuring constraints.

— Pattee (2001)

Likewise in the realm of cognition or experiencing, of which science is the public expression: if the creative or forming power could emerge visibly from behind the forms which are its expression, then it could not be seen as a form; the seer would instead be ‘blinded by the light.’ As we have already heard from Thomas 83: ‘The light of the Father will reveal itself, but his image is hidden by his light.’ Or as Moses Cordovero put it, ‘revealing is the cause of concealment and concealment is the cause of revealing’ (Scholem 1974, 402).

Feel the concept

Meaning is formed in the interaction between felt experiencing and something that functions symbolically. Feeling without symbolization is blind; symbolization without feeling is empty.

— Gendlin (1962/1997, 5)

Gendlin’s second sentence closely resembles a famous Kantian statement, quoted as follows by Cassirer (1944, 56): ‘Concepts without intuitions are empty; intuitions without concepts are blind.’ Is Gendlin then repeating something already said by Kant? That depends on whether ‘intuitions’ are equivalent to ‘feeling’ and ‘concepts’ to ‘symbolization.’

Does it all mean?

As the human conversation with nature, science is our ongoing attempt to decode the message sent to us constantly by Nature, which message is the phenomenal world. This conversation will continue as long as our actions into the natural world have unexpected consequences. Skeptics may well doubt whether that message really means anything, but a scientist as such cannot be such a skeptic.

Nobody can doubt that we know laws upon which we can base predictions to which actual events still in the womb of the future will conform to a marked extent, if not perfectly. To deny reality to such laws is to quibble about words. Many philosophers say they are ‘mere symbols.’ Take away the word mere, and this is true. They are symbols; and symbols being the only things in the universe that have any importance, the word ‘mere’ is a great impertinence.

— Peirce (EP2:269)

Peirce does not say that symbols are the only realities – quite the contrary. He says that they alone have importance, which implies significance, or meaning; which in turn implies that reality is their Object.

Degeneracy, codes and laws

Different symbols can fill the same niche in meaning space, and two different acts of meaning can find expression in the same text. This inherent ambiguity or ‘polyversity’ of language is rooted in our biological heritage as complex adaptive systems.

A clear-cut example is seen in the genetic code. The code is made up of triplets of nucleotide bases, of which there are four kinds: G, C, A, and T. Each triplet, or codon, specifies one of the twenty different amino acids that make up a protein. Since there are sixty-four different possible codons – actually sixty-one, if we leave out three stop codons – which makes a total of more than one per amino acid, the code words are degenerate. For example, the third position of many triplet codons can contain any one of the four letters or bases without changing their coding specificity. If it takes a sequence of three hundred bases, or one hundred codons, to specify a sequence of one hundred amino acids in a protein, then a large number of different base sequences in messages (approximately 3¹⁰⁰) can specify the same amino-acid sequence. Despite their different structures at the level of nucleotides, these degenerate messages yield the same protein.

— Edelman (2004, 43-4)
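To see where a figure like 3¹⁰⁰ comes from, here is a minimal sketch in Python (my own illustration, not Edelman’s): it tallies the synonymous codons of the standard genetic code – sixty-one sense codons shared among twenty amino acids, roughly three apiece – and multiplies them out. The codon counts are standard biochemistry; the short peptide at the end is an arbitrary, hypothetical example.

```python
# Sketch of the degeneracy arithmetic behind "approximately 3^100".
import math

# Number of synonymous codons per amino acid in the standard genetic code
# (61 sense codons in all; the 3 stop codons are omitted).
SYNONYMOUS_CODONS = {
    'Leu': 6, 'Ser': 6, 'Arg': 6,
    'Ala': 4, 'Gly': 4, 'Pro': 4, 'Thr': 4, 'Val': 4,
    'Ile': 3,
    'Phe': 2, 'Tyr': 2, 'His': 2, 'Gln': 2, 'Asn': 2,
    'Lys': 2, 'Asp': 2, 'Glu': 2, 'Cys': 2,
    'Met': 1, 'Trp': 1,
}
assert sum(SYNONYMOUS_CODONS.values()) == 61  # sanity check: 64 - 3 stops

def degenerate_encodings(protein):
    """Count the distinct codon sequences that encode the given protein,
    written as a list of three-letter amino-acid names."""
    count = 1
    for residue in protein:
        count *= SYNONYMOUS_CODONS[residue]
    return count

# On average, about 3 codons per amino acid ...
average = sum(SYNONYMOUS_CODONS.values()) / len(SYNONYMOUS_CODONS)
print(f"average synonymous codons per amino acid: {average:.2f}")
# ... so a 100-residue protein has on the order of 3^100 encodings:
print(f"3^100 is about 10^{100 * math.log10(3):.0f}")

# Exact count for a hypothetical six-residue peptide:
peptide = ['Met', 'Leu', 'Ser', 'Gly', 'Lys', 'Trp']
print(f"{'-'.join(peptide)}: {degenerate_encodings(peptide)} distinct DNA messages")
# 1 * 6 * 6 * 4 * 2 * 1 = 288 different nucleotide sequences, one protein
```

Even this toy peptide admits 288 structurally different messages with identical meaning at the protein level – the ‘polyversity’ of the genetic code in miniature.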

This biological usage of the term degenerate is quite different from the mathematical sense used by Peirce (polyversity strikes again!); here degeneracy refers to the ability of different structures to serve the same systemic function. As Ernst Mayr (1988, 141) points out, this complicates evolutionary theory because it means that mutations consisting of base-pair substitutions can be ‘neutral’ with respect to selection. But this inconvenience is not one we could dispense with, as Edelman goes on to explain:

Degeneracy is a ubiquitous biological property. It requires a certain degree of complexity, not only at the genetic level as I have illustrated above, but also at cellular, organismal, and population levels. Indeed, degeneracy is necessary for natural selection to operate and it is a central feature of immune responses. Even identical twins who have similar immune responses to a foreign agent, for example, do not generally use identical combinations of antibodies to react to that agent. This is because there are many structurally different antibodies with similar specificities that can be selected in the immune response to a given foreign molecule.

What Edelman calls degeneracy is called ‘multiple realizability’ by Deacon (2011, 29), who gives the example of oxygen transport in circulatory systems. This is realized by hemoglobin in humans and other mammals, but by other molecules in (for instance) clams and insects.

For us humans, degeneracy is perhaps most interesting for its role in generating conscious experience. Neural processes related to the experience of having a world can be analyzed in terms of ‘maps,’ and the relations among these maps turn out to be degenerate. Visual experience alone may involve dozens of them, cooperating (in Edelman’s theory) by means of

mutual reentrant interactions that, for a time, link various neuronal groups in each map to those of others to form a functioning circuit.… But in the next time period, different neurons and neuronal groups may form a structurally different circuit, which nevertheless has the same output. And again, in the succeeding time period, a new circuit is formed using some of the same neurons, as well as completely new ones in different groups. These different circuits are degenerate – they are different in structure but they yield similar outputs …

— Edelman (2004, 44-5)

By its very nature, the conscious process embeds representation in a degenerate, context-dependent web: there are many ways in which individual neural circuits, synaptic populations, varying environmental signals, and previous history can lead to the same meaning.

— Edelman (2004, 105)

Even within a given context, there are many ways for implicit guidance to become explicit. So naturally different texts can yield the same meaning, and different verbal expressions of belief can yield the same practice.

Another aspect of this degeneracy is that different theories may articulate the same implicit models: for example, Edelman’s ‘theory of neuronal group selection’ appears to have the same significance as Bateson’s theory of ‘the great stochastic processes’: in each case evolution and learning are processes which differ only in time scale. ‘In this theory,’ says Edelman,

the variance and individuality of brains are not noise. Instead, they are necessary contributors to neuronal repertoires made up of variant neuronal groups. Spatiotemporal coordination and synchrony are provided by reentrant interactions among these repertoires, the composition of which is determined by developmental and experiential selection.

— Edelman (2004, 114)

The necessity of ‘variance and individuality’ is not confined to brains. ‘The biologist is constantly confronted with a multiplicity of detailed mechanisms for particular functions, some of which are unbelievably simple, but others of which resemble the baroque creations of Rube Goldberg’ (Lewontin 2001, 100). Degeneracy rules – and not only in a figurative sense, for it plays a crucial role in ‘the control hierarchy which is the distinguishing characteristic of life’ (Pattee 1973, 75). This differs from the hierarchy of scale in that it ‘implies an active authority relation of the upper level over the elements of the lower levels’ (75-6). This relation is also known as ‘supervenience’ or ‘downward causality’ (Pattee 1995), which is part of Freeman’s ‘circular causality,’ as it ‘amounts to a feedback path between levels’ (Pattee 1973, 77).

The development process in a multicellular organism offers an example. Each cell carries a copy of the entire genome in its nucleus; how does it manage to differentiate into a liver cell, or a blood cell, or a specific type of neuron? It receives ‘chemical messages from the collections of cells that constrain the detailed genetic expression of individual cells that make up the collection.’ Like all messages, these are coded, but the coding/decoding function is not to be found in the structure of the molecules carrying the message, whether they be enzymes, hormones or DNA. Likewise the control function is not found in any special qualities of those elements of the system which appear to be in ‘control’: rather it is found at ‘the hierarchical interface between levels’ (Pattee 1973, 79). The control function is degenerate in that the choice of particular elements to exercise control is to some degree arbitrary, and a different choice does not make a significant difference in the control itself.

Of course, this is also the general nature of social control hierarchies. As isolated individuals we behave in certain patterns, but when we live in a group we find that additional constraints are imposed on us as individuals by some ‘authority.’ It may appear that this constraining authority is just one ordinary individual of the group to whom we give a title, such as admiral, president, or policeman, but tracing the origin of this authority reveals that these are more accurately said to be group constraints that are executed by an individual holding an ‘office’ established by a collective hierarchical organization.

— Pattee (1973, 79)

The control function is not a property of, and does not belong to, the individual who executes it. When someone tries to appropriate that function for himself, we call him a tyrant – a person who tries to control others for his own sake instead of serving the higher level of organization.

The polysemy of the term hierarchy is rooted in that of the Greek ἀρχή, which can mean either ‘a beginning, origin’ or ‘power, dominion, command’ (LSG). In English, ‘first’ has a similar ambiguity: it can denote either one end of a time-ordered series or the ‘top’ spot in a ranking order.

In speaking of ‘control functions,’ we often need to distinguish between two kinds of ‘law’ or ‘rule,’ which we may call logos and nomos. The logos (or ‘logic’) of a system is its self-organizing function, while nomos is ‘assigned’ (LSG) artificially rather than arising naturally. Nomos is the kind of law which is formulated and ‘ordered’ so that it can be obeyed or ‘observed,’ while the ‘laws of nature’ are formulated (by science) in order to explain why the universe does what it is observed to be doing already. The distinction is denied by creationists, for whom nature itself is artificial (having been intentionally designed and manufactured by a God whose existence is prior to it), and perhaps by some who consider every formulation of science to be a disguised assertion of power. And the distinction is indeed problematic, because nomos in Greek can mean ‘usage’ or ‘custom’ as well as ‘law’ and ‘ordinance’ (LSG again). Are the ‘rules’ of a ‘natural’ language nomoi or logoi? I would say that the deepest grammatical rules are examples of logos, while the more ephemeral standards of usage are much more arbitrary, and therefore examples of nomos, even before they are formalized. But the boundary between them is fuzzy. You could put the question this way: How natural is human nature?