Articulate emergence

Just as evolution is articulation of the biosphere, and development articulation of the body, perception is articulation of the phaneron.

When a fox emerges from its den, it is no longer inside the den. When a part emerges from the whole, it is separated from the whole. But this is not so with emergent phenomena. When something “emerges” from the phaneron, it is thereby included in the phaneron. In perception, figure emerges from ground, and thus the phaneron articulates itself. Likewise continuous practice articulates itself by particular acts.

In the same manner as tonally discrete music, the body-surrounding fit is possible only through discretization of the continuum of possibilities, both in the perception and the action relation. Perception possesses a highly constrained selection of possible environment stimuli – ranging from simple cases like the possibility of sensing only groups of specific chemicals and to more complicated cases like the necessary limit of discrimination ability in any continuous perception spectrum (visual, auditive, tactile, etc.). In short, perception and action both possess a certain granularity which allows them to be pragmatically efficient at the price of a certain imprecision. This imprecision, it is evident, implies certain limitations – larger or lesser – on the perfection of the organism-environment fit. Both more perceptual precision (which is also energetically more expensive), on the one hand, and more perceptual economy (which is also less precise), on the other, may be favored by selection, according to the specific conditions in the single case. In semiotic terms, this implies that in the functional circle, a tension is at stake in embodied semiosis between semiotic complexity on the one hand and semiotic economy on the other.

— Stjernfelt 2007, 262

No watch

As related in Chapter 9, William Paley’s Natural Theology used the watch analogy to argue that nature must have a designer because it was so complicated, and the parts so admirably suited to their functions. Richard Dawkins (1987, 5), admiring Paley’s ‘beautiful and reverent descriptions of the dissected machinery of life,’ went on to argue that ‘the only watchmaker in nature is the blind forces of physics.’ But Dawkins actually carries on the Cartesian (and Paleyan) tradition of viewing animals as complicated machines, based on the ‘misleading engineering metaphor in which independent parts preexist an assembled whole. In biologically evolved systems, however, the integration and complementarity of “parts” come as natural consequences of the progressive differentiation of an antecedent less differentiated whole structure, both phylogenetically and embryologically’ (Deacon 2003, 105).

According to Depew and Weber (1995, 477-8), Dawkins does not offer much of an improvement over Paley.

… Paley’s watchmaker does not completely disappear in Dawkins’s version of evolutionary theory (Dawkins 1986). He is said only to be a ‘blind watchmaker.’ From our perspective, however, there is no watchmaker, blind or sighted, for the simple reason that there is no watch. Natural organization is not an artifact, or anything like it, but instead a manifestation of the action of energy flows in informed systems poised between order and chaos. Directionalities, propensities, and self-organization in a thermodynamic perspective actually exclude the notion that evolution is oriented toward an end in the intentional or design sense. The thermodynamic perspective allows biological adaptedness precisely by excluding design arguments. Directionality of informed, dissipative natural processes excludes directedness.

You could say that organic and mechanical are two ways of looking at systems, rather than two kinds of systems. We can look at some systems either way: we are capable of ‘getting personal’ with machines, or conversely of treating organisms as inanimate objects. But we have two ways of looking at systems because there is a real difference between the two kinds, and most systems fall naturally into one or the other. Mechanical systems such as a watch, a missile guidance system, or the ignition system of a car are relatively simple to map because they were actually built from maps in the first place – that is, they were deliberately designed and engineered to serve some conscious purpose. You, on the other hand, are much more complex, having self-organized rather than having been artificially assembled from pre-existing parts for predetermined purposes. As Gendlin (APM IV-A.c) puts it, ‘there are no simply separate parts of the body … a part changes and may disintegrate if the processes (subprocesses and larger processes) in which it is involved stop and never resume. Parts of the body are derivative from process-events.’

Mind evolving

According to Peirce’s cosmological hypothesis, evolution is a continuing process of growth which accounts for both the diversity and the regularities we observe in a universe both mental and physical.

The regularities or ‘laws’ of nature result from the habit-taking tendency, which tends toward the extreme ‘crystallization’ of form which in physics we call ‘matter.’ But the behavior patterns of the physical universe are never completely determinate, the laws of nature never absolutely exact, because the habit-taking tendency is countered and complemented by a spontaneity which keeps the universe alive and accounts for the growing diversity and complexity of forms. This spontaneity is primal to the mental side of evolution, which involves both taking and breaking habits.

Here is Peirce’s explanation of the ‘Uniformity’ of nature in Baldwin’s Dictionary:

The hypothesis suggested by the present writer is that all laws are results of evolution; that underlying all other laws is the only tendency which can grow by its own virtue, the tendency of all things to take habits. Now since this same tendency is the one sole fundamental law of mind, it follows that the physical evolution works towards ends in the same way that mental action works towards ends, and thus in one aspect of the matter it would be perfectly true to say that final causation is alone primary. Yet, on the other hand, the law of habit is a simple formal law, a law of efficient causation; so that either way of regarding the matter is equally true, although the former is more fully intelligent. Meantime, if law is a result of evolution, which is a process lasting through all time, it follows that no law is absolute. That is, we must suppose that the phenomena themselves involve departures from law analogous to errors of observation. But the writer has not supposed that this phenomenon had any connection with free-will. In so far as evolution follows a law, the law of habit, instead of being a movement from homogeneity to heterogeneity, is growth from difformity to uniformity. But the chance divergences from law are perpetually acting to increase the variety of the world, and are checked by a sort of natural selection and otherwise (for the writer does not think the selective principle sufficient), so that the general result may be described as ‘organized heterogeneity,’ or, better, rationalized variety. In view of the principle of continuity, the supreme guide in framing philosophical hypotheses, we must, under this theory, regard matter as mind whose habits have become fixed so as to lose the powers of forming them and losing them, while mind is to be regarded as a chemical genus of extreme complexity and instability. It has acquired in a remarkable degree a habit of taking and laying aside habits. 
The fundamental divergences from law must here be most extraordinarily high, although probably very far indeed from attaining any directly observable magnitude. But their effect is to cause the laws of mind to be themselves of so fluid a character as to simulate divergences from law. All this, according to the writer, constitutes a hypothesis capable of being tested by experiment.

— Peirce, BD ‘Uniformity’ (1901)

Peirce says here that ‘the law of habit’ – as opposed to the ‘fundamental law of mind,’ which is the tendency of all things to take habits – ‘is a simple formal law, a law of efficient causation.’ Ten years earlier, in ‘The Doctrine of Necessity Examined,’ Peirce had written that the necessitarian, while believing that irregular events are inexplicable, also says

that the laws of nature are immutable and ultimate facts, and no account is to be given of them. But my hypothesis of spontaneity does explain irregularity, in a certain sense; that is, it explains the general fact of irregularity, though not, of course, what each lawless event is to be. At the same time, by thus loosening the bond of necessity, it gives room for the influence of another kind of causation, such as seems to be operative in the mind in the formation of associations, and enables us to understand how the uniformity of nature could have been brought about.

W8:123, CP 6.60

This ‘other kind of causation’ is called by Jesper Hoffmeyer semiotic causality, which ‘gives direction to efficient causality, while efficient causality gives power to semiotic causality’ (Hoffmeyer 2008, 64). This duality or complementarity of causes accounts for the two sides of evolution, the physical and the psychical or mental.

Semiotic causality is implicit in Peirce’s definitions of ‘sign,’ which generally follow the path of determination object > sign > interpretant:

I will say that a sign is anything, of whatsoever mode of being, which mediates between an object and an interpretant; since it is both determined by the object relatively to the interpretant, and determines the interpretant in reference to the object, in such wise as to cause the interpretant to be determined by the object through the mediation of this ‘sign.’

EP2:410

The reverse side of this path of determination is a path of representation: the sign represents the object to the interpretant, which then represents the sign – by determining another interpretant sign, or else a ‘habit-change,’ which is both an end of semiotic causality and a governor of efficient causation, i.e. a determinant of future transformations in the physical realm. Any actual occurrence of semiotic determination/representation must itself determine and represent a change in a state of mind, quasi-mind or bodymind: in other words, it must make a difference to that bodymind, and this difference is both semiotically and efficiently caused.

In other words, the logical interpretant of a sign, as a ‘habit-change’ or modification of the guidance system which ‘gives direction’ to the subsequent practice of the guided system or bodymind, will determine the energetic interpretants of future signs, which over time will make the path of practice by walking it. This in turn will make a difference to the physical (as well as the mental) context of further semiosis.

In terms of evolutionary biology, the way a type of organism interacts with its environment can effect changes in both organism and environment, which may in turn affect the ability of the species to be represented in another generation of organisms. Over time, then, natural selection will weed out the ethos which does not maintain its viability as an occupant of its ecological niche. But natural selection must have a variety of possibilities to select from, and does not in itself account for that variety. Hence the need for the hypothesis that spontaneity or ‘chance’ is a primal element in an evolving universe such as the one we all inhabit.

Origins of life, the universe and everything

What do development and evolution have in common?

Any system that starts off simple will tend to get more complex. It has nowhere else to go. Natural selection does not have a lot to do except act as a coarse filter that rejects utter failures. So we get a description of evolution in terms of dynamics and stability, which always belong together.

— Goodwin (1994, 157)

Since both development and evolution proceed toward greater complexity, it’s a natural guess that life must have begun with the simplest possible self-organizing process. As the physico-chemical conditions of the early earth no longer obtain, a spontaneous process that was possible then may not be possible now. Even if it could occur, the relatively simple systems it would produce would probably be consumed by the life now ubiquitous on earth before they could reproduce. If all forms now living have evolved from previous forms, they have also changed the conditions and the very process which produced them. In order to explain how it could have happened, then, either on this planet or elsewhere, we need an account of the process which is general enough to be possible in a broad range of conditions yet specific enough to generate testable predictions.

There are of course alternatives to the guess that life began by self-organizing. We might guess that life, or indeed the whole cosmos, could have been created artificially by some pre-existing entity – as we create buildings and machines, only more arbitrarily (and without depending on existing resources as we do). This has the advantage of casting the Creator in our own image, and thus containing creation within the familiar cognitive bubble. This kind of anthropomorphizing may even be instinctive, as Peirce claimed, seeing no more adequate way for man to conceive the Creator ‘than as vaguely like a man’ (CP 5.536). But the hypothesis of an omnipotent, unconstrained yet purposeful Creator can’t be investigated, since there is no way it could ever be refuted by observable events. Appealing to an inexplicable Creator does nothing to explain the origin of anything, but rather blocks the road of inquiry – to which we are drawn just as instinctively as we are drawn to the idea of an intentional Creator. The instinct of inquiry calls us to use the best method of investigation we can find, one that is honestly self-critical and self-correcting, and above all, capable of learning from experience. That’s the scientific method outlined in Chapter 7, and it requires a testable theory to explain how self-organization could arise from unorganized energy flows. Deacon (2011) is an attempt at such a theory (see Chapters 10 and 11).

Disturbulence

Life continues to organize its consumption of energy in closed loops like the semiotic or ‘meaning’ cycle. When direct perception and direct expression are one, there is no thought of process, or complexity, or simplicity: presence is immediate. When the gap opens up between theory and practice, anticipation and experience, intention and attention, questions arise.

Why are we always engaging in inquiry – opening questions and striving for answers that will close them? What makes cognitive closure so important to us? Probably our physical embodiment: we must value closure because we are energetically open systems. Energy flows through you, so that your self-organizing identity depends on that flow and on your ability to make it your own. This complementarity between closure and openness to energy flow accounts for the biocognitive tension between simplicity and complexity.

Biological equilibrium is far from energetic equilibrium. Maybe psychological equilibrium is equally far from biological equilibrium. That would explain why challenges are essential to the experience of flow.

Differentiation and integration

All the complex systems we can observe, from organisms to ecosystems, have developed from simpler beginnings. A human life, for instance, develops from a single cell, whose descendants differentiate as they divide and multiply. A fully developed human body has thousands of different types of cells, each doing its separate job to maintain the organic unity of the whole. This quality of highly differentiated unity is what we call complexity.

Differentiation refers to the degree to which a system (i.e. an organ such as the brain, or an individual, a family, a corporation, a culture, or humanity as a whole) is composed of parts that differ in structure or function from one another. Integration refers to the extent to which the different parts communicate and enhance one another’s goals. A system that is more differentiated and integrated than another is said to be more complex.

— Csikszentmihalyi (1993, 156)

Uniformity and conflict are degenerate forms of unity and diversity respectively. Complexity is the logical product of unity and diversity, just as development (or evolution) is the logical product of change and continuity, and information is the logical product of breadth and depth (see Chapter 10 or Fuhrman 2010).

Polyversity and degeneracy

Everyone knows that a single sign can have various meanings for various interpreters. But we often forget that it also works the other way round: a single intention can be expressed (coded, realized, …) in many ways. A single verbal formulation of a precept can prompt the formation of many different habits, and a single habit can be the logical interpretant of many different precepts. These are among the forms of polyversity. We have chosen this term for the element of indeterminacy which may enter into any semiotic process because terms such as polysemy and ambiguity do not seem to cover all cases. Since it is equivalent to the biological pattern of degeneracy, as Edelman called it, we could have chosen that term instead, if we were not already using it for the very different Peircean concept explained in Chapter 7, which comes from mathematics rather than biology or physics.

‘Degeneracy’ is defined by Edelman (2004, 43) as ‘the ability of structurally different elements of a system to perform the same function or yield the same output.’ The ‘words’ of the genetic code – triplets of the nucleotide bases G, C, A and T – are degenerate, since a particular amino acid can be encoded by more than one triplet. Immune systems and nervous systems also incorporate degeneracy (Edelman 2004, Chapter 4; more on this in rePatch ·14), and this tends to promote robustness in systems (Page 2011, 228). Another form of degeneracy in this context is pleiotropy, ‘the phenomenon whereby one single gene has an effect on several different phenotypic traits’ (Hoffmeyer 2008, 127). Symbolic (and especially linguistic) texts involve a double degeneracy: different terms may represent the same concepts, and different concepts may carry out the same guidance function or yield the same behavior pattern. This is in addition to the kind of degeneracy known as ‘polysemy,’ in which a word with the same literal form (such as ‘pit’) has several meanings.
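Edelman’s definition can be illustrated with a toy model of the genetic code. The codon assignments below are the standard ones, but the sketch itself is only an illustration of degeneracy in miniature, not anything drawn from the source:

```python
from collections import defaultdict

# A partial codon table: DNA coding-strand triplets of G, C, A and T
# mapped to the amino acids they encode.
CODON_TABLE = {
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
    "TTA": "Leu", "TTG": "Leu",
    "ATG": "Met",  # methionine has a single codon: no degeneracy here
}

def degeneracy(table):
    """Group structurally different codons by the output (amino acid)
    they yield -- degeneracy in Edelman's sense."""
    groups = defaultdict(list)
    for codon, amino_acid in table.items():
        groups[amino_acid].append(codon)
    return dict(groups)

groups = degeneracy(CODON_TABLE)
# Six different 'words' of the code all mean leucine, four mean
# glycine, while methionine is non-degenerate.
```

Note that the mapping is many-to-one but never one-to-many: there is synonymity, but no ambiguity, in the code itself.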

The term degeneracy is obviously related to degenerate, used in English as both verb and adjective. This word also appeared in Chapter 2, with Thoreau’s remark that ‘When our life ceases to be inward and private, conversation degenerates into mere gossip.’ To degenerate is to ‘become of a lower type,’ according to a definition (probably by Peirce) in the Century Dictionary. The moral sense of the term is related to the mathematical sense whereby a point is a degenerate case of a circle as the radius of the circle approaches 0, and a circle is a degenerate case of an ellipse as the eccentricity approaches 0. You could say it refers to the loss of a dimension of complexity.

In his semiotic analysis, Peirce does not say that one symbol can have many meanings, but rather that two symbols which have the same function are ‘replicas of the same symbol’ (EP2:317). According to this usage, two instances of a word (such as ‘degeneracy’) which look and sound the same may nevertheless be different symbols. Or if they are the same symbol but are understood differently, they may be ambiguous or equivocal – a quality which lovers of precision would eliminate from language if they could. A perfect language (see Eco 1995) would presumably eliminate the word/thought gap, and thus we could articulate the one common Logos explicitly and unambiguously. From this viewpoint, the perfect language must be one that hasn’t been fractured and frayed by ‘vulgar’ everyday usage, and the ambiguity which is a feature of natural languages may indeed seem almost morally ‘degenerate’ by comparison. So maybe the technical senses of the word are not so far from Thoreau’s usage after all.

However, it is doubtful whether a ‘perfect’ language would serve as a medium of discovery. If you consider language as a system interacting with biological systems, the degeneracy of each system contributes to the fruitfulness of the interaction.

According to Edelman and Gally (2001), degeneracy is ‘a well known characteristic of the genetic code and immune systems,’ but appears to the most remarkable degree in neural connectivity. Surely it is no fluke that (a) the human brain is the most ‘degenerate’ system we know, and (b) humans are the only systems we know to be capable of generating utterances in natural symbolic languages. The connection is revealed by our gradual discovery that we can’t simply map experience or habits onto brain structures (or vice versa), any more than we can map words directly onto meanings or meanings into signs, without considering context.

Although, in the past, variations in the gross shape of the brain were studied carefully in efforts to find correlations between anatomical features and mental abilities or propensities, it now is accepted that these efforts are largely fruitless. Instead, it is recognized that many different patterns of neural architecture are functionally equivalent, i.e., functional neuroanatomy is highly degenerate.

— Edelman and Gally (2001)

By substituting linguistic terms here, we could generate a valid comment on polyversity: Although, in the past, various texts were studied carefully in efforts to establish the proper name or correct expression of a specific concept, it now is accepted that these efforts are largely fruitless. Instead, it is recognized that many different idioms are functionally equivalent, i.e., language in use is highly degenerate. According to Peirce’s ‘Ethics of Terminology’ (EP2), this tendency must be resisted in the sciences, but only a limited community of specialists could actually attain the level of precision recommended by Peirce.

Biologically as well as linguistically, degeneracy works in both directions, and this is crucial for evolvability:

Applying suitable quantitative measures, we have found that degeneracy is high in systems in which very many structurally different sets of elements can affect a given output in a similar way. In such systems, however, degeneracy also can lead to different outputs. Unlike redundant elements, degenerate elements can produce new and different outputs under different constraints. A degenerate system, which has many ways to generate the same output in a given context, is thus extremely adaptable in response to unpredictable changes in context and output requirements. The relevance to natural selection is obvious.

— Edelman and Gally (2001)

A related usage is Ernst Mayr’s reference to the ‘degeneracy’ of the genetic code, which makes possible ‘neutral’ mutations – cases where a change in base pairs has no effect on amino-acid production, so that ‘different’ statements in that code make no difference to the development of the organism (Mayr 1988, 141; Loewenstein 1999, 188 remarks that ‘there is synonymity, but no ambiguity in the communications ruled by the genetic code’).

In physics, Boltzmann’s definition of entropy makes it measurable in terms of the relation between the possible microstates of a system and its macrostate. To picture this, consider a large number of particles moving around randomly in a closed space. An account of the various positions and velocities of all the particles at any given time describes a microstate of the system. A macrostate of the system, on the other hand, is assessed by measuring properties of the system as a whole, such as its temperature. Many different microstates of such a system can correspond to a single macrostate. For instance, when we measure the kinetic energy over the whole space, it makes no difference to the macrostate where the faster- and slower-moving particles are located within the space. If we consider the locations of the fast-moving particles, for instance, very few of the possible microstates will have them all grouped tightly together, while the number of microstates in which they are more evenly distributed will be much larger; in other words, it is ‘much more probable that the energy states will explore the entire range of possibilities’ (Depew and Weber 1995, 262).

As the number of possible microstates corresponding to a given macrostate increases, the macrostate becomes increasingly degenerate, in the sense in which a code or a language is degenerate when it contains multiple, and thus ambiguous, ways of coding the same information. Boltzmann called this measure of degeneracy W.

Boltzmann’s formula for entropy (S = k ln W) thus correlates entropy with degeneracy (Depew and Weber 1995, 262).
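The connection between microstate counting and entropy can be sketched with a toy two-box model (an illustration of the standard formula, not anything from the source; the names and the choice of N are arbitrary):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(W):
    """S = k ln W, where W counts the microstates realizing a macrostate."""
    return k_B * math.log(W)

# Toy system: N distinguishable particles, each in either the left or
# the right half of a box. The macrostate 'n particles on the left' is
# realized by W = C(N, n) distinct microstates.
N = 100
W = {n: math.comb(N, n) for n in range(N + 1)}

# The tightly grouped macrostate (all particles on one side) has a
# single microstate, hence zero entropy; the evenly spread macrostate
# (n = 50) is realized by about 1e29 microstates, and is therefore
# overwhelmingly more probable.
```

The degenerate (high-W) macrostates are exactly the ones the system is most likely to be found in, which is why entropy tends to increase.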

The notion of supervenience appears to be closely related: ‘A property is supervenient when the same macrostate can be accessed by any number of microstates’ (Depew and Weber 1995, 471); Deacon (2011, 552) defines supervenience as ‘the relationship that emergent properties have to the base properties that give rise to them.’

What is called convergent evolution is another manifestation of degeneracy: for instance, different kinds of eyes have evolved in separate lines of descent among animals, but they all serve the same purpose in each animal’s life. It has also been noted that plants or animals in one bioregion may be very similar in form and behavior to counterparts in another region – may occupy ‘the same’ niche – even though they are entirely different and unrelated species. Same function, different structures; or same form, different lineage; these too are forms of polyversity, on a larger scale of time.

Polyversity is also crucial to the perceptual process, in proportion to the complexity of the animal’s Umwelt. ‘An essential aspect of object-oriented behavior is therefore that the same object has to be simultaneously represented in multiple ways’ (Jeannerod 1997, quoted in Millikan 2004, 178).

The simpler explanation

Models exist in order to simplify the modeler’s relations with the world. We may gain in precision by adding more detail to a model, but this may reduce its usefulness.

We assume that even the most complex symbol system, like the brain, has a correct and detailed physical description, at least in principle, but we recognize that a correct model need not be a useful model. Recall Einstein’s reply when asked if everything has a correct physical description. He said, “Yes, that is conceivable, but it would be of no use. It would be a picture with inadequate means, just as if a Beethoven symphony were presented as a graph of air pressure.”

— Pattee (2004)

Maynard Smith and Szathmáry (1999, 146) take Einstein’s point a bit further:

… complex systems can best be understood by making simple models. … the point of a model is to simplify, not to confuse. … if one constructs a sufficiently complex model one can make it do anything one likes by fiddling with the parameters: a model that can predict anything predicts nothing.

According to Peirce, the main ‘difficulties of explanatory science’ have not been that adequate hypotheses were in short supply, but ‘that different and inconsistent hypotheses would equally account for the observed facts’ (EP2:467).

Simplifying in science and scripture

The identity of every phenomenon, and every cause, is its otherness from every other, including the very system in which it is embedded. In scientific inquiry, we simplify our models of changes in the phenomenal world by focussing on one causal factor at a time.

The practice of changing one variable at a time while holding others constant is important, but it is incomplete. Additional investigation is required, both to show how a causal factor is coupled in a system of causes and to reveal the ways in which these links change over time. It does not require considering everything at once, as some seem to fear, but can be done by coordinating diverse investigations.

— Oyama, Griffiths and Gray (2001, 4-5)

Lotman likewise points out the limitations of ‘the scientific practice which dates from the time of the Enlightenment, namely to work on the “Robinson Crusoe” principle of isolating an object and then making it into a general model.’ This accounts for the ‘transmission’ model of communication, which takes the sender, the message and the receiver as separate units. This practice is also incomplete, because a working semiotic system has to be ‘immersed in semiotic space’ – in the semiosphere, ‘the semiotic space necessary for the existence and functioning of languages’ (Lotman 1990, 123).

According to Lotman (1990, 104), ‘symbols with elementary expression levels have greater cultural and semantic capacity than symbols which are more complex.’ The simpler an utterance seems to the interpreter, the less semiotic energy he has to put into interpretation, and the more it seems to mean in itself. But when it comes to language, observes Northrop Frye (1982, 211),

there are different kinds of simplicity. A writer of modern demotic or descriptive prose, if he is a good writer, will be as simple as his subject matter allows him to be: that is the simplicity of equality, where the writer puts himself on a level with his reader, appeals to evidence and reason, and avoids the kind of obscurity that creates a barrier. The simplicity of the Bible is the simplicity of majesty, not of equality, much less of naïveté: its simplicity expresses the voice of authority. The purest verbal expression of authority is the word of command … The higher the authority, the more unqualified the command …

Pragmatically, obedience to the voice of authority simplifies guidance, makes an ethos “elementary” – but also incomplete, as a guidance system.

Causes

The scientific way of reducing complexity is typically an understanding of causes. In modern times, what we call ‘reductionism’ often involves reducing causality to what Aristotle called ‘efficient cause.’ But more recently, several scientists have resurrected versions of Aristotle’s other causes: see Peirce (EP2 selections 9 and 22), Salthe (1993), Ulanowicz (1997, 12-13), Rosen (2000), Deely (2001). Deacon’s (2011) concept of teleodynamics is an update of Aristotle’s ‘final cause.’

One way of distinguishing among Aristotle’s four ‘causes’ is to apply them to the building of a house. The material cause consists of the construction materials such as bricks or lumber, while the formal cause is the Bauplan or design (perhaps represented by a blueprint) that informs the construction process. The building’s form is constrained by the ambient conditions in which the house is built – gravity, climate and so on – so the formal cause can never be made fully explicit in the blueprint, or it would be bigger than the house! The efficient cause is the hands-on, energy-consuming work of the construction crew, and the final cause the purpose for which it is built, namely that somebody should live in it.

Efficient and final causes relate mostly to dynamic functioning or behavior, while formal and material causes relate to embodiment or structure. In terms of process, though, the difference between structure and function is a matter of time scale. A structure can be viewed as a deeply entrenched and consolidated kind of habit. The difference between form and matter is also relative to scale: for instance, cells constitute the matter of which flesh is made, but under the microscope, a cell appears as the form of a system made of molecules.

The final causes of an organism’s behavior can be seen as the role played by its whole life in the larger life of its species. The formal causes, which generally appear at the focal level (Ulanowicz 1997), shape the organism’s role in the life of the current ecosystem.

‘Final’ cause can be thought of as ‘ultimate context’ so long as we do not take ‘ultimate’ in any absolute sense. A scalar level may be ‘ultimate’ to us because it is above any level we are equipped to focus on. This does not imply that there is no higher level, only that whatever higher level there may be is beyond the reach even of speculation. Nor is any final cause necessarily the only final cause of the phenomenon in question. (It is obvious that efficient causes can be plural, but not so obvious with the more vague or general kinds of cause.)

The above applies to an act or behavior. From a somewhat different point of view, Peirce preferred to refer to both cause and what is caused as ‘facts’ (follow link to rePatch 14 for details).