Emergence

Stephen C. Pepper (with an introduction by Jeffrey Goldstein)

Emergence then and now: Concepts, criticisms, and rejoinders

“We seem to be in the presence of a perfectly good dilemma: We must either explain things by what they are or else by what they are not. If we explain them by what they are, we leave them unexplained. If we explain them by what they are not, our explanation is fallacious.” William Ernest Hocking (1941)

The evolution of emergence

Although it is not generally well-known among complexity aficionados, the concept of emergence had a well-established history before the advent of the present-day study of complex systems. As early as 1874, the British philosopher and man of letters G. H. Lewes (1874-1879) had coined the term “emergent” in its modern technical meaning: “... although each effect is the resultant of its components, we cannot always trace the steps of the process, so as to see in the product the mode of operation of each factor. In the latter case, I propose to call the effect an emergent. It arises out of the combined agencies, but in a form which does not display the agents in action.”1 Several decades later this nascent notion of ‘emergent’ was elaborated into a process of ‘emergence’ which became the basis for a loosely joined scientific and philosophical movement called ‘Emergent Evolutionism’ (for history and review see Blitz, 1992). Such eminent Emergent Evolutionist philosophers and scientists as Samuel Alexander, C. L. Morgan, C. D. Broad, W. Wheeler, A. N. Whitehead, and others discussed emergence in terms of a sudden arising of new ‘collocations’ or ‘integrations’ with new properties arising on a new ‘higher’ emergent level out of ‘lower’ level components. This enriched concept of emergence was offered as a counter to the then prevalent interpretation of evolution as taking place through incremental steps, a process understood in mechanistic terms. In contrast, Emergent Evolutionists held that a scientific and philosophical perspective founded on emergence was capable of steering between the extremes of mechanistic reductionism on the one side and an ungrounded vitalism on the other (see Goldstein, 1999, 2003).

Emergent Evolutionism as a movement died out by the mid-nineteen thirties, but the idea at its heart proved to have staying power as it found expression in the philosophy of science, process philosophy and theology shaped by Whiteheadian metaphysics, theoretical biology, and the burgeoning arena of neuroscience. The dominant emphasis continued to be the capacity of the idea of emergence to combine an anti-mechanist / anti-reductionist stance with a way to talk about higher level organization and its novelty, and to do so without passing over into a supra-naturalism (Goldstein, 2000). These advantages have continued into contemporary research into complex systems, where the idea of emergence has moved from mostly armchair speculation into actual laboratories, physical, computational and social. Nowadays when the term ‘emergence’ is used, it refers to a group of phenomena sharing a family resemblance with the following features (see Goldstein, 1999): radical novelty, coherence or correlation, a global or macro level of organization, arising out of a dynamical process, and being ostensively recognizable.

Examples of emergence in complex systems include the coherence seen in various kinds of phase transitions, the new patterns and properties exhibited in so-called self-organizing systems, higher level patterns and structures found in simulations like cellular automata and multi-agent models, collective level behavior arising in networked systems (whether social, technological, or computational), and more recently the ‘quantum protectorates’ studied in the field known as complex, adaptive matter, which are discussed further below.
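
By way of a concrete illustration of the simulation-based examples just mentioned, the following minimal sketch (my own, not drawn from any of the works cited here) runs Wolfram's elementary cellular automaton Rule 110: each cell is updated by a purely local rule acting on itself and its two neighbors, yet the printed history exhibits higher level patterns, nested triangles and persistent moving structures, that are nowhere stated in the rule table itself.

```python
# A minimal sketch of emergence in a cellular automaton (my own
# illustration, not an example from the works cited above): Wolfram's
# elementary Rule 110. Each cell obeys a purely local update rule, yet
# the printed rows exhibit higher level patterns that are nowhere
# stated in the local rule itself.

RULE = 110
WIDTH, STEPS = 64, 32

def step(cells):
    """Apply Rule 110 to every cell, with periodic boundaries."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * WIDTH
cells[-1] = 1  # a single 'on' cell as the initial condition
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```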

Although embraced by quite a few prestigious thinkers, the Emergent Evolutionist adumbration of the idea of emergence did not go unchallenged; chief among its detractors during the nineteen twenties was the influential philosopher Stephen Pepper, more widely known later on for his seminal work World Hypotheses (Pepper, 1942). E:CO is presenting Pepper’s 1925 article on emergence for at least two reasons. First, although Pepper’s article was critical of the idea of emergence, his description was fairly faithful to what the early emergentists had in mind. We can accordingly compare that older meaning of emergence with the current one. Second, by taking a close look at Pepper’s reductionist argument against emergence, we can appreciate how a commitment to the idea of emergence has generally run counter to reductionist explanatory strategies. I will also go over some of the typical fault lines of such reductionist arguments.

Collapsing new variables into old

In his critique, Pepper took on the emergentist position that new variables were needed in order to represent the new ‘higher’ level emergent phenomena and their dynamics. What he attempted to demonstrate instead was that any such new variable could in actuality be collapsed into pre-existing variables that were already in use (or at least could potentially be used) in representing lower level functional relationships2. The first step of Pepper’s attack was his contention that the new emergent level must be describable in one of two ways: (1) as a new type of functional relation among the already existing variables of the system; or, (2) as a functional relationship among new variables which are revealed on the new level (see discussion of Pepper’s critique in Meehl & Sellars, 1956). In the first case, that of a new type of functional relationship among old variables, Pepper argued that any new functional relation could always be expressed by some kind of modification of already existing relations, here showing his hand as a modern-day Anaxagorean in denying that any modification, rearrangement, or restructuring could possibly introduce genuine novelty. Pepper’s argument reflected a persistent assumption of hard-core reductionists, namely, that the coming into being of what is radically novel is simply not possible, a pessimistic perspective so eloquently expressed in the Biblical Book of Ecclesiastes: “What has been is what will be, and what has been done is what will be done; and there is nothing new under the sun. Is there a thing of which it is said, ‘See, this is new’? It has been already, in the ages before us.”

Regarding the second case, that of the need on the part of emergentists to introduce new variables to represent the emergent level, Pepper attempted to prove that these too (and their functional relationships) were nothing more than an elaboration, no matter how complicated, of a functional relation between old variables. To do so he explicitly assumed, first of all, that emergence was the result of a deterministic process (the only option for him if it was not to be supra-naturalist in origin) and therefore could not appropriate randomness. Since then, complexity theory has witnessed a repudiation of this widespread presumption3. For Pepper, though, the purportedly new variables involved in deterministic processes must either possess some kind of functional relationship with old variables on the lower level or they do not. In the first option, if the new variables of the emergent level do indeed have a functional relationship to the lower level variables, then, according to Pepper, these new variables must necessarily be expressible in terms of the lower level variables since, for Pepper, the mere existence of a functional relationship implied that one set of variables could be translated into the other given the appropriate means of expressing the new variables in terms of the old. This was really a restatement of Pepper’s argument described in the previous paragraph. He concluded that “…[the new variables would] have to drop down and take their place among the lower level variables as elements of a lower level shift.”

Pepper now had to come to terms with the second option, i.e., when the new variables purportedly required by the new emergent phenomena did not have a functional relationship with the old variables. According to Pepper, if the new variables didn’t have a functional relationship with the old ones, this implied that the emergent phenomena expressed by these disconnected variables must lack the potency necessary for what the philosophers of science Meehl and Sellars (1956), in their commentary on Pepper’s argument, called “making a difference.” In other words, Pepper concluded that emergent phenomena would amount to nothing more than mere epiphenomena, again, a point of view not uncommonly found among reductionist detractors of emergence.

There were several faulty moves in Pepper’s arguments, not the least of which was their pervasive question-begging. For example, Pepper first assumed that any kind of purported genuine novelty attributed to emergent phenomena could always be shown to be an epiphenomenon by definition, and thus he simply could not imagine any natural process powerful enough to bring about the radically original. Indeed, we can discern in Pepper’s posture towards emergence a manifestation of two linchpins in the then prevailing picture of natural change, viz., first, that deterministic processes must abjure the incorporation of chance and, second, that they could not possess operations powerful enough to bring about the kind of radical novelty entailed by a doctrine of emergence. Yet the study of complex systems has shown that the radical novelty of emergent phenomena can be the result of both an appropriation of chance (or ‘noise’) and iterative and combinatory operations. The eminent philosopher of science Karl Popper (quoted in Stephan, 1992: 34-35), who, incidentally, later entertained his own view of emergence, characterized anti-emergentist positions like Pepper’s in these words: “Given the precise arrangement of the atoms it should in principle be possible, the argument goes, to derive, or to predict, all the properties of every new arrangement from a knowledge of the ‘intrinsic’ properties of the atoms.” Popper disagreed, asserting instead that it was indeed possible for new arrangements to lead to physical and chemical properties not derivable from such ‘atomistic’ theories.

Reductionism’s preparatory destruction of emergent level phenomena

When Pepper and others of his ilk dismiss emergent phenomena as epiphenomena, they presume the lack of any real causal efficacy on the part of emergent level phenomena by first imaginatively destroying the integrity of emergent level wholeness. That is, Pepper’s approach to emergence, like all totalistic reductionist positions, can only work by ignoring all the preparatory destruction that precedes and accordingly makes possible reductionist explanations, namely, a preliminary destruction of upper emergent level phenomena, particularly as regards their wholeness. Indeed, it has been pointed out that much of the ingenuity of reductionist explanations lies precisely in upper level features being destroyed, either directly or indirectly (see Sklar, 1995). After all, ‘reduce’ means ‘to decrease’ or ‘to diminish’.

The direct type is like what the philosopher Thomas Nickles (cited in Wimsatt, 1994) has discussed as transformative operations similar to the reduction of ores to metals or wood to pulp, cases where it is obvious that such higher level features (e.g., the grain of wood) do not survive in the pulp. In a similar vein, the complexity-oriented neurophysiologist Jack Cowan once described the difference between the subject matters of biophysics and theoretical biology in the following vivid manner: take an organism and homogenize it in a Waring blender; the biophysicist is interested in those properties which are invariant under that transformation (cited in Wimsatt, 1997). Because a Waring-blender type of destruction will typically eliminate higher level features entirely, features which need to be accounted for by a credible reducing theory, the philosopher of science William Wimsatt (1974) has cautioned that adequately formulated “bridge” laws included in reductive explanations need to include qualifications concerning the mode and extent of the destructiveness wrought by the reductive explanation.

The indirect type of preparatory destruction contained in reductionist explanations has to do with the stripping away of higher level qualities that has already taken place as part of the general scientific climate. An example provided by the philosopher Robert Nozick (1981) concerns how Maxwell’s brilliant but reductive identification of light with electromagnetism could only have been achieved after science had first stripped light of its sensed qualities of color and hue4. According to Nozick, one of the main reasons reductionist explanations can appear compelling at all is their appeal to theoretical primitives which do not possess the qualities of the phenomena in question and thus impart the sense of going “deeper.” This is related to the quote from Hocking above, which presents the paradox of explaining something by what it is not. For instance, the thermodynamic property of heat is explained by heat-less statistical mechanics, and Mendelian genes by gene-less biochemical processes. The theoretical primitives of reductionist explanations must not be allowed to possess higher level properties. In fact, as Nozick has remarked, if the lower level, reducing theory contained the exact same properties as the upper level, reduced theory, the explanation might get caught in an endless loop of self-reference, upper level property referring to lower level property referring to upper level property, around and around in a vicious circle. We can conclude, consequently, that one of the ways reductionist explanations operate is to stop this vicious circle by first making sure that the upper level is robbed of its possible sui generis character before the explanation can even proceed. And once the higher emergent level is rid of its valence, it no longer poses a particular challenge since the reductionist does not now have to imagine the possibility of processes or operations that could bring about this higher level valence.

This contention that too much is destroyed by reductive explanations is at the heart of the Nobel Laureate, condensed matter physicist Philip Anderson’s (1972) famous anti-reductionist and pro-emergentist argument against the ‘constructionist hypothesis’: the ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. Humpty Dumpty, alas, once he’s had his great fall from his higher level perch and been broken into the many shards of his lower level components, cannot be put back together again! For Anderson, what ineluctably gets in the way of the possibility of ‘reconstructing’ the universe from reduced fragments and their simple laws are both scale and complexity, since at each new level of complexity entirely new properties appear, requiring new laws, constructs, and generalizations. Consequently, Anderson also emphasized the need for a hierarchy of sciences, an idea repeatedly appealed to on the part of emergentists, each science focusing its theoretical and empirical energies on a specific level, and with the methods, constructs, and theories of each science not reducible to those of the one beneath. Indeed, Anderson’s point can be understood as an echo, almost a century later, of Lewes’ definition whereby an “emergent,” unlike a “resultant,” cannot always be traced back, “... so as to see in the product the mode of operation of each factor.”

The constructive destruction of the lower level in emergence

From a different angle, another kind of destruction can be discerned in the process of emergence itself. That is, one of the reasons why emergent phenomena are not essentially amenable to reduction is that the processes involved in emergence include a certain kind of destruction of lower level elements, with the result that these lower level elements are no longer there, at least in the manner they previously existed, to be reduced to! This is quite different from the intentional preparatory destruction on the part of reductionists, for the destructive element in emergence is in fact a necessary condition for emergence to have the capacity to lead to nondeducible and unpredictable outcomes. In other words, the higher emergent level cannot be reduced to the lower level since the entities and properties on the lower level have in effect been destroyed in the constructional building up of emergent order.

Such a scenario for emergence was obliquely noticed by one of the chief philosophical critics of early emergentism, Charles Baylis (see Blitz, 1992). Baylis argued that one aspect of emergence would involve the destruction of those properties the parts had before they were emergently combined, referring here explicitly to the way properties are transformed in chemical reactions, e.g., the gaseous nature of hydrogen and oxygen being destroyed when they are fused into water with its property of liquidity. More recently, the philosopher Paul Humphreys (1997) has pointed to just such a destruction in his characterization of emergence in terms of a “fusion,” the latter a catch-all term for processes leading to emergent outcomes. It is precisely this destructive side of emergence that Humphreys appeals to in his suggestion that reduction doesn’t make sense as a strategy for explaining emergents, precisely because the lower level to which the explanation is supposed to reduce has effectively been destroyed during the processes of fusion. In this sense of destruction accompanying emergence, we can say the whole is less than the sum of the parts: the hydrogen and oxygen atoms, before combining into H2O, possess properties that are no longer found once they are combined to form H2O. It is this destructive aspect of emergence which can help account for what Lewes referred to as the untraceability of emergents in contrast to resultants.

Emergence and new variables in complex systems

In contrast to Pepper’s arguments against the viability of new variables being introduced to explain higher level emergent phenomena, order parameters are now commonly used along with control parameters to focus attention on the higher level order arising during phase transitions. For instance, in an unmagnetized state of iron, the atomic spins, not having a preferred orientation, point in all different directions, a state described as possessing a high degree of symmetry since, at any particular location in the system, what is happening on either side is roughly the same as in a mirror image. At a comparatively higher temperature (the metric of which serves as the control parameter), this high symmetry is associated with disorderly motion occasioned by the effect of fluctuations. However, as the temperature is reduced, the spins become aligned in one direction, thereby breaking the initial symmetry and leading to the onset of magnetization due to the newly arisen preferred directionality of the spins.

Another example of symmetry breaking and the accompanying emergence of new order can be found in the phase transition of a nematic liquid crystal which, at first, is disordered with rotational symmetry but, upon a change of certain thermodynamic parameters, becomes ordered, characterized by a breaking of rotational symmetry in favor of a special direction (Anderson & Stein, 1987). An order parameter therefore represents emergent order coincident with the breaking of symmetry. Its value would be zero in the symmetrical, disordered phase and one in a totally ordered phase.
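
To make the order parameter/control parameter pairing concrete, here is a minimal numerical sketch using the standard two-dimensional Ising model as a stand-in for the iron-magnet example above (the model and all parameter choices are my own illustrative assumptions, not taken from Anderson & Stein). Temperature serves as the control parameter; the absolute magnetization |m| serves as the order parameter, running from roughly one in the ordered, symmetry-broken phase to roughly zero in the disordered, symmetric phase.

```python
# A minimal sketch of an order parameter versus a control parameter,
# using the standard 2D Ising model with Metropolis dynamics (the model
# and parameters here are illustrative assumptions of mine).
import math
import random

L = 16          # the lattice is L x L with periodic boundaries
SWEEPS = 2000   # Monte Carlo sweeps per temperature

def order_parameter(T, seed=0):
    """Return |m|, the absolute magnetization, at temperature T."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]  # start fully ordered
    for _ in range(SWEEPS):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i - 1) % L][j] + spins[(i + 1) % L][j]
                  + spins[i][(j - 1) % L] + spins[i][(j + 1) % L])
            dE = 2 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] *= -1
    m = sum(sum(row) for row in spins) / (L * L)
    return abs(m)

# Sweeping the control parameter (temperature) across the transition
# (critical temperature ~2.27): |m| stays near 1 in the ordered,
# symmetry-broken phase, takes intermediate values near criticality,
# and falls toward 0 in the disordered, symmetric phase.
for T in (1.5, 2.0, 2.27, 3.0, 4.0):
    print(f"T = {T:4.2f}   |m| = {order_parameter(T):.2f}")
```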

The synergetics approach founded by the eminent German physicist Hermann Haken (1981, 1987) has demonstrated that large collectives, whether complex materials, societies or brains, may be analyzed globally with the use of order parameters. Local interactions among the individual members of the collective result in the emergence of long-term correlations in the behavior of these individuals. These correlations in turn give rise to order parameters, or macroscopic variables, which describe the global behavior of the collective. The behavior of the individual components is in turn ‘slaved’ to these order parameters, while control parameters may be externally modified to influence the global behavior of the collective.
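
A toy two-variable system can make the slaving idea concrete. In the sketch below (my own illustration, not an example taken from Haken's texts), a strongly damped 'fast' mode s relaxes so quickly that it continuously tracks the adiabatic value determined by the slowly evolving order parameter u; eliminating s in this way leaves a single equation for u alone, which is why the order parameter suffices to describe the global behavior.

```python
# A minimal sketch of the slaving principle in a toy two-variable system
# of my own devising: u is the slowly evolving order parameter, s a
# strongly damped fast mode. Because s relaxes almost instantly relative
# to u, it stays pinned to its adiabatic value s* = u**2 / gamma, i.e.,
# it is 'slaved' to the order parameter; substituting s* back into the
# u-equation leaves one equation governing the global behavior.
lam, gamma = 0.5, 50.0   # slow growth rate vs. fast damping rate
dt, steps = 0.001, 20000

u, s = 0.1, 0.0
for k in range(steps):
    du = lam * u - u * s       # order-parameter equation
    ds = -gamma * s + u * u    # fast mode, strongly damped
    u, s = u + dt * du, s + dt * ds
    if k % 4000 == 0:
        print(f"t = {k * dt:5.1f}   u = {u:7.4f}   s = {s:7.4f}   s* = {u * u / gamma:7.4f}")
```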

The idea of higher level variables such as order parameters also opens the way towards findings of universalities among different systems, which can be understood as indications of emergence since they represent new laws. The physicist and Nobel Laureate Robert Laughlin (2005) has recently put forward the strong thesis that scientific laws are in general emergent since they involve higher level organizing principles. Van Gelder (cited in Clark, 1996) has suggested that the variables expressed in dynamical explanations are not about the underlying dynamics anyway, but about the dynamical region, which is understood in terms of the global patterns showing up in phase space portraits and other ‘qualitative’ dynamics. Moreover, Clark cites Luc Steels’s distinction between controlled and uncontrolled variables. Whereas the first refers to variables that can be directly controlled (e.g., a robot can directly increase or decrease its speed), the second changes only indirectly, as a side-effect. Clark (1996) thereby defines emergence as involving those phenomena whose “roots involve uncontrolled variables and are thus the products of collective activity rather than dedicated components or control systems” (p. 267). Furthermore, the real locus of the uncontrolled variable is the relation between the system and its environment. In this way, the variables can themselves be emergent.
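
Steels's distinction can likewise be illustrated with a small simulation (again a toy of my own devising rather than an example from Clark or Steels). Each agent's heading is a controlled variable, adjusted directly by that agent on the basis of its local neighbors; the group's overall polarization is an uncontrolled variable, directly set by no one, which nevertheless climbs toward unity as a side-effect of the collective activity.

```python
# A minimal sketch of an 'uncontrolled' variable in the sense of Steels
# and Clark (the model is a toy of my own devising). Each agent directly
# controls only its own heading, steering toward the mean heading of
# itself and its six nearest ring neighbors plus a little noise. The
# group's polarization is controlled by no agent, yet it rises as a
# side-effect of the collective activity.
import cmath
import random

N, STEPS, NOISE = 60, 400, 0.05
rng = random.Random(1)
headings = [rng.uniform(-cmath.pi, cmath.pi) for _ in range(N)]

def circular_mean(hs):
    """Direction of the vector sum of unit headings."""
    return cmath.phase(sum(cmath.exp(1j * h) for h in hs))

def polarization(hs):
    """|mean unit vector|: 0 = fully disordered, 1 = fully aligned."""
    return abs(sum(cmath.exp(1j * h) for h in hs)) / len(hs)

for step in range(STEPS + 1):
    if step % 100 == 0:
        print(f"step {step:3d}   polarization = {polarization(headings):.2f}")
    headings = [
        circular_mean([headings[(i + d) % N] for d in range(-3, 4)])
        + NOISE * rng.uniform(-1, 1)
        for i in range(N)
    ]
```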

A resurgence of emergence and higher level organizing principles

Recently, a group of condensed matter physicists making up the Institute for Complex Adaptive Matter have been using the term ‘emergent’ explicitly to describe scientific laws pertaining to higher level organizing principles. One of the members of this group, the aforementioned Robert Laughlin (2002), cogently argues that scientific laws in general are emergent in the sense that they represent higher level organizing principles, are “collective in nature,” are “encoded only indirectly by the underlying laws of quantum mechanics, and in a deep sense independent of them,” and “are exact only in the thermodynamic limit.” Together with his colleague David Pines, Laughlin (Laughlin & Pines, 2000) further writes:

“The emergent physical phenomena regulated by higher organizing principles have a property, namely their insensitivity to microscopics, that is directly relevant to the broad question of what is knowable in the deepest sense of the term. The low energy excitation spectrum of a conventional superconductor, for example, is completely generic and is characterized by a handful of parameters that may be determined experimentally but cannot, in general, be computed from first principles. An even more trivial example is the low-energy excitation spectrum of a conventional crystalline insulator, which consists of transverse and longitudinal sound and nothing else, regardless of details” (p. 29).

Other candidates for emergence offered by Laughlin and Pines include so-called “quantum protectorates” like the crystalline state, the fractional quantum Hall effect and “quasiparticles.” Laughlin (2002, 2003) has gone even further, suggesting that relativity, renormalizability, gauge forces, fractional quantum numbers, and the Big Bang itself are all genuinely emergent phenomena.

Conclusion

Although, as we’ve seen above, the idea of emergence has gone through significant changes during its century-and-a-quarter life span, certain crucial features have stayed put, namely, its focus on higher level organizing principles as the key to explanation as well as understanding, and its serving as an alternative to pure reductionism. With respect to its use in scientific explanation, the construct of emergence is appealed to when the dynamics of a system seem better understood by focusing on across-system organization rather than on the parts or properties of parts alone. But here emergence functions not so much as an explanation but rather as a descriptive term pointing to the patterns, structures, or properties that are exhibited on the macro-level (see Goldstein, 1999).

This emphasis on the descriptive role of emergence is by no means to be taken as downplaying the importance of the idea of emergence, but is offered rather to highlight how emergence functions more as the starting point than the terminus of an explanation. As a result, the recognition of the emergent level of patterns, dynamics, and properties marks the point at which deeper exploration of the laws governing complex systems can begin. It is to further the appreciation of this critical moment of recognizing the emergent level that Pepper’s article on emergence is being offered here in the pages of E:CO.


Notes

1. Lewes went on: “Every resultant is either a sum or a difference of the co-operant forces ... every resultant is clearly traceable in its components because these are homogenous and commensurable ... the emergent is unlike its components in so far as these are incommensurable, and it cannot be reduced either to their sum or their difference...” (pp. 368-369).

2. Pepper was here appealing to the accepted use of mathematics to express functional relationships within a system (see Bunge, 1979; Bunge also pointed out that emergent laws need not be entirely new or absolutely new, but merely have to be new in regard to the laws followed by the object under question).

3. The exclusion of chance from explanations involving natural processes was prevalent at Pepper’s time despite the critical role of chance in evolution as well as Charles Sanders Peirce’s notion of tychism, which rather presciently foreshadowed the recent reconciliation of determinism and apparent randomness in the case of chaos theory.

4. Indeed, reductionism since Galileo has relied upon a previous stripping away of so-called ‘secondary’ qualities from phenomena so as to not be encumbered by them in explanation. Thus, as Sklar has suggested, Maxwell didn’t say the secondary quality of color was somehow correlated with electro-magnetic waves; rather, color was dismissed as non-essential to his theory of electromagnetism.

References

Anderson, P. (1972). “More is different: Broken symmetry and the nature of the hierarchical structure of science,” Science, 177(4047): 393-396.

Anderson, P. and Stein, D. (1987). “Broken symmetry, emergent properties, dissipative structures, life: Are they related?” in F. E. Yates, A. Garfinkel, D. Walter and G. Yates (eds.), Self-organizing systems: The emergence of order (Life science monographs), NY: Plenum Press, pp. 445-457.

Blitz, D. (1992). Emergent evolution: Qualitative novelty and the levels of reality, Dordrecht: Kluwer Academic Publishers.

Bunge, M. (1979). Causality and modern science, NY: Dover.

Clark, A. (1996). “Happy couplings: Emergence and explanatory interlock,” in M. Boden (ed.), The philosophy of artificial life, Oxford, England: Oxford University Press, pp. 262-281.

Goldstein, J. (1999). “Emergence as a construct: History and issues,” Emergence, 1(1): 49-72.

Goldstein, J. (2000). “Emergence: A construct amid a thicket of conceptual snares,” Emergence, 2(1): 5-22.

Goldstein, J. (2003). “The construction of emergent order, or how to resist the temptation of hylozoism,” Nonlinear Dynamics, Psychology, and Life Sciences, 7(4): 295-314.

Haken, H. (1981). The science of structure: Synergetics, F. Bradley (trans.), NY: Van Nostrand Reinhold Company.

Haken, H. (1987). “Synergetics: An approach to self-organization,” in F. E. Yates, A. Garfinkel, D. Walter and G. Yates (eds.), Self-organizing systems: The emergence of order (Life science monographs), NY: Plenum Press, pp. 417-433.

Hocking, W. E. (1941). “Whitehead on mind and nature,” in P. Schilpp (ed.), The philosophy of Alfred North Whitehead, Carbondale, Illinois: Library of Living Philosophers, pp. 381-404.

Humphreys, P. (1997). “How properties emerge,” Philosophy of Science, 64: 1-17.

Laughlin, R. B. (2002). “The physical basis of computability,” Computing in Science and Engineering, 4(27).

Laughlin, R. B. (2003). “Emergent relativity,” in Frontiers in science: In celebration of the 80th birthday of C. N. Yang, Singapore: World Scientific.

Laughlin, R. B. (2005). A different universe: Reinventing physics from the bottom down, NY: Basic Books.

Laughlin, R. B. and Pines, D. (2000). “The theory of everything,” Proceedings of the National Academy of Sciences, 97(1): 28-31.

Lewes, G. H. (1874-1879). Problems of life and mind, London: Trübner.

Meehl, P. and Sellars, W. (1956). “The concept of emergence,” in H. Feigl and M. Scriven (eds.), The foundations of science and the concepts of psychology and psychoanalysis (Minnesota studies in the philosophy of science, Vol. 1), Minneapolis: University of Minnesota Press, pp. 239-252.

Nozick, R. (1981). Philosophical explanations, Cambridge, MA: Harvard University Press.

Pepper, S. (1942). World hypotheses: A study in evidence, Los Angeles: University of California Press.

Sklar, L. (1995). Physics and chance: Philosophical issues in the foundations of statistical mechanics, Cambridge, England: Cambridge University Press.

Stephan, A. (1992). “Emergence: A systematic view on its historical aspects,” in A. Beckermann, H. Flohr and J. Kim (eds.), Emergence or reduction: Essays on the prospects of non-reductive physicalism, Berlin: Walter de Gruyter, pp. 25-47.

Wimsatt, W. C. (1974). “Complexity and organization,” in K. F. Schaffner and R. S. Cohen (eds.), PSA 1972, Proceedings of the Philosophy of Science Association, Dordrecht: Reidel, pp. 67-86.

Wimsatt, W. C. (1994). “Levels of organization, perspectives, and causal thickets,” Canadian Journal of Philosophy, Supp. 20: 207-274.

