Complexity theory is everywhere. It occupies a substantial academic niche with its own courses, programs, departments, and extensive lists of publications, including journals devoted exclusively to complexity. By nature interdisciplinary, complexity theory has been applied across disciplines as diverse as meteorology, biology, geology, mathematics, physics, medicine, history, sociology, economics, education, business management, and political science, to name a few (see for example the range of disciplines in the references provided at the end of this paper). Or perhaps listing the disciplines not yet under the influence of complexity theory would be the shorter way to describe the explosion of complexity theorizing. Still, there is a significant gap. In this paper, I extend the list by sketching the application of complexity theory to yet another subject, itself, and offer specific proposals for further research.
What is the purpose of applying complexity theory to itself? This paper has no room for a full treatment of the importance and implications of notions such as self-reference, reflexivity, recursion, and paradox (see for example Luhmann, 1990; Bartlett, 1992). Several observations will have to serve as placeholders for a longer discussion. It is simply interesting and fun to apply a pattern to itself and see where it leads. It is also likely to lead to something useful, since recursive procedures can be very expressive and efficient. To describe something by reference to itself is to abbreviate the description. So self-reference can simplify. Also, parts of complexity theory are already expressed reflexively, so it seems sensible to ask whether the whole thing is reflexive. We might try to construct complexity theory from complexity theories all the way down.
Applying theory to itself joins theory and practice: The theory has to put its money where its mouth is, as they say. For instance, Cilliers (2005: 261) has asked whether the statement that we can never have complete knowledge of complex systems is trapped in a performative contradiction, between the absolute statement itself and the incompleteness in its claim. This is a logical analog of the ideal that no one, or in this case no concept, is above the law, so complexity theory should not hover in the air above its own rules. Any theory can be challenged by showing its hypocrisy, that it does not fit into its own description of the world, but this discussion is not just about logical consistency, or self-referential consistency, as much as it is concerned with the ways in which complexity theory can provide insight into itself. Complexity theory would be more persuasive if it could parse itself, by analogy to the way a new computer language can be used to write a parser to process and test the new language itself.
Piecemeal hints at applying complexity theory to itself appear in the literature. Allen (2001), for example, describes complexity science in the nested expression “knowledge of the limits to knowledge.” Luhmann (1990) offers a theory of social systems as recursively closed systems with respect to communications. Beyond these and other similar hints, however, I am not aware of any project specifically proposing to apply complexity theory to itself.
As these are just early steps, I address only some of the most basic questions, such as asking what is complexity and what is emergence. But from these basics, surprising issues appear, such as whether computer source code forms a part of complexity theory and literature. I hope not only to clarify issues within complexity theory and inspire more ideas, but also to provide the satisfaction of entering if not closing the circle of self-reference that is one of the hallmarks of complex systems.
The promise of complexity theory, implicit in its name, is to explain complex phenomena. What could be more exciting to a scientist than revealing the great mysteries of our complex world, maybe even the origin of life? Yet it is natural to be wary of the heart-racing promises made in the mysterious, and sometimes mystical, apparently self-organizing realm of complexity theory, with its breathless exuberance for everything that can be characterized by a well-worn cluster of terms such as nonlinear dynamics, chaos, self-organization, criticality, autocatalysis, emergence, phase transition, random boolean networks, fitness landscapes, fractals, and so on. These collected notions may be inspiring, but they are also dreamlike in their pageantry.
Complexity theory seeks simple rules for complex phenomena. But that is the essence of all science and all theorizing. Science is also skeptical, seeking replication, confirmation, and falsification, but with complexity theory there is something uncomfortable that inspires extra caution. It is not just the inevitability that scientific theories of every era, including ours, will eventually turn out to be false. It is not just that similar overarching theories such as general systems theory and catastrophe theory have come and gone, and so one day we may also awaken from the complexity dream. It is partly the refusal to be pinned down, the very all-inclusiveness that lets complexity theory be applied to virtually any phenomena, not only the physical but all levels from the subatomic to the social. Complexity theorists are on firm ground when discussing observed data from dissipative physical systems, but when they venture far afield their persuasiveness wanes, as they reach to explain how a market economy and democracy are pinnacles of evolution (explanations that can only be validated, if at all, in retrospect and in any event cannot help us understand a future that is sensitive to initial conditions). For instance, while complexity theory may suggest that democracy has evolved by analogy to the processes that created life, and purport to explain why the former Soviet Union dissolved by analogy to avalanches and extinctions, the same theory has curiously little to say about the growing power of Communist China or even the continued existence of Cuba (see for example Kauffman, 1995: 245). The dream of finding simple physics-like laws governing all complexity seems as simplistic as the dream of world peace. Complexity theorizing floats on its own plane, not down on earth in the tangible grit of science, but hovering in the air like smog over Los Angeles, with the promising but ghostly apparition of something beautiful hidden behind an obscuring layer. 
The question comes to mind whether there is any there there.
But let us not dwell on the negative side. On the positive side, complexity theory is also full of inspiring metaphors that spark other metaphors: butterfly wings that trigger tornadoes, avalanches that change the world, automatons that organize themselves, bacteria that know their world. Complexity theorists are most persuasive in their metaphorical musings about nonlinear patterns. Following that tradition, this metaphorical, imprecise, and non-technical essay is an attempt to turn the spotlight of complexity theory inward to illuminate its own complexity.
This project of applying complexity theory to itself “emerged” from what seem to be casual references made to the emergence of complexity theory. For instance, Waldrop (1992) enshrined the emergence of complexity theory in the title of his book about the emergence of the Santa Fe Institute, Complexity: The Emerging Science at the Edge of Order and Chaos. From the continental side, Emmeche (1997: 43) writes,
“I leave it to others to speculate on the possibility that the emergence of the “sciences of complexity” is a reflection of the changing social situation for the scientific subsystem in a postmodern and hyper-differentiated world.”
In a similar vein, when discussing the properties of complex systems Kauffman (1995: 19) states, “The search for such properties is emerging as a fundamental research strategy, one I shall make much use of in this book.” Kauffman is probably not literally describing emergence twice removed, the search for properties emerging from some unstated meta-complexity composed of properties including the property called emergence. Instead, these writers may be using the term emergence in the colloquial sense, as a top-of-the-mind concept, or as a clever literary device. But these and other similar references prompt me to wonder: What if complexity theory were indeed an emergent, self-similar, nonlinear phenomenon, as those terms are applied to any other so-called complex system?
I should also note that I use the term complexity theory, not complexity science. While few would dispute that theories about complex systems help us to understand our complex world, there is no general agreement that there is a separate science of complexity, or even that it is science. Even its proponents cannot offer more than generalities about complexity science, such as calling it “a subject that’s still so new and wide-ranging that nobody quite knows how to define it, or even where its boundaries lie” (Waldrop 1992: 9). The subject called complexity science shifts, like a pseudo-science that refuses to be refuted, as the subjects studied under the complexity view change. Emmeche (1997), for example, set out four descriptions of complex systems and a fifth, “quite separate” notion, for social systems.
If I am wrong and complexity theory does not itself exhibit the very features that are the objects of complexity theory, then what features properly describe it? Is complexity theory linear? Is complexity theory an emergent whole or can it be reduced to a composite of its parts?
Undefined but complex theory
To apply a theory to something, we could begin by defining the theory so that we know exactly what we are applying and to what we are applying it. In the present case, we might want to know what complexity theory is and how complex something must be for it to become an object of the theory. When applying complexity theory to itself, we might think that we could solve both problems at once, scoping both the theory and the objects; that is, the parts of complexity theory that we are theorizing about. But defining complexity theory is elusive. Kauffman, for instance, uses the word complexity in various contexts, and tells us how fundamental complexity is to our very existence, but he avoids telling us exactly what complexity is. He writes confident statements, such as “Laws of complexity spontaneously generate much of the order of the natural world” (Kauffman, 1995: 8). From such assertions we are led to believe not only that complexity is something real and fundamental, but also that it has laws and other definite features.
Like many others, Kauffman champions complexity theory in association with chaos theory. Neither of these frequently paired notions is easily captured. One critic, merging them into the discordant term “chaoplexity,” says,
“Each term, and chaos in particular, has been defined in specific, distinct ways by specific individuals. But each has also been defined in so many overlapping ways by so many different scientists and journalists that the terms have become virtually synonymous, if not meaningless.” (Horgan, 1996: 192)
A more charitable view is that definitions are not always easy:
“Scientific terms may be roughly divided into two categories: those that are introduced by means of a precise and even formal definition (which is the case for many of the more recent mathematical terms) and those that are drawn from everyday language and which have further to travel before they attain the status of an unequivocal definition. The word complexity (from the Latin complecti, grasp, comprehend, embrace) belongs to the second category and is particularly resistant to precise definition.” (Israel, 2005: 479)
In any event, there is no generally accepted statement of what complexity theory is or how complex something must be to come within the ambit of complexity theory.
Without defining the boundaries of complexity theory, we could be left with the task of applying a theory of everything to itself. Does complexity theory explain the most complex phenomena and therefore everything included in that complexity? Exemplars of complex phenomena cited in the literature vary in their scale of complexity from simple one-line equations, such as the logistic function, to the entire workings of the human brain. Complex entities studied under the umbrella of complexity theory range from the microscopic to the global. Kauffman, for example, claims that complexity theory implies that the bacterium E. coli and the multinational corporation IBM “know their worlds in much the same way” (Kauffman, 1993: 388, 404). Cilliers (2001: 6) refers to “accepting the complexity of the boundaries of complex systems.” In so doing, he implicitly supports this project of applying complexity theory to itself, as we try to understand the complex boundaries in order to sort out the complexities of the systems delimited by those complex boundaries.
A definition of complexity theory is not essential. We do not need to be so formal about definitions that we become exhausted with the effort before getting to the meat of the matter. Lacking the structure, or stricture, of definitions, we are free to debate and create our own ideas. If we cannot define complexity theory, however, then we may not gain much from attempting to define its parts. Or despite lacking a definition, something might emerge from its parts.
If a scientific theory is something that simplifies, unifies, and explains the apparent complexity of our world, then perhaps complexity theory may never be defined because it is an oxymoron. Complexity may be one thing and a tractable theory something else. As Arthur C. Clarke states in his widely quoted third law, “Any sufficiently advanced technology is indistinguishable from magic” (Clarke, 1962). The same may be said of sufficiently complex systems and theory. If we know the simple laws, if we see the sleight of hand for what it really is, then we are not mesmerized by the complex system. If it is no longer magic then it is no longer complex. Chaos theory may also be an oxymoron if a scientific theory finds order in apparent disorder. Perhaps then complexity and chaos theory do not simplify, unify, explain, and provide order. Or perhaps chaos and complexity theory properly converge on complexity from both the simple and the not-simple.
We will return to the difficulties of nomenclature when we come to the concept of emergence. For now, let us continue to apply complexity theory to itself.
One part of complexity theory is the concept of nested self-similarity. Complex systems can be composed of other complex systems. An ecology, for example, is composed of organisms that are composed of internal systems including individual cells, and each of those levels of organization is a complex system. Complexity theory itself is composed of other complex parts, including notions such as nonlinearity and self-organization, each of which has further complex components. So here we have one way in which complexity theory exhibits self-similarity, and thereby recurses into itself consistently with its own theory. Self-similarity may be considered to be a complex phenomenon within the cluster of facets comprising complexity theory. This concept, and others in the set of complexity-theoretical ideas, is unlikely to be exactly self-similar, but is more likely to be quasi-self-similar, as is typical of fractal recursion relations.
Fractals quantify the geometry of self-similar entities. They also represent another example of simple rules that generate complex results. If complexity theory is composed of self-similar features, a project for the future would be to calculate various estimates of the fractional dimensionality of complexity theory and its components.
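The proposed calculation can at least be sketched. The standard box-counting method estimates a fractional dimension by covering a set with boxes of shrinking size and measuring how the count grows. The sketch below is a minimal illustration on a known object, the middle-thirds Cantor set (exact dimension log 2 / log 3 ≈ 0.63), not an analysis of complexity theory itself; the depth and range of box sizes are arbitrary choices.

```python
# Box-counting dimension, illustrated on the middle-thirds Cantor set.
# Points are kept as exact integers n (the point is n / 3**depth), so the
# box assignment is exact integer arithmetic with no floating-point error.
import math

def cantor_endpoints(depth):
    """Left endpoints of the Cantor construction after `depth` splits."""
    ends = [0]
    for level in range(depth):
        step = 3 ** (depth - level - 1)
        # Each interval keeps its left third and its right third.
        ends = [n for e in ends for n in (e, e + 2 * step)]
    return ends

def box_count(ends, depth, k):
    """Number of boxes of size 3**-k needed to cover the points."""
    scale = 3 ** (depth - k)
    return len({n // scale for n in ends})

depth = 10
ends = cantor_endpoints(depth)            # 2**10 = 1024 points
counts = [(k, box_count(ends, depth, k)) for k in range(1, 8)]

# The slope of log N(k) against log 3**k estimates the dimension.
(k1, n1), (k2, n2) = counts[0], counts[-1]
dim = (math.log(n2) - math.log(n1)) / ((k2 - k1) * math.log(3))
```

Replacing the Cantor set with any quantified representation of complexity theory (say, points in a citation space) would turn this toy into the proposed estimate.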
As a coevolving group of ideas, each influencing the other, the facets of complexity theory could be considered to exhibit complexity all the way down to single statements such as “complexity theory is defined as...” or “self-organized criticality is...” Each of these statements is, metaphorically speaking, like an individual logistic map (discussed in more detail elsewhere in this paper). If not pushed too far, the statements are at equilibrium and not interesting, while if pushed too far they become incomprehensible, no better than random signals, mere noise. On the edge of incomprehensibility, however, they become interesting ideas.
To the extent that complexity theory is a separate unifying theory, it dissolves the boundaries separating its parts, and so must involve various disciplines that divide and view the parts differently. Understanding complexity theory also demands an understanding of its origins in physics, chemistry, biology, economics, mathematics, and so on. Every complexity theorist must join an expedition that ventures from sandpiles to tornadoes, fractals to boolean networks, bacteria to humans to multinational corporations, while feasting on a primordial complexity soup of ideas. Viewed through the lens of complexity theory applied to itself, the parts of complexity theory appear to be interdisciplinary complexities in their own right.
Self-organization is another part of complexity theory, and like other parts this notion stands for simple rules generating complex results. What are the reasons for the order we see in a thermodynamically disordering world? What rules explain why complex systems that could exist in many different states seem to be attracted only to comparatively few states? Commonly cited examples of self-organization include everyday phenomena such as crystallization, magnetization, and turbulence, as well as more exotic phenomena such as Belousov-Zhabotinsky chemical reactions, the persistent red spot on the planet Jupiter, and the origin of life. An algorithmic example of self-organization is the flocking of birds that can be modeled with simple rules (Flake, 1998). Like complexity theory itself, the contained notion of self-organization has been applied to many objects in many disciplines, from substances in physics to organisms in biology to human social activity including economics and politics. As with complexity theory, the question might be asked: Does the theory of self-organizing entities apply to itself? What if complexity theory were self-organizing?
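The flocking example can be sketched in a few lines. The three steering rules below (cohesion, alignment, separation) follow the standard boids recipe that Flake (1998) describes; the weights, world size, and flock size are illustrative assumptions, not values from the literature. Running it shows the signature of self-organization: the spread of headings falls as the agents align, with no leader and no global rule.

```python
# Minimal boids-style flocking: each agent adjusts its velocity using
# only simple local rules, yet the flock's headings converge.
import random

random.seed(1)

class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(0, 10), random.uniform(0, 10)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(agents, cohesion=0.01, alignment=0.1, separation=0.05):
    for a in agents:
        others = [b for b in agents if b is not a]
        cx = sum(b.x for b in others) / len(others)
        cy = sum(b.y for b in others) / len(others)
        avx = sum(b.vx for b in others) / len(others)
        avy = sum(b.vy for b in others) / len(others)
        # Rule 1: steer toward the others' center of mass (cohesion).
        a.vx += cohesion * (cx - a.x)
        a.vy += cohesion * (cy - a.y)
        # Rule 2: match the others' average heading (alignment).
        a.vx += alignment * (avx - a.vx)
        a.vy += alignment * (avy - a.vy)
        # Rule 3: back away from agents that come too close (separation).
        for b in others:
            if abs(b.x - a.x) + abs(b.y - a.y) < 0.5:
                a.vx -= separation * (b.x - a.x)
                a.vy -= separation * (b.y - a.y)
    for a in agents:
        a.x += a.vx
        a.y += a.vy

def heading_spread(agents):
    """Variance of velocities: falls as the flock self-organizes."""
    mvx = sum(a.vx for a in agents) / len(agents)
    mvy = sum(a.vy for a in agents) / len(agents)
    return sum((a.vx - mvx) ** 2 + (a.vy - mvy) ** 2
               for a in agents) / len(agents)

flock = [Agent() for _ in range(20)]
before = heading_spread(flock)
for _ in range(100):
    step(flock)
after = heading_spread(flock)
```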
Building on a century-old idea of autocatalytic enzymes (Fry, 2000: 74), Kauffman (1993: 309) proposes that life originated from self-organizing, autocatalytic sets of molecules. He offers various estimates of the number of polymers needed, in terms such as these: “for a probability of catalysis of only 10^-9, a mere 18,000 to 19,000 polymers should achieve the critical minimum complexity for collective autocatalysis!” (Kauffman, 1993: 311). A critical mass of ideas could also be said to generate a synergy of self-catalyzing ideas; rub a sufficient diversity of interdisciplinary notions together, and then like molecules and nuclear reactors, they may go supracritical, exploding into new ideas. We may have reached this metaphorical autocatalysis, as complexity reactions have been bursting forth in every discipline, and as everyone who is paid to think important thoughts now asks whether complexity theory can be applied to understand each discipline’s underlying order.
Under the umbrella of self-organization, another part of complexity theory is the notion of self-organized criticality. When systems are in ordered states, a small change affects only local events, while in the disordered chaotic realm a small change can change everything. Self-organized criticality refers to states between order and disorder. On the critical edge, a small change has the optimum effect to maintain the system. Bak et al. (1987: 382) used the image of a sandpile to explain the idea:
“In order to visualize a physical system expected to exhibit self-organized criticality, consider a pile of sand. If the slope is too large, the pile is far from equilibrium, and the pile will collapse until the average slope reaches a critical value where the system is barely stable with respect to small perturbations.”
Bak’s sandpile model has yielded sub-metaphors, including the image of periodic avalanches caused by events as small as the addition of a single grain of sand. The sandpile model is a visible expression of nonlinearity and sensitivity to initial conditions. Other indicia of self-organized criticality include various mathematical relationships such as 1/f laws, power laws, scale invariance, and, of course, complexity.
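Bak’s image translates directly into the Bak-Tang-Wiesenfeld toppling rule, sketched minimally below. The grid size and number of dropped grains are arbitrary choices; the point is only that a locally trivial rule, iterated, drives the pile to a state where a single added grain may do nothing or may trigger a system-spanning avalanche.

```python
# Bak-Tang-Wiesenfeld sandpile: drop grains one at a time; any cell
# holding 4 or more grains topples, shedding one grain to each neighbor
# (grains falling off the edge are lost). Avalanche size is the number
# of topplings triggered by a single dropped grain.
import random

random.seed(0)
N = 20
grid = [[0] * N for _ in range(N)]

def drop(grid, i, j):
    """Add one grain at (i, j), relax the pile, return avalanche size."""
    grid[i][j] += 1
    topplings = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4
        topplings += 1
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < N and 0 <= ny < N:
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return topplings

sizes = [drop(grid, random.randrange(N), random.randrange(N))
         for _ in range(20000)]

# In the self-organized critical state, avalanche sizes span many
# scales: mostly tiny, occasionally enormous.
smallest, largest = min(sizes), max(sizes)
```

Plotting the frequency of avalanche sizes from such a run is the usual way to exhibit the power-law statistics the text mentions.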
In a trivial sense, every theory could be the consequence of the self-organizing mind, as Bak (1996: 175), among others, has argued that the mind is in a self-organized critical state. In a practical sense, every text, including this one, begins as a completely ordered blank page that may become a disordered set of notes that eventually settle around the attractor that is the thesis of the text. In that way, order and disorder become organized puzzle solving. More specifically, the ideas within complexity theory could be envisioned as poised at the edge of chaos, at the leading edge of understanding.
The avalanche metaphor may also be applied to complexity theory. Kauffman (1995: 129) uses the image in reference to biological systems, saying for instance, “Like the sandpile grains of sand, where each may unleash a small or large avalanche, the poised ecosystems will create small and large bursts of molecular novelty.” Substitute the word “theory” for “ecosystems,” and it could be said that the various theories within complexity theory will create bursts of theoretical activity, avalanches of ideas sparked by notions as small as grains of sand. If the avalanche is significant enough, it may eventually qualify as a new scientific paradigm.
Unlike physical systems such as sandpiles, the more complex category of live self-organizing systems, including individual organisms, ecologies, and social systems, adapt to their environments. Adaptive systems invite metaphors from biological adaptation with its notions such as fitness, competition, survival, and evolution. Applied to complexity theory, we could think of the Darwinian marketplace of ideas. When considering how fit are the ideas of fitness landscapes and other aspects of complexity theory, we could measure the ideas not only in the general marketplace but also within the organism we are calling complexity theory, by analogy to the way fitness depends on interactions among genes (see for example Kauffman 1995: 170). If there is an adaptation in the theory of fitness landscapes or of self-similarity that affects the survival of those ideas, for example, then related ideas will also be influenced.
Because ideas cannot readily be measured in the abstract, we could use scientometrics, citations of authors and publications, or frequency of word usage, or prevalence of complexity course syllabi, conferences, and workshops, as proxies for fitness. We could build computer models to represent these idea networks and compare the models to actual citations, conferences, and so on (see for example Price, 1965). We could create fitness landscapes with peaks and valleys (all we need is to represent something, anything at all, as having a height and a distance from something else, with a measure of success in competition, such as rate of reproduction, attached to the something being represented). In the context of the present discussion, the rate of reproduction could be measured by growth in the number of ideas, the frequency of publications and of cited papers and authors, the number of courses offered in higher education institutions, and the number and size of research grants. These and other measures could provide models of the evolution of complexity theory.
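One such model can be sketched in the spirit of Price (1965): a cumulative-advantage process in which each new paper cites earlier papers with probability proportional to the citations they have already received. The parameters below are illustrative assumptions, not fitted values; the qualitative result is a heavily skewed citation distribution in which a few “fit” papers capture most references, exactly the kind of proxy-fitness data the text proposes to compare against real citation counts.

```python
# Cumulative-advantage (Price-style) citation model: each new paper
# cites m distinct earlier papers, chosen with probability proportional
# to (citations received + 1).
import random

random.seed(42)

def grow(papers=2000, m=3):
    cites = [0] * papers          # citations received by each paper
    # Urn of paper indices: a paper appears once when published plus
    # once per citation received, so drawing uniformly from the urn
    # implements preferential attachment with a +1 baseline.
    urn = list(range(m))          # seed papers
    for new in range(m, papers):
        targets = set()
        while len(targets) < m:   # m distinct targets per new paper
            targets.add(random.choice(urn))
        for t in targets:
            cites[t] += 1
            urn.append(t)
        urn.append(new)
    return cites

cites = grow()
# Share of all citations held by the most-cited 5% of papers.
top_share = sum(sorted(cites)[-100:]) / sum(cites)
```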
The great hope of complexity theory is to find simple rules underlying the complexity of our world. Of course, this could be said of all science, indeed of all knowledge: that we seek order in the buzzing confusion. Simple rules are the legends of science. Archimedes discovered a simple rule about buoyancy, apparently while sitting in his bathtub. Newton discovered many simple rules, but probably not while sitting under an apple tree. Einstein gave us the most famous of all one-line rules of nature, E = mc². But these examples are from physics. The promise of complexity theory is to find simple physics-like rules in other disciplines (critics have used the term “physics envy,” attributed to various sources, to describe this reductionist goal).
A single physics-like equation is insufficient to cover all of complexity theory. But a large part of complexity theory can be stated in only four words: sensitivity to initial conditions. This is a compact way of saying that complex systems are nonlinear, inherently unpredictable, and dependent on history. Data describing a complex system cannot be infinitely exact, so as errors multiply, the noise overwhelms the signal. But is this not just a description of chaos theory? Chaos theory, nonlinear dynamics, and complexity theory all cover overlapping or the same territory, but again precise definitions are elusive. Lorenz (1993: 8), one of the leading figures in complexity and chaos theory, defines chaos as sensitive dependence on initial conditions, although he also devotes the whole of the first chapter and much of the rest of his book to refinements of the definition. Chaos is deterministic behavior (not random but governed by laws) that does not appear to be deterministic. Lorenz also addresses complexity, explaining,
“complexity is sometimes used to indicate sensitive dependency and everything that goes with it... Sometimes a distinction is made between “chaos” and “complexity,” with the former term referring to irregularity in time, and the latter implying irregularity in space. The two types of irregularity are often found together, as, for example, in turbulent fluids. Complexity is frequently used in a rather different sense, to indicate the length of a set of instructions that one would have to follow to depict or construct a system.” (Lorenz, 1993: 167)
One way of using the theme of nonlinearity to examine complexity theory itself can be seen in the observation of bifurcation in complex phenomena. Bifurcation is the sudden branching seen in complex systems near the edge of chaos. A frequently observed bifurcation is period doubling (with a specific ratio of steps in doubling known as the Feigenbaum constant). Finite difference equations such as the logistic equation are often used to illustrate these concepts; one version is x_{n+1} = rx_n(1 - x_n). This equation, which describes dynamic systems such as the evolution of animal populations, shows many features of nonlinear systems, from order to bifurcation to chaotic, apparent randomness (for diagrams see Glass & Mackey, 1988: 26ff). Applying these ideas to complexity theory at the metaphorical level, bifurcation could be a divergence of opinions and viewpoints. Only the simplest notions would have a unity of views. Multiple splits in viewpoint could be a measure of complexity. At a higher level, bifurcations could be seen as a split between disciplines and their approach to complexity studies, as they each head in their own direction but in a self-similar way, like the paths of a bifurcating graph. In yet a different way to slice the concepts, bifurcation could be a metaphor representing the cycling of paradigms over time. For example, at various times aspects of complexity theory have been gathered as systems theory, as cybernetics, and as complexity theory, each overlapping the other but not covering exactly the same ideas and not engaging the same minds.
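The period-doubling route is easy to reproduce. The sketch below iterates the logistic map x_{n+1} = rx_n(1 - x_n) past a transient and counts the distinct values the orbit settles into: one at r = 2.5 (equilibrium), two at r = 3.2 (first bifurcation), four at r = 3.5 (second doubling), and many at r = 3.9 (chaos). The specific r values and iteration counts are standard illustrative choices.

```python
# Counting the points of the logistic map's attractor for several r.
def attractor(r, x0=0.5, transient=1000, keep=64):
    x = x0
    for _ in range(transient):      # discard the transient behavior
        x = r * x * (1 - x)
    seen = []
    for _ in range(keep):           # sample the long-run orbit
        x = r * x * (1 - x)
        seen.append(round(x, 6))    # round to group near-equal values
    return sorted(set(seen))

for r in (2.5, 3.2, 3.5, 3.9):
    print(r, len(attractor(r)))
```

Sweeping r finely and plotting the attractor points against r yields the familiar bifurcation diagram of the kind found in Glass & Mackey (1988).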
There is also a historical theme of complexity within complexity theory. Lorenz concludes his book, while still fleshing out the definition of chaos, with a chapter titled “What else is chaos?” There while explaining what fractals have to do with chaos, he makes a retrospective prediction about the theory that describes the unpredictable:
“It was near the close of the seventies that ‘chaos’ was rapidly becoming established as a standard term for phenomena exhibiting sensitive dependence. It was also at just about this time that new strange attractors were rapidly being encountered and these attractors with their fractal structure, rather than the absence of periodicity or the presence of sensitive dependence, were the features that some specialists were finding most appealing. Temporarily, at least, they were becoming the principal subject of chaos theory. It was but a short step for “chaos” to extend its domain to fractals of all kinds, and even to more general shapes that had not become familiar objects of study before the advent of computers. In retrospect, it would be hard to imagine that the original meaning of “chaos” could more appropriately have been extended to one of these categories of shapes than another. There is little question but that “chaos” like “strange attractor” is an appealing term—the kind that tends to establish itself. I have often speculated as to how well James Gleick’s best-seller would have fared at the bookstores if it had borne a title like Sensitive Dependence: Making a New Science.” (Lorenz, 1993: 177)
Or if finding simple rules for complex phenomena had been called simplicity theory, it might have died quickly, for who would want to be known as an expert in simplicity? Who would fund research into what sounds like a mere restatement of science, or simply an updated Ockham’s Razor? Much more fulfilling for the ego to be an expert in complexity. Not only is complexity theory dependent on its history, but so are its constituent parts, including its terminology. Had there been a slight adjustment to the initial historical conditions, we could now be discussing sensitive-dependence theory, parsing the notions of sensitivity and dependence instead of chaos and complexity. Just as we cannot change history, however, we cannot avoid it. Cilliers (2001: 1) refers to the Santa Fe approach, which he calls the most popular, as “lots of chaos theory and mathematics.” We cannot ignore these origins of complexity theory in chaos theory, mathematics, and computers, no matter how complexity theory has evolved or continues to evolve, any more than we can ignore our own human evolution. Reflexively speaking, this is one reason why this paper relies heavily on those historical origins of complexity and chaos theory (see also the discussion in this paper of Bak preferring computer models to “grandiose philosophical claims”).
Of all possible states in which nonlinear dynamical systems could exist, they are attracted to a relatively small set. If complexity theory itself is considered to be such a system, possible attractors could be the set of concepts, including self-organization, nonlinearity, and so on, with ideas in complexity orbiting around these central concepts. Other metaphorical attractors are the set of authors and their publications concerned with complexity theory.
At a global level, in the stable equilibrium region, with too little complexity, not much interesting happens. In the chaotic region, where nobody agrees on anything, nothing productive happens. At the edge of chaos, complexity research thrives.
And for one more example of the application of nonlinear dynamics and sensitive-dependence theory to itself: Nobody predicted that theories of a meteorologist trying to predict weather would develop into theories of the organization of bacteria and large corporations.
Networks are another feature common to complex systems. Draw connections between the parts, consider the parts to be the nodes of a connected graph, and suddenly a network exists. The network metaphor is so pervasive in our computer-networked, broadcast-networked, socially networked vocabulary that it seems more substantial than a mere metaphor. At times the notion of a network is used as a synonym for a complex system. So when applying complexity theory to its own parts, it will come as no surprise that networks are easy to find. The interlinked notions of complexity, chaos, nonlinearity, and so on form a network of notions. Other ways to slice the network concept include networks of authors, publications, research institutions, graduate programs, and courses devoted to complexity research.
The many parts and groups of parts comprising complex systems interact in combinatorial explosions of complex phenomena. Consider for instance a simple 10 by 10 square, like a crossword puzzle, with either 1 or 0 in each box. Those 100 boxes might represent networked nodes in a computer model that could be in 2^100 different states. To tame that kind of explosion, random boolean networks can serve as idealized, frictionless models of complexity. In biology, for instance, genes can be idealized as present or absent, active or inactive, and their interactions modeled accordingly (Kauffman, 1993: 444). In particular, the models can demonstrate how a complex system can be attracted to comparatively few of its billions and billions of possible states. These computer models are a boolean analog of the logistic equation, showing a range of results from ordered to chaotic, depending on how the experimenter tunes the computer code. Summarizing adaptation in complex systems, Kauffman states,
“Random NK Boolean networks with K = 2 inputs to each of 100 000 binary elements yield systems which typically localize behavior to attractors with about 317 states among 2^100 000 possible alternative states of activity. Whatever else you may mark to note and remember in this book, note and remember that our intuitions about the requirements for order in very complex systems have been wrong. Vast order abounds for selection’s further use. Having marked to note that complex systems exhibit spontaneous order, mark a second, bold and fundamental possibility: Adaptive evolution achieves the kind of complex systems which are able to adapt. The lawful property of such systems may well be that they abide on the edge of chaos. This possibility appears to me to be terribly important.” (Kauffman, 1993: 235) [Italics in the original text]
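The kind of model Kauffman summarizes can be sketched in a few lines. The following is a minimal illustration of my own, not any published model: a random NK boolean network with n nodes and K inputs per node, iterated synchronously from a random state until a state repeats, which closes and measures an attractor cycle. All names and parameter choices here are assumptions for the sketch.

```python
import random

def random_boolean_network(n, k, seed=0):
    """Build a random NK boolean network: each of n binary nodes reads
    k randomly chosen inputs through its own random truth table."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its inputs' current values."""
    new = []
    for ins, table in zip(inputs, tables):
        index = 0
        for i in ins:
            index = (index << 1) | state[i]
        new.append(table[index])
    return tuple(new)

def attractor_length(n=16, k=2, seed=0):
    """Iterate from a random state until a state repeats; the repeat
    closes the attractor cycle, whose length is returned."""
    inputs, tables = random_boolean_network(n, k, seed)
    rng = random.Random(seed + 1)
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    return t - seen[state]  # a small cycle among 2**n possible states

if __name__ == "__main__":
    print(attractor_length())
```

Even at this toy scale the point of the quotation is visible: the trajectory settles onto a cycle vastly smaller than the 2^16 available states.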
The notion of idealizing complexity as networked nodes, which are attracted to relatively few ordered states, has been pushed to the limit in theories based on infinite boolean networks of symbols. All interactions within any computer model can be considered to be symbol strings. Those abstract symbols can be interpreted as any phenomena. They can, for instance, be deemed to be chemical reactions of strings representing polymers catalyzing combinations and splits in other strings. The chemical reactions in turn can be interpreted as precursors of the replication of RNA and DNA in living organisms. Interactions of strings can also be considered as grammars for transforming strings to other strings. All of this, as ever, is in aid of finding patterns that explain the world, including possibly explaining complexity theory itself. In proposing the study of infinite boolean networks and random grammars for mapping strings to other strings, Kauffman stated the aim this way:
“Random grammars and the resulting systems of interacting strings will hopefully become useful models of functionally integrated, functionally interacting molecular, biological, neural, psychological, technological, and cultural systems. The central image is that a string represents a polymer, a good or service, an element in a conceptual system, or a role in a cultural system. Polymers acting on polymers produce polymers; goods acting on goods produce goods; ideas acting on ideas produce ideas. The aim is to develop a new class of models in which the underlying grammar implicitly yields the ways in which strings act on strings to produce strings, to interpret such production as functional couplings, and to study the emergent behaviors of string systems in these contexts.” (Kauffman, 1993: 387)
As yet we have no such grammar or string representation for the conceptual system known as complexity theory.
When the boolean network model is extended to infinite boolean networks represented by strings of 1s and 0s, the experimenter decides everything that goes in and comes out: how each possible string and combination of strings is numbered, how one string transforms another or inhibits a transformation, where new strings come from, whether they are born from nothing or recombined from other strings, and whether they persist in the system or vanish in an arbitrary string death. These logical systems are not constrained as physical systems are.
“Obviously, finiteness in physical systems is also controlled - by thermodynamics in chemical systems, for instance, and by costs of production, aggregate demand, and budget constraints in economics. However, in the worlds of ideas, myths, scientific creations, cultural transformations, and so on, no such bound may occur. Thus it is of interest to see how such algorithmic string systems can control their own exploration of their possible composition set and exert dynamic control over the processes they undergo.” (Kauffman, 1993: 382)
Kauffman is not referring to complexity theory, but by analogy string systems could be used to explore complexity theory and generate data that can illuminate the theory of the theory. How closely coupled are the parts of complexity theory? What parts are most important? What is missing? This brings us to another proposal for future research: to develop a computer model of the parts and interactions within the network of related ideas gathered under the term complexity theory.
Computer models have not been considered features of complexity theory, only the medium through which the theory is expressed. Yet complexity theory depends so heavily on computers that without them the necessary calculations would be impracticable. In theory, clever people could rely on the computing power of their brains and maybe a pencil and paper. In practice, however, complexity theory has only emerged with the advent of powerful computer technology. So in that sense computers and the phenomena they exhibit are part of the theory.
The phase transitions and other patterns found in random boolean networks are not data from the observed physical world but merely representations, abstractions, metaphors, imagery. The abstractions can represent anything or nothing. Kauffman’s proposal to study random strings acting on random strings inside a computer illustrates how far removed from the natural world the phenomena underlying complexity theory can become. Yet if computers are a vital part of the theory, then we cannot understand the theory without understanding how those abstractions behave in the world of the computer, quite apart from the metaphorical relationship, if any, of computer models to the natural world.
Still, it would be easy to dismiss computer models altogether as mere games, just automated abstractions that teach us only about computers and nothing outside them. Compared to the natural world, computer models may simply be too simple. Bak (1996: 138) discusses algorithms in his models of simulated evolution:
“The programs were so simple that the programming for each version would take no more than ten minutes, and the computer would take a few seconds to arrive at some rough results... In summary the model was probably simpler than any model that anybody had ever written for anything. Random numbers are arranged in a circle. At each time step, the lowest number, and the numbers at its two neighbors, are each replaced by new random numbers. That’s all! This step is repeated again and again. What could be simpler than replacing some random numbers with some other random numbers? Who says complexity cannot be simple? This simple scheme leads to rich behavior beyond what we could imagine. The complexity of its behavior sharply contrasts with its simple definition.” [Italics in the original text]
Leaving aside the exclamations and overzealous generalization about “anybody” and “anything,” how is it possible that flipping random numbers should tell us anything about the complexity of our world, let alone fundamental realities in it like the evolution of life? Bak (1996: 161) explains,
“The main reason for dealing with grossly oversimplified toy models is that we can study them not only with computer simulations but also with mathematical models. This puts our results on a firmer ground, so that we are not confined to general grandiose, philosophical claims.”
The models have to be simple enough to be understood as mathematical models. Computer models, essential as they are, nevertheless are not entirely trustworthy unless they are simple.
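Bak's quoted description is complete enough to implement directly. The sketch below follows it literally (this is the scheme known in the literature as the Bak-Sneppen model); the function and variable names are mine.

```python
import random

def bak_model(n=64, steps=10000, seed=1):
    """Bak's circle of random numbers: at each step the lowest number
    and its two neighbors are replaced by new random numbers."""
    rng = random.Random(seed)
    numbers = [rng.random() for _ in range(n)]
    minima = []  # record the lowest value replaced at each step
    for _ in range(steps):
        low = min(range(n), key=numbers.__getitem__)
        minima.append(numbers[low])
        for i in (low - 1, low, (low + 1) % n):  # -1 wraps in Python
            numbers[i] = rng.random()
    return numbers, minima

if __name__ == "__main__":
    numbers, minima = bak_model()
    # In the long run nearly all numbers sit above a self-organized
    # threshold (close to 2/3 in this one-dimensional version).
    print(min(numbers))
```

The rich behavior Bak claims is in the statistics of the recorded minima: runs of replacements below the threshold form the avalanches whose sizes follow a power law.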
Another source of distrust is the failure of complexity researchers to publish source code. In the early days of computers, computer source code was mostly secret, like the secret discoveries of alchemists and early scientists. Today source code for entire operating systems and applications has been widely published. Open source frameworks are also available specifically for developing models used in researching complex systems. So it is puzzling that researchers like Bak, Kauffman, and others, who base their arguments on computer simulations, publish only their results without source code, while in the traditionally secret corporate world, rivals like Microsoft, Sun, and IBM are now publishing some of their commercial source code.
From a textual description, we do not know exactly how computer models are constructed. When discussing physical systems constructed to carry out computations, Kauffman writes,
“We know that some computations cannot be described in a more compact form than carrying out the computation and observing its unfolding. Thus, we could not, in principle, have general laws, shorter more compact descriptions, of such behavior. Thus we could not have general laws about the behavior of arbitrary, far-from-equilibrium systems. This argument, however, contains a vital premise. It is we who construct the non-equilibrium system in some arbitrary way. Having specified its structure and logic, we find the system capable of arbitrary behavior” (Kauffman, 1993: 387) [Italics in original text].
Without source code, other researchers can still try to reproduce the same output using different code on different systems. It could be argued that such independent “experimentation” is essential for verification of any results and therefore makes publishing source code unnecessary. But this is not a matter of verification of scientific observations of the world available to anyone outside the computer. Source code is needed to verify the inputs and logic used to create those particular arbitrary worlds inside the computer from which analogies to the world outside the computer are derived.
We cannot understand complexity theory without understanding the medium of its expression. Computer code is as significant to the literature as are the equations and graphs resulting from computer models. That is, computer code itself is the literature relevant to any explanation of phenomena observed in computer simulations. Consistent with the scientometric study I have proposed for the natural language text of complexity theory, a higher-order study of the complexity of the computer language text of complexity theory could be done using common source code complexity metrics. Further to the proposal to develop computer models of complexity theory, then, is this proposal: that source code be published, not just a description of the models and a summary of the output. With the source code, particularly the kind discussed by Bak that can be written in ten minutes, we may find that complexity theory, like complex phenomena, is built on very simple rules indeed.
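As a hint of what such a study might look like, the sketch below computes a crude stand-in for a standard source-code metric (cyclomatic complexity) by counting branching constructs in parsed Python source. The list of counted node types and the metric's exact form are simplifications of mine, not a standard tool.

```python
import ast

# Branch constructs counted by this crude metric (a simplification).
BRANCHES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def crude_complexity(source):
    """A rough stand-in for cyclomatic complexity: one plus the number
    of branching constructs found in the parsed source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCHES) for node in ast.walk(tree))

if __name__ == "__main__":
    sample = (
        "def f(x):\n"
        "    if x > 0:\n"
        "        for i in range(x):\n"
        "            x -= 1\n"
        "    return x\n"
    )
    print(crude_complexity(sample))  # 3: one if, one for, plus one
```

Applied to published model code, even so blunt a measure would let us test whether the simulations behind complexity theory really are as simple as Bak's ten-minute programs suggest.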
Levels with indeterminate boundaries
The parts of complex systems can be considered at different levels. The levels of a living organism, for example, include the whole individual, the various internal systems, the cells, the parts making up the cells, and so on from macroscopic down through microscopic to subatomic levels. The levels are not exact but have indeterminate boundaries. That opens the question of where the boundaries lie. We can debate exactly what counts as a gene or any other separate component of a cell. Complexity theory also has levels with indeterminate boundaries.
One way of slicing the levels is within the theory itself. Is emergence, for example, a part of complexity theory, or such an essential feature of complexity that it is at the same level, a peer of the whole emergent theory? Another way of slicing the levels is between the theoretical and other levels, such as the nontechnical metaphorical level of this discussion, the technical level expressed in mathematical terms, the experimental and observation levels, the computer simulation level, and the ever-present shadow of any theoretical discussion, the practical level. A third possible slicing of the levels of complexity theory can be found within computer simulations themselves. Computer models range in complexity from simple equations in a few lines of code to massive internetworked systems. The notion of levels brings us to the emergence of those levels.
Complexity theory may be hard to pin down in a definition as something greater than, or at least different from, its parts. Here is another recursive opportunity, a self-similarity emerging from the notion of emergence, one of the principal features of complexity theory. Is complexity theory just a cluster of related ideas or something more?
Examples of emergent phenomena are everywhere. How do the properties of water emerge from the combination of hydrogen and oxygen atoms? How do ant colonies emerge from individual ants? How does a mind emerge from neurons? Yet despite its ubiquity, emergence has eluded precise definition by philosophers from before Aristotle to the present day. It would be impossible here to address the full range of definitions, some of which hinge on slippery concepts such as supervenience and downward causation. Nor will I address the holist-reductionist conundrum of seeing emergence in the reduction of complexity to simple, so-called holistic laws. Instead, I will use the statement of Holland (1998: 1), who explains emergence in only three or four words: much coming from little, or simply, much from little. His definition leaves room for the skeptical theme in this paper; that is, the suspicion that much there may have been made of little there. The definition also is readily applied to complexity theory itself without requiring an exact analogy to physical systems. By contrast, narrower definitions rely on multiple identical or similar entities acting autonomously, such as water molecules, ants, and neurons. There is no need, however, to suggest that the component parts of complexity theory are anything like billions of identical water molecules, or that the parts of complexity theory are autonomous agents, in order to suggest that much comes from little.
Let us metaphorically overlap parts of complexity theory like polymers in an autocatalytic soup. Consider the parts we are calling emergence, networks, systems, and nonlinearity. Parts of linear systems can be added or multiplied, but a linear system does not exceed the sum of its parts. We know from basic Euclidean geometry and elementary algebra that the whole can be entirely expressed in simple linear relations among its composite parts, such as x(a+b) = xa + xb. What results is only that, a resultant, like the term used for the addition of vectors. By contrast, a nonlinear system may be said to exceed the sum of its describing equations. Does the concept of emergence say anything more than the concept of nonlinearity? It could be argued that emergence is merely some variety of nonlinear geometry, expressed in nonlinear equations, not more than its parts, just a nonlinear composite. If this is so, then complexity, emergence, and nonlinearity are virtually interchangeable ways of explaining much coming from little. Or complexity theory, if it has an independent existence beyond its constituent parts, is a system, a group of ideas forming a whole. Complexity theory could be construed as an emergent whole, and emergence itself an emergent property of this whole set of ideas that have become known as complexity theory. Like other parts of complexity theory, such as networks and nonlinear dynamics, emergence can appear to be part of complexity theory or a synonym for complexity; if we suspect that a system is complex, even before having analyzed it, then what we are calling the system must be some kind of whole. Whether we can explain the system as emergent is another question.
On the other hand, patterns emerge from parts everywhere. Humans are pattern perceivers. It is what we do. We do not see only scattered stars in the sky, we also see constellations. If seeing patterns in 1s and 0s of a random boolean network is evidence of emergence, what patterns are not emergent? This is one of the lessons from Thomas Kuhn’s analysis of science in which observations from normal puzzle-solving science are slotted into the prevailing paradigm (Kuhn, 1962). Having decided that there is a pattern, scientists play the game of finding evidence to support it. Patterns within (emerging from?) complexity theory include not only emergence but also nonlinearity, networks, self-similarity, power laws, periodic avalanches, and so on. Likewise, this project of applying complexity theory to itself could be described as slotting patterns into patterns. Complexity theory promises that we can find comparatively simple rules, possibly reducible to individual equations, that explain the apparently improbable living organism or brain or other complexities in the universe. But the perception of patterns, including complexity theory itself, does not necessarily entail whole new entities emerging from their parts.
A concept linked to emergence and networks is that of a phase change, another notion borrowed from the physical sciences. The change from a liquid to a solid—that is, freezing—is one of the more common examples of phase change. To demonstrate phase change in relation to complex systems, Kauffman (1995: 54) uses the image of buttons connected by threads in a random graph. As the number of connections is increased, the connectedness of the graph does not increase linearly but instead suddenly becomes a clump of connected buttons, a composite, something other than the separate parts. Similarly, according to the theme of this paper, as networks of concepts become interlinked, does a phase change happen and complexity theory suddenly emerge? The networked parts of complexity theory are tightly coupled. Each concept leads to other concepts. There is no obvious beginning or end point.
This paper, for example, could equally have started with emergence or nonlinearity or networks. No single concept can be discussed without adding a see-also link to a related concept: complexity theory, see also chaos theory, see also catastrophe theory, see also general systems theory, see also cybernetics, see also nonlinear dynamics, see also emergence, and so on. Or starting with the geometry of complexity theory, see also fractal, see also self-similarity, see also recursion, and so on all the way around the network of ideas. With no obvious direction, and no linear progression from concept to concept, we get the composite of all threads bundled, or buttoned, into complexity theory, just as the theory itself suggests.
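Kauffman's buttons-and-threads image from above is itself easy to simulate. The sketch below (my own illustration, tracking clumps with a union-find structure) ties random threads between buttons and measures the largest clump; the sudden jump in clump size near one thread per two buttons is the phase change.

```python
import random

def largest_clump(n_buttons, n_threads, seed=0):
    """Kauffman's buttons and threads: tie random pairs of buttons
    together and return the size of the largest connected clump."""
    rng = random.Random(seed)
    parent = list(range(n_buttons))
    size = [1] * n_buttons

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for _ in range(n_threads):
        a, b = find(rng.randrange(n_buttons)), find(rng.randrange(n_buttons))
        if a != b:  # tie two clumps into one
            parent[a] = b
            size[b] += size[a]
    return max(size[find(i)] for i in range(n_buttons))

if __name__ == "__main__":
    n = 10_000
    for ratio in (0.25, 0.5, 0.75, 1.0):
        frac = largest_clump(n, int(ratio * n)) / n
        print(f"threads per button {ratio:.2f}: largest clump {frac:.0%}")
```

Below the critical ratio the largest clump stays a negligible fraction of the buttons; just above it, a giant clump suddenly claims most of them.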
The template may not be a perfect fit, but I have tried to show briefly how complexity theory can be applied to itself at a metaphorical level to generate ideas about complexity. For most of the discussion I have considered basic issues such as defining the parts of complexity theory. I have taken a middle ground between gushing and derisive accounts of complexity theory, preferring a skeptical but positive view of its potential. In doing so I have teased out various approaches, including proposals for modeling the evolution of complexity theory and treating computer modeling as a part of complexity theory, not just a medium of its expression. Because computers in my view are peers, not subordinates, in complexity theory, I have also advocated the release of computer source code as scientific literature supporting any models. In addition, I have made suggestions for moving from the metaphorical to the mathematical. Proposals for more rigorous application of complexity theory to complexity theory include exploring the mathematical relationships, such as power laws, within complexity theory; studying the fitness of ideas within complexity theory; using various proxies for measuring those ideas; studying the autocatalysis of ideas; estimating the fractal geometry; and developing general computer models of complexity theory.