The routes by which its aficionados have come to complexity science are many and varied, but several prototypes of complex systems stand out as salient “attractors” of interest (for more on these prototypes, see 1):

  • Self-organizing systems and the schools devoted to studying them, e.g., Haken’s synergetics and Prigogine’s far-from-equilibrium thermodynamics;
  • Computational emergence observed in agent-based modeling, artificial life, and other computational simulations;
  • Networks of various kinds including social/organizational networks, electronic networks, communication networks, and so forth;
  • Phase transitions and collective condensed matter phenomena such as “Quantum Protectorates”;
  • Biological emergence including symbiogenesis, lateral gene transfer, novel speciations, etc.;
  • Bifurcation scenarios in nonlinear dynamical systems (NDS; chaos theory).

The last entry in this list is the only one that is primarily mathematical in nature, although the others rely upon and are deeply invested in mathematical frameworks. One would be hard pressed not to find elements of NDS, in one form or another, in the research and theorizing that finds its way into E:CO. The ongoing vitality of NDS is attested by its prominence in the work of this year’s Fields Medal winners, particularly in the research of the Brazilian savant Arthur Avila, who has found surprising connections between dynamical systems and other branches of math.

Nonlinear dynamical systems goes back at least as far as Poincaré’s glimpses of phenomena mathematically so bizarre that he declared he dared not contemplate them! What he was seeing was a type of instability in his formulations of differential equations hitherto unprecedented and conceptually unexpected. This glimpse of chaos threatened the stability of nothing less than the cosmos itself, and moreover his work was taking place during an age when religious solace in such matters was considerably on the wane. Poincaré’s quest and the methods he developed were taken up as subjects of inquiry in their own right, as well as prompting the development of topology and other rich mathematical fields.

The two classic papers being reprinted in the current and next issues of E:CO represent milestones, some forty years ago, in the burgeoning growth of chaos theory. These papers exhibit incisive probings into the inner structure of one of the main mathematical processes at work in complex systems, namely, functional iteration, especially as exhibited in the logistic map, a nonlinear difference equation that has been found of great usefulness in several arenas of application including, e.g., population dynamics. It is important to note, however, that neither of these two papers focuses on the applications of the logistic map; instead they go right into its mathematical core in order to lay bare the unforeseen role of parameters in prompting the bifurcation of new attractor regimes.

Over many years of teaching complexity science, I have come to appreciate the instructional utility of the thematic images or pictures of complexity imbuing each prototype of complexity. In this I have followed the preeminent philosopher of science Mary Hesse’s2 cogent account of the heart of scientific creativity as resting on metaphoric/analogical visualizations of the natural objects being studied. Mathematics, too, can be expounded by recognizing and probing the imagistic and pictorial associations of equations, whose figurative notations “encode” processes, operations, parameters, and other means, a kind of metaphoric shorthand that delineates relationships among different mathematical objects3.

By invoking the notion of a picture or image, I am not suggesting a dumbing down of the math. That this is not the case will be obvious when we encounter: first, the mathematical “images” of the logistic map which, as a nonlinear difference equation, portrays the changes in the values of variable(s) at discrete time intervals; second, bifurcation diagrams showing the critical points of the bifurcation or control parameters at which attractors appear; third, ratios of parameter values (e.g., revealing the amazing constant Feigenbaum discovered); and so on. These specific thematic “images” follow from the fact that the centerpiece of both May’s and Feigenbaum’s inquiries is the sequence of bifurcations that occurs during the operation of functional iteration in the now notorious logistic map, which functions as the apotheosis of thematic imagery in their explorations of chaos. Without needing to dumb it down, however, a few preliminary remarks on what May explicitly focused on mathematically can be helpful.

Population dynamics and the logistic map

The author of the first classic paper, Robert May, stands out as a true polymath. An Australian trained as a theoretical physicist who has served stints at premier universities like Princeton, Harvard, and Oxford (only the beginning of a long list), he has made groundbreaking discoveries in theoretical ecology and biology, population dynamics, pure and applied mathematics, and many other fields, being knighted along the way for his persistently outstanding work.

In his forays into population dynamics and theoretical ecology, May inevitably came into contact with the so-called logistic map, a discrete function (a difference, not a differential, equation) fashioned to capture certain features of how populations change, for instance, the manner in which a predator/prey model effectuates an increase, then a decrease, of population:

  • First, the number of predators increases since prey are plentiful, not yet having been eaten;
  • After a peak in the population of predators, there are fewer prey left to be consumed, resulting in a decline in the number of predators (they starve to death);
  • As the population of predators declines, the population of prey rises since there are fewer predators to eat them.

It is useful to keep this parabolic (when graphed) dynamic in mind since the logistic map (see Eqn. 1 below), to which May and Feigenbaum devote so much of their attention, displays this rise and fall in terms of discrete changes in the population, e.g., from one year to the next. The logistic map discretizes the population variable by repeating or iterating the functional operation at each stipulated time interval (weekly, monthly, yearly…). This repetition is called functional iteration since each newly-arrived-at value of the variable is repeatedly “pumped back” into the function and the functional operation is iterated to arrive at the next value.

Here is the logistic map in the usual form found in population dynamics:

Eqn 1.  Pt+1 = aPt(1 − Pt)

Pt is the population at some arbitrarily chosen initial year t, Pt+1 is the population at the next year t+1, and a is the birth/death rate (after suitable adjustments, such as holding constant some factor comprising the system/environment, e.g., available space, food, etc.). Eqn. 1 is nonlinear because the variable Pt is multiplied by itself, as its expanded form Eqn. 1a makes explicit:

Eqn 1a.  Pt+1 = aPt − aPt²
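
The functional iteration of Eqn. 1 can be sketched in a few lines of Python (a minimal illustration of the pumping-back operation, not code from May’s paper; the function and variable names are my own):

```python
# Minimal sketch of functional iteration of the logistic map,
# P(t+1) = a * P(t) * (1 - P(t)), with the population scaled to [0, 1].
def logistic_step(p, a):
    return a * p * (1 - p)

def iterate(p0, a, n):
    """Pump each newly arrived-at value back into the function n times."""
    p = p0
    history = [p]
    for _ in range(n):
        p = logistic_step(p, a)
        history.append(p)
    return history

# For a = 2.5 the iterates settle onto the fixed point 1 - 1/a = 0.6.
print(iterate(0.2, 2.5, 30)[-1])
```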

It is also useful to keep in mind that although a big deal is usually made about the nonlinearity of the “nonlinear” dynamical systems aspect of complexity science, by itself this doesn’t imply much more than the simple fact that multiplication in some form is involved and that, when displayed graphically, a curvilinear outcome is drawn. Accordingly, what May is getting at with the “complicatedness” in his paper’s title is not this implication of nonlinearity. Rather, it is what’s been called the “folded nonlinearity” of the logistic map’s operation, which results in the startling results of bifurcation (even to the point of chaos emerging from what appears so simple). The logistic map, as a relatively simple one-variable/one-parameter difference equation exhibiting bifurcation, is an apt way to represent jumps in variable values in a host of different fields besides population growth: for instance, in epidemiology, the fraction of a population infected with a pathogen; in economics, the relation between the quantity of a commodity and its price, as well as theories of business cycles; in learning theory, how many bits of information can be retained in some time period; in social science, the propagation of rumors in differently configured social systems.

In portraying the function graphically for the purpose of representing nontrivial values of the population, and thus the complicated/complex dynamics “hidden” within it, Pt is plotted against Pt+1 only when the parameter a is less than 4, which keeps the population from exploding beyond any sensible interpretation. The plot is called a cobweb diagram since the way the solutions change discretely looks like a cobweb (see the parabolic curves in May’s paper, Figure 1 on p. 2). The birth/death parameter a shows up as the slope of the tangent line when Pt = Pt+1.

I remember from math courses that such parabolic equations were studied in calculus, but usually in the form of differential equations that were neither particularly interesting nor revealing; the work consisted of just finding solutions, graphically displaying the equations, the mechanics of differentiation, and so forth (unless we were fortunate enough to have taken classes in dynamical systems, but those were few at that time, for undergraduates at least). What May discovered and labeled “complicated dynamics” in the title of his classic paper was in radical contrast to what most of us encountered in calculus. His findings were due not to the usual exploration of changes in the values of the population but instead arose from playing around with the values of the parameter a. He refers to this manipulation of the parameter a as “tuning the nonlinearity” of the logistic map, since the steepness of the tangent line indicates how “nonlinear”, i.e., how “complicated” (in his terminology), things are going to get when bifurcation occurs.

By “tuning the nonlinearity” of the logistic map, May found, against expectations, that as the values of the parameter a increased, the values of Pt+1 did not change consistently in the usual manner. Instead, they could get “trapped” at some particular value(s) or circumscribed set of values and stay there no matter how many times the iteration took place, or they could in contrast display sudden jumps in values. These “trapped” and “jumped” values, as May found out, hinged on the values of the parameters and whether they kept the system inside an attractor or instead prompted a bifurcation, a jump out of the attractor. Attractors are so called because they metaphorically “attract” the values of the variables in the long run. It is only when the parameter values reach a critical threshold that they prompt bifurcation, that is, the emergence or disappearance of attractors.
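
These “trapped” values can be seen directly by brute-force simulation. The Python sketch below (illustrative only; the function names and parameter settings are my own) discards a long transient and then counts the distinct long-run values the iterates visit at a few settings of a:

```python
# Sketch of "tuning the nonlinearity": for a given value of the parameter a,
# run the logistic map past a transient, then collect the distinct values
# the iterates get "trapped" at (the attractor).
def attractor(a, p0=0.2, transient=1000, keep=64, tol=1e-6):
    p = p0
    for _ in range(transient):          # let the orbit settle onto its attractor
        p = a * p * (1 - p)
    seen = []
    for _ in range(keep):               # record the values visited afterwards
        p = a * p * (1 - p)
        if not any(abs(p - s) < tol for s in seen):
            seen.append(p)
    return sorted(seen)

print(len(attractor(2.5)))   # 1 value: a stable fixed point
print(len(attractor(3.2)))   # 2 values: the orbit jumps between a period-2 pair
print(len(attractor(3.5)))   # 4 values: a period-4 cycle after another bifurcation
```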

Complexity from simplicity?

It is generally thought that one of the most significant findings stemming from chaos theory is the idea that highly complex outcomes can emerge from very simple beginnings, a concept finding expression in the title of May’s classic paper and in the notion of “deterministic chaos” (in the sense that whatever is determined by a causal linkage—functional iteration consisting of such a causal linkage—should only generate results proportionate to whatever rudimentary elements are at work in the causation). Yet, this is not what happens. Rather, deterministic chaos brings forth something as complex as a random-seeming pattern, i.e., “chaos”. In May’s terminology, functional iteration engenders this “complicatedness” out of the “simple”, although “complicated” for May was not so much chaos per se as the operation of functional iteration, indeed, the entire bifurcation scenario generated by changes in the bifurcation parameter. Should we not, then, understand what May accomplished as falling in line behind the slogan of “complexity from simplicity”?

It is not only in chaos theory and its associated fields that this implication has found a foothold. We can hear it in other complexity science branches as well. For instance, it is very frequently heard in emergentist circles that emergence, in many of its different contexts, can only occur when a certain threshold of complexity is crossed (this threshold measured by different metrics according to what kind of emergence is being queried). Perhaps the most extreme recent statement of such a viewpoint is the claim that consciousness is an emergent phenomenon that can only come about when evolution has produced some sufficiently critical level of complexity, e.g., some supposed metric of neural connectivity or neural networking (see a very recent version of this thesis formulated in terms of “integrated information” in the work of Giulio Tononi and Christof Koch4).

The two classic papers being republished in E:CO should aid us not only in grasping how functional iteration in nonlinear maps takes place—they should also help give the lie to any such conclusions of complexity coming from simplicity. That is, these two classic papers can be read as showing how the logistic map and similar complexity “images” in actuality contain a huge reservoir of complex dynamics that is made manifest under the right conditions. In other words, what May and Feigenbaum have done is to further the project of discerning how complex patterns emerge from antecedent patterns or, as Turing described it in the arising of novel morphogenic patterns: “…developing from one pattern into another, rather than from homogeneity into a pattern.”5

This may seem like a semantic issue, but I am contending it is much deeper, since the popularization of the slogan of complexity coming out of simplicity can get in the way of appreciating just how much complexity, complicatedness, and complexification are going on already or have been going on. For instance, consider functional iteration: it is fundamentally a complexifying operation, in a sense likened to kneading dough to make bread, an operation that aims at optimal mixing of ingredients, especially diffusing yeast throughout the flour where the yeast can do its work. This can be seen vividly in the closely related horseshoe map of Steve Smale, which illustrates the “folded nonlinearity” of functional iteration by way of a horseshoe which is elongated, then folded over, the result being elongated and then folded over again, and so on.

The following is an algebraic expansion of the logistic map which I generated to better visualize the complexity inherent (perhaps “hidden” is a better term) in the seeming simplicity of the map as the first generation is taken to the Pt+4 generation by way of the operation of functional iteration (or pumping back-into) performed at each generation, i.e., expressing Pt+4 in terms only of Pt:

Eqn. 2  Pt+1 = aPt(1 − Pt)

Eqn. 3  Pt+2 = aPt+1(1 − Pt+1) = a[aPt(1 − Pt)][1 − aPt(1 − Pt)]

Eqn. 4  Pt+3 = aPt+2(1 − Pt+2) = a(a[aPt(1 − Pt)][1 − aPt(1 − Pt)])(1 − a[aPt(1 − Pt)][1 − aPt(1 − Pt)])

Eqn. 5  Pt+4 = aPt+3(1 − Pt+3) = a(a(a[aPt(1 − Pt)][1 − aPt(1 − Pt)])(1 − a[aPt(1 − Pt)][1 − aPt(1 − Pt)]))(1 − a(a[aPt(1 − Pt)][1 − aPt(1 − Pt)])(1 − a[aPt(1 − Pt)][1 − aPt(1 − Pt)]))

Eqn. 5 is only the fourth generation yet it is already a mess of complicatedness or complexity! Certainly, from the vantage point of starting at Pt (say, the zeroth year in which a population is measured), this initial condition obviously appears as the simplest facet of this expansion of the logistic map under functional iteration. With that apparent simplicity as a starting point, it appears in comparison with Eqn. 5 that a gargantuan complication has indeed ensued. But such a conclusion fails to take into consideration the fact that Pt was arbitrarily chosen as the zeroth year of the population being measured. What is left out is the complication involved in arriving at the value of Pt before any of the functional iterations shown in the sequence of Eqn. 2 to Eqn. 5 above (taking Pt to Pt+4) had been applied. This complication is left out for an economical reason: otherwise, one could not escape the enormous complication resulting from the need to keep updating the value of whatever Pt happens to be at any arbitrary time in terms of Pt+n.
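
This growth can also be checked mechanically. The short Python sketch below (my own helper names; a is set to 1 since the degree of the polynomial in Pt does not depend on a) expands the fourth iterate by explicit coefficient arithmetic and confirms that the degree doubles with each generation, reaching 2^4 = 16 by Pt+4:

```python
# Expand Pt+4 as a polynomial in Pt by repeated substitution into
# f(P) = a*P*(1 - P), representing polynomials as coefficient lists
# [c0, c1, c2, ...] for c0 + c1*P + c2*P^2 + ...
def pmul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def psub(p, q):
    """Subtract polynomial q from polynomial p."""
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [x - y for x, y in zip(p, q)]

a = 1.0
poly = [0.0, 1.0]                  # the polynomial "Pt" itself
for _ in range(4):                 # four generations of functional iteration
    poly = [a * c for c in psub(poly, pmul(poly, poly))]

print(len(poly) - 1)               # degree doubles each step: 2**4 = 16
```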

For this and other reasons, it should really come as no surprise (and definitely not after May’s brilliant exposition) that the logistic map and other similar “one-humped” equations are thoroughly imbued with “complicated dynamics” or “complexity” from the get-go. The finding of “chaos” along the period-doubling route or other routes is therefore just more confirmation that the logistic map or equation contains an inherent complexity playing itself out as the complex systems modeled by it evolve, change, adapt, grow, die, and so forth.

Moreover, it is not only its functional iteration that renders the logistic map a complexifying operation, it is perhaps even more so because of the bifurcation scenario that May so amply describes. The possibility of bifurcation into new attractors was not expected at May’s time but now, because of his and others’ work in dynamical systems research, we expect this kind of bifurcation complexification. What a surprise it was to many of us brought up on calculus and the prominent role of linearity in studying it. Who knew that the values we were seeking were confined to these “attractors”? We didn’t even know that values resulting from computing these equations were confined at all! The complexity was there all along but all we saw was the simplicity of what we took for granted. For the fact is that we are always already steeped in a nonlinear complex world replete with complex systems and all of the phenomena complex systems exhibit. And the causes of the complex mathematical dynamics of the equations are primarily found in the structure of the mathematics and not just in interaction with external causative agents.

May’s insightful analysis of the bifurcation scenario of the logistic map circles around the issue of instability, that new attractors arise when attracting equilibrium points become unstable and lead to divergence, not convergence. May shows how this instability leading to bifurcation is a function of the parameter in steepening the nonlinearity and the function’s internal mathematical structure. So along with nonlinearity, complicatedness, bifurcation and so on, we also live in a world abounding in instability … islands of stability in oceans of instability, islands of instability in oceans of stability. A physicist of no less a luminary status than Enrico Fermi (who was also well known as a genius mathematician for his ability to surmise what the solutions of mathematical problems would look like before even methods for solving them had been developed!) was of the strong conviction that future theories would involve these kinds of nonlinearity and “complicatedness” and therefore pushed hard for the development of computers which could hopefully make some headway using numerical methods.

Conclusion: Complexity gets more complex

When chaos theory first splashed on the scene in the 1980s (in the popular press, that is), among those given to speculation about weighty metaphysical themes, it was considered to have revolutionary implications, particularly insofar as it now appeared that nature had a capacity for generating surprisingly complex, even random dynamics out of much simpler initial states. Chaos suggested a plethora of reconceptualizations of the relation of order to disorder, predictability to unpredictability, determinism to stochasticity, and stability to instability. It has become a commonplace to hear, not just within complexity science circles but elsewhere in science and philosophy, about some “drive” or “proclivity” or “tendency”, no matter how slight, in nature “pushing” for complexity to arise from the simple to the complex (not unlike Spencer’s early evolutionary claim of the heterogeneous springing forth from the homogeneous). We can see such an inference, for example, in emergence-tinged conceptions of how life began out of much simpler autocatalytic chemical reactions, or even how the universe itself and its vast complicatedness and complexity emerged from the much simpler dynamics of a supposed singularity and a subsequent process of “inflation”. Not a few esteemed physicists or cosmologists seem to take such a cosmic drive for complexity for granted, even among the more sober and thoughtful like Frank Wilczek.

Helping to empower this supposed cosmic engine of complexity have been further discoveries in chaos theory which have uncovered even greater depths of “complicatedness”. Here I’ll briefly describe two such trends. First, according to Palmer, Doring, and Seregin6, in their careful and thoughtful examination of Edward Lorenz’s early work on the “butterfly effect” (i.e., sensitivity to initial conditions) at the end of the sixties and during the early seventies, it appears Lorenz was not yet finished with his earlier work of 1963: over the next decade, he ventured into a far more unpredictable arena, that of multi-scale systems (e.g., in a presentation he gave in 1972).

Remember that in the original butterfly effect, unpredictability in chaotic systems showed itself in sensitive dependence on initial conditions that would lead to a “blowing up” of even the most minuscule margin of imprecision in a measurement—hence a vastly different outcome than had been predicted. Although various ways were offered to mitigate the explosion of imprecision (e.g., see the brilliant method recommended by Shaw7), damage had been done to the age-old linkage of two of the cornerstones of scientific explanation, namely, determinism in the sense of a causal linkage and predictability of the outcome of the causation acting on the initial conditions. Lorenz, however, asked if there were other sources of unpredictability that would not yield even to improvements in the measurement of initial conditions, that is, were not due to just sensitive dependence on initial conditions.
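
The original butterfly effect is easy to reproduce with the logistic map at its fully chaotic setting a = 4 (used here as a simple stand-in for Lorenz’s weather equations; the names and numerical choices are mine): two orbits begun one part in a trillion apart soon disagree wildly.

```python
# Sensitive dependence on initial conditions: at a = 4 the logistic map
# is fully chaotic, and a tiny initial measurement imprecision "blows up",
# growing on average by roughly a factor of 2 per iteration.
def orbit(p0, a, n):
    p = p0
    out = [p]
    for _ in range(n):
        p = a * p * (1 - p)
        out.append(p)
    return out

x = orbit(0.3, 4.0, 60)
y = orbit(0.3 + 1e-12, 4.0, 60)       # imprecision of one part in 10^12
gap = [abs(u - v) for u, v in zip(x, y)]
print(gap[0], max(gap))               # the gap explodes from ~1e-12 to order 1
```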

What he came up with was the presence of a certain kind of multi-scale constituency in a system (again using weather systems as his forte), which rendered unpredictability to a much greater extent than the chaos he had first discovered. An example of what he had in mind for multi-scale systems6 would be a hurricane which scales at, say, 600 miles in diameter but contains mesoscale structures whose scales might be fifty miles or less (embedded supercell sub-cyclonic activity), individual cloud formations with scales of miles, turbulent sub-cloud eddies with scales appreciably smaller, and so on. Lorenz pointed out that small errors in measurements of the finer structures tend to grow more rapidly, and when these errors are large enough, they can affect measurements of coarser structures. Thus measurement imprecision itself jumps up scales, rapidly expanding at each new scale, on up the scaling hierarchy.

Moreover, at some point, because what is being studied is fluid dynamics, there would be a need to bring in heavy mathematical equipment like the Navier-Stokes initial-boundary value approach to partial differential equations, and like methods. But this could quickly bring the whole mathematical modeling enterprise up against intractability. It is this multi-scale complexity which Palmer, Doring, and Seregin conclude is what Lorenz meant by the “real” butterfly effect. Whereas improvements in measuring initial conditions might help in dealing with the explosion of unpredictability due to sensitivity to initial conditions, in Lorenzian multi-scale systems predictability estimates would not be extended in any significant way by such methods. Hence, certain formally deterministic fluid systems which possess many scales of motion are observationally indistinguishable from indeterministic systems.

At around the same time that Lorenz was doing his “real” butterfly effect work and that both May and Feigenbaum were exploring the functional iteration route to chaos via bifurcation, Edward Ott8, a colleague of James Yorke (who had coined the term “chaos” in 1975), was drawn toward another kind of unpredictability related to chaos, one that went one better even than Lorenz’s “real” butterfly effect: a riddled basin of attraction. A basin of attraction refers to all possible values taken by the variable(s) in question (such as P in the logistic map) which, through the dynamical operations effectuated at particular values of the control parameter (e.g., the birth/death rate in the logistic function), are attracted to a given attractor. For instance, the basin of attraction for a population which doesn’t change at its yearly update (May’s equilibrium fixed point) consists of all the possible values for the population which wind up at the same value, the fixed point attractor, that is, remain the same at the yearly measurement.
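
For contrast with the riddled case, here is what a well-behaved basin of attraction looks like in the logistic map at a = 2.5 (a minimal sketch with my own names): every initial population strictly between 0 and 1 lands on the same fixed-point attractor, 1 − 1/a = 0.6.

```python
# An ordinary (non-riddled) basin of attraction: at a = 2.5 the whole
# open interval (0, 1) of initial populations is drawn to one fixed point.
def settle(p0, a, n=2000):
    p = p0
    for _ in range(n):
        p = a * p * (1 - p)
    return p

a = 2.5
fixed_point = 1 - 1 / a                        # = 0.6
basin = [i / 100 for i in range(1, 100)]       # initial values 0.01 .. 0.99
hits = sum(abs(settle(p0, a) - fixed_point) < 1e-9 for p0 in basin)
print(hits, "of", len(basin), "initial conditions reach", fixed_point)
```

In a riddled basin no such clean statement is possible: arbitrarily close to any initial condition sit points bound for a different attractor.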

But in a riddled basin of attraction, there are points arbitrarily close to, but not the same as, other initial conditions in the basin of attraction, and these close-by initial points generate phase orbits that go to a different attractor! In measuring such a system, because one initial point is arbitrarily close to another, measurement imprecision can pick out a neighboring value of the variable that passes through the nonlinear operation (even the blow-up from sensitive dependence on initial conditions) and leads to an entirely different phase portrait. As Ott puts it, the existence of riddled basins calls into question the repeatability of experiments, for how does one know the initial conditions belong to “this” particular basin of attraction and not that one?

Bifurcations, sensitivity to initial conditions, multi-scale amplifications, riddled basins of attraction, instability—these terms and their associated images would have us believe we are condemned to a hurly-burly, chaotic, unstable universe with nowhere to stand that is determinate, stable, safe from the fluctuations of nature. Yet there have been parallel developments as well which have uncovered contrasting metaphoric descriptions of complex systems. In the next issue of E:CO we will look at one such example where constancy replaces flux and universality replaces the vicissitudes of specific contexts, notably the famous Feigenbaum constants, which reveal universal constants at work in dynamical systems and show that nature also consists of reservoirs of constancy.

May’s original article was published as R.M. May (1976). “Simple mathematical models with very complicated dynamics,” Nature, 261: 459-67.