The aim of this article is to ask, “Whither complexity science?” On the face of it, it may seem strange to ask such a question. We live in a complicated world, which becomes more complicated by the day. A science that explained this fact, and counselled on how to live with it, would seem most useful. Yet, even before the discipline is fully established, complexity science has the air of something not being quite right.

I do not pretend to a great historical knowledge of complexity science, yet throughout the range of studies that I have seen, there always seems to be the whiff of some skeleton buried under the foundations. It is undeniable that much good and detailed work has been done, but a number of authors, from both within and without the field, in their more reflective moments, seem to express doubts. To take a few examples at roughly five-year intervals from the past 15 or so years:

  • Rosen (1985) expresses doubt about the concept of emergence.

  • Cariani (1989) similarly calls emergence into question and provides a critique of computer models of emergence.

  • A special issue of Futures asks whether complexity science is “just the fashion of the 1990s, or will it have a more lasting impact on the way in which we conceive and operate on the world around us?” (Sardar & Ravetz, 1994: 563).

  • Richardson, Cilliers, and Lissack (2000) also express concern over the hype around complexity science, and suggest that complexity science has “some affinity with skeptical postmodernism” in that it tends to undermine all attempts to fully characterize the world, including its own attempts. It is therefore a “gray” rather than a “black and white” science.

In what follows, I shall lay out what I see as the reason for this unease about the discipline. I suggest that there is a tension at the heart of complexity science that handicaps it, and that must be resolved before it can move forward.


Before looking at complexity science per se, it is useful to consider the environment in which it exists, which is that of modern analytic, reductionist, Newtonian, etc. science (hereinafter referred to as “analytic science”). The following account will draw heavily on the work of Robert Rosen (e.g., 1985, 1991), a noted complexity theorist. He provides an account of the basic core of science, which demonstrates its essential nature and the consequences this has. He begins by noting that the aim of science generally is to represent causation—why things happen in the world—by inference—how we can legitimately turn one symbolic expression into another (Rosen, 1991: 59). The parallelism of the two sorts of entailment (causal and inferential) can best be expressed diagrammatically (Figure 1).

Fig. 1: Causal and inferential entailment

W represents a process in the world, operating from time T1 to time T2. E represents the encoding of the nature of the relevant part of the world at time T1. L represents the working of the model to produce the state of the model corresponding to time T2. And finally, D represents the decoding or interpretation of the model to determine how the world will look at time T2. In a good model, the path E to L to D should equate to the process, W, in the world.

The inferential modeling system that science has generally chosen is one in which a system has states that change according to a set of dynamical laws (e.g., Rosen, 1991: 102). Indeed, Rosen claims that effectively this model has become what science is (e.g., Rosen, 1991: 68). In simple notation, this model can be written as S(t + 1) = T.S(t), where S(t) is the state of a system at time t, and T is a recursive operator that transforms a state into its value at the next instant.
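The scheme can be sketched as code. The following toy is entirely my own illustration (the temperature example, the function names, and the cooling law are invented, not Rosen's): encoding (E) takes a measurement of the world into a model state, the dynamical law (L) iterates S(t + 1) = T.S(t), and decoding (D) interprets the final state back in the world's terms.

```python
# A toy sketch of Rosen's modeling relation: the world process W is mirrored
# by encode (E), a state-transition rule (L), and decode (D).
# All names and the cooling law are illustrative inventions.

def encode(world_temperature_f: float) -> float:
    """E: measure the world and produce a model state (here, Celsius)."""
    return (world_temperature_f - 32.0) * 5.0 / 9.0

def step(state: float) -> float:
    """L: the dynamical law T, giving S(t + 1) = T.S(t).
    Here: Newtonian cooling toward an ambient 20 C."""
    return state + 0.1 * (20.0 - state)

def decode(state: float) -> float:
    """D: interpret the final model state back in the world's terms."""
    return state * 9.0 / 5.0 + 32.0

def predict(world_temperature_f: float, steps: int) -> float:
    """Follow the path E -> L -> ... -> L -> D, which in a good model
    should equate to the process W in the world."""
    state = encode(world_temperature_f)
    for _ in range(steps):
        state = step(state)
    return decode(state)
```

Note that `encode` and `decode` are exact inverses of each other here, as in the standard scheme; the tension discussed below arises precisely when a decoding other than the inverse is used.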

This sets up the following coding scheme (Rosen, 1991: 5-6, 67):

Table 1: Inferential modeling system coding scheme

  • World: a finite set of attributes → Model: a finite set of meaningless symbols

  • World: a state of a system, composed from the set of attributes → Model: a proposition built up from the meaningless symbols

  • World: a set of dynamical laws for turning states into new states → Model: a finite set of production rules for turning given propositions into new ones

And so, overall:

  • World: configurations of structureless particles pushed around by dynamical laws → Model: sets of meaningless symbols pushed around by production rules

This scheme is that of Newtonian mechanics. Rosen contends that though it has required some supplementation and generalization, the heart of Newtonian mechanics remains the heart of science today. All science is based around the idea of states and their transformation through dynamical laws.

Normally, all attention is focused on the process in the world (W) and its corresponding model (L). However, the rather neglected coding scheme (E and D) also plays a vital part. It should be noted that neither E nor D corresponds to anything in the world, nor to anything in the model. They are intermediaries between the two, but necessarily lie outside of both. Their status is anomalous, as indeed might be guessed by noting that the commonest form of encoding is measurement, the nature of which has been unclear since the advent of quantum mechanics (e.g., see Pattee, 1993, for further thoughts on this).

Consider what would happen if we tried to follow a system according to this scheme. Although it is not feasible in practice, the point can be made more clearly by considering a system corresponding to the evolution of the Earth from the time of the primeval soup (T1) to the current time (T2). An encoding (E) of the Earth at time T1 would presumably be in terms of the positions and energies of the various simple molecules around at that point. The manipulations of the model (L) would constitute a very sophisticated chemistry and would determine how all these positions and energies varied over time. Eventually, we would reach time T2, and we would have a state representing the chemical state of the world today. This could be decoded to say how the world would look: the positions, energies, etc. of the various compounds.

Even assuming that we had a perfect chemistry, so that we did end up with a good representation of the Earth today in chemical terms, that is precisely all we would end up with. As we know, the world today consists of intelligent humans with sophisticated technology, who have minds and emotions, and so on, yet all our model represents is chemicals. Ex hypothesi, if we analyzed the world as it is today, we would end up with exactly the same state as came out of our model (that this is true is, of course, crucial to the whole idea of emergence). What we lack is the parts of the world that have appeared since the primeval soup (life, minds, emotions, language, and all the rest). But it is not possible to get such things out of the model, because they are not codable into the model. The only way in which such things could be obtained from the model is to use a different decoding (D*) than the decoding (D) that is the inverse of the encoding. If we used an appropriate D*, we would conceivably end up with a description of the world in which we live; though there is a problem that things like life are resistant to coding (e.g., see Hiett, 1998).

However, as emphasized above, the coding corresponds to nothing in the world. It is problem enough to understand a single encoding with a decoding that is its inverse; an encoding paired with a noncorresponding decoding would be doubly meaningless. And crucially, because the change of coding is the only way we can get the world as it is at T2 out of a model starting with the world of the primeval soup at T1, it means that we cannot explain how the world at T2 appeared.

So, if we take our model at T1 as a true and complete model of a system (potentially the whole universe), which is what science strives for, we cannot explain how the system ends up qualitatively different at T2. Since qualitative change does exist, this might be taken as a refutation of the proposed modeling scheme. It is certainly a problem for standard, analytic science.


Complexity science is very familiar with this problem. In many ways, it is its raison d’être. It sees that analytic science reaches its limits here, and tries to do something about it. Accordingly, there are many calls (e.g., Funtowicz & Ravetz, 1994; Kampis, 1991; Prigogine & Stengers, 1985; Rosen, 1985) for a new science to replace the old analytic science. However, in its implementation, the new science is usually found to be not so much a new science as a correction to the old science. Moreover, because of the nature of the old science, the effect of the correction is to introduce a contradiction, from which the new complexity science can never recover.

Conceptually, analytic science can be seen as anything fitting Rosen’s diagram. That is, if the world can be coded into a symbolic state, manipulations of the symbolic state performed, and the resulting state decoded back into the world, then we have a candidate for analytic science (note that here we are talking about the general form of analytic science, not the particular coding and laws actually corresponding to the real world).

Any scheme of this form will be causal, deterministic, and noncreative. Since, as we have seen, such a scheme does not in fact model the world, complexity science needs to add something to, or change something in, this scheme. However, the scheme of analytic science is complete. Anything fitting the scheme could be analytic science, and anything not fitting the scheme is not and cannot be analytic science. There is therefore no way to add anything new to the scheme, and such is the scheme’s simplicity and unity that if anything is taken away, it collapses.

What is especially important to note here is that the models of analytic science are complete with respect to causation. All the causes in the process (W) that take a system from T1 to T2 are represented in the model (L). No further causes are needed. As a corollary, because we explain the world in terms of causes—Why this? Because that; cf. Rosen (1985)—no further causes being needed means that no further explanations are possible. But because there are further things to explain (the creativity, indeterminacy, emergent phenomena, etc.), this puts complexity science in an impossible position. Effectively, it must satisfy two contradictory conditions:

  1. The models of analytic science are (ideally) complete with respect to causation.

  2. The models of analytic science are clearly not complete with respect to causation, since there are further things to explain, which equates to further causes being needed.

That is, complexity science must both not change the model (L) and change the model (L) at the same time. (The appendix at the end of this article deals with some of the objections that might be raised against this position.)

These two contradictory constraints are naturally difficult to satisfy, but complexity science has found two general methods that do appear to do so. These are the use of chance and a change in the coding.

First, consider chance. The usual way of introducing chance in complexity science is to add a very small, random correction, d, to a state. Thus, if a system in state S(t) should move to state S(t + 1), it is actually made to move to state S(t + 1) + d. Now, because certain systems are very sensitive to even very minor alterations at certain points, incorporation of such noise can have significant effects. A system may end up in a very different state than it would have done had no noise been applied. Thus, in a sense, chance is the cause of the difference in the final state.
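The claim that a vanishingly small correction d can produce a very different final state is easy to demonstrate with any chaotic map. The logistic map is a standard example (my illustration, not one drawn from the text): two trajectories identical except for a correction of 10^-12 at the first step end up in thoroughly different states.

```python
# Sensitivity to a tiny perturbation d: two trajectories of the logistic
# map x -> 4x(1 - x), differing only by d = 1e-12 in the initial state,
# decorrelate completely within a hundred steps.

def logistic(x: float) -> float:
    return 4.0 * x * (1.0 - x)

def trajectory(x0: float, steps: int) -> list:
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

d = 1e-12
a = trajectory(0.3, 100)
b = trajectory(0.3 + d, 100)
diffs = [abs(x - y) for x, y in zip(a, b)]

# At first the correction is negligible; by the end of the run the two
# trajectories bear no relation to each other.
print(diffs[1], max(diffs))
```

Note, in line with the argument that follows, that the perturbed trajectory still consists of states of exactly the same kind: the noise changes which state the system ends up in, not the nature of the states themselves.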

The second general method that complexity science uses involves a change in the coding. For instance, the primeval soup is encoded as molecules and after it has evolved it is decoded as living organisms. Cariani (1989: especially 157-9) discusses the use of this tactic in computer models that try to demonstrate emergence. The general way in which this method works has already been described above, and it can be seen how it fulfills both criteria. Nothing changes in the model (L), but only in the decoding (the use of D* rather than D). Yet, the output of the model (once decoded) is different from what it would have been with a decoding of D. Thus, something different is produced without changing the causes as portrayed in the model.

While it seems that both methods cause something different to appear without altering the causes in the model (L), if we look more closely we will see that this is not the case. The method of adding noise makes use of a vanishingly small correction. This allows it to say to the first constraint that the correction is so small that really nothing has been added. But to the second it can say that something has indeed been added, and because of certain regions of incredible sensitivity, this very small addition can have a very significant effect.

However, if we ask what cause this correction amounts to, we cannot answer our question. Is noise real? Is it a reflection of our ignorance? Our inability to measure exactly? Or what? If we do not know what noise models in the world, we cannot use it as a proper explanation of what happens. If it models a failure on our part (e.g., our ignorance or our inability to measure precisely), then it clearly does not represent a cause in the world. If it does represent something in the world, then either that something is not truly dependent on chance, in which case it should be included explicitly in the model, and the model is currently incorrect; or, if the something is truly just a matter of chance, then it does not allow us to explain what goes on, other than to simply say that it did happen by chance. Naturally, this could be the case and one might even say that the apparent inherent indeterminacy in quantum mechanics supports it, although see the note on this in the appendix.

Nevertheless, before becoming too embroiled in the ontological status of chance, it is perhaps more important to note that the use of chance only changes the final state in which the system ends up from one particular state to another of the same kind. It does not cause new properties to appear, so it does not change the nature of the states of the system. In other words, it does not model emergent phenomena. Saying this is not to deny the importance of recognizing that certain models are very sensitive to even minute changes. If we intend to use such models to make predictions, it pays us to investigate this sensitivity. However, although it might explain some of the indeterminacy in the world, it does not deal with the problem of emergence. For this, it is necessary to change the coding. However, this also has its problems.

Changing the coding does appear to produce emergent phenomena. It does indeed satisfy the first constraint, because it does not introduce new causes. But consequently, it does not satisfy the second. This means that while it does produce a different output (“emergent phenomena”), the model (L) is unchanged, and so it cannot be said to model the cause of the new output. Rather, the change causing the different output is in the coding, and this does not represent anything (and certainly no cause) in the world. So, from the perspective of the model, the emergent phenomena are uncaused. And thus, from the perspective of the model, which is our theory of the world, we cannot explain why they emerged.

In summary, then, neither chance nor recoding offers us a way of understanding complexity. And, unless the contradiction of both needing and not needing new causes is overcome, complexity science will always be hamstrung in similar ways to the above. The contradiction means that complexity science can never have new causes to explain emergent phenomena, so at base it can only say things such as new phenomena “just emerge” or that noise “may affect things,” neither of which really provides an understanding of complexity.

In what follows, I shall try to suggest a different tack that takes complexity science away from this contradiction.


This alternative perspective accepts that there are limitations to analytic science, but it does not seek to find corrections for them. This is because it does not see that corrections are possible. That this is the case can be seen from the simplicity and generality of Rosen’s diagram. Any representation of the world is linked to the world by a coding. But there appears to be a fundamental mismatch between these three things. The world is seen as creative, while the coding and the representation are fixed. Symbols cannot spontaneously change, yet the world to which they are linked apparently can (once life did not exist; now it does, etc.). There is therefore no way in which any symbolic representation can adequately mirror the world. At most, symbols can be rearranged into different configurations, but nothing really new appears (see also Kampis, 1991). Thus, neither analytic science, nor any similar scheme, can provide us with a complete understanding of the world.

Complexity science is therefore completely right on this point. It is similarly completely wrong if it tries to correct analytic science, since no manner of corrections will overcome this basic problem. This is exemplified by the commonly used corrections (noise and recoding), which, as we have seen, are not real corrections since they explain nothing (see also Cariani, 1989; Kampis, 1991; Rosen, 1991).

This leaves complexity science with a difficult problem. It is fine to make a negative point (that analytic science can never provide the true description of the world), but after a while it becomes a little boring. One would eventually like to move on and say something positive, but what?

One possible route is a pragmatic, practical one. It is clear that we do not have any theoretical handle on why the world is complex, how one should act in such a situation, how to make things less complex, and so on. However, through years of experience and sensitivity to situations, various abilities, techniques, and ideas have been developed that seem to work. These skills are not particularly the property of complexity science, but of systems people in general, and perhaps just of people in general.

There are, no doubt, many good managers who have never heard of complexity science, but who are very good at managing the complicated situations in which they find themselves.

Although it is merely a terminological point, these skills and methods do not constitute a science, but more an art. To repeat, no understanding of the sort that we currently mean by science is possible here. Therefore, there is no possibility of developing techniques and methods consonant with that understanding. It may be that the various methodologies established help one in dealing with complex situations, but if they do so, it is not in virtue of our theories of the complex. Rather, it appears, it is a sensitivity to situations and an ability to sense possibilities that are more important than following a technique. If the discipline is seen as an art rather than a science, then the focus shifts from methods to practitioners, and there is less danger of people who know the methods thinking that they understand something and/or that application of the methods is the key to success (e.g., Dreyfus & Dreyfus, 1986).

Still, the question remains as to whether any sort of understanding is possible. For those of us who lack the skills to be artists of the complex, but who have the desire to know, can anything be done? This is a large and difficult question, for, effectively, the task that complexity science is setting itself is the understanding of the whole world, which is properly a philosophic rather than a scientific task (cf. Verene, 1993a, 1993b). However, undertaking such a task is necessary, since all sciences need good foundations; complexity science is saying that the foundations of analytic science are built on shifting sands, and that therefore new foundations need to be constructed before new sciences are possible.

There is a major question as to whether such foundations can exist. Indeed, a skeptical wing of complexity science (e.g., Richardson, Cilliers, & Lissack, 2000) sees complexity science as having affinities with the postmodern route in which there are no universal, absolute foundations, but only limited, provisional ones. That is, from the perspective of science and not just that of the arts, there is the suggestion that the search for universal foundations is hopeless. My “hope” is that such a conclusion is unduly pessimistic, resulting from the inability to modify the scheme of analytic science in a noncontradictory way. Logically, a contradiction allows all things to be deduced from it, and so necessarily arouses skepticism. However, if one sees complexity science as rejecting analytic science as a route to an ultimate description of the world, rather than attempting to correct it, one has a clean slate with which to work. The skepticism that comes from trying to deal with a contradiction does not arise.

However, the problem of skepticism is replaced with the problem of the clean slate: What should one write on it? At this point, it is useful to return to Rosen’s diagram and consider it in more general terms. Effectively, it and its accompanying methods represent an incarnation of Cartesian dualism. On one side we have the physical world, on the other the symbolic world, with the coding providing the link between the two. And, as with Cartesian dualism, the link between the two presents a problem. How do two completely distinct worlds become joined? How, even, is it that they are separate in the first place (and do not say that the symbolic domain emerged from the physical)?

It must be true that if there is to be an understanding (which lies in the symbolic world) of the physical world, then the physical and symbolic cannot be completely separate. If they were separate, it would be miraculous if the symbolic could contain any understanding of the physical. But, if the physical and symbolic are joined by a rigid link from symbol to world, then as we have seen the rigidity of symbol and link cannot match the creativity and flexibility of the world.

Thus, if neither separation nor linking works, the only remaining possibility is to bring world and symbol together, as in a monistic philosophy (e.g., that of Spinoza). World and symbol then go together like the two sides of one coin. This means that any symbolization is always appropriate to the world (symbolization is effectively part of the world and is just as natural as anything else). However, most symbolizations are not understandings. The symbolization appropriate to hitting one’s thumb with a hammer is usually “Ouch!”—not an analysis of forces and arcs of swing. Similarly, the appropriate symbolization for a situation might be a lie, not the truth of the matter. This makes achieving an understanding of the world in such a framework very difficult. Indeed, I do not believe that anyone has succeeded in developing a monistic framework that provides a basis for understanding, and certainly this is not the place in which to attempt it. The point to be made is that a dualistic framework of the Cartesian sort does not finally work and therefore it needs to be replaced. A monistic framework would appear to be the only viable alternative, but what it amounts to is very unclear.


The aim of this article has been to suggest that, as currently conceived, complexity science contains a contradiction. It both accepts and rejects analytic science. In practical terms, this means that it accepts the causal scheme of analytic science, and so can only add to it noncausal factors (chance and recoding). But, because its additions are noncausal they provide no explanations, and so complexity science fails in its mission of explaining emergent phenomena. By definition, one cannot explain/predict/understand emergence or noise.

However, complexity science does not have to take this route. It can accept that analytic science does not provide a full account of the world, but there is no need for it to see its task as correcting analytic science. Two possible routes follow from this.

First, there is the pragmatic route of dealing with complex situations. I have not said much about this, nor do I want to, for doing so would give the impression that much could be said about it; although I would like to encourage its adoption. The essence of the interpretation of complexity science contained in this article is that no formalization of complexity is possible. It cannot therefore be described, nor formally taught. It is more an art or craft for which an aptitude is necessary, and for which experience is the only schooling.

Secondly, there is the possibility that some understanding of complexity is possible. I do not pretend to know what this is and, for this reason, I cannot say much on this possibility either. Logically, I believe that perhaps a form of monism is the direction in which work should go. However, the sort of logic used to conclude this may belong to the problem rather than the solution. It will certainly take more than minor modifications to analytic science to achieve success in this area.


This appendix discusses in slightly more detail the claim of the need to both not add causes and to add causes, and the nature of indeterminacy in quantum theory.

To the claim of the need to both not add causes and add causes, it might be objected that this relies on analytic science’s having a complete understanding of causation in the world, which, given the historical rate of change in science, seems unlikely.

However, this objection is mistaken. Rosen’s diagram is intended generally and covers all causes, known and unknown, which fit the scheme of analytic science. As this article argues, what is required is a different scheme, not further causes that do fit the scheme. Emergent phenomena are caused in some way, but their causes are not such as those that fit in the scheme of analytic science. It is therefore necessary to change the scheme, although, as the remainder of the article argues, it is not easy to see how.

The success of analytic science in describing many domains in the world would suggest that analytic science provides a number of approximations to the true scheme, but unfortunately, this does not help us work back to what that true scheme is.

With regard to the place of chance in quantum theory, and whether this might legitimize or even require the addition of chance to models, it is very difficult to comment. There is no final consensus on the interpretation of quantum mechanics, and so it would be a rash person who made sweeping pronouncements concerning it. However, Rosen’s diagram does throw some useful light on the situation, since it suggests that the indeterminacy problem lies in the coding (see Rosen, 1991: 103-5). The mathematical apparatus (L) of quantum mechanics works perfectly well, with wave functions being manipulated as desired. But quantum mechanics throws doubt on the standard encoding, which is normally measurement (cf. Rosen, 1991: 59) and is where the indeterminacy arises. What this means with regard to the ontological status of chance is unclear, since as noted the ontological status of measurement itself is unclear. But, if this understanding is correct, it suggests that chance is not a part of the quantum mechanics model so much as involved with the coding.


The author would like to thank an anonymous reviewer for helping to clarify some of the points made. Responsibility for any remaining opacity is, of course, the author’s.