This paper discusses the possible influence of incomputability and the incompleteness of mathematics as a source of apparent emergence in complex systems. We suggest that the analysis of complex systems, as a specific instance of analysing a complex process, may be subject to inaccessible 'emergence'. We discuss models of computation which aim to transcend the limits of traditional Turing systems, and suggest that inquiry into complex systems in the light of the potential limitations imposed by incomputability and incompleteness may be worthwhile.

We suggest that what we intuitively define as (strongly) emergent systems may include processes which are not computable in a classical sense. We ask how incomputable processes would appear to an observer and, via a thought experiment, show that they would display features normally defined as ‘emergent’.

If this conjecture is correct, then two important corollaries follow: first, some emergent phenomena can neither be studied nor modelled via classical computer simulations; second, there may be classes of emergent phenomena which cannot be detected via standard physical measurements unless the process of measurement exhibits super-Turing properties in its own right. Borrowing from recent literature in computer science, we then show that tools which enable us to break the classical computational barrier are already available, and suggest some directions for a novel approach to the problem.

Implicit in most approaches to the study of emergence are three concepts: the existence of distinct levels of description, the appearance of novelty, and the causal power attributed to emergent features.

The dilemma which has kept scientists and philosophers busy for decades is whether this novelty and inherent causality are real physical phenomena or lie merely in the eye of the observer; put differently, whether reductionism is the only tool we need in order to understand Nature.

The most efficient language we possess for studying Nature is Mathematics. It is used not only to describe processes but also, via mathematical transformation rules, to deduce, extrapolate and manipulate novel processes. It is thus crucial to be sure that the mathematical machinery we use is consistent and correct. It is also important that it be as exhaustive as possible, since the more mathematical rules (theorems) we discover, the more options are available to us for interpreting and manipulating Nature's workings. These needs motivated the mathematicians at the end of the 19^{th} century who dreamt of devising a set of axioms and transformation rules from which all other mathematical truths could be deduced as theorems. In Hilbert's dream, this would be achieved simply by mechanical manipulation of symbols devoid of external meaning (Chaitin, 1993, 1997: 1-5). Basically, Hilbert was seeking a consistent and complete formal system within which all theorems of Mathematics could be proved. The dream was famously shattered by the work of Gödel (1931), who proved that no formal system in which we can do integer arithmetic can be both complete and consistent. For the sake of our discussion, Gödel's incompleteness theorems can be summarized as follows (see Gensler (1984) for a simplified explanation of the theorem and its proof). In a formal logical system:

Given a set of axioms, and;

A set of transformation rules of sufficient complexity^{1};

There exist statements which are either true but not provable, or false but provable. In the first case the system is incomplete; in the second it is inconsistent.

Here we focus on systems which are incomplete, that is, systems which contain statements which are true but not provable. Saying that a true statement is not provable means that no finite sequence of applications of the transformation rules leads from the axioms to that statement^{2}. A related result was subsequently demonstrated and generalized by Turing (1936) (the famous Halting problem). Turing showed that there exist processes and numbers which are not computable, where 'computable' means calculable via a mechanical procedure (an algorithm) given a certain input. Here it is important to notice the relation between a formal system as described above and a Turing machine. Both start from some initial conditions (axioms and input data), both carry out a finite number of predetermined 'mechanical' operations (mathematical/logical rules and algorithmic instructions), both produce results (theorems and outputs), and both lead to inherently undecidable statements (unprovable statements which are true, and incomputable numbers). This is reflected in a formal equivalence between computation and formal logic (as described in Penrose, 1994: 64-66). In the rest of the discussion we will use the words unprovable and incomputable interchangeably.
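Turing's diagonal argument can be sketched in a few lines of code: suppose some total procedure `halts(program, input)` could decide halting; then one can construct a program which defeats it. The sketch below is purely illustrative (the names and the stand-in predictor `naive_halts` are ours); it makes the contradiction concrete for one candidate predictor, and the same construction refutes any other.

```python
def paradox(halts):
    """Given a purported halting predictor `halts(program, input)`,
    build a program on which the predictor must be wrong."""
    def spite(program):
        if halts(program, program):
            while True:      # predicted to halt: loop forever instead
                pass
        return "halted"      # predicted to loop: halt immediately instead
    return spite

# Stand-in for any total, algorithmic predictor (here: "nothing halts").
def naive_halts(program, inp):
    return False

spite = paradox(naive_halts)
print(spite(spite))  # the predictor said this would loop, yet it halts
```

Had `naive_halts` answered `True` instead, `spite(spite)` would loop forever; either way the predictor is refuted, which is the core of Turing's proof.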

There is quite a body of work discussing the philosophical basis and nature of complex systems science. Seeking a deeper understanding of the science of complex systems, alternatives to the traditional scientific-reductionist approach have been proposed and explored (Mitchell, 2004; McKenzie & James, 2004). Several papers have gone so far as to address the philosophical foundations of the discipline itself.

There is a self-referential discourse in our attempts to understand how to “do” the science of complex systems which is maddeningly appealing. While using the structure and language of complex systems science (or something logically equivalent) is probably inescapable, it gives rise to a self-referential element which seems suspiciously analogous to the approach metamathematics takes with mathematics. This sort of approach opens the possibility that some Gödelization of the science of complex systems is lurking in the shadows even as we attempt to understand and classify these systems. Far from signalling a flaw in our reasoning, this may implicitly be one of the hallmarks of a complex system, and an indicator that we must be ready and willing to extend our systems.

Cilliers (2001), perhaps, comes closest to addressing the fundamental issue in his paper "Why We Cannot Know Complex Things Completely". He ties the limits of what we can know about a complex system to the limits of the models we are able to build of it.

Here we carry out a thought experiment. Suppose we have a mathematical/physical system which is consistent: we assume that some physical relationships are robust enough that we may include them in our mathematical structure as axioms (that is, we take them to be true^{3}) and that these rules do not undermine the consistency of the system. Now suppose we imagine some physical process which we (magically) know to be incomputable within this system. Our purpose here is not to actually present such a process (or even to assert that such processes must exist), but rather to tease out some of its consequences.

We form an extended mathematical system which takes the physical law as an axiom added to the original, fundamental mathematical/physical system. There is an important point here: Gödel says that our system cannot be both complete and consistent; if our law is inconsistent with the underlying system, then we cannot necessarily make assertions about what must be present (apart from the obvious inconsistencies). For the sake of the thought experiment, we will suppose that we have chosen our physical law carefully and that it is consistent with the rest of the system.

We take this extended system to represent our ‘physics’, that is to say our scientific apparatus; as Gödel’s theorems indicate, the system may exhibit physical laws which are true but not provable, that is, true, but not deducible from the basic ‘physics’ we employed. We cannot necessarily say that a given system will exhibit statements (laws) which are directly related to the new axiom or axioms, or even that it will exhibit physical manifestations of these statements but Gödel provides an avenue by which such properties may appear.

How would this system as a whole (including its true but not provable physical laws) look to us?

1. We would recognise different levels of description, each with its own laws;

2. We would not be able to understand how some derived physical laws originate from the initial 'physics' (because they are not provable), still less to predict their existence. These physical rules would look novel to us;

3. Since they are physical laws, these statements would carry apparent causal power.

Basically, these physical laws would look ‘emergent’ to us, since they satisfy the characterizations commonly used in defining emergence. They would appear to transcend reduction because we are unable to comprehend their formal link to the basic axiomatic physical laws. However, this (like their causal power) is merely apparent. Their properties are inherent in the basic ‘physics’ we started from, but in ways which are not deducible/computable in our formal system.

The traditional way to address emergent processes is to study and describe the different levels separately and, most of the time, independently, by looking for the laws which best describe the dynamics of each level in isolation. In this way, quantum mechanics describes sub-particle physics, chemistry describes molecular processes, Newtonian mechanics describes macroscopic physics, and so on through biology, ecology, sociology and geology, up to relativity theory and cosmology. We 'know'^{4} that these systems are nested in a Russian-doll fashion, and we can describe each doll separately, but not their nesting. Along these lines, Shalizi (2001) and Rabinowitz (2005) propose information-theoretic definitions of emergent levels of representation. These are, in our opinion and to our knowledge, the most developed approaches to this problem. Shalizi and Shalizi (2004) in particular give a numerical recipe for finding the most efficient level at which to study an emergent system, based on measures of system predictability and complexity. The most important limitation of these approaches is that they cannot discriminate between causality^{5} and correlation. This would make little difference if we merely wanted to observe and describe a phenomenon, say in the fashion of natural historians of the 19^{th} century. If we wish to manipulate, or even engineer for, emergence, then we need to understand causal relations better in order to exert control over the process. The obvious question is whether we can describe how the emergent levels arise.
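The idea that one level of description can be more efficient to study than another can be made concrete with a toy sketch. The construction below is entirely our own (it is not the recipe of Shalizi and Shalizi, which is based on causal-state reconstruction): it estimates the conditional entropy of the next symbol, a crude proxy for unpredictability, for a fine-grained noisy process and for a coarse-grained description of the same process.

```python
import random
from collections import Counter
from math import log2

def cond_entropy(seq, k=2):
    """Estimate H(next symbol | previous k symbols) from counts:
    lower values mean the sequence is more predictable at this level."""
    ctx, joint = Counter(), Counter()
    for i in range(len(seq) - k):
        c = tuple(seq[i:i + k])
        ctx[c] += 1
        joint[c + (seq[i + k],)] += 1
    n = sum(joint.values())
    return -sum(p / n * log2(p / ctx[c[:-1]]) for c, p in joint.items())

random.seed(0)
# Micro level: a regular pattern of period 4 (0,0,1,1,...) hidden under
# bit-flip noise of probability 0.2.
micro = [(1 if (t // 2) % 2 else 0) ^ (random.random() < 0.2)
         for t in range(20000)]
# Macro level: coarse-grain each non-overlapping pair to 1 if either bit is 1.
macro = [int(micro[i] + micro[i + 1] >= 1) for i in range(0, len(micro), 2)]

# The coarse description is the more predictable (lower conditional entropy).
print(cond_entropy(micro), cond_entropy(macro))
```

The noise largely averages out under coarse-graining, so the macro level carries more structure per symbol; this is the flavour, though not the substance, of choosing an efficient level of representation.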

Since Galileo claimed that the "language of Nature's book is mathematics"^{6}, it has been assumed that natural processes (physical laws) are computable^{7}. More recently, an increasing body of literature has begun to question this assumption (Kauffman, 2000; Penrose, 1994; Calude)^{8}. As Penrose points out, there is a fundamental difference between these kinds of incomputability and the kind derived from Gödel's theorem^{9} (see also Moore, 1990). In the formal system scenario described above there are no dynamics (not even a concept of time!), no missing information, no undetermined initial conditions, no inaccuracy in the description of the transformation rules. Does this sort of incomputability exist in Nature? Penrose and Calude, among others, suggest that it may^{10}. Unfortunately, this question is often disregarded as irrelevant in applied science (Cooper & Oddifreddi, 2003); we follow Aronson (2005) in believing that it deserves more attention, since the potential for scientific breakthroughs could be enormous. In the following, we discuss some potential consequences of the conjecture we proposed above, namely that there may be emergence which arises from incomputability inherent in the system we are modelling.

It is interesting to discuss some consequences which would arise if our conjecture is correct:

Some physical processes would be inherently incomputable, and thus inaccessible to reductionist explanation^{11};

Some emergent phenomena could neither be studied nor modelled via classical computer simulations;

Some emergent phenomena could not be detected via standard physical measurements unless the process of measurement exhibits super-Turing properties in its own right.

The first two statements are straightforward. The third requires some clarification. The scientific method requires that experiments be reproducible. This implies that an experiment needs to follow a quite detailed and rigorous procedure in order to be replicated by different observers under inevitably slightly different experimental settings. Basically, an experiment is reduced to an algorithm (Stannett, 2003); consequently, scientific experimentation suffers the very same limitations as formal logic and computer systems, and is thus, by itself, unable to detect truly emergent processes unless it has access to super-Turing input. It seems that the very strength of the scientific method, that is, its unique ability to produce objective, reproducible and rigorous statements by following precise measurement and logical procedures, backfires on its very purpose by denying access to some members of the class of processes which we instinctively define as emergent. An important question which arises in this regard is: under what conditions is our own involvement in an experiment sufficient to raise its computational power beyond the Turing limit?

There are models of computation which are not necessarily equivalent to Turing machines. The basic notion of how “powerful” a machine (or model of computation) might be is based on the size of the set of languages which can be accepted by the machine. Thus some systems may be beyond the representational ability of a particular model of computation, but not beyond that of another. These alternatives may make models of many systems more accessible, but they still cannot resolve the fundamental uncertainty raised by Gödel’s Theorem: they still contain the basic number theory which gives rise to Gödel’s result.
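The point that models of computation differ in the languages they can accept admits a small concrete illustration (our own, not from the literature cited here): a finite-state machine has bounded memory and therefore mis-classifies some strings of the language a^n b^n, while a machine with a single unbounded counter, a strictly more powerful model, decides the language exactly.

```python
def dfa_accepts(s, states=4):
    """Simulate a finite automaton that tries to recognise a^n b^n by
    counting 'a's, but saturates at a fixed number of states.
    The saturation loses information, so it must err on long inputs."""
    count, seen_b = 0, False
    for ch in s:
        if ch == 'a':
            if seen_b:
                return False
            count = min(count + 1, states)   # bounded memory: counts merge here
        else:
            seen_b = True
            count -= 1
    return count == 0

def counter_accepts(s):
    """An unbounded counter (a strictly stronger model) decides a^n b^n."""
    count, seen_b = 0, False
    for ch in s:
        if ch == 'a':
            if seen_b:
                return False
            count += 1
        else:
            seen_b = True
            count -= 1
            if count < 0:
                return False
    return count == 0

# The bounded machine wrongly accepts a^10 b^4; the counter machine does not.
print(dfa_accepts('a' * 10 + 'b' * 4), counter_accepts('a' * 10 + 'b' * 4))
```

The same pattern repeats up the hierarchy: each strictly stronger model accepts languages the weaker one cannot, which is exactly the sense of "powerful" used above.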

Graça and Costa (2003) explore the nature of general purpose analogue computers (GPACs), the continuous analogues of the Turing machine. They propose a continuous-time GPAC which, while sacrificing some of the generality of Shannon's original machine in order to exclude undesirable configurations, maintains its significant properties. The notion of an analogue computer has a great deal of appeal, since so much of what we model is inherently continuous in nature. The basic conceptual components of a GPAC map quite readily onto the usual toolbox of an analytic modeller. MacLennan (2004) takes the approach to its logical extent and derives a mathematical representation of a model of continuous computation on a state-space which is continuous in all its ordinates (including time). His paper presents a mathematical treatment of a model of computation which is quite different from traditional Turing machines and substantially different from the GPAC of Graça and Costa.
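The flavour of GPAC-style computation can be conveyed with a minimal sketch (ours, not Graça and Costa's construction): two integrator units wired in a feedback loop generate the sine function as the solution of y1' = y2, y2' = -y1. Since we run it on a digital machine, continuous time is here approximated by a small step, which is precisely the compromise an actual analogue device would not need to make.

```python
from math import pi

def gpac_sine(t_end, dt=1e-5):
    """Two coupled 'integrator' units, y1' = y2 and y2' = -y1, wired in a
    loop: a discrete approximation of continuously computing sin(t)."""
    y1, y2, t = 0.0, 1.0, 0.0
    while t < t_end:
        # Explicit Euler step for both integrators simultaneously.
        y1, y2 = y1 + dt * y2, y2 - dt * y1
        t += dt
    return y1

print(gpac_sine(pi / 2))  # close to sin(pi/2) = 1
```

Functions built this way (solutions of polynomial ODE systems) are exactly what Shannon's GPAC generates; the appeal for modellers is that the "program" is the differential equation itself.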

Fuzzy Turing machines (Wiedermann, 2000, 2004), for example, provide super-Turing computational power: machines can be constructed which accept a larger class of languages than a traditional Turing machine is capable of accepting. These less traditional approaches to computation may make accessible some of those emergent systems which are inaccessible to ordinary algorithmic computation. However, we are still left considering the possibility that there are complex systems which arise from Gödelian truths, and can only be studied by stepping outside the system.
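A minimal sketch of the fuzzy style of computation (ours, using a fuzzy finite automaton rather than a full fuzzy Turing machine) illustrates the standard max-min semantics: the degree to which a word is accepted is the best, over all computation paths, of the weakest transition degree along the path.

```python
def fuzzy_accept(word, transitions, start, finals):
    """Degree in [0, 1] to which a fuzzy automaton accepts `word`:
    maximise over paths the minimum transition degree (max-min semantics)."""
    # transitions: dict (state, symbol) -> list of (next_state, degree)
    frontier = {start: 1.0}
    for ch in word:
        nxt = {}
        for state, deg in frontier.items():
            for s2, d in transitions.get((state, ch), []):
                nxt[s2] = max(nxt.get(s2, 0.0), min(deg, d))
        frontier = nxt
    return max((d for s, d in frontier.items() if s in finals), default=0.0)

# A toy machine (our own example): reading 'a' may stay in q0 (degree 1.0)
# or move to q1 (degree 0.6); 'b' loops in q1 with degree 0.9.
trans = {
    ('q0', 'a'): [('q0', 1.0), ('q1', 0.6)],
    ('q1', 'b'): [('q1', 0.9)],
}
print(fuzzy_accept('aab', trans, 'q0', {'q1'}))  # accepted to degree 0.6
```

Graded acceptance is what lets such machines recognise language classes beyond those of crisp automata of the same type; the sketch only conveys the semantics, not the super-Turing result itself.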

Turing never claimed that his definition of computation encompasses all systems in which computation may occur. He himself imagined an abstract machine which, under restricted conditions, can access superior computational power (in the form of an 'Oracle') when faced with specific parts of a computation it cannot perform. Indeed, following Penrose's argument (Penrose, 1994), the very fact that Nature displays super-computational power (as he maintains), while it highlights the limits of formal logic and classical computability, also shows in principle that processes which surpass those limits may be available, though it should be noted that these arguments may have more appeal by analogy than by robustness (Feferman, 1996). The obvious questions are what these processes might look like and whether we can employ them productively^{12}.
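Turing's O-machine can be caricatured as an ordinary step-by-step computation that may pause to consult an external oracle; what matters is that the oracle's answers need not be producible by any algorithm. In the illustrative sketch below (the instruction format and the toy oracle are our own; a genuine oracle would answer an undecidable question, such as membership in the halting set), the oracle is simply an opaque callable.

```python
def o_machine(program, oracle):
    """A sketch of an oracle machine: a deterministic computation over a
    list of instructions which may 'ask' an external oracle and record
    its verdict alongside ordinary 'write' steps."""
    tape = []
    for kind, arg in program:
        if kind == 'write':
            tape.append(arg)          # ordinary, algorithmic step
        elif kind == 'ask':
            tape.append(oracle(arg))  # consult the (possibly non-algorithmic) oracle
    return tape

# Toy oracle for illustration only: membership in a small fixed set.
toy_oracle = lambda q: q in {3, 5}

print(o_machine([('write', 'x'), ('ask', 3), ('ask', 4)], toy_oracle))
# → ['x', True, False]
```

The machine itself remains entirely classical; all the extra power, if any, resides in the oracle, which is exactly Turing's point about where super-computation would have to come from.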

In Turing's (1936) seminal work, the computer he discussed was an abstract concept, not an actual physical machine. Similarly, several authors have contemplated ideal abstract machines (hyper-machines) which could in principle break classical computational barriers (Ord, 2002; Aronson, 2005). As of today, none of these machines has been built, nor does it seem likely that any will be built anytime soon. More down-to-earth approaches, however, look more promising. In a series of papers, Verbaan, van Leeuwen and Wiedermann study models of interactive computation in which machines exchange information with an unpredictable environment over unbounded time, and argue that such machines are formally more powerful than classical Turing machines.

Today, human-computer interaction is standard in a large number of applications. It is usually seen as enhancing human capabilities by providing the fast computational resources of electronic machines. Should we also see the interaction in the opposite direction, as humans enhancing the computational capabilities of electronic machines? van Leeuwen and Wiedermann (2000) speculate that personal computers, connected via the web to thousands of machines worldwide, receiving inputs from various sensors and on-line instructions from their users, are already beyond classical computers. Sensors now routinely monitor many aspects of the environment and are routinely installed on animals in the wilderness (Simonite, 2005). Can we envisage a network computing system in which agents (computers) interact with the environment via analogue sensors, receive data from living beings, and take instructions from humans to deal with unexpected situations?

The purpose of this discussion is not to claim that incomputable processes must occur in Nature, but to point out how naturally incomputability arises^{13}. Designing a set of axioms of sufficient complexity and transformation rules carries incomputability as a natural consequence; that is, it is a robust property of such systems^{14}. This has led to the suggestion that ultimate causal power does not belong to causal laws, but to the organisation of matter (Laughlin, 2005) and processes (Bickhard, 2000). This robustness seems at odds with the 'other' kinds of incomputability which may be responsible for chaotic and unstable processes: namely, incomplete descriptions of the system, and sensitivity to initial conditions.

So, what does all of this mean for the study of emergence and complex systems in general? We happily pursue models of all sorts of systems, relatively comfortable with the knowledge that we are approximating a system. As long as we can control the size of the error in our approximations, we remain relatively content. This is the practical side of the analysis of complex systems. As participants in a very large complex system, we hope to be able to predict, or at least understand, our interactions with other component systems and as much of the aggregate system as we can; there is a very real survival value in being able to foresee the state of the system. However, just as the abstract consideration of non-Euclidean geometry opened the door to a number of different approaches in physics and improved our models of the way the universe may work, so the abstract, impractical side of complex systems science needs to address some basic problems to smooth the path of the practical models.

It seems likely that there are systems which we are unable to model well for one of three reasons: the properties they evince are mathematically inaccessible; the system is inaccessible to our model of computation; or we are unable to apprehend the true state of the system even though its dynamics are understood. Systems in the first category will remain difficult to model until our mathematics is capable of dealing with them: here the ball is firmly in the metamathematician's court. The second category is a limitation of our model of computation, and there are alternative models which may be helpful; practically, it suggests that we should pursue alternatives to traditional digital computers and the traditional model of computation as adjuncts in our attempts to model and understand systems. The third category is almost certainly the largest. Strictly speaking, a system in this category is no more 'incomputable' or 'inaccessible' to us than the needle in a haystack is impossible to find; practically, we are still looking for a strong enough magnet.

We conclude with a note about randomness, which further justifies the need for deeper enquiry into these problems. Chaitin (1993) shows that incomputability also carries complete randomness. As before, this sort of randomness is not linked to incomplete information and, like formal incomputability, seems to have a more fundamental nature. Since we currently read Nature via a computable language (and experiment via computable means), we are left to wonder how much of what we assume to be intrinsic randomness actually arises from the limitations of the language we use. Could extending these metamathematical enquiries to the physical world change the way we perceive many natural processes?
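Chaitin's link between incomputability and randomness is often illustrated with compression: Kolmogorov complexity is itself incomputable, but compressed length gives a crude, computable upper bound on it. The sketch below (ours, purely illustrative) contrasts a highly regular string, which compresses far below its length, with a pseudo-random one, which barely compresses at all.

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length after zlib compression: a crude, computable stand-in for
    Kolmogorov complexity (which is itself incomputable)."""
    return len(zlib.compress(data, 9))

random.seed(1)
structured = bytes(i % 7 for i in range(4096))               # highly regular
noisy = bytes(random.randrange(256) for _ in range(4096))    # 'random'-looking

# The regular string compresses to a tiny description; the noisy one does not.
print(compressed_size(structured), compressed_size(noisy), len(noisy))
```

Of course `noisy` is only pseudo-random (a short program, the seeded generator, produces it), which itself illustrates the point of this section: computable observations cannot certify that apparent randomness is intrinsic.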

In Gödel’s original work, basic number theory (arithmetic) was used and the results can be extended to more complex axioms and rule sets.

For any arbitrary string,

Clearly, the ‘truth’ values of a mathematical axiom and of experimentally defined physical laws are very different. Here we take the pragmatic view that this choice is the best available in our scientific enquiry and that it is indeed the way (physical) science is carried out. See also footnote 12, below.

Crutchfield (1994a) gives a beautiful description of how agents discover structures and laws in their environment at different levels of complexity and different levels of representation.

In the information-theoretic language used by Shalizi (2001), the word 'causal' is used frequently, but in the automata-theoretic sense of Shalizi and Shalizi (2004). Here we use it as Pattee (1997a) does: in the sense that would allow an observer to intervene in the causal process and consequently exert control over its future behavior.

“Philosophy is written in this grand book, the universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and read the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures without which it is humanly impossible to understand a single word of it; without these one is wandering in a dark labyrinth” (Galileo, 1623).

It is often remarked that all known physical laws are computable. This statement carries an underlying tautology, since our current understanding and use of physics relies on and implies computability.

Kauffman (2000) refers explicitly to the impossibility of defining 'a priori' the state space of the biosphere and consequently our inability to compute its evolution. This is closely related to the fundamental incompressibility of the initial conditions of chaotic processes (p. 117), which results in apparent randomness when finite precision is imposed upon them (see Crutchfield & Feldman, 2003, for a discussion of the effect on observations induced by sub-optimal modelling).

A very simple approximation of Penrose’s argument might be “a chaotic system can be coded on a computer, so it must be computable”. Despite the fact that the result of the computation will inevitably be imprecise, the statistics of the result will still represent a ‘typical’ possible outcome.

Interestingly, this is closely related to the similarly open debate on why Mathematics is so efficient at describing Nature and the philosophical dilemma of whether it is a ‘natural’ language we discover or an ‘artificial’ language we develop.

Notice the difference between this claim and the two sides of the standard debate on reduction: a) reduction can explain all workings of Nature and one day we will confirm this; and b) reduction cannot explain all workings of Nature and another concept is needed.

It is interesting to notice that Gödel believed in a strong analogy between mathematics and natural science: mathematics should be studied much as scientists study Nature, and the choice of fundamental mathematical axioms should be based not only on their intuitive appeal but also on the benefit they provide to the development of the theory (Chaitin, 2000: 89-94). Somewhat similarly, Chaitin (1997) supports 'Experimental Mathematics' (pp. 22-26, 29, 30), according to which mathematicians should approach mathematics the same way physicists approach physics: via experimentation and statistical inference.

As incomputable real numbers seem to arise naturally from computable ones via Cantor diagonalization arguments (see Chaitin, 1997: 9-11).

Small atomic imperfections do not change the rigidity of a metal bar's macroscopic state; the actions of a single New Yorker only very rarely noticeably affect New York's everyday life; our cells are completely replaced every few days without changing our personality, appearance or metabolism.

We would like to thank our reviewers whose comments contributed greatly to this work and directed us to many useful sources, particularly Torkel Franzen’s book which we hope has steered us away from the worst of the logical mires, or as Franzen puts it (with respect to eating hotdogs anyway) “... [from] making a disgusting spectacle of ourselves” (Franzen 2005). This research was carried out as a part of the CSIRO Emergence Interaction Task, http://www.per.marine.csiro.au/staff/Fabio.Boschetti/CSS emergence.htm