Hayek’s The Theory of Complex Phenomena: A Precocious Play on the Epistemology of Complexity

Friedrich Hayek’s The Theory of Complex Phenomena (Hayek, 1964) is a precocious and far-sighted attempt to illuminate a topic that has loomed increasingly important in the 53 years since the publication of the paper: the epistemology of complexity. It posits a tension between real insight into complex phenomena and a narrow, but common, interpretation of Popperian falsificationism, and suggests that we make a ‘hard choice’, one that subsequent research and philosophical forays into complexity science can usefully be understood as having implicitly made on numerous occasions.

Popper suggested the actual falsifiability of hypotheses derived from a theory as a criterion of demarcation between scientific and ‘metaphysical’ or ‘pseudo-scientific’ theories, and as a criterion of choice among alternative empirical theories (Popper, 1959). Thus, the ‘theory’ which states that ‘the orbit of planet Z has the form of a circle, i.e., x² + y² = R²’ is to be preferred (before testing) to a theory which states that ‘the orbit of planet Z has the form of an ellipse, i.e., a²x² + b²y² = c²’, because it takes fewer points, or measurements (namely, 3), to disconfirm the former theory than it takes (4) to disconfirm the latter. Popper capitalizes on the example to make the point that the (normative) emphasis placed by some philosophers of science on simplicity as a criterion for theory choice follows from (and is therefore not independent of) a commitment to actual falsifiability.
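The logic of the example can be made concrete with a toy computation (mine, not Popper’s; the measurements below are hypothetical). Counting parameters in the origin-centred equations as quoted gives smaller numbers than Popper’s general-position curves, but the principle is the same: a family with p free parameters can be fitted exactly to p generic measurements, so only measurement p + 1 can refute it, and the simpler theory is therefore exposed to refutation sooner.

```python
# Toy illustration of Popper's point (mine, not his own computation).
# The circle x^2 + y^2 = R^2 has one free parameter; the ellipse, rewritten
# as A*x^2 + B*y^2 = 1 with A = a^2/c^2 and B = b^2/c^2, has two.

def fit_circle(p1):
    """One measurement fixes the circle's single parameter R^2."""
    x, y = p1
    return x * x + y * y

def fit_ellipse(p1, p2):
    """Two measurements fix A and B in A*x^2 + B*y^2 = 1 (Cramer's rule)."""
    (x1, y1), (x2, y2) = p1, p2
    det = x1 * x1 * y2 * y2 - x2 * x2 * y1 * y1
    return (y2 * y2 - y1 * y1) / det, (x1 * x1 - x2 * x2) / det

def circle_refuted(r_sq, p, tol=1e-9):
    x, y = p
    return abs(x * x + y * y - r_sq) > tol

def ellipse_refuted(ab, p, tol=1e-9):
    (a_, b_), (x, y) = ab, p
    return abs(a_ * x * x + b_ * y * y - 1.0) > tol

# Hypothetical measurements, actually drawn from the ellipse x^2 + y^2/4 = 1:
obs = [(0.0, 2.0), (1.0, 0.0), (0.6, 1.6)]

r_sq = fit_circle(obs[0])                   # circle fits measurement 1...
circle_dead = circle_refuted(r_sq, obs[1])  # ...and measurement 2 refutes it

ab = fit_ellipse(obs[0], obs[1])            # ellipse fits measurements 1-2...
ellipse_dead = ellipse_refuted(ab, obs[2])  # ...measurement 3 is its 1st test
```

On these data the circle is refuted by the second measurement, while the ellipse only meets its first genuine test at the third: the simpler hypothesis risks falsification earlier.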

This is precisely the point that Hayek focuses on in reference to ‘complex phenomena’. He sees that the astute researcher of complexity faces a hard choice between the ‘point-wise’ testability that Popper takes for granted, on the one hand, and insight or understanding on the other. He advocates that the constraints which testability places on theory choice be loosened in favor of allowing for more complicated nomological relationships among independent variables, which, in turn, will orient the empiricist away from attempting to predict point events and towards predicting ‘patterns’ or dynamical regimes. This is a precocious and far-sighted insight, and here is a case in point. Apparently unbeknownst to Hayek (plausibly so: Hayek finished the paper in 1961, two years before the Lorenz paper that kicked off ‘chaos theory’ was published), chaos theory got off the ground at approximately the same time he was writing. It started from Edward Lorenz’s startling realization that some dynamical systems exhibit highly sensitive dependence of their long-run dynamics on their initial conditions; in Lorenz’s case, the system was the coupling between the set of nonlinear ordinary differential equations he was trying to solve numerically and the finite-precision arithmetic operations that his computer instantiated. In such systems, two points in the phase space that start out arbitrarily close together will, in the course of the system’s evolution and after only a finite amount of time, end up very far apart. ‘Chaos theory’, then, touches reality not by making predictions about point events but by specifying dynamical systems and the regimes or regions of their parameter spaces that exhibit ‘transition to chaos’ (Ott, 2002); that is, it makes predictions about patterns of behavior rather than about highly localized space-time hyper-volumes (‘points’) of behavior.
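The phenomenon Lorenz stumbled on can be reproduced in a few lines. The following sketch (mine, not from Hayek’s or Lorenz’s text) integrates Lorenz’s equations, with his classic parameter values, from two initial conditions differing by one part in a hundred million; the trajectories are practically identical at first and end up macroscopically far apart.

```python
# Numerical sketch of sensitive dependence on initial conditions in the
# Lorenz system, with the classic parameters sigma = 10, rho = 28, beta = 8/3.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz ordinary differential equations."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    def shift(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(shift(state, k1, dt / 2))
    k3 = lorenz(shift(state, k2, dt / 2))
    k4 = lorenz(shift(state, k3, dt))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def gap(u, v):
    """Euclidean distance between two states."""
    return sum((ui - vi) ** 2 for ui, vi in zip(u, v)) ** 0.5

dt = 0.01
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)    # differs only in the 8th decimal of x
early_gap, max_gap = 0.0, 0.0
for step in range(4000):       # integrate to t = 40
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    max_gap = max(max_gap, gap(a, b))
    if step == 99:             # at t = 1 the two runs still agree closely
        early_gap = gap(a, b)
print(early_gap, max_gap)      # tiny separation early; macroscopic later
```

No point-wise forecast survives this amplification of measurement error, yet the pattern itself, exponential divergence followed by saturation at the attractor’s scale, is a robust and reproducible prediction.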

But is the development of an empirical chaos theory really a validation of Hayek’s claim that strict falsificationism must be relaxed in order to make progress on a science of complexity? Note, in this regard, that the formal languages used for representing dynamical systems (such as those exhibiting chaotic behavior) come equipped with highly efficacious state space contraction devices and maneuvers, which collapse half-planes into lines and lines into points. It is then possible, by the application of such devices, to render predictions about macroscopic patterns and dynamical regimes of behavior into predictions about the rate of transition to chaos and the boundaries between ordered and disordered behavior, which make possible precisely the kind of subsequent ‘point-wise testing’ that Popper is often interpreted as having had in mind. Doing so, however, presupposes a flexibility on the part of the researcher at a level for which most social scientists in general, and economists trained in the neoclassical tradition in particular, have to date shown little regard: the ontological one, a flexibility about the objects in terms of which discourse proceeds. This is something that Hayek glimpses and points the way to in his paper, without fully calling it out: ‘statistics’, he argues, cannot be used to teach us much about a population of computers unless we also have access to the code that runs on them (Hayek, 1964). Knowledge of the code used to design the computers will not only give us a radically different number and set of hypotheses that ‘statistics’ can be used to test, but also, perhaps more importantly, a different conceptualization of the ‘computer’: in ‘intentional’ terms (‘algorithms’) rather than in causal terms (‘electrons and holes’).
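The contraction from pattern-level claims to point-wise testable ones can be illustrated with a toy example (mine, not Hayek’s). For the chaotic logistic map x → 4x(1 − x), individual iterates are effectively unpredictable, yet the long-run distribution of iterates is known exactly: the fraction of time spent below a level t is (2/π)·arcsin(√t). For t = 0.25 that fraction is exactly 1/3, a sharp number a Popperian tester can check against a simulated or measured orbit.

```python
# A 'pattern' prediction contracted into a point-wise testable number:
# the invariant distribution of the chaotic logistic map x -> 4x(1 - x)
# has cumulative distribution F(t) = (2/pi) * arcsin(sqrt(t)).
import math

def fraction_below(threshold, x0=0.2, burn_in=1_000, n=100_000):
    """Empirical fraction of post-transient iterates below `threshold`."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = 4.0 * x * (1.0 - x)
    hits = 0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        hits += x < threshold
    return hits / n

predicted = (2.0 / math.pi) * math.asin(math.sqrt(0.25))   # exactly 1/3
observed = fraction_below(0.25)
print(predicted, observed)   # the two agree to roughly two decimal places
```

The pattern-level theory (the invariant distribution) is what carries the insight; the state space contraction to a single number is what renders it point-wise falsifiable.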
And this ontological flexibility may be a far more penetrating insight of Hayek’s paper than the exhortation to loosen the epistemological constraints that we place on ‘complexity science’, and one which complexity researchers would do well to heed. Should they choose to do so, the black box of human decision making (mind → brain → behavior), neatly bracketed in economic analyses by rational choice models and linear demand functions, could be made to yield much illuminating insight under the gaze of new conceptual toolkits.


Originally published in Hayek, F. A. (1967). Studies in Philosophy, Politics and Economics, London, UK: Routledge & Kegan Paul, pp. 22-42. Reproduced by kind permission.