E:CO has a New Subtitle
This issue signifies the start of volume 11, a milestone showing not just how far E:CO has come in a relatively short time but also giving striking evidence that it is still building steam. Just look at the rich contents of this issue: articles hailing from diverse countries around the globe, covering a wide variety of fields, and all done in a rigorous, thought-provoking fashion. This issue also inaugurates a new subtitle for E:CO: “An International Transdisciplinary Journal of Complex Social Systems.” That’s quite a mouthful, but it more accurately reflects what E:CO is about and what it is evolving into.
Three phrases in our new subtitle stand out. First, there is the emphasis on “International.” Of course, internationality was essential to the journal’s mission right from the start, and even before that with the precursor journal Emergence. But we’re now including it in the title to reaffirm our commitment to a truly international focus in terms of our authors, our readers, and our article topics. Indeed, this very issue is like a mini United Nations, with authors hailing from Australia, Italy, Egypt and Israel (our own little peace initiative, although it wasn’t planned that way!), South Africa, China, and the US. Then there’s the term “Transdisciplinary.” Again, that was an indispensable aspect of our initial mission, but here it is spelled out overtly. This current issue is a vivid demonstration of transdisciplinarity, as can be seen by considering the fields of research of the authors: cognitive science, urban planning, environmental studies, architecture, knowledge management, social networks, organizational theory, operations research, information technology and computer science, and sociology. Moreover, the methods and constructs utilized herein similarly cut across a wide range of scientific, mathematical, cultural, and philosophical disciplines.
The third new addition to the subtitle is the intentional mention of “Social Systems.” We are in fact the only existing journal specializing in complexity applications to social systems. But of course we also supplement this specialization with numerous forays into philosophy, mathematics, education, physics, biology, semiotics, and many other fields related in one way or another to social systems.
In spite of the long and still unwinding road that E:CO and complexity studies in general have been on, it comes as a surprise to me how vexing it can be to pin down, in a concise definition, just what it is that the field of complexity theory or complexity science covers. I’m sure many readers have experienced a version of the following situation. I was recently at a party where I met some friends I hadn’t seen in 30 or more years. To catch up on all those years, we asked the customary questions about what each of us had been up to personally and professionally. When my turn came, I became tongue-tied, as I always do in this situation, about how to describe just what it is that complexity theory is about. This elusiveness of a crisp definition of complexity always reminds me of the famous retort by a United States Supreme Court Justice when asked how he defined obscenity: he said he couldn’t define it precisely in a few words, but he knew it when he saw it! That’s how I see the study of complex systems: I can’t precisely define it, but I’m pretty sure I can recognize it when I see it.
Of course, this kind of response is not particularly enlightening, and it brings with it all the vagueness accompanying ostensive, contextual, and indexical references, a topic I’ll return to below. Moreover, although one tactic could have been to start listing the varied types of systems considered to be complex, I realized that this would not be of much help. Not only are there too many of them (with the result that I would be hard pressed to say what they have in common besides such unilluminating, vague properties as interconnectedness), it would also be questionable whether many of the systems I mentioned were in fact complex, or at least complex all the time; some systems under certain conditions, such as at equilibrium, appear quite simple, but may not be when parameters change.
So, getting back to the situation of trying to explicate for my old friends what complexity theory was about, this time around I came up with a new strategy (for me at least): asking my well-meaning friends whether they had ever heard of “chaos theory,” and most of them indeed had. Chaos theory, of course, has entered the popular imagination in many forms, from Jeff Goldblum’s famous allegory of it in the movie “Jurassic Park,” when he dropped water on the hand of the character played by Laura Dern and told her to watch its unpredictable flow, to the more recent film “The Butterfly Effect” with Ashton Kutcher, to last year’s surprisingly good romantic comedy “Chaos Theory.” As a matter of fact, each of these films includes plot elements having to do with the sensitive dependence on initial conditions that characterizes chaotic systems.
So, when my friends nodded that they had indeed heard of chaos theory as well as the butterfly effect, I added that chaos theory was like a branch of a wider and taller tree called “complexity science.” But immediately after offering that analogy, I thought to myself, uh oh, that doesn’t really help at all, for at least two strong reasons. First, it leaves the harder question of what this wider and taller tree is comprised of yet to be explained; second, and even more troubling, it dawned on me that this analogy was simply wrong and misleading in a variety of ways.
First of all, there is a decided sense in which chaos theory, at least in the form of its progenitor field of nonlinear dynamical systems theory (NDS), predates much of modern complexity theory, going back over a century to Poincaré and to the great Russian mathematicians Chebyshev, Lyapunov, and others. Branches can hardly be said to predate their trees! Second, a great deal of the mathematical apparatus used by complexity theory, indeed taken for granted by complexity aficionados, comes right out of NDS: phase space, attractors, bifurcations, qualitative dynamics, and so on. Wolfram first classified the dynamics of cellular automata using NDS-type categories. Furthermore, the continual updating of cellular automata, which underlies the apparent motion of artificial life, recapitulates the iteration of discrete difference equations as found in the logistic map so emblematic of NDS.
Also, there are very important differences between chaotic and complex systems. For example, it is not clear, to me at least, that complex systems generally possess the sensitive dependence on initial conditions that chaotic systems do. In addition, chaotic systems can arise from very simple functions, whereas complex systems seem to require relatively large networks of interacting agents whose interactions are typically governed by mathematics much more complicated than that of logistic-type maps. On the other hand, Shaw (1981) made a convincing case that chaotic systems can be understood as information generators, which seems to me cognate to what Peter Allen describes as happening in complex systems, where micro-level diversity generates the seeds of emergent order with new properties.
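For readers who want to see sensitive dependence concretely, here is a minimal sketch using the logistic map mentioned above (the parameter and initial values are illustrative): two trajectories starting a hair’s breadth apart in the chaotic regime quickly become uncorrelated.

```python
def logistic_trajectory(r, x0, n):
    """Iterate the logistic map x -> r*x*(1-x) n times, returning the whole trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

r = 4.0  # a fully chaotic parameter value
a = logistic_trajectory(r, 0.2, 50)
b = logistic_trajectory(r, 0.2 + 1e-10, 50)  # initial difference of one ten-billionth

# The gap between the trajectories typically grows from 1e-10 to order 1
# within a few dozen iterations.
separations = [abs(x - y) for x, y in zip(a, b)]
print(separations[0], separations[25], separations[-1])
```

Nothing here depends on the map being complicated; it is precisely the simplicity of the function that makes the point.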
Furthermore, complexity science utilizes many more mathematical tools than NDS can supply: the graph theory of social networks, the group theory of the renormalization group used to explain phase-transition types of emergence of new order, computational complexity, Turing machines, algorithmic complexity theory, Bennett’s logical depth, Crutchfield’s computational mechanics, and so forth; a whole host of mathematical objects and constructs which include but also exceed chaos theory.
Complexity is Vague and that is a Good Thing
So relating complexity to chaos as tree trunk to branch simply won’t do. And, again, I was still left with the arduous task of trying to adequately describe the tree. That brings me back to the idea of vagueness. At first impression, it might seem problematic that an entire field of study appears to be perhaps hopelessly vague. “Vague,” though, doesn’t necessarily entail being nebulous, ethereal, or inherently muddled. Indeed, in philosophical circles there has recently been a great deal of interest in exploring the philosophical ramifications of vagueness. In this regard, the English philosopher M. S. Sainsbury (1997) has made a good case for considering much of our ordinary discourse as composed of words that are fundamentally vague. Thus, we find vagueness attached to the ordinary idea of “small,” demonstrated in what is called Wang’s paradox after the mathematical logician Hao Wang, a protégé of Kurt Gödel:
By mathematical induction:
0 is small;
If n is small, then n + 1 is small;
Therefore, every number is small.
As the British philosopher of mathematics Michael Dummett (1997: 101) explained about Wang’s paradox, “…since every natural number is larger than only finitely many natural numbers, and smaller than infinitely many, every natural number is small, i.e., smaller than most natural numbers.” This kind of vagueness is typically linked with the ancient conundrum of the heap known as the Sorites paradox (Sorensen, 2006):
1 grain of wheat does not make a heap.
If 1 grain of wheat does not make a heap then 2 grains of wheat do not.
If 2 grains of wheat do not make a heap then 3 grains do not.
… If 9,999 grains of wheat do not make a heap then 10,000 do not.
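The inductive step can even be run mechanically. A minimal sketch, assuming only the base case and the inductive step above: if adding one grain can never turn a non-heap into a heap, the predicate never flips.

```python
# Sketch: running the Sorites induction literally. Base case: 1 grain is
# not a heap. Inductive step: if n grains are not a heap, neither are n + 1.
not_a_heap = {1}
for n in range(1, 10_000):
    if n in not_a_heap:
        not_a_heap.add(n + 1)

# The paradoxical conclusion: even 10,000 grains are "not a heap".
print(10_000 in not_a_heap)  # prints True
```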
In the context of Sorites-type paradoxes, Sainsbury points to the word and concept “red”: there is no crisply defined set of referents that could be considered red without also including colors that blend off into yellow. For instance, we might find on a shelf in an art store a row of red oil paint tubes that go from a strong, bright, blood red all the way to something approaching orange-yellow. There is a continuum of possible colors between red and yellow depending on the scale of resolution, although from a distance we can discern, in a prism for example, a band separating red from yellow. Ralph Stacey has his well-known diagram distinguishing three zones of a coordinate plane (simple, random, and complex) in relation to the degree of certainty on the horizontal axis and of agreement on the vertical one. As helpful as this diagram can be, in my experience the complexity region turns out to be itself highly complex, containing fractally embedded mini-regions of more random and more simple behavior, like the pockets of order within the famous bifurcation diagram of the logistic map (aha! We’re back to chaos theory after all that!).
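Those pockets of order are easy to see numerically. A minimal sketch, with illustrative parameter values: inside the logistic map’s well-known period-3 window the long-run iterates settle onto just a few values, while at a nearby chaotic parameter they keep wandering.

```python
def attractor_sample(r, x0=0.5, transient=1000, sample=120):
    """Iterate the logistic map x -> r*x*(1-x), discard the transient,
    then return the set of (coarse-grained) values the orbit visits."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(sample):
        x = r * x * (1.0 - x)
        seen.add(round(x, 6))  # coarse-grain so a periodic cycle collapses to a few values
    return seen

ordered = attractor_sample(3.83)  # inside the period-3 window: a 3-cycle
chaotic = attractor_sample(3.90)  # nearby chaotic band: no repetition
print(len(ordered), len(chaotic))
```

The first set contains just the three cycle values; the second contains roughly as many distinct values as samples taken.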
As a prototypical example of vagueness, the philosopher Achille Varzi (2001) cites a wild passage, one of many populating Saul Bellow’s great novel Herzog:
Remember the story of the most-most? It’s the story of that club in New York where people are the most of every type. There is the hairiest bald man and the baldest hairy man; the shortest giant and the tallest dwarf; the smartest idiot and the stupidest wise man. They are all there, including honest thieves and crippled acrobats. On Saturday night they have a party, eat, drink, dance. Then they have a contest. And if you can tell the hairiest bald man from the baldest hairy man—we are told—you get a prize.
Varzi points out how vagueness runs through this passage: not only is there no sharp boundary exactly demarcating the category of bald men and no precise number of hairs separating the hairy from the bald, it also makes no sense to think one can identify the hairiest bald man. Even though some men are obviously bald and some are clearly hairy, between the two there is a host of borderline cases, including baldish guys, men with toupees, hirsute beatniks with a shiny bald spot on the tops of their heads, and so on. Accordingly, the concepts of bald and hairy are vague, with ranges of application having vague boundaries.
As another philosopher, Max Black (1997), put it, vagueness does not necessarily indicate a conceptual defect, just as an impressionist painting is not adequately understood as flawed precision in technique:
The impressionist painting of a London street in a fog is not a vague representation of what the artist sees, since his [sic] skill consists in the accuracy with which the visual impression is transcribed. But the picture is called vague in relation to a hypothetical laboratory record of the wave lengths and positions of the various objects in the street, while it is forgotten that that record, in supplying additional detail, obliterates just those large scale relations in which the artist or another observer may be interested. (71).
Related to Black’s contention, the computer scientist Mark Changizi (1999) has made a good argument that vagueness is intimately related to issues of undecidability and noncomputability in mathematical logic and computational complexity theory. Changizi interprets vagueness as resulting from bounds on computability, not from human fallibility; it would exist even for the computer HAL from 2001: A Space Odyssey. Vagueness is what we are ultimately left with in the case of all natural languages. According to Changizi, “given that you are computationally bound, avoiding vagueness brings in greater costs than accepting it” (345).
Hence, I believe we can presume that “complexity” and “complex system” are in fact vague notions, but those designations have a silver lining. To see the advantages of vagueness, it is helpful to distinguish it from ambiguity and from generality. Concerning ambiguity, the term “complexity” is definitely a candidate: Seth Lloyd at MIT identified, several years back, at least 30 different definitions of “complexity.” This could be taken as indicating there is something fishy about the whole field if it can mean so many things. But vagueness is not the same as ambiguity. Whereas ambiguity is about words possessing many equivocal meanings, vagueness has more to do with the referents to which a word applies. Thus, vagueness makes a concept more inclusive, allowing all sorts of borderline cases, in effect manifesting the fact that it deals with systems whose boundaries are themselves essentially vague, permeable, fractally bounded, and so forth. Moreover, as a vague notion in this philosophical sense, complexity continually resists formulation in terms of this or that discrete mathematical formalism, such as set theory or graph theory or whatever theory. This of course doesn’t imply that we abandon such formalisms. On the contrary, it suggests that we go the whole length in utilizing them, but with the expectation that at some point we will need to transcend each formalism and devise new constructs, terms, and methods to further our study of complex systems. Nor does vagueness imply that anything goes in describing and studying complex systems; vagueness is not some kind of license for unbridled post-modernist (or rather, misunderstood postmodernist) claptrap. Rather, it is a marker that context is a crucial factor in determining the scope of inquiry.
Vagueness is also not to be conflated with the idea of generality. Thus, although there are a multitude of types of complex systems, trying to get at what is complex about them by extracting some common element that could then be put into the form of a generalization would miss the boat by a country mile, since the highly important role that context plays in understanding complex systems would be sacrificed. Complex systems are related to other complex systems, but this relation is better thought of in terms of a Wittgensteinian family resemblance than some common essence that they share in general.
For a moment, let’s assume the opposite, i.e., that the complexity of a complex system, indeed the realm of complex systems in general, is not a vague notion but instead can be exactingly and precisely defined. This would entail that it would always be decidable whether a system was or was not complex, with no borderline cases allowed. But this would paradoxically turn complexity into a simple notion, for surely a crucial aspect of simplicity is the presence of crisp, uncontroversial boundaries. This, though, would in turn run up against Rice’s theorem in mathematical logic and computability theory, which holds that only trivial semantic properties of programs are decidable, i.e., have algorithms to decide them (Rice’s Theorem, Weisstein).
“Trivial” in this sense refers to properties that are always true or always false, i.e., tautologically demonstrable properties, such as bachelors possessing the property of being unmarried. More formally, Rice’s theorem holds that for any non-trivial property of partial functions (we’ll put aside, for the sake of brevity, the technical definition of such functions), there is no general and effective method to decide whether an algorithm computes a partial function with that property (Rice’s Theorem, Wikipedia). Turing’s famous noncomputability results turn out to be a special case of Rice’s theorem, which states the more general result that every nontrivial decision problem regarding the function computed by a given Turing machine has no algorithmic solution (Goldreich, 2008). Rice’s theorem is an important result for computer science because it sets boundaries for research in that area.
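The argument behind Rice’s theorem can be sketched as the standard reduction from the halting problem. The decider `decides` below is hypothetical (the theorem’s content is precisely that no such total algorithm exists); the sketch only shows how one, if it existed, would yield a halting-problem decider.

```python
# Sketch of the reduction behind Rice's theorem. Suppose `decides(p)` could
# tell, for any program p, whether the partial function p computes has some
# non-trivial property that `witness` has (and the empty function lacks).
# Then we could decide the halting problem, which Turing proved impossible.
# `decides` is hypothetical; no such total algorithm can exist.

def build_gadget(program, arg, witness):
    """Return a program whose computed function has the property
    iff `program` halts on `arg`."""
    def gadget(x):
        program(arg)       # diverges if `program` loops on `arg`...
        return witness(x)  # ...otherwise behaves exactly like `witness`
    return gadget

def halts(decides, program, arg, witness):
    """The impossible halting-problem decider that would follow."""
    return decides(build_gadget(program, arg, witness))
```

The gadget is the whole trick: its semantics, the only thing a semantic-property decider may consult, depends entirely on whether the embedded program halts.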
If complexity and complex were in actuality nonvague, crisp ideas, then according to Changizi’s linkage of vagueness to undecidability described above, most matters involving complex systems, involving only nonvague, crisp areas of delineation, would be decidable. But according to Rice’s theorem, this would suggest that most matters involving complex systems are trivial. Clearly they’re not, as our readers and authors, and anyone else researching complex systems, well know. Rather, complex systems are difficult to research, difficult to measure, difficult to construct viable theories of, indeed difficult to determine whether or not they are in fact complex! A case could be made, I believe, that the more difficult it is to define a specific field of study, the more interesting that field ultimately is, precisely because it eludes preconceived constructs, frameworks, and pat answers.
Ultimately, the best strategy I could have followed in trying to describe to my friends what complexity theory is about would have been simply to show them a copy of this issue of E:CO and go through it, weaving around the more arcane mathematical constructs and pointing to the kinds of systems and issues being investigated and the conclusions being drawn. I can think of no better way to describe what makes up complexity theory than these examples.
- Black, M. (1997). “Vagueness: An exercise in logical analysis,” in R. Keefe and P. Smith (eds.), Vagueness: A Reader, ISBN 9780262112253, pp. 69-81.
- Changizi, M. (1999). “Vagueness, rationality and undecidability: A theory of why there is vagueness,” Synthese, ISSN 0039-7857, 120: 345-374.
- Dummett, M. (1997). “Wang’s paradox,” in R. Keefe and P. Smith (eds.), Vagueness: A Reader, ISBN 9780262112253, pp. 99-118.
- Goldreich, O. (2008). Computational Complexity: A Conceptual Perspective, ISBN 9780521884730.
- Hyde, D. (2005). “Sorites paradox,” Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/sorites-paradox/.
- Rice’s Theorem (Wikipedia). http://en.wikipedia.org/wiki/Rice%27s_theorem.
- Sainsbury, M.S. (1997). “Concepts without boundaries,” in R. Keefe and P. Smith (eds.), Vagueness: A Reader, ISBN 9780262112253, pp. 251-264.
- Shaw, R. (1981). “Strange attractors, chaotic behavior, and information flow,” Zeitschrift für Naturforschung, ISSN 0932-0784, 36a: 80-112.
- Sorensen, R. (2006). “Vagueness,” Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/vagueness/.
- Varzi, A. C. (2001). “Vagueness, logic, and ontology,” The Dialogue: Yearbooks for Philosophical Hermeneutics, 1: 135-154.