Steven E. Phelan
University of Texas, USA
The trademark of modern culture is science; if you can fake this, you've got it made. — Mario Bunge
The need for a special issue of Emergence on the question “What is complexity science?” is disturbing on several levels. At one level, one could be forgiven for thinking that the voluminous literature generated in recent years on chaos and complexity theory must contain a clear exposition of the definition, mission, and scope of complexity science. That this exposition has not been forthcoming, or is the subject of controversy, is disconcerting. On another level, the inability to differentiate science clearly from pseudoscience in complexity studies is also problematic. Allowing pseudoscience to penetrate a field of study lowers the credibility of that field with mainstream scientists and hinders the flow of resources for future development.
It is my contention that much of the work in complexity theory has indeed been pseudo-science, that is, many writers in this field have used the symbols and methods of complexity science (either erroneously or deliberately) to give the illusion of science even though they lack supporting evidence and plausibility (Shermer, 1997). This proliferation of pseudo-science has, in turn, obscured the meaning and agenda of the science of complexity. The purpose of this article is twofold: to provide a working definition of complexity science; and to use this definition to differentiate complexity science from complexity pseudo-science. This is a play in three acts. In the first section, I will undertake an examination of science and the factors differentiating science from nonscience. In the second section, I examine the relationship between complexity and science, leading to a definition of complexity science. In the final section, I offer a test for distinguishing between science and pseudo-science in complexity studies and provide several examples of the latter. I also describe why it is important for scientists working in the area vigorously to reject pseudo-scientific theories.
Arguably, it is not possible to define complexity science without first providing a definition of science in general. According to Shermer (1997), science can be described as a set of methods designed to describe and interpret observed or inferred phenomena, past or present, aimed at building a testable body of knowledge open to rejection or confirmation.
One could be forgiven for thinking that a working definition of science would be easily obtainable, given the enormous volume of work in science and the vast number of people working on scientific endeavors. However, nothing could be further from the truth. The absence of a clear consensus on the definition of science is most clearly illustrated by the stance of the Oxford Dictionary of Science, which has omitted a definition of science and the scientific method altogether!
The immediate cause of this confusion has been the rise and popularity of postpositivist worldviews such as feminism, postmodernism, and anarchism, which, since the early 1960s, have sought to knock science from its pedestal as the supreme arbiter of truth, objectivity, and rationality. The 1990s saw a spirited, albeit belated, counterattack by scientists to defend the role of science in society, resulting in what some authors have referred to as the Science Wars (Gross & Levitt, 1994; Gross et al., 1996; Park, 2000; Shermer, 1997; Sokal & Bricmont, 1998).
The contemporary debate on the distinction between science and nonscience has tended to obscure an ongoing debate among philosophers of science about the nature of science (and a 2000-year-old debate among mainstream philosophers about the nature of reality, or metaphysics). From the perspective of philosophers of science, much of the debate in the Science Wars appears dated, unsophisticated, and naïve, particularly the responses of scientists to their postpositivist counterparts. The remainder of this section seeks to provide a critical overview of the main schools of thought in the philosophy of science. This is not an idle exercise. The result will be a set of well-developed theoretical platforms from which to assess the claims to scientific status of complexity theory.
I have divided the voluminous literature of the philosophy of science into three traditions: empiricism, historicism, and constructivism. Empiricism is the view that “all knowledge is based on or exhausted by what is known by sensory experience” (Boyd et al., 1991: 776). Historicism is the view that a definition of science must not exclude major episodes in the history of science (Matheson, 1996). Constructivism is the view that “the subject matter of scientific research is wholly or partially constructed by the background and theoretical assumptions of the scientific community and thus is not, as realists claim, largely independent of our thought and theoretical commitments” (Boyd et al., 1991: 775).
Most textbooks on the philosophy of science start with an exposition on the development of logical positivism in Vienna in the 1920s (Chalmers, 1976; Klee, 1999). According to the textbooks, the logical positivist movement was motivated in large part by the demarcation problem, that is, the desire to distinguish science from pseudo-science. At the time, movements such as Marxism and Freudian psychoanalysis were gaining in popularity partly through claims to scientific status.
The logical positivists sought criteria to differentiate “real” science from pretenders to the title. According to Suppe (1977), logical positivists divided theoretical statements into three classes: logical and mathematical terms (which are true by definition); theoretical terms; and observation terms. Observation terms referred to material things and observable properties arising from sense data. Any “scientific” theory must make explicit links between theoretical terms and observation terms (these links are known as correspondence rules). Theories that lacked a correspondence with reality were deemed metaphysical and nonsensical. Therefore, “all significant discourse about the world must be empirically verifiable” and “all assertions of a scientific theory are reducible to assertions about phenomena in the observation language” (Suppe, 1977: 21). It follows that meaningful expressions must be empirically testable by observation and experiment. Nontestable statements are literally nonsensical and meaningless.
Ironically, it is the concept of verifiability that proved to be a weak spot for logical positivism. A general law, such as “all gases expand when heated,” logically requires an infinite number of tests at all places and times for verification and thus can never be verified. This is known as the problem of induction (Chalmers, 1976) and led to a modification of logical positivism, known as logical empiricism, by Rudolf Carnap and his followers. Logical empiricists admit that it is not possible to verify general laws as conclusively true through empirical testing, but that a law can be gradually confirmed by the accumulation of multiple empirical tests under a wide variety of circumstances and conditions (Malhotra, 1994). For many years, logical empiricism was the received view of science.
The first main challenge to the received view was Popper's theory of falsificationism. According to Popper (1959, 1963), confirmation is too easy. For instance, astrologers have collected a great deal of empirical data that has tended to confirm their theories. In Popper's view, the only time a confirmation test is worthwhile is when the theory risks being disproved or falsified. Science progresses through bold conjectures, not through timid tests of existing theory that have little risk of disconfirmation.
It follows that theories that cannot be falsified are not scientific. Astrology is a pseudo-science because its adherents refuse to specify conditions under which it can be falsified, not because it cannot generate confirming instances of its theory. While the problem of induction makes it impossible to prove a theory, a single disconfirming instance can disprove a theory.
Popper's theory has been criticized as both too weak and too strong. It is too weak because a theory is never tested in isolation (Malhotra, 1994). The failure to confirm a theory may be the result of faulty equipment, experimental contamination, or any number of other factors. Researchers can thus introduce ad hoc assumptions or post hoc modifications to a theory to explain its failure, preserving the theory from falsification. Falsification is also too strong because, in practice, scientists do not automatically reject theories that are falsified (Putnam, 1991). There are several cases in the history of science where theories were initially falsified and then shown to be correct. According to Popper, these theories should have been rejected the first time they were falsified. Indeed, Lakatos (1977) goes so far as to say that all theories are born refuted and die refuted. Falsification cannot be the demarcation between science and pseudo-science.
Historicists maintain that any demarcation principle must not exclude major episodes from science that are widely held to be exemplars of scientific practice (Matheson, 1996). Laudan (1981; cited in Klee, 1999: 226-7) claims that the existing historical evidence shows, among other things, that theories are routinely retained in the face of anomalies and disconfirming evidence, and that the co-existence of rival theories is the rule rather than the exception.
The first, and most widely known, historicist was Thomas Kuhn (1962). Kuhn's Structure of Scientific Revolutions enunciated many of the points raised by Laudan above. Kuhn divided research into several stages, the most important of which were normal science and revolutionary science. In normal science, scientists in a particular discipline adopt a paradigm that specifies a range of complex instrumental, conceptual, and mathematical puzzles to be solved and delimits acceptable solutions. The aim of normal science is not major substantive novelties or refutations. Normal science seeks to solve the puzzles arising from its paradigm. Evidence that disconfirms a paradigm is held to be an anomaly rather than a disconfirmation requiring abandonment of the theory.
Over time, however, these anomalies increase to the point where the discipline passes into a state of crisis. The crisis occasions the questioning of the dominant paradigm and its alternatives and eventually a new paradigm emerges to replace the old. Kuhn likens the adoption of the new paradigm to a religious conversion or Gestalt switch. The old and new paradigms are held to be incommensurable, that is, the terms and explanations of the old paradigm cannot be rendered in the new paradigm, and vice versa.
Kuhn has argued that the presence of multiple paradigms is a sign of immaturity or pre-science. Imre Lakatos (1974, 1977) has objected to this characterization, arguing (like Laudan above) that the co-existence of rival theories is the rule rather than the exception. In place of paradigms, Lakatos proposed the existence of multiple research programs (RPs). Each research program consisted of three elements: a hard core of theories and assumptions; a “protective belt” of assumptions that protect the hard core from falsification; and a “positive heuristic” of problems to be solved by the RP. All programs grow in “a permanent ocean of anomalies” (Lakatos, 1977: 6).
The existence of anomalies makes falsification untenable as a doctrine. In place of falsifiability as a demarcation criterion, Lakatos has proposed distinguishing between “progressive” and “degenerative” RPs. A progressive research program makes a few dramatic, unexpected, stunning predictions. An RP that ceases to make novel predictions is degenerating. Scientists tend to move to progressive programs and away from degenerating programs, although Lakatos does not condemn those trying to turn a degenerating program into a progressive one. In the Lakatosian worldview, there are no sudden revolutions or religious conversions to a new paradigm, but rather a market for research programs where scientists are free to switch programs at any time according to their prospects of progress given past dramatic successes.
One possible implication of this view is that theories can be scientific at one period in time and unscientific at another, depending on their progressiveness (Thagard, 1978). However, Thagard also argues that, in addition to being unprogressive, pseudo-scientists make little attempt to solve problems with the theory or evaluate the theory in relation to other alternatives. Pseudo-science is thus an attitude to progress rather than a state.
Laudan (1977, 1981) substitutes the term “research tradition” for Lakatos's research program and Kuhn's paradigm. For Laudan, problem solving is at the heart of scientific inquiry. We should accept the theory that solves the most problems, and pursue the tradition that is currently solving problems at the greatest rate (Matheson, 1996). Science progresses by solving more problems. However, switching to the tradition that is currently solving problems at the greatest rate means that problems solved in an earlier tradition may become “unsolved” in the current one. Science is therefore not necessarily cumulative.
In his book Computational Philosophy of Science, Thagard (1988) attempts to devise a new set of demarcation criteria between science and pseudo-science that acknowledges the earlier problems with verifiability and falsification and attempts to incorporate historicist elements. Table 1 lists Thagard's attributes of science and pseudo-science respectively.
| Science | Pseudo-science |
|---|---|
| Uses correlation thinking (e.g., A regularly follows B in controlled experiments) | Uses resemblance thinking (e.g., Mars is red, red is the color of blood, therefore Mars rules war and anger) |
| Seeks empirical confirmations and disconfirmations | Neglects empirical matters |
| Practitioners care about evaluating theories in relation to alternative theories | Practitioners oblivious to alternative theories |
| Uses highly consilient (i.e., explains many facts) and simple theories | Nonsimple theories: many ad hoc hypotheses |
| Progresses over time: develops new theories that explain new facts | Stagnant in doctrine and applications |

Table 1 Distinctions between science and pseudo-science
Thagard (1988) maintains that it is impossible to draw a hard line between science and pseudo-science because some sciences have some of the characteristics of pseudo-science and vice versa. However, we can be more confident in labeling something scientific if it adheres to most of the principles in the left-hand column of Table 1. Conversely, a pseudo-science will share a majority of the characteristics in the right-hand column. According to Thagard, the decision to grant a field scientific status is not a definitive, one-time decision. Many fields have moved from scientific to pseudo-scientific status and some have gone from pseudo-science to science as they have adopted or dropped elements from Table 1. Demarcation is thus an exercise in fuzzy logic—there are no necessary or sufficient conditions.
While scientists can obviously be deeply committed to their research traditions, historicism still leaves open the possibility that science can make genuine progress—by solving more problems, explaining more facts, and predicting more novel events—even though discredited theories may persist longer than commonly expected and new (superior) theories might be slow to gain widespread support (Boghossian, 1996). On the other hand, constructivism—also known as social constructivism, social constructionism, poststructuralism, postpositivism, or postmodernism—attacks the notion that science has any objective content whatsoever. Instead, scientific “facts” are “socially constructed” and the notion of progress an illusion. “The very development and use of the rhetoric of objectivity … represents a mere play for power, a way of silencing … ‘other ways of knowing' ” (Boghossian, 1996: 15).
Linda Nicholson (in Klee, 1999: 269) captures the distinction quite well when she argues:
The traditional historicist claim that all inquiry is inevitably influenced by the values of the inquirer provides a very weak counter to the norms of objectivity … The more radical move in the postmodern turn was to claim that the very criteria demarcating the true and the false, as well as such related distinctions as science and myth or fact and superstition, were internal to the traditions of modernity and could not be legitimized outside of those traditions. Moreover, it was argued that the very development of such criteria, as well as their extension to ever wider domains, had to be described as representing the growth and development of “specific regimes of power.”
Hacking (1999: 6) argues that the constructivist position can be reduced to two points (where X = science, scientific method, positivism, truth, reality, progress, objectivity, or facts): first, that X need not have existed, or need not be at all as it is; and second, that X, as it is at present, is not determined by the nature of things and is not inevitable. Many constructionists also believe that X is quite bad as it is, and that we would be much better off if X were done away with, or at least radically transformed.
Constructivist thinking in science can be traced back to the work of Feyerabend (1975), who argued against prescribing a rational system for demarcating good science from bad science in favor of methodological anarchism. In his view, “there is not a single rule, however plausible, and however firmly grounded in epistemology, that is not violated at some time or other … The only principle that does not inhibit progress is: anything goes” (Feyerabend, 1975: 230). Prescriptive systems such as logical positivism do not lead to more truth or greater accuracy:
An openness to new ideas, a willingness to try out unpopular techniques, and a spirit of passionate searching after the unfamiliar … contribute more to progress of science than commitment to cool reason and accepted methods. (Klee, 1999: 201)
Note that Feyerabend still adheres to the notion of progress in science, but claims that everyone can define progress in their own way.
The Strong Program in the Sociology of Science further helped lay the groundwork for constructivism in science by seeking to discover the social conventions that led to the acceptance of a fact, independent of the truth or falsity of that fact (Bloor, 1976). Similarly, Latour and Woolgar (1979) undertook a case study of a Nobel prize-winning laboratory to demonstrate how highly regarded scientific “facts” are often socially constructed from assumptions, expectations, and conventions rather than directly verified.
Contemporary constructivism comes in many flavors and purposes. Radical constructivists argue that there is no reality independent of the observer. Reality may exist “out there,” but our “biological hard-wiring” and social experiences limit our knowledge of ontological reality. Critical theory assumes that a consensus can be reached on our experiences of reality if we create “ideal speech situations” that give a voice to underprivileged views. Postmodernists, on the other hand, reject the notion that consensus can ever be reached, advocating (à la Feyerabend) an “anything goes” approach to reality with no privileged positions.
Hacking (1999) speaks of a hierarchy of purposes in constructivism. Weaker forms of constructivism seek to unmask the social and political processes that create privileged ways of knowing. More radical forms seek to alter the social structure to admit other ways of knowing and thereby share (or destroy) power. In general, constructivism is critical of science (as an especially privileged and elitist way of knowing). Accordingly, the demarcation between science and nonscience is irrelevant. Labeling something as a science is simply a political gambit to gain power and status over other ways of knowing. Science has no more insight into objective reality than any other way of knowing. Thus, constructivist studies of science seek to unmask the lack of objectivity in science, thereby removing the basis for its claim as a privileged way of knowing.
Hacking (1999) claims that constructivists are often unclear about whether they are opposed to the practice of science or the ideology of science. By the practice of science, he means those activities that allow scientists to construct theories that predict or explain the natural world. Hacking argues that it would be foolish to claim that practical science has not made large strides in discovering the laws and principles of nature. However, the success of science has given it a privileged, or ideological, status that has the effect of suppressing dissent, molding worldviews, and supporting élites (typically white, Anglo-Saxon, Protestant males), regardless of the merits of their position vis-à-vis others' reality.
I have spoken at length about criteria for demarcating science from nonscience, but little about what actually constitutes scientific practice. What method do scientists use to gain the predictive and explanatory power over nature to which Hacking refers? How do the methods and practices of complexity science differ from traditional science?
The first, and perhaps most distinctive, attribute of science on Thagard's list from Table 1 is correlation thinking, or the search for empirical regularities (or laws). The ideal gas law, PV = nRT, is a classic example of a set of regularities expressed as a law. Hempel (1966) argues that prediction arises from combining the law with a set of initial conditions to deduce an outcome. For instance, the ideal gas law enables us to predict pressure given the initial temperature, volume, and quantity of a gas. Similarly, explanation involves taking a known outcome and demonstrating that it is deducible from the applicable law(s) and initial conditions. Laws may be either deterministic (for all X, Y) or probabilistic (for 80 percent of X, Y).
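To make Hempel's deductive schema concrete, the following sketch (my own illustration, not code from the article, with hypothetical initial conditions) treats the ideal gas law as the covering law and deduces a predicted pressure from it:

```python
# A minimal sketch of Hempel's covering-law schema using the ideal gas law PV = nRT.
# The initial conditions below (n, T, V) are hypothetical values chosen for illustration.

R = 8.314  # universal gas constant, J/(mol*K)

def predict_pressure(n_moles: float, temp_kelvin: float, volume_m3: float) -> float:
    """Deduce the pressure (in Pa) from the law plus a set of initial conditions."""
    return n_moles * R * temp_kelvin / volume_m3

# Prediction: 1 mol of gas at 300 K in a 0.025 m^3 vessel.
print(predict_pressure(n_moles=1.0, temp_kelvin=300.0, volume_m3=0.025))  # roughly 99,800 Pa

# Explanation runs the same deduction in reverse: an observed pressure is shown to
# follow from the law together with the measured temperature, volume, and quantity.
```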
According to Hempel (1966), the identification of empirical regularities calls for theories to explain why these regularities occur. These theories, in turn, may give rise to additional novel predictions. As discussed in the previous section, empiricism and historicism both place great weight on scientific progress. To summarize from Laudan (1981) and Thagard (1988), a theory T′ improves on theory T when it solves more problems, explains more facts, and predicts more novel events than T.
Of course, as the constructivists have indicated, the choice of phenomena to explain or predict has never been arbitrary. Some areas of human activity are more conducive to the identification of regularities than others and some activities have higher priority in human affairs (such as survival, shelter, and power over others). New sciences are likely to arise as technology allows more areas to be mined for regularities or new human priorities arise.
Complexity is a new science precisely because it has developed new methods for studying regularities, not because it is a new approach for studying the complexity of the world. Science has always been about reducing the complexity of the world to (predictable) regularities. To a layperson, the behavior of gases is complex and chaotic, but the gas laws reduce that complexity to manageable regularities. Similarly, Newtonian mechanics reduced complex motion (particularly the complex motion of the planets) to simple regularities. Consequently, rather than define complexity science by what is studied (i.e., a complex universe), the focus should be on the methods used to search for regularities.
Complexity science introduces a new way to study regularities that differs from traditional science. Traditional science has tended to focus on simple cause-effect relationships. In the ideal gas law, a rise in temperature leads to a corresponding rise in pressure (holding volume and quantity constant). Similarly, Newton's well-known formula that force equals the product of mass and acceleration (F = ma) also expresses a simple relationship. It is not surprising that early science focused on simple laws, because they are the easiest regularities to replicate, detect, control, and measure.
Complexity science posits simple causes for complex effects. At the core of complexity science is the assumption that complexity in the world arises from simple rules. However, these rules (which I term “generative rules”) are unlike the rules (or laws) of traditional science. Generative rules typically determine how a set of artificial agents will behave in their virtual environment over time, including their interaction with other agents. Unlike traditional science, generative rules do not predict an outcome for every state of the world. Instead, generative rules use feedback and learning algorithms to enable the agent to adapt to its environment over time. The application of these generative rules to a large population of agents leads to emergent behavior that may bear some resemblance to real-world phenomena. Finding a set of generative rules that can mimic real-world behavior may help scientists predict, control, or explain hitherto unfathomable systems (such as the stock market).
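The article does not tie generative rules to any particular model, but a Schelling-style neighborhood model is a standard illustration of the idea: each agent follows a single local rule (relocate if too few neighbors share its type), and segregated clusters emerge in the aggregate even though no rule mentions clusters. The sketch below is a minimal, hypothetical implementation of that kind of rule; the grid size, threshold, and number of steps are arbitrary illustrative choices.

```python
import random

# A minimal Schelling-style model: one generative rule per agent
# ("relocate if fewer than THRESHOLD of my neighbors share my type"),
# applied repeatedly to a population of agents on a grid.
# All parameters are illustrative choices, not taken from the article.

SIZE, THRESHOLD, STEPS = 20, 0.5, 60
# Cells hold agent type 0, agent type 1, or None (empty).
grid = [[random.choice([0, 1, None]) for _ in range(SIZE)] for _ in range(SIZE)]

def unhappy(r, c):
    """Apply the local rule: is the agent at (r, c) dissatisfied with its neighborhood?"""
    agent = grid[r][c]
    neighbors = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)]
    same = sum(1 for n in neighbors if n == agent)
    occupied = sum(1 for n in neighbors if n is not None)
    return occupied > 0 and same / occupied < THRESHOLD

for _ in range(STEPS):
    # Decisions are based on the grid as it stands at the start of each step.
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    random.shuffle(movers)
    for r, c in movers:
        if not empties:
            break
        dest = empties.pop(random.randrange(len(empties)))
        grid[dest[0]][dest[1]], grid[r][c] = grid[r][c], None
        empties.append((r, c))

# Print the emergent pattern: clusters of 0s and 1s tend to form.
for row in grid:
    print("".join("." if cell is None else str(cell) for cell in row))
```

The point, in the terms used above, is that a short list of generative rules produces emergent structure; whether that structure mimics any real-world system is then a separate, empirical question.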
For instance, consider the simple logistic equation often used to introduce chaos theory, x_{t+1} = r x_t (1 - x_t). By altering the value of r, a researcher can generate a variety of complex patterns, including fixed-point attractors, bifurcations, and strange attractors. When r > 3.7, iterating over the equation produces chaotic behavior that appears noncyclical and extremely complex to the casual observer. The key to this complex behavior is a simple deterministic equation: precisely the type of simple law or regularity that complexity science would like to use to describe the natural world.
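As a concrete illustration (a sketch of my own, not taken from the article), iterating the map for a few values of r shows the qualitative shift from a fixed point, to a periodic cycle, to apparently aperiodic behavior:

```python
def logistic_orbit(r: float, x0: float = 0.2, n_transient: int = 200, n_keep: int = 8):
    """Iterate x_{t+1} = r * x_t * (1 - x_t); return a few values after transients die out."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

# Illustrative parameter values: a fixed point, a 2-cycle, and a chaotic regime.
for r in (2.8, 3.2, 3.9):
    print(r, logistic_orbit(r))
```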
The challenge for complexity science is to calibrate the computer models to real-world data. For instance, does the stock market exhibit chaotic behavior that can be mapped on to a logistic equation? Doyne Farmer and others have been searching for just such an equation for several years without success (Farmer, 1999). On a positive note, there have been success stories in controlling chaos. For instance, researchers have discovered that heart fibrillation can be modeled with equations based on chaos theory. This knowledge has been used to develop a method for controlling induced arrhythmia (Garfinkel et al., 1992).
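Calibration here means estimating a model's parameters from measured data and then testing its behavior out of sample. The toy sketch below is purely illustrative: it generates a synthetic "observed" series from a logistic map with a known r and small measurement noise, then recovers r by least squares on the one-step map. Real systems such as markets have so far resisted fits of this kind, which is precisely the difficulty Farmer describes.

```python
# Toy calibration of a generative rule to data: estimate r by regressing x_{t+1}
# on x_t * (1 - x_t). The "observed" series is synthetic (known r plus a little
# measurement noise), so this only illustrates the idea, not a real-world fit.
import random

true_r, x = 3.82, 0.3
observed = []
for _ in range(500):
    x = true_r * x * (1 - x)
    observed.append(min(max(x + random.gauss(0, 0.001), 0.0), 1.0))

# Least squares without intercept: r_hat = sum(y * z) / sum(z * z),
# where z = x_t * (1 - x_t) and y = x_{t+1}.
z = [s * (1 - s) for s in observed[:-1]]
y = observed[1:]
r_hat = sum(yi * zi for yi, zi in zip(y, z)) / sum(zi * zi for zi in z)
print(f"estimated r = {r_hat:.3f} (true r = {true_r})")
```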
It is not too difficult to see that, from the historicist perspective, complexity science represents a good example of a developing research program (Lakatos, 1974). For instance, it has its hard-core assumptions (e.g., complex dynamic systems can be modeled with generative rules; similar generative rules operate across a wide range of complex systems) and methods (e.g., agent-based computational modeling; nonlinear dynamics; genetic algorithms). Also, like all research programs, complexity science has its share of anomalies and critics (Horgan, 1995).
At this juncture, the greatest challenge for complexity science is to remain progressive by solving problems (Laudan, 1981) and making novel predictions (Lakatos, 1974). As we have seen, chaos theory has been making progress in controlling chaos, particularly in medical and industrial applications. Also, Bak and Chen (1991) have managed to fit power-law distributions to a wide range of real-world phenomena, including earthquakes, city sizes, and sand piles. However, progress has been less dramatic in artificial life and complex adaptive systems. While it has been relatively simple to show high-level resemblances between the emergent properties of computer models and real-world phenomena, it has proven extremely difficult to calibrate these models to produce correlations or confirmable regularities of real-world systems. Arguably, system dynamics has been a degenerating research program since the 1970s because of its inability to calibrate its models successfully.
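Bak and Chen's power-law claim is, in this vocabulary, a confirmable regularity: event sizes s should follow a distribution proportional to s^(-alpha) over some range, which appears as a straight line on a log-log plot. The sketch below is illustrative only; it draws synthetic event sizes from a known power law and recovers the exponent with the standard maximum-likelihood estimator. This is the kind of quantitative check that separates a fitted regularity from a mere resemblance.

```python
# Toy check of a power-law regularity: draw synthetic event sizes from a
# distribution proportional to s^(-alpha) and recover alpha with the
# maximum-likelihood estimator alpha_hat = 1 + n / sum(ln(s_i / s_min)).
# All values are illustrative, not real earthquake or city-size data.
import math, random

alpha, s_min, n = 2.5, 1.0, 20000
# Inverse-transform sampling: s = s_min * (1 - u)^(-1 / (alpha - 1))
samples = [s_min * (1 - random.random()) ** (-1.0 / (alpha - 1)) for _ in range(n)]

alpha_hat = 1 + n / sum(math.log(s / s_min) for s in samples)
print(f"estimated exponent = {alpha_hat:.2f} (true exponent = {alpha})")
```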
I would like to conclude this article by alluding to three things that complexity science is not: it is not the radical holism of general systems theory, it is not a postmodern science, and it is not the mere application of complexity metaphors.
In the 1950s and 1960s, general systems theory introduced the notion that phenomena that appear to have simple causes, such as unemployment, actually have a variety of complex causes—complex in the sense that the causes are interrelated, nonlinear, and difficult to determine. Systems theorists adopted a holistic approach, which, in its most radical form, argued that everything affects everything else, and that any given phenomenon, such as unemployment, cannot be studied without looking at the entire context in which it is embedded. These contexts include the social context, the economic context, the family context, the global context, and so forth.
Complexity science, defined earlier as the search for generative rules, does not embrace the radical holism of systems theory. Complexity scientists are seeking simple rules that underpin complexity. In fact, all science is seeking ways to simplify or generalize from the complexity of the real world in some way. Traditional science seeks direct causal relations between elements in the universe, whereas complexity theory drops down a level to explain the rules that govern the interactions between lower-order elements that in the aggregate create emergent properties in higher-level systems.
In contrast, systems theory almost seems to surrender to complexity because it is not particularly interested in the identification of regularities. Regularities do not exist in open systems, almost by definition. Complexity writers who propound a holistic thesis are thus probably not complexity scientists. Interestingly, system dynamics, a pragmatic outgrowth of general systems theory, restricts itself to the study of bounded systems (albeit with complex feedback loops). This admits the possibility of identifiable regularities.
Complexity science postulates that generative rules and equations can be discovered that are capable of explaining the observed complexity of the “real” world/universe. Furthermore, these laws have the potential to predict and control the behavior of real-world systems. Thus, complexity science would seem to be following a positivist, rather than postpositivist, research agenda (Morçöl, 2000).
However, for many constructivists/postmodernists, chaos theory represents an “attack from within” on the privileged position held by science as a dominant ideology controlled by white European males. Like Einstein's relativity, Gödel's theorem, and Heisenberg's uncertainty principle, chaos theory is said to delimit the boundaries of determinism and rationality, fatally wounding the cherished notion that science can predict and control all aspects of the “real” world. Of course, as we have seen, chaos and complexity theory can be used to predict and control the real world (thus fatally wounding the cherished notion of complexity as postmodern science).
In a similar vein, Richardson et al. (2000) advocate a postmodern definition of complexity science based on the logic that the incompressibility of complex systems implies infinite ways of knowing about the world—a thoroughly constructivist view that privileges other ways of knowing. This view is perfectly compatible with the constructivist philosophy of science outlined earlier. However, in this article I have argued for a definition of complexity science that blends positivist and historicist schools of thought by emphasizing science as the search for confirmable regularities. This view is incompatible with constructivist definitions of complexity like Richardson et al. (2000), because of its normative advocacy of a single scientific method.
In our earlier discussion of demarcation criteria, Thagard (1988) listed “resemblance thinking” as a classic sign of pseudo-science. It is possible to distinguish two types of resemblance thinking in contemporary complexity studies. The first occurs when analyzing patterns from model outputs; the second can be found at the level of metaphors.
In the first case, I have already mentioned how output from computational models of complex adaptive systems and artificial life has been used to draw qualitative similarities with real-world phenomena. While this work is interesting as far as it goes, it is basically a sophisticated form of resemblance thinking, and should be classified as pseudo-science or, at the very least, pre-science. The missing link is to demonstrate a correlation between the model and reality on a quantitative rather than qualitative basis and then, ultimately, to use the model for prediction or control.
The second case of resemblance thinking involves the use of metaphors. The use of complexity metaphors is particularly prevalent in business, where the past few years have seen a plethora of books using complexity terminology to provide novel “scientific” insights into business problems (Lissack, 1999). There are many examples of resemblance thinking in business, for example equating chaos with “a sense of chaos and upheaval in today's business environment” and complexity with “the increased complexity of today's business environment.” Clearly, these writers are not practicing complexity science because they do not subject their claims to testing, confirmation, or falsification (Thagard, 1988). In fact, there is no sound scientific evidence to back any of their claims.
I agree with Lissack (1999) that the use of metaphors has the ability to suggest new relationships and new categories of thought. However, I object to the misuse or misreading of complexity science to legitimate complexity metaphors in business. As the constructivists have suggested, science has a privileged position in society and there is always a temptation to claim “scientific status” to inflate the credibility of one's work. While Park (2000) suggests that pseudo-scientific practices often arise from genuine misunderstandings or self-delusion, he also recognizes a slippery slope from self-delusion to fraud.
The widespread use of inappropriate metaphors can only damage complexity science in the long run by destroying its credibility and consigning it to history as another management fad (McKelvey, 1999). Complexity scientists have a responsibility to prevent this from occurring, partly by exposing fraud and self-deception, and partly by ensuring that the field remains progressive.
The preceding discussion should not be taken to imply that metaphor has no place in inquiry, or that nothing of value can be learned from approaches that lie outside science.
The central point of the argument is that “science” is not a casual term. To label something as a science implies that one follows certain approaches to gathering and interpreting knowledge, such as those identified by Thagard (1988) in Table 1. It also implies a certain privileged position in society (whether deserved or not). Therefore, “anything goes” is not an appropriate description of science.
Complexity theory, or complexity studies, is an inclusive term that admits multiple ways of knowing. Complexity science is a connotative term that specifies a particular way of knowing that I have attempted to outline above. Complexity scientists should resist attempts to label “other ways of knowing” as complexity science because it undermines their reputation and credibility (and the reputation and credibility of all science). Complexity theory should not be confused with complexity science. However, this in no way implies that learning, insight, understanding, or tolerance cannot be derived from “other ways of knowing.”
Bak, P. & Chen, K. (1991) “Self-organized criticality,” Scientific American, 264: 26-33.
Bloor, D. (1976) Knowledge and Social Imagery, London: Routledge.
Boghossian, P. (1996) “The Sokal hoax,” Times Literary Supplement, December 13: 14-15.
Boyd, R., Gasper, P., & Trout, J. D. (1991) The Philosophy of Science, Cambridge, MA: MIT Press.
Chalmers, A. (1976) What Is This Thing Called Science?, St Lucia: University of Queensland Press.
Farmer, J. D. (1999) “Physicists attempt to scale the ivory towers of finance,” working paper 99-10-073, Santa Fe Institute.
Feyerabend, P. K. (1975) Against Method, London: New Left Books.
Garfinkel, A., Spano, M. L., Ditto, W. L., & Weiss, J. (1992) “Controlling cardiac chaos,” Science, 257: 1230-35.
Gross, P. R. & Levitt, N. (1994) Higher Superstition: The Academic Left and its Quarrels with Science, Baltimore, MD: Johns Hopkins University Press.
Gross, P. R., Levitt, N., & Lewis, M. W. (eds) (1996) The Flight from Science and Reason, Baltimore, MD: Johns Hopkins University Press.
Hacking, I. (1999) The Social Construction of What?, Cambridge, MA: Harvard University Press.
Hempel, C. G. (1966) Philosophy of Natural Science, Englewood Cliffs, NJ: Prentice-Hall.
Horgan, J. (1995) “Trends in complexity studies: From complexity to perplexity,” Scientific American, 272: 74-9.
Klee, R. (ed.) (1999) Scientific Inquiry: Readings in the Philosophy of Science, New York: Oxford University Press.
Kuhn, T. (1962) The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
Lakatos, I. (1974) “Falsification and the methodology of scientific research programs,” in I. Lakatos & A. Musgrave (eds), Criticism and the Growth of Knowledge, Cambridge, UK: Cambridge University Press.
Lakatos, I. (1977) “Science and pseudoscience,” in I. Lakatos (ed.), Philosophical Papers, Cambridge, UK: Cambridge University Press.
Latour, B. & Woolgar, S. (1979) Laboratory Life: The Social Construction of Scientific Facts, London: Sage.
Laudan, L. (1977) Progress and its Problems, Berkeley, CA: University of California Press.
Laudan, L. (1981) “A problem solving approach to scientific progress,” in I. Hacking (ed.), Scientific Revolutions, Oxford, UK: Oxford University Press.
Lissack, M. R. (1999) “Complexity: the science, its vocabulary, and its relation to organizations,” Emergence, 1(1): 110-26.
Malhotra, Y. (1994) “On science, scientific method and evolution of scientific thought: A philosophy of science perspective of quasi-experimentation,” www.brint.com/papers/science.htm.
Matheson, C. (1996) “Historicist theories of rationality,” http://plato.stanford.edu/entries/rationality-historicist/.
McKelvey, B. (1999) “Complexity theory in organization science: Seizing the promise or becoming a fad?,” Emergence, 1(1): 5-32.
Morçöl, G. (2000) “Is complexity postmodern? Or is it postpositivist?,” paper presented at the 13th Annual Conference of the Public Administration Theory Network, Fort Lauderdale, FL, January 28-29.
Park, R. L. (2000) Voodoo Science: The Road from Foolishness to Fraud, New York: Oxford University Press.
Popper, K. (1959) Logic of Scientific Discovery, London: Hutchinson.
Popper, K. (1963) Conjectures and Refutations, London: Routledge & Kegan Paul.
Putnam, H. (1991) “The 'corroboration' of theories,” in R. Boyd, P. Gasper, & J. D. Trout (eds), The Philosophy of Science, Cambridge, MA: MIT Press.
Richardson, K., Cilliers, P., & Lissack, M. (2000) “Complexity science: A 'grey' science for the 'stuff in between',” paper presented at the First International Conference on Systems Thinking in Management, Geelong, Australia, 8-10 November.
Shermer, M. (1997) Why People Believe Weird Things: Pseudo-science, Superstition, and Bogus Notions of our Time, New York: MJF Books.
Sokal, A. D. & Bricmont, J. (1998) Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science, New York: St Martin's Press.
Suppe, F. (1977) The Structure of Scientific Theories, Urbana, IL: University of Illinois Press.
Thagard, P. R. (1978) “Why astrology is a pseudoscience,” in P. Asquith & I. Hacking (eds), Proceedings of the Philosophy of Science Association, East Lansing, MI: Philosophy of Science Association.
Thagard, P. R. (1988) Computational Philosophy of Science, Cambridge, MA: MIT Press.