Complexity science as order-creation science: New theory, new method

Bill McKelvey
UCLA Anderson School of Management, USA

Abstract

Traditional ‘normal’ science has long been defined by classical physics and most obviously carried over into social science by neoclassical economics. Especially because of the increasingly rapid change dynamics at the dawn of the 21st century, different kinds of foundational assumptions are needed for an effective scientific epistemology. Complexity science - really ‘order-creation science’ - is particularly relevant because it is founded on theories explicitly aimed at explaining order creation rather than accounting for classical physicists’ traditional concerns about explaining equilibrium. This article sets up the rapid change problem, and shows why evolutionary theory is not the best approach for explaining entrepreneurship and organizational change dynamics. New theories from order-creation science are briefly presented. The continuing centrality of models in scientific realist definitions of modern science is brought to center stage. Agent-based computational models are shown to be better than math models in playing the role of forcing theoretical elegance and continuing the essential experimental tradition of effective science.

Introduction

Isabelle Stengers (2004) reminds us that the founding idea of complexity science was Prigogine's juxtapositioning of the 1st and 2nd Laws of Thermodynamics so as to explain the emergence of dissipative structures. Implicit in this was his questioning of the reversibility of time and the centrality of equilibrium in "normal" science (Prigogine & Stengers, 1984). There can be no greater foundational challenge to normal science, the origin of which was classical physics. Sandra Mitchell (2004) reminds us of the centrality of idealized, abstract models, one of the enduring legacies of logical positivism (McKelvey, 2002). Ironically, if mathematics is taken as the model-technology of choice, these two foundational statements can't be joined. Why? Math as the core modeling method of modern science originated in Newton's studies of orbital mechanics and was greatly reinforced by the Vienna Circle's founding of logical positivism in 1907 (Suppe, 1977). Given that the 1st Law is about conservation of energy, and that classical physical dynamics is about the translation of matter from one form to another, math became the only means of rigorously accounting for the accuracy of the translation, and thereby of proving theories about what causes what, given equilibrium. Simply put, math models can't handle order creation. The methodological invention that does allow the joining of the two foregoing foundational statements is the agent-based computational model. Casti (1997) states that in fifty years' time computational experiments will be seen as the primary contribution of the Santa Fe Institute.

The tendency in organization studies so far is to focus on explicating the term complexity, seemingly in every way possible, and to rush toward offering practical wisdom, again, seemingly in every way possible. Could it be that the science of it all is being ignored in the rush toward practical application? My intent is to emphasize the science part in two ways. First, calling 'it' complexity science is like calling thermodynamics 'hot' science. Complexity and hot are the outcomes at one end of the dynamic scale. More aptly, complexity science is order-creation science. This puts focus on the fundamental change in the nature of the dynamics involved - from equilibrium dynamics to order-creation dynamics. Second, there is the changing role of models. Math is good for equilibrium modeling. Agent-based computational models are essential for modeling order creation.

My objective is to explain why order-creation science offers significant new lessons for how scholars doing management and organizational studies can better understand the modern dynamics of their phenomena. I begin with a review of why ‘New Age’ economies and organizations in the digital information era call for organizational designs in which the collective intelligences of many employees may be brought to bear, quickly, on New Age organization problems and strategies. I then discuss why order-creation science offers better ways of understanding and researching emergent collective phenomena. Foremost among these new methods is the use of agent-based computational models. I also outline the epistemological reasons why models remain a cornerstone of effective science. A short overview of how one might use agent models in conducting one’s research on organizations and managerial processes follows.

Organizational dynamics in the 21st century

'New age' economics and increased external complexity

Over past centuries, economic life has been marked by three revolutions: agricultural, industrial, and service. The 21st century brings with it a fourth - the digital information age. Nowhere are the dynamics characterizing the knowledge era more vividly and succinctly portrayed than in a recent book edited by Halal and Taylor (1999), 21st Century Economics: Perspectives of Socioeconomics for a Changing World. Their conclusions (paraphrased) are far reaching (pp. 398-402):

  1. Economies of the 21st century will be dominated by globalization and integrated by sophisticated information networks;
  2. Increasingly deregulated economies will mirror the textbook ideal of perfect competition (and marginal profits);
  3. Creative destruction from the transition will create social disorder worldwide;
  4. Nearly autonomous entrepreneurial cellular networks and fundamentally different ways of corporate governance will replace top-down hierarchical control.

Part Two of their book, titled "Emerging Models of the Firm," focuses on the magnitude of the problem managers face as they cope with increasing competitiveness world-wide while at the same time trying to shift from top-down control to the management of complex new organizational forms using radically new approaches of managerial leadership for purposes of knowledge creation and the creation of intra-organizational market dynamics. As if this weren't difficult enough, Larry Prusak (1996: 6) points to speed as the driving element:

“The only thing that gives an organization a competitive edge - the only thing that is sustainable - is what it knows, how it uses what it knows, and how fast it can know something new!”

Halal and Taylor say life will be different on the other side of the millennium - the New Age:

"Communism has collapsed, new corporate structures are emerging constantly, government is being 'reinvented', entirely new industries are being born, and the world is unifying into a global market governed by the imperatives of knowledge" (1999: xvii).

The four conclusions by Halal and Taylor, mentioned above, predict economic revolution over the next two decades. Given this, what should organization theorists and managers worry about? Significant clues come from Part II of their book, boiled down in their Table 1, which focuses on “Emerging Models of the Firm.” Abstracting from this Table, what do the various authors in Part II see going on?

The authors in Part II emphasize decentralization, cellular networks, internal markets, and employee empowerment as defining elements of New Age economies - all in response to disequilibria and new economic trends. Key questions we face are: How should we research organizational and/or managerial dynamics? How should managers manage? (Drucker, 1999)

Also from Halal and Taylor (1999) we learn that New Age trends call for dramatically new organizational strategies and designs. Strategy scholars have seen this coming. Recent writers about competitive strategy and sustained rent generation parallel Prusak's emphasis on how fast a firm can develop new knowledge. Competitive advantage is seen to stem from keeping pace with high-velocity environments (Eisenhardt, 1989), seeing industry trends (Hamel & Prahalad, 1994) and value migration (Slywotzky, 1996), and staying ahead of the efficiency curve (Porter, 1996). Because of increases in the need for dynamic capabilities, faster learning, and knowledge creation, there is an increased level of causal ambiguity (Lippman & Rumelt, 1982; Mosakowski, 1997). Learning and innovation are not only more essential (Ambrose, 1995), but also more difficult (Auerswald, et al., 1996; ogilvie, 1998). Dynamic ill-structured environments and learning opportunities become the basis of competitive advantage if firms can be early in their industry to unravel the evolving conditions (Stacey, 1995). Drawing on Weick (1985), Udwadia (1990), and Anthony, et al. (1993), ogilvie (1998: 12) argues that strategic advantage lies in developing new useful knowledge from the continuous stream of "unstructured, diverse, random, and contradictory data" swirling around firms.

Matching internal to external complexity via bottom-up emergence

The foregoing trends appear on a CEO's horizon as uncertainties. Uncertainty in organizational environments is a function of (1) degrees of freedom (generally taken as the most basic definition of complexity - Gell-Mann, 1994); (2) the possible nonlinearity of each variable comprising each degree of freedom; and (3) the possibility that each may change. These three environmental ingredients give rise to seemingly countless strategic options. Long ago, and seemingly in simpler times, Ashby characterized environments and possible adaptive options using the term "variety." His classic Law of Requisite Variety (1956: 207) holds that:

only variety can destroy variety

What does Ashby mean by "destroy"? His insight was that a system has to have internal variety, also defined as degrees of freedom, that matches its external variety so that it can self-organize to deal with - and thereby "destroy" or overcome - the negative effects on adaptation of imposed environmental constraints and complexity. In biology, this is to say that a species has to have enough internal genetic variance to adapt successfully to whatever resource and competitor tensions its environment imposes. (A standard information-theoretic rendering of the law is sketched after the list below.) I update and extend Ashby's Law as follows:

  1. Only internal variety can destroy external variety - updates to:
  2. Only internal degrees of freedom can destroy external degrees of freedom - updates to:
  3. Only internal complexity can destroy external complexity - updates to:
  4. Only interactive heterogeneous agents can destroy external complexity - updates to:
  5. Only distributed intelligence can destroy external complexity.
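
Ashby (1956) also gave the law a quantitative form. In the standard information-theoretic rendering (my gloss; the article itself does not spell it out):

$$ H(O) \;\geq\; H(D) - H(R) $$

where H(D) is the variety (entropy) of environmental disturbances, H(R) the variety of the regulator's responses, and H(O) the variety of outcomes the system experiences. Outcome variety can be driven down only by raising internal variety H(R) - the quantity the five updates above progressively reinterpret, from generic internal variety to distributed intelligence.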

But how is external complexity destroyed and internal complexity created? Thompson (1967), reflecting the era of contingency theory, took the view that variety was reduced from the top. Thus, at each level starting with the CEO, some variety is taken out of the system so that at the bottom, workers do their jobs in a machine-like setting of total certainty. This is a top-down approach to uncertainty reduction. Nearly a quarter century later, Mélèse (1991) takes the opposite view, arguing that variety reduction happens best from the bottom up. Simon (1999) observes that it is not just variety that is out there to be destroyed but also high change-rate effects. Lower level units, therefore, must absorb variety, leaving upper managers with less frequent, less noisy, less complex, but weightier decisions. By way of expanding on how organizations might go about developing internal complexity, I briefly describe four approaches to the bottom-up variety destruction problem.

Knowledge creation. Knowledge theory explores strategies for effectively utilizing worker intelligence (Grant, 1996). According to current literature, the wellsprings of knowledge (Leonard-Barton, 1995) derive from the connected intelligences of individuals (Nonaka & Takeuchi, 1995; Nonaka & Nishiguchi, 2001; Stacey, 2001). As firms increasingly depend upon individual intelligences in their production enterprise (Burton-Jones, 1999; Davenport & Prusak, 1998; Gryskiewicz, 1999), they must develop strategies for acquiring, interpreting, distributing, and storing the information that individuals possess (Huber, 1996). Nonaka and Nishiguchi (2001) describe this knowledge management process as the expansion of "individual knowledge" into higher-level, "organizational knowledge."

In social systems, the learning dynamics described above occur simultaneously and interactively - it is a connectionist social capital development problem (Burt, 1992). Transactive memory (Moreland & Myaskovsky, 2000; Wegner, 1987) and situated learning studies (Glynn, et al., 1994; Lave & Wenger, 1991), for example, all show learning as a nonlinear, interactive, and coevolving (Lewin & Volberda, 1999) process. Even individual cognitive processes are seen as socially distributed (Taylor, 1999), with employees in social networks influencing and learning from each other (Argote, 1999). Learning, thus, is a recursive connectionist process rather than a linear agent-independence one.

Recent work now stresses the importance of the "collective mind" (Lave & Wenger, 1991; Weick & Roberts, 1993). According to Glynn, et al. (1994), learning is "best modeled in terms of the organizational connections that constitute a learning network" (p. 56). Wenger (1998) focuses on individual learning in "communities of practice," observing that individual learning is inseparable from collective learning. Lant and Phelps (1999: 233) hold that learning should be understood primarily as evolving within "an interactive context... embedded in the context and the process of organizing." McKelvey (2001a, 2005) refers to recursive vertical and horizontal individual / group learning processes in organizations as "distributed intelligence."

Distributed Intelligence. Henry Ford's quote represents thinking in the Industrial Age; Zohar's, which follows it, represents the Knowledge Era:

"Why is it that whenever I ask for a pair of hands, a brain comes attached?" (Ford)

"My work is in a building that houses three thousand people who are essentially the individual 'particles' of the 'brain' of an organization that consists of sixty thousand people worldwide." (Zohar, 1997: xv)

Zohar (1997) starts her book by quoting the director of retailing giant Marks & Spencer. Each particle (employee) has some intellectual capability - Becker's (1975) human capital. And some of them talk to each other - Burt's (1992) social capital. Together they comprise distributed intelligence. Human capital is a property of individual employees. Taken to the extreme, even geniuses offer a firm only minimal adaptive capability if they are isolated from everyone else. A firm's core competencies, dynamic capabilities, and knowledge requisite for competitive advantage increasingly appear as networks of human capital holders. These knowledge networks also increasingly appear throughout firms rather than being narrowly confined to upper management (Norling, 1996). Employees are now responsible for adaptive capability rather than just being bodies to carry out orders. Here is where networks become critical. Much of the effectiveness and economic value of human capital held by individuals has been shown to be subject to the nature of the social networks in which the human agents are embedded (Granovetter, 1973, 1985, 1992).

Intelligence in brains rests entirely on the production of emergent networks among neurons - "intelligence is the network" (Fuster, 1995: 11). Neurons behave as simple "threshold gates" that have one behavioral option - fire or not fire (p. 29). As intelligence increases, it is represented in the brain as emergent connections (synaptic links) among neurons. Human intelligence is 'distributed' across really dumb agents! In computer parallel processing systems, computers play the role of neurons. These systems are more 'node-based' than 'network-based'. Artificial intelligence resides in the intelligence capability of the computers as agents, with emergent network-based intelligence still at a very primitive stage (Garzon, 1995). My focus on distributed intelligence places most of the emphasis on the emergence of constructive networks. The lesson from brains and computers is that organizational intelligence or learning capability is best seen as 'distributed' and that increasing it depends on fostering network development along with increasing agents' human capital.
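
The threshold-gate image can be made concrete in a few lines. The sketch below is my own construction, in the spirit of McCulloch-Pitts units rather than code from Fuster: three identical dumb fire / not-fire units, wired so that together they compute something no single unit can.

```python
# A minimal sketch (my construction, not from the article): dumb threshold
# gates whose only behavioral option is fire (1) or not fire (0). Wired one
# way, they compute exclusive-or - so the 'intelligence' resides in the
# connection weights (the network), not in the units themselves.

def unit(inputs, weights, threshold):
    """A dumb threshold gate: fire iff weighted input reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor_network(x1, x2):
    """Identical dumb units; the wiring makes the network compute XOR."""
    h_or = unit([x1, x2], [1, 1], 1)        # fires if either input fires
    h_and = unit([x1, x2], [1, 1], 2)       # fires only if both fire
    return unit([h_or, h_and], [1, -1], 1)  # OR but not AND: exclusive-or

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} -> {xor_network(a, b)}")
```

Rewire the same units and they compute something else entirely; this is the sense in which intelligence is 'distributed' across the network rather than resident in any single agent.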

Cellular networks: Miles, et al. (1999) offer a second approach to variety destruction. They refer to the 21st century as the era of innovation. They see self-organizing employee learning networks as essential to effective performance in the knowledge economy. It takes continuously evolving networks to keep up with rapidly evolving elements of the knowledge economy, particularly technology, market tastes, and industry competitors. Miles, et al. (1999) see entrepreneurship, self-organization, and member ownership as the essential ingredients of effective cellular networks. Cells consist of self-managing teams of employees - the heterogeneous agents governed by what complexity scientists term ‘simple rules’. The cells have “...an entrepreneurial responsibility to the larger organization” (p. 163). Miles, et al. (1999) say that if the cells are strategic business units they may be set up as profit centers. They emphasize the instability of the cells, noting that each cell must reorganize constantly. It needs appropriate governance skills to do this.

For Miles, et al. (1999), the CEO's approach to managing the cells is based on viewing the cells as entrepreneurial firms. They offer two examples. In one firm (Technical and Computer Graphics), the cells / firms are joint venture partners within the enveloping organization. In the other (The Acer Group), the cells / firms are jointly owned via an internal stock market. Details remain vague as to how the cells maintain their autonomy in the face of top-down control, or how the CEOs assure shareholder value from the cells. Miles, et al. (1999) talk about self-organization, learning networks, and emergent cells, but, again, how all this works is vague.

Managing appropriate network autonomy. In a recent paper Thomas, et al. (2005) remind us that Roethlisberger and Dickson discovered the classic control-autonomy duality (formal vs. informal organization) back in 1939. By now, several dualities have been observed: control vs. autonomy, innovation, variety, self-organization, and change rate, among others. Thomas, et al.'s (2005) review shows that the 'English' literature tends to persist in looking for bipolar duality solutions in which the opposing forces are balanced or adjusted to achieve an "optimum mix" (March, 1999: 5). In contrast, they observe that the 'French' literature holds that control and autonomy - and other dualities - are "entangled." Even though control might dominate (the "englobing" force), for example, circumstances are recognized when autonomy can dominate ("inversion"). Further, the French see the rate of "inversion / reversion" between which pole of the duality dominates as unstable. The tangled poles - really forces - of dualities have to be appropriately "managed" if a CEO wishes to create and maintain the "combination of independence and interdependence" characterizing Miles, et al.'s "cellular network" design. Some 60 years of organizational research shows this is much easier said than done!

Based on a twelve-year analysis of a global cosmetics firm, Thomas, et al. (2005) conclude that:

  1. Any attempt to focus only on the autonomy end of a duality likely will fail.
  2. Effective “leading” of a cellular network requires setting in motion the dynamic inversion / reversion of control and autonomy, such that they are “entangled” as opposed to “balanced” or “optimized” (March, 1991, 1999). Further, they evolve in their interactive dynamics over time.
  3. The ‘rate’ at which the bipoles irregularly oscillate is critical. The zero-oscillation periods, whether autonomy or control dominated, did not resolve the overproduction and no-profit situations.
  4. The control-pole dominating (englobing), but with frequent reversions to autonomy-dominance, appears as the most successful organizational form, since it is both stable and produces profits.

In the foregoing section I have built from Ashby’s classic Law of Requisite Variety toward the idea that adaptive capability in human organizations, under conditions of New Age external complexity, stems from distributed intelligence built up and held collectively by heterogeneous agents (employees). It is not just the one brain at the top that counts - it is lots of connected brains. This points to the critical importance of bottom-up emergent dynamics underlying the creation of learning and new knowledge, and distributed intelligence in organizations and cellular networks. Given this, I now turn to the question: Where do we draw lessons for better understanding methods for creating newly emerging structures? The best place right now is from complexity science - which is really the science of new order creation.

New theory: Order-creation (complexity) science

Theoretical background

All of the foregoing calls for bottom-up emergence call for a science of new order creation (McKelvey, 2004b). Much of the scientific method in management and organizations and in business schools, especially strategy studies, is dominated by the epistemology of economics (e.g., Besanko, et al., 2000). In fact, social science in general is dominated by economics. Unfortunately, economics is notorious for drawing its epistemology from classical physics and the latter's focus on equilibrium dynamics and the mathematical accounting of physical matter transformations governed by the 1st Law of Thermodynamics (Mirowski, 1989, 1994; Ormerod, 1994, 1998; Arthur, et al., 1997; Colander, 2000). How to get from equilibrium-based to order-creation science?

Nelson and Winter (1982) look to Darwinian evolutionary theory for a dynamic perspective useful for explaining the origin of order in economic systems; so too, does Aldrich (1979, 1999). Leading writers about biology, such as Salthe (1993), Rosenberg (1994), Depew (1998), Weber (1998), and Kauffman (2000), now argue that Darwinian theory is, itself, equilibrium bound and not adequate for explaining the origin of order. Underlying this change in perspective is a shift to the study of how heterogeneous bio-agents create order in the context of geological and atmospheric dynamics (McKelvey, 2004a).

Campbell brought Darwinian selectionist theory into social science (Campbell, 1965; McKelvey & Baum, 1999). Nelson and Winter (1982) offer the most comprehensive treatment in economics; Aldrich (1979) and McKelvey (1982) do so in organization studies. The essentials are: (1) Genes replicate with error; (2) Variants are differentially selected, altering gene frequencies in populations; (3) Populations have differential survival rates, given existing niches; (4) Coevolution of niche emergence and genetic variance; and (5) Struggle for existence. Economic Orthodoxy develops the mathematics of thermodynamics to study the resolution of supply / demand imbalances within a broader equilibrium context. It also takes a static, instantaneous conception of maximization and equilibrium. Nelson and Winter introduce Darwinian selection as a dynamic process over time, substituting routines for genes, search for mutation, and selection via economic competition.

Rosenberg (1994) observes that Nelson and Winter's book fails because Orthodoxy still holds to energy conservation mathematics (the 1st Law of Thermodynamics), the prediction advantages of thermodynamic equilibrium, and the latter framework's roots in the axioms of Newton's orbital mechanics, as Mirowski (1989) discusses at considerable length. Hinterberger (1994) critiques Orthodoxy's reliance on the equilibrium assumption from a different perspective. In his view, a closer look at both competitive contexts and socioeconomic actors uncovers four forces working to disallow the equilibrium assumption:

  1. Rapid changes in the competitive context of firms do not allow the kinds of extended equilibria seen in biology and classical physics;
  2. There is more and more evidence that the future is best characterized by “disorder, instability, diversity, disequilibrium, and nonlinearity” (p. 37);
  3. Firms are likely to experience changing basins of attraction - that is, the effects of different equilibria;
  4. Agents coevolve to create higher level structures that become the selection contexts for subsequent agent behaviors.

Hinterberger's critique comes from the perspective of complexity science. Also from this view, Holland (1988: 117-124) and Arthur, et al. (1997: 3-4) note that the following characteristics of economies counter the equilibrium assumption essential to predictive mathematics:

  1. Dispersed interaction: dispersed, possibly heterogeneous, agents active in parallel;
  2. No global controller or cause: coevolution of agent interactions;
  3. Many levels of organization: agents at lower levels create contexts at higher levels;
  4. Continual adaptation: agents revise their adaptive behavior continually;
  5. Perpetual novelty: by changing in ways that allow them to depend on new resources, agents coevolve with resource changes to occupy new habitats; and
  6. Out-of-equilibrium dynamics: economies operate 'far from equilibrium', meaning that economic dynamics are induced by the pressure of trade imbalances individual-to-individual, firm-to-firm, country-to-country, etc.

After reviewing all the chapters in their anthology, The Economy as an Evolving Complex System, most of which rely on mathematical modeling, the editors ask, "in what way do equilibrium calculations provide insight into emergence?" (Arthur, et al., 1997: 12). The answer is, of course, they don't. What is missing? Holland's elements of complex adaptive systems are what are missing: agents, nonlinearities, hierarchy, coevolution, far-from-equilibrium, and self-organization.

This becomes evident once we use research methods allowing a fast-motion view of socioeconomic phenomena. The fast-paced technology and market changes in the modern knowledge economy - which drive knowledge creation and entrepreneurship - suggest such an analytical time shift for socioeconomic research methods is long overdue. The methods of economics are based on the methods of physics, which in turn are based on very slow-motion new order-creation events, i.e., planetary orbits and atomic processes that have remained essentially unchanged for billions of years.

Bar-Yam (1997) divides degrees of freedom into fast, slow, and dynamic time scales. On a human time scale, applications of thermodynamics to the phenomena of classical physics and economics assume that slow processes are fixed and fast processes are in equilibrium, leaving thermodynamic processes as dynamic. Bar-Yam says, "Slow processes establish the [broader] framework in which thermodynamics can be applied" (1997: 90). Now, suppose we speed up slow-motion physical processes so that they appear dynamic at the human time scale - say to a rate of roughly one year for every three seconds. Then about a billion years go by per century. It is like watching a 3.8-billion-year movie in fast motion. At this speed we see the dynamic effects geological changes have on biological order - the processes of Darwinian evolution go by so fast they appear in equilibrium (elaborated in McKelvey, 2004a)!

If the classical physics, equilibrium-influenced methods of socioeconomic research are viewed through the lens of fast-motion science, evolutionary analysis shifts into Bar-Yam's fast-motion degrees of freedom. Thus, changes attributed to selection "dynamics" slip into equilibrium. By this logic, since evolutionary analysis is equilibrium-bound, it is ill-suited for research focusing on far-from-equilibrium change. Following Van de Vijver, et al. (1998), dynamic analysis, therefore, must focus on agents' self-organization rather than Darwinian selection.

The 1st Law of Thermodynamics has been the defining dynamic of science - but it focuses on order translation, not order-creation. Elsewhere, I review the complexity scientists’ search for the 0th law of thermodynamics, focusing on the root question in complexity science: What causes order before 1st Law equilibria take hold? (McKelvey, 2004a). How and when does order creation occur? Post-equilibrium science studies only time-reversible, post 1st Law energy translations - how, why, and at what rate energy translates from one kind of order to another (Prigogine, 1955, 1997). It invariably assumes equilibrium. Pre-equilibrium science focuses on the order-creation characteristics of complex adaptive systems. Knowledge Era research needs to be based on pre-equilibrium science!

Two implications follow from the foregoing review: (1) If not equilibrium-based science, we need to find another scientific approach that focuses on order creation instead of equilibrium. This is what complexity science does; and (2) We also need a different kind of modeling approach. I will point out later just how essential formal modeling is to good science. It also turns out that the American School of complexity science has developed a New Age modeling approach - agent-based computational models. But first, new theory.

Schools of complexity science: Two new theory bases

The complexity science view of the origin of order in biology is that self-organization - pre 1st Law processes - explains more order in the biosphere than Darwinian selection (Kauffman, 1993, 2000; Salthe, 1993; the many authors in Van de Vijver, et al., 1998). Two independently conceived engines of order creation are apparent in complexity science. I review them briefly below and conclude with a call for their integration.

The European group consists of Prigogine (1955, 1997), Haken (1977/1983), Cramer (1993), and Mainzer (1994/2004), among others. The American group consists largely of those associated with the Santa Fe Institute. While one could gloss over the differences, I think it is worth not doing so. For the Europeans, it is clear that phase transitions, especially at the 1st critical value, are fundamental. Phase transitions are significant events that occur at the 1st critical value of R, the Reynolds number (from fluid flow dynamics, Lagerstrom, 1996). Phase transitions are, thus, dramatic events, far removed from the instigation events the Americans focus on, which are: (1) the almost meaningless random "butterfly" effects that set off "self-organized criticality" and complexity cascades (Gleick, 1987; Bak, 1996; Brunk, 2000), and (2) the kinds of events or 'things' that initiate positive feedback mutual causal processes - what Holland (1995) calls "tags." Though their differences are significant, both are essential to social science. To make the differences really obvious, I boil them down to bare essentials.

European school. The Europeans emphasize the following key elements:

The region of emergent complexity defined by the 1st and 2nd critical values.

They typically begin with Bénard cells. In a Bénard process (1901), 'critical values' in the energy differential (measured as temperature, ΔT) between warmer and cooler surfaces of the container affect the velocity, R, of the air flow, which correlates with ΔT. Suppose the surfaces of the container represent the hot surface of the earth and the cold upper atmosphere. The critical values divide the velocity of air flow in the container into three kinds (the standard formulation of the 1st critical value is sketched just after this list):

  1. Below the 1st critical value (the Rayleigh number), heat transfer occurs via conduction - gas molecules transfer energy by vibrating more vigorously against each other while remaining essentially in the same place;
  2. Between the 1st and 2nd critical values, heat transfer occurs via a bulk movement of air in which the gas molecules move between the surfaces in a circulatory pattern - the emergent Bénard cells. We encounter these in aircraft as up- and downdrafts; and
  3. Above the 2nd critical value a transition to chaotically moving gas molecules is observed.
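
For reference, the 1st critical value in this setting is conventionally expressed through the Rayleigh number - a standard fluid-dynamics formulation that the article leaves implicit:

$$ \mathrm{Ra} \;=\; \frac{g \, \beta \, \Delta T \, L^{3}}{\nu \, \alpha} $$

where g is gravitational acceleration, β the fluid's thermal expansion coefficient, ΔT the imposed temperature differential, L the depth of the fluid layer, ν the kinematic viscosity, and α the thermal diffusivity. Bénard cells emerge once Ra exceeds a critical value (about 1708 for rigid top and bottom surfaces): it is the externally imposed energy differential ΔT, not any property of the individual molecules, that tips the system into emergent order.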
Newtonian complexity exists when the information necessary to describe the system is less complex than the system itself. Thus a rule such as F = ma = m·d²s/dt² is much simpler in information terms than a description of the myriad states, velocities, and acceleration rates pursuant to understanding the force of a falling object. "Systems exhibiting subcritical [Newtonian] complexity are strictly deterministic and allow for exact prediction" (Cramer, 1993: 213). They are also "reversible," allowing retrodiction as well as prediction and thus making the "arrow of time" irrelevant (Eddington, 1930; Prigogine & Stengers, 1984).

At the opposite extreme is stochastic complexity, where the description of a system is as complex as the system itself - the minimum number of information bits necessary to describe the states equals the complexity of the system. Cramer lumps chaotic and stochastic systems into this category, although deterministic chaos is recognized as fundamentally different from stochastic complexity (Morrison, 1991), since the former is 'simple rule' driven whereas stochastic systems are random, though varying in their stochasticity. Thus, three kinds of stochastic complexity are recognized: purely random, probabilistic, and deterministic chaos. For this essay I narrow stochastic complexity to deterministic chaos, at the risk of oversimplification.

In between, Cramer puts emergent complexity. The defining aspect of this category is the possibility of emergent simple deterministic structures fitting Newtonian complexity criteria, even though the underlying phenomena remain in the stochastically complex category. It is here that natural forces ease the investigator's problem by offering intervening objects as 'simplicity targets', the behavior of which lends itself to simple-rule explanation. Cramer (1993: 215-217) has a long table categorizing all kinds of phenomena according to his scheme.

Table 1 Definitions of Kinds of Complexity by Cramer (1993). For mnemonic purposes I use 'Newtonian' instead of Cramer's 'subcritical', 'stochastic' instead of 'fundamental', and 'emergent' instead of 'critical' complexity.

Since Bénard (1901), fluid dynamicists (Lagerstrom, 1996) have focused on the 1st critical value, Rc1 - the Rayleigh number - that separates laminar from turbulent flows. Below the 1st critical value, viscous damping dominates so self-organized emergent (new) order does not occur; above the Rayleigh number inertial fluid motion dynamics occur (Wolfram, 2002: 996). Ashby, in his book, Design for a Brain (1960), described functions that, after a certain critical value is reached, jump into a new family of differential equations, or as Prigogine would put it, jump from one family of "Newtonian" linear differential equations describing a dissipative structure to another family. Lorenz (1963), followed by complexity scientists, added a second critical value, Rc2. This one separates the region of emergent complexity from deterministic chaos - the so-called "edge of chaos." Together, the 1st and 2nd critical values define three kinds of complexity (Cramer, 1993; see Table 1):

Newtonian |Rc1| Emergent |Rc2| Chaotic

Elsewhere, I have reviewed a number of theories about causes of emergent order in physics and biology, some of which have been extended into the econosphere (McKelvey, 2001c, 2004a). Kelso, Ding and Schöner (1992) offer the best synthesis of the European school:

“Control parameters, Ri, externally influenced, create R > Rc1 with the result that a phase transition (instability) approaches, degrees of freedom are enslaved, and order parameters appear, resulting in similar patterns of order emerging even though underlying generative mechanisms show high variance.”
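
The control-parameter logic can be made concrete with a toy analogue of my own (not a model from the synergetics literature): the logistic map, with the growth rate r standing in for a control parameter R. Below a first critical value the system settles to a single fixed point; past it, qualitatively new ordered patterns (oscillations) emerge; past a second critical value (r ≈ 3.57) behavior turns deterministically chaotic - a rough parallel to the Rc1 / Rc2 picture.

```python
# Illustrative analogue (my construction): the logistic map x' = r*x*(1-x).
# Tuning the control parameter r - the rule itself never changes - moves
# the system from a fixed point, to emergent oscillation, to chaos.

def settled_states(r, x=0.5, transient=1000, keep=8):
    """Iterate past the transient, then report the distinct states visited."""
    for _ in range(transient):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        seen.add(round(x, 4))
    return sorted(seen)

for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r = {r}: {settled_states(r)}")
```

The rule never changes; only the externally tuned parameter does - Kelso, Ding and Schöner's point that control parameters, not new micro-rules, drive the qualitative change in emergent order.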

Equilibrium thinking and the 1st Law are endemic in evolutionary theory applications to economics and organization science. Equilibrium thinking, central tendencies, and the use of energy dynamics in independent variables to predict outcome variables are also endemic to organization science empirical methods, whether regression or econometric analyses. However, there now is a shift from the homogeneous agents of physics and mathematics to heterogeneous, self-organizing agents. As Durlauf (1997: 33) says, "A key import of the rise of new classical economics has been to change the primitive constituents of aggregate economic models: while Keynesian models employed aggregate structural relationships as primitives, in new classical models individual agents are the primitives so that all aggregate relationships are emergent." In this statement the 0th law is brought in more directly.

The application of the 0th law in socioeconomics rests with Haken's control parameters, the first two words in the Kelso, Ding and Schöner statement. The Ri adaptive tensions (McKelvey, 2004a, 2005) can appear in many different forms, from Jack Welch's famous phrase, "Be #1 or 2 in your industry in market share or you will be fixed, sold, or closed" (Tichy & Sherman, 1994: 108; somewhat paraphrased), to narrower tension statements aimed at technology, market, cost, or other adaptive tensions. Schumpeter (1942) long ago observed that entrepreneurs are particularly adept at uncovering tensions in the marketplace. The applied implication of the 0th law is that new order creation is a function of (1) control parameters, (2) adaptive tension, and (3) phase transitions motivating (4) agents' self-organization. Take away any of these and order creation stops.

American school. The Americans emphasize the following:

The American complexity literature focuses on positive feedback, power laws, and small instigating effects. Gleick (1987) details chaos theory and its focus, ever since the founding paper by Lorenz (1963), on the so-called butterfly effect (the fabled story of a butterfly flapping its wings in Brazil causing a storm in North America) and on aperiodic behavior. Bak (1996) reports on his discovery of self-organized criticality - a power law event - in which small initial events can lead to complexity cascades of avalanche proportions. Arthur (1990, 2000) focuses on positive feedbacks stemming from initially small instigation events. Casti (1994) and Brock (2000) continue the emphasis on power laws. The rest of the Santa Fe story is told in Lewin (1992/1999). In their vision, positive feedback is the 'engine' of complex system adaptation. American complexity scientists tend to focus on Rc2 - the edge of chaos (Lewin, 1992; Kauffman, 1993, 2000; Brown & Eisenhardt, 1998), which defines the upper bound of the region of emergent complexity. What happens at Rc1 is better understood; what happens at Rc2 is more obscure. The 'edge of chaos', long a Santa Fe reference point (Lewin, 1992), is now in disrepute, however (Horgan, 1996: 197).
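
Bak's mechanism is easy to exhibit directly. The sketch below (my construction, not from the article) implements the Bak-Tang-Wiesenfeld sandpile, the canonical model of self-organized criticality: identical single grains are dropped at random, and any cell reaching four grains topples one grain to each neighbor, with grains at the edges falling off.

```python
# Minimal Bak-Tang-Wiesenfeld sandpile (a sketch, not production code):
# small, identical drops mostly do nothing, but occasionally trigger
# complexity cascades of avalanche proportions.
import random
from collections import Counter

N = 20                                   # side length of the grid
grid = [[0] * N for _ in range(N)]

def drop():
    """Drop one grain at a random site, topple to stability, return avalanche size."""
    r, c = random.randrange(N), random.randrange(N)
    grid[r][c] += 1
    size = 0
    unstable = [(r, c)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:               # may already have toppled
            continue
        grid[i][j] -= 4                  # topple: one grain to each neighbor
        size += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < N and 0 <= nj < N:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return size

sizes = Counter(drop() for _ in range(50000))
for s in (1, 2, 4, 8, 16, 32):
    print(f"avalanches of size {s:2d}: {sizes[s]}")
```

Most drops topple nothing; a few cascade across much of the grid. Tallied on log-log axes, the avalanche sizes fall roughly on a straight line - the power-law signature that Bak, Casti, and Brock emphasize.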

In a truly classic paper, Maruyama (1963) discusses mutual causal processes mostly with respect to biological coevolution. He distinguishes between the "deviation-counteracting" negative feedback most familiar to general systems theorists (Buckley, 1967) and "deviation-amplifying" positive feedback processes (Milsum, 1968). Boulding (1968) and Arthur (1990, 2000) focus on 'positive feedbacks' in economies. Negative feedback control systems such as thermostats are most familiar to us. Positive feedback effects emerge when a microphone is placed near a speaker, resulting in a high-pitched squeal. Mutual causal or coevolutionary processes are inherently nonlinear - large-scale effects may be instigated by tiny initiating events, as noted by Maruyama (1963), Gleick (1987), and Ormerod (1998).

It is not hard to find evidence of positive feedback instigating mutual causal behavior in organizations. The earliest discoveries date back to Roethlisberger and Dickson (1939) and Homans (1950) - both dealing with the mutual influence of agents (members of informal groups), the subsequent development of groups, and the emergence of strong group norms that feed back to sanction agent behavior. Much of the discussion by March and Sutton (1997) focuses on the problems arising from the use of simple linear models for measuring performance - problems all due to mutual causal behavior of firms and agents within them. In a recent study of advanced manufacturing technology (AMT), Lewis and Grimes (1999) use a multiparadigm (postmodernist) approach. They study AMT from all of the four paradigms identified by Burrell and Morgan (1979). With each lens, that is, no matter which lens they use, they find evidence of mutual causal (positive feedback type coevolutionary) behavior within firms. Many of the articles in the Organization Science special issue on coevolution (Lewin & Volberda, 1999) report evidence of microcoevolutionary behavior in organizations. Finally, a number of very recent studies of organization change show much evidence of coevolution between organization and environment and within organizations as well (Erakovic, 2002; Meyer & Gaba, 2002; Kaminska-Labbe & Thomas, 2002; Morlacchi, 2002; Siggelkow, 2002).

Both European and American perspectives are important. Phase transitions are often required to overcome the threshold-gate effects characteristic of most human agents - they don't interact and react to just anything. This in turn requires the adaptive tension driver to rise above Rc1 - which defines the threshold gate. Once these stronger than normal instigation effects overcome the threshold gates, then, assuming the other requirements are present (heterogeneous, adaptive learning agents, and so forth), positive feedback may start. Neither perspective seems both necessary and sufficient by itself, especially in social settings. External force effects and internal positive feedback processes are "co-producers," to use Churchman and Ackoff's (1950) term.

New method: Epistemology of computational models

Understanding how and why new structural order emerges in social systems has been at the core of management studies, sociology, and organization science for many decades, as evidenced by the following:

Studies of Italian industrial districts (Curzio & Fortis, 2002).

These “thick-description” studies (Geertz, 1973) exemplify the enduring importance of emergence in the study of organizations.

In contrast, various kinds of organizational order creation have also been studied via agent models to explain such phenomena as organizational learning (Carley, 1992; Carley & Harrald, 1997; Carley & Hill, 2001), organization design (Carley, et al., 1998; Levinthal & Warglien, 1999), network structuring (Carley, 1999b), organizational evolution (Carley & Svoboda, 1996; Morel & Ramanujam, 1999), and strategic adaptation (Carley, 1996; Gavetti & Levinthal, 2000; Rivkin, 2001), to name just a few.

A fundamental difference between virtually all of the order-creation studies mentioned in the foregoing bullets, as compared with the studies mentioned in the previous paragraph, is that most of the latter are based on computational experimental results whereas the former are not. This is just by way of illustrating that most of what our field has traditionally claimed to be true about organizational management and design is based on site visits (interviews and observations) and narrative studies, or data collected from the field after the fact and then analyzed via some kind of correlational method. Truth claims based on nonexperimental methods are notoriously suspect and generally fall outside the realm of long-accepted bases for asserting truth (Hooker, 1995; McKim & Turner, 1997; Curd & Cover, 1998), despite attempts to the contrary (Pearl, 2000).

In social and organizational science there is now a long list of people complaining about the “thin” (Geertz, 1973) ontological view of most experiments and statistical analyses and the wrong or inappropriate ontological view of organizational phenomena, an argument advocated by organizational post-positivists (Berger & Luckmann, 1966; Silverman, 1970; Lincoln, 1985) and more recently postmodernists (Reed & Hughes, 1992; Hassard & Parker, 1993; Alvesson & Deetz, 1996; Burrell, 1996; Chia, 1996; Marsden & Townley, 1996). Postmodernism, however, is now criticized as being anti-science (Holton, 1993; Norris, 1997; Gross & Levitt, 1998; Koertge, 1998; Sokal & Bricmont, 1998; McKelvey, 2003). Much of the fuel feeding the anti-experimental perspectives in organization science lies with the difficulty of setting up ontologically correct organizational experiments. This said, there is no escaping the centrality of models in effective science. Computational agent-based models have the threefold advantage that they are models, they are experiments, and they allow the study of order creation.

Model-centered science

Much has changed since the ceremonial death of logical positivism and logical empiricism at the Illinois Symposium in 1969, adroitly described in Suppe's second edition of The Structure of Scientific Theories (1977) - the epitaph on positivism. Parallel to the fall of logical positivism and logical empiricism, we see the emergence of the Semantic Conception of Theories (Suppe, 1977). Suppe (1989: 3) says, "The Semantic Conception of Theories today probably is the philosophical analysis of the nature of theories most widely held among philosophers of science." Semantic Conception epistemologists observe that scientific theories never represent or explain the full complexity of some phenomenon. A theory (1) "does not attempt to describe all aspects of the phenomena in its intended scope; rather it abstracts certain parameters from the phenomena and attempts to describe the phenomena in terms of just these abstracted parameters" (Suppe, 1977: 223); (2) assumes that the phenomena behave according to the selected parameters included in the theory; and (3) is typically specified in terms of its several parameters with the full knowledge that no empirical study or experiment could successfully and completely control all the complexities that might affect the designated parameters (see also Mitchell, 2004). Models comprise the core of the Semantic Conception. Its view of the theory-model-phenomena relationship is: (1) Theory, model, and phenomena are viewed as independent entities; (2) Science is bifurcated into two related activities, analytical and ontological, where theory is indirectly linked to phenomena via the mediation of models. The view presented here - with models centered between theory and phenomena, which sets them up as autonomous agents - follows from Morgan and Morrison's (2000) thesis. The course of science is as much governed by its choice of modeling technology as it is by theory and data.

Analytical Adequacy focuses on the theory-model link. It is important to emphasize that in the Semantic Conception 'theory' is always expressed via a model. 'Theory' does not attempt to use its 'If A, then B' epistemology to explain 'real-world' behavior. It only explains 'model' behavior. It does its testing in the isolated idealized world of the model (Mitchell, 2004). A mathematical or computational model is used to structure up aspects of interest within the full complexity of the real-world phenomena and defined as 'within the scope' of the theory. Then the model is used to elaborate the 'If A, then B' propositions of the theory to consider how a social system - as modeled - might behave under various conditions.

Ontological Adequacy focuses on the model-phenomena link. Developing a model's ontological adequacy runs parallel with improving the theory-model relationship. How well does the model represent real-world phenomena? How well does an idealized wind-tunnel model of an airplane wing represent the behavior of a full sized wing in a storm? How well might a computational model from biology, such as Kauffman's (1993) NK model, that has been applied to firms, actually represent coevolutionary competition in, say, the laptop computer industry? Developing ontological adequacy therefore involves identifying various coevolutionary structures - that is, behaviors that exist in some domain - and building these effects into the model as dimensions of the phase-space. If each dimension in the model adequately represents an equivalent behavioral dimension in the real world, the model is deemed ontologically adequate.
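
Since the NK model is the benchmark case here, a minimal sketch of its logic may help. This is my own paraphrase in code, not Kauffman's implementation, and the helper names are mine: N binary traits each draw a fitness contribution that depends on the trait itself plus K other traits, so raising K couples the traits and makes the landscape more rugged.

```python
# A sketch of the NK fitness-landscape logic (my paraphrase, not
# Kauffman's code): N traits, each trait's contribution fixed per
# configuration of itself plus its K neighbors.
import random

N, K = 10, 3
random.seed(1)
tables = [{} for _ in range(N)]   # lazily built contribution tables

def fitness(bits):
    """Mean of the N traits' contributions, each fixed per local configuration."""
    total = 0.0
    for i in range(N):
        key = tuple(bits[(i + j) % N] for j in range(K + 1))  # self + K neighbors
        total += tables[i].setdefault(key, random.random())
    return total / N

def adaptive_walk(bits):
    """Hill-climb by single-trait flips until no flip improves fitness."""
    while True:
        best, best_fit = None, fitness(bits)
        for i in range(N):
            trial = bits[:]
            trial[i] ^= 1                 # flip one trait
            f = fitness(trial)
            if f > best_fit:
                best, best_fit = trial, f
        if best is None:
            return bits, best_fit         # stuck on a local peak
        bits = best

start = [random.randint(0, 1) for _ in range(N)]
peak, fit = adaptive_walk(start)
print("local peak:", peak, "fitness: %.3f" % fit)
```

With K = 0 every walk reaches the single global peak; as K rises, walks from different starting points strand on different, typically lower, local peaks - the ruggedness logic borrowed by the applications to firms cited above. Whether those borrowed dimensions adequately mirror, say, laptop-industry coevolution is precisely the ontological-adequacy question.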

These kinds of coevolution, therefore, result in credible, i.e., more probable, science-based truth claims: (1) Theory-model coevolution; (2) Model-phenomena coevolution; and (3) The coevolution of both 1 and 2.

Agent-based computational experiments

Experiments are a continuing legacy of positivism and remain at the core of the new scientific realist aspect of philosophy of science (Bhaskar, 1975/1997). They are the standard against which other forms of truth-claims are compared (McKim & Turner, 1997), and they remain a cornerstone of modern philosophical debate (Curd & Cover, 1998). Experiments continue as the fundamental method of determining causal relations because they are the only method of clearly adding, deleting, or otherwise altering a variable to see if results change (Lalonde, 1986) - what Bhaskar (1975) calls a “contrived invariance” (see also Hooker, 1995).

The problem for organization scientists is that ‘real’ organizations can seldom if ever be recreated in a laboratory. Further, even when such attempts are made (Carley, 1996; Contractor, et al., 2000), the experiments are very thin replications of organizational complexity. Most importantly, it is very difficult, if not impossible, to delete with certainty potentially causal behavioral ‘rules’ that experimental subjects might be following. In addition, human organizational experiments are usually subject to time, size, number of rules actually ‘wiggled’, and organizational realism limitations. Finally, and perhaps most importantly, the study of emergent order in human experiments is very difficult: multiple replication conditions (such as environmental context effects and multiple rules held by subjects) have to be controlled, organizationally relevant path dependencies should be part of the experimental design, long enough duration for social structural emergence to occur needs to be allowed, a statistically relevant number of replications should be conducted, and so on.

Agent-based computational experiments offer organization scientists a virtual laboratory in which to test theorized causal effects of organizational path dependencies, given varying environmental resources and constraints; artificial subjects (the agents) governed by known - and only these - rules; time periods long enough to allow agents and rules to change and emergent new structures (order) to appear; and a sufficient number of similar replications to allow a statistically relevant sampling. Needless to say, computational experiments have their own set of limitations. Not least of these have been the limitations of computers for handling large combinatorial spaces, agents with a sufficient number of governing rules, and lack of designed-in organizational complexities. Furthermore, much of the complexity of real organizations and current organizational theories does not show up in the models, especially earlier ones. One might reasonably conclude that most organization-relevant agent-based models have little bearing on organizational reality. On the other hand, one might also conclude that, appropriately, the models started with simpler, more stylized aspects of organizational functioning but that, studied over time, there is a progression toward improved replicational reality. Math modeling has about a 300-year lead over agent-based computational modeling!
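
A deliberately toy sketch (hypothetical code of my own, not a published model) shows the virtual-laboratory design in miniature: artificial agents governed by known - and only these - rules, one causal rule the experimenter can switch off ('wiggle'), and many replications per condition so emergent outcomes can be compared statistically.

```python
# Toy computational experiment (hypothetical): does an imitation rule
# cause beliefs to converge? Run many replications per condition and
# compare the emergent outcome across conditions.
import random

def run_once(n_agents=100, steps=2000, imitation_on=True, seed=0):
    """One experimental run; returns the final variance of agent beliefs."""
    rng = random.Random(seed)
    beliefs = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if imitation_on:
            # the causal rule under test: agent i moves halfway toward agent j
            beliefs[i] += 0.5 * (beliefs[j] - beliefs[i])
        beliefs[i] = min(1.0, max(0.0, beliefs[i] + rng.gauss(0, 0.01)))
    mean = sum(beliefs) / n_agents
    return sum((b - mean) ** 2 for b in beliefs) / n_agents

for condition in (True, False):
    runs = [run_once(imitation_on=condition, seed=s) for s in range(30)]
    print(f"imitation {'on ' if condition else 'off'}: "
          f"mean belief variance {sum(runs) / len(runs):.4f}")
```

Because the two conditions differ only in the rule itself - everything else, including the random seeds, is held fixed - any systematic difference in the emergent outcome is attributable to that rule: Bhaskar's 'contrived invariance', in silico.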

Agent-based modeling experiments are relatively new to organization studies, though some early examples exist (Cohen, et al., 1972; March, 1991). Computational models allow investigators to play out the nuances of theories over time. They also allow much clearer determinations of causal effects by allowing causal variables to be wiggled. Consequently, models offer a superb context for theory development. Ilgen and Hulin (2000: 7) go so far as to label computational modeling experiments as the “third discipline” - human experiments and correlation studies being the first two.

The use of agent-based models in social science has increased over the past decade. Agents can be at any level of analysis: atomic particles, molecules, genes, species, people, firms, and so on. The distinguishing feature is that the agents are not uniform. Instead they are probabilistically idiosyncratic (McKelvey, 1997). Therefore, at the level of human behavior, they fit the postmodernists' ontological assumptions. Using heterogeneous agent-based models is the best way to 'marry' postmodernist ontology with model-centered science and the current epistemological standards and assumptions of effective modern sciences - specifically complexity science (Henrickson & McKelvey, 2002; McKelvey, 2002, 2003). There are no homogeneity, equilibrium, or independence assumptions. Agents may change the nature of their attributes and capabilities along with other kinds of learning. They may also create network groupings or other higher-level structures, i.e., new order.

The study by Contractor, et al. (2000) (discussed later) is a good example demonstrating two of the elements that improve the justification-logic credentials of agent-based modeling. First, this paper is particularly notable because each of its ten agent rules is grounded in existing empirical research. The findings of each body of research, clouded as they are by errors and statistics, are reduced to idealized, stylized facts that then become agent rules. The second justification approach in this study is that the model parallels a real-world human experiment. Their results focus on the degree to which the composite model and each of the ten agent rules predict the outcome of the experiment - some do, some don't. Another approach, with a much more sophisticated simulation model, is one by LeBaron (2000, 2001). In this study, LeBaron shows that the baseline model "...is capable of quantitatively replicating many features of actual financial markets" (p. 19). Here the emphasis is mostly on matching model outcome results to real-world data rather than basing agent rules on stylized facts. A more sophisticated match between agent model and human experiment is one designed by Carley (1996). In this study the agent model and people were given the same task. While the results do offer a test of model vs. real-world data, the comparison also suggests many analytical insights about organization design and employee training that only emerge from the juxtaposition of the two different experimental methods.

New ‘autonomous agent’ effects of models on the course of science

Models as autonomous agents: There can be little doubt that mathematical models have dominated science since Newton. Further, mathematically constrained language (logical discourse), since the Vienna Circle circa 1907, has come to define good science in the image of classical physics. Indeed, mathematics is good for a variety of things in science, but especially, it plays two key roles. In logical positivism - which morphed into logical empiricism (Suppe, 1977) - math supplied the logical rigor aimed at assuring the truth integrity of analytical (theoretical) statements. As Read (1990) observes, the use of math for finding "numbers" actually is less important in science than its use in testing for rigorous thinking. But, as is wonderfully evident in the various chapters in the Morgan and Morrison (2000) anthology, math is also used as an efficient substitute for iconic models in building up a 'working' model valuable for understanding how an aspect of the phenomena under study behaves (the empirical roots of a model) and / or for better understanding the interrelation of the various elements comprising a transcendental realist explanatory theory (the theoretical roots).

Traditionally, a model has been treated as a more or less accurate “mirroring” of theory or phenomena (Cartwright, 1983) - as a billiard ball model might mirror atoms. In this role it is a sort of ‘catalyst’ that speeds up the course of science but without altering the chemistry of the ingredients. Morgan and Morrison (2000) take dead aim at this view, however, showing that models are autonomous agents that can, indeed, affect the chemistry. It is perhaps best illustrated in a figure supplied by Boumans, (2000). He observes that

Cartwright, in her classic 1983 book, “...conceive[s] models as instruments to bridge the gap between theory and data.” Boumans gives ample evidence that many ingredients influence the final nature of a model. Ingredients impacting models are metaphors, analogies, policy views, empirical data, math techniques, math concepts, stylized facts, and theoretical notions. Boumans’s analyses are based on business cycle models by Kalecki, Frisch and Tinbergen in the 1930s and Lucas, (1972) that clearly illustrate the warping resulting from ‘mathematical molding’ for mostly tractability reasons and the influence of the various non-theory and non-data ingredients.

Models as autonomous agents, thus, become so both from (1) math molding and (2) influence by all the other ingredients. Since the other ingredients could reasonably influence agent-based models - as formal, symbol-based models - just as they do math models, and since math models dominate formal modeling in social science (mostly in economics), I now focus only on the molding effects of math models rooted in classical physics. As is evident from the four previously mentioned business cycle models, Mirowski’s, (1989) broad discussion (not included here), and Read’s (1990) analysis (below), the math molding effect is pervasive. Much of the molding effect of math as an autonomous model / agent, as developed in classical physics and economics, rests on three heroic assumptions: (1) Mathematicians in classical physics made the ‘instrumentally convenient’ homogeneity assumption. This made the math more tractable; (2) Science in general, and in the social sciences especially econometricians (Greene, 2002), assume independence among agents (data points); and (3) Physicists principally studied phenomena under the governance of the 1st Law of Thermodynamics and, within this Law, made the equilibrium assumption. Here the math model accounted for the translation of order from one form to another and presumed all phenomena varied around equilibrium points13.

Math’s molding effects on sociocultural analysis: Read’s (1990) analysis of the applications of math modeling in archaeology illustrates how the classical physics roots of math modeling and the needs of tractability give rise to assumptions that are demonstrably antithetical to a correct understanding, modeling, and theorizing of human social behavior. Though his analysis is ostensibly about archaeology, it applies generally to sociocultural systems. Most telling are assumptions he identifies that combine to show just how much social phenomena have to be warped to fit the tractability constraints of rate studies framed within the math-molding process of calculus. They focus on universality, stability, equilibrium, external forces, determinism, and global dynamics at the expense of individual dynamics.

Given the molding effect of all these assumptions it is especially instructive to quote Read, the mathematician, worrying about equilibrium-based mathematical applications to archaeology and sociocultural systems:

  1. In linking “empirically defined relationships with mathematically defined relationships... [and] the symbolic with the empirical domain... a number of deep issues... arise... These issues relate, in particular, to the ability of human systems to change and modify themselves according to goals that change through time, on the one hand, and the common assumption of relative stability of the structure of... [theoretical] models used to express formal properties of systems, on the other hand... A major challenge facing effective - mathematical - modeling of the human systems considered by archaeologists is to develop models that can take into account this capacity for self-modification according to internally constructed and defined goals.”
  2. “In part, the difficulty is conceptual and stems from reifying the society as an entity that responds to forces acting upon it, much as a physical object responds in its movements to forces acting upon it. For the physical object, the effects of forces on motion are well known and a particular situation can, in principle, be examined through the appropriate application of mathematical representation of these effects along with suitable information on boundary and initial conditions. It is far from evident that a similar framework applies to whole societies.”
  3. “Perhaps because culture, except in its material products, is not directly observable in archaeological data, and perhaps because the things observable are directly the result of individual behavior, there has been much emphasis on purported ‘laws’ of behavior as the foundation for the explanatory arguments that archaeologists are trying to develop. This is not likely to succeed. To the extent that there are ‘laws’ affecting human behavior, they must be due to properties of the mind that are consequences of selection acting on genetic information... ‘laws’ of behavior are inevitably of a different character than laws of physics such as F = ma. The latter, apparently, is fundamental to the universe itself; behavioral ‘laws’ such as ‘rational decision making’ are true only to the extent to which there has been selection for a mind that processes and acts upon information in this manner... Without virtually isomorphic mapping from genetic information to properties of the mind, searching for universal laws of behavior... is a chimera.”

Common throughout these and similar statements are Read’s observations about “the ability of [reified] human systems to change and modify themselves,” be “self-reflective,” respond passively to “forces acting” from outside, “manipulation by subgroups,” “self-evaluation,” “self-reflection,” “affecting and defining how they are going to change,” and the “chimera” of searching for “behavioral laws” reflecting the effects of external forces.

Just as the social sciences lagged behind when math was the supreme modeling approach, they are also lagging in their transition to agent-based models. Though citation rates may have picked up more recently, in 1997 there were some 18,000 natural science cites to nonlinear computational modeling, but only around 180 in economics and nearly 40 in sociology (Henrickson, 2004). As Henrickson and McKelvey (2002: 7288) wonder:

“How can it be that sciences founded on the mathematical linear determinism of classical physics have moved more quickly toward the use of nonlinear computer models than economics and sociology - where those doing the science are no different from social actors - who are the Brownian Motion?”

Computational modeling in organization science14

Lichtenstein and McKelvey (2005) note that there are over 300 agent-based models having relevance to organization studies. Maguire, et al. (forthcoming) list 15 applications of just Kauffman’s (1993) NK model to organizational phenomena. Just to give credibility to the idea of agent models being applied to organizations, I briefly describe some examples below. As Lichtenstein and McKelvey (2005) observe, most models generate emergent networks; some generate emergent groups and supervenience effects; even fewer generate hierarchies and only two stretch a bit beyond these minimal stages of organizational order.

Cellular automata search grids: The oldest agent-based model is referred to as a cellular automaton (CA) model. Agents exist in a search space whose size depends on the number of agents and rules. The search space is typically depicted as consisting of hills and valleys with, for example, higher agent fitness or intelligence represented as a peak. Agents having highest fitness may be scattered randomly across the space and separated by numerous agents having lower fitness. A particular agent, thus, runs the risk of ending up on a suboptimal peak during the course of its search attempts over some number of time periods. Usually agent interactions are limited to their ‘nearest neighbors’ - those agents directly adjacent to a specified agent. Agents usually have one output decision, depending on a couple of input signals. Often they only have one governing rule. As agents and rules increase, the search space grows geometrically, as does computer processing time. One of the most interesting CA applications is Kauffman’s NK model.
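The core mechanics - local search over a rugged space, with the attendant risk of stopping on a suboptimal peak - can be sketched in a few lines. This is a deliberately stripped-down, one-dimensional illustration of my own, not any published CA model:

```python
import random

random.seed(1)
SIZE = 100
# A rugged 'landscape': each grid position gets a random fitness value.
fitness = [random.random() for _ in range(SIZE)]

def climb(position):
    """Nearest-neighbor search: move to the fitter adjacent cell until no
    neighbor is better -- i.e., until stuck on a (possibly local) peak."""
    while True:
        left, right = (position - 1) % SIZE, (position + 1) % SIZE
        best = max((left, right, position), key=lambda p: fitness[p])
        if best == position:
            return position          # a peak, but maybe not the global one
        position = best

peaks = {climb(p) for p in range(SIZE)}
global_best = max(range(SIZE), key=lambda p: fitness[p])
found = sum(climb(p) == global_best for p in range(SIZE))
print(f"{len(peaks)} local peaks; {found} of {SIZE} starts reach the global optimum")
```

Most starting positions end up trapped on one of the many local peaks - exactly the suboptimality risk described above.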

Modeling tunable NK landscapes: Kauffman’s (1993) NK Fitness Landscape, a well-known application of CA models, simulates a co-evolutionary process in which both the individual agents and the degree of interdependency between them are modeled over time. N refers to the number of agents in the model, and K refers to the density of agent interactions. According to the model, an agent’s adaptive fitness depends on its ability to identify and ‘climb to’ the highest level of fitness of its neighbors. However, due to the nearest-neighbor search limitation, an agent surrounded by neighbors all having lower fitness levels gets trapped on a local optimum that may be well below the highest system-wide optimum. Note that the agents are not independent of each other.

As individual agents change, they affect all other agents, thus altering some aspects of the nearest-neighbor landscape itself. In this way, the level of complexity ‘tunes’ the agents’ search landscape by altering the number and height of peaks and depths of valleys they encounter. It turns out that the degree of order in the overall landscape crucially depends on the level of K, the degree of system-wide interdependence, that is, complexity (Kauffman, 1993). According to Kauffman, as complexity increases, the number of peaks vastly increases in the landscape, while the difference between peaks and valleys diminishes, such that even though the pressure of Darwinian selection persists, emergent order cannot be explained by selection effects. He terms it complexity catastrophe. Instead, a moderate amount of complexity creates optimal rugged landscapes, which lead to the highest system-wide fitness levels.
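A minimal NK implementation makes the ‘tuning’ role of K visible: each of N elements draws its fitness contribution from a table keyed on its own state and the states of K others, so raising K multiplies the interdependencies and roughens the landscape. The sketch below is my own simplified rendering of the model’s standard logic, not Kauffman’s code; an adaptive walk by one-bit flips then gets caught on a local optimum, as described above:

```python
import random

random.seed(2)
N, K = 12, 3     # K 'tunes' ruggedness: K = 0 is smooth, K = N-1 is chaotic

# Each element i depends on itself plus K randomly chosen other elements.
deps = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
table = {}       # random fitness contributions, generated lazily

def contribution(i, state):
    key = (i, state[i]) + tuple(state[j] for j in deps[i])
    if key not in table:
        table[key] = random.random()
    return table[key]

def fitness(state):
    return sum(contribution(i, state) for i in range(N)) / N

# Adaptive walk: accept any one-bit flip that improves fitness, until stuck.
state = tuple(random.randint(0, 1) for _ in range(N))
improved = True
while improved:
    improved = False
    for i in range(N):
        trial = state[:i] + (1 - state[i],) + state[i + 1:]
        if fitness(trial) > fitness(state):
            state, improved = trial, True
print(f"local optimum fitness with K={K}: {fitness(state):.3f}")
```

Rerunning with K near N−1 should show the catastrophe in miniature: many more local optima, with the fitness differences between them shrinking.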

Researchers have applied the NK model to business settings by exploring ways connectedness will bring an entire system to a higher level of fitness without locking it into a ‘catastrophe’ of interdependence (Levinthal, 1997). Moderate levels of interconnection can be achieved through modularization of the production process (Levinthal & Warglien, 1999), by keeping internal value chain interdependencies to levels just below their opponents’ (McKelvey, 1999), or by adopting strategies based on the industry-wide level of firm interdependence (Baum, 1999). Rivkin (2000, 2001) shows that moderate complexity prevents spillover effects while at the same time fostering intrafirm sharing of new knowledge. In an empirical test of the NK application to innovation, Fleming and Sorenson (2001: 1025) show “invention can be maximized by working with a large number of components that interact to an intermediate degree.”

Design sequencing: Siggelkow and Levinthal (2003) use Kauffman’s NK model to tease out some of the dynamics arising when firms mix centralization-decentralization or exploration-exploitation designs. They model performance results stemming from three structural designs: unchanging centralization, unchanging decentralization, and “temporary decentralization with subsequent reintegration” - the latter termed “reintegrator firms.” Their key finding is the following:

In contrast to static “balance” approaches, they conclude: “…exploration and stability are not achieved simultaneously through distinct organizational features… but sequentially by adopting different organizational structures.” This model supports the Thomas, et al. (2005) case study.

Modeling learning rates: Yuan and McKelvey (2004) first dock15 their model against results from Kauffman’s prior work, replicating Kauffman’s original results to correlations of 0.976. They use his NK model to test the hypotheses that communication interactivity is nonlinearly related to both amount and rate of group learning over time. Kauffman’s complexity catastrophe effect applies here as well. They find that amount of group learning is a direct function of size, N, but is curvilinearly related to K - highest in the middle of an inverted U-shaped curve. They find that rate of learning is slowest in the middle of a U-shaped curve. However, density in communication interactivity is not independent of group size. Once they adjust for this effect via standardization of K by N-1, they find that the curvilinear effect disappears, but the catastrophe effect continues as a function of two linear variables: Rate of group learning remains a positive linear function of communication interactivity, but amount of learning becomes a negative linear function of interactivity density. Among other things, they also find that altering the distribution of communication by creating isolates and stars in groups has a statistically significant effect on the coevolutionary development of group-level learning over time.
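‘Docking’ - aligning one simulation against another to see whether they produce equivalent results (Axtell, et al., 1996) - reduces, mechanically, to correlating the two models’ outputs over the same parameter settings, as Yuan and McKelvey’s 0.976 figure reflects. A toy sketch of those mechanics, with two stand-in functions of my own in place of real models:

```python
from statistics import correlation   # Python 3.10+

def model_a(k):                      # stand-in for the established model
    return 1.0 / (1.0 + k)

def model_b(k):                      # stand-in for the model being docked
    return 1.0 / (1.05 + 0.98 * k)

ks = range(1, 20)
r = correlation([model_a(k) for k in ks], [model_b(k) for k in ks])
print(f"docking correlation: r = {r:.3f}")   # near 1.0 means the models 'dock'
```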

Holland’s genetic algorithms: An important advance in modeling emergence occurs through the use of genetic algorithms (GAs), invented by Holland (1975, 1995). GAs allow agents to learn and change over time by changing the rules governing their behavior: “Agents adapt by changing their rules as experience accumulates” (Holland, 1995: 10). Axelrod and Cohen (1999: 8) broaden the implications of GAs, asserting that:

“...each change of strategy by a worker alters the context in which the next change will be tried and evaluated. When multiple populations of agents are adapting to each other, the result is a coevolutionary process.”

In biological GAs, agents appear to ‘mate’ and produce ‘offspring’ that have different ‘rule-strings’ (genetic codes, blueprints, routines, competencies) as compared with their parents. In organizational applications, agents’ rule-strings change over time (i.e., across artificial generations) without agent replacement and without having ‘children’. The upward causal effects of agents are defined by these rules. Whereas CA models typically are limited to relatively few rules and agents - because the landscape grows geometrically each time each is added - GAs allow many agents to have many rules (Macy & Skvoretz, 1998). New rule-strings can have varying numbers of rules retained or recombined from prior agents’ rules, thus allowing the increased evolutionary fitness of complex processes such as decision-making and learning, along with recombinations of diverse skills. Two organizational GA applications are described next.
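Before turning to those applications, the basic GA loop - selection, recombination of parents’ rule-strings at a crossover point, and occasional mutation - can be sketched as follows. The fitness function and parameters here are placeholders of my own, chosen only to make the loop visible:

```python
import random

random.seed(3)
RULES = 20                               # length of each agent's rule-string

def fit(rules):                          # toy fitness: count of 'good' rules
    return sum(rules)

pop = [[random.randint(0, 1) for _ in range(RULES)] for _ in range(30)]

for generation in range(50):
    pop.sort(key=fit, reverse=True)
    survivors = pop[:15]                 # selection
    children = []
    while len(children) < 15:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, RULES) # recombine the parents' rule-strings
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:        # occasional mutation of one rule
            i = random.randrange(RULES)
            child[i] = 1 - child[i]
        children.append(child)
    pop = survivors + children

print("best rule-string fitness:", fit(max(pop, key=fit)))
```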

Simulated coordination models: Paul, et al. (1996) model adaptations to organizational structure by examining the adaptation of financial trading firms (groups). Their firms / groups survive in a financial market environment; they solidify out of networks to consist of one to nine constituent agents, and each agent has a different rule-set for buying and selling financial instruments or doing nothing. Firms may activate or deactivate their agents, or form networks of seemingly better performing agents from prior periods. In an efficient market performance climate with a 50% probability of success, their model firms beat the market 60% of the time. In this model the behavior of agents (components) can be altered by firm-level goals, thus allowing firms to better perform in the market environment. Further, due to the coevolution of up- and downward causality, results are not deducible from the initial agent configurations and rules.

Another model examines the classic proposition that coordination, while necessary to accomplish interdependent tasks, is costly. Crowston’s (1996) GA model tests this hypothesis by simulating organizations consisting of agents, arranged in subgroups, operating in a market, with variable task interdependency. This results in upward, downward, and horizontal causalities, i.e., causal intricacy. Bottom-level agents have to perform their tasks in a specific length of time; agents who coordinate may expedite their tasks, but the cost of coordination means a lessening of their time allotment according to the following rule: if an agent ‘talks’ to all the other agents all the time there is no time left to accomplish its tasks. Results show that organizations and / or their employee agents do in fact minimize coordination costs through organizing in particular ways. His study is an example of a GA model being used to test a classic normative statement by setting up a computational experiment that allows groups to emerge as appropriate. It also includes causal intricacy and coevolutionary causality (for comparison, see Thomas, et al., 2004).
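The underlying trade-off - each coordination contact improves the work but eats into the fixed time budget, with ‘talking to everyone all the time’ leaving no time for the task itself - can be captured in a few lines. The functional form and constants below are my own illustration, not Crowston’s model:

```python
# Toy coordination trade-off: each contact raises task quality, but talking
# consumes the fixed time budget available for actually doing the work.
TIME_BUDGET = 100.0
TALK_COST = 10.0      # time spent per coordination partner
GAIN = 0.15           # quality gain per partner

def output(contacts):
    work_time = max(0.0, TIME_BUDGET - TALK_COST * contacts)
    quality = 1.0 + GAIN * contacts
    return work_time * quality

best = max(range(11), key=output)
for c in range(11):
    flag = "  <-- optimum" if c == best else ""
    print(f"contacts={c:2d}  output={output(c):6.1f}{flag}")
```

Talking to all ten partners drives output to zero, while a moderate amount of coordination is optimal - the pattern Crowston’s far richer GA experiment recovers endogenously.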

Multi-level models: Quite possibly the most famous example of agent-based modeling is Epstein and Axtell’s Growing Artificial Societies (1996). Their model is called “Sugarscape.” They boil an agent’s behavior down to one simple rule: “Look around as far as your vision permits, find the spot with the most sugar, go there and eat the sugar” (p. 6). Agents search on a CA landscape, but they also ‘mate’, reproduce offspring, and come to hold genetic, identity, and culture identification tags according to a genetic algorithm. This model not only builds social networks; higher-level groups also emerge. These groups develop cultural properties; once cultures form they can supervene and alter the behavior and groupings of agents. Epstein and Axtell’s simulation includes four distinct levels: agents, groupings, cultures, and the overall Sugarscape environment. The Sugarscape elements include agents, emergent groups, higher-level groupings, emergent culture, multiple causalities, and environmental resources and constraints. Though theirs is ostensibly a model of an economy, it easily translates into the intraorganizational market economy highlighted in Halal and Taylor (1999).
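The single movement rule itself is almost directly codable. The sketch below implements just that rule for one agent on a toy grid - none of Sugarscape’s reproduction, tags, or cultural machinery - so it should be read as an illustration of the rule, not a reimplementation of the model:

```python
import random

random.seed(4)
SIZE = 20
sugar = [[random.randint(0, 4) for _ in range(SIZE)] for _ in range(SIZE)]

def step(x, y, vision=3):
    """Epstein & Axtell's rule: look as far as vision permits along the
    lattice axes, go to the spot with the most sugar, and eat the sugar."""
    candidates = [(x, y)]
    for d in range(1, vision + 1):
        candidates += [((x + d) % SIZE, y), ((x - d) % SIZE, y),
                       (x, (y + d) % SIZE), (x, (y - d) % SIZE)]
    nx, ny = max(candidates, key=lambda p: sugar[p[0]][p[1]])
    eaten, sugar[nx][ny] = sugar[nx][ny], 0
    return nx, ny, eaten

x, y, wealth = 5, 5, 0
for _ in range(10):
    x, y, gained = step(x, y)
    wealth += gained
print("agent wealth after 10 steps:", wealth)
```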

Carley and colleagues have produced some of the most sophisticated computational models to date. They have been validated against experimental lab studies (Carley, 1996) and archival data on actual organizations (Carley & Lin, 1995). The most distinctive feature of the Carley models is that agents have cognitive processing ability - individual agents and the organization as a whole can remember past choices, learn from them, and anticipate and project plans into the future. These models combine elements of CA, GA, and neural networks (for the latter, see Haykin, 1998), as does LeBaron’s (2000) stock market model.


In Carley’s CONSTRUCT (1991) and CONSTRUCT-O (Carley & Hill, 2001) models, simulated agents have a position or role in a social network and a mental model consisting of knowledge about other agents. Agents communicate and learn from others with similar types of knowledge. CONSTRUCT-O allows for the rapid formation of subgroups and the emergence of culture, which, when it crystallizes, supervenes to alter agent coevolution and search for improved performance. These models show the emergence of communication networks, the formation of stable hierarchical groups and the supervenience of group effects on component agent behaviors - network driven, groups solidify, and downward causality emerges.
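The driving mechanism - agents preferentially interacting with, and learning from, others who already know similar things - is easy to caricature in code. The following is a toy rendering of that homophily-and-learning loop of my own devising, in no sense Carley’s actual CONSTRUCT implementation:

```python
import random

random.seed(5)
N_AGENTS, N_FACTS = 30, 40

# Each agent starts out knowing a random subset of facts.
knows = [set(random.sample(range(N_FACTS), 10)) for _ in range(N_AGENTS)]

def similarity(i, j):
    return len(knows[i] & knows[j])

for _ in range(2000):
    i = random.randrange(N_AGENTS)
    # Partner chosen with probability weighted by shared knowledge:
    # agents communicate with others holding similar types of knowledge.
    weights = [similarity(i, j) + 0.01 if j != i else 0.0
               for j in range(N_AGENTS)]
    j = random.choices(range(N_AGENTS), weights=weights)[0]
    fact = random.choice(sorted(knows[j]))
    knows[i].add(fact)              # learning makes the pair still more similar

pairs = N_AGENTS * (N_AGENTS - 1) / 2
print("mean knowledge overlap:",
      sum(similarity(i, j) for i in range(N_AGENTS)
          for j in range(i + 1, N_AGENTS)) / pairs)
```

Because each interaction makes the pair more similar, interaction and knowledge coevolve - the positive feedback from which CONSTRUCT’s stable subgroups crystallize.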

Her four-level simulation (Carley & Lee, 1998; Carley, 1999a) consists of small groups of interacting workers (agents) led by an executive team that develops firm-level strategy based on environmental inputs, including decisions about design, workload, and personnel. This model allows for both the interaction of managerial downward influence, along with supervenience from emergent ‘informal’ norms and culture influencing agents - we see the beginning of causal intricacy since agents can be hit with two kinds of parallel causal flow. In addition, supervenience results from both structural and cultural effects - once groups emerge, they act to control who agents interact with, learn from, and so on, thereby altering subsequent coevolutionary emergence by agents. Once there is collective agreement on what is appropriate to be known, the emergent learning culture then supervenes to alter the subsequent knowledge-creation strategies of agents.

Using agent models16

To illustrate how an agent-based model-centered science works, consider a paper by Contractor, et al. (2000) using structuration theory (Giddens, 1984) to explain the origin of self-organizing networks. It is not axiomatic, nor does it offer more than a minimalist iconic model. Neither does it attempt to make a direct predictive leap from structuration-based hypotheses to real-world phenomena, noting that there are a “…multitude of factors that are highly interconnected, often via complex, non-linear dynamic relationships” (Contractor, et al., 2000: 4). Instead, the substructure elements are computationally combined into a model ‘composite outcome’ and this outcome is predicted to line up with real-world phenomena. The model-substructures are shown in Figure 1.

There are three key steps in the effective use of computational agent-based models: formalizing the theory into model-substructures (Step 1), testing the model for analytical adequacy against the theory (Step 2), and testing the model for ontological adequacy against real-world phenomena (Step 3).

The Contractor, et al. (2000), research implements Step 1 (see Figure 1), and begins Steps 2 and 3.

Step 2. The analytical adequacy test: Using the model to test out the several causal propositions generated by the theory. This involves several elements in the coevolution of the theory-model link. Contractor, et al. (2000) start with structuration theory’s recursive interactions among actors and contextual structure. Structuration and negotiated order are linked to network dynamics and evolution (Barley, 1990; Stokman & Doreian, 1997). Monge and Contractor (2001) identify ten generative mechanisms posited to cause emergent network dynamics. Contractor, et al. (2000) end with ten model-substructures - each a causal proposition - rooted in structuration theory and hypothesized to affect network emergence. Each rests on considerable research. These reduce to ten equations (Figure 1): seven exogenous factors, each represented as a matrix of actor interactions, and three endogenous factors with more complicated formalizations. For example, in the equation ΔCWij = Wij, the term ΔCWij is “the change in communication resulting from interdependencies in the workflow,” while Wij “is a workflow matrix and the cell entry Wij indexes the level of interdependence between individuals i and j” (p. 21).
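To fix ideas, the sketch below shows how substructure matrices of this kind can be combined computationally into one composite prediction. The matrix contents, the second substructure, and the weights are hypothetical stand-ins of my own; this is the general mechanic, not the Blanche model’s actual implementation:

```python
import random

random.seed(6)
N = 10   # number of actors

def rand_matrix():
    return [[random.random() if i != j else 0.0 for j in range(N)]
            for i in range(N)]

W = rand_matrix()   # workflow interdependence: substructure dCW[i][j] = W[i][j]
P = rand_matrix()   # hypothetical second substructure (e.g., proximity)

def composite(weights=(0.6, 0.4)):
    """Combine substructure predictions into one composite predicted change
    in the communication network -- the model's, not the theory's, output."""
    return [[weights[0] * W[i][j] + weights[1] * P[i][j] for j in range(N)]
            for i in range(N)]

C = composite()
print("predicted communication change, actor 0 -> actor 1:", round(C[0][1], 3))
```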

Contractor, et al. (2000) begin the lengthy process of theory-model coevolutionary resolution, but:

  1. Debate remains over which elements of structuration theory are worth formalizing;
  2. Not all generative mechanisms thought to cause network emergence are represented; additional theorizing could mean additions and / or deletions;
  3. Formalization of model-substructures could take a variety of expressions; and
  4. Their model, “Blanche,” is only one of many computational modeling approaches that could be used.

In short, it will take a research program iteratively coevolving these four developmental process elements over some period of time before theory, the derived set of formalized causal statements, and modeling technology achieve full credibility.

Step 3. The ontological adequacy test: Comparisons of model-substructures with functionally parallel real-world subprocesses. Empiricists are not held to the draconian objective of testing model-to-real-world isomorphism for all substructures at the same time - that is, matching the composite outcome of the model against equivalent real-world phenomena. Experience in classical physics shows that if each of the substructures is shown to be representative, then the whole will also refer. This means that model-phenomena tests may be conducted at the substructure or composite outcome levels.

The increased probability of nonlinear substructure effects (individually or in combination) in social science, however, demonstrates the increased importance of model-centered science. Given nonlinear substructure interactions, it is more likely that the model’s composite outcome will fare better in the ontological test. Contractor, et al. (2000) actually do both kinds of tests. In a quasi-experiment, they collect data pertinent to each of the model-substructures and to the composite outcome of the model. Their real-world sample consists of 55 employees measured at 13 points over two years. They do not test whether a specific model substructure predicts an equivalent subcomponent of the emergent network. For example, they do not test the relation between the model’s workflow interdependence matrix and the equivalent real-world matrix. They show, however, that each causal substructure has already been well tested in previous research. They find that the model’s composite outcome predicts the empirically observed emergent network. Furthermore, four of the ten substructures individually predict the observed emergent network at statistically significant levels.
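The skeleton of such a comparison - each substructure, and then the composite, checked against the observed network - looks roughly as follows. The data here are synthetic (deliberately wired so that one substructure genuinely matters), and a plain Pearson correlation over flattened matrices stands in for the more appropriate network-analytic statistics:

```python
import random
from statistics import correlation   # Python 3.10+

random.seed(7)
n = 200   # flattened cells of an actor-by-actor matrix

substructures = {name: [random.random() for _ in range(n)]
                 for name in ("workflow", "proximity", "friendship")}
# Synthetic 'observed' network: only the workflow substructure truly matters.
observed = [0.7 * substructures["workflow"][k] + 0.3 * random.random()
            for k in range(n)]
comp = [sum(s[k] for s in substructures.values()) / 3 for k in range(n)]

for name, s in substructures.items():
    print(f"{name:10s} vs observed: r = {correlation(s, observed):+.3f}")
print(f"{'composite':10s} vs observed: r = {correlation(comp, observed):+.3f}")
```

Some substructures predict, some don’t, and the composite falls in between - the same qualitative pattern Contractor, et al. report.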

Testing the model-phenomena link also involves several coevolutionary developments:

  1. Decompose the model into key constituent substructures, which may need further ontological testing;
  2. Identify equivalent generic functions in real-world phenomena, perhaps across a variety of quasi-experimental settings, presumably improving over time as well;
  3. Define the function of each substructure in generic real-world operational terms; here, too, improvement over time is expected;
  4. Test to see if (a) the model substructures are isomorphic with the real-world functions; and (b) if the model’s composite outcome represents real-world phenomena - both expected to develop interactively over time.

Needless to say, several empirical tests would be required before all aspects of the model are fully tested. In the Contractor, et al. (2000) study, six of the substructure expressions do not separately predict the real-world outcome. This could be because of the nonlinear interactions or because the substructures do not validly represent either theory or real-world phenomena in this instance. Thus, neither the analytical nor the ontological adequacy of the model is fully resolved. More generally, sensitivity analyses could test the presence or absence of specific substructures against changes in level of ontological adequacy. Furthermore, since theory and model coevolve toward analytical adequacy, it follows that these tests for ontological adequacy would have to be updated as the theory-model link coevolves.

With respect to testing theories bearing on knowledge era organizational dynamics, agent-based models have much to offer. They allow us to accomplish the following objectives:

  1. Formal modeling without having to assume away the essential character of postpositivist ontology: complexity, diversity, heterarchy (multiple orders and constraints), vast networks of connections, indeterminate social behaviors, mutual causality, and so forth - all the key elements of knowledge era dynamics;
  2. Extracting more plausibly true, potentially generalizable, and predictable theories from complicated case study narratives bound to a particular locality, context, time, and observer;
  3. Reducing initially complicated theories about a complex world to agent rules, in abstracted, idealized, agent-based model form, so as to study and model how agent rules lead to order creation and the formation of norms, hierarchy, institutional structure, supervenience, and the like;
  4. Seeing whether the analytical truth plausibility of theories may be improved by testing which of the various proposed elements of the theories work best in producing outcomes predicted by the theories, thereby leading toward the production of more elegant theories composed of fewer, but more fruitful, elements;
  5. Aiming for theories that have more empirical truth plausibility because they (a) more adequately represent the state-space of real-world firms; and (b) have been tested against real-world phenomena.

Forcing elegance on theories by the use of models offers simpler, theory-based, more plausibly true beliefs, and increasingly crystallized, more easily described messages for management researchers to take to practicing managers.

I suggest ten steps that researchers may take to bridge across the advantages of both thin- and thick-description research methods, that is, bridging across existing ‘thick’ case study narratives and existing ‘thin’ empirical approaches using correlation-based, longitudinal, regression or econometric analyses:

  1. Focus on ways to bridge from the richness of case-study narratives to more substantiated multi-causal theories;
  2. Develop order-creation theories incorporating multiple causes from various underlying disciplines that apply to organizations from inception to maturity;
  3. Develop theories allowing for the coevolution of causes, as described by Thomas, et al. (2004);
  4. Develop theory direction-and-application questions that extend theorizing from narrative(s) to more generalizable forms;
  5. Translate theories into model form - translate causes into agent rules, create agent activation and interaction regimens, time and space effects, etc.;
  6. Draw on stylized facts to define agent rules as much as possible - following the Contractor, et al. (2000) approach;
  7. Set up model procedures to explore the theory direction-and-application questions: How to simplify causes? What are mutual causal effects over time? What can be managed? What can be empirically researched? What aspects are specific and/or generalizable? And so on;
  8. Set up baseline-model outcomes that may be compared with real-world experiments and time-series effects;
  9. Use models to develop simplified theories that can then be tried out by managers and entrepreneurs; and
  10. Cycle through all of the foregoing steps, taking into account the coevolution of: (1) theory-model link; (2) model-phenomena link; and (3) coevolution of the various model parts.

Conclusion

I began this article by pointing out that traditional ‘normal’ science has long been defined by classical physics and carried over into social science - most obviously by neoclassical economics. Classical science focused solely on explaining equilibrium under the 1st Law of Thermodynamics. Especially because of the increasingly rapid change dynamics at the dawn of the 21st century, the necessity of updating strategies and organization designs to keep ahead of competitors dominates managers’ and researchers’ attention. This article sets up the rapid change problem, and shows why even evolutionary theory is not the best approach for explaining entrepreneurship and organizational change dynamics. We need both new theory and new methods.

If the study of biological order creation is put into fast motion, it seems clear that the dynamics of order creation are the result of geological events, and that Darwinian selection is largely a fine-tuning process toward equilibrium within a context of existing species and stable niches. Biological population ecology is a discipline defined completely within the framework of existing species and niches - it is even more short term than the time horizon of Darwinian selection. The new look at biological order-creation dynamics suggests that evolutionary theory is an awkward choice of theoretical approaches to apply to the study of entrepreneurship and strategic organizing responses to changing competitive environments (McKelvey, 2004a, b). The lesson from biology is that most of the true order-creation action is over before Darwinian theory approaches become relevant. As if this weren’t damaging enough, the theory of symbiogenesis developed by Lynn Margulis (1981; with Sagan, 2002; Ryan, 2002) offers a far more relevant nonequilibrium theory for organization studies than Darwin’s.

A new kind of science is called for - one based on order creation rather than the equilibrium- and mathematics-dominated theories and methods of classical physics and neoclassical economics. Furthermore, different kinds of foundational assumptions are needed for an effective scientific epistemology. Complexity science - really ‘order-creation science’ - is particularly relevant because it is founded on theories explicitly aimed at explaining order creation rather than accounting for classical physicists’ traditional concerns about explaining equilibrium.

Order creation has become the central focus of complexity science. Calling it ‘complexity science’ is like calling thermodynamics ‘hot science’ - that is, naming it after one extreme of the outcome variable. Its real concern is the study of order-creation dynamics. My brief review separates order-creation science into two schools, European and American. The former focuses on the effects of externally imposed energy differentials (adaptive tension) on the production of phase transitions. Energy levels above Bénard’s 1st critical value are important for overcoming the threshold-gate, agent-activation problem. The American school focuses on internal, positive-feedback-induced nonlinearities stemming from the coevolution of interacting, heterogeneous agents that are set in motion by small instigation effects - the butterfly effects of chaos theory. The American school, in particular, is also noteworthy because it reflects the development of a ‘new’ normal science that is based on a localized, connectionist ontology similar to that which postmodernists have concluded is a better representation of social ontology. In short, the European school puts Bénard’s critical values and phase transition effects at the origin of order creation. The American school legitimizes the postmodernists’ ontology but overcomes its anti-science rhetoric by using computational models / experiments based on heterogeneous agents as a means of pursuing model-centered science without assuming away the postmodernists’ - correct - representation of social ontology. In fact, both European and American contributions are needed to explain order creation.

Casti, (1997) says that the Santa Fe Institute will be remembered principally for its promulgation of agent-based computational models. I have outlined the several reasons for this. These models offer four advantages to theoreticians and researchers by:

  1. Allowing a modernization of the continuing legacy of logical positivism - the centrality of models, what I have elsewhere called “model-centered science” (McKelvey, 2002). Models are now seen as the third force in determining the course of science, along with theory and phenomena (Morgan & Morrison, 2000);
  2. Introducing virtual experiments so researchers can (a) manipulate variables with a surety not possible with real-world human experiments; (b) run experiments with large numbers of virtual subjects; (c) do so over many time periods; and (d) replicate the foregoing as many times as deemed appropriate (see the sketch following this list);
  3. Fostering the study of interdependence rather than avoiding it by assuming independence; furthermore, positive feedback processes are emphasized in addition to negative feedback, equilibrium-preserving processes;
  4. Permitting the exploration and study of genuine order-creation processes and emergent behaviors among interconnected agents. As Andriani and McKelvey (2005) observe, for social scientists the null assumption about social behavior is one of interdependence, not the assumption of independence that characterizes most, if not all, traditional science methodologies and especially the kinds of statistics usually applied in the quantitative study of organizations (e.g., see Greene, 2002).
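As flagged in point 2 above, the shape of such a virtual experiment is simple to sketch: pick a variable to manipulate, run many replicated trials per setting, and aggregate. Everything below - the ‘adaptive_tension’ parameter and the toy dynamics - is my own illustrative placeholder, not a model from the literature:

```python
import random

def one_run(adaptive_tension, n_agents=100, periods=50, seed=None):
    """One virtual trial: returns final mean agent fitness under a
    manipulated variable (a hypothetical 'adaptive_tension' parameter)."""
    rng = random.Random(seed)
    agents = [rng.random() for _ in range(n_agents)]
    for _ in range(periods):
        agents = [min(1.0, a + adaptive_tension * rng.random() * 0.01)
                  for a in agents]
    return sum(agents) / n_agents

# Manipulate the variable with surety; replicate as many times as desired.
for tension in (0.5, 1.0, 2.0):
    runs = [one_run(tension, seed=r) for r in range(20)]   # 20 replications
    print(f"tension={tension}: mean outcome {sum(runs) / len(runs):.3f}")
```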

Yes, relativists and postmodernists do have a legitimate case against the application of ‘physics-based’ science to organization studies. On the other hand, there is no reason why organizational complexity studies should be governed by their ‘anti-science’ rhetoric. I have tried to make the case that the methods of complexity science - really, order-creation science - allow organizational researchers to bring effective scientific method to organizational research.

Notes

References

Aldrich, H. (1979). Organizations and Environments, Englewood Cliffs, NJ: Prentice Hall.

Aldrich, H. (1999). Organizations Evolving, Thousand Oaks, CA: Sage.

Allen, P. M. (1975). “Darwinian evolution and a predator-prey ecology,” Bulletin of Mathematical Biology, 37: 389.

Allen, P. M. (1993). “Evolution: Persistent ignorance from continual learning,” in R. H. Day and P. Chen (eds), Nonlinear Dynamics & Evolutionary Economics, Oxford, UK: Oxford University Press, pp. 101-112.

Allen, P. M. (2001). “A complex systems approach to learning, adaptive networks,” International Journal of Innovation Management, 5: 149-180.

Allen, P. M. and McGlade, J. M. (1986). “Dynamics of discovery and exploitation: The Scotian shelf fisheries,” Canadian Journal of Fisheries and Aquatic Science, 43: 1187-1200.

Alvesson, M. and Deetz, S. (1996). “Critical theory and postmodernism approaches to organizational studies,” in S. R. Clegg, C. Hardy and W. R. Nord (eds.), Handbook of Organization Studies, Thousand Oaks, CA: Sage, pp. 191-217.

Ambrose, D. (1995). “Creatively intelligent post-industrial organizations and intellectually impaired bureaucracies,” Journal of Creative Behavior, 29: 1-15.

Andriani, P. and McKelvey, B. (2005). “Beyond averages: Extending organization science to extreme events and power laws,” working paper, Durham Business School, U. Durham, Durham, UK.

Anthony, W. P., Bennett, R. H., III, Maddox, E. N. and Wheatley, W. J. (1993). “Picturing the future: Using mental imagery to enrich strategic environmental assessment,” Academy of Management Executive, 7: 43-56.

Argote, L. (1999). Organizational Learning, Boston, MA: Kluwer.

Arthur, W. B. (1988). “Self-Reinforcing mechanisms in economics,” in P. W. Anderson, K. J. Arrow and D. Pines (eds.), The Economy as an Evolving Complex System, Reading, MA: Addison-Wesley, pp. 9-31.

Arthur, W. B. (1990). “Positive feedback in the economy,” Scientific American, 262: 92-99.

Arthur, W. B. (2000). “Complexity and the economy,” in D. Colander (ed.), The Complexity Vision and the Teaching of Economics, Cheltenham, UK: Edward Elgar, pp. 19-28.

Arthur, W. B., Durlauf, S. N. and Lane, D. A. (eds.) (1997). The Economy as an Evolving Complex System: Proceedings of the Santa Fe Institute, Vol. XXVII, Reading, MA: Addison-Wesley.

Ashby, W. R. (1956). An Introduction to Cybernetics, London: Chapman and Hall.

Ashby, W. R. (1960). Design for a Brain, 2nd ed., New York: Wiley.

Auerswald, P., Kauffman, S., Lobo, J. and Shell, K. (1996). “A microeconomic theory of learning-by-doing: An application of the nascent technology approach,” working paper, Cornell University, Ithaca, NY.

Axelrod, R. and Cohen, M. D. (1999). Harnessing Complexity, New York: Free Press.

Axtell, R., Axelrod, R., Epstein, J. M. and Cohen, M. D. (1996). “Aligning simulation models: A case study and results,” Computational and Mathematical Organization Theory, 1: 123-141.

Bak, P. (1996). How Nature Works: The Science of Self-Organized Criticality, New York: Copernicus.

Barley, S. R. (1990). “The alignment of technology and structure through roles and networks,” Administrative Science Quarterly, 35: 61-103.

Barnard, C. I. (1938). The Functions of the Executive, Cambridge, MA: Harvard University Press.

Bar-Yam, Y. (1997). Dynamics of Complex Systems, Reading, MA: Addison-Wesley.

Baum, J. A. C. (1999). “Whole-part coevolutionary competition in organizations,” in J. A. C. Baum and B. McKelvey (eds.), Variations in Organization Science: In Honor of Donald T. Campbell, Thousand Oaks, CA: Sage, pp. 113-135.

Becker, G. S. (1975). Human Capital, 2nd ed., Chicago, IL: University of Chicago Press.

Bénard, H. (1901). “Les tourbillons cellulaires dans une nappe liquide transportant de la chaleur par convection en régime permanent,” Annales de Chimie et de Physique, 23: 62-144.

Berger, P. L. and Luckmann, T. (1967). The Social Construction of Reality, New York: Doubleday.

Besanko, D., Dranove, D. and Shanley, M. (2000). The Economics of Strategy, 2nd ed., New York: Wiley.

Bhaskar, R. (1975/1997). A Realist Theory of Science, London: Leeds Books, 2nd ed., London: Verso.

Blau, P. M. and Scott, W. R. (1962). Formal Organizations, San Francisco, CA: Chandler.

Boulding, K. E. (1968). “Business and Economic Systems,” in J. H. Milsum (ed.), Positive Feedback: A General Systems Approach to Positive/Negative Feedback and Mutual Causality, Oxford, UK: Pergamon Press, pp. 101-117.

Boumans, M. (2000). “Built-in justification,” in M. S. Morgan and M. Morrison (eds.), Models as Mediators: Perspectives on Natural and Social Science, Cambridge, UK: Cambridge University Press, pp. 66-96.

Brock, W. A. (2000). “Some Santa Fe scenery,” in D. Colander (ed.), The Complexity Vision and the Teaching of Economics, Cheltenham, UK: Edward Elgar, pp. 29-49.

Brown, S. L. and Eisenhardt, K. M. (1997). “The art of continuous change: Linking complexity theory and time-paced evolution in relentlessly shifting organizations,” Administrative Science Quarterly, 42: 1-34.

Brown, S. L. and Eisenhardt, K. M. (1998). Competing on the Edge: Strategy as Structured Chaos, Boston: Harvard Business School Press.

Brunk, G. G. (2000). “Understanding self-organized criticality as a statistical process,” Complexity, 5: 26-33.

Buckley, W. (ed.) (1967). Modern Systems Research for the Behavioral Scientist, Chicago, IL: Aldine.

Burrell, G. (1996). “Normal science, paradigms, metaphors, discourses and genealogies of analysis,” in S. R. Clegg, C. Hardy and W. R. Nord (eds.), Handbook of Organization Studies, Thousand Oaks, CA: Sage, pp. 642-658.

Burrell, G. and Morgan, G. (1979). Sociological Paradigms and Organizational Analysis, London: Heinemann.

Burt, R. S. (1992). Structural Holes: The Social Structure of Competition, Cambridge, MA: Harvard University Press.

Burton-Jones, A. (1999). Knowledge Capitalism: Business, Work and Learning in the New Economy, Oxford: Oxford University Press.

Bygrave, W. and Hofer, C. (1991). “Theorizing about entrepreneurship,” Entrepreneurship Theory and Practice, 16: 13-23.

Campbell, D. T. (1965). “Variation and selective retention in socio-cultural evolution,” in H. R. Barringer, G. I. Blanksten and R. W. Mack (eds.), Social Change in Developing Areas: A Reinterpretation of Evolutionary Theory, Cambridge, MA: Schenkman, pp. 19-48.

Carley, K. M. (1991). “A theory of group stability,” American Sociological Review, 56: 331-354.

Carley, K. M. (1992). “Organizational learning and personnel turnover,” Organization Science, 3: 20-46.

Carley, K. M. (1996). “A comparison of artificial and human organizations,” Journal of Economic Behavior and Organization, 31: 175-191.

Carley, K. M. (1999a). “Learning within and among organizations,” Advances in Strategic Management, 16: 33-53.

Carley, K. M. (1999b). “On the evolution of social and organizational networks,” Research in the Sociology of Organizations, 16: 3-30.

Carley, K. M. and Harrald, J. (1997). “Organizational learning under fire: Theory and practice,” American Behavioral Scientist, 40: 310-332.

Carley, K. M. and Hill, V. (2001). “Structural change and learning within organizations,” in A. Lomi and E.R. Larsen (eds.), Dynamics of Organizational Societies: Computational Modeling and Organization Theories, Cambridge, MA: AAAI/MIT Press, pp. 63-92.

Carley, K. M. and Lee, J. S. (1998). “Dynamic organizations: Organizational adaptation in a changing environment,” Advances in Strategic Management, 15: 269-297.

Carley, K. M. and Lin, Z. (1995). “Organizational designs suited to high performance under stress,” IEEE Transactions on Systems, Man, and Cybernetics, 25: 221-230.

Carley, K. M. and Prietula, M. J. (eds.) (1994). Computational Organization Theory, Hillsdale, NJ: Erlbaum.

Carley, K. M., Prietula, M. J. and Lin, Z. (1998). “Design versus cognition: The interaction of agent cognition and organization design on organizational performance,” Journal of Artificial Societies and Social Simulation, 1: 1-19.

Carley, K. M. and Svoboda, D. M. (1996). “Modeling organizational adaptation as a simulated annealing process,” Sociological Methods and Research, 25: 138-168.

Cartwright, N. (1983). How the Laws of Physics Lie, New York: Oxford University Press.

Casti, J. L. (1994). Complexification: Explaining a Paradoxical World through the Science of Surprise, New York: Harper-Perennial.

Casti, J. L. (1997). Would-Be Worlds: How Simulation is Changing the Frontiers of Science, New York: Wiley.

Chia, R. (1996). Organizational Analysis as Deconstructive Practice, Berlin: Walter de Gruyter.

Churchman, C. W. and Ackoff, R. L. (1950). “Purposive behavior and cybernetics,” Social Forces, 29: 32-39.

Cohen, M. D., March, J. B. and Olsen, J. P. (1972). “A garbage can model of organizational choice,” Administrative Science Quarterly, 17: 1-25.

Colander, D. (2000). The Complexity Vision and the Teaching of Economics, Cheltenham, UK: Elgar.

Contractor, N. S., Whitbred, R., Fonti, F., Hyatt, A., O’Keefe, B. and Jones, P. (2000). “Structuration theory and self-organizing networks,” paper presented at the annual Organization Science Winter Conference, Keystone, CO, January.

Cramer, F. (1993). Chaos and Order: The Complex Structure of Living Things, trans. D. Loewus, New York: VCH.

Crowston, K. (1996). “An approach to evolving novel organizational forms,” Computational and Mathematical Organization Theory, 2: 29-47.

Crozier, M. (1964). The Bureaucratic Phenomenon, Chicago, IL: University of Chicago Press.

Curd, M. and Cover, J. A. (1998). Philosophy of Science: The Central Issues, New York: Norton.

Curzio, A. Q. and Fortis, M. (2002). Complexity and Industrial Clusters: Dynamics and Models in Theory and Practice, Heidelberg, Germany: Physica-Verlag.

Davenport, T. H. and Prusak, L. (1998). Working Knowledge: How Organizations Manage What they Know, Cambridge, MA: Harvard University Press.

Depew, D.J. (1998). “Darwinism and Developmentalism: Prospects for Convergence,” in G. Van de Vijver, S. N. Salthe and M. Delpos (eds.), Evolutionary Systems: Biological and Epistemological Perspectives on Selection and Self-Organization, Dordrecht, The Netherlands: Kluwer, pp. 21-32.

Drucker, P. F. (1999). Management Challenges for the 21st Century, New York: HarperBusiness.

Durlauf, S. N. (1997). “Limits to science or limits to epistemology?” Complexity, 2: 31-37.

Eddington, A. (1930). The Nature of the Physical World, London: Macmillan.

Eisenhardt, K. (1989). “Making fast strategic decisions in high-velocity environments,” Academy of Management Journal, 32: 543-576.

Epstein, J. M. and Axtell, R. (1996). Growing Artificial Societies: Social Science from the Bottom Up, Cambridge, MA: MIT Press.

Erakovic, L. (2002). “The pathway of radical changes: Mutual outcome of government actions, technological changes, and managerial intentionality,” paper presented at the 18th EGOS Colloquium, Barcelona, July.

Ferber, J. (1999). Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence, London: Addison-Wesley.

Fine, C. H. (1998). Clock Speed: Winning Industry Control in the Age of Temporary Advantage, Reading, MA: Perseus.

Fleming, L. and Sorenson, O. (2001). “Technology as a complex adaptive system,” Research Policy, 30: 1019-1039.

Fuster, J. M. (1995). Memory in the Cerebral Cortex: An Empirical Approach to Neural Networks in the Human and Nonhuman Primate, Cambridge, MA: MIT Press.

Garzon, M. (1995). Models of Massive Parallelism, Berlin: Springer-Verlag.

Gavetti, G. and Levinthal, D. (2000). “Looking forward and looking backward: Cognitive and experiential search,” Administrative Science Quarterly, 45: 113-137.

Geertz, C. (1973). The Interpretation of Cultures, New York: Basic Books.

Gell-Mann, M. (1994). The Quark and the Jaguar, New York: Freeman.

Giddens, A. (1984). The Constitution of Society: Outline of the Theory of Structuration, Berkeley, CA: University of California Press.

Gleick, J. (1987). Chaos: Making a New Science, New York: Penguin.

Glynn, M. A., Lant, T. K. and Milliken, F. J. (1994). “Mapping learning processes in organizations: A multilevel framework linking learning and organizing,” in C. Stubbart, J. Meindl and J. Porac (eds.), Advances in Managerial Cognition and Organizational Information Processing, Greenwich, CT: JAI Press, pp. 43-83.

Granovetter, M. (1973). “The strength of weak ties,” American Journal of Sociology, 78: 1360-1380.

Granovetter, M. (1985). “Economic action and social structure: The problem of embeddedness,” American Journal of Sociology, 91: 481-510.

Granovetter, M. (1992). “Problems of explanation in economic sociology,” in N. Nohria and R. G. Eccles (eds.), Networks and Organizations: Structure, Form, and Action, Boston, MA: Harvard Business School Press, pp. 25-56.

Grant, R. M. (1996). “Toward a knowledge-based theory of the firm,” Strategic Management Journal, 17: 109-122.

Greene, W. H. (2002). Econometric Analysis, 5th ed., Englewood Cliffs, NJ: Prentice-Hall.

Gross, P. R. and Levitt, N. (1998). Higher Superstition: The Academic Left and its Quarrels with Science, 2nd ed., Baltimore, MD: Johns Hopkins University Press.

Gryskiewicz, S. S. (1999). Positive Turbulence: Developing Climates for Creativity, Innovation, and Renewal, San Francisco, CA: Jossey-Bass.

Haken, H. (1977/1983). Synergetics, an Introduction: Non-Equilibrium Phase Transitions and Self-Organization in Physics, Chemistry, and Biology, 3rd ed., Berlin: Springer-Verlag.

Halal, W. E. and Taylor, K. B. (1999). 21st Century Economics: Perspectives of Socioeconomics for a Changing World, New York: Macmillan.

Hamel, G. (2000). “Reinvent your company,” Fortune, June: 99-118.

Hamel, G. and Prahalad, C. K. (1994). Competing for the Future, Boston, MA: Harvard Business School Press.

Hassard, J. and Parker, M. (1993). Postmodernism and Organizations, Thousand Oaks, CA: Sage.

Haykin, S. (1998). Neural Networks: A Comprehensive Foundation, 2nd ed., New York: Macmillan.

Henrickson, L. (2004). “Trends in complexity theories and computation in the social sciences,” Nonlinear Dynamics, Psychology and the Life Sciences, 8: 279-302.

Henrickson, L. and McKelvey, B. (2002). “Foundations of new social science,” Proceedings of the National Academy of Sciences, 99(suppl. 3): 7288-7297.

Hinterberger, F. (1994). “On the evolution of open socio-economic systems,” in R. K. Mishra, D. Maaß and E. Zwierlein (eds.), On Self-Organization: An Interdisciplinary Search for a Unifying Principle, Berlin: Springer-Verlag, pp. 35-50.

Holland, J. H. (1975). Adaptation in Natural and Artificial Systems, Ann Arbor, MI: University of Michigan Press.

Holland, J. H. (1988). “The global economy as an adaptive system,” in P. W. Anderson and K. J. Arrow and D. Pines (eds.), The Economy as an Evolving Complex System, Redwood City, CA: Addison-Wesley, pp. 117-124.

Holland, J. H. (1995). Hidden Order, Reading, MA: Addison-Wesley.

Holton, G. (1993). Science and Anti-Science, Cambridge, MA: Harvard University Press.

Homans, G. C. (1950). The Human Group, New York: Harcourt.

Hooker, C.A. (1995). Reason, Regulation, and Realism, Albany, NY: State University of New York Press.

Horgan, J. (1996). The End of Science: Facing the Limits of Knowledge in the Twilight of the Scientific Era, New York: Broadway.

Huber, G. P. (1996). “Organizational learning: The contributing processes and the literatures,” in M. D. Cohen and L. S. Sproull (eds.), Organizational Learning, Thousand Oaks, CA: Sage, pp. 124-162.

Ilgen, D. R. and Hulin, C. L. (eds.) (2000). Computational Modeling of Behavior in Organizations, Washington, DC: American Psychological Association.

Jennings, J. and Haughton, L. (2000). It’s not the BIG that eat the SMALL...It’s the FAST that eat the SLOW, New York: Harper-Business.

Kaminska-Labbe, R. and Thomas, C. (2002). “Strategic renewal and competence building in times of deconstruction,” paper presented at the 18th EGOS Colloquium, Barcelona, July.

Kauffman, S. A. (1993). The Origins of Order: Self-Organization and Selection in Evolution, New York: Oxford University Press.

Kauffman, S. (2000). Investigations, New York: Oxford University Press.

Kelso, J. A., Ding, M. and Schöner, G. (1992). “Dynamic pattern formation: A primer,” in J. E. Mittenthal and A. B. Baskin (eds.), Principles of Organization in Organisms, Proceedings of the Santa Fe Institute, Vol. XIII, Reading, MA: Addison-Wesley, pp. 397-439.

Koertge, N. (1998). A House Built on Sand: Exposing Postmodernist Myths about Science, New York: Oxford University Press.

Lagerstrom, P. A. (1996). Laminar Flow Theory, Princeton, NJ: Princeton University Press.

Lalonde, R.J. (1986). “Evaluating the econometric evaluations of training programs with experimental data,” American Economic Review, 76: 604-620.

Lant, T. K. and Phelps, C. (1999). “Strategic groups: A situated learning perspective,” Advances in Strategic Management: A Research Annual, 16: 221-247.

Lave, J. and Wenger, E. (1991). Situated Learning, New York: Cambridge University Press.

Lawrence, P. R. and Lorsch, J. W. (1967). Organization and Environment: Managing Differentiation and Integration, Cambridge, MA: Harvard Business School Press.

LeBaron, B. (2000). “Empirical regularities from interacting long- and short-memory investors in an agent-based stock market,” IEEE Transactions on Evolutionary Computation, 5: 442-455.

LeBaron, B. (2001). “Volatility magnification and persistence in an agent-based financial market,” unpublished manuscript, Brandeis University, Waltham, MA.

Leonard-Barton, D. (1995). Wellsprings of Knowledge: Building and Sustaining the Sources of Innovation, Boston, MA: Harvard Business School Press.

Levinthal, D. A. (1997). “Adaptation on rugged landscapes,” Management Science, 43: 934-950.

Levinthal, D. A. and Warglien, M. (1999). “Landscape design: Designing for local action in complex worlds,” Organization Science, 10: 342-357.

Lewin, A. Y. and Volberda, H. W. (1999). “Coevolution of strategy and new organizational forms,” Organization Science, 10: 519-690 (special issue).

Lewin, R. (1992/1999). Complexity: Life at the Edge of Chaos, 2nd ed., Chicago, IL: University of Chicago Press.

Lewis, M. W. and Grimes, A. J. (1999). “Meta-triangulation: Building theory from multiple paradigms,” Academy of Management Review, 24: 672-690.

Lichtenstein, B. B. and McKelvey, B. (2003). “Complexity science and computational models of emergent order: What’s there? What’s missing?” working paper, Management/Marketing Dept., U. Massachusetts, Boston, MA.

Lichtenstein, B. B. and McKelvey, B. (2005). “Toward a theory of emergence by stages: Complexity dynamics, self-organization, and power laws in firms,” working paper, Management/Marketing Dept., U. Massachusetts, Boston, MA.

Lincoln, Y. S. (ed.) (1985). Organizational Theory and Inquiry: The Paradigm Revolution, Newbury Park, CA: Sage.

Lippman, S. A. and Rumelt, R. P. (1982). “Uncertain imitability: An analysis of interfirm differences in efficiency under competition,” Bell Journal of Economics, 13: 418-438.

Lorenz, E. N. (1963). “Deterministic nonperiodic flow,” Journal of the Atmospheric Sciences, 20: 130-141.

Lucas, R. E. (1972). “Expectations and the neutrality of money,” Journal of Economic Theory, 4: 103-124.

Macy, M. W. and Skvoretz, J. (1998). “The evolution of trust and cooperation between strangers: A computational model,” American Sociological Review, 63: 638-660.

Maguire, S., McKelvey, B., Mirabeau, L. and Oztas, N. (forthcoming). “Organizational complexity science,” in S. R. Clegg, C. Hardy and T. Lawrence (eds.), Handbook of Organizational Studies, 2nd ed., Thousand Oaks, CA: Sage.

Mainzer, K. (1994/2004). Thinking in Complexity: The Complex Dynamics of Matter, Mind, and Mankind, 4th ed., New York: Springer-Verlag.

March, J. G. (1991). “Exploration and exploitation in organization learning,” Organization Science, 2: 71-87.

March, J. G. (1999). The Pursuit of Organizational Intelligence, Oxford, UK: Blackwell.

March, J. G. and Sutton, R. I. (1997). “Organizational performance as a dependent variable,” Organization Science, 8: 698-706.

Margulis, L. (1981). Symbiosis in Cell Evolution, New York: W.H. Freeman.

Margulis, L. and Sagan, D. (2002). Acquiring Genomes: A Theory of the Origins of Species, New York: Basic Books.

Marsden, R. and Townley, B. (1996). “The owl of minerva: Reflections on theory in practice,” in S. R. Clegg, C. Hardy and W. R. Nord (eds.), Handbook of Organization Studies, Thousand Oaks, CA: Sage, pp. 659-675.

Maruyama, M. (1963). “The second cybernetics: Deviation-amplifying mutual causal processes,” American Scientist, 51: 164-179. [Reprinted in W. Buckley (ed.), Modern Systems Research for the Behavioral Scientist, Chicago: Aldine, 1968, pp. 304-313.]

Masuch, M. and Warglien, M. (eds.) (1992). Artificial Intelligence in Organization and Management Theory: Models of Distributed Activity, Amsterdam, The Netherlands: North Holland.

McKelvey, B. (1982). Organizational Systematics: Taxonomy, Evolution and Classification, Berkeley, CA: University of California Press.

McKelvey, B. (1997). “Quasi-natural organization science,” Organization Science, 8: 351-381.

McKelvey, B. (1999). “Avoiding complexity catastrophe in coevolutionary pockets: Strategies for rugged landscapes,” Organization Science, 10: 294-321.

McKelvey, B. (2001a). “Energizing order-creating networks of distributed intelligence,” International Journal of Innovation Management, 5: 181-212.

McKelvey, B. (2001b). “Foundations of ‘new’ social science: Institutional legitimacy from philosophy, complexity science, postmodernism, and agent-based modeling,” presented at the National Academy of Sciences’ Sackler Colloquium, University of California, Irvine.

McKelvey, B. (2001c). “What is complexity science? It is really order-creation science,” Emergence, 3(1): 137-157.

McKelvey, B. (2002). “Model-centered organization science epistemology,” in J. A. C. Baum (ed.), Companion to Organizations, Oxford, UK: Blackwell, pp. 752-780.

McKelvey, B. (2003). “Postmodernism vs. truth in management theory,” in E. Locke (ed.), Post Modernism and Management: Pros, Cons, and the Alternative, Research in the Sociology of Organizations, 21: 113-168. Amsterdam, NL: Elsevier.

McKelvey, B. (2004a). “Toward a 0th law of thermodynamics: Order creation complexity dynamics from physics and biology to bioeconomics,” Journal of Bioeconomics, 6: 65-96.

McKelvey, B. (2004b). “Toward a complexity science of entrepreneurship,” Journal of Business Venturing, 19: 313-341.

McKelvey, B. (2005). “Microstrategy from macroleadership: distributed intelligence via new science,” in A. Y. Lewin and H. W. Volberda (eds.), Mobilizing the Self-renewing Organization, New York: Palgrave Macmillan.

McKelvey, B. and Baum, J. A. C. (1999). “Donald T. Campbell’s evolving influence on organization science,” in J. A. C. Baum and B. McKelvey (eds.), Variations in Organization Science: In Honor of Donald T. Campbell, Thousand Oaks, CA: Sage, pp. 1-15.

McKim, V. R. and Turner, S. P. (1997). Causality in Crisis: Statistical Methods and the Search for Causal Knowledge in the Social Sciences, Notre Dame, IN: University of Notre Dame Press.

Mélèse, J. (1991). L’Analyse Modulaire des Systèmes, Paris: Les Editions d’Organisation.

Meyer, A. D. and Gaba, V. (2002). “Adoption of corporate venture investing programs: Effects of social proximity and community coevolution,” paper presented at the 18th EGOS Colloquium, Barcelona, July.

Miles, R., Snow, C. C., Matthews, J. A. and Miles, G. (1999). “Cellular-network organizations,” in W. E. Halal, and K. B. Taylor (eds.), 21st Century Economics: Perspectives of Socioeconomics for a Changing World, New York: Macmillan, pp. 155-173.

Milsum, J. H. (ed.) (1968). Positive Feedback: A General Systems Approach to Positive/Negative Feedback and Mutual Causality, Oxford, UK: Pergamon Press.

Mirowski, P. (1989). More Heat than Light, Cambridge, UK: Cambridge University Press.

Mirowski, P. (ed.) (1994). Natural Images in Economic Thought, Cambridge, UK: Cambridge University Press.

Mitchell, S. D. (2004). “Why integrative pluralism?” Emergence: Complexity and Organization, 6(1-2): 81-91.

Monge, P. R. and Contractor, N. S. (2001). “Emergence of communication networks,” in F. M. Jablin and L. L. Putnam (eds.), New Handbook of Organizational Communication, Newbury Park, CA: Sage, pp. 440-502.

Morel, B. and Ramanujam, R. (1999). “Through the looking glass of complexity: The dynamics of organizations as adaptive and evolving systems,” Organization Science, 10: 278-293.

Moreland, R. L. and Myaskovsky, L. (2000). “Exploring the performance benefits of group training: Transactive memory or improved communication?” Organizational Behavior and Human Decision Processes, 82: 117-133.

Morgan, M. S. and Morrison, M. (eds.) (2000). Models as Mediators: Perspectives on Natural and Social Science, Cambridge, UK: Cambridge University Press.

Morlacchi, P. (2002). “Translating ideas in facts and artifacts: Co-evolution of technology and networks in heart failure,” paper presented at the Annual Meeting of the Academy of Management, Denver, August.

Morrison, F. (1991). The Art of Modeling Dynamic Systems, New York: Wiley Interscience.

Mosakowski, E. (1997). “Strategy making under causal ambiguity: Conceptual issues and empirical evidence,” Organization Science, 8: 414-442.

Nelson, R. R. and Winter, S. (1982). An Evolutionary Theory of Economic Change, Cambridge, MA: Belknap.

Nohria, N. and Eccles, R. G. (eds.) (1992). Networks and Organizations: Structure, Form, and Action, Cambridge, MA: Harvard Business School Press.

Nonaka, I. and Nishiguchi, T. (eds.) (2001). Knowledge Emergence: Social, Technical, and Evolutionary Dimensions of Knowledge Creation, Oxford: Oxford University Press.

Nonaka, I. and Takeuchi, H. (1995). The Knowledge-Creating Company, Oxford: Oxford University Press.

Norling, P.M. (1996). “Network or not work: Harnessing technology networks in DuPont,” Research Technology Management, Jan.-Feb.: 289-295.

Norris, C. (1997). Against Relativism: Philosophy of Science, Deconstruction and Critical Theory, Oxford, UK: Blackwell.

ogilvie, dt. (1998). “Creativity and strategy from a complexity theory perspective,” presented at the 10th International Conference on Socio-Economics, Vienna, Austria.

Ormerod, P. (1994). The Death of Economics, New York: Wiley.

Ormerod, P. (1998). Butterfly Economics: A New General Theory of Social and Economic Behavior, New York: Pantheon.

Paul, D. L., Butler, J. C., Pearlson, K. E. and Whinston, A. B. (1996). “Computationally modeling organizational learning and adaptability as resource allocation: An artificial adaptive systems approach,” Computational and Mathematical Organization Theory, 2: 301-324.

Pearl, J. (2000). Causality: Models, Reasoning, and Inference, Cambridge, UK: Cambridge University Press.

Porter, M. E. (1996). “What is strategy?” Harvard Business Review, 74: 61-78.

Powell, W.W. (1990). “Neither market nor hierarchy: Network forms of organization,” in B. M. Staw and L. L. Cummings (eds.), Research in Organizational Behavior, 12, Greenwich, CT: JAI Press, pp. 295-336.

Prietula, M. J., Carley, K. M. and Gasser, L. (eds.) (1998). Simulating Organizations: Computational Models of Institutions and Groups, Cambridge, MA: MIT Press.

Prigogine, I. (1955). An Introduction to Thermodynamics of Irreversible Processes, Springfield, IL: Thomas.

Prigogine, I. and Stengers, I. (1984). Order Out of Chaos: Man’s New Dialogue with Nature, New York: Bantam.

Prigogine, I. (with Stengers, I.) (1997). The End of Certainty: Time, Chaos, and the New Laws of Nature, New York: Free Press.

Prusak, L. (1996). “The knowledge advantage,” Strategy and Leadership, 24: 6-8.

Read, D. W. (1990). “The utility of mathematical constructs in building archaeological theory,” in A. Voorrips (ed.), Mathematics and Information Science in Archaeology: A Flexible Framework, Bonn: Holos, pp. 29-60.

Reed, M., and Hughes, M. (eds.) (1992). Rethinking Organization: New Directions in Organization Theory and Analysis, London: Sage.

Rivkin, J. W. (2000). “Imitation of complex strategies,” Management Science, 46: 824-844.

Rivkin, J. W. (2001). “Reproducing knowledge: Replication without imitation at moderate complexity,” Organization Science, 12: 274-293.

Roethlisberger, F. J. and Dickson, W. J. (1939). Management and the Worker, Cambridge, MA: Harvard University Press.

Rosenberg, A. (1994). “Does evolutionary theory give comfort or inspiration to economics?” in P. Mirowski (ed.), Natural Images in Economic Thought, Cambridge, UK: Cambridge University Press, pp. 384-407.

Rouchier, J. (2003). “Re-implementation of a multi-agent model aimed at sustaining experimental economic research: The case of simulations with emerging speculation,” Journal of Artificial Societies and Social Simulation, 6(4): http://jasss.soc.surrey.ac.uk/6/4/7.html.

Ryan, F. (2002). Darwin’s Blind Spot: Evolution Beyond Natural Selection, New York: Houghton Mifflin.

Salthe, S. N. (1993). Development and Evolution: Complexity and Change in Biology, Cambridge, MA: (Bradford) MIT Press.

Schumpeter, J. A. (1942). Capitalism, Socialism, and Democracy, New York: Harper and Row.

Scott, W. R. (1998). Organizations: Rational, Natural, and Open Systems, 4th ed., Englewood Cliffs, NJ: Prentice-Hall.

Siggelkow, N. (2002). “Evolution toward fit,” Administrative Science Quarterly, 47: 125-159.

Siggelkow, N. and Levinthal, D. A. (2003). “Temporarily divide to conquer: Centralized, decentralized, and reintegrated organizational approaches to exploration and adaptation,” Organization Science, 14: 650-669.

Silverman, D. (1970). The Theory of Organizations: A Sociological Framework, London: Heinemann.

Simon, H. A. (1999). “Coping with complexity,” in Groupe de Recherche sur l’Adaptation, la Systémique et la Complexité Economique (GRASCE) (eds.), Entre Systémique et Complexité, Chemin Faisant: Mélanges en Hommage à Jean-Louis Le Moigne, Paris: Presses Universitaires de France, pp. 233-240.

Slywotzky, A. (1996). Value Migration, Boston, MA: Harvard Business School Press.

Sokal, A. and Bricmont, J. (1998). Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science, New York: Picador.

Stacey, R. D. (1995). “The science of complexity: An alternative perspective for strategic change processes,” Strategic Management Journal, 16: 477-495.

Stacey, R. D. (2001). Complex Responsive Processes in Organizations: Learning and Knowledge Creation, London: Routledge.

Stengers, I. (2004). “The challenge of complexity: Unfolding the ethics of science - In memoriam Ilya Prigogine,” Emergence: Complexity and Organization, 6(1-2): 92-99.

Stokman, F. N. and Doreian, P. (1997). “Evolution of social networks: Processes and principles,” in F. N. Stokman and P. Doreian (eds.), Evolution of Social Networks, New York: Gordon and Breach, pp. 233-250.

Suppe, F. (ed.) (1977). The Structure of Scientific Theories, 2nd ed., Chicago: University of Chicago Press.

Suppe, F. (1989). The Semantic Conception of Theories and Scientific Realism, Urbana-Champaign, IL: University of Illinois Press.

Taylor, J. R. (1999). “The other side of rationality: Socially distributed cognition,” Management Communication Quarterly, 13: 317-326.

Thomas, C., Kaminska-Labbé, R. and McKelvey, B. (2004). “Unraveling entangled organizational dynamics: A study of coevolving Aristotelian causalities,” Organization Science Winter Conference, Steamboat Springs, CO, February 4-7.

Thomas, C., Kaminska-Labbé, R. and McKelvey, B. (2005). “Managing the MNC and the exploitation/exploration dilemma: From static balance to irregular oscillation,” in G. Szulanski, Y. Doz and J. Porac (eds.), Advances in Strategic Management: Expanding Perspectives on the Strategy Process, Vol. 22, Elsevier.

Thompson, J. D. (1967). Organizations in Action, New York: McGraw-Hill.

Tichy, N. M. and Sherman, S. (1994). Control Your Destiny or Someone Else Will, New York: HarperCollins.

Udwadia, F. E. (1990). “Creativity and innovation in organizations: Two models and managerial implications,” Technological Forecasting and Social Change, 38: 65-80.

Van de Vijver, G., Salthe, S. N. and Delpos, M. (eds.) (1998). Evolutionary Systems: Biological and Epistemological Perspectives on Selection and Self-Organization, Dordrecht, The Netherlands: Kluwer.

Weber, B.H. (1998). “Emergence of life and biological selection from the perspective of complex systems dynamics,” in G. Van de Vijver, S. N. Salthe and M. Delpos (eds.), Evolutionary Systems: Biological and Epistemological Perspectives on Selection and Self-Organization, Dordrecht, The Netherlands: Kluwer, pp. 59-66.

Weber, M. (1924). Gesammelte Aufsätze zur Sozial- und Wirtschaftsgeschichte, Tübingen: Mohr (Siebeck). [Republished as The Theory of Social and Economic Organization (trans. A. M. Henderson and T. Parsons), Glencoe, IL: Free Press, 1947.]

Wegner, D. M. (1987). “Transactive memory: A contemporary analysis of the group mind,” in B. Mullen and G. R. Goethals (eds.), Theories of Group Behavior, New York: Springer-Verlag, pp. 185-208.

Weick, K. E. (1985). “Systematic observational methods,” in G. Lindzey and E. Aronson (eds.), The Handbook of Social Psychology, Vol. 1, 3rd ed., New York: Random House, pp. 567-634.

Weick, K. E. and Roberts, K. (1993). “Collective mind in organizations: Heedful interrelating on flight decks,” Administrative Science Quarterly, 38: 357-381.

Wenger, E. (1998). Communities of Practice: Learning, Meaning and Identity, Cambridge, UK: Cambridge University Press.

Wolfram, S. (2002). A New Kind of Science, Champaign, IL: Wolfram Media, Inc.

Yoshino, M. Y. and Rangan, U. S. (1995). Strategic Alliances, Boston, MA: Harvard Business School Press.

Young, A. (1928). “Increasing returns and economic progress,” The Economic Journal, 38: 527-542.

Yuan, Y. and McKelvey, B. (2004). “Situated learning theory: Adding rate and complexity effects via Kauffman’s NK model,” Nonlinear Dynamics, Psychology, and Life Sciences, 8: 65-101.

Zohar, D. (1997). Rewiring the Corporate Brain, San Francisco, CA: Berrett-Koehler.


1 ‘Agent’ is a general term used to designate semi-autonomous entities of many kinds, i.e., the ‘parts’ of systems. It thus incorporates such entities as atoms, molecules, biomolecules, organelles, organs, organisms, species, processes, people, groups, firms, industries, and so on (Ferber, 1999). (A minimal illustrative sketch, Sketch 1, follows these notes.)
2 See also Fine (1998) as well as Jennings and Haughton (2000).
3 For further elaboration see Yuan and McKelvey (2004), from which this portion is drawn.
4 Quoted in Hamel (2000: 102).
5 By way of additional background, I note, however, that Americans and the French join in the Modern Interpretation of quantum theory - which is the most foundational treatment of order creation. I describe a bit of this in McKelvey (2001c).
6 Peter Allen (1975, 1993, 2001; with McGlade, 1986) represents a sort of crossover. A 20-year colleague of Prigogine’s, he comes from a physics/chemistry/math background, but he adds probabilistic noise to the linear differential equations he draws from systems dynamics. He then applies this method to study evolutionary dynamics in fisheries, economic development, and other clearly social science applications.
7 Though publishing in the SFI volume, Kelso was a student of, and frequent coauthor with, Hermann Haken.
8 About Schumpeter, Besanko et al. (2000: 485) say: “Schumpeter considered capitalism to be an evolutionary process that unfolded in a characteristic pattern. Any market has periods of comparative quiet, when firms that have developed superior products, technologies, or organizational capabilities earn positive economic profits. These quiet periods are punctuated by fundamental ‘shocks’ or ‘discontinuities’ that destroy old sources of advantage and replace them with new ones. The entrepreneurs who exploit the opportunities these shocks create achieve positive profits during the next period of comparative quiet. Schumpeter called this evolutionary process creative destruction.” (my italics)
9 Coevolutionary dynamics can be mutually causal and may show positive feedback, but usually species coevolve into stability after some kind of instigating event, given a stable niche.
10 In most agent models I have studied, the agent activity is simply coded into the model - hence there is no recognition of, or need for, the forces required to overcome the threshold gate problem (see Sketch 2 following these notes).
11 For example, see Masuch and Warglien (1992), Carley and Prietula (1994), Prietula, Carley, and Gasser (1998), Ferber (1999), and Ilgen and Hulin (2000).
12 See McKelvey (2001b, 2003; Henrickson & McKelvey, 2002) for expanded treatments of this topic.
13 A recent view is that the most significant dynamics in bio- and econospheres are not variances around equilibria but are due to the interactions of autonomous, heterogeneous agents energized by contextually imposed tensions. A review of these causes of emergent order in physics, biology, and the econosphere can be found in McKelvey (2004a).
14 Parts of this section are quoted, with some emendations, from Lichtenstein and McKelvey (2003).
15 ‘Docking’ is a procedure whereby the programming code of a model is reproduced by another programmer and then tested. If the model is properly described and the codes are each correct, they should agree (Axtell et al., 1996). To date, docking is not often done, and usually the model comparisons fail. See, for example, Rouchier (2003). (A minimal sketch, Sketch 3, follows these notes.)
16 This section draws on McKelvey (2002, 2003, 2004b).
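Sketch 1 (illustrating note 1). A minimal Python sketch of what ‘agent’ means in agent-based computational models: a semi-autonomous entity holding local state and a local interaction rule, at whatever level of analysis (molecule, person, firm). The class, names, and parameter values below are my own illustrative assumptions, not drawn from Ferber (1999) or from any model discussed above.

import random

class Agent:
    # A semi-autonomous 'part' of a system: local state plus a local rule.
    def __init__(self, state):
        self.state = state  # stands in for a belief, capability, resource level, etc.

    def step(self, neighbors):
        # Semi-autonomy: the agent updates itself, but only in response
        # to the few agents it happens to interact with locally.
        if neighbors:
            local_mean = sum(n.state for n in neighbors) / len(neighbors)
            self.state += 0.1 * (local_mean - self.state)
        self.state += random.gauss(0.0, 0.01)  # idiosyncratic variation

random.seed(1)
agents = [Agent(random.random()) for _ in range(50)]
for _ in range(200):
    for a in agents:
        others = [x for x in agents if x is not a]
        a.step(random.sample(others, 3))

# Order (here, convergence of states) emerges from repeated local
# interaction rather than from any system-level equation.
spread = max(a.state for a in agents) - min(a.state for a in agents)
print(f"spread of agent states after interaction: {spread:.3f}")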
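Sketch 2 (illustrating note 10). A hedged contrast between agent activity that is ‘simply coded into the model’ (the agent acts every tick) and activity gated by a threshold that some contextually imposed force must overcome before anything happens. The function names and the threshold value are hypothetical, chosen only to make the distinction concrete.

def hardcoded_step(state):
    # Activity is simply coded in: the agent acts every tick, unconditionally,
    # so the model never has to represent what energizes the activity.
    return state + 0.1

def gated_step(state, tension, threshold=1.0):
    # A 'threshold gate': the agent acts only when the contextually imposed
    # tension is strong enough to overcome the gate; otherwise nothing happens.
    return state + 0.1 if tension >= threshold else state

print(hardcoded_step(0.0))   # 0.1 - always active
print(gated_step(0.0, 0.5))  # 0.0 - tension too weak to overcome the gate
print(gated_step(0.0, 1.5))  # 0.1 - tension sufficient; activity occurs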
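Sketch 3 (illustrating note 15). A minimal sketch of the docking procedure: two programmers independently code the same published model specification, run both programs, and test whether their summary statistics agree. The ‘model’ here is a deliberately trivial seeded random walk of my own devising; real docking exercises (Axtell et al., 1996) compare far richer models, usually on distributions of outputs rather than exact values.

import random
import statistics

def implementation_a(seed, steps=1000):
    # First programmer's code, written from the published specification.
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(steps):
        x += rng.uniform(-1.0, 1.0)
        path.append(x)
    return path

def implementation_b(seed, steps=1000):
    # Second programmer's independent re-coding of the same specification.
    rng = random.Random(seed)
    x = 0.0
    path = []
    for _ in range(steps):
        x = x + rng.uniform(-1.0, 1.0)
        path.append(x)
    return path

# The docking test: if the specification is complete and both codes are
# correct, the outputs agree. Sharing the random seed makes agreement
# exact here; with independent streams one would compare distributions.
a, b = implementation_a(42), implementation_b(42)
assert statistics.mean(a) == statistics.mean(b)
print("models dock: summary statistics agree")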