This paper proposes an analytical framework for a complexity-informed theoretical approach to human interaction and organizations. In doing so, it addresses the increasing call for better theory supporting the microfoundations of social science. A key premise of the argument is that the primary imperatives of social actors are confronting uncertainty and adapting to change as a collective. As such, in addition to seeking requisite resources, human beings interact to gather and use information for their individual and collective benefit. The paper explores this perspective by proposing a complex systems model of organizing that differs from systems theory by placing the actors inside the system rather than assuming they act on the system. We propose a definition of information that enables us to explore the dynamics of human interaction as observers from the outside without necessarily knowing what the information means. This approach is analogous to how physical and biological systems are studied and is intended to complement, rather than replace, existing approaches that tend to place their emphasis on inter-subjectivity and meaning-making rather than on the objective measurement of information as a physically measurable quantity.
Complexity ideas have inspired many new approaches to organization and leadership studies (Allen, Maguire, & McKelvey, 2011). As such, complexity offers a plausible approach for building theoretical microfoundations for social theory in answer to recent calls in the literature (Devinney, 2013; Greve, 2013; Winter, 2013). Thus far, however, most studies that invoke complexity notions limit their ambitions to conceptual discussions that apply complexity ideas such as attractors (Anderson, 1999; Dooley, 1997) or fitness landscapes (Kauffman, 1993; Levinthal, 1997; McKelvey, 1997) metaphorically to human systems. Alternatively, studies of complex adaptive systems (Holland, 1975) have used computer simulations (Carley & Prietula, 1994; Hazy, 2007; Siggelkow & Rivkin, 2005) that understandably simplify human interaction to the level where it can be modeled, but in doing so they ignore many aspects of interaction that are perhaps the most human.
We argue that this limitation is largely due to the absence of clearly specified physical and social mechanisms that describe the complex causality that would explain the relationships reflected in these models. Absent these advances, social science will have difficulty accumulating scientific knowledge in mathematical models and thereby taking its place along the continuum of the natural sciences (Wilson, 1998), which are increasingly understood in complexity terms.
The study of human interaction dynamics (HID) aims to make progress toward addressing this gap. To further this objective, this paper posits foundations, assumptions and definitions that might underlie theoretical microfoundations that usefully describe and predict human activity. Such a theory might, for example, support a typology of organizing that would enable researchers and practitioners alike to more clearly recognize and respond to the specific situations they encounter. Such a classification system might form along the lines that Elman Service (1962) proposed in anthropology—that of Bands, Tribes, Chiefdoms and States—but in this case would be based upon how these distinct types make the benefits of the organizing more predictable to observers given the specific parametric context that characterizes the complex adaptive system (Holland, 1975). As populations grow in size or as ecosystems change, for example, distinctly different parametric conditions might imply that only certain organizing forms offer an acceptable level of stability (and thus useful predictability for participating agents).
The other articles in this special issue, taken together, offer the promise that a general theory of human interaction dynamics (HID) may indeed be possible. At least, they are evidence that this possibility should be explored. This article is an effort to lay out what the philosophical foundations, definitions and assumptions supporting such a theory might look like.
The paper proceeds by first offering a perspective on how a complex system theory of human organizations might differ from more traditional general systems theory (von Bertalanffy, 1950). Next we propose some basic elements and assumptions for a complex systems model of organizing. This is followed by some tentative definitions of organizing in this context. The paper closes with suggested research directions.
Foundations: A complex systems model of organizing
Organizational research took a huge step forward with the introduction of the general systems model in the early 20th century (von Bertalanffy, 1950). The usefulness and versatility of this approach was demonstrated in many publications (cf. Katz & Kahn, 1966) and it is still the basis of much organizational research today.
The basic systems model has three components: inputs, outputs, and the transformation from one to the other. The many uses for this model will not be discussed here (cf. Katz & Kahn, 1966). What is relevant to this discussion is that the systems model implicitly assumes that the actor using the model will use it to act on the system, changing elements of the system through a “design intervention” to improve its operation or throughput. In Aristotelian terms, the manager or leader is assumed to operate as the efficient cause that changes the system by acting exogenously on the system. However, this simplified perspective is increasingly problematic in the age of complexity.
Although the general systems model is a useful and valid approach for an observer who is seeking to understand the environment, it is flawed as a guide to specific action when the system is complex. This is because in the case of social systems, action on the system by an individual makes the implicit (and often false) assumption that individuals can easily act across levels of analysis, reaching, if you will, from the fine-grained interactions in which we participate to the emergent coarse-grained properties that we are modeling in an effort to understand them. Complexity tells us that expressing individual potency in a complex system is not as easy as that (Hazy, 2008).
In contrast, a Complex Systems Model of Organizing (shown in Figure 1) takes as a point of departure the point-of-view of the manager or leader acting endogenously inside the complex system (Hazy, 2006; 2010; Surie & Hazy, 2006). This insider-manager can therefore not be considered the efficient cause of change. The role of efficient cause necessarily falls to exogenous forces that are often (although not exclusively as we will see) unfolding beyond the control of the organization’s leadership or management. Technology changes, climate change, or globalizing commodity markets, for example, might be indicators of such forces. These are the forces that are acting on the system.
Managers and leaders, along with other agents, are situated inside the complex ecosystem that contains an organization, and they are able to interact both inside and outside that organization. As interactions occur, agents observe, contemplate, and respond to events in the environment that might indicate the magnitude, direction, velocity, and even acceleration of these forces in the environment (Hazy, 2011). Agents recognize information signals within events that can be processed to create value for the organization (Hazy, 2012), and based upon their interpretation of this information, they take advantage of internal mechanisms in the organization to perform the requisite functions needed to sustain the system in the face of the forces of change that they have observed. Hazy and Uhl-Bien (in press) have described some of the functions that must be performed by leadership within organizations to sustain them as complex systems.
Although the relevant events often occur in the ecosystem beyond the organization’s boundary, they can also occur within the organization’s boundaries. For example, many organizations in the 1980s first recognized the power of the microprocessor-fueled sea-change toward distributed computing when various departments inside the organization purchased their own personal computers (PCs) and then put pressure on the centralized IT department to connect the mainframes to departmental Local Area Networks (LANs).
The Aristotelian efficient cause acting on the system thus resides within events in the organization as well as in the environment—changing market conditions, economic or climate dynamics, for example—rather than in any one individual or individuals, even the most prestigious or powerful ones in the organization. Significantly, these events can occur simultaneously in the coarse-grained properties and the fine-grained interactions at work within the ecosystem and the organization (Hazy & Ashley, 2011). This is because of the mutual causality that is present between the coarse-grained and fine-grained levels in complex organizing (Goldstein, 2011), as is described in coming sections.
These differences between the general systems model and the complex systems model are consequential and may signal a paradigm shift in organizational theory (Kuhn, 1962). However, the differences do not relegate individual human actors to passive spectators who are along for the ride. In the human case, individuals interact with other actors and engage in leadership practices that can change outcomes both within their organization and elsewhere in the ecosystem (Hazy, Goldstein & Lichtenstein, 2007). By doing this, individuals in an organization can influence the exogenous forces that in turn become the efficient cause of change for the organization. They can also act within the system to impact fine-grained interaction and by doing so act as endogenous change agents, or “institutional entrepreneurs” (Battilana, 2006; Surie & Singh, this issue), by influencing which coarse-grained properties emerge in the organization and how well these match the requisite variety present in the ecosystem (Ashby, 1956).
Although this appears to be a paradox, it is resolved by looking at some basic elements of the Complex Systems Model of Organizing and how they unfold over time as shown in Figure 2. The exogenous forces for change impacting the properties of the organization are themselves coarse-grained properties, but in their case, these are properties within the larger ecosystem that also includes the organization as part of it. Examples of these ecosystem properties might be commodity price trends (like oil prices) or technology diffusion such as the use of smartphones or electric vehicles, and also wage rates, aggregate demand, and many other recognizable patterns that one might identify.
Although constrained in certain ways, as a rule, individual actors can freely flow to activities outside the organization as well as inside of it. In this sense they are analogous to molecules within living cells that flow back and forth through cell membranes under various physical and chemical constraints. By doing this, individuals (like molecules in living things) can influence properties that emerge beyond the organization (the cell) as well as within it.
As an example, a company CEO can step out of her organization through its boundary and make a speech within the broader community of customers and investors. By doing this, an (especially influential) agent can act to alter the emergent demand properties in the broader ecosystem for the benefit (or detriment) of her organization. Her actions outside the boundary thus support the health of the organization and those who are dependent upon what happens inside its boundary, its employees, investors, and even the CEO herself. Likewise at the worker-level, sales personnel routinely cross organizational boundaries to influence demand locally. Engineers routinely meet with customers to design products and services. Organizational boundaries are permeable to various degrees (Hazy, Tivnan, & Schwandt, 2011) and this enables learning and adaptation at both the fine-grained and coarse-grained levels.
In the next section, we describe these elements of the Complex Systems Model of Organizing in more detail. We also provide some key definitions to guide research in HID.
Definitions and assumptions
This section is intended to add additional specificity to the Complex Systems Model of Organizing shown in Figures 1 and 2. It begins by describing fine-grain interactions to clarify how this term is applied in HID research. Following this discussion, coarse-grain properties are defined such that their abstract description is framed in the context of the axioms of category theory (Mac Lane, 1998) from mathematics (supplemental material is also available in a Technical Appendix that is available online). This requirement is intended to enable models of social phenomena from various studies to be related to one another so that knowledge in HID can be accumulated in the same manner as it is in the natural sciences. The next section relates to information gathering and using (Gell-Mann, 2002) and how this might be defined to further research in HID. The final section identifies some examples of complexity concepts which might offer useful modeling approaches.
Fine-grain interactions and agency
In the Complex Systems Model of Organizing, fine-grain interactions (FGI) are typically assumed to be those between individual human beings and each other, as well as their interactions with physical objects in their environment, particularly when those objects have informational or symbolic content. In this sense, FGIs are defined as the mapping of observable phenomena in the physical world (like two individuals talking) to the abstract notional “object” of an “interaction” among agents. Significantly, in the physical world, the observations always occur at specific physical locations in space and they unfold over time. Thus time and spatial dimensions are implicit in any FGI in addition to any other attributes being modeled. In many cases, however, these can be ignored for simplicity.
At the same time, it is sometimes useful to explore organizing among work groups, firms, or even larger aggregate entities. In these cases (cf. Surie & Singh, this issue) the agents engaged in the FGIs should be clearly specified as firms or some other aggregate, and the assumption underlying the aggregation would be that interactions among these aggregate agents exhibit emergent coarse-grained properties including a boundary such that they can be treated as “objects” in the sense described later in this section. An example of this would be the requirement that objects relate to themselves via an identity mapping of each object to itself which preserves the structure of the objects’ properties.
Further, we posit that the concept of FGIs is dynamic and should be considered in the context of a physical position in space (X) as well as how this position changes over time (the first derivative of X, dX/dt), as this is a characteristic of relevant physical phenomena and may be relevant to the interaction. For many applications, the precise physical position of individuals and objects and how these change over time is immaterial given the precision of our measurement and modeling. For example, the constraining effect of spatial distance can be mitigated somewhat by technology-enabled communication of information or symbols, perhaps to the point where these spatial effects can be ignored when addressing a certain problem.
However, when the spatial location assumption is relaxed it should be identified and justified in the analysis. This is because spatial separation and thus the uniqueness of each positional perspective is a physical characteristic of FGIs that constrains an individual’s access to resources and ability to observe and interpret information, a fact that should not be ignored in general without comment (Gell-Mann, 2002). How this occurs and to what extent it changes the basic situation and the other elements of HID are questions for researchers to answer empirically and analytically.
To reiterate: individual human beings are the primary agents relevant to FGIs. There is considerable literature describing the complexity of modeling interactions among human agents (cf. Prietula, 2011; Hazy & Ashley, 2011) and these are not discussed in detail here. For purposes of defining baseline assumptions for future research in HID, two characteristics of human agents are postulated as minimally necessary for HID theory:

a. Agents have agency: each agent chooses its own actions based upon its internal state and its interpretation of available information.

b. Agents are subject to social influence: each agent’s choices are shaped by its interactions with other agents.
When specified, the a. and b. factors above reflect the agency and social influence assumptions of the model respectively. Specifying the mechanisms underlying the above assumptions and how these interact with the internal processes of the agents as they move to enact emergent outcomes are the primary questions to be addressed in HID research.
Other physical objects such as technology, resources, products and services, or even software models, can also be treated as interacting objects, particularly if they can be used to store and access information, such as a laptop or a smartphone. However, for the purposes of HID, these non-human objects are of a different class for analysis purposes. Some aspects of this class are described in a later section.
Coarse-grain properties and category theory
Coarse-grained properties (CGPs) are “abstract objects” that emerge and might be recognized as persistent patterns within complex systems. Examples would include: firms and departments, organizational routines and capabilities (Nelson & Winter, 1982), organizational boundaries, differentiated roles, skill specializations, and the varying status of individuals within the population relative to others (Pearce, 2011; Hazy, 2012). CGPs also reflect business and economic properties of an entity such as sales and sales growth, market size and market growth, and profitability, earnings and earnings growth.
To begin, we define coarse-grained properties as abstract representations of observed phenomena. In general, CGPs are expected to represent physical phenomena which appear to be predictable to a degree and are therefore not expected to result from random events or noise. Because one can predict their behavior, one can think of them as sending informational “messages” such that by having observed a phenomenon in the past and now observing it in the present, one can predict its state in the future, at least to a degree (Crutchfield, Ellison, & Mahoney, 2009). We posit axiomatically that these “messages” can be intercepted by observers, potentially be decoded, and if they are decoded they can be interpreted by intelligent observers and used to enhance their own parochial interests and those of their local community.
We say “appear to be predictable” since observers are not directly privy to the information embedded in these messages. However, we posit that the information contained in the messages can sometimes be recognized and decoded by observers, including agents, by using decoding algorithms (or “inference models”) that they hypothesize might be used to recognize signals associated with relevant events in the physical world. But this is an uncertain process because in many cases the encoding algorithm is unknown, and thus the interpretation of apparent messages is always hypothetical and dependent upon inference.
The above observation suggests that, reflective of the ambition of HID to model human interaction dynamics in a manner that is consistent with other sciences, the algorithms we hypothesize as a means to decode the information hidden in these messages should use concepts from mathematics. We therefore restrict the “abstract representations” that are acceptable for HID research to only those that conform to mathematical principles, in particular, the axioms, definitions and theorems of Category Theory in mathematics (Mac Lane, 1998). More on this is available in the Technical Appendix online.
By invoking Category Theory, we can define CGPs such that one is able to assume that the abstract representations of the relevant CGPs reflect a deductive system, since deductive systems form categories. The deductive system by which we define a set of complicated CGPs may not reflect the physical truth of the phenomenon in any absolute sense, but the deductive system of representation may be independently justified through axioms, beliefs or assumed understanding. The deductions that result from such a system may prove useful (or not) in practice, and therefore constitute a new set of hypotheses. These in turn can be tested so that the model can be improved. This process is repeated over and over again in a process of recursion until adequate precision is attained for a given circumstance.
As an example of this, a simplified accounting model might be used to “represent” an organization in terms of its budgeted costs (as CGPs). All else being equal, if one reduces the budget in a certain department within an organization one can deduce that the overall budget for the organization is likewise reduced by a like amount. This is because the department is included in the organization in this abstract representation of the organization’s structure. Likewise, the departmental budget can be reduced by reducing workgroup budgets, and so forth. This all works because organizational budgets and the relationships among them are represented by a mathematical category that can be shown to form a deductive system. Of course, an experienced manager would note that the budgeting representation is not the organization, and a reduction in budgets doesn’t reflect the true cost of these actions. At the same time, there is practical value in representing the organization in this way, value that is realized through the practical benefits that result from representing the organization as a deductive system.
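The budget example above can be sketched computationally. The following is a minimal illustration of our own construction (the class and unit names are hypothetical, not from the paper): “included-in” relationships compose, so a change made at the workgroup level deduces a like change at every enclosing level of the representation.

```python
# A minimal sketch of the budget example as a deductive system: "included-in"
# relationships compose, so a reduction at the workgroup level propagates,
# by deduction, to the department and organization totals.

class Unit:
    """An organizational unit whose budget is its own cost plus its parts'."""
    def __init__(self, name, own_cost=0.0):
        self.name = name
        self.own_cost = own_cost
        self.parts = []  # units included in this one ("included-in" morphisms)

    def include(self, unit):
        self.parts.append(unit)
        return unit

    def budget(self):
        # The coarse-grained property "budget" is deduced from the
        # composition of included-in relationships.
        return self.own_cost + sum(p.budget() for p in self.parts)

org = Unit("organization")
dept = org.include(Unit("department", own_cost=100.0))
wg = dept.include(Unit("workgroup", own_cost=50.0))

before = org.budget()   # 150.0
wg.own_cost -= 20.0     # reduce the workgroup budget
after = org.budget()    # 130.0: the reduction is deduced at the top level
```

As the manager’s caveat in the text notes, the representation is not the organization; the sketch only shows why a deductive representation yields reliable, composable inferences within the model.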
The point here is that to move toward theoretical microfoundations of social theory, we propose grounding all representations of relevant CGPs within categories (please note that we are not arguing that only quantitative models like the accounting example are useful) which at least in some cases can imply deductive systems. In this way, by using the theorems of category theory to link representations, many branches of mathematics can be used to formally build a growing basis of understanding.
What this would require is as follows: According to category theory, abstract objects, in this case CGPs, form a category if:

a. each object relates to itself via an identity mapping (morphism);

b. relationships (morphisms) between objects compose, so that a morphism from object A to object B and a morphism from B to C yield a morphism from A to C; and

c. composition is associative, and composing any morphism with an identity morphism leaves it unchanged.
Under these conditions, the deductive logic of mathematics can be used to predict certain aspects of the behavior of these properties in the abstract model. [Note that this perspective always admits the possibility that the abstract model does not actually reflect what might happen in reality; the outcome of the model is a prediction, not a fact.] Please see the Technical Appendix available online for additional specification of the mathematical categories that might be used to represent human social systems for analysis purposes.
Mathematical structures like sets, symmetric groups and manifolds, and the relationships among them, can be defined as categories, and the composition of relationships—such as “this department is included in that organization and therefore it shares the same objectives”—can be represented as logical inference. At the same time, sophisticated mathematical analyses such as dynamical systems models with attractors and control and order parameters are also possible categories that can be used to describe CGPs.
Significantly, in HID, CGPs are viewed from the point-of-view of agents at the fine-grained level—and therefore the CGPs that are observed might vary agent to agent—and are relevant with regard to how they impact the FGIs of the observer and of others. Thus CGPs that are recognized are always observed from a position in space X and time t from which the agent makes the observation. Any given CGP can be observed by agents whose interactions are inside the workings of the property (an employee in the organization making the profit), or who are outside the property (an investor evaluating a firm’s performance), or even from both of these at varying times.
Because CGPs are defined in the context of categories, mathematics can be used to develop models of their interactions to be shared among agents for use by individuals as they seek to understand their environment. There is no omniscient point-of-view, however. CGPs are defined from the perspective of agents in space and time or subsets of agents who share a common frame from which they can respond to the CGPs in an effort to inform their FGIs.
Information gathering and using
The definitions above can be taken to imply that during FGIs, agents seek to recognize and explain CGPs when they observe and gather the informational “messages” present in patterns and phenomena in the environment. Either these can be explicitly recognized and acknowledged by the agents themselves, or they can be observed by third parties who note their implicit influence on the choices and behaviors of other agents. In both cases, unless physical force or resources are applied, the relevant quantity that is assumed to be exerting influence is information. But what do we mean by “information”?
Information in HID is defined as it is in information theory (Cover & Thomas, 2004). Information is created when a predicted event with a given probability occurs in fact and is observed; thus, predictions associated with hypothesized CGPs, when combined with actual events, create information. The quantity of information is considered relative to the prior probability distribution of possible outcomes from the CGPs and the level of “surprise” associated with what is actually observed. More specifically, the “surprise” of an observed state is the negative logarithm of the prior probability that this state would have been expected to occur, and the expected information is the sum, over all possible states, of each state’s prior probability multiplied by its surprise. When the logarithm is taken in base 2, the unit of information is the “bit”, the same bit as is familiar from computer science.
When one works out the mathematics under this definition, it turns out that the maximum information is created when an event is expected to have a prior probability that reflects randomness, because this is when surprise is maximized (Cover & Thomas, 2004). Less information is created if an event is assumed to be predictable to a degree; there is less surprise. In this latter case, the difference between the new information that is created and the maximum that would have been created in the random case reflects the level of “ordering”—and thus stored information—present in the structure of the system. This structure enables an observer who is able to decode it to predict the system’s behavior to a degree.
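The quantities described above can be computed directly. The following sketch (our illustration; the function names are ours) shows the information-theoretic definitions: the surprise of an outcome is the negative base-2 logarithm of its prior probability, and the expected information (Shannon entropy) is the probability-weighted sum of surprises. A uniform (maximally random) distribution yields maximal entropy; a largely predictable one yields much less, and the difference reflects the stored “ordering” in the system.

```python
import math

def surprise_bits(p):
    """Surprise (self-information) of an outcome with prior probability p."""
    return -math.log2(p)

def entropy_bits(dist):
    """Expected information of a distribution, in bits: the sum over
    outcomes of p * (-log2 p), i.e., Shannon entropy."""
    return sum(p * surprise_bits(p) for p in dist if p > 0)

random_dist = [0.25, 0.25, 0.25, 0.25]   # four equally likely outcomes
ordered_dist = [0.97, 0.01, 0.01, 0.01]  # a largely predictable system

h_random = entropy_bits(random_dist)     # 2.0 bits: surprise is maximal
h_ordered = entropy_bits(ordered_dist)   # far lower: little surprise
stored = h_random - h_ordered            # "ordering" / stored information
```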
Taking this explanation a little deeper, absent other information being available by observing the structure (the “message”), the likelihood that one would observe a one-in-a-million orderly configuration of the system during the next time step is very low. Absent a “message” signaling what is likely to happen, one could only assume that events will involve a random draw from a pool of possibilities wherein most configurations have no structure whatsoever, or if they have a structure, it is very different from what is expected. Therefore, to confidently predict that there is a high likelihood that a particular complex structure—like an organization or a product offering—will be present in the next time step, implies that the system must send a “message” to the observer that contains information that supports that prediction. This stored information is said to be contained in a “message” sent by the ecosystem (in the form of recognizable patterns) from the past into the future (Crutchfield et al., 2009). Because there is so much information already present in the message, the only new information that can be created by events is the part that remained uncertain when the prediction was made.
For the outcome of an event to be predicted, however, the information contained in the message—“the system’s state is somewhat predictable”—that is being transmitted from past to future must be intercepted (an observer-agent must “gather” this information). To be useful, one must not only intercept the message, but also use some kind of implicit or explicit model to try to decode the information in the message (so that an agent can “use” it) to predict unfolding events. This model must be “run” in some physical processor (or agent), stop processing to provide an output that can be used to inform action, and do this before the predicted event occurs (Crutchfield et al., 2009). The time it takes to complete this processing is roughly what is meant by the algorithmic or computational complexity of the phenomenon being modeled (Prokopenko et al., 2009).
The information that reflects this model might be stored and run in a human being’s cognitive schema, for example, or it could be stored and run in some external storage device like a computer. For reference, we call the specific instance of this stored information that is being used to “observe” and interpret events the inference model. Its requisite presence enables messages (being transmitted from the past into the future by phenomena in the environment) to be presumptively decoded by its user so that the information signals contained in events might be recognized.
At the same time, the information in the message is only visible in reference to the inference model, and the inference model is only hypothesized to decode the signal because the algorithm that encoded the message is unknown. This is because the inference model decodes information in the sense that it identifies order in the system (as opposed to a random configuration), and this order contains information. It infers the existence of mappings between defined objects representing the physical world and abstract objects in the inference model. Our premise is that logical deduction and other categories that characterize predictive rational thought have been favored by evolutionary selection because they have been successful at decoding such messages (for example by recognizing relevant phenotypic trait information that is encoded in DNA structure) to intercept, gather and use relevant information signals.
Once new information is visible through events, it might be incorporated in a further iteration of the inference model allowing those who possess it to recognize additional aspects of order (i.e., embedded information) by improving the information decoding algorithm. This is what we define as knowledge: information recalled from prior events (i.e., stored in memory) that is embedded into an inference model to improve decoding.
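An inference model in the sense just defined can be sketched in a few lines. The following is our own minimal construction (the class and method names are hypothetical): a first-order frequency model “intercepts” a message by observing past transitions, “decodes” it by predicting the most likely next state, and embodies knowledge in that the stored transition counts from prior events improve its decoding over time.

```python
from collections import Counter, defaultdict

class InferenceModel:
    """A minimal frequency-based inference model: stored counts of past
    transitions are the 'knowledge' used to decode future events."""
    def __init__(self):
        # counts[state][next_state]: transitions observed in prior events
        self.counts = defaultdict(Counter)

    def observe(self, state, next_state):
        """Embed information from a new event into the model."""
        self.counts[state][next_state] += 1

    def predict(self, state):
        """Decode the 'message': the most likely next state given the
        order recognized so far; None if the state is novel."""
        if not self.counts[state]:
            return None
        return self.counts[state].most_common(1)[0][0]

model = InferenceModel()
history = ["boom", "bust", "boom", "bust", "boom"]
for state, nxt in zip(history, history[1:]):
    model.observe(state, nxt)

prediction = model.predict("boom")  # the model infers the alternating pattern
```

The point of the sketch is only the recursion the text describes: each new event both tests the current model and, once incorporated, improves its decoding of subsequent events.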
Information gathered from prior events can also be embedded in physical structure as a means of storage for transmission into the future. The most obvious examples of this are symbolic objects or artifacts, such as ritual objects that evoke shared memories or suggest a legendary narrative, personal adornments signaling status or affiliation, writing such as books, or computer programs stored in memory. A special type of artifact is technology which is not only a physical artifact, but also gains much of its significance from how its functions unfold over time. Technologies are artifacts that are useful in the context of some unfolding dynamical activity that is predictable when the technology is used. Travel by car has one set of predictions; travel by aircraft has another, for example.
In the context of artifacts and drawing on category theory, meaning is defined as the relationship between one object (either a physical or a conceptual object) and another object, specifically, an inference model. This relationship allows the user of the inference model to recognize some of the information that is stored in the object; this is the meaning of the object in the context of that agent’s inference model. Thus objects have different meaning for different people because each person has distinct inference models. We next turn to a specific type of inference model, dynamical systems analysis (Hirsch, Smale, & Devaney, 2004) which enabled the discovery (decoding) of the meaning (the relationship between observations and dynamical systems modeling) of complexity in the first place.
Defining dynamical and structural attractors in HID
A common term in complexity, often used loosely, is the notion of an attractor from dynamical systems theory. In the dynamical systems usage, an attractor is a purely mathematical concept. In the language of category theory, an attractor is an “object” that extends the idea of a limit cycle within conserved systems, as exemplified by the conservation of energy in statistical mechanics (Hirsch, Smale, & Devaney, 2004).
In contrast to conserved quantities, when dissipative forces such as friction, heat loss, or entropic effects (like the creation of information) more generally are included, quantities are no longer conserved. In these cases, systems do not tend to a limit cycle but instead sometimes exhibit dynamic patterns that seem to “attract the system” into a subspace of possibilities without actually forming a limit cycle. Physical, chemical, and biological systems are described by models of this type (cf. Haken, 2006). These models usually apply differential equations and often include “attractors” of various types. But these “attractors” are attributes of the models of the system, not literal physical attributes of the physical environment. They form a class of analytical results that can be usefully applied to CGPs in inference models.
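The distinction between a model's attractor and the physical environment can be made concrete with a standard dissipative system. The sketch below is an assumed toy example of ours, not drawn from the paper: a damped harmonic oscillator x'' = -x - c·x', where c > 0 plays the role of friction-like dissipation. In the model's phase space, the origin (x, v) = (0, 0) acts as an attractor: trajectories from different starting points converge toward it, yet the attractor is an attribute of the model, not of any physical thing.

```python
def simulate(x, v, c=0.5, dt=0.01, steps=5000):
    """Integrate the damped oscillator x'' = -x - c*x' with forward-Euler steps."""
    for _ in range(steps):
        a = -x - c * v                 # restoring force plus dissipation
        x, v = x + v * dt, v + a * dt  # both updates use the old (x, v)
    return x, v

# Two different initial conditions both decay toward the attractor at the origin.
for start in [(2.0, 0.0), (-1.0, 3.0)]:
    x, v = simulate(*start)
    print(abs(x) < 1e-2 and abs(v) < 1e-2)  # True for both trajectories
```

Because energy is dissipated rather than conserved, there is no limit cycle; the trajectories spiral into an ever-smaller subspace of possibilities, which is exactly the loose sense in which such systems are said to be “attracted.”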
In contrast, Peter Allen (2001) describes physical artifacts that distort interaction patterns within complex systems of human interaction, what he calls Structural Attractors. These physical objects (which have recognizable information content that aids prediction) actually distort the dissipative tendencies of resource fields in the ecosystem. They can do this because they include information which can be detected and used to influence interactions. An example Allen uses to illustrate this idea is the construction of a warehouse for a business. His point is that the location and attributes of the warehouse will shape the going-forward dynamics of the business and its whole ecosystem of customers, suppliers, employees, and so on. Because these fields-of-influence exert a kind of pressure or force on the CGPs that make up the business, they can be important objects to include in inference models that describe the business and its CGPs.
CGPs in the presence of a structural attractor (man-made like a warehouse or natural like a river; physical like a warehouse, symbolic like a business plan, or even abstract like institutional logics or norms) can be modeled using differential equations, and if so, a dynamical attractor (a mathematical object) might become apparent as the abstract representation of a CGP in the phase space of the model. But such sophisticated models might not be necessary. Simple heuristics might provide information adequate for agents to predict events well enough to plan their actions. For example, a heuristic might predict quite accurately that there is a 90% probability that a new employee will choose to live within twenty-five miles of a new warehouse. At the same time, sophisticated mathematical analyses, such as dynamical systems analysis with attractors or control and order parameters, are also possible categories that can be used to describe CGPs.
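The warehouse heuristic can be checked with a small Monte Carlo sketch. Everything here is an assumption of ours for illustration, not data from the paper: commute distances are drawn from an exponential distribution with a 10-mile mean, and the simulation asks how well the simple "90% within twenty-five miles" heuristic predicts under that assumption.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def simulate_hires(n=10_000, mean_distance=10.0):
    """Draw n commute distances (miles) from an assumed exponential distribution."""
    return [random.expovariate(1.0 / mean_distance) for _ in range(n)]

distances = simulate_hires()
share_within_25 = sum(d <= 25 for d in distances) / len(distances)

# Analytically, P(d <= 25) = 1 - exp(-25/10) ≈ 0.92, so under this assumed
# distribution the coarse "90%" heuristic is already a serviceable predictor.
print(round(share_within_25, 2))
```

No differential-equation model of the business ecosystem is needed to reach a planning-grade prediction, which is the point: a crude heuristic can stand in for a far more sophisticated inference model.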
New directions for complexity informed organizational research
Developing a theory of HID is not without challenges. For one thing, the theory must acknowledge that when organizing is considered as a complex adaptive system, human interactions are qualitatively different from interacting physical and biological systems. Thus, one of the important challenges in advancing HID research is the balance between simplicity and veridicality (Carley, 2002). This raises the question of what observable attributes of such complex agents as human beings are salient in HID, and how their interactions should be studied and modeled. To paraphrase Einstein: an approach is needed that is simple, but not too simple.
These challenges relate to human beings as “agents” and their capacity for information gathering and use (cf. Prietula, 2011). However, the question of resources, energy use and evolutionary processes like variation and selection at the multi-agent or group level (Okasha, 2006; Nowak, Tarnita & Wilson, 2010)—and how these relate to each other and to information—are also relevant when developing simple (but not too simple) theories regarding HID. What further complicates the pursuit of theory is the exchange of resources and the communication of information among individuals and communities. In this section we speculate about some of the possible directions for HID research.
Resource and information exchange, trade and markets during interaction
Resource exchanges are studied in fields ranging from anthropology and economics to ecology and the natural sciences. Bringing relevant aspects of this work into the cross-disciplinary field of HID is an important area for research. How information is incorporated into resource artifacts to create value in the business sense is also an important area for study.
Cross-disciplinary research is also needed that explores information storage and exchange in the HID context. Research into the instrumentality of language in the HID sense is needed. This area is addressed in the field of linguistics, and in particular the anthropology subfield of descriptive linguistics. However, the cross-disciplinary elements of HID might further these advances through information theory (Cover & Thomas, 2004).
Groups, communities, firms, and institutions
Another important question in HID is how to incorporate advances in anthropology, the behavioral sciences (cf. Simon, 1990), behavioral economics (Kahneman, 2011), and cognitive neuroscience into models of human interaction systems such as groups, communities, firms, organizations, and institutions. As alluded to in the Introduction, for purposes of study, organizing might be categorized into a typology of increasing complexity in a manner that is roughly related to Elman Service’s (1962) society typology from anthropology: Bands, Tribes, Chiefdoms, and States.
An HID complexity-informed typology might be based on the extent to which mathematical categories relating to the phenomenon of organizing structure can be defined. For example, types might include: an inference model describing the ad hoc organizing phenomenon that is analogous to Service’s “bands” (observable as transient regularities like cooperative projects); local organizing around reputations that is analogous to “tribes” (locally observable as persistent regularities with acquired status (Pearce, 2011), such as “deference to the expert”); global organizing around ascribed status that is analogous to “chiefdoms” (globally observable through signifier symbols of status such as royal dress or the “corner office”); or organizing according to centralized authority and enforced hegemony that is analogous to “states” (observable through artifacts signaling legitimacy such as boundary markers and a separate enforcement subpopulation such as a military class, an internal security sub-class, or audit groups in the business sense).
Advances in complexity science offer the possibility of a paradigm shift (Kuhn, 1962) in organizational research and practices. Perhaps complexity thinking will eventually help place the social sciences along the continuum with the natural sciences. At present, this is an open question, only an aspiration. But the seeds are being sown. It remains to be seen what tomorrow’s harvest will bring.
- Allen, P., Maguire, S. and McKelvey, B. (2011). The Sage Handbook of Complexity and Management, ISBN 9781847875693.
- Anderson, P. (1999). “Complexity theory and organization science,” Organization Science, ISSN 1047-7039, 10(3): 216-232.
- Ashby, W.R. (1956). An Introduction to Cybernetics, ISBN 9780416683004.
- Battislana, J. (2006). “Agency and institutions: The enabling role of individuals’ social position,” Organization, ISSN 1350-5084, 13(5): 653-676.
- Carley, K. M. (2002). “Simulating society: The tension between transparency and veridicality,” Proceedings of Agents 2002, Chicago IL.
- Carley, K.M. and Prietula, M.J. (eds.) (1994). Computational Organizational Theory, ISBN 9780805814064.
- Cover, T.M. and Thomas, J.A. (2004). Elements of Information Theory, ISBN 9780471062592.
- Crutchfield, J.P., Ellison, C.J. and Mahoney, J.R. (2009). “Time’s Barbed Arrow: Irreversibility, Crypticity, and Stored Information,” Santa Fe Institute Working Paper 09-02-002, http://csc.ucdavis.edu/∼cmg/papers/tba.pdf.
- Devinney, T.A. (2013). “Is microfoundational thinking critical to management thought and practice?” Academy of Management Perspectives, ISSN 1558-9080, 27(2): 81-84.
- Dooley, K.J. (1997). “A complex adaptive systems model of organization change,” Nonlinear Dynamics in Psychology and Life Sciences, ISSN 1090-0578, 1(1): 69-97.
- Gell-Mann, M. (2002). “What is complexity?” in A.Q. Curzio and M. Fortis (eds.), Complexity and Industrial Clusters: Dynamics and Models in Theory and Practice, ISBN 9783790814712, pp. 13-24.
- Greve, H.R. (2013). “Microfoundations of management: Behavioral strategies and levels of rationality in organizational action,” Academy of Management Perspectives, ISSN 1558- 9080, 27(2): 103-119.
- Goldstein, J. (2011). “Emergence in complex systems,” in S. Maguire, P. Allen and B. McKelvey (eds.), The Sage Handbook of Complexity and Management, ISBN 9781847875693, pp. 65-78.
- Haken, H. (2006). Self-Organization and Information: A Macroscopic Approach to Complex Systems, ISBN 9783540330219.
- Hazy, J.K. (2006). “Measuring leadership effectiveness in complex socio-technical systems,” Emergence: Complexity & Organization, ISSN 1521-3250, 8(3): 58-77.
- Hazy, J.K. (2007) “Computer models of leadership: Foundations for a new discipline or meaningless diversion?” The Leadership Quarterly, ISSN 1048-9843, 18: 391-410.
- Hazy, J.K. (2008). “Toward a theory of leadership in complex systems: Computational modeling explorations,” Nonlinear Dynamics, Psychology, & Life Sciences, ISSN 1090- 0578, 12(3): 281-310.
- Hazy, J.K. (2011). “Parsing the influential increment the language of complexity: Uncovering the systemic mechanisms of leadership influence,” International Journal of Complexity in Leadership and Management, ISSN 1759-0256, 1(2): 164-191.
- Hazy, J.K. (2012). “Leading large: Emergent learning and adaptation in complex social networks,” International Journal of Complexity in Leadership and Management, ISSN 1759-0256, 2(1/2): 52-73
- Hazy, J.K. and Ashley, A. (2011). “Unfolding the future: Bifurcation in organizing form and emergence in social systems,” Emergence: Complexity & Organization, ISSN 1521-3250, 13(3): 58-80.
- Hazy, J.K., Goldstein, J.A. and Lichtenstein, B.B. (2007). Complex Systems Leadership Theory: New Perspective from Complexity Science on Social and Organizational Effectiveness, ISBN 9780979168864.
- Hazy, J.K., Tivnan, B.F., and Schwandt, D.R. (2011). “Permeable boundaries in organizational learning: Computational modeling explorations,” in Y. Bar-Yam, A. Minai, and D. Braha (eds.), Unifying Themes in Complex Systems, ISBN 9783642176340, pp. 153-163.
- Hazy, J.K. and Uhl-Bien, M. (in press). “Towards operationalizing complexity leadership: How generative, administrative and community-building leadership enact organizational outcomes,” Leadership, ISSN 1742-7150.
- Hirsch, M.W., Smale, S. and Devaney, R.L. (2004). Differential Equations, Dynamical Systems and an Introduction to Chaos, ISBN 9780123820105.
- Holland, J.H. (1975). Adaptation in Natural and Artificial Systems, ISBN 9780262581110.
- Kahneman, D. (2011). Thinking, Fast and Slow, ISBN 9780374533557.
- Katz, D. and Kahn, R.L. (1966). The Social Psychology of Organizations, ISBN 9780471023555.
- Kauffman, S.A. (1993). The Origins of Order: Self-Organization and Selection in Evolution, ISBN 9780195058116.
- Kuhn, T.S. (1962). The Structure of Scientific Revolutions, ISBN 9780226458113.
- Levinthal, D.A. (1997). “Adaptation on rugged landscapes,” Management Science, ISSN 0025- 1909, 43(7): 934-950.
- Mac Lane, S. (1998). Categories for the Working Mathematician: Graduate Texts in Mathematics, ISBN 9780387984032.
- McKelvey, B. (1997). “Perspective: Quasi natural organization science,” Organization Science, ISSN 1047-7039, 8(4): 351-380
- Nelson, R.R. and Winter, S.G. (1982). An Evolutionary Theory of Economic Change, ISBN 9780674272286.
- Nowak, M., Tarnita C.E., and Wilson, E.O., (2010). “The evolution of eusociality,” Nature, ISSN 0028-0836, 466(7310): 1057.
- Okasha, S. (2006). Evolution and the Levels of Selection, ISBN 9780199267972.
- Pearce, J.L. (ed.) (2011). Status in Management and Organizations, ISBN 9780521132961.
- Prietula, M. (2011). “Thoughts on complexity in computational models,” in P.M. Allen, S. Maguire, and B. McKelvey (eds.), The Sage Handbook of Complexity and Management, ISBN 9781847875693, pp. 93-110.
- Prokopenko, M., Boschetti, F., and Ryan, A.J. (2009). “An information-theoretic primer on complexity, self-organization and emergence,” Complexity, ISSN 1099-0526, 15(1): 11-28.
- Service, E. (1962). Primitive Social Organization: An Evolutionary Perspective, ISBN 9780394307831.
- Siggelkow, N. and Rivkin, J. (2005). “Speed and search: Designing organizations for turbulence and complexity,” Organization Science, ISSN 1047-7039, 16(2): 101-122.
- Simon, H.A. (1990). “A mechanism for social selection and successful altruism,” Science, ISSN 0036-8075, 250: 1665-1668
- Surie, G. and Hazy, J.K. (2006). “Generative leadership: Innovation in complex environment,” Emergence: Complexity & Organization, ISSN 1521-3250, 8(4): 13-27.
- Von Bertalanffy, L. (1950). “An outline of General System Theory,” British Journal for the Philosophy of Science, ISSN 0007-0882, 1: 139-164
- Wilson, E.O. (1998). Consilience: The Unity of Knowledge, ISBN 9780679450771.
- Winter, S.G. (2013). “Habit, deliberation, and action: Strengthening the microfoundations of routines and capabilities,” Academy of Management Perspectives, ISSN 1558-9080, 27(2): 120-137.