An intersubjective measure of organizational complexity:
A new approach to the study of complexity in organizations

Mihnea Moldoveanu
Rotman School of Management, University of Toronto, Canada

Abstract

This paper attempts to accomplish the following goals:

  1. formulate and elaborate the epistemological problem of studying organizational complexity qua phenomenon and of using “organizational complexity” qua analytical concept in the study of other organizational phenomena;
  2. propose and defend a solution to this epistemological problem by introducing a definition of complexity that (i) introduces the dependence of ‘complexity of an object’ on the model of the object used, without either (ii) falling into a fully subjective and relative view of complexity or (iii) falling into a falsely subject-independent view thereof, thus (iv) making precise the subjective and objective ‘contributions’ to the definition of complexity, to the end of (v) making ‘complexity’ tout court a useful analytical construct or hermeneutic device for understanding organizational phenomena;
  3. show how the new view of complexity can be usefully applied in conjunction with classical, well-established models of organizations to understand the organizational phenomena that are paradigmatic for the research tradition of each of those models;
  4. derive the implications of the new view of organizational complexity for the way we study and intervene in organizational life-worlds.

“The study of organizational complexity faces a difficult epistemological problem”

Organizational complexity has become a subject of study in organizational research (see, for instance, Anderson, 1999). Researching ‘organizational complexity’ requires one to confront and ultimately resolve, dissolve or capitulate to the difficulties of defining the property of complexity of an organizational phenomenon and (often) of defining and defending a complexity measure for organizational phenomena, one that allows us to declare one phenomenon more complex than another. This minimal conceptual equipment is necessary in view of the age-old concern of scientifically minded researchers to turn qualitative impressions into quantitative measures and representations, but it faces a serious epistemological problem connected to the basic ontology of proposed ‘complexity metrics’ and ‘complexity spaces’. Here is an exposition of that problem, in brief.

Outline of the epistemological problem of talking about ‘complexity’ and using the term ‘complexity’

As with all measures that can be used as research tools, we would like our measures of complexity to be intersubjectively valid: two observers A and B ought, through communication and interaction, to come to an agreement about ‘the complexity of object X’, in the same way that the same two observers, using a yardstick, can come to an agreement about ‘the length of this football field’. The epistemological problem we would like to draw attention to is that of achieving such an intersubjectively valid measure. Of course, questions such as ‘how would we know a complex phenomenon if we saw it?’ and ‘how can the complexity of different phenomena be compared?’ would also be resolved by a solution to this core epistemological problem.

Various definitions of ‘complexity’ in the literature can be understood as purposeful attempts to come to grips with the problem I have outlined above. Thus, consider:

Complexity as structural intricacy: The ‘strong’ objective view of complexity

The outcome of an era of greater ‘self-evidence’ in matters epistemological, the structuralist view of complexity is echoed (as we shall see, with a twist) in organizational research in Simon’s (1962) early work and in Thompson’s (1967) seminal work on organizational dynamics. It is not dissimilar from structuralist analyses of physical and biological structure and function (D’Arcy Thompson, 1934; von Bertalanffy, 1968). It is based on a simple idea: complex systems are structurally intricate. Structural intricacy can best be described by reference to a system that has (a) many parts that are (b) multiply interconnected. In the years since Edward Lorenz’s discovery in 1963 of chaotic behavior in simple dynamical systems, we have come to know both that there exist simple (by the structuralist definition) systems that nevertheless compel us to classify them as complex (such as chaotic systems), and complex systems (by the structuralist definition) that behave in ways more characteristic of simple systems (large modular arrays of transistors, for instance). A finer set of distinctions was called for, and many of these distinctions can be perceived from a more careful study of Simon’s work.

Simon (1962) did not leave the view of complex systems at this: he postulated that complex systems are made up of multiply-interacting multitudes of components in such a way as to make prediction of the overall behavior of the system, starting from knowledge of the behavior of the individual components and their interaction laws or mechanisms, a nontrivial matter. Unwittingly (to many of his structuralist followers), he had introduced the predicament of the observer, the predictor, the modeler, the forecaster, perhaps the actor him/herself, into the notion of the complexity of a phenomenon. But this sleight of hand remained unnoticed, perhaps in part due to Simon’s own emphasis on the structural component of a definition of complexity in the remainder of his (1962) paper. The (large, and growing) literature in organization science that seeks to understand complexity in structuralist terms (as numbers of problem, decision, strategic or control variables and number of links among these variables, or number of value-linked activity sets and number of links among these activity sets - Levinthal & Warglien, 1999; McKelvey, 1999) attests to the fruitfulness of the structuralist definition of complexity (NK(C) models of organizational phenomena can be deployed as explanation-generating engines for product development modularization, firm-level and group-level strategic decision processes, the evolutionary dynamics of firms, products and technologies, and many other scenarios), but does not fully own up to the cost that the modeler incurs in the generalizability of his or her results.

These costs can be understood easily enough if one is sufficiently sensitive to: (a) the relativity of ontology, and; (b) the effects of ontology on model structure. There is no fact of the matter about the identity and the number of interacting components that we may use in order to conceptualize an organizational phenomenon. (Alternatively, we may think of the problem of establishing a self-evident ontology as an undecidable problem.) We may think of organizations as interacting networks of people, behaviors, routines, strategies, epistemologies, emotional states, cultural traditions, and so forth. Moreover, we may expect that within the same organizational phenomenon, multiple such individuations may arise, interact with one another and disappear. This leaves in doubt both the essence of the modules or entities that make up the part-structure of the organizational whole, and the law-like-ness of the connections between these entities. Surely, phenomena characterized by shifting ontologies, changing rule sets and interactions between co-existing, incommensurable ontologies exist (consider cultural transitions in post-communist societies) and are complex, but they are not easily captured in NK(C) models or other models based on networks of simple modules interacting according to locally simple rules. Thus, in spite of the very illuminating analysis of some complex macro-structures as nothing but collections of simple structures interacting according to simple local rules, the structuralist analysis of complexity imposes a cost on the modeler because of an insufficient engagement with the difficult epistemological problem of complexity.

Complexity as difficulty: The subjective view

Running parallel to the structuralist approach to the definition of complexity is a view that considers the complexity of a phenomenon to be related to the difficulty of making competent, valid or accurate predictions about that particular phenomenon. This view was certainly foreshadowed in Simon’s (1962) work, when he stipulated that structurally complex systems are complex in virtue of the fact that predicting their evolution is computationally nontrivial. Of course, he did not consider the possibility that structurally simple systems can also give rise to unpredictable behavior, as is the case with chaotic systems (Bar-Yam, 2000). A system exhibiting chaotic behavior may be ‘simple’ from a structuralist standpoint (a double pendulum is an example of such a system), but an infinitely accurate representation of its initial conditions is required for an arbitrarily accurate prediction of its long-time evolution: phase space trajectories in such a system diverge at an exponential rate from one another (Bar-Yam, 2000). Thus, Simon’s early definition of complexity needs to be amended so as to uncouple structural intricacy from the difficulty of making predictions about the evolution of a system.
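
To fix intuitions, consider a minimal sketch (an illustration of my own, not drawn from the sources cited above) of how a structurally trivial system defeats long-horizon prediction. The one-variable logistic map stands in for the double pendulum; the parameter value and the size of the initial perturbation are arbitrary choices:

```python
# Minimal sketch: a one-variable system, x' = r*x*(1 - x), whose trajectories from
# nearly identical initial conditions diverge, so that ever more accurate knowledge
# of the starting state is needed for longer-horizon prediction.
# The parameter r = 4.0 and the perturbation 1e-10 are illustrative choices.

def logistic_trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)   # almost the same initial condition

for t in (10, 30, 50):
    # the gap between the two trajectories grows rapidly with t (until it saturates)
    print(t, abs(a[t] - b[t]))
```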

This difficulty - of predicting the evolution of complex systems - may not be purely informational (i.e., it may not merely require a theoretically infinite amount of information about initial or boundary conditions). Thus, Rivkin (2000) shows that the problem of predicting the evolution of Boolean networks made up of simple nodes coupled by simple causal laws (NK(C) networks) is computationally intractable when the average number of connections per node in the system increases past a (low) threshold value. And simple self-referential paradoxes highlight the fact that undecidability can arise even in logical systems with a very small number of axioms (e.g., deciding on the truth value of “I am telling a lie”).

The subjective difficulty of predicting the evolution of a complex phenomenon thus seems to be connected to structural complexity in ways that are significantly more subtle and complicated than was pre-figured in Simon’s early model. This situation has led some to treat complexity as a purely subjective phenomenon, related to predictive or representational difficulty alone (Li & Vitanyi, 1993). This has made ‘complex phenomena’ natural candidates for study using paradigms for the study of judgment and decision making under uncertainty.

This tendency is easy enough to understand: an uninformed, computationally weak observer will find interaction with a complex phenomenon to be a predicament fraught with uncertainty and ambiguity (as he or she will not be able to predict its evolution). What matters, then, is not whether or not a phenomenon is complex in some objectively or intersubjectively valid way, but rather whether or not it is difficult for the observer who must interact with this phenomenon to make competent predictions about it, and how such an observer makes his or her predictions. Thus, the very large literature on cognitive biases and fallacies in human reasoning under uncertainty and ambiguity (Kahneman et al., 1982), or on heuristic reasoning in foggy predicaments, can be understood as a branch of the study of complexity, as it studies the ways in which people deal with a core characteristic of complex phenomena, namely, the predictive fog with which they confront human intervenors and observers.

This state of epistemological affairs will hardly be satisfying to those who want to study essential characteristics of complex phenomena - characteristics that are invariant across the observational frames, cognitive schemata and computational endowments of the observers of these phenomena. Such researchers will want to cut through the wealth of complexity-coping strategies that humans have developed over the millennia to the core of what it means for a phenomenon to be complex, and to investigate complexity per se, rather than complexity relative to the way of being-in-the-world of the observer. Such an ambition is not, on the face of it, ridiculous or misguided, as many useful strategies for dealing with complex systems can be discerned from the study of prototypical, simplified, ‘toy’ models of complexity. For instance, the study of chaotic systems has given rise to approaches to harnessing chaos for the generation of secure communications systems that use chaotic waveforms to mask the secret data that one would like to convey across a wire-tapped channel; and the study of computationally complex algorithms (Cormen et al., 1993) has given rise to strategies for distinguishing between computationally tractable and intractable problems, and for finding useful tractable approximations to intractable problems (Moldoveanu & Bauer, 2004).

Nevertheless, purely structural efforts to capture the essence of complexity via caricatured models have failed to achieve the frame-invariant characterization of complexity that some researchers have hoped for. Structurally intricate systems can exhibit simple-to-predict behavior, depending on the interpretive frame and computational prowess of the observer. Difficult-to-predict phenomena can be generated by structurally trivial systems. All combinations of structural intricacy and predictive difficulty seem possible, and there is no clear mechanism for assigning complexity measures to phenomena on the basis of their structural or topological characteristics. An approach that combines the felicitous elements and insights of both the objective and subjective approaches is called for. We will now attempt to provide such a synthetic view of complex phenomena.

“The fundamental problem of complexity studies can be dissolved if we look carefully into the eye of the beholder”: The phenomenon never speaks for itself by itself

A solution to the epistemological problem of speaking about ‘the complexity of a phenomenon’ is provided by looking carefully at the eye of the beholder. It is, itself, a ‘difficult-to-understand’ entity, because it is intimately coupled to the cognitive schemata, models, theories and proto-concepts that the beholder brings to his or her understanding of a phenomenon. It is through the interaction of this ‘eye’ and the phenomenon ‘in itself’ that ‘what is’ is synthesized. In Hilary Putnam’s words (1981), “the mind and world together make up the mind and the world.” Thus, whatever solution to the epistemological problem of complexity is proposed, it will have to heed the irreducibly subjective aspect of perception, conceptualization, representation and modeling. But there is also an irreducibly ‘objective’ component to the solution as well: schemata, models and theories that are ‘in the eye of the beholder’ cannot, by themselves, be the foundation of a complexity measure that satisfies minimal concerns about inter-subjective agreement because such cognitive entities are constantly under the check and censure of ‘the world’, which provides opportunities for validation and refutation. This suggests that a fruitful way to synthesize the subjective and objective viewpoints on complexity of a phenomenon is to measure the complexity of intersubjectively agreed-upon or ‘in-principle’ intersubjectively testable models, representations and simulations of that phenomenon.

This presents us with a problem that has been well known to epistemologists at least since the writings of Kuhn (1962). It is the problem of coming up with a language (for referring to the complexity of a model or theory) that is itself outside of the universe of discourse of any one model, theory or representation. Kuhn pointed to the impossibility of a theory-free observation language, a language that provides observation statements that are not sullied by theoretical language. Putnam (1981) pointed to the impossibility of a theory-free meta-language, a language that contains statements about other possible languages without itself being beholden to any of those languages. Both, however, remained in the realm of language as it is understood in everyday parlance, or in the formal parlance of the scientist. To provide a maximally model-free conceptualization of complexity, I will instead concentrate on ‘language’ as an algorithmic entity, a program that runs on a universal computational device, such as a Universal Turing Machine (UTM). Admittedly, UTMs do not exist in practice, but the complexity measure I put forth can be particularized to specific instantiations of a Turing machine. (The costs of doing so, while not trivial, are not prohibitive.)

If we allow this construction of a language in which a complexity measure can be provided, the following way of conceptualizing the complexity of a phenomenon suggests itself: the complexity of a phenomenon is the complexity of the most predictively competent, intersubjectively agreeable algorithmic representation (or computational simulation) of that phenomenon. This measure captures both subjective and objective concerns about the definition of a complexity measure. It is, centrally, about predictive difficulty. But it is also about intersubjective agreement, about both the semantic and syntactic elements of the model used, about the purpose, scope, scale and accuracy required of the predictions, and therefore about the resulting complexity measure. Thus, the complexity of a phenomenon is relative to the models and schemata used to represent and simulate that phenomenon. It is ‘subjective’. But, once we have intersubjective agreement on ontology, validation procedure and predictive purpose, the complexity measure of the phenomenon being modeled, represented or simulated is intersubjective (the modern word for ‘objective’).

I now have to show how ‘difficulty’ can be measured, in a way that is itself free of the subjective taint of the models and schemata that are used to represent a phenomenon. To do so, I break up ‘difficulty’ into two components. The first - informational complexity, or informational depth (Moldoveanu & Bauer, 2004) - relates to the minimum amount of information required to competently simulate or represent a phenomenon on a universal computational device. It is the working memory requirement for the task of simulating that phenomenon. The second - computational complexity, or computational load (Moldoveanu & Bauer, 2004) - relates to the relationship between the number of input variables and the number of operations that are required by a competent representation of that phenomenon. A phenomenon is ‘difficult to understand’ (or to predict) if its most predictively competent, intersubjectively agreeable model requires an amount of information that is at or above the working memory endowment of the modeler or observer, if the computational requirements of generating predictions about such a phenomenon are at or above the computational endowments of the modeler or observer, or both. To make progress on this definition of complexity and, especially, on its application to the understanding of the complexity of organizational phenomena of interest, we need to delve deeper into the nature of computational load and informational depth.
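
The two-component definition can be rendered schematically as follows (a sketch in my own notation, not the paper’s formal apparatus; the names, units and numerical endowments are hypothetical illustrations):

```python
# Schematic sketch (my notation): 'difficulty' as a relation between a model's two
# complexity components and an observer's endowments. The numbers and the cubic
# load function are hypothetical illustrations.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    informational_depth_bits: int              # minimal description length of the model
    computational_load: Callable[[int], int]   # operations needed for n input variables

@dataclass
class Observer:
    working_memory_bits: int
    operations_budget: int

def is_difficult(model: Model, observer: Observer, n_inputs: int) -> bool:
    """Difficult if the best agreed-upon model exceeds either endowment (or both)."""
    too_deep = model.informational_depth_bits >= observer.working_memory_bits
    too_heavy = model.computational_load(n_inputs) >= observer.operations_budget
    return too_deep or too_heavy

m = Model(informational_depth_bits=10_000, computational_load=lambda n: n ** 3)
o = Observer(working_memory_bits=50_000, operations_budget=1_000_000)
print(is_difficult(m, o, n_inputs=50))    # False: 125,000 operations, 10,000 bits
print(is_difficult(m, o, n_inputs=500))   # True: 125,000,000 operations exceed the budget
```

The point of the sketch is only that difficulty is a relation between a model and an observer’s endowments, not a property of the phenomenon taken by itself.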

The informationally irreducible: What informational depth is and is not

The view of informational depth presented here does not differ from that used in the theory of algorithms and computational complexity theory (Chaitin, 1974). The informational depth of a digital object (an image, a representation, a model, a theory) is the minimum number of elementary symbols required in order to generate that object using a general purpose computational device (Chaitin, 1974). Without loss of generality, we shall stipulate that these symbols should be binary (ones and zeros), knowing that any M-ary alphabet can be reformulated in terms of a binary alphabet. Of course, it matters to the precise measure of informational depth which computational device one uses for representational purposes, and, for this reason, we stipulate that such a device be a Universal Turing Machine (UTM). We do this in order to achieve maximum generality for the measure that I am proposing, but at the cost of using a physically unrealizable device. Maximum generality is achieved because a UTM can simulate any other computational device, and therefore can provide a complexity measure for any simulable digital object. If that object is simulable on some computational device, then it will also be simulable on a UTM.

Admittedly, the cost of using an abstract notion of a computational device (rather than a physically instantiable version of one) may be seen as high by some who are minded to apply measures in order to measure that which (in reality) can be measured, rather than in order to produce qualitative complexity classes of phenomena. In response, we can choose to relax this restriction on the definition of ‘the right’ computational device for measuring complexity, and opt to talk about a particular Turing machine (or other computational device, such as a Pentium or PowerPC processor powering IBM-compatible and Mac machines). This move has immediate consequences in terms of the resulting definition of informational depth (a digital object may take fewer symbols if it is stored in the memory of a Pentium processor than if it is stored in the memory of a PowerPC processor), but this is not an overpowering argument against the generality of the measure of informational depth I am putting forth. It simply imposes a further restriction on the computational device that is considered to be ‘the standard’ for the purpose of establishing a particular complexity measure. To achieve reliability of their complexity measures, two researchers must agree on the computational platform that they are using to measure complexity, not just on the model of the phenomenon whose complexity they are trying to measure and on the boundary conditions of this model (the class of observation statements that are to be considered legitimate verifiers or falsifiers of their model).

What is critically important about the informational depth of a digital object is its irreducibility: it is the minimum length (in bits) of a representation that can be used to generate a digital object given a computational device, not the length of any representation of that object on a particular computational device. Informational depth is irreducible, as it refers to a representation that is informationally incompressible. The sentence (1) ‘the quick brown fox jumped over the lazy dog’ can be compressed into the sentence (2) ‘th qck brn fx jmpd ovr lzy dg’ without information loss (what is lost is the convenience of quick decoding) or even to (3) ‘t qk br fx jd or lz dg’, but information is irretrievably lost if we adopt (4) ‘t q b fx jd or lz dg’ as shorthand for it. Correct decoding gets increasingly difficult as we go from (1) to (2) to (3), and suddenly impossible as we go from (3) to (4). We may say, roughly, that (3) is an irreducible representation of (1), and therefore that the informational depth of (1) is the number of symbols contained in (3). (Note that it is not a computationally easy task to establish informational irreducibility by trial and error. In order to show, for instance, that (3) is minimal with regard to the ‘true meaning’ of (1) (given reliable knowledge of the decoder, which is the reader as s/he knows her/himself), one has to delete each symbol in (3) and examine the decodability of the resulting representation. The computational load of the verification of informational minimality of a particular representation increases nonlinearly with the informational depth of that representation.)
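
Because true informational depth, like Kolmogorov complexity, is not computable in general, a practical proxy is the upper bound supplied by a general-purpose compressor. The following sketch (illustrative only; the choice of compressor and of the strings is mine) shows how redundancy, rather than raw length, determines how far a representation can be compressed:

```python
# Illustrative sketch: a general-purpose compressor yields an upper bound on
# informational depth. Redundancy, not raw length, determines compressibility.
import zlib

samples = {
    "(1) full sentence": b"the quick brown fox jumped over the lazy dog",
    "(3) shorthand":     b"t qk br fx jd or lz dg",
    "patterned":         b"ab" * 200,   # highly repetitive, hence highly compressible
}

for label, s in samples.items():
    compressed = zlib.compress(s, 9)
    print(f"{label:18s} raw={len(s):4d} bytes   compressed upper bound={len(compressed):4d} bytes")
```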

Informational irreducibility and the commonality of a platform for representation together are necessary conditions for the objectification of informational depth as a complexity measure. The first guides the observer’s attention and effort towards the attainment of a global informational minimum in the representation of the phenomenon. The second stipulates that observers use a common benchmark for establishing informational depth. The informational depth of a phenomenon, the informational component of its complexity measure, can now be defined as the minimum number of bits that an intersubjectively agreeable, predictively competent simulation of that phenomenon takes up in the memory of an intersubjectively agreeable computational platform. All the platform now needs to do is to perform a number of internal operations (to ‘compute’) in order to produce a simulation of the phenomenon in question. This takes us to the second component of our complexity measure:

The computationally irreducible: What computation is and is not

As above, we will consider ourselves to have fixed (a) our model of a phenomenon, (b) the boundary conditions for verification or refutation of the model and (c) the computational device that we are working with. We are interested in getting a measure of the computational difficulty (i.e., computational load) of generating predictions or a simulation of that phenomenon using the computational device we have used to store our representation of it. If, for example, the phenomenon is the market interaction of oligopolistic firms in a product or services market and the agreed-upon model is a competitive game-theoretic one, then the representation (the informational component) of the phenomenon will take the form of a set of players, strategies, payoffs and mutual conjectures about rationality, strategies and payoffs, and the computational component will comprise the iterated elimination of dominated strategies required to derive the final market equilibrium. The most obvious thing to do to derive a complexity measure is to count the operations that the computational device requires in order to converge to the required answer. Two problems immediately arise:

P1. The resulting number of operations increases with the number of data points or input variables even for what is essentially ‘the same’ phenomenon. Adding players, strategies or conjectures to the example of game-theoretic reasoning above, for instance, does not essentially change the fact of the matter, which is that we are dealing with a competitive game. We would surely prefer a complexity measure that reflects the qualitative difference between solving for the Nash equilibrium and solving (for instance) for the eigenvalues of a matrix (as would be the case in a linear optimization problem);

P2. Many algorithms are iterative (such as that for computing the square root of N) and can be used ad infinitum, recursively, to generate successively sharper, more accurate approximations to ‘the answer’. Thus, their computational load is in theory infinite, but we know better: they are required to stop when achieving a certain level of tolerance (a certain distance from the ‘right answer’, whose dependence on the number of iterations can be derived analytically, on a priori grounds).

Both (P1) and (P2) seem to throw our way of reasoning about computational difficulty back into the realm of arbitrariness and subjectivity, through the resulting dependence on the precise details of the problem statement (P1) and the level of tolerance that the user requires (P2). To rectify these problems, we will require two modifications to our measure of computational load:

M1. I shall define computational load relative to the number of input variables to the algorithm that solves the problem of simulating a phenomenon. This is a standard move in the theory of computation (see, for instance, Cormen, et al., 1993);

M2. I shall fix (or require any two observers to agree upon) the tolerance with which predictions are to be generated. This move results in defining computational load relative to a particular tolerance in the predictions that the model or representation generates.
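
A minimal sketch of modifications (M1) and (M2), using the square-root example from (P2): the iterative routine is assigned a finite computational load only once a tolerance is fixed, and that load can then be reported as a count of operations. The Newton-Raphson update and the tolerance values are illustrative choices, not part of the original argument:

```python
# Minimal sketch of (M1) and (M2): Newton-Raphson iteration for the square root of N
# stops once an agreed tolerance is reached, and only then does 'number of operations'
# become a finite, reportable quantity. Tolerance values are illustrative.

def sqrt_newton(n, tolerance):
    x = n if n > 1 else 1.0
    operations = 0
    while abs(x * x - n) > tolerance:
        x = 0.5 * (x + n / x)     # one Newton-Raphson refinement
        operations += 1
    return x, operations

for tol in (1e-2, 1e-6, 1e-12):
    root, ops = sqrt_newton(2.0, tol)
    print(f"tolerance={tol:g}  estimate={root:.12f}  iterations={ops}")
```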

Qualitative complexity classes: The simple, fathomable, unfathomable, tractable, intractable, complex, complicated and impossible defined

We are now in a position to give some objective (i.e., intersubjectively agreeable) content to various subjectively suggestive or evocative ways of describing complex phenomena. I will show that common-sense approaches to the description of complex phenomena are rather sharp when it comes to marking distinctions among different informational and computational complexity regimes.

Distinctions in informational space: Fathomable and unfathomable phenomena

A laboratory experiment studying the results of a two-player competitive bidding situation is a fathomable phenomenon if we stick to some basic assumptions - that the subjects will follow basic rules of cooperative behavior relative to the experimenter, and of incentive-driven behavior relative to one another. We can create a representation of the situation that is workable: storable in a reasonably-sized computational device. The same experiment, when conceived as an open-ended situation in which the turmoils and torments of each of the subjects matter to the outcome, along with minute differences in their environmental conditions, upbringing or neuropsychological characteristics, becomes unfathomable: its representation exceeds not only the working memory of an average observer, but can also easily overwhelm the working memory of even very large computational devices. Unfathomability can also result from too little information, as makers of movies in the ‘thriller’ genre have discovered. A sliced-off human ear sitting on a lawn on a peaceful summer day (as in Blue Velvet) is unfathomable in the sense that too little context-fixing information is given for one to adduce a plausible explanation of what happened (or, a plausible reconstructive simulation of the events that resulted in this state of affairs). Thus, (1) ‘the quick brown fox jumped over the lazy dog’ is fathomable from (2) ‘th qck brn fx jmpd ovr lzy dg’ or even from (3) ‘t qk br fx jd or lz dg’, but not from (4) ‘t q b fx jd or lz dg’. Compression below the informational depth of an object can also lead to unfathomability, in the same way in which informational overload can.

Distinctions in computational space: Tractable, intractable and impossible

Along the computational dimension of complexity, we can distinguish among three different classes of difficulty. Tractable phenomena are those whose simulation (starting from a valid model) is computationally simple. We can predict rather easily, to an acceptable level of accuracy, the impact velocity of a coin released through the air from a known height, ignoring air resistance and starting from the constitutive equations for kinetic and potential energy. Similarly, we can efficiently predict the strategic choices of a firm if we know the subjective probabilities and values attached to various outcomes by its strategic decision makers, and we start from a rational choice model of their behavior. It is, on the other hand, computationally much harder to predict with tolerable accuracy the direction and velocity of the flow of a tidal wave running aground (starting from an initial space-time distribution of momentum (the product of mass and velocity), knowledge of the Navier-Stokes equations and a profile of the shore). It is, similarly, computationally difficult to predict the strategic choices of an organization whose output and pricing choices beget rationally targeted reaction functions from its competitors, starting from a competitive game model of interactive decision making and knowledge of the demand curve in its market.
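
The first, tractable benchmark can be made explicit. Ignoring air resistance, as stipulated above, equating the potential energy lost with the kinetic energy gained yields the impact velocity in closed form (the 5 m drop height is an illustrative value):

```latex
m g h = \tfrac{1}{2} m v^{2}
\quad \Longrightarrow \quad
v = \sqrt{2 g h} \approx \sqrt{2 \times 9.8\,\mathrm{m\,s^{-2}} \times 5\,\mathrm{m}} \approx 9.9\,\mathrm{m\,s^{-1}}.
```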

Computation theorists (see, for example, Cormen et al., 1993) distinguish between computationally easy (tractable) and difficult (intractable) problems by examining the relationship between the number of independent variables of the problem and the number of operations required to solve it. They call tractable those problems requiring a number of operations that is at most a polynomial function of the number of independent or input variables (problems in the class P) and intractable those problems requiring a number of operations that grows faster than any polynomial function of the number of independent or input variables (NP-hard problems). This demarcation point provides a qualitative marker for computation-induced complexity: we might expect, as scholars of organizational phenomena, different organizational behaviors in response to interaction with P and NP-complex phenomena, as has indeed been pointed out (Moldoveanu & Bauer, 2003b).
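
The difference between the two regimes is easiest to see in the growth of operation counts. In the stylized sketch below (not drawn from Cormen et al.; the functions n^2 and 2^n are stand-ins for a polynomial-time algorithm and a brute-force search over subsets, respectively), the exponential count overwhelms the polynomial one long before n reaches organizationally realistic sizes:

```python
# Stylized sketch: operation counts for a tractable (polynomial) and an intractable
# (exponential) problem as the number of input variables n grows. n^2 stands in for,
# e.g., comparing every pair of n items; 2^n for examining every subset of n items.

def polynomial_ops(n):
    return n ** 2

def exponential_ops(n):
    return 2 ** n

for n in (10, 20, 40, 80):
    print(f"n={n:3d}   polynomial={polynomial_ops(n):>8,d}   exponential={exponential_ops(n):,d}")
```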

Of course, not all problems are soluble, and not all phenomena are simulable or representable on a finite state computational device. Impossible phenomena are precisely those that cannot be so simulated, or, more precisely, whose simulation gives rise to a provably impossible problem. The problem of deciding whether the sentence ‘I am telling a lie’ is true or false, for instance, is provably impossible to solve; so is the problem of predicting, to arbitrary accuracy and at an arbitrarily distant point in time, the position and velocity of the end-point of a double pendulum described by a second-order nonlinear equation exhibiting chaotic behavior, starting from a finite-precision characterization of the initial conditions (displacement, velocity) of the different components of the pendulum.

Distinctions based on interactions between the informational and computational spaces: Simple, complicated and complex

I have, thus far, introduced qualitative distinctions among different kinds of complex phenomena, which I have based on natural or intuitive quantizations of the informational and computational components of complexity. Now, I shall introduce qualitative distinctions in complexity regimes that arise from interactions of the informational and computational dimensions of complexity. We intuitively call complicated those phenomena whose representations are informationally shallow (or, simple) but computationally difficult (but not impossible). The Great Wall of China or the Egyptian pyramids, for instance, are made up of simple building blocks (stone slabs) that are disposed in intricate patterns. One way in which we can understand what it is to understand these structures is to imagine the task of having to reconstruct them using virtual stone slabs in the memory of a large digital device, and to examine the difficulties of this process of reconstruction. In both cases, simple elementary building blocks (slabs and simple patterns of slabs) are iteratively concatenated and fit together to create the larger whole. The process can be represented easily enough by a skilled programmer as a series of nested loops that all iterate on combinations of the elementary patterns. Thus, the digital program that reconstructs the Great Wall of China or the Pyramids of Egypt in the memory of a digital computer does not take up a large amount of memory (and certainly far less memory than a straightforward listing of all of the features in these structures as they appear to the observer), but is computationally very intensive (the nested loops, while running, perform a great number of operations). In the organizational realm, complicated phenomena may be found to abound in highly routinized environments (such as assembly and production lines) where the overall plans are informationally highly compressed but drive a high computational load.
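
A minimal sketch of a ‘complicated’ object in this sense (the ‘wall’ and its dimensions are illustrative inventions): a few lines of nested loops - an informationally shallow description - drive a large number of placement operations:

```python
# Minimal sketch: a short program (informationally shallow) whose nested loops
# perform many operations to lay out a large, repetitive structure of identical
# 'slabs'. The dimensions are illustrative.

def build_wall(sections=25, rows=20, slabs_per_row=80):
    wall = []
    operations = 0
    for s in range(sections):                  # a few lines of description ...
        for r in range(rows):
            row = []
            for k in range(slabs_per_row):
                row.append(("slab", s, r, k))  # ... but many placement operations
                operations += 1
            wall.append(row)
    return wall, operations

wall, ops = build_wall()
print("slabs placed:", ops)   # 25 x 20 x 80 = 40,000 operations from ~15 lines of code
```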

I propose calling complex those phenomena and structures whose representations are informationally deep but computationally light. Consider an anthill exhibiting no correlations among the various observable features that characterize it. Using the method of the previous example, consider the process of reconstructing the anthill in the memory of a digital device. In the absence of correlations that can be exploited to reduce the information required to represent the anthill, the only way to achieve an accurate representation thereof is to store it as a three-dimensional image (a hologram, for instance). The process of representing it is computationally simple enough (it just consists of listing each voxel, or three-dimensional pixel), but informationally it is quite involved, as it entails storing the entire structure. Complex phenomena in the organizational realm may be found wherever the intelligibility of overall behavioral patterns is only very slight, as it is in securities trading and in complex negotiations within and between executive teams.
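
By contrast, a minimal sketch of a ‘complex’ object in this sense (the random ‘anthill’ and its dimensions are again illustrative inventions): with no exploitable correlations, nearly every voxel must be stored, while the computation needed to replay the stored structure is trivial:

```python
# Minimal sketch: an uncorrelated 'anthill' of random voxels. With no correlations
# to exploit, reproducing it means listing essentially every voxel (informationally
# deep), while each voxel requires only one trivial operation to replay
# (computationally light). The dimensions are illustrative.
import random

random.seed(0)
side = 40
anthill = [random.getrandbits(1) for _ in range(side ** 3)]   # one bit per voxel

print("voxels that must simply be listed:", len(anthill))     # 40^3 = 64,000
print("operations needed per voxel to replay the structure:", 1)
```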

By analogy, I propose calling simple those phenomena whose representations are computationally light and informationally shallow. These phenomena (such as frictionless pulleys and springs and point masses sliding down frictionless inclined planes in physics, choice and learning phenomena in low-stakes environments in economics and psychology, mimetic transfer of knowledge and routines in sociology) are usually ‘building blocks’ for understanding other, more complicated phenomena (collections of pulleys making up a hoist, the suspension system of a car, market interactions, organizational patterns of knowledge diffusion). They often constitute the ‘paradigm’ thought experiments around which disciplines (i.e., attempts to represent the world in words and equations) are founded (Kuhn, 1990). We will encounter such ‘simple’ phenomena in greater detail in the subsequent sections, which aim to show how our measure of complexity cuts across various ways of looking at organizations and modeling their behavior.

“The new conceptualization of complexity helps us see our way through any organizational model to the complexity of the underlying phenomenon”

The benefit of this new representation of complexity lies not only in the fact that it can make precise lay intuitions about various terms that have been (loosely) used to describe the user’s predicament when faced with a complex phenomenon (as shown above), but also in the fact that it can provide a model-invariant approach to the representation and measurement of the complexity of organizational phenomena. ‘Model-invariant’ means that the representation of complexity can be used in conjunction with any model of organizational phenomena that is amenable to algorithmic representation (a weak condition, satisfied by all models that are currently in use in organization science, studies and theory). To substantiate this claim, I will now analyze the models of organizational phenomena that have come to dominate the literature during the past 20 years, and show how the complexity measure that I have developed here can be used to quantify the complexity of the paradigmatic phenomena that these models were meant to explain.

It is important to understand what ‘quantifying the complexity of a phenomenon’ is supposed to signify here. As shown above, ‘the complexity of a phenomenon’ is an ill-defined concept, unless we make reference to a particular (intersubjectively tested or testable) model of that phenomenon, which we will agree to provide the basis for a replication of that phenomenon as a digital object (i.e., to provide a simulation of that phenomenon). The phenomenon enters the process of complexity quantification via the model that has been chosen for its representation. Hence, the subjective analytical and perceptual mindset of the observer of the phenomenon is incorporated into the complexity measure of the phenomenon, and the intersubjective (i.e., objective) features of the phenomenon are taken into consideration via the requirement that the model used as the basis of the complexity measurements be intersubjectively agreeable (i.e., that two observers using the model to represent or predict a phenomenon can reach agreement on definitions of terms, the relationship of raw sense data to observation statements, and so forth). The models that will be analyzed below are already, in virtue of their entrenchment in the field of organizational research, well-established in the inter-subjective sense: they have been used as coordinative devices by researchers for many years, and have generated successful research programmes (i.e., research programmes well-represented in the literature). Thus, we are justified in studying the complexity of phenomena that are paradigmatic for the use of these models via studying the (informational and computational) complexity of the models themselves, and remain confident that we are not just engaging in the measurement or quantification of pure cognitive structures. Moreover, the complexity measures that will emerge are intersubjectively agreeable (i.e., ‘objective’) in spite of the fact that the inputs to the process of producing them have a subjective component.

Organizations as systems of rules and rule-based interactions

Recent efforts at modeling organizations have explicitly recognized them as systems of rules and rule-based interactions among multiple agents who follow (locally) the specified rules. The modeling approach to organizations as rule-based systems comprises three steps: a. the specification of a plausible set of micro-rules governing interactions among different agents; b. the specification of a macro-level phenomenon that stands in need of an explanation that can be traced to micro-local phenomena, and; c. the use of the micro-local rules, together with initial and boundary conditions, to produce simulations of the macroscopic pattern that emerges. Simple local rules (such as local rules of deference, cooperation, competition and discourse) can give rise to complex macroscopic patterns of behavior, which may or may not be deterministic (in the sense that they vary with the nature of the micro-local rules but do not change as a function of changes in initial and boundary conditions). A simple micro-local rule set that is plausible on introspective and empirical grounds, such as Grice’s (1975) cooperative logic of communications (which requires agents to interpret each other’s utterances as being both informative and relevant to the subject of the conversation, i.e., ‘cooperative’), can, for instance, lead to organizational patterns of herd behavior in which everyone follows the example of a group of ‘early movers’ without challenging their assumptions.

The rule-based approach to modeling organizational phenomena is congenial to the computational language introduced in this paper, and lends itself to an easy representation in complexity space. A rule is a simple semantic-syntactic structure of the type ‘if A, then B’, ‘if not A, then not B’, ‘if A, then B, except for the conditions under which C occurs’, or ‘if A, then possibly B’. Agents, acting locally, ascertain the ‘state of the world’ (i.e., ‘A’) and take action that is deterministically specified by the rule that is deemed applicable (‘if A, then B’, for instance). In so doing, they instantiate a new state of the world (‘C’), to which other agents react using the appropriate set of rules. (I shall leave aside, for the purpose of this discussion, the very important questions of ambiguous rules, conflicts among rules, and rules about the use of rules, but they are discussed in Moldoveanu & Singh (2003).) Sets of agents interacting on the basis of micro-local rules (statistical or deterministic) can be represented as cellular automata (Wolfram, 2002), with agents represented by nodes (each completely described by a set of elementary states that change as a function of rules and the states of other agents) and a set of rules of interaction (denumerable, finite and either statistical or deterministic). This clearly ‘computational’ (but quite general, see Wolfram, 2002) interpretation of organizations-as-rule-systems is easily amenable to an application of the complexity measures that I have introduced above. First, the informational depth of a phenomenon explained by a valid rule-based model is the minimum description length of a. the agents; b. the micro-local rules, and; c. the initial and boundary conditions required to suitably simulate that phenomenon. The computational load of such a phenomenon is the relationship between the number of input variables (agents, agent states, rules, initial conditions, boundary conditions) that the model requires and the number of operations it must perform to produce a successful simulation (i.e., a successful replication of the macroscopic pattern that stands in need of explanation). Thus, when seen through the lens of rule-based systems of interactions, the ‘measure’ of organizational phenomena in complexity space is easily taken - a fortunate by-product of the universality of cellular automata as models of rule-based interacting systems of micro-agents. (Note that the applicability of the complexity measure to phenomena seen through a rule-based interacting-systems lens depends sensitively on the universality of the cellular automata instantiation of rule-based systems.)
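
A minimal sketch of such a rule-based system (illustrative only; the rule number, lattice size and time horizon are arbitrary choices) shows how the two components of the complexity measure attach to it: the informational depth is carried by the rule table plus the initial condition, and the computational load by the count of rule lookups needed to replay the macroscopic pattern:

```python
# Minimal sketch: micro-agents on a ring, each updating its binary state from its
# own state and its two neighbours' states according to a fixed local rule table
# (an elementary cellular automaton in the sense of Wolfram, 2002). The rule
# number (110), lattice size and horizon are illustrative choices.

def step(cells, rule=110):
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right   # a number from 0 to 7
        new.append((rule >> neighbourhood) & 1)               # look up the rule table
    return new

# Informational depth here: the 8-bit rule table plus the initial condition.
cells = [0] * 40
cells[20] = 1
lookups = 0
for _ in range(20):          # the macroscopic pattern emerges from micro-local rules
    cells = step(cells)
    lookups += len(cells)    # one rule lookup per agent per time step

print("".join("#" if c else "." for c in cells))
print("rule lookups performed (computational load):", lookups)   # 40 x 20 = 800
```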

The complexity measures that I have introduced can also be used to ask new questions (to the end of garnering new insights and exploring - or generating - new phenomena) of rule-based models of organizational phenomena, such as:

  1. How does the informational depth of micro-rules affect the computational load of the macro-phenomena that depend causally on them? Is there a systematic relationship between the complexity of micro-rules and the complexity of macro-patterns?

    Answering such questions could lead to the articulation of a new, intelligent craft of organizational rule design.

  2. How does macroscopic complexity affect the design of micro-local rule sets? What are the conditions on the rationality of individual rule designers that would be required for them to purposefully alter macroscopic patterns through the alteration of micro-local rules?

    Answering these questions could lead to a new set of training and simulation tools for organizational rule design and rule designers, and would also point to the bounds and limitations of the ‘engineering’ approach to rule system design.

Organizations as spatio-temporally stable behavioral patterns (routines and value-linked activity chains)

Organizations can also be modeled as systems of identifiable routines or activity sets, according to a dominant tradition in organizational research that dates back to at least the seminal work of Nelson & Winter (1982). A routine is a stable, finite, repeated behavioral pattern, involving one or more individuals within the organization. It may or may not be conceptualized as a routine (i.e., it may or may not have been made explicit as a routine by the followers of the routine). Being finite and repeated, routines are easily modeled either as algorithms or as the process by which algorithms run on a computational (hardware) substrate. Because an algorithm prescribes a sequence of causally linked steps or elementary tasks, wherein the output of one step or task is the input to the next step or task, the language of algorithms may in fact supply a more precise definition of what a routine is: it is a behavioral pattern that is susceptible to an algorithmic representation (Moldoveanu & Bauer, 2004). For example, an organizational routine for performing due diligence on a new supplier or customer might include: a. getting names of references; b. checking those references; c. tracking the results of the evidence-gathering process; d. computing a weighted decision metric that incorporates the evidence in question, and; e. making a go/no go decision regarding that particular supplier or customer. The steps are linked (the output of one is the input to the other) and the process is easily teachable and repeatable.
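
The due-diligence routine just described can be rendered as an algorithm in which each step’s output is the next step’s input. In the sketch below, the step functions mirror the list (a)-(e) above, but the helper names, scoring rule, weight and threshold are hypothetical illustrations rather than part of the original description:

```python
# Schematic sketch of the routine as a linked algorithm. The step functions mirror
# (a)-(e) above; the helper names, scoring rule, weight and threshold are
# hypothetical illustrations.

def get_reference_names(candidate):            # step (a): get names of references
    return candidate.get("references", [])

def check_references(names):                   # step (b): check those references
    return [{"name": n, "ok": not n.startswith("unknown")} for n in names]

def track_evidence(checks):                    # step (c): track the gathered evidence
    return [1.0 if c["ok"] else 0.0 for c in checks]

def weighted_metric(scores, weight=1.0):       # step (d): weighted decision metric
    return weight * sum(scores) / len(scores) if scores else 0.0

def go_no_go(metric, threshold=0.7):           # step (e): go / no-go decision
    return "go" if metric >= threshold else "no go"

def due_diligence(candidate):
    """Each step consumes the previous step's output: a linked, repeatable routine."""
    return go_no_go(weighted_metric(track_evidence(check_references(get_reference_names(candidate)))))

print(due_diligence({"references": ["Acme Corp", "unknown vendor", "Globex"]}))   # 2/3 < 0.7 -> 'no go'
```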

Understanding routines as algorithms (or, as the running of algorithms) allows us to easily apply our complexity measures to routine-based models of organization. Specifically, we map the number of linked steps involved in the algorithmic representation of a routine to the computational load of the routine. The informational depth of a routine is given by the size of the blueprint or representation of the routine qua algorithm. These modeling moves together allow us to investigate the complexity of organizational phenomena through the lens provided by routine-based models thereof. Routine sets may be more or less complex in the informational sense as a function of the size of the memory required to store the algorithms that represent them. They may be more or less complex in the computational sense as a function of the number of steps that they entail.

Given this approach to the complexity of routines, it becomes possible to examine the important question of designing effective routines in the space spanned by the informational and computational dimensions of the resulting phenomena. Here, the language of algorithm design and computational complexity theory proves to be very helpful to the modeler. For example, algorithms may be defined recursively, to take advantage of the great sequential speeds of computational devices. A recursive algorithm is one that takes, in successive iterations, its own output at a previous step as its input at the next step, converging, with each step, towards the required answer or a ‘close-enough’ approximation to the answer. Examples of recursive algorithms include those used to approximate transcendental numbers such as π or e, which produce ‘additional decimals’ with each successive iteration, and can be re-iterated ad infinitum to produce arbitrarily close approximations to the exact value of the variable in question. Defining algorithms recursively has the advantage (in a machine that is low on memory but high on sequential processing speed) that costly storage (and memory access) is replaced with mechanical (‘mindless’) raw (and cheap) computation.
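
A minimal sketch of this trade-off (illustrative only; the series, the tolerances and the comparison with a stored constant are my own choices): the Leibniz series refines an approximation of π by iterating a computationally cheap step while storing almost nothing, whereas storing a high-precision value of π is informationally deeper but computationally trivial:

```python
# Minimal sketch: recomputation versus storage. The Leibniz series
# pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... refines an approximation of pi with each cheap
# iteration while storing almost nothing; storing a high-precision constant is the
# informationally deeper, computationally trivial alternative. Tolerances are illustrative.
import math

def pi_leibniz(tolerance):
    total, k, term = 0.0, 0, 1.0
    while abs(term) > tolerance:          # stop once the latest refinement is within tolerance
        term = (-1) ** k / (2 * k + 1)
        total += term
        k += 1
    return 4 * total, k

for tol in (1e-2, 1e-4, 1e-6):
    approx, iterations = pi_leibniz(tol)
    print(f"tolerance={tol:g}  estimate={approx:.6f}  iterations={iterations}  "
          f"error={abs(approx - math.pi):.1e}")
```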

Organizational routines may, following the analogy of algorithms, be usefully classified as ‘recursive’ or ‘non-recursive’, depending on their structure. Some organizational tasks (such as the planning of inventories of products, assemblies, sub-assemblies and components) may be seen as successive (recursive) applications of a computationally intelligible ‘kernel’ (matrix multiplication or matrix inversion, Moldoveanu & Bauer, 2004) at successively ‘finer’ resolutions (or levels of analysis), in such a way that ‘high-level’ decisions (regarding product inventory, say) become inputs to applications of the algorithm at lower levels of analysis (regarding assemblies or components). Other organizational tasks (such as reasoning by abduction in order to provide an inference to the best explanation of a particular organizational or environmental phenomenon) are not susceptible prima facie to a recursive algorithmic interpretation and entail a far greater informational depth.

Mapping routines to algorithms allows us both to consider the evolution of routines (from a structural or teleological perspective) and to incorporate the phenomenological aspect of the complexity of the resulting phenomena into the routine-based analysis of organizational phenomena. In particular, we can produce a canonical representation of different routine-based phenomena in the language of algorithms, whose informational depth and computational load can be quantified, and ask:

  1. How do routine sets adapt to their own complexity? Are there canonical self-adaptation patterns of routine sets to increases in computational load or informational depth?
  2. How should routine designers trade off between informational depth and computational load in conditions characterized by different configurations of (informational-computational) bounds to rationality? How do they make these trade-offs?
  3. Are there general laws for the evolution of complexity? How does the complexity of routine sets evolve over time?

Organizations as information processing and symbol manipulation systems

Perhaps the classical tradition in the study of organizations most congenial to the computational interpretation being put forth in this paper is the one originating with the Carnegie School (March & Simon, 1958; Cyert & March, 1963; Simon, 1962). In that tradition, organizations are considered as information processing systems. Some approaches stress the rule-based nature of the information processing task (Cyert & March, 1963), but do not ignore the teleological components thereof. Others (Simon, 1962) stress the teleological component, without letting go of the fundamentally rule-based nature of systematic symbolic manipulation. What brings these approaches together, however, is a commitment to a view of organizations as purposive (but boundedly far-sighted) symbol-manipulation devices, relying on ‘hardware’ (human and human-created) and a syntax (grammatical syntax, first-order logic) for ‘solving problems’ (whose articulation is generally considered exogenous to the organization: ‘given’, rather than constructed within the organization).

There is a small (but critical) step involved in the passage from a view of organizations-as-information-processing-structures to an algorithmic description of the phenomena that this view makes it possible for us to represent. This step has to do with increasing the precision with which we represent the ways in which organizations process information, in particular, with representing information-processing functions as algorithms running on the physical substrate provided by the organization itself (a view that is strongly connected to the ‘strong-AI’ view of the mind that fuelled, at a metaphysical level, the ‘cognitive revolution’ in psychology, a by-product of the Carnegie tradition). This step is only possible once we achieve a phenomenologically and teleologically reasonable partitioning of problems that organizations can be said to ‘solve’ or attempt to solve into structurally reliable problem classes. It is precisely such a partitioning that is provided by the science of algorithm design and analysis (Cormen, et al., 1993), which, as we saw above, partitions problems into tractable, intractable and impossible categories on the basis of structural isomorphisms among solution algorithms. ‘What the organization does’, qua processor of information, can now be parsed in the language of tractability analysis in order to understand: a. the optimal structure of information processing tasks; b. the actual structure of information processing tasks, and; c. structural, teleological and phenomenological reasons for the divergence between the ideal and the actual.

That we can now do such an analysis (starting from a taxonomy of problems and algorithms used to address them) is in no small measure due to the availability of complexity measures for algorithms (and implicitly for the problems these algorithms were meant to resolve). Informational and computational complexity bound from above the adaptation potential of the organization to new and unforeseen conditions, while at the same time providing lower bounds for the minimum amount of information processing required for the organization to survive qua organization. In this two-dimensional space, it is possible to define a structural ‘performance zone’ of the organization seen as an information processing device: it must function at a minimum level of complexity (which can be structurally specified) in order to survive, but cannot surpass a certain level of complexity in order to adapt (which can also be structurally specified and validated on teleological and phenomenological grounds). Adaptation, thus understood, becomes adaptation not only to an exogenous phenomenon, but also to the internal complexity that efforts to adapt to that phenomenon generate. This move makes complexity (informational and computational) a variable whose roles as caused and causer must be simultaneously considered.

Organizations as systems of interpretation and sense-making

It may seem difficult to reconcile the starkly algorithmic view of complexity that I have put forth here with a view of organizations as producers and experiencers of the classical entities of ‘meaning’: narrative, symbol, sense, interpretation and meaning itself (Weick, 1995). This is because it does not seem an easy (or even possible) task to map narrative into algorithm without losing the essential quality of either narrative or algorithm in the process. It is, however, possible to measure (in complexity space) that which can be represented in algorithmic form, not only about narrative itself, but also, perhaps more importantly, about the processes by which narratives are created, articulated, elaborated, validated and forgotten. These processes (by which organizations interact with the narratives that they produce and ‘live’ those narratives) are often more amenable to algorithmic interpretation than are the narratives themselves, and are equally important to the evolution of the organizations.

To see how narrative and the classical structures of meaning can be mapped into the algorithmic space that allows us to measure the complexity of the resulting phenomena, let us break down the narrative production function into three steps. The first is an ontological one: an ontology is created, and comes to inhabit the ‘subjects’ of the narrative. The organization may be said to be populated by ‘people’, by ‘embodied emotions’, by ‘transactions’, by ‘designs and technologies’, and so forth. These are the entities that do ‘causal work’, in terms of which other entities are described. Surely, the process by which an ontology is created cannot (and should not) be algorithmically represented, but this is not required for the algorithmic representation of this initial ontological step. Every algorithm begins with a number of ‘givens’ (which factor into its informational depth) which are ‘undefined’ (either because they have been implicitly defined or because there is nothing to be alarmed about in leaving them undefined). That which does matter to the algorithmic representation of this first, narrative-defining step is precisely the mapping of ontological primitives onto other primitives, over which most narrative-designers (like any theorist) often fret, and in particular: how deep is it? How many times does the question ‘what is X?’ have to be answered in the narrative? For instance, does (one particularly reductive kind of) narrative require that organizations be analyzed in terms of individuals, individuals in terms of beliefs and desires, beliefs and desires in terms of neurophysiological states, neurophysiological states in terms of electrochemical states … and so forth? Or, rather, does the analysis stop with beliefs and desires? The articulation of the ontological first step in the production of narrative can, it turns out, be analyzed in terms of the complexity metrics I have introduced here. At the very least, we can distinguish between informationally deep ontologies and informationally shallow ones, with implications, as we shall see, for the computational complexity of the narrative-bearing phenomenon that we are studying.
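
As an illustration of this first, ontological step, the following sketch (illustrative only; the dictionary-based representation and the example ontologies are assumptions introduced here, not the author's formalism) measures the informational 'depth' of an ontology as the longest chain of 'what is X?' reductions before the analysis bottoms out in undefined givens.

```python
# A minimal sketch (an assumption, not the author's formalism) of how the 'depth' of a
# narrative's ontology might be measured: each entity is mapped to the entities in
# terms of which it is analyzed, and depth is the longest chain of 'what is X?'
# reductions before the analysis bottoms out in undefined givens.

def ontology_depth(ontology, entity, _seen=None):
    """Longest reduction chain starting from `entity`; primitives (absent keys) have depth 0."""
    _seen = _seen or set()
    if entity in _seen or entity not in ontology:
        return 0
    _seen = _seen | {entity}
    return 1 + max((ontology_depth(ontology, child, _seen) for child in ontology[entity]),
                   default=0)

# A reductive ontology: organizations -> individuals -> beliefs/desires -> neurons -> chemistry
reductive = {
    "organization": ["individual"],
    "individual": ["belief", "desire"],
    "belief": ["neurophysiological state"],
    "desire": ["neurophysiological state"],
    "neurophysiological state": ["electrochemical state"],
}
# A shallow ontology whose analysis stops at beliefs and desires
shallow = {"organization": ["individual"], "individual": ["belief", "desire"]}

print(ontology_depth(reductive, "organization"))  # 4 -> informationally deep
print(ontology_depth(shallow, "organization"))    # 2 -> informationally shallow
```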

The second important step is that of development or proliferation of a narrative: the process by which, through the purposive use of language, the filtering of relevant from irrelevant information and the use of the relevant information to give words meaning, narratives ‘take over’ a particular component of organizational life. It is the case that the subjective experience of ‘living a story’ cannot be precisely algorithmically replicated, but each of the steps of this experience, once we agree on what, precisely, they are, can be given an algorithmic interpretation whose complexity we can measure (in both the informational and computational sense). The process of ‘validation’ of a story, for instance, can easily be simulated using a memory-feedback filter, a comparison operation and a decision based on a threshold (regardless of whether the narrative validator is a justificationist or falsificationist). Of course, validation processes may differ in computational complexity according to the design of the filter used to select the data that purports to ‘make true’ the narrative. Abductive filters (based on inference to the best explanation) will be computationally far more complex than inductive filters (based on straight extrapolation of a pattern, Bylander et al., 1991), just as deductive processes of theory testing will be more computationally ‘heavy’ than will inductive processes that serve the same purpose.
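
The following sketch (a toy illustration under assumptions of my own; the threshold, window size and example 'stories' are hypothetical) shows how such a validation process, a memory-feedback filter followed by a threshold decision, might be rendered in algorithmic form so that its informational and computational requirements can be inspected.

```python
# A minimal sketch (hypothetical, not the author's model) of narrative 'validation'
# as a memory-feedback filter with a threshold decision: observations are compared
# with what the narrative predicts, discrepancies are accumulated in a memory window,
# and the narrative is retained or rejected depending on whether the accumulated
# discrepancy stays below a threshold.

from collections import deque

def validate_narrative(predict, observations, window=5, threshold=1.0):
    """predict: callable mapping time step -> predicted value (the narrative's claim).
    Returns True ('story survives') if the mean absolute error over a sliding
    memory window never exceeds the threshold."""
    memory = deque(maxlen=window)               # the 'memory' element of the feedback filter
    for t, observed in enumerate(observations):
        memory.append(abs(observed - predict(t)))   # comparison step
        if sum(memory) / len(memory) > threshold:   # threshold decision
            return False
    return True

# Example: a narrative claiming steady growth of 2 units per period.
growth_story = lambda t: 2 * t
print(validate_narrative(growth_story, [0, 2.1, 3.9, 6.2, 8.0]))   # True: story validated
print(validate_narrative(growth_story, [0, 5.0, 9.0, 1.0, 20.0]))  # False: story rejected
```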

Thus, we can distinguish usefully between simple and complex narratives (and even among simple, complicated, tractable, intractable and impossible ones) and examine the effects of the complexity of these narratives on the evolution of the organization (and on the narratives themselves), as long as we are willing to make a sacrifice in the phenomenological realm and allow that not all components of a structure-of-meaning can be usefully represented in algorithmic form, in exchange for being able to cut more deeply and narrowly into a set of variables that influence the evolution and dynamics of such structures in organizational life.

Organizations as nexi of contracts and as competitive and cooperative (coordinative) equilibria among rational or boundedly rational agents

Not surprisingly, approaches to organizational phenomena based on economic reasoning (Tirole, 1988) are congenial to an algorithmic interpretation and therefore to the measures of complexity introduced in this paper. One line of modeling considers firms as nexi of contracts between principals (shareholders) and agents (managers and employees) (Tirole, 1988). A contract is an (implicit or explicit) set of contingent agreements that aims to credibly specify incentives to agents in a way that aligns their interests with those of the principals. It can be written up as a set of contingent claims by an agent on the cash flows and residual value of the firm (i.e., as a set of ‘if … then’ or ‘iff … then’ statements), or, more precisely, as an algorithm that can be used to compute the (expected value of the) agent’s payoff as a function of changes in the value of the asset that he or she has signed up to manage. In the agency-theoretic tradition, the behavior of the (self-interested, monetary expected value-maximizing) agent is understood as a quasi-deterministic response to the contract that he or she has signed up for. Thus, the contract can be understood not only as an algorithm for prediction, by the agent, of his or her payoff as a function of the value of the firm in time, but also as a predictive tool for understanding the behavior of the agent tout court.
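
A minimal sketch of this reading of a contract follows (the clauses, parameter names and dollar figures are hypothetical, introduced only for illustration): the contract is rendered literally as an algorithm, a set of 'if ... then' contingent claims, that computes the agent's payoff from the realized value of the firm.

```python
# A minimal sketch (illustrative only) of a contract represented as an algorithm:
# a set of 'if ... then' contingent claims that computes the agent's payoff as a
# function of the realized value of the firm. Numbers and clause names are hypothetical.

def agent_payoff(firm_value, base_salary=100_000, hurdle=10_000_000,
                 bonus_rate=0.01, option_strike=12_000_000, option_share=0.02):
    """Compute the agent's payoff from the firm's realized value via contingent clauses."""
    payoff = base_salary                              # unconditional claim
    if firm_value > hurdle:                           # 'if value exceeds hurdle, then bonus'
        payoff += bonus_rate * (firm_value - hurdle)
    if firm_value > option_strike:                    # 'if value exceeds strike, then options pay'
        payoff += option_share * (firm_value - option_strike)
    return payoff

# The same algorithm doubles as a predictive model of the agent's incentives:
for v in (8_000_000, 11_000_000, 15_000_000):
    print(v, agent_payoff(v))
```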

The (informational and computational) complexity components of principal-agent agreements can be understood, prima facie, as the informational depth and computational load of the contractual schemata that form the governance ‘blueprint’ of the organization: the de facto ‘rules’ of the organizational game, which become wired into the behavior of the contractants. Such an interpretation is straightforward, and can be expected to yield clean measures of contractual complexity, and, implicitly, of the complexity of organizational phenomena understood through the agency-theoretic lens. On deeper study, however, it is clear that the complexity of a contract is not an immediate and transparent index of the complexity of the organizational phenomena that are ‘played out’ within the confines of the contract, as the problems of performance measurement, specification of observable states of the world, and gaming of the contractual schema by both parties also have to be taken into consideration. Thus, measuring the complexity of contracts (conceptualized qua algorithms) provides us merely with a lower bound on the complexity of the phenomena that are understood through a contractual lens.

A more complete reconstruction of self-interested behavior in organizations, which is no less amenable to an algorithmic interpretation than is the agency-theoretic approach, is that based on game-theoretic methods. In such an approach, organizational phenomena are understood as instantiations of competitive or cooperative equilibria among members of the organization, each trying to maximize his or her welfare. What is required for the specification of an intra-organizational equilibrium (competitive or cooperative) is a representation of the set of participants (‘players’), their payoffs in all possible states of the world, their strategies and their beliefs (or conjectures), including their beliefs about the beliefs of the other participants. These entities together can be considered as inputs to an algorithm for the computation of the equilibrium set of strategies, through whose lens organizational phenomena and individual actions can now be interpreted. It is the complexity of this algorithm (backward induction, for instance) that becomes the de facto complexity measure of the organizational phenomenon that is represented through the game-theoretic lens.
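
The following sketch (a generic illustration, not a model taken from the game-theoretic literature cited here; the game tree and payoffs are hypothetical) shows backward induction rendered as such an algorithm, computing the equilibrium path of a small two-player extensive-form game.

```python
# A minimal sketch (not drawn from the paper) of backward induction on a small
# two-player extensive-form game: the equilibrium is computed by solving the tree
# from its leaves upward, each mover choosing the branch that maximizes his or her
# own payoff. The tree and payoffs are hypothetical.

def backward_induction(node):
    """node is either a payoff tuple (leaf) or (player_index, {action: subtree}).
    Returns (equilibrium_payoffs, path_of_actions)."""
    if isinstance(node, tuple) and all(isinstance(x, (int, float)) for x in node):
        return node, []                       # leaf: payoffs for (player 0, player 1)
    player, branches = node
    best_action, best_value, best_path = None, None, None
    for action, subtree in branches.items():
        value, path = backward_induction(subtree)
        if best_value is None or value[player] > best_value[player]:
            best_action, best_value, best_path = action, value, path
    return best_value, [best_action] + best_path

# Player 0 moves first, player 1 replies; payoffs are (player 0, player 1).
game = (0, {
    "delegate":   (1, {"comply": (3, 2), "shirk": (1, 3)}),
    "centralize": (1, {"comply": (2, 1), "shirk": (2, 0)}),
})
print(backward_induction(game))  # ((2, 1), ['centralize', 'comply'])
```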

Some work has already started on understanding game-theoretic models through a computational lens (Gilboa, 1989), and the basic idea is to consider individual players (participants) as computational devices attempting (but not always succeeding, depending on their informational and computational constraints) to solve for the equilibrium set of strategies and to compute the expected value of their payoff in the process. These approaches have focused predominantly on computational complexity, and have attempted to model the influence of bounded rationality on the resulting equilibria by iteratively reducing the computational capabilities of the players in the model (i.e., by placing upper bounds on the computational complexity of the problems they attempt to resolve or on the algorithms that they use). The complexity measures introduced here add texture to this computational modeling paradigm, by introducing a set of useful distinctions among different problem classes in the space of computational load (tractable, intractable, impossible) and by introducing an informational component (corresponding to the ‘working memory’ of each individual player) to the complexity measures used to date, a component which has not always been taken into consideration.
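
A minimal sketch of this modeling move follows (illustrative only; the game and the particular solution procedure, iterated elimination of strictly dominated strategies, are used here simply as a convenient example of capping a player's computation): bounding the number of elimination rounds a player can perform stands in for that player's bounded rationality, and a more tightly bounded player stops with a coarser solution.

```python
# A minimal sketch (my own illustration, in the spirit of the bounded-rationality
# modeling described above) of capping a player's computation: iterated elimination
# of strictly dominated strategies runs for at most `max_rounds` rounds. Payoffs are
# hypothetical.

def eliminate_dominated(payoffs, max_rounds):
    """payoffs[i][j] = (row player's payoff, column player's payoff).
    Returns the surviving (row_strategies, column_strategies) after at most max_rounds."""
    rows = list(range(len(payoffs)))
    cols = list(range(len(payoffs[0])))
    for _ in range(max_rounds):
        r_dominated = {r for r in rows for r2 in rows if r2 != r
                       and all(payoffs[r2][c][0] > payoffs[r][c][0] for c in cols)}
        c_dominated = {c for c in cols for c2 in cols if c2 != c
                       and all(payoffs[r][c2][1] > payoffs[r][c][1] for r in rows)}
        if not r_dominated and not c_dominated:
            break
        rows = [r for r in rows if r not in r_dominated]
        cols = [c for c in cols if c not in c_dominated]
    return rows, cols

# A 2x3 game in which full rationality isolates a single cell, while a player
# limited to a single round of elimination retains a larger strategy set.
game = [[(1, 0), (1, 2), (0, 1)],
        [(0, 3), (0, 1), (2, 0)]]
print(eliminate_dominated(game, max_rounds=1))   # ([0, 1], [0, 1])
print(eliminate_dominated(game, max_rounds=10))  # ([0], [1])
```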

What does the new operationalization of complexity mean for how we carry out ‘organization science’ and ‘organizational intervention’?

The complexity measures that I have articulated above make it possible to develop a research programme that combines three ways of representing organizations (phenomenological, teleological, structural) that have until now generated separate (and largely incommensurable) ways of knowing and ways of inquiring about organization. I will examine in this final section the ways in which the new measure of complexity (and the implicit space in which this complexity measure ‘lives’) enables insights from structural, teleological and phenomenological studies of organizational and individual behavior to ‘come together’ in a research programme that examines how complexity (as a dependent variable) emerges as a discernible and causally powerful property of organizational plans, routines and life-worlds and how complexity (as an independent variable) shapes and influences organizational ways of planning, acting and being.

Contributions to the structural perspective

Structural perspectives on organizations (such as many of the ones analyzed above) conceptualize organizations in terms of causal models (deterministic or probabilistic). These models refer to organizations as systems of rules, routines, capabilities or value-linked activities. As I showed above, any such model, once phrased in terms of algorithms (whose convergence properties are under the control of the modeler), can be used to synthesize a complexity measure for the phenomenon that it is used to simulate. Complexity (informational and computational) emerges as a new kind of modeling variable — one that can now be precisely captured. It can be used within a structuralist perspective in two ways:

  1. As a dependent variable, it is a property of the (modeled) phenomenon that depends sensitively on the algorithmic features (informational depth, computational load) of the model that is used to understand that phenomenon. In this sense, it is truly an emergent property, in two ways:

    1. It emerges from the non-separable combination of the model and the phenomenon: it is a property of the process of modeling and validation, not merely of the model alone or of the phenomenon alone;
    2. It emerges from the relationship between the observer/modeler and the phenomenon, in the sense that it is a function of the interaction of the observer and the phenomenon, not of the characteristics of the observer alone or of the phenomenon alone.

Thus, it is now possible to engage in structuralist modeling of organizational phenomena which can explicitly produce complexity measures of the phenomena in question (as seen through a particular structural lens). Such measures can then be used both in order to track variations in ‘organizational complexity’ as a function of changes in organizational conditions (new variables, new relationships among these variables, new kinds of relationships among the variables, new relationships among the relationships…) and to track variations in ‘organizational complexity’ as a function of changes of the underlying structural models themselves (it may turn out that some modeling approaches lead to lower complexity measures than do others, and may for this very reason be preferred by both researchers and practitioners).

  2. As an independent variable, complexity (as operationalized above) can be used as a modeling variable itself: the complexity of an organizational phenomenon may figure as a causal variable in a structural model of that phenomenon. This maneuver leads us to be able to consider, in analytical fashion, a large class of reflexive, complexity-driven phenomena that have the property that their own complexity (an emergent feature) shapes their subsequent spatio-temporal dynamics. Such a move is valuable in that if, as many studies suggest (see Miller, 1993; Thompson, 1967; Moldoveanu & Bauer, 2004), organizations attempt to adapt to the complexity of the phenomena that they encounter, it is no less true that they try to adapt to the complexity of the phenomena that they engender and that they themselves are, in which case having a (sharp) complexity measure that one can ‘plug’ into structural models as an independent variable makes it possible to examine (see the sketch after this list):

    1. organizational adaptation to self-generated complexity, by building temporally recursive models in which complexity at one time affects dynamics at subsequent periods of time;
    2. the evolution of complexity itself, by building behaviorally informed models of the complexity of various adaptations to complexity.
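
The sketch below (a toy model with hypothetical parameters, not an empirical claim) illustrates the temporally recursive structure described above: complexity in one period is driven by exogenous shocks and, once it exceeds a tolerance level, becomes a causal variable that triggers simplification in the next period.

```python
# A minimal sketch (illustrative assumptions only, not a model from the paper) of
# complexity as an independent variable in a temporally recursive structural model:
# complexity at period t drives the organization's simplification response at t+1,
# which in turn changes complexity. Parameters are hypothetical.

def simulate_complexity(periods=20, c0=10.0, shock=2.0,
                        tolerance=40.0, simplification_rate=0.3):
    """Each period the environment adds `shock` units of complexity; when complexity
    exceeds the organization's tolerance it simplifies (sheds a fraction of its
    rules and routines), so complexity figures as both caused and causer."""
    history = [c0]
    for _ in range(periods):
        c = history[-1] + shock              # complexity as dependent variable
        if c > tolerance:                    # complexity as independent variable:
            c -= simplification_rate * c     # it triggers the next period's response
        history.append(round(c, 2))
    return history

print(simulate_complexity())
```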

Contributions to the teleological perspective

The teleological perspective conceptualizes organizations as adaptive, cost-benefit computing and optimizing structures that adopt and discard structures and structural models as a function of their contribution to an overall utility function. They have a purpose (telos) which may be to maximize profits, predictability or probability of survival (instantiated in an overall fitness function). The computational perspective on complexity and the algorithmic measures of complexity that I have introduced allow us to study - within teleological models of organizations - the effects of phenomenal complexity on the trade-offs that decision-takers make in their organizational design choices.

Even more importantly, we can deploy the apparatus developed by algorithm designers and computational complexity theorists to study the coping strategies that managers use to deal with organizational complexity. To understand how this analytic apparatus can be usefully deployed, it is essential to think of the organization as a large-scale universal computational device (such as a Universal Turing Machine, or UTM), of organizational plans and strategies as algorithms and of organizational activities and routines as the processes by which those very same algorithms run on the UTM. Now, we can distinguish between several strategies that managers - qua organizational designers - might adopt in order to deal with complexity in both computational and informational space, drawing on strategies that algorithm designers use in order to deal with large-scale problem complexity.

i Computational space (K-space) strategies: Structural and functional partitioning of intractable problems.

When faced with computationally intractable problems, algorithm designers usually partition these problems using two generic partitioning strategies (Cormen, et al., 1993). They can split up a large-scale problem into several smaller-scale problems which can be tackled in parallel; or, they can separate out the process of generating solutions from the process of verifying these solutions. Either one of these partitionings can be accomplished in a more reversible (‘soft’) or irreversible (‘hard’) fashion: the problem itself may be split up into sub-problems that are tackled by the same large-scale computational architecture suitably configured to solve each sub-problem optimally (functional partitioning), or the architecture used to carry out the computation may be split up beforehand into parallel sub-architectures that impose pre-defined, hard-wired limits on the kinds of sub-problems that they can be used to solve (structural partitioning).

These distinctions can be used to make sense of the strategies for dealing with K-space complexity that the organizational designer can make use of. Consider the problem of system design, recently shown to be isomorphic to the intractable (NP-hard) ‘knapsack problem’. Because the problem is in the NP-hard class, the number of operations required to solve it will be a higher-than-any-polynomial (e.g., exponential) function of the number of system parameters that enter the design process as independent variables. Solving the entire system design problem without any partitioning (i.e., searching for the global optimum) without the use of any simplification may be infeasible from a cost perspective for the organization as a whole. Partitioning schemata work to partition the ‘problem space’ into sub-problems whose joint complexity is far lower than the complexity of the system design problem taken as a whole.

Consider first how functional partitioning works for the designer of a system-designing organization. S/he can direct the organization as a whole to search through the possible subsets of design variables in order to achieve the optimal problem partitioning into sub-problems of low complexity, sequentially engage in solving these sub-problems, then bring the solutions together to craft an approximation to the optimal system design. Alternatively, the systems-designing organization designer can direct the organization to randomly generate a large list of global solutions to the system design problem taken as a whole in the first phase of the optimization task, and then get together and sequentially test the validity of each putative solution qua solution to the system design problem. In the first case, optimality is traded off in favor of (deterministic and rapid) convergence, albeit to a sub-optimal, approximate answer. In the second case, certainty about convergence to the global optimum is traded off against overall system complexity.
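
The sketch below illustrates both strategies on a toy version of the (NP-hard) knapsack form of the system design problem (items, values, weights and the budget are hypothetical, and the partitioning rule is deliberately crude): exhaustive search examines all 2^n candidate designs, functional partitioning solves small sub-problems on shares of the budget and pools the results, and generate-and-test randomly proposes whole designs and merely verifies them.

```python
# A minimal sketch (my own illustration) of the two partitioning strategies above,
# applied to a toy knapsack instance of the system design problem. Items, values and
# weights are hypothetical.

import itertools, random

items = [(10, 5), (8, 4), (6, 3), (5, 3), (4, 2), (3, 2)]   # (value, weight) of design choices
CAPACITY = 10                                               # overall design 'budget'

def value(subset):
    v, w = sum(i[0] for i in subset), sum(i[1] for i in subset)
    return v if w <= CAPACITY else -1         # infeasible designs are rejected

def exhaustive(items):                        # exponential: 2^n candidate designs
    return max(value(s) for n in range(len(items) + 1)
               for s in itertools.combinations(items, n))

def partitioned(items, k=2):                  # split the variables into k groups, solve each
    groups = [items[i::k] for i in range(k)]  # with a proportional share of the budget,
    best = []                                 # then pool the partial solutions
    for g in groups:
        sub_best = max((list(s) for n in range(len(g) + 1)
                        for s in itertools.combinations(g, n)
                        if sum(i[1] for i in s) <= CAPACITY // k),
                       key=lambda s: sum(i[0] for i in s))
        best += sub_best
    return value(best)

def generate_and_test(items, trials=200):     # random generation, cheap verification
    best = 0
    for _ in range(trials):
        candidate = [i for i in items if random.random() < 0.5]
        best = max(best, value(candidate))
    return best

# Partitioning converges quickly but may miss the global optimum found exhaustively.
print(exhaustive(items), partitioned(items), generate_and_test(items))
```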

Such partitioning schemata can also be achieved structurally. The systems-designing organization designer can pre-structure the organization into sub-groups that are bounded in K-space as to the computational load of the problems they can take on. This partitioning can be ‘hard-wired’ into the organization through formal and informal rule systems, contracts, and organizational fiat. Now, faced with the overall (NP-hard) system design problem, the organization will adapt by splitting it up spontaneously into sub-problems that are matched to the maximum computational load that each group can take on. Alternatively, the organizational designer can hard-wire the solution-generation / solution-verification distinction into the organizational structure by outsourcing either the solution-generation step (to a consulting organization, to a market of entrepreneurial firms, to a large number of free-lance producers) while maintaining the organization’s role in providing solution validation, or by outsourcing the solution validation step (to the consumer market, in the form of experimental products with low costs of failure) while maintaining its role as a quasi-random generator of new solution concepts.

ii Information-space (I-Space) strategies.

Of course, hard problems may also be ‘hard’ in the informational sense: the ‘working memory’ or ‘relevant information’ required to solve them may exceed the storage (and access-to-storage) capacities of the problem solver. Once again, studying the strategies used by computational designers to deal with informational overload gives us a sense of what to look for as complexity coping strategies that organizational designers use in I-space. As might be expected, I-space strategies focus on informational reduction or compression, and on increasing the efficiency with which information dispersed throughout the organization is conveyed to decision-makers to whom it is relevant. We will consider compression schemata first, and access schemata second.

Compression Schemata. Lossy compression achieves informational depth reduction at the expense of deletion of certain potentially useful information or distortion of that information. Examples of lossy compression schemata are offered by model-based estimation of a large data set (such that the data is represented by a model and an error term) and by ex ante filtering of the data set with the aim of reducing its size. In contrast to lossy information compression, lossless compression achieves informational depth reduction without the distortion or deletion of potentially useful information, albeit at the cost of higher computational complexity of the compression encoder.
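
The following sketch (hypothetical data; the choice of a mean-plus-exceptions model for the lossy case and of zlib for the lossless case are my own illustrative assumptions) shows the two schemata side by side: the lossy representation discards small residuals around a simple model, while the lossless representation is exactly invertible but requires more computation to encode.

```python
# A minimal sketch (illustrative, with hypothetical data) of the two compression
# schemata: lossy compression replaces the raw series with a simple model plus only
# the 'large' residuals, discarding the rest; lossless compression (here, zlib)
# shrinks the representation without discarding anything, at the cost of extra
# computation on the encoding side.

import json, zlib

data = [10.0, 10.2, 10.1, 10.3] * 25 + [42.0]   # e.g., a demand signal with one anomaly

# Lossy: model-based estimation -- keep the model (a mean) and only the anomalies.
mean = sum(data) / len(data)
anomalies = {i: x for i, x in enumerate(data) if abs(x - mean) > 3.0}
lossy_repr = {"model": round(mean, 2), "exceptions": anomalies}

# Lossless: an exact, reversible encoding of the full series.
raw_bytes = json.dumps(data).encode()
lossless_repr = zlib.compress(raw_bytes)
assert json.loads(zlib.decompress(lossless_repr)) == data   # nothing was lost

print("raw bytes:     ", len(raw_bytes))
print("lossy summary: ", lossy_repr)       # much shorter, but the detail is gone
print("lossless bytes:", len(lossless_repr))   # exact, costlier to encode
```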

Consider, as an example of how these strategies might be deployed within an organization, the problem a top management team faces in performing due diligence on a new market opportunity for a product manufacturer (instantiated as the appearance of a new technology within the firm, or a new niche within the market). There is a large amount of information readily available about competitors, their technologies, products, intellectual capital, marketing and distribution strategies, marginal and fixed costs, about customers and their preferences, about the organization’s own capabilities and competitive advantages in different product areas, about long-term technological trends and the effects of short-run demand-side and supply-side discontinuities in the product market. This information comes from different sources, both within the organization and outside of it. It has various levels of precision and credibility, and can be used to support a multiplicity of possible product development and release strategies. Let us examine how complexity management of the due diligence problem mimics I-space reduction strategies pursued by computational system and algorithm designers.

First, lossy compression mechanisms are applied to the data set in several ways: through the use of a priori models of competitive interaction and demand-side behavior that limit the range of data that one is willing to look at, through the specific ex ante formulation of a problem statement (or a set of hypotheses to be tested on the data) which further restricts the size of the information space one considers in making strategic decisions, and through the application of ‘common sense’ cognitive habits (such as data ‘smoothing’ to account for missing points, extrapolation to generate predictions of future behavior on the basis of observations of past behavior, and inference to the best explanation to select the most promising groups of explanatory and predictive hypotheses from the set of hypotheses that are supported by the data). Lossless (or, quasi-lossless) compression of the informational depth of the remaining relevant data set may then be performed by the development of more detailed models that elaborate and refine the most promising hypotheses and explanations in order to increase the precision and validity with which they simulate the observed data sequence. They amount to high resolution elaborations of the (low resolution) approaches that are used to synthesize the organization’s basic ‘business model’ or, perhaps more precisely ‘model of itself’.

Access Schemata. The second core problem faced by the organizational designer in I-space is that of making relevant information efficiently available to the right decision agents. The designer of efficient networks worries about two fundamental problem classes: the problem of network design and the problem of flow design (Bertsekas, 1985). The first problem relates to the design of network topologies that make the flow of relevant information maximally efficient. The second problem relates to the design of prioritization and scheduling schemes for relevant information that maximizes the reliability with which decision agents get relevant information on a timely basis.
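
The sketch below (a hypothetical organization chart with hypothetical member names) illustrates the network design problem in miniature: a breadth-first search computes the geodesic from the originator of critical information to the decision agent who needs it, before and after the designer adds a cross-functional bridge tie.

```python
# A minimal sketch (hypothetical organization chart) of the network design problem:
# the geodesic (shortest communication path) from the originator of critical
# information to the decision agent who needs it, before and after the designer
# adds a cross-functional bridge tie.

from collections import deque

def geodesic(graph, source, target):
    """Breadth-first search for the shortest path length between two members."""
    frontier, seen = deque([(source, 0)]), {source}
    while frontier:
        node, dist = frontier.popleft()
        if node == target:
            return dist
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, dist + 1))
    return float("inf")

# Informal communication ties (undirected, listed both ways).
org = {
    "engineer":        ["eng_manager"],
    "eng_manager":     ["engineer", "vp_product"],
    "vp_product":      ["eng_manager", "vp_sales"],
    "vp_sales":        ["vp_product", "account_manager"],
    "account_manager": ["vp_sales"],
}
print(geodesic(org, "engineer", "account_manager"))   # 4 hops through the hierarchy

# The designer 'wires in' a cross-functional working group:
org["engineer"].append("account_manager")
org["account_manager"].append("engineer")
print(geodesic(org, "engineer", "account_manager"))   # 1 hop after the bridge is added
```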

The organizational designer manages network structure in I-space when s/he structures or tries to shape formal and informal communication links among individual members of the organization in ways that minimize the path length (or geodesic) from the transmitter of critical information to the intended receiver. Such ‘wiring’ of an organization can be performed through either reversible (‘working group’) or irreversible (‘executive committees’) organizational mechanisms. It can be achieved through either probabilistic (generating conditions for link formation) or deterministic (mandating link formation) means. The organizational designer manages information flow when s/he designs (or attempts to influence) the queuing structure of critical information flows, and specifically the prioritization of information flows to different users (as a function of the perceived informational needs of these users) and the scheduling of flows of various priorities to different users, as a function of the relative importance of the information to the user, of the user to the organizational decision process, and of the decision process to the overall welfare of the organization.

Let us examine the proliferation of I-space network and flow design strategies with reference to the executive team and their direct reports discussed earlier, in reference to the management of a due diligence process in I-space. First, network design. The team is made up of members of independent functional specialist ‘cliques’ (product development, marketing, finance, business development) that are likely to be internally ‘densely wired’ (i.e., everyone talks to everyone else). The executive team serves as a bridge among functionalist cliques, and its efficiency as a piece of the informational network of the firm will depend on the reliability and timeliness of transmission of relevant information from the clique that originates relevant information to the clique that needs it. ‘Cross-clique’ path lengths can be manipulated by setting up or dissolving cross-functional working groups and task groups. Within-clique coordination can be manipulated by changing the relative centrality of various agents within the clique. These effects may be achieved either through executive fiat and organizational rules, or through the differential encouragement of tie formation within various groups and subgroups within the organization.

Second, flow design. Once an informational network structure is ‘wired’ within the organization, the organizational designer still faces the problem of assigning different flow regimes to different users of the network. Flow regimes vary both in the informational depth of a transmission and in its time-sequence priority relative to the time at which the information was generated. The assignment of individual-level structural roles within formal and informal groups can be seen as a way of shaping flow prioritization regimes, as a function of the responsibility and authority of each decision agent. In contrast, the design of processes by which information is shared, deliberated, researched and validated can be seen as a way of shaping flow scheduling regimes, as a function of the relative priority of the receiver and the nature of the information being conveyed.
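
The following sketch (hypothetical priorities and messages, using a standard priority queue) illustrates the flow design problem: once the network is wired, items of information are released to decision agents in an order set by an explicit prioritization rule rather than by order of arrival.

```python
# A minimal sketch (hypothetical priorities and messages) of flow design: information
# items are queued and released to decision agents in an order determined by an
# explicit prioritization rule rather than by arrival time.

import heapq

def schedule(messages):
    """messages: iterable of (priority, arrival_order, payload); a lower priority
    number means more urgent. Returns payloads in delivery order."""
    queue = []
    for msg in messages:
        heapq.heappush(queue, msg)
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]

inbox = [
    (3, 0, "routine supplier update -> procurement analyst"),
    (1, 1, "major customer defection -> CEO"),
    (2, 2, "competitor price cut -> head of marketing"),
    (1, 3, "plant safety incident -> COO"),
]
print(schedule(inbox))
# Urgent items reach their decision agents first; ties are broken by arrival order.
```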

Contributions to the phenomenological perspective

The proposed approach to the measurement of complexity started out as an attempt to reconcile the subjective view of complexity as difficulty with various objective views of complexity, which conflate complexity with one structural property or another of an organizational phenomenon. The synthesis between the objective and subjective views was accomplished by making explicit the contribution that the observer’s models, or cognitive schemata, used to understand a phenomenon make to the complexity of that phenomenon, by identifying the complexity of a phenomenon with the informational depth and computational load of the most predictively successful, intersubjectively validated model of that phenomenon. This move, I argue, makes the complexity measure put forth applicable through a broad range of models to the paradigmatic phenomena they explain.

But it also opened a conceptual door to the incorporation of new subjective effects into the definition of complexity. In both the informational and the computational dimensions, ‘complexity’ is ‘difficulty’. A phenomenon will be declared by an observer to be complex if the observer encounters a difficulty in either storing the information required to simulate that phenomenon or in performing the computations required to simulate it successfully. Such difficulties can be easily captured using the apparatus of algorithmic information theory (Chaitin, 1974) and computational complexity theory (Cormen, et al., 1993). Are they meaningful? And, do they allow us to make further progress in phenomenological investigations of individuals’ grapplings with ‘complex’ phenomena?
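
The sketch below suggests one way (an approximation of my own, not Chaitin's formal measure) in which these two components of 'difficulty' can be operationalized in practice: informational depth is proxied by the compressed length of the record needed to reproduce a phenomenon, and computational load by a count of the basic operations performed by the model that simulates it.

```python
# A minimal sketch (an approximation, not a formal measure from the literature) of the
# two components of 'difficulty': informational depth proxied by compressed length,
# computational load counted as the basic operations an explanatory model performs.

import zlib

def informational_depth(observations):
    """Compressed length in bytes: a regular phenomenon compresses well, a
    patternless one does not (a standard proxy for algorithmic information)."""
    return len(zlib.compress(repr(observations).encode()))

def computational_load(model, inputs):
    """Count the model's basic steps by having the model report them."""
    steps = 0
    for x in inputs:
        _, ops = model(x)
        steps += ops
    return steps

periodic = [1, 2, 3] * 50                            # easily simulated: shallow
erratic = [(7919 * i) % 97 for i in range(150)]      # harder to summarize: deeper

# A toy model that predicts the next value of the periodic series in constant time.
cycle_model = lambda t: (periodic[t % 3], 1)         # (prediction, operations used)

print(informational_depth(periodic), informational_depth(erratic))
print(computational_load(cycle_model, range(150)))
```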

Earlier in the paper it was shown that the vague concepts that individuals use to describe complex phenomena, such as ‘unfathomable’, ‘simple’, ‘intractable’, ‘impossible’ and ‘complicated’, can be given precise meanings using one, the other or a combination of the informational and computational dimensions that I have defined for the measurement of complexity. These distinctions make it possible for us to separate out the difficulties that observers of complex phenomena have, and they enable the articulation of a research programme that facilitates the interaction between the mind and the ‘complex’. They also make it possible for us to quantitatively investigate three separate phenomena that have traditionally been interesting to researchers in the behaviorist tradition:

  1. The informational and computational boundaries of competent behavior, and in particular the I-K-space configurations that limit adaptation to complexity. Here, the use of simulations of cognitive function and the ability to model the algorithmic requirements (informational depth, computational load) of any model or schema make it possible to break down any complex predicament along an informational and a computational dimension and to hypothesize informational, computational and joint informational/computational limits on intelligent agent adaptation and adaptation potential;
  2. The trade-offs between informational and computational difficulty that intelligent adaptive agents should make when faced with complex phenomena, which can be studied by simulating the choices that intelligent adaptive agents make among different available models and schemata for understanding complex phenomena, which in turn enables the study of:
  3. The trade-offs between informational and computational difficulty that intelligent adaptive agents actually do make when faced with complex phenomena, which, in the time-honored tradition of behaviorist methodology applied to decision analysis, could be studied by measuring departures of empirically observed behavior of agents faced with choosing among alternative models and schemata from the normative models discovered through numerical and thought experiments.

Note

1 This paper is taken from the soon-to-be published edited collection Managing Organizational Complexity: Philosophy, Theory and Application, K. A. Richardson (ed.), Information Age Publishing, due June 2005.

References

Anderson, P. (1999). “Introduction to special issue on organizational complexity,” Organization Science, 10: 1-16.

Bar-Yam, Y. (2000). Dynamics of complex systems, NECSI mimeo.

Bertsekas, D. (1985). Data networks, Cambridge, MA: MIT Press.

Bylander, T., Allemang, D., Tanner, M. C. and Josephson, J. (1991). “The computational complexity of abduction,” Artificial Intelligence V, 49: 125-151.

Chaitin, G. (1974). “Information-theoretic computational complexity,” IEEE Transactions on Information Theory, 20: 10-30.

Cormen, T. H., Leiserson, C. E. and Rivest, R. L. (1993). Introduction to algorithms, Cambridge, MA: MIT Press.

Cyert, R. M. and March, J. G. (1963). A behavioral theory of the firm, New Jersey: Prentice Hall.

D’Arcy Thompson, R. (1934). Structure and form, New York, NY: Basic Books.

Gilboa, I. (1989). “Iterated dominance: Some complexity considerations,” Games and Economic Behavior, 1.

Grice, H. P. (1975). “Logic and conversation,” in P. Cole and J. Morgan (eds.), Syntax and semantics, New York, NY: Cambridge University Press.

Kahneman, D., Slovic, P. and Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases, New York, NY: Cambridge University Press.

Kauffman, S. (1993). The origins of order: Self-organization and selection in evolution, New York, NY: Oxford University Press.

Kuhn, T. S. (1962). The structure of scientific revolutions, Chicago: University of Chicago Press.

Kuhn, T. S. (1990). The road since structure, Cambridge, MA: MIT Press.

Levinthal, D. and Warglien, M. (1999). “Landscape design: Designing for local action in complex worlds,” Organization Science, 10: 342-357.

Li, M. and Vitanyi, P. M. B. (1993). An introduction to Kolmogorov complexity and its applications, New York, NY: Springer Verlag.

March, J. G. and Simon, H. A. (1958). Organizations, New York, NY: Wiley.

McKelvey, B. (1999). “Avoiding complexity catastrophe in coevolutionary pockets: Strategies for rugged landscapes,” Organization Science, 10: 294-321.

Miller, D. (1993). “The architecture of simplicity,” Academy of Management Review, 18: 116-138.

Moldoveanu, M. C. and Bauer, R. (2004). “On the relationship between organizational complexity and organizational structuration,” Organization Science, 15: 98-118.

Moldoveanu, M. C. and Singh, J. V. (2003). “The evolutionary metaphor: A synthetic framework for the study of strategic organization,” Strategic Organization, 1: 439-449.

Nelson, R. and Winter, S. (1982). An evolutionary theory of economic change, Cambridge, MA: Harvard University Press.

Putnam, H. (1981). Reason, truth and history, New York, NY: Cambridge University Press.

Rivkin, J. (2000). “The imitation of complex strategies,” Management Science, 46: 8.

Simon, H. (1962). “The architecture of complexity,” reprinted in H. Simon (1982), The sciences of the artificial, Cambridge, MA: MIT Press.

Thompson, J. D. (1967). Organizations in action: Social science bases of administrative theory, John Wiley.

Tirole, J. (1988). The theory of industrial organization, Cambridge, MA: MIT Press.

Von Bertalanffy, L. (1968). General system theory: Foundations, development, applications, New York, NY: Braziller.

Weick, K. E. (1995). Sensemaking in organizations, Beverly Hills, CA: Sage.

Wolfram, S. (2002). A new kind of science, Wolfram Research.

