The dark side of knowledge


This paper explores the concepts of organizational knowledge and intelligence from the perspective of new systems theory. It draws particularly on Niklas Luhmann’s theory of social systems, George Spencer-Brown’s calculus of distinctions, and Dirk Baecker’s applications of the two to questions of management. According to this view, knowledge can be conceptualized as a structure that determines the way in which information is dealt with. In other words, knowledge is a structure that determines whether a difference makes a difference and, if so, what difference it makes. Knowledge thus means selection; and selection implies contingency—one could have selected differently. The selectivity of knowledge, however, remains latent. Neither this selectivity nor what knowledge excludes is included in the knowledge itself. Knowledge, thus, inevitably implies nonknowledge as its other, or “dark,” side. Intelligence can be conceptualized in relation to knowledge. It can be understood as the ability to deal with the other side of knowledge—to deal with nonknowledge. According to this view, an organization is intelligent to the extent that it is aware of its nonknowledge and takes account of this nonknowledge in its operations. In terms of Spencer-Brown’s theory, intelligence appears as the re-entry of nonknowledge into knowledge. Three examples of forms of organizational intelligence are presented in this paper: inter-organizational networks, heterarchy, and organizational interaction.


There is wide agreement among organization theorists that “knowledge” is one of the most (if not the most) important assets of an organization. There is, however, less agreement on how to conceptualize organizational knowledge. Mostly, knowledge is treated as a kind of (human) capital that can be accumulated; as something one can have more or less of, of which it is “better” to have more than less. Against this somewhat “simple” concept, some writers propagate a more sophisticated, dialectical notion of knowledge, where knowledge implies nonknowledge. On the basis of this notion of knowledge it isn’t enough to focus on the positive side of knowledge; that is, what is known. Equally, one has to take into account the other side of knowledge, the nonknowledge; that is, that which the knowledge excludes and because of which knowledge is possible. By focusing on the “knowledge side” alone one misses half of the story; in fact, it could be argued that one misses the whole story, because, as we shall see below, one cannot understand how knowledge can be what it is. For management, too, this has important implications, as it isn’t enough simply to “manage” knowledge. One also has to manage “nonknowledge.” This implies the paradoxical task of knowing what one doesn’t know and dealing with this nonknowledge as knowledge. This ability we might call intelligence (Luhmann, 1992, 1995: 111; Baecker, 1994, 1995, 2002).

How different researchers conceptualize “knowledge” depends very much on the way they conceptualize “cognition.” Or, put the other way around, different concepts of knowledge only make sense on the basis of specific concepts of cognition. This paper will put forward a notion of cognition based on Spencer-Brown’s (1979) concept of observation as distinction and indication. Cognition in this sense isn’t understood as the “representation” of an independent reality outside the cognitive system, but as a “creative” act—or even a “destructive” act (Cooper, 1986)—that produces the “object” of cognition. Knowledge, then, has very much to do with the organization of such creative acts, and nonknowledge with alternative organizations.

This paper first elaborates on the concept of cognition as observation, which is based on a constitutive “blindness.” This is followed by a discussion of knowledge as a structure of observation and nonknowledge as structure-related blindness. The third section applies these concepts of knowledge and nonknowledge to organizations in particular. Intelligence is then discussed as a re-entry of nonknowledge into knowledge. Three examples of forms of organizational intelligence illustrate the argument: interorganizational networks, heterarchy, and cultivation of organizational interactions.

The blindness of seeing

One of the most sophisticated theories of cognition can be found in Spencer-Brown’s (1979) theory of observation. According to Spencer-Brown, any operation of a system can be understood as an observation, whether it is a thought in the mind, a communication in a social system, or an electrical impulse in a machine. When Spencer-Brown speaks of observation in this context, he doesn’t use the concept only metaphorically. Rather, he abstracts the notion of “observation,” removing any optical connotations, and turns it into a general concept applicable to all kinds of systems. In fact, observation becomes the most basic concept from which all other concepts, such as “thing,” “thought,” “action,” or “communication,” can be derived (Seidl, 2005a: 25).

Spencer-Brown’s concept of observation doesn’t focus on the object of observation but on the observation itself as a selection of what to observe. In this sense, the underlying question is not what an observer observes, but how an observer observes: How is it that an observer observes what he or she observes, and not something else (Luhmann, 2002)?

Every observation is constructed from two components: a distinction and an indication. An observer draws a distinction, with which he or she severs a space into two spaces, also referred to as “states” or “contents.” Of these two states, the observer has to choose one to indicate.1 That is, the observer has to focus on one state, while neglecting the other. It is not possible for the observer to focus on both. In this way the (initially) symmetrical relation between the two states becomes asymmetrical. We get a marked state and an unmarked state.

Spencer-Brown illustrates this rather abstract idea with an example. Let us imagine a uniform white piece of paper. On this paper we draw a circle. In other words, we draw (!) a distinction that creates an inside of the circle and an outside of the circle. It is important to note that it is the act of drawing the circle that establishes the two different states; without us drawing the distinction, the two states as such do not exist (Cooper, 1986; Chia, 1994). We can now indicate one of the two states: either the inside or the outside. Let us choose the inside. The inside becomes the marked state and the outside the unmarked state. While we can see the marked state, the unmarked state remains unseen. Using the metaphor of figure and ground we can say: The inside becomes figure and the outside ground.
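The circle example can be rendered as a small toy model (the names and data structures here are illustrative inventions, not a formalization found in Spencer-Brown): drawing a distinction creates two states at once, and a first-order observation sees only the indicated side.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Form:
    """A distinction together with both its sides (Spencer-Brown's 'form')."""
    inside: frozenset   # the state created inside the boundary
    outside: frozenset  # the state created outside the boundary
    marked: str         # which side the observer indicates: "inside" or "outside"

def draw_distinction(space, boundary, marked="inside"):
    """Cleave a space into two states and indicate one of them.

    Without the act of drawing, neither state exists as such."""
    inside = frozenset(p for p in space if p in boundary)
    outside = frozenset(space) - inside
    return Form(inside, outside, marked)

def observe(form):
    """A first-order observation sees only the marked state (the 'figure').

    The unmarked state (the 'ground') is constitutive but remains unseen."""
    return form.inside if form.marked == "inside" else form.outside

# The "paper" is a space of points; the boundary plays the role of the circle.
space = {"a", "b", "c", "d"}
form = draw_distinction(space, boundary={"a", "b"}, marked="inside")
print(observe(form))  # only the marked state: the elements inside the boundary
```

Note that the `Form` object carries both sides, mirroring the point that the form is the distinction *together with* its entire content, even though `observe` returns only one side.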

Spencer-Brown calls the distinction, together with both its sides, the form of the distinction.

“Call the space cloven by any distinction, together with the entire content of the space, the form of the distinction” (Spencer-Brown, 1979: 4).

Thus, in contrast to the common use of the term, form does not refer merely to the marked state. The form of something is not sufficiently described by the defined—the marked—state; the unmarked state is a constitutive part of it (Luhmann, 2002; Baecker, 2006). In our example, the form of the circle is the inside together with the outside of the circle. As Spencer-Brown declares:

“Distinction is perfect continence” (Spencer-Brown, 1979: 1).

A distinction, thus, has a double function: Like any boundary it both distinguishes and unites its two sides (Cooper, 1986).

The central point in this concept is that once you have drawn a distinction, you cannot see the distinction that constitutes the observation—you can only see one side of it, not the other side and not the distinction itself. In Heinz von Foerster’s terms, this can be referred to as the “blind spot” of observation (von Foerster, 1981: 288-309). The dependence of the observation on its distinction is latent (Luhmann, 1991, 1994: 91). The complete distinction with both its sides (the inside and the outside) can only be seen from outside; if you are inside the distinction you cannot see the distinction; but getting out of the distinction means giving up the observation.

Two orders of observation can be distinguished: first-order and second-order observation (von Foerster, 1981). So far we have been explaining the operation of a first-order observer, who cannot observe the distinction that he or she uses in order to observe. The second-order observer is an observer who observes another observer. This second-order observer uses a different distinction from the first-order observer: In order to observe the observer, he or she has to draw a distinction that contains the distinction (the marked and the unmarked state) of the first-order observer in his or her marked state. The second-order observer can see the blind spot, the distinction, of the first-order observer. He or she can see what the first-order observer cannot see. What is more, the second-order observer can see that the first-order observer cannot see (Luhmann, 1991: 75). More specifically, the second-order observer can see that the first-order observer can see what he or she sees, because he or she uses one particular distinction and not another. The second-order observer sees that the first-order observer could also have used another distinction and, thus, that the observation is contingent; “contingent,” here, in the modal sense of “also possible otherwise.” Luhmann writes about second-order observation:

“[S]econd-order observation is indeed not only first-order observation. It is both more and less. It is less because it observes only observers and nothing else. It is more because it not only sees (= distinguishes) its object but also sees what the object sees and sees how it sees what it sees, and perhaps even sees what it does not see and sees that it does not see that it does not see what it does not see” (Luhmann, 2002: 114-115).

Also, the second-order observer needs a distinction to observe the distinction of the first-order observer; what this second-order observer sees, and how he or she sees it, depends on the distinction that he or she uses—but doesn’t see. Thus, the second-order observer is also a first-order observer, who can be observed by another second-order observer. In this sense, second-order observation is only possible as first-order observation (Luhmann, 2002). The second-order observer also sees only what he or she sees, because he or she doesn’t see what he or she doesn’t see. The central point is that any observation—whether first order or second order—is based on a fundamental unobservability. No observation can see its own basis, because of which it observes the way it observes. As Weick and Westley (1996: 446) write: “To ‘see’ we must ‘not see’.” This also means that every observation creates both visibility and invisibility.
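The asymmetry between first- and second-order observation can be sketched as a toy model (all names are illustrative assumptions, not drawn from the cited texts): each observer observes through a distinction it cannot itself see, but it can see another observer's distinction—and thereby its contingency.

```python
class Observer:
    """An observer is defined by the distinction it uses (but cannot see)."""

    def __init__(self, name, distinction):
        self.name = name
        self._distinction = distinction  # the observer's own blind spot

    def observe(self, thing):
        """First-order observation: apply one's own distinction to a thing."""
        return self._distinction(thing)

    def observe_observer(self, other, thing):
        """Second-order observation: see WHAT the other sees, and also THAT
        the other's seeing depends on one particular distinction."""
        return {
            "observer": other.name,
            "sees": other.observe(thing),
            "uses_distinction": other._distinction.__name__,  # visible to us, not to `other`
        }

def large_vs_small(x):  # one possible distinction
    return "large" if x > 10 else "small"

def even_vs_odd(x):     # a contingent alternative: observing could be done otherwise
    return "even" if x % 2 == 0 else "odd"

alice = Observer("alice", large_vs_small)
bob = Observer("bob", even_vs_odd)

# Alice, observing Bob, sees Bob's blind spot: which distinction he uses.
report = alice.observe_observer(bob, 12)
print(report)
# But Alice's report depends on her own way of observing, which she cannot
# see: a third-order observer would be needed for that, and so on ad infinitum.
```

The sketch also shows why second-order observation is itself first-order observation: `observe_observer` is just another operation with its own unexamined presuppositions.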

As indicated above, Spencer-Brown’s concept of observation can be applied to any kind of operation of a system. In the same way that organisms observe optically by distinguishing a figure from a background, social systems observe communicatively. Every communication communicates something (marked state), while at the same time it has to leave everything else in the dark (so to speak), and in particular the other possible communications (unmarked state).2 Only because of its unmarked state can a communication communicate what it communicates. Or, in other words, communications receive their meaning only through that which is not communicated (Derrida, 1968; Holmes, 1989, 1993; Luhmann, 1995). Another communication can communicate about the communication and its unmarked state, but only at the cost of producing yet a further unmarked state.

Knowledge and nonknowledge

Knowledge has something to do with observation: It is involved in every observation and also, the other way around, observation is involved in creating knowledge. Nevertheless, observation is not knowledge. Rather, knowledge is something that guides observation. That is, it is something that in the concrete moments of observation provides an orientation for where to draw the distinction. In that sense, we can parallel knowledge with structures of observation (Luhmann, 1994; Seidl, 2005a).

We can distinguish two kinds of knowledge, or structures of observation. First, there are structures that determine how single observations are made; that is, where single distinctions are drawn (Luhmann, 1994: 123-4; Willke, 1995: 237-8, 1996: 267). One could also say that they provide schemata for observation. Prime examples of such schemata for communicative observations are the words of a language, for example “dog,” “plant,” “human being,” “fast,” “large,” “almost.” Such words provide an orientation for drawing distinctions in (communicative) observations. Communications will usually be oriented according to such words. This also means that communications in different languages will observe differently, will draw different distinctions. Eskimos, for example, possess a language that contains different words for different kinds of snow; because of that, Eskimos tend to observe their concrete environment (communicatively3) differently from non-Eskimos. In this sense, what social systems (communicatively) observe and what they don’t depends very much on the words that their language provides.4

Apart from language, there are also other structures that guide the way in which concrete observations are made. For example, in different social contexts one can draw on different categories of communication, as a result of which different communicative observations are produced. While the structures of observation serve as orientation for making observations, they do not constitute the observation. If one communicates about a concrete human being, such as “Professor Spencer-Brown,” this observation is guided by categories in language like “human being” or “man,” but the concrete observation isn’t simply a category.

Second, there are structures that relate observations to each other (Luhmann, 1994: 138; Willke, 1995: 237-8). In other words, these are structures that regulate how one observation is connected with other ones. This relation of an observation to other (actual or potential) observations can be understood as the meaning of the observation; as Luhmann (1995: 61) defines it: “[E]very specific meaning qualifies itself by suggesting specific possibilities of connection and making others improbable, difficult, remote, or (temporarily) excluded.” In a social context this means that the meaning of a communication is the suggestion of possibilities of ensuing communications. Conversely, meaning also excludes, or makes improbable, certain possibilities of continuing the communication.

The two kinds of structures can be distinguished analytically, but they are always connected at concrete points of observation. Knowledge always determines how observations are made and how these observations connect with other observations; that is, what they mean. Willke (1995: 237), in this sense, speaks of the “double function of knowledge.”

How does knowledge—that is, the structure of observation—come about? As noted above, knowledge determines what observations are made, but observations also have an influence on what knowledge is “produced” (following Giddens, 1984, here we can speak of a “duality of structure”). The relation between observation and knowledge can be explained with the help of Spencer-Brown’s (1979) concept of condensation. A generalized distinction is “condensed” from different concrete observations (i.e., distinctions), whereby all contextual differences are omitted—particularly those regarding concrete time and space (Luhmann, 1994: 108, 311; Brown & Jones, 2000: 13; Seidl, 2003).5 For example, from the observation of particular behaviors of contemporary companies with regard to each other, the generalized distinction “co-opetition” was condensed. This generalized distinction is devoid of any contextual references of the concrete observations from which it was condensed—in Spencer-Brown’s terms, the references have been “contracted”—and thus can be used as a structure for making concrete observations. Another example: Organizations might observe repeatedly that companies that lower the prices of their very high-priced luxury products lose customers instead of winning new ones. From this, one might condense the “knowledge” that lowering the price of very high-priced luxury goods leads to a loss of customers. This example refers to a structure that connects observations. In a concrete situation of observation one might draw on this structure and observe that company A has lowered its prices, and from that connect to the observation that company A does (or does not) lose customers.

This condensation of different observations results in abstracting from the concrete observations something that is common to those observations, and that as such can be repeated in other concrete observations (Luhmann, 1994: 108). This repetition can be described with Spencer-Brown’s concept of confirmation (1979).6 The condensed distinction is not itself a concrete observation; it has to be “applied” to concrete situations. The generalized distinction is confirmed in the concrete observation, but not as an identical distinction. For example, when one uses the concept of “co-opetition” to observe the relation between two concrete companies A and B, one doesn’t observe co-opetition as such, but co-opetition between companies A and B. Thus, every confirmation of a condensate in concrete situations is different (Seidl, 2005a: 80).
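Condensation and confirmation can be illustrated with a small sketch (the data and function names are invented for illustration): contextual details of concrete observations are stripped away, and the resulting generalized distinction is then re-applied—always differently—to new concrete situations.

```python
# Concrete observations, each tied to a context (time, place, firms involved).
observations = [
    {"time": "2001", "place": "US", "firms": ("A", "B"), "pattern": "co-opetition"},
    {"time": "2003", "place": "EU", "firms": ("C", "D"), "pattern": "co-opetition"},
    {"time": "2004", "place": "JP", "firms": ("E", "F"), "pattern": "co-opetition"},
]

def condense(observations):
    """Condensation: omit all contextual differences (time, place, firms),
    keeping only what the concrete observations have in common."""
    patterns = {o["pattern"] for o in observations}
    if len(patterns) != 1:
        raise ValueError("no single common pattern to condense")
    return patterns.pop()  # the generalized distinction, devoid of context

def confirm(condensate, firms):
    """Confirmation: apply the condensate in a new concrete situation.
    The result is never the condensate 'as such', but its use in context."""
    return f"{condensate} between {firms[0]} and {firms[1]}"

generalized = condense(observations)
print(confirm(generalized, ("A", "B")))  # each confirmation is context-bound
print(confirm(generalized, ("G", "H")))  # ...and therefore different each time
```

The point carried by the sketch is that `condense` and `confirm` are not inverses: confirming the condensate in a situation does not restore the contextual references that condensation contracted away.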

Knowledge as a structure of observation thus determines what observations come about; that is, what distinctions are drawn. In this way, knowledge determines what is, or can be, seen, but it also determines what is not, or cannot be, seen. Knowledge excludes seeing in the following two ways. First, by “suggesting” certain distinctions for observation, it suggests both a marked and an unmarked state. This unmarked state, as explained above, is not seen (by this observation), although it is constitutive of the marked state; that is, that which is observed. Thus, knowledge leads to a first unobservability by suggesting observations—as any observation produces unobservability. Second, knowledge allows the drawing of certain distinctions for observation and not other distinctions; one could also say, knowledge is a selection of certain (generalized) distinctions for observation from all possible ones. Thus, the reverse side of knowledge is an exclusion of distinctions for observation; an exclusion of possibilities of observation.

The two forms of unobservability created by knowledge are connected to each other; in fact, the second form includes the first. The fundamental unobservability of every observation refers, as explained in the last section, to the observation itself. Another observation, however, using another distinction, can observe the distinction of the first observation; that is, it can observe what the first distinction cannot. In this sense, allowing second-order observations—that is, second-order distinctions—compensates for the first unobservability. Thus, unobservability is also a question of what (second-order) distinctions are made possible by knowledge. We can nevertheless still speak of a fundamental unobservability, as the second-order observation cannot “fully” observe the distinction of the first-order observation, because it depends itself on a distinction that it cannot see. For “complete” observability it is necessary to introduce an additional third-order observation for the second-order observation, and a fourth-order observation for the third-order observation, and so on ad infinitum. Knowledge thus would have to provide generalized distinctions for infinite orders of observation, which is (practically and logically) impossible.

Knowledge can be conceptualized as selection: From the infinite number of possible observations, it selects certain possibilities of observations and excludes the remaining possibilities. In this sense, one can also speak of knowledge as the exclusion of possibilities of observation. This exclusion, which is just the complement of positive selection, we might call nonknowledge. In other words, knowledge as a selection of possibilities of observation produces, on the other side of its boundary, nonknowledge: the possibilities that have not been selected.

If one conceptualizes knowledge and nonknowledge as positive and negative selections among an (infinite) set of possibilities, knowledge might indeed initially appear as something of which one could have more or less. The more possibilities are positively selected—that is, the greater the knowledge—the less is excluded—that is, the less the nonknowledge. The greater the knowledge, the more could be observed and the less would remain unobserved. This would be true if the possibilities of observation were an independent set that didn’t change with the selections made. In fact, it is the selection itself that creates the possibilities from which it selects. The way knowledge selects and the functions (and dysfunctions) of this selection can be best understood with the help of the concepts of complexity and complexity reduction (Luhmann, 1993, 1995; Baecker, 1999a).

According to Luhmann, complexity can be conceptualized as a condition where one possesses more possibilities than can be actualized. Complexity thus means “pressure to select” (Luhmann, 1993, 1995: 23-8). In this sense, systems are complex as they possess more possibilities (of operations) than they can actualize; a psychic system cannot think (and connect to each other) all its possible thoughts and a social system cannot communicate (and relate to each other) all its possible communications. Rather, all systems have to make selections—select what possibilities to actualize and, thus, what to leave unactualized. Such selections are made with every concrete operation; every operation is a selective actualization of one of those possibilities. Systems can’t, however, select at every concrete moment from all of their possibilities. This would overburden the momentary points of selection and the reproduction of the system would be in danger (Luhmann, 1995).7 In order to ensure its continued reproduction, a system has to reduce its possibilities; that is, reduce its complexity, in relation to concrete situations. In other words, there has to be a pre-selection of possibilities before the selection of the concrete operation can take place. Complexity has to be reduced so that the concrete selection can be made easier; that is, become what Baecker (1999a) calls “simple complexity.” This reduction of possibilities is the function of structures. Structures reduce the possibilities from which at every concrete moment operations can be selected (Seidl, 2005a: 108-13). As Luhmann (1995: 283) writes: “[O]nly by structuring that constrains can a system acquire enough ‘internal guidance’ to make self-reproduction possible.”

With regard to the reduction of complexity, we can distinguish three different types of possibilities available to a system. First, there are the general possibilities—that is, all possible operations of a system (Baecker, 1999b)—which are defined as all operations that can continue or reproduce the system (Luhmann, 1995). Structures divide these general possibilities into two subsets: a subset of possibilities that are directly available for the system’s reproduction, and a subset of (for the time being) unavailable possibilities (Seidl, 2005a: 109-11). The unavailability of possibilities does not, however, mean that they are impossible, only that they cannot for the moment, or can only with difficulty, be actualized (see Figure 1).

In Spencer-Brown’s terms (1979), structures can be understood as “distinction” and “indication”: Structures draw a distinction within the possibilities of the system—creating two different sets of possibilities—and indicate one of the sets, making it the subset of available possibilities.
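The pre-selection performed by structures can be sketched as follows (a toy model with invented sets and names): a structure draws a distinction within the general possibilities, indicating an available subset, and the concrete operation then selects only from that reduced set.

```python
import random

# General possibilities: everything that could, in principle, continue the system.
general_possibilities = {f"op{i}" for i in range(100)}

def structure(possibilities, predicate):
    """A structure draws a distinction within the possibilities and indicates
    one side: the subset available for concrete selection. The other side is
    not impossible, merely unavailable for the time being."""
    available = {p for p in possibilities if predicate(p)}
    unavailable = possibilities - available
    return available, unavailable

def operate(available):
    """The concrete operation selects from the reduced, 'simple' complexity,
    not from the full set of general possibilities."""
    return random.choice(sorted(available))

# One (arbitrary) structural pre-selection: only "low-numbered" operations.
available, unavailable = structure(general_possibilities,
                                   lambda p: int(p[2:]) < 10)

print(len(general_possibilities), "->", len(available))  # complexity reduced
print(operate(available))  # the concrete selection, now manageable
```

Without the pre-selection step, `operate` would face all 100 possibilities at once; with it, the concrete selection is made easier, which is exactly the function ascribed to structures in the text.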

Figure 1: Structures as a pre-selection of possibilities

Paradoxically, the reduction of complexity—that is, the limitation of available possibilities—at the same time is a mechanism for increasing complexity (Luhmann, 1995, 2000). That is to say, by reducing the originally given possibilities, new possibilities are created. In this sense, Luhmann distinguishes between primary and secondary complexity. He writes: “similarly to the evolution of language or the design of a transport network, the reduction of complexity can be used to build up secondary complexity” (Luhmann, 2000: 222, my translation).

We can now apply the concept of complexity and complexity reduction to our specific topic of knowledge and unobservability. On the basis of Spencer-Brown’s notion of observation, systems can generally be conceptualized as observing systems—as systems consisting of observations (von Foerster, 1981). Systems, we then have to say, are complex to the extent that they possess more possibilities of observation than they can realize; they are forced to select their observations. Knowledge as a structure of observation, then, can be understood as a means of complexity reduction: It pre-selects from the general possibilities of the system a smaller set of possibilities, which then become available as possibilities to the concrete situations of observation. Thus, in the concrete situation of observation there is still a selection to be made, but the selection is made easier (and only because of that, possible for the system) because a pre-selection has been made. From this perspective it becomes clear that knowledge makes observation possible only by excluding possibilities of observation (nonknowledge).

The concept of knowledge as structures of observation can now be used in a wider or narrower sense. In a wider sense, one can refer to all structures of a system as the knowledge of the system. Thus, the language of a social system would count as knowledge, just as contracts, roles, or positions would.8 Similar concepts of knowledge can also be found elsewhere in the literature: Nelson and Winter (1982), for example, refer to routines as social knowledge. In a narrower sense, one could reserve the term knowledge for a specific kind of structure. Luhmann (1994: Chapter 3; 1995), for example, defines knowledge more narrowly as cognitively stabilized (social) structures in contrast to normatively stabilized (social) structures. What distinguishes knowledge from other types of structures, here, is the reaction—or, better, expected reaction—to deviations from the structural pre-selection. In the case of knowledge (i.e., cognitively stabilized structures) structures are expected to be changed, if the operations/observations are inconsistent with the structures. The classical example is science, where theories (as structures of the social system of science) are expected to be changed if the observations made are inconsistent with those structures. Normatively stabilized structures, in contrast, are expected to be retained in the case of deviating operations. The classical example is the social system of law, where laws (as structures of the social system of law) are retained even when the observations deviate from the structures—the laws aren’t expected to be changed when people act against them (Luhmann, 1994: 138). In practice, the two kinds of stabilization are mostly combined (in various forms). For example, even scientific theories are often (implicitly) also normatively enforced.

For reasons of simplicity, in the following the concept of knowledge will be used in the wider sense; that is, the paper won’t distinguish between cognitively and normatively stabilized structures. For the argument, however, the distinction wouldn’t make a difference.

Knowledge and nonknowledge in organizations

Organizations can be conceptualized as social systems that operate on the basis of specific communications: decision communications. Luhmann (2000, 2003, 2005) defines organizations as systems that consist of decisions and that themselves produce the decisions of which they consist, through decisions of which they consist.9 This specific mode of operation has particular implications for the way in which organizations produce knowledge and can handle their nonknowledge.

Organizations are complex systems in the sense that they possess more possibilities for making decisions than they can actualize (Luhmann, 1980). In other words, organizations are forced to make selections with regard to what decisions to make. Like all complex systems, organizations reduce their complexity by producing structures that pre-select the possibilities for concrete decision situations. That is, in concrete decision situations the organization doesn’t select its decisions from all possible decisions, but only from a smaller set of possible decisions defined by the decision structures. On the basis of our explanations in the first section, the decisions of organizations can be described as organizational observations: Every decision draws a distinction between that which it decides, and everything else. Accordingly, organizational knowledge can be described as decision structures. These structures on the one hand make the organization “see,” to the extent that they enable it to make decisions, and on the other hand also create unobservability by excluding possibilities of decision making. Thus, again, the other side of the organizational knowledge is unobservability or nonknowledge.

We can distinguish two kinds of decision structures in organizations. On the one hand there are the formal decision structures, or, as Luhmann calls them, decidable decision premises (Luhmann, 2003, 2005; Seidl & Becker, 2006). These decision structures are themselves the explicit product of earlier decisions; they are decision structures that have been decided on. On the other hand, there is the organizational culture, or, as Luhmann (2000: 241-9) calls this type of structure, undecidable decision premises. In contrast to decidable decision premises, the undecidable decision premises are not explicitly decided on; they more or less emerge as an unintended side-effect of the decision processes. In Spencer-Brown’s terms, the intentional or unintentional emergence of decision premises can be described as condensation. Earlier decisions are condensed into distinctions that can be confirmed in different decision situations.

The unobservability created by knowledge is particularly extreme in the case of organizations due to their specific mode of operation, which Luhmann (2005) describes as uncertainty absorption and Karl Weick (1979) as reduction of equivocality. According to Luhmann (2000, 2003, 2005), uncertainty absorption is the central mode of reproduction of an organization. During the succession of decisions, the uncertainty involved in each earlier decision is absorbed; in the communication of decisions, the uncertainty involved in decision making is not communicated. In this sense, later decisions orienting themselves according to earlier decisions are unaware of any uncertainty involved in those earlier decisions.

If we describe this issue from the perspective laid out above, uncertainty absorption constitutes a way of eliminating possibilities of observation. The uncertainty of an observation can be understood as different possibilities of observation (in the form of decisions). At the initial decision situation the organization can draw its distinctions in different ways—there is no determinism there, but there is uncertainty. However, after the organization has drawn the distinction—ultimately by chance—the initial possibilities of observation disappear. Most importantly, the organization isn’t really aware of the possibilities of observation that have disappeared. One could also say, organizations forget other possibilities of observation. As Luhmann writes:

“When it comes to organizations, memory is linked to the uncertainty absorption that connects decisions with decisions. It forgets the generally underlying uncertainty, unless it has become part of the decision in the form of doubts or reservations. […] By and large […] it only retains what later decisions draw upon as decision premises” (Luhmann, 2000: 193, my translation, footnote omitted).

For the organization, this means that the unobservability created by the organizational knowledge—that is, the organizational structures—is particularly severe. In the case of organizations, the exclusion of possibilities of observation is particularly strict and the distinction between knowledge and nonknowledge is particularly strictly drawn. Weick and Westley (1996) make a similar point when they speak of organizing and learning as essentially “antithetical processes.” In face-to-face interactions (which are an example of another type of social system), it is much easier to deviate from structural selections. One can very easily question earlier communications and communicate differently.

At this point, we can summarize the argument so far. Organizations create their knowledge in the form of decision premises, which make decision making possible by limiting the number of possible decisions. In other words, organizational knowledge makes observations possible, but at the same time it implies nonknowledge, which means unobservability. Regarding organizational observations, this means that they are at least as much based on nonknowledge as they are on knowledge. Because of the specific mode of reproduction, which can be referred to as uncertainty absorption or equivocality reduction, the demarcation between knowledge and nonknowledge, or observability and unobservability, is particularly strict.

Intelligence as the ability to deal with one’s nonknowledge

Stevan Dedijer (1984) introduced the concept of social intelligence to refer to the capability of social systems—be they nations or organizations—to “manage” their knowledge. He describes intelligence as a higher-level capability that is located “above” knowledge (Jéquier & Dedijer, 1987: 14). Accordingly, intelligence can be described as the observation of one’s own knowledge/nonknowledge. Against the background of our analysis so far, it becomes clear that this capability is less straightforward than might be initially expected. If one accepts that knowledge is not something that can be accumulated—as Dedijer seems to imply—but is rather an exclusion of a system’s possibilities, intelligence appears as a paradoxical phenomenon: It is the inclusion of the excluded (Baecker, 1995; 1999a: 191). How can this be conceptualized?

Knowledge, as we have argued so far, is a precondition for being able to observe at all. Every creation of knowledge, however, leads to the creation of its other side—that is, of nonknowledge—as unobservability. This means that all operations of a system, to the extent that they are based on knowledge, ultimately are based on nonknowledge. Systems cannot but base their operations on knowledge, the foundation of which remains unknown to them. Metaphorically speaking, all operations of systems are chosen blindly—with the illusion of sight. In this sense, von Foerster speaks of a double closure of cognitive systems (1981: 304-5; 2003). Overcoming this blindness means dealing with the paradox of knowing what one doesn’t know. On the basis of Spencer-Brown’s theory of observation (1979), this can be described as a re-entry of nonknowledge into knowledge; the unmarked state re-enters the marked state; one could also say, knowledge observes itself.

An important point about the re-entry of the unmarked state into the marked state, or, better, of the distinction into the distinction, is that the re-entered distinction is not the original distinction—the “mark” is not the “cross” (Spencer-Brown, 1979; Luhmann, 1994: 379-80). In other words, intelligence would enable systems to know what they don’t know, but the knowledge of their nonknowledge wouldn’t be identical with the excluded knowledge; it wouldn’t be knowledge as such (“cross”: the drawing of a distinction) but just a representation of that knowledge (“mark”: a representation of the distinction that they do not draw).
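As an illustrative aside of ours, and not part of the paper's own argument, the primary arithmetic underlying Spencer-Brown's crosses can be emulated as a tiny rewriting system, in which the string "()" plays the role of the cross (the marked state) and the empty string the role of the unmarked state:

```python
# Illustrative sketch only: Spencer-Brown's primary arithmetic reduced to
# two rewriting rules on strings of nested parentheses, where "()" stands
# for the cross (marked state) and "" for the unmarked state. This naive
# reduction suffices for simple forms such as those in the axioms.

def simplify(form: str) -> str:
    """Apply condensation ()() -> () and cancellation (()) -> "" to a fixpoint."""
    while True:
        reduced = form.replace("()()", "()").replace("(())", "")
        if reduced == form:
            return form
        form = reduced

# The two initial equations (Spencer-Brown, 1979: 1-2):
assert simplify("()()") == "()"   # condensation: to call again is to call once more, not anew
assert simplify("(())") == ""     # cancellation: to cross twice is not to cross
```

Every form built from these rules reduces either to the cross or to the void; `simplify("(()())")`, for instance, first condenses the inner pair and then cancels the result, yielding the unmarked state.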

For a system, intelligence means that it orients its observation not only according to its knowledge but also according to representations of its nonknowledge. Baecker writes in this context:

“Intelligence is a mode of operation that makes it possible to calculate not only with markings [i.e. marked states] but also with non-markings [i.e. unmarked states]” (Baecker, 1995: 170, my translation).

In the following, three examples of organizational intelligence will be discussed: interorganizational networks, heterarchy, and organizational interactions. In all of these examples the organization possesses—in addition to its knowledge—representations of its nonknowledge.

Interorganizational networks

The integration of an organization in an interorganizational network has important implications for the way it can deal with its nonknowledge. Different organizations in this network possess different structures; they have pre-selected different possibilities of observation, and thus their distinction between observability and unobservability is drawn differently. In this sense, what constitutes knowledge for one organization is likely to constitute nonknowledge for another. In other words, what one organization is able to see, another one might be blind to. In this sense, the relation of an organization to other organizations in the network can be understood as a relation to its own nonknowledge.

This is not to say that an organization gains “access” to the knowledge of another organization; what could that mean? As argued above, knowledge isn’t anything that can be stored, of which one can have more or less; rather, knowledge is a (pre-)selection. Different knowledge of different organizations thus has to be conceptualized as a different selection. In this sense, “appropriating” the knowledge of another organization would have to be conceptualized as giving up one’s own selection (cf. Hedberg, 1981; Weick & Westley, 1996; Baecker, 2000). This wouldn’t solve the problem of not knowing what one doesn’t know, as it would only create new knowledge at the expense of the old knowledge becoming nonknowledge. Apart from that, the knowledge of different organizations in the network isn’t that easily accessible to other organizations; integration into interorganizational networks doesn’t imply dissolution of boundaries between different organizations in the network, as several authors seem to suggest (Heydebrand, 1989: 331; Wigand et al., 1997). Rather, the concept of network only makes sense on the basis of distinct operative unities, distinct organizations. It can even be argued that networks enforce organizational boundaries (Baecker, 1999a: 191; Luhmann, 2000).

The significance of interorganizational networks with regard to an organization’s knowledge and nonknowledge cannot be found in any kind of access to the knowledge of other organizations; rather, it lies in how the knowledge of other organizations is represented in the focal organization. Networks force organizations to communicate about their relation to other organizations in the network (Baecker, 1999a: 191); this is also a relation to other ways of observing or other forms of knowledge. The “relation” to other organizations here stands, within the communication, as a representation of the nonknowledge of the focal organization; the focal organization doesn’t know what another organization knows, but by talking about its own relation to the other organization, it can talk about this knowledge. In this context, Baecker writes:

“There is an intelligence in networks, which lies in the fact that complexity is represented by knowledge of nonknowledge, and that the development of knowledge is made dependent on continually making the nonknowledge an explicit topic of communication. Networks make it necessary to communicate about contacts. And they make it impossible not to do so” (Baecker, 1999a: 191, my translation).

In other words, alternative ways of (pre-)selection are represented through relations to other organizations. In their concrete selections of decisions, organizations orient themselves according to their own structures, their own knowledge, but also according to relations to other organizations in the network. Or, in more concrete terms, in specific decision-making situations, organizations communicate about their own structures as well as about the consequences of decisions for their relations to other systems in the network; thus, they communicate about how other organizations in the network will observe the decision (or its consequences). In this sense, they communicate about the knowledge of other systems, of which they know only how it affects the way in which the organizations relate to each other. When an organization communicates about its relation to other organizations, it communicates about its nonknowledge. Thus, the organization is aware of its nonknowledge and alert to the limitations of its own knowledge. It is aware that its own knowledge is a contingent selection, which could be different. Organizations are thus much more open to changing their knowledge if there are signs that it might no longer be viable.


Heterarchy

Warren McCulloch (1965) introduced the concept of heterarchy as a counter-notion to that of hierarchy. Instead of a strictly transitive ordering of the components of a system, as expressed by the concept of hierarchy, heterarchy is defined as a mode of operation in which the components of the system are ordered according to the concrete requirements of the situation—which doesn’t exclude temporary hierarchical orderings. While in hierarchies the ordering of components is perceived as pre-given, in heterarchies the ordering is an explicit issue. Heterarchy forces an organization to communicate about its current ordering in the given situation, and about alternative orderings that might be more appropriate to this or other situations.

According to our theoretical perspective, different forms of ordering can be understood as different forms of knowledge; they allow for different observations to be made. In this sense, different forms of ordering imply different distinctions between knowledge and nonknowledge. Thus, when organizations are forced to communicate about different orderings, in effect they are forced to communicate about alternative knowledge (which they do not possess—yet) or about nonknowledge. In the concrete communications, this nonknowledge is represented as alternative ways of relating “subunits” or positions; by communicating about possibilities of relating various subunits, an organization can communicate about the knowledge that results from those subunits, without possessing that knowledge. The organization can then select its structures/knowledge on the basis of its discussion of different possibilities of relating subunits as representations of the knowledge that would result from them. In other words, heterarchy means calculation with nonknowledge as calculation with alternative forms of relating “subunits.”

Organizational interactions

A third form of organizational intelligence can be found in the differentiation between formal and informal organization, or, better, organization and organizational interactions (Kieserling, 1994, 1999; Seidl, 2005a: 54-60, 2005b). Organizations and interactions can be conceptualized as two different types of systems that operate in different manners. While organizations reproduce themselves on the basis of decision communications, interactions reproduce themselves on the basis of communications among the people present (Luhmann, 1995: Chapter 10). Because of their different modes of reproduction, organization and interaction process their information—even when referring to the same phenomenon—in different ways; the same communication has a different meaning for the organization than it has for the interaction. In this sense, the existence of interactions “within” organizations means the parallel existence of different modes of information processing, or, in our terminology, the existence of different knowledge and processes of observation (Seidl, 2005b).

As in the case of interorganizational networks, the knowledge of the organization and that of the organizational interactions taking place within it remain separate from each other. The organization knows what it knows but doesn’t know what the organizational interactions know. However, the organization does know that organizational interactions possess knowledge that the organization doesn’t possess; that is, knowledge that constitutes organizational nonknowledge. For example, organizational interactions often “remember,” and in this sense know about, the (absorbed) uncertainty involved in earlier decisions of the organization and thus know alternative ways of structuring organizational communications (Kieserling, 1999: 385-6; Seidl, 2005a). This nonknowledge of the organization is represented within the organization as possibilities of initiating organizational interactions. In the case of interorganizational networks, organizations are forced to communicate about their relations to other organizations, and in the case of heterarchies they are forced to communicate about alternative ways of ordering subunits. Similarly, organizations are forced to continually communicate about the integration of organizational interactions into the decision process. They always have to decide which meeting between which people at what time should decide on which issue. Organizations know that different interactions know different things and will decide differently. Although organizations don’t know what the interactions know, they do know that the interactions possess knowledge that they don’t possess. The organization is intelligent to the extent that it substitutes the interactions’ knowledge for its own nonknowledge (Baecker, 1994, 1999a). Organizations base their calculations on different ways of using organizational interactions in their decision making. In this sense, organizations base their calculations on their nonknowledge.


Conclusion

This paper has explored the concepts of knowledge and intelligence from the perspective of new systems theory. The central argument was that organizations, like all complex systems, have to reduce their complexity—that is, the number of possibilities—in order to be able to operate. Organizational knowledge in this sense was described as a mechanism of complexity reduction: Knowledge constitutes a selection between those possibilities that become available to the system as possibilities of observation, and those possibilities that are excluded. Thus, apart from observability, knowledge inevitably produces unobservability—or nonknowledge—on what we described as its “other” side. The paper then went on to argue that organizations are intelligent to the extent that they are able to re-enter nonknowledge into knowledge; in other words, to the extent that they are able to produce representations of their nonknowledge, which allows them to base their calculations on nonknowledge as if it were knowledge.


Notes

  1. Distinction and indication are two aspects of the same operation. For reasons of clarity, however, we have presented them in a temporal sequence.

  2. To be precise, we would have to refer to the “unwritten cross” (Spencer-Brown, 1979: 7) as the specific context in which the communication takes place.

  3. And probably also mentally (cf. Weick & Westley, 1996: 446-7), but this is not of relevance here.

  4. This doesn’t mean, of course, that language completely determines what can and what can’t be communicatively observed. There is, for example, non-language-based communication, e.g., gestures or the fine arts.

  5. In Spencer-Brown’s notation, the form of condensation (1979: 1).

  6. In Spencer-Brown’s notation, the form of cancellation (1979: 2).

  7. From the perspective of the system as a whole, the selection of possibilities in the concrete situation of operation would be pure chance (i.e., no systematic relation between the particular selection and other selections within the system). This would constitute a state of entropy (Luhmann, 1995: 49).

  8. In addition to that, one could use the term knowledge for structures of any kind of system: organic, psychic, social, and even mechanical.

  9. This doesn’t mean, of course, that there aren’t other communications “in” organizations (e.g., gossip), but these communications do not reproduce the organization (Luhmann, 2000: 68).



References

Baecker, D. (1994). “The intelligence of ignorance in self-referential systems,” in R. Trappl (ed.), Cybernetics and systems ‘94: Proceedings of the Twelfth European Meeting on Cybernetics and Systems Research, ISBN 9789810217617, pp. 1555-1562.


Baecker, D. (1995). “Über Verteilung und Funktion der Intelligenz im System,” in W. Rammert (ed.), Soziologie und Künstliche Intelligenz: Produkte und Probleme einer Hochtechnologie, ISBN 9783593352930, pp. 161-186.


Baecker, D. (1999a). “Einfache Komplexität,” in D. Baecker (ed.), Organisation als System, ISBN 9783518290347, pp. 169-197.


Baecker, D. (1999b). “Kommunikation im Medium der Information,” in R. Maresch and N. Weber (eds.), Kommunikation, Medien, Macht, ISBN 9783518290088, pp. 174-191.


Baecker, D. (2000). “Die verlernende Organisation,” unpublished manuscript, Universität Witten/ Herdecke.


Baecker, D. (2002). “Why systems?” Theory, Culture and Society, ISSN 0263-2764, 18: 59-74.


Baecker, D. (2006). “The form of the firm,” Organization, ISSN 1350-5084, 13: 109-142.


Brown, A. and Jones, M. (2000). “Honourable members and dishonourable deeds: Sensemaking, impression management, and legitimation in the ‘arms to Iraq’ affair,” Human Relations, ISSN 0018-7267, 53: 655-689.


Chia, R. (1994). “The concept of decision: A deconstructive analysis,” Journal of Management Studies, ISSN 0022-2380, 31: 781-806.


Cooper, R. (1986). “Organization/disorganization,” Social Science Information, ISSN 0539-0184, 25: 299-335.


Dedijer, S. (1984). “The 1984 global system: Intelligent systems, development stability and internal security,” Futures, ISSN 0016-3287, 16(1): 18-37.


Derrida, J. (1968). “La ‘différance’,” Bulletin de la Société Française de Philosophie, ISSN 0037-9352, 63: 73-120.


Foerster, H. von (1981). Observing systems, ISBN 9780914105190.


Foerster, H. von (2003). “For Niklas Luhmann: ‘How recursive is communication’” in H. von Foerster (ed.), Understanding Understanding: Essays on Cybernetics and Cognition, ISBN 9780387953922 (2002), pp. 305-324.


Giddens, A. (1984). The Constitution of Society: Outline of the Theory of Structuration, ISBN 9780745600062.


Hedberg, B. L. T. (1981). “How organizations learn and unlearn,” in P. C. Nystrom and W. H. Starbuck (eds.), Handbook of Organizational Design, Vol. 1, ISBN 9780198272410. pp. 3-27.


Heydebrand, W. V. (1989). “New organizational forms,” Work and Occupation, ISSN 0730-8884, 16: 323-357.


Holmes, S. (1989). “The permanent structure of antiliberal thought,” in N. Rosenblum (ed.), Liberalism and the Moral Life, ISBN 9780674530201, pp. 227-253.


Holmes, S. (1993). The Anatomy of Antiliberalism, ISBN 9780674031807.


Jéquier, N. and Dedijer, S. (1987). “Information, knowledge and intelligence: A general overview,” in S. Dedijer and N. Jéquier (eds.), Intelligence for Economic Development: An Inquiry into the Role of the Knowledge Industry, ISBN 9780854965205, pp. 1-23.


Kieserling, A. (1994). “Interaktion in Organisationen,” in K. Dammann, D. Grunow and K. P. Japp (eds.), Die Verwaltung des Politischen Systems: Neue systemtheoretische Zugriffe auf ein altes Thema, ISBN 9783531123738, pp. 168-182.


Kieserling, A. (1999). Kommunikation unter Anwesenden: Studien über Interaktionssysteme, ISBN 9783518582817.


Luhmann, N. (1980). “Komplexität,” in E. Grochla (ed.), Handwörterbuch der Organisation, ISBN 9783791080161, pp. 1064-1070.


Luhmann, N. (1991). “Wie lassen sich latente Strukturen beobachten?” in P. Watzlawick and P. Krieg (eds.), Das Auge des Betrachters: Beiträge zum Konstruktivismus, ISBN 9783492034975, pp. 61-74.


Luhmann, N. (1992). “Gibt es ein ‘System’ der Intelligenz?” in M. Meyer (ed.), Intellektuellendämmerung?: Beiträge zur neuesten Zeit des Geistes, ISBN 9783446171237.


Luhmann, N. (1993). “Haltlose Komplexität,” in N. Luhmann (ed.), Soziologische Aufklärung 5: Konstruktivistische Perspektiven, ISBN 3531420941, pp. 59-76.


Luhmann, N. (1994). Die Wissenschaft der Gesellschaft, ISBN 9783518286012 (1992).


Luhmann, N. (1995). Social Systems, ISBN 9780804719933.


Luhmann, N. (2000). Organisation und Entscheidung, ISBN 9783531134512.


Luhmann, N. (2002). “Identity: What or how,” in W. Rasch (ed.), Theories of Distinction: Redescribing the Descriptions of Modernity, ISBN 9780804741224, pp. 113-127.


Luhmann, N. (2003). “Organization,” in T. Bakken and T. Hernes (eds.), Autopoietic Organization Theory: Drawing on Niklas Luhmann’s Social Systems Perspective, ISBN 9788763001038, pp. 31-52.


Luhmann, N. (2005). “The paradox of decision making,” in D. Seidl and K.-H. Becker (eds.), Niklas Luhmann and Organization Studies, ISBN 9788763001625, pp. 85-106.


McCulloch, W. S. (1965). Embodiments of Mind, ISBN 9780262631143 (1988).


Nelson, R. and Winter, S. (1982). An Evolutionary Theory of Economic Change, ISBN 9780674272279.


Seidl, D. (2003). “Organizational identity in Luhmann’s theory of social systems,” in T. Bakken and T. Hernes (eds.), Autopoietic Organization Theory: Drawing on Niklas Luhmann’s Social Systems Perspective, ISBN 9788763001038, pp. 123-150.


Seidl, D. (2005a). Organizational Identity and Self-Transformation: An Autopoietic Perspective, ISBN 9780754644583.


Seidl, D. (2005b). “Organization and interaction,” in D. Seidl and K.-H. Becker (eds.), Niklas Luhmann and Organization Studies, ISBN 9788763001625, pp. 145-170.


Seidl, D. and Becker, K.-H. (2006). “Organizations as distinction generating and processing systems: Niklas Luhmann’s contribution to organization studies,” Organization, ISSN 1350-5084, 13: 9-35.


Spencer-Brown, G. (1979). Laws of Form, ISBN 9780525475446.


Weick, K. E. (1979). The Social Psychology of Organizing, ISBN 9780201085914.


Weick, K. E. and Westley, F. (1996). “Organizational learning: Affirming an oxymoron,” in S. R. Clegg, C. Hardy and W.R. Nord (eds.), Handbook of Organization Studies, ISBN 9780761951322, pp. 440-458.


Wigand, R. T., Picot, A. and Reichwald, R. (1997). Information, Organization and Management: Expanding Markets and Corporate Boundaries, ISBN 9780471964544.


Willke, H. (1995). Systemtheorie III: Steuerungstheorie, ISBN 9783825218409.


Willke, H. (1996). “Dimensionen des Wissensmanagements - zum Zusammenhang von gesellschaftlicher und organisationaler Wissensbasierung,” in G. Schreyögg and P. Conrad (eds.), Managementforschung, ISSN 1615-6005, 6: 263-304.
