We all need an occasional whack on the side of the head to shake us out of routine patterns, to force us to rethink our problems, and to stimulate us to ask new questions that may lead to other answers. (von Oech, 1983)

Systems thinking is one of several contemporary paradigms of organizational problem solving that is widely considered “profound.” It is accorded this august and somewhat pretentious status because it does not fit neatly into an existing scientific discipline and because it is thought to aid in the discovery of solutions not apparent through the application of these other disciplines. “At its broadest level, systems thinking encompasses a large and fairly amorphous body of methods, tools, and principles, all oriented to looking at the interrelatedness of forces, and seeing them as part of a common process” (Senge et al., 1994). Systems thinking “has been developed over the past fifty years, to make the full patterns clearer, and to help us see how to change them effectively” (Senge, 1990: 7). No widely accepted canon of systems thinking exists. Different scholars include different elements and give them different emphases.

There is probably no work more responsible for the introduction of systems thinking into American business than Peter Senge’s The Fifth Discipline (1990). On the New York Times bestseller list for weeks, this book was a dominant influence on business thinking throughout the early 1990s. A major portion of it was devoted to a description of system structures that recur again and again. Known as systems archetypes, several of these generic structures are so common that they deserve to be kept in mind whenever one must diagnose system states and determine how they should be managed. Yet there are several factors beyond these archetypes that should be considered as part of any system diagnosis. These include symptoms, critical thinking, pattern recognition, and boundary conditions.


There are no incontrovertible measures of the performance of a human system. Ultimately, systems thinkers must base their assessment on a series of indicators, each of which provides but a partial picture of the system’s health. In a business organization, it is common to use financial measures to gauge system performance. Levels of goal accomplishment, market share, and standing with the public are other measures of the overall wellbeing of organizational systems. When it comes to subsystems, performance relative to accrued costs, project milestones, or sales targets may suffice as relevant indicators. Some managers have their favored indices of system performance. Bill Blount, President of Power Motive Corporation, has his favorite:

I check out how employees handle the company equipment. They don’t own it so how they care for it is an important sign of their attitude toward the company. Have you ever noticed the difference between the inside of a privately owned cab compared to a company-owned cab? The owner doesn’t have fourteen used coffee lids on the dashboard. (Benton, 1996: 141)

One thing is for certain: What is going on in a system can seldom be understood by the indicators alone. As Ortega y Gasset asserts:

To know is to be not content with things as presented to us but to seek beyond their appearance for their being. This “being” of things is a strange condition: it is not made clear in things, but on the contrary, it throbs hidden within them, beneath them, beyond them. (Gasset, 1960: 67)

Diagnosis figures prominently in this process. System thinkers must reason diagnostically from those things that are apparent to the underlying relationships that account for the troublesome situation. This takes a great deal of concentration and experience. Neil Georgi, CEO of Neil Georgi & Associates, Inc., puts it this way:

It’s impossible to know everything, so I’m silent and listen. It gives me an ability to develop a strategy. When I’m able to listen, I can envision how things fit together. (Benton, 1996: 78)

Yet, reflection is an imperfect antidote for biased thinking. Sometimes, diagnoses are clouded by our tendency to look for causes of system disorders in familiar places.

Marketers look for marketing problems and solutions, engineers look for engineering problems and solutions, accountants look for accounting problems and solutions … Sherlock Holmes explained his mystical powers of detection by pointing out, “Since all here see without observing, nothing is clear to them.” What is needed is the ability to observe things as they are, not to see things as we are most comfortable with them being. We need to always ask ourselves, “What am I looking for that’s keeping me from seeing what is there?” (McGill & Slocum, 1994: 250)

In fact, systems thinking theorist Peter Senge advises against the bias of looking for problem causes in convenient places:

[There] is a fundamental characteristic of complex human systems: “cause” and “effect” are not close in time and space. By “effects,” I mean the obvious symptoms that indicate that there are problems … By “cause” I mean the interaction of the underlying system that is most responsible for generating the symptoms, and which, if recognized, could lead to changes producing lasting improvement. Why is this a problem? Because most of us assume they are—most of us assume, most of the time, that cause and effect are close in time and space. (Senge, 1990: 63)

In general, when system indicators signal problems, system thinkers must judge how and when to act to return the system to an acceptable state. In simple systems, this is comparatively straightforward. If an inspector detects a system producing too many defective parts, a small adjustment made immediately may suffice. As the system in question becomes larger and more complex, it is typically more difficult to determine how to get leverage on system performance. One skill absolutely essential in this regard is critical thinking.


In recent years, the education establishment in the US has begun to warm to the notion that teachers need to develop critical thinking among their students (Jones et al., 1995). The principal reason for this increased attention to critical thinking is difficult to pin down. However, two features of contemporary life appear to be driving it. First, it is clear that contemporary life is richer from an information standpoint than ever before. Commercial markets deluge customers with information about products and services. Politicians and interest groups lobby the populace for their support of causes. The media is filled with statements that seem to have the weight of authority. The question is whether young people are being educated to judge the veracity of claims accurately and to discriminate between valid and spurious messages. The second reason that critical thinking has so much currency today is that occupations increasingly demand critical thinking skills (Hunt, 1995; Reich, 1992).

The kind of “work” required more and more in industry and business is intellectual, that is, it requires workers to define goals and purposes clearly, seek out and organize relevant data, conceptualize those data, consider alternative perspectives, adjust thinking to context, question assumptions, modify thinking in light of the continual flood of new information, and reason to legitimate conclusions (Paul & Nosich, 1992: 87-8).

It goes without saying that critical thinking is a vital skill for leaders (Novelli & Taylor, 1993). Without it, modern work organizations are likely to be swept up in every hyped idea and fashionable policy initiative.

Without a substantial groundwork of quality in thinking … “total quality management” will be a mockery of Deming, all procedure and no substance … “multiculturalism” will move society in the direction of cultural anarchy, all diversity and no unity, and “group problem solving” and “decision making” will confuse group agreement with group rationality, and conformity with depth and quality of reason. (Paul, 1995: xiii)

The term “critical thinking” generally refers to higher-order cognitive skills involved in rendering judgment, conducting analysis, and synthesizing disparate information, as opposed to thinking done in a mechanical or rote fashion (Halpern, 1996, 1998). Critical thinking is not synonymous with intelligence. Indeed, studies relating critical thinking to other psychological traits have found a closer relation to open-mindedness and self-monitoring (Klaczynski et al., 1997; Stanovich & West, 1997; Walsh & Hardy, 1997). It is thinking that is imaginative, sensitive to context, planful, skeptical, and self-monitored. It is critical not of others but of spurious arguments. Its criticism is also aimed inward, not to the point of debilitating self-loathing, but to the point where one’s own closely held premises are frequently under scrutiny and potential revision.

As it applies to systems thinking, critical thinking takes any characterization of a system state very seriously. Let’s say that Phyllis is an office manager whose assistant Bob complains that morale in the office is so poor that members of the staff will surely resign if Phyllis does not do something soon. This is actually a fairly common message for a manager to receive, a kind of “do something or else” plea. Some managers in Phyllis’s position might ask for recommendations from those like Bob who seem to have something specific in mind. A critically thinking manager would rarely act before carefully analyzing the basis for Bob’s request.


One component of critical thinking is understanding the role that assumptions play in any characterization of a system state (Brookfield, 1987). An assumption is a statement for which no proof or evidence is stated (Halpern, 1996). Some of the assumptions behind Bob’s characterization, namely that office morale is so low that the staff will surely quit, are listed below:

  • Bob has accurately assessed the morale level.

  • The only major factor leading the staff to quit is the low morale level.

  • All of the staff members will quit.

  • It would be a problem if the staff resigned.

All of these assumptions might be reasonably challenged.

Some contemporary managers systematically practice this element of critical thinking. John Seely Brown, the former research director of Xerox’s Palo Alto Research Center, sees this kind of questioning as an integral part of his job:

I constantly question the assumptions and beliefs of the organization. One of my main roles is to set the stage for there being deep engagement in terms of what are those assumptions. Are they any longer valid? What are the trends and context that they are operating on? What is the limit to those trends and context? What would be different if those trends were followed out to their logical conclusion? What does this really mean for what we do? From that position we can then step back to really question if the things we are currently doing really make any sense. (Sherman & Schultz, 1998: 27)


Critical thinking also demands valid evidence for any claim (Fogelin, 1987). As we have seen, an assumption behind Bob’s characterization is that he accurately assessed the morale level. A critical thinker might assess whether his method of assessment would stand up to scientific standards. Similarly, a question could be raised about the evidence that makes Bob so certain that people are about to resign. In general, Bob has made a series of claims that are, as yet, unsubstantiated. Now, if Phyllis is convinced that Bob does not generally make such claims without corroborating evidence, she might choose not to raise questions of evidence. If she did waive them, however, she would want to be sure she had the evidence such a waiver would require. Once again, critical thinkers hold themselves to the same standards to which they hold others.


Critical thinkers avoid any semblance of absolutist thinking (Brookfield, 1987). Generalizations are expected to be valid only within their specific historical and environmental context. In the situation at hand, Phyllis may be more than curious about the context within which this report has been made. Does Bob provide many such warnings? Is there something that has occurred with or to the office staff that would make their morale problem historically distinctive? Are resignation threats an instrument whereby others in the organization get attention from their supervisors? Questions like these place the events in a context that enable a much finer-grained understanding of what is occurring and possibly even why.


Reflective skepticism is a central feature of critical thinking (Brookfield, 1987). Critical thinkers are highly suspicious of “theories” based on tradition, common sense, or conventional wisdom. Although there might be other interpretations, common sense might argue that Phyllis should reject Bob’s claim outright. After all, how often does an entire staff resign, even under the most extreme instances of poor morale? However, a critical thinker might in fact entertain this possibility precisely because it is so counter-intuitive. In other words, critical thinking implies a freedom from pedestrian thinking, no matter what form it takes or what conclusion it supports.


The truth often contains inconsistencies, incongruities, and contradictions, and the critical thinker pursues these as leads that might result in productive revelations (Paul, 1995). In the case of the office staff, the curious juxtaposition of the existence of an assistant and the report of a severe attitude problem just does not seem right. Would it not be reasonable to assume that an assistant should have done something before the situation got so apparently out of hand? Moreover, there are apparently two escalation events tied into this scenario. The staff have presumably signaled their dissatisfaction to the assistant, who has in turn passed it along to his boss. Again, do these two characterizations of system states tell us anything about the situation (e.g., do people low in the hierarchy not feel heard)?

In summary, critical thinking involves: (a) identifying and challenging assumptions, (b) carefully scrutinizing evidence, (c) requiring a sensitivity to context, (d) being skeptical about causes and effects, and (e) investigating incongruities and contradictions. Clearly, this is not appropriate in every situation. Like other cognitive skills, critical thinking requires deliberate, effortful, and sometimes intense cognitive work (Wagner, 1997). Moreover, critical thinking is not always socially welcome. It often results in awkward questions being raised and implicit considerations being surfaced. It can slow the decision process down and lead to the impression that the critical thinker is argumentative and ornery. However, practiced effectively and in the right situation, with pattern recognition and an eye to boundary conditions, critical thinking is an indispensable talent of every accomplished system thinker.


A common theme in contemporary literature about organizations is that the relationships between organizational attributes are complex and loosely coupled. This sounds incredibly pat. After all, social observers have been arguing for a complicated rather than simple worldview as long as there have been social observers, so there appears to be nothing new. However, there is a system concept that justifies this more complex approach to problem solving: the principle of requisite variety (Ashby, 1956; Morgan, 1986). Basically, this holds that the regulatory mechanism of a system must be as complex as the environment with which it is trying to deal. Since modern organizational problems are complex, decision makers must match this environmental reality with complex decision rules in order to deal with this increased “variety” (Sherman & Schultz, 1998).

Interestingly, this need to contend with complex relationships is aligned with what is known about expert decision makers. Experts use deeper, more elaborate problem categories than novices, and they seem to be faster and more agile in recognizing and acting on patterns of relationships rather than simple action triggers (e.g., Ericsson & Smith, 1991; Isenberg, 1984).

The neophyte problem solver applies learned rules to address surface elements of the problem regardless of what else is happening to influence the situation. (Ferry & Ross-Gordon, 1998)

In contrast, system thinking requires action predicated on pattern recognition.

Pattern recognition appears to be a common feature of expertise over a wide variety of human endeavors. Although the precise nature of the relevant patterns differs, pattern recognition is a vital skill among chess masters, bridge champions, medical diagnosticians, physicists, auditors, and computer programmers (Ericsson & Charness, 1994; Ericsson & Lehmann, 1996; Van Lehn, 1989). For example, master chess players develop an ability to detect patterns in the way each game is unfolding that seem to defy the normal limits of human information processing (Chase & Simon, 1973). Similarly, expert auditors (Bedard & Biggs, 1991) were able to detect fraud by virtue of their sensitivity to patterns. This highly specialized knack for pattern recognition is not a function of intelligence but rather of repeated exposure over an extended period to tasks about which frequent and accurate feedback is obtained (Proctor & Dutta, 1995). Once developed, pattern recognition enables the performer to perceive relevant situations rapidly and to call efficiently on the appropriate responses stored in memory.

Pattern recognition is also a frequently mentioned talent of senior executives (Isenberg, 1984).

Studies of effective leaders have repeatedly emphasized their ability to recognize patterns and relationships among seemingly disjointed events … Effective leaders are masters of sense making, of bringing order to the chaos that tends to surround them. They can sort relevant from irrelevant information. They know how to prevent themselves from being swamped by sensory and informational overloads. (Kets de Vries, 1989: 201)

Discussions with individuals engaging in pattern recognition can often be enigmatic. Recently, I had a conversation with a manager who was told by an assistant that members of his large clerical staff were complaining about the temperature in the office. “I’ll go down and have a talk with them; that should take care of it” was this manager’s rather strange solution. When I asked whether their perception of temperature needed verification, I was told that temperature had nothing to do with it. “I have been out of the office for several days,” he explained, “and the group usually gives me a reason to check in with them when I have been gone for a while.” Pattern recognition is not the only explanation for this action, but the manager himself clearly thought it was.

In part, “system[s] thinking is the ability to see connections between events, issues, and data points—to think of the whole rather than the parts” (McGill & Slocum, 1994: 19). Even though events are separated in time, the systems thinker looks for their common origins, themes, and correlates. Understood in a historical context, events may be seen as the product of a response to earlier events. For example, in the late 1980s, Lockheed Missiles and Space realized that the age distribution of its professional workforce was bimodal, that is, there were far fewer employees in their forties. Systems thinkers at the company realized that this issue was an unanticipated consequence of aggressive efforts to fill vacancies ten years before. Because of Reagan-era expansion in defense, work vacancies were filled by both military retirees in their forties and college graduates in their twenties. Considering the relationship between these two events brought about a new appreciation of the effects of staffing efforts on future human resource contingencies. Such an appreciation is consistent with the system observation that “today’s problems come from yesterday’s ‘solutions’” (Senge, 1990: 57).

Systems thinking also welcomes the association of contemporaneous issues. Different issues “hang together” in certain industries. For example, process modernization, resource consolidation, and operational flexibility are issues in many smoke-stack industries. High-technology concerns, in contrast, must wrestle simultaneously with problems of speed, attracting and maintaining technical talent, and exploiting the experience curve. Combinations of issues also follow strategic themes. For example, organizations undergoing rapid growth experience difficulty routinizing operations, absorbing new talent, and stimulating strategic thinking. Issues also cluster in different business functions and at different hierarchic echelons. Thus, the issues that tend to face a senior vice-president of marketing are different from those that must be faced by a senior vice-president of finance. Moreover, the issues that these individuals confront are different from those faced by junior officials in the same departments. Indeed, it is this clustering of issues throughout organizations that justifies requiring specific types of experience for applicants for associated organizational positions.

Data are the descriptive elements that make up issues and events and, again, systems thinking seeks patterns among them. Common sense (Fletcher, 1984; Schwieso, 1984) defines certain patterns of data. For example, conventional wisdom has it that managers ought to delegate tasks (one data point) only to trustworthy employees (another data point). Systems thinking progresses beyond such commonsensical patterns of data. For example, data indicative of processes that are often thought to have positive effects, like charismatic leadership, group cohesiveness, and participative decision making, are not universally heralded in the minds of system thinkers. Similarly, processes that appear to have no readily discernible relationship with positive outcomes are seen as elements of effective management behavior. Included here are such apparently questionable practices as making changes for the sake of making changes, implementing change in a top-down direction, and assigning an employee two or more immediate supervisors. Relating data in these ways

is similar to detective work. A piece of information here, another one there, and soon it is possible to draw some conclusions. The test of the conclusion’s validity is that the seemingly disparate facts have a logical connection to it. One challenge for a sage is to try to place some limits on the information search—gathering information can be time consuming and costly. Unfortunately, we only know the value of any single piece of information when we recognize a pattern. (Wells, 1997: 48)

Combining data into unique patterns often yields creative approaches to organizational problems. One example is from the university where I teach. Santa Clara University has one of the largest evening MBA programs in the country. Classes are conducted in the same building after undergraduate classes have ended late in the afternoon. As one might imagine, after a day’s worth of classes the classrooms are often pretty messy. Faculty and MBA students started to complain, and the dean immediately asked the building maintenance staff to schedule an additional cleaning of the classrooms after the day classes and before the evening. However, this “fix” was judged logistically and financially infeasible. With that, Dean Barry Posner came up with an original idea: replace the classroom trash receptacles, which were almost always overflowing by the beginning of each evening, with larger ones. This solution worked like a charm. The classrooms appeared cleaner and complaints diminished.

A similarly creative solution involving unique patterns is recounted by Ian Mitroff:

A manager of a large office building was experiencing many complaints about the poor service of the elevators in the building. When the complaints reached the point where he could not ignore them any longer, he called in a team of consulting engineers. You know what the consulting engineers are going to recommend—an engineering solution to the problem: more elevators, speeding up the elevators—technological, fix-it-up types of solutions. These solutions turned out to be prohibitively expensive. Fortunately, the manager in this case asked a psychologist to make a recommendation. His solution was much cheaper. By installing mirrors in the lobby, he played upon the vanity of people because they liked to look at themselves. By conceptualizing the problem in a different way, he cheaply and efficiently solved it by speeding up the perception of the passage of time, rather than by technologically speeding up the elevators. (Mitroff, 1978: 135-6)


Senge’s system archetypes define internal system relationships. They do not really describe what happens in the environment or at the system boundaries. One of the major contributions of a systems approach to organizations is a focus on boundary conditions rather than internal organizational functioning. Accordingly, systems thinkers include boundary conditions as part of the diagnostic process. Of special concern is the boundary that is not permeable.

Just what are boundaries, and what does it mean that they should best be “permeable”? Although alternative definitions of boundaries exist (Scott, 1998), the most useful conceives of a boundary as

a domain of interactions of a system with its environment in order to maintain the system as a system and to provide for its long-run survival. Accordingly, boundary work refers to the activities in which a system is engaged to deal with its environment, ranging from preserving resources in the face of competing demands to preventing environmental disruptions and collecting resources and support. (Yan & Louis, 1999)

In this sense, permeable boundaries are those that permit the free and unfiltered exchange of information and other resources between the system and its environment. This applies equally to the entire organization functioning within its environment, to the work unit functioning largely within its internal organizational environment, and to the individual as a system functioning in an environment made up of all of the influences external to the person.

This is in stark contrast to boundary conditions in closed systems, which tend to repel all environmental influences except those to which the system can readily and easily adapt.

The difference between closed and open systems is like that between a fish in a fishbowl and one swimming in the open sea. The fishbowl fish may be safe in its walled habitat, and it may breed for a cycle or two, but when conditions change, it will be completely unprepared to meet any variations in its environment. The fish in the sea may be subject to greater danger, more predators, but the choices and new possibilities available are virtually unlimited. Its ability to change and adapt as needed is also greater. (Sherman & Schultz, 1998: 4)


Even before the open systems view cast attention on the organization-environment boundary, there was a strong and enduring interest in adaptation as the overriding strategic posture. Initially, this took the form of environmental scanning by individuals made structurally responsible for it. Later, organizations became much more assertive with their environments by managing issues, taking collective action based on strategic alliances, and engaging in mergers, acquisitions, and spinoffs. Today, there has been so much boundary management that many organizations are left with blurred boundaries and international and financial entanglements. The argument is no longer about whether organizational boundaries should be permeable, but instead about whether boundaries themselves are meaningful demarcations.


The open systems view of organization was relatively uncontested at the organization level of analysis. Theorists and practitioners alike took to the different ways in which organizations could open up their boundaries to the environment. Parallel ideas focusing on the work unit as a system were slower to gain attention and popularity. After all, work groups were inside the organization, and existing models of work unit effectiveness concentrated completely on the internal state of the unit. To manage a work group effectively, managers were instructed to focus on such variables as roles, norms, status, and cohesion (e.g., Hackman, 1983).

The seminal research by Deborah Ancona and David Caldwell (1988, 1990, 1992a, 1992b) demonstrated that boundary conditions between work units and their environments better predicted performance than did internal states of the work units themselves. Specifically, work teams charged to innovate in high-technology organizations were most effective if they had permeable boundaries with the rest of the organization. Their internal states had much less to do with their success. In fact, the highest-performing teams had levels of cohesiveness that were actually lower than those of teams that were not as innovative.

Clearly, this does not imply that all work units should have permeable boundaries at all times. Some types of work units can and should be insulated from sources of uncertainty in their organizational environments. Work teams that require stability and predictability in order to take advantage of production technologies are best buffered with strategies of protective boundary management (Tushman, 1979). In addition, teams may benefit by being pointedly open to political and coordination realities early in their development (Ancona & Caldwell, 1992b; Schneider, 1991).

Two reasons make the relation between a team’s boundary and its performance especially critical in systems thinking. First, this is a commonly overlooked variable in team performance. For many, measures of a team’s internal wellbeing are intuitively sufficient as indices of a work unit’s effectiveness. Second, contemporary developments in work are placing much more pressure on boundaries of this sort. Yan and Louis (1999: 26) have concluded that “restructuring for de-bureaucratization, extensive use of teams, shrinking organizational slack, increased workforce diversity, and the adoption of advanced information technology” contribute to the increased importance of work unit boundaries.


These same factors are creating significant changes in the boundaries between job holders and their immediate environment. In the past, individuals who met or exceeded their job objectives were probably secure. “Do your job and keep your head down, and you’ll be okay,” was the mantra. General Electric CEO Jack Welch was among the first of many to insist that job holders had to focus outward from their assigned responsibilities (cf. Kelley & Caplan, 1993). As a harbinger of corporate America, Welch declared in several speeches to employees that GE would become a boundaryless culture.

At the core of a boundaryless company, he told them, are people who act without regard for status or functional loyalty and who look for ideas from anywhere—including from inside the company, from customers, or from suppliers. (Tichy & Cohen, 1997: 39)

This may not sound like a particularly profound development until one recognizes just what significance structural boundaries have in simplifying matters of authority, tasks, politics, and even identity.

In the traditional company, boundaries were “hard-wired” into the very structure of the organization … [They] function like markers on a map. By making clear who reported to whom and who was responsible for what, boundaries oriented and coordinated individual behavior and harnessed it to the purposes of the company as a whole. (Hirschhorn & Gilmore, 1992: 105)

With the end of boundaries, differences in authority, talent, and perspective still exist, but they are not compartmentalized into structural strata and stovepipes. Everyone must now figure out what kind of roles they need to play and what kind of relationships they need to maintain in order to use these differences effectively in their jobs.

At GE and other supposedly boundaryless organizations, it is no longer enough to perform one’s job assignment well.

In the old days, GE may have fired a few unpleasant people who didn’t meet their performance goals, but nice guys who didn’t deliver and complete jerks who did deliver were welcome to stay. In the new GE, Welch declared, performance and behavior would both count. People who embraced boundaryless-ness but couldn’t quite deliver would be helped along and given second, maybe even third chances, but stellar performers who insisted on keeping up the old walls and floors would be dismissed. And he backed up his statement by personally getting rid of some boundaryful people at the top of the company. Finally, he held others responsible for doing the same thing in their parts of the company. (Tichy & Cohen, 1997: 39)

These developments at GE should not be dismissed as a fad or as relevant to only one company. As early as 1967, there was a broad call for professionals who were capable of working effectively across departmental boundaries (Lawrence & Lorsch, 1967). Additionally, many companies besides GE are striving to develop a workforce that is versatile along these lines (Pfeffer, 1998). While not all are as intolerant as Welch of “boundaryful” people, job holders are advised never to use the excuses, “I can’t do that, it’s not my job,” or “You’ll have to talk to my boss if you want me to do that.”

Earlier, job holders might have been content being a great engineer, a super-competent cost accountant, or a charismatic foreman. Today, job holders need organizational as well as job skills. They must be able to influence without authority, establish priorities without managerial guidance, and change problem-solving repertoires on a dime.

Take the simple example of an engineer on an interfunctional product design team. To be an effective participant on the team, the engineer must play a bewildering variety of roles. Sometimes she acts as a technical specialist to assess the integrity of the team’s product design; at other times she acts as a representative of the engineering department to make sure that engineering does not get saddled with too much responsibility while receiving too few resources; then again, in other situations she may act as a loyal team member to champion the team’s work with her engineering colleagues. (Hirschhorn & Gilmore, 1992: 105-6)

In addition to this type of flexibility, job holders also need to be conversant with the strategic realities facing the enterprise. In short, they need to be business literate.

Members of an organization need to understand the transitions occurring within an industry and how the industry affects what the organization is currently doing. Understanding externalities allows members to accept the rationale behind organizational transitions. Too often organization members see what transitions are expected of them without understanding why. (Ulrich, 1991: 148)

Part of the diagnosis of a system state, then, should focus on the boundary conditions between an organization and its environment, between work units and their environment, and between job holders and their environment. It is not enough that organizations are internally effective, that work units are cohesive, or that job holders are good performers. If boundaries are not permeable, the system is potentially defective.


Effectively diagnosing system states requires all the resources that one can muster. Fortunately, one need not tackle this job alone. Although many of the principles and examples in this article have been framed as though an individual were doing the systems thinking, there is no reason why these same processes cannot be carried out by a work group or team. There is nothing inherently autocratic or heroic about systems thinking (Bradford & Cohen, 1998). Teams can “think” (i.e., process information) critically, recognize patterns, and account for system boundaries.

This article deals with a form of problem solving composed of a diverse array of components, from critical thinking to boundary analysis. It is profound not because it is elegant or sophisticated; rather, its profundity lies in being largely counter-intuitive. It does not naturally occur to most people to think in terms of symptoms, assumptions, patterns, and boundaries. While we are all capable of thinking critically and of considering how our team interfaces with others, these things rarely occur to us “in the heat of everyday battle.” The wisdom of systems thinking is not that of a Sage with a capital S. Rather, it is unconventional wisdom.