Many discussions of systems thinking, complexity1 and the philosophy of science take as their starting point the fact that everything in the world is connected in a direct or indirect way to everything else. Therefore, the scientific observer is an integral part of the world s/he observes, not separate from it (e.g., von Bertalanffy, 1968; Bateson, 1972; Maturana & Varela, 1992). Additionally, the focus is often on the impossibility of both full understanding of phenomena and infallible prediction, because the complexities of the world slip the grasp of the human observer (e.g., Casti & Karlqvist, 1996).2 The significance of this is that it undermines some of the philosophical ideas that have traditionally been invoked in support of science. For instance, it becomes possible to question the reliance of the philosophy of science on the concept of independent observation.
Independent observation is observation detached from the values and idiosyncrasies of the observer. This does not mean observation without the presence of an observer: it simply means observation that is judged by scientists to be independent of the peculiarities of any particular individual. In other words, an independent observation is one that people in a given scientific community agree would be the same regardless of who is making it. It is only if we can say that independent observation (in the above sense) has been achieved that we can make a satisfactory claim to objectivity (Popper, 1976).3 Clearly, if we want observation to be independent in this manner, then intervention by an observer into the observed has to be prevented—except, that is, intervention to ensure the purity of observation, such as when a scientist constructs an experiment. Any such intervention could create a change, thereby making it possible to say that the observation is a result of the intervention rather than the intrinsic characteristics of the phenomenon being observed. The fact that systems and complexity theories say that there are inevitably direct and/or indirect links between the observer and observed brings into question the possibility of observation free of intervention.4 The interest in complexity now being shown across the disciplines has placed these problems at the forefront of the agenda of the philosophy of science once again.
These are important issues, but it seems to me that the debate, as it has been framed in the two paragraphs above, already makes a very significant (and arguably questionable) assumption—that scientists should indeed be concerned primarily with observation. In this paper I want to enter the debate from an entirely different angle: by discussing the methodology of intervention. To give an initial definition of intervention, it simply means ‘purposeful action by an agent5 to create change’.6 This contrasts starkly with the conventional canons of the philosophy of science: scientists have traditionally been exhorted to avoid intervention for fear of corrupting the purity of observation (except the kind of intervention that preserves this purity, such as when an experiment is set up).
I will start the main body of this paper by exploring how the concept of intervention has been used by others (but not necessarily in the same way I use it), focusing on the supposed opposition between intervention and observation. However, after comparing these two concepts, I will seek to show that the distinction between observation and intervention is not as simple as it might at first appear, especially given the problems (mentioned above) with the idea of independent observation. Indeed, I will argue that observation should be viewed as just one type of intervention. As we shall see, this has profound consequences for understanding the boundaries between ‘science’ (which has traditionally had observation as its focus) and other activities that are more obviously concerned with intervention (e.g., policy-making, personal and/or group decision-making, management and community development). I will argue that scientific methods for structuring observation should be placed alongside a whole host of other methods for exploring values, reflecting on subjective understandings, planning future activities, etc. Different methods (including scientific methods) can be useful for different purposes, and can be interrelated as part of intervention practice.
Having made the case for science as intervention, I will then return to the theme of systems thinking and complexity in order to argue specifically for systemic intervention. A broad-based methodology for systemic intervention will be outlined, and references will be provided to more detailed work published elsewhere. The paper will then end with some reflections on the wider social implications of this methodology.
However, let us begin by exploring the concepts of ‘observation’ and ‘intervention’ in more detail.
Observation versus Intervention
Many writers contrast observation and intervention: it appears that both scientists (who champion observation) and action researchers7 (who champion intervention) have an interest in maintaining this pair of concepts in opposition to one another. Let us start with the views of the scientific camp.
Observation as the Basis of Science
While many philosophers of science have discussed observation, Popper (1959, 1972) is arguably the best known. Popper claims that, to be worthy of scientific attention, “[an] event must be an ‘observable’ event; that is to say, basic statements must be testable, inter-subjectively, by ‘observation’” (1959: 102). Hence, traditional science seeks to place all statements that cannot be tested by observation outside its remit.
Popper (1959) also proposes the idea that method is crucial: methods need to be chosen that enable independent observation. Hence the emphasis in most traditional scientific methodologies on quantitative comparisons between ‘experimental’ and ‘control’ conditions (Wright et al., 1970). However, while high-quality methods are important, they are not enough on their own: the final guardian of independent observation is the scientific community, which is able to re-test findings and subject claims to critical scrutiny (Popper, 1976).
Arguably, one of the most important aspects of controlling observation, as far as many scientists are concerned, is the need to prevent intervention. The observer should not influence the observed other than, in an experiment, by establishing the required difference between the experimental and control conditions, otherwise the results of the observation could be due to the activities of the scientist rather than the variable(s) under investigation.
Intervention as the Basis of Action Research
In marked contrast with Popperian science, action research is concerned primarily with intervention8 and not observation: the researcher engages with what is being researched, seeking to bring about positively valued change. The birth of action research is widely attributed to Lewin (1946, 1947, 1948), who argues that the focus of the philosophy of science on independent observation creates a divorce of the scientific method (especially as it is used in the social sciences) from social practice. He stresses that science should be harnessed for the benefit of human society, and this requires a very different set of philosophical and methodological ideas from those traditionally used. While I would not wish to confine science to a narrow definition of applied social research, Lewin’s views are worth exploring as they provide a useful starting point for developing a broader understanding of science as intervention.
To appreciate why action research emerged in the mid-twentieth century, and quickly gained a great deal of popularity amongst many people (especially those working outside academia), it is necessary to understand the orthodoxy that was being propounded at the time. Popper had been writing about the importance of experiment and observation since the 1930s, and his work built on previous philosophies of science that also placed independent observation at the centre of scientific practice.9 While there were strong debates about the extent to which human knowledge is fallible, the orthodox view was that the need for independent observation itself was not in question. For many people, it began to appear that the reasons or purposes for undertaking scientific research were secondary to the robustness of the methods used.10 Some scientists advocated a radical denial of purpose, saying that all organisms, including human beings, are deterministic ‘learning machines’ (e.g., Skinner, 1971; Maze, 1983). Even where the existence of purposes was accepted, such purposes could not be considered ‘scientific’ in the same sense as observations; they were generally omitted from reports of experimental practice, and could often only be deduced by reading between the lines of hypotheses. In this way, the purposes and debates that made the hypotheses meaningful were largely hidden from view.
It was in this atmosphere that Lewin (1946, 1947, 1948) mounted a strong critique of ‘pure’ science in favour of action research. Lewin’s argument is that the institutions of science invest massive resources into research that has largely become divorced from the goals of meeting human need and satisfying human desires (that is, the desires of those outside the scientific community—the latter tends to value knowledge for its own sake). In Lewin’s view, it is generally a matter of accident whether this research is relevant to people working in industrial and welfare organizations. Of course, there are the applied natural sciences, like medicine, but there is nothing truly comparable for the worlds of industry and human welfare, where it is much more difficult to control observations.
Essentially, Lewin (1946) advocates the harnessing of science in the service of intervention rather than observation. That is, science should be undertaken in organizations for social benefit. He believes that scientists have a choice: they can either conduct research for the sake of pure curiosity, or help themselves and others improve the social conditions that surround them. When a problem is encountered in an organization, research may be undertaken to help define a way forward. However, social purposes should not be subordinated to methodological purity: in Lewin’s view, if research is being conducted in support of action, it makes little sense to subvert the purposes that guide that action in the name of scientific rigour. This means, for Lewin, adapting the scientific method (when necessary) to make it more meaningful in social situations: instead of testing hypotheses, scientists can identify questions that need answering. Likewise, if it is impossible to set up perfectly controlled conditions, they should not call research ‘invalid’, but should still generate data in a manner that supports decision-making—even if strongly scientific conclusions cannot be reached. After all, organizational decisions will have to be taken anyway, and it is preferable to take them on the basis of imperfect data than on the basis of no data at all.
Of course, embedding scientific practice in social situations, and adapting it in the service of intervention, will affect the degree of independent observation that can be achieved. Far from keeping one’s distance from the observed, in Lewin’s (1946) model of action research the observer is encouraged to find a means to eliminate socially undesirable phenomena and promote desirable ones. What counts as desirable or undesirable obviously needs to be defined by participants in the local situation, which is why Lewin (1952) produced his “field theory”—a “field” is a set of phenomena that are seen as directly interacting with an object (person, group or organization) of concern. The boundaries of the “field” demarcate what is and is not relevant in an analysis. We see that, in Lewin’s perspective, observation is not independent of the values of the observer (these values determine what initial question is asked), but is nevertheless ‘factual’ in the sense that a realist ontology is assumed—so observations reflect the real world (albeit imperfectly through our fallible perceptions).11 Also, because of the context of action which takes place over time, observations tend to be most meaningful as a sequence which constitutes feedback to actor(s) who are required to make judgements about the success, or otherwise, of their actions.
It appears that, while Lewin (1946, 1948) is primarily concerned with intervention, he does not entirely abandon observation—but it is harnessed into the service of the former. Also, where controlled observation is impossible, other means of supporting intervention through research are explored. Therefore, the principle of independent observation is not abandoned, but it is subordinated to the principle of social utility.
This work has since been developed by a variety of different authors, both in the action research and other communities. One of the most notable examples is Seidman (1988) who, following Dewey (1946) as well as Lewin (1947), advocates a much stronger opposition between observation and intervention. Instead of arguing that science should be harnessed into the cause of intervention, Seidman suggests that the two concepts are mutually exclusive because they are differentiated by the involvement of action. Science requires the exclusion of action on the grounds that changing the phenomenon of interest corrupts the purity of observation, while intervention is founded upon action (also see Reason & Heron, 1995).
Summary of the Distinction Between Observation and Intervention
At this point I have made a clear distinction between observation (as used in science) and intervention, the former being about seeing things in a manner that is not ‘contaminated’ by the actions of the observer, and the latter being about the actions of agents to promote change. However, it should already be apparent from the discussion of Lewin’s (1946, 1948) work (above) that observation and intervention do not have to be regarded as opposites (although they often are)—observation can be undertaken in the service of intervention.
Of course, a counter-argument could be that de-prioritizing the principle of independent observation, which is implied in this way of thinking, simply undermines science. If science does not seek to preserve the independence of the observer through the design of methods, and through scrutiny by scientific communities, it may cease to be useful for both pure and applied studies. Worse, if we allow the purposes of the observer to be discussed as an integral aspect of scientific practice, rather than accepting them as inevitable determinants of the focus of inquiry that are essentially non-scientific, we could open the door to the domination of science by political ideology (Popper, 1966). However, the validity of these counter-arguments rests on the assumption that independent observation is actually possible—or at least, if it is seen as an ideal (rather than being actually achievable), that we can know how near to, or how far from, the ideal we are. In my view the actual achievement of independent observation is impossible, and judgements about distance from an ideal of independent observation are inevitably uncertain. My reasons are detailed below.
The Impossibility of Independent Observation
Let us start the analysis with the insight, common to both systems thinkers and complexity theorists, that everything in the universe is interconnected. Such a perspective precludes the possibility that an observer can be truly independent of the observed. In von Bertalanffy’s (1968) general system theory, and in the works of other writers12, the universe is made up of hierarchies and/or networks of open systems with semi-permeable boundaries: all systems interact with their environments, and there is no such thing as a truly autonomous entity. This means that observers are part of the reality they observe: they cannot observe from outside the systems of mutual causality that they participate in. Although links between the observer and the observed may be indirect, they do exist, and therefore wholly independent observation is impossible.
Of course, it might be argued that some interconnections between the observer and observed are more significant than others. Vickers (1972) compares observations of our solar system with observations of social systems in which the scientist is a participant. In the former case, he claims that the interconnections between the observer and observed are relatively trivial (at least as far as scientific observation is concerned), while the interconnections between human beings in social systems are strongly implicated in social-scientific observations. Decisions to undertake observations are made taking account of these interconnections, and observations of human behaviour can feed back to transform what is observed. For this reason, Vickers makes the claim that natural and human systems are fundamentally different.
In one sense it would be easy to accept a distinction between ‘human’ and ‘natural’ systems, as it neatly reflects the familiar division between the ‘social’ and ‘biophysical’ sciences. This would mean that we only have to be sceptical about independent observation in relation to social science. However, I have two reasons for refusing to accept this distinction:
First, as someone with an interest in systems thinking, which prioritizes the ideal of transdisciplinary inquiry, I find it difficult to conform to ‘arbitrary’ divisions between scientific disciplines.13 It seems to me that human systems can legitimately be studied as natural phenomena: there is, for instance, a great deal of systems research that views families, organizations, communities and societies as ‘living systems’ (e.g., Miller, 1978).14 Conversely, natural systems can quite reasonably be studied as social constructs. After all, if we are critical of naïve positivism, the most we can claim is that we have access to our knowledge of natural systems, not the systems themselves (see Darier, 1999, for a particularly sophisticated set of analyses of natural systems as social constructs).15
My second reason for refusing the distinction between ‘natural’ and ‘social’ systems is linked to this. There is strong evidence that observers construct observations of natural systems (as well as social ones) in ways that are not the same for all people. This is an insight that has been surfaced in different ways across a range of scientific disciplines. For example, in physics, Einstein (1934) claims that our inability to know the world “as it really is” means that human “speculation” has to be an integral part of physics. This idea took root in physics through the development of quantum theory, which challenges the conventional separation of the observer from the observed by empirically demonstrating that the former cannot help but influence the latter (Bohr, 1963). Indeed, quantum theory proposes the existence of sub-atomic particles that are not directly observable at all, so these propositions must be based on something in addition to empirical evidence—metaphysics (the non-empirical realm of ideas). Thus, the scientific orthodoxy identified by Einstein (1934) that “the belief in an external world independent of the perceiving subject is the basis of all natural science” was thrown into doubt. The worlds of physical and metaphysical reality came to be seen as inseparable (Prigogine, 1989).
Similar ideas have been explored in biology too. Like the quantum theorists, Northrop (1967) focuses on the inevitability of metaphysics. If biological theories are about the identification of patterns in empirical data, then an understanding of metaphysics reveals that human beings, in looking for patterns, must employ ideas that have their origins outside the empirical data itself. Likewise, in psychology there have been theorists who have stood out against the philosophy of independent observation (e.g., Kelly, 1955; Weimer, 1979; Hollway, 1989), as there have been in sociology (e.g., Brown, 1977), systems thinking (e.g., Maturana, 1988a,b; Ulrich, 1983; Alrøe, 2000) and complexity science (e.g., Fitzgerald, 1999).
Arguably, however, the most sophisticated arguments have been constructed by philosophers of science and technology. For example, Quine (1969) shifts the focus from observation per se to observation sentences: i.e., sentences that refer to something that directly stimulates the senses, and which we may reasonably assume that all competent users of language would agree on; Quine (1990: 4) gives “it’s raining” as an example.16 Quine and Ullian (1978: 28) say that “observation sentences are the bottom edge of language, where it touches experience”. Quine concentrates on observation sentences, partly because the phenomenon of observation itself is “awkward to analyze” (Quine, 1990: 2), but mainly because any observation is only meaningful to science if it is communicated. Scientific communication, if it is to be based on observation, involves reference in sentences to what is stimulating the senses. Having justified the focus on observation sentences, Quine (1990) then goes on to discuss how individuals learn basic associations between sentences and sensory stimuli, mostly in early childhood. He argues that it is only in this moment of individual learning that an observation sentence is free of any theoretical content. As soon as anyone reflects on the meaning of an observation sentence (all scientific observations have a meaning in context, if only the very limited context of an experiment), the words can only be understood in relation to a network of other concepts which have significance beyond the direct context of the observation. In other words, in the scientific context, even the most basic observation sentence cannot be ‘pure’—it must be interpreted with reference to theory.
Reflecting on the work of Quine (1969), Hesse (1974) goes further to question whether, even in the moment of learning an observation sentence as a child, it is possible for that learning to be untainted by theory. The issue for her is that, when a parent (or another person) introduces a word to a child in the context of the child’s sensory experience, that word already has a meaning that is dependent on its relationship with other words. Therefore, she ends up questioning the whole idea of distinguishing between a ‘language of observation’ and a ‘language of theory’, saying that all observation is inevitably theory-laden.
Hesse (1961) also criticizes the whole project of “operationalism” in science. She defines operationalism as the reduction of scientific explanation to operational statements (i.e., statements about how the scientist is controlling observation, and what is observed as a consequence), in order to eliminate supposedly unverifiable (metaphysical) theory. She has several compelling reasons for challenging operationalism, among which is the insight that, without theory, scientists would have to fundamentally change their explanations every time a novel observation is made. Citing Waismann (1952), she argues that theories must have “a fringe of meaning not defined by observation” because “it is precisely the function of theories to assimilate… new observations without the meanings of the theories being radically altered” (Hesse, 1961: 8). Therefore, for purely practical reasons, the idea of theory-free science based on independent observation is untenable.
This challenge to independent observation is extended by Cartwright (1999). She advances the radical proposition that the laws of physics proposed by scientists are not necessarily universal, but could very well be specific to the types of context in which they are generated. Essentially, her argument is that laws (theories of supposedly universal applicability) are made up of abstract concepts that are only imbued with meaning when they are related to particular models, and these models are almost always tested in laboratory conditions (even in the field scientists generally exercise experimental control, making the field a pseudo-laboratory). Observations therefore take place under carefully controlled conditions, which are not representative of the conditions that generally obtain in the universe. Cartwright (1999) therefore argues that our observations, models and laws form a self-supporting triangle that only applies in the aspects of the world that human beings construct. It is consequently not legitimate to claim universality (the status of law) for our theories, as we cannot know whether they apply beyond the world of our constructions. Here, observation is not only made relative to theory, but also to the constructive activity of human beings.
This is likewise important to Latour (1991). He points to the increasing number of instances where the issues addressed by scientists have both technical and social dimensions. While biophysical scientists often try to keep their science ‘pure’ by focusing on the technical aspects, the social inevitably intrudes. A good example from my own research is forensic DNA analysis: while the methods that scientists use to judge the probability of a crime sample coming from an accused person are based in the technical science of genetics, their application has a social context that inevitably raises ethical issues. Examples of ethical issues include what criteria to use to decide whose DNA to keep on file; whether racial typing is legitimate; and whether it is acceptable to use genetic data for purposes other than those it was originally collected for (Baker et al., 2006). It is therefore apparent that the explicitly technical framing of the science (excluding the social) is complicit in the use of that science for social ends that may be open to question. Once we acknowledge that scientists in this position have a particular frame of reference with social consequences, which can be contrasted with alternative frames, then the unavoidable conclusion is that their observations are given meaning by (i.e., are not independent of) the contexts of human action in which they are communicated.
Inevitably, this lightning review of the philosophy of science and other fields of study has ignored the differences between the opinions of the cited authors. Rather, I have focused on what they have in common: a critical attitude to the idea that it is possible to have genuinely independent observation. The essence of the critique of independent observation is that observations take place in particular contexts, and people bring different knowledge resources into these, so there can never be guarantees that they will see things in the same ways.
Of course, it could be argued (following Popper, 1959) that independent observation is an ideal, not something that is actually achievable. However, aiming towards an ideal suggests that, given two observations, we can reliably judge which is closer to the ideal. But even this is uncertain. If Popper is right to say that we should never take objectivity for granted because future work by people in the scientific community could reveal something that suggests the intrusion of subjectivity, then the distance between any actual observation and the ideal of independent observation cannot, in principle, be determined. Something that appears, on one day, to be as close as we can get to an independent observation might be seen, the next day, as very distant from it. All we have, then, are temporary judgements about independent observation by members of scientific communities, and these judgements could (in principle) be undermined at any time.
We are now left with the question, if truly independent observation is impossible, and the ideal of independent observation is problematic, where does this leave science? My argument, to be developed below, is that the construction of scientific observation should be regarded as a form, but by no means the only valid or useful form, of intervention.
Observation as Intervention
Scientific observation is not just any observation, but a moment in which the situation is constructed to facilitate observation under controlled conditions (Cartwright, 1999). There are two levels at which this kind of observation is dependent on the involvement of particular agents: first, in actually undertaking the observation; and second, at a ‘higher’ level, in establishing the goals and parameters of the observation. Below, I discuss each of these levels and then describe how Popper (1959, 1976) addresses them. While Popper does account for both forms of dependency on agents when discussing how to define ‘objectivity’, I nevertheless argue that his view that science should only be concerned with pursuing the ideal of truth17, not exploring values, sits uncomfortably with his acceptance of this dependency. This then opens the door for us to recast observation as an aspect of intervention. Let us start, then, by looking at the two levels at which agents are implicated in constructing observations.
First, scientific observation is dependent on the involvement of particular agents because interpretation is integral to the act of observation itself. What the scientist is able to see will in part be determined by his or her expectations, which in turn will be coloured by the language s/he uses and the values flowing into the act of observation. To illustrate, in experiments in which people are asked to look into a tachistoscope (a machine that feeds one picture into one eye and a different picture into the other), some interesting effects occur. If people are fed two faces, one upside-down and the other the right way up, they invariably only see the one that is the right way up (Engel, 1956; Hastorf & Myro, 1959). In a similar experiment, Bagby (1957) took U.S. and Mexican citizens and fed them the same two images: one a North American landscape and the other a Mexican one. In almost every case, people only saw the one that was culturally familiar to them. This indicates that the brain, linked to its environment, is actively constructing the observation, not simply reflecting what enters the eye. Observation takes place using conceptual and emotional frameworks of interpretation (Maturana, 1988a,b; Maturana & Varela, 1992).
Second, at a ‘higher’ level, agents are also implicated in constructing observations when they set the goals and parameters for them—when they ask, what exactly should be observed? This is a moral question as much as a practical one, as scientists can choose what to observe. There is a value judgement, whether consciously recognised or not, involved in every decision to study one thing rather than another (e.g., Churchman, 1979; Ulrich, 1983; Alrøe, 2000; Midgley, 2000; Romm, 2001).
Popper’s (1959) answer to the first of the above issues, that interpretation is integral to observation, is to stress the importance of the scientific community in determining what counts as objective. Basically, the more people who scrutinize the findings of scientists, the more likely it will be that idiosyncratic interpretations will be identified. However, it is worth pointing out that a consensus among scientists is not the same thing as true objectivity because even a consensus in a professional community can be the product of cultural construction (Foucault, 1980)—so objectivity remains an ideal, not something that we can know we have achieved. Popper’s answer to the second issue, that value judgements are involved in setting the goals and parameters of observations, is simply to accept that this is the case. Indeed, the positivist idea that science can be totally value-free has been largely discredited (Resnik, 1998; ESRC Global Environmental Change Programme, 1999). Nevertheless, Popper (1966) still argues that scientists should focus on pursuing the ideal of truth, leaving explorations of values to others. As he sees it, democracy itself is dependent on keeping a strict separation between matters of fact and value: if science comes under the influence of ideology then the pursuit of truth may be severely compromised. Popper argues that, in a society in which science is constrained in this way, informed democratic decision making is impossible.
However, from a systems point of view (e.g., Ulrich, 1983)—and also from a critical theory viewpoint (e.g., Habermas, 1972, 1984a,b)—this strong separation of moral decision making from the act of observation cannot be sustained. Because the two interact, in principle they should both be available for critical analysis. I suggest that, if we acknowledge that agents are involved in interpreting observations, and we accept that value judgements guide what is investigated, we cannot legitimately follow Popper’s prescriptive path which places the exploration of values outside the remit of science.18
Of course, in practical situations, boundaries have to be drawn around the inquiry process, but it seems to me that there can be no general case for excluding value judgements from inquiry—only local cases for momentary exclusions while observations are being undertaken. In other words, moral inquiry can be suspended temporarily while an act of observation is carried out, simply because the agent cannot do two things at once, and it can be resumed once again in the light of the observation and previous moral inquiries.19 So, in many different ways we have seen that agents are implicated in constructing observations: through their direct and indirect interactions with the observed; through their interpretations of sense data; through their selection of concepts to guide observation; and by making value judgements about what to observe. It should be clear from this that observation, as a purposeful act, can only be isolated from its context by artificially ignoring what flows into it and the consequences it gives rise to. In my view, it is hard to justify placing this artificial boundary around it—especially as the choice of what to observe and how to observe it has unavoidable moral consequences for action (which may sometimes be anticipated and sometimes not). Given this state of affairs, I argue that it is more appropriate for us to take account of the construction of observation than to turn our backs on it. Once the moral, subjective, linguistic and other influences on observation are opened to critical reflection, scientific observation has to be seen as a form of intervention: observation is undertaken purposefully, by an agent, to create change in the knowledge and/or practice of a community of people. It is this purposeful action of an agent that is the defining feature of intervention.
Of course, methods of scientific observation provide a set of techniques for intervention that can be seen to have significant uses and limitations. These methods have been given pride of place in the last three hundred years of Western intellectual history, largely because of the focus of philosophers of science on maintaining the shibboleth of independent observation, thereby denigrating methods of intervention. As I believe I have demonstrated that scientific observation should itself be viewed as a form of intervention, I argue that scientists should welcome a whole host of other methods that are more self-consciously concerned with action for change. Of course, there are many communities of writers, including several with an interest in systems thinking and complexity, that have been developing methodologies and methods for intervention despite the indifference, or even the disapproval, of the scientific establishment. It is mainly to this work that I refer in other writings (e.g., Midgley, 2000) that stress the value of methodological pluralism: the use of a wide variety of intervention methods to pursue a correspondingly wide variety of purposes.
I have defined intervention in terms of purposeful action by an agent to create change, and have argued that scientific methods can be used as part of intervention practice. However, this still does not deal with all of the issues thrown up by systems thinking and complexity science. If we were to conceive of intervention as flawlessly pre-planned change based on accurate predictions of the consequences of action, we would be assuming the mechanistic vision of the universe that systems thinking and complexity science seek to challenge. Mechanism is the view that everything can be observed and described as if it is a machine—a predictable, functional, inherently understandable object (Pepper, 1942). According to this view, all the things in the world (including human beings, organizations and societies) are like clockwork toys. If we can figure out how they work, then we will be able to change them according to our will, within the limits of the natural laws that they conform to. As systems thinking and complexity science both fundamentally undermine this mechanistic world-view by highlighting issues of uncertainty and non-linear interaction (see Prigogine, 1987, and Flood and Carson, 1993, for some introductory writings), there is a need to further clarify our understanding of ‘intervention’ to avoid the pernicious interpretation of it mentioned above. I therefore wish to propose that we should think in terms of systemic intervention. The following account is heavily abbreviated, and more information can be found in Midgley (2000).
I argue that the boundary concept lies at the heart of systems thinking (and Cilliers, 1998, makes a similar claim in relation to complexity science). Because everything in the universe is directly or indirectly connected with everything else, where the boundaries are placed in any analysis becomes crucial. The ‘cut-off point’ for analysis will make some things visible and others invisible. Systems thinkers pursue the ideal of comprehensiveness, but know that this is unattainable. However, reflection on the boundaries of knowledge at least enables us to consider options for inclusion, exclusion and marginalization. It also reminds us that all understandings are incomplete: there is a need for humility and openness to the perspectives of others (Churchman, 1979).
If intervention is purposeful action by an agent to create change, then systemic intervention is purposeful action by an agent to create change in relation to reflection on boundaries. This statement embodies the core concern of the methodology of systemic intervention that I will be introducing over the coming pages.
Towards a Methodology for Systemic Intervention
At the bare minimum, I suggest that an adequate methodology for systemic intervention should be explicit about three things: boundary critique; theoretical and methodological pluralism; and action for improvement. These are discussed below.
Boundary critique
There is a need for agents to reflect critically upon, and make choices between, boundaries. Boundaries define both what issues are to be included, excluded or marginalized in analyses, and who is to be consulted or involved (the two are obviously linked, as different agents will have different concerns). Because of the ‘who’ question, issues of power and participation are unavoidable in systemic intervention (Churchman, 1979; Ulrich, 1983; Brown, 1996; Midgley, 1997a, 2000; Vega-Romero, 1999; Córdoba & Midgley, 2003, 2006).
An important aspect of my understanding of boundaries is that boundary judgements are intimately linked with value judgements (Ulrich, 1983): the values adopted in any intervention will direct the drawing of boundaries that define the knowledge accepted as pertinent. Similarly, the inevitable process of drawing boundaries constrains the ethical stance taken and the values pursued. Making decisions about boundaries is therefore an ethical business. It is also important to note that, regardless of how detailed the process of critical reflection on values and boundaries actually is, there may still be surprises as things excluded from view interact with whatever is the focus of attention. While boundary critique cannot altogether eliminate surprises, it can help minimise them. Also, because things change over time, boundary judgements need to be regularly reviewed as part of a learning process (Ulrich, 1983; Brown & Packham, 1999).20
Of course, it is only possible for agents to make boundary judgements through the use of (implicit or explicit) theories and methods, and reflection leading to the making of boundary judgements is an activity (it is intervention to shape the agent’s understanding, which may in turn influence future action). Critical reflection upon boundary judgements is vital because it is only by way of boundary critique that the ethical consequences of different possible actions (and the ways of seeing they are based upon) can be subject to analysis.21
Theoretical and methodological pluralism
The second aspect of a methodology for systemic intervention that should be made explicit is the need for agents to make choices between theories and methods to guide action, which requires a focus on theoretical and methodological pluralism. These two forms of pluralism have meaning in terms of the focus on boundary judgements mentioned above: if understandings can be bounded in many different ways, then each of these boundaries may suggest the use of a different theory (and conversely, each theory implies particular boundary judgements). Methodological pluralism then also becomes meaningful because methods and methodologies embody different theoretical assumptions: choices between boundaries and theories suggest which methods might be most appropriate (and conversely, choices between methods imply particular theoretical and boundary judgements).
Choice between theories and methods is also a form of action, in the same way as reflection on, and choice between, boundary judgements can be seen as action: it is intervention in the present to shape a strategy for future intervention.22
Action for improvement
Finally, an adequate methodology for systemic intervention should be explicit about taking action for improvement—action for the better, which cannot of course be defined in an absolutely objective manner. ‘Improvement’ needs to be understood temporarily and locally: as different agents may use different boundary judgements, what looks like an improvement through one pair of eyes may look like the very opposite through another (Churchman, 1970).23 Also, even if there is widespread agreement between all those directly affected by an intervention that it constitutes an improvement, this agreement may not stretch to future generations. The temporary nature of all improvements makes the concept of sustainable improvement particularly important: while even sustainable improvements cannot last forever, gearing improvement to long-term stability is essential if future generations are to be accounted for. We can say that an improvement has been made when a desired consequence has been realized through intervention. In contrast, a sustainable improvement has been achieved when it seems likely to last into the indefinite future without the appearance of undesired consequences (or a redefinition of the original consequences as undesirable). Of course, whether an improvement is sustainable or not is a matter of judgement (and judgements are inevitably temporary and local, even if they are widely accepted): the limitations of human understanding mean that what may appear to be sustainable at one moment may seem less so at the next. Therefore, in aiming for sustainable improvement, agents involved in systemic intervention need to periodically review the criteria of sustainability that they are using.
The notion of improvement is important because agents are restricted in the number of interventions they can undertake, and must therefore make decisions about what they should and should not do. The extent to which various interventions look like they may or may not bring about improvements, or may bring about improvements that have greater or lesser priority, is a useful criterion for making these decisions.
Of course, I should say why I have used the term ‘improvement’ rather than, say, the creation of beauty, pleasure, knowledge, understanding, emancipation or spiritual enlightenment. The answer is that, if we value any of these things, the creation of these represents an improvement. The term ‘improvement’ is therefore general enough to have meaning in relation to almost any value system: it simply indicates the purposeful action of an agent to create a change for the better. In the case of ‘pure’ science, this may simply be a change in our knowledge base and/or understanding of the world.24
Interrelating the Three Activities
These three activities—reflecting on value and boundary judgements; making choices concerning theory and method; and taking action for improvement—are clearly inseparable. Doing one always implies doing the other two as well, although the focus of attention may shift from one aspect of this trinity to another, so that none remains implicit and thereby escapes critical analysis. The separation between the three is therefore analytical rather than factual: it ensures a proper consideration of a minimum set of three ‘angles’ on possible paths for intervention. Making all of them a specific focus of a methodology for systemic intervention guides the reflections of the agent, ensuring that boundaries, values, theories, methods and action for improvement all receive explicit consideration. The three activities, diagrammed in relation to one another, are presented in Figure 1. Critique specifically means boundary critique (reflection on, and choice between, boundaries and associated values); judgement means judgement about which theories and methods might be most appropriate; and action means the implementation of methods to create improvement (however this is to be understood by different actors in the local context).
Implications for Society
Having presented the methodology of systemic intervention, which can encompass methods of observation used in the service of knowledge generation, I will end the main body of the paper with some brief reflections on its implications for society.
Earlier, I mentioned Popper’s (1966) argument that science needs to be protected from the imposition of political ideology: he advocates granting freedom to scientists to pursue the cumulative development of knowledge, aiming towards an ideal of truth. His claim is that we must preserve the ‘open society’: a democratic society based on rational inquiry that is capable of using science to eliminate primitive superstition. According to Popper, there are forms of ideology (such as Marxist ‘historicism’25) which threaten the open society by enforcing rules concerning what it is and is not legitimate to explore. These forms of ideology are therefore to be resisted, and the ideal of truth is to be preserved as the focus of science.
There are many assumptions in this argument that have been thoroughly debated in the last 25 years. These include the strong distinction between ‘modern’ and ‘pre-modern’ societies (Latour, 1991); the cumulative development of knowledge (Kuhn, 1970); the value of the ideal of truth (Rorty, 1989); the nature of ‘rationality’ (Foucault, 1980); and the necessity of questioning tradition (MacIntyre, 1985). However, for the purposes of this paper, I wish to focus on the strong division in Popper’s worldview between the pursuit of truth and the exploration of values. Contrary to Popper, I argue that marginalizing the exploration of values makes science more prone to ideological manipulation, not less so.
The crux of my argument is that if, without critical reflection, we allow the value judgements that inevitably flow into decisions on what to research to be shaped by whatever macro-social and economic forces exist in society, we give up one of the key means that we have of protecting ourselves against totalizing ideologies.26 Here I agree with Habermas (1984a,b) that moral inquiry is as important as inquiry into the nature of the world. However, I disagree with Habermas’s assumption that it is possible to neutralise the effects of power by establishing ‘free and fair’ debates in the public sphere, in which all citizens are equally able to ask questions orientated to the ideals of truth, rightness and sincerity. As I see it (following Foucault, 1980, 1984), power operates in a more complex manner, and is ever present as both an enabling and constraining force. So critical questioning around moral issues is indeed a means to challenge totalizing ideologies, but we should never assume that this can make us ideologically or morally neutral.
Of course, Popper was writing at a time of ‘grand’ ideological debates. The first edition of The Open Society and its Enemies was published in 1945, when fascism had just been overthrown in Western Europe; capitalism was seen as either the saviour or the enemy of the ‘free’ world (depending on your point of view); and Marxism was on the rise. It could be argued that, as we enter the Twenty-First Century, we have no more need for protection against totalizing ideologies, and therefore this debate about science and values is simply redundant. I strongly resist such a view, for two reasons. First, it would be a very short-sighted view of history to think that, just because we have seen the end of the Twentieth Century confrontation between capitalism and socialism, this spells the final demise of all totalizing ideologies. Second, as forces of globalization proliferate, and we experience economic forces that are beyond the control of individual nation states, it could be argued that we are more at risk of being subsumed by a totalizing ideology than ever before. It is no longer Marxist historicism that pulls us towards an ‘inevitable’ future, but the discourse of global market forces (Robertson, 1998) and a culture pivoted around individual consumer choice (Gare, 1996).
The idea of bringing explorations of values alongside observational methods, as suggested by the methodology of systemic intervention presented in this paper, could support scientists and other citizens in working participatively to reveal more of what flows into the making of truth judgements. This kind of exploration enables us to ask new and different questions about what forms of intervention we should pursue, including what should be the focus of observational research. Also, because this is a methodology that is explicit about the need for reflection on value and boundary judgements on an on-going basis, it encourages resistance to totalizing ideologies which require a continual reference back to a single ‘truth’—a single uncritically-accepted boundary and associated value judgement.
Conclusion
In this paper I have chosen to side-step the usual starting points for debate about complexity and the philosophy of science, which tend to assume that science is primarily about observation. Instead, I opened my argument by exploring the concept of intervention, and defined intervention as purposeful action by an agent to create change. I then contrasted this with the concept of observation. While some authors suggest that intervention and observation are opposites, I have argued that observation (as undertaken in science) should be viewed as just one type of intervention. We should therefore welcome scientific techniques of observation into a pluralistic armory of intervention methods, alongside methods for exploring values, reflecting on subjective understandings, planning future activities, etc.
Having redefined scientific observation as intervention, I then returned to systems thinking and complexity ideas to advocate a methodology of systemic intervention. This focuses attention on the need for boundary critique (reflection on, and choice between, boundaries and associated values); judgement (concerning appropriate theories and methods); and action for improvement (defined temporarily and locally).
Finally, I ended with a brief discussion of the implications of this methodology for society. In particular, I emphasized its value in terms of resisting totalizing ideologies. It also encourages a critical and participative attitude to intervention—including forms of intervention that incorporate the traditional observational methods of science.
Notes
1. I treat systems thinking and complexity as a pair because they share certain characteristics of particular relevance to this paper. I appreciate that some authors (e.g., Stacey et al., 2000) contrast complexity and systems thinking. However, while I accept the criticisms of early systems thinking offered by Stacey et al., I disagree with their characterization of some later systems theories, which I suggest have a lot in common with philosophical writings in complexity (e.g., Cilliers, 1998; Richardson et al., 2000). This is an argument that lies beyond the scope of the current paper.
2. Of course some of these insights, such as the impossibility of comprehensive understanding, are not unique to complexity theory, but have been discussed by philosophers of science for many years (see, for example, the work of Popper, 1959).
3. Objectivity is not an absolute. Arguably, Popper’s (1959, 1972) greatest contribution to the philosophy of science is to undermine the positivist claim that objectivity is actually achievable. His point is that all claims to objectivity are judged within scientific communities. Because, in principle, the boundaries of these communities are not closed, any accepted claim to objectivity may be undermined by new participants (or by the original participants re-testing the claim). Objectivity is therefore an ideal we may aim towards. It is not actually an achievable attribute of an observation.
4. While insights from complexity theory threaten some of the presuppositions of science, others remain unchallenged by it. For example, most complexity theorists share the commitment of other scientists to a realist philosophy: it is assumed that scientific descriptions do reflect a real world, even if we can never ultimately measure the accuracy of this reflection. My own view is that, if we follow through some of the implications of complexity theory and systems thinking to their logical conclusions, the relatively naïve realism that is often assumed by scientists is problematized. However, discussion of this is beyond the scope of the current paper (see Midgley, 2000, for details).
5. I suggest that an agent can be viewed as either a single human being or an identifiable group of human beings in interaction (e.g., a family, team or organisation) to whom purposes are ascribed. In the case of a group, this definition does not assume that all participating individuals share the purpose of the whole (indeed, some sub-agents may act in opposition to the dominant purpose). However, a group can be called an agent when it (or its representatives) is perceived as acting to realise a dominant purpose at the group level, regardless of the actions or views of sub-agents. The word ‘dominant’ here is crucial. It indicates that the group purpose is a function of whatever mechanisms of legitimation exist within and beyond the group that allow it to be perceived as moving in one particular direction, regardless of any counter-arguments being produced by internal opponents. Therefore, when a government minister declares war on behalf of a nation, it is generally accepted that the nation is at war even if half of its citizens wish to dissent.
6. Obviously, there is much more to say about intervention than this (for a full exposition, see Midgley, 2000). One thing I should be clear about here, however, is that the concept of intervention does not presume that it is always possible to have flawlessly pre-planned change based on accurate predictions of the consequences of action. This would be a return to the mechanistic view of the universe that systems thinking and complexity science have sought to challenge. For more details, see later in the paper.
7. There are others in the ‘intervention camp’ too, such as operational researchers, management scientists, evaluators and systems practitioners. These labels refer to people in a variety of semi-independent research communities who have similar interests, but slightly different emphases.
8. Reason (1996) disagrees with using the term ‘intervention’, but I will not deal with this here. An explanation of his position, and an argument against it, can be found in Midgley (2000).
9. Many of these philosophies of science were far less sophisticated than the one advanced by Popper. For instance, the positivists working in the late 19th and early 20th Centuries asserted that science should be entirely value-free. Popper (1972), in contrast, argues that the values of scientists will inevitably guide what will be the focus for investigation, but independent observation can still be achieved once this focus has been determined. See Delanty (1997) and Romm (2001) for some interesting reviews of this and other related debates.
10. Of course Popper stressed inter-subjective testability within scientific communities, and said that method alone is not an adequate determinant of independent observation. However, a prime means by which an individual scientist could influence the consensus of scientists was by using a widely accepted method. Hence the overwhelming focus on methods that was certainly still present when I graduated as a student of Psychology as late as 1982.
11. In this sense, Lewin’s (1946, 1948, 1952) philosophical assumptions are similar to Popper’s (1959). Popper says that values determine the focus of science, and that observations (imperfectly) reflect the real world. However, Popper (1966) argues against what he sees as the imposition of a social utility agenda on science. He also strongly demarcates the supposedly nonscientific world of values from the world of facts—in his view, it is only legitimate for scientists to focus their inquiries on the latter.
12. Many writers on systems thinking and complexity take this view. See, for example, Bogdanov (1913-1917), Koehler (1938), Boulding (1956), Kremyanskiy (1958), von Bertalanffy (1968), Bateson (1972), Miller (1978), Prigogine and Stengers (1984), Laszlo (1995), Capra (1996), Allen (1997), Hardy (1998), Holland (1998) and Cilliers (1998).
13. A great deal has been written about the limitations that disciplinary boundaries impose on the generation of knowledge (e.g., von Bertalanffy, 1968; Lovelock, 1988; Midgley, 2001).
14. There are also writers who are critical of this work (e.g., Merkel & Searight, 1992; Pam, 1993). However, we need not fall into the trap of saying that this is the only valid way of viewing social systems. If we welcome a ‘living systems’ approach as one amongst a plurality of useful ways of thinking, we can gain insights from it without necessarily succumbing to its limitations (Rosenblatt, 1994).
15. Saying that both approaches are reasonable might appear contradictory, but I believe they can be reconciled through a new, pluralistic approach to systems philosophy. However, this is beyond the scope of the current paper (see Midgley, 2000, for details).
16. Quine (1990) acknowledges that in real life there are ambiguities; cases where people question linguistic conventions; and uses of scientific jargon that many competent users of language will not understand. These phenomena suggest that agreement on an observation sentence often needs to be viewed as relative to a particular bounded community. Nevertheless, Quine still insists that there are some very basic sentences which we can reasonably assume have a universally clear reference to a particular sensory stimulus.
17. Popper (1959) talks about an ideal of truth, not truth itself, because he follows Kant (1787) in arguing that we can only know our knowledge constructs, not reality itself. Nevertheless, he still believes that truth is something we ought to aim towards, even if we can never know for sure if or when we have attained it.
18. Towards the end of this paper I will argue that, far from opening science to political domination, this exploration of values protects us from ideological dogmatism.
19. One possible argument against this is that there is a difference between ‘pure’ and ‘applied’ science. Some might say that those conducting applied science should indeed undertake moral inquiry, but pure science is curiosity-driven; its ethical implications are generally unknown or uncertain; and it less obviously involves intervention. My answer to this is that even pure science involves intervention in the sense that it is designed to produce knowledge that will make a difference in scientific debates. There may be similarities and differences between the ethical issues impacting on pure and applied scientific projects, but in choosing to undertake a particular piece of pure, curiosity-driven research, the scientist is still making a value judgement that this is the right thing to do. S/he could, for instance, have taken on some other research project. This kind of judgement is therefore just as amenable to moral inquiry as that made by the applied scientist—it just means acknowledging that factors other than curiosity can and should be considered in forming pure research agendas.
20. There is a substantial body of literature on the theory, methodology and practice of boundary critique: e.g., Churchman (1970, 1979); Ulrich (1983, 1987, 1994, 1996); Midgley (1992, 1994, 1997b, 2000); Midgley et al. (1998, 2007); Brown & Packham (1999); Vega-Romero (1999); Córdoba et al. (2000); Yolles (2001); Foote et al. (2002, 2007); Córdoba & Midgley (2003, 2006); and Midgley & Shen (2007).
21. This exposition of boundary critique has left out, or made only passing reference to, a number of important issues. These include the extension of the concept of boundary judgement to encompass concerns about how things ought to be; the importance of widespread stakeholder participation in systemic intervention; and the need for agents to deal with the marginalization of particular issues and stakeholders within social contexts. These are dealt with in Ulrich (1983), Midgley et al. (1998) and Midgley (2000).
22. Many issues have been left unexplored by this short exposition, including paradigm incommensurability, standards for choice between theories, theoretical coherence/incoherence, how to develop the methodological knowledge base of agents, etc. All these issues are covered in Midgley (2000).
23. An example is logging a stretch of rain forest, which may bring about an improvement in the eyes of the logging company’s employees and those who consume the wood that is generated, but may be considered as damaging by tribal people who are displaced from their ancestral lands, and by conservationists concerned with the preservation of species diversity. As Churchman (1970) says, every improvement assumes boundaries defining what consequences of intervention are to be taken into account, and what are to be ignored or regarded as peripheral. In the above example, the logging will only be viewed as bringing about an improvement if the displacement of tribal people and the reduction of species diversity are excluded from the boundaries of analysis. Clearly, what is included in the boundaries of analysis and who conducts this analysis are both vital issues in defining improvement.
24. It should be noted that there is a counter-argument to this. According to Rorty (1989), using a term like improvement (or truth, legitimacy, ontology, morality, etc.) suggests a belief in absolute facts or values: to talk of improvement is to talk about the attainment of a state that everybody would agree is better. Rorty believes that such words are therefore tainted, and has launched a fierce critique of the apparent certainties of modernity. He offers a powerful argument, but why abandon words like truth, morality and improvement? If we are prepared to be critical about the business of making boundary judgements, there is no need to assume that understandings of improvement are universal. To abandon words like truth, morality and improvement is to risk slipping into negativity and inaction. To tear away the modernist certainties surrounding their use, and to clothe them with an awareness of the frailty of human understanding, is to preserve the possibility of positive action while facing these complexities head-on.
25. Historicism, according to Popper (1966), is the belief that the course of history is predetermined (e.g., by structural economic forces). This kind of belief informs people’s actions in the world, bringing them nearer to the ‘inevitable’ future. Historicism therefore involves a self-fulfilling prophecy.
26. This is similar (but not identical) to the arguments of some critical and systems theorists in the second half of the 20th Century (see, for example, Habermas, 1971, 1972, 1984a,b; Foucault, 1980, 1984; Ulrich, 1983; Fay, 1987; Jackson, 1991; Oliga, 1996; and Gregory, 2000). As I see it, there is considerable scope for dialogue between critical theorists, systems thinkers and philosophers of science.