Systems thinking, complexity and the philosophy of science


Abstract

It is usually assumed in debates about systems thinking, complexity and the philosophy of science that science is primarily about observation. However, the starting point for this paper is intervention, defined as purposeful action by an agent to create change. While some authors suggest that intervention and observation are opposites, it is argued here that observation (as undertaken in science) should be viewed as just one type of intervention. We should therefore welcome scientific techniques of observation into a pluralistic set of intervention methods, alongside methods for exploring values, reflecting on subjective understandings, planning future activities, etc. However, there is a need to explicitly counter a possible pernicious interpretation of this argument: intervention could (erroneously) be viewed as flawlessly pre-planned change based on accurate predictions of the consequences of action. This is the mechanistic worldview that systems thinking and complexity science seek to challenge. Therefore, having redefined scientific observation as intervention, the paper revisits insights from systems thinking and complexity to propose a methodology of systemic intervention. Some brief reflections are then provided on the wider social implications of this methodology.


Introduction

Many discussions of systems thinking, complexity1 and the philosophy of science take as their starting point the fact that everything in the world is connected in a direct or indirect way to everything else. Therefore, the scientific observer is an integral part of the world s/he observes, not separate from it (e.g., von Bertalanffy, 1968; Bateson, 1972; Maturana & Varela, 1992). Additionally, the focus is often on the impossibility of both full understanding of phenomena and infallible prediction, because the complexities of the world slip the grasp of the human observer (e.g., Casti & Karlqvist, 1996).2 The significance of this is that it undermines some of the philosophical ideas that have traditionally been invoked in support of science. For instance, it becomes possible to question the reliance of the philosophy of science on the concept of independent observation.

Independent observation is observation detached from the values and idiosyncrasies of the observer. This does not mean observation without the presence of an observer: it simply means observation that is judged by scientists to be independent of the peculiarities of any particular individual. In other words, an independent observation is one that people in a given scientific community agree would be the same regardless of who is making it. It is only if we can say that independent observation (in the above sense) has been achieved that we can make a satisfactory claim to objectivity (Popper, 1976).3 Clearly, if we want observation to be independent in this manner, then intervention by an observer into the observed has to be prevented—except, that is, intervention to ensure the purity of observation, such as when a scientist constructs an experiment. Any such intervention could create a change, thereby making it possible to say that the observation is a result of the intervention rather than of the intrinsic characteristics of the phenomenon being observed. The fact that systems and complexity theories say that there are inevitably direct and/or indirect links between the observer and observed brings into question the possibility of observation free of intervention.4 These problems are not new, but the interest in complexity being shown across the disciplines has placed them at the forefront of the agenda of the philosophy of science once again.

These are important issues, but it seems to me that the debate, as it has been framed in the two paragraphs above, already makes a very significant (and arguably questionable) assumption—that scientists should indeed be concerned primarily with observation. In this paper I want to enter the debate from an entirely different angle: by discussing the methodology of intervention. To give an initial definition of intervention, it simply means ‘purposeful action by an agent5 to create change’.6 This contrasts starkly with the conventional canons of the philosophy of science: scientists have traditionally been exhorted to avoid intervention for fear of corrupting the purity of observation (except the kind of intervention that preserves this purity, such as when an experiment is set up).

I will start the main body of this paper by exploring how the concept of intervention has been used by others (but not necessarily in the same way I use it), focusing on the supposed opposition between intervention and observation. However, after comparing these two concepts, I will seek to show that the distinction between observation and intervention is not as simple as it might at first appear, especially given the problems (mentioned above) with the idea of independent observation. Indeed, I will argue that observation should be viewed as just one type of intervention. As we shall see, this has profound consequences for understanding the boundaries between ‘science’ (which has traditionally had observation as its focus) and other activities that are more obviously concerned with intervention (e.g., policy-making, personal and/or group decision-making, management and community development). I will argue that scientific methods for structuring observation should be placed alongside a whole host of other methods for exploring values, reflecting on subjective understandings, planning future activities, etc. Different methods (including scientific methods) can be useful for different purposes, and can be interrelated as part of intervention practice.

Having made the case for science as intervention, I will then return to the theme of systems thinking and complexity in order to argue specifically for systemic intervention. A broad-based methodology for systemic intervention will be outlined, and references will be provided to more detailed work published elsewhere. The paper will then end with some reflections on the wider social implications of this methodology.

However, let us begin by exploring the concepts of ‘observation’ and ‘intervention’ in more detail.

Observation versus Intervention

Many writers contrast observation and intervention: it appears that both scientists (who champion observation) and action researchers7 (who champion intervention) have an interest in maintaining this pair of concepts in opposition to one another. Let us start with the views of the scientific camp.

Observation as the Basis of Science

While many philosophers of science have discussed observation, Popper (1959, 1972) is arguably the best known. Popper claims that, to be worthy of scientific attention, “[an] event must be an ‘observable’ event; that is to say, basic statements must be testable, inter-subjectively, by ‘observation’” (1959: 102). Hence, traditional science seeks to place all statements that cannot be tested by observation outside its remit.

Popper (1959) also proposes the idea that method is crucial: methods need to be chosen that enable independent observation. Hence the emphasis in most traditional scientific methodologies on quantitative comparisons between ‘experimental’ and ‘control’ conditions (Wright et al., 1970). However, while high-quality methods are important, they are not enough on their own: the final guardian of independent observation is the scientific community, which is able to re-test findings and subject claims to critical scrutiny (Popper, 1976).

Arguably, one of the most important aspects of controlling observation, as far as many scientists are concerned, is the need to prevent intervention. The observer should not influence the observed other than, in an experiment, by establishing the required difference between the experimental and control conditions; otherwise, the results of the observation could be due to the activities of the scientist rather than the variable(s) under investigation.

Intervention as the Basis of Action Research

In marked contrast with Popperian science, action research is concerned primarily with intervention8 and not observation: the researcher engages with what is being researched, seeking to bring about positively valued change. The birth of action research is widely attributed to Lewin (1946, 1947, 1948), who argues that the focus of the philosophy of science on independent observation creates a divorce of the scientific method (especially as it is used in the social sciences) from social practice. He stresses that science should be harnessed for the benefit of human society, and this requires a very different set of philosophical and methodological ideas from those traditionally used. While I would not wish to confine science to a narrow definition of applied social research, Lewin’s views are worth exploring as they provide a useful starting point for developing a broader understanding of science as intervention.

To appreciate why action research emerged in the mid-Twentieth Century, and gained a great deal of popularity very quickly amongst many people (especially those working outside academia), it is necessary to understand the orthodoxy that was being propounded at the time. Popper had been writing about the importance of experiment and observation since the 1930s, and his work built on previous philosophies of science that also placed independent observation at the centre of scientific practice.9 While there were strong debates about the extent to which human knowledge is fallible, the orthodox view was that the need for independent observation was not in question. For many people, it began to appear that the reasons or purposes for undertaking scientific research were secondary to the robustness of the methods used.10 Some scientists advocated a radical denial of purpose, saying that all organisms, including human beings, are deterministic ‘learning machines’ (e.g., Skinner, 1971; Maze, 1983). Even if the existence of purposes was accepted, such purposes could not be considered ‘scientific’ in the same sense as observations; they were generally omitted from reports of experimental practice, and could often only be deduced by reading between the lines of hypotheses. In this way, the purposes and debates that made the hypotheses meaningful were largely hidden from view.

It was in this atmosphere that Lewin (1946, 1947, 1948) mounted a strong critique of ‘pure’ science in favour of action research. Lewin’s argument is that the institutions of science invest massive resources into research that has largely become divorced from the goals of meeting human need and satisfying human desires (that is, the desires of those outside the scientific community—the latter tends to value knowledge for its own sake). In Lewin’s view, it is generally a matter of accident whether this research is relevant to people working in industrial and welfare organizations. Of course, there are the applied natural sciences, like medicine, but really nothing comparable for the worlds of industry and human welfare where it is much more difficult to control observations.

Essentially, Lewin (1946) advocates the harnessing of science in the service of intervention rather than observation. That is, science should be undertaken in organizations for social benefit. He believes that scientists have a choice: they can either conduct research for the sake of pure curiosity, or help themselves and others improve the social conditions that surround them. When a problem is encountered in an organization, research may be undertaken to help define a way forward. However, social purposes should not be subordinated to methodological purity: in Lewin’s view, if research is being conducted in support of action, it makes little sense to subvert the purposes that guide that action in the name of scientific rigour. This means, for Lewin, adapting the scientific method (when necessary) to make it more meaningful in social situations: instead of testing hypotheses, scientists can identify questions that need answering. Likewise, if it is impossible to set up perfectly controlled conditions, they should not call research ‘invalid’, but should still generate data in a manner that supports decision-making—even if strongly scientific conclusions cannot be reached. After all, organizational decisions will have to be taken anyway, and it is better to take them on the basis of imperfect data than on no data at all.

Of course, embedding scientific practice in social situations, and adapting it in the service of intervention, will affect the degree of independent observation that can be achieved. Far from keeping one’s distance from the observed, in Lewin’s (1946) model of action research the observer is encouraged to find a means to eliminate socially undesirable phenomena and promote desirable ones. What counts as desirable or undesirable obviously needs to be defined by participants in the local situation, which is why Lewin (1952) produced his “field theory”—a “field” is a set of phenomena that are seen as directly interacting with an object (person, group or organization) of concern. The boundaries of the “field” demarcate what is and is not relevant in an analysis. We see that, in Lewin’s perspective, observation is not independent of the values of the observer (these values determine what initial question is asked), but is nevertheless ‘factual’ in the sense that a realist ontology is assumed—so observations reflect the real world (albeit imperfectly through our fallible perceptions).11 Also, because action takes place over time, observations tend to be most meaningful as a sequence that constitutes feedback to actor(s), who are required to make judgements about the success, or otherwise, of their actions.

It appears that, while Lewin (1946, 1948) is primarily concerned with intervention, he does not entirely abandon observation—but it is harnessed into the service of the former. Also, where controlled observation is impossible, other means of supporting intervention through research are explored. Therefore, the principle of independent observation is not abandoned, but it is subordinated to the principle of social utility.

This work has since been developed by a variety of authors, both in the action research community and beyond. One of the most notable examples is Seidman (1988) who, following Dewey (1946) as well as Lewin (1947), advocates a much stronger opposition between observation and intervention. Instead of arguing that science should be harnessed into the cause of intervention, Seidman suggests that the two concepts are mutually exclusive because they are differentiated by the involvement of action. Science requires the exclusion of action on the grounds that changing the phenomenon of interest corrupts the purity of observation, while intervention is founded upon action (also see Reason & Heron, 1995).

Summary of the Distinction Between Observation and Intervention

At this point I have made a clear distinction between observation (as used in science) and intervention, the former being about seeing things in a manner that is not ‘contaminated’ by the actions of the observer, and the latter being about the actions of agents to promote change. However, it should already be apparent from the discussion of Lewin’s (1946, 1948) work (above) that observation and intervention do not have to be regarded as opposites (although they often are)—observation can be undertaken in the service of intervention.

Of course, a counter-argument could be that de-prioritizing the principle of independent observation, which is implied in this way of thinking, simply undermines science. If science does not seek to preserve the independence of the observer through the design of methods, and through scrutiny by scientific communities, it may cease to be useful for both pure and applied studies. Worse, if we allow the purposes of the observer to be discussed as an integral aspect of scientific practice, rather than accepting them as inevitable determinants of the focus of inquiry that are essentially non-scientific, we could open the door to the domination of science by political ideology (Popper, 1966). However, the validity of these counter-arguments rests on the assumption that independent observation is actually possible—or at least, if it is seen as an ideal (rather than being actually achievable), that we can know how near to, or how far from, the ideal we are. In my view the actual achievement of independent observation is impossible, and judgements about distance from an ideal of independent observation are inevitably uncertain. My reasons are detailed below.

The Impossibility of Independent Observation

Let us start the analysis with the insight, common to both systems thinkers and complexity theorists, that everything in the universe is interconnected. Such a perspective precludes the possibility that an observer can be truly independent of the observed. In von Bertalanffy’s (1968) general system theory, and in the works of other writers12, the universe is made up of hierarchies and/or networks of open systems with semi-permeable boundaries: all systems interact with their environments, and there is no such thing as a truly autonomous entity. This means that observers are part of the reality they observe: they cannot observe from outside the systems of mutual causality that they participate in. Although links between the observer and the observed may be indirect, they do exist, and therefore wholly independent observation is impossible.

Of course, it might be argued that some interconnections between the observer and observed are more significant than others. Vickers (1972) compares observations of our solar system with observations of social systems in which the scientist is a participant. In the former case, he claims that the interconnections between the observer and observed are relatively trivial (at least as far as scientific observation is concerned), while the interconnections between human beings in social systems are strongly implicated in social-scientific observations. Decisions to undertake observations are made taking account of these interconnections, and observations of human behavior can feed back to transform what is observed. For this reason, Vickers makes the claim that natural and human systems are fundamentally different.

In one sense it would be easy to accept a distinction between ‘human’ and ‘natural’ systems, as it neatly reflects the familiar division between the ‘social’ and ‘biophysical’ sciences. This would mean that we only have to be sceptical about independent observation in relation to social science. However, I have two reasons for refusing to accept this distinction:

First, as someone with an interest in systems thinking, which prioritizes the ideal of transdisciplinary inquiry, I find it difficult to conform to ‘arbitrary’ divisions between scientific disciplines.13 It seems to me that human systems can legitimately be studied as natural phenomena: there is, for instance, a great deal of systems research that views families, organizations, communities and societies as ‘living systems’ (e.g., Miller, 1978).14 Conversely, natural systems can quite reasonably be studied as social constructs. After all, if we are critical of naïve positivism, the most we can claim is that we have access to our knowledge of natural systems, not the systems themselves (see Darier, 1999, for a particularly sophisticated set of analyses of natural systems as social constructs).15

My second reason for refusing the distinction between ‘natural’ and ‘social’ systems is linked to this. There is strong evidence that observers construct observations of natural systems (as well as social ones) in ways that are not the same for all people. This is an insight that has been surfaced in different ways across a range of scientific disciplines. For example, in physics, Einstein (1934) claims that our inability to know the world “as it really is” means that human “speculation” has to be an integral part of physics. This idea took root in physics through the development of quantum theory, which challenges the conventional separation of the observer from the observed by empirically demonstrating that the former cannot help but influence the latter (Bohr, 1963). Indeed, quantum theory proposes the existence of sub-atomic particles that are not directly observable at all, so these propositions must be based on something in addition to empirical evidence—metaphysics (the non-empirical realm of ideas). Thus, the scientific orthodoxy identified by Einstein (1934) that “the belief in an external world independent of the perceiving subject is the basis of all natural science” was thrown into doubt. The worlds of physical and metaphysical reality came to be seen as inseparable (Prigogine, 1989).

Similar ideas have been explored in biology too. Like the quantum theorists, Northrop (1967) focuses on the inevitability of metaphysics. If biological theories are about the identification of patterns in empirical data, then an understanding of metaphysics reveals that human beings, in looking for patterns, must employ ideas that have their origins outside the empirical data itself. Likewise, in psychology there have been theorists who have stood out against the philosophy of independent observation (e.g., Kelly, 1955; Weimer, 1979; Hollway, 1989), as there have been in sociology (e.g., Brown, 1977), systems thinking (e.g., Maturana, 1988a,b; Ulrich, 1983; Alrøe, 2000) and complexity science (e.g., Fitzgerald, 1999).

Arguably, however, the most sophisticated arguments have been constructed by philosophers of science and technology. For example, Quine (1969) shifts the focus from observation per se to observation sentences: i.e., sentences that refer to something that directly stimulates the senses, and which we may reasonably assume that all competent users of language would agree on: Quine (1990: 4) gives “it’s raining” as an example.16 Quine and Ullian (1978: 28) say that “observation sentences are the bottom edge of language, where it touches experience”. Quine concentrates on observation sentences, partly because the phenomenon of observation itself is “awkward to analyze” (Quine, 1990: 2), but mainly because any observation is only meaningful to science if it is communicated. Scientific communication, if it is to be based on observation, involves reference in sentences to what is stimulating the senses. Having justified the focus on observation sentences, Quine (1990) then goes on to discuss how individuals learn basic associations between sentences and sensory stimuli, mostly in early childhood. He argues that it is only in this moment of individual learning that an observation sentence is free of any theoretical content. As soon as anyone reflects on the meaning of an observation sentence (all scientific observations have a meaning in context, if only the very limited context of an experiment), the words can only be understood in relation to a network of other concepts which have significance beyond the direct context of the observation. In other words, in the scientific context, even the most basic observation sentence cannot be ‘pure’—it must be interpreted with reference to theory.

Reflecting on the work of Quine (1969), Hesse (1974) goes further to question whether, even in the moment of learning an observation sentence as a child, it is possible for that learning to be untainted by theory. The issue for her is that, when a parent (or another person) introduces a word to a child in the context of the child’s sensory experience, that word already has a meaning that is dependent on its relationship with other words. Therefore, she ends up questioning the whole idea of distinguishing between a ‘language of observation’ and a ‘language of theory’, saying that all observation is inevitably theory-laden.

Hesse (1961) also criticizes the whole project of “operationalism” in science. She defines operationalism as the reduction of scientific explanation to operational statements (i.e., statements about how the scientist is controlling observation, and what is observed as a consequence), in order to eliminate supposedly unverifiable (metaphysical) theory. She has several compelling reasons for challenging operationalism, among which is the insight that, without theory, scientists would have to fundamentally change their explanations every time a novel observation is made. Citing Waismann (1952), she argues that theories must have “a fringe of meaning not defined by observation” because “it is precisely the function of theories to assimilate… new observations without the meanings of the theories being radically altered” (Hesse, 1961: 8). Therefore, for purely practical reasons, the idea of theory-free science based on independent observation is untenable.

This challenge to independent observation is extended by Cartwright (1999). She advances the radical proposition that the laws of physics proposed by scientists are not necessarily universal, but could very well be specific to the types of context in which they are generated. Essentially, her argument is that laws (theories of supposedly universal applicability) are made up of abstract concepts that are only imbued with meaning when they are related to particular models, and these models are almost always tested in laboratory conditions (even in the field scientists generally exercise experimental control, making the field a pseudo-laboratory). Observations therefore take place under carefully controlled conditions, which are not representative of the conditions that generally obtain in the universe. Cartwright (1999) therefore argues that our observations, models and laws form a self-supporting triangle that only applies in the aspects of the world that human beings construct. It is consequently not legitimate to claim universality (the status of law) for our theories, as we cannot know whether they apply beyond the world of our constructions. Here, observation is not only made relative to theory, but also to the constructive activity of human beings.

This is likewise important to Latour (1991). He points to the increasing number of instances where the issues addressed by scientists have both technical and social dimensions. While biophysical scientists often try to keep their science ‘pure’ by focusing on the technical aspects, the social inevitably intrudes. A good example from my own research is forensic DNA analysis: while the methods that scientists use to judge the probability of a crime sample coming from an accused person are based in the technical science of genetics, their application has a social context that inevitably raises ethical issues. Examples of ethical issues include what criteria to use to decide whose DNA to keep on file; whether racial typing is legitimate; and whether it is acceptable to use genetic data for purposes other than those it was originally collected for (Baker et al., 2006). It is therefore apparent that the explicitly technical framing of the science (excluding the social) is complicit in the use of that science for social ends that may be open to question. Once we acknowledge that scientists in this position have a particular frame of reference with social consequences, which can be contrasted with alternative frames, then the unavoidable conclusion is that their observations are given meaning by (i.e., are not independent of) the contexts of human action in which they are communicated.

Inevitably, this lightning review of the philosophy of science and other fields of study has ignored the differences between the opinions of the cited authors. Rather, I have focused on what they have in common: a critical attitude to the idea that it is possible to have genuinely independent observation. The essence of the critique of independent observation is that observations take place in particular contexts, and people bring different knowledge resources into these, so there can never be guarantees that they will see things in the same ways.

Of course, it could be argued (following Popper, 1959) that independent observation is an ideal, not something that is actually achievable. However, aiming towards an ideal suggests that, given two observations, we can reliably judge which is closer to the ideal. But even this is uncertain. If Popper is right to say that we should never take objectivity for granted because future work by people in the scientific community could reveal something that suggests the intrusion of subjectivity, then the distance between any actual observation and the ideal of independent observation cannot, in principle, be determined. Something that appears, on one day, to be as close as we can get to an independent observation might be seen, the next day, as very distant from it. All we have, then, are temporary judgements about independent observation by members of scientific communities, and these judgements could (in principle) be undermined at any time.

We are now left with the question, if truly independent observation is impossible, and the ideal of independent observation is problematic, where does this leave science? My argument, to be developed below, is that the construction of scientific observation should be regarded as a form, but by no means the only valid or useful form, of intervention.

Observation as Intervention

Scientific observation is not just any observation, but a moment in which the situation is constructed to facilitate observation under controlled conditions (Cartwright, 1999). There are two levels at which this kind of observation is dependent on the involvement of particular agents: first, in actually undertaking the observation; and second, at a ‘higher’ level, in establishing the goals and parameters of the observation. Below, I discuss each of these levels and then describe how Popper (1959, 1976) addresses them. While Popper does account for both forms of dependency on agents when discussing how to define ‘objectivity’, I nevertheless argue that his view that science should only be concerned with pursuing the ideal of truth17, not exploring values, sits uncomfortably with his acceptance of this dependency. This then opens the door for us to recast observation as an aspect of intervention. Let us start, then, by looking at the two levels at which agents are implicated in constructing observations.

First, scientific observation is dependent on the involvement of particular agents because interpretation is integral to the act of observation itself. What the scientist is able to see will in part be determined by his or her expectations, which in turn will be colored by the language s/he uses and the values flowing into the act of observation. To illustrate, in experiments in which people are asked to look into a tachistoscope (a machine that feeds one picture into one eye and another into the other), some interesting effects occur. If people are fed two faces, one upside-down and the other the right way up, they invariably only see the one that is the right way up (Engel, 1956; Hastorf & Myro, 1959). In a similar experiment, Bagby (1957) took U.S. and Mexican citizens and fed them the same two images: one a North American landscape and the other a Mexican one. In almost every case, people only saw the one that was culturally familiar to them. This indicates that the brain, linked to its environment, is actively constructing the observation, not simply reflecting what enters the eye. Observation takes place using conceptual and emotional frameworks of interpretation (Maturana, 1988a,b; Maturana & Varela, 1992).

Second, at a ‘higher’ level, agents are also implicated in constructing observations when they set the goals and parameters for them—when they ask, what exactly should be observed? This is a moral question as much as a practical one, as scientists can choose what to observe. There is a value judgement, whether consciously recognised or not, involved in every decision to study one thing rather than another (e.g., Churchman, 1979; Ulrich, 1983; Alrøe, 2000; Midgley, 2000; Romm, 2001).

Popper’s (1959) answer to the first of the above issues, that interpretation is integral to observation, is to stress the importance of the scientific community in determining what counts as objective. Basically, the more people who scrutinize the findings of scientists, the more likely it will be that idiosyncratic interpretations will be identified. However, it is worth pointing out that a consensus among scientists is not the same thing as true objectivity because even a consensus in a professional community can be the product of cultural construction (Foucault, 1980)—so objectivity remains an ideal, not something that we can know we have achieved. Popper’s answer to the second issue, that value judgements are involved in setting the goals and parameters of observations, is simply to accept that this is the case. Indeed, the positivist idea that science can be totally value-free has been largely discredited (Resnik, 1998; ESRC Global Environmental Change Programme, 1999). Nevertheless, Popper (1966) still argues that scientists should focus on pursuing the ideal of truth, leaving explorations of values to others. As he sees it, democracy itself is dependent on keeping a strict separation between matters of fact and value: if science comes under the influence of ideology then the pursuit of truth may be severely compromised. Popper argues that, in a society in which science is constrained in this way, informed democratic decision making is impossible.

However, from a systems point of view (e.g., Ulrich, 1983)—and also from a critical theory viewpoint (e.g., Habermas, 1972, 1984a,b)—this strong separation of moral decision making from the act of observation cannot be sustained. Because the two interact, in principle they should both be available for critical analysis. I suggest that, if we acknowledge that agents are involved in interpreting observations, and we accept that value judgements guide what is investigated, we cannot legitimately follow Popper’s prescriptive path which places the exploration of values outside the remit of science.18

Of course, in practical situations, boundaries have to be drawn around the inquiry process, but it seems to me that there can be no general case for excluding value judgements from inquiry—only local cases for momentary exclusions while observations are being undertaken. In other words, moral inquiry can be suspended temporarily while an act of observation is carried out, simply because the agent cannot do two things at once, and it can be resumed once again in the light of the observation and previous moral inquiries.19 So, in many different ways we have seen that agents are implicated in constructing observations: through their direct and indirect interactions with the observed; through their interpretations of sense data; through their selection of concepts to guide observation; and by making value judgements about what to observe. It should be clear from this that observation, as a purposeful act, can only be isolated from its context by artificially ignoring what flows into it and the consequences it gives rise to. In my view, it is hard to justify placing this artificial boundary around it—especially as the choice of what to observe and how to observe it has unavoidable moral consequences for action (which may sometimes be anticipated and sometimes not). Given this state of affairs, I argue that it is more appropriate for us to take account of the construction of observation than to turn our backs on it. Once the moral, subjective, linguistic and other influences on observation are opened to critical reflection, scientific observation has to be seen as a form of intervention: observation is undertaken purposefully, by an agent, to create change in the knowledge and/or practice of a community of people. It is this purposeful action of an agent that is the defining feature of intervention.

Of course, methods of scientific observation provide a set of techniques for intervention that can be seen to have significant uses and limitations. These methods have been given pride of place in the last three hundred years of Western intellectual history, largely because of the focus of philosophers of science on maintaining the shibboleth of independent observation and thereby denigrating methods of intervention. As I believe I have demonstrated that scientific observation should also be viewed as a form of intervention, I argue that scientists should welcome a whole host of other methods that are more self-consciously concerned with action for change. Of course, there are many communities of writers, including several with an interest in systems thinking and complexity, that have been developing methodologies and methods for intervention despite the indifference, or even the disapproval, of the scientific establishment. It is mainly to this work that I refer in other writings (e.g., Midgley, 2000) that stress the value of methodological pluralism: the use of a wide variety of intervention methods to pursue a correspondingly wide variety of purposes.

Systemic Intervention

I have defined intervention in terms of purposeful action by an agent to create change, and have argued that scientific methods can be used as part of intervention practice. However, this still does not deal with all of the issues thrown up by systems thinking and complexity science. If we were to conceive of intervention as flawlessly pre-planned change based on accurate predictions of the consequences of action, we would be assuming the mechanistic vision of the universe that systems thinking and complexity science seek to challenge. Mechanism is the view that everything can be observed and described as if it is a machine—a predictable, functional, inherently understandable object (Pepper, 1942). According to this view, all the things in the world (including human beings, organizations and societies) are like clockwork toys. If we can figure out how they work, then we will be able to change them according to our will, within the limits of the natural laws that they conform to. As systems thinking and complexity science both fundamentally undermine this mechanistic world-view by highlighting issues of uncertainty and non-linear interaction (see Prigogine, 1987, and Flood and Carson, 1993, for some introductory writings), there is a need to further clarify our understanding of ‘intervention’ to avoid the pernicious interpretation of it mentioned above. I therefore wish to propose that we should think in terms of systemic intervention. The following account is heavily abbreviated, and more information can be found in Midgley (2000).

I argue that the boundary concept lies at the heart of systems thinking (and Cilliers, 1998, makes a similar claim in relation to complexity science). Because everything in the universe is directly or indirectly connected with everything else, where the boundaries are placed in any analysis becomes crucial. The ‘cut-off point’ for analysis will make some things visible and others invisible. Systems thinkers pursue the ideal of comprehensiveness, but know that this is unattainable. However, reflection on the boundaries of knowledge at least enables us to consider options for inclusion, exclusion and marginalization. It also reminds us that all understandings are incomplete: there is a need for humility and openness to the perspectives of others (Churchman, 1979).

If intervention is purposeful action by an agent to create change, then systemic intervention is purposeful action by an agent to create change in relation to reflection on boundaries. This statement embodies the core concern of the methodology of systemic intervention that I will be introducing over the coming pages.

Towards a Methodology for Systemic Intervention

At the bare minimum, I suggest that an adequate methodology for systemic intervention should be explicit about three things: boundary critique; theoretical and methodological pluralism; and action for improvement. These are discussed below.

Boundary critique

There is a need for agents to reflect critically upon, and make choices between, boundaries. Boundaries define both what issues are to be included, excluded or marginalized in analyses, and who is to be consulted or involved (the two are obviously linked, as different agents will have different concerns). Because of the ‘who’ question, issues of power and participation are unavoidable in systemic intervention (Churchman, 1979; Ulrich, 1983; Brown, 1996; Midgley, 1997a, 2000; Vega-Romero, 1999; Córdoba & Midgley, 2003, 2006).

An important aspect of my understanding of boundaries is that boundary judgements are intimately linked with value judgements (Ulrich, 1983): the values adopted in any intervention will direct the drawing of boundaries that define the knowledge accepted as pertinent. Similarly, the inevitable process of drawing boundaries constrains the ethical stance taken and the values pursued. Making decisions about boundaries is therefore an ethical business. It is also important to note that, regardless of how detailed the process of critical reflection on values and boundaries actually is, there may still be surprises as things excluded from view interact with whatever is the focus of attention. While boundary critique cannot altogether eliminate surprises, it can help minimise them. Also, because things change over time, boundary judgements need to be regularly reviewed as part of a learning process (Ulrich, 1983; Brown & Packham, 1999).20

Of course, it is only possible for agents to make boundary judgements through the use of (implicit or explicit) theories and methods, and reflection leading to the making of boundary judgements is an activity (it is intervention to shape the agent’s understanding, which may in turn influence future action). Critical reflection upon boundary judgements is vital because it is only by way of boundary critique that the ethical consequences of different possible actions (and the ways of seeing they are based upon) can be subject to analysis.21

Theoretical and methodological pluralism

The second aspect of a methodology for systemic intervention that should be made explicit is the need for agents to make choices between theories and methods to guide action, which requires a focus on theoretical and methodological pluralism. These two forms of pluralism have meaning in terms of the focus on boundary judgements mentioned above: if understandings can be bounded in many different ways, then each of these boundaries may suggest the use of a different theory (and conversely, each theory implies particular boundary judgements). Methodological pluralism then also becomes meaningful because methods and methodologies embody different theoretical assumptions: choices between boundaries and theories suggest which methods might be most appropriate (and conversely, choices between methods imply particular theoretical and boundary judgements).

Choice between theories and methods is also a form of action, in the same way as reflection on, and choice between, boundary judgements can be seen as action: it is intervention in the present to shape a strategy for future intervention.22

Action for improvement

Finally, an adequate methodology for systemic intervention should be explicit about taking action for improvement—action for the better, which cannot of course be defined in an absolutely objective manner. ‘Improvement’ needs to be understood temporarily and locally: as different agents may use different boundary judgements, what looks like an improvement through one pair of eyes may look like the very opposite through another (Churchman, 1970).23 Also, even if there is widespread agreement between all those directly affected by an intervention that it constitutes an improvement, this agreement may not stretch to future generations. The temporary nature of all improvements makes the concept of sustainable improvement particularly important: while even sustainable improvements cannot last forever, gearing improvement to long-term stability is essential if future generations are to be accounted for. We can say that an improvement has been made when a desired consequence has been realized through intervention. In contrast, a sustainable improvement has been achieved when this seems like it will last into the indefinite future without the appearance of undesired consequences (or a redefinition of the original consequences as undesirable). Of course, whether an improvement is sustainable or not is a matter of judgement (and judgements are inevitably temporary and local, even if they are widely accepted): the limitations of human understanding mean that what may appear to be sustainable at one moment may seem less so at the next. Therefore, in aiming for sustainable improvement, agents involved in systemic intervention need to periodically review the criteria of sustainability that they are using.

The notion of improvement is important because agents are restricted in the number of interventions they can undertake, and must therefore make decisions about what they should and should not do. The extent to which various interventions look like they may or may not bring about improvements, or may bring about improvements that have greater or lesser priority, is a useful criterion for making these decisions.

Of course, I should say why I have used the term ‘improvement’ rather than, say, the creation of beauty, pleasure, knowledge, understanding, emancipation or spiritual enlightenment. The answer is that, if we value any of these things, then their creation represents an improvement. The term ‘improvement’ is therefore general enough to have meaning in relation to almost any value system: it simply indicates the purposeful action of an agent to create a change for the better. In the case of ‘pure’ science, this may simply be a change in our knowledge base and/or understanding of the world.24

Interrelating the Three Activities

These three activities—reflecting on value and boundary judgements; making choices concerning theory and method; and taking action for improvement—are clearly inseparable. Doing one always implies doing the other two as well, although the focus of attention may shift from one aspect of this trinity to another, so that none remains implicit and thereby escapes critical analysis. The separation between the three is therefore analytical rather than factual: it ensures a proper consideration of a minimum set of three ‘angles’ on possible paths for intervention. Making all of them a specific focus of a methodology for systemic intervention guides the reflections of the agent, ensuring that boundaries, values, theories, methods and action for improvement all receive explicit consideration. The three activities, diagrammed in relation to one another, are presented in Figure 1. Critique specifically means boundary critique (reflection on, and choice between, boundaries and associated values); judgement means judgement about which theories and methods might be most appropriate; and action means the implementation of methods to create improvement (however this is to be understood by different actors in the local context).

Figure 1: Three aspects of a methodology for systemic intervention

Implications for Society

Having presented the methodology of systemic intervention, which can encompass methods of observation used in the service of knowledge generation, I will end the main body of the paper with some brief reflections on its implications for society.

Earlier, I mentioned Popper’s (1966) argument that science needs to be protected from the imposition of political ideology: he advocates granting freedom to scientists to pursue the cumulative development of knowledge, aiming towards an ideal of truth. His claim is that we must preserve the ‘open society’: a democratic society based on rational inquiry that is capable of using science to eliminate primitive superstition. According to Popper, there are forms of ideology (such as Marxist ‘historicism’25) which threaten the open society by enforcing rules concerning what it is and is not legitimate to explore. These forms of ideology are therefore to be resisted, and the ideal of truth is to be preserved as the focus of science.

There are many assumptions in this argument that have been thoroughly debated in the last 25 years. These include the strong distinction between ‘modern’ and ‘pre-modern’ societies (Latour, 1991); the cumulative development of knowledge (Kuhn, 1970); the value of the ideal of truth (Rorty, 1989); the nature of ‘rationality’ (Foucault, 1980); and the necessity of questioning tradition (MacIntyre, 1985). However, for the purposes of this paper, I wish to focus on the strong division in Popper’s worldview between the pursuit of truth and the exploration of values. Contrary to Popper, I argue that marginalizing the exploration of values makes science more prone to ideological manipulation, not less so.

The crux of my argument is that if, without critical reflection, we allow the value judgements that inevitably flow into decisions on what to research to be shaped by whatever macro-social and economic forces exist in society, we give up one of the key means that we have of protecting ourselves against totalizing ideologies.26 Here I agree with Habermas (1984a,b) that moral inquiry is as important as inquiry into the nature of the world. However, I disagree with Habermas’s assumption that it is possible to neutralise the effects of power by establishing ‘free and fair’ debates in the public sphere, in which all citizens are equally able to ask questions orientated to the ideals of truth, rightness and sincerity. As I see it (following Foucault, 1980, 1984), power operates in a more complex manner, and is ever present as both an enabling and constraining force. So critical questioning around moral issues is indeed a means to challenge totalizing ideologies, but we should never assume that this can make us ideologically or morally neutral.

Of course, Popper was writing at a time of ‘grand’ ideological debates. The first edition of The Open Society and its Enemies was published in 1945, when fascism had just been overthrown in Western Europe; capitalism was seen as either the saviour or the enemy of the ‘free’ world (depending on your point of view); and Marxism was on the rise. It could be argued that, as we enter the Twenty-First Century, we have no more need for protection against totalizing ideologies, and therefore this debate about science and values is simply redundant. I strongly resist such a view, for two reasons. First, it would be a very short-sighted view of history to think that, just because we have seen the end of the Twentieth Century confrontation between capitalism and socialism, this spells the final demise of all totalizing ideologies. Second, as forces of globalization proliferate, and we experience economic forces that are beyond the control of individual nation states, it could be argued that we are more at risk of being subsumed by a totalizing ideology than ever before. It is no longer Marxist historicism that pulls us towards an ‘inevitable’ future, but the discourse of global market forces (Robertson, 1998) and a culture pivoted around individual consumer choice (Gare, 1996).

The idea of bringing explorations of values alongside observational methods, as suggested by the methodology of systemic intervention presented in this paper, could support scientists and other citizens in working participatively to reveal more of what flows into the making of truth judgements. This kind of exploration enables us to ask new and different questions about what forms of intervention we should pursue, including what should be the focus of observational research. Also, because this is a methodology that is explicit about the need for reflection on value and boundary judgements on an on-going basis, it encourages resistance to totalizing ideologies which require a continual reference back to a single ‘truth’—a single uncritically-accepted boundary and associated value judgement.

Conclusion

In this paper I have chosen to side-step the usual starting points for debate about complexity and the philosophy of science, which tend to assume that science is primarily about observation. Instead, I opened my argument by exploring the concept of intervention, and defined intervention as purposeful action by an agent to create change. I then contrasted this with the concept of observation. While some authors suggest that intervention and observation are opposites, I have argued that observation (as undertaken in science) should be viewed as just one type of intervention. We should therefore welcome scientific techniques of observation into a pluralistic armory of intervention methods, alongside methods for exploring values, reflecting on subjective understandings, planning future activities, etc.

Having redefined scientific observation as intervention, I then returned to systems thinking and complexity ideas to advocate a methodology of systemic intervention. This focuses attention on the need for boundary critique (reflection on, and choice between, boundaries and associated values); judgement (concerning appropriate theories and methods); and action for improvement (defined temporarily and locally).

Finally, I ended with a brief discussion of the implications of this methodology for society. In particular, I emphasized its value in terms of resisting totalizing ideologies. It also encourages a critical and participative attitude to intervention—including forms of intervention that incorporate the traditional observational methods of science.

Notes

  1. I treat systems thinking and complexity as a pair because they share certain characteristics of particular relevance to this paper. I appreciate that some authors (e.g., Stacey et al., 2000) contrast complexity and systems thinking. However, while I accept the criticisms of early systems thinking offered by Stacey et al., I disagree with their characterization of some later systems theories which I suggest have a lot in common with philosophical writings in complexity (e.g., Cilliers, 1998; Richardson et al., 2000). However, this is an argument that lies beyond the scope of the current paper.

  2. Of course some of these insights, such as the impossibility of comprehensive understanding, are not unique to complexity theory, but have been discussed by philosophers of science for many years (see, for example, the work of Popper, 1959).

  3. Objectivity is not an absolute. Arguably, Popper’s (1959, 1972) greatest contribution to the philosophy of science is to undermine the positivist claim that objectivity is actually achievable. His point is that all claims to objectivity are judged within scientific communities. Because, in principle, the boundaries of these communities are not closed, any accepted claim to objectivity may be undermined by new participants (or by the original participants re-testing the claim). Objectivity is therefore an ideal we may aim towards. It is not actually an achievable attribute of an observation.

  4. While insights from complexity theory threaten some of the presuppositions of science, others remain unchallenged by it. For example, most complexity theorists share the commitment of other scientists to a realist philosophy: it is assumed that scientific descriptions do reflect a real world, even if we can never ultimately measure the accuracy of this reflection. My own view is that, if we follow through some of the implications of complexity theory and systems thinking to their logical conclusions, the relatively naïve realism that is often assumed by scientists is problematized. However, discussion of this is beyond the scope of the current paper (see Midgley, 2000, for details).

  5. I suggest that an agent can be viewed as either a single human being, or an identifiable group of human beings in interaction (e.g., a family, team or organisation), that have purposes ascribed to them. In the case of a group, this definition does not assume that all participating individuals share the purpose of the whole (indeed, some sub-agents may act in opposition to the dominant purpose). However, a group can be called an agent when it (or its representatives) is perceived as acting to realise a dominant purpose at the group level regardless of the actions or views of sub-agents. The word ‘dominant’ here is crucial. It indicates that the group purpose is a function of whatever mechanisms of legitimation exist within and beyond the group that allow it to be perceived as moving in one particular direction, regardless of any counter-arguments being produced by internal opponents. Therefore, when a government minister declares war on behalf of a nation, it is generally accepted that the nation is at war even if half of its citizens wish to dissent.

  6. Obviously, there is much more to say about intervention than this (for a full exposition, see Midgley, 2000). One thing I should be clear about here, however, is that the concept of intervention does not presume that it is always possible to have flawlessly pre-planned change based on accurate predictions of the consequences of action. This would be a return to the mechanistic view of the universe that systems thinking and complexity science have sought to challenge. For more details, see later in the paper.

  7. There are others in the ‘intervention camp’ too, such as operational researchers, management scientists, evaluators and systems practitioners. These labels refer to people in a variety of semi-independent research communities who have similar interests, but slightly different emphases.

  8. Reason (1996) disagrees with using the term ‘intervention’, but I will not deal with this here. An explanation of his position, and an argument against it, can be found in Midgley (2000).

  9. Many of these philosophies of science were far less sophisticated than the one advanced by Popper. For instance, the positivists working in the late 19th and early 20th centuries asserted that science should be entirely value-free. Popper (1972), in contrast, argues that the values of scientists will inevitably guide what becomes the focus of investigation, but that independent observation can still be achieved once this focus has been determined. See Delanty (1997) and Romm (2001) for some interesting reviews of this and other related debates.

  10. Of course Popper stressed inter-subjective testability within scientific communities, and said that method alone is not an adequate determinant of independent observation. However, a prime means by which an individual scientist could influence the consensus of scientists was by using a widely accepted method. Hence the overwhelming focus on methods that was certainly still present when I graduated as a student of Psychology as late as 1982.

  11. In this sense, Lewin’s (1946, 1948, 1952) philosophical assumptions are similar to Popper’s (1959). Popper says that values determine the focus of science, and that observations (imperfectly) reflect the real world. However, Popper (1966) argues against what he sees as the imposition of a social utility agenda on science. He also strongly demarcates the supposedly nonscientific world of values from the world of facts—in his view, it is only legitimate for scientists to focus their inquiries on the latter.

  12. Many writers on systems thinking and complexity take this view. See, for example, Bogdanov (1913-1917), Koehler (1938), Boulding (1956), Kremyanskiy (1958), von Bertalanffy (1968), Bateson (1972), Miller (1978), Prigogine and Stengers (1984), Laszlo (1995), Capra (1996), Allen (1997), Hardy (1998), Holland (1998) and Cilliers (1998).

  13. A great deal has been written about the limitations that disciplinary boundaries impose on the generation of knowledge (e.g., von Bertalanffy, 1968; Lovelock, 1988; Midgley, 2001).

  14. There are also writers who are critical of this work (e.g., Merkel & Searight, 1992; Pam, 1993). However, we need not fall into the trap of saying that this is the only valid way of viewing social systems. If we welcome a ‘living systems’ approach as one amongst a plurality of useful ways of thinking, we can gain insights from it without necessarily succumbing to its limitations (Rosenblatt, 1994).

  15. Saying that both approaches are reasonable might appear contradictory, but I believe they can be reconciled through a new, pluralistic approach to systems philosophy. However, this is beyond the scope of the current paper (see Midgley, 2000, for details).

  16. Quine (1990) acknowledges that in real life there are ambiguities; cases where people question linguistic conventions; and uses of scientific jargon that many competent users of language will not understand. These phenomena suggest that agreement on an observation sentence often needs to be viewed as relative to a particular bounded community. Nevertheless, Quine still insists that there are some very basic sentences which we can reasonably assume have a universally clear reference to a particular sensory stimulus.

  17. Popper (1959) talks about an ideal of truth, not truth itself, because he follows Kant (1787) in arguing that we can only know our knowledge constructs, not reality itself. Nevertheless, he still believes that truth is something we ought to aim towards, even if we can never know for sure if or when we have attained it.

  18. Towards the end of this paper I will argue that, far from opening science to political domination, this exploration of values protects us from ideological dogmatism.

  19. One possible argument against this is that there is a difference between ‘pure’ and ‘applied’ science. Some might say that those conducting applied science should indeed undertake moral inquiry, but pure science is curiosity-driven; its ethical implications are generally unknown or uncertain; and it less obviously involves intervention. My answer to this is that even pure science involves intervention in the sense that it is designed to produce knowledge that will make a difference in scientific debates. There may be similarities and differences between the ethical issues impacting on pure and applied scientific projects, but in choosing to undertake a particular piece of pure, curiosity-driven research, the scientist is still making a value judgement that this is the right thing to do. S/he could, for instance, have taken on some other research project. This kind of judgement is therefore just as amenable to moral inquiry as that made by the applied scientist—it just means acknowledging that factors other than curiosity can and should be considered in forming pure research agendas.

  20. There is a substantial body of literature on the theory, methodology and practice of boundary critique: e.g., Churchman (1970, 1979); Ulrich (1983, 1987, 1994, 1996); Midgley (1992, 1994, 1997b, 2000); Midgley et al. (1998, 2007); Brown & Packham (1999); Vega-Romero (1999); Córdoba et al. (2000); Yolles (2001); Foote et al. (2002, 2007); Córdoba & Midgley (2003, 2006); and Midgley & Shen (2007).

  21. This exposition of boundary critique has left out, or made only passing reference to, a number of important issues. These include the extension of the concept of boundary judgement to encompass concerns about how things ought to be; the importance of widespread stakeholder participation in systemic intervention; and the need for agents to deal with the marginalization of particular issues and stakeholders within social contexts. These are dealt with in Ulrich (1983), Midgley et al. (1998) and Midgley (2000).

  22. Many issues have been left unexplored by this short exposition, including paradigm incommensurability, standards for choice between theories, theoretical coherence/incoherence, how to develop the methodological knowledge base of agents, etc. All these issues are covered in Midgley (2000).

  23. An example is logging a stretch of rain forest, which may bring about an improvement in the eyes of the logging company’s employees and those who consume the wood that is generated, but may be considered as damaging by tribal people who are displaced from their ancestral lands, and by conservationists concerned with the preservation of species diversity. As Churchman (1970) says, every improvement assumes boundaries defining what consequences of intervention are to be taken into account, and what are to be ignored or regarded as peripheral. In the above example, the logging will only be viewed as bringing about an improvement if the displacement of tribal people and the reduction of species diversity are excluded from the boundaries of analysis. Clearly, what is included in the boundaries of analysis and who conducts this analysis are both vital issues in defining improvement.

  24. It should be noted that there is a counter-argument to this. According to Rorty (1989), using a term like improvement (or truth, legitimacy, ontology, morality, etc.) suggests a belief in absolute facts or values. Rorty believes that such words are tainted: to talk of improvement is to talk about the attainment of a state that everybody would agree is better. Rorty has launched a fierce critique of the apparent certainties of modernity. He offers a powerful argument, but why abandon words like truth, morality and improvement? If we are prepared to be critical about the business of making boundary judgements, there is no need to assume that understandings of improvement are universal. To abandon words like truth, morality and improvement is to risk slipping into negativity and inaction. To tear away the modernist certainties surrounding their use, and to clothe them instead with an awareness of the frailty of human understanding, is to preserve the possibility of positive action while facing these complexities head-on.

  25. Historicism, according to Popper (1966), is the belief that the course of history is predetermined (e.g., by structural economic forces). This kind of belief informs people’s actions in the world, bringing them nearer to the ‘inevitable’ future. Historicism therefore involves a self-fulfilling prophecy.

  26. This is similar (but not identical) to the arguments of some critical and systems theorists in the second half of the 20th Century (see, for example, Habermas, 1971, 1972, 1984a,b; Foucault, 1980, 1984; Ulrich, 1983; Fay, 1987; Jackson, 1991; Oliga, 1996 and Gregory, 2000). As I see it, there is considerable scope for dialogue between critical theorists, systems thinkers and philosophers of science.

Acknowledgments

This article is an updated version of an earlier paper published in Systemic Practice and Action Research (Midgley, 2003). I would like to thank Kluwer Academic (now Springer) for their permission to reproduce material that is copyright to them. Thanks are also due to Norma Romm, Mike Jackson and Alan Dean for their helpful feedback on the 2003 version of this paper, and to one of the anonymous referees who encouraged me to engage with philosophers of science whom I had not previously read.

References

Allen, P.M. (1997). Cities and Regions as Self-Organizing Systems: Models of Complexity, ISBN 9789056990718.

Alrøe, H.F. (2000). “Science as systems learning: Some reflections on the cognitive and communicational aspects of science,” Cybernetics and Human Knowing, ISSN 0907-0877, 7: 57-78.

Bagby, J.W. (1957). “A cross-cultural study of perceptual predominance in binocular rivalry,” Journal of Abnormal and Social Psychology, ISSN 0096-851X, 54: 331-334.

Baker, V., Foote, J., Gregor, J., Houston, D. and Midgley, G. (2004). “Boundary critique and community involvement in watershed management,” in K. Dew and R. Fitzgerald (eds.), Challenging Science: Issues for New Zealand Society in the 21st Century, ISBN 9780864694584.

Baker, V., Gregory, W., Midgley, G. and Veth, J. (2006). “Ethical implications and social impacts of forensic DNA technologies and applications: Summary report,” ESR, Christchurch.

Bateson, G. (1972). Steps to an Ecology of Mind, ISBN 9780876689509 (1987).

Bertalanffy, L. von (1968). General Systems Theory, ISBN 9780807604533 (1976).

Bogdanov, A.A. (1913-1917). Bogdanov’s Tektology, ISBN 9780859588768 (1996).

Bohr, N. (1963). Essays 1958/1962 on Atomic Physics and Human Knowledge, London, UK: Interscience Publishers Inc.

Boulding, K.E. (1956). “General systems theory: The skeleton of science,” Management Science, ISSN 0025-1909, 2: 197-208.

Brown, M. (1996). “A framework for assessing participation,” in R.L. Flood and N.R.A. Romm (eds.), Critical Systems Thinking: Current Research and Practice, ISBN 9780306454516.

Brown, M. and Packham, R. (1999). “Organizational learning, critical systems thinking and systemic learning,” Research Memorandum #20, Centre for Systems Studies, School of Management, University of Hull, Hull.

Brown, R.H. (1977). A Poetic for Sociology: Toward a Logic of Discovery for the Human Sciences, ISBN 9780521211215.

Capra, F. (1996). The Web of Life: A New Synthesis of Mind and Matter, ISBN 9780002554992.

Cartwright, N. (1999). The Dappled World: A Study of the Boundaries of Science, ISBN 9780521643368.

Casti, J.L. and Karlqvist, A. (eds.) (1996). Boundaries and Barriers: On the Limits to Scientific Knowledge, ISBN 9780201555707.

Churchman, C.W. (1970). “Operations research as a profession,” Management Science, ISSN 0025-1909, 17: B37-53.

Churchman, C.W. (1979). The Systems Approach and its Enemies, ISBN 9780465083428.

Cilliers, P. (1998). Complexity and Post-Modernism, ISBN 9780415152860.

Córdoba, J., Midgley, G. and Torres, D. (2000). “Rethinking stakeholder involvement: An application of the theories of autopoiesis and boundary critique in IS planning,” in S. Clarke and B. Lehaney (eds.), Human-Centered Methods in Information Systems: Current Research and Practice, ISBN 9781878289643, pp. 195-230.

Córdoba, J-R. and Midgley, G. (2003). “Addressing organizational and societal concerns: An application of critical systems thinking to information systems planning in Colombia,” in J.J. Cano (ed.), Critical Reflections on Information Systems: A Systemic Approach, ISBN 9781591400400.

Córdoba, J-R. and Midgley, G. (2006). “Broadening the boundaries: An application of critical systems thinking to IS planning in Colombia,” Journal of the Operational Research Society, ISSN 0160-5682, 57: 1064-1080.

Darier, É. (ed.) (1999). Discourses of the Environment, ISBN 9780631211228.

Delanty, G. (1997). Social Science: Beyond Constructivism and Realism, ISBN 9780816631278.

Dewey, J. (1946). Problems of Men, ISBN 9780806529592.

Einstein, A. (1934). The World as I See It, ISBN 9781599868240 (2007).

Engel, E. (1956). “The role of content in binocular resolution,” American Journal of Psychology, ISSN 0002-9556, 69: 87-91.

ESRC Global Environmental Change Programme (1999). The Politics of GM Food: Risk, Science and Public Trust, ISBN 9780903622882.

Fay, B. (1987). Critical Social Science, ISBN 9780745604213.

Fitzgerald, L.A. (1999). “Why there’s nothing wrong with systems thinking a little chaos won’t fix? A critique of modern systems theory and the practice of organizational change it informs,” Systemic Practice and Action Research, ISSN 1094-429X, 12: 229-235.

Flood, R.L. and Carson, E.R. (1993). Dealing with Complexity: An Introduction to the Theory and Application of Systems Science, ISBN 9780306442995.

Foote, J., Baker, V., Gregor, J., Hepi, M., Houston, D. and Midgley, G. (2007). “Systemic problem structuring applied to community involvement in water conservation,” Journal of the Operational Research Society, ISSN 0160-5682, 58: 645-654.

Foote, J.L., Houston, D.J. and North, N.H. (2002). “Betwixt and between: ritual and the management of an ultrasound waiting list,” Health Care Analysis, ISSN 1065-3058, 10: 357-377.

Foucault, M. (1980). Power/Knowledge: Selected Interviews and Other Writings, 1972-1977, ISBN 9780855275570.

Foucault, M. (1984). “What is enlightenment?” in P. Rabinow (ed.), The Foucault Reader, ISBN 9780140124866.

Gare, A. (1996). Nihilism Inc.: Environmental Destruction and the Metaphysics of Sustainability, ISBN 9781876236007.

Gregory, W.J. (2000). “Transforming self and society: A ‘critical appreciation’ model,” Systemic Practice and Action Research, ISSN 1094-429X, 13: 475-501.

Habermas, J. (1971). Toward a Rational Society, ISBN 9780807041765.

Habermas, J. (1972). Knowledge and Human Interests, ISBN 9780435823832.

Habermas, J. (1984a). The Theory of Communicative Action, Volume One: Reason and the Rationalization of Society, ISBN 9780745603865.

Habermas, J. (1984b). The Theory of Communicative Action, Volume Two: The Critique of Functionalist Reason, ISBN 9780745607702.

Hardy, C. (1998). Networks of Meaning: A Bridge between Mind and Matter, ISBN 9780275960353.

Hastorf, A.H. and Myro, G. (1959). “The effect of meaning on binocular rivalry,” American Journal of Psychology, ISSN 0002-9556, 72: 393-400.

Hesse, M.B. (1961). Forces and Fields: The Concept of Action at a Distance in the History of Physics, ISBN 9780486442402 (2005).

Hesse, M.B. (1974). The Structure of Scientific Inference, ISBN 9780333150702.

Holland, J.H. (1998). Emergence: From Chaos to Order, ISBN 9780192862112.

Hollway, W. (1989). Subjectivity and Method in Psychology: Gender, Meaning and Science, ISBN 9780803982079.

Jackson, M.C. (1991). Systems Methodology for the Management Sciences, ISBN 9780306438776.

Kant, I. (1787). The Critique of Pure Reason, ISBN 9780486432540 (2003).

Kelly, G.A. (1955). The Psychology of Personal Constructs, ISBN 9780393001525 (1963).

Koehler, W. (1938). The Place of Values in the World of Fact, ISBN 9780871401076 (1976).

Kremyanskiy, V.I. (1958). “Certain peculiarities of organisms as a ‘system’ from the point of view of physics, cybernetics and biology,” Voprosy Filosofii (Problems of Philosophy), August, 1958, pp. 97-107. Translated from the Russian by A. Rapoport (1960) in General Systems, 5: 221-230.

Kuhn, T. (1970). The Structure of Scientific Revolutions, ISBN 9780226458038.

Laszlo, E. (1995). The Interconnected Universe: Conceptual Foundations of Transdisciplinary Unified Theory, ISBN 9789810222024.

Latour, B. (1991). We Have Never Been Modern, ISBN 9780674948396.

Lewin, K. (1946). “Action research and minority problems,” Journal of Social Issues, ISSN 0022-4537, 2(4): 34-46.

Lewin, K. (1947). “Frontiers in group dynamics,” Human Relations, ISSN 0018-7267, 1: 2-38.

Lewin, K. (1948). Resolving Social Conflicts, ISBN 9781557984159 (1997).

Lewin, K. (1952). Field Theory in the Social Sciences, ISBN 9781557984159 (1997).

Lovelock, J. (1988). The Ages of Gaia: A Biography of our Living Earth, ISBN 9780192177704.

MacIntyre, A. (1985). After Virtue: A Study in Moral Theory, ISBN 9780715616635.

Maturana, H. (1988a). Ontology of Observing: The Biological Foundations of Self Consciousness and the Physical Domain of Existence, http://www.inteco.cl/biology/ontology/.

Maturana, H. (1988b). “Reality: The search for objectivity or the quest for a compelling argument,” Irish Journal of Psychology, ISSN 0303-3910, 9: 25-82.

Maturana, H.R. and Varela, F.J. (1992). The Tree of Knowledge: The Biological Roots of Human Understanding, ISBN 9780877736424.

Maze, J.R. (1983). The Meaning of Behavior, ISBN 9780041500813.

Merkel, W.T. and Searight, H.R. (1992). “Why families are not like swamps, solar systems, or thermostats: Some limits of systems theory as applied to family therapy,” Contemporary Family Therapy, ISSN 0892-2764, 14: 33-50.

Midgley, G. (1992). “The sacred and profane in critical systems thinking,” Systems Practice, ISSN 0894-9859, 5: 5-16.

Midgley, G. (1994). “Ecology and the poverty of humanism: A critical systems perspective,” Systems Research, ISSN 0731-7239, 11: 67-76.

Midgley, G. (1997a). “Mixing methods: Developing systemic intervention,” in J. Mingers and A. Gill (eds.), Multimethodology: The Theory and Practice of Combining Management Science Methodologies, ISBN 9780471974901.

Midgley, G. (1997b). “Dealing with coercion: Critical systems heuristics and beyond,” Systems Practice, ISSN 0894-9859, 10: 37-57.

Midgley, G. (2000). Systemic Intervention: Philosophy, Methodology, and Practice, ISBN 9780306464881.

Midgley, G. (2001). “Rethinking the unity of science,” International Journal of General Systems, ISSN 0308-1079, 30: 379-409.

Midgley, G. (2003). “Science as systemic intervention: Some implications of systems thinking and complexity for the philosophy of science,” Systemic Practice and Action Research, ISSN 1094-429X, 16: 77-97.

Midgley, G. and Shen, C-Y. (2007). “Toward a Buddhist systems methodology 2: An exploratory, questioning approach,” Systemic Practice and Action Research, ISSN 1094-429X, 20: 195-210.

Midgley, G., Ahuriri-Driscoll, A., Baker, V., Foote, J., Hepi, M., Taimona, H., Rogers-Koroheke, M., Gregor, J., Gregory, W., Lange, M., Veth, J., Winstanley, A. and Wood, D. (2007). “Practitioner identity in systemic intervention: Reflections on the promotion of environmental health through Maori community development,” Systems Research and Behavioral Science, ISSN 1092-7026, 24: 233-247.

Midgley, G., Munlo, I. and Brown, M. (1998). “The theory and practice of boundary critique: Developing housing services for older people,” Journal of the Operational Research Society, ISSN 0160-5682, 49: 467-478.

Miller, J.G. (1978). Living Systems, ISBN 9780070420151.

Northrop, F.S.C. (1967). “The method and theories of physical science and their bearing upon biological organization,” in R.W. Marks (ed.), Great Ideas in Modern Science, New York, NY: Bantam Books.

Oliga, J. (1996). Power, Ideology, and Control, ISBN 9780306451607.

Pam, A. (1993). “Family systems theory: A critical view,” New Ideas in Psychology, ISSN 0732-118X, 11: 77-94.

Pepper, S.C. (1942). World Hypotheses: A Study in Evidence, ISBN 9780520009943 (1961).

Popper, K.R. (1959). The Logic of Scientific Discovery, ISBN 9780415278447 (2002).

Popper, K.R. (1966). The Open Society and its Enemies, ISBN 9780691019727 (1971).

Popper, K.R. (1972). Objective Knowledge: An Evolutionary Approach, ISBN 9780198750246.

Popper, K.R. (1976). “The logic of the social sciences,” in T.W. Adorno, H. Albert, R. Dahrendorf, J. Habermas, H. Pilot and K.R. Popper (eds.), The Positivist Dispute in German Sociology, ISBN 9780435826550.

Prigogine, I. (1987). “Exploring complexity,” European Journal of Operational Research, ISSN 0377-2217, 30: 97-103.

Prigogine, I. (1989). “The rediscovery of time: Science in a world of limited predictability,” Beshara, ISSN 0954-0067, 9: 28-32.

Prigogine, I. and Stengers, I. (1984). Order out of Chaos: Man’s New Dialogue with Nature, ISBN 9780006541158.

Quine, W.V. (1969). Ontological Relativity and Other Essays, ISBN 9780231083577 (1977).

Quine, W.V. (1990). Pursuit of Truth, ISBN 9780674739505.

Quine, W.V. and Ullian, J.S. (1978). The Web of Belief, ISBN 9780394321790.

Reason, P. (1996). “Comments on Midgley’s paper, ‘The Theory and Practice of Boundary Critique’,” in J. Wilby (ed.), Forum One: Transcripts and Reflections, ISBN 9780859589536.

Reason, P. and Heron, J. (1995). “Co-operative inquiry,” in J.A. Smith, R. Harré and L. Van Langenhove (eds.), Rethinking Methods in Psychology, ISBN 9780803977334.

Resnik, D.B. (1998). The Ethics of Science: An Introduction, ISBN 9780415166980.

Richardson, K.A., Mathieson, G. and Cilliers, P. (2009). “Complexity thinking and military operational analysis,” in K.A. Richardson (ed.), Knots, Lace and Tartan: Making Sense of Complex Human Systems in Military Operations Research - The Selected Works of Graham L. Mathieson, ISBN 9780981703244, pp. 27-69.

Robertson, J. (1998). Transforming Economic Life: A Millennial Challenge, ISBN 9781870098724.

Romm, N.R.A. (2001). Accountability in Social Research: Issues and Debates, ISBN 9780306465642.

Rorty, R. (1989). Contingency, Irony and Solidarity, ISBN 9780521367813.

Rosenblatt, P.C. (1994). Metaphors of Family Systems Theory: Toward New Constructions, ISBN 9780898623222.

Seidman, E. (1988). “Back to the future, community psychology: Unfolding a theory of social intervention,” American Journal of Community Psychology, ISSN 0091-0562, 16: 3-24.

Skinner, B.F. (1971). Beyond Freedom and Dignity, ISBN 9780140216615.

Stacey, R.D., Griffin, D. and Shaw, P. (2000). Complexity and Management: Fad or Radical Challenge to Systems Thinking? ISBN 9780415247610.

Ulrich, W. (1983). Critical Heuristics of Social Planning: A New Approach to Practical Philosophy, ISBN 9783258032191.

Ulrich, W. (1987). “Critical heuristics of social systems design,” European Journal of Operational Research, ISSN 0377-2217, 31: 276-283.

Ulrich, W. (1994). “Can we secure future-responsive management through systems thinking and design?” Interfaces, ISSN 0092-2102, 24: 26-37.

Ulrich, W. (1996). A Primer to Critical Systems Heuristics for Action Researchers, ISBN 9780859588720.

Vega-Romero, R. (1999). Health Care and Social Justice Evaluation: A Critical and Pluralist Approach, Ph.D. Thesis, University of Hull.

Vickers, G. (1972). Freedom in a Rocking Boat: Changing Values in an Unstable Society, ISBN 9780140212051.

Waismann, F. (1952). “Verifiability,” in A.G.N. Flew (ed.), Logic and Language, ISBN 9780631173007 (1979).

Weimer, W.B. (1979). Notes on the Methodology of Scientific Research, ISBN 9780470266502.

Wright, D.S., Taylor, A., Davies, D.R., Sluckin, W., Lee, S.G.M. and Reason, J.T. (1970). Introducing Psychology: An Experimental Approach, ISBN 9780140801002.

Yolles, M. (2001). “Viable boundary critique,” Journal of the Operational Research Society, ISSN 0160-5682, 52: 35-47.

