I would like to start by thanking you for inviting me. I feel quite privileged to be invited to a complexity conference, given that I haven’t made much of a contribution to complexity thinking at all, being primarily engaged with the systems community. But my hope is that there can be learning across these two communities, and that’s one of the things I want to talk about today.
My talk is called “Systems Thinking for Community Involvement in Policy Analysis”, and over the years I have talked with numerous audiences, particularly in the areas of management and community development. Some approaches that I’ve used are adaptable across domains, so I’m hoping that what I say will have some relevance to policy.
I want to start by acknowledging some of the history of policy analysis because, as I understand it, in the 1960s policy analysis and systems analysis were considered virtually synonymous—most policy people were using systems analysis in some way. That approach came into disrepute in the late 60s and early 70s. In this presentation I want to touch on what happened with systems analysis, in case there are people out there who are skeptical about why somebody would even bother to talk about systems thinking again. I also want to give some information about where systems thinking has moved to, because it has entered a space that has a lot of commonalities with complexity thinking. I would also like to talk about the relationship between systems thinking and complexity science before going on to my own work, which is about systemic intervention.
When I talk about ‘systemic intervention’, I am making an assumption that I think all systems thinking and complexity approaches make: that everything in the universe is directly or indirectly connected with everything else. However, you can’t have a God’s eye view of that interconnectedness, so there are inevitable limits to understanding, and it is those limits that we call boundaries. So, systemic intervention is fundamentally about how to explore those boundaries, and how to take account of the inevitable lack of comprehensiveness and begin to deal with it. This will lead me on to something that I’ve called boundary critique. And by this I mean being critical of boundaries, rethinking them, considering the different meanings they invoke and the values associated with those meanings.
The discussion of boundary critique will take me on to the need for theoretical and methodological pluralism, drawing upon mixed methods, and evolving methodology on an ongoing basis. Throughout this talk I will give you some practical examples, as I think the ideas are best understood when they are grounded in practice.
The critique of systems analysis (1960s and 1970s) (Slide 1)
Let us start with what happened to systems analysis in the early days. People may be aware that there were lots of large-scale modeling projects in the 1950s and 1960s. The ones that seemed to come into most disrepute were the ones where giant models were built, especially in California (the Californian experience seems to be the typical one that other authors have written about), where local government offices were recruiting consultants to build models of whole cities with no particular purpose in mind. The belief was that a policymaker could go to the modeler and say, “Well, can you now answer this question for me given all the wonderful data that you have?” Of course, by building models without purposes you end up with such huge complexity that the results are largely unreliable and meaningless. In the 1960s, millions of dollars were invested in giant models of this nature, with limited practical results. I call this phenomenon the death of the super model.
People also began to realize the limits of conventional rational planning. And the example that I like to give (it’s not really an example from systems thinking actually—it’s an example from operations research in the UK) is the planning of Stansted airport. Here, they spent a lot of money commissioning an analysis of the best option for building a new London airport. They evaluated a number of alternatives, taking account of environmental and social impacts, etc., and then said, “This is the best one.” The politicians promptly replied, “Well, that’s no good. It doesn’t take into account our political realities, and we’ll choose this one instead.” This example is widely regarded in the OR community as illustrating the decline of rational planning. Actually to me it’s an example of irrational planning. It’s irrational because it did not take into account the perspectives (or the rationalities—plural) of those people who needed to take the decision. That doesn’t mean that you just agree with political perspectives, regardless of the assumptions they are based upon, but it does mean that you have to work with them in order to be able to get something that’s going to be useful.
Interestingly, these issues were not only encountered by systems analysts. There was also a major systems engineering movement that spread across the world in the 1950s and 1960s. With the term ‘engineering’, of course, come all the connotations of being able to command and control social systems, as if people with their own self-consciousness didn’t actually sometimes say, “I want to resist those kinds of improvements.” So, the engineering metaphor began to die away.
The notion of ‘expertise’ also came under scrutiny, i.e., the idea that modelers and scientists always know best. People began to realize that other kinds of expertise (e.g., the expertise of the people on the receiving end of some of these policies), were actually important.
People also began to appreciate the limits of optimization approaches. It is simply the case that what is optimal from one perspective may, given a different value set and a different perspective, be completely unacceptable. So, simply talking about optimization as the only thing that we do is not enough.
With the inability to deal adequately with conflicting values, viewpoints, policy preferences, ideologies, power relations, etc., the limitations of some of the ‘engineering’, ‘rational’ and ‘optimization’ approaches began to show through. People began to realize that, if you simply start with the goal of one stakeholder and assume that this is unproblematic, then all kinds of side effects can emerge.
Finally, on Slide 1, I have said that, in the 1960s, the ‘self-justifying ideology’ of systems science was one of comprehensive analysis. What often happened is that if a model failed (i.e., if people were not satisfied with the results), the modelers simply said “we weren’t comprehensive enough so we need more systems analysis.” If that kind of reply is given often enough, people will eventually declare, “the Emperor has no clothes.”
So that’s what was happening in the 1960s and 1970s, with the backlash against systems analysis, and it really took systems thinking a good decade to recover its credibility. In that process of recovery, some quite dramatic shifts in systems thinking happened. I’ll talk very generally about what those shifts involved. Of course you will always be able to find exceptions to these generalizations, and there are dimensions to the shift that I will not cover, but here I am only able to provide an overview.
More recent systems thinking principles (Slide 2)
Instead of producing massive super models, modeling for particular purposes (rather than all purposes) became more usual. Much more focused modeling was undertaken that didn’t necessarily pretend to be comprehensive, but actually thought about what is involved in making a model fit for purpose. Also, modelers explored those purposes instead of just taking them for granted. So, now people began to embed that modeling in a social process, as opposed to simply producing a mathematical model and thinking that it will produce the answers on its own.
Part of the new socially-embedded modeling process was accepting the relevance of multiple rationalities, instead of generating answers from a single ‘expert’ rationality.
The engineering metaphor was largely abandoned in favor of engaging with self-conscious actors, although it is still around in a few places. For example, in the military domain, people still talk about systems engineering. It’s also still prominent in China where there’s an institute for systems engineering (which has over 600 researchers) that is as important as the institutes for physics, biology and chemistry. In Colombia, there are still systems engineering degrees, but what they teach is actually the whole breadth of systems thinking, so the term has changed its meaning.
The democratization of expertise has also taken place. Instead of assuming that the necessary expertise is simply scientific, modeling or policy expertise, many other possible types of expertise are recognized, including perspectives from people in the community. From my own point of view, it is really important to preserve the notion of expertise because, although there have been some people arguing that we should just get rid of the term, it’s quite dangerous to pretend that the systems thinker, or the intervener, is ‘just another participant’. They actually play quite a pivotal role in constructing events, and by labeling it as a particular kind of expertise, you can make them accountable. If you lose the notion of expertise altogether, there is a risk that you lose accountability for the exercise of power.
The value of optimization approaches has not been entirely undermined, but there is a growing acceptance that such approaches have limited spheres of application. I like something that somebody said yesterday about islands of tractability, as it is an idea that has really come into favor. The idea is that there are, of course, valid applications for optimization techniques. You want the trains to run on time. You want to be able to get to a conference like this on time. Of course we need optimization techniques, but they have limited domains of application. We also need approaches that account for conflicting values, viewpoints, policy preferences, etc.
Ultimately, the contemporary systems view urges us to accept that systems thinking is about dealing with the inevitable lack of comprehensiveness, and is not the means to achieve comprehensiveness. This is a really crucial shift in how systems thinking has developed.
Systems thinking and complexity science (Slide 3)
In terms of the relationship between complexity and systems (why I’m here basically, in terms of learning from complexity people and hopefully the learning being two-way), I see systems thinking as a discourse that has a community of people who are engaged within it, with fuzzy boundaries at the edges. I think that complexity is quite similar in that respect. There’s a community of complexity researchers, and the two communities overlap at their fuzzy edges.
Multiple paradigms of systems and complexity (Slide 4)
And yet neither complexity nor systems thinking is easy to define. We saw that in our first day here. It was quite clear that people are using the words in different ways. However, it is not necessarily productive to try to define them exactly. Arguably a more constructive approach, which gives room for different perspectives but also gives us an overview, is to look at the main paradigms within a research area. I therefore want to give some examples of paradigms in both complexity and systems thinking. I’ve done this in Slide 4.
It seems to me that the same set of paradigms has emerged in both perspectives: you have the basic scientific theories; the modeling approaches; the interpretive and social interaction approaches; and the critical approaches (which are really about values and ethics). Of course this is just a story that I’ve created to reduce the complexity, and there is also a lot of variety that is not represented in Slide 4, but I find these similarities quite interesting.
The meaning of ‘systemic intervention’ (Slide 5)
I have presented the previous material primarily to situate where I’m coming from, and why I’m here. I next want to give you a little bit of background to my own work. The systemic intervention research program that I’ve been developing is something that I’ve been working on over the last twenty-or-so years, mostly in the UK, but now in New Zealand. It’s a program that has been continually building theory and practice that mutually inform one another, so I’ve been engaged in a lot of practical projects alongside the theoretical work.
I want to start by defining what I mean by ‘intervention’, knowing that this definition will raise more questions than it answers. I want you to ride with this because, as the talk unfolds, you’ll see where I’m going with it. I want to define intervention as purposeful action by an agent to create change. Now, that doesn’t mean completely pre-planned, or based on flawless prediction, or any of those sorts of things, but I think you can talk about action being purposeful. It doesn’t mean that the purpose is necessarily coming from outside, as if you’re manipulating a system. Whether you’re coming from inside an organization or whether you’re brought in from outside (like a consultant), you become part of the organization as soon as you engage with it. Once action starts, it is always action from inside.
And what I mean by systemic intervention—going back to what I said right at the very beginning—is that because we can’t know the interconnectedness of reality, the full interconnectedness of everything (i.e., we cannot have that God’s eye view), we necessarily have boundaries. Whether you’re aware of them or not, in your understanding of anything there are boundaries involved. So, systemic intervention for me means purposeful action by an agent to create change in relation to reflection on those boundaries. So that’s the basic concept I like to use to begin to think about how you deal with the impossibility of knowing everything.
Some ideas about boundaries
What I want to do next is go very briefly through the history of some of the ideas about boundaries in the systems community that I think might be relevant to the complexity community as well. I want to start with the basic boundary idea that was introduced by Churchman in the 1960s, because he made a radical departure from the previous systems ideas where people just assumed that boundaries are reflections of reality: real markers of the edge of a system (e.g., the skin of my body being a boundary). What Churchman did was to say that boundaries could be conceptual or social constructs. They mark the inclusion or exclusion of stakeholders, people and issues. They demarcate what is relevant to an analysis. A boundary may coincide with a physical edge or not, depending on the purposes of the person looking at the system.
The ellipse in Slide 6 represents a boundary which marks who’s included, who’s excluded, what issues are in the analysis, and what issues are out. The peak represents the values that are associated with that particular boundary. Churchman’s key insight was that value judgments always drive boundary judgments, and so it is impossible to have a situation where you have a bounded understanding without having some values lying behind that. So, the idea of absolutely objective analysis is problematic. You might be able to reconstruct the notion of objectivity, but you have to acknowledge that there are values involved in any boundary judgment. But, at the same time, because we don’t come to a situation completely from the outside with pre-given values, exploring new boundaries can also change the values we hold.
In terms of my own experience of facilitating systemic interventions, if you start to talk about boundaries, people often get stuck with thinking about current, familiar boundaries, and they tend to be constrained in their thinking. If you actually start with values, people are often less used to thinking that way, and it opens up considerations more easily. Thus, my starting point tends to be around values, moving on to the boundaries that these imply.
Churchman was working in the 1960s, and his mission was to create an ethical systems practice. He believed that, because boundaries constrain values, the most ethical systems practice is one that pushes out the boundaries of analysis widely, to be inclusive of as many different value perspectives as possible—but without going to the extreme of over-inclusion so that action is paralyzed. He basically said that you should push out the boundaries as widely as possible, within the limits of the human capacity to process information. However, in the early 1980s, one of his students, Werner Ulrich, was quite critical of this. He said, “Well, that’s all very well in theory, but in practice there are a lot of constraints that stop you pushing out the boundaries as widely as possible. And, it’s not necessarily irrational to live with those constraints when you have to take practical actions.” He wanted to think about how you rationally justify boundary judgments, given that you can’t be as comprehensive as you would want to be a lot of the time.
In order to answer that question, “how can you rationally justify system boundaries?”, he had to ask a deeper question, which is, “what is rationality?” I’m sure nobody has come to this policy analysis workshop to answer the question “what is rationality?”, but Ulrich actually had to address this in order to deal with the problem of boundary setting, and he came to the conclusion that any argument concerned with the justification of a boundary is always expressed in language. Language is something that is socially shared with other people: it’s not something that is a purely private affair. That doesn’t mean we always completely agree on the meanings of words and signs, but they are nevertheless socially shared. So he came up with the principle that to say something is rationally justified means that it has to be agreed by all those involved in, and affected by, the thing that we’re looking at. Of course, Ulrich recognized that this is a high standard of rationality to achieve in many practical situations and said, “Yes, but it is something you try to move towards; you try to secure an agreement between those involved in planning and those affected by it, even if you know that you will not always succeed.”
To make this idea practical, he developed a set of ‘critical systems heuristics’ questions that both planners and ordinary people could use in debate to think through issues. These questions were about what the situation currently is and what it ought to be. The twelve questions he developed focus on four areas, namely:
Motivation—why would you want to be planning this system in the first place?
Control—who should have decision-making power? What should people have some say over, and what shouldn’t they have a say over?
Expertise—what forms of knowledge are necessary, and from what sources?
Legitimacy—what are the values this is based on? Are you creating an oppressive system and, if so, what should you do about it (if anything)?
So there are twelve questions, three for each of these four areas, and I’ve used them myself in a number of different studies: for example, with children living on the streets; with people with mental health problems in prison; and with older people in residential care. I think Ulrich is quite right to claim that these are questions that ordinary people with no experience of planning can engage with. I have found that ordinary people can produce outputs that are at least as comprehensive as those generated by professional planners, providing that the questions are translated into everyday language (Ulrich’s original questions contain some academic jargon, so you have to rephrase them).
When I came into this research area in the mid-to-late 1980s, I was interested in what Ulrich had done, but I was also interested in what happens when different value and boundary judgments come into conflict, i.e., when you have a situation where people make different boundary judgments and have different values, and they get into entrenched conflicts that begin to stabilize. I wanted to both try to explain that phenomenon, and see if I could identify some methods to do something about it. So, I developed the idea that in most situations there isn’t just one boundary judgment going on, but multiple judgments (as depicted in Slide 7). The inner ellipse represents a boundary judgment that might be made by one group, and the next ellipse is a boundary judgment that might be made by a second group. The area in between these two boundaries is referred to as the marginal area. There are things that are of core interest to everybody, and other things in the marginal area that matter to some people but not to others.
To give an example, consider unemployment. An industrial organization may have ethical owners and managers who are concerned with the welfare of their employees: they are concerned with paying them a decent wage, but are not interested in giving money to people who are unemployed in the local community. This community is outside their sphere of concern. As is quite common and understandable, they are concerned with the health of their own organization, with their own employees. On the other hand, you may have activists in the community who are very interested in people who are unemployed, and who believe that the industry has some responsibility to deal with unemployment. So you begin to get conflict.
Now Slide 8. This might look horrific at first, but I’ll talk you through it. First, you see the same boundaries here as in Slide 7. In the center is a narrow boundary judgment. Let us say that this is the one made by the industrial organization, which claims that it only needs to be interested in the welfare of its own employees. The next (middle) boundary represents the one made by the community activists who say that the industrial organization should also be interested in dealing with unemployment. The two peaks represent the values that are associated with each boundary judgment, and these values come into conflict (represented by the ‘explosion’ between the two values).
I realized through practical experiences in a number of projects that these kinds of situations are not necessarily always resolved. There is a tendency to assume that when you’ve got a conflict, somehow the conflict gets resolved and everything’s nice in the end. However, a lot of conflicts are not easily resolved: they stabilize, and they perpetuate for weeks, months, years, or even generations, and there is something going on that creates this situation. In my research I began to look at what is happening to the things in the margins, and I realized that they have a role in stabilizing conflict situations. I noticed that the things in the margins are attributed the status of being sacred or profane, and I use such strong words to emphasize the power of these kinds of judgments. If the things in the margins are viewed as profane, then it justifies only looking at the narrow boundary. So people who are unemployed, for example, begin to be looked on as scroungers who are wasting taxpayers’ money, thereby justifying people in the organization saying, “It’s not a concern of ours; these are wasters; and it’s not our responsibility.” Or they get viewed as sacred, so the community activists begin to say, “If we could only harness the energy of the unemployed, they’ll be the vanguard for a new political movement.” I can certainly identify with this sentiment personally: in my early 20s I was involved in a political party, and I stood outside employment exchanges (where unemployed people register for welfare) handing out leaflets seeking to recruit people. In retrospect, I can now see that I was making the unemployed people sacred. My perspective on this was just part of a wider system that included those who saw the same people as profane.
Interestingly, there’s rarely a consensus around whether things in the margins—whether issues or people—are sacred or profane, and this is critical to the perpetuation of conflict. The stabilization eventually happens through the institutionalization of ritual, and either the sacred or profane attribution is made dominant. To give an example of ritual, I was unemployed for three years in the early 1980s (the early Reagan/Thatcher years), and I had to sign a register once a week to declare that I was eligible for work. That particular ritual had a function: it allowed the people working in the employment exchange to know that I was available for work. However, it was also an exercise in ritual humiliation that basically expressed the view that unemployed people are ‘profane’.
So, this is the kind of process that I believe is going on, and it happens at all sorts of levels. I’ve seen it going on in small groups; within organizations; between organizations; across communities; and in international relations. Some processes are easier to shift than others. Some are very, very difficult to shift indeed. I was at a conference a few years ago, and when I reached this part of the talk a woman in the audience asked me an absolute bummer of a question. She said, “I’m from Israel, and this model really explains the Palestinian/Israeli conflict. What would you do about it?” I said, “Well, some problems are easier to diagnose than they are to solve!” What is actually going on in some really, really entrenched situations is that this whole process of marginalization is given life, and made very resistant to change, by conflicting discourses that are embedded in institutions across societies.
I can illustrate with the example of unemployment again. The marginal status of the unemployed is extremely difficult to change. The reason I see for this relates to the conflict that goes on in our institutions between the discourses of capitalism and liberalism. Although these two discourses are mutually supportive in many situations (for instance, liberalism promotes the ideal of individual choice and capitalism allows the manufacture of a variety of goods to choose between), in relation to unemployment they are not. In capitalist societies, you need organizations to be responsible for their own employees, but they mustn’t be responsible for others in the community. If organizations had to be responsible for all the people in their local communities (which the liberal ideal of equal citizenship might suggest would be a good thing) then they could not have any influence over their own competitiveness and the capitalist system as we know it would collapse. At the same time, if you actually said that it is legitimate for unemployed people to be completely neglected, to starve on the streets, then the liberal ideal of equal citizenship would collapse instead. The only way to preserve both things at the same time is to have unemployed people neither totally inside nor totally outside. They have to be kept in that marginal position.
Of course, like any model, Slide 8 is an oversimplification. In real situations there are lots of dynamic processes like this interacting. I should also note that my interest in this is not purely sociological. My main interest is to ask, “What meaning can this have for intervention? What can you actually learn from this? How can you reflect on these kinds of processes and do something about them?”
An example: Developing services for young people (under 16) living on the streets
To give you a practical example to ground these ideas a little better, I want to very briefly talk about a project, which was about developing services for young people under sixteen living on the streets. This is a project I worked on in Manchester (UK) in partnership with two colleagues, Alan Boyd and Mandy Brown. Three voluntary organizations commissioned this project because they were aware that there were lots of homeless children living on the streets, and they were falling through the net of all the agencies. No agency had a statutory responsibility to deal with the situation: these particular children were not in the remit of any one of them.
I’d like to give you the whole story, but here I’ll just focus on one aspect of marginalization. It was really important to us to involve young people centrally in this project. We noticed that, in terms of Slide 8, there were two kinds of marginalization going on. First of all, young people in general are marginalized in the sense that they’re regarded as less rational, and less able to make informed decisions about their own lives, than adults. As such, they can only vote when they’re eighteen; they can only buy alcohol at a certain age; there’s an age of consent for sex; etc. Various things mark out young people as different in relation to their decision making ability. Secondly, these particular children were living on the streets. We’re talking about something like 2,000 children in one year living on the streets of Manchester. I thought this was a problem in Brazil. I didn’t think it was a problem that existed in the UK. So it was a shock to me, as a UK researcher, that this sort of thing was going on. It was really a hidden problem. If they stay on the streets for any length of time, these children can only survive through petty crime or prostitution. They are children who could easily be classified as ‘troubled teenagers’ and be marginalized in that way.
How did we go about getting broad involvement (Slide 9)? First of all, we sought the views of young people before approaching professionals: the voices of the young people were actually the foundation upon which the professionals could build commitment. Going to young people first was effective in harnessing multi-agency involvement because their voices were very, very powerful. It was really strong, emotional material that we generated through interviews with children on the streets. This basically made it emotionally impossible for the agencies to say that they were not going to get involved. We communicated the young people’s words, not just ours, to professionals. This was really important, partly because of the need for emotional engagement, and partly because we had a situation where, as we were interviewing young people on the streets, a number of them were making quite strong allegations about how the police were behaving towards them; that they’d been abused by the police in various ways. We had a workshop with the police, and we decided that, ethically, we couldn’t just set this aside and pretend that it wasn’t happening. So we produced quotations—all of the quotations, whether they were positive or negative about the police, filling three pages in total—and gave them out at the workshop. Initially there was silence in the room. I was sitting there thinking, “This could just explode in any direction”. It was a huge risk. But the very first person who spoke put their head up and said, “I know who did this one!” And people spontaneously started to say, “Yes, we have to deal with it.” Within an hour, they produced five different ideas for how they could actually correct the situation. I enjoyed working with the police: I found it to be a really proactive agency.
When we actually got on to designing the services for homeless children, we used the same design methods with the young people as with professionals. We actually had a disagreement in our team over this. One member of the team suggested that we ought to use a ‘playful’ approach that would allow children to represent their concerns in a play or using art techniques. My feeling was that, if we took a playful approach, it would have been very easy for the professionals to have said, “Oh yes, that’s very nice, we’ll take it into account, but here’s the proper plan that we have produced.” To avoid that, I thought we needed to use the same process with the young people as with the professionals. So, in separate workshops with children and agency representatives, we used the principles of interactive planning developed by Russ Ackoff.
There are three principles of design in interactive planning:
Plans have to be technologically feasible. So there are no magic solutions to housing problems, like little fold-up houses in your pocket.
What is produced has to be viable. It has to be sustainable in social, ecological, economic, and cultural terms. For the purposes of design you can disregard start-up costs, but a new development has to be sustainable by the agencies that are going to run it.
It has to be adaptable. You mustn’t produce some kind of super bureaucracy that is impossible to change when circumstances change around it.
These principles are designed to promote creativity and the development of ambitious proposals for change, while preventing people from fixing on completely unrealistic ideas. We also used the critical systems heuristics questions that I mentioned before (about motivation, control, expertise and legitimacy) to guide the debate. This made sure that questions of governance and young people’s involvement were considered as part of the process. The young people actually produced much more detailed designs than the professionals. They dealt with things in a really sophisticated way. For example, they were talking about building a refuge for young people in the center of the city, and they discussed the drugs policy that would be needed in that refuge. There was one girl who said, “You need a ‘three strikes and you’re out’ policy, because drugs create violence in a refuge like this, and the last thing we want is violence when people are already in a vulnerable situation.” Another girl then turned around to her and said, “How can you say that? You take drugs every day.” And she replied, “What I do and what is necessary for the refuge are two different things.” So there was a level of awareness and responsibility amongst the children that was really striking for the professionals, and allowed the professionals to have confidence to take these ideas forward.
Theoretical pluralism (Slide 10)
I’ve already talked about the notion of boundaries, and the idea that it is possible to explore different boundary judgments and the values associated with these. This legitimizes theoretical pluralism: drawing upon multiple theories depending on our purposes, rather than seeking one single ‘grand’ theory. Different theories assume different boundaries of analysis. For example, Maturana and Varela’s theory of autopoiesis is about the biological nature of human beings, and it tends to put its primary emphasis on the boundary of the individual organism.
So, different theories assume different boundaries. Logically then, if it’s reasonable to choose between different boundaries, it’s also reasonable to choose between different theories. Whether or not you have to harmonize those theories in an intervention depends entirely on your purposes. If you’re supporting the development of new policy to deal with a social issue, and draw upon several different theories to understand the human relationships unfolding through your intervention, maybe you don’t actually need to harmonize the different assumptions of those theories, even if they are commonly seen as incommensurable. That might be an unnecessary exercise because the primary purpose is to support the emergence of a new policy. But if what you’re trying to do is produce a new theory of human relationships to enhance our understanding of the policy making process, then harmonizing any contributory theories would be important to making the final theoretical product coherent. It entirely depends on your purposes and audience.
Methodological pluralism (Slide 11)
Now I want to move on from theoretical pluralism to methodological pluralism. Different methodologies and methods make different theoretical assumptions, so if you can have theoretical pluralism, you can certainly have methodological pluralism. This is the theoretical rationale for methodological pluralism, but the most important reason for embracing it is practical. There is no method, as far as I can see, that can do everything. It is only by drawing on a plurality of methods that we can respond to the full variety of purposes that arise in practice.
Before continuing, I want to make a distinction between methodology and method:
Method is a set of techniques to achieve some purpose, and;
Methodologies are the theories and ideas that enable one to understand why particular methods are appropriate.
There are two kinds of methodological pluralism (Slide 12): there is learning from other methodologies to inform your own, and there is mixing methods from different methodological sources. First let us look at learning from other methodologies:
You can build a methodology in an ongoing way, learning from other people. I have built my understanding of methodology over a period of approximately twenty-one years. In order to be credible, you have to have some coherence within the set of ideas you work with. But to have learning from other perspectives, you also need to welcome disjunctions at the same time. You have to be able to tolerate a certain amount of discord in your thinking in order to be able to take in ideas from others. In my own work, I go through periods of opening up to other ideas followed by periods of consolidation. I tend to think of developing a methodology as producing a fragmentary whole, which is a deliberately contradictory concept: there is sufficient coherence to be able to observe some consistency in the argumentation, but new ideas can still be embraced, and integration is not necessarily immediate. What this allows you to do in an academic or practitioner community is avoid the situation that communities often get into where people build their methodologies like castles. Then they go up to the ramparts and start firing at all the other people who try to knock their castles down. If we can accept that somebody else having a good idea doesn’t have to undermine our own thinking, because we can tolerate some disjunctions as we take in ideas from others, then this enables much more productive relationships and learning opportunities in our academic and practitioner communities.
Then there is the other form of methodological pluralism: mixing methods from different methodological sources.
An example: Evaluating a diversion from custody service for mentally disordered offenders (Slide 13)
To give you an example of what I mean when I talk about methodological pluralism at the level of methods, I’ll briefly discuss a project that I undertook with a colleague, Claire Cohen, when we were asked to evaluate a diversion from custody service for mentally disordered offenders (people with mental health problems who inappropriately end up in prison). Instead of getting treatment, these people have been incarcerated for a crime, or they’re in custody in a police cell awaiting charge or trial, and they’re not getting any help. The service brought together a social worker, a probation officer and a psychiatric nurse who were going around police cells to identify people with mental health problems. They were trying to work with the police and with the prison service to get mentally disordered offenders out of prison and find them alternatives to custody. When we were offered this evaluation, I could see straight away that they were responding to an existing situation instead of proactively trying to prevent it. So I suggested, “Instead of being a responsive service, don’t you want to use your staff to try to change the system so that people with mental health problems don’t actually get into prison in the first place?” And the woman I was talking with said, “No, no, no. Don’t go there. We’ve got the funding for a responsive service, and this is all we can do right now.” So I struck a deal with her that we would do what she wanted, and we would spend a year gathering data about the effectiveness of the existing service, but we agreed that if this data showed that they were only ‘mopping up’ (that they were an ambulance at the bottom of the cliff), they would revisit the issue of being more proactive.
We utilized multiple methods for this evaluation. These included methods drawn from Checkland’s soft systems methodology, which is a process for engaging people in debate about the current situation and the human activities that are needed to improve it. That was useful in supporting the diversion team’s planning, and also to inform the design of a database for data collection. We used participant observation and interviews; case study information on individual clients with mental health problems; and statistical analysis of client group characteristics and diversion rates. We triangulated the qualitative and quantitative data. I have suggested previously that quantitative and qualitative information are both useful for different purposes.
This project was a good example of where quantitative data was absolutely necessary because the team had a view of their project as failing. I asked them to estimate what percentage of their cases involved a successful diversion from custody, and they thought they had maybe a 30-40% success rate. What our statistics showed was that they had an 85% success rate overall, and that for minor crimes this rose to 100%. However, because it might have taken five attempts before the police released someone, the four unsuccessful attempts appeared to outweigh the one successful one in their minds. Despite their subjective experience of failure, the team was actually very successful in getting people out of custody.
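The mismatch between the team’s subjective sense of failure and the 85% figure comes down to counting attempts rather than clients. A minimal sketch can show how the two rates diverge; the attempt counts below are invented for illustration, not the project’s actual data.

```python
# Invented attempt histories: each client record says how many diversion
# attempts were made and whether the client was eventually diverted.
clients = [
    {"attempts": 5, "diverted": True},   # four failed attempts, then success
    {"attempts": 1, "diverted": True},
    {"attempts": 3, "diverted": True},
    {"attempts": 2, "diverted": False},  # never diverted
]

# Per-client rate: did the team eventually get this person out of custody?
per_client = sum(c["diverted"] for c in clients) / len(clients)

# Per-attempt rate: how often did any single attempt succeed?
total_attempts = sum(c["attempts"] for c in clients)
successes = sum(c["diverted"] for c in clients)
per_attempt = successes / total_attempts

print(f"per-client success rate:  {per_client:.0%}")   # 75%
print(f"per-attempt success rate: {per_attempt:.0%}")  # 27%
```

With these made-up numbers the team succeeds for three clients out of four, yet only three attempts out of eleven succeed, so day-to-day experience (mostly failed attempts) feels much worse than the outcome statistics warrant.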
What we also found when we triangulated the quantitative data and the qualitative case studies, was that a small, hard core of individuals were going in and out of prison on a regular basis. There were twelve individuals in our sample who in one year alone had been in prison over twenty times each. Essentially, these people were stuck in a revolving door, and it was because of this that we went back to the management of the service and said, “You really do need to look at the whole issue of how you design the system to prevent this from happening in the first place.” They agreed. Unfortunately we had very little money left, and very little time, so we did the only thing we could with the remaining resources. We held the same kind of workshops that we did with the children on the streets that I was talking about earlier, both with professionals and with people with mental health problems who had recently been released from prison. The aim was to look at what the properties of the mental health and criminal justice systems ought to be if they were to prevent people from getting into this situation in the first place. The thing that really surprised both the professionals and the people with mental health problems was that they agreed on about 90% of what should be done. And even the areas of disagreement were not fundamental: all parties accepted that they could work on resolving them.
When we first contacted the group of mentally disordered offenders, they had no idea that there were others with similar problems. They found it enormously helpful to realize that they had common experiences.
Outline methodology for systemic intervention
If I could be really silly, and try to sum up everything I’ve written in the last twenty-one years in a single overhead, it would be Slide 14. You need a minimum of three things in a systemic intervention process. First, you need a process of critique—i.e., thinking critically about boundaries and values. Second, you need judgment about what kinds of methods are going to be appropriate, and you need a creative synthesis between different approaches. It’s not just about picking methods off the shelf, because most situations are complex enough to actually require quite creative design processes. I often find myself inventing new methods rather than just copying something from off the shelf. Finally, you need action, which is about implementing the products of the interactive process of critique and judgment.
These three elements are not steps in a methodology where you simply go from one to another in a linear manner. Rather, they are lenses through which you look at a situation to make sure that you’ve covered the three aspects that are necessary for systemic intervention.
Conclusions about systemic intervention (Slide 15)
The first two conclusions shown in Slide 15 are not new, but I hope the third one is. The first one is that boundary critique enhances reflection on issues of inclusion and exclusion, marginalization and the design of methods. That is, boundary critique is a useful idea! People have been developing the systems theory of boundaries since the 1960s. The second conclusion, about the value of methodological pluralism, is not new either. Numerous authors over the past twenty years have explained how it allows a more flexible and responsive intervention practice than adherence to a limited approach that has just a narrow range of methods, and methodological pluralism has become a mainstream idea in the systems community. However, there are still a lot of people out there in the operations research and other communities who champion just one approach as the answer to every kind of problem, so that’s why I think it’s necessary to continue to talk about this.
My own contribution is to bring these two things (boundary critique and methodological pluralism) together. There is a danger in just doing boundary critique alone: you can have good sociological analyses of situations, but you do not necessarily get any action to change them at the end of the day. Similarly, by just having methodological pluralism alone, you can end up with quite a superficial analysis based on the word of a couple of managers you have spoken to, and then you pick a range of methods that you think are going to be appropriate for the situation. This superficial approach can have quite dramatic side effects when you fail to take into account other perspectives that could be impacting upon the situation. So boundary critique helps deepen analysis to allow you to make methodological choices in a more informed way. The synergy of the two is where I believe my own contribution to systems methodology lies.
Given the similarities between the various systems and complexity paradigms noted earlier, I hope that some of this systems research has relevance for the complexity community too.