Interactions, Technology, and Organizational Change

Duska Rosenberg
University of London, ENG

Tony Holden
Cambridge University, ENG

Introduction

Introducing technology into a working environment seems to be full of surprises. It is not easy to predict how people will use the new computerized services, how the new technology will affect established working practices, how user “alienation” can be avoided, or how we can assess the likely cost-benefit of introducing new technology.

One possible reason for this is that managers, designers, and developers underestimate the complexity of the social and organizational environment where technology will be used. This article represents an attempt to examine changes in communication patterns induced by the introduction of technology, as the first step toward discovering those aspects of complexity that would otherwise remain unnoticed. Such an approach is frequently used in studies of human communication (cf. Gumperz & Hymes, 1972). The discussion is motivated by successes and failures of information technology in the manufacturing, petrochemical, and construction industries, tracing their development from database management systems, through expert systems, to virtual and augmented reality.

The analytical framework for this work reflects a multidisciplinary perspective on information and interaction. Organizations are viewed as complex systems; that is, the focus is not on the structure of the system, but on its interactions. The interactions may be between structural or functional units of the system, or they may be with the environment in which the system operates. The concept of emergence is critical to this view. “Emergent interactivity” is central to the analysis of the impact of technology artefacts on human communication. In the organizations where the technology was introduced, established patterns of communication were continuously changing as the artefacts were gradually adapted to fit the organizational system. This is regarded as a complex system whereby “all the parts of the system are in dynamic communication with all the other parts, so that the potential for information processing in the system is maximal” (Goodwin, 1996: 183). In this context, technology artefacts provide channels for communication and a link between parts of the system; that is, between interacting agents: people who are engaged in “taking initiative, understanding concepts and pursuing ideas” (Letiche, 1999). New properties of the communication system and new patterns of system behavior emerge as the result of those interactions. “Emergence” is thus defined as the ability of the system to respond to change by developing new interactions both within the system and with its environment. In essence, it is a human system whose interactions are facilitated, enabled, or obstructed, as the case may be, by technology.

The case studies below are examples of what happens when a new technology artefact is introduced and triggers off new interactions that ultimately change the communication system.

New technologies are increasingly restructuring organizational rationalization and social regulation, in the battle to reconcile the tension between the need for predictability and a flexible response to unpredictable threats. (Wood, 1999: 99)

Their influence on the existing relationships within an organization is studied with respect to the tension between the flexibility and situatedness of human communication on the one hand, and organizational structures and processes on the other.

CASE STUDY 1

About ten years ago, CIM (computer integrated manufacturing) was announced as the start of a new era in manufacturing. The errors and untidiness that inevitably go with human participation in the manufacturing process would be progressively removed by the replacement of people with technology. First of all, carefully selected areas of work on the production floor would be made into “islands of automation.” These would then be enlarged to cover whole sectors of manufacturing activity. Finally, different manufacturing activities would be fitted together coherently, in a super-efficient automation of the entire organization.

Technology would be substituted for people at every level—this could certainly be done for all routine tasks—and it was envisaged that this could also be done for knowledge-intensive work. Knowledge would be made more accessible. Transfer of knowledge would be facilitated, in accordance with sophisticated models of database management systems, by which information in one sector of the activity is made available to other sectors; grasp of knowledge would be facilitated by systems for integrating information so that numerical data could be visualized, in a form that people more easily take in and handle. People would increasingly assume the role of supervisor over the technology of the organization.

This was an exciting and attractive vision of the factory of the future; but it was not realized. Technology did not replace people, and it did not always make things easier for them. In some situations it was helpful, by making available information about the manufacturing process to which they did not previously have access. In other situations it provided too much information, and the sheer quantity left it in a form that was hard to interpret.

A particularly difficult problem was that the “islands of automation” created information bottlenecks. The effect of setting up an island of automation was that a relatively unsophisticated technological artefact was placed at the center of an activity, where it should have mediated communication between the human participants, but it did not have the capabilities required for this task. Islands of automation had been very effective in the mechanical processes of the production line. But they became “prisons of information,” isolating essential knowledge from people both inside and outside the activity. Information from the manufacture of the products in the factory, or from the maintenance of the products on customer sites, was “locked” in databases, and so could not play its essential part as feedback. The structures, retrieval mechanisms, and coding of the data prevented human experts from interpreting it and extracting information they could reason with and apply.

The consequence was that human users spontaneously developed repair strategies to unblock the information bottlenecks. To store knowledge more effectively, users tended to avoid putting information into databases, preferring to set it down on paper, or keep it in their heads. To impart knowledge more effectively, they had increasing recourse to alternative information channels, both written ones such as noticeboards or paper documentation, and personal ones such as word-of-mouth messages and face-to-face meetings, formal or informal. By the ingenuity they demonstrated, the users preserved essential social and organizational processes in the face of insensitive technological intervention.

However, benefits can be drawn from this confrontation between human resourcefulness and insensitive mechanization. Both managers and analysts have had an opportunity to study the wide range of communication strategies that people adopt in order to handle the social, organizational, and technical complexity within which they operate (Remboldt, Blume, & Dillman, 1985).

CASE STUDY 2

Expert systems, even more than other forms of knowledge-based technology, were supposed to bring people and databases closer together. Their particular advantage was in the way the information-processing strategies of the machine were in conformity with those of its users. In this way, they were expected to offer users an intelligent handling of data. Experts would be enabled to extend their existing strategies to the increasingly large amounts of data generated by manufacturing and maintenance activity. For example, a prototype expert system captured the problem-solving strategies of engineers engaged in the diagnosis and repair of faults in computer equipment. It was expected that by automatically locating the source of the fault, it would help engineers to find an optimum solution more quickly and more efficiently (cf. Rauch-Hindin, 1988).

The prototype was a success, to the extent of achieving its stated aims. The system was good at identifying the smallest replaceable unit in the faulty equipment whose replacement would correct the fault. The repair engineers working on customer sites could speed up and increase the number of repairs, by carrying out the replacement indicated. Both the intended users (the repair engineers) and the customers (the owners of the equipment) were very pleased with the system. However, there was no positive response from anyone else in the organization, and the system was never taken further than the initial prototype.
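The internals of the prototype are not described here, but the style of reasoning it supported can be illustrated with a minimal sketch: a toy rule base that maps observed symptoms to the smallest replaceable unit whose exchange should clear the fault. The symptom names, the rules, and the diagnose function below are illustrative assumptions, not the actual system.

```python
# Illustrative sketch only: a toy rule base mapping observed symptoms to the
# smallest replaceable unit whose exchange should clear the fault.
# Symptom names, units, and rules are invented for illustration.

RULES = [
    # (symptoms that must all be present, smallest replaceable unit to swap)
    ({"no_video", "fan_running"}, "graphics board"),
    ({"no_video", "fan_stopped"}, "power supply"),
    ({"intermittent_reboot"}, "memory module"),
]

def diagnose(observed: set[str]) -> str:
    """Return the smallest replaceable unit implicated by the observed symptoms."""
    for symptoms, unit in RULES:
        if symptoms <= observed:  # every symptom required by the rule was observed
            return unit
    return "refer to diagnostic engineer"  # no rule matched

if __name__ == "__main__":
    print(diagnose({"no_video", "fan_running"}))  # -> graphics board
```

Note that nothing in such a sketch records how the unit behaved while it was failing; as the discussion below shows, it was precisely this kind of information that was lost to the rest of the organization.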

From the point of view of the organization as a whole, the system was not so satisfactory or welcome. Its use involved ignoring wider considerations, such as important features of the existing cooperative working practice, and not anticipating the dynamic effects of introducing it. The practice was for replaced units to be brought back to the firm for repair after examination by diagnostic engineers. From the point of view of the repair engineers, the optimal—in the sense of easiest—solution to a fault was indeed to identify the faulty unit and simply replace it; but such a solution tends to be very costly and should not be overused. The expert system enabled the repair engineers to increase their output and, as they are paid for each site visit, this would be in their interests. But it would do this by passing on the crucial work of repairing the faulty component: this had to be undertaken by the diagnostic engineers, who were better qualified and thus more expensive. There was also a risk that repair engineers would be deskilled, since all they had to do was replace units as the system advised. Furthermore, although the units identified as needing replacement were returned to the firm, the information about their behavior when they were causing trouble was lost. The diagnostic engineers therefore had a much more difficult task identifying the fault and recommending a repair strategy for the units.

The loss of information about breakdowns had consequences for other aspects of the business than repairing faults: such information is required by managers concerned with preventive maintenance of products, with quality control of current production, and with future product improvement. These and other activities rely on proper feedback, which the system did not provide and even prevented. Thus the more efficient the expert system was in supporting one class of users, the more damaging it was to the organization as a whole.

In this case, the system was not insensitive to the users in respect of their individual problem-solving strategies, but it was insensitive to the social nature of their participation in the workplace, and the objectives of the organization as a whole. Lessons learned from it were that the successful introduction of technology requires account to be taken of social and organizational complexity, and of tacit social knowledge of the diverse aspects of current practice; and that the technology to be introduced must be correspondingly more complex.

The system was powerful enough to introduce significant changes in the ways people shared information about the behavior of faulty units. In the previous, human-only practice, such information was gathered and passed on personally by the repair engineers. The expert system freed the repair engineers from these tasks, and such information was no longer shared: simply by using the system they could avoid detailed reporting from the sites, and the system did not itself elicit it. Thus, the expert system covered only part of their existing expertise: the technical aspects were supported, but not the social ones that required communication. In this light, it can be said that the expert system did not really capture the individual repair engineers' existing practice or, indeed, their total expertise.

There was a dynamic effect of the efficiency of the system on the working of the organization: the repair engineers increased productivity, but at the cost of transferring the technically more difficult part of their work to the more highly paid diagnostic engineers. This perverse effect was unexpected because the knowledge-engineering approach that underpinned the development of expert systems at the time focused entirely on “domain knowledge,” ignoring the “organizational knowledge” that complemented it in practice. The tacit organizational knowledge would have been useful in deciding whether to have the expert system, and what exactly it should do.

CASE STUDY 3

Unexpected emergency conditions arise in the operation of energy-management systems, and to speed response times operators need an online support tool. What typically happens during a plant emergency is that the operator is swamped by alarm signals, and is given insufficient or misleading information on which to base decisions. To identify the cause of the problem and construct a safe startup procedure, the situation has to be analyzed by plant experts. These experts have to consult information about plant design and operation, possibly in large quantities, which can make responding to the emergency a slow process.

Merely providing operators with progressively more sophisticated technology does not solve this problem. One of the complications is that everything takes place in a situation that is social as well as technical. Recognizing this, we look at the problem from the point of view of the human decision makers and their decision processes. Any new decision support system must be built around this core. This is true whether or not such systems use technology.

Our decision analysis acknowledges the relevance of technology, while emphasizing the central place, for better or worse, of humans. An important aspect of this view is that, however low the risk, human error is regarded as rare but inevitable; the technology must take account of this.

CaDSM (Cambridge Decision Support Methodology) is a framework that we have developed for analyzing problem situations and developing support systems to handle them. It is composed of three phases, leading from a “soft” approach in the first two to “hard” products in the third. The first phase identifies the core problems, and the second outlines a general solution. These phases analyze organizational complexity, with particular regard to the human factors. In the last phase, development, the results of the first two phases are used to design a suitable decision support system that can be implemented using more conventional IS design techniques such as SSADM.

CASE STUDY 4

To achieve good operational performance, and reduce the risk of poor operational decisions, operational staff need a decision support framework that is in harmony with their organization. This framework must accord with features of the management (such as systems and procedures), features of the work team (such as the knowledgeability of the staff and the culture of the work environment), information features (such as the quality and accessibility of data and documentation about operating procedures), and technological features (such as automated systems and information technology). The framework must address all of these in a way that is integrated, oriented toward the operational user, and designed to accommodate both formal and informal aspects of operational decision making.

To provide a coherent and comprehensive view, what is needed is a knowledge-based framework; “knowledge” here includes know-how, expertise, information, and data. KNOVA (KNOwledge Value-Added) is a methodology that takes a systematic approach to identifying the various factors that affect performance in the organizational situation and that influence risk. The outcome of using KNOVA can be regarded as a knowledge investment plan. The initial KNOVA application framework was developed collaboratively with British Petroleum.

KNOVA guides the manager in developing both a static influence diagram and a dynamic computer model to represent the effect of knowledge factors varying along the time axis. We have developed a computer decision support tool embodying KNOVA, which makes the process of performing a KNOVA analysis accessible to the line manager. It also enables analyses to be performed repeatedly at regular intervals, and the results to be stored; so it is possible to follow performance trends and to take corrective action.
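KNOVA's internal representation is not reproduced here, but the idea of repeated analyses stored over time can be sketched in a few lines. The factor name, the scoring scale, and the simple trend rule below are assumptions introduced for illustration; they are not part of the methodology itself.

```python
# Illustrative sketch only: periodic "knowledge factor" assessments stored over
# time so that a line manager can watch for declining trends. The factor names,
# scoring scale, and trend rule are assumptions, not part of KNOVA.
from dataclasses import dataclass, field

@dataclass
class FactorHistory:
    name: str
    scores: list[float] = field(default_factory=list)  # one score per review period

    def record(self, score: float) -> None:
        self.scores.append(score)

    def declining(self, periods: int = 3) -> bool:
        """True if the factor has fallen in each of the last `periods` reviews."""
        recent = self.scores[-(periods + 1):]
        return len(recent) == periods + 1 and all(
            later < earlier for earlier, later in zip(recent, recent[1:])
        )

staff_knowledgeability = FactorHistory("staff knowledgeability")
for score in [0.8, 0.75, 0.7, 0.6]:  # e.g. quarterly assessments
    staff_knowledgeability.record(score)

if staff_knowledgeability.declining():
    print(f"Corrective action needed: {staff_knowledgeability.name} is trending down")
```

In practice each assessment would draw on the full influence diagram rather than a single score, but even this reduced form shows how storing successive analyses lets a line manager spot a deteriorating knowledge factor and act on it.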

An organizational analysis is normally a complex and time-consuming process. Our methodology and support tool enable such an analysis to be carried out once; the results of this first analysis then serve as the basis of subsequent, shorter analyses that can be undertaken by line managers. The business process is thus placed in a social and cultural context of organizational interactions.

In many organizational situations, the aim of introducing technology is not only to support decision making, but to foster increasing awareness. One conclusion of our work is that, especially in such situations, a thorough analysis is required of the socio-informational environment and its knowledge dynamics. This is to ensure that any resulting system, when introduced, amplifies the mechanisms of awareness and does not work against them. But our experience also indicates a need for techniques that give clients rapid reassurance that such an analysis is necessary, rather than a waste of their time and, possibly, their money.

Further work will be concerned with rationalizing our methodology in a form that can rapidly highlight, for everyone concerned, some of the key hidden socio-informational processes and the ways they affect operational awareness.

CASE STUDY 5

Advanced interactive technology, now becoming widely available in various forms, is often regarded as a tool for making hidden socio-informational processes more visible, with the potential to support both managerial and operational awareness.

The most exciting of interactive technologies is virtual reality. This has been very successful in computer games, where it challenges people's everyday experience, tests their skills of survival in a world of their own creation, and enables them to see and experience things not encountered in their actual environment. Among industrial applications, virtual reality is especially well suited to such areas of work as the construction industry. This industry presents certain characteristic difficulties, arising from the way it is spread over space and over time. In a construction project, people are often necessarily dispersed at remote locations between which they have to collaborate at a distance; and however well the construction process is planned, unpredictable problems will always arise on the site during the course of the work that necessitate changes in design. Thus, there is a specific need for communication of what site conditions are like on a given occasion. It is hard to achieve this effectively over the telephone or in a memo, since the limitations of the spoken and written media mean a loss of much of the important information.

Virtual reality addresses these problems particularly well. It can provide a fuller and more vivid representation of site conditions. Virtual presence can give the designer, even if they are thousands of miles away, an artificial equivalent to personal presence on the site.

However, virtual reality does not provide all that is needed, or provide it in the form in which it is needed. Virtual reality is, in general, very effective for challenging people's perceptions by offering unfamiliar alternatives to real reality; but immersing people in a virtual environment as in a computer game does not help them to perform their real-life tasks. Rather, what they need is relevant information presented in a way that facilitates their use of it to solve problems.

CONCLUSION

Empirical and theoretical research indicates that people abstract information from the world, and that they use the abstractions so formed to reason with, to communicate with others, and to organize. There is no sharp division between the cognitive, the social, and the organizational aspects of abstraction. The essential question here is: what do people perceive in the real world that enables them to abstract useful information from it? Empirical and theoretical research shows, further, that both abstraction and communication are best based not on the passive reception of perceptual information, but on perceptual-motor interaction. To give another person an adequate impression of a state of affairs, a camera attached to someone's head, transmitting the contents of their visual field, is less effective than a camera in their hands, playing its part in their work on a practical task.

Furthermore, the metaphor of wearable technology as an extension of the wearer's perceptual capabilities often leads to confusion, since people use body movements to communicate as well as to perceive. In a conversation over a videoconferencing channel, a speaker wearing a camera on their head may spontaneously nod in agreement and confuse the observer at the other end. Thus, even the human body must be viewed as a complex system in which perceptual-motor and communicative interactions occur simultaneously.

Instances of this general truth have been encountered in the course of finding solutions after running into unexpected problems. There is room for a more systematic investigation, or process of learning, that would accompany the process of applying technology in the workplace, and would be aimed at identifying the complexities of interaction that we tackle by forming and communicating abstractions from real experience.

References

Devlin, K. & Rosenberg, D. (1993) “Situation Theory and Cooperative Action,” in S. Peters (ed.), Situation Theory and Applications Vol IV, CSLI Lecture Notes, Center for the Study of Language and Information, Stanford University.

Devlin, K. & Rosenberg, D. (1996) Language at Work: Analyzing Communication Breakdown in the Workplace to Inform System Design, CSLI Lecture Notes 66, Stanford University, Cambridge University Press.

Glykas, M., Wilhelmij, P. & Holden, T. (1993) “Object Orientation in Enterprise Modelling,” Proceedings of IEE Colloquium on Object Oriented Development, January.

Goodwin, B. (1996) How the Leopard Changed Its Spots, New York: Touchstone.

Gumperz, J. & Hymes, D. (eds) (1972) Directions in Sociolinguistics: The Ethnography of Communication, Holt, Rinehart & Winston.

Holden, T. & Glykas, M. (1994) “Enterprise Modelling and Process Design Techniques For Configuration Management,” Information Engineering Division, Department of Engineering, University of Cambridge.

Letiche, H. (1999) “Emergence: Cyborgs versus cognitivist (social) Darwinism,” Emergence, 1(3): 16-36.

Rauch-Hindin, W.B. (1988) A Guide to Commercial Artificial Intelligence, Prentice Hall.

Remboldt, U., Blume, C. & Dillman, R. (1985) Computer Integrated Manufacturing: Technology and Systems, Marcel Dekker.

Wood, M. (1999) “Cyborg: A design for life in the borderlands,” Emergence, 1(3): 92-104.