W.S. McCulloch and W. Pitts


It is important to recapitulate, at this time, certain aspects of the theory of teleological mechanisms, and to indicate one or two directions in which progress is to be expected.

The participants of this conference have come together from such unlike disciplines of science that they have had to learn each other's vocabularies before they could hope to understand one another. This has made it difficult to follow arguments which in themselves were often relatively simple. I shall try to rephrase these arguments in ordinary language, although I am sure that, in so doing, I shall often fall far short of exact or rigorous statement. To make up for this, I shall try to relate the novel notions to the history of thought, so that it can be seen how we came by them.

The conference has considered two related questions; namely, What characteristics of a machine account for its having a Telos, or end, or goal? and What characteristics of a machine define the end, or goal, or Telos? It has answered the former in terms of activity in closed circuits, and the latter in terms of entities which the mechanism could compute in terms of the discriminations it could make. We are all indebted to Thales for the basic conception. He was the first to insist that the Gods were in things and not behind them. To know the Gods, then, is to know how things work. Unfortunately for physics, the Greeks thought that generation rather than motion was the cause of apparent change. Generation determined what it was, and what it was determined what was the natural place for it to go. For other than living things, this place, or Telos, lay outside of it, whereas for living things the Telos lay within it. Entelos, the end of the operation, was within the operation. Fortunately for biology, the basic notion of function was, and remained, an operation whose end was within the operation. When the notion was imported into mathematics, it had the same meaning: Y being a certain function of X meant that Y was the end inherent in the given mathematical operation upon X.

Aristotle, however, used the end in operation to obtain a law of the conservation of species, as we use potential energy to obtain a law of the conservation of energy. Neither entelechy nor potential energy is supposed to do anything. The future does not operate to alter the present. Only the ergs within it, the energies, do that. But the kind of bird which the egg will become is the end in, and of, the operation within, and of, the egg. It shall try to incorporate only such material as shall be suitable to that end. Everything else is evil for it. Everything suitable to that end is good.

But living things do not live by themselves. Men may exchange some of one kind of thing for some of another. To facilitate this, they introduced money as a common measure of value for marketable goods. The question then arose whether there was, conceivably, a common measure for all values. From this developed, eventually, the normative sciences of ethics, esthetics, and philosophy, including, in the last, logic and mathematics. So teleology stood with the good, the beautiful, and the true when Roman engineering overwhelmed Greek science, and was lost in the Dark Ages. Then Thomas Aquinas picked it up where Aristotle had left it and made of it the sublime Causa Finalis which extended as a veritable hierarchy of values from lowly creatures to the God who stood apart from or behind them. Ironically, it was this separation of the Divine Cause or Telos from the natural cause or vis a tergo that permitted good Christians to investigate the laws of physics.

Centuries passed, though, before the first teleological machine was made by Watt, who constructed a governor for a steam engine in 1784, and nearly 50 years more before Sir Charles Bell published the first closed path in an organism (The Nervous Circle, February 16, 1826). By 1850, the French school of neurophysiology had defined the reflex as an activity which, originated by a change in some part of the body, proceeded to the central nervous system over the dorsal roots, whence it was reflected over the ventral roots to that part of the body where it had been initiated, and there diminished, stopped, or reversed the change that had given rise to it.

This is exactly what the governor on a steam engine does. Although Maxwell published the mathematical theory of its behavior in 1868, and Wischnegradsky in 1877, the trick had to be independently rediscovered in many arts. I can remember the old bimetallic strips we used in thermal regulators, the introduction of sensitive electromagnetic relays, and the first use of a vacuum-tube relay to obtain better thermostasis. That must have been about 1920. By 1930, self-regulating power supplies were being built into well-designed vacuum-tube devices, although most electronic engineers may date the introduction of inverse feedback amplifiers some four years later, when Black published his work on telephonic repeaters.

In the laboratory, factory, and home, in any machine, we could regulate anything if we could devise a sufficiently sensitive receptor and appropriate effectors to control the variable we desire to hold constant. We could make the value of that variable to which the system returns follow any course we desired.

Cannon showed how living organisms likewise hold constant many crucial variables. This he called homeostasis. He even pointed out how the value of the variable to be sought by the system might be altered by demands made upon it, and finally he extended the notion to the relation of the system to the environment.

It had become clear that the problem, whether in organisms or man-made machines, was a question of signals in closed paths. Considerations of energy were immaterial. It was information that went around the circuit. Then began the collaboration between the physiologist, Rosenblueth, who was Cannon’s chief collaborator, and the communication engineer – in this case Norbert Wiener – a mathematician if you will. In January, 1943, they published with Bigelow, in the journal Philosophy of Science, an article on Behavior, Purpose and Teleology, which heralded such discussions as we hear these days. They concluded that purposeful, or teleological, behavior is controlled by negative feedback.

Let me summarize what all of these systems have in common. Each has a path which is closed. Each has a natural period, that is, the time necessary for the signal to complete the circuit. Each has a gain, that is, the ratio of the size of the disturbance returning to its origin to the size of the initial disturbance. The gain, generally, is different for different frequencies and for different sizes of disturbance. The feedback is regulatory only when it opposes an induced change and the gain is less than one. If the gain is equal to one for a given frequency, the disturbance persists, oscillating at constant size. If the gain is greater than one, oscillations increase until either the system is destroyed or, at some greater amplitude, the gain falls to one and the system continues to oscillate at that amplitude. The intrinsic disease of negative feedbacks, one and all, is that they cease to be negative. This state may be enforced by compelling them to operate at frequencies other than those for which they were proportioned, or may be permitted by altering the natural period or increasing the amplification of the signal. As soon as feedbacks operate with a gain greater than one, they cease to subserve their proper ends.
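The behavior of such a loop can be sketched, in modern terms, with a few lines of code. The gain and the one-step natural period below are illustrative choices, not drawn from any of the systems discussed:

```python
# A closed loop with a fixed round-trip gain and a natural period of one
# time step: each circuit returns the disturbance opposed (negative
# feedback) and scaled by the gain.

def run_loop(gain, steps=20, initial=1.0):
    """Return the successive signed sizes of the disturbance per circuit."""
    x = [initial]
    for _ in range(steps):
        x.append(-gain * x[-1])  # one trip around: opposed, scaled by gain
    return x

damped  = run_loop(0.5)  # gain < 1: the disturbance dies away
steady  = run_loop(1.0)  # gain = 1: it oscillates at constant size
runaway = run_loop(1.5)  # gain > 1: it grows until something breaks
```

Only the case of gain less than one is regulatory; the other two exhibit the "intrinsic disease" of the paragraph above.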

There are, in organisms, many processes which are rhythmic and of remarkably constant amplitude. Some of them are sinusoidal rhythms, others decidedly not. We can divide their circuits into two groups. In the first, if we add causes, we add effects. We call this group linear circuits. If they oscillate, the oscillation is sinusoidal, and only sinusoidal. In the second group, adding causes does not simply add effects. We call this group non-linear circuits. If they oscillate, the oscillation will have a form which may be far from sinusoidal. To this, the non-linear group, belong the so-called relaxation oscillators. Both kinds of circuit can be found in the body. However, most circuits are linear only for relatively small excitations. They are usually so constructed that, as the amplitude increases, the gain decreases, affecting chiefly the extremes of the deviations from the central position in one direction or in both. At this point, they have ceased to be linear. Hence, the general case is the non-linear one. However, the mathematical description is simple only for the linear case. Hence, for the non-linear case, wherever possible, we will juggle the mathematics so as to find a new variable, say the logarithm of the original variable, in terms of which the oscillation is sinusoidal. Moreover, in the body, some circuits are easily identified through their entire closed path; others are diffuse, both here and there; and many have a variable portion of the path, now here, now there.
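A minimal sketch of such juggling, with an arbitrary waveform chosen purely for illustration: the oscillation below is far from sinusoidal, yet its logarithm is exactly so.

```python
import math

# A hypothetical non-linear oscillation whose logarithm, and only whose
# logarithm, is sinusoidal. Amplitude and period are arbitrary choices.

PERIOD = 4
t = range(8)                                            # two full cycles
x = [math.exp(math.sin(2 * math.pi * k / PERIOD)) for k in t]
log_x = [math.log(v) for v in x]                        # exactly sinusoidal

# x itself is skewed: it rises to e above its resting value of 1 but falls
# only to 1/e below it, while log x swings symmetrically between -1 and +1.
```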

But whether they are linear or not, and whether they are well defined in path or not, in these circuits cause and effect are so closely interconnected that it makes little sense to ask whether the bird or the egg was first. Each event bears such an imprint of its precursor that its consequent resembles that precursor. And that was all that Aristotle needed for his conservation of species, the active repetition of the same form. Thanks to Lorente de Nó, we are certain that, in the nervous system, activity does persist in closed circuits of neurons, is responsible for nystagmus persisting long after stimulation has ceased, and may account for a very active memory in which the occurrence of a certain event may be retained without reference to the instant of occurrence.

We may divide all inverse or negative feedback devices into those which are merely homeostatic, i.e., keep some internal parameters of the system constant; those which are servo, i.e., those in which the value of the parameter sought by the system can be altered from without the circuit; and the appetitive, i.e., those in which the circuit passes through regions external to the system. But such distinctions are a bit arbitrary.

We have seen that the end sought by the system may be any parameter we choose, or any to which its receptors happen to be sensitive, and hence that the ends sought by such systems may have no common measure, or be dimensionally dissimilar.

Consider an organism in which two such systems, of which it has many, are active at the same time and would require opposite actions of one or more parts of the organism. Working concurrently, they would destroy it, as, for example, swallowing and inhaling. There must exist some connection between the circuits, so that in case of conflict one inhibits the other. The same must hold whenever the world makes us choose one of two ends. One circuit must dominate or we die.
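The dominance of one circuit over another can be sketched as two cross-inhibiting drives. Every constant below is hypothetical; the names merely echo the example of swallowing and inhaling:

```python
# Two circuits, each driven toward its own end and each inhibiting the
# other. With cross-inhibition stronger than self-restoration, whichever
# drive is stronger shuts the other off entirely, so the organism does
# only one of the two things.

def resolve(drive_a, drive_b, inhibition=1.5, rate=0.1, steps=500):
    a = b = 0.0
    for _ in range(steps):
        a, b = (max(0.0, a + rate * (drive_a - a - inhibition * b)),
                max(0.0, b + rate * (drive_b - b - inhibition * a)))
    return a, b

# e.g. swallowing momentarily more urgent than inhaling
a, b = resolve(1.0, 0.8)
# the stronger circuit runs at full strength; the weaker is held at zero
```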

In many cases, we are like a man driving toward a cliff. We may turn right or left: which way, is indifferent, only we must turn. We have the ability to decide or we should not have come to attend the conference. Even when there are two things which we must do to survive, we must choose, and there is often more urgency to the one than to the other. We must do the more urgent immediately and the other when we can. This other, consequently, remains active in us and unfulfilled. Thus it comes about that many a natural but long-term need lives on until the morrow, when with accumulated urgency it demands fulfilment. Whether or not we are aware of them, we are all freighted with such active processes, which at times erupt inopportunely. These are diseases not of a single negative feedback, but of the interaction of such systems.

Apart from urgency and the conditions of the moment, values do not constitute a hierarchy. By a series of tests with rats, it can be determined at what degree of starvation for both food and sex half of the experimental animals will prefer the one and half the other. Economists have called the plots of these values indifference curves. We could also plot how much punishment would prevent half the rats from seeking food, or half the rats from sexual activity. We should now have these indifference curves, but a strange thing appears. If we assume that values have a common measure, then rats starved for both food and sex so that they choose half and half, ought to be willing to take the same amount of punishment to get either. But this is not the case. They will take far more punishment for sex than for food. In fact, rats may be starved so that they will almost all prefer food to sex, and yet in that condition they will prefer sex to avoidance of pain and avoidance of pain to food. We have, under these conditions, three teleological mechanisms so interconnected that the first dominates the second, the second dominates the third, but the third again dominates the first. I ran into a similar circularity of preference more than 20 years ago in experimental esthetics, but could make nothing of it at the time. Unfortunately for the theory of motivation in psychology and economics and the hierarchy of Causae Finales, values are not magnitudes of any one kind any more than ends are dimensionally similar. Both differ in kind, not merely in degree, and we must look to the mechanism for the appropriate theory.
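The circularity itself is easy to exhibit in modern terms: if the three ends had a common measure, the observed dominance relation would have to admit a consistent ordering, and a standard acyclicity test shows that it cannot. The relation names below are taken from the experiment described; the code is an illustration, not the experimenters' method:

```python
def admits_common_measure(pairs):
    """True iff some single scalar value per end could reproduce every
    observed preference, i.e. iff the dominance relation has no cycle."""
    items = {x for pair in pairs for x in pair}
    indegree = {x: 0 for x in items}
    for _winner, loser in pairs:
        indegree[loser] += 1
    ready = [x for x in items if indegree[x] == 0]
    ordered = 0
    while ready:                       # Kahn's topological-sort test
        node = ready.pop()
        ordered += 1
        for winner, loser in pairs:
            if winner == node:
                indegree[loser] -= 1
                if indegree[loser] == 0:
                    ready.append(loser)
    return ordered == len(items)       # all ordered iff acyclic

rat = [("food", "sex"), ("sex", "pain-avoidance"), ("pain-avoidance", "food")]
# the rat's preferences admit no common measure; a transitive set would
```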
Therefore, as to the good in the biological sense, and the beautiful in the sense in which it can be inferred from mere human preference, we indeed live in a world of many incomparable values, where fate compels us to choose whenever we act and nature does not forgive us for leaving undone those things which we ought to have done just because, then and there, we had to do something else.

I should commit precisely that error if I did not now mention what has been one aim of this conference, namely, the extension of the realm to which these notions are applicable. Organisms do not live by themselves and may be links in closed paths through which the feedback is negative. We have heard examples presented from Ecology, Cultural Anthropology, and Public Opinion. There seems to be little question but that feedback occurs in those fields and is not always negative.

In the discussion of the examples, predictive problems came to the forefront. During the war, as we know, this theory was advancing. It had been incorporated in certain engines of death and destruction which enabled guns to shoot at the place where the target would be when the shell arrived. This they did by autocorrelation of data in time for a variable lag. There is a story that Leonardo da Vinci killed the mechanic who worked with him, lest knowledge of the submarine reach the ears of some would-be conqueror. I cannot imagine that Norbert Wiener feels happier about the way the theory of prediction is being used, but it was he who first demonstrated that there was an optimal prediction and how to achieve it. He has pointed out how such autocorrelations and intercorrelations of data in time-series may detect the existence of causal connections, and their lags in one or both directions, provided the correlation is less than perfect. He has also demonstrated how, by these means, we may detect feedback, including inverse feedback, in Ecology, Anthropology, and Sociology. The data need be no more than decisions, actions or opinions in time, provided we have runs of sufficient length. I know of no utopian dream that would be nearer to everybody's wishes, including my own, than that man should learn to construct for the whole world a society with sufficient inverse feedback to prevent another and perhaps last holocaust. There may at least be time for us to learn to recognize and decrease the gain in those reverberating circuits that build up to open aggression. We cannot begin too soon. To make it quite clear, we have no desire to reinfect a convalescent sociology with the virus of that attempt to better the world which is proper to medicine and engineering, which are not sciences but arts. All we have a right to ask of the appropriate sciences are long-time runs of data.
We know it will take years to collect these, but we must have them before we can determine whether the mechanism of negative feedback accounts for the stability and purposive aspects of the behavior of groups. This was one of our questions.

The other end concerns the discriminations of which the machine is capable, and what it can compute in terms of them. Because Democritus took seriously the riddles of Zeno, he gave us the notion of a least, or atom: something which was there or else was not there, and that was all there was to it. It had for him no other properties. Our chemistry was born with the notion that atoms differed in kind and could be combined to form molecules, which had properties that could not be deduced from the addition of similar properties of their component atoms. Fortunately, we are in a slightly better position with respect to the activity of the nervous system. We know that each action of a nervous cell is atomic: it either happens or does not happen at a given time. We can regard each cell as a telegraphic repeater which, on receipt of an appropriate signal or combination of signals, emits one of its own. It has a threshold. After a brief impulse, there is a short period in which received signals may add to exceed the threshold of the cell or remain to prevent its firing, then a somewhat longer delay before it emits its signal, and a roughly equal interval during which it is unexcitable after activity. Most of these properties of the nervous impulse have been accurately measured in New York by Lorente de Nó, and its inhibitions demonstrated by David Lloyd. Let us consider any cell, C, as receiving either a single adequate impulse or an adequate impulse and, at the same time, an inhibitory one which may prevent it from sending on an impulse. Let us consider these impulses as coming from two sources and call them A and B. Now since A and B are atomic, there are only two possibilities with respect to each: the impulse either happens or does not happen.

There are, then, four cases conceivable: only A, only B, both, or neither happening. The last of these could never excite C. There are, then, eight ways in which the excitation of C might be related to the occurrences A and B, one of which is the trivial case, inconsequentiality, in which C does not get excited at all. The signal C may represent either A, whether or not B occurred; B, whether or not A; both; either A and not B, or B and not A; or, finally, A or B or both. These are the functions out of which one can construct the complete calculus of signals. Given memory and enough cells in proper circuits, such a nervous system can detect any combination of afferent stimuli in time and space. It can convert any pattern in space into one in time and vice versa. It can compute any computable number, and, given the time required, arrive at any deduction from a finite number of premises.
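These functions can be sketched as threshold cells in modern notation. The weights and thresholds below are hypothetical values chosen to realize each function, with an inhibitory impulse entering as a negative weight:

```python
def cell(weight_a, weight_b, threshold):
    """A cell that fires iff its summed afferent excitation meets threshold."""
    return lambda a, b: weight_a * a + weight_b * b >= threshold

represents_a   = cell(1, 0, 1)   # A, whether or not B occurred
represents_b   = cell(0, 1, 1)   # B, whether or not A
both           = cell(1, 1, 2)   # A and B
either_or_both = cell(1, 1, 1)   # A or B or both
a_and_not_b    = cell(1, -1, 1)  # A with B acting as inhibition

# "Either A and not B, or B and not A" (exclusive or), read as a single
# function, is the one case no single such cell can realize; it needs a
# small net of them, e.g. either_or_both gated by the output of both.
```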

An ordinary brain, a merely human brain, has only about 10 billion nervous cells, that is about 2^33, whereas the eye alone has more than a hundred million photoreceptors, each of which is either signaling or not signaling at a given moment. This means that the eye can exist in 2^100,000,000 states, each of which corresponds to a unique distribution of stimulation. It is obvious, therefore, that man's brain, even if it had no other function, could not have a single cell to respond to each distribution even if it could contain all the necessary connections, which it cannot. Nor would it be of any use to man to bring all this information into his cortex, spread over an equal number of elements. What he does in the three layers of the retina is to note masked coincidences and abrupt changes, and send in this abstract of the data over about a million fibers from each eye to the lateral geniculate ganglia. There, coincidences and displacements in the two eyes are, perhaps, noted. In any case, the information is sent on in three ways: to the cortex for detailed analysis; to the superior colliculus to help determine the proper movement of the eye to keep it turned to what is important; and to the great master servomechanism of motor control, the cerebellum. In the primary optical cortex, contrasts are heightened, and incomplete data filled in and thence relayed to the secondary cortex where stimulation at a point gives perception of formed objects. At the very step when perception of form occurs, the last vestige of conformal representation disappears. This is only five relays removed from the stimulus, yet a lesion here may cause imperception of hue with perfect ability to match colors, and, a few millimeters away, alexia. Perception of hue may, or may not, be learned. Reading is learned, yet it clearly depends on local structure, and that local structure must depend on use. Nervous cells are alive and presumably grow with use. This is exactly what Ramón y Cajal proposed.
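The arithmetic of the paragraph above can be checked directly. The only liberty taken below is to count the decimal digits of the larger number rather than compute it, since the number itself could never be written out:

```python
import math

neurons = 10_000_000_000      # "about 10 billion" nervous cells
receptors = 100_000_000       # photoreceptors in one eye

assert 2**33 < neurons < 2**34     # 10 billion is indeed "about 2^33"

# digits of 2**receptors, via log10, without ever forming the number
digits = math.floor(receptors * math.log10(2)) + 1
# roughly thirty million decimal digits: no brain of 10^10 cells could
# keep one cell per retinal state
```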
Lorente de Nó would prefer the hypothesis that they and their connections diminish with disuse.

We may picture the process somewhat as follows: Imagine a nervous cell so connected to a second cell that it can excite it, and then imagine a third cell which is not quite able to excite the second cell. Let our law of growth be such that if this third cell is excited while the first cell is firing the second, then the third cell will so grow as to be able to excite the second, or, what amounts to the same thing, let the second cell so grow as to become excitable by the third. This would give us the fundamental kind of change in structure produced by use, in terms of which to account for the conditioning of a reflex. Excitation of the first cell is effected by the so-called unconditioned stimulus, the second cell being the path to the response, and excitation of the third cell by the conditioned stimulus. Such an altered structure yields a passive memory. The introduction of such a passive memory into a net which contains no closed paths leaves its calculus that of signals. In this, it differs from the introduction of a tertium quid which might be a diffuse background of excitation, a neuron undergoing relaxation-oscillations, or a regenerative cycle of activity in a reverberating chain of neurons set going by a stimulus in the remote past. Introduction of any of these makes it possible for the system to respond to mere absence of excitation at some specific point, and thereby creates the possibility of including response to that case in which neither of a pair of atomic events occurs. This alters the appropriate calculus from that of signals to that of signs. This calculus has now sixteen, instead of eight, functions of two arguments, and in this respect it becomes equivalent to the full calculus of propositions, from which it differs, however, in that one still has to keep track of the time of occurrence of the proposition, although no longer of the event proposed. 
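The growth law just stated can be sketched with hypothetical numbers: cell 1 alone can fire cell 2, cell 3 alone cannot quite, and each pairing strengthens the third cell's connection.

```python
THRESHOLD = 1.0
w1, w3 = 1.0, 0.5     # cell 1 adequate; cell 3 "not quite able"
GROWTH = 0.2          # hypothetical increment per paired excitation

def cell2_fires(x1, x3):
    return w1 * x1 + w3 * x3 >= THRESHOLD

before = cell2_fires(0, 1)     # conditioned stimulus alone, before training

for _ in range(5):             # pairings of the two stimuli
    if cell2_fires(1, 1):      # cell 2 fired while cell 3 was active
        w3 += GROWTH           # so the third cell's connection grows

after = cell2_fires(0, 1)      # conditioned stimulus alone, after training
```

After enough pairings the conditioned stimulus alone suffices, which is the passive memory of the text.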
When passive memory is included in such a system, or, what amounts to the same thing, when the organism has effectors with which it can make enduring marks or signs, and sense organs with which it can resense them at any desired time, it can neglect the time of occurrence and think in terms of the calculus of propositions. The use of such signs is, in fact, equivalent to the incorporation into its nervous system of an infinite number of reverberating circuits running for all the required durations. Moreover, by means of these signs, one organism may share any task of computation with another similarly conditioned. Thus, it comes about that the world of metamathematics, as well as the physical world, is public property.

But let us return to the actual brain with its paltry ten-to-the-tenth neurons. It seems fairly clear that the genes which inhabit our tiny chromosomes cannot contain a number of particles sufficiently large for their order and arrangement to specify in detail which nervous cell is to be connected with which. They can, in fact, at best serve only as a pattern for the more general features of order and arrangement, leaving to chance and learning the detailed connections, or fine structure, of the net. This is neither the time nor the place to go into the mathematical theory required for a statistical approach to this structure. It is sufficient, for the moment, to state that this can be approached by Wiener's method of auto- and intercorrelation of the activities of many points in the net. What was required was the development of a theory in which the value of the relations of activity of neurons could be approximated by the first term of a series in ascending powers. This has been achieved by Walter Pitts, by selecting, as a value to satisfy this relation, the chance of the firing of a particular nervous cell that is equal to the chance of afferent impulses being transmitted to it from other neurons in the structure. The empirical procedure for determining the statistical properties of the structure is, then, to administer to one point in an isolated portion of the structure a series of equal electrical pips of random distribution in time, to record their arrival as propagated disturbances at several remote points in the region, and to study their auto- and intercorrelations as series in time.
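The empirical procedure can be sketched as follows; the run length, pip rate, and conduction delay are all hypothetical, and the recovery of the delay from the correlation stands in for the full analysis:

```python
import random

random.seed(0)
N, DELAY = 2000, 7
# pips of random distribution in time, administered at one point
source = [1 if random.random() < 0.2 else 0 for _ in range(N)]
# their arrival, as propagated disturbances, at a remote point
remote = [0] * DELAY + source[:-DELAY]

def crosscorrelation(x, y, lag):
    return sum(a * b for a, b in zip(x, y[lag:]))

best = max(range(20), key=lambda lag: crosscorrelation(source, remote, lag))
# the lag of maximal correlation recovers the delay between the two points
```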

All one can hope to obtain from such measurements is this: Structures may be chaotic in an infinite number of ways, and these statistical measures, even though they exclude an infinite number of possible varieties of chaos, will still leave us with an infinite number of possible varieties. The exclusion will rest on our knowledge of the distributions in time and space of the transmitted pips, from which we can conclude something about the number of junctions between two points and the frequency of occurrence of closed paths having two, three, four, five, or more junctions within them. It is certainly to these little circles that we must look for the most transitory forms of active memory. We propose to start work on the bark of the brain, first perhaps on the visual cortex, beginning with the primary optic area, and then with the secondary optic areas where each position stimulated yields perception. After that, we shall proceed to typical associative areas. This should give us a more useful working knowledge of the properties characterizing the physiological performance of the anatomic substrate of psychology.

The question we should like to consider at the moment is how it comes about that we perceive any form whatsoever. We already know that we have not enough nervous cells to have one respond to each of the distributions of activity of which the eye is capable at any instant, nor enough fibers to relay the information from the eye to the brain. We know that in the eye itself the cells are so connected as to respond to coincidence of activities, or to coincidence except for a small fixed time delay. It is information of this sort which is transmitted over the optic stalk. Thus, whatever perceptions the brain is to form must be built by recombinations of these abstractions. Let me contrast this with any supposition that there is a conformal pattern of stress and strain in the brain, comparable to that which might be found in a system in equilibrium when the forces of the external world determine that pattern. First, the brain is a system which is not in equilibrium, either within itself or with the outside world; and second, the pattern of the stimulation from the outside world is partially abstracted even in the retina. At the geniculate bodies, it may again be coincidences, this time from the two eyes, that count. The information passed on by them to the cortex, concerning the dimension of depth, is in no sense conformal with three-dimensional space or its depth, but is simply a matter of impulses passing at specified times along linear conductors. However, this much is true, that some of this information is reassembled in the primary visual cortex, which has a point-to-point correspondence to the retina and thus to the visual field, although it is not strictly conformal, since it is divided in two in the mid-line and, at this line of cleavage through the point of foveal vision, abuts the secondary visual cortex. 
Now this latter cortex has recently been shown, under conditions of hyperexcitability, to exhibit a point-to-point relation of activity to retinal stimulation. As yet, we do not know the anatomy of the path; but, be that what it may, the point-to-point correspondence here is the reverse of that in the primary visual cortex, and local stimulation of the secondary visual cortex in man yields perceived form instead of local bright spots which follow similar stimulation of the primary cortex. During the last year, a certain amount of information has begun to point toward the representation of seen motion in area 19, which is still further removed from the primary visual cortex. Also, in looking back over old work on electrical stimulation of the cortex, it is stimulation of area 19, or regions anterior to it, which has yielded perception of moving forms and dizziness in patients on the operating table. A second consideration might lead us to the same conclusion, namely, that motion is represented elsewhere than form in the cortex. When a man is tired, his eyes, instead of snapping from one position to another, move slowly, and he becomes unable to keep the forms from blurring. Apparently the perception of form requires appreciable time for the coincidence of impulses over paths of unequal numbers of relays. This is presumably the chief reason for the existence of the many and complicated servomechanisms that keep the eyes fixed on objects in space, however we or they chance to move. Also, it is apparently the function of the primary visual area to determine the form as precisely as possible, which means that it is not suited for handling motion. For years, introspectionists have noted that it is easy to recall the visual image of a familiar room as it appears from almost any point they have ever occupied, but that it is extremely difficult or impossible to let that point move in the room as if they were walking into it. 
The visual image is retained only as a series of clearly distinct snapshots. From this, it seems likely that area 17 and, perhaps, area 18, the primary and secondary visual areas, are those whose structures retain, permanently and passively, some impression from visual experience of form, and that it is these structures, reactivated by impulses from other parts of the brain, which yield the visual image devoid of motion.

I have spoken of the visual system, although I might have spoken of any other sensory system equally well, and I have labored the point because so much of the work in Gestalt psychology has been done in the field of vision. The observations made have been invaluable, but, as the founders of the school foresaw, their theories have encountered almost insurmountable obstacles because of our lack of knowledge of nervous processes. From adequately developed neurophysiology, we should be able to deduce most of the relations characterizing Gestalt psychology. This much, at least, should be obvious, that psychology is fundamentally sound in assuming that perceived forms cannot be reached by a simple addition of their component parts. The nervous system never works that way, as we have seen in the case of clonus, wherein, because nervous cells respond to coincidence of disturbances, the response may be sinusoidal in the logarithm: that is to say, it is a product, not a sum, that counts. Now the body of man, his effectors, has an enormous number of degrees of freedom, but they are by no means as numerous as the forms of excitation of the retina, nor can they be executed with anything like the speed at which the retina can change in a changing world. Perfect conformal representation, were it to go all the way through the system, would jam the effectors. We have more information than we could possibly use even if our brains were big enough to handle it. Hence, the brain must abstract a smaller set of patterns for behavior. Even if it could and did abstract more, we could never know this about someone else, for he could never communicate it by behavior.

At this point, let us turn back to our initially random net and ask how it ever comes about that, in such a net, order is introduced so that there are objects in the world for us. It will doubtless be recalled how the associational psychology of Aristotle was simplified by the British school into association on the basis of likeness and association on the basis of togetherness in time and space. It is clear that we cannot use association on the basis of likeness to explain the origin of semblance itself. This was painfully clear to Mill, who suggested, as a way out of the difficulty, that those things are initially alike for us which have been associated in time and space throughout the development of the kind. In other words, let us suppose that we inherit some structure, as we do, which, instead of being altogether random, has laid down initially the paths required for detecting some similarities. The counterpart of this psychology is Kappers's law of neurobiotaxis, namely, that phylogenetically those cells which are associated in activity become associated in structure. The law is at least descriptively correct in very many instances, and there is nothing in it, or in Mill's suggestion, as to the mechanism underlying phylogenesis. It is neither Lamarckian nor Darwinian, merely factual. But it states this much, that historically with evolution of nervous structure, new semblances have become recognizable (and, harking back to Aristotle, we may add, new differences discernible). It is clear that the same thing happens in the introduction of new ideas or the recognition of new forms within the span of a single life, though obviously the mechanism cannot be the same.

Now let us turn back to Cajal's theory of learning and suppose that, in the growing nervous system, cells used together become associated in structure. In our initially somewhat chaotic nervous system, almost everything might be connected to almost everything else. Clearly, there must be competition between nervous cells for foot-space on their followers. Then, at least, priority in time might determine which would succeed and be perpetuated, but it is to be feared that this is not enough. Confronted with a new problem, behavior may be initially random, but once success is achieved, the successful mode of behavior becomes the preferred mode, and ultimately the fixed mode, of behavior. This, in substance, is Thorndike's law of effect. The question is simply how this can be accounted for in terms of what will fix the connections of cells. Must we invoke impulses from some remote structure whose activity corresponds to its sense of satisfaction, or can we look locally to the relations of one cell to another? I believe the latter is the case, although as yet I see no way to prove it. We will invoke what may be called a setting-in process. It will be recalled that, if an external magnetic field is applied to a lump of iron, the little magnets scattered within it are compelled to assume new positions and, if they are subjected to a series of such forces, the strength and duration of these applied forces will, to a certain extent, determine their organization. The most significant thing by far will, however, be that force which last put a given magnet into position before the force disappeared. Our setting-in process is comparable to this. The activity itself disappears when the problem is solved, and it leaves the cells to continue growing along the last pattern enforced by the activity. This, too, then, is but another example of the importance of inverse feedback.
The activity of these self-same cells, relayed through effector mechanisms, has brought their own activity to an end and left us with a new idea, embodied in a new structure.

Let me point out that the foregoing rests on two hypotheses: that use determines growth and that rest determines set. Now, all lay opinion to the contrary notwithstanding, every scientist knows that no hypothesis can ever be proved. It can only be disproved. Suppose I give the numbers one, two, three, and ask for the next number in this series. I have the hypothesis that the answer will be the next integer, four, but it might equally well be five. This would disprove the hypothesis of integers and I might form the hypothesis of the primes and ask for the next number, expecting seven. This hypothesis would be disproved if the answer were eight. My new hypothesis might be the Fibonacci series, so that I would then expect thirteen, and any other number would force me to a new hypothesis. But every mathematician knows that no finite number of numbers determines a unique rule for the formation of a series. There are, in fact, an infinite number of rules that would yield the same finite number of numbers in the same order. For any given number of numbers, there are, then, an infinite number of hypotheses which are equally tenable and an infinite number which are false. It would require an infinite number of numbers, in other words all the numbers of the series, and the knowledge that such was their total, to define the rule of formation. This alone could prove the hypothesis true, and then it would be no longer necessary or useful in the construction of the next. A somewhat similar phenomenon occurs when we attempt to measure the wavelength of light or the pitch of sound. Neither can be, ultimately, precisely defined unless the vibration lasts forever.
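This underdetermination can be made concrete in a few lines of modern computation: by Lagrange interpolation one can always manufacture a polynomial rule that reproduces any given finite run of terms and then yields whatever next term one pleases. The sketch below is my own illustration, not anything in the text; the function name lagrange_rule is invented for the purpose.

```python
from fractions import Fraction

def lagrange_rule(terms):
    """Return a polynomial rule p with p(1), p(2), ... equal to the given terms
    (exact rational arithmetic, so the equalities below are not approximate)."""
    pts = [(Fraction(i + 1), Fraction(t)) for i, t in enumerate(terms)]
    def p(x):
        x = Fraction(x)
        total = Fraction(0)
        for j, (xj, yj) in enumerate(pts):
            num, den = Fraction(1), Fraction(1)
            for k, (xk, _) in enumerate(pts):
                if k != j:
                    num *= x - xk          # numerator of the Lagrange basis term
                    den *= xj - xk         # denominator of the Lagrange basis term
            total += yj * num / den
        return total
    return p

# The run 1, 2, 3 is consistent with a rule whose next term is four ...
rule_four = lagrange_rule([1, 2, 3, 4])
# ... and equally consistent with a rule whose next term is five.
rule_five = lagrange_rule([1, 2, 3, 5])

assert [rule_four(n) for n in (1, 2, 3)] == [rule_five(n) for n in (1, 2, 3)]
assert rule_four(4) == 4 and rule_five(4) == 5
```

Both rules agree on every given term and disagree on the next; so it is with any finite number of numbers.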

As Prescott has shown, quite apart from this difficulty, because every parameter we would measure is perturbed by the effects of variables beyond our control, and unnoted in our equations, the form of every function in physics either lacks empirical foundation or remains ultimately undefined. We are, after all, finite creatures, and there will always remain for us an infinite number of formulations of experience which are equally tenable; many that are false; but none of any importance for the morrow which is indubitably true. Science, unlike the hungry man who may find food, cannot hope for an inverse feedback that would supply a satisfactory, all-inclusive system of true propositions as to how the world works.

General Discussion

Dr. F. H. Pike (Columbia University, New York, N.Y.):

Changing the symbol of one variable in a formula previously published,(1) we may represent the neuromuscular portion of respiration as

R = f(P, V, T, N),

in which respiratory activity R is expressed as some function f of the pressure (P) of certain gaseous substances in the circulating fluids, the volume (V) of the fluid passing through the central nervous respiratory mechanism in unit time, the temperature (T) of this fluid, and the nervous factors (N) necessary for respiratory movements. Increasing the pressure of CO2 in the fluid in the central nervous mechanism, or reducing the volume of fluid passing through it in unit time, increases the pulmonary ventilation. Increasing the temperature of this fluid may give either an increase in total pulmonary ventilation, an increase in the volume of fluid in unit time, as indicated by a rise in arterial pressure, or some combination of both. The general response of the central mechanism and, through it, of the organism, is interpretable (a) in terms of the theorem of le Chatelier,(2) in that the system tends to respond to any constraint imposed upon it by a change in any parameter, or parameters, by a change, opposite in direction, tending to neutralize the constraint so imposed; and (b) in terms of the law of mass action,(3) in the sense that constancy of concentration tends toward maintaining a constant velocity of reactions. (P), (V), and (T) have the same significance they have in the gas laws, but the quantitative determinations in the organism cannot be brought to a degree of accuracy that would enable us to compute any one variable when the other two are given, as can be done in isolated systems of gases. In general, it may be said that the changes in the volume of fluid flowing through the central mechanism are in such a direction as to maintain a constant pressure of the gaseous substances. The biologist will recognize that pressure, temperature, and volume are important parameters in that group of factors in the environment which he calls the conditions of existence.
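The le Chatelier-like behavior Dr. Pike describes can be caricatured, for the modern reader, by a toy negative-feedback model in which the removal of CO2 rises in proportion to its pressure, so that any imposed change in production is opposed until it is again exactly balanced. The gain, time step, and units below are hypothetical choices for illustration, not physiological measurements.

```python
def ventilation_response(p0=1.0, p1=2.0, gain=0.5, dt=0.05, steps=4000):
    """Integrate dP/dt = production - gain * P, where P is the CO2 pressure
    and gain * P is the rate at which ventilation removes CO2.  Production
    steps from p0 to p1 halfway through; the loop opposes the constraint."""
    P = p0 / gain                      # begin at equilibrium for production p0
    removal = []
    for n in range(steps):
        production = p0 if n < steps // 2 else p1
        P += dt * (production - gain * P)
        removal.append(gain * P)       # CO2 removed by ventilation per unit time
    return removal

removal = ventilation_response()
assert abs(removal[len(removal) // 2 - 1] - 1.0) < 1e-6   # balanced at the old rate
assert abs(removal[-1] - 2.0) < 1e-6                      # rises to balance the new rate
```

The change called forth is opposite in direction to the imposed constraint, in exactly the sense of the theorem quoted above.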

The nervous factor (N) has been held by some to be independent of any inflow of afferent impulses, i.e., to be automatic in its reactions. I must confess that I fail to see the compelling force of the arguments urged in support of this view. It is true that the central respiratory mechanism will discharge when it is in such a state that afferent impulses sent in over the vagus or the sciatic nerves(4) do not change its rate or character in any way, but an animal in such a condition will not live without artificial respiration continued until well beyond the stage at which afferent impulses again become effective. I do not know of any other experimental proof in which afferent impulses can certainly be excluded. Certainly, no animal higher than a frog in the systematic scheme will live when all the afferent impulses flowing into the central mechanism, or even the greater part of them, are eliminated. In the frog and lower forms generally, it is not possible to eliminate, experimentally, all the afferent impulses without direct involvement of any efferent nerves. Alligators promptly die when the dorsal roots of the thoracic nerves are cut, without touching the vagus (unpublished results).

It is necessary to consider this question of the automaticity of the respiratory movements in connection with the hypothesis of inhibition of respiration. It is difficult to see how, if the central respiratory mechanism is purely automatic in its action, any minor disturbance of the afferent impulses coming into the central mechanism, such as that related to an act of swallowing, could have much effect upon it, or, differently stated, how respiratory movements could be inhibited during any part of the act of swallowing. One may grant, as indeed one must, that there is an independent irritability of the central respiratory mechanism, but this does not mean either that respiration is automatic, or that inhibition is a valid assumption. Independent irritability of the central mechanism is just as necessary as a condition for the hypothesis that respiratory activity is dependent, in part at least, upon afferent impulses from the respiratory muscles, as it is for the hypothesis of automaticity. If the respiratory mechanism is to be brought within the scope of the central theme of this conference as a feedback mechanism, the necessity of afferent impulses from the peripheral respiratory motor mechanism must be shown to be probable. The hypothesis of automaticity does not necessitate all the phenomena(5) and leaves much unexplained, whereas the hypothesis of afferent participation does necessitate all the phenomena.

The muscular apparatus of the respiratory mechanism has commonly been regarded as only that which gives rise to the movements of breathing. However, few who have observed animals under experimental conditions can escape the conviction that the whole cardiovascular mechanism is also involved in any organism as complicated as, for instance, a cat. It is to the cardiovascular mechanism that we must look for the changes in volume of blood flowing through the central respiratory mechanism in unit time. Without such changes, animals would be helpless in emergencies. When tyramine, which constricts peripheral blood vessels strongly, is injected, under brief anesthesia, into the cisterna magna of a cat, the animal lies panting on the floor, apparently unable to move about despite the rapidity of respiration (unpublished results). But, taking the respiratory mechanism to be that whose visible external muscular movements produce breathing, this mechanism has undergone a profound change, perhaps more profound than any other, in vertebrate evolution. In sharks, all the important motor nerves are cranial nerves, and all the muscles are those concerned with movements of the mouth and gill arches.(6) Anatomical section of the dorsal roots of the spinal nerves does not affect the respiratory movements to any observable degree. Anatomical separation of afferent and efferent pathways is not possible, but there is good reason, anatomical and experimental, for supposing that the mesencephalic root of the fifth cranial nerve is a part of the afferent system. The whole tribe of amphibians either retains the motor and sensory functions of the cranial nerves in respiration, or dispenses with any motor mechanism except the cardiovascular to get oxygen and eliminate CO2 through a wet patch of skin, beneath which there is a rich network of blood vessels.
Costal respiration, or some modification of it involving spinal nerves, appears in reptiles, birds, and mammals, and no cranial nerve retains any important motor function in respiratory movements. The acquisition of a mechanism in which spinal nerves and somatic muscles took over respiratory movements was a major event in the development of land-dwelling vertebrates.(7) Johannes Gad's old statement that the nervous mechanism of respiration in mammals extends from the facial nerve to the lumbosacral plexus should be extended to include the fifth cranial.

However, the transition from aquatic to terrestrial habitat, accompanied as it is by a profound change in the peripheral nervous and muscular apparatus of respiration, has not been accompanied by an equally profound change in the central nerve nuclei of the respiratory mechanism. The principal central nucleus is located in the lower (caudal) portion of the medulla oblongata in fish and man alike, and the midbrain is as important in the cat as in the shark. In arriving at an estimate of the organization of the neuromuscular mechanism of respiration, we must keep in mind the primitive conditions in the shark (although one may suspect a somewhat different central mechanism in cyclostomes), whatever the type of animal with which we are dealing. I have presented what I consider to be the fundamentals of the organization of the central mechanism in mammals in an earlier paper.(8) I have seen no reason to change this scheme in any important particular since that time.

The hypothesis of inhibition does not seem applicable. Some of the same muscles are concerned in the act of swallowing and in respiratory movements. Swallowing is an act which is elicited by excitation of certain nerve endings in the pharyngeal mucous membrane. It is a response to a particular excitation of a particular set of afferent nerve endings, and cannot be performed voluntarily except by voluntarily bringing something into contact with the pharyngeal mucosa. When the act of swallowing is completed, there is an expulsion of air from the lungs (the Schluckatmung of the German authors) no matter at what phase of the respiratory act the swallowing may have occurred. Swallowing and respiration are two different acts, each arising from a specific form of excitation, and the necessity for assuming that one must be inhibited in order that the other may occur is not immediately apparent. The snake does not need to inhibit respiration, in the sense in which inhibition commonly has been used or, at least, it is not immediately evident that it must do so, when it wraps itself about its prey to crush it, but respiratory movements apparently do not occur while the snake is crushing its prey.

Inhibition and facilitation are words of ancient, if not always honorable, lineage in the physiology of the nervous system, but the sense in which they have commonly been used, i.e., to give a name to a process of unknown nature, leads one to suspect that frequently there lurks behind them what Emil du Bois-Reymond called "the thinly veiled specter of vitalism." In dealing with mechanisms, as supposedly we are doing in this conference, we should look for terms which have a basis of some kind in mechanism.

Mr. Donald Herr (Control Instrument Company, Inc., Brooklyn, N.Y.3):

While Dr. McCulloch's presentation reflects the neuropsychiatrist's research experience and viewpoint concerning circular processes, it may be useful to synthesize certain of his concepts with some which those of us engaged in servomechanism work are continuously forced to consider. The results should at least be stimulating, especially since, in servomechanism work, we are concerned only with systems whose totality of responses is utterly mechanistic and non-subjective, and this latter group seems to embrace all the response-attributes of the neural structures to which Dr. McCulloch is restricting himself at the present time.

I believe Dr. McCulloch, in alluding to the aims of the organism, made the arresting statement that its desired quality is stability, if only to avoid the opposite. The question can first be raised as to what one should consider to be "the opposite": instability leading to the destruction of the organism, or instability leading to a change in the organism to a new, essentially stable condition? If the former is meant, then instability becomes synonymous with non-survival, death, or destruction of the organism (mechanism). If the latter, then instability may be an intermediate, transient condition potentially leading to a new condition of stable survival or operation.

The answer to this question and the attitude behind it seem to me to be of fundamental importance. Perhaps a few extrapolations from the engineer's experience in servomechanisms may help to guide workers in other fields in their attempts to utilize constructively the feedback concept.

We can confine ourselves to a discussion of a simple feedback system in which only one feedback loop exists between output (response) and input (stimulus), since more complicated systems, involving several or many complex feedback paths, are qualitatively subject to the same conclusions as those simpler ones. It is found that, when the nature and amount of the feedback are adjusted to give great stability, then the system becomes most sluggish and relatively inaccurate in its response. Conversely, when the nature and amount of the feedback are adjusted to give great accuracy and rapidity of response, then the system rapidly approaches a condition of instability of one kind or another. Accuracy of response and stability of operation appear as two absolutely interdependent, yet contrary attributes of the feedback system. In servomechanism work, we have to compromise degree of stability to satisfy the accuracy-of-response specifications, or accuracy-of-response to satisfy the stability specifications, depending upon the controlling requirements in a particular application.
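Mr. Herr's trade-off can be exhibited with a minimal discrete-time loop in which the corrective action works from a measurement delayed by one step; the delay is what makes high amplification destabilizing. The sketch and its gains are my own illustration, not drawn from the discussion.

```python
def track(gain, steps=200, target=1.0):
    """Drive x toward target using a one-step-delayed measurement:
        x[n+1] = x[n] + gain * (target - x[n-1]).
    Small gain: stable but sluggish.  Moderate gain: fast and accurate.
    Excessive gain: a growing oscillation, i.e., instability."""
    prev, cur, xs = 0.0, 0.0, []
    for _ in range(steps):
        prev, cur = cur, cur + gain * (target - prev)
        xs.append(cur)
    return xs

low, high = track(0.1), track(0.5)
assert abs(high[19] - 1.0) < abs(low[19] - 1.0)      # higher gain: more accurate early on
assert abs(low[-1] - 1.0) < 1e-3                     # but both of these settle eventually
assert max(abs(x - 1.0) for x in track(1.2)) > 1e3   # too much gain: the loop runs away
```

Accuracy of response and stability of operation appear here, as in the text, as contrary attributes governed by one adjustment.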

In an organism, or in a neural structure, it seems that, qualitatively, the situation is entirely analogous. The controlling requirements in a particular application of a servomechanism correspond to the organism's environment (external stimuli and required responses). In one instance, great stability without too accurate a response may be necessary for the organism's survival; in another instance, great accuracy-of-response with a sacrifice of degree of stability may be required. In either case, either type of response may not be sufficient for the organism's survival, even assuming it to have the ability automatically to adjust its feedback gain properties and, hence, its stability and accuracy properties, between wide limits in each. This characteristic, incidentally, is found or purposely included only in special applications of practical servomechanisms.

I do not know what the analogues are in physiology or neurology, but it would appear that the emphasis upon stability, per se, is entirely an unwarranted simplification. Degree of stability and degree of accuracy-of-response would appear to be the two contrary attributes which must be considered together in extrapolating servomechanism and feedback system dynamics over into other fields of science.

Those of us engaged in the relatively simpler field of the so-called exact sciences of physics, mathematics, and engineering will be interestedly and expectantly following the difficult work in the largely uncharted, unmeasured fields where researchers like Dr. McCulloch are engaged. By such free and open interchange of concepts as has transpired at this conference, man's understanding, and thereby his well-being, may finally be increased.

Dr. L. A. MacColl (Bell Telephone Laboratories, New York, N.Y.):

Out of the discussion of Professor McCulloch's paper, a question has arisen as to means for stabilizing feedback systems. The subject may be treated, briefly, as follows, in a manner which relates it to some of the other topics which have been dealt with at the conference.

If we have an unstable feedback system, or a system which is stable in the strict sense but not by a sufficiently large margin, there are essentially two ways in which we can go about improving the stability of the system.

In the case of a physical system with one feedback path, we can increase the stability by decreasing the amplification of the amplifier in the feedback loop. It would seem that this procedure has a parallel in the case of such biological systems as have been discussed by Professor Wiener and his co-workers, as well as by Dr. Livingston. Presumably, the administration of depressive drugs results in raising the thresholds of certain parts of the nervous system. Thus, removal of instability in a biological system by means of such drugs can be regarded, in a sense, as stabilization by decreasing the amplification of the part of the nervous system that is involved.

However, this procedure for securing stability is subject to severe disadvantages and limitations. Decreasing the amplification of the amplifier tends to diminish the speed and accuracy of the system, and decreasing the amplification sufficiently to make the system satisfactorily stable may result in making the system so sluggish and inaccurate as to be worthless. (The biological parallel, in the terms suggested above, is obvious.) Hence, we usually endeavor to secure stability in a basically different way.4

The behavior of a linear system with one feedback loop is characterized by the amplification and phase shift which a sinusoidal signal undergoes when it is transmitted completely around the feedback loop. (Both characteristics, i.e., the amplification and the phase shift, are to be regarded as functions of the frequency of the sinusoidal signal.) For the system to be satisfactorily stable, these characteristics have to fulfil certain technical conditions. Among these is the condition that the phase shift shall not be near one hundred and eighty degrees at any frequency at which the amplification is near unity. Broadly speaking, the practical methods for securing stability amount to modifying the system, if necessary, so that these loop transmission characteristics will satisfy the conditions necessary for stability.
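Dr. MacColl's condition can be checked numerically for a hypothetical loop transmission built from three simple lags (corner frequencies of 1, 2, and 5 radians per second, chosen arbitrarily for this sketch): one locates the frequency at which the phase shift reaches one hundred and eighty degrees and then asks whether the amplification there is below unity.

```python
import math

def phase_lag(w):
    """Total phase lag, in radians, of three simple lags with corner
    frequencies 1, 2, and 5 rad/s, at angular frequency w."""
    return math.atan(w) + math.atan(w / 2) + math.atan(w / 5)

def amplification(K, w):
    """Magnitude of the loop transmission K / ((1+jw)(1+jw/2)(1+jw/5))."""
    return K / math.sqrt((1 + w**2) * (1 + (w / 2)**2) * (1 + (w / 5)**2))

# Bisect for the frequency at which the phase shift is exactly 180 degrees;
# phase_lag is monotonic in w, so bisection is safe.
lo, hi = 0.01, 100.0
for _ in range(200):
    mid = (lo + hi) / 2
    if phase_lag(mid) < math.pi:
        lo = mid
    else:
        hi = mid
w180 = (lo + hi) / 2

# Amplification below unity at the 180-degree frequency: satisfactorily stable.
assert amplification(10.0, w180) < 1.0
# Doubling the amplifier gain violates the condition: the loop would sing.
assert amplification(20.0, w180) > 1.0
```

Reducing K is the first stabilization procedure described in the text; reshaping the passive elements so that the phase crossover moves, or the amplification falls off sooner, corresponds to the second.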

Sometimes the necessary modifications of the system amount to nothing more than changes in the values of certain passive elements, such as inertias, stiffnesses, and resistances. In other cases, the necessary modifications can be best effected by introducing additional passive elements. In still other cases, it may be necessary to introduce subsidiary feedback loops, as has been pointed out by Professor Wiener in his remarks concerning the problem of steering a ship. The procedure is extremely flexible, and the skillful exploitation of the manifold possibilities usually leads to good results.

This general procedure for securing stability also has an evident parallel in the biological field.

Let us consider, for example, a person learning to skate. We observe that, at first, he tends to hold large parts of his body more or less rigid, and that he makes use of only a few rather simple and slow motions. He is slow to detect an incipient catastrophe, and when he does detect it he is apt to overcorrect in his attempts to avoid it. As a result, his stability is deplorable. In the course of time, however, he learns how to detect incipient catastrophes more promptly, and how to correct for them by means of a variety of small and rapid motions. As a consequence, his stability is increased. From the dynamical point of view, the fact that he is making use of a greater variety of possible motions means that he has learned how to introduce additional degrees of freedom into the system. This increase in the complexity of the dynamical system is accompanied, of course, by a corresponding enlargement of the part of the nervous system that is concerned in the process. Presumably, there is a large increase in the number of feedback loops in the total neuromechanical system. Although the situation is very complicated and difficult to analyze in detail, it is quite apparent that we have here an analogue of the case of an engineer stabilizing an unstable servomechanism by making alterations of the kind outlined above in the structure of the system.


Pike, F. H. 1936. Proc. Am. Soc. Zool. Anat. Rec. 67 (Suppl. 1): 105.

Hastings, A. B., H. C. Coombs, & F. H. Pike. 1921. Am. J. Physiol. 57: 104.

Pike, F. H., & E. L. Scott. 1915. Am. Naturalist 49: 321.

Stewart, G. N., & F. H. Pike. 1907. Am. J. Physiol. 19: 328.

Peirce, C. S. 1863. Am. J. Sci. & Arts, 2nd Series 35: 78. "An explanatory hypothesis is one which, being admitted, necessitates all the phenomena."

Springer, M. G. 1928. Arch. Neurol. & Psychiat. 19: 834.

Pike, F. H. 1924. Science 59: 402. 1928. Science 68: 378.

Pike, F. H., & H. C. Coombs. 1922. Science 56: 691.

1 Reprinted from The Annals of the New York Academy of Sciences. Vol. 50, Art. 4: 259-277, October 13, 1948.
2 Unpublished papers presented at the Conference.
3 Present address: Allen-Bradley Company, Milwaukee, Wisconsin.
4 Since the procedures employed are highly technical, it is not possible to give more than a very brief and imperfect description of them here. They are discussed at length in the following publications: Bode, H. W. Network Analysis and Feedback Amplifier Design. Van Nostrand. New York. 1945. MacColl, L. A. Fundamental Theory of Servomechanisms. Van Nostrand. New York. 1945.