REGENERATIVE LOOPS1,2

W.S. McCulloch


When we seek a mechanistic explanation of the various temporal rhythms of any and all living systems, we eventually come to conceive of their activities in terms of circularities in the nexus of their causation. When a cycle inhabits an anatomical structure in which the path from a first point to a second is different from that from the second to the first, we refer to that as a closed loop. When part of the path lies through the environment, we say that the loop appears to be open. Whether it is in fact open depends, in part, on the living system and, in part, on the nature of the environment. When it is truly closed, we can enlarge our notion of the system to include the necessary portion of the environment and so close the loop. On a grand scale, animals consume oxygen and produce carbon dioxide and plants complete the path. At the other end of the scale, within the single cell, where we cannot yet always see the paths, we suppose them to exist as the only mechanistic explanation of spontaneous activity.

We can divide these circular causalities into those that are simply driven by matter and energy and those in which these are only the physical substrate of signals conveying information. Typically, they are mediated by catalysts: genes, enzymes, hormones, transmitters, and nervous impulses. It is not with the biochemistry or the biophysics but with the circuit action of such nets conveying signals that we are concerned.

Either these actions are such that the output of some part decreases its input, called "negative feedback"; or they are such that its output increases its input, called "regenerative feedback."

Negative feedback fits Magendie's definition of a reflex, Watt's of a governor of a steam engine, Black's of a regulator of relays, Cannon's of homeostasis, and Wiener's, Rosenblueth's and Bigelow's of teleological mechanisms, which is central to the notions of governance, including servo systems, and so of cybernetics.

The regenerative feedback, which kept Watt's steam engine going and keeps us breathing, has no such history. The neurologist of the 1920's thought of it only as the vicious circle of disease causing convulsion, migraine, tremor, cramp, clonus, tic, chorea, and athetosis. He thought of the brain as a place where nothing was going on except as the consequence of afferent impulses.

This is not so surprising, for electrical amplifiers really took hold, notably in Adrian's laboratory, about 1930, before which brain waves were only questionable squiggles. A year or two later Renshaw, who felt he had a closed-loop regeneration in the ventral spinal cord, and I, thinking similarly of the cortex in epilepsy, approached Cobb and Forbes, who were discussing delay chains, to explain Sherrington's C.E.S. (central excitatory state). Forbes said that he did not agree with Kubie's article in Brain, or with Ranson who, subsequent to its publication, wanted to introduce regenerative loops for another reason. Sherrington had advised the publication of the article(1) in which his C.E.S. is neither state nor substance but a reverberation within or among neurons, a question that could eventually be tested electrically.

To me Kubie's article was the beginning of my attempt to handle information flowing in closed loops. I had, until then, thought only of information flowing through ranks of neurons much as genes flow through generations. I could not handle closed loops. It was some 10 years later that I met Pitts, who had the necessary modular mathematics. Part 3 of our first article(2) is very tough reading, for it is the first attempt to describe the behavior of such loops. We can say that with them the nervous system can compute any number that a Turing machine can compute with a finite tape. Doubtless such loops support those reverberations for short term storage of activity in memory that are necessary for the formation of traces that can survive quiescence of the net.

Pitts and I made use of them again, in 1947, in describing How We Know Universals;(3) but, generally, the theory of closed loops has proved very difficult. None of us doubts that neurons are nonlinear oscillators, but the theory of systems of these with proper couplings is still being sought.

In 1951, much was known about the core of the reticular formation and much more is known today, but there was no theory to guide experiment. So, with the help of Kilmer,(4) I began, several years ago, to look for a way to think about it. It was obvious that it could not be done by the theory of coupled nonlinear oscillators. The core was an iterated net with both excitatory and inhibitory links among its components and, if we would treat it as a string of similar computers, which is anatomically and physiologically proper, then we could apply Hennie's theory of iterated nets.(5) But Hennie had proved that most of the questions that we had thought important were recursively unanswerable; I mean that there is no way of making a repetitive procedure that will guarantee an answer. The rest Kilmer proved insoluble. There was nothing left for us to do but to begin again with the Ars Combinatoria. We tried it for a very few computing modules, say three or four, and found that the more modules we had the better it would do what we believe the reticular core does do.

Early in those days I began looking at the possible modes of oscillation of N neurons. If N equals 1, there is only tick-tock till the end of time. When N equals 2, there are 20 modes; but, when I tried to work it out longhand for 3 neurons, I bogged down at about a thousand. Schnabel(6) was with me then and knew how to count. He came in the next day and said that the number of modes of N neurons is

$$\sum_{k=2}^{2^N} \frac{(2^N)!}{(2^N - k)!\,k},$$

the number of cyclic arrangements of k distinct states out of the 2^N possible, summed over all cycle lengths. So the number of input lines needed is its log2. The next question was: Can all of these modes of oscillation be obtained from a single net? Moreno-Diaz(7) answered yes, and he proved it by Manuel Blum's constructive theorem.
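The count can be checked mechanically. Here is a minimal sketch in Python, assuming the formula as reconstructed above; the function name is ours, not Schnabel's:

```python
from math import factorial

def modes(n_neurons):
    """Count the modes of oscillation of a net of N neurons:
    cycles of k distinct states drawn from the 2**N possible
    states, summed over every cycle length k >= 2."""
    s = 2 ** n_neurons  # number of distinct states of the net
    return sum(factorial(s) // (factorial(s - k) * k)
               for k in range(2, s + 1))

print(modes(1))  # 1     -- the lone tick-tock
print(modes(2))  # 20    -- the twenty modes of two neurons
print(modes(3))  # 16064 -- well past "about a thousand"
```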

da Fonseca and I had begun looking at the topology of neuronal nets of various connectivities, particularly Kauffman's(8) net, in which each neuron receives from 2 and sends to 2. da Fonseca started to search the literature and came on Huffman's theory(9) of shift registers, published in 1955. It is based on prime polynomials over Galois fields. The shift register consists of a series of delays, each of which feeds the next, and the last feeds the first, and any delay may play on that return through a logical element. If that element computes an exclusive or, then the shift register is linear. Massey(10) had discovered an algorism by which such a register could be designed to be the minimal one capable of giving a sequence of zeros and ones that matches those it has received: it guesses the next digit; if the guess is correct, it changes nothing; if wrong, it adds another delay or changes the logic in a specific way. Both Huffman and Massey helped da Fonseca,(11) who proceeded to show how all nonlinear requirements could be met by linearizing the expression or by putting into its logic box inputs from all delays and an instantaneous logic of any Boolean function. With Massey he formed the algorism for their construction, which is often much shorter and never longer than that for linear shift registers.(12) This gives us two things we need: first, the proof that such a structure can be made to embody the program of any sequence of skilled acts, such as the playing of a sonata, in a minimal series of synaptic delays, say 20 neurons at most; and, second, that such a system with its embodied algorism has the proper self-referential property for making statements concerning the next, statements which at the time they are made can never be true or false but must have a third value.
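To fix ideas, a minimal sketch of such a linear shift register, in Python; the taps correspond to the polynomial x^4 + x + 1, which is prime (primitive) over the Galois field GF(2), an illustrative choice of ours rather than a register from the papers cited:

```python
def lfsr(taps, state):
    """A linear shift register: a series of delays, each feeding the
    next; the last feeds the first through an exclusive-or of the
    tapped delays, so the register is linear in Huffman's sense."""
    while True:
        yield state[-1]                     # output of the last delay
        feedback = 0
        for t in taps:
            feedback ^= state[t]            # exclusive-or of the taps
        state[:] = [feedback] + state[:-1]  # shift each bit one delay on

# With four delays and taps for x^4 + x + 1, the register runs through
# the maximal cycle of 2^4 - 1 = 15 nonzero states before repeating.
gen = lfsr(taps=[2, 3], state=[1, 0, 0, 0])
print([next(gen) for _ in range(15)])       # one full period of output
```

Massey's algorism runs this construction in reverse, growing the minimal register that reproduces a given sequence of zeros and ones.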

When I was working on probabilistic logic with Blum,(13) Verbeek,(13) Winograd, and Cowan,(14) I think we all had a feeling that with closed loops all computation would be hopelessly corrupted sooner or later by even the lowest level of noise. Per contra, the peculiarity of shift registers computing a cycle of maximum possible length is that, given noise, all they can do is to jump to some other part of the same cycle. For any other closed loops and any cycle other than maximal, the corruption is inevitable.
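A sketch of this peculiarity, continuing the illustrative x^4 + x + 1 register above: its maximal cycle passes through every nonzero state, so a single-bit error can only shift the register's phase, never strand it off the cycle (the all-zero state, the one fixed point, being the sole exception):

```python
def step(state, taps=(2, 3)):
    """One clock pulse of the x^4 + x + 1 register sketched above."""
    feedback = state[taps[0]] ^ state[taps[1]]
    return (feedback,) + state[:-1]

# Enumerate the cycle from an arbitrary nonzero start.
cycle, s = [], (1, 0, 0, 0)
while s not in cycle:
    cycle.append(s)
    s = step(s)
print(len(cycle))   # 15: every nonzero state lies on the one cycle

# Noise: flip any single bit of any state on the cycle.  Unless the
# flip yields the all-zero state, the corrupted register is still on
# the same cycle; it has merely jumped to another part of it.
for s in cycle:
    for i in range(4):
        corrupted = tuple(b ^ (j == i) for j, b in enumerate(s))
        assert corrupted == (0, 0, 0, 0) or corrupted in cycle
```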

There are definite advantages in handling information with probabilistic nets, that is, nets in which the transitions from state to state on receipt of a signal are only probable, rather than with a deterministic net (e.g., it takes fewer states to handle some kinds of decision). Hence, one would like in theory to separate the probabilistic aspects of the system from the closed-loop aspect. Also, for the sake of theory one would like to have a general conception. For this we are most indebted to Moreno-Diaz.(15) Just as Turing could define a universal machine, with a finite reading, writing, and stepping head and as much tape as it needed, that could compute any number, so one can define a universal net that can do what any net can do, and we can construct it trivially. Just as the first part of the tape of the universal Turing machine determines what number the machine shall compute, one can have an encoder that will determine what the universal net shall compute; a decoder, sampling every neuron at each clocked pulse, gives an output that obviates the alphabetical permutations of the neurons. Neither encoder nor decoder needs closed loops. Let the universal computer have them but be determinate and strictly Boolean.
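As a toy rendering of the distinction, assuming nothing beyond the text: a probabilistic net, reduced to its state-transition behavior, is a stochastic matrix per input symbol, where a deterministic net would have only zeros and ones (the numbers below are invented for illustration):

```python
import random

# One stochastic matrix per input symbol: P[u][s][s'] is the
# probability of passing from state s to state s' on receipt of u.
P = {
    "a": [[0.9, 0.1],
          [0.2, 0.8]],
    "b": [[0.0, 1.0],   # on input "b" the transitions happen to be
          [1.0, 0.0]],  # deterministic: a permutation of the states
}

def run(inputs, state=0):
    """Drive the probabilistic machine through a string of inputs,
    sampling each transition from the appropriate row."""
    for u in inputs:
        row = P[u][state]
        state = random.choices(range(len(row)), weights=row)[0]
    return state

print(run("aabab"))  # final state, different from run to run
```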

For every input to the encoder make a diagram, called a state-transition matrix, in which is recorded from which state the net goes to which state when given that input to the encoder. Stack them and project them to a plane below the stack, on which you project all of the inputs on account of which each state went into each state, i.e., the transition as a function of the input. These inputs can be continuous variables or probabilities or what you will. The theory for the universal net is complete. The encoder and the decoder are free of circles and hence manageable. With such ideas we should be able eventually to think our way through a theory of any neuronal net, including the core of the reticular formation. The difficulties to be expected are purely combinatorial.
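A minimal sketch of that bookkeeping, for a toy deterministic net of two neurons with a one-bit input; the update rule is an arbitrary Boolean function chosen only for illustration:

```python
from itertools import product

def net(state, u):
    """Next state of a toy Boolean net of two neurons given input u."""
    x, y = state
    return (y ^ u, x & y)   # an arbitrary Boolean update, for example

states = list(product([0, 1], repeat=2))

# One state-transition matrix (here a dictionary) per input...
matrices = {u: {s: net(s, u) for s in states} for u in (0, 1)}

# ...stacked and projected to the plane below: for each ordered pair
# of states, the set of inputs on account of which the first went to
# the second, i.e., the transition as a function of the input.
projection = {}
for u, matrix in matrices.items():
    for s, t in matrix.items():
        projection.setdefault((s, t), set()).add(u)

for (s, t), inputs in sorted(projection.items()):
    print(f"{s} -> {t} on inputs {sorted(inputs)}")
```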

Since I was interested in the nervous system, which does have surprising regularity of structure, in terms of which one can ask questions as to its circuit action, I have had little to do with what are called random nets. The first to excite my curiosity was one proposed by Kauffman to model ontogenesis. The data point to a connectivity of 2 in and 2 out of every neuron. With a net made of pure chance and the 16 logical functions of two variables tossed into each at random, one watches the action of the net started at a state chosen at random, sees how many states it traverses before it starts to cycle, called a run-in, and watches the length of the cycles. Even for small numbers of neurons, representing genetic sites, manual computations are staggering, but the run-ins and the cycles are both much shorter than any of us guessed. The problem proved so interesting in its own right, quite apart from ontogenesis, that Kauffman was given many hours of computing time. He was able to model the ontogenetic nets of viruses and bacteria, up to 8000 genes; but 8000² is a large state-transition matrix, and it takes many of them, each tried often, to get a fair sample. The (2 × 10⁶)² for mammalian cells cannot be modeled with all of the computer memory of the United States. The simulation shows that Kauffman is on the right track and that the laws of chance and number must account for what otherwise requires too much preordained harmony or too many miracles. Hence I tried for more than a year to get answers in the form of recursively computable probabilities and I have made some progress, but the end is not in sight.
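A sketch of the experiment as the text describes it, in Python; note that choosing each neuron's two sources at random gives exactly 2 inputs but only 2 outputs on average, a simplification of the 2-in, 2-out connectivity:

```python
import random

def random_kauffman_net(n, rng):
    """Build a random net of n formal genes: each receives from two
    others chosen by pure chance and computes one of the 16 logical
    functions of two variables, also tossed in at random."""
    sources = [(rng.randrange(n), rng.randrange(n)) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(4)] for _ in range(n)]
    def step(state):
        return tuple(tables[i][2 * state[a] + state[b]]
                     for i, (a, b) in enumerate(sources))
    return step

def run_in_and_cycle(step, state):
    """Start at a state chosen at random; the states traversed before
    the net begins to cycle are the run-in, the repeating part the
    cycle.  Returns (run-in length, cycle length)."""
    seen, t = {}, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return seen[state], t - seen[state]

rng = random.Random(1969)
n = 16                                    # 16 genetic sites, a small net
step = random_kauffman_net(n, rng)
start = tuple(rng.randint(0, 1) for _ in range(n))
print(run_in_and_cycle(step, start))      # typically far below 2**16
```

Run over many random nets and many random starts, both numbers stay strikingly short, which is Kauffman's observation.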

There is lurking in the background the possibility that the apparently random connections of the afferents to particular cells of the reticular core, and perhaps of these to one another with a reasonably constant connectivity, may have similar advantages based on the laws of chance and number. At least these are ways of thinking about those loops. For my introduction to them I am most indebted to Lawrence Kubie, who proposed that there were other possibilities than simple through-and-through conduction via irreversible synapses, more circuitous and diffuse, possibly in closed loops. He even considered waves of excitation in dendrite and soma unlike those of axon, and his paper(1) concludes with this memorable paragraph:

The circling excitation wave may be closely related to the excitatory agent which Sherrington has found it necessary to postulate, and inhibition may find its mechanism, as Keith Lucas suggested, in some expression of the refractory phase; to which the suggestion is added here that it may be of the refractory phase, or some other fundamental attribute, of the circling excitation wave.(1: 177)

Footnotes

1. Reprinted from The Journal of Nervous and Mental Disease, Vol. 149, No. 1, pp. 54-58, 1969.

2. This work was supported by the National Institutes of Health and by the U.S. Air Force (Aerospace Medical Division) Contract AF## (615)-3885.

References

1. Kubie, L.S. A Theoretical Application to Some Neurological Problems of the Properties of Excitation Waves Which Move in Closed Circuits. Brain, 53: 166-177, 1930.

2. McCulloch, W.S., and Pitts, W.H. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull. Math. Biophys., 5: 115-133, 1943.

3. Pitts, W., and McCulloch, W.S. How We Know Universals: The Perception of Auditory and Visual Forms. Bull. Math. Biophys., 9: 127-147, 1947.

4. McCulloch, W.S., and Kilmer, W.L. Summary of Research Progress: Theory of the Reticular Formation. Quarterly Progress Report No. 82, pp. 275-280. Research Laboratory of Electronics, M.I.T., Cambridge, Mass., 1966.

5. Hennie, F.C., III. Iterative Arrays of Logical Circuits. M.I.T. Press, Cambridge, Mass., and Wiley and Sons, New York, 1961.

6. Schnabel, C.P.J. Number of Modes of Oscillation of a Net of N Neurons. Quarterly Progress Report No. 80, p. 253. Research Laboratory of Electronics, M.I.T., Cambridge, Mass., 1966.

7. Moreno-Diaz, R. Realizability of a Neural Network Capable of All Possible Modes of Oscillation. Quarterly Progress Report No. 82, pp. 280-285. Research Laboratory of Electronics, M.I.T., Cambridge, Mass., 1966.

8. Kauffman, S.A., and McCulloch, W.S. Random Nets of Formal Genes. Quarterly Progress Report No. 88, pp. 340-348. Research Laboratory of Electronics, M.I.T., Cambridge, Mass., 1968.

9. Huffman, D.A. The Synthesis of Linear Sequential Coding Networks. In Cherry, C., ed., Information Theory, pp. 77-95. Butterworths, London, 1955.

10. Massey, J.L. Shift-Register Synthesis and Applications. Quarterly Progress Report No. 85, pp. 239-240. Research Laboratory of Electronics, M.I.T., Cambridge, Mass., 1967.

11. McCulloch, W.S., and da Fonseca, J.L.S. Insight into Neuronal Closed Loops from Shift-Register Theory. Quarterly Progress Report No. 85, pp. 325-327. Research Laboratory of Electronics, M.I.T., Cambridge, Mass., 1967.

12. da Fonseca, J.L.S., and McCulloch, W.S. Synthesis and Linearization of Nonlinear Feedback Shift Registers: Basis of a Model of Memory. Quarterly Progress Report No. 86, pp. 355-366. Research Laboratory of Electronics, M.I.T., Cambridge, Mass., 1967.

13. Blum, M., Onesto, N.M., and Verbeek, L.A.M. Tolerable Errors of Neurons for Infallible Nets. In Wilcox, R.H., and Mann, W.C., eds., Redundancy Techniques for Computing Systems, pp. 66-69. Spartan Books, Washington, D.C., 1962.

14. Winograd, S., and Cowan, J.D. Reliable Computation in the Presence of Noise. M.I.T. Press, Cambridge, Mass., 1963.

15. Moreno-Diaz, R., and McCulloch, W.S. Fiorello. Quarterly Progress Report No. 88, pp. 337-339. Research Laboratory of Electronics, M.I.T., Cambridge, Mass., 1968.

 


