I, who am no worse than you, swear before you, who are no worse than I, to be your scientific colleague and helper, if you submit to the experimental facts and to the logical laws of our Natural Science. If not, not.
(Compare with "El Coloquio de los Perros," Cervantes, 1610.)
For historical reasons, we in Neuropsychiatry have found ourselves in a curious predicament that began with modern physics when Galileo(1, 2) correctly excluded mental, that is, formal and final, causes from the explanation of physical events. Descartes,(3) based on eight years of dissection, similarly excluded mind as causal in animals conceived as automata. Instead he postulated nervous impulses in tubes with a string in them to close off the pulse when it had done its work. To a similar circularity of process Claude Bernard(4) attributed the stability of the internal milieu, and Magendie(5) defined the reflex in terms of a process in some part of the body causing impulses to go to the spine over the dorsal roots, reflected out over the ventral roots to the place where they arose, and there stopping or reversing the process that gave rise to them.
These notions culminated in Sechenov's(6) attempt to explain all psychological events in terms of reflexes. To do this, he was forced to extend the notion of a reflex to cover all responses, not merely inverse feedback. He gave us vagal inhibition of the heart and bulboreticular inhibition of all reflexes in the frog. But he had to assume plasticity to account for the acquisition of expedient modes of activity. He was content to think of these modes much as Darwin thought of spontaneous variation and the survival of the fittest to the environment. Forty-three years ago these conceptions, and their subsequent confirmations and elaborations, drove me from psychology to the anatomy and physiology, the physics and the chemistry, of brains. The neuronal hypothesis of Ramón y Cajal(7) pointed the way toward an analysis of the connectivity of the vast anastomotic net in terms of which to understand its functional organization. Electrophysiology has enabled us to trace its intricate activity in increasing detail. Biophysics, biochemistry, neuropharmacology and, more recently, electron microscopy and macromolecular biology are rapidly filling in the physical picture in detail.
Let us suppose it done, so that we could locate, qualify, and quantify the changes in disease and the traces in learning. Suppose we had a workable thermodynamic theory of systems open to matter and energy and a theory of nonlinear oscillators good enough to handle nervous activity. In short, suppose we had the answer to every question the strict physicist could ask as a physicist. It would not be enough. Neither regulation nor expediency is a physical notion. Neither is thinking or learning. Our neurology is couched chiefly in physical terms. Psychology and psychiatry cannot be. The efficient and material causes of physics have systematically excluded mental relations.
In physics, to account for stability we minimize a physical quantity, namely energy, and our model is a potential well. In biology or in engineering, we minimize error, i.e., deviation from an expedient value of anything that is to be stabilized. We model it in a closed loop having an input, an output, and a comparator that alters the input inversely by any measure of the difference between the actual and the expedient value of the output. It was Bigelow's realization that for this regulation all that had to return was information about error which prompted him, Rosenblueth and Wiener,(8) in 1943, to start cybernetics. Information is not a physical quantity. Its unit, or bit, is one answer, yes or no, to any question about anything. Its measure is novelty or surprise as to which message is received from a given ensemble of messages and is therefore its improbability, −Σ p log p, which is a pure number.
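The measure −Σ p log p is easy to exhibit in a few lines of code; the function name below is my own, and the base-2 logarithm fixes the unit as the bit, one yes-or-no answer:

```python
import math

def entropy_bits(probs):
    """Shannon's information measure, -sum(p log2 p): a pure number, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# One fair yes/no question carries exactly one bit; eight equally
# likely messages need three yes/no answers:
print(entropy_bits([0.5, 0.5]), entropy_bits([1/8] * 8))   # 1.0 3.0
```

A certain message, `entropy_bits([1.0])`, carries no surprise at all.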
I have yet to find a good translation of Sechenov's "The Elements of Thought"(9) in which he traced our learning to think abstractly by way of our motor relations to our environment, and to our sense of movement. What little I know of it is curiously reminiscent of the recently recovered Stoic, physicalistic or pansomatic, logic of propositions. The Stoics(10) regarded a proposition as a triadic relation of three physical things: first, the physical event in question; second, the speech or writing standing for it, which might be true or false; and third, a thing in the head of the proposer, or in the head of the receiver, like a "grasp by the hand," the λεκτόν. Moreover, with them also time was a necessary part of the definition of a proposition. The speech and the thing of which it was spoken were only intentionally related in the head of the speaker and of the hearer. Thinking is an activity of this intentional kind in which the ideas intended at the moment are the ideas of previous intentions.(11) Occam said that we think in two kinds of terms: one, natural terms, which we share with other animals, and the other, conventional terms, enjoyed by us alone. It is doubtful how long trains of thought can be carried on in natural terms. Our attempts to build perceptrons have not been very productive, and there are now theorems limiting their possibilities unless they contain closed, regenerative loops, as proposed by Lawrence Kubie in 1930.(12) Thinking in conventional terms, notably in number, was mechanized in the abacus. The first important machines of this kind were built by Ramon Llull.(13) Leibnitz(14) was the greatest in this tradition of mechanists, but these machines first came alive in Turing's machine, in which the value of the operand affects the next operation. In 1936, he was able to show that they could compute any number that a man could compute.(15, 16) In 1943, Pitts and I(17) proved the converse, assuming man's brain to be a proper net of mere threshold components.
We showed that activity reverberating in a closed path, by preserving the form of an input, could account for memory of an event. In theory one did not need to suppose a change in the physical properties of the net. We now know that reverberation occurs, and protein synthesis is suspected. There are certainly other possibilities to be considered. The requisite plasticity thus may be multiply embodied.
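A net of "mere threshold components" in the sense just described can be sketched as follows; the weights, thresholds, and gate names are illustrative choices, not drawn from the original formalism:

```python
def neuron(inputs, weights, threshold):
    """A threshold unit: fires (1) iff the weighted sum of its binary
    inputs reaches threshold; inhibition is a negative weight."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Logical AND, OR, and NOT realized as single threshold units:
AND = lambda a, b: neuron([a, b], [1, 1], 2)
OR  = lambda a, b: neuron([a, b], [1, 1], 1)
NOT = lambda a:    neuron([a],    [-1],   0)

print(AND(1, 1), OR(0, 1), NOT(1))   # 1 1 0
```

Since every truth function can be compounded of such gates, any computation a Turing machine performs on given inputs can, in principle, be carried out by a proper net of them.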
In 1947, Pitts and I(18) proved that unalterable nets of these unalterable neurons could be designed to perceive shape, regardless of size, and chord, regardless of key. Moreover, they could be designed so that their function was little perturbed by small perturbations of thresholds, of signal strengths, and even of local synapsis. The function of the net for recognizing such abstractions in perception needs only to compute a few averages, or sums over all transforms belonging to a group, of the value assigned by its arbitrary functional to the transform as the figure of excitation in the space and time of some mosaic of relays. Our particular model was well within the known anastomotic complexity of the net and conformed to all that was then known of the physiology. Our own and others' evidence has shown that computations we assigned to the cortex are in no small measure precomputed in the lateral geniculate and in the eye itself. This in no way invalidates the theory, which is mathematically adequate for explaining the perception of abstract universals. Recently, Minsky and Papert(19) have proved that to learn to recognize patterns in any powerful sense, the model must have not only the external loop providing reward or punishment, but also internal closed paths. The difficulty which this engenders will become obvious after we have considered coding.
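The group-averaging scheme can be caricatured in a toy program: average an arbitrary functional over all transforms of a finite group (here, cyclic shifts on a ring of relays, standing in for changes of size or key), and the result is unchanged when the figure itself is transformed. The template and figures are invented for illustration:

```python
def shifts(pattern):
    """All cyclic translations of a pattern: a finite group of transforms."""
    n = len(pattern)
    return [pattern[k:] + pattern[:k] for k in range(n)]

def invariant(pattern, functional):
    """Average the functional over every transform in the group; the
    average is the same for any translate of the input figure."""
    group = shifts(pattern)
    return sum(functional(p) for p in group) / len(group)

# An arbitrary functional: correlation with a fixed template.
template = (1, 0, 1, 0, 0, 0)
f = lambda p: sum(a * b for a, b in zip(p, template))

chord      = (1, 0, 0, 1, 0, 1)   # a figure on a ring of six relays
transposed = (0, 0, 1, 0, 1, 1)   # the same figure, shifted by one

print(invariant(chord, f) == invariant(transposed, f))   # True
```

The equality holds because both figures generate the same set of group transforms, so the same values are averaged.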
Neurons die at an increasing rate and, under optimum conditions, are subject to local fluctuations, beautifully measured by Verveen and Derksen.(20) To ensure reliable performance requires redundancy in an anastomotic net. Think of it as composed of layer after layer of neurons, each receiving encoded signals from the preceding rank, decoding them, computing, and re-encoding them for the next rank. Think of encoding and decoding as synaptic ramifications of axons and dendrites. Winograd and Cowan(21) have shown that, given a sufficiently rich anastomosis and a sufficient diversity of functions computed, we can have reliable computation despite scattered loss of cells and local fluctuation. For the requisite variety of functions, the neurons cannot be simple threshold devices but must be able to compute a much greater variety of functions, as we know real neurons do. The synapsis must be as great as it is; in fact, the richer it is the more flexible it is, and the more reliable.
Shannon(22) produced two theorems of information theory. First, he proved that for a sufficiently large ensemble of messages almost any sufficiently rich code is almost optimal; for us this means that the detail of synapsis in an associational cortex is not severely constrained. Second, he proved that if one transmits information over a noisy channel at less than channel capacity with a proper code, one can keep errors to a negligible value. This is called the information theoretic capacity. Winograd's and Cowan's "Reliable Computation in the Presence of Noise" gives us an information theoretic capacity in computation. The curious consequence of their work is that to ask the function of a particular neuron in associational areas is like asking the function of the second letter in every word in your language.
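A crude illustration of Shannon's second theorem, assuming a binary symmetric channel and using mere repetition with majority decoding (a weak code, but enough to show error being driven down by redundancy):

```python
import random

def send(bit, p_flip, rng):
    """Binary symmetric channel: each transmitted pulse may be flipped."""
    return bit ^ (rng.random() < p_flip)

def transmit(bit, repeats, p_flip, rng):
    """Repeat the bit over the noisy channel; decode by majority vote."""
    votes = sum(send(bit, p_flip, rng) for _ in range(repeats))
    return int(votes > repeats // 2)

rng = random.Random(0)
p_flip, trials = 0.1, 10_000
raw      = sum(send(1, p_flip, rng) == 0 for _ in range(trials)) / trials
repeated = sum(transmit(1, 9, p_flip, rng) == 0 for _ in range(trials)) / trials
print(raw, repeated)   # raw error near 0.1; the coded error is far smaller
```

Proper codes achieve this without the ninefold waste of repetition, which is the content of the theorem.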
What I have said thus far would hold even if all nerve impulses were gated each millisecond by a single clock and if all information were carried by the presence or absence of a pulse in each axon at each millisecond. Real neurons live and communicate in real, continuous time, and their messages are conveyed by modulation of intervals between pulses. The codes of the vestibular signals, of the stretch receptors, and perhaps of others involved in perception of position and motion, seem to be carried by repetition rate, which is a redundant form of pulse-interval modulation. Recent work of Lettvin, Gesteland and Maturana(23) proves that in the nose and in the eye the temporal coding actually multiplexes, carrying many kinds of information over a single channel, not just two kinds, as Wall(24) showed for temperature and touch in single somatic afferent peripheral axons. Feedback to the recipient neurons may not merely gate signals but also affect the encoding and hence alter the message and the way it is to be decoded. On the sensory side, this is comparable to the gamma efferents determining the response of muscle spindles and so affecting the reflexes initiated by a given stretch of muscles. As Barron suggested, it is only when this circuit becomes operative that motor neurons elaborate their dendrites. Presumably, similar elaborations occur in the dorsal plate only when motor responses are possible; for, in their absence early in maturation, effective perceptions fail to develop.
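Pulse-interval modulation itself is simple to demonstrate: encode each value of a message as the gap to the next pulse, then recover the message from the intervals alone. The scale factor `base` is an arbitrary choice of mine:

```python
def encode(values, base=0.01):
    """Pulse-interval modulation: each value sets the gap to the next pulse."""
    times, t = [0.0], 0.0
    for v in values:
        t += base * (1 + v)        # larger value, longer interval
        times.append(t)
    return times

def decode(times, base=0.01):
    """Recover the message from the intervals between pulses alone."""
    return [round((b - a) / base - 1) for a, b in zip(times, times[1:])]

msg = [3, 1, 4, 1, 5]
spikes = encode(msg)
print(decode(spikes))   # [3, 1, 4, 1, 5]
```

Repetition-rate coding is the redundant special case in which the same interval is repeated many times, so that any one interval suffices to read the message.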
As von Foerster notes,(25) self-organizing systems like these require that the system be open to a surround, be it the world or the rest of the body, that acts upon them and which they can affect; otherwise they could not properly exist. They cannot even be described in a language which is not open to the development of new concepts whose significance depends upon the context. We may talk about these problems in a natural language, but only at our peril. It leads us into traps comparable to the paradoxes of set theory, like the class of all regular classes. As Gotthard Gunther has so clearly shown,(26) the logic of being and the logic of becoming are not the same. A concept is logically of a higher order than the items we bring together in the act of forming the concept. To speak rigorously of such matters, one needs a theory of types in which the higher types are generated at a time subsequent to the generation of the items typed. This issues in a hierarchy of representations which we need to model the process of learning. In it we form new concepts to guide future conduct. Here Gunther's logic is needed. Fact has only one value: it is. A statement concerning a fact has two values: true or false. No factual statement concerning tomorrow is either true or false today. It can be made or understood only by a self-referential system that can distinguish fact from fancy, theory from observation, and jest from earnest. The model, for a self-referential system to make these distinctions, must make a comparison between an input and activity in a closed loop. As Pitts and I noted, reverberant activity embodies ideas persisting over some finite but often ill-defined sequence of operations, so that their mathematical expression requires, in effect, as many nested parentheses as there are operations. We have yet no transparent, powerful representation of them in any calculus, and few and feeble theorems at best.
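That a statement about tomorrow is today neither true nor false can be illustrated, by analogy only, with Kleene's three-valued connectives; this is a standard logic borrowed for the purpose, not Gunther's own calculus:

```python
T, F, U = 1, 0, None   # true, false, and undetermined (e.g., about tomorrow)

def AND3(a, b):
    if a == F or b == F: return F    # one false conjunct settles it today
    if a == T and b == T: return T
    return U                         # otherwise the matter is still open

def OR3(a, b):
    if a == T or b == T: return T
    if a == F and b == F: return F
    return U

def NOT3(a):
    return U if a is U else 1 - a

# "It will rain tomorrow" is undetermined today, yet some compounds settle:
print(OR3(U, T), AND3(U, F), NOT3(U))   # 1 0 None
```

Notice that `AND3(U, T)` remains undetermined: only a self-referential system distinguishing fact from fancy can await its resolution.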
The attempt to handle them in the extensive calculus of the Principia Mathematica(27) results in a confusion of types. The proper calculus is certainly at least four-placed and intentional. Intentions are irreducibly triadic relations. We sought a calculus for relations and, thanks to conversations with Lewey O. Gilstrap, Jr. on the matrix algebra for dyadic relations, Roberto Moreno-Diaz and I were able to understand C. S. Peirce's work on triadic relations.(28) The foundations are there. We know what prevented him from advancing. We have taken the next steps. Only time will tell how far that calculus will carry us.
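The matrix algebra for dyadic relations mentioned above is concrete: a relation on n individuals is a Boolean n-by-n matrix, and the relative product of two relations is their Boolean matrix product. The example relation is invented:

```python
def compose(R, S):
    """Relative product of two dyadic relations given as Boolean matrices:
    (R;S)[i][k] = 1 iff some j has R[i][j] and S[j][k]."""
    n = len(R)
    return [[int(any(R[i][j] and S[j][k] for j in range(n)))
             for k in range(n)] for i in range(n)]

# "x is parent of y" on three individuals 0, 1, 2:
parent = [[0, 1, 0],
          [0, 0, 1],
          [0, 0, 0]]

grandparent = compose(parent, parent)
print(grandparent)   # [[0, 0, 1], [0, 0, 0], [0, 0, 0]]
```

Peirce held that a genuinely triadic relation cannot be compounded out of dyadic ones, which is why such a matrix algebra alone cannot carry the calculus we sought.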
I wish I could say as much for theoretical advance in Kilmer's and my study(29) of the circuit action of that central structure, the core of the reticular formation, which commits the whole animal to one of a dozen or more incompatible modes of total behavior. Its business is this: given the condition of the organism and the state of the environment, to decide which mode of behavior is most expedient. The fundamental modes and the basic decisional mechanism must be inherited, for every vertebrate would be extinct if it had to learn them. We have other structures to embody learning, taking of habits, and planning long-range policy, notably the cerebral cortex. There the logic is inductive, ending in hypotheses. We have other structures, basal ganglia, cerebellar and lateral reticular nuclei, to execute the decisions. They program, time, alternate, and coordinate activity. We hold that their logic is as deductive as it is in any chain of command. Told that they are confronted with a case under some rule, they move, as in a syllogism, to the conclusion in action, the fact. The logic of the reticular core is abductive, the ἀπαγωγή of Aristotle.(30) Its circuit action resembles a diagnostic clinic in which each doctor, or module of the reticular core, has some knowledge of the patient, often largely overlapping the others' but no two alike. They hold a consultation, talking to each other until some with sufficient knowledge convince a sufficient majority, make the diagnosis, and initiate the therapy. The number of modules is, say, 10⁶, and the time is long enough for at least a score of interchanges before the decision is reached. If it has time, the reticulum can make the cortex attend to what the reticulum deems important; when it has nothing pressing, it investigates the world. Men are insatiably curious. They are by nature learners. When they cease learning they become inattentive and drop into some other mode of behavior.
While curiosity has killed scientists as well as cats, it has given Man knowledge as his greatest asset in survival. At present, Kilmer and I have a computer simulation of a dozen modules that does come to what we have defined as the expedient modal decision. It is neither obdurate nor too easily upset, and it almost always comes to substantial agreement in a proper number of interchanges of signals. The next step is to give it enough distributed reverberations to permit it to become temporarily conditioned. We expect to spend another year of computation, adjusting its nonlinear properties and possibly increasing its size. So much for our model of what lies behind the notion of expediency.
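The "diagnostic clinic" circuit action might be caricatured as follows; the modes, the sample size, and the update rule are all my own inventions, chosen only to show a panel of modules converging on a modal decision through repeated interchanges:

```python
import random

def consult(opinions, rounds, rng):
    """Each 'doctor' repeatedly revises its vote toward the majority of
    a small random sample of colleagues; the panel's final majority is
    the modal decision."""
    n = len(opinions)
    for _ in range(rounds):
        i = rng.randrange(n)
        sample = [opinions[rng.randrange(n)] for _ in range(5)]
        opinions[i] = max(sorted(set(sample)), key=sample.count)
    return max(sorted(set(opinions)), key=opinions.count)

rng = random.Random(1)
# A dozen modules, initially leaning toward one mode but not unanimous:
panel = ['flee'] * 7 + ['feed'] * 3 + ['fight'] * 2
decision = consult(panel, rounds=200, rng=rng)
print(decision)
```

With an initial majority, such sampling dynamics almost always converge to substantial agreement in a modest number of interchanges, which is the behavior the larger simulation exhibits.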
The third important aspect of the cybernetics of learning also appeared in 1943. Kenneth Craik, a physiological psychologist, wrote "The Nature of Explanation."(31) He modeled the brain as modeling its world, including itself, and as readjusting the model from birth to death. This gives us a notion of memory as distributed, even as its model is distributed. Hence, recall is not a process of consecutive search, as on a tape, nor one of seeking the right address in a random-access store, nor a store addressed by its contents, but one addressed by relations, i.e., a truly associative memory. This theory of modeling shows its greatest strength in our understanding of how one or a few well-established observations that are incompatible with a given model can make us alter its basic fabric, for they force us to construct a whole new framework of relations. Such are the great revolutions of physics itself. Logic and mathematics are not immune to them. In each, the best fate that can befall a theory we thought to be general is for it to be reduced to a way of conceiving special cases or special aspects under a more general theory. Frequently the new theories require, in description, a language of a higher level for concepts of former concepts.
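Recall addressed by relations, rather than by location or by exact content, can be sketched with a toy store; the relation names and items are hypothetical:

```python
from collections import defaultdict

class RelationalMemory:
    """A toy associative store: items are recalled not by address or by
    exact content, but by the relations they stand in."""
    def __init__(self):
        self.by_relation = defaultdict(set)

    def learn(self, subject, relation, obj):
        # Index the triple both ways, so either partial cue recalls it.
        self.by_relation[(relation, obj)].add(subject)
        self.by_relation[(subject, relation)].add(obj)

    def recall(self, *cue):
        return self.by_relation.get(cue, set())

m = RelationalMemory()
m.learn('canary', 'is-a', 'bird')
m.learn('ostrich', 'is-a', 'bird')
m.learn('canary', 'can', 'sing')

print(sorted(m.recall('is-a', 'bird')))   # ['canary', 'ostrich']
print(m.recall('canary', 'can'))          # {'sing'}
```

The cue is a fragment of a relation, not an address, so everything standing in that relation is recalled at once, as in a distributed model.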
The next most important model of memory appeared in W. Ross Ashby's "Design for a Brain"(32) in the section on ultrastability. With a wonderfully oversimplified model of neurons in loosely coupled nets, he showed that an activity, originally confined to reverberation in the periphery of the net, will come to involve structures ever deeper in the net. We know this to be true in the pathological case of causalgia, requiring surgery ever deeper in the nervous system as the years go by. The model makes it clear that, in the case of the learning of any one thing, what trace we must seek and where we must seek it will depend upon the whole sequence of events in all learnings by that particular man. If he learns A before he learns B, the learning of B will involve a modification of the traces of A, and vice versa.
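Ashby's ultrastability can be sketched in miniature: whenever the essential variable leaves its limits, a step-function resets a parameter at random, and whatever parameter restores stability is thereby retained. The linear plant, its bounds, and the parameter range are arbitrary choices of mine:

```python
import random

def ultrastable(disturb, limit=1.0, steps=200, seed=2):
    """Ashby-style ultrastability: while the essential variable x stays
    outside its limits, step-functions reset the parameter at random;
    the first parameter that keeps x within bounds is retained."""
    rng = random.Random(seed)
    param, x = rng.uniform(-2, 2), 0.0
    for _ in range(steps):
        x = param * x + disturb          # a simple linear plant
        if abs(x) > limit:               # essential variable out of bounds:
            param = rng.uniform(-2, 2)   # ...trigger a random step-change
            x = 0.0                      # and restart the transient
    return param, x

param, x = ultrastable(disturb=0.3)
print(abs(x) <= 1.0)   # True: the system has settled on a stabilizing parameter
```

No step-function "knows" which parameter is right; stability is simply the condition under which the resetting ceases, which is why the trace of any one learning depends on all that went before.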
Thus far, I have spoken only of cybernetic problems of learning. Let me, in closing, draw your attention to the success of its methods in investigating and understanding human learning of perceptual and manual skills. I am not interested in rote memorizing, which a recording tape or a photographic plate does better than we. Let us start with an adaptable self-optimizing control system and note that we require one language to describe it properly because that language corresponds to the way such systems handle signals. The values of the variables the system controls are its business. This lowest language you may think of as its object language. The values of the output it is instructed to seek belong not to this language, but to the language of command. But these command signals are themselves the language of a higher order of adaptive self-optimizing systems, themselves subject to commands of a higher order. Thus, if we are to think of them rigorously, we will have to have a hierarchy of languages. Of course these languages are but specified parts of natural language in which we could easily be trapped. Gordon Pask, who is thoroughly familiar with all of these difficulties, has built teaching machines that parallel these hierarchies. They are constructed to optimize the subject's learning by confronting him with tasks increasing in difficulty matched to his acquired skill. The machines detect when the learner's interest flags or he begins trying to outguess them. He is then ready to undertake a task of a higher order because he has already learned to command the skills of the lower order. In substance, he has invented a language of a higher order. The machines can detect, on statistical grounds, the difference between mere fatigue and the subject's deliberate errors to induce a change of the requirements, and the machines respond accordingly. Such a machine proves to be a better judge of how a student should learn than the student himself. 
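The difficulty-matching behavior of such a machine might be caricatured by a rule of this sort; the thresholds and the rule itself are hypothetical illustrations, not Pask's design:

```python
def adapt(difficulty, correct, streak):
    """A difficulty-matching rule (hypothetical): raise the task level
    after repeated success, ease it after failure, so the task tracks
    the learner's acquired skill."""
    streak = streak + 1 if correct else 0
    if streak >= 3:                  # mastery at this level:
        return difficulty + 1, 0     # move up and reset the streak
    if not correct and difficulty > 1:
        return difficulty - 1, 0     # flagging: step back down
    return difficulty, streak

level, streak = 1, 0
for outcome in [True, True, True, True, False, True, True, True]:
    level, streak = adapt(level, outcome, streak)
print(level)   # 2
```

A real teaching machine must also distinguish, statistically, fatigue from deliberate error, which this toy rule does not attempt.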
The outcome is that students learn more easily, more quickly, and reach greater skill than by the best coaching of an instructor, for the machines have adjustable parameters whereby the experimenter may himself, as he learns, adjust them systematically. Just remember that no two students are alike and that each receives the most suitable schooling. To put it simply, these cybernetic investigations of learning show, first, that it cannot be explained as mere adaptation. Second, the real questions of learning cannot be handled by the current statistical theories of learning couched either in terms of stimulus and response or in terms of responsiveness of systems of any ensemble. Third, our notions of drives, motives, and rewards prove to be irrelevant; men simply are learners. Fourth, the notion of simple sign-system construction is good for only one level. Put in the affirmative, a hierarchical structure is necessary and must be reflected theoretically in any rigorous linguistic structure by a corresponding hierarchy of languages. If you are vitally interested in these problems, read Gordon Pask's recent papers; or better, visit his laboratory. Without the proper conception of learning it is difficult to understand its disorders.
Gentlemen, I am not here to tilt against windmills, but, as a neuropsychiatrist, to acknowledge our debt to the logicians and mathematicians who have contributed to our cybernetic model of psyche and to the physicists and engineers who have devised our model of its embodiments. I am neither a monist nor a dualist, neither a mentalist nor a materialist, but a scientist, a pragmatist who knows truth cannot lead us astray, but that many a true conclusion is implied by any false premise. Exitus acta non probat!
Sechenov, I. M.: Selected Physiological and Psychological Works of I. Sechenov. Moscow, Foreign Languages Publishing House, 1952-1956, pp. 265-401. (The article was originally published in the magazine Vestnik Yevropy, nos. 3-4, 1878.)
Von Domarus, Eilhard: The logical structure of mind (with an introduction by W. S. McCulloch). In: Thayer, Lee O. (Ed.): Communication: Theory and Research. Springfield, Ill., Charles C Thomas, 1967, pp. 348-428.
Gunther, Gotthard: Cybernetic ontology and transjunctional operations. In: Yovits, Marshall C., Jacobi, G. T., and Goldstein, G. D. (Eds.): Self-Organizing Systems. Washington, D. C., Spartan Books, 1962, pp. 313-392.