W.S. McCulloch
Ladies and Gentlemen of My Cloth:
I have been asked to read to a tired audience an after-dinner lecture, like my odyssey, “Where is Fancy Bred?,” on a subject no one else would mention. Since Eilhard von Domarus is dead and John Abbott is much too modest, my topic is defined as that logic which we need in neuropsychiatry. From several proffered titles your chairman chose “Abracadabra,” which we shall understand as the use of symbols to affect man's conduct.
I doubt if anyone ever speaks or indulges in any gesture or symbolic act for any other end than to persuade or exhort himself or others to some action—explicit in overt behavior—or implicit in alternation of succeeding stages of thought and feeling. This hortatory facet of symbols we share with many beasts that have no words. By words I mean symbols that stand for things or their relations or for our notions of them, for this is the second facet of a symbol. In the theory of information the former is called the meaning of the symbol, or signal, to the recipient and is defined by MacKay in terms of the selective operation it exercises upon the transitional probabilities of his overt or covert behavior. The second, what is meant by the symbol, has a list of names, according to the particular theory of relations of that school that defines it. For here we are confronted with our own deficiency. We still lack a realistic logic, even a useful calculus, of relations of more than two relata. A gave B to C, A looks like B to C, A thinks, believes, hopes, dreams that B is C—are all beyond our comprehension. If and when we psychiatrists talk nonsense of these relations—and we do—all we can plead is that the proper calculus is still to seek. I would not let you think I have any solution to offer you. It is beyond my ken.
But I am here tonight for happier reasons. We are beginning to know a bit of the circuit theory of brains and are forming a logic suited to that theory. To make it clear to you I would begin about A.D. 1200 and sweep succeeding cobwebs out of the sky.
From A.D. 500, for 700 years, the scholastic philosophers had elaborated the realistic logic necessary for science. They had begun with those statements which are always true—the Eternal Verities—derived chiefly from logic and mathematics, but including what we call the laws of nature. To the former belong such truths of arithmetic as that 7 and 3 are 10; always were, are, and will be; and the Pythagorean theorem, that the square on the hypotenuse is equal to the sum of the squares on the other two sides. To the latter verities belong such laws as that like begets like. The former need no justification, for, for them, to be is to be understood. The latter, being mediated in some sense by perception, require, as Duns Scotus would say, a firm proposition residing in the soul—and I quote: “Whatever occurs as in a great many things from some cause which is not free is the effect of that cause.” This is Davy Hume's “habit of mind” with the qualification necessary to realistic logic—that the cause be not free—a requirement that all of our statistics, χ², Student's t, and the rest, seek to ensure—for we are realists aware of the nominalists’ criticisms. After William of Ockham one may never forget that concepts may be “mere” fancies lacking that which they mean—that which is in the world beyond our fancy. Whitehead, Harvard's great professor of the nature of natural knowledge, put it thus: “The existence of objects is the first law of science”—and an object is “a recognizability,” that is, a universal, subsequent to sensation in generation, prior in understanding or, as Ockham would say, “in adequation.” Would that we had them in psychiatry! Do Bridgman, your great physicist and philosopher of science, real justice. Define your concepts in terms of the operation by which you test their existence. What is a complex? What is a fixation? What is a neurosis?
I fear me we have lost our way. Science, in the sense of a realistic logic, is beyond us. But what of the persuasive aspects of symbols? That was not always so. There was a time when spells worked. And sometimes and in some places they still do. Let me give you one example. My chief at the University of Illinois was the great psychiatrist—now the papal knight—Sir Francis Gerty, who could never enter again into his mother's womb and be born a psychoanalyst. He asked me to find him a first-rate human being for a resident. I did. He was the great psychiatrist of Haiti, Doctor Mars. To him I will be always indebted for the only operational definition of neurotics that I have ever had. Our University of Illinois was full of them, for my psychoanalytic brethren wanted no psychotics, and each patient had had a superb study to rule out any known disease or gross perturbation of his physiology. Doctor Mars studied them all as mere organisms responding to stress of one sort or another and found them less well poised than normals. They overresponded to almost every test. This is exactly the opposite of my own findings on psychotics: manic-depressives, schizophrenics, involutional melancholics, and puerperal and postpartum psychotics—who are peculiarly resistant. But I am much more indebted to Mars for his amazement with American psychiatry. He explained to me that in Haiti he never saw such patients. The voodoo witch doctors took care of them very effectively. They sent him cases of general paresis, Pick's disease, Alzheimer's disease, cerebral arteriosclerosis, and the classic psychoses that could not be handled by their magic. This magic is the hortatory, or persuasive, potency of symbols—its power over patients—the great strength of the abracadabra.
Since this is an after-dinner lecture and this is the fifth page of the manuscript, perhaps I may read you a story to point up this difference in the use of symbols: the first use by the—shall I say patients? (they were scientists)—and the second by the psychiatrist, who, like me, wore a beard—only it was a goatee. The time is World War II. The Manhattan Project had gathered to its hush-hush bosom the best of our teen-age mathematicians with the first down on their chins. The project was so secret that the youngsters were forbidden to mention it to the draft board of the district. Imagine them, one after another, and all cross-talking, appearing before the examining board, including the goatee. And his question—“Young man, why do you wear a beard?” Answer: “Old man, why do you wear a beard?” “You think you're clever?” “If I didn't I'd be a fool.” “What are you interested in?” “Symbols.” “Why?” “They are my business.” “What else?” “Nothing.” And after an inquisitive pause the conscientious psychiatrist writes “schizophrenia”—and the draft board, “4F.” I'm sorry; that is the punch line, and no one laughed, but it couldn't wait for the end of the story. Some years later the examining goatee at the American Psychiatric Association told us of the strange collection of downy schizophrenics he had found in a single district in New York City, and Dr. Eilhard von Domarus said, “What do you mean by schizophrenia?” He answered, “When people talk incomprehensibly, symbolically, I feel they have schizophrenia.” To which Eilhard asked, “Mein Gott, Doctor, in what part of your anatomy do you have that feeling?” And the goatee put his hand on his hypochondrium!
The abracadabra of psychiatry persuaded the draft board; and of our mathematicians we had, not cannon fodder, but one atomic bomb!
No one in his right mind doubts the power of the abracadabra. With the decline of realistic logic, beginning, say, in 1200, the cabalistic symbols began to be prepotent. From then until nearly 1850 they had more so-called cures of neuropsychiatric cases to their credit than all the legitimate practitioners of medicine combined—by magic words, of course, whether it be mesmerism, hypnotism, a crucifix, a shrine, the bones of a saint, or a rabbit's foot. The question that has puzzled me most is why psychoanalysis fails so miserably. I think I know the answer, but before I come to it I would like to make the point stick. Hitler knew it; Goering enjoyed it.
The story is of a Bavarian farmer whose only cow lay down on her side in the barn and could not be got up by him, by his friends, or by the Nordic veterinarian. After dark they sent for a Jewish cabalist who, having assured himself that it was a cow, and not dead, whispered into its ear. Whereupon it stood up. The farmer and his friends, overjoyed, insisted on knowing the magic words. To which he answered, “I said ‘Heil Hitler,’ and in this country when one says that, all the cattle stand up.” So Freud wondered why psychoanalysis was so ineffective; in his last book he opines that it never works except through transference.
Let me admit my failure. Some ten years ago, when I read all of his writings and tried to find his antecedents, I missed two things, both of which omissions are apparent in my paper called “The Past of a Delusion.” The first I heard of as soon as Sir Francis Walshe read it. Whence had come to France—to the very clinics where Freud went—the notion of the subconscious as an explanation of hysteric symptoms? The answer is simple, once one has the facts. It stems from Whewell and from Hamilton and was brought to those very clinics by Laycock, Professor of Neurology and Psychiatry at Edinburgh (whose most famous pupil was neither Freud, nor any of his teachers, but Hughlings Jackson). Second, and to me most troublesome, was the cabalistic nature of Freud's writings—particularly his “Moses”—inspired, according to Freud, by Michelangelo's horned Moses but lacking the horns. I had intended to make something of this to you tonight, but I now know that the great scholar of the Academic Lecture has it well in hand, and I bow out happily, for he will do it better than anyone else. I must make this point, for it is essential to abracadabra, and Freud knew it. The old symbols had worked cures he could not effect, and he could not understand why. Having watched the procedure for more than a quarter of a century, as an outsider but also as a professor of psychiatry, and having seen what happened to my students, I am beginning to think I may be on the trail of the answer to be sought. The old abracadabra, like the secret name of God, by the authority of mysterious symbols, carried the essentially healthy ideas of the cabalist, charlatan, or philanthropist, and reformed the mind of the patient to conform to the sanity of his doctor. The psychoanalytic procedure works in reverse, for the doctor is the listener, the subject is the speaker, and the distorted mind of the patient, by the power of abracadabra, molds his doctor to his own disease. I would be the last to belittle this power, whether it be the goatee's “schizophrenia” on the draft board, ending in 4F, or “Heil Hitler” on the cattle. They stand up. Only, in psychoanalysis, it works in reverse. Too bad. Symbols, icons, have that power over the mind of man.
The best that one may hope for the analyst is that his infection by his patient, or by his analyst, will leave him with an immunity. This I have seen happen on several occasions, and it has made these analysts my friends. But all too often the immunity is generalized, as it was with the goatee who had the hypochondriacal reaction to all symbols he did not understand. Would that the cattle had a like reaction to “Heil Hitler.” But we are not cattle—or our god, like Moses, would have horns. I mean Michelangelo's Moses—not for cuckoldry, but for godhead—which is perhaps why Freud omitted them, though his inspiration enjoyed them. Let me make the necessary critical remark.
Exactly at the time when William of Ockham's criticism of logic upset the balance in favor of an anti-intellectual empiricism, Ramon Lull was born in Spain (about 1230). He also was concerned with the power of icons, or symbols, not for persuading others, but for organizing one's own thinking. You have already seen ABRACADABRA; at the end you will see Lull's symbol for the analysis of the syllogism. They are equally symbolic. To the psychiatrist they are equally unintelligible. But the one is only of use in persuading patients, the other in organizing one's own ideas. Pragmatically, in the sense of your great psychologist William James, they are both realistic, for they both affect man's thinking; but his teacher—of Cambridge, though not of Harvard—distinguished them. The abracadabra relates, in its realism, only to persuasion. Lull's existential diagram relates the form of the syllogism to its content in fact, which is pragmatism, the realistic logic of Charles Peirce. This is the point of departure. What we need by way of a logic for neuropsychiatry has its roots in realistic logic and has a long history—through Lull, through Leibniz, Euler, and Venn, through De Morgan and Schröder, and chiefly through Charles Peirce. True pragmaticism rests on this: that what is true works; false pragmaticism on this: that what works is true. The downy mathematicians’ equations worked because they were true. The goatee's diagnosis worked, but it would be a mistake to say, therefore, that it was true. I will waste no more time on the erroneous logics of psychiatry—for there are many—but stick to the development of an “abracadabra” for the probabilistic logic in the circuit theory of the brain. From the followers of Lull, Euler picked up, and Venn perfected, the use of intersecting circles and closed curves to convey class inclusions and to build logical machines to handle syllogisms. Boole, attempting to quantify the predicate—all men are some mortals—all men are all featherless bipeds—set up a calculus of 1's and 0's for his logical machine. De Morgan dreamed up the calculus of relations, and Peirce created the logic of relatives. From this sprang the calculus of propositions, which has grown apace in the work of Frege, Peano, Whitehead, and Russell, and a host of others, into a powerful method for handling logic. I think it was in 1931 that Gödel arithmetized logic, which reduces proving theorems to calculating numbers. Then Turing proved that a finite machine with a finite number of states, marking and erasing one square at a time on an infinite tape, could compute any computable number. Before he had built his machine, Pitts and McCulloch—picking up the symbolic structure of the Principia Mathematica and subscripting every symbol for the time when a particular neuron made its all-or-none statement, i.e., fired—developed a logic for ideas immanent in nervous activity. In substance, it proved the equivalence of all general Turing machines, including brains. In that paper Pitts, using Larry Kubie's notion of reverberating chains of neurons, gave us a theory of temporal invariants, of a kind for which all others are at best surrogates. Thus we had a theory of universals abstracted from the time of their occurrence. Von Neumann made this text the basis for teaching the theory of computing machines, which was then in its infancy. In a subsequent paper on how we know universals we were able to generalize our notions by means of group invariants computed by nervous nets.
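To fix ideas, here is a minimal sketch of such an all-or-none threshold neuron, written in Python rather than in the subscripted notation of the 1943 calculus; the weights, thresholds, and names are illustrative only, but they show the sense in which the propositional connectives are immanent in nets of such elements.

```python
# Minimal sketch of a McCulloch-Pitts style neuron: an all-or-none element
# that fires (returns 1) when the weighted sum of its afferents reaches its
# threshold.  Weights, thresholds, and names are illustrative, not taken
# from the 1943 paper.

def neuron(weights, threshold):
    """Return a function computing one all-or-none 'statement'."""
    def fires(*inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0
    return fires

# The propositional connectives as single neurons:
AND = neuron([1, 1], 2)    # fires only when both afferents fire
OR  = neuron([1, 1], 1)    # fires when either afferent fires
NOT = neuron([-1], 0)      # a single inhibitory afferent silences it

def a_and_not_b(a, b):
    """A small net: 'A and not B' composed of two neurons."""
    return AND(a, NOT(b))

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"A={a} B={b} -> {a_and_not_b(a, b)}")
```

Composing such elements, rank upon rank and with closed loops, is what carries the calculus from single statements to the temporal invariants just mentioned.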
This satisfied von Neumann on that score, and recently it has been shown that this logic of the lowest level of propositions, with the quantifiers for all and for some and mere equality, suffices for all mathematics. Thus we may regard it as proved that a brain, even the oversimplified brain of oversimplified neurons, can handle the mathematics in its own right, can find any figure of excitation in its input, can remember and can come to any conclusion that follows from its premises, provided only that its circuitry, its wiring diagram, its neuroanatomy and neurophysiology are appropriate to that task.
Today it is possible to say that we can make a calculating machine that will do with information whatever you want, provided you state in finite and unambiguous terms what you want it to do. Unfortunately we can build so many and such diverse machines that this statement is of little help to the neurophysiologist, for he must find out for himself by empirical fumbling in what part of the nervous system a given item is computed. The most successful of our attempts at this experimental epistemology is “What the Frog's Eye Tells the Frog's Brain,” written by Lettvin as I believe science should be written. The second difficulty is that we do not know precisely what we think brains do with information and cannot possibly state it finitely and unambiguously. The British psychiatrist W. Ross Ashby was the first to make a serious attempt to meet this difficulty. In his Design for a Brain he had given us the first picture of memory with any verisimilitude and a mechanistic description of ultrastability achieved by loosely coupled systems. It is now in its second, and far better, edition. He saw this second dilemma and wrote an article, “Can a Mechanical Chess Player Beat Its Inventor?” In it he shows that it can, and so started the whole movement on artificial intelligence. Gordon Pask has built the most entertaining of learning machines out of iron salts in solution. Roughly, artificial intelligence can be divided into deductive machinery—à la Turing; inductive machinery—à la Uttley, of the National Physical Laboratory in England; and abductive machinery—à la Oliver Selfridge, of Lincoln Laboratory, with his “Pandemonium” and his “Sloppy.” Deductive machinery starts from rules and cases and determines the result. Inductive machinery starts from cases and results and generalizes, producing rules; such is Pavlovian conditioning. Abductive machinery starts from rules and guesses that this is a case under that rule; this, the hypothesis, is the basis of modern science and probably the source of all insight, intuition, and invention. I doubt whether there is any fourth way of thinking that leads toward truth. I speak here of thought, for there are, of course, many mechanisms to these ends. And in Cambridge, starting from Charles Peirce's definition of a third kind of quantity, “information,” came our information theory. And from feedback notions—generalized by Julian Bigelow and made famous through his collaborative efforts with Rosenblueth and that most productive of our mathematicians, Wiener—cybernetics was born. This and Ashby's ultrastability are certainly more responsible for man's adaptability and survival than almost any other single item. But I fear me not even these excellent abracadabras have been read with understanding by us psychiatrists. Recently I heard a great neurophysiologist, who has been subjected to all these notions, tell a funny story about a psychiatrist, and my remarks remind me of that story. The psychiatrist had built a computing machine, or artificial brain, and asked it whether it could think like a man. The machine replied, “That reminds me of a story!” Except for the flavor of a recursive function, I fear that every psychiatrist is in that same boat, though he, more than any other man, needs to understand the circuit theory of man's brains.
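The triad of deductive, inductive, and abductive machinery described above can be put into a toy sketch using Peirce's well-worn bean-bag example; the Python below is only my illustration of the three directions of inference, not of Turing's, Uttley's, or Selfridge's machinery.

```python
# Toy rendering of Peirce's three modes of inference, using his bean-bag
# example.  Purely illustrative; the 'rule' is just a pairing of an origin
# with a colour.

RULE = ("from this bag", "white")     # rule: all beans from this bag are white

def deduce(rule, case):
    """Rule + case -> result: 'these beans are from this bag' -> 'white'."""
    origin, colour = rule
    return colour if case == origin else None

def induce(observations):
    """Cases + results -> rule: generalize only if every observed bean agrees."""
    origins = {o for o, _ in observations}
    colours = {c for _, c in observations}
    if len(origins) == 1 and len(colours) == 1:
        return (origins.pop(), colours.pop())
    return None

def abduce(rule, result):
    """Rule + result -> guessed case: 'these beans are white' -> perhaps 'from this bag'."""
    origin, colour = rule
    return origin if result == colour else None

print(deduce(RULE, "from this bag"))               # white
print(induce([("from this bag", "white")] * 3))    # ('from this bag', 'white')
print(abduce(RULE, "white"))                       # from this bag -- only a guess
```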
When my boon companion, Hank Brosin, was about to examine one of our most gifted promoters for his Boards in neuropsychiatry, he, Hank, stopped me suddenly in the hall and said, “Warren, give me the toughest question for a gentleman of our cloth.” I said, “What is a normal man?” He answered, “That's good,” marched to the other end of the hall swinging his arms, and returned with his hands under his chin and queried, “What is the answer?” I don't remember my answer, but that is not what interests me most. I know well the digital tricks of the brain: all-or-none impulses, its redundancy of coding, its redundancy of channel by parallel projections with overlap preserving as long as possible the topology of the input. I know its multiple closed-loop servosystems stabilized by bulboreticular inverse feedback, and a host of its local tricks for staying right side up and going forward adaptively in a changing world. I know also the “utility of death” to systems once adjusted, the price metazoa pay for memory once fixed in our perseverating systems. Only our shuffled genes can save our kind. Though I know well its importance, I will not even digress into that inheritor of Descartes’ pineal gland, the much-belied reticular formation, for I have no theory adequate to its function.
Instead I shall use the remaining fourteen minutes to mention what I have been after since 1952, when I came to the Research Laboratory of Electronics to work on the circuit theory of brains.
In Ward Halstead's Symposium on Cerebral Mechanisms I displayed my icons, my abracadabras, for teaching symbolic logic to neurologists and psychologists. For youngsters and for me they work well. They are Venn's diagrams. They are composed of intersecting curves, each bisecting every area produced by earlier curves. Into them we can put one for true, zero for false, whereupon they become Wittgenstein's truth tables for calculating the outcome of combinations of propositions; but, unlike his, they can be extended to propositions with any number of arguments. Each is, then, the proper symbol for the firing of a given neuron on receipt of such and such inputs. If one supposes, for reasons of signal strength of afferents or fluctuations of threshold, or even for perturbations of connections, that they will misbehave with a given probability, call it p, one can put p's into appropriate places in the diagram and operate with these by obvious rules. This creates what von Neumann wanted, a probabilistic logic: not a logic of probabilities but one in which the function, not merely the argument, is only probable. With these I worked for seven years alone before I found my way to answers to three of von Neumann's questions. The first is: How can we construct circuits of unreliable components that are more reliable than the components? He attempted it with neurons all of one kind, each computing the same function, neither A nor B, with only two inputs per neuron, and finally with equiprobable failure of components which either fired or failed to fire, regardless of their inputs. He was very unhappy with the outcome. Neurons had to be far better than any of us believe them to be, and only then, by using enormous numbers of multiplexed channels with tricky interventions, could he obtain improvement. But, using my abracadabra in a strictly Pythagorean manner, I was able to show that each of his three unrealistic assumptions had prevented a reasonable and realistic answer. Had he let each neuron have the possibility of computing many functions, determined by its threshold, and let it have three or more inputs, he would certainly have had infallible circuits from fallible components very cheaply.
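The flavor of the redundancy argument can be seen in a small Monte Carlo sketch, mine rather than von Neumann's multiplexing construction or the Venn-diagram calculus itself: each neuron errs with probability p, and a bundle of three such neurons read by a majority element already does far better than any one of them, so long as the reading element can be trusted, which is precisely the difficulty that made von Neumann unhappy.

```python
import random

# Monte Carlo sketch: a threshold 'neuron' that errs with probability p,
# versus a bundle of three such neurons read by a majority vote.  The
# bundle's error falls from p to about 3*p**2, PROVIDED the element reading
# the bundle is itself trusted; von Neumann's real difficulty was that it
# is not, which is why he needed redundant bundles throughout the circuit.

P = 0.05          # probability that any one neuron misbehaves
TRIALS = 200_000

def noisy_and(a, b, p=P):
    """An AND neuron whose all-or-none answer is flipped with probability p."""
    out = 1 if a + b >= 2 else 0
    return out ^ 1 if random.random() < p else out

def majority(bits):
    return 1 if sum(bits) * 2 > len(bits) else 0

single_errors = bundle_errors = 0
for _ in range(TRIALS):
    a, b = random.randint(0, 1), random.randint(0, 1)
    truth = 1 if a + b >= 2 else 0
    single_errors += noisy_and(a, b) != truth
    bundle_errors += majority([noisy_and(a, b) for _ in range(3)]) != truth

print("single noisy neuron :", single_errors / TRIALS)   # close to 0.05
print("bundle of three     :", bundle_errors / TRIALS)   # close to 0.007
```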
The second problem that always bothered him was why a nervous system, under the influence of alcohol or absinthe to such an extent that all of its thresholds shifted so that each neuron computed some improper function of its input, still gave a usable input-output function. This is the great trick of the respiratory mechanism, which keeps working under surgical anesthesia and, except for mechanical intervention, even in epileptic stridor. This, in our abracadabra, is not difficult to understand or to construct by proper design of nervous nets.
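How such a net might be built can be suggested by a toy search, my own and not the construction alluded to here: hunt for a small two-layer net of threshold neurons whose input-output function is unchanged when every threshold is raised by one, even though the individual neurons then compute different functions.

```python
from itertools import product

# Toy brute-force search (illustrative only, not the published construction):
# two inputs a, b feed two hidden threshold neurons; an output neuron sees
# a, b and both hidden neurons.  We look for a net whose overall function is
# identical when EVERY threshold is raised by 1, although at least one hidden
# neuron then computes a different function of a and b.

INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]
WEIGHTS = (-1, 1, 2)       # candidate synaptic weights (no zero: every synapse counts)
THRESHOLDS = (0, 1, 2, 3)

def fire(weights, inputs, theta):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def hidden_table(wts, theta, shift):
    return tuple(fire(wts, ab, theta + shift) for ab in INPUTS)

def net_table(h1, h2, out, shift):
    (w1, t1), (w2, t2), (wo, to) = h1, h2, out
    table = []
    for a, b in INPUTS:
        n1 = fire(w1, (a, b), t1 + shift)
        n2 = fire(w2, (a, b), t2 + shift)
        table.append(fire(wo, (a, b, n1, n2), to + shift))
    return tuple(table)

hidden_units = [((wa, wb), t) for wa, wb, t in product(WEIGHTS, WEIGHTS, THRESHOLDS)]
output_units = [(ws, t) for ws in product(WEIGHTS, repeat=4) for t in THRESHOLDS]

for h1, h2, out in product(hidden_units, hidden_units, output_units):
    f0, f1 = net_table(h1, h2, out, 0), net_table(h1, h2, out, 1)
    shifted = any(hidden_table(w, t, 0) != hidden_table(w, t, 1) for w, t in (h1, h2))
    if f0 == f1 and shifted and len(set(f0)) == 2:   # same non-constant overall function
        print("hidden:", h1, h2, "output:", out)
        print("overall function over", INPUTS, "is", f0, "at both threshold settings")
        break
```

Whatever net the search prints is crude, but it makes the point: nets can be so wired that a common shift of threshold leaves the input-output function intact.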
His third problem, presented in his famous lecture to the American Psychiatric Association (his last lecture to biologists), was this. The eye has a layer of photoreceptors and behind it only two computing layers, bipolars and ganglion cells. We know that there is a redundancy of 100 to 1 between receptors and ganglion cells and that about a tenth of the fibers in the optic stalk go from brain to eye, to tell it what to compute. What sort of elements are these neurons, that the functions to be computed are so flexible as to rate a 100-to-1 reduction in channel capacity? This also is now easily answered, for, if each bipolar cell had only a couple of efferent axons impinging on it and responded to only two rods or cones, it could be told to compute 15 out of the 16 possible logical functions of its inputs. If it received from three, and if both bipolar and ganglion cells were affected, then it could compute 253 out of the 256 logical functions. The proof of the latter we owe to the mathematician Prange.
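The counting behind those figures can be checked, at least for the simplest case, with the sketch below. It uses a bare linear-threshold element of two inputs, my simplification rather than the fuller model with efferent control of both bipolars and ganglion cells, and it finds that varying weights and threshold already covers 14 of the 16 Boolean functions, the two exceptions being exclusive-or and its negation; the richer two-layer arrangement is what carries the count to the 15 of 16 and 253 of 256 cited above.

```python
from itertools import product

# Sketch: how many of the 16 Boolean functions of two inputs can a single
# linear-threshold neuron compute as its synaptic weights and threshold vary?
# (A simplification of the retinal argument above: here nothing is controlled
# efferently except the weights and threshold themselves.)

INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]
WEIGHTS = range(-2, 3)       # -2 .. 2
THRESHOLDS = range(-2, 4)    # -2 .. 3

def table(w1, w2, theta):
    """Truth table of the neuron: it fires iff w1*a + w2*b reaches threshold."""
    return tuple(1 if w1 * a + w2 * b >= theta else 0 for a, b in INPUTS)

realizable = {table(w1, w2, t)
              for w1, w2, t in product(WEIGHTS, WEIGHTS, THRESHOLDS)}
all_functions = set(product((0, 1), repeat=4))

print(f"{len(realizable)} of {len(all_functions)} two-input functions are realizable")
print("beyond the reach of a single threshold neuron:")
for f in sorted(all_functions - realizable):
    print("  ", dict(zip(INPUTS, f)))    # exclusive-or and its negation
```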
I no longer have to work alone on these Diophantine solutions of the Pythagorean logisticon, where small whole numbers assert their awkward elbows. I have now with me Manuel Blum from Venezuela, Leo Verbeek from Holland, and Jack Cowan from Edinburgh—every one of them brighter than I ever was and enjoying youth, which should never be wasted on young people. Industry has caught on to the utility of artificial neurons, and even the armed forces are having a look at minimizing the circuitry required for Venn sequences of the appearance of ones instead of zeros.
Ladies and Gentlemen, the hour grows late and I close rapidly. We have now an oversimplified theory, a theory organized by an abracadabra to match our intuitive processes. It will be made rigorous. It will make it possible to define the “normal man” and, in terms of abductions, even a true Bleulerian schizophrenia—not merely a feeling in the hypochondrium of the psychiatrist. The pragmatism of William James will give place to the pragmaticism of Charles Peirce. Curiously, those who will have brought it about are often psychiatrists—Meynert, Edinger, Westphal, von Monakow, von Economo, Berger, Bumke—and, in their generation, Sigmund Freud, to be remembered for his proper location of the perikaryon of many afferent neurons in the spinal cord of the frog, and, of course, my teacher, Dusser de Barenne. Among my juniors are W. Ross Ashby of England, Schützenberger of France, Braitenberg of Naples, and, of my many students, Lettvin and Scheibel. Psychiatry was ever thus.