I want to make clear two of the notions which Dr. Rioch used very ably in his description of psychotherapy: the notion of information and the notion of give and take between human beings in terms of feed-back. Now, one of the most important things that has happened to my generation is that many of us, who started out in psychology and shifted over to neurophysiology, as time went on were compelled to shift from thinking in neural terms to thinking in terms of energy and in terms of structures in the way our grandfathers had done in physics. But as we were getting acquainted with those disciplines, we found a new discipline arising: namely, communication-engineering, which is a much more radical departure from physics than at first appears.
Let me speak first of the physicist. The physicist deals with a world in which things happen or else do not happen. He observes and he records what happens. As long as he is doing this and formulating theories about it and testing out those theories, he remains a physicist and nothing more. But the moment a physicist tells you, “This instrument is not behaving according to Hoyle,” he has ceased to be a physicist and has become a communication-engineer; for he is now speaking of a signal.
I have given what I hope is an oversimplified statement, and probably it will not turn out to be true just as I have given it to you. The difficulty in guessing at central nervous system functions is always that any function given simultaneously in space in a set of relay nets can be converted into one given sequentially over a single neuron. You will need, then, as many relay times as you needed relays in space. The problem that lies before us who work on the physiology of the brain is to propose for ourselves mechanistic hypotheses as to its working. These must be such that they can be checked anatomically, to make sure that the nervous system is so wired, and physiologically, to see that it does so operate. In that we will expect aid. At the present time our physiological knowledge is ahead of our anatomical knowledge, and several of the best neurophysiologists, like Lorente de Nó, are going back to discover the necessary anatomy.
The brain as involved in behavior is, for the psychologist, that which is in the black box between an input and an output. He is welcome to make any kind of assumption concerning its content that he has a mind to, provided it will give him that output for that input. The difficulty is that when he seeks a model to represent the inside of that box, it is extremely difficult for him to invent one that will give him the required output for that input. And it is Rioch's delightful proposal that the proper model for the action of the brain in behavior is the action of the brain! That is only hard on us, the physiologists, who have to find out how it works.
I want first, for reasons that will appear later, to speak of the simplest of all possible signals: namely, the kind of signal which either happens or else does not happen—an all-or-none signal. Physically it is some sort of an event propagated somehow through space and time. It is a perfectly good physical affair in the sense that the physicist can find it then and there. It begins somewhere at one time and it ends somewhere else at a later time. But that is not all that it is. It has a second aspect: namely, it is true or else it is false.
By that I do not mean anything very difficult to understand. If a telephone bell rings and you go to the phone, pick it up, and there is no call, it was a false signal. Maybe lightning struck a telephone pole. Maybe insulation burned out somewhere. One produces such false signals whenever, along the line of communication, one introduces a disturbance which is a real, physical affair, but not the one presumably carried over that line. The basic difference between communication-engineering and physics is that in communication-engineering one deals with a signal, an entity which, even in the simplest case, is true or else false, a property which no simply physical thing has as a physical thing. It is this property, of being true or else false, that in the case of all-or-none signals, makes it possible for us to set up an easy way of keeping track of information, quantitatively.
Let me try to make clear the notion of information as we obtain it from the world. We have receptors, so-called “pick-ups,” devices which can initiate impulses in neurons that lead those impulses into the central nervous system. The impulses that come from those receptors are all-or-none in this sense: whenever a neuron is excited, it does all that it can then do. This naturally follows, for a neuron is a telegraphic type of relay, and the size of the signal it delivers depends on its sending end, not on the size of the signal it received. When an impulse arrives in the central nervous system, it signifies only that a neuron was adequately stimulated; that is to say, was stimulated in the manner appropriate for the stimulation of that neuron.
If we put an electrode on that neuron and initiate an impulse in it, the impulse is a perfectly good physical impulse, but it is a false signal. If you press on your eye, you will see a light when there need be no light. I mean nothing more when I say “false signal.”
Now let us consider a collection of neurons, each of which is to be excited by something in the world outside; but these neurons are not to be so connected that the firing of one in any way influences the firing of any other one. This is to say they do not constitute one system, merely an ensemble, or collection of systems. Let us consider one such neuron first. It has two possible states in any one relay time, which is about a millisecond. It has either one signal or else it has none. Consider two, and there are now four possible states: a signal in the first, but not in the second; a signal in the second, but not in the first; a signal in both; a signal in neither.
Similarly, for three neurons there are eight possible states and for four neurons there are sixteen. In general, for N neurons there are 2^N possible states. If, now, the chance that a neuron has an impulse on it at a given time is one-half, the chance that two given neurons both have impulses is one-fourth; three, one-eighth; and all N neurons, 1/2^N = 2^(-N). We may define a unit of information as a decision whether or not a given neuron is to have a signal on it. N such decisions specify one and only one of the 2^N states. Thus our amount of information is simply the logarithm to the base 2 of the reciprocal of the probability of the state.
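The counting above can be sketched in a few lines of Python (the function names are mine, not the lecture's): the number of states of N independent all-or-none neurons, and the amount of information, in binary decisions, carried by picking out one state.

```python
import math

# Illustrative sketch, not from the lecture: states of N independent
# all-or-none neurons, and information measured in binary decisions.

def num_states(n_neurons: int) -> int:
    # Each neuron either has a signal or has none in a relay time.
    return 2 ** n_neurons

def information_bits(probability: float) -> float:
    # Log to the base 2 of the reciprocal of the state's probability.
    return math.log2(1.0 / probability)

n = 4
states = num_states(n)          # 16 possible states for four neurons
p = 1.0 / states                # each state equally likely
bits = information_bits(p)      # 4 decisions pick out one state
```

So N decisions, each worth one unit of information, specify exactly one of the 2^N states.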
When light falls on the photoreceptors of the eye, no impulse leaves a ganglion cell unless several receptors attached to it through bipolar cells are simultaneously excited. This decreases the probability of an impulse occurring in the ganglion cell by chance. There are about a hundred million photoreceptors in the eye and only a million ganglion cells. That is a reduction of about one hundred to one between the photoreceptors and the channels over which the information is sent to the brain. By demanding coincidence before we pass the signals on to the brain, we have insisted that many of our sensory receptors agree in the assertion that there is something there before we bother to pass it on. That is a loss of information, and the information given up buys us security that we are responding to something which is there. Information is weeded out through us to such an extent that our maximum output is about one decision per millisecond when we are lecturing, playing the piano, or whatnot. The maximum amount of information that I can convey is about one part in one hundred million of what my eyes can receive. No device that man has ever made discards information at such a rate, and none is less apt to go awry, because the brain bases its one decision per millisecond upon such a vast influx of information.
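The trade of information for security can be put numerically. A minimal sketch, with made-up probabilities: if one receptor fires by chance with probability p in a relay time, a ganglion cell that demands k coincident inputs fires by chance with probability p to the k.

```python
# Hypothetical numbers, for illustration: a coincidence-demanding
# cell trades information for security against false signals.

def chance_firing(p_single: float, k_coincident: int) -> float:
    # Independent receptors must all fire at once by chance.
    return p_single ** k_coincident

p = 0.01                      # assumed chance-firing probability
alone = chance_firing(p, 1)   # one receptor: 1 in 100
agree = chance_firing(p, 3)   # three must agree: about 1 in a million
```

Each demanded coincidence multiplies down the chance of a false signal reaching the brain.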
Such is the general nature of our information. If we have a nervous system which is composed of telegraphic relays, neurons, each of which will emit a signal on receipt of the right combination of signals provided it is not prevented from doing so by some other signal, and if we have spontaneously active neurons in its net, we know that such a nervous system, if properly designed, can compute any computable number. The proof is of the same nature as Turing's construction of a machine that can do these things. The question remains: how do we do the things that we do do? And that is quite a different question. It requires looking into the nervous system to find out what the mechanism is, because there is always a host of ways in which a finite machine can do any of these things, if only it is given enough time.
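A relay of the kind just described can be sketched as a threshold unit with absolute inhibition, in the style of the nets McCulloch and Pitts analyzed; the particular thresholds below are illustrative, not taken from the talk.

```python
# Sketch of an all-or-none relay: it emits a signal when the count
# of excitatory inputs reaches threshold and no inhibitory signal
# "prevents it from doing so."

def relay(excitatory, inhibitory, threshold):
    if any(inhibitory):                      # veto by another signal
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# With threshold 2 the relay computes AND of two input signals:
assert relay([1, 1], [], 2) == 1
assert relay([1, 0], [], 2) == 0
# With threshold 1 it computes OR:
assert relay([0, 1], [], 1) == 1
# Inhibition vetoes regardless of excitation:
assert relay([1, 1], [1], 2) == 0
```

Nets of such relays, with loops and spontaneously active members, are what give the computing power claimed above.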
Now, one of the important questions that arises is how one has ideas at all. One looks in vain through the ordinary psychological theories for any conception of a way in which one could perceive a square regardless of its position, its size, and so on. Very often one thinks that he has a general conception only to find on looking into it that he has a term but doesn't have the general conception belonging to the term. He only has some particular. Let me put it this way. One way of having the idea of a dog would be this: if I give you one dog and from that dog you make another dog, and another dog, until eventually you make a particular French poodle, and when you get to that French poodle you stop and say “uh-huh, dog,” then all you need for action is your particular idea of the poodle and the word, “dog”; but you don't have the general idea corresponding to the term “dog.” You have what is for some the canonical representative of dogs. This is in a sense a way of having a general term for which the idea is not fully general.
But there is another way in which you could truly have the universal or idea “dog.” If, when I gave you one dog, you made all dogs and then took the average length, the average width, the average height, the average leggishness, the average hairiness, the average barkiness, the average tail-waggingness, and so on, you would come out with a set of averages each of which in no way depends upon which dog I had given you. For, given any one, you would have made all. You would, as we say, have constructed an invariant of the group of dogs. There are some bona fide general ideas that we have which are universal in this sense, that they are of all dogs, not just of a representative one: I mean, in which the real idea is dog, quite apart from its representation by a French poodle.
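The averaging argument can be sketched concretely. Assuming, for illustration, that the group is cyclic shifts of a pattern: given any one member you can make the whole orbit, and an average taken over the orbit does not depend on which member you started from.

```python
# Sketch: construct an invariant of a group by averaging over the
# whole orbit. The group here is cyclic shifts; "dogs" would need a
# richer group, but the logic is the same.

def orbit(pattern):
    # All cyclic shifts of the pattern: given one, we have made all.
    n = len(pattern)
    return [pattern[i:] + pattern[:i] for i in range(n)]

def invariant(pattern):
    # Position-wise average over the whole orbit.
    members = orbit(pattern)
    n = len(pattern)
    return [sum(m[i] for m in members) / len(members) for i in range(n)]

a = [1, 0, 0, 2]
b = [0, 2, 1, 0]              # a shifted copy of the same pattern
assert invariant(a) == invariant(b)
```

Whichever exemplar you are handed, making all and averaging yields the same result: an invariant of the group.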
I would like to tackle the simplest way in which one can have such a general idea. Let us suppose that you walk into a restaurant where you never were before. You open the door and you smell a smell you never smelled before; you shut the door and walk away. So long as you remember that smell you may forget the time and place where you smelled it and still have the idea. You have pulled it out of time. What one needs to get an idea “freed up” from temporal reference, in other words, is a memory of some kind.
The simplest kind of memory can be composed of the same relays which we use for transmitting information: namely, neurons. If we construct a ring of relays and send into that ring a signal patterned after something in the world, say we feed in “di-di-di-da,” as long as “di-di-di-da” chases its tail around the ring we have it at all times though we received it only once. Given it at one time, we have made it at all times by simple reverberation. Consequently, it no longer matters when we received it. We know now that there was some time such that at that time, “di-di-di-da.”
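A reverberating ring can be sketched with a rotating buffer; the pattern and ring length here are illustrative.

```python
from collections import deque

# Sketch of a ring of relays: a pattern fed in once chases its
# tail, so the net has it at every later relay time.

class ReverberatingRing:
    def __init__(self, pattern):
        self.ring = deque(pattern)

    def step(self):
        # One relay time: every signal advances one relay.
        self.ring.rotate(1)

    def read(self):
        return list(self.ring)

ring = ReverberatingRing([1, 1, 1, 0])      # "di-di-di-da"
for _ in range(4):                          # one full trip around
    ring.step()
assert ring.read() == [1, 1, 1, 0]          # still there, unchanged
```

Received once, the pattern is thereafter available at all times, which is exactly what frees it from its temporal reference.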
Now every other variety of memory is only a substitute for reverberation. We may build reverberative memories out of telegraphic relays; we may build them as acoustic tanks; we may build them in a host of ways. Into any of them we put a signal and let it come around and regenerate it, so that it is sharpened again and again as it goes round and round. The difficulty is that such memories require a continuous output of as much energy as is required for transmission of the signals. Consequently, we are in the habit of making marks on a piece of paper, or something else of the sort, and then looking at them again. In other words, there is a second possibility for a memory: namely, some sort of trace. Apparently we make traces in our brains also, and these traces seem to be divisible into two major groups according to their behavior over long periods of time and according to how they respond to repeated testing.
One type resembles growth with use. It is important in skilled acts which, as we test them and test them and test them, tend to grow worse, but when we let them rest they reappear so much better than before that we seem to learn to swim in the winter and to skate in the summer.
The second is a type of trace which does not ordinarily need any repetition but is repeated by testing. This kind of memory holds snapshots of the world. Although there was a man named Babinski who seems to have been on the trail of it much earlier, Craik, in England, first clearly established that we take these snapshots at a rate of about ten per second. It may be eight per second; it may be twelve per second. But if we try to hurry up this procedure, before we get to twenty per second it drops back to half that rate. If we try to slow it down, before we get to five per second it starts making double-takes.
This became important in hooking man up to his own electronic devices, particularly radar, where his inability to observe things in the “skip-periods” became crucial. It was then found that you could lock his visual affairs in with auditory signals and push them up or down, so that you could get down nearly to five and up practically to twenty per second. The man who did the first and most important work on that problem is John Stroud, a psychologist on the West Coast.
At present I do not want to go into the physiological and anatomical reasons why we should expect such an affair; but I cannot imagine anything except something “quantized” in this way that would account for our form vision. To do it not ten times per second but at, say, 1,000 times per second would require more brain than could be folded up small enough to get it in the head. Let us simply suppose that one takes ten snapshots of the world in a second. And let us be fairly generous and say that there are about 1,000 binary decisions at most in any one such snapshot. Now I am going to do a rough piece of arithmetic, rounding each figure to a power of two: eight per second, 64 seconds per minute, 64 minutes per hour, 16 hours per day, 256 days in the year, and 64 years in the life is 2 to the 33rd power, which is about 10 to the 10th frames; with 10 to the 3rd items per frame, that is 10 to the 13th: a rough guess as to the size of human memory.
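The arithmetic, with every factor rounded to a power of two as in the talk, checks out:

```python
# Each factor is a power of two: 8 frames/s, 64 s/min, 64 min/h,
# 16 h/day, 256 days/year, 64 years in a life.

frames_in_a_life = 8 * 64 * 64 * 16 * 256 * 64
assert frames_in_a_life == 2 ** 33          # about 10 to the 10th

items_per_frame = 1_000                     # ~1,000 binary decisions
memory_size = frames_in_a_life * items_per_frame
# Roughly 10 to the 13th decisions in a lifetime.
assert 1e12 < memory_size < 1e14
```

The powers of two add up as 3 + 6 + 6 + 4 + 8 + 6 = 33, which is where the 2 to the 33rd comes from.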
Heinz von Foerster, who wrote Das Gedächtnis, was one of several theoretical physicists who became interested in the size of human memory. Working from the best psychological data available, he came to the conclusion that the mean half-life of a trace in human memory is something of the order of half a day. Under those circumstances it is possible for him to make a guess as to the size of human memory by knowing that the memory has to be big enough that it will never be filled, and at the same time not so big that little of it will ever be used. In other words, if you make traces in such a memory and they evaporate before you get around to taking them out again, it is no use putting them in. Man has a memory with a half-life of half a day. Man has about ten to the seventh, perhaps ten to the eighth, access channels to put things into it, and each takes a millisecond to get its signal in. This says human memory should be, if you had to design it for such a machine, something of the order of ten to the thirteenth if you are stingy, and ten to the fifteenth if you are generous. Notice this is about the same order of magnitude we came to from the snapshot estimate.
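One way to redo that estimate in code. The exact accounting behind the talk's figures is not spelled out, so the steady-state formula below (influx rate times mean trace lifetime, for exponential decay) is my reconstruction, and the channel count is the "stingy" one.

```python
import math

# Hedged reconstruction: steady-state content of a store whose
# traces decay with a half-day half-life, fed by 10**7 channels
# each taking a millisecond per signal.

half_life_s = 12 * 3600                     # half a day, in seconds
mean_life_s = half_life_s / math.log(2)     # mean life of a trace
channels = 10 ** 7                          # "stingy" access channels
signals_per_channel_per_s = 1_000           # one per millisecond

influx = channels * signals_per_channel_per_s
steady_state = influx * mean_life_s         # decisions held at once
# Lands inside the talk's 10**13 to 10**15 bracket.
assert 1e13 < steady_state < 1e15
```

A tenfold larger channel count pushes the figure toward the generous end of the bracket.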
But there is a peculiarity about this memory: namely, that it does not go away asymptotically towards nothing. It goes away towards a residuum which has been variously estimated at anywhere from 1 to 12 per cent. How good the evidence is for that amount I do not know; but let us make it 10 per cent. Now, no number of curves that go away towards zero, if you add them up, will go away towards anything but zero. If the traces in the brain, like most of the things we know about in the world, go away on a thermal basis, then we may guess several things about this memory; but none of them would account for this retention. We would have to suppose that there was some process which re-engraved the traces on the memory. So to speak, all cats die; pretty soon, no cats? Yes. But cats have kittens!
We can now estimate the amount of energy that would be required to make such a trace. If it has a half-life of half a day, it will sit behind a barrier equivalent to about 1.42 or 1.43 electron volts. To account for retention, with a memory of a mean half-life of half a day having things in it that last forever to the tune of 10 per cent, would require for the totality of trace re-making about 0.02 watts. The energy requirement is not great, considering that the brain is about a 24-watt organ. In other words, human memory is energetically possible.
There are several things about this kind of memory that are extremely troublesome. In the first place, where is the trace? Is the trace in any one place? Is the trace multiple? Is the trace some peculiar shift in the parameters of the net everywhere? Well, let us look first at what kind of a thing we can have to account for it, if we are going to have to have something of the order of ten to the thirteenth to ten to the fifteenth traces.
You have only something of the order of ten to the tenth neurons in your head. You cannot put one memory in each cell and lock it up there: you have not got a thousandth of the neurons you would need. You would have to do it, then, by some change in the threshold of the cell, such that the configuration of endings required to fire it is different after the trace is made from what it was before. Or you would have to change the properties of the end-feet: they are numerous enough. Making the required number of combinations by, so to speak, wiring up the inside of the neuron so that it will fire for one combination and not for another is theoretically quite possible.
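A back-of-envelope check, with an assumed figure of about ten modifiable end-feet per cell (my number, for illustration): the combinations come out ample.

```python
# Illustrative combinatorics: 10**10 neurons, each settable into
# 2**10 distinguishable configurations of threshold and end-feet,
# cover more than the 10**13 traces estimated earlier.

neurons = 10 ** 10
configs_per_neuron = 2 ** 10        # assumes ~10 modifiable end-feet
distinct_settings = neurons * configs_per_neuron
assert distinct_settings > 10 ** 13
```

Because the configurations multiply rather than add, even a modest number of alterable end-feet per cell closes the thousandfold gap between neurons and traces.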
Protein molecules have a half life of the general order of half a day. Protein molecules have the trick of generating others in their own image. That is how your genes made you in the first place. So that it is quite conceivable that snapshot memory is going to turn out to be a matter of protein chemistry where the crucial item is a structural protein, linked perhaps to a carbohydrate, to a phospholipid, or to nucleic acid. That is what one would expect.
Next, in how many places must it be in the nervous system if it is going to persist? It can't be all in one place or a single lesion would knock it out in a highly specific fashion, and that is not the general story. The most likely place to look is probably somewhere in one of the great masses of small cells which conceivably can be linked up to form some facultative paths; those facultative paths might be something which was temporarily active over quite a long time until its properties became altered by that activity.
If you were going to try to find such a thing in the spinal cord, the only place to look for it is the substantia gelatinosa of Rolando. If you are going to find it in the midbrain, it has got to be in the reticular formation. And it might, of course, be in the bark of the brain, or cortex. The latter does not seem to me as likely as those other places. As Dr. Rioch could probably tell you much better than I can, if you leave the midbrain intact and take off the brain above it, a cat still learns. The loss of the cortex seems rather to knock out the possibility of having certain kinds of ideas than to knock out memory itself. It may be a process distributed throughout the nervous system; but if I had to guess at its whereabouts, I would put my money on the midbrain.
There is one more point I want to make. We can have ideas retained in memory, perhaps by reverberation, and so abstracted from their temporal reference; in other words, we can have them apart from reference to the time at which they occurred. Also, we have devices, like those that turn our eyes, mediated by a circuit running over the superior colliculus, that will center a form to be seen before we have to go to work on it to tell what shape it is. One's vision for shape is amazingly poor with objects that are not centered; but this device, having centered the form, makes it a matter of no moment where it first appeared.
There are nice problems that arise such as how one can hear a chord regardless of the pitch and know that it is a major third or diminished seventh, or whatnot. One obviously cannot twist his ears so as to change the key. If it is given in C, it is given in C, yet one can still recognize the major third or the diminished seventh.
Imagine for yourself layer upon layer of mosaics of relays. Let the output from those mosaics descend straight down. Send your information up channels that slant upward through it. Set the threshold of each of those relays at a value so high that it requires energizing of that whole layer and a signal of a particular slanting fiber to fire any one relay. Now when you energize the lowest layer, your signal will go out practically where it came in, and as you energize higher and higher layers it will step off in the direction of the slope of the ascending fibers. In other words, the first time it will go here, the second time one step that way from here, the third time two steps, and so on.
There is reason to believe that the auditory cortex has a structure somewhat of this kind. We want at the present time to have new histological sections of it made along what is practically the axis of pitch. The interesting thing is that pitch is projected to the cortex so that octaves span about equal distances on the bark of the brain. Under those circumstances, if you bring your information through those hundreds of layers of relays that constitute the bark of the brain and send it down vertically, you will expect that you have taken whatever you got by way of an input and, in effect, translated it along the axis before you let it out again. And from this you are in the position to do what you did in the case of making the average dog. You will have been able to take something given at one pitch, make it at all pitches, and compute your averages. You will have translated each note in key without changing the interval. Mechanisms of this kind may account for our general ideas.
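The layered-mosaic scheme can be sketched as transposition along the pitch axis: from one chord the net makes it at all pitches, and the interval pattern is what survives. Semitone numbers and the triad are illustrative.

```python
# Sketch: each energized layer steps the input one unit along the
# pitch axis; the interval pattern relative to the lowest note is
# the invariant that survives.

def transpose(chord, steps):
    return [note + steps for note in chord]

def all_transpositions(chord, n_layers):
    # One output per layer, each stepped one more unit of pitch.
    return [transpose(chord, k) for k in range(n_layers)]

def interval_pattern(chord):
    root = chord[0]
    return [note - root for note in chord]

c_major = [0, 4, 7]                  # a major triad, in semitones
e_major = transpose(c_major, 4)      # the same triad, four steps up
assert interval_pattern(c_major) == interval_pattern(e_major)
# Every transposition carries the same interval pattern:
assert all(interval_pattern(t) == [0, 4, 7]
           for t in all_transpositions(c_major, 12))
```

This is the dog-averaging argument again, with cyclic shifts along the pitch axis playing the role of the group.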