The human central nervous system is composed of some 10¹⁰ neurons. Each has branches, a body, and a long, thin output lead called an axon. The burning of sugar supplies the energy to keep the neuron’s outside 0.07 volt positive to its inside. The capacity is fixed at, say, 1 µF per square centimeter, but its resistance through the surface is highly nonlinear—very great so long as the outside is 0.02 to 0.03 volt positive, but below that value, nearly a dead short circuit. Hence we see that if the voltage is reduced at any small area, current runs in freely, discharging adjacent areas, which then short-circuit, draw current, and discharge the next areas. Such is the propagated nervous impulse. It proceeds as a smoke ring of currents along the axon at a velocity determined by distributed capacity, distributed resistance, and distributed source of power. The velocities range from about 150 m/sec down to about 1 m/sec. The impulse has a rise time of less than 0.5 msec and a slightly slower decay. We speak of this impulse as all-or-none, not in the sense of its being always and everywhere of the same size, but of its size at any place being determined only by the condition at that place—not in any way depending on the strength of the disturbance that initiated it. In short, a neuron is a distributed repeater. So much for the components.
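The all-or-none, self-regenerating propagation just described can be caricatured in a few lines: treat the axon as a chain of membrane patches, each of which short-circuits when a neighbor discharges it, and which then discharges its own neighbors in turn. This discrete-patch model is an illustrative assumption of mine, not a model from the text.

```python
# Hypothetical sketch: the axon as a chain of all-or-none membrane
# patches.  A patch that is discharged "fires" (short-circuits) and
# discharges its neighbors, so the impulse propagates regardless of
# how strong the initiating disturbance was.

def propagate(n_patches, stimulus_patch):
    """Return the order in which patches fire after one stimulus."""
    fired = [False] * n_patches
    wave = [stimulus_patch]
    order = []
    while wave:
        nxt = []
        for i in wave:
            if not fired[i]:
                fired[i] = True            # all-or-none: patch shorts fully
                order.append(i)
                for j in (i - 1, i + 1):   # discharge adjacent patches
                    if 0 <= j < n_patches and not fired[j]:
                        nxt.append(j)
        wave = nxt
    return order
```

Stimulating the middle of the chain sends a wave in both directions; stimulating an end sends one wave down the whole chain, which is the sense in which the neuron is a distributed repeater.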
We call their effective connections “synapses.” Sometimes we can identify them as places where terminal branches of axons of one neuron lie against branches or cell body of another, or end in buttons applied to them. By “effective” connections I mean only that impulses in an axon evoke or prevent impulses in the neuron on which or near which that axon ends. In a few places where we know the details of anatomy and have the proper electrical records, the best guess is that if a fine axon courses along a cell body to end on or among its branches, it will, by leaving a source of current on the body and a sink among the branches, raise the voltage of the body and so inhibit its firing; whereas a large fiber coming in from among the branches to end on the cell body will almost certainly carry impulses to it to excite it or to facilitate its excitation. But that is certainly not the whole story.
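The excitatory and inhibitory effects of synapses described above are the basis of the threshold-logic caricature of the neuron. A minimal sketch, with an illustrative threshold and the assumption of absolute inhibition (neither number is from the text):

```python
# Minimal McCulloch-Pitts-style threshold unit.  Inputs are 0/1 spikes;
# the threshold of 2 and the absolute-veto form of inhibition are
# illustrative assumptions.

def neuron(excitatory, inhibitory, threshold=2):
    """All-or-none output: fire iff net excitation reaches threshold
    and no inhibitory fiber is active."""
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0
```

With these conventions a single unit can gate impulses by impulses: two excitatory spikes fire it, one does not, and one inhibitory spike vetoes it outright.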
There is a second way of gating impulses by impulses. Consider an impulse in an axon of uniform diameter. Its factor of safety for propagation is about 10 to 1. Now let the axon branch into two or more smaller axons. Their surfaces per unit length may exceed that of the parent axon, but their internal resistances go up as the squares of their radii go down. Hence the branches demand more current and impose a longer time constant. This demand decreases the safety factor at each successive branch point. These fibers are insulated from one another not by dielectrics but by being embedded in a common conducting ground—say, salt water—which short-circuits their disturbances. This does not prevent neighboring fibers from affecting one another. Seen from this salt water, every impulse is a sink of current preceded and followed by a source. Sources in neighboring fibers raise the threshold in a given fiber, and sinks lower it. In this manner, impulses in neighboring fibers help or hinder the passage of impulses to or toward the synapse with the next cell. In fact, we have ample proof that this does happen in ordinary nervous activity. An impulse dying in a fiber certainly tends to block the next one farther back.
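The erosion of the safety factor can be pictured as a running product: the impulse starts with a margin near 10 to 1, and each branch point, by demanding more current, takes a fixed fraction of that margin. The loss fraction below is a made-up illustrative constant; only the 10-to-1 starting figure comes from the text.

```python
# Illustrative sketch, not a quantitative model: the safety factor
# starts near 10:1 and is multiplied down at each branch point.
# Conduction is assumed to fail once the factor drops below 1.

def safety_factor(n_branch_points, initial=10.0, loss_per_branch=0.6):
    """Safety factor remaining after n successive branch points."""
    factor = initial
    for _ in range(n_branch_points):
        factor *= loss_per_branch   # each branch demands more current
    return factor
```

On these made-up numbers the impulse survives four branch points but fails at the fifth, which is where help or hindrance from neighboring fibers can tip the balance.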
What we have said so far insures that those neurons in a brain can be and are used as relays in a computer to gate all-or-none impulses; but it says nothing of coding. We should not attempt that question until we have a better picture of the function and general organization of the central nervous system. Its function is legion, but of one general kind. Every cell in the human body has a multiple inverse feedback giving it great dynamic stability. Most of our juices are buffered. This is necessary if our parts are to survive in a changing environment. But our body as a whole would die promptly were it not for our reflexive mechanisms, each an inverse feedback. Of these there are certainly more than a thousand but less than a million distinguishable loops. Many pairs of such loops acting simultaneously would produce contrary effects (like the motion of the knee in opposite directions) or disastrous results (like inhaling and swallowing). Always, many have to be employed in any complex act. Hence the principal function of the central nervous system is that of computer for an enormously complex, multiple closed-loop servosystem. All this is too general to be helpful. Let us therefore look at a bump at the back of the hindbrain, called the cerebellum.
The chief job of the cerebellum is to bring to rest at the right place whatever part of the body is put in motion. To do this it has first to be informed of the position, velocity, and acceleration of each part, which is done by paths ascending from the parts, relayed in the spinal cord. Next it needs to know the program and subroutines involved—information that it gets from the basal ganglia and the brain stem. Inasmuch as the act occurs in a gravitational field and generally requires knowledge of the rotation and translation of the head, the cerebellum has to be informed by that part of the ear which handles this sort of information—the so-called labyrinth. Finally, the cerebellum needs information from distance receptors by light and sound as to the world about it and the objects in it. This information has to be processed by the bark of the big brain, the cerebral cortex, before it is sent to the cerebellum. Having calculated the necessary orders to the muscles to bring things to rest at the right places, the cerebellum sends them to the so-called reticular formation of the brain stem, which relays them to motor nuclei to be combined with local information to order the muscles to contract or relax in proper fashion.
The reticular formation plays an important role in this regulation. Consider any multiple closed-loop servosystem, and suppose that each of its component loops is stable but that all control any single effector— an aileron or a foot. Now it can easily happen that the combination jams the system at one end of its range or throws it into oscillation, which may be schizogenic. To prevent this happening it is now common practice to take part of the input derived from each loop to a common adder and feed it back inversely to many or all of the output devices. In the human body the outer and upper part of the reticular formation receives impulses from all of the ascending systems of information, and its descending fibers tend to facilitate all activity; but they also play on the central, inner part of the reticular formation, which receives the bulk of its input from descending systems coming from cerebrum, basal ganglia, and cerebellum. The output of this reticular formation is chiefly down the spinal cord to inhibit all neurons from exciting muscles. Its control is so powerful that it can stop a convulsion produced by strychnine. This is a massive, crude effect of a generalized inhibitor. As yet we have no idea how it is normally coded. We only know that it requires repetitive firing to be grossly effective.
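The "common adder" scheme described above can be sketched in a line or two: pool part of every loop's drive and subtract a share of the pool from every output device. The subtractive form and the gain value are illustrative assumptions of mine; the text describes the engineering practice, not this formula.

```python
# Hedged sketch of inverse feedback through a common adder: each
# effector's drive is reduced by a fraction of the pooled activity of
# all loops, damping the combined output and so preventing the system
# from jamming at one end of its range or oscillating.

def regulated_outputs(loop_drives, inhibitory_gain=0.2):
    """Return each effector's drive minus a share of the pooled total."""
    pooled = sum(loop_drives)                      # the common adder
    return [d - inhibitory_gain * pooled for d in loop_drives]
```

Note that the inhibition is generalized, not selective: every output is reduced by the same pooled term, whichever loops produced it, much as the inner reticular formation inhibits all motor neurons at once.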
Since the nervous system is a parallel, as well as a sequential system of computers, coding can be of two kinds. One involves which fibers are carrying the impulses; the other, the timing of impulses in fibers. The first depends largely on the anatomy, the second on the physiology of the synapses. Take them in that order. There are a couple of million axons from eye to brain, say fifty thousand from ears, and about one million from the rest of the body, all told. These preserve some semblance of their relative positions all the way from the surface of eye, ear, and body, through successive relays to the bark of forebrain, midbrain, and hindbrain. It is enough to insure that neighborhood of sense organ is converted into neighborhood in representation throughout sensory systems. Thus the topological relations of inputs are preserved (a trick that we should remember in dealing with nonnumerical data) for a ten-cent lens is about a million parallel channels, and even if it distorts or blurs the picture, it is still safer to trust its images than to trust our present attempts to scan and reconstruct, as in radar and television. In this sense we are as much true space computers as in our servosystems we are true time computers. Projection of neighborhood into neighborhood renders the system relatively stable under perturbation of threshold of cells, strength of stimuli, and detail of connection, even under scattered loss of channels—be they axons or cell bodies. These advantages make the system resemble the analog devices wherein most errors are in the last place, rather than digital devices using any radix, wherein the error is as likely to affect the first as the last digit. So much for the anatomy.
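The closing point about error locality is easy to make concrete: in a positional (digital) code, a one-digit disturbance is as likely to strike the most significant digit as the least, whereas the brain's neighborhood-preserving representation confines errors to "the last place". The helper below is my own illustration, not anything from the text.

```python
# Illustrative helper: perturb one digit of a positional code by +1
# (mod radix).  An error at a high position is catastrophic; the same
# error at position 0 barely matters.

def flip_digit(n, position, radix=10):
    """Return n with the digit at the given position bumped by one."""
    digit = (n // radix**position) % radix
    return n + ((digit + 1) % radix - digit) * radix**position

# flip_digit(5000, 3) -> 6000, a gross error;
# flip_digit(5000, 0) -> 5001, an error in the last place.
```

An analog or topological representation behaves like the second case everywhere, which is why scattered damage degrades it gracefully.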
Now for temporal coding! When Pitts and I constructed the first calculus for neuronal nets, or repeater nets, we took it for granted that we could pass only one bit per impulse, that is, about one per millisecond. But increasing knowledge of fibers showed that an axon receiving a constant stimulus emitted a spike with a delay that scattered only some 20 to 30 µsec, and that two impulses that summed fully at 120 µsec failed to sum at all at about 160 µsec. Hence there is a reliability with which an axon can detect and emit by pulse-interval modulation. This MacKay and I calculated, showing that in this manner one could pass far more information at far lower duty cycles than we had previously expected. Alexander Andrew then showed that still more could be passed by better coding, making more use of shorter intervals. Then came our work on the input neurons. It showed that the maximum rate that could be got past their branch points was far less than had been expected. We presented that work, as a more realistic upper bound on information capacity, at the London Symposium on Information Theory. I mention this specifically because our difficulty was this: interference would cause omission of signals in neighboring channels, most when they were closest in time and represented points most closely located in muscle or skin. No one has yet been able to optimize the code for these kinds of noise, so we cannot put a reasonably high lower bound on information capacity. Amassian has shown that some cells in the cerebral cortex code into pulse-interval modulation information as to the place whence their excitation came. This cortex is the organ that abstracts the ideas of objects about us from sensory inputs of any kind.
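A back-of-envelope calculation shows why pulse-interval modulation beats one bit per impulse. If spike timing scatters by roughly 25 µsec and intervals up to a few milliseconds are usable, the number of distinguishable intervals, and hence the bits conveyed by each one, follows directly. The uniform-interval assumption and the 3 msec ceiling are mine; only the jitter figure comes from the text.

```python
import math

# Rough information per impulse under pulse-interval modulation:
# count the intervals distinguishable at the given timing resolution
# and take the base-2 logarithm.  Assumes intervals are used uniformly,
# which is an idealization.

def bits_per_impulse(max_interval_us, jitter_us):
    """Bits conveyed by one interpulse interval."""
    levels = max_interval_us / jitter_us
    return math.log2(levels)

# e.g. bits_per_impulse(3000, 25) gives roughly 7 bits per impulse,
# against the single bit assumed in the original all-or-none calculus.
```

This is only an upper bound of the naive kind; as the text notes, interference at branch points and between neighboring channels pulls the achievable rate well below it.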
Let us, therefore, turn to some things we do know about actual coding. While there are a few examples of simple all-or-none items, sense organs generally code the logarithm of intensity into frequency. They can do this for any quantity—brightness, loudness, length, and so forth—or for the first derivative, and sometimes for the second derivative. There are also a few that emit impulses at a given frequency when there is no signal and produce a simple increase or decrease of frequency for the strength of signal. For example, the vestibule, the organ of balance and acceleration in the ear, sends in a basic frequency of, say, 39 impulses per second and adds or subtracts from that frequency as the head is accelerated.
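The logarithmic intensity-to-frequency coding described above has the familiar Weber-Fechner form. A minimal sketch, in which the gain constant and the threshold intensity are illustrative assumptions rather than measured values:

```python
import math

# Sketch of logarithmic sensory coding: firing rate grows with the
# log of stimulus intensity above threshold.  Gain and threshold are
# made-up illustrative constants.

def firing_rate(intensity, threshold_intensity=1.0, gain=10.0):
    """Impulses per second for a stimulus above threshold."""
    if intensity <= threshold_intensity:
        return 0.0
    return gain * math.log10(intensity / threshold_intensity)
```

On this scheme each tenfold step in intensity adds the same increment of rate, so a hundredfold range of brightness or loudness is compressed into a modest range of frequencies. A receptor with a resting discharge, like the vestibule's 39 per second, simply adds this term to (or subtracts it from) its baseline.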
Now we return to the problem of the synapse. Some neurons are simple coincidence detectors; some, on receipt of appropriate information, emit a train of impulses; and still others, like those of the balance organs and the big output cells of the cerebellum, deviate in either direction from a fixed resting rate according to the signals reaching them. One of the most instructive groups of cells is the next relay in the vestibular system, for it emits a signal at the frequency which is exactly the difference between the carrier frequency—say 39 per second—which it had been receiving and the frequency that it now receives. So for 36 or 42 it emits 3 per second. In short, for a synapse in general there is no way of knowing what sort of function the output is of the input until you examine it. All you can say is that the synapse is some sort of filter—almost certainly nonlinear—which says only that it cannot operate on future information.
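The vestibular relay just described computes an absolute difference of frequencies. The one-line sketch below states that function explicitly; the 39 per second carrier is the figure quoted in the text.

```python
# The second-order vestibular cell as a difference detector: it emits
# at the absolute difference between the resting carrier frequency it
# had been receiving and the frequency it now receives.

def relay_output(input_rate, carrier_rate=39):
    """Impulses per second emitted by the relay cell."""
    return abs(input_rate - carrier_rate)

# e.g. relay_output(36) -> 3, and an unchanged input gives silence.
```

Note what the output discards: the sign of the deviation. This is exactly the kind of fact about a synapse that, as the text says, you cannot know until you examine it.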