# The Computer and the Brain

## Metadata
- Author: [[John von Neumann]]
- Full Title: The Computer and the Brain
- Category: #books
## Highlights
- Communication is pervasive at every level. If we consider that error rates escalate rapidly with increased communication and that a single-bit error can destroy the integrity of a process, digital computation is doomed—or so it seemed at the time. Remarkably, that was the common view until Shannon came along with the first key idea of the information age. He demonstrated how we can create arbitrarily accurate communication using the most unreliable communication channels. What Shannon said in his now landmark paper, “A Mathematical Theory of Communication,” published in the Bell System Technical Journal in July and October 1948, and in particular in his noisy-channel coding theorem, was that if you have available a channel with any error rate (except for exactly 50 percent per bit, which would mean that the channel is transmitting pure noise), you are able to transmit a message and make the error rate as small as you want. In other words, the error rate can be one bit out of n bits, where n can be as large as you want. So, for example, in the extreme, if you have a channel that transmits bits of information correctly only 51 percent of the time (that is, it transmits the correct bit just slightly more often than the wrong bit), you can nonetheless transmit messages such that only one bit out of a million is incorrect, or one bit out of a trillion or a trillion trillion. How is this possible? The answer is through redundancy. ([Location 127](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=127))
- Tags: [[blue]]
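The crudest form of redundancy is a repetition code: send each bit n times and let the receiver decode by majority vote. The sketch below is not from the book; the channel reliability of 0.7 and the trial counts are illustrative. Shannon's theorem guarantees far more efficient codes than repetition, but the principle, driving the error rate down by spending extra bits, is the same.

```python
import random

def transmit(bit, p_correct):
    """One use of a noisy channel: the bit arrives intact with probability p_correct."""
    return bit if random.random() < p_correct else 1 - bit

def send_with_repetition(bit, n, p_correct):
    """Send the bit n times; the receiver decodes by majority vote."""
    votes = sum(transmit(bit, p_correct) for _ in range(n))
    return 1 if votes > n / 2 else 0

def error_rate(n, p_correct, trials=20_000):
    errors = sum(send_with_repetition(1, n, p_correct) != 1 for _ in range(trials))
    return errors / trials

for n in (1, 5, 25, 125):
    print(n, error_rate(n, p_correct=0.7))  # residual error shrinks toward zero as n grows
```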
- Computation consists of three elements: communication (which, as I mentioned, is pervasive both within and between computers), memory, and logic gates (which perform the arithmetic and logical functions). ([Location 146](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=146))
- Tags: [[blue]]
- It is due to Shannon’s theorem that we can handle arbitrarily large and complex digital data and algorithms without the processes being disturbed or destroyed by errors. ([Location 148](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=148))
- Tags: [[blue]]
- Turing and Alonzo Church, his former professor, went on to develop the Church-Turing thesis, which states that if a problem that can be presented to a Turing machine is not solvable by a Turing machine, it is also not solvable by any machine, following natural law. Even though the Turing machine has only a handful of commands and processes only one bit at a time, it can compute anything that any computer can compute. ([Location 159](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=159))
- Tags: [[blue]]
- “Strong” interpretations of the Church-Turing thesis propose an essential equivalence between what a human can think or know and what is computable by a machine. The basic idea is that the human brain is subject to natural law, and thus its information-processing ability cannot exceed that of a machine (and therefore of a Turing machine). ([Location 162](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=162))
- Tags: [[blue]]
- In the same paper, Turing reports another unexpected discovery, that of unsolvable problems. These are problems that are well defined and have unique answers that can be shown to exist but that we can also prove can never be computed by a Turing machine—that is to say, by any machine—which is a reversal of what had been a nineteenth-century confidence that problems that could be defined would ultimately be solved. Turing showed that there are as many unsolvable problems as solvable ones. Kurt Gödel reached a similar conclusion in his 1931 “Incompleteness Theorem.” We are thus left with the perplexing situation of being able to define a problem, to prove that a unique answer exists, and yet to know that the answer can never be discovered. ([Location 168](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=168))
- Tags: [[blue]]
- The von Neumann model includes a central processing unit where arithmetical and logical operations are carried out, a memory unit where the program and data are stored, mass storage, a program counter, and input/output channels. ([Location 184](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=184))
- Tags: [[blue]]
- The Turing machine was not designed to be practical. Turing’s theorems were not concerned with the efficiency of solving problems but rather with examining the range of problems that could be solved by computation. Von Neumann’s goal was to create a practical concept of a computational machine. His concept replaces Turing’s one-bit computations with multiple-bit words (generally some multiple of eight bits). Turing’s memory tape is sequential, so Turing machine programs spend an inordinate amount of time moving the tape back and forth to store and retrieve intermediate results. In contrast, von Neumann’s machine has a random access memory, so any data item can be immediately retrieved. One of von Neumann’s key concepts is the stored program, which he had introduced a decade earlier: the program is stored in the same type of random access memory as the data (and often in the same block of memory). This allows the computer to be reprogrammed for different tasks. It even allows for self-modifying code (if the program store is writable), which allows for a powerful form of recursion. Up until that time, virtually all computers, including Turing’s own Colossus, were built for a specific task. The stored program allows a computer to be truly universal, thereby fulfilling Turing’s vision of the universality of computation. ([Location 187](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=187))
- Tags: [[blue]]
- Keep in mind that the von Neumann machine continually communicates data between and within its various units, so it would not be possible to build one if it were not for Shannon’s theorems and the methods he devised for transmitting and storing reliable digital information. ([Location 223](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=223))
- Tags: [[blue]]
- This book is the earliest serious examination of the human brain from the perspective of a mathematician and computer pioneer. Prior to von Neumann, the fields of computer science and neuroscience were two islands with no bridge between them. ([Location 230](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=230))
- Tags: [[blue]]
- Von Neumann starts by articulating the differences and similarities between the computer and the human brain. Given that he wrote in 1955 and 1956, the manuscript is remarkably accurate, especially in the details that are pertinent to the comparison. He notes that the output of neurons is digital: an axon either fires or it doesn’t. This was far from obvious at the time, in that the output could have been an analog signal. The processing in the dendrites leading into a neuron and in the soma (the neuron cell body), however, is analog. He describes these calculations as a weighted sum of inputs with a threshold. This model of how neurons work led to the field of connectionism, in which systems are built based on this neuron model in both hardware and software. The first such connectionist system was created by Frank Rosenblatt as a software program on an IBM 704 computer at Cornell in 1957. ([Location 236](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=236))
- Tags: [[pink]]
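A minimal sketch of the "weighted sum of inputs with a threshold" model described above; the weights and thresholds are illustrative, not from the book. With suitable parameters the model behaves as a digital logic gate, which is why it supports the digital reading of neural output:

```python
def neuron(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of inputs reaches the threshold."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# With suitable weights and thresholds the same model realizes logic gates:
AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: neuron([a, b], [1, 1], threshold=1)
print(AND(1, 1), AND(1, 0), OR(0, 1))  # -> 1 0 1
```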
- Von Neumann notes that the speed of neural processing is extremely slow, on the order of a hundred calculations per second, but that the brain compensates for this through massive parallel processing. Each one of its 10^10 neurons is processing simultaneously (this number is also reasonably accurate; estimates today are between 10^10 and 10^11). In fact, each of the connections (with an average of about 10^3 connections per neuron) is computing simultaneously. ([Location 250](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=250))
- Tags: [[pink]]
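Putting the passage's estimates together as order-of-magnitude arithmetic (a rough sketch using only the figures quoted above):

```python
# Order-of-magnitude throughput from the passage's own estimates.
neurons = 10**10           # ~10^10 neurons
ops_per_second = 100       # ~100 calculations per second each
connections = 10**3        # ~10^3 connections per neuron, all active at once

print(f"{neurons * ops_per_second:.0e} neuron-ops/s")                    # 1e+12
print(f"{neurons * connections * ops_per_second:.0e} connection-ops/s")  # 1e+15
```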
- Recent advances in reverse engineering the visual cortex have confirmed that we make sophisticated visual judgments in only three or four neural cycles. ([Location 283](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=283))
- Tags: [[pink]]
- There is considerable plasticity in the brain, which enables us to learn. But there is far greater plasticity in a computer, which can completely restructure its methods by changing its software. Thus a computer will be able to emulate the brain, but the converse is not the case. ([Location 284](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=284))
- Tags: [[pink]]
- A year after von Neumann’s death in 1957, fellow mathematician Stan Ulam quoted von Neumann as having said that “the ever accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” This is the first known use of the word “singularity” in the context of human history. ([Location 293](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=293))
- Tags: [[pink]]
- neurons have a major disadvantage where the speed of their operations is concerned. Neurons are, he estimates, perhaps 10^5 times slower than vacuum tubes or transistors in the time required to complete a basic logical operation. Here he is portentously correct, in ways about to emerge. If anything, he underestimates the neuron’s very considerable disadvantage. If we assume that a neuron can have a “clock frequency” of no better than roughly 10^2 Hz, then the clock frequencies of almost 1,000 MHz (that is, 10^9 basic operations per second) now displayed in the most recent generation of desktop machines push the neuron’s disadvantage closer to a factor of 10^7. The conclusion is inescapable. If the brain is a digital computer with a von Neumann architecture, it is doomed to be a computational tortoise by comparison. ([Location 367](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=367))
- Tags: [[pink]]
- the brain’s computational regime appears to compensate for its inescapable lack of logical depth by exploiting an extraordinary logical breadth. As he says, “large and efficient natural automata are likely to be highly parallel, while large and efficient artificial automata will tend to be less so, and rather to be serial” (emphasis his). The former “will tend to pick up as many logical (or informational) items as possible simultaneously, and process them simultaneously” (emphasis ours). This means, he adds, that we must reach out, beyond the brain’s neurons, to include all of its many synapses in our all-up count of the brain’s “basic active organs.” ([Location 389](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=389))
- Tags: [[pink]]
- The brain is neither a tortoise nor a dunce after all, for it was never a serial, digital machine to begin with: it is a massively parallel analog machine. ([Location 401](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=401))
- Tags: [[pink]]
- It must quickly be said, however, that this decisive insight does nothing to dim the integrity of his digital and serial technologies, nor does it dim our hopes for creating artificial intelligence. Quite the contrary. We can make electronic versions of synaptic connections, and, eschewing the classical von Neumann architecture, we can create vast parallel networks of artificial neurons, thus realizing electronic versions of the “shallow-depth” but “extraordinary-breadth” computational regime evidently exploited by the brain. These will have the additional and most intriguing property of being roughly 10^6 times faster, overall, than their biological namesakes, simply because they will have electronic rather than biochemical components. This means many things. For one, a synapse-for-synapse electronic duplicate of your biological brain could enjoy, in only thirty seconds, a train of thought that would consume a year’s time as conducted by the components within your own skull. And that same machine could enjoy, in half an hour, an intellectual life that would consume your entire three-score-and-ten years, if conducted within your own brain. Intelligence, clearly, has an interesting future. ([Location 411](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=411))
- Tags: [[pink]]
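A quick check of the arithmetic in this passage (the 10^6 speedup factor is the author's estimate, not mine):

```python
# A 10^6-fold speedup compresses a year of thought into about half a minute,
# and seventy years into roughly half an hour, as the passage claims.
seconds_per_year = 365.25 * 24 * 3600       # ~3.16e7 seconds
print(seconds_per_year / 1e6)               # ~31.6 s for a year of thought
print(70 * seconds_per_year / 1e6 / 60)     # ~36.8 min for threescore and ten
```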
- In an analog machine each number is represented by a suitable physical quantity, whose value, measured in some pre-assigned unit, is equal to the number in question. This quantity may be the angle by which a certain disk has rotated, or the strength of a certain current, or the amount of a certain (relative) voltage, etc. To enable the machine to compute, i.e. to operate on these numbers according to a predetermined plan, it is necessary to provide organs (or components) that can perform on these representative quantities the basic operations of mathematics. ([Location 530](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=530))
- Tags: [[pink]]
- These basic operations are usually understood to be the “four species of arithmetic”: addition (the operation x + y), subtraction (x - y), multiplication (xy), division (x/y). ([Location 535](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=535))
- Tags: [[pink]]
- A rather remarkable attribute of some analog machines, on which I will have to comment a good deal further, is this. Occasionally the machine is built around other “basic” operations than the four species of arithmetic mentioned above. Thus the classical “differential analyzer,” which expresses numbers by the angles by which certain disks have rotated, proceeds as follows. Instead of addition, x + y, and subtraction, x - y, the operations (x ± y)/2 are offered, because a readily available, simple component, the “differential gear” (the same one that is used on the back axle of an automobile) produces these. ([Location 541](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=541))
- Tags: [[pink]]
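A sketch of building ordinary addition and subtraction out of the differential analyzer's actual primitives, (x + y)/2 and (x - y)/2. The final doubling stands in for a fixed 1:2 gear ratio; that is my assumption about how the factor of two is recovered, and in practice the halved values can also simply be carried through and rescaled at the end.

```python
# Primitives the differential gear actually provides:
half_sum  = lambda x, y: (x + y) / 2
half_diff = lambda x, y: (x - y) / 2
double    = lambda x: 2 * x       # a fixed 1:2 gear ratio (assumed available)

# Ordinary arithmetic recovered by composing the machine's primitives:
add = lambda x, y: double(half_sum(x, y))
sub = lambda x, y: double(half_diff(x, y))
print(add(3, 4), sub(10, 6))  # -> 7 4
```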
- any computing machine that is to solve a complex mathematical problem must be “programmed” for this task. This means that the complex operation of solving that problem must be replaced by a combination of the basic operations of the machine. Frequently it means something even more subtle: approximation of that operation—to any desired (prescribed) degree—by such combinations. Now for a given class of problems one set of basic operations may be more efficient, i.e. allow the use of simpler, less extensive, combinations, than another such set. ([Location 556](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=556))
- Tags: [[pink]]
- It must be emphasized, to begin with, that in digital machines there is uniformly only one organ for each basic operation. This contrasts with most analog machines, where there must be enough organs for each basic operation, depending on the requirements of the problem in hand (cf. above). It should be noted, however, that this is a historical fact rather than an intrinsic requirement—analog machines (of the electrically connected type, cf. above) could, in principle, be built with only one organ for each basic operation, and a logical control of any of the digital types to be described below. ([Location 637](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=637))
- Tags: [[pink]]
- The “only one organ for each basic operation” principle necessitates, however, the providing for a larger number of organs that can be used to store numbers passively—the results of various partial, intermediate calculations. That is, each such organ must be able to “store” a number—removing the one it may have stored previously—accepting it from some other organ to which it is at the time connected, and to “repeat” it upon “questioning”: to emit it to some other organ to which it is at that (other) time connected. Such an organ is called a “memory register,” the totality of these organs is called a “memory,” and the number of registers in a memory is the “capacity” of that memory. ([Location 647](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=647))
- Tags: [[pink]]
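A minimal sketch of the "memory register" behavior just described: storing displaces the previous contents, and "repeating" emits them upon questioning. The class and method names are mine.

```python
class MemoryRegister:
    """Stores one number passively; storing displaces the previous contents."""
    def __init__(self):
        self.value = None

    def store(self, number):   # accept a number, removing the stored one
        self.value = number

    def repeat(self):          # emit the stored number upon "questioning"
        return self.value

memory = [MemoryRegister() for _ in range(8)]   # a memory of capacity 8
memory[3].store(42)
print(memory[3].repeat())  # -> 42
```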
- An order must indicate which basic operation is to be performed, from which memory registers the inputs of that operation are to come, and to which memory register its output is to go. Note that this presupposes that all memory registers are numbered serially—the number of a memory register is called its “address.” It is convenient to number the basic operations, too. Then an order simply contains the number of its operation and the addresses of the memory registers referred to above, as a sequence of decimal digits (in a fixed order). ([Location 689](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=689))
- Tags: [[pink]]
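A sketch of the order format described above: one operation number plus memory addresses, packed as a fixed sequence of decimal digits. The field widths here are invented for illustration.

```python
# One order = operation number (1 digit) + two input addresses + one output
# address (3 digits each), packed into a single decimal number.
def encode_order(op, src1, src2, dest):
    return int(f"{op:01d}{src1:03d}{src2:03d}{dest:03d}")

def decode_order(order):
    s = f"{order:010d}"
    return int(s[0]), int(s[1:4]), int(s[4:7]), int(s[7:10])

order = encode_order(op=1, src1=12, src2=40, dest=7)  # e.g. "add m[12], m[40] -> m[7]"
print(order, decode_order(order))  # -> 1012040007 (1, 12, 40, 7)
```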
- an order is usually “physically” the same thing as a number. Hence the natural way to store it—in the course of the problem in whose control it participates—is in a memory register. In other words, each order is stored in the memory, in a definite memory register, that is to say, at a definite address. This opens up a number of specific ways to handle the matter of an order’s successor. Thus it may be specified that the successor of an order at the address X is—unless the opposite is made explicit—the order at the address X + 1. “The opposite” is a “transfer,” a special order that specifies that the successor is at an assigned address Y. ([Location 699](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=699))
- Tags: [[pink]]
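A sketch of the sequencing rule just described: the successor of the order at address X is the order at X + 1 unless a "transfer" names an assigned address Y. The mini instruction set is invented for illustration.

```python
def run(orders):
    pc = 0                       # address of the current order
    while pc < len(orders):
        kind, arg = orders[pc]
        if kind == "print":
            print(arg)
            pc += 1              # default successor: the order at address X + 1
        elif kind == "transfer":
            pc = arg             # explicit successor at the assigned address Y
        elif kind == "halt":
            break

run([("print", "a"), ("transfer", 3), ("print", "never reached"),
     ("print", "b"), ("halt", None)])   # prints: a, b
```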
- Note the important difference between this mode of control and the plugged one, described earlier: There the control sequence points were real, physical objects, and their plugged connections expressed the problem. Now the orders are ideal entities, stored in the memory, and it is thus the contents of this particular segment of the memory that express the problem. Accordingly, this mode of control is called “memory-stored control.” ([Location 709](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=709))
- Tags: [[pink]]
- In this case, since the orders that exercise the entire control are in the memory, a higher degree of flexibility is achieved than in any previous mode of control. Indeed, the machine, under the control of its orders, can extract numbers (or orders) from the memory, process them (as numbers!), and return them to the memory (to the same or to other locations); i.e. it can change the contents of the memory—indeed this is its normal modus operandi. Hence it can, in particular, change the orders (since these are in the memory!)—the very orders that control its actions. Thus all sorts of sophisticated order-systems become possible, which keep successively modifying themselves and hence also the computational processes that are likewise under their control. In this way more complex processes than mere iterations become possible. ([Location 713](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=713))
- Tags: [[pink]]
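A toy demonstration of memory-stored control (everything here, including the instruction set, is invented for illustration): orders and data share one memory, so executing order 1 rewrites order 2 before it runs.

```python
memory = [
    ("print", 4),                   # 0: print the contents of cell 4
    ("store", (2, ("print", 5))),   # 1: overwrite cell 2 with a new order
    ("halt", None),                 # 2: about to be replaced by ("print", 5)
    ("halt", None),                 # 3: stop
    "first datum",                  # 4: data
    "second datum",                 # 5: data
]

pc = 0
while True:
    op, arg = memory[pc]
    pc += 1
    if op == "print":
        print(memory[arg])
    elif op == "store":
        addr, new_order = arg
        memory[addr] = new_order    # the program modifies its own orders
    elif op == "halt":
        break
# prints: first datum, second datum -- the second print was written at run time
```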
- the main limitation of analog machines relates to precision. Indeed, the precision of electrical analog machines rarely exceeds 1:10^3, and even mechanical ones (like the differential analyzer) achieve at best 1:10^4 to 1:10^5. Digital machines, on the other hand, can achieve any desired precision; ([Location 770](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=770))
- Tags: [[pink]]
- if there are large numbers of arithmetical operations, the errors occurring in each operation are superposed. Since they are in the main (although not entirely) random, it follows that if there are N operations, the error will not be increased N times, but about √N times. This by itself will not, as a rule, suffice to necessitate a stepwise 1:10^12 precision for an over-all 1:10^3 result (cf. above). For this to be so, √N/10^12 ∼ 1/10^3 would be needed, i.e. N ∼ 10^18, whereas even in the fastest modern machines N gets hardly larger than 10^10. ([Location 798](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=798))
- Tags: [[pink]]
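A Monte Carlo check of the √N claim above; the error model, independent per-operation errors uniform in a fixed interval, is illustrative.

```python
import random
import statistics

# Superpose N independent per-operation errors, each uniform in [-1, 1], and
# watch the typical total grow like sqrt(N) rather than N.
for N in (100, 1_000, 10_000):
    totals = [sum(random.uniform(-1, 1) for _ in range(N)) for _ in range(300)]
    print(N, round(statistics.stdev(totals), 1))  # ~0.58*sqrt(N): about 5.8, 18, 58
```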
- The memory organs belong to several different classes. The characteristic by which they are classified is the “access time.” The access time is defined as follows. First: the time required to store a number which is already present in some other part of the machine (usually in a register of active organs, cf. below)—removing the number that the memory organ may have been storing before. Second: the time required to “repeat” the number stored—upon “questioning”—to another part of the machine, which can accept it (usually to a register of active organs, cf. below). ([Location 842](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=842))
- Tags: [[pink]]
- All existing machines and memories use “direct addressing,” which is to say that every word in the memory has a numerical address of its own that characterizes it and its position within the memory (the total aggregate of all hierarchic levels) uniquely. This numerical address is always explicitly specified when the memory word is to be read or written. ([Location 908](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=908))
- Tags: [[pink]]
- The most immediate observation regarding the nervous system is that its functioning is prima facie digital. ([Location 924](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=924))
- Tags: [[pink]]
- The basic component of this system is the nerve cell, the neuron, and the normal function of a neuron is to generate and to propagate a nerve impulse. This impulse is a rather complex process, which has a variety of aspects—electrical, chemical, and mechanical. It seems, nevertheless, to be a reasonably uniquely defined process, i.e. nearly the same under all conditions; it represents an essentially reproducible, unitary response to a rather wide variety of stimuli. ([Location 925](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=925))
- Tags: [[pink]]
- The nerve cell consists of a body from which originate, directly or indirectly, one or more branches. Such a branch is called an axon of the cell. The nerve impulse is a continuous change, propagated—usually at a fixed speed, which may, however, be a function of the nerve cell involved—along the (or rather, along each) axon. As mentioned above, this condition can be viewed under multiple aspects. One of its characteristics is certainly that it is an electrical disturbance; in fact, it is most frequently described as being just that. This disturbance is usually an electrical potential of something like 50 millivolts and of about a millisecond’s duration. Concurrently with this electrical disturbance there also occur chemical changes along the axon. Thus, in the area of the axon over which the pulse-potential is passing, the ionic constitution of the intracellular fluid changes, and so do the electrical-chemical properties (conductivity, permeability) of the wall of the axon, the membrane. At the endings of the axon the chemical character of the change is even more obvious; there, specific and characteristic substances make their appearance when the pulse arrives. Finally, there are probably mechanical changes as well. Indeed, it is very likely that the changes of the various ionic permeabilities of the cell membrane (cf.… ([Location 931](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=931))
- Tags: [[pink]]
- on the molecular scale there are no sharp distinctions between all these kinds of changes: every chemical change is induced by a change in intramolecular forces which determine changed relative positions of the molecules, i.e. it is mechanically induced. Furthermore, every such intramolecular mechanical change alters the electrical properties of the molecule involved, and therefore induces changed electrical properties and changed relative electrical potential levels. To sum up: on the usual (macroscopic) scale, electrical, chemical, and mechanical processes represent alternatives between which sharp distinctions can be maintained. However, on the near-molecule level of the… ([Location 945](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=945))
- Tags: [[pink]]
- the fully developed nerve impulses are comparable, no matter how induced. Because their character is not an unambiguously defined one (it may be viewed electrically as well as chemically, cf. above), its induction, too, can be alternatively attributed to electrical or to chemical causes. Within the… ([Location 952](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=952))
- Tags: [[pink]]
- I can now return to the digital character of this mechanism. The nervous pulses can clearly be viewed as (two-valued) markers, in the sense discussed previously: the absence of a pulse then represents one value (say, the binary digit 0), and the presence of one represents the other (say, the binary digit 1). This must, of course, be interpreted as an occurrence on a specific axon (or, rather, on all the axons of a specific neuron), and possibly in a specific time relation to other events. It is, then, to be interpreted as a marker (a binary digit 0 or 1) in a specific, logical role. As mentioned above, pulses (which appear on the axons of a given neuron) are usually stimulated by other pulses that are impinging on the body of the neuron. This stimulation is, as a rule, conditional, i.e. only certain combinations and synchronisms of such primary pulses stimulate the secondary pulse in question—all others will fail to so stimulate. That is, the neuron is an organ which accepts and emits definite physical entities, the pulses. Upon receipt of pulses in certain combinations and synchronisms it will be stimulated to emit a pulse of its own, otherwise it will not emit. The rules which describe to which groups of pulses it will so respond are the rules that govern it as an active organ. This is clearly the description of the functioning of an organ in a digital machine, and of the way in which the role and function of a digital organ has to be characterized. It therefore justifies the original assertion, that the nervous system has a prima facie digital character. ([Location 961](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=961))
- Tags: [[pink]]
- this is not the most significant way to define the reaction time of a neuron, when viewed as an active organ in a logical machine. The reason for this is that immediately after the stimulated pulse has become evident, the stimulated neuron has not yet reverted to its original, prestimulation condition. It is fatigued, i.e. it could not immediately accept stimulation by another pulse and respond in the standard way. From the point of view of machine economy, it is a more important measure of speed to state after how much time a stimulation that induced a standard response can be followed by another stimulation that will also induce a standard response. This duration is about 1.5 × 10^-2 seconds. It is clear from these figures that only one or two per cent of this time is needed for the actual trans-synaptic stimulation, the remainder representing recovery time, during which the neuron returns from its fatigued, immediate post-stimulation condition to its normal, prestimulation one. It should be noted that this recovery from fatigue is a gradual one—already at a certain earlier time (after about 0.5 × 10^-2 seconds) the neuron can respond in a nonstandard way, namely it will produce a standard pulse, but only in response to a stimulus which is significantly stronger than the one needed under standard conditions. ([Location 990](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=990))
- Tags: [[pink]]
- the reaction time of a neuron is, depending on how one defines it, somewhere between 10^-4 and 10^-2 seconds, but the more significant definition is the latter one. Compared to this, modern vacuum tubes and transistors can be used in large logical machines at reaction times between 10^-6 and 10^-7 seconds. (Of course, I am allowing here, too, for the complete recovery time; the organ in question is, after this duration, back to its prestimulation condition.) That is, our artifacts are, in this regard, well ahead of the corresponding natural components, by factors like 10^4 to 10^5. ([Location 1000](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1000))
- Tags: [[pink]]
- The energy dissipation in the human central nervous system (in the brain) is of the order of 10 watts. Since, as pointed out above, the order of 10^10 neurons are involved here, this means a dissipation of 10^-9 watts per neuron. The typical dissipation of a vacuum tube is of the order of 5 to 10 watts. The typical dissipation of a transistor may be as little as 10^-1 watts. Thus the natural components lead the artificial ones with respect to dissipation by factors like 10^8 to 10^9—the same factors that appeared above with respect to volume requirements. ([Location 1032](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1032))
- Tags: [[pink]]
- It seems nevertheless worth while to formulate certain conclusions at this point. They are the following ones. First: in terms of the number of actions that can be performed by active organs of the same total size (defined by volume or by energy dissipation) in the same interval, the natural componentry is a factor 10^4 ahead of the artificial one. This is the quotient of the two factors obtained above, i.e. of 10^8 to 10^9 by 10^4 to 10^5. Second: the same factors show that the natural componentry favors automata with more, but slower, organs, while the artificial one favors the reverse arrangement of fewer, but faster, organs. Hence it is to be expected that an efficiently organized large natural automaton (like the human nervous system) will tend to pick up as many logical (or informational) items as possible simultaneously, and process them simultaneously, while an efficiently organized large artificial automaton (like a large modern computing machine) will be more likely to do things successively—one thing at a time, or at any rate not so many things at a time. That is, large and efficient natural automata are likely to be highly parallel, while large and efficient artificial automata will tend to be less so, and rather to be serial. (Cf. some earlier remarks on parallel versus serial arrangements.) Third: it should be noted, however, that parallel and serial operation are not unrestrictedly substitutable for each other—as would be required to make the first remark above completely valid, with its simple scheme of dividing the size-advantage factor by the speed-disadvantage factor in order to get a single (efficiency) “figure of merit.” More specifically, not everything serial can be immediately paralleled—certain operations can only be performed after certain others, and not simultaneously with them (i.e. they must use the results of the latter). In such a case, the transition from a serial scheme to a parallel one may be impossible, or it may be possible but only concurrently with a change in the logical approach and organization of the procedure. Conversely, the desire to serialize a parallel procedure may impose new requirements on the automaton. Specifically, it will almost always create new memory requirements, since the results of the operations that are performed first must be stored while the operations that come after these are performed. Hence the logical approach and structure in natural automata may be expected to differ widely from those in artificial automata. Also, it is likely that the memory requirements of the latter will turn out to be systematically more severe than those of the former. ([Location 1042](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1042))
- Tags: [[pink]]
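The quotient in the passage's first conclusion, worked through with its own order-of-magnitude figures:

```python
# Natural components lead on count per unit volume or watt by 10^8 to 10^9,
# and trail on speed by 10^4 to 10^5; dividing like ends of the ranges gives
# the net factor of ~10^4 quoted above.
size_advantage = (1e8, 1e9)
speed_penalty  = (1e4, 1e5)
print(size_advantage[0] / speed_penalty[0],   # 10000.0
      size_advantage[1] / speed_penalty[1])   # 10000.0
```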
- the normal output of a neuron is the standard nerve pulse. This can be induced by various forms of stimulation, including the arrival of one or more pulses from other neurons. Other possible stimulators are phenomena in the outside world to which a particular neuron is specifically sensitive (light, sound, pressure, temperature), and physical and chemical changes within the organism at the point where the neuron is situated. ([Location 1066](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1066))
- Tags: [[pink]]
- there is some plausibility in assuming that things can, in reality, be even more complicated than this. It may well be that certain nerve pulse combinations will stimulate a given neuron not simply by virtue of their number but also by virtue of the spatial relations of the synapses to which they arrive. That is, one may have to face situations in which there are, say, hundreds of synapses on a single nerve cell, and the combinations of stimulations on these that are effective (that generate a response pulse in the last-mentioned neuron) are characterized not only by their number but also by their coverage of certain special regions on that neuron (on its body or on its dendrite system, cf. above), by the spatial relations of such regions to each other, and by even more complicated quantitative and geometrical relationships that might be relevant. ([Location 1088](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1088))
- Tags: [[pink]]
- On all these matters certain (more or less incomplete) bodies of observation exist, and they all indicate that the individual neuron may be—at least in suitable special situations—a much more complicated mechanism than the dogmatic description in terms of stimulus-response, following the simple pattern of elementary logical operations, can express. ([Location 1107](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1107))
- Tags: [[pink]]
- It should be said, however, that all complications of this type mean, in terms of the counting of basic active organs as we have practiced it so far, that a nerve cell is more than a single basic active organ, and that any significant effort at counting has to recognize this. Obviously, even the more complicated stimulation criteria have this effect. If the nerve cell is activated by the stimulation of certain combinations of synapses on its body and not by others, then the significant count of basic active organs must presumably be a count of synapses rather than of nerve cells. If the situation is further refined by the appearance of the “mixed” phenomena referred to above, these counts get even more difficult. Already the necessity of replacing the nerve cell count by a synapsis count may increase the number of basic active organs by a considerable factor, like 10 to 100. This, and similar circumstances, ought to be borne in mind in connection with the basic active organ counts referred to so far. ([Location 1141](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1141))
- Tags: [[pink]]
- The question of the physical embodiment of this memory remains. For this, various authors have suggested a variety of solutions. It has been proposed to assume that the thresholds—or, more broadly stated, the stimulation criteria—for various nerve cells change with time as functions of the previous history of that cell. Thus frequent use of a nerve cell might lower its threshold, i.e. ease the requirements of its stimulation, and the like. If this were true, the memory would reside in the variability of the stimulation criteria. It is certainly a possibility, but I will not attempt to discuss it here. A still more drastic embodiment of the same idea would be achieved by assuming that the very connections of the nerve cells, i.e. the distribution of conducting axons, vary with time. This would mean that the following state of things could exist. Conceivably, persistent disuse of an axon might make it ineffective for later use. On the other hand, very frequent (more than normal) use might give the connection that it represents a lower threshold (a facilitated stimulation criterion) over that particular path. In this case, again, certain parts of the nervous system would be variable in time and with previous history and would, thus, in and by themselves represent a memory. Another form of memory, which is obviously present, is the genetic part of the body: the chromosomes and their constituent genes are clearly memory elements which by their state affect, and to a certain extent determine, the functioning of the entire system. Thus there is a possibility of a genetic memory system also. ([Location 1195](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1195))
- Tags: [[pink]]
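A sketch of the first proposal above, memory residing in variable stimulation criteria: frequent use lowers a threshold, and disuse (here, failed stimulation) makes the path less effective. The update rule and constants are invented for illustration.

```python
class PlasticConnection:
    """Stimulation criterion that varies with the connection's history."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def stimulate(self, strength):
        fired = strength >= self.threshold
        # Frequent use eases future stimulation; failed use makes the
        # path slightly less effective.
        self.threshold *= 0.95 if fired else 1.01
        return fired

c = PlasticConnection()
print([c.stimulate(1.0) for _ in range(5)])   # repeated use: all fire
print(round(c.threshold, 3))                  # ~0.774: the path is facilitated
```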
- Now, in this context, the genetic phenomena play an especially typical role. The genes themselves are clearly parts of a digital system of components. Their effects, however, consist of stimulating the formation of specific chemicals, namely of definite enzymes that are characteristic of the gene involved, and, therefore, belong in the analog area. Thus, in this domain, a particular specific instance of the alternation between analog and digital is exhibited, i.e. this is a member of a broader class, to which I referred as such above in a more general way. ([Location 1250](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1250))
- Tags: [[pink]]
- As pointed out before, we know a certain amount about how the nervous system transmits numerical data. They are usually transmitted by periodic or nearly periodic trains of pulses. An intensive stimulus on a receptor will cause the latter to respond each time soon after the limit of absolute refractoriness has been underpassed. A weaker stimulus will cause the receptor to respond also in a periodic or nearly periodic way, but with a somewhat lower frequency, since now not only the limit of absolute refractoriness but even a limit of a certain relative refractoriness will have to be under-passed before each next response becomes possible. Consequently, intensities of quantitative stimuli are rendered by periodic or nearly periodic pulse trains, the frequency always being a monotone function of the intensity of the stimulus. ([Location 1326](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1326))
- Tags: [[pink]]
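A sketch of the rate coding described above: the interval between pulses is an absolute refractory period plus a relative part that shrinks as the stimulus strengthens, so the firing frequency is a monotone function of intensity. All constants are illustrative.

```python
def firing_rate(intensity, absolute_refractory=0.005, k=0.05):
    """Pulses per second for a given stimulus intensity."""
    interval = absolute_refractory + k / intensity   # stronger stimulus means the
    return 1.0 / interval                            # threshold is re-crossed sooner

for intensity in (0.5, 1, 2, 4, 8):
    print(intensity, round(firing_rate(intensity)))  # 10, 18, 33, 57, 89 Hz
```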
- Clearly, under the conditions, precisions like the ones mentioned above (10 to 12 decimals!) are altogether out of the question. The nervous system is a computing machine which manages to do its exceedingly complicated work on a rather low level of precision: according to the above, only precision levels of 2 to 3 decimals are possible. This fact must be emphasized again and again because no known computing machine can operate reliably and significantly on such a low precision level. ([Location 1337](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1337))
- Tags: [[pink]]
- Arithmetical Deterioration. Roles of Arithmetical and Logical Depths To anyone who has studied the deterioration of precision in the course of a long calculation, the answer is clear. This deterioration is due, as pointed out before, to the accumulation of errors by superposition, and even more by the amplification of errors committed early in the calculation, by the manipulations in the subsequent parts of the calculation; i.e. it is due to the considerable number of arithmetical operations that have to be performed in series, or in other words to the great arithmetical depth of the scheme. The fact that there are many operations to be performed in series is, of course, just as well a characteristic of the logical structure of the scheme as of its arithmetical structure. It is, therefore, proper to say that all of these deterioration-of-precision phenomena are due to the great logical depth of the schemes one is dealing with here. ([Location 1346](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1346))
- Tags: [[pink]]
- Arithmetical Precision or Logical Reliability, Alternatives It should also be noted that the message-system used in the nervous system, as described in the above, is of an essentially statistical character. In other words, what matters are not the precise positions of definite markers, digits, but the statistical characteristics of their occurrence, i.e. frequencies of periodic or nearly periodic pulse-trains, etc. Thus the nervous system appears to be using a radically different system of notation from the ones we are familiar with in ordinary arithmetics and mathematics: instead of the precise systems of markers where the position—and presence or absence—of every marker counts decisively in determining the meaning of the message, we have here a system of notations in which the meaning is conveyed by the statistical properties of the message. We have seen how this leads to a lower level of arithmetical precision but to a higher level of logical reliability: a deterioration in arithmetics has been traded for an improvement in logics. ([Location 1354](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1354))
- Tags: [[pink]]
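A sketch of the trade described above: encode a value as a pulse frequency and decode by counting pulses in a window. Destroying individual pulses barely moves the decoded value (logical reliability), but a count of about a hundred pulses resolves only two or so decimals (arithmetical precision). All parameters are illustrative.

```python
import random

def decode(value, max_rate=100, window=1.0, drop_prob=0.05, trials=1_000):
    """Encode value in (0, 1] as a pulse rate; decode from surviving pulse counts."""
    pulses = round(max_rate * value * window)
    estimates = []
    for _ in range(trials):
        survived = sum(random.random() > drop_prob for _ in range(pulses))
        estimates.append(survived / (max_rate * (1 - drop_prob) * window))
    return sum(estimates) / trials

print(round(decode(0.73), 2))  # ~0.73 even though 5% of the markers were destroyed
```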
- Thus logics and mathematics in the central nervous system, when viewed as languages, must structurally be essentially different from those languages to which our common experience refers. ([Location 1387](https://readwise.io/to_kindle?action=open&asin=B008AUGWLS&location=1387))
- Tags: [[pink]]