👓 What can Schrödinger’s cat say about 3D printers on Mars? | Aeon | Aeon Essays

Read What can Schrödinger’s cat say about 3D printers on Mars? by Michael Lachmann and Sara Walker (Aeon | Aeon Essays)
A cat is alive, a sofa is not: that much we know. But a sofa is also part of life. Information theory tells us why

A nice little essay in my area, though I’m not sure there’s anything new in it for me. I do appreciate that they’re trying to break some of the problem down into smaller components before building it back up into something else. Reframing things can always be helpful. Here, in particular, they’re reframing the definitions of life and alive.

🔖 Origins Of Life | Complexity Explorer

Bookmarked Origins Of Life (complexityexplorer.org)

About the Course:

This course aims to push the field of Origins of Life research forward by bringing new and synthetic thinking to the question of how life emerged from an abiotic world.

This course begins by examining the chemical, geological, physical, and biological principles that give us insight into origins of life research. We look at the chemical and geological environment of early Earth from the perspective of likely environments for life to originate.

Taking a look at modern life, we ask what it can tell us about the origin of life by winding the clock backwards. We explore what elements of modern life are absolutely essential for life, and ask what is arbitrary. We ponder how life arose from the huge chemical space and what this early 'living chemistry' may have looked like.

We examine phenomena that may seem particularly lifelike but are in fact likely to arise from physical dynamics alone. We analyze what physical concepts and laws bound the possibilities for life and its formation.

Insights gained from modern evolutionary theory will be applied to proto-life. Once life emerges, we consider how living systems impact the geosphere and evolve complexity. 

The study of Origins of Life is highly interdisciplinary, touching on concepts and principles from earth science, biology, chemistry, and physics. With this we hope the course can bring in students interested in a broad range of fields to explore how life originated.

The course will make use of basic algebra, chemistry, and biology, but potentially difficult topics will be reviewed, and help is available via the course discussion forum and instructor email. There will be pointers to additional resources for those who want to dig deeper.

This course is Complexity Explorer's first Frontiers Course.  A Frontiers Course gives students a tour of an active interdisciplinary research area. The goals of a Frontiers Course are to share the excitement and uncertainty of a scientific area, inspire curiosity, and possibly draw new people into the research community who can help this research area take shape!

I’m totally in for this!

Hat tip for the reminder to:

Listened to Steven Johnson on the Importance of Play and the Decisions We Make by Alan Alda from Clear+Vivid with Alan Alda

How do we come up with ideas? How do we make decisions? And how can we do both better? Steven Johnson has explored this question and written a dozen books about it. In this playful, thoughtful episode, Steven has some fascinating stories, like how Darwin made the decision to get married — or how a defecating duck helped lead to the invention of the computer. Through their own stories, Steven and Alan Alda share their thoughts about the transformative nature of ideas and what sort of environments best give rise to creativity.

I love the idea of the slow hunch discussed here. It’s part of the reason I keep a commonplace book. Johnson also discusses his own personal commonplace book, though he doesn’t give it that particular name here.

The commercial about Alda Communication Training makes me wonder whether they recommend that scientists and communicators have their own websites. In particular, I’m even more curious because of Johnson’s mention of his commonplace book and how he uses it in this episode. I suspect that scientists having a variety of interconnecting commonplaces (via Webmention), built on basic IndieWeb or A Domain of One’s Own principles, could better create slow hunches, create more links, increase creativity and diversity, and foster greater innovation. I’ll have to follow up on this idea. While some may do something slightly like this within other parts of social media, I don’t get the impression that it’s as useful a tool in those places: it isn’t as searchable or permanent-feeling, and is likely rarely reviewed. Being able to own your digital commonplace as a regular tool certainly has more value, as Johnson describes. Functionality like On This Day dramatically increases that value.

But there’s another point that we should make more often, I think, which is that one of the most robust findings in the social sciences and psychology over the last 20 years is that diverse groups are just collectively smarter and more original in the way that they think in, in both their way of dreaming up new ideas, but also in making complicated decisions, that they avoid all the problems of group think and homogeneity that you get when you have a group of like minded people together who are just amplifying each other’s beliefs.—Steven Johnson [00:09:59]

Think about a big decision in your life. Think about the age span of the people you’re talking to about that choice. Are they all your peers within three or four years? Are you talking somebody who’s a generation older and a generation younger?—Steven Johnson [00:13:24]

I was talking to Ramzi Hajj yesterday about having mentors (with a clear emphasis on that mentor being specifically older) and this quote is the same sentiment, just with a slightly different emphasis.

One of the things that is most predictive of a species, including most famously, humans, of their capacity for innovation and problem solving as an adult is how much they play as a newborn or as a child.—Steven Johnson [00:28:10]

Play is important for problem solving.

I think you boil this all down into the idea that if you want to know what the next big thing is, look for where people are having fun.—Alan Alda [00:31:35]

This is interesting because I notice that one of the binding (and even explicitly stated) principles of the IndieWeb is to have fun. Unconsciously, it’s one of the reasons I’ve always thought that what the group is doing is so important.

Ha! Alda has also been watching Shtisel recently [00:50:04].

📑 Solomon Golomb (1932–2016) | Stephen Wolfram Blog

Annotated Solomon Golomb (1932–2016) by Stephen Wolfram (blog.stephenwolfram.com)

As it happens, he’d already done some work on coding theory—in the area of biology. The digital nature of DNA had been discovered by Jim Watson and Francis Crick in 1953, but it wasn’t yet clear just how sequences of the four possible base pairs encoded the 20 amino acids. In 1956, Max Delbrück—Jim Watson’s former postdoc advisor at Caltech—asked around at JPL if anyone could figure it out. Sol and two colleagues analyzed an idea of Francis Crick’s and came up with “comma-free codes” in which overlapping triples of base pairs could encode amino acids. The analysis showed that exactly 20 amino acids could be encoded this way. It seemed like an amazing explanation of what was seen—but unfortunately it isn’t how biology actually works (biology uses a more straightforward encoding, where some of the 64 possible triples just don’t represent anything).  

I recall talking to Sol about this very thing when I sat in on a course he taught at USC on combinatorics. He gave me his paper on it and a few related issues, as I was very interested at the time in the applications of information theory to biology.

I’m glad I managed to sit in on the class and still have the audio recordings and notes. While I can’t say that Newton taught me calculus, I can say I learned combinatorics from Golomb.
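As an aside, the counting argument behind that “exactly 20” is simple enough to check directly: of the 4³ = 64 base triples, the four repeated ones (AAA, CCC, GGG, TTT) can never belong to a comma-free code, and the remaining 60 fall into 20 cyclic-shift classes from which at most one member each may be chosen. A quick sketch of that count (my own check, not Golomb’s construction):

```python
from itertools import product

bases = "ACGT"
triples = ["".join(t) for t in product(bases, repeat=3)]   # all 64 possible triples
periodic = [t for t in triples if len(set(t)) == 1]        # AAA, CCC, GGG, TTT
nonperiodic = [t for t in triples if len(set(t)) > 1]

# A comma-free code may contain at most one triple from each cyclic-shift class:
# if a word and one of its shifts were both codewords, the shift would appear
# straddling the boundary of two concatenated codewords.
classes = {frozenset(t[i:] + t[:i] for i in range(3)) for t in nonperiodic}

print(len(triples), len(periodic), len(classes))           # 64 4 20
```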

Bookmarked From bit to it: How a complex metabolic network transforms information into living matter by Andreas Wagner (BMC Systems Biology)

Background

Organisms live and die by the amount of information they acquire about their environment. The systems analysis of complex metabolic networks allows us to ask how such information translates into fitness. A metabolic network transforms nutrients into biomass. The better it uses information on nutrient availability, the faster it will allow a cell to divide.

Results

I here use metabolic flux balance analysis to show that the accuracy I (in bits) with which a yeast cell can sense a limiting nutrient's availability relates logarithmically to fitness as indicated by biomass yield and cell division rate. For microbes like yeast, natural selection can resolve fitness differences of genetic variants smaller than 10⁻⁶, meaning that cells would need to estimate nutrient concentrations to very high accuracy (greater than 22 bits) to ensure optimal growth. I argue that such accuracies are not achievable in practice. Natural selection may thus face fundamental limitations in maximizing the information processing capacity of cells.

Conclusion

The analysis of metabolic networks opens a door to understanding cellular biology from a quantitative, information-theoretic perspective.

https://doi.org/10.1186/1752-0509-1-33

Received: 01 March 2007 Accepted: 30 July 2007 Published: 30 July 2007
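A rough back-of-envelope reading of those numbers (my own sanity check, not the paper’s flux-balance calculation): if selection can see relative fitness differences as small as s ≈ 10⁻⁶, a cell has to discriminate among on the order of 1/s distinct growth outcomes, which costs about log₂(1/s) bits of sensing accuracy, already in the ballpark of the >22-bit figure Wagner derives more carefully.

```python
import math

# Back-of-envelope only: distinguishing fitness differences of order s
# requires roughly log2(1/s) bits of accuracy in the underlying estimate.
s = 1e-6                               # smallest selectable fitness difference (from the abstract)
bits = math.log2(1 / s)
print(f"log2(1/s) ≈ {bits:.1f} bits")  # ≈ 19.9 bits, same order as the paper's >22 bits
```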

Hat tip to Paul Davies in The Demon in the Machine

Bookmarked Statistical Physics of Self-Replication by Jeremy L. England (J. Chem. Phys. 139, 121923 (2013))
Self-replication is a capacity common to every species of living thing, and simple physical intuition dictates that such a process must invariably be fueled by the production of entropy. Here, we undertake to make this intuition rigorous and quantitative by deriving a lower bound for the amount of heat that is produced during a process of self-replication in a system coupled to a thermal bath. We find that the minimum value for the physically allowed rate of heat production is determined by the growth rate, internal entropy, and durability of the replicator, and we discuss the implications of this finding for bacterial cell division, as well as for the pre-biotic emergence of self-replicating nucleic acids.
https://doi.org/10.1063/1.4818538

Syndicated copy also available on arXiv: https://arxiv.org/abs/1209.1179
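If I’m remembering the paper’s central inequality correctly (worth double-checking against the text before quoting it), the bound ties the heat ⟨ΔQ⟩ released to a bath at temperature T, the replicator’s internal entropy change Δs_int, its growth rate g, and its decay rate δ (the inverse of its durability) together roughly as

$$\beta\,\langle \Delta Q \rangle + \Delta s_{\mathrm{int}} \;\ge\; \ln\!\left(\frac{g}{\delta}\right), \qquad \beta = \frac{1}{k_B T},$$

so a replicator that grows quickly and decays slowly must, for a given internal entropy change, dissipate at least a corresponding minimum amount of heat into its surroundings.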

Hat tip to Paul Davies in The Demon in the Machine

🔖 The notion of information in biology, an appraisal | Jérôme Segal | Journal BIO Web of Conferences

Bookmarked The notion of information in biology, an appraisal by Jérôme Segal (Journal BIO Web of Conferences Volume 4, Page 00017, 2015; ORIGINS – Studies in Biological and Cultural Evolution)

Developed during the first half of the 20th century in three different fields, theoretical physics, statistics applied to agronomy, and telecommunication engineering, the notion of information has become a scientific concept in the context of the Second World War. It is in this highly interdisciplinary environment that “information theory” emerged, combining the mathematical theory of communication and cybernetics. This theory has grown exponentially in many disciplines, including biology. The discovery of the genetic “code” benefited from the development of a common language based on information theory and fostered an almost imperialist development of molecular genetics, which culminated in the Human Genome Project. This project, however, could not fulfill all the raised expectations, and epigenetics has shown the limits of this approach. Still, the theory of information continues to be applied in current research, whether in the application of self-correcting coding theory to explain the conservation of genomes on a geological scale or in aspects of the theory of evolution.

[pdf]

https://doi.org/10.1051/bioconf/20150400017

Reply to The Man Who Tried to Redeem the World with Logic | Nautilus

Replied to The Man Who Tried to Redeem the World with Logic by Amanda Gefter (Nautilus)
McCulloch and Pitts were destined to live, work, and die together. Along the way, they would create the first mechanistic theory of the mind, the first computational approach to neuroscience, the logical design of modern computers, and the pillars of artificial intelligence.

A quick note on a factual and temporal error: the article states:

After all, it had been Wiener who discovered a precise mathematical definition of information: The higher the probability, the higher the entropy and the lower the information content.

In fact, it was Claude E. Shannon, one of Wiener’s colleagues, who formulated that definition in the influential A Mathematical Theory of Communication, published in the Bell System Technical Journal in 1948, roughly five years after the point in 1943 that the article is describing. Not only did Wiener not write the paper, it didn’t yet exist to be a factor in Pitts’ choice of a school or adviser at the time. While Wiener may have been a tremendous polymath, I suspect that his mathematical area of expertise during those years was closer to analysis than to probability theory.
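For reference, Shannon’s definition makes the relationship the article garbles quite concrete: the information (surprisal) of an outcome with probability p is -log₂ p, so more probable outcomes carry less information, and entropy is the expected surprisal over a distribution. A minimal sketch of my own, just to illustrate the definitions:

```python
import math

def surprisal(p: float) -> float:
    """Shannon self-information of an outcome with probability p, in bits."""
    return -math.log2(p)

def entropy(dist):
    """Expected surprisal of a probability distribution, in bits."""
    return sum(p * surprisal(p) for p in dist if p > 0)

print(surprisal(0.5), surprisal(0.99))              # 1.0 vs ~0.014: likelier outcome, less information
print(entropy([0.5, 0.5]), entropy([0.99, 0.01]))   # 1.0 vs ~0.08 bits
```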

To put Pitts and McCulloch’s work into additional context, Claude Shannon’s stunning MIT master’s thesis, A Symbolic Analysis of Relay and Switching Circuits (1940), applied Boolean algebra to switching circuits for the first time and thereby largely allowed the digital age to blossom. It would be nice to know whether Pitts and McCulloch were aware of it when they published their work three years later.

👓 The Man Who Tried to Redeem the World with Logic | Issue 21: Information – Nautilus

Read The Man Who Tried to Redeem the World with Logic (Nautilus)
Walter Pitts was used to being bullied. He’d been born into a tough family in Prohibition-era Detroit, where his father, a boiler-maker,…

Highlights, Quotes, Annotations, & Marginalia

McCulloch was a confident, gray-eyed, wild-bearded, chain-smoking philosopher-poet who lived on whiskey and ice cream and never went to bed before 4 a.m.  

Now that is a business card title!

March 03, 2019 at 06:01PM

McCulloch and Pitts were destined to live, work, and die together. Along the way, they would create the first mechanistic theory of the mind, the first computational approach to neuroscience, the logical design of modern computers, and the pillars of artificial intelligence.  

tl;dr

March 03, 2019 at 06:06PM

Gottfried Leibniz. The 17th-century philosopher had attempted to create an alphabet of human thought, each letter of which represented a concept and could be combined and manipulated according to a set of logical rules to compute all knowledge—a vision that promised to transform the imperfect outside world into the rational sanctuary of a library.  

I don’t think I’ve ever heard this quirky story…

March 03, 2019 at 06:08PM

Which got McCulloch thinking about neurons. He knew that each of the brain’s nerve cells only fires after a minimum threshold has been reached: Enough of its neighboring nerve cells must send signals across the neuron’s synapses before it will fire off its own electrical spike. It occurred to McCulloch that this set-up was binary—either the neuron fires or it doesn’t. A neuron’s signal, he realized, is a proposition, and neurons seemed to work like logic gates, taking in multiple inputs and producing a single output. By varying a neuron’s firing threshold, it could be made to perform “and,” “or,” and “not” functions.  

I’m curious what year this was, particularly in relation to Claude Shannon’s master’s thesis in which he applied Boolean algebra to electronics.
Based on their meeting date, it would have to be after 1940. And they published in 1943: https://link.springer.com/article/10.1007%2FBF02478259
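The threshold idea is easy to sketch in code. This is my own simplified weighted-threshold toy (not McCulloch and Pitts’ exact formulation, which treats inhibition as absolute), just to show how the choice of threshold turns the same unit into an AND, OR, or NOT gate:

```python
def mcp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of binary inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

AND = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mcp_neuron([a], [-1], threshold=0)       # single inhibitory input

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT:", NOT(0), NOT(1))                               # 1 0
```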

March 03, 2019 at 06:14PM

McCulloch and Pitts alone would pour the whiskey, hunker down, and attempt to build a computational brain from the neuron up.  

A nice way to pass the time to be sure. Naturally mathematicians would have been turning “coffee into theorems” instead of whiskey.

March 03, 2019 at 06:15PM

“an idea wrenched out of time.” In other words, a memory.  

March 03, 2019 at 06:17PM

McCulloch and Pitts wrote up their findings in a now-seminal paper, “A Logical Calculus of Ideas Immanent in Nervous Activity,” published in the Bulletin of Mathematical Biophysics.  

March 03, 2019 at 06:21PM

I really like this picture here. Perhaps for a business card?
[Image: colorful painting of a man sitting with an abstract structure around him]
March 03, 2019 at 06:23PM

it had been Wiener who discovered a precise mathematical definition of information: The higher the probability, the higher the entropy and the lower the information content.  

Oops, I think this article is confusing Wiener with Claude Shannon?

March 03, 2019 at 06:34PM

By the fall of 1943, Pitts had moved into a Cambridge apartment, was enrolled as a special student at MIT, and was studying under one of the most influential scientists in the world.  

March 03, 2019 at 06:32PM

Thus formed the beginnings of the group who would become known as the cyberneticians, with Wiener, Pitts, McCulloch, Lettvin, and von Neumann its core.  

Wiener always did like cyberneticians for its parallelism with mathematicians…

March 03, 2019 at 06:38PM

In the entire report, he cited only a single paper: “A Logical Calculus” by McCulloch and Pitts.  

First Draft of a Report on the EDVAC by John von Neumann

March 03, 2019 at 06:43PM

Oliver Selfridge, an MIT student who would become “the father of machine perception”; Hyman Minsky, the future economist; and Lettvin.  

March 03, 2019 at 06:44PM

at the Second Cybernetic Conference, Pitts announced that he was writing his doctoral dissertation on probabilistic three-dimensional neural networks.  

March 03, 2019 at 06:44PM

In June 1954, Fortune magazine ran an article featuring the 20 most talented scientists under 40; Pitts was featured, next to Claude Shannon and James Watson.  

March 03, 2019 at 06:46PM

Lettvin, along with the young neuroscientist Patrick Wall, joined McCulloch and Pitts at their new headquarters in Building 20 on Vassar Street. They posted a sign on the door: Experimental Epistemology.  

March 03, 2019 at 06:47PM

“The eye speaks to the brain in a language already highly organized and interpreted,” they reported in the now-seminal paper “What the Frog’s Eye Tells the Frog’s Brain,” published in 1959.  

March 03, 2019 at 06:50PM

There was a catch, though: This symbolic abstraction made the world transparent but the brain opaque. Once everything had been reduced to information governed by logic, the actual mechanics ceased to matter—the tradeoff for universal computation was ontology. Von Neumann was the first to see the problem. He expressed his concern to Wiener in a letter that anticipated the coming split between artificial intelligence on one side and neuroscience on the other. “After the great positive contribution of Turing-cum-Pitts-and-McCulloch is assimilated,” he wrote, “the situation is rather worse than better than before. Indeed these authors have demonstrated in absolute and hopeless generality that anything and everything … can be done by an appropriate mechanism, and specifically by a neural mechanism—and that even one, definite mechanism can be ‘universal.’ Inverting the argument: Nothing that we may know or learn about the functioning of the organism can give, without ‘microscopic,’ cytological work any clues regarding the further details of the neural mechanism.”  

March 03, 2019 at 06:54PM

Nature had chosen the messiness of life over the austerity of logic, a choice Pitts likely could not comprehend. He had no way of knowing that while his ideas about the biological brain were not panning out, they were setting in motion the age of digital computing, the neural network approach to machine learning, and the so-called connectionist philosophy of mind.  

March 03, 2019 at 06:55PM

by stringing them together exactly as Pitts and McCulloch had discovered, you could carry out any computation.  

I feel like this is more akin to something that may already have been known from Boolean algebra and Whitehead/Russell by this time. Certainly Shannon would have known of it?

March 03, 2019 at 06:58PM

👓 Henry Quastler | Wikipedia

Read Henry Quastler (Wikipedia)
Henry Quastler (November 11, 1908 – July 4, 1963) was an Austrian physician and radiologist who became a pioneer in the field of information theory applied to biology after emigrating to America. His work with Sidney Dancoff led to the publication of what is now commonly called Dancoff's Law.

Spent a moment to make a few additions to the page as well…

👓 Dr. Hubert Yockey of Bel Air, Director of APG Reactor; Manhattan Project Nuclear Physicist, Dies at 99 | The Dagger

Read Dr. Hubert Yockey of Bel Air, Director of APG Reactor; Manhattan Project Nuclear Physicist, Dies at 99 (The Dagger - Local News with an Edge)
Hubert Palmer Yockey, 99, died peacefully under hospice care at his home in Bel Air, MD, on January 31, 2016, with his daughter, Cynthia Yockey, at his side. Born in Alexandria, Minnesota, he was t…

Acquired The Emergence of Biological Organization by Henry Quastler

Acquired The Emergence of Biological Organization by Henry Quastler (Yale Univ Press; First Edition (1964))

In 1964 Quastler's book The Emergence of Biological Organization was published posthumously. In 2002, Harold J. Morowitz described it as a "remarkably prescient book" which is "surprisingly contemporary in outlook". In it Quastler pioneers a theory of emergence, developing a model of "a series of emergences from probionts to prokaryotes".

The work is based on lectures given by Quastler during the spring term of 1963, when he was Visiting Professor of Theoretical Biology at Yale University. In these lectures Quastler argued that the formation of single-stranded polynucleotides was well within the limits of probability of what could have occurred during the pre-biologic period of the Earth. However, he noted that polymerization of a single-stranded polymer from mononucleotides is slow, and its hydrolysis is fast; therefore in a closed system consisting only of mononucleotides and their single-stranded polymers, only a small fraction of the available molecules will be polymerized. However, a single-stranded polymer may form a double-stranded one by complementary polymerization, using a single-stranded polynucleotide as a template. Such a process is relatively fast and the resulting double-stranded polynucleotide is much more stable than the single-stranded one since each monomer is bound not only along the sugar phosphate backbone, but also through inter-strand bonding between the bases.

The capability for self-replication, a fundamental feature of life, emerged when double-stranded polynucleotides disassociated into single-stranded ones and each of these served as a template for synthesis of a complementary strand, producing two double-stranded copies. Such a system is mutable since random changes of individual bases may occur and be propagated. Individual replicators with different nucleotide sequences may also compete with each other for nucleotide precursors. Mutations that influence the folding state of polynucleotides may affect the ratio of association of strands to dissociation and thus the ability to replicate. The folding state would also affect the stability of the molecule. These ideas were then developed to speculate on the emergence of genetic information, protein synthesis and other general features of life.

Lily E. Kay says that Quastler's works "are an illuminating example of a well reasoned epistemic quest and a curious disciplinary failure". Quastler's aspiration to create an information based biology was innovative, but his work was "plagued by problems: outdated data, unwarranted assumptions, some dubious numerology, and, most importantly, an inability to generate an experimental agenda." However Quastler's "discursive framework" survived.

Forty-five years after Quastler's 1964 proposal, Lincoln and Joyce described a cross-catalytic system that involves two RNA enzymes (ribozymes) that catalyze each other's synthesis from a total of four component substrates. This synthesis occurred in the absence of protein and could provide the basis for an artificial genetic system.
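As a toy caricature of that kinetic argument (mine, not Quastler’s actual model): treat polymerization and hydrolysis as a simple reversible first-order interconversion. When hydrolysis is fast relative to polymerization, only a small fraction of material sits in the polymer pool at steady state; making the polymerized state more durable, as complementary double-stranding does, raises that fraction dramatically.

```python
# Toy illustration only: monomer pool <-> polymer pool with forward rate k_poly
# (slow polymerization) and reverse rate k_hyd (hydrolysis). At steady state the
# fraction of material in the polymer pool is k_poly / (k_poly + k_hyd).
def steady_state_polymer_fraction(k_poly: float, k_hyd: float) -> float:
    return k_poly / (k_poly + k_hyd)

print(steady_state_polymer_fraction(k_poly=0.01, k_hyd=1.0))    # ~0.01: fragile single strands
print(steady_state_polymer_fraction(k_poly=0.01, k_hyd=0.001))  # ~0.91: a far more durable (e.g. duplex) state
```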


There was a single used copy in the UK for $12.49 while all the rest were $149.00+, so I snapped it up. It should be an interesting read in and of itself, but I suspect it also covers an interesting niche in the history of science with respect to bit history, complexity, and biological organization.

It should arrive sometime between March 13 and March 25.

👓 Book review by Nicolas Rashevsky of Information theory in biology | The bulletin of mathematical biophysics

Read Book review of Information theory in biology by Nicolas Rashevsky (The bulletin of mathematical biophysics, June 1954, Volume 16, Issue 2, pp 183–185)

While sifting through some old bookmarks from CiteULike, which is going to disappear from the web soon, I ran across one for this book review of Henry Quastler’s book Information Theory in Biology (1953).

The last page of the review had an interesting information-theoretic take not only on book reviews, but on the level of information they contain with respect to improved teaching and learning, in an era prior to Mihaly Csikszentmihalyi’s ideas about “flow”.

As it isn’t the easiest thing to track down, I’ll quote the relevant paragraphs from page 185:

The purpose of a scientific book (we at least hope!) is to store and convey information in a given field. The purpose of a review is to convey  information about a book. It is therefore legitimate to attempt a mathematical theory of writing books and to find the optimal conditions which make a book good. At first it may seem that the optimal conditions consist of maximizing the amount of information per page, that is, in minimizing the redundancy. But a certain amount of redundancy may not only be desirable, but necessary. When presenting a new subject to young students who have never heard of it, a judicious amount of repetition is good pedagogy. Giving an exact abstract definition and then illustrating it by an example already constitutes a logical redundancy. But how useful it frequently is! The minimum of redundancy that is found in some well-known and excellent mathematical books (nomina sunt odiosa!) occasionally makes those books difficult to read even for mathematicians.
The optimum amount of redundancy is a function of the information and intelligence of the reader for whom the book is written. The analytical form of this function is to be determined by an appropriate mathematical theory of learning. Writing a book even in a field which belongs entirely to the domains of Her Majesty the Queen of Sciences is, alas, still more an art than a science. Is it not possible, however, that in the future it may become an exact science?
If a reviewer’s information and intelligence are exactly equal to the value for which the book has been optimized, then he will perceive as defects in the book only deviations from the optimal conditions. His criticism will be objective and unbiased. If, however, the reviewer’s information and intelligence deviate in any direction from the value for which the book is intended, then he will perceive shortcomings which are not due to the deviation of the book from the optimum, but to the reviewer’s personal characteristics. He may also perceive some advantages in the same way. If in the society of the future every individual will be tagged, through appropriate tests, as to his information and intelligence at a given time, expressed in appropriate units, then a reviewer will be able to calculate the correction for his personal bias. These are fantastic dreams of today, which may become reality in the future.

Some of this is very indicative of why one has to spend significant time finding and recommending the right textbooks [1][2] for students, and why things like personalized learning and improvements in pedagogy are so painfully difficult. Sadly, on the pedagogy side we haven’t come as far as he may have hoped in nearly 70 years, and, in fact, we may have regressed.
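Since Rashevsky frames the whole question in terms of redundancy, here’s the simplest formal version of that notion (my own toy illustration, not anything from the review): Shannon redundancy measures how far a source’s entropy falls below the maximum possible for its alphabet, and even a zeroth-order character count shows the effect.

```python
import math
from collections import Counter

def char_entropy_bits(text: str) -> float:
    """Empirical per-character (zeroth-order) entropy of a string, in bits."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def redundancy(text: str) -> float:
    """Shannon redundancy: 1 - H / H_max, with H_max = log2(alphabet size)."""
    return 1 - char_entropy_bits(text) / math.log2(len(set(text)))

print(redundancy("abcd" * 10))   # 0.0: every symbol used equally often
print(redundancy("aaab" * 10))   # ~0.19: skewed usage leaves capacity unused
```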

I’ve often seen web developers in the IndieWeb community mention the idea that “naming things is hard”, so I can’t help noticing that this 1950s reviewer uses the Latin catchphrase nomina sunt odiosa, which translates as “names are odious”, a very similar but far older sentiment about naming. It was apparently a problem for the ancients as well.

📑 Walter Pitts by Neil Smalheiser | Journal Perspectives in Biology and Medicine

Bookmarked Walter Pitts by Neil Smalheiser (Journal Perspectives in Biology and Medicine. Volume 43. Issue 2. Page 217 - 226.)
Walter Pitts was pivotal in establishing the revolutionary notion of the brain as a computer, which was seminal in the development of computer design, cybernetics, artificial intelligence, and theoretical neuroscience. He was also a participant in a large number of key advances in 20th-century science.  

This looks like an interesting bio to read.

📑 A logical calculus of the ideas immanent in nervous activity by Warren S. McCulloch, Walter Pitts

Bookmarked A logical calculus of the ideas immanent in nervous activity by Warren S. McCulloch, Walter Pitts (The bulletin of mathematical biophysics December 1943, Volume 5, Issue 4, pp 115–133)
Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms, with the addition of more complicated logical means for nets containing circles; and that for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes. It is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under the other and gives the same results, although perhaps not in the same time. Various applications of the calculus are discussed.

I found a reference to this journal article in a review of Henry Quastler’s book Information Theory in Biology, which said:

A more serious thing, in the reviewer’s opinion, is the complete absence of contributions dealing with information theory and the central nervous system, which may be the field par excellence for the use of such a theory. Although no explicit reference to information theory is made in the well-known paper of W. McCulloch and W. Pitts (1943), the connection is quite obvious. This is made explicit in the systematic elaboration of the McCulloch-Pitts’ approach by J. von Neumann (1952). In his interesting book J. T. Culbertson (1950) discussed possible neural mechanisms for recognition of visual patterns, and particularly investigated the problems of how greatly a pattern may be deformed without ceasing to be recognizable. The connection between this problem and the problem of distortion in the theory of information is obvious. The work of Anatol Rapoport and his associates on random nets, and especially on their applications to rumor spread (see the series of papers which appeared in this Journal during the past four years), is also closely connected with problems of information theory.

Electronic copy available at: http://www.cse.chalmers.se/~coquand/AUTOMATA/mcp.pdf