Bookmarked Sum by David Eagleman (eagleman.com)

SUM is a dazzling exploration of funny and unexpected afterlives that have never been considered -- each presented as a vignette that offers us a stunning lens through which to see ourselves here and now.

In one afterlife you may find that God is the size of a microbe and is unaware of your existence. In another, our creators are a species of dim-witted creatures who built us to figure out what they could not. In a different version of the afterlife you work as a background character in other people's dreams. Or you may find that God is a married couple struggling with discontent, or that the afterlife contains only those people whom you remember, or that the hereafter includes the thousands of previous gods who no longer attract followers. In some afterlives you are split into your different ages; in some you are forced to live with annoying versions of yourself that represent what you could have been; in others you are re-created from your credit card records and Internet history. David Eagleman proposes many versions of our purpose here: we are mobile robots for cosmic mapmakers, we are reunions for a scattered confederacy of atoms, we are experimental subjects for gods trying to understand what makes couples stick together.

These wonderfully imagined tales -- at once funny, wistful, and unsettling -- are rooted in science and romance and awe at our mysterious existence: a mixture of death, hope, computers, immortality, love, biology, and desire that exposes radiant new facets of our humanity.

Looks interesting, but I’ll pass at the moment. I ran across a reference to it, made on a philosophical level, in a complexity-related talk from the Santa Fe Institute.
Liked a tweet by ALIFE Conference 2020 (Twitter)

👓 What can Schrödinger’s cat say about 3D printers on Mars? | Aeon | Aeon Essays

Read What can Schrödinger’s cat say about 3D printers on Mars? by Michael Lachmann and Sara Walker (Aeon | Aeon Essays)
A cat is alive, a sofa is not: that much we know. But a sofa is also part of life. Information theory tells us why
A nice little essay in my area, but I’m not sure there’s anything new in it for me. It is nice that they’re trying to break some of the problem down into smaller components before building it back up into something else. Reframing things can always be helpful. Here, in particular, they’re reframing the definitions of life and alive.

📑 Solomon Golomb (1932–2016) | Stephen Wolfram Blog

Annotated Solomon Golomb (1932–2016) by Stephen Wolfram (blog.stephenwolfram.com)

As it happens, he’d already done some work on coding theory—in the area of biology. The digital nature of DNA had been discovered by Jim Watson and Francis Crick in 1953, but it wasn’t yet clear just how sequences of the four possible base pairs encoded the 20 amino acids. In 1956, Max Delbrück—Jim Watson’s former postdoc advisor at Caltech—asked around at JPL if anyone could figure it out. Sol and two colleagues analyzed an idea of Francis Crick’s and came up with “comma-free codes” in which overlapping triples of base pairs could encode amino acids. The analysis showed that exactly 20 amino acids could be encoded this way. It seemed like an amazing explanation of what was seen—but unfortunately it isn’t how biology actually works (biology uses a more straightforward encoding, where some of the 64 possible triples just don’t represent anything).  

I recall talking to Sol about this very thing when I sat in on a course he taught at USC on combinatorics. He gave me his paper on it, along with a few related ones, as I was very interested at the time in the applications of information theory to biology.
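As a quick sanity check on that “exactly 20” claim, here’s a minimal Python sketch of the counting argument (my own illustration, not anything from Wolfram’s post or Sol’s paper): a comma-free code can contain at most one word from each cyclic-shift class, since concatenating a codeword with itself exposes all of its rotations in the overlaps, and constant triples like AAA are ruled out entirely.

```python
from itertools import product

BASES = "ACGU"  # any 4-letter alphabet works the same way

def cyclic_class(word):
    """Canonical representative of a word's cyclic-shift class."""
    return min(word[i:] + word[:i] for i in range(len(word)))

# All 4**3 = 64 triples; the 4 constant ones can never be comma-free.
triples = ["".join(t) for t in product(BASES, repeat=3)]
nonconstant = [t for t in triples if len(set(t)) > 1]

# At most one codeword per cyclic class, so the class count bounds
# the size of any comma-free code of word length 3.
classes = {cyclic_class(t) for t in nonconstant}
print(len(nonconstant), len(classes))  # 60 20
```

Golomb and his colleagues showed that this bound of 20 is actually achievable, which is what made the match with the 20 amino acids so tantalizing.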

I’m glad I managed to sit in on the class and still have the audio recordings and notes. While I can’t say that Newton taught me calculus, I can say I learned combinatorics from Golomb.

👓 Solomon Golomb (1932–2016) | Stephen Wolfram

Read Solomon Golomb (1932–2016) by Stephen Wolfram (blog.stephenwolfram.com)

The Most-Used Mathematical Algorithm Idea in History

An octillion. A billion billion billion. That’s a fairly conservative estimate of the number of times a cellphone or other device somewhere in the world has generated a bit using a maximum-length linear-feedback shift register sequence. It’s probably the single most-used mathematical algorithm idea in history. And the main originator of this idea was Solomon Golomb, who died on May 1—and whom I knew for 35 years.

Solomon Golomb’s classic book Shift Register Sequences, published in 1967—based on his work in the 1950s—went out of print long ago. But its content lives on in pretty much every modern communications system. Read the specifications for 3G, LTE, Wi-Fi, Bluetooth, or for that matter GPS, and you’ll find mentions of polynomials that determine the shift register sequences these systems use to encode the data they send. Solomon Golomb is the person who figured out how to construct all these polynomials.

A fantastic and pretty comprehensive obit for Sol. He did miss out on covering more of Sol’s youth, as well as the cross-town chess rivalry with Basil Gordon when they both lived in Baltimore, years before the two ended up living across town from each other again in Los Angeles.

Many of the fantastical-seeming stories here, as well as the picture of Sol’s personality, read very true to me with respect to the man I knew for almost two decades.
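To make the shift register idea concrete, here’s a toy Fibonacci-style LFSR in Python (my own sketch, not code from any of the specifications Wolfram mentions). The taps (4, 1) correspond to the polynomial x⁴ + x + 1, which is primitive over GF(2), so the 4-bit register cycles through all 2⁴ − 1 = 15 nonzero states and the output is a maximum-length sequence:

```python
from itertools import islice

def lfsr(state, taps, nbits):
    """Fibonacci-style LFSR: XOR the tapped bits for the feedback,
    output the low bit, shift the feedback in at the top."""
    while True:
        feedback = 0
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1
        yield state & 1
        state = (state >> 1) | (feedback << (nbits - 1))

# Taps (4, 1) encode the primitive polynomial x^4 + x + 1.
seq = list(islice(lfsr(state=0b0001, taps=(4, 1), nbits=4), 30))
print(seq[:15])                # one full period of the m-sequence
print(seq[:15] == seq[15:30])  # True: the period is 2**4 - 1 = 15
```

Real systems use much longer registers and carefully chosen polynomials, but the mechanism is exactly this cheap: a few XORs per output bit.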

🔖 [1903.07456] Self-Organization and Artificial Life | arXiv

Bookmarked Self-Organization and Artificial Life by Carlos Gershenson, Vito Trianni, Justin Werfel, Hiroki Sayama (arXiv.org)
Self-organization can be broadly defined as the ability of a system to display ordered spatio-temporal patterns solely as the result of the interactions among the system components. Processes of this kind characterize both living and artificial systems, making self-organization a concept that is at the basis of several disciplines, from physics to biology to engineering. Placed at the frontiers between disciplines, Artificial Life (ALife) has heavily borrowed concepts and tools from the study of self-organization, providing mechanistic interpretations of life-like phenomena as well as useful constructivist approaches to artificial system design. Despite its broad usage within ALife, the concept of self-organization has been often excessively stretched or misinterpreted, calling for a clarification that could help with tracing the borders between what can and cannot be considered self-organization. In this review, we discuss the fundamental aspects of self-organization and list the main usages within three primary ALife domains, namely "soft" (mathematical/computational modeling), "hard" (physical robots), and "wet" (chemical/biological systems) ALife. Finally, we discuss the usefulness of self-organization within ALife studies, point to perspectives for future research, and list open questions.
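As a tiny “soft ALife” illustration of the definition in the abstract (my own example, not taken from the review): an elementary cellular automaton has no central controller, yet purely local interactions among cells generate an ordered spatio-temporal pattern.

```python
def step(cells, rule=90):
    """One synchronous update of an elementary CA with wrap-around."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A single live cell under rule 90 unfolds into a Sierpinski-like
# triangle: global order from strictly local rules.
cells = [0] * 31
cells[15] = 1
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```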
Bookmarked From bit to it: How a complex metabolic network transforms information into living matter by Andreas Wagner (BMC Systems Biology)

Background

Organisms live and die by the amount of information they acquire about their environment. The systems analysis of complex metabolic networks allows us to ask how such information translates into fitness. A metabolic network transforms nutrients into biomass. The better it uses information on nutrient availability, the faster it will allow a cell to divide.

Results

I here use metabolic flux balance analysis to show that the accuracy I (in bits) with which a yeast cell can sense a limiting nutrient's availability relates logarithmically to fitness as indicated by biomass yield and cell division rate. For microbes like yeast, natural selection can resolve fitness differences of genetic variants smaller than 10⁻⁶, meaning that cells would need to estimate nutrient concentrations to very high accuracy (greater than 22 bits) to ensure optimal growth. I argue that such accuracies are not achievable in practice. Natural selection may thus face fundamental limitations in maximizing the information processing capacity of cells.

Conclusion

The analysis of metabolic networks opens a door to understanding cellular biology from a quantitative, information-theoretic perspective.

https://doi.org/10.1186/1752-0509-1-33

Received: 01 March 2007 Accepted: 30 July 2007 Published: 30 July 2007
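A quick back-of-envelope gloss on those figures (my arithmetic, not Wagner’s flux balance model): 22 bits of accuracy means distinguishing about four million nutrient levels, a relative resolution of roughly 2.4 × 10⁻⁷, which is indeed finer than the ~10⁻⁶ fitness differences that selection can resolve.

```python
# What does 22 bits of sensing accuracy mean, numerically?
bits = 22
print(2 ** bits)     # 4194304 distinguishable levels
print(2.0 ** -bits)  # ~2.4e-07 relative resolution, below the ~1e-06
                     # fitness-difference threshold quoted above
```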

Hat tip to Paul Davies in The Demon in the Machine
Bookmarked Statistical Physics of Self-Replication by Jeremy L. England (J. Chem. Phys. 139, 121923 (2013))
Self-replication is a capacity common to every species of living thing, and simple physical intuition dictates that such a process must invariably be fueled by the production of entropy. Here, we undertake to make this intuition rigorous and quantitative by deriving a lower bound for the amount of heat that is produced during a process of self-replication in a system coupled to a thermal bath. We find that the minimum value for the physically allowed rate of heat production is determined by the growth rate, internal entropy, and durability of the replicator, and we discuss the implications of this finding for bacterial cell division, as well as for the pre-biotic emergence of self-replicating nucleic acids.
https://doi.org/10.1063/1.4818538
Syndicated copy also available on arXiv: https://arxiv.org/abs/1209.1179
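For reference, the bound in question, reconstructed from my memory of the paper (so treat the exact form as my paraphrase rather than a quotation): for a replicator with growth rate g and spontaneous decay rate δ (its “durability”), the total entropy production per replication event obeys

```latex
% England's bound as I recall it (paraphrase, notation mine):
\Delta s_{\mathrm{tot}} \;=\; \beta\,\Delta q + \Delta s_{\mathrm{int}}
\;\ge\; \ln\frac{g}{\delta}
\qquad\Longrightarrow\qquad
\beta\,\Delta q_{\min} \;=\; \ln\frac{g}{\delta} - \Delta s_{\mathrm{int}}
```

where β is the inverse temperature of the bath, Δq the heat released, and Δs_int the internal entropy change of the replicator: growing faster, or being more durable (smaller δ), costs more dissipated heat.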

Hat tip to Paul Davies in The Demon in the Machine

📖 Read pages 60-66 of 251 of The Demon in the Machine by Paul Davies

📖 Read pages 60-66 of 251 of The Demon in the Machine: How Hidden Webs of Information Are Finally Solving the Mystery of Life by Paul Davies

So far there’s nothing new for me here. He’s encapsulating a lot of prior books I’ve read. (Though he’s doing an incredible job of it.) There are a handful of references that I’ll want to go take a look at though.

🔖 The notion of information in biology, an appraisal | Jérôme Segal | Journal BIO Web of Conferences

Bookmarked The notion of information in biology, an appraisal by Jérôme Segal (Journal BIO Web of Conferences Volume 4, Page 00017, 2015; ORIGINS – Studies in Biological and Cultural Evolution)

Developed during the first half of the 20th century in three different fields (theoretical physics, statistics applied to agronomy, and telecommunication engineering), the notion of information became a scientific concept in the context of the Second World War. It is in this highly interdisciplinary environment that “information theory” emerged, combining the mathematical theory of communication and cybernetics. This theory has grown exponentially in many disciplines, including biology. The discovery of the genetic “code” benefited from the development of a common language based on information theory and fostered an almost imperialist development of molecular genetics, which culminated in the Human Genome Project. This project, however, could not fulfill all the expectations it raised, and epigenetics has shown the limits of this approach. Still, the theory of information continues to be applied in current research, whether in the application of self-correcting coding theory to explain the conservation of genomes on a geological scale or in aspects of the theory of evolution.

[pdf]

https://doi.org/10.1051/bioconf/20150400017

👓 The Man Who Tried to Redeem the World with Logic | Issue 21: Information – Nautilus

Read The Man Who Tried to Redeem the World with Logic (Nautilus)
Walter Pitts was used to being bullied. He’d been born into a tough family in Prohibition-era Detroit, where his father, a boiler-maker,…

Highlights, Quotes, Annotations, & Marginalia

McCulloch was a confident, gray-eyed, wild-bearded, chain-smoking philosopher-poet who lived on whiskey and ice cream and never went to bed before 4 a.m.  

Now that is a business card title!

March 03, 2019 at 06:01PM

McCulloch and Pitts were destined to live, work, and die together. Along the way, they would create the first mechanistic theory of the mind, the first computational approach to neuroscience, the logical design of modern computers, and the pillars of artificial intelligence.  

tl;dr

March 03, 2019 at 06:06PM

Gottfried Leibniz. The 17th-century philosopher had attempted to create an alphabet of human thought, each letter of which represented a concept and could be combined and manipulated according to a set of logical rules to compute all knowledge—a vision that promised to transform the imperfect outside world into the rational sanctuary of a library.  

I don’t think I’ve ever heard this quirky story…

March 03, 2019 at 06:08PM

Which got McCulloch thinking about neurons. He knew that each of the brain’s nerve cells only fires after a minimum threshold has been reached: Enough of its neighboring nerve cells must send signals across the neuron’s synapses before it will fire off its own electrical spike. It occurred to McCulloch that this set-up was binary—either the neuron fires or it doesn’t. A neuron’s signal, he realized, is a proposition, and neurons seemed to work like logic gates, taking in multiple inputs and producing a single output. By varying a neuron’s firing threshold, it could be made to perform “and,” “or,” and “not” functions.  

I’m curious what year this was, particularly in relation to Claude Shannon’s master’s thesis, in which he applied Boolean algebra to electronic circuits.
Based on their meeting date, it would have to be after 1940. And they published in 1943: https://link.springer.com/article/10.1007%2FBF02478259

March 03, 2019 at 06:14PM
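The quoted passage is essentially describing a threshold unit. A minimal sketch in Python (mine, not from the article; McCulloch and Pitts actually used dedicated inhibitory inputs rather than negative weights, but the thresholding idea is the same):

```python
def mp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted input sum reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Varying the threshold (plus one inhibitory weight) gives the gates:
AND = lambda a, b: mp_neuron((a, b), (1, 1), threshold=2)
OR  = lambda a, b: mp_neuron((a, b), (1, 1), threshold=1)
NOT = lambda a:    mp_neuron((a,),   (-1,),  threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b))
print(NOT(0), NOT(1))  # 1 0
```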

McCulloch and Pitts alone would pour the whiskey, hunker down, and attempt to build a computational brain from the neuron up.  

A nice way to pass the time to be sure. Naturally mathematicians would have been turning “coffee into theorems” instead of whiskey.

March 03, 2019 at 06:15PM

“an idea wrenched out of time.” In other words, a memory.  

March 03, 2019 at 06:17PM

McCulloch and Pitts wrote up their findings in a now-seminal paper, “A Logical Calculus of Ideas Immanent in Nervous Activity,” published in the Bulletin of Mathematical Biophysics.  

March 03, 2019 at 06:21PM

I really like this picture here. Perhaps for a business card?

[Image: colorful painting of a man sitting with an abstract structure around him]
March 03, 2019 at 06:23PM

it had been Wiener who discovered a precise mathematical definition of information: The higher the probability, the higher the entropy and the lower the information content.  

Oops, I think this article is confusing Wiener with Claude Shannon?

March 03, 2019 at 06:34PM
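For the record, the standard Shannon formulation the sentence seems to be garbling: the self-information of an outcome with probability p is −log₂ p, so higher probability means less information, and entropy is the expected self-information. A two-line check:

```python
import math

# Rarer outcomes carry more information (in bits).
for p in (0.01, 0.5, 0.99):
    print(p, -math.log2(p))
```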

By the fall of 1943, Pitts had moved into a Cambridge apartment, was enrolled as a special student at MIT, and was studying under one of the most influential scientists in the world.  

March 03, 2019 at 06:32PM

Thus formed the beginnings of the group who would become known as the cyberneticians, with Wiener, Pitts, McCulloch, Lettvin, and von Neumann its core.  

Wiener always did like the term cyberneticians for its parallelism with mathematicians…

March 03, 2019 at 06:38PM

In the entire report, he cited only a single paper: “A Logical Calculus” by McCulloch and Pitts.  

First Draft of a Report on the EDVAC by John von Neumann

March 03, 2019 at 06:43PM

Oliver Selfridge, an MIT student who would become “the father of machine perception”; Hyman Minsky, the future economist; and Lettvin.  

March 03, 2019 at 06:44PM

at the Second Cybernetic Conference, Pitts announced that he was writing his doctoral dissertation on probabilistic three-dimensional neural networks.  

March 03, 2019 at 06:44PM

In June 1954, Fortune magazine ran an article featuring the 20 most talented scientists under 40; Pitts was featured, next to Claude Shannon and James Watson.  

March 03, 2019 at 06:46PM

Lettvin, along with the young neuroscientist Patrick Wall, joined McCulloch and Pitts at their new headquarters in Building 20 on Vassar Street. They posted a sign on the door: Experimental Epistemology.  

March 03, 2019 at 06:47PM

“The eye speaks to the brain in a language already highly organized and interpreted,” they reported in the now-seminal paper “What the Frog’s Eye Tells the Frog’s Brain,” published in 1959.  

March 03, 2019 at 06:50PM

There was a catch, though: This symbolic abstraction made the world transparent but the brain opaque. Once everything had been reduced to information governed by logic, the actual mechanics ceased to matter—the tradeoff for universal computation was ontology. Von Neumann was the first to see the problem. He expressed his concern to Wiener in a letter that anticipated the coming split between artificial intelligence on one side and neuroscience on the other. “After the great positive contribution of Turing-cum-Pitts-and-McCulloch is assimilated,” he wrote, “the situation is rather worse than better than before. Indeed these authors have demonstrated in absolute and hopeless generality that anything and everything … can be done by an appropriate mechanism, and specifically by a neural mechanism—and that even one, definite mechanism can be ‘universal.’ Inverting the argument: Nothing that we may know or learn about the functioning of the organism can give, without ‘microscopic,’ cytological work any clues regarding the further details of the neural mechanism.”  

March 03, 2019 at 06:54PM

Nature had chosen the messiness of life over the austerity of logic, a choice Pitts likely could not comprehend. He had no way of knowing that while his ideas about the biological brain were not panning out, they were setting in motion the age of digital computing, the neural network approach to machine learning, and the so-called connectionist philosophy of mind.  

March 03, 2019 at 06:55PM

by stringing them together exactly as Pitts and McCulloch had discovered, you could carry out any computation.  

I feel like this is something that may have already been known from Boolean algebra and Whitehead/Russell by this time. Certainly Shannon would have known of it?

March 03, 2019 at 06:58PM

👓 Henry Quastler | Wikipedia

Read Henry Quastler (Wikipedia)
Henry Quastler (November 11, 1908 – July 4, 1963) was an Austrian physician and radiologist who became a pioneer in the field of information theory applied to biology after emigrating to America. His work with Sidney Dancoff led to the publication of what is now commonly called Dancoff's Law.
Spent a moment to make a few additions to the page as well…

👓 Dr. Hubert Yockey of Bel Air, Director of APG Reactor; Manhattan Project Nuclear Physicist, Dies at 99 | The Dagger

Read Dr. Hubert Yockey of Bel Air, Director of APG Reactor; Manhattan Project Nuclear Physicist, Dies at 99 (The Dagger - Local News with an Edge)
Hubert Palmer Yockey, 99, died peacefully under hospice care at his home in Bel Air, MD, on January 31, 2016, with his daughter, Cynthia Yockey, at his side. Born in Alexandria, Minnesota, he was t…