A Euclidean Declaration

So far, my favorite part of Jordan Ellenberg's new book Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else is this footnoted observation:

“we hold these truths to be self-evident” wasn’t Jefferson’s line; his first draft of the Declaration has “we hold these truths to be sacred & undeniable.” It was Ben Franklin who scratched out those words and wrote “self-evident” instead, making the document a little less biblical, a little more Euclidean.

Read Longtime philosophy Professor Stephen Barker dies at 92 (The Hub)
He was named professor emeritus after teaching in the Department of Philosophy for nearly four decades
I was thinking about logic a bit this evening and looked up an old professor. Saddened to hear he’s passed away.

👓 The Man Who Tried to Redeem the World with Logic | Issue 21: Information – Nautilus

Read The Man Who Tried to Redeem the World with Logic (Nautilus)
Walter Pitts was used to being bullied. He’d been born into a tough family in Prohibition-era Detroit, where his father, a boiler-maker,…

Highlights, Quotes, Annotations, & Marginalia

McCulloch was a confident, gray-eyed, wild-bearded, chain-smoking philosopher-poet who lived on whiskey and ice cream and never went to bed before 4 a.m.  

Now that is a business card title!

March 03, 2019 at 06:01PM

McCulloch and Pitts were destined to live, work, and die together. Along the way, they would create the first mechanistic theory of the mind, the first computational approach to neuroscience, the logical design of modern computers, and the pillars of artificial intelligence.  

tl;dr

March 03, 2019 at 06:06PM

Gottfried Leibniz. The 17th-century philosopher had attempted to create an alphabet of human thought, each letter of which represented a concept and could be combined and manipulated according to a set of logical rules to compute all knowledge—a vision that promised to transform the imperfect outside world into the rational sanctuary of a library.  

I don’t think I’ve ever heard this quirky story…

March 03, 2019 at 06:08PM

Which got McCulloch thinking about neurons. He knew that each of the brain’s nerve cells only fires after a minimum threshold has been reached: Enough of its neighboring nerve cells must send signals across the neuron’s synapses before it will fire off its own electrical spike. It occurred to McCulloch that this set-up was binary—either the neuron fires or it doesn’t. A neuron’s signal, he realized, is a proposition, and neurons seemed to work like logic gates, taking in multiple inputs and producing a single output. By varying a neuron’s firing threshold, it could be made to perform “and,” “or,” and “not” functions.  

I’m curious what year this was, particularly in relation to Claude Shannon’s 1937 master’s thesis, in which he applied Boolean algebra to electronics.
Based on their meeting date, it would have to be after 1940. And they published in 1943: https://link.springer.com/article/10.1007%2FBF02478259
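To make the highlight above concrete, here’s a minimal sketch of a McCulloch-Pitts-style threshold unit (the weights and thresholds are my own toy choices, not anything from their paper) acting as AND, OR, and NOT gates:

```python
# Toy McCulloch-Pitts-style unit: binary inputs, fixed weights, and a firing
# threshold. Varying the threshold (and allowing an inhibitory input for NOT)
# reproduces the AND/OR/NOT behavior the highlight describes.

def mp_unit(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    return mp_unit([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mp_unit([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    return mp_unit([a], weights=[-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```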

March 03, 2019 at 06:14PM

McCulloch and Pitts alone would pour the whiskey, hunker down, and attempt to build a computational brain from the neuron up.  

A nice way to pass the time, to be sure. Naturally, mathematicians would have been turning “coffee into theorems” instead of whiskey.

March 03, 2019 at 06:15PM

“an idea wrenched out of time.” In other words, a memory.  

March 03, 2019 at 06:17PM

McCulloch and Pitts wrote up their findings in a now-seminal paper, “A Logical Calculus of Ideas Immanent in Nervous Activity,” published in the Bulletin of Mathematical Biophysics.  

March 03, 2019 at 06:21PM

I really like this picture here. Perhaps for a business card?
[Image: colorful painting of a man sitting with an abstract structure around him]
March 03, 2019 at 06:23PM

it had been Wiener who discovered a precise mathematical definition of information: The higher the probability, the higher the entropy and the lower the information content.  

Oops, I think this article is confusing Wiener with Claude Shannon?
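For reference (my own gloss, not the article’s), the now-standard definitions run the other way around: the more probable an event, the less information it carries.

```latex
% Self-information of an outcome x and entropy of a source X (Shannon):
% higher probability means lower information content, and entropy is the
% expected information per outcome.
I(x) = -\log_2 p(x), \qquad H(X) = -\sum_{x} p(x)\,\log_2 p(x)
```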

March 03, 2019 at 06:34PM

By the fall of 1943, Pitts had moved into a Cambridge apartment, was enrolled as a special student at MIT, and was studying under one of the most influential scientists in the world.  

March 03, 2019 at 06:32PM

Thus formed the beginnings of the group who would become known as the cyberneticians, with Wiener, Pitts, McCulloch, Lettvin, and von Neumann its core.  

Wiener always did like the name cyberneticians for its parallelism with mathematicians…

March 03, 2019 at 06:38PM

In the entire report, he cited only a single paper: “A Logical Calculus” by McCulloch and Pitts.  

First Draft of a Report on the EDVAC by John von Neumann

March 03, 2019 at 06:43PM

Oliver Selfridge, an MIT student who would become “the father of machine perception”; Hyman Minsky, the future economist; and Lettvin.  

March 03, 2019 at 06:44PM

at the Second Cybernetic Conference, Pitts announced that he was writing his doctoral dissertation on probabilistic three-dimensional neural networks.  

March 03, 2019 at 06:44PM

In June 1954, Fortune magazine ran an article featuring the 20 most talented scientists under 40; Pitts was featured, next to Claude Shannon and James Watson.  

March 03, 2019 at 06:46PM

Lettvin, along with the young neuroscientist Patrick Wall, joined McCulloch and Pitts at their new headquarters in Building 20 on Vassar Street. They posted a sign on the door: Experimental Epistemology.  

March 03, 2019 at 06:47PM

“The eye speaks to the brain in a language already highly organized and interpreted,” they reported in the now-seminal paper “What the Frog’s Eye Tells the Frog’s Brain,” published in 1959.  

March 03, 2019 at 06:50PM

There was a catch, though: This symbolic abstraction made the world transparent but the brain opaque. Once everything had been reduced to information governed by logic, the actual mechanics ceased to matter—the tradeoff for universal computation was ontology. Von Neumann was the first to see the problem. He expressed his concern to Wiener in a letter that anticipated the coming split between artificial intelligence on one side and neuroscience on the other. “After the great positive contribution of Turing-cum-Pitts-and-McCulloch is assimilated,” he wrote, “the situation is rather worse than better than before. Indeed these authors have demonstrated in absolute and hopeless generality that anything and everything … can be done by an appropriate mechanism, and specifically by a neural mechanism—and that even one, definite mechanism can be ‘universal.’ Inverting the argument: Nothing that we may know or learn about the functioning of the organism can give, without ‘microscopic,’ cytological work any clues regarding the further details of the neural mechanism.”  

March 03, 2019 at 06:54PM

Nature had chosen the messiness of life over the austerity of logic, a choice Pitts likely could not comprehend. He had no way of knowing that while his ideas about the biological brain were not panning out, they were setting in motion the age of digital computing, the neural network approach to machine learning, and the so-called connectionist philosophy of mind.  

March 03, 2019 at 06:55PM

by stringing them together exactly as Pitts and McCulloch had discovered, you could carry out any computation.  

I feel like this is something more akin to what may have been already known from Boolean algebra and Whitehead/Russell by this time. Certainly Shannon would have known of it?
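As a toy illustration of the “stringing together” point (my own sketch, not the paper’s construction), composing a few threshold gates already yields a function like XOR that no single such gate can compute:

```python
# Toy illustration: a two-layer circuit of threshold "gates" computing XOR.
# The weights and thresholds are made up for the example.

def gate(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def XOR(a, b):
    or_ab = gate([a, b], [1, 1], 1)            # a OR b
    nand_ab = gate([a, b], [-1, -1], -1)       # NOT (a AND b)
    return gate([or_ab, nand_ab], [1, 1], 2)   # (a OR b) AND NOT (a AND b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))  # prints 0, 1, 1, 0 for the four input pairs
```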

March 03, 2019 at 06:58PM

👓 The Semantic Web, Syllogism, and Worldview | Clay Shirky

Read The Semantic Web, Syllogism, and Worldview by Clay Shirky (shirky.com)
The W3C's Semantic Web project has been described in many ways over the last few years: an extension of the current web in which information is given well-defined meaning, a place where machines can analyze all the data on the Web, even a Web in which machine reasoning will be ubiquitous and devastatingly powerful. The problem with descriptions this general, however, is that they don't answer the obvious question: What is the Semantic Web good for? The simple answer is this: The Semantic Web is a machine for creating syllogisms. A syllogism is a form of logic, first described by Aristotle, where "...certain things being stated, something other than what is stated follows of necessity from their being so." [Organon]
Not sure I like the logic in his vampire example, as the language is missing some simple subtlety in its definition.

Basic Category Theory by Tom Leinster | Free Ebook Download

Bookmarked Basic Category Theory (arxiv.org)
This short introduction to category theory is for readers with relatively little mathematical background. At its heart is the concept of a universal property, important throughout mathematics. After a chapter introducing the basic definitions, separate chapters present three ways of expressing universal properties: via adjoint functors, representable functors, and limits. A final chapter ties the three together. For each new categorical concept, a generous supply of examples is provided, taken from different parts of mathematics. At points where the leap in abstraction is particularly great (such as the Yoneda lemma), the reader will find careful and extensive explanations.
Tom Leinster has released a digital e-book copy of his textbook Basic Category Theory on arXiv. [1]

h/t to John Carlos Baez for the notice:

My friend Tom Leinster has written a great introduction to that wonderful branch of math called category theory! It’s free:

https://arxiv.org/abs/1612.09375

It starts with the basics and it leads up to a trio of related concepts, which are all ways of talking about universal properties.

Huh? What’s a ‘universal property’?

In category theory, we try to describe things by saying what they do, not what they’re made of. The reason is that you can often make things out of different ingredients that still do the same thing! And then, even though they will not be strictly the same, they will be isomorphic: the same in what they do.

A universal property amounts to a precise description of what an object does.

Universal properties show up in three closely connected ways in category theory, and Tom’s book explains these in detail:

through representable functors (which are how you actually hand someone a universal property),

through limits (which are ways of building a new object out of a bunch of old ones),

through adjoint functors (which give ways to ‘freely’ build an object in one category starting from an object in another).

If you want to see this vague wordy mush here transformed into precise, crystalline beauty, read Tom’s book! It’s not easy to learn this stuff – but it’s good for your brain. It literally rewires your neurons.

Here’s what he wrote, over on the category theory mailing list:

…………………………………………………………………..

Dear all,

My introductory textbook “Basic Category Theory” was published by Cambridge University Press in 2014. By arrangement with them, it’s now also free online:

https://arxiv.org/abs/1612.09375

It’s also freely editable, under a Creative Commons licence. For instance, if you want to teach a class from it but some of the examples aren’t suitable, you can delete them or add your own. Or if you don’t like the notation (and when have two category theorists ever agreed on that?), you can easily change the LaTeX macros. Just go to the arXiv, download, and edit to your heart’s content.

There are lots of good introductions to category theory out there. The particular features of this one are:
• It’s short.
• It doesn’t assume much.
• It sticks to the basics.
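As a tiny concrete instance of the kind of universal property Baez describes (my own illustration, not an excerpt from the book), the product of two objects A and B is characterized entirely by what it does:

```latex
% Universal property of the product A \times B with projections
% \pi_A and \pi_B: every pair of maps into A and B factors uniquely
% through the product.
\text{For all } f\colon X \to A \text{ and } g\colon X \to B,\quad
\exists!\, \langle f, g\rangle\colon X \to A \times B
\text{ such that } \pi_A \circ \langle f, g\rangle = f
\text{ and } \pi_B \circ \langle f, g\rangle = g.
```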

 

References

[1] T. Leinster, Basic Category Theory, 1st ed. Cambridge University Press, 2014.

[1609.02422] What can logic contribute to information theory?

Bookmarked [1609.02422] What can logic contribute to information theory? by David Ellerman (arxiv.org)
Logical probability theory was developed as a quantitative measure based on Boole's logic of subsets. But information theory was developed into a mature theory by Claude Shannon with no such connection to logic. But a recent development in logic changes this situation. In category theory, the notion of a subset is dual to the notion of a quotient set or partition, and recently the logic of partitions has been developed in a parallel relationship to the Boolean logic of subsets (subset logic is usually mis-specified as the special case of propositional logic). What then is the quantitative measure based on partition logic in the same sense that logical probability theory is based on subset logic? It is a measure of information that is named "logical entropy" in view of that logical basis. This paper develops the notion of logical entropy and the basic notions of the resulting logical information theory. Then an extensive comparison is made with the corresponding notions based on Shannon entropy.
Ellerman is visiting at UC Riverside at the moment. Given the information theory and category theory overlap, I’m curious if he’s working with John Carlos Baez, or whether Baez is aware of this work.

Based on a cursory look at his website(s), I’m going to have to start following more of this work.
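If I’m reading the abstract correctly, the “logical entropy” of a partition with block probabilities p_i is the chance that two independent draws land in different blocks, in contrast with the Shannon entropy of the same distribution (my paraphrase of Ellerman, so treat it as a pointer rather than a definition):

```latex
% Logical entropy vs. Shannon entropy for a partition \pi whose blocks
% have probabilities p_1, ..., p_n (as I understand Ellerman's paper):
h(\pi) = 1 - \sum_i p_i^{2}, \qquad H(\pi) = -\sum_i p_i \log_2 p_i
```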

What An Actual Handwaving Argument in Mathematics Looks Like

I’m sure we’ve all heard them many times, but this is what an actual handwaving argument looks like in a mathematical setting.

Handwaving during Algebraic Number Theory

Instagram filter used: Normal

Photo taken at: UCLA Math Sciences Building
