🎧 A computer learns about ingredients and recipes

A computer learns about ingredients and recipes by Jeremy Cherfas from Eat This Podcast
Perhaps you've heard about IBM's giant Watson computer, which dispenses ingredient advice and novel recipes. Jaan Altosaar, a PhD candidate at Princeton University, is working on a recipe recommendation engine that anyone can use.

Audio: http://media.blubrry.com/eatthispodcast/p/mange-tout.s3.amazonaws.com/2017/food2vec.mp3

Support this podcast on Patreon: https://www.patreon.com/etp
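The episode's subject is Jaan Altosaar's food2vec project. The name suggests a word2vec-style approach in which each recipe's ingredient list is treated as a "sentence" and the model learns a vector for every ingredient; the sketch below is my own toy illustration of that idea, and the recipes, parameters, and use of gensim are my assumptions, not Altosaar's actual code or data.

```python
# Hypothetical sketch: learn ingredient embeddings from recipe ingredient lists,
# so that ingredients used in similar contexts end up with similar vectors.
# Requires gensim >= 4.0 (pip install gensim). Not Jaan Altosaar's code.
from gensim.models import Word2Vec

recipes = [
    ["tomato", "basil", "garlic", "olive_oil", "mozzarella"],
    ["garlic", "onion", "tomato", "olive_oil", "oregano"],
    ["soy_sauce", "ginger", "garlic", "scallion", "sesame_oil"],
    ["ginger", "scallion", "rice_vinegar", "soy_sauce"],
    ["butter", "flour", "sugar", "egg", "vanilla"],
]

# Treat each ingredient list as a "sentence"; skip-gram (sg=1) copes with a tiny corpus.
model = Word2Vec(sentences=recipes, vector_size=16, window=5,
                 min_count=1, sg=1, epochs=200)

# Nearby vectors suggest complements or substitutions for a given ingredient.
print(model.wv.most_similar("garlic", topn=3))
```

On a real corpus of many thousands of recipes, the nearest neighbours of an ingredient already function as a simple recommendation engine of the kind discussed in the episode.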

Back in February I had retweeted something interesting from physicist and information theorist Michael Nielsen.

I found the article in it so interesting that, after some brief conversation around it, I recommended it to my then-new friend Jeremy Cherfas, whose Eat This Podcast I had just recently started to enjoy. Mostly I thought he would find it as interesting as I did; I hardly expected he'd turn it into a podcast episode. I've been plowing through back episodes in his catalog, but fortunately this morning I ran out of downloaded episodes in the car, so I started streaming the most recent one and found a lovely surprise: a podcast produced from a tip I had made.

While he surely must have been producing the episode for some time before I started supporting the podcast on Patreon last week, I must say that having an episode made from one of my tips is the best backer thank-you I've ever received from a crowdfunded project.

Needless to say, I found the subject fascinating. In part it reminded me of a section of Hervé This's book The Science of the Oven (eventually I'll get around to posting a review with more thoughts) and some of his prior research, which I was apparently reading on Christmas Day this past year. On page 118 of the text, This discusses the classic French sauces of Escoffier's students Louis Saulnier and Theodore Gringoire [1] and notes that a physical-chemical analysis shows there to be only twenty-three kinds. He continues:

A system that I introduced during the European Conference on Colloids and Interfaces in 2002 [2] offers a new classification, based on the physical chemical structure of the sauce. In it, G indicates a gas, E an aqueous solution, H a fat in the liquid state, and S a solid. These “phases” can be dispersed (symbol /), mixed (symbol +), superimposed (symbol θ), included (symbol @). Thus, veal stock is a solution, which is designated E. Bound veal stock, composed of starch granules swelled by the water they have absorbed, dispersed in an aqueous solution, is thus described by the formula (E/S)/E.

This goes on to describe in a bit more detail how the scientist-cook could then create a vector space of all combinations of foods from a physical-state perspective. A classification system like this could be expanded and bolted on top of the database created by Jaan Altosaar to provide even more realistic recipes of the type discussed in the podcast. The combinatorics of the problem are incredibly large, but my guess is that the constraints of actual practice shrink the space of possible solutions dramatically. It's somewhat like the huge number of combinations of A, C, T, and G one could imagine in our DNA, of which only a vastly smaller subset is ever found in a living human being.
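To make the combinatorics concrete, here is a small sketch (my own illustration, not This's or Altosaar's code) that enumerates phase formulas built from the four phases and just two of the operators quoted above; the counts explode with nesting depth, while real, cookable sauces occupy a tiny corner of that space.

```python
# Count the phase formulas that can be built from G, E, H, S using "/" (dispersed)
# and "+" (mixed), up to a given nesting depth. Toy illustration only; the other
# operators and all culinary constraints are omitted.
from itertools import product

PHASES = ["G", "E", "H", "S"]   # gas, aqueous solution, liquid fat, solid
OPERATORS = ["/", "+"]          # dispersed, mixed

def formulas(depth):
    """All formulas of nesting depth at most `depth`, as strings."""
    if depth == 0:
        return set(PHASES)
    smaller = formulas(depth - 1)
    result = set(smaller)
    for left, op, right in product(smaller, OPERATORS, smaller):
        result.add(f"({left}{op}{right})")
    return result

for d in range(3):
    print(d, len(formulas(d)))
# 0 -> 4, 1 -> 36, 2 -> 2628: the space grows explosively, yet only a small
# subset corresponds to sauces anyone would actually make -- the same point as
# the DNA analogy above.
```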

Small World

The additional byproduct of catching this episode was that it finally reminded me why I had thought the name Jaan Altosaar was so familiar to me when I read his article. It turns out I know Jaan and some of his previous work. Sometime back in 2014 I had corresponded with him regarding his fantastic science news site Useful Science which was just then starting. While I was digging up the connection I realized that my old friend Sol Golomb had also referenced Jaan to me via Mark Wilde for some papers he suggested I read.

References

[1] T. Gringoire and L. Saulnier, Le répertoire de la cuisine. Dupont et Malgat, 1914.
[2] H. This, "La gastronomie moléculaire," Sci. Aliments, vol. 23, no. 2, pp. 187–198, 2003. Available: http://sda.revuesonline.com/article.jsp?articleId=2577

📖 On page 215 of 321 of At Home in the Universe by Stuart Kauffman

📖 Read pages 191 – 215 of At Home in the Universe by Stuart Kauffman

In chapter 9 Kauffman applies his NK landscape model to explain the evolution seen in the Cambrian explosion and the repopulation following the Permian extinction. He then follows this up with some interesting discussion applying it to technological innovation, learning curves, and growth in areas of economics. The chapter has given me a few thoughts on the shape and structure (or "landscape") of mathematics. I'll come back to this section to see if I can't extend the analogy to come up with something unique in math.
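For readers who haven't met the model, here is a minimal sketch of an NK fitness landscape and an adaptive walk on it; the implementation details (random contribution tables, choice of neighbours, parameter values) are my own illustration, not code from Kauffman's book.

```python
# Toy NK landscape: N genes, each gene's fitness contribution depends on its own
# state and the states of K other genes, looked up in a random table. Larger K
# couples the genes more tightly and makes the landscape more rugged.
import random

def make_nk_fitness(N, K, seed=0):
    rng = random.Random(seed)
    # Each gene i depends on the K genes that follow it (one simple choice).
    neighbours = [[(i + j + 1) % N for j in range(K)] for i in range(N)]
    tables = [dict() for _ in range(N)]   # lazily filled random contributions

    def fitness(genome):
        total = 0.0
        for i in range(N):
            key = (genome[i],) + tuple(genome[j] for j in neighbours[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / N

    return fitness

# Adaptive walk: flip one random gene; keep the flip only if fitness improves.
N, K = 20, 4
f = make_nk_fitness(N, K)
genome = [random.randint(0, 1) for _ in range(N)]
for _ in range(2000):
    i = random.randrange(N)
    candidate = genome[:]
    candidate[i] ^= 1
    if f(candidate) > f(genome):
        genome = candidate
print(round(f(genome), 3))   # fitness of the local peak the walk gets stuck on
```

Rerunning the walk with larger K tends to strand it on lower peaks, which is part of the intuition Kauffman carries over to technological innovation and learning curves.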

At the beginning of Chapter 10 he discusses power laws and covers the concept of emergence in ecosystems, coevolution, and the evolution of coevolution. In one part he evokes Adam Smith's invisible hand, which seemingly benefits everyone even as each actor behaves selfishly. Though this seems to have held in the time since it was written, I do wonder what timescales and conditions it works under. As an example, selfishness at the individual, corporate, national, and other higher levels may not be so positive with respect to potential issues like climate change, which may drastically affect the landscape on and in which we live.


Hector Zenil

A new paper (arXiv) and some videos on entropy and algorithmic complexity

I’ve run across some of his work before, but I recently came upon some new material by Hector Zenil that will likely interest those following information theory, complexity, and computer science here. I hadn’t previously noticed that he refers to himself on his website as an “information theoretic biologist” — everyone should have that as a title, shouldn’t they? As a result, I’ve also added him to the growing list of ITBio Researchers.

If you’re not following him everywhere (?) yet, start with some of the sites below (or let me know if I’ve missed anything).

Hector Zenil:

His most recent paper on arXiv:
Low Algorithmic Complexity Entropy-deceiving Graphs

A common practice in the estimation of the complexity of objects, in particular of graphs, is to rely on graph- and information-theoretic measures. Here, using integer sequences with properties such as Borel normality, we explain how these measures are not independent of the way in which a single object, such as a graph, can be described. From descriptions that can reconstruct the same graph and are therefore essentially translations of the same description, we will see that not only is it necessary to pre-select a feature of interest where there is one when applying a computable measure such as Shannon Entropy, and to make an arbitrary selection where there is not, but that more general properties, such as the causal likeliness of a graph as a measure (opposed to randomness), can be largely misrepresented by computable measures such as Entropy and Entropy rate. We introduce recursive and non-recursive (uncomputable) graphs and graph constructions based on integer sequences, whose different lossless descriptions have disparate Entropy values, thereby enabling the study and exploration of a measure’s range of applications and demonstrating the weaknesses of computable measures of complexity.

Subjects: Information Theory (cs.IT); Computational Complexity (cs.CC); Combinatorics (math.CO)
Cite as: arXiv:1608.05972 [cs.IT] (or arXiv:1608.05972v4 [cs.IT])
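As a toy illustration of the abstract's point (my own example, not one of the paper's constructions): the same small graph is described losslessly in two different ways below, and the empirical Shannon entropy of the resulting bit strings differs, even though both strings determine exactly the same object.

```python
# Two lossless binary descriptions of the same graph, a 4-cycle on vertices
# 0-1-2-3, give different empirical Shannon entropies. Toy example only.
from collections import Counter
from math import log2

def shannon_entropy(bits):
    """Empirical Shannon entropy, in bits per symbol, of a 0/1 string."""
    counts = Counter(bits)
    n = len(bits)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Description 1: upper triangle of the adjacency matrix, row by row:
# pairs (0,1),(0,2),(0,3),(1,2),(1,3),(2,3) -> 1,0,1,1,0,1
adjacency_bits = "101101"

# Description 2: edge list, each endpoint written as a 2-bit vertex label:
# (0,1),(1,2),(2,3),(3,0) -> 00 01, 01 10, 10 11, 11 00
edge_list_bits = "0001011010111100"

print(round(shannon_entropy(adjacency_bits), 3))   # ~0.918
print(round(shannon_entropy(edge_list_bits), 3))   # 1.0
```

Which number you get depends on the description you happened to choose, which is the sense in which Shannon entropy can be "deceived" about the complexity of the underlying object.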

YouTube

Yesterday he also posted two new introductory videos to his YouTube channel. There’s nothing overly technical here, but they’re nice short productions that introduce some of his work. (I wish more scientists did this kind of communication.) I’m hoping he’ll post them to his blog and write a bit more there in the future as well.

Universal Measures of Complexity

Relevant literature:

Reprogrammable World

Relevant literature:

Cross-boundary Behavioural Reprogrammability Reveals Evidence of Pervasive Turing Universality by Jürgen Riedel, Hector Zenil
Preprint available at http://arxiv.org/abs/1510.01671

Ed.: 9/7/16: Updated videos with links to relevant literature


Devastating News: Sol Golomb apparently passed away on Sunday

The world has certainly lost one of its greatest thinkers, and many of us have lost a dear friend, colleague, and mentor.

I was getting concerned that I hadn’t heard back from Sol for a while, particularly after emailing him late last week, and then I ran across this notice through ITSOC & the IEEE:

Solomon W. Golomb (May 30, 1932 – May 1, 2016)

Shannon Award winner and long-time ITSOC member Solomon W. Golomb passed away on May 1, 2016.
Solomon W. Golomb was the Andrew Viterbi Chair in Electrical Engineering at the University of Southern California (USC) and was at USC since 1963, rising to the rank of University and Distinguished Professor. He was a member of the National Academies of Engineering and Science, and was awarded the National Medal of Science, the Shannon Award, the Hamming Medal, and numerous other accolades. As USC Dean Yiannis C. Yortsos wrote, “With unparalleled scholarly contributions and distinction to the field of engineering and mathematics, Sol’s impact has been extraordinary, transformative and impossible to measure. His academic and scholarly work on the theory of communications built the pillars upon which our modern technological life rests.”

In addition to his many contributions to coding and information theory, Professor Golomb was one of the great innovators in recreational mathematics, contributing many articles to Scientific American and other publications. More recent Information Theory Society members may be most familiar with his mathematics puzzles that appeared in the Society Newsletter, which will publish a full remembrance later.

A quick search a moment later revealed this sad confirmation along with some great photos from an award Sol received just a week ago:

As is common in academia, I’m sure it will take a few days for the news to drip out, but the world has certainly lost one of its greatest thinkers, and many of us have lost a dear friend, colleague, and mentor.

I’ll try to touch base with his family and pass along what information I can. I’ll post forthcoming obituaries as I see them, and will surely post some additional thoughts and reminiscences of my own in the coming days.

President Barack Obama presents Solomon Golomb with the National Medal of Science at an awards ceremony held at the White House in 2013.