I love Eat This Podcast

Food as a vehicle to explore the byways of taste, economics and trade, culture, science, history, archaeology, geography and just about anything else.

Those who have followed my blog recently will know that I’ve been binge-listening to Jeremy Cherfas’ wonderful podcast series Eat This Podcast.

I’m now so many wonderful episodes in that it’s long past time to give something back to Jeremy for the hours of work he’s put in to give me so much entertainment, enjoyment, and even knowledge. So I just made a pledge to support him on Patreon.

If you haven’t been paying attention, Eat This Podcast is a fantastic series on food, but it uses the “foods we eat to examine and shed light on the lives we lead, from authenticity to zoology”. Food becomes his “vehicle to explore the byways of taste, economics and trade, culture, science, history, archaeology, geography and just about anything else.”

It’s unlike much of anything I’ve seen or followed in the food space for some time. As someone who is a fan of the science of food and fantastic writers like Harold McGee, Herve This, Alton Brown, Tom Standage, Michael Pollan, Nathan Myhrvold, Maxime Bilet, Matt Gross, and Michael Ruhlman (to name only a few), Eat This Podcast is now a must listen for me.

Not only are the episodes always interesting and unique, they’re phenomenally well researched and produced. You’d think he had a massive staff and production support at the level of a news organization like NPR. Speaking of NPR, I want to highlight the thought, care, and skill he puts into not only the stunning audio quality but also the selection of underlying photos, musical bumpers, and links to the additional resources he finds along the way.

And if my recommendation isn’t enough, then perhaps knowing that this one-person effort was nominated for the James Beard Award in both 2015 and 2016 may tip the scales?

If you haven’t listened to any of them yet, I highly recommend you take a peek at what he has to offer. You can subscribe, download, and listen to them all for free. If you’re so inclined, I hope you’ll follow my lead and make a pledge to support his work on Patreon as well.


Syndicated copies to:

👓 Why Facts Don’t Change Our Minds | The New Yorker

Why Facts Don’t Change Our Minds by Elizabeth Kolbert (The New Yorker)
New discoveries about the human mind show the limitations of reason.


Science and technology: what happened in 2016 | Daniel Lemire’s blog

Science and technology: what happened in 2016 by Daniel Lemire (Daniel Lemire's blog)

This year, you are able to buy CRISPR-based gene editing toolkits for \$150 on the Internet as well as autonomous drones, and you can ask your Amazon Echo to play your favorite music or give you a traffic report. You can buy a fully functional Android tablet for \$40 on Amazon. If you have made it to a Walmart near you lately, you know that kids are going to receive dirt cheap remote-controlled flying drones for Christmas this year. Amazon now delivers packages by drone with its Prime Air service. There are 2.6 billion smartphones in the world.


The Food Lab: Better Home Cooking Through Science by J. Kenji López-Alt

The Food Lab: Better Home Cooking Through Science by J. Kenji López-Alt (amazon.com)
The New York Times bestselling winner of the 2016 James Beard Award for General Cooking and the IACP Cookbook of the Year Award. A grand tour of the science of cooking explored through popular American dishes, illustrated in full color. Ever wondered how to pan-fry a steak with a charred crust and an interior that's perfectly medium-rare from edge to edge when you cut into it? How to make homemade mac 'n' cheese that is as satisfyingly gooey and velvety-smooth as the blue box stuff, but far tastier? How to roast a succulent, moist turkey (forget about brining!)―and use a foolproof method that works every time? As Serious Eats's culinary nerd-in-residence, J. Kenji López-Alt has pondered all these questions and more. In The Food Lab, Kenji focuses on the science behind beloved American dishes, delving into the interactions between heat, energy, and molecules that create great food. Kenji shows that often, conventional methods don’t work that well, and home cooks can achieve far better results using new―but simple―techniques. In hundreds of easy-to-make recipes with over 1,000 full-color images, you will find out how to make foolproof Hollandaise sauce in just two minutes, how to transform one simple tomato sauce into a half dozen dishes, how to make the crispiest, creamiest potato casserole ever conceived, and much more.

An exclusive look at data from the controversial web site Sci-Hub reveals that the whole world, both poor and rich, is reading pirated research papers.

Sci-Hub has been in the news quite a bit over the past half year, and the article bookmarked here gives some interesting statistics. I’ll preface the following editorial critique by saying that I love John Bohannon’s work; I’m glad he’s spent the time to do the research he has. Most of the rest of the critique is aimed at the publishing industry itself.

From a journalistic standpoint, I find it disingenuous that the article didn’t actually hyperlink to Sci-Hub. Neither did it link out to (or fully quote) Alicia Wise’s Twitter posts, nor link to her rebuttal list of 20 ways to access Elsevier’s content freely or inexpensively. Of course both of these are editorial decisions, and perhaps the rebuttal was so flimsy as to be unworthy of a link from such an esteemed publication anyway.

Sadly, Elsevier’s list of 20 ways of free or inexpensive access doesn’t really provide any simple coverage for graduate students or researchers in poorer countries, who are the likeliest groups of people using Sci-Hub, unless they fraudulently claim membership in a class to which they don’t belong; and is that morally any better than the original theft? The list’s patient-access option is almost assuredly never used, since it is painfully undiscoverable behind the typical \$30-per-paper paywalls. The patchwork hodgepodge of free access is difficult enough to discern on its own, and one must keep in mind that Elsevier is just one of dozens of publishers a researcher must navigate to find the one thing they’re looking for right now (not to mention the thousands of times they need to do this over a year, much less a career).

Consider this experiment, which could be a good follow-up to the article: is it easier to find and download a paper by title/author/DOI via Sci-Hub (a minute) or through any of the publishers’ platforms, whether with a university subscription (several minutes) or without one (an hour to several days)? Just consider the time it would take to dig up every one of the 30 references in an average journal article: maybe half an hour via Sci-Hub, versus the days or weeks it would take to jump through the multiple hoops to first discover, read about, then gain access to, and finally download them from the more than 14 providers (and this presumes the others provide some type of “access,” as Elsevier does).
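To make that back-of-the-envelope arithmetic concrete, here is a quick sketch; the per-paper minute figures are rough illustrative assumptions in the spirit of the estimates above, not measurements:

```python
# Rough time to retrieve the ~30 references of one journal article.
# The per-paper estimates are illustrative assumptions, not measured values.
REFERENCES = 30

minutes_per_paper = {
    "Sci-Hub": 1,            # look up by title/author/DOI and download directly
    "with subscription": 5,  # navigate a publisher platform that you have access to
    "no subscription": 120,  # discover an access route, request, or pay per paper
}

for route, mins in minutes_per_paper.items():
    total = REFERENCES * mins
    print(f"{route}: {total} minutes (~{total / 60:.1f} hours)")
```

Even with generous assumptions in the publishers’ favor, the single-pathway option wins by more than an order of magnitude.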

Those who lived through the Napster revolution in music will recall that the dead simplicity of that system is largely what killed the old music business and pushed it toward the ecosystem that exists now, with easy access through multiple streaming sites (Spotify, Pandora, etc.) or inexpensive paid options like iTunes. If the publishing business doesn’t want to be completely killed off, it’s going to need to create the iTunes of academia. I suspect the publishers have internal bean-counters watching Sci-Hub’s percentage of total downloads (now apparently 5%) and will probably act only once it passes a much larger threshold, though I imagine they’re really hoping the number stays stable, a signal that they needn’t really be concerned. They’re far more likely to continue to maintain their status quo practices.

Some of this ease-of-access argument is truly borne out by the statistics of open access papers which are downloaded by Sci Hub–it’s simply easier to both find and download them that way compared to traditional methods; there’s one simple pathway for both discovery and download. Surely the publishers, without colluding, could come up with a standardized method or protocol for finding and accessing their material cheaply and easily?
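In fact the publishers already share one building block for such a protocol: the DOI. As a minimal sketch of what a single publisher-agnostic lookup step could look like (my own illustration; Crossref’s public REST API is a real service, but the notion that publishers would standardize on one such front door is the assumption here):

```python
# Normalize whatever identifier a reader has in hand (a bare DOI, a
# "doi:" prefix, or a doi.org URL) into one canonical metadata lookup URL,
# regardless of which publisher hosts the paper.
def normalize_doi(identifier: str) -> str:
    doi = identifier.strip()
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.lower().startswith(prefix):
            doi = doi[len(prefix):]
            break
    return doi

def metadata_url(identifier: str) -> str:
    # Crossref's public REST API: one request shape for every participating publisher.
    return f"https://api.crossref.org/works/{normalize_doi(identifier)}"

print(metadata_url("doi:10.1177/0957154X14562755"))
```

One uniform resolver for discovery, and the per-publisher maze disappears; that is the whole argument in two functions.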

“Hart-Davidson obtained more than 100 years of biology papers the hard way—legally with the help of the publishers. ‘It took an entire year just to get permission,’ says Thomas Padilla, the MSU librarian who did the negotiating.” John Bohannon in Who’s downloading pirated papers? Everyone

Personally, I use relatively advanced tools like LibX, which happens to be offered by my institution and which I feel isn’t very well known, and it still takes me longer to find and download a paper than it would via Sci-Hub. God forbid some enterprising hacker creates a LibX community edition for Sci-Hub. Come to think of it, why haven’t any of the dozens of publishers built and supported simple tools like LibX to make their content easy to access? If we consider the analogy of the introduction of machine guns in World War I, why should modern researchers still be using single-load rifles against an enemy that has access to nuclear weaponry?

My last thought here comes on the heels of the two tweets from Alicia Wise mentioned, but not shown, in the article:

(Trigger alert: apophasis to come.) And none of this is to mention the quality control that is (or isn’t) put into the journals or papers themselves. Fortunately one needn’t even go further than Bohannon’s other writings, like Who’s Afraid of Peer Review?, and then there are the hordes of articles on poor research design, misuse of statistical analysis, and the inability to repeat experiments. Not to give them any ideas, but lately it seems as if Elsevier buying the Enquirer and charging \$30 per article might not be a bad business decision. Maybe they just don’t want to play second banana to TMZ?

Interestingly, there’s a survey at the end of the article which indicates some additional sources of academic copyright infringement. I do have to wonder how the data for the survey will be used. There’s always the possibility that logged-in users, by indicating they’re circumventing copyright, are opening themselves up to litigation.

I also found the concept of using the massive data store as a means of applied corpus linguistics for science an entertaining proposition. This type of research could mean great things for science communication in general. I have heard of people attempting to do such meta-analysis to guide the purchase of potential intellectual property for patent trolling as well.

Finally, for those who haven’t done it (ever or recently), it’s certainly well worth your time and energy to attend one or more of the many 30-60 minute sessions most academic libraries offer at the beginning of the academic term to train library users in research tools and methods. You’ll save yourself a huge amount of time.


Donald Forsdyke Indicates the Concept of Information in Biology Predates Claude Shannon

In the 1870s Ewald Hering in Prague and Samuel Butler in London laid the foundations. Butler's work was later taken up by Richard Semon in Munich, whose writings inspired the young Erwin Schrödinger in the early decades of the 20th century.

When it was first published, I read Kevin Hartnett’s article and interview with Christoph Adami, The Information Theory of Life, in Quanta Magazine. I recently revisited it, read through the commentary, and stumbled upon an interesting quote relating to the history of information in biology:

These two historical references predate Claude Shannon’s mathematical formalization of information in A Mathematical Theory of Communication (The Bell System Technical Journal, 1948) and even Erwin Schrödinger‘s lecture (1943) and subsequent book What is Life (1944).

For those interested in reading more on this historical tidbit, I’ve dug up a copy of the primary Forsdyke reference which first appeared on arXiv (prior to its ultimate publication in History of Psychiatry [.pdf]):

🔖 [1406.1391] ‘A Vehicle of Symbols and Nothing More.’ George Romanes, Theory of Mind, Information, and Samuel Butler by Donald R. Forsdyke  [1]
Submitted on 4 Jun 2014 (v1), last revised 13 Nov 2014 (this version, v2)

Abstract: Today’s ‘theory of mind’ (ToM) concept is rooted in the distinction of nineteenth century philosopher William Clifford between ‘objects’ that can be directly perceived, and ‘ejects,’ such as the mind of another person, which are inferred from one’s subjective knowledge of one’s own mind. A founder, with Charles Darwin, of the discipline of comparative psychology, George Romanes considered the minds of animals as ejects, an idea that could be generalized to ‘society as eject’ and, ultimately, ‘the world as an eject’ – mind in the universe. Yet, Romanes and Clifford only vaguely connected mind with the abstraction we call ‘information,’ which needs ‘a vehicle of symbols’ – a material transporting medium. However, Samuel Butler was able to address, in informational terms depleted of theological trappings, both organic evolution and mind in the universe. This view harmonizes with insights arising from modern DNA research, the relative immortality of ‘selfish’ genes, and some startling recent developments in brain research.

Comments: Accepted for publication in History of Psychiatry. 31 pages including 3 footnotes. Based on a lecture given at Santa Clara University, February 28th 2014, at a Bannan Institute Symposium on ‘Science and Seeking: Rethinking the God Question in the Lab, Cosmos, and Classroom.’

The original arXiv article also referenced two lectures which are appended below:

[Original Draft of this was written on December 14, 2015.]

References

[1]
D. R. Forsdyke, “‘A vehicle of symbols and nothing more’. George Romanes, theory of mind, information, and Samuel Butler,” History of Psychiatry, vol. 26, no. 3, Aug. 2015 [Online]. Available: http://journals.sagepub.com/doi/abs/10.1177/0957154X14562755

No, It’s Not Your Opinion. You’re Just Wrong. | Houston Press

Before you crouch behind your Shield of Opinion you need to ask yourself two questions: 1. Is this actually an opinion? 2. If it is an opinion how informed is it and why do I hold it?

This has to be the best article of the entire year: “No, It’s Not Your Opinion. You’re Just Wrong.”

It is also, not coincidentally, at the root of the vast majority of the problems the world is currently facing. There are so many great quotes here that I can’t pick a favorite, so I’ll highlight the same one Kimb Quark did that brought my attention to it:


Popular Science Books on Information Theory, Biology, and Complexity

The beginning of a four part series in which I provide a gradation of books and texts that lie in the intersection of the application of information theory, physics, and engineering practice to the area of biology.

Previously, I had made a large and somewhat random list of books which lie in the intersection of the application of information theory, physics, and engineering practice to the area of biology. Below I’ll begin to do a somewhat better job of providing a finer gradation of technical level for both the hobbyist and the aspiring student who wishes to bring themselves to a higher level of understanding of these areas. In future posts, I’ll try to begin classifying other texts into graduated strata as well. The final list will be maintained here: Books at the Intersection of Information Theory and Biology.

Introductory / General Readership / Popular Science Books

These books are written on a generally non-technical level and give a broad overview of their topics with occasional forays into interesting or intriguing subtopics. They include little, if any, mathematical equations or conceptualization. Typically, any high school student should be able to read, follow, and understand the broad concepts behind these books.  Though often non-technical, these texts can give some useful insight into the topics at hand, even for the most advanced researchers.

Possibly one of the best places to start, this text gives a great overview of most of the major areas of study related to these fields.

Entropy Demystified: The Second Law Reduced to Plain Common Sense by Arieh Ben-Naim

One of the best books on the concept of entropy out there.  It can be read even by middle school students with no exposure to algebra and does a fantastic job of laying out the conceptualization of how entropy underlies large areas of the broader subject. Even those with Ph.D.’s in statistical thermodynamics can gain something useful from this lovely volume.

A relatively recent popular science volume covering various conceptualizations of what information is and how it’s been dealt with in science and engineering. Though it has its flaws, it’s certainly a good introduction for the beginner, particularly with regard to history.

The Origin of Species by Charles Darwin

One of the most influential pieces of writing known to man, this classic text is the basis from which major strides in biology have been made. A must-read for everyone on the planet.

Information, Entropy, Life and the Universe: What We Know and What We Do Not Know by Arieh Ben-Naim

Information Theory and Evolution by John Avery

Information Theory, Evolution, and the Origin of Life by Hubert P. Yockey

The four books above have a significant amount of overlap. Though one could read all of them, I recommend that those pressed for time choose Ben-Naim first. As I write this I’ll note that Ben-Naim’s book is scheduled for release on May 30, 2015, but he’s been kind enough to allow me to read an advance copy while it was in process; it gets my highest recommendation in its class. Loewenstein covers a bit more than Avery who also has a more basic presentation. Most who continue with the subject will later come across Yockey’s Information Theory and Molecular Biology which is similar to his text here but written at a slightly higher level of sophistication. Those who finish at this level of sophistication might want to try Yockey third instead.

The Red Queen: Sex and the Evolution of Human Nature by Matt Ridley

Grammatical Man: Information, Entropy, Language, and Life  by Jeremy Campbell

Life’s Ratchet: How Molecular Machines Extract Order from Chaos by Peter M. Hoffmann

Complexity: The Emerging Science at the Edge of Order and Chaos by M. Mitchell Waldrop

The Big Picture: On the Origins of Life, Meaning, and the Universe Itself (Dutton, May 10, 2016)

In the coming weeks and months, I’ll try to continue putting recommended books on the rest of the spectrum, the balance of which follows in outline form below. As always, I welcome suggestions and recommendations based on others’ experiences as well. If you’d like to suggest additional resources in any of the sections below, please do so via our suggestion box. For those interested in additional resources, please take a look at the ITBio Resources page, which includes information about related research groups; references and journal articles; academic and research institutes, societies, groups, and organizations; and conferences, workshops, and symposia.

These books are written at a level that can be grasped and understood by most at a freshman or sophomore university level. Coursework in math, science, and engineering will usually presume knowledge of calculus, basic probability theory, introductory physics, chemistry, and basic biology.

These books are written at a level that can be grasped and understood by those at a junior or senior university level. Coursework in math, science, and engineering may presume knowledge of probability theory, differential equations, linear algebra, complex analysis, abstract algebra, signal processing, organic chemistry, molecular biology, evolutionary theory, thermodynamics, advanced physics, and basic information theory.

These books are written at a level that can be grasped and understood by most working at the master’s level at most universities. Coursework presumes all the previously mentioned classes, though it may require a higher level of sub-specialization in one or more areas of mathematics, physics, biology, or engineering practice. Because of the depth and breadth of the disciplines covered here, many may feel the need to delve into areas outside of their particular specialization.


Schools of Thought in the Hard and Soft Sciences

A framework for determining the difference between the hard and soft sciences.

A post in one of the blogs at Discover Magazine the other day had me thinking about the shape of science over time.

The article made me wonder about the divide between the ‘soft’ and ‘hard’ sciences, and how we might better define and delineate them. Perhaps in a particular field, the greater the proliferation of “schools of thought,” the more likely it is to be a soft science? (Or, mathematically speaking, there’s an inverse relationship within a field between how well supported it is and the number of schools of thought it has.) I consider a school of thought to be a hypothetical or theoretical proposed structure meant to help advance the state of the art; adherents join one of many varying camps while evidence is built up (or not) until one side carries the day.

Theorem: The greater the proliferation of “schools of thought,” the more likely something is to be a soft science.
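As a toy formalization (purely a heuristic of my own, not a real measure): $N \propto 1/E$, where $N$ is the number of competing schools of thought in a field and $E$ is the weight of settled evidence behind its core results; as $E$ grows, $N$ should collapse toward one.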

Generally in most of the hard sciences, like physics, biology, or microbiology, there don’t seem to be any opposing or differing schools of thought, while in areas like psychology or philosophy they abound, often with long-running debates between schools and without any hard data or evidence to truly allow one school to win out over another. Perhaps as the structure of a particular science becomes more sound, the concept of schools of thought becomes more difficult to establish?

For some of the hard sciences, it would seem that schools of thought only exist at the bleeding edge of the state-of-the-art where there isn’t yet enough evidence to swing the field one way or another to firmer ground.

Example: Evolutionary Biology

We might consider evolutionary biology, in which definitive evidence in the fossil record is difficult to come by, so there’s room for the opposing ideas of gradualism and punctuated equilibrium to stand as individual schools. Outside of this, most of evolutionary theory is so firmly grounded that there aren’t other schools.

Example: Theoretical Physics

The relatively new field of string theory might be considered a school of thought, though there don’t seem to be many opposing schools at the moment. If it is one, it surely exists, in part, because there isn’t yet the ability to validate it with predictions and current data. However, because of the strong supporting mathematical structure, I’ve yet to hear anyone use the concept of a school of thought to describe string theory, which sits in a community that seems to believe it’s a foregone conclusion that it, or something very close to it, represents reality. (Though for a counterpoint, see Lee Smolin’s The Trouble with Physics.)

Example: Mathematics

To my knowledge, the concept of a school of thought has never been applied to mathematics except in the case of the Pythagorean school, which historically is considered to have been almost as much a religion as a science. Because of its theoretical footings, I suppose there may never be competing schools, for even in the case of problems like P vs. NP, where individuals may have a gut reaction as to which way things are leaning, everyone ultimately knows it’s going to be one or the other ($P=NP$ or $P \neq NP$). Many mathematicians also know that it’s useful to try to prove a theorem by day and then try to disprove it (or find a counterexample) by night, so even internally and individually they self-segregate against creating schools of thought right from the start.

Example: Religion

Looking at the far end of the other side of the spectrum: because there is no verifiable way to prove that God exists, there has been an efflorescence of religions of nearly every size and shape since the beginning of humankind. Might we then presume that this is the softest of the ‘sciences’?

What examples or counter examples can you think of?


Uri Alon: Why Truly Innovative Science Demands a Leap into the Unknown

I recently ran across this TED talk and felt compelled to share it. It really highlights some of my own personal thoughts on how science should be taught and done in the modern world.  It also overlaps much of the reading I’ve been doing lately on innovation and creativity. If these don’t get you to watch, then perhaps mentioning that Alon manages to apply comedy and improvisation techniques to science will.

Uri Alon was already one of my scientific heroes, but this adds a lovely garnish.

To Understand God’s Thought…


Workshop on Information Theoretic Incentives for Artificial Life

For those interested in the topics of information theory in biology and artificial life, Christoph Salge, Georg Martius, Keyan Ghazi-Zahedi, and Daniel Polani have announced a Satellite Workshop on Information Theoretic Incentives for Artificial Life at the 14th International Conference on the Synthesis and Simulation of Living Systems (ALife 2014), to be held at the Javits Center, New York, on July 30 or 31st.

Their synopsis states:

Artificial Life aims to understand the basic and generic principles of life, and demonstrate this understanding by producing life-like systems based on those principles. In recent years, with the advent of the information age, and the widespread acceptance of information technology, our view of life has changed. Ideas such as “life is information processing” or “information holds the key to understanding life” have become more common. But what can information, or more formally Information Theory, offer to Artificial Life?

One relevant area is the motivation of behaviour for artificial agents, both virtual and real. Instead of learning to perform a specific task, informational measures can be used to define concepts such as boredom, empowerment or the ability to predict one’s own future. Intrinsic motivations derived from these concepts allow us to generate behaviour, ideally from an embodied and enactive perspective, which are based on basic but generic principles. The key questions here are: “What are the important intrinsic motivations a living agent has, and what behaviour can be produced by them?”

Related to an agent’s behaviour is also the question on how and where the necessary computation to realise this behaviour is performed. Can information be used to quantify the morphological computation of an embodied agent and to what degree are the computational limitations of an agent influencing its behaviour?

Another area of interest is the guidance of artificial evolution or adaptation. Assuming it is true that an agent wants to optimise its information processing, possibly obtain as much relevant information as possible for the cheapest computational cost, then what behaviour would naturally follow from that? Can the development of social interaction or collective phenomena be motivated by an informational gradient? Furthermore, evolution itself can be seen as a process in which an agent population obtains information from the environment, which begs the question of how this can be quantified, and how systems would adapt to maximise this information?

The common theme in those different scenarios is the identification and quantification of driving forces behind evolution, learning, behaviour and other crucial processes of life, in the hope that the implementation or optimisation of these measurements will allow us to construct life-like systems.
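The “informational measures” the synopsis mentions are typically mutual-information quantities. As a toy illustration of the flavor (my own sketch, not the workshop’s), empowerment-like measures ask how much an agent’s actions tell you about its next sensor state:

```python
from collections import Counter
from math import log2

# Toy illustration of an information-theoretic drive: the mutual information
# between an agent's action and the state it senses next. With a uniform
# action choice this captures the spirit of "empowerment": how much an
# agent's actions influence what happens to it.
def mutual_information(pairs):
    """I(A;S) in bits, estimated from a list of (action, state) samples."""
    n = len(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_s = Counter(s for _, s in pairs)
    p_as = Counter(pairs)
    return sum(
        (c / n) * log2((c / n) / ((p_a[a] / n) * (p_s[s] / n)))
        for (a, s), c in p_as.items()
    )

# Two toy agents: one whose actions fully determine the next state,
# and one whose actions the environment ignores entirely.
effective = [("left", "L"), ("right", "R")] * 50
powerless = [("left", "X"), ("right", "X")] * 50
print(mutual_information(effective))  # 1.0 bit: actions fully matter
print(mutual_information(powerless))  # 0.0 bits: actions change nothing
```

An agent maximizing this quantity would seek states where its actions matter, which is exactly the sort of task-free intrinsic motivation the workshop is about.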

Details for submissions, acceptances, potential talks, and dates can be found via  Nihat Ay’s Research Group on Information Theory of Cognitive Systems. For more information on how to register, please visit the ALife 2014 homepage. If there are any questions, or if you just want to indicate interest in submitting or attending, please feel free to mail them at itialife@gmail.com.

According to their release, the open access journal Entropy will sponsor the workshop by an open call with a special issue on the topic of the workshop. More details will be announced to emails received via itialife@gmail.com and over the alife and connectionists mailing lists.
