🎧 The Power Of Categories | Invisibilia (NPR)

Listened to The Power Of Categories from Invisibilia | NPR.org
The Power Of Categories examines how categories define us — how, if given a chance, humans will jump into one category or another. People need them, want them. The show looks at what categories provide for us, and you'll hear about a person caught between categories in a way that will surprise you. Plus, a trip to a retirement community designed to help seniors revisit a long-missed category.
The transgender/gender dysphoria story here is exceedingly interesting because it could potentially offer some clues to how those pieces of biology work and what shifts things in one direction or another. How is that spectrum created and defined? A few dozen individuals like this could help provide an answer.

The story about the Indian retirement community in Florida is interesting, but it also raises the (unasked, in the episode at least) question of the harm it can do to a group of people to be led by some of the oldest members of their community. The Latin words senīlis ("of or pertaining to old age") and senex ("old") are the roots of words like senate, senescence, senility, senior, and seniority, and though it's nice to take care of our elders, the younger generations should take a hard look at the unintended consequences which may stem from this.

In some sense I'm also reminded of Thomas Kuhn's book The Structure of Scientific Revolutions and why progress in science (and yes, in society) is held back by older generations still holding onto outdated models. At the same time, though, those generations do provide some useful "brakes" on both the velocity of change and the potential ill effects that could be damaging over short timeframes.

🎧 Entanglement | Invisibilia (NPR)

Listened to Entanglement from Invisibilia | NPR.org
In Entanglement, you'll meet a woman with Mirror Touch Synesthesia who can physically feel what she sees others feeling. And an exploration of the ways in which all of us are connected — more literally than you might realize. The hour will start with physics and end with a conversation with comedian Maria Bamford and her mother. They discuss what it's like to be entangled through impersonation.
I can think of a few specific quirks I've got that touch tangentially on mirror-touch synesthesia. This story and some of the research behind it are truly fascinating. Particularly interesting is the idea of emotional contagion. It would be interesting to take some complexity and network theory and add some mathematical models to see how this might look. In particular, the recent political protests in the U.S. might make great models. This also makes me wonder where Donald Trump sits on this emotional empathy spectrum, if at all.
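
As a quick thought experiment along those lines, here is a minimal sketch of a DeGroot-style averaging model of emotional contagion on a random social network. This is my own toy, not anything from the episode or the underlying research; the network size, susceptibility parameter, and function names are arbitrary choices for illustration.

```python
import random

import networkx as nx

# Toy emotional-contagion model: each node holds an "emotion" in [-1, 1];
# at every step a node nudges its state toward the average of its neighbours.

def simulate(n_people=200, avg_degree=6, susceptibility=0.3, steps=50, seed=42):
    rng = random.Random(seed)
    g = nx.erdos_renyi_graph(n_people, avg_degree / (n_people - 1), seed=seed)
    emotion = {node: rng.uniform(-1, 1) for node in g.nodes}
    for _ in range(steps):
        updated = {}
        for node in g.nodes:
            nbrs = list(g.neighbors(node))
            if not nbrs:
                updated[node] = emotion[node]
                continue
            neighbourhood = sum(emotion[v] for v in nbrs) / len(nbrs)
            updated[node] = (1 - susceptibility) * emotion[node] + susceptibility * neighbourhood
        emotion = updated
    return emotion

if __name__ == "__main__":
    final = simulate()
    spread = max(final.values()) - min(final.values())
    # The spread shrinks over time as contagion homogenizes the crowd.
    print(f"spread of emotions after mixing: {spread:.3f}")
```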

One of the more interesting take-aways: the thoughts and emotions of those around you can affect you far more than you imagine.

Four episodes in and this podcast is still impossibly awesome. I don't know that I've had so many thought-changing ideas since I read David Christian's book Maps of Time: An Introduction to Big History.[1] The sad problem is that I'm listening to the episodes at a far faster pace than the show could ever produce them.

References

[1]
D. Christian, Maps of Time: An Introduction to Big History. Univ of California Press, 2004.

🎧 How to Become Batman | Invisibilia (NPR)

Listened to How to Become Batman from Invisibilia | NPR.org
In "How to Become Batman," Alix and Lulu examine the surprising effect that our expectations can have on the people around us. You'll hear how people's expectations can influence how well a rat runs a maze. Plus, the story of a man who is blind and says expectations have helped him see. Yes. See. This journey is not without skeptics.
Expectations are much more important than we think.

Is it possible that this podcast is getting more interesting as it continues along?! In three episodes, I’ve gone from fan to fanboy.

👓 Artificial Intelligence suddenly got a whole lot more interesting | Ilyas Khan via Pulse | LinkedIn

Read Artificial Intelligence suddenly got a whole lot more interesting by Ilyas Khan

Just over a year ago a senior Google engineer (Greg Corrado) explained why quantum computers, in the opinion of his research team, did not lend themselves to Deep Learning techniques such as convolutional neural networks or even recurrent neural networks.

As a matter of fact, Corrado's comments were specifically based on Google's experience with the D-Wave machine, but as happens so often in the fast-evolving Quantum Computing industry, the nuance that the then-current architecture and capacity of D-Wave's quantum annealing methodology did not (and still does not) lend itself to Deep Learning or Deep Learning Neural Network ("DNN") techniques was quickly lost in the headline. The most quoted part of Corrado's comments became a sentence that further reinforced the view that Corrado (and thus Google) were negative about Deep Learning and Quantum Computing per se, and it was quickly conflated to be true of all quantum machines and not just D-Wave:

“The number of parameters a quantum computer can hold, and the number of operations it can hold, are very small” (full article here).

The headline of the article that contained the above quote was "Quantum Computers aren't perfect for Deep Learning", which simply serves to highlight the less-than-accurate inference, and I have now lost count of the number of times that someone has misquoted Corrado or attributed his quote to Google's subsidiary DeepMind as another way of pointing out limitations in quantum computing when it comes either to Machine Learning ("ML") more broadly or to Deep Learning more specifically.

Ironically, just a few months before Corrado's talk, a paper by a trio of Microsoft researchers led by the formidable Nathan Wiebe (co-authored by his colleagues Ashish Kapoor and Krysta Svore), representing a major dive into quantum algorithms for deep learning that would be advantageous over classical deep learning algorithms, was quietly published on arXiv. The paper got a great deal less publicity than Corrado's comments, and in fact, even as I write this article more than 18 months after the paper's v2 publication date, it has only been cited a handful of times (copy of the most recent updated paper here).

Before we move on, let me deal with one obvious inconsistency between Corrado's comments and the Wiebe/Kapoor/Svore ("WKS") paper and acknowledge that we are not comparing "apples with apples". Corrado was speaking specifically about the actual application of Deep Learning in the context of a real machine (the D-Wave machine), whilst WKS are theoretical quantum information scientists whose "efficient" algorithms need a machine before they can be applied. However, that is also my main point in this article. Corrado was speaking only about D-Wave, and Corrado is in fact a member of the Quantum Artificial Intelligence team, so it would be a major contradiction if Corrado (or Google more broadly) felt that Quantum Computing and AI were incompatible!

I am not speaking here only about the semantics of the name of Corrado's team. The current home page, as of Nov 27th 2016, for Google's Quantum AI unit (based in Venice Beach, LA) carries the following statement (link to the full page here):

“Quantum Computing merges two great scientific revolutions of the 20th century: computer science and quantum physics. Quantum physics is the theoretical basis of the transistor, the laser, and other technologies which enabled the computing revolution. But on the algorithmic level today’s computing machinery still operates on “classical” Boolean logic. Quantum computing is the design of hardware and software that replaces Boolean logic by quantum law at the algorithmic level. For certain computations such as optimization, sampling, search or quantum simulation this promises dramatic speedups. Soon we hope to falsify the strong Church-Turing thesis: we will perform computations which current computers cannot replicate. We are particularly interested in applying quantum computing to artificial intelligence and machine learning. This is because many tasks in these areas rely on solving hard optimization problems or performing efficient sampling”

There is a lot to digest in that quote, including the tantalising statement about the strong "Church-Turing Thesis" ("CTT"). Coincidentally, this is a very rich area of debate and research which, if followed even trivially in this article, would take up far more space than is available. For those interested in the foundational aspects of CTT, you could do worse than invest a little time listening to the incomparable Scott Aaronson, who spoke on this topic over the summer (link here). One last word on CTT whilst we are on the subject: few, if any, will speculate right now that quantum computers will actually threaten the original Church-Turing Thesis, and in the talk referenced above Scott does a great job of outlining just why that is the case. Ironically, the title of his talk is "Quantum Supremacy", and the quote I have taken from Google's website comes directly from the team led by Hartmut Neven, who has stated very publicly that Google will achieve that standard (i.e. Quantum Supremacy) in 2017.

Coming back to Artificial Intelligence and quantum computing, we should remember that even as recently as 14 to 18 months ago, most people would have been cautious about forecasting the advent of even small-scale quantum computing. It is easy to forget, especially in the heady days since mid-2016, but none of Google, IBM or Microsoft had unveiled their advances, and as I wrote last week (here), things have clearly moved on very significantly in a relatively short space of time. Not only do we have an open "arms" race between the West and China to build a large-scale quantum machine, but we have a serious clash of some of the most important technology innovators in recent times. Amazingly, scattered in the mix are a small handful of start-ups who are also building machines. Above all, however, the main takeaway from all this activity, from my point of view, is that I don't think it should be surprising that converting "black-box" neural network outputs into probability distributions will become the focus for anyone approaching DNN from a quantum physics and quantum computing background.
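
For context on that last point: the article does not spell out a mechanism, but the standard classical way of turning raw neural network outputs into a probability distribution is a softmax layer. A minimal sketch follows; nothing here is quantum, and the logits are made-up numbers for illustration only.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw network outputs (logits) into a probability distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # subtract max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

logits = [2.0, 0.5, -1.0]             # hypothetical outputs of a 3-class network
probs = softmax(logits)
print(probs, probs.sum())             # non-negative values that sum to 1
```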

This significant advance means that, for the very same reason that Google/IBM/Microsoft talk openly about their plans to build a machine (and, in the case of Google, an acknowledgement that they have actually now built a quantum computer of their own), one of the earliest applications likely to be tested on even prototype quantum computers will be some aspect of Machine Learning. Corrado was right to confirm that, in the opinion of the Google team working at the time, the D-Wave machine was not usable for AI or ML purposes. It was not his fault that his comments were misreported. It is worth noting that one of the people most credibly seen as the "grandfather" of AI and Machine Learning, Geoffrey Hinton, is part of the same team at Google that has adopted the Quantum Supremacy objective. There are clearly amazing teams assembled elsewhere, but where quantum computing meets Artificial Intelligence, it's hard to beat the sheer intellectual firepower of Google's AI team.

Outside of Google, a nice and fairly simple way of seeing how the immediate boundary between the theory of quantum machine learning and its application on "real" machines has been eroded is to look at two versions of exactly the same talk by one of the sector's early cheerleaders, Seth Lloyd. Here is a link to a talk that Lloyd gave through Google Tech Talks in early 2014, and here is a link to exactly the same talk delivered a couple of months ago. Not surprisingly, Lloyd, as a theorist, brings a similar approach to the subject as WKS, but in the second of the two presentations he also discusses one of his more recent preoccupations: analysing large data sets using algebraic topological methods that can be manipulated by a quantum computer.

For those of you who might not be familiar with Lloyd, I have included a link below to the most recent form of his talk on a quantum algorithm for large data sets represented by topological analysis.

One of the most interesting aspects illuminated by Lloyd's position on quantum speed-up via quantum algorithms for classical machine learning operations is his use of the example of the "Principal Component Analysis" algorithm ("PCA"). PCA is one of the most common machine learning techniques in classical computing, and Lloyd (and others) have been studying quantum versions of it for at least the past 3 to 4 years.
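
For readers who haven't met PCA before, here is a minimal classical sketch of the technique whose quantum counterpart Lloyd studies. This is ordinary NumPy linear algebra, not the quantum algorithm, and the synthetic data is purely illustrative.

```python
import numpy as np

def pca(data, n_components=2):
    """Classical PCA via eigendecomposition of the sample covariance matrix."""
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]  # keep the largest-variance directions
    components = eigvecs[:, order]
    return centered @ components, eigvals[order]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))      # synthetic 10-dimensional data
X[:, 0] *= 5                        # give one direction most of the variance
projected, variances = pca(X)
print(projected.shape, variances)   # (500, 2) and the two leading variances
```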

Finding a working quantum algorithm that can be implemented in a real use case, such as one of the literally hundreds of applications of PCA, is likely to be one of the earliest ways that quantum computers with even a limited number of qubits could be employed. Lloyd has already shown how a quantum algorithm can be proven to exhibit "speed-up" when looking just at the number of steps taken in classifying the problem. I personally do not doubt that a suitable protocol will emerge as soon as people start applying themselves to a genuine quantum processor.

At Cambridge Quantum Computing, my colleagues on the quantum algorithm team have been working on the subject from a different perspective in both ML and DNN. The most immediate application using existing classical hardware has come from the team that created ARROW>, where they have looked to build gradually from traditional ML through to DNN techniques for detecting and then classifying anomalies in "pure" time series (initially represented by stock prices). In recent weeks we have started advancing from ML to DNN, but the exciting thing is that the team has always looked at ARROW> in a way that lends itself to being upgraded with possible quantum components that can in turn be run on early-release, smaller-scale quantum processors. Using a team of quantum physicists to approach AI problems so they can ultimately be worked off a quantum computer clearly has some advantages.

There are, of course, a great many areas other than the seemingly trivial sphere of finding anomalies in share prices where AI will be applied. In my opinion the best recently published overview of the whole AI space (one incorporating the phase transition to quantum computing) is the Fortune article (here) that appeared at the end of September; not surprisingly, medical and genome-related AI applications of "big"-data-driven deep learning figure highly in the part of the article that focuses on the current state of affairs.

I do not know exactly how far away we are from the first headlines about quantum processors being used to help generate efficiency in at least some aspects of DNN. My personal guess is that deep learning dropout protocols, which help mitigate the over-fitting problem, will be the first area where quantum computing "upgrades" are employed, and I suspect very strongly that any machine being put through its paces at IBM, Google or Microsoft is already being designed with this sort of application in mind. Regardless of whether we are years or months away from that first headline, the center of gravity in AI will have moved because of Quantum Computing.
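
For reference, here is what the classical dropout trick mentioned above looks like, sketched as inverted dropout in NumPy. It says nothing about how a quantum "upgrade" would work; the function and parameter names are my own.

```python
import numpy as np

def inverted_dropout(activations, drop_prob=0.5, rng=None, training=True):
    """Classical inverted dropout: randomly zero units and rescale the survivors."""
    if not training or drop_prob == 0.0:
        return activations
    rng = rng or np.random.default_rng()
    keep_prob = 1.0 - drop_prob
    mask = rng.random(activations.shape) < keep_prob
    # Rescale so the expected activation is unchanged between train and test time.
    return activations * mask / keep_prob

layer_output = np.random.default_rng(1).normal(size=(4, 8))   # hypothetical hidden-layer batch
print(inverted_dropout(layer_output, drop_prob=0.5))
```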

Source: Artificial Intelligence suddenly got a whole lot more interesting | Ilyas Khan, KSG | Pulse | LinkedIn

NIMBioS Tutorial: Uncertainty Quantification for Biological Models

Bookmarked NIMBioS Tutorial: Uncertainty Quantification for Biological Models (nimbios.org)
NIMBioS will host a Tutorial on Uncertainty Quantification for Biological Models

Uncertainty Quantification for Biological Models

Meeting dates: June 26-28, 2017
Location: NIMBioS at the University of Tennessee, Knoxville

Organizers:
Marisa Eisenberg, School of Public Health, Univ. of Michigan
Ben Fitzpatrick, Mathematics, Loyola Marymount Univ.
James Hyman, Mathematics, Tulane Univ.
Ralph Smith, Mathematics, North Carolina State Univ.
Clayton Webster, Computational and Applied Mathematics (CAM), Oak Ridge National Laboratory; Mathematics, Univ. of Tennessee

Objectives:
Mathematical modeling and computer simulations are widely used to predict the behavior of complex biological phenomena. However, increased computational resources have allowed scientists to ask a deeper question, namely, “how do the uncertainties ubiquitous in all modeling efforts affect the output of such predictive simulations?” Examples include both epistemic (lack of knowledge) and aleatoric (intrinsic variability) uncertainties and encompass uncertainty coming from inaccurate physical measurements, bias in mathematical descriptions, as well as errors coming from numerical approximations of computational simulations. Because it is essential for dealing with realistic experimental data and assessing the reliability of predictions based on numerical simulations, research in uncertainty quantification (UQ) ultimately aims to address these challenges.

Uncertainty quantification (UQ) uses quantitative methods to characterize and reduce uncertainties in mathematical models, and techniques from sampling, numerical approximations, and sensitivity analysis can help to apportion the uncertainty from models to different variables. Critical to achieving validated predictive computations, both forward and inverse UQ analysis have become critical modeling components for a wide range of scientific applications. Techniques from these fields are rapidly evolving to keep pace with the increasing emphasis on models that require quantified uncertainties for large-scale applications. This tutorial will focus on the application of these methods and techniques to mathematical models in the life sciences and will provide researchers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties and perform sensitivity analysis for simulation models. Concepts to be covered may include: probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, adaptive surrogate model construction, high-dimensional approximation, random sampling and sparse grids, as well as local and global sensitivity analysis.
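
As a concrete, if very simplified, illustration of the forward uncertainty propagation listed above, here is a toy Monte Carlo sketch that pushes uncertain growth-rate and carrying-capacity parameters through a logistic growth model. The priors and parameter values are hypothetical and are not taken from the tutorial materials.

```python
import numpy as np

def logistic_growth(t, r, K, x0=10.0):
    """Closed-form logistic growth x(t) with rate r, capacity K, initial size x0."""
    return K / (1 + (K / x0 - 1) * np.exp(-r * t))

rng = np.random.default_rng(0)
n_samples = 5_000
t_final = 10.0

# Hypothetical parameter uncertainty: r and K drawn from plausible priors.
r_samples = rng.normal(loc=0.8, scale=0.1, size=n_samples)
K_samples = rng.normal(loc=1000.0, scale=100.0, size=n_samples)

# Forward-propagate the parameter samples through the model.
outputs = logistic_growth(t_final, r_samples, K_samples)
print(f"mean population at t={t_final}: {outputs.mean():.1f}")
print(f"95% interval: [{np.percentile(outputs, 2.5):.1f}, {np.percentile(outputs, 97.5):.1f}]")
```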

This tutorial is intended for graduate students, postdocs and researchers in mathematics, statistics, computer science and biology. A basic knowledge of probability, linear algebra, and differential equations is assumed.

Descriptive Flyer

Application deadline: March 1, 2017
To apply, you must complete an application on our online registration system:

  1. Click here to access the system
  2. Login or register
  3. Complete your user profile (if you haven’t already)
  4. Find this tutorial event under Current Events Open for Application and click on Apply

Participation in NIMBioS tutorials is by application only. Individuals with a strong interest in the topic are encouraged to apply, and successful applicants will be notified within two weeks after the application deadline. If needed, financial support for travel, meals, and lodging is available for tutorial attendees.

Summary Report. TBA

Live Stream. The Tutorial will be streamed live. Note that NIMBioS Tutorials involve open discussion and not necessarily a succession of talks. In addition, the schedule as posted may change during the Workshop. To view the live stream, visit http://www.nimbios.org/videos/livestream. A live chat of the event will take place via Twitter using the hashtag #uncertaintyTT. The Twitter feed will be displayed to the right of the live stream. We encourage you to post questions/comments and engage in discussion with respect to our Social Media Guidelines.


Source: NIMBioS Tutorial: Uncertainty Quantification for Biological Models

Mathematical Model Reveals the Patterns of How Innovations Arise | MIT Technology Review

Read Mathematicians have discovered how the universal patterns behind innovation arise (MIT Technology Review)
A mathematical model could lead to a new approach to the study of what is possible, and how it follows from what already exists.

🔖 Information theory, predictability, and the emergence of complex life

Bookmarked Information theory, predictability, and the emergence of complex life (arxiv.org)
Abstract: Despite the obvious advantage of simple life forms capable of fast replication, different levels of cognitive complexity have been achieved by living systems in terms of their potential to cope with environmental uncertainty. Against the inevitable cost associated to detecting environmental cues and responding to them in adaptive ways, we conjecture that the potential for predicting the environment can overcome the expenses associated to maintaining costly, complex structures. We present a minimal formal model grounded in information theory and selection, in which successive generations of agents are mapped into transmitters and receivers of a coded message. Our agents are guessing machines and their capacity to deal with environments of different complexity defines the conditions to sustain more complex agents.
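
The paper frames successive generations of agents as transmitters and receivers of a coded message. A toy way to get a feel for that framing, my own illustration rather than the authors' model, is to compute the mutual information between a binary environmental cue and an agent's noisy reading of it through a binary symmetric channel:

```python
import numpy as np

def binary_entropy(p):
    """Shannon entropy (in bits) of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bsc_mutual_information(p_env=0.5, p_error=0.1):
    """I(X;Y) when an agent reads a binary cue X through a symmetric noisy channel."""
    p_y = p_env * (1 - p_error) + (1 - p_env) * p_error   # P(Y = 1) after the flip noise
    return binary_entropy(p_y) - binary_entropy(p_error)  # H(Y) - H(Y|X)

for noise in (0.0, 0.1, 0.3, 0.5):
    print(f"noise={noise:.1f}  predictive information ≈ {bsc_mutual_information(p_error=noise):.3f} bits")
```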

🔖 Foldscope – The Origami Paper Microscope | Kickstarter

Bookmarked Foldscope - The Origami Paper Microscope (Kickstarter)
See the invisible with a powerful yet affordable microscope that fits in your pocket. Curiosity, discovery, and science for everyone!
A microscope in every pocket is surely a great idea.

They also have a journal article in PLoS ONE. [1]

References

[1]
J. S. Cybulski, J. Clements, and M. Prakash, “Foldscope: Origami-Based Paper Microscope,” PLoS ONE, vol. 9, no. 6, Jun. 2014 [Online]. Available: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0098781

🔖 100 years after Smoluchowski: stochastic processes in cell biology

Bookmarked 100 years after Smoluchowski: stochastic processes in cell biology (arxiv.org)
100 years after Smoluchowski introduces his approach to stochastic processes, they are now at the basis of mathematical and physical modeling in cellular biology: they are used for example to analyse and to extract features from large number (tens of thousands) of single molecular trajectories or to study the diffusive motion of molecules, proteins or receptors. Stochastic modeling is a new step in large data analysis that serves extracting cell biology concepts. We review here the Smoluchowski's approach to stochastic processes and provide several applications for coarse-graining diffusion, studying polymer models for understanding nuclear organization and finally, we discuss the stochastic jump dynamics of telomeres across cell division and stochastic gene regulation.
65 pages, J. Phys A 2016 [1]
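
To get a feel for the kind of stochastic model the review surveys, here is a minimal sketch of free diffusion in the overdamped (Smoluchowski) regime, checking the empirical mean squared displacement against the 2Dt prediction. It is my own illustration, not code from the paper.

```python
import numpy as np

# Overdamped (Smoluchowski-regime) Brownian motion of a free particle:
# dx = sqrt(2 * D * dt) * xi at each time step, with xi ~ N(0, 1).

def simulate_trajectories(n_particles=10_000, n_steps=1_000, dt=1e-3, D=1.0, seed=0):
    rng = np.random.default_rng(seed)
    steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(n_particles, n_steps))
    return steps.cumsum(axis=1)                 # positions over time, all starting at x = 0

dt, n_steps, D = 1e-3, 1_000, 1.0
trajectories = simulate_trajectories(n_steps=n_steps, dt=dt, D=D)
msd = np.mean(trajectories[:, -1] ** 2)         # mean squared displacement at the final time
print(f"empirical MSD: {msd:.3f}   theoretical 2*D*t: {2 * D * n_steps * dt:.3f}")
```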

References

[1]
D. Holcman and Z. Schuss, “100 years after Smoluchowski: stochastic processes in cell biology,” arXiv, 26-Dec-2016. [Online]. Available: https://arxiv.org/abs/1612.08381. [Accessed: 03-Jan-2017]

How much do our (supposed) intellectual elite…

Read How much do our (supposed) intellectual elite know about human progress? by Michael Nielsen (facebook.com)

That's a question I've been stewing about for the past few weeks, ever since reading the results from a quiz (at http://www.nature.com/…/three-minutes-with-hans-rosling-wil… ) in the scientific journal Nature, from Hans Rosling.

The quiz contains 8 fundamental questions about the state of the world: questions about poverty, life expectancy, wealth, population, and so on. All big, important questions.

What has me stewing is that respondents to the quiz - I presume, nature.com's readers - do far worse than chance. That is, they would have done much better overall if they'd simply guessed their answers at random (the questions are multiple choice). Only on 2 of 8 questions do respondents do appreciably better than chance. On most questions they do worse than chance, sometimes much worse than chance. A chimpanzee pushing buttons at random would have done better than nature.com's readers. (By the way, I'm not certain the response data is from nature.com's readers. It may be separate data, perhaps from Rosling's audiences. If that's the case, it weakens my argument below.)

I'm not usually bothered by this kind of thing. Media love to bemoan surveys showing lack of basic scientific knowledge among the general population. That kind of thing doesn't alarm me. We're a society in which most people specialize, and it's not surprising if most of us are ignorant in major areas; collectively we can still do pretty well. But this data from Rosling - the Nature survey - really got under my skin. It's a survey of a group (one I'm part of, I guess) that often seems to think it has special knowledge of solutions to big, important problems - things like climate change, energy, development, and so on. And what I take from Rosling's data is that that group isn't just ignorant about the state of the world in some fundamental ways. They're actually anti-informed.

So, why does this matter?

On Twitter, I regularly see people like Rosling, Max Roser, Steven Pinker, and Dina Pomeranz post graphs showing changes in the state of the world. Often, those graphs are extremely positive, like Roser's wonderful graphs on poverty, education, literacy etc over the last 200 years:

(See the images below, or: https://twitter.com/MaxCRoser/status/811587302065602560… )

It is absolutely astonishing to read the responses to such tweets. Many people are furious at the idea that some things in the world are getting better. Many responses boil down to "Nah, nah, can't be true", or "I'll bet [irrelevant thing] is getting worse, why don't you focus on that, you tool of the capitalist conspiracy."

Of course, while those responses are irritating, & illustrate a certain kind of wilful ignorance, they don't really much matter. What bothers me more is that some of the most common responses are variants on "It doesn't matter, climate change is more important than all your graphs"; "Where are your climate graphs?"; "Nukes are going to kill us all"; etc.

This type of comment seems wrongheaded for more interesting reasons.

First, appreciating Roser's (and similar) graphs does not mean failing to acknowledge climate change, nuclear security, and other problems. Roser, for instance, has repeatedly acknowledged that the challenges of climate are huge and critical.

But I think the more significant thing is that graphs like Roser's don't happen by accident. They are extraordinary human achievements - the outcome of remarkable technical, social and organizational invention. If you don't know of these facts, in detail, or if you underplay their importance, then you cannot hope to understand the underlying technical, social, and organizational invention in any depth. And it seems to me that that kind of understanding may well be crucial to solving problems like climate, etc.

To put it another way, the anti-Pollyannas, including much of our intellectual elite who think they have "the solutions", have actually cut themselves off from understanding the basis for much of the most important human progress.

What's the solution? I'm not sure. But this line of thinking is deepening my appreciation for the work done by people such as Roser, Rosling et al. And it's making me think about how it can be scaled up & incorporated more broadly into our institutions.

🔖 A First Step Toward Quantifying the Climate’s Information Production over the Last 68,000 Years

Bookmarked A First Step Toward Quantifying the Climate’s Information Production over the Last 68,000 Years (link.springer.com)
Paleoclimate records are extremely rich sources of information about the past history of the Earth system. We take an information-theoretic approach to analyzing data from the WAIS Divide ice core, the longest continuous and highest-resolution water isotope record yet recovered from Antarctica. We use weighted permutation entropy to calculate the Shannon entropy rate from these isotope measurements, which are proxies for a number of different climate variables, including the temperature at the time of deposition of the corresponding layer of the core. We find that the rate of information production in these measurements reveals issues with analysis instruments, even when those issues leave no visible traces in the raw data. These entropy calculations also allow us to identify a number of intervals in the data that may be of direct relevance to paleoclimate interpretation, and to form new conjectures about what is happening in those intervals—including periods of abrupt climate change.
Saw reference in Predicting unpredictability: Information theory offers new way to read ice cores [1]
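
To make the method a bit more concrete, here is a sketch of ordinary (unweighted) permutation entropy on synthetic signals. The paper uses the weighted variant on isotope data, so treat this only as a toy illustration of the ordinal-pattern idea.

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(series, order=3, delay=1):
    """Ordinal (permutation) entropy of a 1-D series, normalised to [0, 1]."""
    patterns = Counter()
    n = len(series) - (order - 1) * delay
    for i in range(n):
        window = series[i : i + order * delay : delay]
        patterns[tuple(np.argsort(window))] += 1      # count each ordinal pattern
    probs = np.array(list(patterns.values()), dtype=float) / n
    H = -np.sum(probs * np.log2(probs))
    return H / math.log2(math.factorial(order))       # normalise by log2(order!)

rng = np.random.default_rng(0)
noise = rng.normal(size=2_000)                        # unpredictable signal
trend = np.sin(np.linspace(0, 20 * np.pi, 2_000))     # highly predictable signal
print("white noise :", round(permutation_entropy(noise), 3))
print("sine wave   :", round(permutation_entropy(trend), 3))
```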

References

[1]
“Predicting unpredictability: Information theory offers new way to read ice cores,” Phys.org. [Online]. Available: http://phys.org/news/2016-12-unpredictability-theory-ice-cores.html. [Accessed: 12-Dec-2016]

A Bad Day at Black Rock America

There have been a growing number of reports [1][2][3][4] this week about plans to create registries of Americans and immigrants. I’m worried about the long-term repercussions these acts will have on not only America’s future but that of the world at large. Though some of these reports contained slightly softer verbiage than Donald Trump’s original campaign statements of almost exactly a year ago [5], I can’t help but think that his original statements were closer to his real intent.

Many have likely forgotten about the horrific black eye America already has as a result of the internment of Japanese Americans during World War II. Why would we even contemplate going down this road a second time? Almost a year ago I wrote a short homage to my friend and WWII veteran Millard Kaufman, who I know would be vehemently against this idea. If you haven’t seen his Academy Award-nominated film Bad Day at Black Rock, I recommend you pick it up soon; it’s held up incredibly well since 1955 and is still more than culturally relevant today.

In Memoriam: Millard Kaufman, WWII Veteran and Front for Dalton Trumbo

A crippled character played by Spencer Tracy shows us by example how not to cave in to menacing bullies in John Sturges’ 1955 MGM classic Bad Day at Black Rock, written by Millard Kaufman.

Even Comedy Central’s The Daily Show ran a snippet of the news with their thoughts:

For those who don’t think that senior leadership in America might bend the rules a tad, I also recommend reading my friend Henry James Korn’s reflection on the incident in which Eisenhower expelled him from Johns Hopkins University for a criticism of LBJ in the late ’60s: “Yes, Eisenhower Expelled Me from Johns Hopkins University.”

In his article, Henry also includes a ten-minute War Relocation Authority propaganda film which is eerily similar to some of what is being proposed now.

Needless to say, much of this type of behavior is on the same incredibly slippery slope that Nazi Germany started down when it began registering Jews in the early part of the last century. When will we learn from the horrific mistakes of the past to do better in the future?

Footnotes

[1]
D. Lind, “Donald Trump’s proposed ‘Muslim registry,’ explained,” Vox, 16-Nov-2016. [Online]. Available: http://www.vox.com/policy-and-politics/2016/11/16/13649764/trump-muslim-register-database. [Accessed: 18-Nov-2016]
[2]
“Trump’s Muslim registry wouldn’t be illegal, constitutional law experts say,” POLITICO, 17-Nov-2016. [Online]. Available: http://www.politico.com/story/2016/11/donald-trump-muslim-registry-constitution-231527. [Accessed: 18-Nov-2016]
[3]
N. Muaddi, “The Bush-era Muslim registry failed. Yet the US could be trying it again,” CNN, 18-Nov-2016. [Online]. Available: http://www.cnn.com/2016/11/18/politics/nseers-muslim-database-qa-trnd/. [Accessed: 18-Nov-2016]
[4]
M. Rosenberg and J. E. Ainsley, “Immigration hardliner says Trump team preparing plans for wall, mulling Muslim registry,” Reuters, 16-Nov-2016. [Online]. Available: http://www.reuters.com/article/us-usa-trump-immigration-idUSKBN13B05C. [Accessed: 18-Nov-2016]
[5]
“Donald Trump says he’d ‘Absolutely’ Require Muslims to Register,” New York Times, 20-Nov-2015. [Online]. Available: http://www.nytimes.com/politics/first-draft/2015/11/20/donald-trump-says-hed-absolutely-require-muslims-to-register/?_r=0. [Accessed: 18-Nov-2016] [Source]

🔖 H-theorem in quantum physics by G. B. Lesovik, et al.

Bookmarked H-theorem in quantum physics (Nature.com)

Abstract

Remarkable progress of quantum information theory (QIT) allowed to formulate mathematical theorems for conditions that data-transmitting or data-processing occurs with a non-negative entropy gain. However, relation of these results formulated in terms of entropy gain in quantum channels to temporal evolution of real physical systems is not thoroughly understood. Here we build on the mathematical formalism provided by QIT to formulate the quantum H-theorem in terms of physical observables. We discuss the manifestation of the second law of thermodynamics in quantum physics and uncover special situations where the second law can be violated. We further demonstrate that the typical evolution of energy-isolated quantum systems occurs with non-diminishing entropy. [1]
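
As a toy numerical illustration of non-negative entropy gain, far simpler than the paper's formalism and entirely my own, one can push a pure qubit state through a depolarizing channel and watch its von Neumann entropy grow:

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy S(rho) = -Tr(rho log2 rho), in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros before taking logs
    return float(-np.sum(evals * np.log2(evals)))

def depolarize(rho, p):
    """Depolarizing channel: mix rho with the maximally mixed state."""
    dim = rho.shape[0]
    return (1 - p) * rho + p * np.eye(dim) / dim

rho_in = np.array([[1, 0], [0, 0]], dtype=complex)   # pure |0><0|, entropy 0
for p in (0.0, 0.3, 0.6, 1.0):
    rho_out = depolarize(rho_in, p)
    print(f"p={p:.1f}  S_in={von_neumann_entropy(rho_in):.3f}  "
          f"S_out={von_neumann_entropy(rho_out):.3f}")
```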

Footnotes

[1]
G. B. Lesovik, A. V. Lebedev, I. A. Sadovskyy, M. V. Suslov, and V. M. Vinokur, “H-theorem in quantum physics,” Scientific Reports, vol. 6. Springer Nature, p. 32815, 12-Sep-2016 [Online]. Available: http://dx.doi.org/10.1038/srep32815

Chris Aldrich is reading “Department of Energy May Have Broken the Second Law of Thermodynamics”

Read Department of Energy May Have Broken the Second Law of Thermodynamics (Inverse)
“Quantum-based demons” sound like they'd be at home in 'Stranger Things.'

Statistical Physics, Information Processing, and Biology Workshop at Santa Fe Institute

Bookmarked Information Processing and Biology by John Carlos Baez (Azimuth)
The Santa Fe Institute, in New Mexico, is a place for studying complex systems. I’ve never been there! Next week I’ll go there to give a colloquium on network theory, and also to participate in this workshop.
I just found out about this from John Carlos Baez and wish I could go! How had I not heard about it sooner?

Statistical Physics, Information Processing, and Biology

Workshop

November 16, 2016 – November 18, 2016
9:00 AM
Noyce Conference Room

Abstract.
This workshop will address a fundamental question in theoretical biology: Does the relationship between statistical physics and the need of biological systems to process information underpin some of their deepest features? It recognizes that a core feature of biological systems is that they acquire, store and process information (i.e., perform computation). However to manipulate information in this way they require a steady flux of free energy from their environments. These two, inter-related attributes of biological systems are often taken for granted; they are not part of standard analyses of either the homeostasis or the evolution of biological systems. In this workshop we aim to fill in this major gap in our understanding of biological systems, by gaining deeper insight in the relation between the need for biological systems to process information and the free energy they need to pay for that processing.

The goal of this workshop is to address these issues by focusing on a set of three specific questions:

  1. How has the fraction of free energy flux on earth that is used by biological computation changed with time?
  2. What is the free energy cost of biological computation / function?
  3. What is the free energy cost of the evolution of biological computation / function?

In all of these cases we are interested in the fundamental limits that the laws of physics impose on various aspects of living systems as expressed by these three questions.
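
The workshop description does not mention it explicitly, but the canonical physics reference point for the "free energy cost of computation" is Landauer's bound of kT ln 2 per erased bit. A back-of-the-envelope sketch at roughly physiological temperature follows; the ATP comparison is a commonly quoted approximation, not a workshop figure.

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 310.0                   # roughly physiological temperature in K (my assumption)

landauer = k_B * T * math.log(2)        # minimum free energy to erase one bit
print(f"Landauer bound at {T:.0f} K: {landauer:.2e} J per bit")

# Commonly quoted ballpark (illustrative only): ATP hydrolysis in the cell
# releases on the order of 20 kT of free energy.
atp = 20 * k_B * T
print(f"One ATP (~20 kT): {atp:.2e} J, about {atp / landauer:.0f}x the Landauer bound")
```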

Purpose: Research Collaboration
SFI Host: David Krakauer, Michael Lachmann, Manfred Laubichler, Peter Stadler, and David Wolpert