We are used to the availability of big data generated in nearly all fields of science as a consequence of technological progress. However, the analysis of such data poses vast challenges. One of these relates to the explainability of artificial intelligence (AI) or machine learning methods. Currently, many such methods are non-transparent with respect to their working mechanisms and are for this reason called black box models, most notably deep learning methods. It has been realized that this constitutes a severe problem for a number of fields, including the health sciences and criminal justice, and arguments have been brought forward in favor of an explainable AI. In this paper, we do not assume the usual perspective of presenting explainable AI as it should be; rather, we provide a discussion of what explainable AI can be. The difference is that we do not present wishful thinking but reality-grounded properties in relation to a scientific theory beyond physics.
Software developer and data journalist Meredith Broussard joins Megan Morrone to discuss her book Artificial Unintelligence: How Computers Misunderstand the World, which makes the case against the idea that technology can solve all our problems, touching on self-driving cars, the digital divide, the difference between AI and machine learning, and more.
I’m curious if she’s got an Amazon Associates referral link so that we can give her an extra ~4% back for promoting her book? I don’t see one on her website unfortunately.
The opening of the show, recalling the internet of the '90s, definitely took me back, as I remember being in at least one class in college with Megan Morrone. I seem to recall that it was something in Writing Seminars, perhaps Contemporary American Letters?
There’s so much good to highlight here, but in particular I like the concept of technochauvinism, though when I initially heard it I had a different conception of what it might be than the definition Broussard gives: the belief that technology is always the solution to every problem. My initial impression of it was something closer to the idea of the tech bro.
My other favorite piece of discussion centered on her delving into her local educational structure to find that there was a dearth of books and computers, and how some of that might be fixed for future children. It’s reminiscent of a local computer scientist I know from Caltech who created bus route models for the Pasadena school system to minimize their travel, gas costs, and personnel, saving the district several million dollars. I’m hoping some of those savings go toward more books…
As many of you know, there has been an ongoing discussion concerning the name of the Neural Information Processing Systems conference. The current acronym NIPS has unintended connotations that some members of the community find offensive.
Following several well-publicized incidents of insensitivity at past conferences, and our acknowledgement of other less-publicized incidents, we conducted community polls requesting alternative names, rating the existing and alternative names, and soliciting additional comments.
After extensive discussions, the NIPS Board has decided not to change the name of the conference for now. The poll itself did not yield a clear consensus on a name change or a well-regarded alternative name.
Machine Learning (ML) is one of the most exciting and dynamic areas of modern research and application. The purpose of this review is to provide an introduction to the core concepts and tools of machine learning in a manner easily understood and intuitive to physicists. The review begins by covering fundamental concepts in ML and modern statistics such as the bias-variance tradeoff, overfitting, regularization, and generalization before moving on to more advanced topics in both supervised and unsupervised learning. Topics covered in the review include ensemble models, deep learning and neural networks, clustering and data visualization, energy-based models (including MaxEnt models and Restricted Boltzmann Machines), and variational methods. Throughout, we emphasize the many natural connections between ML and statistical physics. A notable aspect of the review is the use of Python notebooks to introduce modern ML/statistical packages to readers using physics-inspired datasets (the Ising Model and Monte-Carlo simulations of supersymmetric decays of proton-proton collisions). We conclude with an extended outlook discussing possible uses of machine learning for furthering our understanding of the physical world as well as open problems in ML where physicists may be able to contribute. (Notebooks are available at this https URL )
We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren't even the real threat. What we need to understand is how the powerful might use AI to control us -- and what we can do in response.
Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."
Perhaps you've heard about IBM's giant Watson computer, which dispenses ingredient advice and novel recipes. Jaan Altosaar, a PhD candidate at Princeton University, is working on a recipe recommendation engine that anyone can use.
“Augmented cooking with machine intelligence”, with interesting remarks on generating food analogies… https://t.co/UluYk6p8TV
— michael_nielsen (@michael_nielsen) February 2, 2017
I found the article in it so interesting that, after some brief conversation around it, I thought to recommend it to my then new friend Jeremy Cherfas, whose Eat This Podcast I had just recently started to enjoy. Mostly I thought he would find it as interesting as I did, though I hardly expected he’d turn it into a podcast episode. Though I’ve been plowing through back episodes in his catalog, fortunately this morning I ran out of downloaded episodes in the car, so I started streaming the most recent one to find a lovely surprise: a podcast produced on a tip I made.
While he surely must have been producing the episode for some time before I started supporting the podcast on Patreon last week, I must say that having an episode made from one of my tips is the best backer thank you I’ve ever received from a crowd funded project.
Needless to say, I found the subject fascinating. In part it reminded me of a section of Hervé This’s book The Science of the Oven (eventually I’ll get around to posting a review with more thoughts) and some of his prior research, which I was apparently reading on Christmas Day this past year. On page 118 of the text, This discusses the classic French sauces of Escoffier’s students Louis Saulnier and Theodore Gringoire, and that a physical chemical analysis of them shows there to be only twenty-three kinds. He continues on:
A system that I introduced during the European Conference on Colloids and Interfaces in 2002 offers a new classification, based on the physical chemical structure of the sauce. In it, G indicates a gas, E an aqueous solution, H a fat in the liquid state, and S a solid. These “phases” can be dispersed (symbol /), mixed (symbol +), superimposed (symbol θ), included (symbol @). Thus, veal stock is a solution, which is designated E. Bound veal stock, composed of starch granules swelled by the water they have absorbed, dispersed in an aqueous solution, is thus described by the formula (E/S)/E.
This goes on to describe in a bit more detail how the scientist-cook could then create a vector space of all combinations of foods from a physical-state perspective. A classification system like this could be expanded and bolted on top of the database created by Jaan Altosaar and improved to provide even more realistic recipes of the type discussed in the podcast. The combinatorics of the problem are incredibly large, but my guess is that the constraints of actual practice shrink the space of possible solutions enormously. It’s somewhat like the huge number of combinations of the A, C, T, and Gs in our DNA that could be imagined, yet only an incredibly smaller subset of that larger set could be found in a living human being.
The additional byproduct of catching this episode was that it finally reminded me why I had thought the name Jaan Altosaar was so familiar to me when I read his article. It turns out I know Jaan and some of his previous work. Sometime back in 2014 I had corresponded with him regarding his fantastic science news site Useful Science which was just then starting. While I was digging up the connection I realized that my old friend Sol Golomb had also referenced Jaan to me via Mark Wilde for some papers he suggested I read.