👓 I Tried Predictim's AI Scan for 'Risky' Babysitters on People I Trust | Gizmodo

Read I Tried Predictim's AI Scan for 'Risky' Babysitters on People I Trust (Gizmodo)
The founders of Predictim want to be clear with me: Their product—an algorithm that scans the online footprint of a prospective babysitter to determine their “risk” levels for parents—is not racist. It is not biased.

Another example of an app saying “We don’t have bias in our AI” when it seems patently true that they do. I wonder how one would prove (mathematically) that one didn’t have bias?
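
One place to start, at least, is checking simple statistical fairness criteria on the model’s outputs. Below is a minimal Python sketch of one such check (demographic parity), using entirely hypothetical data and labels; passing a check like this narrows the question but certainly doesn’t prove the absence of bias.

```python
# A minimal sketch of one common (and limited) statistical test for bias:
# demographic parity. Data and group labels here are hypothetical, not Predictim's.
import numpy as np

def demographic_parity_gap(flagged_risky, group):
    """Difference in the rate of 'risky' flags between groups.

    flagged_risky: boolean array, True if the model flagged the person as risky
    group: array of group labels (e.g., demographic categories)
    """
    groups = np.unique(group)
    rates = {g: flagged_risky[group == g].mean() for g in groups}
    return rates, max(rates.values()) - min(rates.values())

# Hypothetical example: 0 = group A, 1 = group B
flags = np.array([True, False, True, True, False, False, True, False])
grp = np.array([0, 0, 0, 0, 1, 1, 1, 1])
rates, gap = demographic_parity_gap(flags, grp)
print(rates, gap)  # a gap near 0 satisfies demographic parity; it does not prove "no bias"
```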


👓 Name Change | N.I.P.S.

Read NIPS Name Change by Terrence Sejnowski, Marian Stewart Bartlett, Michael Mozer, Corinna Cortes, Isabelle Guyon, Neil D. Lawrence, Daniel D. Lee, Ulrike von Luxburg, Masashi Sugiyama, Max Welling (nips.cc)

As many of you know, there has been an ongoing discussion concerning the name of the Neural Information Processing Systems conference. The current acronym NIPS has unintended connotations that some members of the community find offensive.

Following several well-publicized incidents of insensitivity at past conferences, and our acknowledgement of other less-publicized incidents, we conducted community polls requesting alternative names, rating the existing and alternative names, and soliciting additional comments.

After extensive discussions, the NIPS Board has decided not to change the name of the conference for now. The poll itself did not yield a clear consensus on a name change or a well-regarded alternative name.

This just makes me sick…


I’m reminded by conversations at E21 Consortium’s Symposium on Artificial Intelligence in 21st Century Education (#e21sym) today of a short essay on volitional philanthropy by Michael Nielsen (t), which I saw the other day and which could potentially be applied to AI in education in interesting ways.


🔖 E21 Consortium | Symposium

Bookmarked Symposium on Artificial Intelligence in 21st Century Education (E21 Consortium)
Join us for a day of disruptive dialogue about Artificial Intelligence and 21st Century Education in Ottawa, an annual international symposium hosted by the University of Ottawa in collaboration with Carleton University, St. Paul University, Algonquin College, La Cité, and the Centre franco-ontarien de ressources pédagogiques (CFORP).

hat tip: Stephen Downes


❤️ Downes tweet: The panel I’m on at #E21Sym will be live streamed any minute now

Liked Downes on Twitter by Stephen Downes (Twitter)

❤️ Downes tweet: I’m at “Education in the 21st Century: A Symposium on Artificial Intelligence” today at the University of Ottawa.

Liked a tweet by Stephen Downes (Twitter)

🎧 Episode 077 Exploring Artificial Intelligence with Melanie Mitchell | HumanCurrent

Listened to Episode 077: Exploring Artificial Intelligence with Melanie Mitchell by Haley Campbell-Gross from HumanCurrent

What is artificial intelligence? Could unintended consequences arise from increased use of this technology? How will the role of humans change with AI? How will AI evolve in the next 10 years?

In this episode, Haley interviews leading Complex Systems Scientist, Professor of Computer Science at Portland State University, and external professor at the Santa Fe Institute, Melanie Mitchell. Professor Mitchell answers many profound questions about the field of artificial intelligence and gives specific examples of how this technology is being used today. She also provides some insights to help us navigate our relationship with AI as it becomes more popular in the coming years.

I knew Dr. Mitchell was working on a book during her hiatus, but didn’t know it was potentially coming out so soon! I loved her last book and can’t wait to get this one. Sadly, there are no pre-order copies available at any of the usual suspects yet.


👓 'I was shocked it was so easy': meet the professor who says facial recognition can tell if you're gay | The Guardian

Read 'I was shocked it was so easy': meet the professor who says facial recognition can tell if you're gay by Paul Lewis (the Guardian)
Psychologist Michal Kosinski says artificial intelligence can detect your sexuality and politics just by looking at your face. What if he’s right?

How in God’s name are we repeating so many of the exact problems of the end of the 1800s? First nationalism and protectionism, and now the eugenics agenda?


📺 Zeynep Tufekci: We’re building a dystopia just to make people click on ads | TED

Watched We're building a dystopia just to make people click on ads by Zeynep TufekciZeynep Tufekci from ted.com

We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren't even the real threat. What we need to understand is how the powerful might use AI to control us -- and what we can do in response.


📺 Zeynep Tufekci: Machine intelligence makes human morals more important | TED

Watched Machine intelligence makes human morals more important by Zeynep TufekciZeynep Tufekci from ted.com

Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."


👓 One more reason not to sweat the robot takeover | Doc Searls

Read One more reason not to sweat the robot takeover by Doc Searls (doc.blog)
Long ago a high school friend wanted to connect through Classmates.com. We fell out of touch, but Classmates did not. It kept spamming me with stuff about my long-dead high school until I got it, somehow, to stop. Now I just got a mail from Classmates.com tempting me to know more about a classmate of mine from "Calabasas Academy Calabasas, CA Attended ’95-’99." Classmates' marketing robot calls me Jim and has a mailbox for me (see the image to the right) containing three promotional emails from itself. My high school was at the other end of the country, and I graduated in 1965.

👓 Artificial Intelligence suddenly got a whole lot more interesting | Ilyas Khan via Pulse | LinkedIn

Read Artificial Intelligence suddenly got a whole lot more interesting by Ilyas Khan

Just over a year ago a senior Google engineer (Greg Corrado) explained why quantum computers, in the opinion of his research team, did not lend themselves to Deep Learning techniques such as convolutional neural networks or even recurrent neural networks.

As a matter of fact, Corrado’s comments were specifically based on Google’s experience with the D-Wave machine, but, as happens so often in the fast-evolving Quantum Computing industry, the nuance that the then-current architecture and capacity of D-Wave’s quantum annealing methodology did not (and still does not) lend itself to Deep Learning or Deep Learning Neural Network (“DNN”) techniques was quickly lost in the headline. The most quoted part of Corrado’s comments became a sentence that further reinforced the view that Corrado (and thus Google) were negative about Deep Learning and Quantum Computing per se, and it was quickly conflated to be true of all quantum machines and not just D-Wave:

“The number of parameters a quantum computer can hold, and the number of operations it can hold, are very small” (full article here).

The headline of the article that contained the above quote was “Quantum Computers aren’t perfect for Deep Learning”, which simply serves to highlight the less-than-accurate inference, and I have now lost count of the number of times that someone has misquoted Corrado or attributed his quote to Google’s subsidiary DeepMind as another way of pointing out limitations in quantum computing when it comes either to Machine Learning (“ML”) more broadly or Deep Learning more specifically.

Ironically, just a few months before Corrado’s talk, a paper written by a trio of Microsoft researchers led by the formidable Nathan Wiebe (and co-authored by his colleagues Ashish Kapoor and Krysta Svore), representing a major dive into quantum algorithms for deep learning that would be advantageous over classical deep learning algorithms, was quietly published on arXiv. The paper got a great deal less publicity than Corrado’s comments, and in fact, even as I write this article more than 18 months after the paper’s v2 publication date, it has only been cited a handful of times (copy of the most recent updated paper here).

Before we move on, let me deal with one obvious inconsistency between Corrado’s comments and the Wiebe/Kapoor/Svore (“WKS”) paper and acknowledge that we are not comparing “apples with apples”. Corrado was speaking specifically about the actual application of Deep Learning in the context of a real machine, the D-Wave machine, whilst WKS are theoretical quantum information scientists whose “efficient” algorithms need a machine before they can be applied. However, that is also my main point in this article. Corrado was speaking only about D-Wave, and Corrado is in fact a member of the Quantum Artificial Intelligence team, so it would be a major contradiction if Corrado (or Google more broadly) felt that Quantum Computing and AI were incompatible!

I am not speaking here only about the semantics of the name of Corrado’s team. The current home page, as of Nov 27th 2016, for Google’s Quantum AI Unit (based in Venice Beach, LA) has the following statement (link to the full page here):

“Quantum Computing merges two great scientific revolutions of the 20th century: computer science and quantum physics. Quantum physics is the theoretical basis of the transistor, the laser, and other technologies which enabled the computing revolution. But on the algorithmic level today’s computing machinery still operates on “classical” Boolean logic. Quantum computing is the design of hardware and software that replaces Boolean logic by quantum law at the algorithmic level. For certain computations such as optimization, sampling, search or quantum simulation this promises dramatic speedups. Soon we hope to falsify the strong Church-Turing thesis: we will perform computations which current computers cannot replicate. We are particularly interested in applying quantum computing to artificial intelligence and machine learning. This is because many tasks in these areas rely on solving hard optimization problems or performing efficient sampling”

There is a lot to digest in that quote, including the tantalising statement about the strong “Church-Turing Thesis” (“CTT”). Coincidentally, this is a very rich area of debate and research that, if even trivially followed in this article, would take up far more space than is available. For those interested in the foundational aspects of the CTT, you could do worse than invest a little time listening to the incomparable Scott Aaronson, who spoke over the summer on this topic (link here). And just a last word on the CTT whilst we are on the subject: few, if any, will speculate right now that quantum computers will actually threaten the original Church-Turing Thesis, and in the talk referenced above Scott does a great job of outlining just why that is the case. Ironically, the title of his talk is “Quantum Supremacy”, and the quote that I have taken from Google’s website comes directly from the team led by Hartmut Neven, who has stated very publicly that Google will achieve that standard (i.e. Quantum Supremacy) in 2017.

Coming back to Artificial Intelligence and quantum computing, we should remember that even as recently as 14 to 18 months ago, most people would have been cautious about forecasting the advent of even small-scale quantum computing. It is easy to forget, especially in the heady days since mid-2016, but none of Google, IBM or Microsoft had unveiled their advances, and as I wrote last week (here), things have clearly moved on very significantly in a relatively short space of time. Not only do we have an open “arms” race between the West and China to build a large-scale quantum machine, but we have a serious clash of some of the most important technology innovators in recent times. Amazingly, scattered in the mix are a small handful of start-ups who are also building machines. Above all, however, the main takeaway from all this activity, from my point of view, is that I don’t think it should be surprising that converting “black-box” neural network outputs into probability distributions will become the focus for anyone approaching DNN from a quantum physics and quantum computing background.
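
As an aside, the standard classical operation for turning raw network outputs into a probability distribution is a softmax over the logits; here is a minimal NumPy sketch of that classical baseline (the values are hypothetical), against which any quantum-flavoured alternative would be compared.

```python
# Minimal sketch: turning raw ("black-box") network outputs into a probability
# distribution via a softmax over the logits.
import numpy as np

def softmax(logits):
    z = logits - np.max(logits)   # shift for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

logits = np.array([2.0, 1.0, 0.1])  # hypothetical raw network outputs
probs = softmax(logits)
print(probs, probs.sum())           # non-negative values that sum to 1.0
```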

It is this significant advance that means that, for the very same reason that Google/IBM/Microsoft talk openly about their plans to build a machine (and, in the case of Google, an acknowledgement that they have actually now built a quantum computer of their own), one of the earliest applications likely to be tested on even prototype quantum computers will be some aspect of Machine Learning. Corrado was right to confirm that, in the opinion of the Google team working at the time, the D-Wave machine was not usable for AI or ML purposes. It was not his fault that his comments were mis-reported. It is worth noting that one of the people most credibly seen as the “grandfather” of AI and Machine Learning, Geoffrey Hinton, is part of the same team at Google that has adopted the Quantum Supremacy objective. There are clearly amazing teams assembled elsewhere, but where quantum computing meets Artificial Intelligence, it’s hard to beat the sheer intellectual fire power of Google’s AI team.

Outside of Google, a nice and fairly simple way of seeing how the immediate boundary between the theory of quantum machine learning and its application on “real” machines has been eroded is to look at two versions of exactly the same talk by one of the sector’s early cheerleaders, Seth Lloyd. Here is a link to a talk that Lloyd gave through Google Tech Talks in early 2014, and here is a link to exactly the same talk except that it was delivered a couple of months ago. Not surprisingly, Lloyd, as a theorist, brings a similar approach to the subject as WKS, but in the second of the two presentations he also discusses one of his more recent preoccupations: analysing large data sets using algebraic topological methods that can be manipulated by a quantum computer.

For those of you who might not be familiar with Lloyd I have included a link below to the most recent form of his talk on a quantum algorithm for large data sets represented by topological analysis.

One of the most interesting aspects illuminated by Lloyd’s position on quantum speed-up, using quantum algorithms for classical machine learning operations, is his use of the example of the “Principal Component Analysis” algorithm (“PCA”). PCA is one of the most common machine learning techniques in classical computing, and Lloyd (and others) have been studying quantum computing versions for at least the past 3 to 4 years.
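
For reference, here is a minimal classical PCA sketch (NumPy, with hypothetical data), showing the operation for which Lloyd and others have proposed quantum analogues.

```python
# Minimal sketch of classical PCA via eigendecomposition of the covariance matrix;
# this is the classical operation that proposed quantum PCA algorithms aim to speed up.
import numpy as np

def pca(X, n_components=2):
    X_centered = X - X.mean(axis=0)             # center each feature
    cov = np.cov(X_centered, rowvar=False)      # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigh: covariance is symmetric
    order = np.argsort(eigvals)[::-1]           # sort by descending variance
    components = eigvecs[:, order[:n_components]]
    return X_centered @ components              # project onto top components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                   # hypothetical data: 100 samples, 5 features
print(pca(X, n_components=2).shape)             # (100, 2)
```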

Finding a working quantum algorithm that can be implemented in a real use case, such as one of the literally hundreds of applications for PCA, is likely to be one of the earliest ways that quantum computers with even a limited number of qubits could be employed. Lloyd has already shown how a quantum algorithm can be proven to exhibit “speed-up” when looking just at the number of steps taken in classifying the problem. I personally do not doubt that a suitable protocol will emerge as soon as people start applying themselves to a genuine quantum processor.

At Cambridge Quantum Computing, my colleagues on the quantum algorithm team have been working on the subject from a different perspective in both ML and DNN. The most immediate application using existing classical hardware has come from the team that created ARROW>, where they have looked to build gradually from traditional ML through to DNN techniques for detecting and then classifying anomalies in “pure” time series (initially represented by stock prices). In the past few weeks we have started advancing from ML to DNN, but the exciting thing is that the team has always approached ARROW> in a way that lends itself to being potentially upgraded with quantum components that, in turn, can be run on early-release, smaller-scale quantum processors. Using a team of quantum physicists to approach AI problems so they can ultimately be worked off a quantum computer clearly has some advantages.
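
For illustration only (this is not ARROW>’s actual method, which isn’t described here), a minimal sketch of the general idea of flagging anomalies in a pure time series, using a simple rolling z-score over hypothetical price data:

```python
# Hypothetical illustration of anomaly detection in a time series (e.g., a stock price)
# using a rolling z-score; not ARROW>'s actual algorithm.
import numpy as np

def rolling_zscore_anomalies(series, window=20, threshold=3.0):
    series = np.asarray(series, dtype=float)
    anomalies = []
    for t in range(window, len(series)):
        history = series[t - window:t]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(series[t] - mu) / sigma > threshold:
            anomalies.append(t)                 # point deviates strongly from recent history
    return anomalies

# Hypothetical price series with one injected jump
prices = np.cumsum(np.random.default_rng(1).normal(0, 1, 300)) + 100
prices[250] += 25
print(rolling_zscore_anomalies(prices))         # indices flagged as anomalous
```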

There are, of course, a great many areas other than the seemingly trivial sphere of finding anomalies in share prices where AI will be applied. In my opinion, the best recently published overview of the whole AI space (and incorporating the phase transition to quantum computing) is the Fortune article (here) that appeared at the end of September; not surprisingly, medical and genome-related AI applications for “big”-data-driven deep learning figure highly in the part of the article that focuses on the current state of affairs.

I do not know exactly how far away we are from the first headlines about quantum processors being used to help generate efficiency in at least some aspects of DNN. My personal guess is that deep learning dropout protocols, which help mitigate the over-fitting problem, will be the first area where quantum computing “upgrades” are employed, and I suspect very strongly that any machine being put through its paces at IBM, Google or Microsoft is already being designed with this sort of application in mind. Regardless of whether we are years away or months away from that first headline, the center of gravity in AI will have moved because of Quantum Computing.
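
For context, here is a minimal sketch of classical (inverted) dropout, the over-fitting mitigation referred to above; the details are illustrative rather than anyone’s production implementation.

```python
# Minimal sketch of (inverted) dropout: randomly zero a fraction of activations
# during training and rescale the survivors so expected values are unchanged at test time.
import numpy as np

def dropout(activations, p_drop=0.5, training=True, rng=None):
    if not training or p_drop == 0.0:
        return activations
    rng = rng or np.random.default_rng()
    keep_mask = rng.random(activations.shape) >= p_drop   # True where the unit is kept
    return activations * keep_mask / (1.0 - p_drop)

acts = np.ones((2, 4))   # hypothetical layer activations
print(dropout(acts, p_drop=0.5, rng=np.random.default_rng(0)))
```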

Source: Artificial Intelligence suddenly got a whole lot more interesting | Ilyas Khan, KSG | Pulse | LinkedIn


Professor Emeritus Seymour Papert, pioneer of constructionist learning, dies at 88

Liked Professor Emeritus Seymour Papert, pioneer of constructionist learning, dies at 88 (MIT News)
World-renowned mathematician, learning theorist, and educational-technology visionary was a founding faculty member of the MIT Media Lab.

The Hidden Algorithms Underlying Life | Quanta Magazine

Bookmarked Searching for the Algorithms Underlying Life by John Pavlus (Quanta Magazine)
The biological world is computational at its core, argues computer scientist Leslie Valiant.

I did expect something more entertaining from Google when I searched for “what will happen if I squeeze a paper cup full of hot coffee?”


Marvin Minsky, Pioneer in Artificial Intelligence, Dies at 88 | The New York Times

Professor Minsky laid the foundation for the field by demonstrating the possibilities of imparting common-sense reasoning to computers.

Source: Marvin Minsky, Pioneer in Artificial Intelligence, Dies at 88 – The New York Times