Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."
Read Our Twitter and Teargas book club reading schedule by Bryan Alexander
Bryan, thanks for the list of interesting and creative ways one could interact with and participate in an online book club. It’s a great outline that includes some not-often-seen methods, somewhat reminiscent of #DS106 work. I hope to see some interesting creativity come out of it.
As I’m looking at this, folks who want a quick background (or who need to be sold on the importance of the topic) may appreciate Frontline’s recent two-part documentary, which I watched not long ago.[1][2] Tufekci appears in it and offers some excellent commentary. For additional overview and background, I’ll also recommend her three TED talks, which I’ve watched in the recent past.[1][2][3] I suspect they cover some of the details in this book.