🎧 Stephen Fry On How Our Myths Help Us Know Who We Are | Clear+Vivid with Alan Alda

Listened to Stephen Fry On How Our Myths Help Us Know Who We Are by Alan Alda from Clear+Vivid with Alan Alda

Stephen Fry loves words. But he does more than love them. He puts them together in ways that so delight readers that a blog post or a tweet by him can have hundreds of thousands of people hanging on his every keystroke. As an actor, he’s brought to life every kind of theatrical writing from sketch comedy to classics. He’s performed in everything from game shows to the British audiobook version of Harry Potter. And always with a rich intelligence and searching eye. In this conversation with Alan Alda, Stephen explores how myths — sometimes very ancient ones — help us understand, and even guide, our modern selves.

Just a lovely episode here. I particularly like the idea of looking back to Greek mythology and mapping the issues between the gods and humans in parallel onto our present and future issues between humans and computers, robots, and artificial intelligence.

👓 Deep text: a catastrophic threat to the bullshit economy? | Abject

Read Deep text: a catastrophic threat to the bullshit economy? (Abject)
I used to be an artist, then I became a poet; then a writer. Now when asked, I simply refer to myself as a word processor. — Kenneth Goldsmith It’s a striking headline, and the Guardian…

📑 Walter Pitts by Neil Smalheiser | Journal Perspectives in Biology and Medicine

Bookmarked Walter Pitts by Neil Smalheiser (Perspectives in Biology and Medicine, Volume 43, Issue 2, pp. 217–226)
Walter Pitts was pivotal in establishing the revolutionary notion of the brain as a computer, which was seminal in the development of computer design, cybernetics, artificial intelligence, and theoretical neuroscience. He was also a participant in a large number of key advances in 20th-century science.  

This looks like an interesting bio to read.

🎧 Triangulation 380 The Age of Surveillance Capitalism | TWiT.TV

Listened to Triangulation 380 The Age of Surveillance Capitalism by Leo Laporte from TWiT.tv

Shoshana Zuboff is the author of The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. She talks with Leo Laporte about how social media is being used to influence people.

Even for people who are steeped in the ideas of surveillance capitalism, ad tech, and dark patterns, there’s still a lot here to be surprised by. If you’re on social media, this should be required listening/watching.

I can’t wait to get a copy of her book.

Folks in the IndieWeb movement have begun to fix portions of the problem, but Shoshana Zuboff suggests there are several additional levels of humane understanding that will need to be bridged to make sure their efforts aren’t in vain. We’ll likely need to do more than just own our own data; we’ll need to go a step or two further as well.

The thing I was shocked not to hear in this interview (and which may not be in the book either) is something that I think has been generally left unmentioned with respect to Facebook, elections, and election tampering (29:18). Zuboff and Laporte discuss Facebook’s experiments in influencing people to vote, several tests for which Facebook published academic papers. Even amid the rumors that Mark Zuckerberg was eyeing a potential presidential run in 2020, with his trip across America meeting people from all walks of life, no one floated the idea that, as CEO of Facebook, he might use what those experiments revealed to help get himself (or even someone else) elected: sending social signals to certain communities to discourage them from voting while sending other signals to other communities to encourage them to vote. The research indicates that in a deeply divided political climate, with the right sorts of voting data, it wouldn’t take much work for Facebook to help effectuate a landslide victory for particular candidates or even entire political parties. And because of the distributed nature of such an attack on democracy, Facebook’s black-box algorithms, and the subtlety of the experiments, it would be incredibly hard to prove that such a thing was even done.

I like the broad concept she discusses (around 43:00): people tend to frame new situations using pre-existing experience, and this may not be the most useful approach for complex ideas that won’t necessarily play out the same way given potentially massive paradigm shifts.

Also of great interest is the idea of instrumentarianism as opposed to the older idea of totalitarianism (43:49). Totalitarian leaders ruled by fear and intimidation; now big data stores can potentially create the same dynamics, but without the fear and intimidation, by more subtly influencing particular groups of people. When combined with “swarming” phenomena or Mark Granovetter’s threshold models of collective behavior, only a very small number of people may need to be influenced digitally to create drastic outcomes. I don’t recall the reference specifically, but I remember a paper on the mathematics of ethnic neighborhoods showing that only about 17% of residents needed to be racist enough to move out of a neighborhood to begin creating ethnic homogeneity and drastically less diversity within a community.
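
This tipping dynamic resembles Schelling-style segregation models, where small individual intolerances cascade into large-scale homogeneity. As a rough toy sketch of my own (the ring topology, swap rule, and 17% parameter are all illustrative assumptions, not from the episode or any specific paper):

```python
import random

def schelling_1d(n=200, frac_intolerant=0.17, steps=10_000, seed=42):
    """Toy Schelling-style model on a ring of agents from two groups.

    A fixed fraction of agents are 'intolerant': whenever both of an
    intolerant agent's neighbors belong to the other group, it swaps
    places with a randomly chosen agent elsewhere on the ring.
    Returns a crude homogeneity score: the fraction of adjacent pairs
    belonging to the same group (~0.5 for a random arrangement).
    """
    rng = random.Random(seed)
    agents = [rng.randint(0, 1) for _ in range(n)]
    intolerant = [rng.random() < frac_intolerant for _ in range(n)]

    for _ in range(steps):
        i = rng.randrange(n)
        if not intolerant[i]:
            continue
        left, right = agents[(i - 1) % n], agents[(i + 1) % n]
        if left != agents[i] and right != agents[i]:
            # unhappy agent relocates by swapping with a random agent
            j = rng.randrange(n)
            agents[i], agents[j] = agents[j], agents[i]
            intolerant[i], intolerant[j] = intolerant[j], intolerant[i]

    same = sum(agents[i] == agents[(i + 1) % n] for i in range(n))
    return same / n

print(schelling_1d())
```

Even this crude version shows the flavor of the result: the tolerant majority never moves, yet the churn of a small intolerant minority is enough to shift the neighborhood structure.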

Also tangentially touched on here, but not discussed directly: I can’t help but think that all of this data, combined with some useful complexity theory, might go a long way toward better defining (and perhaps even controlling) Adam Smith’s economic “invisible hand.”

There’s just so much to consider here that it’s going to take several revisits to the ideas and some additional research to tease this all apart.

🎧 Triangulation 383 Meredith Broussard: Artificial Unintelligence | TWiT.TV

Listened to Triangulation 383 Meredith Broussard: Artificial Unintelligence by Megan Morrone from TWiT.tv

Software developer and data journalist Meredith Broussard joins Megan Morrone to discuss her book Artificial Unintelligence: How Computers Misunderstand the World, which makes the case against the idea that technology can solve all our problems, touching on self-driving cars, the digital divide, the difference between AI and machine learning, and more.

I’ve been waiting a while for Meredith’s book Artificial Unintelligence: How Computers Misunderstand the World to come out, and this is an excellent reminder to pick up several copies for friends who I know will appreciate it.

I’m curious whether she’s got an Amazon Associates referral link so that we could give her an extra ~4% back for promoting her book. I don’t see one on her website, unfortunately.

The opening of the show recalling the internet in the 90’s definitely took me back as I remember being in at least one class in college with Megan Morrone. I seem to recall that it was something in Writing Seminars, perhaps Contemporary American Letters?

There’s so much good to highlight here, but in particular I like the concept of technochauvinism, though when I initially heard the term I had a different conception of what it might mean than the definition Broussard gives: the belief that technology is always the solution to every problem. My initial impression was something closer to the idea of a “tech bro.”

My other favorite piece of the discussion centered on her digging into her local educational system to find a dearth of books and computers, and how some of that might be fixed for future children. It’s reminiscent of a local computer scientist I know from Caltech who created bus-route models for the Pasadena school system that minimized travel time, fuel costs, and personnel, saving the district several million dollars. I’m hoping some of those savings go toward more books…

👓 How Math Can Be Racist: Giraffing | 0xabad1dea

Read How Math Can Be Racist: Giraffing (0xabad1dea)
Well, any computer scientist or experienced programmer knows right away that being “made of math” does not demonstrate anything about the accuracy or utility of a program. Math is a lot more of a social construct than most people think. But we don’t need to spend years taking classes in algorithms to understand how and why the types of algorithms used in artificial intelligence systems today can be tremendously biased. Here, look at these four photos. What do they have in common?

👓 AI Is Making It Extremely Easy for Students to Cheat | WIRED

Read AI Is Making It Extremely Easy for Students to Cheat (WIRED)
Teachers are being forced to adapt to new tools that execute homework perfectly.

The headline is a bit click-baity, but the article is pretty solid nonetheless.

There is some interesting discussion in here on how digital technology meets pedagogy. We definitely need to think about how we reframe what is happening here. I’m a bit surprised they didn’t look back at the history of the acceptance (or not) of the calculator in math classes from the 1960s onward.

Where it comes to math, some of these tools can be quite useful, but students need to have the correct and incorrect uses of these technologies explained and modeled for them. Rote cheating certainly isn’t going to help them, but if used as a general tutorial of how and why methods work, then it can be invaluable and allow them to jump much further ahead of where they might otherwise be.

I’m reminded of having told many people in the past that the general concepts behind calculus are actually quite simple and relatively easy to master. The typical issue is that students in these classes can do the first step of a problem, which is the actual calculus, but get hung up for lack of algebra practice: the ten steps of algebra that follow the one step of calculus are where they stumble on the way to the correct answer.
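
As a generic illustration of my own (not from the article): differentiating a quotient takes exactly one calculus step, and everything after it is algebra.

```latex
\frac{d}{dx}\left[\frac{x^2}{x+1}\right]
  = \frac{2x(x+1) - x^2 \cdot 1}{(x+1)^2}  % calculus: quotient rule
  = \frac{2x^2 + 2x - x^2}{(x+1)^2}        % algebra: expand the numerator
  = \frac{x^2 + 2x}{(x+1)^2}               % algebra: combine like terms
  = \frac{x(x+2)}{(x+1)^2}                 % algebra: factor
```

A student who knows the quotient rule but fumbles the expansion or the factoring still arrives at the wrong (or an unsimplified) answer, even though the calculus was done correctly.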

🔖 The Deep Learning Revolution by Terrence J. Sejnowski | MIT Press

Bookmarked The Deep Learning Revolution by Terrence J. Sejnowski (MIT Press)

How deep learning―from Google Translate to driverless cars to personal cognitive assistants―is changing our lives and transforming every sector of the economy.

The deep learning revolution has brought us driverless cars, the greatly improved Google Translate, fluent conversations with Siri and Alexa, and enormous profits from automated trading on the New York Stock Exchange. Deep learning networks can play poker better than professional poker players and defeat a world champion at Go. In this book, Terry Sejnowski explains how deep learning went from being an arcane academic field to a disruptive technology in the information economy.

Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version of AI. The new version of AI Sejnowski and others developed, which became deep learning, is fueled instead by data. Deep networks learn from data in the same way that babies experience the world, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. Learning algorithms extract information from raw data; information can be used to create knowledge; knowledge underlies understanding; understanding leads to wisdom. Someday a driverless car will know the road better than you do and drive with more skill; a deep learning network will diagnose your illness; a personal cognitive assistant will augment your puny human brain. It took nature many millions of years to evolve human intelligence; AI is on a trajectory measured in decades. Sejnowski prepares us for a deep learning future.


🔖 Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

Bookmarked Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell (curtisbrown.co.uk)

No recent scientific enterprise has been so alluring, so terrifying, and so filled with extravagant promise and frustrating setbacks as artificial intelligence. But how intelligent—really—are the best of today’s AI programs? How do these programs work? What can they actually do, and what kinds of things do they fail at? How human-like do we expect them to become, and how soon do we need to worry about them surpassing us in most, if not all, human endeavors? 

From Melanie Mitchell, a leading professor and computer scientist, comes an in-depth and careful study of modern day artificial intelligence. Exploring the cutting edge of current AI and the prospect of 'intelligent' mechanical creations - who many fear may become our successors - Artificial Intelligence looks closely at the allure, the roller-coaster history, and the recent surge of seeming successes, grand hopes, and emerging fears surrounding AI. Flavoured with personal stories and a twist of humour, this ultimately accessible account of modern AI gives a clear sense of what the field has actually accomplished so far and how much further it has to go.

🎧 Episode 077 Exploring Artificial Intelligence with Melanie Mitchell | Human Current

Listened to Episode 077 Exploring Artificial Intelligence with Melanie Mitchell by Haley Campbell-Gross from HumanCurrent

What is artificial intelligence? Could unintended consequences arise from increased use of this technology? How will the role of humans change with AI? How will AI evolve in the next 10 years?

In this episode, Haley interviews leading Complex Systems Scientist, Professor of Computer Science at Portland State University, and external professor at the Santa Fe Institute, Melanie Mitchell. Professor Mitchell answers many profound questions about the field of artificial intelligence and gives specific examples of how this technology is being used today. She also provides some insights to help us navigate our relationship with AI as it becomes more popular in the coming years.


Definitely worth a second listen.

👓 I Tried Predictim's AI Scan for 'Risky' Babysitters on People I Trust | Gizmodo

Read I Tried Predictim's AI Scan for 'Risky' Babysitters on People I Trust (Gizmodo)
The founders of Predictim want to be clear with me: Their product—an algorithm that scans the online footprint of a prospective babysitter to determine their “risk” levels for parents—is not racist. It is not biased.

Another example of a company claiming “we don’t have bias in our AI” when it seems patently clear that they do. I wonder how one would prove (mathematically) that one didn’t have bias?
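
Proving the *absence* of bias is genuinely hard (there are several competing fairness definitions, and they can’t all be satisfied at once), but measuring one common notion of it is easy. A minimal sketch of a demographic-parity check, with made-up data and the “four-fifths rule” threshold borrowed from employment-discrimination guidelines:

```python
def selection_rates(predictions, groups):
    """Per-group rate of positive predictions (1 = e.g. flagged 'low risk')."""
    rates = {}
    for g in sorted(set(groups)):
        members = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    return rates

def passes_four_fifths_rule(predictions, groups):
    """Demographic parity via the 'four-fifths rule': the lowest group's
    selection rate must be at least 80% of the highest group's."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) >= 0.8 * max(rates.values())

# toy data: group A approved 3 of 4 times, group B only 1 of 4
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(preds, groups))          # {'A': 0.75, 'B': 0.25}
print(passes_four_fifths_rule(preds, groups))  # False
```

Note that passing such a check doesn’t prove a system is unbiased; it only shows that one particular disparity metric, on one particular dataset, is below one particular threshold, which is exactly why blanket “our AI is not biased” claims deserve skepticism.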

👓 Name Change | N.I.P.S.

Read NIPS Name Change by Terrence Sejnowski, Marian Stewart Bartlett, Michael Mozer, Corinna Cortes, Isabelle Guyon, Neil D. Lawrence, Daniel D. Lee, Ulrike von Luxburg, Masashi Sugiyama, Max Welling (nips.cc)

As many of you know, there has been an ongoing discussion concerning the name of the Neural Information Processing Systems conference. The current acronym NIPS has unintended connotations that some members of the community find offensive.

Following several well-publicized incidents of insensitivity at past conferences, and our acknowledgement of other less-publicized incidents, we conducted community polls requesting alternative names, rating the existing and alternative names, and soliciting additional comments.

After extensive discussions, the NIPS Board has decided not to change the name of the conference for now. The poll itself did not yield a clear consensus on a name change or a well-regarded alternative name.

This just makes me sick…

Conversations today at E21 Consortium’s Symposium on Artificial Intelligence in 21st Century Education (#e21sym) reminded me of a short essay by Michael Nielsen (t), which I saw the other day, on volitional philanthropy, which could potentially be applied to AI in education in interesting ways.

🔖 E21 Consortium | Symposium

Bookmarked Symposium on Artificial Intelligence in 21st Century Education (E21 Consortium)
Join us for a day of disruptive dialogue about Artificial Intelligence and 21st Century Education in Ottawa, an annual international symposium hosted by the University of Ottawa in collaboration with Carleton University, St. Paul University, Algonquin College, La Cité, and the Centre franco-ontarien de ressources pédagogiques (CFORP).

hat tip: Stephen Downes