👓 AI Is Making It Extremely Easy for Students to Cheat | WIRED

Read AI Is Making It Extremely Easy for Students to Cheat (WIRED)
Teachers are being forced to adapt to new tools that execute homework perfectly.
The headline is a bit click-baity, but the article is pretty solid nonetheless.

There is some interesting discussion in here on how digital technology meets pedagogy. We definitely need to think about how we reframe what is happening here. I’m a bit surprised they didn’t look back at the history of the acceptance (or not) of the calculator in math classes from the 1960s onward.

When it comes to math, some of these tools can be quite useful, but students need to have the correct and incorrect uses of these technologies explained and modeled for them. Rote cheating certainly isn’t going to help them, but a tool used as a general tutorial on how and why methods work can be invaluable and allow them to jump much further ahead of where they might otherwise be.

I’m reminded of having told many people in the past that the general concepts behind calculus are actually quite simple and relatively easy to master. The typical issue is that students in these classes can usually manage the first step of a problem, which is the actual calculus, but they get hung up because they haven’t practiced the algebra enough; the ten or so algebra steps that follow that single calculus step are where they stumble on the way to the correct answer.
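As a quick illustration of what I mean (a hypothetical example of mine, not one from the article), consider differentiating a simple rational function: the quotient rule is the only calculus in the problem, and every step after it is algebra.

```latex
% Hypothetical example: one calculus step followed by several algebra steps.
f(x) = \frac{x^2 + 1}{x - 3}

% Step 1 (the calculus): apply the quotient rule.
f'(x) = \frac{(2x)(x - 3) - (x^2 + 1)(1)}{(x - 3)^2}

% Steps 2 onward (pure algebra): expand, combine like terms, simplify.
f'(x) = \frac{2x^2 - 6x - x^2 - 1}{(x - 3)^2} = \frac{x^2 - 6x - 1}{(x - 3)^2}
```

The sticking point for most students is the second and third lines, not the quotient rule itself.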

🔖 The Deep Learning Revolution by Terrence J. Sejnowski | MIT Press

Bookmarked The Deep Learning Revolution by Terrence J. Sejnowski (MIT Press)

How deep learning―from Google Translate to driverless cars to personal cognitive assistants―is changing our lives and transforming every sector of the economy.

The deep learning revolution has brought us driverless cars, the greatly improved Google Translate, fluent conversations with Siri and Alexa, and enormous profits from automated trading on the New York Stock Exchange. Deep learning networks can play poker better than professional poker players and defeat a world champion at Go. In this book, Terry Sejnowski explains how deep learning went from being an arcane academic field to a disruptive technology in the information economy.

Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version of AI. The new version of AI Sejnowski and others developed, which became deep learning, is fueled instead by data. Deep networks learn from data in the same way that babies experience the world, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. Learning algorithms extract information from raw data; information can be used to create knowledge; knowledge underlies understanding; understanding leads to wisdom. Someday a driverless car will know the road better than you do and drive with more skill; a deep learning network will diagnose your illness; a personal cognitive assistant will augment your puny human brain. It took nature many millions of years to evolve human intelligence; AI is on a trajectory measured in decades. Sejnowski prepares us for a deep learning future.

The Deep Learning Revolution by Terrence J. Sejnowski book cover

🔖 Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell

Bookmarked Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell (curtisbrown.co.uk)

No recent scientific enterprise has been so alluring, so terrifying, and so filled with extravagant promise and frustrating setbacks as artificial intelligence. But how intelligent—really—are the best of today’s AI programs? How do these programs work? What can they actually do, and what kinds of things do they fail at? How human-like do we expect them to become, and how soon do we need to worry about them surpassing us in most, if not all, human endeavors? 

From Melanie Mitchell, a leading professor and computer scientist, comes an in-depth and careful study of modern day artificial intelligence. Exploring the cutting edge of current AI and the prospect of 'intelligent' mechanical creations - who many fear may become our successors - Artificial Intelligence looks closely at the allure, the roller-coaster history, and the recent surge of seeming successes, grand hopes, and emerging fears surrounding AI. Flavoured with personal stories and a twist of humour, this ultimately accessible account of modern AI gives a clear sense of what the field has actually accomplished so far and how much further it has to go.

🎧 Episode 077 Exploring Artificial Intelligence with Melanie Mitchell | Human Current

Listened to Episode 077 Exploring Artificial Intelligence with Melanie Mitchell by Haley Campbell-Gross from HumanCurrent

What is artificial intelligence? Could unintended consequences arise from increased use of this technology? How will the role of humans change with AI? How will AI evolve in the next 10 years?

In this episode, Haley interviews leading Complex Systems Scientist, Professor of Computer Science at Portland State University, and external professor at the Santa Fe Institute, Melanie Mitchell. Professor Mitchell answers many profound questions about the field of artificial intelligence and gives specific examples of how this technology is being used today. She also provides some insights to help us navigate our relationship with AI as it becomes more popular in the coming years.

Melanie Mitchell on Human Current
Definitely worth a second listen.

👓 I Tried Predictim's AI Scan for 'Risky' Babysitters on People I Trust | Gizmodo

Read I Tried Predictim's AI Scan for 'Risky' Babysitters on People I Trust (Gizmodo)
The founders of Predictim want to be clear with me: Their product—an algorithm that scans the online footprint of a prospective babysitter to determine their “risk” levels for parents—is not racist. It is not biased.
Another example of an app saying “We don’t have bias in our AI” when it seems patently true that they do. I wonder how one would prove (mathematically) that one didn’t have bias?
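For a sense of what even a basic mathematical check could look like, here is a minimal sketch of my own (not Predictim’s method, and using made-up data and names) that computes two common group-fairness measures: per-group selection rates (demographic parity) and per-group true-positive rates (equal opportunity). Large gaps in either are standard working definitions of statistical bias, though passing such checks still wouldn’t prove the absence of bias.

```python
# Minimal sketch of two common group-fairness checks on hypothetical data;
# this is illustrative only, not Predictim's actual methodology.
from collections import defaultdict

def group_rates(groups, y_true, y_pred):
    """Return per-group selection rate and true-positive rate."""
    stats = defaultdict(lambda: {"n": 0, "pred_pos": 0, "actual_pos": 0, "true_pos": 0})
    for g, yt, yp in zip(groups, y_true, y_pred):
        s = stats[g]
        s["n"] += 1
        s["pred_pos"] += yp
        s["actual_pos"] += yt
        s["true_pos"] += yt and yp
    out = {}
    for g, s in stats.items():
        out[g] = {
            # P(flagged | group): compared across groups for demographic parity
            "selection_rate": s["pred_pos"] / s["n"],
            # P(flagged | actually risky, group): compared for equal opportunity
            "tpr": s["true_pos"] / s["actual_pos"] if s["actual_pos"] else None,
        }
    return out

# Hypothetical labels: y_pred = 1 means the model flagged someone as "risky".
groups = ["A", "A", "A", "B", "B", "B"]
y_true = [0, 1, 0, 0, 1, 0]
y_pred = [0, 1, 0, 1, 1, 1]

for group, rates in group_rates(groups, y_true, y_pred).items():
    print(group, rates)
# Roughly equal selection rates and TPRs across groups would be evidence
# against these particular forms of bias; big gaps suggest the opposite.
```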

👓 Name Change | N.I.P.S.

Read NIPS Name Change by Terrence Sejnowski, Marian Stewart Bartlett, Michael Mozer, Corinna Cortes, Isabelle Guyon, Neil D. Lawrence, Daniel D. Lee, Ulrike von Luxburg, Masashi Sugiyama, Max Welling (nips.cc)

As many of you know, there has been an ongoing discussion concerning the name of the Neural Information Processing Systems conference. The current acronym NIPS has unintended connotations that some members of the community find offensive.

Following several well-publicized incidents of insensitivity at past conferences, and our acknowledgement of other less-publicized incidents, we conducted community polls requesting alternative names, rating the existing and alternative names, and soliciting additional comments.

After extensive discussions, the NIPS Board has decided not to change the name of the conference for now. The poll itself did not yield a clear consensus on a name change or a well-regarded alternative name.

This just makes me sick…
Conversations today at the E21 Consortium’s Symposium on Artificial Intelligence in 21st Century Education (#e21sym) reminded me of a short essay on volitional philanthropy by Michael Nielsen (t) which I saw the other day and which could potentially be applied to AI in education in interesting ways.

🔖 E21 Consortium | Symposium

Bookmarked Symposium on Artificial Intelligence in 21st Century Education (E21 Consortium)
Join us for a day of disruptive dialogue about Artificial Intelligence and 21st Century Education in Ottawa, an annual international symposium hosted by the University of Ottawa in collaboration with Carleton University, St. Paul University, Algonquin College, La Cité, and the Centre franco-ontarien de ressources pédagogiques (CFORP).
hat tip: Stephen Downes

❤️ Downes tweet: The panel I’m on at #E21Sym will be live streamed any minute now

Liked Downes on Twitter by Stephen Downes (Twitter)

❤️ Downes tweet: I’m at “Education in the 21st Century: A Symposium on Artificial Intelligence” today at the University of Ottawa.

Liked a tweet by Stephen Downes (Twitter)

🎧 Episode 077 Exploring Artificial Intelligence with Melanie Mitchell | HumanCurrent

Listened to Episode 077: Exploring Artificial Intelligence with Melanie Mitchell by Haley Campbell-Gross from HumanCurrent

What is artificial intelligence? Could unintended consequences arise from increased use of this technology? How will the role of humans change with AI? How will AI evolve in the next 10 years?

In this episode, Haley interviews leading Complex Systems Scientist, Professor of Computer Science at Portland State University, and external professor at the Santa Fe Institute, Melanie Mitchell. Professor Mitchell answers many profound questions about the field of artificial intelligence and gives specific examples of how this technology is being used today. She also provides some insights to help us navigate our relationship with AI as it becomes more popular in the coming years.

Melanie Mitchell
I knew Dr. Mitchell was working on a book during her hiatus, but didn’t know it was potentially coming out so soon! I loved her last book and can’t wait to get this one. Sadly, there are no pre-order copies available at any of the usual suspects yet.

👓 'I was shocked it was so easy': meet the professor who says facial recognition can tell if you're gay | The Guardian

Read 'I was shocked it was so easy': meet the professor who says facial recognition can tell if you're gay by Paul Lewis (the Guardian)
Psychologist Michal Kosinski says artificial intelligence can detect your sexuality and politics just by looking at your face. What if he’s right?
How in God’s name are we repeating so many of the exact problems of the end of the 1800s? First nationalism and protectionism, and now a eugenics agenda?

📺 Zeynep Tufekci: We’re building a dystopia just to make people click on ads | TED

Watched We're building a dystopia just to make people click on ads by Zeynep Tufekci from ted.com

We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren't even the real threat. What we need to understand is how the powerful might use AI to control us -- and what we can do in response.

📺 Zeynep Tufekci: Machine intelligence makes human morals more important | TED

Watched Machine intelligence makes human morals more important by Zeynep Tufekci from ted.com

Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."