Mikah Sargent speaks with David Weinberger, author of Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility, about how AI, big data, and the internet are all revealing that the world is vastly more complex and unpredictable than we've allowed ourselves to see, and how we're getting acculturated to these machines based on chaos.
Interesting discussion of systems with built-in openness or flexibility as a feature. They highlight Slack, which has a core product but allows individual users and companies to add custom pieces to it to use in the way they want. This provides a tremendous amount of additional value that Slack would never have known about or been able to build otherwise. These sorts of products or platforms not only create their own inherent links, but add value by flexibly creating additional links outside of themselves or by letting external pieces create links to them (a minimal sketch of the latter follows below).
Twitter started out like this in some sense, but ultimately closed itself off, likely to its own detriment.
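To make the Slack example concrete, here’s a minimal sketch of how an external piece can create a link into Slack using its incoming-webhooks feature. The webhook URL and the notify_slack helper below are placeholders of my own invention; Slack generates a unique webhook URL per workspace and channel when you set up the integration.

```python
# Minimal sketch: an external service pushing a message into Slack
# via an incoming webhook (one of the "custom pieces" discussed above).
# Assumption: the URL below is a placeholder; Slack issues a real one
# when you create the incoming-webhook integration in your workspace.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXX"

def notify_slack(text: str) -> None:
    """POST a simple JSON payload; Slack renders it in the channel."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# e.g., a build server or CRM that Slack itself knows nothing about:
notify_slack("Deploy finished: all checks passed.")
```

The design point is that Slack doesn’t need to know what a build server, CRM, or home-grown script is; it only needs to accept a simple payload at a URL, and the long tail of integrations builds itself.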
César Hidalgo has a radical suggestion for fixing our broken political system: automate it! In this provocative talk, he outlines a bold idea to bypass politicians by empowering citizens to create personalized AI representatives that participate directly in democratic decisions. Explore a new way to make collective decisions and expand your understanding of democracy.
“It’s not a communication problem, it’s a cognitive bandwidth problem.”—César Hidalgo
He’s definitely right about the second part, but it’s also a communication problem, because much of political speech is slanted toward untruths, covering up facts and potential outcomes to favor the outcome the speaker wants. There’s also far too much of our leaders saying “Do as I say (and attempt to legislate) and not as I do.” Examples include legislators working actively to take away access to abortion or to condemn those who are LGBTQ while they quietly seek those same things for themselves or their families, or live out those lives in secret.
“One of the reasons why we use democracy so little may be because democracy has a very bad user interface, and if we improve the user interface of democracy, we might be able to use it more.”—César Hidalgo
This is an interesting idea, but it definitely has many pitfalls with respect to how we know AI systems currently work. We’d definitely need to start small with simpler problems and build our way up to the more complex. Even then, I’m not so sure that the complexity issues could ultimately be overcome. On its face it sounds like he’s relying too much on the old “clockwork” viewpoint of physics, though I know that obviously isn’t (or couldn’t be) his personal viewpoint. There are currently far more pathways for this to become a weapon of math destruction (to borrow Cathy O’Neil’s phrase) than the utopian tool he’s envisioning.
AMONG THE MEGA-CORPORATIONS that surveil you, your cellphone carrier has always been one of the keenest monitors, in constant contact with the one small device you keep on you at almost every moment. A confidential Facebook document reviewed by The Intercept shows that the social network courts carriers, along with phone makers — some 100 different companies in 50 countries — by offering the use of even more surveillance data, pulled straight from your smartphone by Facebook itself.
Offered to select Facebook partners, the data includes not just technical information about Facebook members’ devices and use of Wi-Fi and cellular networks, but also their past locations, interests, and even their social groups. This data is sourced not just from the company’s main iOS and Android apps, but from Instagram and Messenger as well. The data has been used by Facebook partners to assess their standing against competitors, including customers lost to and won from them, but also for more controversial uses like racially targeted ads.
“It sure smells like the prescreening provisions of the FCRA,” Reidenberg told The Intercept. “From a functional point of view, what they’re doing is filtering Facebook users on creditworthiness criteria and potentially escaping the application of the FCRA.” ❧
In an initial conversation with a Facebook spokesperson, they stated that the company does “not provide creditworthiness services, nor is that a feature of Actionable Insights.” When asked if Actionable Insights facilitates the targeting of ads on the basis of creditworthiness, the spokesperson replied, “No, there isn’t an instance where this is used.” It’s difficult to reconcile this claim with the fact that Facebook’s own promotional materials tout how Actionable Insights can enable a company to do exactly this. Asked about this apparent inconsistency between what Facebook tells advertising partners and what it told The Intercept, the company declined to discuss the matter on the record, ❧
We are joined by Chris Gilliard, Professor of English at Macomb Community College. His scholarship concentrates on privacy, institutional tech policy, digital redlining, and the re-inventions of discriminatory practices through data mining and algorithmic decision-making, especially as these apply to college students. He is currently developing a project that looks at how popular misunderstandings of mathematical concepts create the illusions of fairness and objectivity in student analytics, predictive policing, and hiring practices. Follow him on Twitter at @hypervisible.
An interesting episode on surveillance capitalism and redlining.
I’m a bit surprised to find that I’ve been blocked by Chris Gilliard (@hypervisible) on Twitter. I hope I haven’t done or said anything in particular to have offended him. More likely I’ve been put on a block list to which he subscribes? I’m just not sure. I’ll have to follow him from another account, as I’m really interested in his research, particularly as it applies to fixing these areas within the edtech space and to applications using IndieWeb principles. I think this may be the first time I’ve gone to someone’s account and noticed that I’ve been blocked.
When Wojcicki took over, in 2014, YouTube was a third of the way to the goal, she recalled in investor John Doerr’s 2018 book Measure What Matters. “They thought it would break the internet! But it seemed to me that such a clear and measurable objective would energize people, and I cheered them on,” Wojcicki told Doerr. “The billion hours of daily watch time gave our tech people a North Star.” By October, 2016, YouTube hit its goal. ❧
Obviously they took the easy route. You may need to measure what matters, but getting to that goal by any means necessary or through indefensible shortcuts is the fallacy here. They could have had that North Star, but the means they used to reach it were wrong.
This is another great example of tech ignoring basic ethics to get to a monetary goal. (Another good one is Mark Zuckerberg’s “connecting people” mantra, when what it should be is “connecting people for good” or “creating positive connections.”)