# A TikTok novelty: FACE-BASED FILTER BUBBLES
The AI-bias techlash seems to have had no impact on newer platforms.
Follow a random profile, and TikTok will only recommend people who look almost the same.
Let’s do the experiment from a fresh account:
Anne Applebaum talks to Renée DiResta about building a more trustworthy Internet.
Renée DiResta is the Director of Research at New Knowledge and a Mozilla Fellow in Media, Misinformation, and Trust. She investigates the spread of malign narratives across social networks, and assists policymakers in understanding and responding to the problem. She has advised the United States Congress, the State Department, and other academic, civic, and business organizations, and has studied disinformation and computational propaganda in the context of pseudoscience conspiracies, terrorism, and state-sponsored information warfare.
I appreciate Renée’s ideas and suspect they’re related to those in Ezra Klein’s new book, which I hope to start reading shortly.
The media's "epistemic crisis," algorithmic biases, and radio's inherent, historical misogyny.
In hearings this week, House Democrats sought to highlight an emerging set of facts concerning the President’s conduct. On this week’s On the Media, a look at why muddying the waters remains a viable strategy for Trump’s defenders. Plus, even the technology we trust for its clarity isn’t entirely objective, especially the algorithms that drive decisions in public and private institutions. And, how early radio engineers designed broadcast equipment to favor male voices and make women sound "shrill."
Cathy O’Neil has a great interview on her book Weapons of Math Destruction. I highly recommend everyone read it, but if for some reason you can’t this month, this interview is a good starting place for repairing that deficiency.
Regarding the third segment: I’ve studied the areas of signal processing and information theory in great depth, but I’ve never run across the fascinating history of how we physically and consciously engineered women out of radio and broadcast in quite the way discussed here. I recall the image of “Lena” being nudged out of image processing recently, but the engineering wrongs here are far more serious and pernicious.
Altering the internet's economic and digital infrastructure to promote free speech
Meanwhile, politicians from the two major political parties have been hammering these companies, albeit for completely different reasons. Some have been complaining about how these platforms have potentially allowed for foreign interference in our elections.[3] Others have complained about how they’ve been used to spread disinformation and propaganda.[4] Some have charged that the platforms are just too powerful.[5] Others have called attention to inappropriate account and content takedowns,[6] while some have argued that the attempts to moderate discriminate against certain political viewpoints. ❧

3. A Conversation with Mark Warner: Russia, Facebook and the Trump Campaign, Radio IQ|WVTF Music (Apr. 6, 2018), https://www.wvtf.org/post/conversation-mark-warner-russia-facebook-and-trump-campaign#stream/0 (statement of Sen. Mark Warner (D-Va.): “I first called out Facebook and some of the social media platforms in December of 2016. For the first six months, the companies just kind of blew off these allegations, but these proved to be true; that Russia used their social media platforms with fake accounts to spread false information, they paid for political advertising on their platforms. Facebook says those tactics are no longer allowed—that they’ve kicked this firm off their site, but I think they’ve got a lot of explaining to do.”).
4. Nicholas Confessore & Matthew Rosenberg, Facebook Fallout Ruptures Democrats’ Longtime Alliance with Silicon Valley, N.Y. Times (Nov. 17, 2018), https://www.nytimes.com/2018/11/17/technology/facebook-democrats-congress.html (referencing statement by Sen. Jon Tester (D-Mont.): “Mr. Tester, the departing chief of the Senate Democrats’ campaign arm, looked at social media companies like Facebook and saw propaganda platforms that could cost his party the 2018 elections, according to two congressional aides. If Russian agents mounted a disinformation campaign like the one that had just helped elect Mr. Trump, he told Mr. Schumer, ‘we will lose every seat.’”).
5. Julia Carrie Wong, #Breaking Up Big Tech: Elizabeth Warren Says Facebook Just Proved Her Point, The Guardian (Mar. 11, 2019), https://www.theguardian.com/us-news/2019/mar/11/elizabeth-warren-facebook-ads-break-up-big-tech (statement of Sen. Elizabeth Warren (D-Mass.): “Curious why I think FB has too much power? Let’s start with their ability to shut down a debate over whether FB has too much power. Thanks for restoring my posts. But I want a social media marketplace that isn’t dominated by a single censor. #BreakUpBigTech.”).
6. Jessica Guynn, Ted Cruz Threatens to Regulate Facebook, Google and Twitter Over Charges of Anti-Conservative Bias, USA Today (Apr. 10, 2019), https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-over-alleged-bias/3423095002/ (statement of Sen. Ted Cruz (R-Tex.): “What makes the threat of political censorship so problematic is the lack of transparency, the invisibility, the ability for a handful of giant tech companies to decide if a particular speaker is disfavored.”).
Most of these problems fall under the subheading of what happens when social media platforms algorithmically push or accelerate content. An individual with an extreme view can publish a piece of vile or disruptive content, and because it’s inflammatory, the silos promote it, which provides even more eyeballs, and the acceleration becomes a positive feedback loop. As a result the social silo benefits from the engagement for advertising purposes, but the community and the commons are irreparably harmed.
If this algorithmic acceleration were removed, the commons would be much healthier: fringe ideas and abuse that are abhorrent to most would recede, and the broader democratic views of the “masses” (good or bad) would prevail. Without the algorithmic push, that sort of fringe content would be marginalized in the same way we want our inane content like this morning’s coffee or today’s lunch marginalized.
To analogize it, we’ve provided social media machine guns to the most vile and fringe members of our society and the social platforms are helping them drag the rest of us down.
If all ideas and content were given the same linear, non-promoted presentation, we would all be much better off, and we wouldn’t need as much human curation.
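The feedback loop described above can be sketched in a toy simulation. Everything here is a made-up illustration, not a model of any real platform: a handful of “inflammatory” posts earn ten times the engagement per impression, and an engagement-ranked feed ends up devoting a wildly disproportionate share of its impressions to them, while a linear, non-promoted feed keeps them near their natural base rate.

```python
import random

random.seed(42)

def run_feed(ranked, rounds=50, n_posts=100, slots=10):
    """Toy feed simulation. 5% of posts are 'inflammatory' and earn 10x
    the engagement per impression (all numbers are illustrative)."""
    posts = [{"inflammatory": i < 5, "engagement": 1, "impressions": 0}
             for i in range(n_posts)]
    for _ in range(rounds):
        if ranked:
            # Engagement-ranked feed: show the top posts by accumulated
            # engagement -- the positive feedback loop lives here.
            shown = sorted(posts, key=lambda p: p["engagement"],
                           reverse=True)[:slots]
        else:
            # Linear feed: a random, non-promoted slice of posts.
            shown = random.sample(posts, slots)
        for p in shown:
            p["impressions"] += 1
            p["engagement"] += 10 if p["inflammatory"] else 1
    total = sum(p["impressions"] for p in posts)
    return sum(p["impressions"] for p in posts if p["inflammatory"]) / total

print("inflammatory share of impressions, ranked feed:", run_feed(ranked=True))
print("inflammatory share of impressions, linear feed:", run_feed(ranked=False))
```

In the ranked case the inflammatory posts win the top slots after the first round and never relinquish them; in the linear case their share of attention stays close to their 5% share of the content.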
Annotated on December 11, 2019 at 11:13AM
That approach: build protocols, not platforms. ❧
I can now see why @jack made his Twitter announcement this morning. If he opens the platform up and can use that openness to suck up even more data, then Twitter’s game could be doing big-data and higher-end algorithmic work on much larger sets of data to drive eyeballs.
I’ll have to think on how one would “capture” a market this way, but Twitter could be reasonably poised to pivot in this direction if they’re really game for going all-in on the idea.
It’s reasonably obvious that Twitter has dramatically slowed its growth and isn’t competing with some of its erstwhile peers. Thus they need to figure out how to turn a relatively large ship without losing value.
Annotated on December 11, 2019 at 11:20AM
It would allow end users to determine their own tolerances for different types of speech but make it much easier for most people to avoid the most problematic speech, without silencing anyone entirely or having the platforms themselves make the decisions about who is allowed to speak.❧
But platforms **are** making **huge** decisions about who is allowed to speak. While they’re generally allowing everyone to have a voice, they’re also very subtly privileging many voices over others. While they’re providing space for even the least among us to have a voice, they’re making far too many of the worst and most powerful among us logarithmically louder.
It may not be broadly obvious, but their algorithms are handing massive megaphones to people whom society broadly thinks shouldn’t have a voice at all. These megaphones come in the form of the algorithmic amplification of fringe ideas, which accelerates them into the broader public discourse so that these platforms can get more engagement, and therefore more eyeballs, for their advertising and surveillance-capitalism ends.
The issue we ought to be looking at is the dynamic range between people and the messages they’re able to send through social platforms.
We could also analogize this to the voting situation in the United States. When we discourage the poor, the disabled, or otherwise marginalized people from voting while simultaneously giving the uber-rich outsized influence because of what they’re able to buy, we’re imposing the same sorts of problems. Social media is just able to do this at an even larger scale and magnify the effects to make the harms more obvious.
If I follow 5,000 people on social media and one of them is a racist-policy-supporting, white nationalist president, those messages will get drowned out because I can only consume so much content. But when the algorithm consistently pushes that content to the top of my feed and attention, it is only going to accelerate it and create more harm. If I get a linear presentation of the content, then I’d have to actively search that content out for it to cause me that sort of harm.
Annotated on December 11, 2019 at 11:39AM
Moving back to a focus on protocols over platforms can solve many of these problems. ❧
This may also only be the case if large corporations are forced to open up and support those protocols. If my independent website can’t interact freely and openly with something like Twitter on a level playing field, then it really does no good.
Annotated on December 11, 2019 at 11:42AM
And other recent developments suggest that doing so could overcome many of the earlier pitfalls of protocol-based systems, potentially creating the best of all worlds: useful internet services, with competition driving innovation, not controlled solely by giant corporations, but financially sustainable, providing end users with more control over their own data and privacy—and providing mis- and disinformation far fewer opportunities to wreak havoc. ❧
Some of the issue with this then becomes: “Who exactly creates these standards?” We already have issues with mega-corporations like Google wielding outsized influence through their ability to create new standards like Schema.org or AMP.
Who is to say they don’t tacitly design their standards to directly (and only) benefit themselves?
Annotated on December 11, 2019 at 11:47AM
I didn’t go looking for grief this afternoon, but it found me anyway, and I have designers and programmers to thank for it.
Function's Anil Dash joins Matt to discuss how Big Tech broke the web and how we can get it back.
Bookmarked on December 06, 2019 at 09:54PM
Fourteen years ago, a dozen geeks gathered around our dining table for Tagsgiving dinner. No, that’s not a typo. In 2005, my husband and I celebrated Thanksgiving as “Tagsgiving,” in honor of the web technology that had given birth to our online community development shop. I invited our guests...
Tagging systems were “folksonomies”: chaotic, self-organizing categorization schemes that grew from the bottom up. ❧
There’s something that just feels so wrong in this article about old school tagging and the blogosphere that has a pullquote meant to encourage one to Tweet the quote. #irony
–December 04, 2019 at 11:03AM
I literally couldn’t remember when I’d last looked at my RSS subscriptions.
On the surface, that might seem like a win: Instead of painstakingly curating my own incoming news, I can effortlessly find an endless supply of interesting, worthwhile content that the algorithm finds for me. The problem, of course, is that the algorithm isn’t neutral: It’s the embodiment of Facebook and Twitter’s technology, data analysis, and, most crucially, business model. By relying on the algorithm, instead of on tags and RSS, I’m letting an army of web developers, business strategists, data scientists, and advertisers determine what gets my attention. I’m leaving myself vulnerable to misinformation and manipulation, and giving up my power of self-determination. ❧
–December 04, 2019 at 11:34AM
You might connect with someone who regularly used the same tags that you did, but that was because they shared your interests, not because they had X thousand followers. ❧
An important and sadly underutilized means of discovery. –December 04, 2019 at 11:35AM
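That kind of tag-based discovery is simple enough to sketch. The users and tags below are entirely made up for illustration: other people are ranked purely by how many tags they share with you, so follower counts never enter into it.

```python
from collections import Counter

# Hypothetical folksonomy data: each user's set of self-chosen tags.
user_tags = {
    "alice": {"indieweb", "rss", "folksonomy", "gardening"},
    "bob":   {"indieweb", "rss", "webdev"},
    "carol": {"celebrity", "fashion"},  # huge following, no shared interests
    "dave":  {"folksonomy", "rss", "archives"},
}

def suggest_connections(me, user_tags):
    """Rank other users by how many tags they share with `me`.
    Popularity plays no role; only overlapping interests do."""
    my_tags = user_tags[me]
    overlap = Counter()
    for user, tags in user_tags.items():
        if user != me:
            overlap[user] = len(my_tags & tags)
    # Keep only users with at least one shared tag.
    return [(u, n) for u, n in overlap.most_common() if n > 0]

print(suggest_connections("alice", user_tags))
```

For "alice" this surfaces "bob" and "dave" (two shared tags each) and never "carol", no matter how many followers she has, which is exactly the dynamic the pullquote describes.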
I find it interesting that Alexandra’s Twitter display name is AlexandraSamuel.com while the top of her own website has the apparent title @AlexandraSamuel. I don’t think I’ve seen a crossing up of those two sorts of identities before though it has become more common for people to use their own website name as their Twitter name. Greg McVerry is another example of this.
Thanks to Jeremy Cherfas and Aaron Davis for the links to this piece. I suspect that Dr. Samuel will appreciate that we’re talking about this piece using our own websites and tagging them with our own crazy taxonomies. I’m feeling nostalgic now for the old Technorati…
Mikah Sargent speaks with David Weinberger, author of Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility, about how AI, big data, and the internet are all revealing that the world is vastly more complex and unpredictable than we’ve allowed ourselves to see, and how we’re becoming acculturated to these machines based on chaos.
Twitter started out like this in some sense, but ultimately closed itself off–likely to its own detriment.
César Hidalgo has a radical suggestion for fixing our broken political system: automate it! In this provocative talk, he outlines a bold idea to bypass politicians by empowering citizens to create personalized AI representatives that participate directly in democratic decisions. Explore a new way to make collective decisions and expand your understanding of democracy.
“It’s not a communication problem, it’s a cognitive bandwidth problem.”—César Hidalgo
He’s definitely right about the second part, but it’s also a communication problem, because most political speech is slanted toward untruths, covering up facts and potential outcomes to represent the outcome the speaker wants. There’s also far too much of our leaders saying “Do as I say (and attempt to legislate), not as I do.” Examples include legislators who work to actively take away access to things like abortion, or to condemn those who are LGBTQ, while they actively do those things for themselves or their families or live out those lifestyles in secret.
“One of the reasons why we use Democracy so little may be because Democracy has a very bad user interface and if we improve the user interface of democracy we might be able to use it more.”—César Hidalgo
This is an interesting idea, but it definitely has many pitfalls with respect to how we know AI systems currently work. We’d need to start small with simpler problems and build our way up to the more complex. Even then, I’m not so sure the complexity issues could ultimately be overcome. On its face it sounds like he’s relying too much on the old “clockwork” viewpoint of physics, though I know that obviously isn’t (or couldn’t be) his personal viewpoint. There are a lot more pathways for this to become a weapon of math destruction than the utopian tool he’s envisioning.
Controversial ‘smart locks’ show the way that surveillance tech begins with the poor, before spreading to the rest of us
Instead, when we talk about technology, we should be thinking about power dynamics.
Great piece about ethics in technology.
YouTube, after years of criticism, has finally decided to specifically ban videos that promote the idea that one group is superior to others. The new policy, announced Wednesday, includes a complet…
AMONG THE MEGA-CORPORATIONS that surveil you, your cellphone carrier has always been one of the keenest monitors, in constant contact with the one small device you keep on you at almost every moment. A confidential Facebook document reviewed by The Intercept shows that the social network courts carriers, along with phone makers — some 100 different companies in 50 countries — by offering the use of even more surveillance data, pulled straight from your smartphone by Facebook itself.
Offered to select Facebook partners, the data includes not just technical information about Facebook members’ devices and use of Wi-Fi and cellular networks, but also their past locations, interests, and even their social groups. This data is sourced not just from the company’s main iOS and Android apps, but from Instagram and Messenger as well. The data has been used by Facebook partners to assess their standing against competitors, including customers lost to and won from them, but also for more controversial uses like racially targeted ads.