Bookmarked Coded Bias (CODED BIAS)
CODED BIAS explores the fallout of MIT Media Lab researcher Joy Buolamwini’s discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.

This looks like an interesting documentary.

Read Marketers are Addicted to Bad Data by Jacques Corby-Tuech (jacquescorbytuech.com)

Modern marketing is all about data and however hard you might try, you can't spend any time around marketers online without being subjected to endless think pieces, how-to guides, ebooks or other dreck about how we need to track and measure and count every little thing.

We've got click rates, impressions, conversion rates, open rates, ROAS, pageviews, bounce rates, ROI, CPM, CPC, impression share, average position, sessions, channels, landing pages, KPI after never-ending KPI.

That'd be fine if all this shit meant something and we knew how to interpret it. But it doesn't and we don't.

Liked a tweet (Twitter)
A very important point to consider. I need to create a feed of some of these people to follow. I have one called Weapons of Math Destruction that’s pretty close to this, and focused on artificial intelligence, algorithms, and ethics.

 
Read Thread by @MarcFaddoul on TikTok face-based filter bubbles (threadreaderapp.com)

# A TikTok novelty: FACE-BASED FILTER BUBBLES
The AI-bias techlash seems to have had no impact on newer platforms.
Follow a random profile, and TikTok will only recommend people who look almost the same.
Let’s do the experiment from a fresh account:

screenshot of TikTok showing apparent racism

Listened to Fake News Is Solvable from The Rockefeller Foundation

Anne Applebaum talks to Renée DiResta about building a more trustworthy Internet.

Renée DiResta is the Director of Research at New Knowledge and a Mozilla Fellow in Media, Misinformation, and Trust. She investigates the spread of malign narratives across social networks, and assists policymakers in understanding and responding to the problem. She has advised the United States Congress, the State Department, and other academic, civic, and business organizations, and has studied disinformation and computational propaganda in the context of pseudoscience conspiracies, terrorism, and state-sponsored information warfare.

Many talk about the right to freedom of speech online, but rarely do discussions delve a layer deeper into the idea of the “right to reach”. I’ve lately taken to analogizing the artificial reach of bots and partisan disinformation to social media machine guns to emphasize this reach problem. It’s also related to Cathy O’Neil’s concept of Weapons of Math Destruction. We definitely need some new verbiage to describe these sorts of social ills so that we have a better grasp of what they are and how they can affect us.

I appreciate Renée’s ideas and suspect they’re related to those in Ezra Klein’s new book, which I hope to start reading shortly.

 

Read Do you really need all this personal information, @RollingStone? by Doc Searls (Doc Searls Weblog)
Here’s the popover that greets visitors on arrival at Rolling Stone’s website: Our Privacy Policy has been revised as of January 1, 2020. This policy outlines how we use your informatio…
Holy crap that’s a lot of tracking for one site. What’s worse is that I can’t imagine really what or how they would honestly be monetizing it to their own benefit without selling me out as a person.
Listened to The Disagreement Is The Point from On the Media | WNYC Studios

The media's "epistemic crisis," algorithmic biases, and the radio's inherent, historical misogyny.

In hearings this week, House Democrats sought to highlight an emerging set of facts concerning the President’s conduct. On this week’s On the Media, a look at why muddying the waters remains a viable strategy for Trump’s defenders. Plus, even the technology we trust for its clarity isn’t entirely objective, especially the algorithms that drive decisions in public and private institutions. And, how early radio engineers designed broadcast equipment to favor male voices and make women sound "shrill."

1. David Roberts [@drvox], writer covering energy for Vox, on the "epistemic crisis" at the heart of our bifurcated information ecosystem. Listen.

2. Cathy O'Neil [@mathbabedotorg], mathematician and author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, on the biases baked into our algorithms. Listen.

3. Tina Tallon [@ttallon], musician and professor, on how biases built into radio technology have shaped how we hear women speak. Listen.

Some great discussion on the idea of women being “shrill” and ad hominem attacks instead of attacks on ideas.

Cathy O’Neil has a great interview on her book Weapons of Math Destruction. I highly recommend everyone read it, but if for some reason you can’t this month, this interview is a good starting place for repairing that deficiency.

In section three, I’ll note that I’ve studied the areas of signal processing and information theory in great depth, but never run across the fascinating history of how we physically and consciously engineered women out of radio and broadcast in quite the way discussed here. I recall the image of “Lena” being nudged out of image processing recently, but the engineering wrongs here are far more serious and pernicious.

Read Protocols, Not Platforms: A Technological Approach to Free Speech by Mike Masnick (knightcolumbia.org)
Altering the internet's economic and digital infrastructure to promote free speech

Meanwhile, politicians from the two major political parties have been hammering these companies, albeit for completely different reasons. Some have been complaining about how these platforms have potentially allowed for foreign interference in our elections.3 3. A Conversation with Mark Warner: Russia, Facebook and the Trump Campaign, Radio IQ|WVTF Music (Apr. 6, 2018), https://www.wvtf.org/post/conversation-mark-warner-russia-facebook-and-trump-campaign#stream/0 (statement of Sen. Mark Warner (D-Va.): “I first called out Facebook and some of the social media platforms in December of 2016. For the first six months, the companies just kind of blew off these allegations, but these proved to be true; that Russia used their social media platforms with fake accounts to spread false information, they paid for political advertising on their platforms. Facebook says those tactics are no longer allowed—that they’ve kicked this firm off their site, but I think they’ve got a lot of explaining to do.”). Others have complained about how they’ve been used to spread disinformation and propaganda.4 4. Nicholas Confessore & Matthew Rosenberg, Facebook Fallout Ruptures Democrats’ Longtime Alliance with Silicon Valley, N.Y. Times (Nov. 17, 2018), https://www.nytimes.com/2018/11/17/technology/facebook-democrats-congress.html (referencing statement by Sen. Jon Tester (D-Mont.): “Mr. Tester, the departing chief of the Senate Democrats’ campaign arm, looked at social media companies like Facebook and saw propaganda platforms that could cost his party the 2018 elections, according to two congressional aides. If Russian agents mounted a disinformation campaign like the one that had just helped elect Mr. Trump, he told Mr. Schumer, ‘we will lose every seat.’”). Some have charged that the platforms are just too powerful.5 5. Julia Carrie Wong, Up Big Tech: Elizabeth Warren Says Facebook Just Proved Her Point, The Guardian (Mar. 
11, 2019), https://www.theguardian.com/us-news/2019/mar/11/elizabeth-warren-facebook-ads-break-up-big-tech (statement of Sen. Elizabeth Warren (D-Mass.)) (“Curious why I think FB has too much power? Let’s start with their ability to shut down a debate over whether FB has too much power. Thanks for restoring my posts. But I want a social media marketplace that isn’t dominated by a single censor. .”). Others have called attention to inappropriate account and content takedowns,6 6. Jessica Guynn, Ted Cruz Threatens to Regulate Facebook, Google and Twitter Over Charges of Anti-Conservative Bias, USA Today (Apr. 10, 2019), https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-over-alleged-bias/3423095002/ (statement of Sen. Ted Cruz (R-Tex.)) (“What makes the threat of political censorship so problematic is the lack of transparency, the invisibility, the ability for a handful of giant tech companies to decide if a particular speaker is disfavored.”). while some have argued that the attempts to moderate discriminate against certain political viewpoints.

Most of these problems fall under the heading of what happens when social media platforms algorithmically push or accelerate content. An individual with an extreme view can publish a piece of vile or disruptive content, and because it’s inflammatory, the silos promote it, which attracts even more eyeballs, and the acceleration becomes a positive feedback loop. The social silo benefits from the engagement for advertising purposes, but the community and the commons are irreparably harmed.

If this one piece were removed, then the commons would be much healthier, fringe ideas and abuse that are abhorrent to most would be removed, and the broader democratic views of the “masses” (good or bad) would prevail. Without the algorithmic push of fringe ideas, that sort of content would be marginalized in the same way we want our inane content like this morning’s coffee or today’s lunch marginalized.

To analogize it, we’ve provided social media machine guns to the most vile and fringe members of our society and the social platforms are helping them drag the rest of us down.

If all ideas and content were given the same linear, non-promoted presentation, we would all be much better off, and we wouldn’t need as much human curation.

Annotated on December 11, 2019 at 11:13AM
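The feedback loop described above can be sketched in a few lines. This is a purely illustrative toy model, not any platform’s actual ranking: posts get a random “outrage” score, a linear feed is simply reverse-chronological, and an engagement-weighted feed surfaces the most inflammatory posts and then boosts them further because being seen generates more engagement.

```python
import random

random.seed(42)

# Each hypothetical post has a timestamp and an "outrage" score.
posts = [{"id": i, "time": i, "outrage": random.random()} for i in range(100)]

def linear_feed(posts, k=10):
    """Reverse-chronological: newest first, no promotion."""
    return sorted(posts, key=lambda p: -p["time"])[:k]

def engagement_feed(posts, k=10):
    """Engagement-weighted: inflammatory posts float to the top,
    and each impression feeds back into more predicted engagement."""
    ranked = sorted(posts, key=lambda p: -p["outrage"])
    for p in ranked[:k]:        # the promoted posts get seen more...
        p["outrage"] *= 1.1     # ...which boosts them further (feedback loop)
    return ranked[:k]

top_linear = linear_feed(posts)
top_engage = engagement_feed(posts)

def avg(feed):
    return sum(p["outrage"] for p in feed) / len(feed)

print(f"avg outrage, linear feed:     {avg(top_linear):.2f}")
print(f"avg outrage, engagement feed: {avg(top_engage):.2f}")
```

Under this toy model the linear feed’s average outrage hovers around the population mean, while the engagement feed concentrates and then amplifies the extreme tail, which is the dynamic the annotation is pointing at.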

That approach: build protocols, not platforms.

I can now see why @jack made his Twitter announcement this morning. If he opens up and can use that openness to suck up more data, then Twitter’s game could be doing big data and higher-end algorithmic work on even larger sets of data to drive eyeballs.

I’ll have to think on how one would “capture” a market this way, but Twitter could be reasonably poised to pivot in this direction if they’re really game for going all-in on the idea.

It’s reasonably obvious that Twitter has dramatically slowed its growth and isn’t competing with some of its erstwhile peers. Thus they need to figure out how to turn a relatively large ship without losing value.

Annotated on December 11, 2019 at 11:20AM

It would allow end users to determine their own tolerances for different types of speech but make it much easier for most people to avoid the most problematic speech, without silencing anyone entirely or having the platforms themselves make the decisions about who is allowed to speak.

But platforms **are** making **huge** decisions about who is allowed to speak. While they’re generally allowing everyone to have a voice, they’re also very subtly privileging many voices over others. While they’re providing space for even the least among us to have a voice, they’re making far too many of the worst and most powerful among us logarithmically louder.

It’s not broadly obvious, but their algorithms are plainly handing massive megaphones to people who society broadly thinks shouldn’t have a voice at all. These megaphones come in the algorithmic amplification of fringe ideas which accelerate them into the broader public discourse toward the aim of these platforms getting more engagement and therefore more eyeballs for their advertising and surveillance capitalism ends.

The issue we ought to be looking at is the dynamic range between people and the messages they’re able to send through social platforms.

We could also analogize this to the voting situation in the United States. When we disadvantage the poor, disabled, differently abled, or marginalized people from voting while simultaneously giving the uber-rich outsized influence because of what they’re able to buy, we’re imposing the same sorts of problems. Social media is just able to do this at an even larger scale and magnify the effects to make their harms more obvious.

If I follow 5,000 people on social media and one of them is a racist-policy-supporting, white nationalist president, those messages will get drowned out because I can only consume so much content. But when the algorithm consistently pushes that content to the top of my feed and attention, it is only going to accelerate it and create more harm. If I get a linear presentation of the content, then I’d have to actively search that content out for it to cause me that sort of harm.

Annotated on December 11, 2019 at 11:39AM

Moving back to a focus on protocols over platforms can solve many of these problems.

This may also only be the case if large corporations are forced to open up and support those protocols. If my independent website can’t interact freely and openly with something like Twitter on a level playing field, then it really does no good.

Annotated on December 11, 2019 at 11:42AM

And other recent developments suggest that doing so could overcome many of the earlier pitfalls of protocol-based systems, potentially creating the best of all worlds: useful internet services, with competition driving innovation, not controlled solely by giant corporations, but financially sustainable, providing end users with more control over their own data and privacy—and providing mis- and disinformation far fewer opportunities to wreak havoc.

Some of the issue with this then becomes: “Who exactly creates these standards?” We already have issues with mega-corporations like Google wielding outsized influence in their ability to create new standards like Schema.org or AMP.

Who is to say they don’t tacitly design their standards to directly (and only) benefit themselves?

Annotated on December 11, 2019 at 11:47AM

Listened to The internet we lost by Matthew Yglesias from The Weeds | Vox
Function's Anil Dash joins Matt to discuss how Big Tech broke the web and how we can get it back.

Some recent discussion relating to Anil Dash’s overarching thesis of The Web We Lost. He’s also got some discussion related to algorithms and Weapons of Math Destruction. He specifically highlights the idea of context collapse and the need to preface one’s work with the presumption that people coming to it will completely lack your prior background and history with the subject. He also talks about the algorithmic amplification of fringe content, which many people miss. We need a better name for what that is and how to discuss it. I liken it to the introduction of machine guns in early 1900s warfare, which allowed for the mass killing of soldiers and people at a scale previously unseen. People with the technology did better than those without it, but it still gave unfair advantage to some over others. I’ve used the tag social media machine guns before, but we certainly need to give it a concrete (and preferably negative) name.

Bookmarked on December 06, 2019 at 09:54PM

Read What Happened to Tagging? by Alexandra SamuelAlexandra Samuel (JSTOR Daily)
Fourteen years ago, a dozen geeks gathered around our dining table for Tagsgiving dinner. No, that’s not a typo. In 2005, my husband and I celebrated Thanksgiving as “Tagsgiving,” in honor of the web technology that had given birth to our online community development shop. I invited our guests...
It almost sounds like Dr. Samuel could be looking for the IndieWeb community, but just hasn’t run across it yet. Since she’s writing about tags, I can’t help but mischievously snitch tagging it to her, though I’ll do so only in hopes that it might make the internet all the better for it.

Tagging systems were “folksonomies:” chaotic, self-organizing categorization schemes that grew from the bottom up.

There’s something that just feels so wrong in this article about old school tagging and the blogosphere that has a pullquote meant to encourage one to Tweet the quote.
–December 04, 2019 at 11:03AM

I literally couldn’t remember when I’d last looked at my RSS subscriptions.
On the surface, that might seem like a win: Instead of painstakingly curating my own incoming news, I can effortlessly find an endless supply of interesting, worthwhile content that the algorithm finds for me. The problem, of course, is that the algorithm isn’t neutral: It’s the embodiment of Facebook and Twitter’s technology, data analysis, and most crucial, business model. By relying on the algorithm, instead of on tags and RSS, I’m letting an army of web developers, business strategists, data scientists, and advertisers determine what gets my attention. I’m leaving myself vulnerable to misinformation, and manipulation, and giving up my power of self-determination.

–December 04, 2019 at 11:34AM

You might connect with someone who regularly used the same tags that you did, but that was because they shared your interests, not because they had X thousand followers.

An important and sadly underutilized means of discovery. –December 04, 2019 at 11:35AM

I find it interesting that Alexandra’s Twitter display name is AlexandraSamuel.com while the top of her own website has the apparent title @AlexandraSamuel. I don’t think I’ve seen a crossing up of those two sorts of identities before though it has become more common for people to use their own website name as their Twitter name. Greg McVerry is another example of this.

Thanks to Jeremy Cherfas[1] and Aaron Davis[2] for the links to this piece. I suspect that Dr. Samuel will appreciate that we’re talking about this piece using our own websites and tagging them with our own crazy taxonomies. I’m feeling nostalgic now for the old Technorati…

Listened to Triangulation 413 David Weinberger: Everyday Chaos from TWiT.tv

Mikah Sargent speaks with David Weinberger, author of Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility about how AI, big data, and the internet are all revealing that the world is vastly more complex and unpredictable than we've allowed ourselves to see and how we're getting acculturated to these machines based on chaos.

Interesting discussion of systems with built-in openness or flexibility as a feature. They highlight Slack, which has a core product but allows individual users and companies to add custom pieces to use in the way they want. This provides a tremendous amount of additional value that Slack would never have known about or been able to build otherwise. These sorts of products or platforms can not only create their own inherent links, but also add value by flexibly creating links outside of themselves, or by letting external pieces create links to them.

Twitter started out like this in some sense, but ultimately closed itself off–likely to its own detriment.
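The open extension point the episode describes can be sketched as a minimal hook registry: a core product fires named events, and third-party code attaches behavior the core team never anticipated. All names here are hypothetical, standing in for the general pattern rather than Slack’s actual API.

```python
from typing import Callable

class Core:
    """A hypothetical core product that exposes extension hooks."""

    def __init__(self) -> None:
        self._hooks: dict[str, list[Callable[[str], str]]] = {}

    def register(self, event: str, handler: Callable[[str], str]) -> None:
        """Third-party code attaches behavior to a named event."""
        self._hooks.setdefault(event, []).append(handler)

    def emit(self, event: str, payload: str) -> list[str]:
        """The core fires the event; every registered extension runs."""
        return [handler(payload) for handler in self._hooks.get(event, [])]

app = Core()
# Extensions the core authors never had to design themselves:
app.register("message", lambda text: f"translated: {text}")
app.register("message", lambda text: f"archived: {text}")

print(app.emit("message", "hello"))
```

The design choice is that the core only commits to a stable event surface; the open-ended value comes from whatever outsiders plug into it, which is exactly the flexibility Twitter’s early API offered and later withdrew.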

Watched A bold idea to replace politicians by César Hidalgo from ted.com
César Hidalgo has a radical suggestion for fixing our broken political system: automate it! In this provocative talk, he outlines a bold idea to bypass politicians by empowering citizens to create personalized AI representatives that participate directly in democratic decisions. Explore a new way to make collective decisions and expand your understanding of democracy.

“It’s not a communication problem, it’s a cognitive bandwidth problem.”—César Hidalgo

He’s definitely right about the second part, but it’s also a communication problem, because most political speech is nuanced toward untruths, covering up facts and potential outcomes to present the outcome the speaker wants. There’s also far too much of our leaders saying “Do as I say (and attempt to legislate), not as I do.” Examples include legislators working to actively take away access to abortion, or condemning those who are LGBTQ, while doing those very things themselves or within their families, or living out those lifestyles in secret.

“One of the reasons why we use Democracy so little may be because Democracy has a very bad user interface and if we improve the user interface of democracy we might be able to use it more.”—César Hidalgo

This is an interesting idea, but it definitely has many pitfalls with respect to how we know AI systems currently work. We’d need to start small with simpler problems and build our way up to the more complex. Even then, I’m not so sure the complexity issues could ultimately be overcome. On its face, it sounds like he’s relying too much on the old “clockwork” viewpoint of physics, though I know that obviously isn’t (or couldn’t be) his personal viewpoint. There are currently far more pathways for this to become a weapon of math destruction than the utopian tool he’s envisioning.

Read Privacy Is Just the Beginning of the Debate Over Tech by Jathan Sadowski (onezero.medium.com)
Controversial ‘smart locks’ show the way that surveillance tech begins with the poor, before spreading to the rest of us

Instead, when we talk about technology, we should be thinking about power dynamics.

Great piece about ethics in technology.