Read Protocols, Not Platforms: A Technological Approach to Free Speech by Mike Masnick (knightcolumbia.org)
Altering the internet's economic and digital infrastructure to promote free speech

Meanwhile, politicians from the two major political parties have been hammering these companies, albeit for completely different reasons. Some have been complaining about how these platforms have potentially allowed for foreign interference in our elections.3 Others have complained about how they’ve been used to spread disinformation and propaganda.4 Some have charged that the platforms are just too powerful.5 Others have called attention to inappropriate account and content takedowns,6 while some have argued that the attempts to moderate discriminate against certain political viewpoints.

3. A Conversation with Mark Warner: Russia, Facebook and the Trump Campaign, Radio IQ|WVTF Music (Apr. 6, 2018), https://www.wvtf.org/post/conversation-mark-warner-russia-facebook-and-trump-campaign#stream/0 (statement of Sen. Mark Warner (D-Va.): “I first called out Facebook and some of the social media platforms in December of 2016. For the first six months, the companies just kind of blew off these allegations, but these proved to be true; that Russia used their social media platforms with fake accounts to spread false information, they paid for political advertising on their platforms. Facebook says those tactics are no longer allowed—that they’ve kicked this firm off their site, but I think they’ve got a lot of explaining to do.”).
4. Nicholas Confessore & Matthew Rosenberg, Facebook Fallout Ruptures Democrats’ Longtime Alliance with Silicon Valley, N.Y. Times (Nov. 17, 2018), https://www.nytimes.com/2018/11/17/technology/facebook-democrats-congress.html (referencing statement by Sen. Jon Tester (D-Mont.): “Mr. Tester, the departing chief of the Senate Democrats’ campaign arm, looked at social media companies like Facebook and saw propaganda platforms that could cost his party the 2018 elections, according to two congressional aides. If Russian agents mounted a disinformation campaign like the one that had just helped elect Mr. Trump, he told Mr. Schumer, ‘we will lose every seat.’”).
5. Julia Carrie Wong, Break Up Big Tech: Elizabeth Warren Says Facebook Just Proved Her Point, The Guardian (Mar. 11, 2019), https://www.theguardian.com/us-news/2019/mar/11/elizabeth-warren-facebook-ads-break-up-big-tech (statement of Sen. Elizabeth Warren (D-Mass.): “Curious why I think FB has too much power? Let’s start with their ability to shut down a debate over whether FB has too much power. Thanks for restoring my posts. But I want a social media marketplace that isn’t dominated by a single censor.”).
6. Jessica Guynn, Ted Cruz Threatens to Regulate Facebook, Google and Twitter Over Charges of Anti-Conservative Bias, USA Today (Apr. 10, 2019), https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-over-alleged-bias/3423095002/ (statement of Sen. Ted Cruz (R-Tex.): “What makes the threat of political censorship so problematic is the lack of transparency, the invisibility, the ability for a handful of giant tech companies to decide if a particular speaker is disfavored.”).

Most of these problems fall under the subheading of what happens when social media platforms algorithmically push or accelerate content. An individual with an extreme view can publish a piece of vile or disruptive content, and because it’s inflammatory the silos promote it, which attracts even more eyeballs, and the acceleration becomes a positive feedback loop. The social silo benefits from the engagement for advertising purposes, but the community and the commons are irreparably harmed.

If this one piece, the algorithmic promotion, were removed, then the commons would be much healthier: fringe ideas and abuse that are abhorrent to most would recede, and the broader democratic views of the “masses” (good or bad) would prevail. Without the algorithmic push, that sort of fringe content would be marginalized in the same way we want inane content like this morning’s coffee or today’s lunch marginalized.

To analogize: we’ve handed social media machine guns to the most vile and fringe members of our society, and the social platforms are helping them drag the rest of us down.

If all ideas and content were given the same linear, non-promoted presentation, we would all be much better off, and we wouldn’t need nearly as much human curation.

Annotated on December 11, 2019 at 11:13AM

That approach: build protocols, not platforms.

I can now see why @jack made his Twitter announcement this morning. If he opens the platform up and can use that openness to suck up more data, then Twitter’s game could be doing big-data and higher-end algorithmic work on even larger sets of data to drive eyeballs.

I’ll have to think on how one would “capture” a market this way, but Twitter could be reasonably poised to pivot in this direction if they’re really game for going all-in on the idea.

It’s reasonably obvious that Twitter has dramatically slowed its growth and isn’t competing with some of its erstwhile peers. Thus they need to figure out how to turn a relatively large ship without losing value.

Annotated on December 11, 2019 at 11:20AM

It would allow end users to determine their own tolerances for different types of speech but make it much easier for most people to avoid the most problematic speech, without silencing anyone entirely or having the platforms themselves make the decisions about who is allowed to speak.

But platforms **are** making **huge** decisions about who is allowed to speak. While they’re generally allowing everyone to have a voice, they’re also very subtly privileging many voices over others. While they’re providing space for even the least among us to have a voice, they’re making far too many of the worst and most powerful among us logarithmically louder.

It’s not broadly obvious, but their algorithms are plainly handing massive megaphones to people whom society broadly thinks shouldn’t have a voice at all. These megaphones come in the form of algorithmic amplification of fringe ideas, which accelerates them into the broader public discourse so that the platforms can get more engagement, and therefore more eyeballs, for their advertising and surveillance-capitalism ends.

The issue we ought to be looking at is the dynamic range between people and the messages they’re able to send through social platforms.

We could also analogize this to the voting situation in the United States. When we prevent or discourage the poor, disabled, differently abled, or marginalized people from voting while simultaneously giving the uber-rich outsized influence because of what they’re able to buy, we impose the same sorts of problems. Social media is just able to do this at an even larger scale and magnify the effects to make the harms more obvious.

If I follow 5,000 people on social media and one of them is a racist-policy-supporting, white nationalist president, those messages will get drowned out because I can only consume so much content. But when the algorithm consistently pushes that content to the top of my feed and attention, it is only going to accelerate it and create more harm. If I get a linear presentation of the content, then I’d have to actively search that content out for it to cause me that sort of harm.
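
To make the contrast concrete, here is a toy sketch in Python; the scoring rule, accounts, and numbers are invented for illustration and are not any platform's actual ranking algorithm. It simply shows how an engagement-weighted sort surfaces the one inflammatory account out of thousands, while a chronological (linear) presentation leaves it buried.

```python
# Toy illustration of a chronological feed vs. an engagement-ranked feed.
# All data and the scoring rule are made up for the example.
from dataclasses import dataclass
from typing import List


@dataclass
class Post:
    author: str
    text: str
    timestamp: int      # larger = newer
    engagements: int    # likes + shares + replies


def chronological(feed: List[Post]) -> List[Post]:
    # Linear presentation: newest first, every account treated the same.
    return sorted(feed, key=lambda p: p.timestamp, reverse=True)


def engagement_ranked(feed: List[Post]) -> List[Post]:
    # Amplified presentation: posts that draw reactions float to the top,
    # regardless of when they were posted or who wrote them.
    return sorted(feed, key=lambda p: p.engagements, reverse=True)


posts = [
    Post("friend_1", "photo of this morning's coffee", timestamp=100, engagements=3),
    Post("friend_2", "lunch update", timestamp=101, engagements=5),
    Post("fringe_account", "inflammatory post", timestamp=50, engagements=90_000),
]

print([p.author for p in chronological(posts)])      # fringe post buried by recency
print([p.author for p in engagement_ranked(posts)])  # fringe post at the top
```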

Annotated on December 11, 2019 at 11:39AM

Moving back to a focus on protocols over platforms can solve many of these problems.

This may also only be the case if large corporations are forced to open up and support those protocols. If my independent website can’t interact freely and openly with something like Twitter on a level playing field, then it really does no good.

Annotated on December 11, 2019 at 11:42AM

And other recent developments suggest that doing so could overcome many of the earlier pitfalls of protocol-based systems, potentially creating the best of all worlds: useful internet services, with competition driving innovation, not controlled solely by giant corporations, but financially sustainable, providing end users with more control over their own data and privacy—and providing mis- and disinformation far fewer opportunities to wreak havoc.

Part of the issue with this then becomes: “Who exactly creates these standards?” We already have issues with mega-corporations like Google wielding outsized influence in their ability to create new standards like Schema.org or AMP.

Who is to say they don’t tacitly design their standards to directly (and only) benefit themselves?

Annotated on December 11, 2019 at 11:47AM

👓 We Are All Public Figures Now | Ella Dawson

Read We Are All Public Figures Now by Ella Dawson (Ella Dawson)
A woman gets on a plane. She’s flying from New York to Dallas, where she lives and works as a personal trainer. A couple asks her if she’ll switch seats with one of them so that they can sit together, and she agrees, thinking it’s her good deed for the day. She chats with her new seatmate and ...
This story brings up some interesting questions about private/public as well as control on the internet. Social media is certainly breaking some of our prior social norms.

Highlights, Quotes, Annotations, & Marginalia

To summarize his argument, the media industry wants to broaden our definition of the public so that it will be fair game for discussion and content creation, meaning they can create more articles and videos, meaning they can sell more ads. The tech industry wants everything to be public because coding for privacy is difficult, and because our data, if public, is something they can sell. Our policy makers have failed to define what’s public in this digital age because, well, they don’t understand it and wouldn’t know where to begin. And also, because lobbyists don’t want them to.  

We actively create our public selves, every day, one social media post at a time.  

Even when the attention is positive, it is overwhelming and frightening. Your mind reels at the possibility of what they could find: your address, if your voting records are logged online; your cellphone number, if you accidentally included it on a form somewhere; your unflattering selfies at the beginning of your Facebook photo archive. There are hundreds of Facebook friend requests, press requests from journalists in your Instagram inbox, even people contacting your employer when they can’t reach you directly. This story you didn’t choose becomes the main story of your life. It replaces who you really are as the narrative someone else has written is tattooed onto your skin.  

What Blair did and continues to do as she stokes the flames of this story despite knowing this woman wants no part of it goes beyond intrusive. It is selfish, disrespectful harassment.  

Previously this was the purview of journalists, who typically had some ethics as well as editors to prevent this sort of thing from happening. Now the average citizen has been given the same tools journalists have always had, but without any training in their use.

How can we create some feedback mechanism to improve the situation? Should these same things be used against the perpetrators to show them how bad things could be?  

A friend of mine asked if I’d thought through the contradiction of criticizing Blair publicly like this, when she’s another not-quite public figure too.  

Did this really happen? Or is the author inventing it to defuse potential criticism, since she’s writing about the same story herself and only helping to propagate it?

There’s definitely a need to write about this issue, so kudos for that. Ella also deftly leaves out the name of the mystery woman, I’m sure on purpose. But she does include enough breadcrumbs to make the rest of the story discoverable, so that one could jump from here to participate in the piling on. I do appreciate that it doesn’t appear that she’s given Blair any links in the process, which for a story like this is some subtle internet shade.

But Blair is not just posting about her own life; she has taken non-consenting parties along for the ride.  

the woman on the plane has deleted her own Instagram account after receiving violent abuse from the army Blair created.  

Feature request: the ability to make one’s social media account “disappear” temporarily while a public “attack” like this is happening.

We need a great name for this. Publicity ghosting? Fame cloaking?

👓 “Did you even READ the piece?” This startup wants to make that question obsolete for commenters | Nieman Lab

Replied to “Did you even READ the piece?” This startup wants to make that question obsolete for commenters by Christine Schmidt (Nieman Lab)
The battle against the uncivil comments section is also a battle against high bounce rates for reallyread.it.
This is an intriguing little company. I could see this being some great opening infrastructure for creating read posts.

On my own website I’ve got a relative hierarchy of bookmarks, likes, reads, replies, follows, and favorites. (A read post indicates that I’ve actually read an entire piece–something I wish more websites and social platforms supported in lieu of allowing people to link or retweet content they haven’t personally vetted.) Because I’m posting this content on my personal site, where it’s visible to others as part of my broader online identity, I take it far more seriously than if I were tossing any old comment into an empty box on someone else’s website. To some extent, this is the type of value that Facebook’s embedded comments sections try to enforce: because a commenter is posting using an identity that their friends, family, and community can see, there’s a higher likelihood that they’ll adhere to the social contract and be civil. I suspect that Nieman Lab is using Disqus so that commenters are similarly tied to some sort of social identity, though in a world of easy-to-create, throw-away social accounts perhaps even this isn’t enough.

While there’s a lot to be said about the technology and research that could be done with such a tool as outlined in the article, I think it also ought to be bundled with a requirement that people use some part of an online social identity they’re “stuck to” in some sense.

The best model I’ve seen for this in the web space is for journalism sites to support the W3C’s Webmention recommendation. They post and host their content as always, but they farm out their comment sections by accepting webmentions. Readers write their comments on their own websites or in other areas of the social web and then send webmentions back to the outlet, which can moderate and display them as part of the open discourse. While I have a traditional “old school” commenting block on my website, the replies and reactions I get to my content are so much richer when they’re sent via webmention from people posting on their own sites.
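
For concreteness, here is a minimal sketch of the sending side of that flow in Python using the `requests` library. The URLs are hypothetical, and a production sender would follow the W3C spec more carefully (resolving relative endpoint URLs, handling `rel` attributes in any order, and so on); this just shows the two steps: discover the outlet’s webmention endpoint, then notify it that your post links to its article.

```python
# Webmention sender: a rough sketch, not a full implementation of the spec.
import re
from typing import Optional

import requests


def discover_endpoint(target: str) -> Optional[str]:
    """Find the target page's webmention endpoint via the HTTP Link header
    or a <link>/<a> element with rel="webmention" in the HTML."""
    resp = requests.get(target, timeout=10)

    # 1. HTTP Link header, e.g.  Link: <https://example.com/webmention>; rel="webmention"
    match = re.search(r'<([^>]+)>\s*;\s*rel="?webmention"?', resp.headers.get("Link", ""))
    if match:
        return match.group(1)

    # 2. HTML element (simplified: assumes rel appears before href and an absolute URL)
    match = re.search(
        r'<(?:link|a)[^>]*rel="[^"]*webmention[^"]*"[^>]*href="([^"]+)"',
        resp.text,
    )
    return match.group(1) if match else None


def send_webmention(source: str, target: str) -> int:
    """Tell the target's endpoint that `source` links to `target`."""
    endpoint = discover_endpoint(target)
    if endpoint is None:
        raise RuntimeError("Target does not advertise a webmention endpoint")
    resp = requests.post(endpoint, data={"source": source, "target": target})
    return resp.status_code  # 200/201/202 generally mean the mention was accepted or queued


if __name__ == "__main__":
    # Hypothetical URLs: my reply post and the outlet article it responds to.
    print(send_webmention(
        source="https://example-personal-site.com/replies/2019/on-that-article",
        target="https://example-outlet.com/2019/12/some-article",
    ))
```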

I’ve also recently been experimenting with helping some small outlets receive webmentions. They can display a wider range of reactions to their content, including bookmarks, likes, favorites, reads, and even traditional comments. Because webmentions are two-way links, they’re auditable and provide a better monolithic means of “social proof” for an article than the dozens of social widgets with disjointed UI that most outlets are currently using.
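
That auditability comes from the verification step the spec requires on the receiving side: before a mention is accepted (let alone displayed), the receiver re-fetches the claimed source and confirms it actually links to the target. A rough sketch of that check, again in Python with hypothetical URLs and a deliberately simplified substring match, might look like:

```python
# Receiver-side verification: only accept a webmention if the source really
# links back to the target. Moderation and display happen after this check.
import requests


def verify_webmention(source: str, target: str) -> bool:
    """Fetch the source document and confirm it contains a link to the target."""
    try:
        resp = requests.get(source, timeout=10)
        resp.raise_for_status()
    except requests.RequestException:
        return False
    # Simplified: a stricter receiver would parse the HTML and check href values.
    return target in resp.text


# Hypothetical usage: queue verified mentions for an editor to moderate.
if verify_webmention(
    source="https://example-reader-site.com/notes/my-reaction",
    target="https://example-outlet.com/2019/12/some-article",
):
    print("Verified webmention; add to the moderation queue")
```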

Perhaps this is the model that journalism outlets should begin to support?