Read Harmful speech as the new porn by Jeff Jarvis (BuzzMachine)
In 1968, Lyndon Johnson appointed a National Commission on Obscenity and Pornography to investigate the supposed sexual scourge corrupting America’s youth...
I kept getting interrupted while reading this. I started around 8:30 or so… A fascinating look, and interesting thoughts from Jeff here.

📑 Highlights and Annotations

For the parallels between the fight against harmful and hateful speech online today and the crusade against sexual speech 50 years ago are stunning: the paternalistic belief that the powerless masses (but never the powerful) are vulnerable to corruption and evil with mere exposure to content; the presumption of harm without evidence and data; cries calling for government to stamp out the threat; confusion about the definitions of what’s to be forbidden; arguments about who should be responsible; the belief that by censoring content other worries can also be erased.

Annotated on January 18, 2020 at 08:29AM

One of the essays comes from Charles Keating, Jr., a conservative whom Nixon added to the body after having created a vacancy by dispatching another commissioner to be ambassador to India. Keating was founder of Citizens for Decent Literature and a frequent filer of amicus curiae briefs to the Supreme Court in the Ginzberg, Mishkin, and Fanny Hill obscenity cases. Later, Keating was at the center of the 1989 savings and loan scandal — a foretelling of the 2008 financial crisis — which landed him in prison. Funny how our supposed moral guardians — Nixon or Keating, Pence or Graham — end up disgracing themselves; but I digress.

Annotated on January 18, 2020 at 08:40AM

The fear then was the corruption of the masses; the fear now is microtargeting drilling directly into the heads of a strategic few.

Annotated on January 18, 2020 at 08:42AM

McCarthy next asks: “Who selects what is to be recorded or transmitted to others, since not everything can be recorded?” But now, everything can be recorded and transmitted. That is the new fear: too much speech.

Annotated on January 18, 2020 at 08:42AM

Many of the book’s essayists defend freedom of expression over freedom from obscenity. Says Rabbi Arthur Lelyveld (father of Joseph, who would become executive editor of The New York Times): “Freedom of expression, if it is to be meaningful at all, must include freedom for ‘that which we loathe,’ for it is obvious that it is no great virtue and presents no great difficulty for one to accord freedom to what we approve or to that to which we are indifferent.” I hear too few voices today defending speech of which they disapprove.

I might take issue with this statement and possibly with a piece of Jarvis’ argument here. I agree that the notion of “too much speech” is a moral panic, since each individual has a hard limit on how much content they can consume.

The issue I see is that while anyone can say almost anything, the problem arises when a handful of monopolistic players like Facebook or YouTube can use algorithms to programmatically entice people to click on and consume fringe content in mass quantities, which subtly but assuredly nudges the populace and electorate in an unnatural direction. Most of the history of human society and interaction has tended toward a centralizing consensus within which we can manage to cohere. The large-scale effects of algorithm-driven companies putting a heavy hand on the scales are sure to create unintended consequences, and they’re able to do it at scales that the Johnson and Nixon administrations could only have wished for.

If we use the evolution of weaponry as an analogy, I might suggest we’ve just passed out of the era of single-shot handguns and into the era of machine guns. What is society to do when the next evolution brings the era of social media atomic weapons?
Annotated on January 18, 2020 at 10:42AM

Truth is hard.

Annotated on January 18, 2020 at 10:42AM

As an American and a staunch defender of the First Amendment, I’m allergic to the notion of forbidden speech. But if government is going to forbid it, it damned well better clearly define what is forbidden or else the penumbra of prohibition will cast a shadow and chill on much more speech.

Perhaps it’s not what people are saying so much as that platforms are accelerating it algorithmically? It’s one thing for someone to foment sedition, praise Hitler, or yell their religious screed on the public street corner. The problem comes when powerful interests in the form of governments, corporations, or others provide them with megaphones and tacitly force audiences to listen.

When Facebook or YouTube optimize for clicks keyed on social and psychological constructs using fringe content, we’re essentially saying that machines, bots, and extreme fringe elements are not only people, but that they have free speech rights, and that they can be prioritized with the reach and exposure of major national newspapers and national television in the media model of the 1980s.

I highly suspect that if real people’s social media reach were linear and unaccelerated by algorithms we wouldn’t be in the morass we’re generally seeing on many platforms.
Annotated on January 18, 2020 at 11:08AM

“Privacy in law means various things,” he writes; “and one of the things it means is protection from intrusion.” He argues that in advertising, open performance, and public-address systems, “these may validly be regulated” to prevent porn from being thrust upon the unsuspecting and unwilling. It is an extension of broadcast regulation.
And that is something we grapple with still: What is shown to us, whether we want it shown to us, and how it gets there: by way of algorithm or editor or bot. What is our right not to see?

Privacy as freedom from is an important thing. I like this idea.
Annotated on January 18, 2020 at 11:20AM

The Twenty-Six Words that Created the Internet is Jeff Kosseff’s definitive history and analysis of the current fight over Section 230, the fight over who will be held responsible to forbid speech. In it, Kosseff explains how debate over intermediary liability, as this issue is called, stretches back to a 1950s court fight, Smith v. California, about whether an L.A. bookseller should have been responsible for knowing the content of every volume on his shelves.

For me this is probably the key idea. Facebook doesn’t need to be responsible for everything that their users post, but when they cross the line into actively, algorithmically promoting and pushing that content into their users’ feeds for active consumption, then they **do** have a responsibility for that content.

By analogy, imagine the trusted local bookstore mentioned above. If there are millions of books there, the user walking in has the choice to make their selection in some logical manner. But if the bookseller has the secret ability to consistently walk up to children and put porn into their hands, or to actively herd them into the adult sections to force that exposure on them (and can do it without anyone else realizing it), then that is the problem. Society at large would think this even more reprehensible if it realized that local governments or political parties could pay the bookseller to do it.

In case the reader isn’t following the analogy, this is exactly what some social platforms like Facebook are allowing our politicians to do. They’re taking payment from politicians to actively lie, tell untruths, and create fear in a highly targeted manner without the rest of society being able to see or hear those messages. Some of these messages are of the type that, if they were picked up on an open microphone and broadcast outside of the private group for which they were intended, would be career-ending events.

Without that responsibility, we’re actively stifling conversation in the public sphere and actively empowering the fringes. This sort of active, targeted fringecasting is preventing social cohesion, consensus, and compromise, and is instead pulling us apart.

Perhaps the answer for Facebook is to let them take the political ad money for these niche ads, but then force them to broadcast the ads to everyone on the platform instead of just the small niche audience? Then we could all see who our politicians really are.
Annotated on January 18, 2020 at 11:50AM

Of course, it’s even more absurd to expect Facebook or Twitter or YouTube to know and act on every word or image on their services than it was to expect bookseller Eleazer Smith to know the naughty bits in every book on his shelves.

Here’s the point! We shouldn’t expect them to know, but similarly, if they don’t know, then they should not be allowed to arbitrarily privilege some messages over others in how those messages are distributed on the platform. Why is YouTube accelerating messages about Nazis instead of videos of my ham sandwich at lunch? Because they’re making money on the Nazis.
Annotated on January 18, 2020 at 12:07PM

there must be other factors that got us Trump

Primarily people not really knowing how racist and horrible he really was, in addition to his inability to think clearly, logically, or linearly. He espoused a dozen or so simple aphorisms like “Build the wall,” but was absolutely unable to articulate a plan that went beyond the aphorism. How would it be implemented and funded, and what short- and long-term issues would result? He had none of those answers; many others presumed the details would be worked out by smart, intelligent people rather than by the “just do it” managerial style he has been shown to espouse.

Too many Republicans, particularly at the end, said “he’s not really that bad,” and now that he’s in power and more authoritarian than they expected, they are too weak to admit their mistake.
Annotated on January 18, 2020 at 12:28PM

Axel Bruns’ dismantling of the filter bubble.

research to read
Annotated on January 18, 2020 at 12:45PM

“To affirm freedom is not to applaud that which is done under its sign,” Lelyveld writes.

Annotated on January 18, 2020 at 12:51PM

Read Protocols, Not Platforms: A Technological Approach to Free Speech by Mike Masnick (knightcolumbia.org)
Altering the internet's economic and digital infrastructure to promote free speech

Meanwhile, politicians from the two major political parties have been hammering these companies, albeit for completely different reasons. Some have been complaining about how these platforms have potentially allowed for foreign interference in our elections.3 Others have complained about how they’ve been used to spread disinformation and propaganda.4 Some have charged that the platforms are just too powerful.5 Others have called attention to inappropriate account and content takedowns,6 while some have argued that the attempts to moderate discriminate against certain political viewpoints.

3. A Conversation with Mark Warner: Russia, Facebook and the Trump Campaign, Radio IQ|WVTF Music (Apr. 6, 2018), https://www.wvtf.org/post/conversation-mark-warner-russia-facebook-and-trump-campaign#stream/0 (statement of Sen. Mark Warner (D-Va.): “I first called out Facebook and some of the social media platforms in December of 2016. For the first six months, the companies just kind of blew off these allegations, but these proved to be true; that Russia used their social media platforms with fake accounts to spread false information, they paid for political advertising on their platforms. Facebook says those tactics are no longer allowed—that they’ve kicked this firm off their site, but I think they’ve got a lot of explaining to do.”).
4. Nicholas Confessore & Matthew Rosenberg, Facebook Fallout Ruptures Democrats’ Longtime Alliance with Silicon Valley, N.Y. Times (Nov. 17, 2018), https://www.nytimes.com/2018/11/17/technology/facebook-democrats-congress.html (referencing statement by Sen. Jon Tester (D-Mont.): “Mr. Tester, the departing chief of the Senate Democrats’ campaign arm, looked at social media companies like Facebook and saw propaganda platforms that could cost his party the 2018 elections, according to two congressional aides. If Russian agents mounted a disinformation campaign like the one that had just helped elect Mr. Trump, he told Mr. Schumer, ‘we will lose every seat.’”).
5. Julia Carrie Wong, Break Up Big Tech: Elizabeth Warren Says Facebook Just Proved Her Point, The Guardian (Mar. 11, 2019), https://www.theguardian.com/us-news/2019/mar/11/elizabeth-warren-facebook-ads-break-up-big-tech (statement of Sen. Elizabeth Warren (D-Mass.): “Curious why I think FB has too much power? Let’s start with their ability to shut down a debate over whether FB has too much power. Thanks for restoring my posts. But I want a social media marketplace that isn’t dominated by a single censor.”).
6. Jessica Guynn, Ted Cruz Threatens to Regulate Facebook, Google and Twitter Over Charges of Anti-Conservative Bias, USA Today (Apr. 10, 2019), https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-over-alleged-bias/3423095002/ (statement of Sen. Ted Cruz (R-Tex.): “What makes the threat of political censorship so problematic is the lack of transparency, the invisibility, the ability for a handful of giant tech companies to decide if a particular speaker is disfavored.”).

Most of these problems fall under the subheading of what results when social media platforms algorithmically push or accelerate content. An individual with an extreme view can publish a piece of vile or disruptive content, and because it’s inflammatory the silos promote it, which provides even more eyeballs, and the acceleration becomes a positive feedback loop. As a result the social silo benefits from the engagement for advertising purposes, but the community and the commons are irreparably harmed.

If this one piece were removed, then the commons would be much healthier: fringe ideas and abuse that are abhorrent to most would recede, and the broader democratic views of the “masses” (good or bad) would prevail. Without the algorithmic push, fringe content would be marginalized in the same way we want inane content like this morning’s coffee or today’s lunch marginalized.

To analogize it, we’ve provided social media machine guns to the most vile and fringe members of our society and the social platforms are helping them drag the rest of us down.

If all ideas and content were given the same linear, non-promoted presentation, we would all be much better off, and we wouldn’t need as much human curation.
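To make the contrast concrete, here is a toy sketch in Python (purely illustrative: the posts and engagement numbers are invented, and a simple engagement sort stands in for whatever proprietary ranking the platforms actually use) of a linear, chronological feed versus an engagement-ranked one:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int   # higher = newer
    engagement: int  # clicks/reactions the platform has observed so far

posts = [
    Post("coffee_blogger", timestamp=1, engagement=4),
    Post("lunch_pics", timestamp=2, engagement=6),
    Post("fringe_account", timestamp=3, engagement=900),  # inflammatory, heavily clicked
    Post("local_news", timestamp=4, engagement=30),
]

# Linear feed: strictly reverse-chronological; every author gets identical treatment.
linear_feed = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Engagement-ranked feed: whatever is already being clicked gets pushed to the top,
# which earns it still more clicks (the positive feedback loop described above).
ranked_feed = sorted(posts, key=lambda p: p.engagement, reverse=True)

print([p.author for p in linear_feed])
print([p.author for p in ranked_feed])
```

In the linear ordering the fringe account is just one post among equals; in the engagement ordering it leads the feed, and every extra click it earns there only reinforces its position.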

Annotated on December 11, 2019 at 11:13AM

That approach: build protocols, not platforms.

I can now see why @jack made his Twitter announcement this morning. If he opens up and can use that openness to suck up more data, then Twitter’s game could potentially be doing big data and higher-end algorithmic work on even larger sets of data to drive eyeballs.

I’ll have to think on how one would “capture” a market this way, but Twitter could be reasonably poised to pivot in this direction if they’re really game for going all-in on the idea.

It’s reasonably obvious that Twitter has dramatically slowed its growth and isn’t competing with some of its erstwhile peers. Thus they need to figure out how to turn a relatively large ship without losing value.

Annotated on December 11, 2019 at 11:20AM

It would allow end users to determine their own tolerances for different types of speech but make it much easier for most people to avoid the most problematic speech, without silencing anyone entirely or having the platforms themselves make the decisions about who is allowed to speak.

But platforms **are** making **huge** decisions about who is allowed to speak. While they’re generally allowing everyone to have a voice, they’re also very subtly privileging many voices over others. While they’re providing space for even the least among us to have a voice, they’re making far too many of the worst and most powerful among us algorithmically louder.

It’s not broadly obvious, but their algorithms are plainly handing massive megaphones to people whom society broadly thinks shouldn’t have a voice at all. These megaphones take the form of algorithmic amplification of fringe ideas, accelerating them into the broader public discourse so that the platforms gain more engagement, and therefore more eyeballs, for their advertising and surveillance capitalism ends.

The issue we ought to be looking at is the dynamic range between people and the messages they’re able to send through social platforms.

We could also analogize this to the voting situation in the United States. When we disadvantage the poor, disabled, differently abled, or marginalized people from voting while simultaneously giving the uber-rich outsized influence because of what they’re able to buy, we’re imposing the same sorts of problems. Social media is just able to do this at an even larger scale and magnify the effects to make their harms more obvious.

If I follow 5,000 people on social media and one of them is a racist-policy-supporting, white nationalist president, those messages will get drowned out because I can only consume so much content. But when the algorithm consistently pushes that content to the top of my feed and attention, it is only going to accelerate it and create more harm. If I get a linear presentation of the content, then I’d have to actively search that content out for it to cause me that sort of harm.
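A rough back-of-the-envelope sketch of that dilution argument (the 5,000 follows come from the paragraph above; the fixed attention budget of 100 posts is my own assumption):

```python
# Toy model: you follow 5,000 accounts, each posting once, but you only
# get through 100 posts. How likely are you to see any one account's post?
FOLLOWED = 5000
ATTENTION = 100  # posts you actually read

# Linear feed: each account's single post has an equal chance of landing
# in the slice of the feed you reach before running out of time.
linear_chance = ATTENTION / FOLLOWED

# Engagement-ranked feed: the highest-engagement post always ranks first,
# so the amplified account occupies one of your reading slots every time.
ranked_chance = 1.0

print(f"linear: {linear_chance:.0%}, ranked: {ranked_chance:.0%}")
```

Under these assumptions a linear feed surfaces any one account about 2% of the time, while the ranked feed guarantees the amplified account a slot in your attention on every visit.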

Annotated on December 11, 2019 at 11:39AM

Moving back to a focus on protocols over platforms can solve many of these problems.

This may also only be the case if large corporations are forced to open up and support those protocols. If my independent website can’t interact freely and openly with something like Twitter on a level playing field, then it really does no good.

Annotated on December 11, 2019 at 11:42AM

And other recent developments suggest that doing so could overcome many of the earlier pitfalls of protocol-based systems, potentially creating the best of all worlds: useful internet services, with competition driving innovation, not controlled solely by giant corporations, but financially sustainable, providing end users with more control over their own data and privacy—and providing mis- and disinformation far fewer opportunities to wreak havoc.

Some of the issue with this then becomes: “Who exactly creates these standards?” We already have issues with mega-corporations like Google wielding outsized influence in their ability to create new standards like Schema.org or AMP.

Who is to say they don’t tacitly design their standards to directly (and only) benefit themselves?

Annotated on December 11, 2019 at 11:47AM

🎧 A Century of Free Speech | On the Media | WNYC Studios

Listened to A Century of Free Speech by Bob Garfield from On the Media | WNYC Studios

Where do our modern notions of free speech come from? A new book goes back to 1919.

For this week's pod extra, we feature a conversation from WNYC's Brian Lehrer Show. Brian talked with Columbia University President Lee Bollinger and University of Chicago Law Professor Geoffrey Stone, editors of The Free Speech Century, a collection of essays by leading scholars marking 100 years since the Supreme Court issued the three decisions that established the modern notion of free speech.

Whether it’s fake news or money in politics, we’re still arguing over the First Amendment, and their book lays out the origins of the argument just after the first World War.  

👓 Judge may jail Roger Stone after Instagram post | NBC News

Read Judge may jail Roger Stone after Instagram post (NBC News)
Judge Amy Berman Jackson will rule whether he has violated the conditions of release after a hearing Thursday.

Deplatforming and making the web a better place

I’ve spent some time this morning thinking about the deplatforming of the abhorrent social media site Gab.ai by Google, Apple, Stripe, PayPal, and Medium following the Tree of Life shooting in Pennsylvania. I’ve created a deplatforming page on the IndieWeb wiki with some initial background and history. I’ve also gone back and tagged (with “deplatforming”) a few articles I’ve read or podcasts I’ve listened to recently that may have some interesting bearing on the topic.

The particular design question I’m personally looking at is roughly:

How can we reshape the web and social media in a way that allows individuals and organizations a platform for their own free speech and communication without accelerating or amplifying the voices of the abhorrent fringes of people espousing broadly anti-social values like virulent discrimination, racism, fascism, etc.?

In some sense, the advertising driven social media sites like Facebook, Twitter, et al. have given the masses the equivalent of not simply a louder voice within their communities, but potential megaphones to audiences previously far, far beyond their reach. When monetized against the tremendous value of billions of clicks, there is almost no reason for these corporate giants to filter or moderate socially abhorrent content.  Their unfiltered and unregulated algorithms compound the issue from a societal perspective. I look at it in some sense as the equivalent of the advent of machine guns and ultimately nuclear weapons in 20th century warfare and their extreme effects on modern society.

The flip side of the coin is also potentially to allow users the ability to better control and/or filter out what they’re presented on platforms and thus consuming, so solutions can relate to both the output as well as the input stages.

Comments and additions to the page (or even here below) particularly with respect to positive framing and potential solutions on how to best approach this design hurdle for human communication are more than welcome.


Deplatforming

Deplatforming or no platform is a form of banning in which a person or organization is denied the use of a platform (physical or increasingly virtual) on which to speak.

In addition to the banning of those with socially unacceptable viewpoints, there has been a long history of marginalized voices (particularly trans, LGBTQ, sex workers, etc.) being deplatformed in systematic ways.

The banning can be from any of a variety of spaces, ranging from physical meeting spaces or lectures and journalistic coverage in newspapers or on television to domain name registration, web hosting, and even specific social media platforms like Facebook or Twitter. Some have used the term as narrowly as having their Twitter “verified” status removed.

“We need to puncture this myth that [deplatforming]’s only affecting far-right people. Trans rights activists, Black Lives Matter organizers, and LGBTQI people have been demonetized or deranked. The reason we’re talking about far-right people is that they have coverage on Fox News and representatives in Congress holding hearings. They already have political power.” — Deplatforming Works: Alex Jones says getting banned by YouTube and Facebook will only make him stronger. The research says that’s not true., Motherboard, 2018-08-10

Examples

Glenn Beck

Glenn Beck parted ways with Fox News in what some consider to have been a network deplatforming. He ultimately moved to his own platform consisting of his own website.

Reddit Communities

Reddit has previously banned several communities on its platform. Many of the individual users decamped to Voat, which like Gab could potentially face its own subsequent deplatforming.

Milo Yiannopoulos

Milo Yiannopoulos, the former Breitbart personality, was permanently banned from Twitter in 2016 for inciting targeted harassment campaigns against actress Leslie Jones. He resigned from Breitbart over comments he made about pedophilia on a podcast. These also resulted in the termination of a book deal with Simon & Schuster as well as the cancellation of multiple speaking engagements at Universities.

The Daily Stormer

Neo-Nazi site The Daily Stormer was deplatformed by Cloudflare in the wake of 2017’s “Unite the Right” rally in Charlottesville. Following criticism, Matthew Prince, Cloudflare CEO, announced that he was ending the Daily Stormer’s relationship with Cloudflare, which provides services for protecting sites against distributed denial-of service (DDoS) attacks and maintaining their stability.

Alex Jones/Infowars

Alex Jones and his Infowars were deplatformed by Apple, Spotify, YouTube, and Facebook in late summer 2018 for his network’s false claims about the Newtown shooting.

Gab

Gab.ai was deplatformed from PayPal, Stripe, Medium, Apple, and Google as a result of its providing a platform for alt-right and racist groups, as well as for the shooter in the Tree of Life synagogue shooting in October 2018.

Gab.com is under attack. We have been systematically no-platformed by App Stores, multiple hosting providers, and several payment processors. We have been smeared by the mainstream media for defending free expression and individual liberty for all people and for working with law enforcement to ensure that justice is served for the horrible atrocity committed in Pittsburgh. Gab will continue to fight for the fundamental human right to speak freely. As we transition to a new hosting provider Gab will be inaccessible for a period of time. We are working around the clock to get Gab.com back online. Thank you and remember to speak freely.

—from the Gab.ai homepage on 2018-10-29

See Also

  • web hosting
  • why
  • shadow banning
  • NIPSA
  • demonetization – a practice (particularly leveled at YouTube) of preventing users and voices from monetizing their channels. This can have a chilling effect on people who rely on traffic for income to support their work (see also 1)

📑 Fark Banned Misogyny to Facilitate Free Speech | Motherboard

Annotated Fark Banned Misogyny to Facilitate Free Speech by Jason Koebler (Motherboard)
"I am really pleased to see different sites deciding not to privilege aggressors' speech over their targets'," Phillips said. "That tends to be the default position in so many online 'free speech' debates which suggest that if you restrict aggressors' speech, you're doing a disservice to America—a position that doesn't take into account the fact that antagonistic speech infringes on the speech of those who are silenced by that kind of abuse."  

👓 Fark Banned Misogyny to Facilitate Free Speech | Motherboard

Read Fark Banned Misogyny to Facilitate Free Speech (Motherboard)
"Now that we've done it I feel like a complete ass for waiting so long. So will everyone else who makes the same call."

👓 ‘By whatever means necessary’: The origins of the ‘no platform’ policy | Hatful of History

Read ‘By whatever means necessary’: The origins of the ‘no platform’ policy by Dr Evan Smith (Hatful of History)
Recently the concept of ‘no platform’ was in the news again when there were attempts to cancel a talk by Germaine Greer at Cardiff University. While there is no doubt that the use of ‘no platform’ has expanded since its first use in the 1970s, the term is bandied about in the media with little definition and understanding of how it was developed as a specific response to the fascism of the National Front (and later the British National Party). This post looks back at the origins of the term and how it was developed into a practical anti-fascist strategy.
hat tip: Kevin Marks

👓 Free Speech in the Age of Algorithmic Megaphones | Wired

Read Free Speech in the Age of Algorithmic Megaphones (WIRED)
Researchers have long known that local actors—as well as Russia—use manipulative tactics to spread information online. With Facebook suspending a slew of domestic accounts, a difficult reckoning is upon us.
We need something in the digital world that helps put the brakes on gossip and falsehoods, much the same way real-life social networks tend to slow these things down. Online social networks that gamify and monopolize based on clicks using black-box algorithms are destroying some of the fabric of our society.

Lies were able to go across the world before the truth had a chance to put on its breeches in the past, but their ability to do so now is even worse. We need to figure out a way to flip the script.

🔖 John Stuart Mill’s Ideas on Free Speech Illustrated

Bookmarked John Stuart Mill's Ideas on Free Speech Illustrated (Heterodox Academy)
Heterodox Academy has produced a new book based on John Stuart Mill’s famous essay On Liberty to make it accessible for the 21st century. Here’s what makes our edition special:
1) It’s just the second chapter (out of 5), because that chapter gives the best arguments ever made for the importance of free speech and viewpoint diversity;
2) We have reduced that chapter by 50% to remove repetitions and historical references that would be obscure today, producing a very readable 7000 word essay;
3) Editors Richard Reeves (a biographer of Mill) and Jon Haidt (a social psychologist) have written a brief introduction to link Mill and his time to the issues of our time, and
4) Artist Dave Cicirelli has created 16 gorgeous original illustrations that amplify the power of Mill’s metaphors and arguments.

All Minus One is ideal for use in college courses, advanced high school classes, or in any organization in which people would benefit from productive disagreement. We offer free and paid versions of the book below.
Caveat emptor: though this edition appears to be high quality, it is heavily edited and excerpted.

h/t Claire Lehmann