Replied to a thread by @weh_kaniini and @FaebornNetworks (Twitter)
If you can make it, we’d love to have you join us at IndieWebCamp West this weekend. We’re overdue to build out more functionality for audience, privacy, and safety. Perhaps suggest it as a session if you can join?
Read a post by Bix (bix.blog)

Manton mentioned the much-anticipated ability to show replies on your blog posts, so of course I peeked at how it worked and at least for the moment have it running here, too. I do wonder how, or I guess if, this might affect how people reply to things in Timeline, since those replies no longer are restricted to appearing in Timeline? I haven’t decided yet what my decision will be here, because I have moderation questions, but for the time being I’ll leave it active.

Watching the conversations around this new feature on Micro.blog.
Read How to Change Your Off-Facebook Activity Settings by Gennie Gebhart (Electronic Frontier Foundation)
Facebook's long-awaited Off-Facebook Activity tool started rolling out today. While it's not a perfect measure, and we still need stronger data privacy laws, this tool is a good step toward greater transparency and user control regarding third-party tracking. We hope other companies...
Bookmarked January 29, 2020 at 06:45AM
Bookmarked Permanent Record by Edward Snowden (Metropolitan Books)

In 2013, twenty-nine-year-old Edward Snowden shocked the world when he broke with the American intelligence establishment and revealed that the United States government was secretly pursuing the means to collect every single phone call, text message, and email. The result would be an unprecedented system of mass surveillance with the ability to pry into the private lives of every person on earth. Six years later, Snowden reveals for the very first time how he helped to build this system and why he was moved to expose it.

Spanning the bucolic Beltway suburbs of his childhood and the clandestine CIA and NSA postings of his adulthood, Permanent Record is the extraordinary account of a bright young man who grew up online―a man who became a spy, a whistleblower, and, in exile, the Internet’s conscience. Written with wit, grace, passion, and an unflinching candor, Permanent Record is a crucial memoir of our digital age and destined to be a classic.

Hat tip: Strong recommendation by Vicki Boykis (#)
Read Harmful speech as the new porn by Jeff Jarvis (BuzzMachine)
In 1968, Lyndon Johnson appointed a National Commission on Obscenity and Pornography to investigate the supposed sexual scourge corrupting America’s youth...
I kept getting interrupted while reading this. I started around 8:30 or so… Fascinating look and thought-provoking arguments from Jeff here.

📑 Highlights and Annotations

For the parallels between the fight against harmful and hateful speech online today and the crusade against sexual speech 50 years ago are stunning: the paternalistic belief that the powerless masses (but never the powerful) are vulnerable to corruption and evil with mere exposure to content; the presumption of harm without evidence and data; cries calling for government to stamp out the threat; confusion about the definitions of what’s to be forbidden; arguments about who should be responsible; the belief that by censoring content other worries can also be erased.

Annotated on January 18, 2020 at 08:29AM

One of the essays comes from Charles Keating, Jr., a conservative whom Nixon added to the body after having created a vacancy by dispatching another commissioner to be ambassador to India. Keating was founder of Citizens for Decent Literature and a frequent filer of amicus curiae briefs to the Supreme Court in the Ginzburg, Mishkin, and Fanny Hill obscenity cases. Later, Keating was at the center of the 1989 savings and loan scandal — a foretelling of the 2008 financial crisis — which landed him in prison. Funny how our supposed moral guardians — Nixon or Keating, Pence or Graham — end up disgracing themselves; but I digress.

Annotated on January 18, 2020 at 08:40AM

The fear then was the corruption of the masses; the fear now is microtargeting drilling directly into the heads of a strategic few.

Annotated on January 18, 2020 at 08:42AM

McCarthy next asks: “Who selects what is to be recorded or transmitted to others, since not everything can be recorded?” But now, everything can be recorded and transmitted. That is the new fear: too much speech.

Annotated on January 18, 2020 at 08:42AM

Many of the book’s essayists defend freedom of expression over freedom from obscenity. Says Rabbi Arthur Lelyveld (father of Joseph, who would become executive editor of The New York Times): “Freedom of expression, if it is to be meaningful at all, must include freedom for ‘that which we loathe,’ for it is obvious that it is no great virtue and presents no great difficulty for one to accord freedom to what we approve or to that to which we are indifferent.” I hear too few voices today defending speech of which they disapprove.

I might take issue with this statement and possibly with a piece of Jarvis’ argument here. I agree that the fear of “too much speech” is a moral panic, because humans have a hard limit on how much they can individually consume.

The issue I see is that while anyone can say almost anything, the problem arises when a handful of monopolistic players like Facebook or YouTube can use algorithms to programmatically entice people to click on and consume fringe content in mass quantities, subtly but assuredly nudging the populace and electorate in an unnatural direction. Most of the history of human society and interaction has tended toward a centralizing consensus within which we manage to cohere. The large-scale effects of algorithm-driven companies putting a heavy hand on the scales are sure to create unintended consequences, and they can do it at scales that the Johnson and Nixon administrations could only have wished for.

If we look at this as an analogy to the evolution of weaponry, I might suggest we’ve just passed out of the era of single-shot handguns and into the era of machine guns. What is society to do when the next evolution brings us into the era of social media atomic weapons?
Annotated on January 18, 2020 at 10:42AM

Truth is hard.

Annotated on January 18, 2020 at 10:42AM

As an American and a staunch defender of the First Amendment, I’m allergic to the notion of forbidden speech. But if government is going to forbid it, it damned well better clearly define what is forbidden or else the penumbra of prohibition will cast a shadow and chill on much more speech.

Perhaps it’s not what people are saying so much as that platforms are accelerating it algorithmically? It’s one thing for someone to foment sedition, praise Hitler, or yell their religious screed on the public street corner. The problem comes when powerful interests in the form of governments, corporations, or others provide them with megaphones and tacitly force audiences to listen to it.

When Facebook or YouTube optimize for clicks keyed to social and psychological constructs using fringe content, we’re essentially saying that machines, bots, and extreme fringe elements are not only people, but that they have free speech rights and can be prioritized with the reach and exposure of major national newspapers and national television in the media model of the ’80s.

I highly suspect that if real people’s social media reach were linear and unaccelerated by algorithms we wouldn’t be in the morass we’re generally seeing on many platforms.
Annotated on January 18, 2020 at 11:08AM
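
To make that “linear versus algorithmically accelerated” distinction concrete, here’s a minimal sketch of the two feed-ranking policies. Everything in it is hypothetical illustration (the `Post` fields and the `predicted_clicks` score are my inventions), not any platform’s actual ranking code:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: float         # seconds since epoch
    predicted_clicks: float  # model-estimated engagement (hypothetical)

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Linear reach: posts appear in time order, so no post is
    algorithmically amplified over any other."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
    """Accelerated reach: whatever is predicted to draw the most clicks
    floats to the top, regardless of when it was posted."""
    return sorted(posts, key=lambda p: p.predicted_clicks, reverse=True)

posts = [
    Post("alice", "a quiet everyday post", timestamp=100.0, predicted_clicks=0.1),
    Post("bob", "outrage-bait fringe take", timestamp=50.0, predicted_clicks=9.7),
]

print([p.author for p in chronological_feed(posts)])      # ['alice', 'bob']
print([p.author for p in engagement_ranked_feed(posts)])  # ['bob', 'alice']
```

Under the linear policy both posts compete on equal footing; under the engagement-ranked policy the outrage-bait always wins placement, which is exactly the acceleration at issue.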

“Privacy in law means various things,” he writes; “and one of the things it means is protection from intrusion.” He argues that in advertising, open performance, and public-address systems, “these may validly be regulated” to prevent porn from being thrust upon the unsuspecting and unwilling. It is an extension of broadcast regulation.
And that is something we grapple with still: What is shown to us, whether we want it shown to us, and how it gets there: by way of algorithm or editor or bot. What is our right not to see?

Privacy as a “freedom from” is an important thing. I like this idea.
Annotated on January 18, 2020 at 11:20AM

The Twenty-Six Words that Created the Internet is Jeff Kosseff’s definitive history and analysis of the current fight over Section 230, the fight over who will be held responsible to forbid speech. In it, Kosseff explains how debate over intermediary liability, as this issue is called, stretches back to a 1950s court fight, Smith v. California, about whether an L.A. bookseller should have been responsible for knowing the content of every volume on his shelves.

For me this is probably the key idea. Facebook doesn’t need to be responsible for everything that their users post, but when they cross the line into actively, algorithmically promoting and pushing that content into their users’ feeds for active consumption, then they **do** have a responsibility for that content.

By analogy, imagine the trusted local bookstore mentioned above. There are millions of books there, and customers who walk in have the choice to make their selections in some logical manner. But if the bookseller has the secret ability to consistently walk up to children and put porn into their hands, or to actively herd them into the adult sections to force that exposure on them (and can do it without anyone else realizing it), then that is the problem. Society at large would think this even more reprehensible if it realized that local governments or political parties could pay the bookseller to carry out this activity.

In case the reader isn’t following the analogy, this is exactly what some social platforms like Facebook are allowing our politicians to do. They’re taking payment from politicians to actively lie, tell untruths, and create fear in a highly targeted manner, without the rest of society being able to see or hear those messages. Some of these messages are of the sort that, if they were picked up on an open microphone and broadcast outside the private group they were intended for, would have been career-ending events.

Without holding platforms responsible in this way, we’re actively stifling conversation in the public sphere and actively empowering the fringes. This sort of active, targeted fringecasting prevents social cohesion, consensus, and compromise, and instead pulls us apart.

Perhaps the answer for Facebook is to let them take the political ad money for these niche ads, but then force them to broadcast the ads to everyone on the platform rather than just to the small niche audience? Then we could all see who our politicians really are.
Annotated on January 18, 2020 at 11:50AM

Of course, it’s even more absurd to expect Facebook or Twitter or Youtube to know and act on every word or image on their services than it was to expect bookseller Eleazer Smith to know the naughty bits in every book on his shelves.

Here’s the point! We shouldn’t expect them to know, but by the same token, if they don’t know, then they should not be allowed to arbitrarily privilege some messages over others in how those messages are distributed on the platform. Why is YouTube accelerating messages about Nazis instead of videos of my ham sandwich at lunch? It’s because they’re making money on the Nazis.
Annotated on January 18, 2020 at 12:07PM

there must be other factors that got us Trump

Primarily, people not really knowing how racist and horrible he really was, in addition to his inability to think clearly, logically, or linearly. He espoused a dozen or so simple aphorisms like “Build the wall,” but was absolutely unable to articulate a plan that went beyond the aphorism. How would it be implemented and funded, and what short- and long-term issues would result? He had none of those answers; many presumed such details would be worked out by smart and intelligent people, rather than by the “just do it” managerial style he has been shown to espouse.

Too many Republicans, particularly at the end, said “he’s not really that bad,” and now that he’s in power and more authoritarian than they expected, they’re too weak to admit their mistake.
Annotated on January 18, 2020 at 12:28PM

Axel Bruns’ dismantling of the filter bubble.

research to read
Annotated on January 18, 2020 at 12:45PM

“To affirm freedom is not to applaud that which is done under its sign,” Lelyveld writes.

Annotated on January 18, 2020 at 12:51PM

Read a Twitter thread by Mx. Aria Stewart (Twitter)
It just crystallized for me what I think has been mistaken about thinking of unwanted interaction on social networks as a "privacy" problem. It's not.

A privacy problem is things becoming known more widely than they should, subject to surveillance and contextless scrutiny. 
The onslaught of sexual harassment on platforms like early Twitter (and later twitter for people of notability), @KeybaseIO, every naive social network is an attack on the right to exist in public. It is the inverse of a privacy problem. 
But the conceiving of this as a privacy problem brings the wrong solutions. It means we are offered tools to remove ourselves from public view, to restrict our public personas, to retreat from public life. It means women are again confined to private sphere, denied civic life. 
 It's so endemic, so entrenched, and so normal that women should have to retreat to protect ourselves that we think of this as part of femininity. A strong civic life is seen as unfeminine, forward. It poisons us politically, socially, and personally. 
It is, at its core, an attack on democracy as well. 
The only way to undo this is to reconceive of this, not as a privacy problem but as an attack on public life. There will be new problems with this but at least they will be new. 
There has been work done on this, but I've never seen it connected to civic life, and this connects with my thoughts and work on community. The unit that social networks must focus on cannot be the individual. We do not exist as individuals first but as members of our communities 
When a new user joins a social network, their connection must be to their peers, their existing social relationships. A new user can only be onboarded in the context of relationships already on the network. 
Early adopters form such a community, but extrapolating from the joining of those initial members to how to scale the network misses the critical transition: from no community to the first, not from the first users to the next. 
New communities can only be onboarded by connections from individuals that span communities. New communities must be onboarded collectively, or the network falls to the army of randos. 
The irony is that surveillance capitalism has the information to do this but not the will, because as objects of marketing, we are individuals, statistics and demographics, not communities. The reality lies in plain sight. 
There have been attempts at social networks, sadly none dense enough to succeed, but that treat people as part of a web, and that their peers can shield and protect them. The idea is solid. 
The other alternative is to stop trying to give people a solitary identity, a profile and onboarding to a flat network, but instead only provide them with community connections. Dreamwidth is this to a large degree, if too sparse for most people to connect. 
Our social networks must connect us, not to our "friends" but to our communities. The ones that succeed do this by intent or by accident.

Facebook has a narrow view of community, but for those it matches, it works. With major flaws, but it does. 
Twitter, its community of early adopters, its creepy onboarding by uploading your contacts and mining data to connect you works. If I were to join and follow a few people I know, it would rapidly suggest many more people in my queer and trans community. It works. 
And this is why Ello failed. This is why Diaspora failed. This is why Mastodon succeeded, if only by scraping by the bare minimum. This is why gnu social failed. This is why a random vbulletin forum can succeed. The ones that succeed connect a dense community. 
Note that gnu social and mastodon are the same protocol! But they are different social networks. The difference in their affordances and the community structures they encourage are vastly different, despite interoperating. 
I'd say I don't know how apparent this problem is to white men — the ones largely designing these networks — but I do know. I know because of the predictable failures we see.

Part of this, I think boils down to how invisible community is when you are the default user. 
At no time am I unaware that I am trans, that I am a woman, that the people I follow and who follow me are distinct from the background. I can spot my people in a crowd on the internet with precision, just like a KNN clustering can. 
Trans culture in particular is Extremely Online. We are exceptionally easy to onboard to a new platform. But the solution can scale if we focus on solving it. And by knowing who is in the community (likely) and who is not, we can understand what is and is not harassment. 
We don't need to even know what the communities are — Twitter does not — and yet it knows how we cluster, and that suffices.

If we stop thinking of this as a privacy problem — letting us hide from the connections that are our solution — we can enlarge public life. 
That exceptional article — — about how bots sow division shows us another facet of this problem and way of thinking. Conceiving of this as a privacy problem fundamentally reacts with division when solidarity is needed. 
We can only fight this with a new, loose solidarity and an awareness of community boundaries. We can build technology that makes space for us to be safe online by being present with those that support us, and react together, rather than as individuals and separating us for safety 
This thread has meandered a bit, but I'm dancing around something important. We fundamentally need to stop organizing online activity the way we do. Follow and be followed is not where it's at.

It's join, manage attention, build connection. 
Stop sorting things topically and trying to find connections in content.

Start looking for clusters of relationships between people.

The question should not be "what is this about?" but "who is this for?"
Some interesting ideas on social hiding in here.
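
The thread’s closing prescription, “start looking for clusters of relationships between people,” is in graph terms community detection on the follow graph. Here’s a minimal sketch under that reading, assuming a toy mutual-follow graph and using networkx’s modularity-based clustering (my choice of algorithm for illustration; the thread mentions KNN clustering only in passing and prescribes no implementation):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical follow graph: nodes are accounts, edges are mutual follows.
G = nx.Graph()
G.add_edges_from([
    ("ana", "bea"), ("bea", "cam"), ("ana", "cam"),  # one dense community
    ("dev", "eli"), ("eli", "fay"), ("dev", "fay"),  # another
    ("cam", "dev"),                                  # one community-spanning bridge
])

# Cluster purely on relationship structure: no topics, no content analysis.
for community in greedy_modularity_communities(G):
    print(sorted(community))
# -> ['ana', 'bea', 'cam'] and ['dev', 'eli', 'fay']
```

A new account whose first edges land inside an existing cluster is being onboarded “in the context of relationships already on the network”; an account with no edges into any cluster looks much more like one of the “army of randos.”
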
Read The California DMV Is Making $50M a Year Selling Drivers’ Personal Information (Vice)
A document obtained by Motherboard shows how DMVs sell people’s names, addresses, and other personal information to generate revenue.
This seems excessive and egregious. I definitely don’t trust them to hold onto my data if they’re selling it to third parties, particularly commercial ones.

👓 An Apple store employee ‘helped’ a customer — by texting himself an intimate photo from her phone | Washington Post

Read An Apple store employee ‘helped’ a customer — by texting himself an intimate photo from her phone (Washington Post)
In a statement, Apple said the employee was "no longer associated with our company."
I’m surprised that this isn’t reported more often… I can only imagine the thousands of cases that aren’t noticed or reported.

👓 Facebook sold a rival-squashing move as privacy policy, documents reveal | the Guardian

Read Facebook sold a rival-squashing move as privacy policy, documents reveal (the Guardian)
Documents from a 2015 lawsuit allege that the tech giant’s policies were anticompetitive and misrepresented to the public

🔖 Student Privacy Syllabus Statement Project

Bookmarked Student Privacy Syllabus Statement Project (pad.riseup.net)
The following document is an in-progress draft of a statement that might be included with a syllabus to help raise student awareness about controversial data collection practices carried out in many of the technologies they use for learning. Though we cannot always change, fully understand, or opt out of these practices, we feel that ignoring their presence contributes to the broader helplessness in confronting the mass exploitation of personal data at large. This is meant to be a template statement that a professor could revise for inclusion in the syllabus, regardless of the subject matter of the course. We recognize that some power dynamics may not allow for such a statement and that each person should decide for themselves if such a statement in their syllabus is possible considering their context. If anything, we feel the idea of such a statement makes for an excellent thought experiment to address questions of the problematic collection and use of student data and to develop conversation around these issues. This draft was started by Autumm Caines and Erin Rose Glass and then opened to group comments during their "Architecture of Student Privacy" workshop during the Domains 2019 conference. We are now soliciting further comments in order to create a template for circulation and plan to write up the process for publication.
Hat tip:

👓 Do not track (an #OLCInnovate plea) – updated 4/30/18 | the red pincushion

Read Do not track (an #OLCInnovate plea) – updated 4/30/18 by Amy Collier (the red pincushion)
At the OLC Innovate conference—a conference where I was presenting with Adam Croom about the need to be more thoughtful and careful with student data—I ran into my own issues with unnecessary surveillance and invasions of privacy: Door keepers at the entrance to every session demandingly and som...

🔖 How a Swiss-based mathematician helped lift the lid on the Facebook data scandal | thelocal.ch

Bookmarked How a Swiss-based mathematician helped lift the lid on the Facebook data scandal (thelocal.ch)
They are lying, says Paul-Olivier Dehaye. They are all lying: Facebook, Tinder and Uber.
Hat tip: