Replied to a tweet by Dan York (Twitter)

Dan, since you’re in the WordPress space, there are several pieces in place there. Akismet and other anti-spam tools can still be used to filter webmentions just like any other comment/response on your site.

If you moderate your responses on your site, the webmention plugin has an “approve & always allow” function as well as domain allow-listing for people you know and trust.

It also bears saying: there’s nothing that says you have to display webmentions on your site either; you can use them simply as notifications on your back end.

In my experience, I’ve also seen people strip active links, scripts, etc. out of their received webmentions as a security precaution. I believe that the WordPress suite of IndieWeb plugins does this by default.
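As a minimal sketch of what that stripping looks like (this is my own illustration in Python using only the standard library, not the actual code from the WordPress plugins, which are written in PHP), you can drop `<script>`/`<style>` blocks entirely and unwrap everything else so only the text of a received mention survives:

```python
from html.parser import HTMLParser


class MentionSanitizer(HTMLParser):
    """Keep only the text of a received webmention: drop <script>/<style>
    contents entirely and unwrap all other tags (so links become plain text)."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.parts = []
        self.skipping = False  # True while inside <script> or <style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skipping = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skipping = False

    def handle_data(self, data):
        if not self.skipping:
            self.parts.append(data)


def sanitize_mention(html: str) -> str:
    parser = MentionSanitizer()
    parser.feed(html)
    return "".join(parser.parts)
```

So `sanitize_mention('<p>Nice <a href="https://spam.example/">post</a>!</p><script>alert(1)</script>')` yields just `'Nice post!'` with neither the active link nor the script surviving.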

If you need/want to go further, you could work on implementing the Vouch extension of Webmention. Any additional ideas or brainstorming you’ve got to help mitigate these sorts of harms is most welcome.

For the record, for Webmention to work as a protocol, it requires a link to your site to actually appear on a public web page, something neither trackback nor pingback required, which made them even easier and cheaper to game.
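The verification step is simple in principle: after fetching the claimed source page, the receiver checks that it really contains a link to the target before accepting the mention. Here's a rough sketch of that check in Python (my own illustration using only the standard library; a real receiver would also fetch the source URL, follow redirects, and compare normalized URLs per the spec):

```python
from html.parser import HTMLParser


class LinkFinder(HTMLParser):
    """Collect the href of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)


def source_mentions_target(source_html: str, target_url: str) -> bool:
    """A webmention is only valid if the source page actually links to
    the target; reject it otherwise."""
    finder = LinkFinder()
    finder.feed(source_html)
    return target_url in finder.hrefs
```

This is the property trackback and pingback lacked: a spammer can't just claim a link exists, because the receiver goes and looks.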

Read Thread by @pfrazee (threadreaderapp.com)

Yo, decentralizers. If our projects are ONLY about censorship resistance and NOT about better algorithms for elevating truth, and NOT about creating constrained but real powers of moderation, then we're making things worse. 1/n

It kills me, absolutely kills me, that after years of decentralization advocacy it's a moment like this when all the dweb projects pop up on HN and social media. The interest popped -- not when truth became inconvenient for corporate power, but when lies did.

Charitably, people may be reflecting on the kind of power imbalance being revealed and reflecting on how it could be abused.

https://twitter.com/pwang/status/1348335710303096833


Uncharitably? Do I need to even say it.

For anybody still unsure:

We have to find a way to square our ideals and our fears about monopoly control with the realities of how our technology is working. It's not enough to defend an ideal. We need to be effective.

We've all done our spiderman homework. What comes with great power?

If we really believe that free speech is important -- as I do -- and we want to protect it, then we need to work hard to make sure that free speech provides value to people. Otherwise they're going to shrug and let it drift away, "a nice idea, but impractical, really"

The question isn't "how do we make moderation impossible?" The question is, how do we make moderation trustworthy.

That, it turns out, is much harder than p2p tweets

It's also about *checking* power, not just distributing it. Like code-forking: FOSS doesn't always mean "anybody can contribute," but it definitely means that the users can fork if the core devs abuse their position. How can we get that kind of check on power here?

It's nuanced. It's harder to sell than "censorship resistance." Maybe we need a new framework for discussing this, a new set of words. I don't know what to tell you, but the reward is equal to the challenge. n/n

Read Permanent suspension of @realDonaldTrump (blog.twitter.com)
After close review of recent Tweets from the @realDonaldTrump account and the context around them — specifically how they are being received and interpreted on and off Twitter — we have permanently suspended the account due to the risk of further incitement of violence.

Too little, too late, but nonetheless.

Read My Repo, My House, My Rules by Eran Hammer (hueniverse.com)
GitHub provides an invaluable hosting service. Like all hosting platforms, any interaction between the content owner — the maintainer — and their community— the users — is owned exclusively by the owner. If you visit my repositories on GitHub, you are visiting my property, hosted generously by GitHub. It is not public space.
I wonder whether the reframing by the IndieWeb community of hosting things on their own sites will prevent this sort of rudeness in the future, or whether the social construct will fall down under the influence of spammers and trolls.
Read Digital publics, Conversations and Twitter by Kevin Marks (epeus.blogspot.com)
Last week, I left the Web 2.0 conference to listen to Mimi Ito , danah boyd and their colleagues talk about their research on Digital Publ...
Interestingly, Kevin’s comments indicate that I’ve read this before. Definitely worth another read from time to time.
Read Shadow banning (Wikipedia)
Shadow banning is the act of blocking or partially blocking a user or their content from an online community such that it will not be readily apparent to the user that they have been banned. For instance, shadow banned comments posted to a blog or media site will not be visible to other persons accessing that site from their computers. By partly concealing, or making a user's contributions invisible or less prominent to other members of the service, the hope may be that in the absence of reactions to their comments, the problematic or otherwise out-of-favour user will become bored or frustrated and leave the site, and that spammers and trolls will not create new accounts.
Read - Want to Read: Design For Community: The Art Of Connecting Real People In Virtual Places by Derek Powazek (New Riders)

Communities are part of all successful web sites in one way or another. It looks at the different stages that must be understood: Philosophy: Why does your site need community? What are your measures of success? Architecture: How do you set up a site to create a positive experience? How do you coax people out of their shells and get them to share their experiences online? Design: From color choice to HTML, how do you design the look of a community area? Maintenance: This section will contain stories of failed web communities, and what they could have done to stay on track, as well as general maintenance tips and tricks for keeping your community garden growing.


Read Your right to comment ends at my front door. by Derek Powazek (Derek Powazek)
John Gruber of Daring Fireball posted a response to a critic who took him to task for not having comments on his site (skip down to “As for Wilcox’s arguments regarding user-submitted comments”). My humble site has a tiny fraction of the traffic of Daring Fireball, but in this latest incarnation, I also decided to go without comments. Here’s why. I agree wholeheartedly with John that the decision to add comments to your site begins and ends with the site’s owner. I also agree that his site is a “curated conversation.” Conversations have been happening between weblogs since the advent of the permalink. Joe Wilcox, who obviously has a bone to pick with John, has no right to pick that bone on John’s site.
I love the ideas here.

👓 How Facebook and Twitter Help Amplify Fringe Websites | Anti-Defamation League

Read How Facebook and Twitter Help Amplify Fringe Websites (Anti-Defamation League)
Extremists are leveraging Facebook and Twitter to ensure that the hateful philosophies that begin to germinate on message boards like Gab and 8chan find a new and much larger audience.
I’ll note here that I’ve noticed sites like Gab working to transition onto projects like Mastodon as a way around roadblocks to getting their mobile apps into marketplaces like the Apple and Google app stores.

We need far more tools to help individuals control the crap they see on the internet.

👓 Proposals for Reasonable Technology Regulation and an Internet Court | BuzzMachine

Read Proposals for Reasonable Technology Regulation and an Internet Court by Jeff Jarvis (BuzzMachine)
I have seen the outlines of a regulatory and judicial regime for internet companies that begins to make sense to me. In it, platforms set and are held...
An interesting update and take on internet regulations.

📑 How The Wall Street Journal is preparing its journalists to detect deepfakes | Nieman Lab

Annotated How The Wall Street Journal is preparing its journalists to detect deepfakes (Nieman Lab)
As deepfakes make their way into social media, their spread will likely follow the same pattern as other fake news stories. In a MIT study investigating the diffusion of false content on Twitter published between 2006 and 2017, researchers found that “falsehood diffused significantly farther, faster, deeper, and more broadly than truth in all categories of information.” False stories were 70 percent more likely to be retweeted than the truth and reached 1,500 people six times more quickly than accurate articles.  
This sort of research should make it easier to find and stamp out fake content from the social media side of things. We need regulations to actually make it happen, however.