Automatically send webmentions to IndieNews
Several of my friends and colleagues have been critical of their social media experience recently — Twitter in particular. One friend left Twitter altogether last week. I’m sympathetic. I’m a serial Facebook quitter, and in the fall, I wrote a few blog posts about my disillusionment with relying on Twitter for social and professional conversations. I even took some time off . . . and enjoyed it.
And then there were conferences. Twitter is such an asset at professional conferences, adding a layer of depth to the presentations and conversations. Then came #moocmooc. And so Twitter dragged me back in.
For all of its benefits, Twitter still has a signal-to-noise ratio problem. And a harassment problem. It facilitates the antisocial and the parasocial alongside the social. Creative (and promiscuous) blocking helps with the anti-/parasocial problem. But it’s still noisy, and even the good is under the control of the Twitter company. However, I’ve started trying two things this past week that are helping with these issues.
First, I’m largely ignoring my Twitter timeline, and instead, I’m following a few lists I’ve created, each of which has, at most, a few dozen people. This targeted reading, loosely by topic, means that I read fewer tweets, and that I can choose a topic to focus on at a given moment. (My lists include topics like critical pedagogy, music scholarship, digital humanities, social justice, Christianity, etc. I also follow hashtags like #mtped and #moocmooc.) The topical division is messy, of course, as individuals tend to tweet about more than one topic. But there is more signal and less noise on these lists, and less topic-jumping while reading one of these lists, than while reading my timeline. Since I’m following fewer people this way, I might miss something. But most of the good stuff I really need to see will be retweeted by someone in a list eventually. (And people retweeted often get added to the list.) And at some point, I have to resign myself to the fact that there will always be more good stuff out there than I have time to engage . . . and that’s okay.
The other change I’ve made is installing Known on my server (sketches.shaffermusic.com). Known is a blog-like, social-media-like platform designed with POSSE in mind: Publish on your Own Site and Syndicate Elsewhere — a growing trend on the IndieWeb. Known double-publishes to Twitter (and other platforms) and uses webmentions to collect the ensuing conversations onto the original Known site. (Bridgy helps, too.) It also differentiates Tweet-like status updates from Facebook-like mini-blog entries without imposing character limits, and it integrates with social media conversations and @-replies pretty well. In short, it’s a pretty smooth way to own and control your content while connecting on proprietary social media networks.
I’ve found that more targeted reading makes me happier, and more targeted Twitter use means more time for longer-form reading and writing. Further, I really want to control my own content, and doing so makes me more excited about writing (as does having a new publishing toy . . . er . . . platform).
I’m liking this setup, at least for now. Targeted, meaningful engagement on Twitter, more time to read the longer-form pieces I find there, an easier and more “indie” way to engage, and more motivation to dig in and really write. Good stuff. And no need to leave Twitter just yet.
Kris Shaffer, Ph.D. (Yale University, 2011), is an Instructional Technology Specialist and adjunct instructor in Computer Science and Digital Studies at the University of Mary Washington and Contributing Editor for Hybrid Pedagogy. He is also the lead author of Open Music Theory.
Header image by Jared Tarbell.
A few particular examples: I follow physicist John Carlos Baez and mathematician Terry Tao, who both have one or more academic blogs for various topics, from which they POSSE work to several social silos including Google+ and Twitter. While they get some high-quality responses to posts natively, some of their conversations are forked/fragmented to those other silos. It would be far more useful if they were using webmentions (and Brid.gy) so that all of that conversation was being aggregated to their original posts. If they supported webmentions directly, I suspect that some of their collaborators would post their responses on their own sites and send them after publication as comments. (This also helps to protect the primacy and integrity of the original responses, since a receiving site could otherwise moderate them out of existence, delete them outright, or even modify them!)
It’s pretty common for researchers to self-publish their work (sometimes known as academic samizdat) on their own site and then cross-publish to a pre-print server (like arXiv.org) prior to publishing in a (preferably) major journal. There’s really no reason they shouldn’t just use their own personal websites, or online research journals like yours, to publish their work and then use that to collect direct comments, responses, and replies to it. Except possibly where research requires hosting uber-massive data sets which may be bandwidth-limiting (or highly expensive) at the moment, there’s no reason why researchers shouldn’t self-host (and thereby own) all of their work.
Instead of publishing to major journals, which are all generally moving to an online subscription/readership model anyway, they might publish to topic specific hubs (akin to pre-print servers or major publishers’ websites). This could be done in much the same way many Indieweb users publish articles/links to IndieWeb News: they publish the piece on their own site and then syndicate it to the hub by webmention using the hub’s endpoint. The hub becomes a central repository of the link to the original as well as making it easier to subscribe to updates via email, RSS, or other means for hundreds or even thousands of researchers in the given area. Additional functionality could be built into these to support popularity measures as well to help filter some of the content on a weekly or monthly basis, which is essentially what many publishers are doing now.
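Mechanically, syndicating to such a hub is lightweight: the author’s site simply POSTs two URLs to the hub’s advertised webmention endpoint. A minimal sketch in Python, using only the standard library (the endpoint and URLs below are purely illustrative, not a fixed API):

```python
import urllib.parse
import urllib.request

def build_webmention(source, target):
    """Build the form-encoded POST body of a webmention.

    'source' is the URL of your own post; 'target' is the URL on the hub
    (or any receiving site) that your post links to. The whole protocol
    is just this one POST -- there is nothing else to it.
    """
    return urllib.parse.urlencode({"source": source, "target": target}).encode()

def send_webmention(endpoint, source, target):
    """POST the webmention to the receiver's advertised endpoint.

    A 2xx response means the receiver accepted (or queued) the mention.
    """
    req = urllib.request.Request(endpoint, data=build_webmention(source, target))
    return urllib.request.urlopen(req)
```

A hub like IndieNews then fetches the source URL, verifies it really links to the target, and lists it.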
In the end, citation metrics could be measured directly on the author’s original page by the number of incoming webmentions they’ve received on it, as others referencing them would be linking to them and therefore sending webmentions. (PLOS|One does something kind of like this by showing related tweets which mention particular papers now: here’s an example.)
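Tallying such a metric could be trivially simple. A hedged sketch, assuming received webmentions are stored as plain source/target records (the storage shape here is hypothetical, not any particular plugin’s format):

```python
from collections import Counter

def citation_counts(webmentions):
    """Tally incoming webmentions per cited page as a crude citation metric.

    'webmentions' is assumed to be an iterable of {'source': ..., 'target': ...}
    dicts -- the two URLs every webmention carries. Each incoming mention of a
    paper's permalink counts as one citation of that paper.
    """
    return Counter(wm["target"] for wm in webmentions)
```

Calling `citation_counts(stored_mentions).most_common()` would then rank a site’s most-cited papers.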
Naturally there is some fragility in some of this, and protective archival measures should be taken to preserve sites beyond their authors’ lives, but much of this could be done by institutional repositories like university libraries, which do much of this type of work already.
I’ve been meaning to write up a much longer post about how to use some of these types of technologies to completely revamp academic publishing; perhaps I should finish doing that soon? Hopefully the above will give you a little bit of an idea of what could be done.
I’ve been microblogging from my own site and syndicating content to Twitter and other social silos for a while.
I usually consume Twitter via an RSS hack and respond either via Micropub directly to my site or from a built-in RSS reader on my own site. I use Brid.gy and webmention to collect replies back to my site to continue the conversation.
For me, my personal website is my end-all-be-all hub for reading/publishing and Twitter, Facebook, et al. are just distribution channels.
From what I understand about Manton’s proposed implementation, he’ll be using or making a lot of these technologies available, he’ll just be making it a bit easier for my parents and the “masses” to do it.
Back in early October, I had also replied to a great post by Jay Rosen when he redesigned his own blog PressThink. I saw a brief response from him on Twitter at the time, but didn’t get a notification from him about his slightly longer reply, which I just saw over the weekend:
So, for his benefit as well as others who are interested in the ability to do something like this quickly and easily, I thought I’d write up a short outline of what I’d originally done so that without spending all the time I did, others can do the same or something similar depending on their needs.
If part of Mr. Rosen’s reply doesn’t give you enough motivation for why one would want to do this, IndieWeb.org has a laundry list of motivations along with a list of dead and defunct sites and social media silos that have taken petabytes of data with them when they died.
How to (Quickly) Own and Display Your Tweets on Your Own Site
Download all your tweets
- Go to: https://twitter.com/settings/account
- Near the bottom of the page you should see a “Your Twitter archive” section
- See the “Request your archive” button? Click it.
- After a (hopefully) short wait, a link to your archive should show up in your email associated with the account. Download it.
- Congratulations, you now own all of your tweets to date!
- You can open the index.html file in the downloaded folder to view all of your tweets locally on your own computer with your browser.
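If you’d rather browse the archive over HTTP (the way it will eventually be served) before uploading it anywhere, one option is a throwaway local server; a sketch using only the Python standard library, where the folder name is an assumption — use whatever directory you unzipped the archive into:

```python
import os
from http.server import HTTPServer, SimpleHTTPRequestHandler

def preview_archive(folder="tweet-archive", port=8000):
    """Serve the unzipped Twitter archive at http://localhost:<port>/
    so index.html can be browsed exactly as it will appear online.

    'folder' is hypothetical -- point it at your unzipped archive directory.
    Stop the server with Ctrl-C.
    """
    os.chdir(folder)
    server = HTTPServer(("localhost", port), SimpleHTTPRequestHandler)
    print(f"Serving {folder} at http://localhost:{port}/")
    server.serve_forever()
```

(Equivalently, `python -m http.server` run from inside the archive folder does the same thing.)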
Display your Twitter archive
The best part is that now that you’ve got all your tweets downloaded, you can almost immediately serve them from your own server without any real modification.
Simply create an accessible folder on your server (use the same permissions as other equivalent files) and upload the archive’s files into it. In my case, I created a subfolder within my WordPress installation, named it “twitter”, and uploaded the files. Once this is done, you should be able to go to the URL http://example.com/twitter and view them.
As an example and to see what my archive looks like, visit http://boffosocko.com/twitter.
Alternately, one could set up a subdomain (e.g., http://twitter.example.com) and serve them from there as well. You can change the URL by changing the name of the folder. As an alternate example, Kevin Marks uses the following: http://www.kevinmarks.com/tweets/.
When you’re done, don’t forget to set up a link from your website (perhaps in the main menu?) so that others can benefit from your public archive. Mine is tucked in under the “Blog” heading in my main menu.
Unfortunately, while you’ve now got a great little archive with some reasonable UI and even some very powerful search capabilities, most of the links on the archive direct back to the originals on Twitter and don’t provide direct permalinks within the archive. It’s also a static archive, so you’ve periodically got to re-download and upload to keep your archive current. I currently only update mine on a quarterly basis, at least until I build a more comprehensive set up.
Current Set Up
At the moment, I’m directly owning all of my Twitter activity on my social stream site, which is powered by Known, using the POSSE philosophy (Post on your Own Site, Syndicate Elsewhere). There I compose and publish all of my Tweets and re-Tweets (and even some likes) directly and then I syndicate them to Twitter in real-time. I’ve also built and documented a workflow for more quickly tweeting using my cell phone in combination with either the Twitter mobile app or their mobile site. (Longer posts here on BoffoSocko are also automatically syndicated (originally with JetPack and currently with Social Network Auto-Poster, which provides a lot more customization) to Twitter, so I also own all of that content directly too.)
You’ll notice that on both sites, when content has been syndicated, there’s a section at the bottom of the original posts that indicates to which services the content was syndicated along with permalinks to those posts. I’m using David Shanske’s excellent Syndication Links plugin to do this.
Ultimately, I’d like to polish the workflow a bit and post all of my shorter Twitter-like status updates from BoffoSocko.com, but I still have some work to do to better differentiate content so that my shorter form content doesn’t muddy up or distract from the people who prefer to follow my longer-form content. Based on his comment, I also suspect that this is the same semantic issue/problem that Jay Rosen has. I’d also like to provide separate feeds/subscription options so that people can more easily consume as much or as little content from my site as they’d like.
For those who are interested in more comprehensive solutions for owning and displaying their Tweets, I’ve looked into a few WordPress-based possibilities and like the following two which could also be potentially modified for custom display:
- DsgnWrks Twitter Importer
- Ozh’ Tweet Archiver (separately available on GitHub, with scripts [.csv, JSON] for importing beyond the 3,200-tweet limit imposed by the Twitter API; it also has a custom “Twitter” theme available, and there are additional blog posts available for support and instructions)
Both of these not only allow you to own and display your tweets, but they also automatically import new Tweets using the current API. Keep in mind that they use the PESOS philosophy (Post Elsewhere, Syndicate to your Own Site) which is less robust than POSSE, mentioned above.
I’ll note that a tremendous number of WordPress-based plugins within the plugin repository that are Twitter related predate some of the major changes in Twitter’s API in the last year or two and thus no longer work and are no longer supported, so keep this in mind if you attempt to explore other solutions.
Those with more coding ability or working on other CMS platforms may appreciate a larger collection of thought and notes on the Twitter wiki page created by the IndieWeb community.
Do you own your own Tweets (either before or after-the-fact)? How did you do it? Feel free to tell others about your methods in the comments, or better yet, write them on your own site and send this post a webmention (see details below).
The IndieWeb movement is coding, collecting, and disseminating UI, UX, methods, and opensource code to help all netizens to better control their online identities, communicate, and connect themselves to others at IndieWeb.org. We warmly invite you to join us.
There is a relatively new candidate recommendation from the W3C for a game-changing social web specification called Webmention, which essentially makes it possible to do Twitter-like (or Medium-style) @mentions across the internet from site to site (as opposed to simply within a siloed site/walled garden like Twitter).
Webmentions would allow me to write a comment to someone else’s post on my own Tumblr site, for example, and include the URL of the post I’m replying to, which serves as the @mention. The other site (which could be on WordPress, Drupal, Tumblr, or anything really), if it also supports Webmentions, could then receive my comment and display it in its comment section.
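Under the hood, the sending side finds where to deliver the mention by looking for a rel="webmention" link on the target page. A simplified discovery sketch (it checks only the HTML, ignoring the HTTP Link header that the full W3C algorithm also consults):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class _EndpointParser(HTMLParser):
    """Collect the first <link> or <a> element carrying rel="webmention"."""
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is None and tag in ("link", "a"):
            a = dict(attrs)
            if "webmention" in (a.get("rel") or "").split():
                self.endpoint = a.get("href")

def discover_endpoint(html, page_url):
    """Return the target page's webmention endpoint as an absolute URL,
    or None if the page does not advertise one.

    Relative hrefs are resolved against the page URL, per the spec.
    """
    parser = _EndpointParser()
    parser.feed(html)
    if parser.endpoint is None:
        return None
    return urljoin(page_url, parser.endpoint)
```

A sender would fetch the target URL, run discovery on the response, and POST its source/target pair to whatever endpoint comes back.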
Given the tremendous number of sites (and multi-platform sites) on which Disqus operates, it would be an excellent candidate to support the Webmention spec to allow a huge amount of inter-site activity on the internet. First it could include the snippet of code for allowing the site on which a comment is originally written to send Webmentions and secondly, it could allow for the snippet of code which allows for receiving Webmentions. The current Disqus infrastructure could also serve to reduce spam and display those comments in a pretty way. Naturally Disqus could continue to serve the same social functionality it has in the past.
Aggregating the conversation across the Internet into one place
Making things even more useful, there’s currently a third party free service called Brid.gy which uses open APIs of Twitter, Facebook, Instagram, Google+, and Flickr to bootstrap them to send these Webmentions or inter-site @mentions. What does this mean? After signing up at Bridgy, it means I could potentially create a post on my Disqus-enabled Tumblr (WordPress, or other powered site), share that post with its URL to Facebook, and any comments or likes made on the Facebook post will be sent as Webmentions to the comments section on my Tumblr site as if they’d been made there natively. (Disqus could add the metadata to indicate the permalink and location of where the comment originated.) This means I can receive comments on my blog/site from Twitter, Facebook, Instagram, G+, etc. without a huge amount of overhead, and even better, instead of being spread out in multiple different places, the conversation around my original piece of content could be conglomerated with the original!
Comments could be displayed inline naturally, and likes could be implemented as a facepile UI either above or below the typical comment section. By enabling the sending/receiving of Webmentions, Disqus could further corner the market on comments. Even easier for Disqus, a lot of the code has already been written and is open source.
I believe that Webmention, when implemented, is going to cause a major sea change in the way people use the web. Dare I say Web 3.0?!
Some journals already count tweets, and blog mentions (generally for PR reasons) but typically don’t allow access to finding them on the web to see if they indicate positive or negative sentiment or to further the scientific conversation.
I’ve also run into cases in which scientific journals that are “moderating” comments won’t approve reasoned thought, but will simultaneously allow (pre-approved?) accounts to flame every comment that is approved [example on Sciencemag.org: http://boffosocko.com/2016/04/29/some-thoughts-on-academic-publishing/ — see also comments there]. Having the original comment live elsewhere may thus be useful and/or necessary, depending on whether the publisher is a good or bad actor, or potentially just lazy.
I’ve also seen people use commenting layers like hypothes.is or genius.com to add commentary directly on journals, but these layers are often hidden to most. The community certainly needs a more robust commenting interface. I would hope that a decentralized version using web standards like Webmentions might be a worthwhile and robust solution.
Infuriatingly it usually involved having just spent 5 minutes reading something and then spending 10 minutes to hours writing a reasoned and thoughtful response. (Because every troll knows that’s what the internet was designed to encourage, right?)
After pressing the reply button (even scarier than hitting the “Publish” button because you don’t have the ability to edit it after-the-fact and someone else now “owns” your content), you see the dreaded notice that your comment is “AWAITING MODERATION…”
Will they approve it? Will they delete it? Is it gone forever? Did they really get it, or did it disappear into the ether? Oh #%@$!, I wish I’d made a back up copy because that took a bit of work, and I might like to refer to it again later. Are they going to censor my thoughts? Silence my voice?
I Get It: The Need for Moderation
I completely get the need for moderation on the web, particularly as almost no one is as kind, considerate, courteous, or civil as my friend P.M. Forni. (And who could be — he literally wrote the book(s) on the subject!)
On a daily basis, I’m spammed by sites desperate to sell or promote FIFA coins, Ray Bans, Christian Louboutin shoes, or even worse types of hateful blather, so I too gently moderate. I try to save my own readers from having to see such drivel, and don’t want to provide a platform or audience for them to shout from or at, respectively.
I won’t be silenced anymore
No longer can I be silenced by random moderators that I often don’t know.
Why, you ask?
I now post everything I write online onto a site I own first.
Because now, thanks to philosophies from the Indieweb movement and technologies like webmention, which growing numbers of websites are beginning to support, I now post everything I write online onto a site I own first. There it can be read in perpetuity by anyone who chooses to come read it, or from where I can syndicate it out to the myriad of social media sites for others to read en masse. (And maybe my voice has more reach than the site I’m posting to?)
Functionality like webmention (a more modern version of pingback or trackback) then allows my content to be sent to the website I was replying to in an elegant way for (eventual?) display. Or I can copy and paste it directly if they don’t support modern protocols.
Sure, they can choose to moderate me or choose not to feature my viewpoint on their own site if they wish, but at least I still own the work I put into those thoughts. I don’t have to worry about where they went or how I might be able to find them in the future. They will always be mine, and that is empowering.
Would you like to own your own data? Own your own domain? Free yourself from the restrictions of the social media silos like Facebook, Instagram, and Twitter? Visit Indieweb.org to see how you can do these things. Chat with like-minded individuals who can also help you out. Attend an upcoming IndieWebCamp or a local Homebrew Website Club in your area, or start one of your own!
A year ago, I started a publishing company, and we came out with our first book, Amerikan Krazy, in late February. The author has a small back catalogue that’s out of print, so in conjunction with his book launch, we’ve been slowly releasing ebook versions of his old titles. Coincidentally, one of them was a fantastic little book about Ali entitled Muhammad Ali Retrospective, so I dropped everything I was doing to get it finished up and out as a quick way of honoring his passing.
But while I was working on some of the minutiae, I’ve been thinking in the back of my mind about the ideas of marginalia, commonplace books, and Amazon’s siloed community of highlights and notes. Is there a decentralized web-based way of creating a construct similar to webmention that will allow all readers worldwide to highlight, mark up and comment across electronic versions of texts so that they can share them in an open manner while still owning all of their own data? And possibly a way to aggregate them at the top for big data studies in the vein of corpus linguistics?
I think there is…
It’ll take some effort, but effort that could have a worthwhile impact.
I have a few potential architectures in mind, but also want to keep online versions of books in the loop as well as potentially efforts like hypothes.is or even the academic portions of Genius.com which do web-based annotation.
If anyone in the IndieWeb, books, or online marginalia worlds has thought about this as well, I’d love to chat.
There are potential solutions to the recent News Genius-gate incident, and simple notifications can go a long way toward helping prevent online bullying behavior.
There has been a recent brouhaha on the Internet (see related stories below) because of bad actors using News Genius (and potentially other web-based annotation tools like Hypothes.is) to comment on websites without their owner’s knowledge, consent, or permission. It’s essentially the internet version of talking behind someone’s back, but doing it while standing on their head and shouting with your fingers in their ears. Because of platform and network effects, such rude and potentially inappropriate commentary can have much greater reach than even the initial website could give it. Naturally in polite society, such bullying behavior should be curtailed.
This type of behavior is also not too different from more subtle concepts like subtweets or the broader issues platforms like Twitter are facing in which they don’t have proper tools to prevent abuse and bullying online.
A creator receives no notification if someone has annotated their content. – Ella Dawson
Towards a Solution: Basic Awareness
I think that a major part of improving the issue of abuse and providing consent is building in notifications so that website owners will at least be aware that their site is being marked up, highlighted, annotated, and commented on in other locations or by other platforms. Then the site owner at least has the knowledge of what’s happening and can then be potentially provided with information and tools to allow/disallow such interactions, particularly if they can block individual bad actors, but still support positive additions, thought, and communication. Ideally this blocking wouldn’t occur site-wide, which many may be tempted to do now as a knee-jerk reaction to recent events, but would be fine grained enough to filter out the worst offenders.
To provide such notifications to site owners, it would be great if any annotating activity triggered trackbacks, pingbacks, or the relatively new (and better) Webmention protocol of the W3C, which comes out of the IndieWeb movement. Then site owners would at least have notifications about what is happening on their site that might otherwise be invisible to them. (And for the record, how awesome would it be if social media silos like Facebook, Twitter, Instagram, Google+, Medium, Tumblr, et al. would support webmentions too!?!)
Perhaps there’s a way to further implement filters or tools (a la Akismet on platforms like WordPress) that allow site users to mark materials as spam, abusive, or “other” so that they are then potentially moved from “public” facing to “private” so that the original highlighter can still see their notes, but that the platform isn’t allowing the person’s own website to act as a platform to give safe harbor (or reach) to bad actors.
Further some site owners might appreciate gradable filters (G, PG, PG-13, R, X) so that either they or their users (or even parents of younger children) can filter what they’re willing to show on their site (or that their users can choose to see).
Consider also annotations on narrative forms that might be posted as spoilers: how can these be guarded against? What happens when even a well-meaning actor posts an annotation on page two which foreshadows that the butler did it, thereby ruining the surprise on the last page? Certainly there’s some value in having such a comment from an academic/literary perspective, but it doesn’t mean that future readers will necessarily appreciate the spoiler. (Some CSS and a spoiler tag might easily and unobtrusively remedy the situation here?)
Certainly options can be built into the annotating platform itself as well as allowing server-side options for personal websites attempting to deal with flagrant violators and truly hard-to-eradicate cases.
Do you have a solution for helping to harden the Internet against bullies? Share it in the comments below.
- Genius Wants To Let Readers Annotate Any News Article. What Could Possibly Go Wrong? by Jessica Goldstein, ThinkProgress 2016-03-30
- Genius responds to Congresswoman Katherine Clark’s letter on preventing abuse by Noah Kulwin, Re/code 2016-03-29
- Misguided Genius by Chelsea Hassler, Slate 2016-03-28
- The Genius Problem by Chuq Von Rospach 2016-03-28
- Genius Web Annotator vs. One Young Woman With a Blog by Brady Dale, The Observer 2016-03-28
Join us in LA (Santa Monica) for two days of a BarCamp-style gathering of web creators building and sharing open web technologies to empower users to own their own identities & content, and advance the state of the #indieweb!
The IndieWeb movement is a global community that is building an open set of principles and methods that empower people to take back ownership of their identity and data instead of relying on 3rd party websites.
At IndieWebCamp you’ll learn about ways to empower yourself to own your data, create & publish content on your own site, and only optionally syndicate to third-party silos. Along the way you’ll get a solid grounding in the history and future of Microformats, domain ownership, IndieAuth, WebMention and more!
For remote participants, tune into the live chat (tons of realtime notes!) and the video livestream (URL TBD).
General IndieWeb Principles
Your content is yours
When you post something on the web, it should belong to you, not a corporation. Too many companies have gone out of business and lost all of their users’ data. By joining the IndieWeb, your content stays yours and in your control.
You are better connected
Your articles and status messages can go to all services, not just one, allowing you to engage with everyone. Even replies and likes on other services can come back to your site so they’re all in one place.
You are in control
You can post anything you want, in any format you want, with no one monitoring you. In addition, you share simple readable links such as example.com/ideas. These links are permanent and will always work.
Friday (optional): 2016-11-04
Day 0 Prep Night
Day 0 is an optional prep night for people that want to button up their website a little bit to get ready for the IndieWebCamp proper.
18:30 Organizer setup
19:00 Doors open
19:30 Build session
22:00 Day 0 closed
Day 1 Discussion
Day 1 is about discussing in a BarCamp-like environment. Bring a topic you’d like to discuss or join in on topics as they are added to the board. We make the schedule together!
08:00 Organizer setup
08:30 Doors open – badges
09:15 Introductions and demos
10:00 Session scheduling
12:00 Group photo & Lunch
13:00 Sessions on the hour
16:00 Last session
17:00 Day 1 closing session, break, meetup later for dinner
Day 2 Building
Day 2 is about making things on and for your personal site! Work with others or on your own.
09:30 Doors open – badges
10:10 Day 2 kick-off, session scheduling
10:30 Build sessions
12:00 Catered lunch
14:30 Build sessions continue
16:30 Community clean-up
17:00 Camp closed!
Sponsorship opportunities are available for those interested.