I rolled out a few Webmention improvements to Micro.blog today: Fixed the permalink for a reply when you aren’t signed in, which was preventing external sites from verifying the link after receiving a Webmention from Micro.blog. Added limited support for accepting replies from external sites that ...
Greg, the outlet you’re thinking of is ColoradoBlvd.net, a local paper here in Pasadena, CA, which does support webmentions including backfeed of interactions with Twitter using Brid.gy. (Sadly Facebook’s API turned off their access to this sort of feature on August 1st.)
As for Ben Keith’s concern about spam, yes, Webmention can be a potential vector just as trackbacks and pingbacks were, but it learns from their mistakes with better mitigation and verification. Work on the Vouch extension to Webmention continues to address these issues. I’ll also note that Akismet for WordPress works relatively well for Webmentions too, though there have yet to be any documented examples of Webmention spam in the wild.
For publishers using WordPress, there are some excellent plugins, including Webmention (which already has some experimental Vouch plumbing included) and Semantic Linkbacks, which work with WordPress’s native comments. I’ll note that they’re developed and actively maintained by several people, including the core maintainer for pingbacks and trackbacks in WordPress.
I’m happy to help if anyone has questions.
Syndicated copies to:
It's been six months since we added Tags to Technorati (where I'm Senior Designer), and as it turns out, it was a pretty big deal. So before we get too far away from it, here's the story of how it came about. From my perspective, anyway.
The page was set up to show any post that contained a link to it – in other words, if you linked to that page, then your post appeared on that page. ❧
Just a rehash of Refbacks? or an early implementation of Webmention?!
October 04, 2018 at 09:19AM
Before we begin -
- Hi! I'm going to return to spending more time on Known. As you may know, I was Director of Investments at Matter Ventures for the last two years or so, which occupied a disproportionate amount of my time. This is no longer the case. While I'm working on another open source project - Unlock - during the day, I'll be able to devote more attention to Known.
- Known deserves a 1.0 release, and will get one. Marcus and I have spoken quite a bit about the route forward.
- Commercial enhancements to Known, like the hosted service and Convoy, will get their own update. Going forward, any commercial ambitions or support for Known will be secondary to the open source project, if they exist at all.
Okay. With all of that said, I'd like to put the following out for discussion. Replies, questions, and criticisms are welcome!
This may be some of the best news I’ve heard in months! Known is one of my favorite open source CMSes that’s easy to spin up and use. It also supports so many awesome IndieWeb specs like Webmention, Micropub, WebSub, etc. right out of the box.
The runner-up awesome news is that Reclaim Hosting is very likely to revamp their Installatron version of it.
It took me a moment to realize what it was exactly since I hadn’t yet added a field to indicate it, but since the IndieWeb chat doesn’t send webmentions by itself, I’m glad I support refbacks to be aware of comments on my posts. The avatar didn’t come through quite like it should, but it’s nice to be able to treat refbacks like any other type of mention.
The chat has some reasonable microformats markup, so I suppose the parser could do a more solid job, but this is a pretty great start. Sadly, Refback isn’t as real-time as Webmention, but it’s better than nothing.
I suppose we could all be posting chats on our own sites and syndicating into places like IRC to own our two directional conversations, but until I get around to the other half… (or at least for WordPress, I recall having gotten syndication to IRC for WithKnown working a while back via plugin.)
PlumX Metrics provide insights into the ways people interact with individual pieces of research output (articles, conference proceedings, book chapters, and many more) in the online environment. Examples include when research is mentioned in the news or tweeted about. Collectively known as PlumX Metrics, these metrics are divided into five categories to help make sense of the huge amounts of data involved and to enable analysis by comparing like with like.
PlumX gathers and brings together appropriate research metrics for all types of scholarly research output.
We categorize metrics into 5 separate categories: Usage, Captures, Mentions, Social Media, and Citations.
Marco, your post about supporting rel=”payment” for Overcast made me start thinking about other potentially solvable problems in the podcast space. Now that you’ve solved a piece of the support/payment problem, perhaps you can solve a big part of the “who actually listened to my podcast” problem?
In a recent article on the topic of Webmention for A List Apart, I covered the topic of listen posts and sending webmentions for them. In addition to people being able to post on their own website that they’ve listened to a particular episode, the hosting podcast site can receive these mentions and display them as social proof that the episode was actually listened to. In addition to individual websites being able to do this, it would be awesome if podcast players/apps could send webmentions on behalf of their users (either with user specific data like Name, website, avatar, etc. if it’s stored, without it, or anonymized by the player itself) so that the canonical page for the podcast could collect (and potentially display) them.
As a proof of concept, here’s a page for a podcast episode that can receive webmentions. Someone listens to it, makes a “listen post” on their site, and sends a webmention of that fact. The original page can then collect it on the back end or display it if it chooses. Just imagine what this could do for the podcast world at scale in providing actual listening statistics!
In addition to the aggregate numbers of downloads a podcast receives, podcasters could also begin to have direct data about actual listens. Naturally the app/player would have to set (or allow configuration of) some percentage threshold of how much was played before sending such a notification to the receiving site. Perhaps the Webmention payload for listens could also include the percentage listened and send that number along?
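The threshold-plus-payload idea above can be sketched in a few lines of Python. This is purely speculative: the threshold value and the `listened-percent` parameter are my own invention and not part of the Webmention spec, which only defines `source` and `target`.

```python
LISTEN_THRESHOLD = 0.75  # hypothetical app setting: minimum fraction played


def should_send_listen(fraction_played):
    """Only notify the podcast's site once enough of the episode played."""
    return fraction_played >= LISTEN_THRESHOLD


def listen_payload(source, target, fraction_played):
    """A standard Webmention body (source/target per the W3C spec) plus a
    speculative extension parameter reporting how much was listened to."""
    payload = {"source": source, "target": target}
    payload["listened-percent"] = round(fraction_played * 100)
    return payload
```

A player would then form-encode this payload and POST it to the episode page’s advertised Webmention endpoint, exactly as it would for any other webmention.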
The toughest part may be collecting the rel=”canonical” URL for the podcast’s post (to send the webmention there) rather than the audio file’s URL, though I suspect that the feed for the podcast may have this depending on the feed’s source.
If you want to go a step further, you could add Micropub support to Overcast, so that when people are done listening to episodes, the app could send a micropub request to their registered website (perhaps via authentication using IndieAuth?). This would allow people to automatically make “listen posts” to their websites using Overcast and thereby help those following them to discover new and interesting podcasts. (Naturally, you might need a setting for sites that support both micropub and webmention, so that the app doesn’t send a webmention when it does a micropub post for a site that will then send a second webmention as well.)
One could also have podcast players with Micropub support that would allow text entry for commenting on particular portions of podcasts (perhaps using media fragments)? Suddenly we’re closer to commenting on individual portions of audio content in a way that’s not too dissimilar to SoundCloud’s commenting interface, but done in a more open web way.
As a further example, I maintain a list of listen posts on my personal website. Because it includes links to the original audio files, it also becomes a “faux-cast” through which friends and colleagues can subscribe via RSS to everything I’m listening to (or sub-categorizations thereof). Perhaps this also works toward helping to fix some of the discovery problem as well?
Thanks, as always, for your dedication to building one of the best podcast tools out there!
If you use Micro.blog completely from the native apps, everything works smoothly. If you communicate via the IndieWeb through webmentions, everything (mostly) works smoothly. But there is a big hiccup that is still being worked out when you communicate with Micro.blog via Webmentions. The current functionality is described here; however, it's not exhaustive and it doesn't work 100% of the time. Some of the issues are documented on this GitHub issue, and eventually we'll work out the best-practice use case.

So what if you don't care about best practices and just want to communicate with Micro.blog through Webmentions? I have a working solution on my own website. Typically in a Webmention you have a source (your post) and a target (the post you are replying to), and the Webmention endpoint used is retrieved from the target. However, with Micro.blog the target post is sometimes on WordPress or another externally hosted blog instead of Micro.blog itself. This causes an issue: if you want the Webmention to be received by Micro.blog but the target post does not advertise the Micro.blog Webmention endpoint, your post will never make it into the Micro.blog system for the externally hosted post you are replying to.

What I do is essentially a "cc/carbon copy" Webmention. First I do the standard Webmention sending procedure, and then I check whether the target's Webmention endpoint was Micro.blog's endpoint (https://micro.blog/webmention). If it is not, then I know Micro.blog did not receive the post, and I send an additional Webmention. The CC Webmention contains my post as the source and the post I'm replying to as the target, and it gets sent to the Micro.blog Webmention endpoint.

Micro.blog does a couple of things upon receiving the Webmention. First, it checks whether the source post is coming from a URL that belongs to a verified Micro.blog user. Second, it checks whether the target post already exists in the Micro.blog system. If both of those checks pass, then it will add the new post and link it up to the correct Micro.blog user as a reply to the correct Micro.blog post. This is not necessarily an easy thing to add in most Webmention systems and is not the intended final destination of cross-site replies. But if you want it to work today, this useful hack will get it working for you.
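The "carbon copy" flow can be sketched in a few lines of Python. The helper names here are my own illustration, not Micro.blog's API; only the endpoint URL (https://micro.blog/webmention) comes from the post itself.

```python
MICROBLOG_ENDPOINT = "https://micro.blog/webmention"


def webmention_payload(source, target):
    """Standard form-encoded Webmention body per the W3C spec."""
    return {"source": source, "target": target}


def needs_cc(discovered_endpoint):
    """True when the target advertises some other endpoint, meaning a
    duplicate ("cc") webmention should also be sent to Micro.blog."""
    return discovered_endpoint != MICROBLOG_ENDPOINT


# Example: the target is a WordPress blog advertising its own endpoint,
# so after the normal webmention we'd also cc Micro.blog's endpoint.
payload = webmention_payload(
    "https://mysite.example/replies/42",
    "https://example.com/original-post",
)
if needs_cc("https://example.com/wp-json/webmention/1.0/endpoint"):
    pass  # e.g. requests.post(MICROBLOG_ENDPOINT, data=payload)
```

Note that the cc webmention uses the same source and target as the original; only the endpoint it is delivered to changes.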
A useful layout of the technicalities, particularly for those running their own sites and syndicating into the micro.blog network.
Here’s a good example: http://v.hierofalco.net/2018/08/23/weird-indieweb-idea-of-the-day-guestbooks/
There’s a mention from https://ramblinggit.com/ in the comments, but it’s incredibly difficult to find that mention or what it contains, because there isn’t a linked URL on the avatar that goes to ramblinggit.com’s (Brad Enslen’s) content. In this particular case, it’s probably the most important piece of content on the page because the post itself is about a theoretical idea or “blue sky”, while the mention itself actually puts the theoretical idea into actual use and provides a great example. Sadly as it stands this value is completely hidden because of the UI. In some sense hiding the mention is also potentially contributing to unnecessary context collapse within hierofalco’s post’s comments and lessens the value of the mention itself.
While I appreciate the UX/UI desire to limit the amount of data displayed in one’s comment section since it is rarely, if ever, used, there’s a lot of value in the bi-directionality of webmentions and how they’re displayed. I’ve suggested before that newspapers, magazines, and journalism sites (not to mention academics, researchers, and government sites) might benefit from the verifiable/auditable links from their material to the reads, likes, favorites, and even listens (in the case of podcasts). If the comments sections simply have an avatar and a homepage link to the original, some of this (admittedly) marginal value is then lost. And what about when Webmention is more common? Sites could simply display avatars and homepage links without actually linking to the original location of the webmention. They might do this to imply an endorsement when none exists, and the viewer is left with the difficult task of attempting manual verification.
I do love the fact that one can facepile these reactions, but why not simply have the facepile of avatars use URLs that direct to the original reactions? Ideally each avatar should have a title attribute containing the sending account’s name and be wrapped with the URL of the original webmention itself. While these are seemingly “throwaways” for likes/favorites, I often personally post “reads” and “listens” that also have notes or commentary which I use for my own purposes and thus don’t send as explicit replies. If the facepiles for reads and listens are avatars that link back to the original, then the site’s admin as well as others can choose (or not) to click through to the original. If the site administrator prefers to display one of those as a reply, they then have the option in the interface to change the semantic linkback type from a simple response to a more “featured” response. (I’ve documented an example of this before.)
The issue becomes even more apparent in the case of “mentions” which are currently simply avatars with a homepage. There’s a much higher likelihood that there’s some valuable content (compared to a like certainly) behind this mention (though it still isn’t a specific reply). Readers of comment sections are much more likely to be interested in them and the potential conversation hiding behind them. As things stand currently, it’s a difficult and very manual thing to attempt to track down. In these cases, one should ideally be able to individually toggle facepile/not facepile for each mention depending on the content. If shown as a comment, then, yes, having the ability to show the whole thing, or an excerpted version, could be useful/desirable. If the mention is facepiled, it should be done as the others with an avatar and a wrapped URL to the mentioning content and an appropriate title (either the Identity/name of the sending site, the article title, or both if available).
For facepiled posts (and especially mentions) I’d much rather see something along the lines of:
<a title="Brad Enslen" href="https://ramblinggit.com/2018/08/new-guestbook/"><img src="https://secure.gravatar.com/avatar/0ce8b2c406e423f114e39fd4d128c31d?s=100&r=pg&d=mm" alt="Brad Enslen" width="100" height="100"/></a>
(with the appropriate microformats markup, of course.)
As an example, what happens in the future when a New York Times article receives hundreds or thousands of webmentions? Having everything be facepiled would be incredibly useful for quick display, but being able to individually follow the conversations in situ would be wildly valuable as well. The newspaper could then also choose to show or hide specific replies or mentions in a much more moderated fashion to better encourage civil discourse. And in the case where a bad actor or publisher attempts to “game” the system by displaying thousands of likes/favorites/reads: what is to prevent them from showing as many as they like as “social proof” of their popularity, when the only record behind each is an avatar and a homepage, with no actual post on a site to verify if someone chooses to audit the trail?
Perhaps even a step further in interesting UI for these semi-hidden mentions would be to do a full page preview (or hovercards), similar to how WordPress handles hovercards for Gravatars or the way the hover functionality works for links at
Going even farther from a reader’s perspective, I could also see a case that while the site admin wants to slim down on the UI of all the different types of interactions for easy readability, perhaps the reader of a comments section might want to see all the raw mentions and details for each one and scroll through them? Perhaps it would be nice to add that option in the future? As things stand if a site facepiles even dozens of mentions, it’s incredibly painful and undesirable to track their associated commentary down. What if there was UI for the reader to unpack all these (especially per reaction category as it’s more likely one would want to do it for mentions, but not likes)?
I hope that as you wean yourself away from Twitter that you regain the ability to do longer posts–I quite like your writing style. This is certainly as well-put a statement about why one should leave Twitter as one could imagine.
I remember those old days and miss the feel it used to have as well. The regrowing blogosphere around the IndieWeb and Micro.blog is the closest thing I’ve seen to that original feel since ADN or smaller networks like 10 Centuries and pnut. As I wean myself away from Twitter, I find I quite like going back to some of the peace and tranquility of reading and thinking my way through longer posts (and replies as well). Sometimes I wonder: if it doesn’t take more than ten minutes of thought and work, it’s probably not worth putting on the internet at all, and even then it’s probably questionable… I’m half tempted to register the domain squirrels.social and spin up a Mastodon instance–fortunately it would take less than the ten minute time limit, and there are enough animal-related social silos out there already.
As an aside, I love the way you’ve laid out your webmentions–quite beautiful!
The 5 R’s
I’ve seen the five R’s used many times in reference to the OER space (Open Educational Resources). They include the ability to allow others to Retain, Reuse, Revise, Remix, and/or Redistribute content with the appropriate use of licenses. These are all incredibly powerful building blocks, but I feel like one particularly important building block is missing: the ability to allow easy accretion of knowledge over time.
Some in the educational community may not be aware of some of the more technical communities that use the idea of version control for their daily work. The concept of version control is relatively simple and there are a multitude of platforms and software to effectuate it including Git, GitHub, GitLab, BitBucket, SVN, etc. In the old days of file and document maintenance one might save different versions of the same general file with increasingly different and complex names to their computer hard drive: Syllabus.doc, Syllabus_revised.doc, Syllabus_revisedagain.doc, Syllabus_Final.doc, Syllabus_Final_Final.doc, etc. and by using either the names or date and timestamps on the file one might try to puzzle out which one was the correct version of the file that they were working on.
For the better part of a decade now there has been what is known as version control software, which allows people to more easily maintain a single version of a particular document along with a timestamped list of changes kept internally, so users can create new updates or roll back to older versions of work they’ve done. While the programs themselves are internally complicated, the user interfaces are typically relatively easy to use, and in less than a day one can master most of their functionality. Most importantly, these version control systems allow many people to work on the same file or resource at a time! This means that 10 or more people can be working on a textbook, for example, at the same time. They create a fork or clone of the particular project in their personal workspace, where they work on it and periodically save their changes. Then they can push their changes back to the original or master, where they can be merged back in to make a better overall project. If there are conflicts between changes, these can be relatively easily settled without much loss of time. (For those looking for additional details, I’ve previously written Git and Version Control for Novelists, Screenwriters, Academics, and the General Public, which contains a variety of detail and resources.) Version control should be a basic tool in every educator’s digital literacy toolbox.
For the OER community, version control can add an additional level of power and capability to their particular resources. While some resources may be highly customized or single use resources, many of them, including documents like textbooks can benefit from the work of many hands in an accretive manner. If these resources are maintained in version controllable repositories then individuals can use the original 5 R’s to create their particular content.
But what if a teacher were to add several new and useful chapters to an open textbook? While it may be directly useful to their specific class, perhaps it’s also incredibly useful to the broader range of teachers and students who might use the original source in the future? If the teacher who forks the original source has a means of pushing their similarly licensed content back to the original in an easy manner, then not only will their specific class benefit from the change(s), but all future classes that might use the original source will have the benefit as well!
If you’re not sold on the value of version control, I’ll mention briefly that Microsoft spent $7.5 billion over the summer to acquire GitHub, one of the most popular version control and collaboration tools on the market. Given Microsoft’s push into the open space over the past several years, this certainly bodes well for both open source and version control for years to come.
A Math Text
As a simple example, let’s say that one professor writes the bulk of a mathematics text, but twenty colleagues all contribute handfuls of particular examples or exercises over time. Instead of individually hosting those exercises on their own sites or within their individual LMSes, where they’re unlikely to be easy for other adopters of the text to find, why not submit the changes back to the original to allow more options and flexibility to future teachers? Massive banks of problems will allow more flexibility for both teachers and students. Even if the additional problems aren’t maintained in the original text source, they’ll be easily accessible as adjunct materials for future adopters.
One of the most powerful examples of the value of accretion in this manner is Wikipedia. While it’s somewhat different in form from some of the version control systems mentioned above, Wikipedia (and most wikis for that matter) has built-in history views that allow users to see and track the trail of updates and changes over time. The Wikipedia in use today is vastly larger and more valuable than it was on its first birthday because it allows ongoing edits to improve it over time, and those improvements are logged and viewable in a version-controlled manner.
Google Docs is another example of an extensible OER platform that allows simple accretion. With the correct settings on a document, one can host an original and allow it to be available to others, who can save it to their own Google Drive or other spaces. Leaving the ability for guests to suggest changes or to edit a document allows it to potentially become better over time without decreasing the value of the original 5 R’s.
Webmentions for Update Notifications
As many open educational resources are hosted online for easy retention, reuse, revision, remixing, and/or redistribution, keeping them updated with changes can be a difficult proposition. It may not always be the case that resources are maintained on a single platform like GitHub or that users of these resources will necessarily know how to use these platforms or their functionality. As a potential “fix,” I can easily see leveraging the W3C’s recommended Webmention specification as a means of keeping a tally of changes to resources online.
Let’s say Robin keeps a copy of her OER textbook on her WordPress website where students and other educators can easily download and utilize it. More often than not, those using it are quite likely to host changed versions of it online as well. If their CMS supports the Webmention spec like WordPress does via a simple plugin, then by providing a simple URL link as a means of crediting the original source, which they’re very likely to do as required by the Creative Commons license anyway, their site will send a notification of the copy’s existence to the original. The original can then display the webmentions as traditional comments and thus provide links to the chain of branches of copies which both the original creator as well as future users can follow to find individual changes. If nothing else, the use of Webmention will provide some direct feedback to the original author(s) to indicate their materials are being used. Commonly used education facing platforms like WordPress, Drupal, WithKnown, Grav, and many others either support the Webmention spec natively or do so with very simple plugins.
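Under the hood, the copying site has to discover the original’s Webmention endpoint before it can notify it. A minimal sketch of the Link-header half of that discovery step might look like the following; per the W3C spec a full client would also fall back to `<link>`/`<a>` tags in the HTML, which this sketch omits.

```python
import re


def endpoint_from_link_header(link_header):
    """Extract a Webmention endpoint from an HTTP Link header value,
    e.g. '<https://example.com/wm>; rel="webmention"'. Returns None
    when no rel="webmention" link is present. (Deliberately minimal:
    assumes commas only separate links, not appear inside URLs.)"""
    for part in link_header.split(","):
        match = re.search(r'<([^>]*)>\s*;\s*rel="?([^";]*)"?', part.strip())
        if match and "webmention" in match.group(2).split():
            return match.group(1)
    return None
```

Once discovered, the copying site POSTs a form-encoded body with its own URL as `source` and Robin’s original as `target`, which is all a Webmention is.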
One of the issues some may see with pushing updates back to an original surrounds potential resource bloat or lack of editorial oversight. This is a common question or issue on open source version control repositories already, so there is a long and broad history of how these things are maintained or managed in cases where there is community disagreement, or where an original source’s maintainer dies, disappears, loses interest, or simply no longer maintains the original. In the end, as a community of educators, we owe it to ourselves and future colleagues to make an attempt at better maintaining, archiving, and allowing our work to accrete value over time.
The 6th R: Request Update
In summation, I’d like to request that we all start talking about the 6 R’s: the current five along with the addition of a Request update (or maybe pull Request, Recompile, or Report to keep it in the R family?) ability as well. OER is an incredibly powerful concept already, but it could be even more so with the ability to push new updates, or at least notifications of them, back to the original. Having this ability will make it far easier to spread and grow the value of the OER concept as well as to disrupt the education spaces OER evolved to improve.
Featured photo by Amador Loureiro on Unsplash
Jan, as I had mentioned to you earlier this year at WordCamp Orange County, the work on the IndieWeb concept of Microsub with respect to feed readers is continuing apace. In the last few months Aaron Parecki has opened up beta versions of his Aperture microsub server as well as limited access to his Monocle reader interface in addition to the existing Indigenous and Together reader interfaces.
My friend Jack Jamieson is in the midst of building a WordPress-specific Microsub server implementation which he’s indicated still needs more work, but which he’s self-dogfooding on his own website as a feed reader currently.
If it’s of interest, you or your colleagues at Automattic might want to take a look at it in terms of potentially adding a related Microsub reader interface as the other half of his Microsub server. Given your prior work on the beautiful WordPress.com feed reader, this may be relatively easy work which you could very quickly leverage to provide the WordPress ecosystem with an incredibly powerful feed reader interface through which users can interact directly with other sites using the W3C’s Micropub and Webmention specifications for which there are already pre-existing plugins within the repository.
For some reference I’ll include some helpful links below which might help you and others get a jump start if you wish:
- Aaron Parecki article: An IndieWeb reader: My new home on the internet
- Microsub wiki page
- Reader interfaces
While I understand most of the high level moving pieces, some of the technical specifics are beyond my coding abilities. Should you need help or assistance in cobbling together the front end, I’m positive that Jack, Aaron Parecki, David Shanske, and others in the IndieWeb chat (perhaps the #Dev or #WordPress channels–there are also bridges for using IRC, Slack, or Matrix if you prefer them) would be more than happy to lend a hand to get another implementation of a Microsub reader interface off the ground. I suspect your experience and design background could also help to shape the Microsub spec as well as potentially add things to it which others haven’t yet considered from a usability perspective.
In the meanwhile, I hope all is well with you. Warmest regards!
As #EDU522 Digital Teaching and Learning Too wraps up I find myself reflecting on my goals for the class…I mean “my goals” in the class not the hopes on the instructional design. Much more on that later. All summer, well before EDU 522 began, I set off to create a remixable template others cou...
I suspect that Dr. McVerry could have gotten further a bit faster had he built the course on WordPress directly instead of on a remixable platform. This would have made it easier to send webmention-based badges which could have been done by creating a badge page on which he could have added simple links to all of the student pages that had earned them. This would have made things a bit less manual on his part.
But at the same time, he’s now also got a remixable platform that others can borrow and use for similar courses!
I suspect that @chrismessina could do it quickly, but for those who’d like to leave Twitter for #WordPress with similar functionality (but greater flexibility and independence), I recorded a 2 hour video for an #IndieWeb set up/walk through with some high level discussion a few months back. If you can do the 5 minute install, hopefully most of the rest is downhill with some basic plugin installation and minor configuration. The end of the walk through includes a live demonstration of a conversation between a WordPress site on one domain and a WithKnown site running on another domain.
tl;dr for the video:
- WordPress base install
- IndieWeb Plugin (gives you quick access to most of the plugins below)
- The SemPress Theme or Independent Publisher Theme
- Webmention and Semantic Linkbacks plugins (for site to site communication and notification)
- IndieAuth plugin (for authenticating with Micropub, Microsub, and other related tools)
- Micropub plugin (for a variety of clients you can use to publish to your site)
- Syndication Links plugin (to indicate which sites, like Twitter, you syndicate your content to, so you can stay in touch with those left behind)
- WebSub plugin (to ping feed readers for real-time communication)
- Brid.gy for WordPress plugin (to pull in backfed comments from other social silos)
- Post Kinds plugin (for better delineating articles, status updates (notes), replies, favorites, likes, etc. with appropriate microformats markup)
- Aperture Plugin (allows you to sign into a variety of Microsub readers which also act as your stream and allow you to reply to others directly from your reading interface. This part is still a bit experimental, but the kinks are being worked out presently for a richer experience.)
Additional pieces are discussed on my IndieWeb Research Page (focusing mostly on WordPress), in addition to IWC getting started on WordPress wiki page. If you need help, hop into the IndieWeb WordPress chat.
For those watching this carefully, you’ll notice that I’ve replied to David Shanske’s post on his website using my own website and sent him a webmention which will allow him to display my reply (if he chooses). I’ve also automatically syndicated my response to the copy of his reply on Twitter which includes others who are following the conversation there. Both he and I have full copies of the conversation on our own site and originated our responses from our own websites. If you like, retweet, or comment on the copy of this post on Twitter, through the magic of Brid.gy and the Webmention spec, it will come back to the comment section on my original post (after moderation).
Hooray for web standards! And hooray for everyone in the IndieWeb who is helping to make this type of social interaction easier and simpler with every passing day.