Running time: 0h 12m 59s | Download (13.9 MB) | Subscribe by RSS | Huffduff
Overview Workflow
Posting
Researchers post their research work to their own websites (as bookmarks, reads, likes, favorites, annotations, etc.); they can post their data for others to review, and they can post their final publication to their own website as well.
Discovery/Subscription methods
The researcher’s post can send a Webmention to an aggregating website, much the way they would deposit a pre-print on a server like arXiv.org. The aggregating website can then parse the original and display the title, author(s), publication date, revision date(s), abstract, and even the full paper itself. This aggregator can act as a subscription hub (using WebSub technology) which other researchers can use to find, discover, and read the original research.
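As a rough illustration, notifying an aggregator could be as simple as discovering its Webmention endpoint and POSTing the source and target URLs. The URLs below are placeholders, and the endpoint discovery is simplified (Link header, then <link>/<a> elements); this is a sketch, not a full implementation of the Webmention spec.

```python
# Sketch: notify an aggregator about a new research post via Webmention.
# URLs are placeholders; a complete client would follow the full
# endpoint-discovery rules in the Webmention spec.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

SOURCE = "https://researcher.example/2018/my-preprint/"   # the researcher's post (placeholder)
TARGET = "https://aggregator.example/submit/physics"      # aggregator page it links to (placeholder)

def discover_webmention_endpoint(target):
    """Simplified endpoint discovery: HTTP Link header, then <link>/<a> elements."""
    resp = requests.get(target, timeout=10)
    link = resp.links.get("webmention")
    if link:
        return urljoin(target, link["url"])
    soup = BeautifulSoup(resp.text, "html.parser")
    for el in soup.find_all(["link", "a"], href=True):
        if "webmention" in (el.get("rel") or []):
            return urljoin(target, el["href"])
    return None

endpoint = discover_webmention_endpoint(TARGET)
if endpoint:
    # The notification itself is just a form-encoded POST of source and target.
    resp = requests.post(endpoint, data={"source": SOURCE, "target": TARGET})
    print(resp.status_code)
```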
Peer-review
Readers of the original research can then write about, highlight, annotate, and even reply to it on their own websites to effect peer review, which is then sent back to the original by way of Webmention as well. The peer reviewers’ work stands in public and could itself be used in evaluations for promotion and tenure.
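On the receiving end, the original article’s site (or an aggregator) verifies each incoming webmention by fetching the source and confirming it actually links to the target before counting it as a review or response. A minimal sketch of that check, with placeholder URLs:

```python
# Sketch: verify an incoming webmention before treating it as peer review.
# A real receiver would also queue the request, parse the source's
# microformats to classify the response (reply, annotation, like, ...), and store it.
import requests

def verify_webmention(source: str, target: str) -> bool:
    """Return True if the source page really links to the target."""
    try:
        resp = requests.get(source, timeout=10)
    except requests.RequestException:
        return False
    return resp.ok and target in resp.text

# Example (placeholder URLs):
# verify_webmention("https://reviewer.example/reply-to-preprint/",
#                   "https://researcher.example/2018/my-preprint/")
```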
Feedback mechanisms
Readers of original research can post metadata relating to it on their own websites, including bookmarks, reads, likes, replies, annotations, etc., and send webmentions not only to the original but to the aggregation sites, which could collect these responses and assign them point values based on interaction/engagement level (e.g. bookmarking something as “want to read” is 1 point, whereas indicating one has read it is 2 points, replying to it is 4 points, and a publication that officially cites it is 5 points). Such a scoring system could provide a better citation-based measure of the overall value of a research article in a networked world. More generally, Webmention could provide a two-way, auditable trail for citations, and that citation trail could be used in combination with something like the Vouch protocol to prevent gaming the system with spam.
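Using the point values suggested above, an aggregator’s scoring might look something like the sketch below; the response types and weights are illustrative, taken straight from the text.

```python
# Sketch: weight incoming responses per the engagement scale described above.
# Response types would come from parsing the microformats of each verified webmention source.
WEIGHTS = {
    "want-to-read": 1,  # bookmarked as "want to read"
    "read": 2,          # marked as read
    "reply": 4,         # a substantive reply/annotation
    "citation": 5,      # a publication that formally cites the work
}

def engagement_score(responses):
    """Sum the weights of a list of response types for one article."""
    return sum(WEIGHTS.get(kind, 0) for kind in responses)

print(engagement_score(["want-to-read", "read", "reply", "citation"]))  # 12
```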
Archiving
Government institutions (like the Library of Congress), universities, academic institutions, libraries, and non-profits (like the Internet Archive) can also create and maintain archival copies of digital and/or printed versions of research for future generations. This guards against researchers dying or their sites disappearing from the internet, providing better longevity.
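As one concrete (and hedged) example, anyone can request an archival snapshot of a research post through the Internet Archive’s public Save Page Now endpoint; institutional archiving pipelines would of course be more involved than this.

```python
# Sketch: request an Internet Archive snapshot of a research post.
# The /save/ endpoint triggers a Wayback Machine capture of the given URL.
import requests

post_url = "https://researcher.example/2018/my-preprint/"  # placeholder
resp = requests.get(f"https://web.archive.org/save/{post_url}", timeout=60)
print(resp.status_code)
```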
Show notes
Resources mentioned in the microcast
IndieWeb for Education
IndieWeb for Journalism
Academic samizdat
arXiv.org (an example pre-print server)
Webmention
A Domain of One’s Own
Article on A List Apart: Webmentions: Enabling Better Communication on the Internet
Syndicating to Discovery sites
Examples of similar currently operating sites:
IndieNews (sorts posts by language)
IndieWeb.xyz (sorts posts by category or tag)
First off, re: open sourcing Indieweb.xyz—I’m driving toward that. I’m
in a private repo on Github right now. But, man. It’s unnerving to open
that kind of code when… it’s running live on my server. So I am trying
to find the security holes before releasing it.
I don’t have big plans for Indieweb.xyz, but one thing I’m planning on adding
is a way to create a whitelisted sub. You basically can make a list of URLs
and those are the only URLs that can submit to the sub. Who knows, I might
use Vouch for this. I just want to use something that makes it effortless.
I wonder if this might be useful for quick collaboration. Name the sub,
link a bunch of websites together and then go to town, sharing stuff.
I also am creating a few themes for people who want to run their own
Indieweb.xyz as well, since the one I’ve got is designed for the web at
large—clearly not the arXiv crowd.
Cool ideas!
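For anyone wanting to experiment, here’s a rough sketch of how an allowlisted sub might gate submissions; the names and structure are hypothetical, not Indieweb.xyz’s actual code.

```python
# Sketch: only accept submissions to a sub from allowlisted sites.
# Names and data structures are hypothetical, not Indieweb.xyz's implementation.
from urllib.parse import urlparse

SUB_ALLOWLIST = {
    "collab-project": {"alice.example", "bob.example", "carol.example"},
}

def may_submit(sub: str, source_url: str) -> bool:
    """Accept a submission only if the source's domain is on the sub's list."""
    allowed = SUB_ALLOWLIST.get(sub)
    if allowed is None:
        return True  # open sub: anyone may submit
    return urlparse(source_url).hostname in allowed

print(may_submit("collab-project", "https://alice.example/notes/1"))  # True
print(may_submit("collab-project", "https://mallory.example/spam"))   # False
```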