Threaded conversations between WordPress and Twitter

I’ve written before about threading comments from one WordPress website to another. I’ve long suspected the same could be done with Twitter, but never really bothered with it or needed to, though I’ve often seen cases where others might have wanted to.

For a post today, I wrote on my own site and syndicated it to Twitter, then got a reply back via webmention through Brid.gy. This process happens for me almost every day, and all by itself it feels magical. The real magic, however, and I don’t think I’ve done this before or seen it done, was that I replied to the backfed comment on my site inline and manually syndicated it to Twitter using a permalink of the form http://www.example.com/standard-permalink-structure/?replytocom=57527#respond, where 57527 is the comment ID for my inline comment. (This comment ID can typically be found by hovering over the “Reply” or “Comment” button on one’s WordPress website in most browsers.)

Where to find the comment ID to provide the proper permalink to get properly nested comments backfed to your site.
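For the curious, the construction is simple enough to script. A minimal sketch, assuming a standard permalink and a known comment ID:

```python
# Minimal sketch: build a ?replytocom permalink for a threaded reply,
# given a post's permalink and the WordPress comment ID of your reply.

def reply_permalink(post_url: str, comment_id: int) -> str:
    separator = "&" if "?" in post_url else "?"
    return f"{post_url}{separator}replytocom={comment_id}#respond"

print(reply_permalink("http://www.example.com/standard-permalink-structure/", 57527))
# http://www.example.com/standard-permalink-structure/?replytocom=57527#respond
```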

When a reply to my second syndicated Twitter post came in, Brid.gy sent it as a comment to my comment AND nested it properly!

I’ve now got a nested copy of the conversation on my site that is identical to the one on Twitter.

I suspect that by carefully choosing the URL structure you syndicate to Twitter, you’ll allow yourself more control over how backfed comments from Brid.gy nest (or don’t) in your response section on your site.

Perhaps even more powerfully, non-WordPress-based websites could also use these permalink structures when composing their replies to WordPress sites so that their replies nest properly too. I think I’ve seen Aaron Parecki do this in the wild.

Since the WordPress Webmention plugin now includes functionality for sending webmentions directly from the comments section, I’ll have to double-check that the microformats on my comments are properly marked up to see if I can start leveraging Brid.gy’s publish functionality to send threaded replies to Twitter automatically. Or perhaps work on something that will allow automatic replies via the Twitter API. Hmmm…
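In the meantime, triggering Brid.gy Publish by hand is straightforward, since it is itself driven by webmentions. A rough sketch of the idea; the endpoint URLs below are taken from my reading of Brid.gy’s documentation, so treat them as assumptions to verify:

```python
# Rough sketch of triggering Brid.gy Publish manually: send a webmention whose
# source is your reply's permalink and whose target is Brid.gy's Twitter
# publish target. Verify both endpoint URLs against Brid.gy's documentation.
import requests

def bridgy_publish(source_url: str) -> requests.Response:
    return requests.post(
        "https://brid.gy/publish/webmention",
        data={
            "source": source_url,  # the (comment) permalink on your site
            "target": "https://brid.gy/publish/twitter",
        },
    )

response = bridgy_publish(
    "http://www.example.com/standard-permalink-structure/?replytocom=57527#respond"
)
print(response.status_code, response.text)
```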

Despite the fact that this could all be a bit more automated, the fact that one can easily do threaded replies between WordPress and Twitter makes me quite happy.

Thread onward!

For more on my IndieWeb explorations with Twitter, see my IndieWeb Research page.

👓 Self-platforming, DoOO, and academic workflows | Tim Clarke

Read Self-platforming, DoOO, and academic workflows by Tim Clarke (simulacrumbly.com)
I see self-platforming as an expression of my own digital citizenship, and I also see it as my deliberate answer to the call for digital sanctuary.  The frequency and extent to which educators urge students onto extractive applications is of great concern.  Self-platforming offers opportunities to benefit from the collaborative, hyper-textual, asynchronous, and distributed qualities of the web, while diminishing the costs — often hidden to us — of working on proprietary and extractive platforms.

I love that Tim is looking closely at how the choices of tools he’s using can potentially impact his students/readers. I’ve also been in the boat he’s in: trying to wrangle some simple data in a way that makes it easy to collect, read, and disseminate content for myself, students, and other audiences.

Needing to rely on five or more outside services (Twitter, Instapaper, Pinboard, bit.ly, and finally even Canvas, some of which are paid services) seems painful and excessive. He mentions the amount and level of detail he’s potentially giving away to bit.ly alone, but each of these services takes a bite out of the process. This doesn’t even take into consideration that Instapaper is actually a subsidiary of Betaworks, the company that owns and controls bit.ly, so there’s even more personal detail being consumed and aggregated there than he may be aware of. All this is compounded by the fact that Instapaper is currently blocking its users within the EU entirely because it hasn’t been able to comply with the privacy and personal data restrictions of the GDPR. Naturally, there are currently no such restrictions on it in the U.S. or other parts of the world.

I (and many others) have been hacking away for the past several years at taming our personal data so that we can own and control it ourselves. And isn’t this part of the point of having a domain of one’s own? Even his solution of using Shaarli to self-host his own bookmarks, while interesting, seems painful to me in some respects. Though he owns and controls the data, because it sits on a separate domain it’s not as tightly integrated into his primary site or as easily searched. To be even more useful, it needs additional coding and integration into his primary site, which appears to run on WordPress. As it stands, it looks more like he’s spending additional time running his own separate, free-standing social media silo just for bookmarks. Why not have it as part of his primary personal hub online?

I’ve been watching a growing trend of folks within both the IndieWeb/DoOO and edtech spaces using their websites like a commonplace book to host the majority of their own online and social-related data. This makes it all easier to find, reference, consume, and even create new content from in the future. On their own sites, they’re conglomerating all their data about what they’re reading, highlighting, annotating, bookmarking, liking, favoriting, and watching in addition to their notes and thoughts. When appropriate, they’re sharing that content publicly (more than half my website is hidden privately on my back end, but still searchable and useful only to me) or even syndicating it out to social sites like Twitter, Facebook, Flickr, Instapaper, et al. to share it within other networks.

Other examples of educators and researchers doing this include Aaron Davis, Greg McVerry, John Johnson, and more recently W. Ian O’Byrne and Cathie LeBlanc, among many others. Some have chosen to do it on their primary site while others are experimenting with two or even more. I would hope that as Tim explores, he continues to document his process as well as the pros and cons of what he does and the resultant effects. But I also hope he discovers this growing community of scholars, teachers, programmers, and experimenters who have been playing in the same space, so that he knows he’s not alone, and perhaps so he can avoid going down some rabbit holes a few of us have explored all too well. Or, to use what may be a familiar bit of lingo to him, I hope he joins our impromptu but growing personal learning network (PLN).

👓 MyData – a Nordic model for human-centered personal data | IIS

Read MyData – a Nordic model for human-centered personal data (iis.se)
MyData is the name of a human-centered approach to personal data, and Antti Poikola is one of its main initiators. The concept is well known in the open data arena in Finland, but now Antti Poikola wants it to be used more in the other Nordic countries as well.

📺 re:publica 2018 – Jim Groom: Domain of One’s Own: Reclaim Your Data | YouTube

Watched re:publica 2018: Domain of One's Own: Reclaim Your Data by Jim Groom from YouTube

A Domain of One's Own is an international initiative in higher education to give students and faculty more control over their personal data. The movement started at the University of Mary Washington in 2012, and has since grown to tens of thousands of faculty and students across hundreds of universities. The first part of this presentation (5-10 minutes) will provide a brief overview of how these Domains projects enable not only data portability for coursework, but also a reflective sense of what a digital identity might mean in terms of privacy and data ownership.

The second part of this presentation will explore how Domain of One's Own could provide a powerful example of how higher education could harness application programming interfaces (APIs) to build a more user-empowered data ecosystem at universities. The initial imaginings of this work have already begun at Brigham Young University in collaboration with Reclaim Hosting, and we will share a blueprint of what a vision of the Personal API could mean for a human-centric data future in the realm of education and beyond.

A short talk at the re:publica conference in Germany which touches on Domain of One’s Own, an initiative very similar in spirit to the broader IndieWeb movement. POSSE makes a brief appearance at the end of the presentation, though just on a slide with an implicit definition rather than a more full-fledged discussion.

Toward the end, Groom mentions MyData, a Nordic model for human-centered personal data management and processing, which I’d not previously heard of, but which has some interesting resources that look like they might dovetail into what those in the IndieWeb are working on. I’m curious whether any of the folks in the EU, like Sebastian Greger, have come across them, and what their thoughts are on the idea/model they’ve proposed. It looks like they’ve got an interesting conference coming up at the end of August in Helsinki. There’s a white paper outlining a piece of their philosophy, which I’ll link to below:

MyData: A Nordic Model for human-centered personal data management and processing by Antti Poikola, Kai Kuikkaniemi, and Harri Honko

This white paper presents a framework, principles, and a model for a human-centric approach to the managing and processing of personal information. The approach – defined as MyData – is based on the right of individuals to access the data collected about them. The core idea is that individuals should be in control of their own data. The MyData approach aims at strengthening digital human rights while opening new opportunities for businesses to develop innovative personal data based services built on mutual trust.

Based on a quick overview, this is somewhat similar to a model I’ve considered and is reminiscent of some ideas I’ve been harboring about applications of this type of data to the journalism sphere as well.

👓 Instagram import in Micro.blog | Manton Reece

Read Instagram import in Micro.blog by Manton Reece (manton.org)
Micro.blog for Mac version 1.3 is now available. It features a brand new import feature for uploading an archive of Instagram photos to your blog.

This is an awesome development. I do wish it weren’t so macOS-centric, but hopefully it’s one of many export/import tools that will show up to improve people’s ownership and portability of their data.

Today I finally ran into a particular IndieWeb problem I knew would eventually come: uploading so much content that I’d need to bump up the storage capacity of the server hosting my online presence. The 12GB cap I ran into brings into much sharper focus the amount of content I post online.

While Facebook and Twitter may be proverbially endless buckets, I still prefer doing it my way, even with small inconveniences like this.

🎧 Episode 3: Freedom from Facebook | Clevercast

Listened to Episode 3: Freedom from Facebook by Jonathan LaCour from cleverca.st

This time on clevercast, I discuss my departure from Facebook, including an overview of how I liberated my data from the social giant, and moved it to my own website.

Here are some of the tools that I mention in today’s episode:

Also check out my On This Day page and my Subscribe page, which includes my daily email syndication of my website activity.

There’s a lot going on here and a lot to unpack for such a short episode. This presents an outline at best of what I’m sure was 10 or more hours of work. One day soon, I hope, we’ll have some better automated tools for exporting data from Facebook and doing something actually useful with it.

🎧 Episode 2: Restoration | Clevercast

Listened to Episode 2: Restoration by Jonathan LaCour from cleverca.st
This time, on clevercast, I reminisce about one of my earliest personal websites. What happened to its content? How did I create it? Is there any chance of restoring it back to greatness?

I’ve still got a ways to go to recover some of my older content, but Jonathan has really done some interesting work in this area.

An IndieWeb Podcast: Episode 1 “Leaving Facebook”


This first half of the episode was originally recorded in March, abruptly ended, and then was not completed until April due to scheduling.

It’s been reported that Cambridge Analytica improperly obtained and used data from Facebook users, an event which has called into question the way that Facebook handles data. David Shanske and I discuss some of the implications from an IndieWeb perspective and where you might go if you decide to leave Facebook.

Show Notes

Articles

The originating articles that kicked off the Facebook/Cambridge Analytica issue:

Related articles and pages

Recent Documented Facebook Quitters

Jonathan LaCour, Eddie Hinkle, Natalie Wolchover, Cher, Tea Leoni, Adam McKay, Leo Laporte, and Jim Carrey

New York Times Profile of multiple quitters: https://www.nytimes.com/2018/03/21/technology/users-abandon-facebook.html

IndieWeb Wiki related pages of interest

Potential places to move to when leaving Facebook

You’ve made the decision to leave Facebook? Your next question is likely to be: to move where? Along with the links above, we’ve compiled a short list of IndieWeb-related places that might make solid options.

Because today’s date is 4/04, some in the IndieWeb are celebrating a World-wide Website Day of Remembrance to remember and recognize site-deaths that now 404.

What is your favorite site that’s disappeared? What’s your favorite 404 page? What site do you think will disappear before we celebrate 404 again next April 04?

Replied to a tweet by Matt Reed (Twitter)
Wish Twitter would distinguish between "favorite" and "save for later." People could infer some pretty misleading things...

Intent on Twitter is often so muddled that this is the last thing some might worry about. (Yet it’s still a tremendous tool.) Pocket has browser extensions, and I know the one for Chrome has a setting that toggles an icon on Twitter allowing you to bookmark things to read later directly into your Pocket account, which is generally a reasonable experience.

Pocket’s browser extension can add a much better “save to read for later” button to one’s Twitter feed.

I think the much stronger and better solution for one’s personal commonplace book is to simply add these intents to one’s own website and either favorite, bookmark, mark as read, repost, reply to, annotate, highlight, or just about “anything else” them there, then syndicate the appropriate response to Twitter separately. (Examples: bookmarks and reads.) This makes it much more difficult to muddle the intent. It’ll also give you a much more highly searchable set of data that you own on your own website.

Why wait around for Twitter or another social service to build the tools you want/need when it’s relatively easy to cobble them together yourself on a variety of open source platforms? While you’re at it, remove some of the other limitations, like the 280-character cap, as well…

👓 All the URLs you need to block to *actually* stop using Facebook | Quartz

Read All the URLs you need to block to *actually* stop using Facebook (Quartz)

Judging just by the sheer bulk of URLs, this gives a more serious view of how ingrained Facebook is in tracking your online life.

Organizing my research related reading

There’s so much great material out there to read and not nearly enough time. The question becomes: “How to best organize it all, so you can read even more?”

I just came across a tweet from Michael Nielsen about the topic, which is far deeper than even a few tweets could do justice to, so I thought I’d sketch out a few basic ideas about how I’ve been approaching it over the last decade or so. Ideally I’d like to circle back around to this and better document more of the individual aspects or maybe even make a short video, but for now this will hopefully suffice to add to the conversation Michael has started.

Keep in mind that this is an evolving system which I still haven’t completely perfected (and may never), but to a great extent it works relatively well and I still easily have the ability to modify and improve it.

Overall Structure

The first piece of the overarching puzzle is to have a general structure for finding, collecting, triaging, and then processing all of the data. I’ve essentially built a simple funnel system for collecting all the basic data in the quickest manner possible. With the basics down, I can later skim through various portions to pick out the things I think are the most valuable and move them along to the next step. Ultimately I end up reading the best pieces on which I make copious notes and highlights. I’m still slowly trying to perfect the system for best keeping all this additional data as well.

Since I’ve seen so many apps and websites come and go over the years and lost lots of data to them, I far prefer to use my own personal website for doing a lot of the basic collection, particularly for online material. Toward this end, I use a variety of web services, RSS feeds, and bookmarklets to quickly accumulate the important pieces into my personal website which I use like a modern day commonplace book.

Collecting

In general, I’ve been using the Inoreader feed reader to track a large variety of RSS feeds from various clearinghouse sources (including things like ProQuest custom searches) down to individual researchers’ blogs as a means of quickly pulling in large amounts of research material. It’s one of the more flexible readers out there, with a huge number of useful features, including the ability to subscribe to OPML files, which many readers don’t support.

As a simple example, arXiv.org has an RSS feed for the topic of “information theory” at http://arxiv.org/rss/math.IT which I subscribe to. I can quickly browse through the feed and, based on titles and/or abstracts, “star” the items I find most interesting within the reader. I have a custom recipe set up for the IFTTT.com service that pulls in all these starred articles and creates new posts for them on my WordPress blog. To these posts I can add a variety of metadata, including top-level categories and lower-level tags, along with any other metadata I’m interested in.
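For those who don’t use IFTTT, roughly the same funnel can be approximated with a short script. A hedged sketch, not my actual recipe: it assumes a WordPress site with the REST API enabled and an application password, where SITE, USER, and APP_PASSWORD are all placeholders:

```python
# Illustrative sketch (not the IFTTT recipe itself): pull the arXiv
# "information theory" feed and create a draft WordPress post per item via
# the WordPress REST API. SITE, USER, and APP_PASSWORD are placeholders.
import feedparser
import requests

FEED_URL = "http://arxiv.org/rss/math.IT"
SITE = "https://www.example.com"   # your WordPress site (hypothetical)
AUTH = ("USER", "APP_PASSWORD")    # a WordPress application password

for entry in feedparser.parse(FEED_URL).entries:
    requests.post(
        f"{SITE}/wp-json/wp/v2/posts",
        auth=AUTH,
        json={
            "title": entry.title,
            "content": f'<a href="{entry.link}">{entry.link}</a><br/>{entry.summary}',
            "status": "draft",     # saved for triage rather than published
        },
    )
```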

I have similar incoming funnel entry points via many other web services as well. On platforms like Twitter, for example, I have workflows that use services like IFTTT.com or Zapier to push URLs easily to my website. I can quickly “like” a tweet and a background process will suck that tweet and any URLs within it into my system for future processing. This type of workflow extends to a variety of sites where I might consume potential material I want to read and process. (Think academic social services like Mendeley, Academia.edu, or Diigo, or even less academic ones like Twitter, LinkedIn, etc.) Many of these services have storage of their own and also have simple browser bookmarklets that allow me to add material to them. So with a quick click, it’s saved to the service and then automatically ported into my website almost without friction.
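One possible shape for that background process, sketched under the assumption that the automation service can POST a URL to an endpoint you control; the route, token, site URL, and credentials below are all hypothetical:

```python
# Hypothetical webhook receiver: IFTTT/Zapier POSTs a liked-tweet URL here,
# and it is re-posted to the site as a draft bookmark for later triage.
import requests
from flask import Flask, abort, request

app = Flask(__name__)
SHARED_TOKEN = "CHANGE-ME"          # shared secret with the automation service
SITE = "https://www.example.com"    # your WordPress site (hypothetical)
AUTH = ("USER", "APP_PASSWORD")

@app.post("/webhook/bookmark")
def bookmark():
    if request.headers.get("X-Token") != SHARED_TOKEN:
        abort(403)                  # reject posts without the shared secret
    url = request.form["url"]
    requests.post(
        f"{SITE}/wp-json/wp/v2/posts",
        auth=AUTH,
        json={"title": url, "content": f'<a href="{url}">{url}</a>', "status": "draft"},
    )
    return "", 202
```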

My WordPress-based site uses the Post Kinds Plugin which takes incoming website URLs and does a very solid job of parsing those pages to extract much of the primary metadata I’d like to have without requiring a lot of work. For well structured web pages, it’ll pull in the page title, authors, date published, date updated, synopsis of the page, categories and tags, and other bits of data automatically. All these fields are also editable and searchable. Further, the plugin allows me to configure simple browser bookmarklets so that with a simple click on a web page, I can pull its URL and associated metadata into my website almost instantaneously. I can then add a note or two about what made me interested in the piece and save it for later.
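To give a flavor of the kind of parsing involved (the actual plugin is PHP and handles many more formats, including microformats), here’s a rough Python approximation that pulls common Open Graph and standard meta tags from a page:

```python
# Rough approximation of the kind of parsing Post Kinds does: fetch a page
# and pull common metadata from Open Graph and standard meta tags.
import requests
from bs4 import BeautifulSoup

def extract_metadata(url: str) -> dict:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    def meta(*names):
        # Check both property= (Open Graph) and name= (standard) attributes.
        for name in names:
            tag = (soup.find("meta", attrs={"property": name})
                   or soup.find("meta", attrs={"name": name}))
            if tag and tag.get("content"):
                return tag["content"]
        return None

    return {
        "url": url,
        "title": meta("og:title") or (soup.title.string if soup.title else None),
        "author": meta("article:author", "author"),
        "summary": meta("og:description", "description"),
        "published": meta("article:published_time"),
    }
```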

Note here that I’m usually more interested in saving material for later as quickly as I possibly can. In this part of the process I’m rarely interested in reading anything immediately; I’m most interested in finding it, collecting it for later, and moving on to the next thing. This is also highly useful for things I find during my busy day that I can’t find time for at the moment.

As an example, here’s a book I bookmarked to read simply by clicking “like” on a tweet I came across late last year. You’ll notice at the bottom of the post that I’ve optionally syndicated copies of the post to other platforms to “spread the wealth,” as it were. Perhaps others following me via other means may see it and find it useful as well.

Triaging

At regular intervals during the week I’ll sit down for an hour or two to triage all the papers and material I’ve been sucking into my website. This typically involves reading through lots of abstracts in a bit more detail to figure out what I want to read now and what I’d like to read at a later date. I can delete the irrelevant material if I choose, or I can add follow-up dates to custom fields for later reminders.

Slowly but surely I’m funneling down a tremendous amount of potential material into a smaller, more manageable amount that I’m truly interested in reading on a more in-depth basis.

Document storage

Calibre with GoodReads sync

Even for things I’ve winnowed down, there is still a relatively large amount of material, much of which I’ll want to save and personally archive. For a lot of this function I rely on the free multi-platform desktop application Calibre. It’s essentially an iTunes-like interface, but built specifically for e-books and other documents.

Within it I maintain a small handful of libraries: one for personal e-books, one for research-related textbooks/e-books, and another for journal articles. It has a very solid interface and is extremely flexible in terms of configuration and customization. You can create a large number of custom libraries and create your own searchable and sortable fields with a huge variety of metadata. It often does a reasonable job of importing e-books, .pdf files, and other digital media and parsing out their metadata, which saves one from needing to do some of that work manually. With some well-maintained metadata, one can very quickly search and sort a huge number of documents as well as quickly prioritize them for action. Additionally, the system does a pretty solid job of converting files from one format to another, so that things like converting an .epub file into .mobi format for Kindle are automatic.
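Calibre also ships real command-line tools (calibredb, ebook-convert), so much of this can be scripted. A small sketch with placeholder paths, assuming Calibre is installed and on the PATH:

```python
# Sketch of scripting Calibre's command-line tools: add a paper to a library
# and convert an .epub to .mobi for Kindle. Paths are placeholders.
import subprocess

LIBRARY = "/path/to/calibre/journal-articles"  # hypothetical library path

# Add a PDF; Calibre parses what metadata it can from the file itself.
subprocess.run(["calibredb", "add", "paper.pdf", "--with-library", LIBRARY], check=True)

# Convert for Kindle; Calibre infers input/output formats from the extensions.
subprocess.run(["ebook-convert", "book.epub", "book.mobi"], check=True)
```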

Calibre stores the physical documents either in local computer storage or, even better, in the cloud using any of a variety of services including Dropbox, OneDrive, etc., so that one can view one’s documents from a variety of locations (home, work, travel, tablet, etc.).

I’ve been a very heavy user of GoodReads.com for years to bookmark and organize my physical and e-book libraries and anti-libraries. Calibre has an exceptional plugin for GoodReads that syncs data across the two. This plugin (and a few others) is exceptionally good at pulling in missing metadata to minimize the amount that must be entered by hand, which can be tedious.

Within Calibre I can manage my physical books, e-books, journal articles, and a huge variety of other document-related forms and formats. I can also use it to further triage the things I intend to read and order them to the nth degree. My current Calibre libraries have over 10,000 documents in them, including over 2,500 textbooks, as well as records of most of my 1,000+ physical books. Calibre can also be used to add records for documents one would ultimately like to acquire but doesn’t currently have access to.

BibTeX and reference management

In addition to everything else, Calibre has some well-customized pieces for dovetailing all its metadata into a reference management system. It allows one to export data in a variety of formats for document publishing and reference management, including BibTeX among many others.
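For instance, calibredb can emit a catalog of a library whose format is inferred from the output file’s extension, .bib included. A sketch, again with a placeholder library path:

```python
# Sketch: export a BibTeX catalog of a Calibre library via its CLI.
# The output format is inferred from the file extension (.bib for BibTeX).
import subprocess

subprocess.run(
    ["calibredb", "catalog", "library.bib",
     "--with-library", "/path/to/calibre/journal-articles"],
    check=True,
)
```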

Reading, Annotations, Highlights

Once I’ve winnowed down the material I’m interested in, it’s time to start actually reading. I’ll often use Calibre to send my documents directly to my Kindle or another e-reading device, but one can also read them on one’s desktop with a variety of readers, or even from within Calibre itself. With a click or two, I can automatically email documents to my Kindle, and Calibre will auto-format them appropriately before doing so.

Typically I’ll send them to my Kindle which allows me a variety of easy methods for adding highlights and marginalia. Sometimes I’ll read .pdf files via desktop and use Adobe to add highlights and marginalia as well. When I’m done with a .pdf file, I’ll just resave it (with all the additions) back into my Calibre library.

Exporting highlights/marginalia to my website

For Kindle-related documents, once I’m finished, I’ll use direct text file export or tools like clippings.io to export my highlights and marginalia for a particular text into simple HTML and import it into my website system along with all my other data. I’ve briefly written about some of this before, though I ought to document it better. All of this then becomes very easily searchable and sortable for future use as well.
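As an illustration of the direct-export route, here’s a minimal sketch that assumes the Kindle’s standard “My Clippings.txt” layout (entries separated by lines of equals signs) and emits simple HTML blockquotes:

```python
# Minimal sketch: convert a Kindle "My Clippings.txt" file into simple HTML
# blockquotes suitable for pasting or importing into a post.
from html import escape

def clippings_to_html(path: str) -> str:
    entries = open(path, encoding="utf-8-sig").read().split("==========")
    chunks = []
    for entry in entries:
        lines = [l.strip() for l in entry.strip().splitlines() if l.strip()]
        if len(lines) < 3:
            continue  # skip bookmarks and empty entries
        title, meta, text = lines[0], lines[1], " ".join(lines[2:])
        chunks.append(
            f"<blockquote>{escape(text)}"
            f"<footer>{escape(title)} ({escape(meta)})</footer></blockquote>"
        )
    return "\n".join(chunks)

print(clippings_to_html("My Clippings.txt"))
```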

Here’s an example of some public notes, highlights, and other marginalia I’ve posted in the past.

Synthesis

Over time, I’ve built up a huge amount of research-related data in my personal online commonplace book that is highly searchable and sortable! I also have the option to make these posts and pages public, private, or even password protected. I can create accounts on my site for collaborators to use and view private material that isn’t publicly available. I can also share posts via social media and use standards like Webmention and tools like Brid.gy so that comments and interactions with these pieces on platforms like Facebook, Twitter, Google+, and others are imported back to the relevant portions of my site as comments. (I’m doing it with this post, so feel free to try it out yourself by commenting on one of the syndicated copies.)

Now when I’m ready to begin writing something about what I’ve read, I’ve got all the relevant pieces, notes, and metadata in one centralized location on my website. Synthesis becomes much easier. I can even have open drafts of things as I’m reading and begin laying things out there directly if I choose. Because it’s all stored online, it’s eminently available from almost anywhere I can connect to the web. As an example, I used a few portions of this workflow to write this very post.

Continued work

Naturally, not all of this is static; it continues to improve and evolve over time. In particular, I’m doing continued work on my personal website so that I’m able to own as much of the workflow and data there as possible. Ideally I’d love to have all of the Calibre-related pieces on my website as well.

Earlier this week I even had conversations about creating new post types on my website related to things I want to read, to better display and document them explicitly. When I can, I try to document some of these pieces either here on my own website or in various places on the IndieWeb wiki. In fact, the IndieWeb for Education page might be a good place to start browsing for those interested.

One of the added benefits of having a lot of this data on my own website is that it not only serves as my research/data platform, but it also has the traditional ability to serve as a publishing and distribution platform!

Currently, I’m doing most of my research-related work in private or draft form on the back end of my website, so it’s not always publicly available, though I often think I should make more of it public for the value of its aggregated nature as well as the benefit it might provide to improving scientific communication. Just think: if you were interested in some of the obscure topics I am, you could have a pre-curated RSS feed of all the things I’ve filtered through piped into your own system. Now multiply this across hundreds of thousands of other scientists. Michael Nielsen posts some useful things to his Twitter feed and his website, but what I wouldn’t give to see far more of who and what he’s following, bookmarking, and actually reading. While many might find these minutiae tedious, I guarantee that people in his associated fields would find some serious value in them.

I’ve tried hundreds of other apps and tools over the years, but more often than not they only cover a small fraction of the necessary moving pieces within the much larger apparatus that a working researcher and writer requires. This often means using dozens of specialized tools with a huge duplication of data and effort across them. It also presumes these tools will be around for more than a few years and will allow easy import/export of one’s hard-fought data and the time invested in using them.

If you’re aware of something interesting in this space that might be useful, I’m happy to take a look at it. Even if I might not use the service itself, perhaps it’s got a piece of functionality that I can recreate in my own site and workflow somehow.

If you’d like help in building and fleshing out a system similar to the one I’ve outlined above, I’m happy to help do that too.
