Reply to Owning my Reading and 100 Days of Reading Chapters by Eddie Hinkle

Owning my Reading and 100 Days of Reading Chapters by Eddie Hinkle (eddiehinkle.com)
One of my goals in 2018 is to own my reading data rather than using Goodreads for all of that information. This will allow me to track information the way I want rather than have to do it like Goodreads wants me to.

Welcome to the club. It seems like there’s a growing interest in owning read posts lately. Doing a 100-day experiment seems like a brilliant way to self-dogfood it while simultaneously getting more material read. I may have to steal the idea!

Reply to Creating an Archive of a Set of Tweets by Aaron Davis

Creating an Archive of a Set of Tweets by Aaron Davis (collect.readwriterespond.com)
I really like Barnes’ intent to share. I just wonder if there is a means of owning these notes. Ideally, taking a POSSE approach, she might live blog and post this to Twitter. I vaguely remember Chris Aldrich sharing something about this recently, but the reference escapes me. This is also limited with her blog being located at WP.com. I therefore wondered about the option of pasting the content of the tweets into a blog as an archive.

Aaron, the process I use for taking longer streams of Tweets to own them (via PESOS) has Kevin Marks’ excellent tool Noter Live at its core. Noter Live allows you to log in via Twitter and tweet(storm) from it directly. As its original intent was for live-tweeting at conferences and events, it has some useful built-in tools for storing the names of multiple speakers (in advance, or even quickly on the fly) as well as auto-hashtagging your conversation. (I love it so much I took the time to write and contribute a user manual.)

The best part is that it not only organically threads your tweets together into one continuing conversation, but it also gives you a modified output including the appropriate HTML and microformats classes so that you can cut and paste the entire thread and simply dump it into your favorite CMS and publish it as a standard blog post. (It also strips out the hashtags and repeated speaker references in a nice way.) With a small modification, you can also get your site to add hovercards to your post as well. I’ll also note in passing that it has recently been updated to support the longer 280-character limit.
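To give a rough sense of what that output involves (this is only an illustrative sketch with placeholder names, text, and URLs, and not Noter Live’s literal markup), each tweet in a thread essentially becomes a small h-entry whose author, content, timestamp, and original tweet URL carry microformats classes:

<div class="h-entry">
<a class="p-author h-card" href="https://twitter.com/example">Example Speaker</a>
<p class="e-content">The text of the tweet, with hashtags and repeated speaker prefixes cleaned up.</p>
<a class="u-url" href="https://twitter.com/example/status/123456789"><time class="dt-published" datetime="2016-10-13T10:06:00+00:00">October 13, 2016</time></a>
</div>

A whole thread is then just a series of blocks like this stacked together, which is why the result can be pasted into a CMS and published as a single post.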

The canonical version I use as an example of what this all looks like is this post: Notes from Day 1 of Dodging the Memory Hole: Saving Online News | Thursday, October 13, 2016.

Another shorter tweetstorm which also has u-syndication links for all of the individual tweets can be found at Indieweb and Education Tweetstorm. This one has the benefit of pulling in all the resultant conversations around my tweetstorm with backfeed from Brid.gy, though they’re not necessarily threaded properly in the comments the way I would ultimately like. As you mention in your last paragraph, having the links to the syndicated copies would be useful; I’ll note that I’ve already submitted this as an issue to Noter Live’s GitHub repo. In some sense, the entire Twitter thread is connected, so having the original tweet URL gives you most of the context, though it presently isn’t enough for all of the backfeed by common methods (Webmentions + Brid.gy).

I’ll also note that I’ve recently heard from a reputable source about a WordPress specific tool called Publishiza that may be useful in this way, but I’ve not had the chance to play with it yet myself.


Clearly, you can embed Tweets, often by adding the URL. However, there are more and more people deleting their Tweets and if you embed something that is deleted, this content is then lost. (Not sure where this leaves Storify etc.)

It’s interesting that you ask where this leaves Storify, because literally as I was reading your piece, I got a pop-up notification announcing that Storify was going to be shut down altogether!! (It sounds to me like you may have been unaware when you wrote your note. So Storify and those using it are in more dire circumstances than you had imagined.)

It’s yet another reason in a very long list why one needs to have and own their own digital presence.

As for people deleting their tweets, I’ll note that by doing a full embed (instead of just using a URL) from Twitter to WordPress (or by using Noter Live), the original text is preserved so that even if the original is deleted, a full archival copy still exists.
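For reference, a full Twitter embed is essentially a marked-up blockquote of the tweet’s text plus a script that dresses it up; it’s roughly of this shape (the handle, status ID, and wording below are placeholders):

<blockquote class="twitter-tweet">
<p lang="en" dir="ltr">The full text of the tweet lives here as plain HTML.</p>
&mdash; Example Person (@example) <a href="https://twitter.com/example/status/123456789">January 1, 2018</a>
</blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

Even if Twitter’s script can no longer find the original tweet, the blockquote and its text remain in your post.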

Also somewhat related in flavor to the mechanism you’re discussing, I often use Hypothesis to comment on, highlight, and annotate web pages for academic/research uses. To save these annotations, I’ll add hashtags to the annotations within Hypothesis and then use Kris Shaffer’s excellent Hypothesis Aggregator plugin to parse the data and pull in the specific parts I want. Though here again, either Hypothesis as a service or the plugin itself may ultimately fail, so I will copy/paste the raw HTML from its output to post onto my site for future safekeeping. In some sense I’m using the plugin as a simple tool to make the transcription and data transport much easier/quicker.

I hope these tips make it easier for you and others to better collect your content and display it for later consumption and archival use.


Reply to Aggregating the Decentralized Social Web by Jason Green

Aggregating the Decentralized Social Web by Jason Green (þoht-hord)
There are actually three problems to solve, reading, which is relatively easy, posting, which is harder, and social graph management, which is quite complex.

Some brief thoughts:

There are actually three problems to solve, reading, which is relatively easy, posting, which is harder, and social graph management, which is quite complex.

I might submit that posting is possibly the easiest of the three and that the reader problem is the most difficult. This is based on the tremendous number of platforms and CMSs on which one can post, compared with the dearth of feed readers in existence.

Managing your social graph

Something akin to a following list could help this. Or a modified version of OPML subscription lists could work; they just need to be opened up a tad. Some are working on the idea of an open Microsub spec which could be transformative as well: https://indieweb.org/Microsub-spec


How do we decentralize the web without so decentralizing our own social presence that it becomes unmanageable?

You’ve already got a huge head start in doing this with your own website. Why bother to have thousands of accounts (trust me when I say this) when you could have one? Then, as you suggest, password-protected RSS (or other) feeds out to others could allow you to control which audiences get to see which content on your own site.


It looks as if Withknown has made some progress in this area with syndication plugins.

WordPress has lots of ways to syndicate content too. Ideally if everyone had their own website as a central hub, the idea of syndication would ultimately die out altogether. At best syndication is really just a stopgap until that point.


Subscribing to my personal timeline(s) with my favorite RSS reader would bring everything together,

I’ve written some thoughts about how feed readers could continue to evolve for the open web here: http://boffosocko.com/2017/06/09/how-feed-readers-can-grow-market-share-and-take-over-social-media/


listed items chronologically independent of source

Having a variety of ways to chop and dice up content is really required. We need more means of filtering content, not fewer. I know many who have given up on chronological feed reading. While it can be nice, there are many other useful means as well.


A reply to Aaron Davis about h-cards

A Further Reply to Chris Aldrich in regards to the IndieWeb by Aaron Davis (collect.readwriterespond.com)
I know that I have provided my perspective already (https://readwriterespond.com/2017/10/indieweb-reflections/), but I have been doing a lot of thinking about it of late. There are so many elements that just feel so foreign. Take for example H-Cards.

Aaron, thanks for your continued thoughts on my post. These are some good observations. Interestingly, earlier this month, on November 9th, I had noticed that the h-card page on the wiki was one of the few around that had absolutely no section heading for IndieWeb Examples, which is nearly ubiquitous on most other pages. (Examples of what others have done are not only a helpful guide, but also help to push the limits of what might be possible next.) I naturally added a section for them, added myself, and made a call in the chat for others to do the same. One of the bits of feedback that resulted was that the microformats.org wiki had a large number of examples, and that this was one of the reasons the IndieWeb wiki had none. Naturally, for people in generation two and beyond this may be an issue as they’re potentially less likely to go looking for this information on another website. As of this afternoon, there’s now at least a link on that page that also points to the microformats wiki page for those other examples. I’ve also added a few other bits which may be helpful with regard to h-cards for the beginner.

As the IndieWeb continues onward, part of the underlying foundation is that “Each generation is expected to lower barriers for adoption successively for the next generation.” – from the Generations page on the IndieWeb Wiki.

To date, the majority of people in the movement are developers or programmers by trade, but increasingly there are people from generations 2 and 3, and even many from generation 4, who are starting to take a look at what is now possible on the web that wasn’t just ten, twenty, or even thirty-six months ago. Many are not only looking, but, like you, are spending the time, effort, and energy to implement what they’re able to and simultaneously spreading the word to larger circles.

As someone who personally identifies as being on the border of generations 1 and 2, I’m finding more and more people seeing what is happening and wanting the fruits and benefits of these tools for themselves. It’s the raw value they find in these methods and processes that spurs them on even when they find themselves in deeper waters than they may have expected. Fortunately there are a large number of giving and helpful developers in the generation 1 crowd who are watching and listening to those coming after them. They’re taking up the mantle to improve things not only for themselves, but for their fellow netizens.

All of this is to say that there is currently a slow reworking and refining of the material that’s on the wiki. Just earlier this week, a self-identifying fourth generation member asked about the word POSSE, which many would rightly consider jargon, and inquired about its relation to the more commonly known term “cross-posting”. Surprisingly, cross-posting didn’t really exist on the wiki yet, but it was quickly added, and then later expanded to bring the ideas of POSSE, PESOS, PESETAS, and PASTA within it and then tied into the broader idea of syndication.

Your questions about h-card are very similar. Yes, the wiki page on the idea is certainly very generation 1 specific and perhaps a bit over-burdened by jargon. While I don’t think the concept of microformats is very difficult, I also realize that saying so is the result of having spent no less than ten hours reading about it, looking at examples, and implementing pieces of it by hand myself. So how can we make it simpler and easier for the next generation? The page needs a bit of an overhaul and some work for the next generations. Some of this is my goal in writing an IndieWeb book, though it’s geared toward an audience that is less likely to get their information from a wiki or contribute back to one in practice.

While h-card is a specific type of microformat, in practice most instances of it on the wiki are really referring to an object on a webpage that conveys identity. I’d suggest that it’s far easier to look at an h-card as an online version of a business card that contains some basic information about a person (or even a business or other entity) online. It has things like their name, their address, their email, their phone number, perhaps a photo, or even other very basic information about them. Each of these pieces of data has its own microformats class to indicate to machines what it specifically represents. While some h-cards are human readable (like mine), some could be hidden in a web page’s header and are meant to be machine readable.

While h-cards can convey data in other use cases, in most IndieWeb instances they’re conveying information about either the owner of a website (and thus found on the site’s homepage), or they’re found on individual posts as indicators of the authorship of the content on that page.

Depending on how they’re nested into a web page, they can have different meanings. As possibly the most common example, on a traditional WordPress article post the main h-card for the page would indicate information about the author of that post. However, these article posts will often contain comments sections at the bottom, and each individual comment will have its own separate author and author information and thus its own h-card. Because these comments are properly nested, they only indicate the authorship of each particular sub-section of the page.

For most IndieWeb use, having an h-card on your homepage tells parsers (code run by other computers) who you are and some basic information about yourself. Generally this extends to your name, your avatar, and your homepage URL.

In your case Aaron, when you’ve generally been sending me webmentions from your primary website (readwriterespond.com), I’ve often been missing your avatar in your comments because you didn’t have an h-card available on them. (I typically remedy this on my own website by hand because I’ve been able to guess the email address/”key” you use for your Gravatar account which then automatically fills in that missing data for me on those comments.)

In the particular case here, for your reply you’ll notice in looking at the source for the page with your response that your ZenPress theme smartly and kindly includes an h-card for you automatically. Here’s what it looks like in code:

<div class="entry-meta">
  <address class="byline">
    <span class="author p-author vcard hcard h-card" itemprop="author" itemscope itemtype="http://schema.org/Person">
      <img alt='' src='https://secure.gravatar.com/avatar/d00e7ca24ca1b9c853da43af229c0e0e?s=40&d=mm&r=g' srcset='https://secure.gravatar.com/avatar/d00e7ca24ca1b9c853da43af229c0e0e?s=80&d=mm&r=g 2x' class='avatar avatar-40 photo u-photo' height='40' width='40' itemprop="image"/>
      <a class="url uid u-url u-uid fn p-name" href="https://collect.readwriterespond.com/author/admin/" title="View all posts by Aaron Davis" rel="author" itemprop="url"><span itemprop="name">Aaron Davis</span></a>
    </span>
  </address>
  <span class="sep"> | </span>
  <a href="https://collect.readwriterespond.com/posts/replies/a-further-reply-to-chris-aldrich-in-regards-to-the-indieweb/" title="10:06 pm" rel="bookmark" class="url u-url"><time class="entry-date updated published dt-updated dt-published" datetime="2017-11-22T22:06:22+00:00" itemprop="dateModified datePublished">November 22, 2017</time></a>
</div>

You’ll see that it includes your name and your website URL marked up with the relevant microformats classes, and it also pulls in your Gravatar avatar using the WordPress back end, since you’ve provided that data to your WordPress installation. This is the benefit of a smartly built and designed theme! Thus it would seem that for your “Collect” site, you needn’t worry about an h-card because your theme is already handling the details for you to a great extent. Ideally all themes would do this using standard data fields within a WordPress install. But until then…

Anticipating your next question, what about readwriterespond.com? There, your theme isn’t doing this work for you, so you’ll need to do it yourself. The easiest way to pull this off quickly is to use the IndieWeb Plugin for WordPress. The plugin adds a bunch of additional fields to the page under the “Users” menu located at /wp-admin/profile.php within your admin UI. By filling them in you’re providing the details you’d usually add to an h-card or for rel=”me” uses. The IndieWeb plugin then also makes an h-card widget available at /wp-admin/widgets.php. You can drag and drop it to any of the available pieces of your theme which often include sidebars, footers, and sometimes headers.

The widget does a relatively good job, but some will want more control over what is presented and how it’s designed. For those, another option is to create your own HTML-based widget and put the code/data for your h-card into it. This is essentially what you’ve seen on my homepage at boffosocko.com. While mine is entirely handcoded, it may be easier for most to use the microformats website, which has a fill-in-the-blanks h-card generator. It allows one to input all of the data they’d like to display and automatically marks it all up properly so that one can cut and paste the semantic HTML directly into a web page or a widget.
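For a sense of what a hand-coded version can look like, here’s a minimal sketch of an h-card (the name, URL, and photo are placeholders to be swapped for your own details):

<div class="h-card">
<img class="u-photo" src="https://example.com/photo.jpg" alt="" />
<a class="p-name u-url" href="https://example.com">Your Name</a>
<p class="p-note">A sentence or two about yourself and what you write about.</p>
</div>

Dropped into a text/HTML widget, that’s enough for most parsers to find a name, photo, and homepage URL.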

There are a bevy of other options for dropping an h-card into your site which will work. You mentioned doing something via a child theme, and that’s an option, as is any one of dozens of plugins that will allow you to drop arbitrary code into your header and/or footer. (Incidentally, a child theme is an excellent way of making small customizations to your theme without having future (security) updates of your theme overwrite them. If you’re not using one, I recommend following one of the tutorials on the wiki to create one. I would hope it shouldn’t take you more than an hour to implement based on what I know of your skill level.)

As I think you’ve mentioned, there are a few simple validators that will accept a URL which they can parse to show the h-card data they find. These include:

People can use these to see if their h-cards are working as they generally expect them to.

Naturally, there are some additional subtleties in h-cards which are noted on both the IndieWeb wiki and the microformats wiki pages, but most of these aren’t of huge consequence to average users or are experimental features which aren’t widely distributed or supported. If it makes you feel better, I’ll also note that it’s not always the case that experienced theme builders or even WordPress core maintainers will properly use microformats as there are frequently cases where they’re wildly misused, abused, or mistreated in the extreme. We can only do our best I suppose…

Hopefully some of this helps put things into perspective. Now that you’re able to sign into the IndieWeb wiki, I invite you to add or modify parts you feel could be clearer or improved as you use and implement them yourself. Surely doing so will help make things easier for those that follow us both.


Reply to Wat is POSSE en PESOS op het IndieWeb? by Frank Meeuwsen

Wat is POSSE en PESOS op het IndieWeb? by Frank Meeuwsen (Digging the digital)
Nieuwe termen, nieuwe wijn? Of is het meer van wat we al kenden? (New terms, new wine? Or more of what we already knew?)

I like to think of the IndieWeb as delivering on the original promise of the decentralized internet. It’s nice that billions of people can now more easily communicate with so-called “free” services like Twitter, Facebook, et al., but it comes at the much larger expense of giving away all of their data, control, and often their privacy and even identities. Social media sites all have their own standards, functionalities, and even quirks, none of which is controllable by individuals, so if you use them, you are forced to use them on their terms instead of your own. The dumpster fire that Twitter has become as a “community” is a prime example. I also think it’s a terrible drawback that if you have a Facebook account and want to communicate with someone on Twitter, you need a Twitter account to do so. Here’s an example of what happens with this type of service proliferation. Who wants to have to manage all of this, much less remember which service you were having which conversation on?

As you say, much of the data one posts may have little value and feel ephemeral, but certainly not all of it, and certainly not in aggregate. At least the individual should get to decide and have agency over the decision. As it stands, I can delete individual posts from Facebook, but I have no guarantee that the data is physically removed from their servers rather than remaining available for either their internal use or possible future governmental use.

Another way to frame it all is to think of your web presence as a commonplace book.

If you recall the early days of social media, you may appreciate this alternate viewpoint of social media that I wrote about a few months ago: http://boffosocko.com/2017/04/11/a-new-way-to-know-and-master-your-social-media-flow/

Interestingly, I came across your post almost immediately after fleshing out some detail on the wiki page for cross-posting, which may be a worthwhile overview from the perspective of a traditional social media user. To help conglomerate all of the various pieces for you and others in the future, I’ve created a category page under the heading “syndication” with links to all of the various pieces which may together make a more coherent whole.

As for your question (excuse my rough translation):

Then I think again, if I put my tweets first on my own site, what about the possible conversations that result from it? If someone answers and I reply again, do I do that on my own site? The IndieWeb wiki is not very clear here…

There isn’t a direct answer within some of the pages you mention, but ideally, yes, all of the conversation takes place in a back and forth manner on your own website (as well as on the sites of those with whom you’re communicating). Sadly, not all of the moving pieces have been solved completely with respect to user interface, which could be done in multiple ways. One standard in particular that isn’t supported by many is salmention. Until then, some of us are managing to do this manually to maintain the threaded comments so that the entire context of a conversation is still available on our own sites. Even without it, some semblance of threading is possible by providing permalink URLs for all the parts of the conversations on their individual pages until such time as the rest is more feasible. If you care to experiment, try commenting on this on my site and see what happens.

Incidentally, especially if you haven’t come across it yet, I hope that as you continue to explore and write that you’ll syndicate your content to https://news.indieweb.org/nl for the benefit of others.


Reply to Pingbacks: hiding in plain sight by Ian Guest

Pingbacks: hiding in plain sight by Ian Guest (Marginal Notes)
Wait! Aren’t you researching Twitter? I am indeed and the preceding discussion has largely centred on pingbacks, a feature of blogs, rather than microblogs. I have two points to make here: firstly that microblogs and Twitter may have features which function in a similar way to pingbacks. The retweet for example provides a similar link to a text or resource that someone else has produced. I’ll admit that it has less permanence than a pingback, patiently ensconced at the foot of a blog and ready to whisk the reader off to the linked blog, but then the structure and function of Twitter is one of flow and change when compared with a blog; it’s a different beast. The second is that my point of entry to the blogs and their interconnected web of enabling pingbacks was a tweet. Two actually. Andrea’s tweet took me to another tweet which referenced Aditi’s blog post; had I not been on Twitter and had Andrea and I not made a connection through that platform, the likelihood of me ever being aware of Aditi’s post and the learning opportunities that it and its wider assemblage brings together would be minimal.

I found your short study and thoughts on pingbacks while I was thinking about Webmentions (and a particular issue that Aaron Davis was having with them) after having spent a chunk of the day remotely following the Dodging the Memory Hole 2017 conference at the Internet Archive in San Francisco.

It’s made me realize that one of the bigger values of the iteration that Webmention represents over its predecessors, pingbacks and trackbacks, is that at least a snapshot of the content is captured on the receiving site. As you’ve noted, while the receiving site has the scant data from the pingback, there’s not much to look at in general and even less when the sending site has disappeared from the web. In the case of Webmentions, even if the sending site has disappeared from the web, the receiving site can still potentially display more of that missing content if it wishes. Within the WordPress ecosystem simple mentions only show the indication that the article was mentioned, but hiding within the actual database on the back end is a copy of the post itself. With a few quick changes to make the “mention” into a “reply”, the content of the original post can be quickly uncovered/recovered. (I do wonder a bit if you cross-referenced the Internet Archive or other sources in your search to attempt to recover those lost links.)

I will admit that I recall the Webmention spec allowing a site to modify and/or update its replies/webmentions, but in practice I’m not sure how many sites actually implement this functionality, so from an archival standpoint it’s probably pretty solid/stable at the moment.

Separately, I also find myself looking at your small example and how you’ve expanded it out a level or two within your network to see how it spread. This reminds me of Ryan Barrett’s work from earlier this year on the IndieWeb network in creating the Indie Map tool, which he used to show the interconnections between over three thousand people (or their websites) using links like Webmentions. Depending on your broader study, it might make an interesting example to look at and/or perhaps some code to extend?

With particular regard to your paragraph under “Wait! Aren’t you researching Twitter?” I thought I’d point you to a hybrid approach of melding some of Twitter and older/traditional blogs together. I personally post everything to my own website first and syndicate it to Twitter and then backfeed all of the replies, comments, and reactions via Brid.gy using webmentions. While there aren’t a lot of users on the internet doing something like this at the moment, it may provide a very different microcosm for you to take a look at. I’ve even patched together a means to allow people to @mention me on Twitter that sends the data to my personal website as a means of communication.

After a bit of poking around, I was also glad to find a fellow netizen who is also consciously using their website as a commonplace book of sorts.


Reply to seanl on literati.org

Reply to post on Mastodon by Sean R. Lynch (social.literati.org)
@chrisaldrich @sikkdays I must be missing something. Why wouldn't one just add webmention support to Mastodon?

That’s been proposed (see: https://github.com/tootsuite/mastodon/search?q=webmention&type=Issues&utf8=%E2%9C%93), but hasn’t gotten any uptake by Mastodon devs yet. But as always on the internet, the web will find a way.


Reply to @sikkdays @seanl I’m happy to help too if you like.

A post on Mastodon by Chris Aldrich (Mastodon)
@sikkdays @seanl I'm happy to help too if you like. There may be some inactive and even forked projects within the broader scope, but then there are lots which are flourishing. WordPress in particular is one of those since, it's what you mentioned: https://indieweb.org/Getting_Started_on_WordPress A good place to start is to jump into the IndieWeb chat (via web, IRC, Slack, etc.) https://indieweb.org/discuss For a quick overview, try here: altplatform.org/2017/07/28/an-introduction-to-the-indieweb/

Testing out to see if I can reply to Mastodon via my own website. This is going to be awesome if it works!!!

Reply to Meredith Fierro on Setting up a Feed with Feedly

Setting up a Feed with Feedly by Meredith Fierro (Meredith Fierro)
Working at Reclaim means I get to interact with people who do incredible work within the Ed Tech community. I was first exposed to this at #domains17 and I remember thinking that I wanted to keep up with all of these wonderful folks and the work their doing. At first, I had no idea how I could keep up with all the blog posts except through twitter. I didn’t really like that idea though because I could lose tweets within my feed. I wanted a place where I could keep them all together. I don’t know too much about RSS feeds but I knew that’s where I needed to start. I a little bit of experience using FeedWordPress to syndicate blog posts to the main class hub but I knew that would chew right through my storage limit.

If you want to take it a step further, you could consider making an open OPML file of the people you’re following from a conference like Domains ’17. Much like Twitter lists, these are sharable (so others don’t need to build them by hand) or, more importantly for Feedly, importable! Some RSS readers will also allow dynamic updating of these OPML lists, so if someone is subscribed to your list and you add a new source, everyone following the list gets the change. I’ve written some thoughts relating to this with respect to the old school blogrolls and included an example here: http://boffosocko.com/2017/06/26/indieweb-blogroll/
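In case it helps, an OPML subscription list is just a small XML file; a minimal sketch (with placeholder blog names and feed URLs) looks something like this:

<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
<head>
<title>Domains '17 blogs</title>
</head>
<body>
<outline text="Example Blog One" type="rss" xmlUrl="https://example-one.com/feed/" htmlUrl="https://example-one.com/" />
<outline text="Example Blog Two" type="rss" xmlUrl="https://example-two.com/feed/" htmlUrl="https://example-two.com/" />
</body>
</opml>

Feedly and most other readers will import a file like this directly, and readers that support subscribing to the file’s URL will pick up new sources as you add them.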

If you do set up an OPML file for your Domains ’17, let me know. I’d love to subscribe to it!


Reply to Colin Walker on the idea of a required reading page

I've been thinking some more about the idea of a required reading page. by Colin Walker (Social Thoughts)
Could the things held here be placed on an About page? Possibly - it depends what they are. If they are links to your own posts then almost certainly. External links? Maybe, maybe not. So, why have a required page and what does it give the reader?

In classical studies in the Renaissance, the texts which were popular and considered expected/required reading for a “learned” person were relatively few in number and generally completely consumable and completely known by those with an education. Thus a writer could make a reference to the Old Testament or to Cato and the vast majority of the audience would get that reference (without footnotes or explicit references), having read these same texts.

Sadly the depth and breadth of available literature has exploded since Gutenberg, making it nearly impossible for anyone in a modern audience to have read and know what the author may presume them to know. As an example, in Shakespeare’s day many of his side references would be known by even the uneducated, while most modern students have to rely on Cliff’s Notes or annotated editions to understand those cultural references. The modern day equivalent is that most avid fans of the Simpsons television show are also generally well educated on popular film since the 1940s; otherwise they’re missing 90% of the jokes.

Things become much more stilted within the blogging arena, particularly when a writer may cover a dozen areas or more in which they have significant experience, but which will likely be completely unknown to some of their regular readers, much less new readers who aren’t specialists in these fields themselves. This may turn away readers at worst, but will destroy the conversation at best. (Though I will admit it doesn’t seem to deter some of the lookie-loos from taking a shot at interacting on the lowest levels at Terry Tao’s blog.)

In some sense, in knowing their audience, writers have to have some grasp of what readers do or don’t know, otherwise it becomes difficult to communicate those progressively more expansive thoughts. Having hyperlinks certainly helps within a piece, much the way academics footnote journal articles, but it can be just as painful for the writer to be constantly referring back to the same handful of articles. In this sense, having a recommended/required reading section may be useful, particularly if it were ubiquitous, but I suspect that the casual drive-by reader may not notice or care very much. However, for that rare <5% it may be just the primer they’re looking for to better understand you and what you’re writing about.

One of the most difficult things to do in a new job or when entering a new field is to become aware of the understood culture and history of the company or the field itself. One must learn the jargon and history to contextualize the overarching conversation. Jumping into Dave Winer’s blog without knowing his background and history is certainly a more painful thing than starting to read someone whose blog is less than a year old and could thus be consumed in a short time versus thousands upon thousands of posts since the literal start of blogging on the internet. It’s somewhat reminiscent of David Shanske’s problem of distilling down a bio for an h-card from the rest of his site and his resume. What do you want someone you’ve just met to know about you to more quickly put you into a broader context, especially when you want them to get to know you better?

I think we’re all in the same boat as David in figuring out the painful path of distilling all this down in a sensible and straightforward manner. I’m curious to see what you come up with and how it evolves over time.

Reply to Homebrew Website Club: One Year In by Jonathan Prozzi

Homebrew Website Club: One Year In by Jonathan Prozzi (jonathanprozzi.net)
There’s some amazing themes and plugins being developed for WordPress that handle some of the more complex technical requirements for implementing the Indieweb principles, so I want to now be able to focus on helping others through two methods of outreach. First, to help any current WordPress users understand and integrate Indieweb principles into their site. Second, to help anyone who is interested in setting up a site and open to using WordPress get an Indieweb web presence up and running from the ground up. This will remain the core thrust of my Indieweb exploration from now on, but I want to also deepen my knowledge of what can be done with WordPress. There’s lots of exciting things on the horizon, and I want to give back to both the WordPress and Indieweb communities through sharing my experiences and lessons learned from the last year.

Congratulations Jonathan!

I really appreciate your “Updated Goals and Purpose” section as it covers something I’ve been slowly beginning to crack away at as well. I’ve begun some work on a book geared toward Gen2+ users as well as doing some additional outreach. (I’ve even got a domain registered to target that particular market.)

If you think it would help, I’m happy to help spitball with you to create a more cohesive plan that some of us can work on both individually and as a group.

A reply to Aaron Davis on setting up IndieWeb replies in WordPress

a tweet by Aaron Davis (Twitter)


Aaron, there are a couple of different ways to set up IndieWeb replies in WordPress (or even on other platforms like Known).

Known has a simple reply mechanism, but isn’t always good at including the original context for the reply, which would make the individual post as stand-alone as one might like. Known includes the URL of the post it’s a reply to, but that’s about it. It’s contingent upon the person reading the reply to click on the link to the original post to put the two together. This is pretty simple and easy when using it to reply to posts on Twitter, but isn’t always as flexible in other contexts.

One of the added values of replies in WordPress is that there’s a bit more flexibility for including a reply context to the post. You’ll note that this reply has some context at the top indicating exactly to what it is I’m replying.

Manual Replies

The first way to generically set up a reply on almost any platform that supports sending Webmentions is to write your reply and include some simple semantic HTML, along with the URL of the post you’re replying to, marked with the class “u-in-reply-to” on the anchor tag like so:
<div class="h-entry">
<a class="u-in-reply-to" href="http://example.com/note123">The post you're replying to</a>
<div class="p-name p-content"> Good point! Now what is the next thing we should do?</div>
</div>

Some of this with additional information is detailed in the reply page on the IndieWeb wiki.

If you’re using WordPress, you can do this manually in the traditional content block, though you likely won’t need the div with h-entry as your theme more likely than not already includes it.

More Automated Replies

If you’d like a quicker method for WordPress, you can use a few simple plugins to get replies working. Generally I recommend David Shanske’s excellent and robust Post Kinds Plugin, which handles both reply contexts as well as all of the required markup indicated in the manual example above. Naturally, you’ll also want to have the Webmention Plugin for WordPress installed as well so that the reply is sent via Webmention to the original post, which can then display your reply (if it chooses to; many people moderate their replies, while others simply collect them but don’t display them).

A few weeks ago I wrote about configuring and using the Post Kinds Plugin in great detail. You should be able to follow the example there, but just choose the “reply” kind instead of the “read” example I’ve used. In the end, it will look a lot like this particular reply you’re reading right now, though in this case, I’ve manually included your original tweet in the body of my reply. A more native Post Kinds generated reply to a tweet can be seen at this example: http://boffosocko.com/2016/08/17/why-norbert-weiner/

Syndicating Elsewhere

Naturally, your next question may be how to POSSE your replies to other services like Twitter. For that, there’s a handful of methods/plugins, though I often suggest doing things manually a few times to familiarize yourself with what’s happening. Then you can experiment with one or more of the methods/plugins. In general, the easier the plugin is to set up (example: Jetpack), the less control you have over how the result looks, while the more complicated it is (example: SNAP), the more control you have over the output.

Experiment

If you’d like, feel free to experiment sending replies back to this post while you try things out. If you need additional help, do join one or more of us in the IndieWeb chat.


🎧 It’s putrid, it’s paleo, and it’s good for you | Eat This Podcast

It’s putrid, it’s paleo, and it’s good for you by Jeremy Cherfas (Eat This Podcast)
How do you get your vitamin C where no fruit and veg will grow? As our ancestors moved north out of Africa, and especially as they found themselves in climates that supported less gathering and more hunting, they were faced with an acute nutritional problem: scurvy. Humans are one of the few mammals that cannot manufacture this vital little chemical compound (others being the guinea pig and fruit bats). If there are no fruit and veg around, where will that vitamin C come from? That’s a question that puzzled John Speth, an archaeologist and Emeritus Professor of Anthropology at the University of Michigan in Ann Arbor. He found clues in the accounts of sailors and explorers shipwrecked in the Arctic. Those who, often literally, turned their noses up at the “disgusting” diet of the locals sometimes paid with their lives. Those who ate what the locals ate lived to tell the tale. John Speth told me the tale of how he came to propose the idea that putrid meat and fish may have been a key part of Neanderthal and modern human diet during the Palaeolithic.

As always a brilliant episode from Jeremy.

There’s quite a lot to unpack here and I’m sure there’s a few days of research papers to read to even begin to scratch the surface of some of what’s going on here with regard to the disgust portion of the program.

One of the things that strikes me offhand within the conversation about botulism and its increase when Arctic peoples went from traditional lifeways to more modern ones is the related stories I’ve heard, even recently, from researchers who are looking for replacement antibiotics for evolving superbugs. Often their go-to place for searching is the dirt which can be found all around us. I’m curious whether there’s not only specific chemistry (perhaps anaerobic or even affected by temperature) but even antibiotics found in the ground which are killing the microbes that could cause these types of sickness. Of course, with extreme cold usually comes frozen ground and permafrost, which may make burying foods for fermenting more difficult. I’m curious how and where native peoples were doing their burying to give an idea of what may have been happening to protect them.

Another piece which dovetails with this one is a story I heard yesterday morning on NPR as I woke up. Entitled To Get Calcium, Navajos Burn Juniper Branches To Eat The Ash, it also covered the similar idea that native peoples had methods for fulfilling their dietary needs in unique ways.


🎧 Jam Tomorrow | Eat This Podcast

Jam tomorrow? by Jeremy Cherfas (Eat This Podcast)
What is jam? “A preserve made from whole fruit boiled to a pulp with sugar.” Lots of opportunities to quibble with that, most especially, if you’re planning to sell the stuff in the UK and label it “jam,” the precise amount of sugar. More than 60% and you’re fine calling it jam. Less than 50% and you need to call it reduced-sugar jam. Lower still, and it becomes a fruit spread. All that is about to change though, thanks to a UK Goverment regulation that will allow products with less than 60% sugar to be labelled jam. There’s nothing like a threat to the traditional British way of life to motivate the masses, although as an expat, I had no idea of the kerfuffle this had raised until I read about it on the website of the Campaign for Real Farming.

I realize that I’m probably ruined by eating soft set American jams and jellies all my life, aside from the half dozen or so homemade versions I’ve made myself over the years. Here in the States, we’ve slipped even further: most jams are made with high fructose corn syrup instead of sugar. If only that revolution had happened after the 1920s instead of the 1770s, perhaps things would be different.

I’m curious what’s become of this issue four years on? Did the “hard”-liners win out, or did the regulations turn to (soft set) jelly?
